Search Results

Search found 2331 results on 94 pages for 'typing'.


  • Solaris X86 AESNI OpenSSL Engine

    - by danx
    Solaris X86 AESNI OpenSSL Engine
    Cryptography is a major component of secure e-commerce. Since cryptography is compute intensive and adds a significant load to applications, such as SSL web servers (https), crypto performance is an important factor. Providing accelerated crypto hardware greatly helps these applications and will help lead to a wider adoption of cryptography, and lower cost, in e-commerce and other applications. The Intel Westmere microprocessor has six new instructions to accelerate AES encryption. They are called "AESNI" for "AES New Instructions". These are unprivileged instructions, so no "root", other elevated access, or context switch is required to execute them. These instructions are used in a new built-in OpenSSL 1.0 engine available in Solaris 11, the aesni engine.

    Previous Work
    Previously, AESNI instructions were introduced into the Solaris x86 kernel and libraries. That is, the "aes" kernel module (used by IPsec and other kernel modules) and the Solaris pkcs11 library (for user applications). These are available in Solaris 10 10/09 (update 8) and above, and Solaris 11. The work here is to add the aesni engine to OpenSSL.

    X86 AESNI Instructions
    Intel's Xeon 5600 is one of the processors that support AESNI. This processor is used in the Sun Fire X4170 M2. As mentioned above, six new instructions accelerate AES encryption in processor silicon. The new instructions are:
    aesenc performs one round of AES encryption. One encryption round is composed of these steps: substitute bytes, shift rows, mix columns, and xor the round key.
    aesenclast performs the final encryption round, which is the same as above, except omitting the mix columns step (which is only needed for the next encryption round).
    aesdec performs one round of AES decryption.
    aesdeclast performs the final AES decryption round.
    aeskeygenassist helps expand the user-provided key into a "key schedule" of keys, one per round.
    aesimc performs an "inverse mix columns" operation to convert the encryption key schedule into a decryption key schedule.
    pclmulqdq is not an AESNI instruction, but performs "carryless multiply" operations to accelerate AES GCM mode.
    Since the AESNI instructions are implemented in hardware, they take a constant number of cycles and are not vulnerable to side-channel timing attacks that attempt to discern some bits of data from the time taken to encrypt or decrypt the data.

    Solaris x86 and OpenSSL Software Optimizations
    Having X86 AESNI hardware crypto instructions is all well and good, but how do we access them? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor. AESNI is used internally in the kernel through kernel crypto modules and is available in user space through the PKCS#11 library. For OpenSSL on Solaris 11, AESNI crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "aesni engine." This avoids the extra overhead of going through the Solaris OpenSSL pkcs11 engine, which accesses Solaris crypto and digest operations; instead, AESNI assembly is included directly in the new aesni engine. Rather than shipping the aesni engine as a separate library in /lib/openssl/engines/, the aesni engine is "built-in", meaning it is included directly in OpenSSL's libcrypto.so.1.0.0 library. This reduces overhead and the need to manually specify the aesni engine. 
    Since the engine is built-in (that is, in libcrypto.so.1.0.0), the openssl -engine command line flag or API call is not needed to access the engine—the aesni engine is used automatically on AESNI hardware.

    Ciphers and Digests supported by OpenSSL aesni engine
    The OpenSSL aesni engine auto-detects if it's running on AESNI hardware and uses AESNI encryption instructions for these ciphers: AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CFB128, AES-192-CFB128, AES-256-CFB128, AES-128-CTR, AES-192-CTR, AES-256-CTR, AES-128-ECB, AES-192-ECB, AES-256-ECB, AES-128-OFB, AES-192-OFB, and AES-256-OFB.

    Implementation of the OpenSSL aesni engine
    The AESNI assembly language routines are not a part of the regular OpenSSL 1.0.0 release. AESNI is a part of the "HEAD" ("development" or "unstable") branch of OpenSSL, for future release. But AESNI is also available as a separate patch provided by Intel to the OpenSSL project for OpenSSL 1.0.0. A minimal amount of "glue" code in the aesni engine sits between the OpenSSL libcrypto.so.1.0.0 library and the assembly functions. The aesni engine code is separate from the base OpenSSL code and requires patching only a few source files to use it. That means OpenSSL can be more easily updated to future versions without losing the performance from the built-in aesni engine.

    OpenSSL aesni engine Performance
    Here are some graphs of aesni engine performance I measured by running openssl speed -evp $algorithm, where $algorithm is aes-128-cbc, aes-192-cbc, and aes-256-cbc. These are using the 64-bit version of openssl on the same AESNI hardware, a Sun Fire X4170 M2 with an Intel Xeon E5620 @2.40GHz, running Solaris 11 FCS. "Before" is openssl without the aesni engine and "after" is openssl with the aesni engine. The numbers are MBytes/second.
    OpenSSL aesni engine performance on Sun Fire X4170 M2 (Xeon E5620 @2.40GHz) (Higher is better; "before"=OpenSSL on AESNI without AESNI engine software, "after"=OpenSSL AESNI engine)
    As you can see, the speedup is dramatic for all 3 key lengths and for data sizes from 16 bytes to 8 Kbytes—AESNI is about 7.5-8x faster than hand-coded amd64 assembly (without aesni instructions).

    Verifying the OpenSSL aesni engine is present
    The easiest way to determine if you are running the aesni engine is to type "openssl engine" on the command line. No configuration, API, or command line options are needed to use the OpenSSL aesni engine. If you are running on Intel AESNI hardware with Solaris 11 FCS, you'll see this output indicating you are using the aesni engine:
    intel-westmere $ openssl engine
    (aesni) Intel AES-NI engine (no-aesni)
    (dynamic) Dynamic engine loading support
    (pkcs11) PKCS #11 engine support
    If you are running on Intel without AESNI hardware you'll see this output indicating the hardware can't support the aesni engine:
    intel-nehalem $ openssl engine
    (aesni) Intel AES-NI engine (no-aesni)
    (dynamic) Dynamic engine loading support
    (pkcs11) PKCS #11 engine support
    For Solaris on SPARC or older Solaris OpenSSL software, you won't see any aesni engine line at all. Third-party OpenSSL software (built yourself or from outside Oracle) will not have the aesni engine either. Solaris 11 FCS comes with OpenSSL version 1.0.0e. The output of typing "openssl version" should be "OpenSSL 1.0.0e 6 Sep 2011".
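    For reference, the speed test above can be repeated with nothing more than the commands the text describes, one per key length:
        $ openssl speed -evp aes-128-cbc
        $ openssl speed -evp aes-192-cbc
        $ openssl speed -evp aes-256-cbc
    The -evp flag matters here: it routes the benchmark through OpenSSL's EVP interface, which is what allows the built-in aesni engine to be picked up automatically; running "openssl speed aes-128-cbc" without -evp exercises the generic software AES code instead.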
    64- and 32-bit OpenSSL
    OpenSSL comes in both 32- and 64-bit binaries. The 64-bit executable is now the default, at /usr/bin/openssl, with the OpenSSL 64-bit libraries at /lib/amd64/libcrypto.so.1.0.0 and libssl.so.1.0.0. The 32-bit executable is at /usr/bin/i86/openssl and the libraries are at /lib/libcrypto.so.1.0.0 and libssl.so.1.0.0.

    Availability
    The OpenSSL AESNI engine is available in Solaris 11 x86 for both the 64- and 32-bit versions of OpenSSL. It is not available with Solaris 10. You must have a processor that supports AESNI instructions, otherwise OpenSSL will fall back to the older, slower AES implementation without AESNI. Processors that support AESNI include most Westmere and Sandy Bridge class processor architectures. Some low-end processors (such as for mobile/laptop platforms) do not support AESNI. The easiest way to determine if the processor supports AESNI is with the isainfo -v command—look for "amd64" and "aes" in the output:
    $ isainfo -v
    64-bit amd64 applications
        pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu

    Conclusion
    The Solaris 11 OpenSSL aesni engine provides easy access to powerful Intel AESNI hardware cryptography, in addition to Solaris userland PKCS#11 libraries and Solaris crypto kernel modules.

    Read the article

  • ASP.NET MVC 2 throws exception for ‘favicon.ico’

    - by nmarun
    I must be on fire or something – third blog in 2 days… awesome! Before I begin, in case you’re wondering, favicon.ico is the small image that appears to the left of your web address, once the page loads. In order to learn more about MVC or anything for that matter, it’s better to look at the source itself. Since MVC is open source (at least some part of it is), I started looking at the source code that’s available for download. While doing so, I hit Steve Sanderson’s blog site where he explains in great detail the way to debug your app using the ASP.NET MVC source code. For those who are not aware, Steve Sanderson’s book - Pro ASP.NET MVC Framework - is one of the best books to learn about MVC.

    Alrighty, I followed the article and hit F5 to debug the default / unchanged MVC project. I put a breakpoint in the DefaultControllerFactory.cs CreateController() method. To know a little more about this class and the method, read this. Sure enough, the control stopped at the breakpoint, I hit F5 again and the page rendered just fine. But then what’s this? The breakpoint was hit again, as if something else was being requested. I now hovered my mouse over the ‘controllerName’ parameter and it says – favicon.ico. This by itself was more than enough to raise my eyebrows, but what happened next just took the ground from below my feet. Oh, oh, I’m sorry I’m just typing, no code, no image, so here are a couple of screen captures. The first one shows the request for the Home controller; I get ‘Home’ when I hover over the parameter: And here’s the one that shows the same for the call for ‘favicon.ico’.

    So, I step through the code and when the control reaches line 91 – the GetControllerInstance() method – I step in. This is when I had the ‘ground-losing’ experience. Wow, an exception is being thrown for this file, and that too in RTM. For some reason MVC treats this as a controller and tries to run it through the MvcHandler, and it hits this snag. So it seems like this will happen for any MVC 2 site, and this did not happen for me in the previous version of MVC.

    Before I get to how to resolve it, here’s another way of reproducing this exception. Revert all the changes you made as mentioned in Steve’s blog above. Now, add a class to your MVC project, call it say, MyControllerFactory, and let it inherit from the DefaultControllerFactory class. (Read this for details on what the DefaultControllerFactory class is and how it is used in a different context.) Add an override for the CreateController() method and, for the sake of this blog, just copy the same content from the DefaultControllerFactory class. The last step is to tell your MVC app to use the MyControllerFactory class instead of the default one. 
    To do this, go to your Global.asax.cs file and add line 6 of the snippet below:
        1: protected void Application_Start()
        2: {
        3:     AreaRegistration.RegisterAllAreas();
        4: 
        5:     RegisterRoutes(RouteTable.Routes);
        6:     ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
        7: }
    Now, you’re ready to reproduce the issue. Just F5 the project and when you hit the overridden CreateController() method for the second time, this is what it looks like for me: And continuing further gives me the same exception. I believe this is something that MS should fix, as not having a ‘favicon.ico’ file will be common for most applications. So I think that when you create an MVC project, line 6 of the snippet below should be added by default by Visual Studio itself:
        1: public class MvcApplication : System.Web.HttpApplication
        2: {
        3:     public static void RegisterRoutes(RouteCollection routes)
        4:     {
        5:         routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
        6:         routes.IgnoreRoute("favicon.ico");
        7: 
        8:         routes.MapRoute(
        9:             "Default", // Route name
        10:            "{controller}/{action}/{id}", // URL with parameters
        11:            new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
        12:        );
        13:    }
    There it is, that’s the solution to avoid the exception altogether. I tried this in both the IE8 and Firefox browsers and was able to successfully reproduce the error. Hope someone will look at this issue and find a fix.

    Just before I finish up, I found another ‘bug’, if you want to call it that, with Visual Studio 2008. Remember how you could change what browser you want your application to run in by just right-clicking on the .aspx file and choosing ‘Browse with…’? Seems like that’s missing when you’re working with an MVC project. In order to test the above bug in the other browser, I had to load a classic ASP.NET project, change the settings and then run my MVC project. Felt kinda ‘icky’, for lack of a better word.
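    For completeness, here is a minimal sketch of what the MyControllerFactory class described above could look like. It is not the code from the post (which copied the body of CreateController() from the MVC source); it simply delegates to the base implementation, which is enough to set a breakpoint and watch ‘favicon.ico’ arrive as a controller name:

        using System.Web.Mvc;
        using System.Web.Routing;

        // Minimal custom factory used only to observe the controller names MVC resolves.
        public class MyControllerFactory : DefaultControllerFactory
        {
            public override IController CreateController(RequestContext requestContext, string controllerName)
            {
                // Break here: on a default MVC 2 project this is hit once for "Home"
                // and again for "favicon.ico" unless that route is ignored.
                return base.CreateController(requestContext, controllerName);
            }
        }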

    Read the article

  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology - and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, and Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game. Step 1: Buy a Mac I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue - it lets you share a single keyboard and mouse between multiple machines. There's a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least. Step 2: Register as an Apple Developer You can register as an Apple Developer free of charge, and that lets you download XCode and the iOS SDK. You also get the iPhone / iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual, or as a company. They both cost the same ($99/year), but there's a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99 - not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time. Step 3: Learn to love the square brackets! Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.format(.) turning into: NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]]; Thankfully XCode's auto-complete is normally passable, if not up to Visual Studio's standards, which coupled with a huge amount of content on Stack Overflow means you'll soon get to grips with the API. 
    You'll need to get used to some terminology changes, though; here's an incomplete approximation: Coming from a .NET background, there are some luxuries you no longer have developing Objective-C in XCode:
    Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now.
    ReSharper. Or, more generally, much in the way of refactoring support. The not-many-keystrokes it takes to rename a class, its file, and all references to it in Visual Studio turns into a much more painful experience in XCode.
    Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective-C gets you a long way without too much pain. Circular references are their usual problematic selves, though.
    Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil (see translation table above) back. Which brings me on to...
    Calling a method on a nil object isn't a failure - it just returns nil itself! There are many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp. Less specifically, I found that there's more chance of code failing at runtime rather than getting caught at compile-time: using the @selector(...) syntax to pass a method signature isn't (can't be) checked at compile-time, so the first you know about a typo is a crash when you try and call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing.

    Step 4: Submit to the App Store
    Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There are a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between XCode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list, and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in XCode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. Uh. A little Googling later, and it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website.

    Step 5: Profit!
    Actually, that wasn't the idea here: Stacks & Heaps is free; there are no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment to something completely different. We don't know whether in a year's time the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling. 
What better way to do that than do it ourselves, and get to write a cool game in the process?

    Read the article

  • IntelliSense for Razor Hosting in non-Web Applications

    - by Rick Strahl
    When I posted my Razor Hosting article a couple of weeks ago I got a number of questions on how to get IntelliSense to work inside of Visual Studio while editing your templates. The answer to this question mainly depends on how Visual Studio recognizes assemblies, so a little background is required. If you open a template just on its own as a standalone file by clicking on it, say, in Explorer, Visual Studio will open up with the template in the editor, but you won’t get any IntelliSense on any of the related assemblies that you might be using by default. It’ll give IntelliSense on the base System namespace, but not on your imported assembly types. This makes sense: Visual Studio has no idea what the assembly associations for the single file are. There are two options available to you to make IntelliSense work for templates:
    Add the templates as included files to your non-Web project
    Add a BIN folder to your template’s folder and add all assemblies required there

    Including Templates in your Host Project
    By including templates into your Razor hosting project, Visual Studio will pick up the project’s assembly references and make IntelliSense available for any of the custom types in your project and on your templates. To see this work I moved the \Templates folder from the samples from the Debug\Bin folder into the project root and included the templates in the WinForm sample project. Here’s what this looks like in Visual Studio after the templates have been included: Notice that I take my original example and type cast the Context object to the specific type that it actually represents – namely CustomContext – by using a simple code block:
    @{
        CustomContext Model = Context as CustomContext;
    }
    After that assignment my Model local variable is in scope and IntelliSense works as expected. Note that you also will need to add any namespaces with the using command, in this case: @using RazorHostingWinForm, which has to be defined at the very top of a Razor document. BTW, while you can only pass in a single Context ‘parameter’ to the template with the default template I’ve provided, realize that you can also assign a complex object to Context. For example you could have a container object that references a variety of other objects which you can then cast to the appropriate types as needed:
    @{
        ContextContainer container = Context as ContextContainer;
        CustomContext Model = container.Model;
        CustomDAO DAO = container.DAO;
    }
    and so forth.

    IntelliSense for your Custom Template
    Notice also that you can get IntelliSense for the top level template by specifying an inherits tag at the top of the document:
    @inherits RazorHosting.RazorTemplateFolderHost
    By specifying the above you can then get IntelliSense on your base template’s properties. For example, in my base template there are Request and Response objects. This is very useful especially if you end up creating custom templates that include your custom business objects, as you can effectively see full IntelliSense from the ‘page’ level down. For Html Help Builder for example, I’d have a Help object on the page and assuming I have the references available I can see all the way into that Help object without even having to do anything fancy. Note that the @inherits key is a GREAT and easy way to override the base template you normally specify as the default template. It allows you to create a custom template and as long as it inherits from the base template it’ll work properly. 
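    Putting those pieces together, a complete template file might look roughly like the sketch below. CustomContext, RazorHostingWinForm and RazorTemplateFolderHost come from the article’s sample project; the HelloText property is purely hypothetical, standing in for whatever your own context type exposes:

        @inherits RazorHosting.RazorTemplateFolderHost
        @using RazorHostingWinForm
        @{
            // Cast the loosely typed Context passed in by the host to its concrete type
            CustomContext Model = Context as CustomContext;
        }
        <!-- HelloText is a made-up property, used only for illustration -->
        <div>@Model.HelloText (rendered at @DateTime.Now)</div>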
Since the last post I’ve also made some changes in the base template that allow hooking up some simple initialization logic so it gets much more easy to create custom templates and hook up custom objects with an IntializeTemplate() hook function that gets called with the Context and a Configuration object. These objects are objects you can pass in at runtime from your host application and then assign to custom properties on your template. For example the default implementation for RazorTemplateFolderHost does this: public override void InitializeTemplate(object context, object configurationData) { // Pick up configuration data and stuff into Request object RazorFolderHostTemplateConfiguration config = configurationData as RazorFolderHostTemplateConfiguration; this.Request.TemplatePath = config.TemplatePath; this.Request.TemplateRelativePath = config.TemplateRelativePath; // Just use the entire ConfigData as the model, but in theory // configData could contain many objects or values to set on // template properties this.Model = config.ConfigData as TModel; } to set up a strongly typed Model and the Request object. You can do much more complex hookups here of course and create complex base template pages that contain all the objects that you need in your code with strong typing. Adding a Bin folder to your Template’s Root Path Including templates in your host project works if you own the project and you’re the only one modifying the templates. However, if you are distributing the Razor engine as a templating/scripting solution as part of your application or development tool the original project is likely not available and so that approach is not practical. Another option you have is to add a Bin folder and add all the related assemblies into it. You can also add a Web.Config file with assembly references for any GAC’d assembly references that need to be associated with the templates. Between the web.config and bin folder Visual Studio can figure out how to provide IntelliSense. The Bin folder should contain: The RazorHosting.dll Your host project’s EXE or DLL – renamed to .dll if it’s an .exe Any external (bin folder) dependent assemblies Note that you most likely also want a reference to the host project if it contains references that are going to be used in templates. Visual Studio doesn’t recognize an EXE reference so you have to rename the EXE to DLL to make it work. Apparently the binary signature of EXE and DLL files are identical and it just works – learn something new everyday… For GAC assembly references you can add a web.config file to your template root. The Web.config file then should contain any full assembly references to GAC components: <configuration> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> <add assembly="System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> <add assembly="System.Web.Extensions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </assemblies> </compilation> </system.web> </configuration> And with that you should get full IntelliSense. Note that if you add a BIN folder and you also have the templates in your Visual Studio project Visual Studio will complain about reference conflicts as it’s effectively seeing both the project references and the ones in the bin folder. 
    So it’s probably a good idea to use one or the other but not both at the same time :-) Seeing IntelliSense in your Razor templates is a big help for users of your templates. If you’re shipping an application-level scripting solution especially, it’ll be really useful for your template consumers/users to be able to get some quick help on creating customized templates – after all that’s what templates are all about – easy customization. Making sure that everything is referenced in your bin folder and web.config is a good idea, and it’s great to see that Visual Studio (and presumably WebMatrix/Visual Web Developer as well) will be able to pick up your custom IntelliSense in Razor templates.
    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in Razor

    Read the article

  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production and are preferably not used elsewhere. Most self-signed and demo certificates are provided by vendors with the intention that they are used only to integrate within the same environment. In a vendor’s perfect world all application servers in a given enterprise are from the same vendor, which makes this lack of interoperability in a non-production environment an advantage. For us working in the real world, where not only do we not use a single vendor everywhere but have to make do with self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described but did not quite work and were all mostly similar but just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here for reference and sanity.

    How You Know You Need This
    The first cold clutches of dread that tell you it is going to be a long day come when you attempt to load a WSDL published by IIS into WebLogic over HTTPS and you see the following:
    <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.>
    weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.
    The above is what started a three day sojourn into searching for a solution. Even people who had solved it before would tell me how they did, and then shrug when I demonstrated that the steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did work.

    Export the Certificates from IE
    First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don’t use it in favor of some other browser). To state the semi-obvious, if you received the error above there is a certificate configured for the IIS host of the service and the SSL port has been configured properly. Otherwise there would be a different error, usually about the site not found or connection failed. Once the WSDL loads, to the right of the address bar there will be a lock icon. Click the lock and then click View Certificates in the resulting dialog (if you do not have a lock icon but do have a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then you can continue from the point of finding the lock icon).
    Figure 1: View Certificates in IE
    Next, select the Details tab in the resulting dialog
    Figure 2: Use Certificate Details to Export Certificate
    Click Copy to File, then Next, then select the Base-64 encoded option for the format
    Figure 3: Select the Base-64 encoded option for the format
    For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but later you will need to type in the full path rather than just the certificate name if you save it elsewhere. 
    Figure 4: Browse to Save Location
    Figure 5: Save the Certificate to the Domain Root for Convenience
    This is the point where I ran into some confusion. Some articles mentioned exporting the entire chain of certificates. This supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have these tools, know how to use them well, and should not be wasting their time reading this article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to just export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So…
    Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier).
    Figure 6: Trusted Root Certification Authorities Tab
    Export this one the same way as before, with a different name
    Figure 7: Use a Unique Name for Each Certificate
    Repeat this once more for the Intermediate Certification Authorities tab.

    Import the Certificates to the WebLogic Domain
    Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin and execute setDomainEnv. You should then be in the root of the domain. If not, CD to the domain root. Assuming you saved the certificate in the domain root, execute the following:
    keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]
    An example with the variables filled in is:
    keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
    After several lines of output you will be prompted with:
    Trust this certificate? [no]:
    The correct answer is ‘yes’ (minus the quotes, of course). You’ll know you were successful if the response is:
    Certificate was added to keystore
    If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:
    keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
    keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
    keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password
    In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic the password is DemoTrustKeyStorePassPhrase. 
    An example here would be:
    keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
    keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
    keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
    Whichever keystore you use, you can check your work with:
    keytool -list -keystore truststore.jks -storepass password
    Where “truststore.jks” and “password” can be replaced appropriately if necessary. The output will look something like this:
    Figure 8: Output from keytool -list -keystore

    Update the WebLogic Keystore Configuration
    If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one because those were the instructions we found online… Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to “Custom Trust Keystore:” and change the value to “truststore.jks”, then change the values of “Custom Trust Keystore Passphrase:” and “Confirm Custom Trust Keystore Passphrase:” to the password you used earlier, then save your changes. You will get a nice message similar to the following:
    Figure 9: To Be Safe, Restart Anyways
    The “No restarts are necessary” is somewhat of an exaggeration. If you want to be able to use the keystore you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.

    Conclusion
    That should get you there. If there are some erroneous steps included for your situation in particular, I will offer up a semi-apology, as the process described above does not take long at all, and even if there is one step that could be dropped from it, it is still much faster than trying to figure this out from other sources.

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the  latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to directly work with a MySQL database from within Microsoft Excel so you can easily do tasks such as: Importing MySQL Data into Excel Exporting Excel data directly into MySQL to a new or existing table Editing MySQL data directly within Excel MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL installer comes in 2 versions   Full (150 MB) which includes a complete set of MySQL products with their binaries included in the download Web (1.5 MB - a network install) which will just pull MySQL for Excel over the web and install it when run.   You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:   Upon saving a Workbook containing Worksheets in Edit Mode, the user is asked if he wants to exit the Edit Mode on all Worksheets before their parent Workbook is saved so the Worksheets are saved unprotected, otherwise the Worksheets will remain protected and the users will be able to unprotect them later retrieving the passkeys from the application log after closing MySQL for Excel. Added background coloring to the column names header row of an Import Data operation to have the same look as the one in an Edit Data operation (i.e. gray-ish background). Connection passwords can be stored securely just like MySQL Workbench does and these secured passwords are shared with Workbench in the same way connections are. Changed the way the MySQL for Excel ribbon toggle button works, instead of just showing or hiding the add-in it actually opens and closes it. Added a connection test before any operation against the database (schema creation, data import, append, export or edition) so the operation dialog is not shown and a friendlier error message is shown.   Also this release contains the following bug fixes:   Added a check on every connection test for an expired password, if the password has been expired a dialog is now shown to the user to reset the password. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS Added code to escape text values to be imported to an Excel worksheet that start with an equals sign so Excel does not treat those values as formulas that will fail evaluation. This is an option turned on by default that can be turned off by users if they wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN Added code to properly check the reason for a failing connection, if it's a failing password the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown or the user cancels. If the failing connection is not related to a bad password an error message is shown to the users indicating the reason of the failure. 
Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE Added global options dialog that can be accessed from the Schema Selection and DB Object Selection panels where the timeouts for the connection to the DB Server and for the query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should be set to Text actually for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL Removed code that was replacing all spaces typed by the user in an overriden data type for a new column in an Export Data operation, also improved the data type detection code to flag as invalid data types with parenthesis but without any text inside or where the contents inside the parenthesis are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES Added support for the year data type with a length of 2 or 4 and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH) Fixed code for Export Data operations where users overrode the data type for columns typing Text in the data type combobox, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE Changed the location of the registry where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER so the add-in is accessible by all users and not only to the user that installed it. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER Added support for Excel 2013 Single Document Interface, now that Excel 2013 creates 1 window per workbook also the Excel Add-In maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND) Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had different number of columns, some columns with the same name but some of those lying on column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD Fixed some code for Edit Data operations that was escaping special characters twice (during edition in Excel and then upon sending the query to the MySQL server). 
MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL Upgraded MySQL Utility with latest version that encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html You can find our team’s blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Getting your bearings and defining the project objective

    - by johndoucette
    I wrote this two years ago and thought it was worth posting… Some may think this is a daunting task and some may even say “what a waste of time” and want to open MS Project and start typing out tasks because someone asked for an estimate and a task list. Hell, maybe you even use Excel and pump out a spreadsheet with some real scientific formula for guessing how long it will take to code a bunch of classes. However, this short exercise will provide the basis for the entire project, whether small or large, and be a great friend when communicating to anyone on your team or even your client. I call this the Project Brief. If you find yourself going beyond a single page, then you must decompose the sections and summarize your findings so there is a complete and clear picture of the project you are working on in a relatively short statement. Here is a great quote from the PMBOK (Project Management Body of Knowledge) relative to what a project is:
    A project is a temporary endeavor undertaken to create a unique product, service or result.
    With this in mind, the project brief should encompass the entirety (objective) of the endeavor in its explanation and what it will take (goals) to create the product, service or result (deliverables). Normally the process of identifying the project objective is done during the first stage of a project called the Project Kickoff, but you can perform this very important step anytime to help you get a bearing. There are many more parts to helping a project stay on course, but this is usually the foundation it can be grounded on. Through a series of 3 exercises, you should be able to come up with the objective, goals and deliverables on your project. Follow these steps, and in no time (about half an hour), you will have the foundation of your project plan. (See examples below)

    Exercise 1 – Objectives
    Begin with the end in mind. Think about your project in business terms with a couple of things to help you understand the objective:
    Reference the business benefit in terms of cost, speed and / or quality
    Provide a higher level of what the outcome will look like (future sense)
    It should be non-measurable; that’s what the goals are all about
    The output should be a single paragraph with three sentences and take 10 minutes to write. *Typically, agreement must be reached on the objectives of the project before you would proceed to the next steps of the project.

    Exercise 2 – Goals
    A project goal is a statement that answers questions about who, what, why, where and when. A good project goal statement:
    Answers the five “W” questions for the project
    Is measurable in each of its parts
    Is published and agreed on by all the owners
    This helps the Project Manager receive confirmation on defining the project target. Using the established project objective done in the first exercise, think about the things it will take to get the job done. Think about tangible activities, which are the top level tasks in a typical Work Breakdown Structure (WBS). The overall goal statement plus all the deliverables (next exercise) can be seen as the project team’s contract with the project owners. Write 3 - 5 goals in about 10 minutes. You should not write the words “Who, what, why, where and when”, but merely be able to answer the questions when you read a goal.

    Exercise 3 – Deliverables
    Every project creates some type of output and these outputs are called deliverables. 
There are two classes of deliverables; Internal – produced for project team members to meet their goals External – produced for project owners to meet their expectations The list you enter here provides a checklist for the team’s delivery and/or is a statement of all the expectations of the project owners. Here are some typical project deliverables; Product and product documentation End product/system Requirements/feature documents Installation guides Demo/prototype System design documents User guides/help files Plans Project plan Training plan Conversion/installation/delivery plan Test plans Documentation plan Communication plan Reports and general documentation Progress reports System acceptance tests Outstanding bug list Procedures Risk and issue logs Project history Deliverables should go with each of the goals. Have 3-5 deliverables for each goal. When you are done, you will have established a great foundation for the clarity of your project. This exercise can take some time, but with practice, you should be able to whip this one out in 10 minutes as well, especially if you are intimate with an ongoing project. Samples  Objective [Client] is implementing a series of MOSS sites to support external public (Internet), internal employee (Intranet) and an external secure (password protected Internet) applications. This project will focus on the public-facing web site and will provide [Client] with architectural recommendations based on the current design being done by their design partner [Partner] and the internal Content Team. In addition, it will provide [Client] with a development plan and confidence they need to deploy a world class public Internet website. Goals 1.  [Consultant] will provide technical guidance and set project team expectations for the implementation of the MOSS Internet site based on provided features/functions within three weeks. 2.  [Consultant] will understand phase 2 secure password-protected Internet site design and provide recommendations.   Deliverables 1.1  Public Internet (unsecure) Architectural Recommendation Plan 1.2  Physical Site construction Work Breakdown Structure and plan (Time, cost and resources needed) 2.1  Two Factor authentication recommendation document   Objective [Client] is currently using an application developed by [Consultant] many years ago called "XXX". This application, although functional, does not meet their new updated business requirements and contains a few defects which [Client] has developed work-around processes. [Client] would like to have a "new and improved" system to support their membership management needs by expanding membership and subscription capabilities, provide accounting integration with internal (GL) and external (VeriSign) systems, and implement hooks to the current CRM solution. This effort will take place through a series of phases, beginning with envisioning. Goals 1. Through discussions with users, [Consultant] will discover current issues/bugs which need to be resolved which must meet the current functionality requirements within three weeks. 2. [Consultant] will gather requirements from the users about what is "needed" vs. "what they have" for enhancements and provide a high level document supporting their needs. 3. [Consultant] will meet with the team members through a series of meetings and help define the overall project plan to deliver a new and improved solution. 
Deliverables 1.1 Prioritized list of Current application issues/bugs that need to be resolved 1.2 Provide a resolution plan on the issues/bugs identified in the current application 1.3 Risk Assessment Document 2.1 Deliver a Requirements Document showing high-level [Client] needs for the new XXX application. · New feature functionality not in the application today · Existing functionality that will remain in the new functionality 2.2 Reporting Requirements Document 3.1 A Project Plan showing the deliverables and cost for the next (second) phase of this project. 3.2 A Statement of Work for the next (second) phase of this project. 3.3 An Estimate of any work that would need to follow the second phase.

    Read the article

  • Windows 8 Launch – Why OEM and Retailers Should STFU

    - by D'Arcy Lussier
    Microsoft has gotten a lot of flack for the Surface from OEM/hardware partners who create Windows-based devices and I’m sure, to an extent, retailers who normally stock and sell Windows-based devices. I mean we all know how this is supposed to work – Microsoft makes the OS, partners make the hardware, retailers sell the hardware. Now Microsoft is breaking the rules by not only offering their own hardware but selling them via online and through their Microsoft branded stores! The thought has been that Microsoft is trying to set a standard for the other hardware companies to reach for. Maybe. I hope, at some level, Microsoft may be covertly responding to frustrations associated with trusting the OEMs and Retailers to deliver on their part of the supply chain. I know as a consumer, I’m very frustrated with the Windows 8 launch. Aside from the Surface sales, there’s nothing happening at the retail level. Let me back up and explain. Over the weekend I visited a number of stores in hopes of trying out various Windows 8 devices. Out of three retailers (Staples, Best Buy, and Future Shop), not *one* met my expectations. Let me be honest with you Staples, I never really have high expectations from your computer department. If I need paper or pens, whatever, but computers – you’re not the top of my list for price or selection. Still, considering you flaunted Win 8 devices in your flyer I expected *something* – some sign of effort that you took the Windows 8 launch seriously. As I entered the 1910 Pembina Highway location in Winnipeg, there was nothing – no signage, no banners – nothing that would suggest Windows 8 had even launched. I made my way to the laptops. I had to play with each machine to determine which ones were running Windows 8. There wasn’t anything on the placards that made it obvious which were Windows 8 machines and which ones were Windows 7. Likewise, there was no easy way to identify the touch screen laptop (the HP model) from the others without physically touching the screen to verify. Horrible experience. In the same mall as the Staples I mentioned above, there’s a Future Shop. Surely they would be more on the ball. I walked in to the 1910 Pembina Highway location and immediately realized I would not get a better experience. Except for the sign by the front door mentioning Windows 8, there was *nothing* in the computer department pointing you to the Windows 8 devices. Like in Staples, the Win 8 laptops were mixed in with the Win 7 ones and there was nothing notable calling out which ones were running Win 8. I happened to hit up the St. James Street location today, thinking since its a busier store they must have more options. To their credit, they did have two staff members decked out in Windows 8 shirts and who were helping a customer understand Windows 8. But otherwise, there was nothing highlighting the Windows 8 devices and they were again mixed in with the rest of the Win 7 machines. Finally, we have the St. James Street Best Buy location here in Winnipeg. I’m sure Best Buy will have their act together. Nope, not even close. Same story as the others: minimal signage (there was a sign as you walked in with a link to this schedule of demo days), Windows 8 hardware mixed with the rest of the PC offerings, and no visible call-outs identifying which were Win 8 based. This meant that, like Future Shop and Staples, if you wanted to know which machine had Windows 8 you had to go and scrutinize each machine. 
Also, there was nothing identifying which ones were touch based and which were not. Just Another Day… To these retailers, it seemed that the Windows 8 launch was just another day, with another product to add to the showroom floor. Meanwhile, Apple has their dedicated areas *in all three stores*. It was dead simple to find where the Apple products were compared to the Windows 8 products. No wonder Microsoft is starting to push their own retail stores. No wonder Microsoft is trying to funnel orders through them instead of relying on these bloated retail big box stores who obviously can’t manage a product launch. It’s Not Just The Retailers… Remember when the Acer CEO, Founder, and President of Computer Global Operations all weighed in on how Microsoft releasing the Surface would have a “huge negative impact for the ecosystem and other brands may take a negative reaction”? Also remember the CEO stating “[making hardware] is not something you are good at so please think twice”? Well the launch day has come and gone, and so far Microsoft is the only one that delivered on having hardware available on the October 26th date. Oh sure, there are laptops running Windows 8 – but all in one desktop PCs? I’ve only seen one or two! And tablets are *non existent*, with some showing an early to late November availability on Best Buy’s website! So while the retailers could be doing more to make it easier to find Windows 8 devices, the manufacturers could help by *getting devices into stores*! That’s supposedly something that these companies are good at, according to the Acer CEO. So Here’s What the Retailers and Manufacturers Need To Do… Get Product Out The pivotal timeframe will be now to the end of November. We need to start seeing all these fantastic pieces of hardware ship – including the Samsung ATIV Smart PC Pro, the Acer Iconia, the Asus TAICHI 21, and the sexy Samsung Series 7 27” desktop. It’s not enough to see product announcements, we need to see actual devices. Make It Easy For Customers To Find Win8 Devices You want to make it easy to sell these things? Make it easy for people to find them! Have staff on hand that really know how these devices run and what can be done with them. Don’t just have a single demo day, have people who can demo it every day! Make It Easy to See the Features There’s touch screen desktops, touch screen laptops, tablets, non-touch laptops, etc. People need to easily find the features for each machine. If I’m looking for a touch-laptop, I shouldn’t need to sift through all the non-touch laptops to find them – at the least, I need to quickly be able to see which ones are touch. I feel silly even typing this because this should be retail 101 and I have no retail background (but I do have an extensive background as a customer). In Summary… Microsoft launching the Surface and selling them through their own channels isn’t slapping its OEM and retail partners in the face; its slapping them to wake the hell up and stop coasting through Windows launch events like they don’t matter. Unless I see some improvements from vendors and retailers in November, I may just hold onto my money for a Surface Pro even if I have to wait until early 2013. Your move OEM/Retailers. *Update – While my experience has been in Winnipeg, similar experiences have been voiced from colleagues in Calgary and Edmonton.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and was almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathologic table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
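As a side note, the latch check mentioned above comes down to a quick look at two DMVs. A minimal sketch (standard DMV columns; not code from the original post):

    -- Top latch classes by cumulative wait time since the stats were last cleared
    SELECT TOP (10)
           latch_class,
           waiting_requests_count,
           wait_time_ms
    FROM   sys.dm_os_latch_stats
    ORDER BY wait_time_ms DESC;

    -- What the CHECKDB session is waiting on right now (replace 53 with the actual session_id)
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM   sys.dm_os_waiting_tasks
    WHERE  session_id = 53;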
Failed Hypotheses Earlier this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test. Yup, that'll do it. The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column: CREATE DATABASE SparseColTest; GO USE SparseColTest; GO CREATE TABLE testTable (testCol smalldatetime SPARSE NULL); GO INSERT INTO testTable (testCol) VALUES (NULL); GO 1000000 That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column: CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL; With the index in place now, let’s run DBCC CHECKDB one more time: DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS; In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
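Before dropping anything in your own environment, it is worth checking whether this pattern exists at all. The following catalog query is a sketch (standard catalog views; not from the original post) that lists filtered non-clustered indexes whose key includes a sparse column:

    -- Filtered non-clustered indexes keyed on a sparse column
    SELECT  OBJECT_NAME(i.object_id) AS table_name,
            i.name                   AS index_name,
            c.name                   AS sparse_column
    FROM    sys.indexes       AS i
    JOIN    sys.index_columns AS ic ON ic.object_id = i.object_id
                                   AND ic.index_id  = i.index_id
    JOIN    sys.columns       AS c  ON c.object_id  = ic.object_id
                                   AND c.column_id  = ic.column_id
    WHERE   i.has_filter = 1             -- filtered index
      AND   ic.is_included_column = 0    -- key column, not an INCLUDE
      AND   c.is_sparse = 1              -- sparse column
    ORDER BY table_name, index_name;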
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check to see if you have any such indexes – they’re likely causing your consistency checks to run very, very slowly. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. 
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org

    Read the article

  • HAProxy: Display a "BADREQ" | BADREQ's by the thousands

    - by GruffTech
    My HAProxy Configuration. #HA-Proxy version 1.3.22 2009/10/14 Copyright 2000-2009 Willy Tarreau <[email protected]> global maxconn 10000 spread-checks 50 user haproxy group haproxy daemon stats socket /tmp/haproxy log localhost local0 log localhost local1 notice defaults mode http maxconn 50000 timeout client 10000 option forwardfor except 127.0.0.1 option httpclose option httplog listen dcaustin 0.0.0.0:80 mode http timeout connect 12000 timeout server 60000 timeout queue 120000 balance roundrobin option httpchk GET /index.html log global option httplog option dontlog-normal server web1 10.10.10.101:80 maxconn 300 check fall 1 server web2 10.10.10.102:80 maxconn 300 check fall 1 server web3 10.10.10.103:80 maxconn 300 check fall 1 server web4 10.10.10.104:80 maxconn 300 check fall 1 listen stats 0.0.0.0:9000 mode http balance log global timeout client 5000 timeout connect 4000 timeout server 30000 stats uri /haproxy HAProxy is running, and the socket is working... adam@dcaustin:/etc/haproxy# echo "show info" | socat stdio /tmp/haproxy Name: HAProxy Version: 1.3.22 Release_date: 2009/10/14 Nbproc: 1 Process_num: 1 Pid: 6320 Uptime: 0d 0h14m58s Uptime_sec: 898 Memmax_MB: 0 Ulimit-n: 20017 Maxsock: 20017 Maxconn: 10000 Maxpipes: 0 CurrConns: 47 PipesUsed: 0 PipesFree: 0 Tasks: 51 Run_queue: 1 node: dcaustin desiption: Errors show nothing from socket... adam@dcaustin:/etc/haproxy# echo "show errors" | socat stdio /tmp/haproxy adam@dcaustin:/etc/haproxy# However... My Error log is exploding with "badrequests" with the Error code cR. cR (according to 1.3 documentation) is The "timeout http-request" stroke before the client sent a full HTTP request. This is sometimes caused by too large TCP MSS values on the client side for PPPoE networks which cannot transport full-sized packets, or by clients sending requests by hand and not typing fast enough, or forgetting to enter the empty line at the end of the request. The HTTP status code is likely a 408 here. Correct on the 408, but we're getting literally thousands of these requests every hour. (This log snippet is an clip for about 10 seconds of time...) 
Jun 30 11:08:52 localhost haproxy[6320]: 92.22.213.32:26448 [30/Jun/2011:11:08:42.384] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10002 408 212 - - cR-- 35/35/18/0/0 0/0 "<BADREQ>" Jun 30 11:08:54 localhost haproxy[6320]: 71.62.130.24:62818 [30/Jun/2011:11:08:44.457] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 39/39/16/0/0 0/0 "<BADREQ>" Jun 30 11:08:55 localhost haproxy[6320]: 84.73.75.236:3589 [30/Jun/2011:11:08:45.021] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10008 408 212 - - cR-- 35/35/15/0/0 0/0 "<BADREQ>" Jun 30 11:08:55 localhost haproxy[6320]: 69.39.20.190:49969 [30/Jun/2011:11:08:45.709] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 37/37/16/0/0 0/0 "<BADREQ>" Jun 30 11:08:56 localhost haproxy[6320]: 2.29.0.9:58772 [30/Jun/2011:11:08:46.846] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10001 408 212 - - cR-- 43/43/22/0/0 0/0 "<BADREQ>" Jun 30 11:08:57 localhost haproxy[6320]: 212.139.250.242:57537 [30/Jun/2011:11:08:47.568] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 42/42/21/0/0 0/0 "<BADREQ>" Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55046 [30/Jun/2011:11:08:48.559] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 46/46/24/0/0 0/0 "<BADREQ>" Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55044 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10004 408 212 - - cR-- 45/45/24/0/0 0/0 "<BADREQ>" Jun 30 11:08:58 localhost haproxy[6320]: 74.79.195.75:55045 [30/Jun/2011:11:08:48.554] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10005 408 212 - - cR-- 44/44/24/0/0 0/0 "<BADREQ>" Jun 30 11:09:00 localhost haproxy[6320]: 68.197.56.2:52781 [30/Jun/2011:11:08:50.975] dcaustin dcaustin/<NOSRV> -1/-1/-1/-1/10000 408 212 - - cR-- 49/49/28/0/0 0/0 "<BADREQ>" From what I read on google, if i wanted to see what the bad requests are, I can show errors to the socket and it will spit them out. We do run a pretty heavily trafficed website and the percentage of "BADREQS" to normal requests is quite low, but I'd like to be able to get ahold of what that request WAS so I can debug it. stats # pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max, dcaustin,FRONTEND,,,64,120,50000,88433,105889100,2553809875,0,0,4641,,,,,OPEN,,,,,,,,,1,1,0,,,,0,45,0,128, dcaustin,web1,0,0,10,28,300,20941,25402112,633143416,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,1,,20941,,2,11,,30, dcaustin,web2,0,0,9,30,300,20941,25026691,641475169,,0,,0,3,0,0,UP,1,1,0,0,0,2208,0,,1,1,2,,20941,,2,11,,30, dcaustin,web3,0,0,10,27,300,20940,30116527,635015040,,0,,0,9,0,0,UP,1,1,0,0,0,2208,0,,1,1,3,,20940,,2,10,,31, dcaustin,web4,0,0,5,28,300,20940,25343770,643209546,,0,,0,8,0,0,UP,1,1,0,0,0,2208,0,,1,1,4,,20940,,2,11,,31, dcaustin,BACKEND,0,0,34,95,50000,83762,105889100,2553809875,0,0,,0,34,0,0,UP,4,4,0,,0,2208,0,,1,1,0,,83762,,1,43,,122, 88500 "Sessions" and 4500 errors. in the last 20 minutes.
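One mitigation worth trying here (a sketch, not part of the configuration above): HAProxy 1.3 has an explicit "timeout http-request" that bounds how long a client may take to send a complete request, and "option dontlognull" keeps connections that never sent any data out of the log. In the defaults section, something like:

    defaults
        mode http
        # give clients at most 5 seconds to send the full request line and headers
        timeout http-request 5000
        # don't log connections that sent no data at all (monitors, keepalive probes)
        option dontlognull

This does not reveal what the slow clients were sending (a cR timeout has no completed request to capture, which is why "show errors" stays empty), but it does cap how long each of them holds a connection before the 408 goes back.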

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end. I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI. After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.) The issues are as follows. (These are some primary examples) Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds. Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither causes it to come to the top or go behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete windows widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute). Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is Filemaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue. Also, switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above. 
At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor, and watched the "Disk Transfers/sec" monitor in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was fully thrashing at 100% (i.e., ~3.1%) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods. Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate the source of the problem, would be appreciated. Thanks. Note: After extensive testing & investigation, I have noticed that when I quit Process Explorer, the problem vanishes and the system performance returns to normal, and then reappears quickly when I run Process Explorer again (note: again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?

    Read the article

  • System halts for a fraction of a second after every 2-3 seconds

    - by iSam
    I'm using Windows 7 on my HP ProBook 4250s. The problem I face is that my system halts for a fraction of second after every 2-3 seconds. These jerks are not letting me concentrate or work properly. This happens even when I'm just typing in notepad while no other application is running. I tried to install every driver from HP's website and there's no item in device manager marked with yellow icon. Following are my system specs: Machine: HP ProBook 4250s OS: Windows 7 professional RAM: 2GB Processor: Intel Core i3 2.27GHz Following is my HijackThis Log: **Logfile of HijackThis v1.99.1** Scan saved at 9:34:03 PM, on 11/13/2012 Platform: Unknown Windows (WinNT 6.01.3504) MSIE: Internet Explorer v9.00 (9.00.8112.16450) **Running processes:** C:\Windows\system32\taskhost.exe C:\Windows\System32\rundll32.exe C:\Windows\system32\Dwm.exe C:\Windows\Explorer.EXE C:\Windows\System32\igfxtray.exe C:\Windows\System32\hkcmd.exe C:\Windows\System32\igfxpers.exe C:\Program Files\Synaptics\SynTP\SynTPEnh.exe C:\Program Files\PowerISO\PWRISOVM.EXE C:\Program Files\AVAST Software\Avast\AvastUI.exe C:\Program Files\Free Download Manager\fdm.exe C:\Windows\system32\wuauclt.exe C:\Program Files\Windows Media Player\wmplayer.exe C:\Program Files\Microsoft Office\Office12\WINWORD.EXE C:\HijackThis\HijackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://bing.com/ R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: (no name) - {7473b6bd-4691-4744-a82b-7854eb3d70b6} - (no file) O2 - BHO: Adobe PDF Reader Link Helper - {06849E9F-C8D7-4D59-B87D-784B7D6BE0B3} - C:\Program Files\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll O2 - BHO: Babylon toolbar helper - {2EECD738-5844-4a99-B4B6-146BF802613B} - (no file) O2 - BHO: MrFroggy - {856E12B5-22D7-4E22-9ACA-EA9A008DD65B} - C:\Program Files\Minibar\Froggy.dll O2 - BHO: avast! WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Minibar BHO - {AA74D58F-ACD0-450D-A85E-6C04B171C044} - C:\Program Files\Minibar\Kango.dll O2 - BHO: Free Download Manager - {CC59E0F9-7E43-44FA-9FAA-8377850BF205} - C:\Program Files\Free Download Manager\iefdm2.dll O2 - BHO: HP Network Check Helper - {E76FD755-C1BA-4DCB-9F13-99BD91223ADE} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll O3 - Toolbar: (no name) - {98889811-442D-49dd-99D7-DC866BE87DBC} - (no file) O3 - Toolbar: avast! 
WebRep - {8E5E2654-AD2D-48bf-AC2D-D17F00898D06} - C:\Program Files\AVAST Software\Avast\aswWebRepIE.dll O4 - HKLM\..\Run: [IgfxTray] C:\Windows\system32\igfxtray.exe O4 - HKLM\..\Run: [HotKeysCmds] C:\Windows\system32\hkcmd.exe O4 - HKLM\..\Run: [Persistence] C:\Windows\system32\igfxpers.exe O4 - HKLM\..\Run: [SynTPEnh] %ProgramFiles%\Synaptics\SynTP\SynTPEnh.exe O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files\PowerISO\PWRISOVM.EXE -startup O4 - HKLM\..\Run: [AdobeAAMUpdater-1.0] "C:\Program Files\Common Files\Adobe\OOBE\PDApp\UWA\UpdaterStartupUtility.exe" O4 - HKLM\..\Run: [AdobeCS6ServiceManager] "C:\Program Files\Common Files\Adobe\CS6ServiceManager\CS6ServiceManager.exe" -launchedbylogin O4 - HKLM\..\Run: [SwitchBoard] C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O4 - HKLM\..\Run: [ROC_roc_ssl_v12] "C:\Program Files\AVG Secure Search\ROC_roc_ssl_v12.exe" / /PROMPT /CMPID=roc_ssl_v12 O4 - HKLM\..\Run: [avast] "C:\Program Files\AVAST Software\Avast\avastUI.exe" /nogui O4 - HKLM\..\Run: [Wordinn English to Urdu Dictionary] "C:\Program Files\Wordinn\Urdu Dictionary\bin\Lugat.exe" -h O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files\Adobe\Reader 8.0\Reader\Reader_sl.exe" O4 - HKCU\..\Run: [Comparator Fast] "C:\Program Files\Interdesigner Software\Comparator Fast\ComparatorFast.exe" /STARTUP O4 - HKCU\..\Run: [Free Download Manager] "C:\Program Files\Free Download Manager\fdm.exe" -autorun O8 - Extra context menu item: Download all with Free Download Manager - file://C:\Program Files\Free Download Manager\dlall.htm O8 - Extra context menu item: Download selected with Free Download Manager - file://C:\Program Files\Free Download Manager\dlselected.htm O8 - Extra context menu item: Download video with Free Download Manager - file://C:\Program Files\Free Download Manager\dlfvideo.htm O8 - Extra context menu item: Download with Free Download Manager - file://C:\Program Files\Free Download Manager\dllink.htm O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~2\Office12\EXCEL.EXE/3000 O9 - Extra button: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-103 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra 'Tools' menuitem: @C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\HPNetworkCheckPlugin.dll,-102 - {25510184-5A38-4A99-B273-DCA8EEF6CD08} - C:\Program Files\Hewlett-Packard\HP Support Framework\Resources\HPNetworkCheck\NCLauncherFromIE.exe O9 - Extra button: Research - {92780B25-18CC-41C8-B9BE-3C9C571A8263} - C:\PROGRA~1\MICROS~2\Office12\REFIEBAR.DLL O9 - Extra button: Change your facebook look - {AAA38851-3CFF-475F-B5E0-720D3645E4A5} - C:\Program Files\Minibar\MinibarButton.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\nlaapi.dll O10 - Unknown file in Winsock LSP: c:\windows\system32\napinsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files\common files\microsoft shared\windows live\wlidnsp.dll O11 - Options group: [ACCELERATED_GRAPHICS] Accelerated graphics O11 - Options group: [INTERNATIONAL] International O13 - Gopher Prefix: O17 - HKLM\System\CCS\Services\Tcpip\..\{920289D7-5F75-4181-9A37-5627EAA163E3}: NameServer = 8.8.8.8,8.8.4.4 O17 - HKLM\System\CCS\Services\Tcpip\..\{AE83ED2F-EF14-4066-ACE2-C4ED07A68EAA}: 
NameServer = 9.9.9.9,8.8.8.8 O18 - Protocol: ms-help - {314111C7-A502-11D2-BBCA-00C04F8EC294} - C:\Program Files\Common Files\Microsoft Shared\Help\hxds.dll O18 - Protocol: skype4com - {FFC8B962-9B40-4DFF-9458-1830C7DD7F5D} - C:\PROGRA~1\COMMON~1\Skype\SKYPE4~1.DLL O18 - Protocol: wlpg - {E43EF6CD-A37A-4A9B-9E6F-83F89B8E6324} - C:\Program Files\Windows Live\Photo Gallery\AlbumDownloadProtocolHandler.dll O18 - Filter hijack: text/xml - {807563E5-5146-11D5-A672-00B0D022E945} - C:\PROGRA~1\COMMON~1\MICROS~1\OFFICE12\MSOXMLMF.DLL O20 - AppInit_DLLs: c:\progra~2\browse~1\23787~1.43\{16cdf~1\browse~1.dll c:\progra~2\browse~1\22630~1.40\{16cdf~1\browse~1.dll O20 - Winlogon Notify: igfxcui - C:\Windows\SYSTEM32\igfxdev.dll O23 - Service: avast! Antivirus - AVAST Software - C:\Program Files\AVAST Software\Avast\AvastSvc.exe O23 - Service: Google Software Updater (gusvc) - Google - C:\Program Files\Google\Common\Google Updater\GoogleUpdaterService.exe O23 - Service: HP Support Assistant Service - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\HP Support Framework\hpsa_service.exe O23 - Service: HP Quick Synchronization Service (HPDrvMntSvc.exe) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\HPDrvMntSvc.exe O23 - Service: HP Software Framework Service (hpqwmiex) - Hewlett-Packard Company - C:\Program Files\Hewlett-Packard\Shared\hpqWmiEx.exe O23 - Service: HP Service (hpsrv) - Hewlett-Packard Company - C:\Windows\system32\Hpservice.exe O23 - Service: Intel(R) Management and Security Application Local Management Service (LMS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\LMS\LMS.exe O23 - Service: Mozilla Maintenance Service (MozillaMaintenance) - Mozilla Foundation - C:\Program Files\Mozilla Maintenance Service\maintenanceservice.exe O23 - Service: @%SystemRoot%\system32\qwave.dll,-1 (QWAVE) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: @%SystemRoot%\system32\seclogon.dll,-7001 (seclogon) - Unknown owner - %windir%\system32\svchost.exe (file missing) O23 - Service: Skype Updater (SkypeUpdate) - Skype Technologies - C:\Program Files\Skype\Updater\Updater.exe O23 - Service: Adobe SwitchBoard (SwitchBoard) - Adobe Systems Incorporated - C:\Program Files\Common Files\Adobe\SwitchBoard\SwitchBoard.exe O23 - Service: Intel(R) Management & Security Application User Notification Service (UNS) - Intel Corporation - C:\Program Files\Intel\Intel(R) Management Engine Components\UNS\UNS.exe O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - %PROGRAMFILES%\Windows Media Player\wmpnetwk.exe (file missing)

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
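For anyone chasing the same symptom, a rough way to compare the kernel's handle count with what userspace can see (a sketch; run as root, not taken from the original question):

    # allocated / free / maximum file handles, as the kernel counts them
    cat /proc/sys/fs/file-nr

    # rough per-process open-file counts, highest first
    lsof -n 2>/dev/null | awk 'NR>1 {print $1, $2}' | sort | uniq -c | sort -rn | head

    # sample the allocation once a minute to see whether it keeps climbing
    while true; do date; cat /proc/sys/fs/file-nr; sleep 60; done

A large gap between the file-nr figure and the lsof totals points at handles held inside the kernel itself (for example by NFS), which per-process tools will never list; that is consistent with the growth stopping once the NFS client was shut down.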

    Read the article

  • T60 Screen/LCD goes black after some minutes with a high-pitched sound rising and fading

    - by edelwater
    Just now my T60 screen got "black" (so no display). On my second monitor: no problems, so the VGA output works. Symptom: Screen blanks / no display, but it works on the second monitor. Steps to reproduce: - boot - wait (it does not matter what you do; you do not have to log in or anything) - (now the monitor of the laptop slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise of about 10 seconds) - right after the sound ends, the monitor goes black. It seems to be the same each time. Software: Installed no new software before/after, running ZoneAlarm and antivirus. Other: It does not feel hot in any place, and there don't seem to be any processes running with strange behaviour. Warranty: Out of warranty. What was I doing: Typing text on a website and doing some PHP coding in a text editor. What can I do here other than buy a new laptop? Does it sound familiar to known cases? Update 1: Exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772 and the second poster (garyj), http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627 And here: "I have that same problem. I replaced the CCRL on mine and it works fine when the screen is not screwed in. Once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimmly lit. " from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD display is no use, he tried it three times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914 Hmmm... not handy. 3 or 4 months ago I ordered and installed a new fan; now the LCD. That does not seem to be the core issue, though, but rather some electrical issue, so it seems replacing the LCD is not the thing to do here. If it is not the LCD that needs to be replaced (see other threads), which parts can I order to fix this? Is there any information which could lead me to identify the issue? I have read about replacing the "inverter" AND the "backlight" - would that make sense? Update 2: I replaced the inverter with another inverter, but I have the same problem. I DID notice that the inverter is the component that makes the sssssssssssssHHHHHHHHHH sound AND that it becomes very hot in a few seconds (so both the old one and the test one). The problem is: hmmm, what then is the thing that (by assumption) makes the inverter hot, after which it shuts itself down? Is it the input or the output? It does not seem to be the output, because the screen seems to function, so it must be the electricity coming in. But what causes it to become so hot? Would it be the VGA card outputting some unusually high voltage? That seems unlikely. I am looking for the component to order / replace. Update 3: Great news. Ewendish gave me the hint to look in the BIOS. While I was in the BIOS I noticed that the screen did not switch off and there was no high-pitched sound. So I lowered some settings in the BIOS. I then noticed that with brightness turned to 0 (via Fn+End), it does not make a high-pitched sound and does not turn off; with brightness turned up just three "stripes" it starts making the sound. So I could from now on work in the lowest brightness mode or... see where the problem lies. 
So, as stated below, it could be either power management or the display drivers / ATI Catalyst settings / Windows display settings. I'm trying to see where it lies, but I will do some googling first. Update 4: I wiped clean the Windows XP installation and installed Windows 7 on it. Unfortunately the problem remains: as soon as the brightness goes up, the screen starts hissing. This means... back to the original thought: it probably IS a hardware problem. Although ... again... if it is NOT the inverter, what is it? Could it be the backlight component? I could try to swap that with another T60... but this is quite tricky.

    Read the article

  • MVC2 and MVC Futures causing RedirectToAction issues

    - by Darragh
    I've been trying to get the strongly typed version of RedirectToAction from the MVC Futures project to work, but I've been getting nowhere. Below are the steps I've followed, and the errors I've encountered. Any help is much appreciated. I created a new MVC2 app and changed the About action on the HomeController to redirect to the Index page. Return RedirectToAction("Index") However, I wanted to use the strongly typed extensions, so I downloaded the MVC Futures from CodePlex and added a reference to Microsoft.Web.Mvc to my project. I added the following "import" statement to the top of HomeController.vb Imports Microsoft.Web.Mvc I commented out the above RedirectToAction and added the following line: Return RedirectToAction(Of HomeController)(Function(c) c.Index()) So far, so good. However, I noticed that if I uncommented the first (non-generic) RedirectToAction, it now caused the following compile error: Error 1 Overload resolution failed because no accessible 'RedirectToAction' can be called with these arguments: Extension method 'Public Function RedirectToAction(Of TController)(action As System.Linq.Expressions.Expression(Of System.Action(Of TController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Data type(s) of the type parameter(s) cannot be inferred from these arguments. Specifying the data type(s) explicitly might correct this error. Extension method 'Public Function RedirectToAction(action As System.Linq.Expressions.Expression(Of System.Action(Of HomeController))) As System.Web.Mvc.RedirectToRouteResult' defined in 'Microsoft.Web.Mvc.ControllerExtensions': Value of type 'String' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Action(Of mvc2test1.HomeController))'. Even though IntelliSense was showing 8 overloads (the original 6 non-generic overloads, plus the 2 new generic overloads from the Futures assembly), it seems that when trying to compile the code, the compiler would only 'find' the 2 extension methods from the Futures assembly. I thought the issue might be that I was using conflicting versions of the MVC2 assembly and the Futures assembly, so I added MvcDiagnostics.aspx from the Futures download to my project and everything looked correct: ASP.NET MVC Assembly Information (System.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM (2.0.50217.0) Full name: System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 Code base: file:///C:/WINDOWS/assembly/GAC_MSIL/System.Web.Mvc/2.0.0.0__31bf3856ad364e35/System.Web.Mvc.dll Deployment: GAC-deployed ASP.NET MVC Futures Assembly Information (Microsoft.Web.Mvc.dll) Assembly version: ASP.NET MVC 2 RTM Futures (2.0.50217.0) Full name: Microsoft.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null Code base: file:///xxxx/bin/Microsoft.Web.Mvc.DLL Deployment: bin-deployed This is driving me crazy! Because I thought this might be some VB issue, I created a new MVC2 project using C# and tried the same as above. 
I added the following "using" statement to the top of HomeController.cs using Microsoft.Web.Mvc; This time, in the About action method, I could only manage to call the generic RedirectToAction by typing the full command as follows: return Microsoft.Web.Mvc.ControllerExtensions.RedirectToAction<HomeController>(this, c => c.Index()); Even though I had a "using" statement at the top of the class, if I tried to call the generic RedirectToAction as follows: return RedirectToAction<HomeController>(c => c.Index()); I would get the following compile error: Error 1 The non-generic method 'System.Web.Mvc.Controller.RedirectToAction(string)' cannot be used with type arguments What gives? It's not like I'm trying to do anything out of the ordinary. It's a simple vanilla MVC2 project with only a reference to the Futures assembly. I'm hoping that I've missed something obvious, but I've been scratching my head for too long, so I figured I'd seek some assistance. If anyone's managed to get this simple scenario working (in VB and/or C#), could they please let me know what, if anything, they did differently? Thanks!

    Read the article

  • System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse request failed with HTTP status 404

    - by John Galt
    I am trying to make some enhancements to a production web app. After quite a bit of unit testing on my WinXP IIS 5.1 development machine, everything works on my localhost so I used the Visual Studio 2008 PUBLISH dialog on my Dev PC to push the following projects to a staging server: the primary web app the "primary" webservice (the home page tries to invoke this WS) a "secondary" webservice (not yet a problem because home page does not invoke this WS) I get the following when I try to browse to the home page of the web app typing this into my browser: link text Server Error in '/zVersion2' Application. The request failed with HTTP status 404: Not Found. Description: An unhandled exception occurred during the execution of the current web request.Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.WebException: The request failed with HTTP status 404: Not Found. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [WebException: The request failed with HTTP status 404: Not Found.] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431289 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 ProxyZipeeeService.WSZipeee.Zipeee.GetMessageByType(Int32 iMsgType) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\ProxyZipeeeService\ProxyZipeeeService\Web References\WSZipeee\Reference.vb:2168 Zipeee.frmZipeee.LoadMessage() in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:43 Zipeee.frmZipeee.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\johna\My Documents\Visual Studio 2008\Projects\Zipeee\frmZipeee.aspx.vb:33 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3607; ASP.NET Version:2.0.50727.3082 Here is a bit of the corresponding source code: Public wsZipeee As New ProxyZipeeeService.WSZipeee.Zipeee Dim dsStandardMsg As DataSet Private Sub Page_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load If Not Page.IsPostBack Then LoadMessage() End If End Sub Private Sub LoadMessage() Dim iCnt As Integer Dim iValue As Integer dsStandardMsg = wsZipeee.GetMessageByType(BizConstants.MsgType.Standard) End Sub I suspect I may have configured things incorrectly on the staging server. The staging server is Win Server 2003 ServicePack 2 running IIS 6.0. When I published the primary site and the 2 webservices on the staging server called MOJITO I created the physical directories for each on the D drive. Then using INETMGR, I configured the following virtual directories: zVersion2 zVersion2wsSQL zVersion2wsEmergency All of the above are configured to use a new application pool I setup and named zVersion2aspNet20. The default web site for this machine MOJITO is configured to use ASP.NET 1.1 and the IP address is set to (All Unassigned). 
The production versions of the latter 2 webservices run on the MOJITO machine (named ZipeeeService and EmergencyService respectively). Can my staging versions of the above webservices (named zVersion2wsSQL and zVersion2wsEmergency respectively) co-exist on the same web server with the same IP address? Please note that when I test the zVersion2wsSQL webservice independently (from INETMGR right-mouse and click Browse) it works as expected (i.e. presenting all the methods of the webservice) like this snippet: GetMessageByType MessageName="Get_x0020_Message_x0020_By_x0020_Type" I can test this webmethod by clicking on it and it presents the Test dialog (because it takes a simple datatype and I am invoking it on localhost (i.e. MOJITO): **Get Message By Type** **Test** To test the operation using the HTTP POST protocol, click the 'Invoke' button. Parameter Value iMsgType: _______ [INVOKE button] SOAP 1.1 ....etc. I fear I may have rambled with too much information so I will stop but I hope someone can help me as I cannot understand why this request results in a "not found". Thanks.
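One thing worth double-checking, although it is not mentioned in the question: a Web Reference proxy generated on the development PC keeps the URL it was created with, and SoapHttpClientProtocol exposes a Url property that can be overridden at runtime. A sketch (the Zipeee.asmx file name is an assumption for illustration):

    ' Point the proxy at the staging virtual directory instead of the URL
    ' captured when the Web Reference was added on the dev machine.
    Dim ws As New ProxyZipeeeService.WSZipeee.Zipeee()
    ws.Url = "http://MOJITO/zVersion2wsSQL/Zipeee.asmx"

If the proxy is still pointing at a path that only exists on the development IIS, a 404 thrown from ReadResponse is exactly the symptom you would see.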

    Read the article

  • .NET 4.0 Dynamic object used statically?

    - by Kevin Won
    I've gotten quite sick of XML configuration files in .NET and want to replace them with a format that is more sane. Therefore, I'm writing a config file parser for C# applications that will take a custom config file format, parse it, and create a Python source string that I can then execute in C# and use as a static object (yes that's right--I want a static (not the static type dynamic) object in the end). Here's an example of what my config file looks like: // my custom config file format GlobalName: ExampleApp Properties { ExternalServiceTimeout: "120" } Python { // this allows for straight python code to be added to handle custom config def MyCustomPython: return "cool" } Using ANTLR I've created a Lexer/Parser that will convert this format to a Python script. So assume I have that all right and can take the .config above and run my Lexer/Parser on it to get a Python script out the back (this has the added benefit of giving me a validation tool for my config). By running the resultant script in C# // simplified example of getting the dynamic python object in C# // (not how I really do it) ScriptRuntime py = Python.CreateRuntime(); dynamic conf = py.UseFile("conftest.py"); dynamic t = conf.GetConfTest("test"); I can get a dynamic object that has my configuration settings. I can now get my config file settings in C# by invoking a dynamic method on that object: //C# calling a method on the dynamic python object var timeout = t.GetProperty("ExternalServiceTimeout"); //the config also allows for straight Python scripting (via the Python block) var special = t.MyCustomPython(); of course, I have no type safety here and no IntelliSense support. I have a dynamic representation of my config file, but I want a static one. I know what my Python object's type is--it is actually newing up an instance of a C# class. But since it's happening in Python, its type is not the C# type, but dynamic instead. What I want to do is then cast the object back to the C# type that I know the object is: // doesn't work--can't cast a dynamic to a static type (nulls out) IConfigSettings staticTypeConfig = t as IConfigSettings; Is there any way to figure out how to cast the object to the static type? I'm rather doubtful that there is... so doubtful that I took another approach, of which I'm not entirely sure. I'm wondering if someone has a better way... So here's my current tactic: since I know the type of the Python object, I am creating a C# wrapper class: public class ConfigSettings : IConfigSettings that takes in a dynamic object in the ctor: public ConfigSettings(dynamic settings) { this.DynamicProxy = settings; } public dynamic DynamicProxy { get; private set; } Now I have a reference to the Python dynamic object of which I know the type. 
So I can then just put wrappers around the Python methods that I know are there: // wrapper access to the underlying dynamic object // this makes my dynamic object appear 'static' public string GetSetting(string key) { return this.DynamicProxy.GetProperty(key).ToString(); } Now the dynamic object is accessed through this static proxy and thus can obviously be passed around in the static C# world via an interface, etc: // dependency inject the dynamic object around IBusinessLogic logic = new BusinessLogic(config); // config is an IConfigSettings This solution has the benefits of all the static typing stuff we know and love while at the same time giving me the option of 'bailing out' to dynamic too: // the DynamicProxy property gives direct access to the dynamic object var result = config.DynamicProxy.MyCustomPython(); but, man, this seems like a rather convoluted way of getting to an object that is a static type in the first place! Since the whole dynamic/static interaction world is new to me, I'm really questioning whether my solution is optimal or if I'm missing something (i.e. some way of casting that dynamic object to a known static type) about how to bridge the chasm between these two universes.
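Pulling the question's snippets together, usage of the wrapper would look roughly like this. It is only a sketch assembled from the names above (ConfigSettings, GetConfTest, GetSetting) and assumes the IronPython hosting references are in place:

    // load the generated Python config and wrap the dynamic object in the static proxy
    ScriptRuntime py = Python.CreateRuntime();
    dynamic conf = py.UseFile("conftest.py");
    IConfigSettings settings = new ConfigSettings(conf.GetConfTest("test"));

    // strongly typed access from here on
    string timeout = settings.GetSetting("ExternalServiceTimeout");

    // escape hatch back to dynamic when needed
    var special = ((ConfigSettings)settings).DynamicProxy.MyCustomPython();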

    Read the article

  • Modern Java alternatives

    - by Ralph
    I'm not sure if stackoverflow is the best forum for this discussion. I have been a Java developer for 14 years and have written an enterprise-level (~500,000 line) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress that the language has made to "modernize" itself, and am looking for an alternative for ongoing development. I have considered moving to the .NET platform, but I have issues with using something that only runs well on Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new MacBook Pro as soon as Apple releases their new rumored Arrandale-based machines and want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language. In JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that with the soon-to-be-released JDK7 and its InvokeDynamic instruction, this will improve, but I don't know how much. Groovy is also not truly a functional language, although it provides closures and some of the "functional" features on collections. It does not embrace immutability. I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses. I am looking for the stackoverflow readership's opinions. I am not an expert at either of these languages; I have read 2 1/2 books on Scala and am currently reading Stu Halloway's book on Clojure. Scala is strongly statically typed. I know the dynamic language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time location of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's inter-operation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. For instance, I like the Apache Commons CLI library for command line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code): final Options options = new Options() { { addOption(new Option("?", "help", false, "Show this usage information")); // other options } }; I can't figure out how to do the same thing in Clojure (except by using (doto ...)), although that may reflect my lack of knowledge of the language. Clojure's collections are optimized for immutability. They rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala) and with modern multi-core processors, concurrency takes on more importance, but I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete. I know that there is no right answer to this "question", but I thought I would open it up for discussion. 
If anyone has a suggestion for another JVM-based language that can be used for enterprise level development, please list it. Also, it is not my intent to start a flame war. Thanks, Ralph
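    For reference, here is a plain-Java sketch of the same Commons CLI setup written without the anonymous-subclass initializer block; the class name is invented for illustration and this assumes the standard Apache Commons CLI 1.x API rather than anything from the original post. Because it is just ordinary constructor and method calls, this form maps directly onto straightforward interop calls from either Scala or Clojure:

        import org.apache.commons.cli.Option;
        import org.apache.commons.cli.Options;

        public class CliOptionsExample {
            // Same Options as above, built with ordinary method calls instead of an
            // instance-initializer block in an anonymous subclass.
            static Options buildOptions() {
                Options options = new Options();
                options.addOption(new Option("?", "help", false, "Show this usage information"));
                // ... other options ...
                return options;
            }
        }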

    Read the article

  • java - BigDecimal

    - by Mk12
    I was trying to make my own class for currencies using longs, but apparently I should use BigDecimal (and then whenever I print it just add the $ sign before it). Could someone please get me started? What would be the best way to use BigDecimals for dollar currencies, like keeping it to exactly 2 decimal places for the cents, etc.? The API for BigDecimal is huge, and I don't know which methods to use. Also, BigDecimal has better precision, but isn't that all lost if it passes through a double? If I do new BigDecimal(24.99), how will it be different than using a double? Or should I use the constructor that uses a String instead? EDIT: I decided to use BigDecimals, and then use: private static final java.text.NumberFormat money = java.text.NumberFormat.getCurrencyInstance(); { money.setRoundingMode(RoundingMode.HALF_EVEN); } and then whenever I display the BigDecimals, to use money.format(theBigDecimal). Is this alright? Should I have the BigDecimal rounding it too? Because then it doesn't get rounded before it does another operation... If so, could you show me how? And how should I create the BigDecimals? new BigDecimal("24.99") ? Well, after many comments to Vineet Reynolds (thanks for continuing to come back and answer), this is what I have decided. I use BigDecimals and a NumberFormat. Here is where I create the NumberFormat instance. private static final NumberFormat money; static { money = NumberFormat.getCurrencyInstance(Locale.CANADA); money.setRoundingMode(RoundingMode.HALF_EVEN); } Here is my BigDecimal: private final BigDecimal price; Whenever I want to display the price, or another BigDecimal that I got through calculations of price, I use: money.format(price) to get the String. Whenever I want to store the price, or a calculation from price, in a database or in a field or anywhere, I use (for a field): myCalculatedResult = price.add(new BigDecimal("34.58")).setScale(2, RoundingMode.HALF_EVEN); ... but I'm thinking now maybe I should not have the NumberFormat round, but when I want to display, do this: System.out.print(money.format(price.setScale(2, RoundingMode.HALF_EVEN))); That way I ensure the model and the things displayed in the view are the same. I don't do: price = price.setScale(2, RoundingMode.HALF_EVEN); Because then it would always round to 2 decimal places and wouldn't be as precise in calculations. So it's all solved now, I guess. But is there any shortcut to typing calculatedResult.setScale(2, RoundingMode.HALF_EVEN) all the time? All I can think of is static importing HALF_EVEN... EDIT: I've changed my mind a bit: I think if I store a value, I won't round it unless I have no more operations to do with it, e.g. if it was the final total that will be charged to someone. I will only round things at the end, or whenever necessary, and I will still use NumberFormat for the currency formatting, but since I always want rounding for display, I made a static method for display: public static String moneyFormat(BigDecimal price) { return money.format(price.setScale(2, RoundingMode.HALF_EVEN)); } So values stored in variables won't be rounded, and I'll use that method to display prices.
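    Pulling the pieces from the post above together, here is a minimal runnable sketch of that final approach; the class name PriceFormatting, the Locale.CANADA choice, and the sample amounts are just illustrative assumptions:

        import java.math.BigDecimal;
        import java.math.RoundingMode;
        import java.text.NumberFormat;
        import java.util.Locale;

        public class PriceFormatting {
            // Shared currency formatter; rounding is applied explicitly before formatting.
            private static final NumberFormat MONEY = NumberFormat.getCurrencyInstance(Locale.CANADA);

            // Round only for display, keeping full precision in the stored value.
            public static String moneyFormat(BigDecimal price) {
                return MONEY.format(price.setScale(2, RoundingMode.HALF_EVEN));
            }

            public static void main(String[] args) {
                // The String constructor preserves the exact decimal value;
                // new BigDecimal(24.99) would capture the nearest binary double instead.
                BigDecimal price = new BigDecimal("24.99");
                BigDecimal total = price.add(new BigDecimal("34.58"));
                System.out.println(moneyFormat(total)); // prints $59.57
            }
        }

    Using the String constructor and applying setScale(2, RoundingMode.HALF_EVEN) only at display time keeps stored values exact, which is the behaviour the post settles on.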

    Read the article

  • which control in vs08 aspx c# will be able to take html tags as input and display it formatted accor

    - by user287745
    I asked a few questions regarding this and got answers that indicate I will have to build a custom editor using Markdown, etc., to achieve something like the stackoverflow.com/questions/ask page. There is a time limitation and the requirements are set; I have to achieve: 1) allow users to use HTML tags when giving input, 2) save that complete input to the SQL db, 3) display the data from the db within a control which renders the formatting as the tags specify. I am aware that Label and Literal controls support HTML tags; the problem is how to allow the user to input them, since the TextBox does not seem to support HTML tags. Thank you. I need help implementing the effect of the WRITE A NEW POST page on stackoverflow.com. <%--The editor--%> <asp:UpdatePanel ID="UpdatePanel7" runat="server"> <ContentTemplate> <asp:Label ID="Label4" runat="server" Text="Label"></asp:Label> <div id="textchanging" onkeyup="textoftextbox('TextBox3'); return false;"> <asp:TextBox ID="TextBox3" runat="server" CausesValidation="True" ></asp:TextBox> </div> <%-- OnTextChanged="textoftextbox('TextBox3'); return false;" gives too many literals error. --%> <asp:Button ID="Button6" runat="server" Text="Post The Comment, The New, Write on Wall" onclick="Button6_Click" /> <asp:Label ID="Label2" runat="server" Text="Label"></asp:Label> <asp:TextBox ID="TextBox4" runat="server" CausesValidation="True"></asp:TextBox> <asp:Literal ID="Literal2" runat="server" Text="this must change" /> here comes the div <div id="displayingarea" runat="server">This area will change</div> </ContentTemplate> </asp:UpdatePanel> <%--The editor ends--%> protected void Button6_Click(object sender, EventArgs e) { Label3.Text = displayingarea.InnerHtml; } As you see, I have implemented the effect of typing text in the textbox, which appears as a reflection in the div with fully formatted text according to the HTML tags used. The textbox does not allow HTML! Well, the textbox does not have to; the script just extracts what is typed, without letting the server know. The HTML of the div tag also changes as you type in the textbox. Now, there is a “post” button within an update panel to avoid a full postback of the page. What I need is: when the button is clicked, the value within the div tag's “innerhtml” is passed on to the label. Yes, I know the changes are only made on the client side, so when the button click event occurs the server is not aware of the new data within the div tag; therefore the server assigns the original HTML within the div to the label. I need help to overcome this. How is it that when we enter text in the textbox and press the button, with code like label1.Text = textbox1.Text; it works even in an update panel, but the above code for extracting the innerHTML typed at the user's end, similar to typing in the textbox, does not work?

    Read the article

  • Log a user in to an ASP.net application using Windows Authentication without using Windows Authentic

    - by Rising Star
    I have an ASP.net application I'm developing authentication for. I am using an existing cookie-based log-on system to log users in to the system. The application runs as an anonymous account and then checks the cookie when the user wants to do something restricted. This is working fine. However, there is one caveat: I've been told that for each page that connects to our SQL server, I need to make it so that the user connects using an Active Directory account. Because the system I'm using is cookie-based, the user isn't logged in to Active Directory. Therefore, I use impersonation to connect to the server as a specific account. However, the powers that be here don't like impersonation; they say that it clutters up the code. I agree, but I've found no way around this. It seems that the only way that a user can be logged in to an ASP.net application is by either connecting with Internet Explorer from a machine where the user is logged in with their Active Directory account or by typing an Active Directory username and password. Neither of these two is workable in my application. I think it would be nice if I could make it so that when a user logs in and receives the cookie (which actually comes from a separate log-on application, by the way), there could be some code run which tells the application to perform all network operations as the user's Active Directory account, just as if they had typed an Active Directory username and password. It seems like this ought to be possible somehow, but the solution evades me. How can I make this work? Update: To those who have responded so far, I apologize for the confusion I have caused. The responses I've received indicate that you've misunderstood the question, so please allow me to clarify. I have no control over the requirement that users must perform network operations (such as SQL queries) using Active Directory accounts. I've been told several times (online and in meat-space) that this is an unusual requirement and possibly bad practice. I also have no control over the requirement that users must log in using the existing cookie-based log-on application. I understand that in an ideal MS ecosystem, I would simply disallow anonymous access in my IIS settings and users would log in using Windows Authentication. This is not the case. The current system is that as far as IIS is concerned, the user logs in anonymously (even though they supply credentials which result in the issuance of a cookie) and we must programmatically check the cookie to see if the user has access to any restricted resources. In times past, we have simply used a single SQL account to perform all queries. My direct supervisor (who has many years of experience with this sort of thing) wants to change this. He says that if each user has his own AD account to perform SQL queries, it gives us more of a trail to follow if someone tries to do something wrong. The closest thing I've managed to come up with is using WIF to give the user a claim to a specific Active Directory account, but I still have to use impersonation because even still, the ASP.net process presents anonymous credentials to the SQL server. It boils down to this: Can I log users in with Active Directory accounts in my ASP.net application without having the users manually enter their AD credentials? (Windows Authentication)

    Read the article

  • Getting 500 Error when trying to access Rails application through Apache2

    - by cojones
    Hey, I'm using Apache2 as proxy and mongrel_cluster as server for my Rails applications. When I try to access it by typing in the url I get a 500 "Internal Server Error" but when try to locally access the website with "lynx http://localhost:8200" it works. This is my config: <Proxy balancer://sportfreundewitold_cluster> BalancerMember http://127.0.0.1:8200 BalancerMember http://127.0.0.1:8201 </Proxy> # httpd [example.org] dmn entry BEGIN. <VirtualHost x.x.x.x:80> <IfModule suexec_module> SuexecUserGroup vu2025 vu2025 </IfModule> ServerAdmin [email protected] DocumentRoot /var/www/virtual/example.org/htdocs/current/public ServerName example.org ServerAlias www.example.org example.org *.example.org vu2025.admin.roughneck-media.de Alias /errors /var/www/virtual/example.org/errors/ RedirectMatch permanent ^/ftp[\/]?$ http://admin.roughneck-media.de/ftp/ RedirectMatch permanent ^/pma[\/]?$ http://admin.roughneck-media.de/pma/ RedirectMatch permanent ^/webmail[\/]?$ http://admin.roughneck-media.de/webmail/ RedirectMatch permanent ^/ispcp[\/]?$ http://admin.roughneck-media.de/ ErrorDocument 401 /errors/401.html ErrorDocument 403 /errors/403.html ErrorDocument 404 /errors/404.html ErrorDocument 500 /errors/500.html ErrorDocument 503 /errors/503.html <IfModule mod_cband.c> CBandUser example.org </IfModule> # httpd awstats support BEGIN. # httpd awstats support END. # httpd dmn entry cgi support BEGIN. ScriptAlias /cgi-bin/ /var/www/virtual/example.org/cgi-bin/ <Directory /var/www/virtual/example.org/cgi-bin> AllowOverride AuthConfig #Options ExecCGI Order allow,deny Allow from all </Directory> # httpd dmn entry cgi support END. <Directory /var/www/virtual/example.org/htdocs/current/public> # httpd dmn entry PHP support BEGIN. # httpd dmn entry PHP support END. Options -Indexes Includes FollowSymLinks MultiViews AllowOverride All Order allow,deny Allow from all </Directory> # httpd dmn entry PHP2 support BEGIN. <IfModule mod_php5.c> php_admin_value open_basedir "/var/www/virtual/example.org/:/var/www/virtual/example.org/phptmp/:/usr/share/php/" php_admin_value upload_tmp_dir "/var/www/virtual/example.org/phptmp/" php_admin_value session.save_path "/var/www/virtual/example.org/phptmp/" php_admin_value sendmail_path '/usr/sbin/sendmail -f vu2025 -t -i' </IfModule> <IfModule mod_fastcgi.c> ScriptAlias /php5/ /var/www/fcgi/example.org/ <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI -MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> <IfModule mod_fcgid.c> Include /etc/apache2/mods-available/fcgid_ispcp.conf <Directory /var/www/virtual/example.org/htdocs> FCGIWrapper /var/www/fcgi/example.org/php5-fcgi-starter .php Options +ExecCGI </Directory> <Directory "/var/www/fcgi/example.org"> AllowOverride None Options +ExecCGI MultiViews -Indexes Order allow,deny Allow from all </Directory> </IfModule> # httpd dmn entry PHP2 support END. Include /etc/apache2/ispcp/example.org.conf RewriteEngine On # Make sure people go to www.myapp.com, not myapp.com RewriteCond %{HTTP_HOST} ^myapp\.com$ [NC] RewriteRule ^(.*)$ http://www.myapp.com$1 [R=301,L] # Yes, I've read no-www.com, but my site already has much Google-Fu on # www.blah.com. Feel free to comment this out. 
# Uncomment for rewrite debugging #RewriteLog logs/myapp_rewrite_log #RewriteLogLevel 9 # Check for maintenance file and redirect all requests RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f RewriteCond %{SCRIPT_FILENAME} !maintenance.html RewriteRule ^.*$ /system/maintenance.html [L] # Rewrite index to check for static RewriteRule ^/$ /index.html [QSA] # Rewrite to check for Rails cached page RewriteRule ^([^.]+)$ $1.html [QSA] # Redirect all non-static requests to cluster RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L] # Deflate AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \\bMSIE !no-gzip !gzip-only-text/html # Uncomment for deflate debugging #DeflateFilterNote Input input_info #DeflateFilterNote Output output_info #DeflateFilterNote Ratio ratio_info #LogFormat '"%r" %{output_info}n/%{input_info}n (%{ratio_info}n%%)' deflate #CustomLog logs/myapp_deflate_log deflate </VirtualHost> # httpd [example.org] dmn entry END. Does anyone know what could be wrong with it?

    Read the article

  • Error when opening .tar.gz via Shell to install Apache Maven

    - by adamsquared
    Thank you in advance for the help. My Goal: To install Apache Maven per its website's instructions (http://maven.apache.org/download.html), in order to install the JUNG package according to its install instructions (http://sourceforge.net/apps/trac/jung/wiki/JUNGManual), so I can use the JUNG classes in various Java GUIs. The Problem: I get an error message when I try to extract the apache-maven .gz (install?) file in the shell. Background: I'm trying to install the JUNG (http://jung.sourceforge.net/index.html) package to my system's Java, so I can write object-oriented code using various GUIs (Eclipse, DrJava) using the classes in JUNG. I don't understand how the building/installing process works, and how I can get what I build/install to work on various GUIs and the command line. I'm new to the shell and the command line, and mostly have experience using a simple IDE (DrJava, Python IDLE, R GUI) to write and compile object-oriented code. Machine: Mac OS X 10.5.8 32-bit. The Instructions: For building Maven: Extract the distribution archive, i.e. apache-maven-3.0.4-bin.tar.gz, to the directory you wish to install Maven 3.0.4. These instructions assume you chose /usr/local/apache-maven. The subdirectory apache-maven-3.0.4 will be created from the archive. ... For the JUNG installation: Appendix: How to Build JUNG This is a brief intro to building JUNG jars with maven2 (the build system that JUNG currently uses). First, ensure that you have a JDK of at least version 1.5: JUNG 2.0+ requires Java 1.5+. Ensure that your JAVA_HOME variable is set to the location of the JDK. On a Windows platform, you may have a separate JRE (Java Runtime Environment) and JDK (Java Development Kit). The JRE has no capability to compile Java source files, so you must have a JDK installed. If your JAVA_HOME variable is set to the location of the JRE, and not the location of the JDK, you will be unable to compile. Get Maven Download and install maven2 from maven.apache.org: http://maven.apache.org/download.html At time of writing (early December 2009), the latest version was maven-2.2.1. Install the downloaded maven2 (there are installation instructions on the Maven website). Follow the installation instructions and confirm a successful installation by typing 'mvn --version' in a command terminal window. Get JUNG ... What I Did: I downloaded the file apache-maven-2.2.1-bin.tar.gz. The JUNG website specified using Apache Maven 2. I wanted to stick to the recommended installation instructions, but I couldn't get to /usr in my GUI (I've noticed that when you click on the MacHD icon on the desktop, it's missing several directories/folders that you can see in the shell using the ls command at the root directory), so I couldn't find a way to access the file using my Mac GUI. Therefore, I used the shell to navigate to the root directory and then to /usr/local, and used the mkdir command to make the directory apache-maven and entered it. I then moved the file using the mv command. Next I tried extracting the file using tar -zxvf apache-maven-2.2.1-bin.tar.gz. The Error Message: tar: apache-maven-2.2.1/direcoryandfile: Cannot open: No such file or directory ... apache-maven-2.2.1/lib/ext: Cannot mkdir: No such file or directory apache-maven-2.2.1/lib/ext/README.txt tar: apache-maven-2.2.1/lib/ext/README.txt: Cannot open: No such file or directory tar: Error exit delayed from previous errors From what I can tell, the archive file is missing some directories or something. 
I tried deleting the file, redownloading the .tar.gz file from a different mirror and repeating the process. Same result. Thanks again for the help

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top on my list of hard-to-optimize features and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard, anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value-based typing as opposed to variable-based) Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee. I.e. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First-class continuations - There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation-passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource management issues, i.e. we must save everything, in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficiency (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and ability to mask pointers (storing in integers, etc.) Had to throw this in, but I could actually live without this quite easily. My feelings are that many of the high-level features, particularly in dynamic languages, just don't map to hardware. 
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers, and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
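    To make the call-site-caching point in item 1 concrete, here is a toy Java sketch of the idea; the Circle/Square classes, field values, and HashMap-based cache are invented for illustration, and a real VM would patch generated machine code rather than consult a map:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.ToDoubleFunction;

        // Toy model of a call-site cache: the first call with a given receiver class
        // pays for the full lookup; later calls with the same class hit the cache.
        public class CallSiteCache {
            private final Map<Class<?>, ToDoubleFunction<Object>> cache = new HashMap<>();

            public double invokeArea(Object receiver) {
                ToDoubleFunction<Object> target = cache.get(receiver.getClass());
                if (target == null) {
                    target = resolve(receiver.getClass()); // slow path: full dispatch
                    cache.put(receiver.getClass(), target);
                }
                return target.applyAsDouble(receiver);     // fast path on later calls
            }

            // Stand-in for the runtime's real method lookup.
            private ToDoubleFunction<Object> resolve(Class<?> type) {
                if (type == Circle.class) return o -> Math.PI * ((Circle) o).r * ((Circle) o).r;
                if (type == Square.class) return o -> ((Square) o).side * ((Square) o).side;
                throw new IllegalArgumentException("no area method for " + type);
            }

            static class Circle { double r = 2.0; }
            static class Square { double side = 3.0; }

            public static void main(String[] args) {
                CallSiteCache site = new CallSiteCache();
                System.out.println(site.invokeArea(new Circle())); // resolves, then caches
                System.out.println(site.invokeArea(new Circle())); // cache hit
                System.out.println(site.invokeArea(new Square())); // second cache entry
            }
        }

    A multi-method dispatcher would have to key this cache on the full tuple of argument classes rather than a single receiver class, which is exactly the extra complexity the post describes.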

    Read the article

< Previous Page | 85 86 87 88 89 90 91 92 93 94  | Next Page >