Search Results

Search found 40698 results on 1628 pages for 'jon works'.


  • Top 10 OTN Tech Articles for 2012

    - by Bob Rhubart
    It takes a special kind of IT pro to risk additional carpal tunnel damage by pounding out a technical article after spending the day wrestling with a keyboard while dealing with other duties. That kind of dedication is noteworthy, even more so if people actually take the time to read the resulting article. So if you know any of the authors listed below, skip the handshake and give them a congratulatory slap on the back for all that time spent torturing their tendons. Their hard work has earned a place on this list of the Top 10 most popular OTN articles published in 2012.

      1. Getting Started with Java SE Embedded on the Raspberry Pi by Bill Courington and Gary Collins
      2. How Dell Migrated from SUSE Linux to Oracle Linux by Jon Senger, Aik Zu Shyong, and Suzanne Zorn
      3. Exploring Oracle SQL Developer by Przemyslaw Piotrowski
      4. Getting Started with Oracle Unbreakable Enterprise Kernel Release 2 by Lenz Grimmer
      5. How to Get Started (FAST!) with JavaFX 2 and Scene Builder by Mark Heckler
      6. How to Use Oracle VM VirtualBox Templates by Yuli Vasiliev
      7. How to Update Oracle Solaris 11 Systems From Oracle Support Repositories by Glynn Foster
      8. Tips for Hardening an Oracle Linux Server by Lenz Grimmer and James Morris
      9. How To Configure Browser-based SSO with Kerberos/SPNEGO and Oracle WebLogic Server by Abhijit Patil
      10. How to Create a Local Yum Repository for Oracle Linux by Jared Greenwald

    Of course, OTN has a great many articles covering a broad range of topics of interest to Java developers, DBAs, sysadmins, solution architects, and everybody else who works to keep the IT world running. You'll find them here. If you have suggestions for topics or technologies you'd like to see covered, please let us know. And if you have insight and expertise to share, why not write your own article? Click here to learn how to get published on OTN.

    Read the article

  • Only One Month to OpenWorld-San Francisco!

    - by Stephen Slade
    From around the world, the city is expecting 50,000+ guests to flock to this annual extravaganza. Over 2,000 sessions will focus on Oracle's latest product offerings, customer case studies, panels of experts and a variety of other hardware, technology, middleware and applications topics. For those interested in the latest capabilities delivered by Oracle's supply chain applications, the 'Focus-On' documents are now available to help guide you in your schedule builder. Schedule Builder lets you create a personalized agenda for the sessions you wish to attend, such as:

      Monday, October 1, 2012, 3:15 pm - 4:15 pm
      General Session: Supply Chain Management—Strategy, Update, and Roadmap
      Richard Jewell, Senior Vice President, Applications Development, Oracle
      Moscone West, Level 2, Room 3014

      Tuesday, October 2, 2012, 10:15 am - 11:15 am
      Oracle Fusion Supply Chain Management: Overview, Strategy, Customer Experiences, and Roadmap
      Jon Chorley, CSO & VP, Product Strategy, Oracle
      Moscone West, Level 2, Room 2006

    There is an exciting lineup of about 100 supply chain sessions at OpenWorld. Contact your sales rep or Oracle partner to obtain a copy of the most current Focus-On document, segmented by pillars such as Manufacturing, Maintenance/EAM, Value Chain Planning, Value Chain Execution, Procurement and Agile/Product Lifecycle Management. It will give you a better-informed view for scheduling your time in San Francisco.

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of:

    Install
    First up, install Preview 2 and try out the demos I was showing at the conference. Remember that the IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-)

    Learn
    Then check out the following webcasts, which were recorded in March this year at MIX (slides and video downloads in MP4 and small/large WMV formats are available on each session page):

      - In-Depth Look At Internet Explorer 9 (Ted Johnson & John Hrvatin): http://live.visitmix.com/MIX10/Sessions/CL28
      - High Performance Best Practices For Web Sites (Jason Weber): http://live.visitmix.com/MIX10/Sessions/CL29
      - HTML5: Cross Browser Best Practices (Tony Ross): http://live.visitmix.com/MIX10/Sessions/CL27
      - Internet Explorer Developer Tools (Jon Seitel): http://live.visitmix.com/MIX10/Sessions/FT51
      - SVG: The Past, Present And Future of Vector Graphics For The Web (Patrick Dengler, Doug Schepers): http://live.visitmix.com/MIX10/Sessions/EX30
      - Day 2 Keynote containing IE9 (Dean Hachamovitch): http://live.visitmix.com/MIX10/Sessions/KEY02

    Read the article

  • Round-twice error in .NET's Double.ToString method

    - by Jeppe Stig Nielsen
    Mathematically, consider for this question the rational number 8725724278030350 / 2**48, where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms; it is reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is

      31.0000000000000'49'73799150320701301097869873046875 (exact)

    where the apostrophes do not represent missing digits but merely mark the boundaries where rounding to 15 and 17 digits, respectively, is to be performed. Note the following: if this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s), because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up by increasing the 49... digits to 50 (terminates) (the next digits were 73...), and the second rounding might then round up again (when the midpoint-rounding rule says "round away from zero"). (There are many more numbers with the above characteristics, of course.) Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: isn't this a bug? By standard string representation we mean the String produced by the parameterless Double.ToString() instance method, which is of course identical to what is produced by ToString("G"). An interesting thing to note is that if you cast the above number to System.Decimal you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToString() makes an incorrect one. To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate. Here's some C# code for you to verify the problem:

      static void Main()
      {
          const double evil = 31.0000000000000497;

          // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx
          string exactString = DoubleConverter.ToExactString(evil);

          Console.WriteLine("Exact value (Jon Skeet): {0}", exactString);
          // writes 31.00000000000004973799150320701301097869873046875

          Console.WriteLine("General format (G): {0}", evil);
          // writes 31.0000000000001

          Console.WriteLine("Round-trip format (R): {0:R}", evil);
          // writes 31.00000000000005

          Console.WriteLine();
          Console.WriteLine("Binary repr.: {0}",
              String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2"))));
          Console.WriteLine();

          decimal converted = (decimal)evil;
          Console.WriteLine("Decimal version: {0}", converted);
          // writes 31

          decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture);
          Console.WriteLine("Better decimal: {0}", preciseDecimal);
          // writes 31.000000000000049737991503207
      }

    The above code uses Skeet's ToExactString method. If you don't want to use his stuff (it can be found through the URL), just delete the code lines above that depend on exactString. You can still see how the Double in question (evil) is rounded and cast.
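    To see the two roundings side by side, here is a minimal sketch; the output comments assume the classic .NET Framework formatting behavior described above (later runtimes changed the default to shortest round-trippable output):

      using System;

      static class RoundTwiceDemo
      {
          static void Main()
          {
              const double evil = 31.0000000000000497;
              // One correct rounding to 15 significant digits:
              Console.WriteLine(evil.ToString("G15")); // 31
              // The 17-digit intermediate representation:
              Console.WriteLine(evil.ToString("G17")); // 31.000000000000050
              // Default ToString(): effectively rounded to 17 digits, then to 15:
              Console.WriteLine(evil.ToString());      // 31.0000000000001
          }
      }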

    Read the article

  • JOIN THE ORACLE Fusion Middleware Summer Camps

    - by mseika
    JOIN THE ORACLE Fusion Middleware Summer Camps

    For Specialized partners who are working on the following projects & opportunities, we offer these advanced summer camps:

      - BPM Suite 11
      - ADF 11g
      - WebCenter Portal
      - WebLogic 12c
      - SOA Suite 11g
      - ADF for BPM Suite 11
      - WebCenter Sites 11g

    All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled on a first-come, first-served basis; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria!

    Lisbon: Monday, July 9th 11:00 AM - Friday, July 13th 16:00 (Lisbon time)
      - ADF 11g advanced training by Grant Ronald and Frank Nimphius
      - WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata
      - WebLogic 12c training by Cosmin Tudor

    Munich: Monday, July 16th 11:00 AM - Wednesday, July 18th 16:00 (CET)
      - ADF for BPM Suite 11g advanced training by David Read
      - WebCenter Sites 11g advanced training by Product Management & PTS

    Cost: Free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served.

    For details and registration please visit the Lisbon registration page & the Munich registration page.

    Quotes from the 2011 summer camps:
      "From zero to hero with this BPM workshop" - Steven Boon, Ordina (LinkedIn)
      "This is the training that prepares for real projects and POCs" - Jon Petter Hjulstad, eVita (blog & twitter)

    SOA & BPM Partner Community registration: please first login at http://partner.oracle.com and then visit http://www.oracle.com/goto/emea/soa. If you have any questions please contact the Oracle Partner Business Center. If you have questions please feel free to contact us any time!

    Best regards
    Jürgen Kress
    Oracle EMEA SOA & BPM Partner Adoption
    Tel. +49 89 1430 1479
    E-Mail: [email protected]

    Read the article

  • Install VirtualBox Guest Additions "Feisty Fawn"

    - by codebone
    I am trying to install the VirtualBox Guest Additions into a Feisty Fawn (7.04) VM. The problem I am running into is that when I click "Install Guest Additions", or place the ISO in the drive manually, the disk never shows up in the machine. I even mounted it manually and went to the mounted directory, and it was empty. When using the file browser, double-clicking on the CD-ROM yields:

      Unable to mount the selected volume.
      mount: special device /dev/hdc does not exist.

    Secondly, I tried installing via the repository... but according to the system, I have no internet connection. Yet I can browse the web using Firefox just fine (in the guest). Here is my /etc/network/interfaces file:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet dhcp

    I also tried setting a static IP:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 192.168.1.253
          netmask 255.255.255.0
          gateway 192.168.1.1

    The reason I am trying to get this particular installation running is because it is part of a book, "Hacking: The Art of Exploitation" (Jon Erickson), and is fully loaded with tools and such to go along exactly with the book. I appreciate any effort into finding a solution for this! Thanks
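    For reference, a minimal sketch of mounting the additions ISO by hand inside the guest; the device node and installer name are assumptions that vary by guest and VirtualBox version, so check dmesg for the actual CD device:

      # Device node and installer name are assumptions; confirm with `dmesg | tail`
      # after attaching the Guest Additions ISO in the VM's CD/DVD settings.
      ls /dev/hdc /dev/scd0 /dev/cdrom 2>/dev/null    # see which node actually exists
      sudo mkdir -p /media/cdrom
      sudo mount -t iso9660 -o ro /dev/scd0 /media/cdrom   # substitute the node found above
      cd /media/cdrom && sudo sh ./VBoxLinuxAdditions.run  # installer name differs on older releases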

    Read the article

  • Oracle Database 12c and Oracle Solaris 11 DTrace

    - by Larry Wake
    As you may have heard, Oracle Database 12c is now available for Oracle Solaris and Oracle Linux. Among other things, that means we now have the opportunity to share some of the cool things the Oracle Database and Oracle Solaris engineering teams have been doing together. And here's a good one: in this screencast, Jon Haslam describes how, on Oracle Solaris 11, DTrace is now integrated into Oracle Database V$ views to provide a top-to-bottom picture of database transaction I/O -- from storage devices, through the Oracle Solaris kernel, up to Oracle Database 12c itself. With this end-to-end view, you can easily identify I/O outliers -- transactions that are taking an unusually long time to complete -- and use this comprehensive data to identify and mitigate storage system problems that were previously extremely hard to debug. This is a great example of the power of DTrace, which is just about to celebrate its 10th anniversary in the wild. The screencast has some nice examples of DTrace's power on its own, as well as diving into the DTrace/Oracle Database 12c synergy. There's more, of course. Over on the OTN Garage blog, Rick Ramsey has put together a nice compendium of ways the OS makes the database scream, and Ginny Henningsen's written an article on the same topic. And we've also got an OTN page that digs further into Oracle Database / Oracle Solaris synergies.
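    As a taste of DTrace on its own (separate from the V$ integration the screencast covers), a classic one-liner using the io provider counts block-I/O requests by the process that issued them; run it as root on Solaris:

      # Count block-I/O requests by issuing process until Ctrl-C:
      dtrace -n 'io:::start { @[execname] = count(); }'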

    Read the article

  • Book Review: Professional ASP.Net MVC4

    - by Sam Abraham
    The past few weeks have been particularly busy as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will be providing a brief overview of my latest reading, "Professional ASP.NET MVC4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. It takes the reader on a 16-chapter journey towards being a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of Nuget.org, a real-life, open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: Controllers "C", Views "V" and Models "M" respectively. Chapter 5 covers additional extension methods (helpers) provided to speed and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases for implementing custom validation that plugs into the MVC framework. Chapters 7 thru 13 discuss the latest on Membership, Ajax, Routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC with its ease of testability and pluggability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece of the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read for both developers already using MVC and those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:

      - Professional ASP.NET Design Patterns
      - Professional WCF 4.0
      - Inside Windows Communication Foundation
      - Inside Microsoft SQL Server 2008 series

    Read the article

  • On developing deep programming knowledge

    - by Robert Harvey
    Occasionally I see questions about edge cases and other weirdness on Stack Overflow that are easily answered by the likes of Jon Skeet and Eric Lippert, demonstrating a deep knowledge of the language and its many intricacies, like this one:

      "You might think that in order to use a foreach loop, the collection you are iterating over must implement IEnumerable or IEnumerable<T>. But as it turns out, that is not actually a requirement. What is required is that the type of the collection must have a public method called GetEnumerator, and that must return some type that has a public property getter called Current and a public method MoveNext that returns a bool. If the compiler can determine that all of those requirements are met then the code is generated to use those methods. Only if those requirements are not met do we check to see if the object implements IEnumerable or IEnumerable<T>."

    That's cool stuff to know. I can understand why Eric knows this; he's on the compiler team, so he has to know. But what about those who demonstrate such deep knowledge who are not insiders? How do mere mortals (who are not on the C# compiler team) find out about stuff like this? Specifically, are there methods these folks use to systematically root out such knowledge, explore it and internalize it (make it their own)?
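    To make the quoted point concrete, here is a minimal sketch (type names are made up for illustration) of a collection that compiles with foreach without implementing IEnumerable or IEnumerable<T>:

      using System;

      // Countdown implements neither IEnumerable nor IEnumerable<T>, yet
      // foreach compiles because the compiler's pattern is satisfied.
      class Countdown
      {
          public CountdownEnumerator GetEnumerator() { return new CountdownEnumerator(3); }
      }

      class CountdownEnumerator
      {
          private int _current;
          public CountdownEnumerator(int start) { _current = start + 1; }
          public int Current { get { return _current; } }       // public property getter
          public bool MoveNext() { return --_current > 0; }      // public bool MoveNext()
      }

      static class Demo
      {
          static void Main()
          {
              foreach (int i in new Countdown())  // prints 3, 2, 1
                  Console.WriteLine(i);
          }
      }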

    Read the article

  • How to schedule dynamic function with cron job?

    - by iBrazilian2
    I want to know how I can schedule a dynamic (auto-populated data) function to run automatically every day at a saved time. Let's say I have a form that, once the button is clicked, sends the data to the function, which then posts the data. I simply want to automate that so that I don't have to press the button.

      <ul>
      <?php foreach ($Class->retrieveData as $data) { ?>
          <form method="post" action="">
              <li>
                  <input type="hidden" name="name" value="<?php echo $data['name']; ?>"><br/>
                  <input type="hidden" name="description" value="<?php echo $data['description']; ?>"><br/>
                  <input type="submit" name="post_data" value="Post">
              </li>
          </form>
      <?php } ?>
      </ul>

    Now, the form will pass the data to the function:

      if (isset($_POST['post_data'])) { // if the Post button is clicked, run myFunction()
          myFunction();
      }

      function myFunction()
      {
          $name        = $_POST['name'];
          $description = $_POST['description'];
      }

    I tried doing the following, but the problem is that a cron job can only run a whole .php file, and I am retrieving the saved run time from MySQL:

      foreach ($Class->getTime() as $timeData) {
          $timeHour   = $timeData['timeHour'];
          $timeMinute = $timeData['timeMinute'];
          $hourMin    = date('H:i');
          $time       = $timeHour . ':' . $timeMinute;

          if ($hourMin == $time) {
              // run myFunction()
          }
      }

    $hourMin is the current hour:minute, which is matched against a saved time from MySQL. So if $hourMin == $time, then the function will run. How can I get a cron job to auto-run myFunction() if $hourMin equals the saved time? So:

      List 1 = is to be run at 10am
      List 2 = is to be run at 12pm
      List 3 = is to be run at 2pm

    The 10am, 12pm and 2pm are the $timeHour and $timeMinute values retrieved from MySQL, based on each list's id.

    EDIT @randomSeed,

    1) I can schedule cron jobs.

    2) $name and $description will all be arrays, so the following is what I am trying to accomplish:

      $name = array(
          'Jon',
          'Steven',
          'Carter'
      );

      $description = array(
          'Jon is a great person.',
          'Steven has an outgoing character.',
          'Carter is a horrible person.'
      );

    I want to parse the first elements from both $name and $description if the scheduled time matches. In the database I have the following postDataTime table:

      +----+---------+----------+------------+--------+
      | iD | timeDay | timeHour | timeMinute | postiD |
      +----+---------+----------+------------+--------+
      | 1  | *       | 9        | 0          | 21     |
      | 2  | *       | 10       | 30         | 22     |
      | 3  | *       | 11       | 0          | 23     |
      +----+---------+----------+------------+--------+

      iD         = auto-incremented on upload
      timeDay    = * is every day (cron job style)
      timeHour   = hour of the day to run the script
      timeMinute = minute of the hour to run the script
      postiD     = the id of the post that is located in another table (n+1 relationship)

    If it's difficult to understand:

      if (time() == '10:30' /* the time from MySQL for postiD = 22 */) {
          // run myFunction with the data that is retrieved for that time, e.g.:
          $postiD      = '22';
          $name        = 'Steven';
          $description = 'Steven has an outgoing character.';
          // the above is what would be in $_POST from the form and will be
          // sent to myFunction()
      }

    I simply want to schedule everything according to the time that is saved in MySQL, as I showed at the very top (postDataTime table). (I'd show what I have tried, but I have searched for countless hours for an example of what I am trying to accomplish, I cannot find anything, and what I tried doesn't work.) I thought I could use the exec() function, but it seems that does not allow me to run functions; otherwise I would do the following:

      $time = '10:30';
      if ($time == time()) {
          exec(myFunction());
      }
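    One way to approach this, as a minimal sketch with hypothetical table, column and helper names: run a small runner script from cron every minute (e.g. `* * * * * php /path/to/runner.php`), select the rows whose saved hour and minute match the current time, and call the function with that row's data directly instead of reading it from $_POST:

      <?php
      // runner.php -- invoked by cron once a minute (names here are hypothetical).
      require 'bootstrap.php'; // assumed to provide a PDO connection in $pdo

      function postData($name, $description)
      {
          // ... the same work the form-driven myFunction() does, minus $_POST ...
      }

      // Find every saved schedule that matches the current hour and minute.
      $stmt = $pdo->prepare(
          'SELECT p.name, p.description
             FROM postDataTime t
             JOIN posts p ON p.iD = t.postiD
            WHERE t.timeHour = :h AND t.timeMinute = :m'
      );
      $stmt->execute(array(':h' => (int)date('G'), ':m' => (int)date('i')));

      foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
          postData($row['name'], $row['description']);
      }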

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than in previous versions to take advantage of the compression that's built into the Web server. IIS 7 also supports dynamic compression, which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing, so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS -> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

    Static Compression
    Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk, serving the compressed alias whenever static content is requested and it hasn't changed. The overhead for this is minimal and it should be aggressively enabled.

    Dynamic Compression
    Works against application-generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed on every request that regenerates it. As such, dynamic compression has a much bigger impact than static caching.

    How Compression is configured
    Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline, all the way from ApplicationHost.config down to the local web.config file. The following is from the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5, with a couple of small adjustments (added json output and enabled dynamic compression):

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
            <dynamicTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/x-javascript" enabled="true" />
              <add mimeType="application/json" enabled="true" />
              <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
            <staticTypes>
              <add mimeType="text/*" enabled="true" />
              <add mimeType="message/*" enabled="true" />
              <add mimeType="application/x-javascript" enabled="true" />
              <add mimeType="application/atom+xml" enabled="true" />
              <add mimeType="application/xaml+xml" enabled="true" />
              <add mimeType="*/*" enabled="false" />
            </staticTypes>
          </httpCompression>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>
      </configuration>

    You can find documentation on the httpCompression and urlCompression keys here, respectively:

      http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
      http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

    The httpCompression Element - What and How to compress
    Basically, httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime types, which looks at returned Content-Type headers in HTTP responses.
    For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

    The urlCompression Element - Enables and Disables Compression
    The urlCompression element is a quick way to turn compression on and off. By default, static compression is enabled server-wide and dynamic compression is disabled server-wide. This might be a bit confusing, because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor in whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

    Static Compression is enabled by Default, Dynamic Compression is not
    Because static compression is very efficient in IIS 7, it's enabled by default server-wide and there probably is no reason to ever change that setting. Dynamic compression, however, since it's more resource-intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

      <urlCompression doDynamicCompression="true" />

    in ApplicationHost.config appears to have no effect, and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice, because you're not likely to want dynamic compression in every application on a server. Rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in ApplicationHost.config doesn't work (or, more likely, is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

    How Static Compression works
    Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently - such as .js, .css and static HTML content - it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already-compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient - IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

      %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

    If you look into the subfolders you'll find the pre-compressed files, which IIS serves directly to the client until the underlying files are changed. As I mentioned before, static compression is on by default and there's very little reason to turn that functionality off, as it is efficient and just works out of the box. The one tweak you might want to make is to set the compression level to maximum. Since IIS only compresses content very infrequently, it makes sense to apply maximum compression.
    You can do this with the staticCompressionLevel setting on the scheme element:

      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    Other than that, the default settings are probably just fine.

    Dynamic Compression - not so fast!
    By default dynamic compression is disabled, and that's actually quite sensible - you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content, as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is, you can play around with compression and see what impact it has on your server's performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically, the httpCompression key has a couple of CPU-related attributes that can help minimize the impact of dynamic compression on a busy server:

      dynamicCompressionDisableCpuUsage
      dynamicCompressionEnableCpuUsage

    By default these are set to 90 and 50, which means that when the CPU hits 90%, compression will be disabled until CPU utilization drops back down to 50%. Again, this is quite sensible, as it uses CPU power for compression when it is available and backs off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. These settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings - do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally, for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

      <add mimeType="application/json" enabled="true" />

    What about Deflate Compression?
    The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough, you can change the compression settings to:

      <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

    to get deflate-style compression. The deflate algorithm produces slightly more compact output, so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in ApplicationHost.config didn't show up on the site right away; it required a full IISReset before I saw the change over to deflate-compressed content. Content was slightly more compressed with deflate - I'm not sure if it's worth the slightly less common compression type, but the option at least is available.

    IIS 7 finally makes GZip Easy
    In summary, IIS 7 finally makes GZip easy, even if the configuration settings are a bit obtuse and the documentation is seriously lacking.
    But once you know the basic settings I've described here, and the fact that you can override all of this in your local web.config, it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer, as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky, as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. - they all use compression.

    Related Content: Code based ASP.NET GZip Caveats; HttpWebRequest and GZip Responses

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps making rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in sample statistics comparing XFS, Ext4+JBD2 and Btrfs:

      # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
      XFS:       141 files changed, 27598 insertions(+), 19113 deletions(-)
      Ext4+JBD2:  39 files changed, 10487 insertions(+),  5454 deletions(-)
      Btrfs:      70 files changed, 19875 insertions(+),  8130 deletions(-)

    What made up those changes in XFS?

      - Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
      - Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing CPU overhead.
      - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
      - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled.
      - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
      - Readahead log object recovery; this change can speed up log replay significantly.
      - Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 mins by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially around the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:

      - Stable: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers, who always perform serious code review as well as testing.
      - Good performance for large and small files. The claim that XFS does not work very well for small files is an old story by now.
      - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
      - Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
      - Scalability: any objection to XFS being best on this point? :)
      - XFS is better at dealing with transaction concurrency than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
      - Misc: Ext4 is widely used and has been proven fast/stable in various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity.
    Ceph Introduction (led by Li Wang)

    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work focuses on inline data support to separate the metadata and data storage and reduce file access time: a file access needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small file access is limited by the network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to the OSD. For this feature they have only run some small-scale testing, but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale, PB-level storage, but they could not give a positive answer; it looks like Ceph is not even deployed across all of DreamHost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.

    Improve Linux swap for Flash storage (led by Shaohua Li)

    Because of its high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.

    Swap-out:

      - swap_map scan: swap_map is the in-memory data structure used to track swap disk usage, but it uses a slow linear scan. It becomes a bottleneck when finding many adjacent pages on SSD. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSD.
      - IO pattern: in most cases, swap I/O comes in an interleaved pattern because of multiple reclaimers, or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps the reclaimer do sequential I/O and makes it easier for the block layer to merge I/O.
      - TLB flush: if we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.

    Swap-in: page fault does iodepth=1 sync I/O, but it's a bit of a waste to issue only a page-sized I/O. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages), and it doesn't perform well for either random or sequential access workloads. Shaohua introduced a new flag for madvise (MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).

    Swap discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to async discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.

    Misc: with many concurrent swap-outs and swap-ins, contention on locks such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap-device lock. But there is still some lock contention on very fast SSDs because of the swap-cache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current memory compression status. There are now 3 projects (zswap/zram/zcache) which all aim at smoothing swap I/O storms and improving performance, but they all have their own pros and cons.

      - ZSWAP: implemented on top of the frontswap API, it uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied", and it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrink or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
      - ZRAM: acts as a compressed ramdisk used as a swap device, and uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since it needs more pages to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are discussions about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
      - ZCACHE: handles file page compression, but it was removed from staging recently.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full stripe writes to improve performance. The second he introduced is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layer. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard system call mmap(). He said that it is very useful for database logging now.
    These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g. journaling may need to be redesigned to fully utilize these nonvolatile memories.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on a 32/64-node cluster, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set); it can work with DLM at the same time. It looks like this idea was inspired by VMware VMFS locking, i.e. http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and solve bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

      - Static code checker tools, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, there are false positives, and it may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel):

          struct foo_ops {
              int (*do_foo)(struct foo *obj);
          };
          foo->do_foo(foo);

      - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
      - Fuzzers/test suites. e.g. Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are moving toward automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
      - Debuggers/tracers to understand code, e.g. ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run it at all times during debugging.
      - Tools to read/understand source, e.g. grep/cscope, which work great for many cases but do not understand indirect pointers (the OO-in-C model used in the kernel). Asked to "give us all do_foo instances" in code like:

          struct foo_ops {
              int (*do_foo)(struct foo *obj);
          } = { .do_foo = my_foo };
          foo->do_foo(foo);

        they fall short. It would be great to have a cscope-like tool that understands this based on types/initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts reflecting the performance of XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions mainly focused on stability and comparison with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than ours (LXC).
    Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver; it can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two other possible new namespaces for the future: the first is audit, though it is not yet clear whether it should be assigned to the user namespace or not; the other is syslog, but the question is, do we really need it?

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction that I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu

    Read the article

  • sftp Bad message - (badly formatted packet or protocol incompatibility)

    - by culter
    I have two servers connected through SFTP. When I try to upload the file DONATE_SPLATNOSTSFRB-1503_20120315.xls.gpg via WinSCP, it works fine, but when I change the file name to DONATE_SPLATNOSTSFRB-1503_20120315.gpg it sometimes uploads to the server and sometimes not. When it is uploaded, I have problems deleting it. I get this error message:

      Bad message - (badly formatted packet or protocol incompatibility)
      Error code: 5
      Error message from server: Bad Message
      Request code: 13

    Other files work fine, e.g. DONATE_PREDSFRB-0212_20120315.gpg. Thank you.

    Read the article

  • Initial selected Item in an UITabBar without a UITabBarController

    - by juan.xavier
    Hello, how do I set the default selected UITabBarItem in a UITabBar that is within a UIView, not a UITabBarController? Just to clarify, the UIView does implement the protocol and the didSelectItem method works. At run time, the tab bar works and the tab bar items are selected when the user touches them. My problem is setting the default selected item. If I use [myTabBar setSelectedItem:] within the didSelectItem method it works, but outside of it, it doesn't (for example, in the viewDidLoad method of my UIView). Thanks!
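    For what it's worth, a minimal sketch (outlet name hypothetical): set the default item once the view hierarchy is fully loaded, e.g. in the owning view controller's viewDidAppear: rather than viewDidLoad:, since the tab bar and its items may not be fully wired up yet at load time:

      // Assumes `tabBar` is an IBOutlet to the standalone UITabBar.
      - (void)viewDidAppear:(BOOL)animated
      {
          [super viewDidAppear:animated];
          // Select the first item by default; pick whichever index you need.
          self.tabBar.selectedItem = [self.tabBar.items objectAtIndex:0];
      }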

    Read the article

  • MYOB odbc connection problem

    - by Inam Jameel
    Hi guys, I recently got a prebuilt application which uses a MYOB ODBC connection to a MYOB file. The ODBC connection works perfectly in that application. I use the same ODBC connection string in another application, but it fails to open there. The connection string is perfectly identical, but it won't work in the new application. Server Explorer in Visual Studio 2008 also connects fine with the same connection string. Is it a trusted application issue? My new application is digitally signed at the moment.

      OdbcConnection odbc = new OdbcConnection(
          "Driver=MYOAU0901;TYPE=MYOB; UID=Administrator; PWD=; " +
          "DATABASE=C:\\Premier125\\Clearwtr.MYO; NETWORK_PROTOCOL=NONET; " +
          "DRIVER_COMPLETION=DRIVER_NOPROMPT;;KEY=****");
      odbc.Open();

    The key used in the connection string is definitely valid. Kindly help me; I have to deliver a prototype in 2 days. The same connection string works in one application but not in the other. What's the problem?

    Read the article

  • SQL 2000: Intermittent Error 7399 with OLE DB Provider for Microsoft Jet

    - by Tim Lara
    I am using SQL Server 2000 on Windows Server 2003 SP2 and have set up a linked server to point at an Access 97 database using the OLE DB Provider 4.0 for Microsoft Jet. The problem I am having sounds almost exactly like the one described in this Microsoft KB article, except that the error I am getting is intermittent: http://support.microsoft.com/kb/814398 The SQL Server is running under the Local System account (which I don't have authority to change), and the Access 97 .mdb file that the linked server points to is on a Win XP Pro machine on the same LAN as the SQL Server machine, inside of a shared folder with permissions set to "Everyone" and "Full Control". Now, if the linked server connection never worked, it would make more sense that the problem is merely a permissions issue with the Local System account as the KB article above suggests, but the maddening thing is that sometimes the connection works just fine. When it fails, the error message is always the same:

      Error 7399: OLE DB provider 'Microsoft.Jet.OLEDB.4.0' reported an error.
      [OLE/DB provider returned message: Unspecified error]
      OLE DB error trace [OLE/DB Provider 'Microsoft.Jet.OLEDB.4.0' IDBInitialize::Initialize returned 0x80004005: ].

    Also, not only does the linked server setup occasionally work just fine on this one particular SQL Server, what is supposed to be exactly the same setup on 25 other servers works just fine EVERY TIME! Obviously, something in the non-working setup must not be exactly the same, but I'm having trouble figuring out where to look for the differences since the error message SQL Server returns is so vague. I know our sysadmins have had numerous issues with Active Directory replication across our domain, so my best guess is that there is some sort of odd group policy corruption going on, but I thought I'd ask here to see if I might be overlooking something more straightforward. Any ideas on how to further isolate the error would be greatly appreciated! For the record, here is a list of things I've already tried:

      - Rebooting the SQL Server machine. Fixes the issue temporarily, then the error returns within a minute or two of startup. (This is why I suspect a rogue group policy that is slow to apply fouling things up.)
      - Importing all database objects from the Access 97 mdb into a new, clean mdb file. Makes no difference.
      - Moving the Access 97 mdb file to a local directory on the SQL Server machine instead of accessing it via a share on the Win XP Pro LAN machine. This works, but does not solve the problem because the mdb needs to be on the client machine for performance reasons and the ability to work "stand alone". Plus, the same shared folder access works fine on all other servers / clients on my network.
      - Compared all the SQL Server, Windows Server, etc. versions to a known working setup and everything appears to be the same.
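    For reference, a linked server like the one described is typically created along these lines (a sketch with placeholder names and paths; the security mapping is simplified):

      -- Placeholder server name and UNC path; Access 97 default security assumed.
      EXEC sp_addlinkedserver
          @server     = 'ACCESS97',
          @provider   = 'Microsoft.Jet.OLEDB.4.0',
          @srvproduct = 'OLE DB Provider for Jet',
          @datasrc    = '\\XPCLIENT\Share\Data.mdb'

      -- Map all local logins to the Jet "Admin" user with no password.
      EXEC sp_addlinkedsrvlogin
          @rmtsrvname  = 'ACCESS97',
          @useself     = 'false',
          @locallogin  = NULL,
          @rmtuser     = 'Admin',
          @rmtpassword = NULL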

    Read the article

  • Fedora 10 setting up HPLIP for HP Deskjet F4488

    - by Shyam
    I set up a newly purchased HP F4488 using HPLIP on my Fedora 10 desktop, and it worked fine for the past 2 days. Suddenly, it is failing to print. The scanner works and the photocopier works. When a print job is initialized there is an error and the printer gets disabled. What is the problem? Is there anything additional I should fix to make it work?

    Read the article

  • What is the best way to write a SOAP 1.2 client with Delphi Win32?

    - by Cesar Romero
    So far, no Delphi version supports SOAP 1.2 clients or servers. I have tried for weeks to make it work; with VS/C# I could do the same thing and make it work in 3 days, but I need to do it with Delphi 2009. I wrote a new version using the RemObjects SDK, but the result was no better than what I had with the Delphi SOAP library. So I'm wondering what other choice I have: which library/component fully supports SOAP 1.2? I found a message from Bruneau suggesting PocketSOAP, http://www.pocketsoap.com/pocketsoap/. I don't know how this works; I'll investigate and see what I can do.

    Read the article

  • Laptop LCD sometimes stops working on reboot. Please help.

    - by J Ringle
    I have a Gateway P-6831FX laptop with Vista Ultimate. The laptop LCD will sometimes not come on after I reboot the computer. I don't even close the lid, and it still happens. It isn't dim; it doesn't come on at all. No POST screen (BIOS), nothing. Please note this happens sometimes, not every time. Frustrating! When plugged into an external monitor, which works fine, Vista display properties can't even "sense" the laptop LCD. I try to enable the laptop LCD for dual display, turning on the laptop LCD, and it does nothing. It's like the laptop LCD is not even there. Manually holding a magnet to the laptop lid sensing switch (the sensor that turns off the display/enters sleep mode when you close the lid) sometimes causes the LCD backlight to "turn on" but not display any images. By "turn on" I mean I can see the screen backlight turn on to a dark gray screen instead of pitch black. On a subsequent reboot the laptop display is not working again! Here are the facts:

      - Only happens at random and only after a reboot.
      - Waking from sleep mode isn't a problem.
      - Pressing the F4 function key for dual display does nothing when this happens.
      - Closing the lid doesn't seem to be related (unless it is only after reboot).
      - Using an external magnet on the laptop screen sensor sometimes triggers the backlight to turn on, but a reboot puts it back to square one with no LCD display.
      - An external display always works fine.
      - I have taken apart the LCD and checked all wires and ribbons for loose connections or damage.
      - I have replaced the inverter.
      - It doesn't seem to be heat-related, as I can put it in sleep mode and resume fine when very hot (the external monitor works fine too).
      - Sometimes the screen works fine as if there is no problem at all, even after a reboot... this is random.

    Any ideas out there? If it is a bad part... which one? The LCD seems to be fine. What are the odds of 2 bad inverters? The backlight is fine. The LCD wires/ribbons seem to be fine. I am at a loss. No warranty left and Gateway tech support is clueless. Thanks for any feedback that might help.

    Read the article

  • MacBook hangs when restarting to Vista through BootCamp

    - by John
    I have Vista Pro installed on my MacBook, and while it mostly works well, sometimes when I select Windows from the boot screen after turning the thing on, the screen just goes black and it all stops. If I hold the power button until it powers down and repeat, it works. It's not getting as far as Windows at all, I think, because if Windows was previously shut down to hibernate, it still restores fine... and I'm pretty sure that if Windows' startup had hung and been force-rebooted, my session would be borked.

    Read the article

  • ASPX throws "404 The resource cannot be found"

    - by Diegoeche
    I'm deploying a website under a virtual directory using IIS. For some strange reason, Default.html works, but Default.aspx throws a 404. I have checked these things:

      - There's another virtual directory that contains an older version of the application, and that one just works.
      - I checked the properties of each virtual directory and they looked the same.
      - I checked that the root didn't have any extra backslashes.

    Read the article

  • Calling IPrincipal.IsInRole on Windows 7

    - by adrianbanks
    We use NTLM auth in our application to determine whether a user can perform certain operations. We use the IPrincipal of their current Windows login (in WinForms applications), calling IsInRole to check for specific group memberships. To check that a user is a local administrator on the machine, we use:

      AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal);
      ...
      bool allowed = Thread.CurrentPrincipal.IsInRole(@"Builtin\Administrators");

    This works if the current user is the Administrator user, or is another user that is a member of the Builtin\Administrators group. In our testing on Windows 7, we have found that this no longer works as expected. The Administrator user still works fine, but any other user that is a member of the Builtin\Administrators group returns false for the IsInRole call. What could be causing this difference? I have a gut feeling that a default setting has changed somewhere (possibly in gpedit), but cannot find anything that looks like the culprit.
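    As a point of comparison, here is a minimal sketch of the same membership test expressed with the WindowsBuiltInRole enumeration rather than the literal group name; it is subject to the same token behavior at runtime, so it is a diagnostic aid rather than a fix:

      using System;
      using System.Security.Principal;

      static class AdminCheck
      {
          static void Main()
          {
              // Same membership test, written against the built-in role enum
              // instead of the "Builtin\Administrators" group name string.
              WindowsPrincipal principal =
                  new WindowsPrincipal(WindowsIdentity.GetCurrent());
              bool allowed = principal.IsInRole(WindowsBuiltInRole.Administrator);
              Console.WriteLine(allowed);
          }
      }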

    Read the article

  • Windows login when web application is using a network share in IIS 6

    - by James
    Hi, I have installed a web application which is configured to use a network drive, but I keep getting a pop-up asking for credentials. Looking in the event log, the network logon is set to my domain account, which looks fine; however, the caller user name is empty (not sure if this is an issue).

      - The application works fine when I use a local drive.
      - The application also runs fine when I set a "connect as" user.
      - The application also works fine when a share on the local machine is used!
      - Direct access using the UNC path is not a problem.

    Please advise what I can do or should check. Thanks and regards, James

    Read the article

  • MSSQL2008: DTC Transaction - Internal abort

    - by Teutales
    Hi all, I've written a small replication of my own: a trigger which fires a DTC INSERT to another server. (One reason for my own "replication": while the trigger is running, it calculates some data; another: it works from one Express edition to another Express edition.) When I do the initial insert from the same host with Windows authentication, it works fine. But there is a webserver on another host which uses a SQL Server login (sa, for testing). When this host does the initial insert, I get an internal abort after the enlisting and creating phases in the DTCTransaction event class (Profiler). The magic is: when I first fire it from the same host with Windows authentication, I can then fire it from the webserver and it works fine. But I just have to wait some minutes and it won't work anymore. Where is my error in reasoning... Thanks! Greetz Teutales

    Here is my initial server script:

      EXEC master.dbo.sp_addlinkedserver @server = @Servername, @srvproduct = N'SQL Server'
      EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = @Servername, @locallogin = NULL,
          @useself = N'False', @rmtuser = @Serverlogin, @rmtpassword = @Serverpwd
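    For context, the trigger described above presumably looks something like this minimal sketch (table and column names are placeholders); note that SET XACT_ABORT ON is required for implicit distributed transactions against a linked server:

      -- Placeholder names throughout. The INSERT against the linked server
      -- promotes the trigger's transaction to a distributed (DTC) transaction.
      CREATE TRIGGER trg_ReplicateInsert ON dbo.SourceTable
      AFTER INSERT
      AS
      BEGIN
          SET XACT_ABORT ON; -- required for data modification against a linked server

          INSERT INTO [REMOTESRV].[TargetDb].dbo.TargetTable (Id, Payload)
          SELECT Id, Payload  -- plus whatever calculated data the trigger adds
          FROM inserted;
      END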

    Read the article
