Search Results

Search found 1441 results on 58 pages for 'troubleshooting'.


  • Adventures in Windows 8: Solving activation errors

    - by Laurent Bugnion
    Note: I tagged this article with “MVVM” because I got a few support requests for MVVM Light regarding this exact issue. In fact it is a Windows 8 issue and has nothing to do with MVVM Light per se… Sometimes when you work on a Windows 8 app, you will hit a very annoying issue when starting the app: the app doesn’t even start past the splash screen. Putting a breakpoint in App.xaml.cs doesn’t help, because the app never reaches that point!

    So what exactly is happening? When a Windows 8 app starts, the system performs a few checks first. One of the checks, for instance, is to see if an app with the same package ID is already installed. The package ID is a unique value set in the package manifest: in the Solution Explorer, double click on Package.appxmanifest (this opens the manifest in a special editor), click on the Packaging tab, and see the GUID under Package Name. This is the unique ID I am talking about. If there is a conflict (i.e. if an app is already installed with the exact same ID), Windows will warn the user that the app is already installed. However, while you are developing an app you install and uninstall the same app many times (every time you start it from Visual Studio), and sometimes issues arise, for instance failing to uninstall the app before starting the new instance of the same app.

    First step if you get such an error: When the application fails to start past the splash screen, the first step is to identify what kind of error happened. In my experience the “already installed” error is by far the most frequent (in fact I never had another such error), but it can be something else. An annoying thing is that the popup that shows the error is usually opened below the Windows 8 app, so you don’t even see it! This is especially true if you run this in the Simulator. In that case, press the Simulator’s home button, then press the Desktop tile on the Start menu; the error popup should be shown on the desktop. If your application runs on the Local Machine, do the same: press the Windows button, then press the Desktop tile on the Start menu.

    Deployment error in Visual Studio: Sometimes the same error causes Visual Studio to fail launching the application at all, with a deployment error. This is a better case, because at least it is clear that there is an issue. In that case, write down the code that is shown in the Error window (for instance 0x80073D05 in the example below). Once you have the error code, go to the “Troubleshooting packaging, deployment, and query of Windows Store apps” page and look up the code in question. In my case, the error was “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED”: “An error occurred while deleting the package's previously existing application data.”

    Solving the “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED” issue: Update: Before trying the steps below, you can also try the simple ones: exit Visual Studio, go to the Start menu, locate your app’s tile (it should be visible in the Start menu directly, towards the far end on the right), right click the tile and select Uninstall from the App Bar, then restart Visual Studio and try again. Sometimes it helps. If it doesn’t, then try the following. In order to solve the case where Windows, for any reason, fails to delete the existing application before starting the new instance, follow these steps: open the Package.appxmanifest in Visual Studio, open the Packaging tab, and change the Package name. For tests you can just change the last character of the GUID, though I would recommend creating a brand new GUID: press Start, type GUID, start the GUID Generator application, select Registry Format, press Copy, and paste the new GUID in place of the Package Name in Visual Studio. Important: don’t forget to remove the curly brackets at the beginning and at the end of the newly pasted GUID. Then you just have to cross your fingers and start the application again… If it works, celebrate. If it doesn’t work… well, at this point I am not sure, so good luck with that ;) Happy coding, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
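
    Once the app does launch, it can be handy to confirm which package identity the deployed app actually carries. Here is a minimal sketch (not from the original article) using the standard Windows.ApplicationModel.Package API; the helper class name is made up:

    using System.Diagnostics;
    using Windows.ApplicationModel;

    static class PackageIdentityLogger   // hypothetical helper, call from the App constructor
    {
        public static void Log()
        {
            PackageId id = Package.Current.Id;
            Debug.WriteLine("Package name: " + id.Name);     // the GUID from the Packaging tab
            Debug.WriteLine("Full name: " + id.FullName);    // includes version and publisher hash
            Debug.WriteLine("Version: " + id.Version.Major + "." + id.Version.Minor + "."
                            + id.Version.Build + "." + id.Version.Revision);
        }
    }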

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using an R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!

    What is an "RPC" (for R12.1 users)? Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs). They were created by Oracle Development with the following goals in mind:

    - Stability: to address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close).
    - Root Cause Fixes (RCAs): to make available root cause fixes for known data integrity issues.
    - Compact: to keep the file footprint as small as possible, to help facilitate the install process and minimize testing.
    - Granular: to compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

    Where to start: ALL R12 Cash Management users (R12.0 and R12.1) should start with the following note on My Oracle Support (MOS): Doc ID 1367845.1, "R12: Cash Management Recommended Patch Collections". It's a great place for important implementation information about both sets of critical patch collections!

    For R12.1.x users: R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

    - Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
    - Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
    - Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

    Patch Wizard Utility: While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch's impact on your instance BEFORE it's applied, so you'll know exactly what to expect. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • 11gR2 Clusterware Node Evictions

    - by Allen Gao
    This article explains how node eviction works in 11gR2 Grid Infrastructure (GI) and how it differs from 10g. The main GI processes involved are:

    1. Ocssd.bin: as in 10g, this daemon is responsible for Node Monitoring and Group Management, and is the core process that resolves "split brain" situations in the cluster.

    2. Cssdagent.bin/Cssdmonitor.bin: these two processes are new in 11gR2. They monitor ocssd.bin through a local heartbeat (Local HeartBeat) and take action if ocssd.bin hangs or stops responding. They replace the 10g processes oclsomon/oclsvmon and oprocd. In addition, starting with 11.2.0.2, 11gR2 introduces rebootless restart: when a node must be evicted, GI first tries to shut down the GI stack gracefully (within a short disk I/O timeout) and restart it; only if the graceful shutdown is not possible is the node actually rebooted.

    When troubleshooting an 11gR2 node eviction, collect the following logs:
    1. Ocssd.log
    2. Cssdagent and cssdmonitor logs:
       <GI_home>/log/<node_name>/agent/ohasd/oracssdagent_root/oracssdagent_root.log
       <GI_home>/log/<node_name>/agent/ohasd/oracssdmonitor_root/oracssdmonitor_root.log
    3. Cluster alert log: <GI_home>/log/<node_name>/alert<node_name>.log
    4. OS log
    5. OSW and CHM data

    Typical eviction scenarios:

    1. Loss of the network heartbeat. As in 10g, a node is evicted when it loses network communication with the rest of the cluster. For example, the GI alert log on node2 shows:
    2012-08-15 16:30:06.554 [cssd(11011) ]CRS-1612:Network communication with node node1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.510 seconds
    2012-08-15 16:30:13.586 [cssd(11011) ]CRS-1611:Network communication with node node1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.470 seconds
    2012-08-15 16:30:18.606 [cssd(11011) ]CRS-1610:Network communication with node node1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 2.450 seconds
    2012-08-15 16:30:21.073 [cssd(11011) ]CRS-1632:Node node1 is being removed from the cluster in cluster incarnation 236379832
    2012-08-15 16:30:21.086 [cssd(11011) ]CRS-1601:CSSD Reconfiguration complete. Active nodes are node2 .
    Here node2 lost the network heartbeat with node1, evicted node1 from the cluster, and completed the reconfiguration. To find out why node1 stopped sending heartbeats, check node1's OS log and OSW data. Note that from 11.2.0.2 on, with rebootless restart, a node eviction normally restarts the GI stack on the evicted node instead of rebooting it; whether node1's ocssd.bin was still healthy at the time can be confirmed from node1's ocssd.log.

    2. Loss of the voting disk heartbeat. This is essentially the same as in 10g: a node that cannot access the required number of voting disks is evicted.

    3. Ocssd.bin hangs. If cssdagent/cssdmonitor stop receiving the local heartbeat from ocssd.bin, the node is restarted. In this case oracssdagent_root.log or oracssdmonitor_root.log contains messages like:
    2012-07-23 14:09:58.506: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 75% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 21091 milliseconds ago based on invariant clock 269251595; now polling at 100 ms
    ……
    2012-07-23 14:10:02.704: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 90% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 25291 milliseconds ago based on invariant clock 269251595; now polling at 100 ms
    ……
    The timeout here is about 28 seconds (misscount – reboot time).

    4. OS performance problems. If the operating system is so overloaded that ocssd.bin cannot run, the node can also be evicted. Check the OS logs and the OSW data. For example, the OSW vmstat output below shows the CPUs 100% busy:

    Linux OSWbb v5.0 node1
    SNAP_INTERVAL 30
    CPU_COUNT 8
    OSWBB_ARCHIVE_DEST /osw/archive
    procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
     r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
    ……
    zzz ***Mon Aug 30 17:55:21 CST 2012
    158  6 4200956 923940   7664 19088464    0    0  1296  3574 11153 231579  0 100  0  0  0
    zzz ***Mon Aug 30 17:55:53 CST 2012
    135  4 4200956 923760   7812 19089344    0    0     4    45  570 14563  0 100  0  0  0
    zzz ***Mon Aug 30 17:56:53 CST 2012
    126  2 4200956 923784   8396 19083620    0    0   196  1121  651 15941  2 98  0  0  0

    Compared with 10g, node eviction in 11gR2 behaves quite differently, so it is worth understanding these mechanisms before troubleshooting. For more information, see Note 1050693.1: Troubleshooting 11.2 Clusterware Node Evictions (Reboots).

    Read the article

  • IIS7 MVC deploy - 404 not found on actions that accept "id" parameter.

    - by majkinetor
    Hello. Once deployed, parts of my web application stop working. The Index actions on each controller do work, and so does one form posting via Ajax; login works too. Everything else yields a 404. I understand that nothing in particular should need to be done in integrated mode. I don't know how to proceed with troubleshooting. Some info: the app is using the default app pool set to integrated mode. The web app targets .NET Framework 3.5. I use the default routing model. Along with the web.config in the root there is a web.config in the /Views folder referencing HttpNotFoundHandler. The OS is Windows Server 2008. The admins ran aspnet_regiis.exe -i. IIS 7. Any help is appreciated. Thx.

    EDIT: I determined that only actions that accept an id parameter don't work. On the contrary, when I add a dummy id method to the Home controller of a default MVC app, it works. My Views/Web.config:

    <?xml version="1.0"?>
    <configuration>
      <system.web>
        <httpHandlers>
          <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/>
        </httpHandlers>
        <!-- Enabling request validation in view pages would cause validation to occur
             after the input has already been processed by the controller. By default
             MVC performs request validation before a controller processes the input.
             To change this behavior apply the ValidateInputAttribute to a controller
             or action. -->
        <pages validateRequest="false"
               pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
               pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
               userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <controls>
            <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
          </controls>
        </pages>
      </system.web>
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false"/>
        <handlers>
          <remove name="BlockViewHandler"/>
          <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler"/>
        </handlers>
      </system.webServer>
    </configuration>
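
    For reference, the "default routing model" the poster mentions registers an optional id segment. A minimal sketch of the stock ASP.NET MVC 2 route registration from the project template (template code, not the poster's actual Global.asax):

    using System.Web.Mvc;
    using System.Web.Routing;

    public class MvcApplication : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // The trailing {id} segment is optional, so both /Home/Index
            // and /Home/Details/5 match this single route.
            routes.MapRoute(
                "Default",                               // route name
                "{controller}/{action}/{id}",            // URL pattern
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }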

    Read the article

  • Getting uninitialized constant error when trying to run tests

    - by Ethan
    I just updated all my gems and I'm finding that I'm getting errors when trying to run Test::Unit tests. I'm getting the error copied below. It comes from creating a new, empty Rails project, scaffolding a simple model, and running rake test. I tried Googling "uninitialized constant" and TestResultFailureSupport. The only thing I found was this bug report from 2007. I'm using OS X. These are the gems that I updated right before the tests stopped working:

    $ sudo gem outdated
    Password:
    RedCloth (4.2.1 < 4.2.2)
    RubyInline (3.8.1 < 3.8.2)
    ZenTest (4.1.1 < 4.1.3)
    bluecloth (2.0.4 < 2.0.5)
    capistrano (2.5.5 < 2.5.8)
    haml (2.0.9 < 2.2.1)
    hoe (2.2.0 < 2.3.2)
    json (1.1.6 < 1.1.7)
    mocha (0.9.5 < 0.9.7)
    rest-client (1.0.2 < 1.0.3)
    thoughtbot-factory_girl (1.2.1 < 1.2.2)
    thoughtbot-shoulda (2.10.1 < 2.10.2)

    Has anyone else seen this issue? Any troubleshooting suggestions?

    UPDATE: On a hunch I downgraded ZenTest from 4.1.3 back to 4.1.1 and now everything works again. Still curious to know if anyone else has seen this or has any interesting comments or insights.

    $ rake test
    (in /Users/me/foo)
    /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/helpers/users_helper_test.rb" "test/unit/user_test.rb"
    /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:105:in `const_missing': uninitialized constant Test::Unit::TestResult::TestResultFailureSupport (NameError)
        from /usr/local/lib/ruby/gems/1.8/gems/test-unit-2.0.2/lib/test/unit/testresult.rb:28
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:158:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/test-unit-2.0.2/lib/test/unit/ui/testrunnermediator.rb:9
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:158:in `require'
        ... 6 levels...
        from /usr/local/lib/ruby/1.8/test/unit/autorunner.rb:214:in `run'
        from /usr/local/lib/ruby/1.8/test/unit/autorunner.rb:12:in `run'
        from /usr/local/lib/ruby/1.8/test/unit.rb:278
        from /usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
    /usr/local/bin/ruby -I"lib:test" "/usr/local/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/functional/users_controller_test.rb"

    Read the article

  • Setting up NCover for NUnit in FinalBuilder

    - by Lasse V. Karlsen
    I am attempting to set up NCover for use in my FinalBuilder project, for a .NET 4.0 C# project, but my final coverage output file contains no coverage data. I am using:

    NCover 3.3.2
    NUnit 2.5.4
    FinalBuilder 6.3.0.2004

    All tools are the latest official releases as of today. I've finally managed to coax FB into running my unit tests under NCover for the .NET 4.0 project, so I get "Tests run: 184, ...", which is correct. However, the final Coverage.xml file output from NCover is almost empty, and looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- saved from NCover 3.0 Export url='http://www.ncover.com/' -->
    <coverage profilerVersion="3.3.2.6211" driverVersion="3.3.2" exportversion="3"
        viewdisplayname="" startTime="2010-04-22T08:55:33.7471316Z"
        measureTime="2010-04-22T08:55:35.3462915Z" projectName=""
        buildid="27c78ffa-c636-4002-a901-3211a0850b99" coveragenodeid="0" failed="false"
        satisfactorybranchthreshold="95" satisfactorycoveragethreshold="95"
        satisfactorycyclomaticcomplexitythreshold="20" satisfactoryfunctionthreshold="80"
        satisfactoryunvisitedsequencepoints="10" uiviewtype="TreeView"
        viewguid="C:\Dev\VS.NET\LVK.IoC\LVK.IoC.Tests\bin\Debug\Coverage.xml"
        viewfilterstyle="None" viewreportstyle="SequencePointCoveragePercentage"
        viewsortstyle="Name">
      <rebasedpaths />
      <filters />
      <documents>
        <doc id="0" excluded="false" url="None" cs="" csa="00000000-0000-0000-0000-000000000000" om="0" nid="0" />
      </documents>
    </coverage>

    The output in the FB log is:

    ...
    ***************** End Program Output *****************
    Execution Time: 1,5992 s
    Coverage Xml: C:\Dev\VS.NET\LVK.IoC\LVK.IoC.Tests\bin\Debug\Coverage.xml
    NCover Success

    My configuration of the FB step for NCover:

    NCover what?: NUnit test coverage
    Command: C:\Program Files (x86)\NUnit 2.5.4\bin\net-2.0\nunit-console.exe
    Command arguments: LVK.IoC.Tests.dll /noshadow /framework:4.0.30319 /process=single /nothread
    (Note: I've tried with and without the /process and /nothread options.)
    Working directory: %FBPROJECTDIR%\LVK.IoC.Tests\bin\Debug
    List of assemblies to profile: %FBPROJECTDIR%\LVK.IoC.Tests\bin\Debug\LVK.IoC.dll
    (Note: I've tried just listing the name of the assembly, both with and without the extension.)

    The documentation for the FB step doesn't help, as it only has brief sentences for each property, and gives no examples or troubleshooting hints. Since I want to pull the coverage results into NDepend to run build-time analysis, I need that file to contain the coverage information. I am also using TestDriven.NET, and if I right-click the solution file and select "Test with NCover", NCoverExplorer opens up with coverage data; if I ask it to show me the folder with coverage files, there is an .xml file in there with the same structure as the one above, just with all the data that should be there. So the tools I have are certainly capable of producing it. Has anyone an idea of what I've configured wrong here?

    Read the article

  • HTTP 400 Bad Request error attempting to add web reference to WCF Service

    - by c152driver
    I have been trying to port a legacy WSE 3 web service to WCF. Since maintaining backwards compatibility with WSE 3 clients is the goal, I've followed the guidance in this article. After much trial and error, I can call the WCF service from my WSE 3 client. However, I am unable to add or update a web reference to this service from Visual Studio 2005 (with WSE 3 installed). The response is "The request failed with HTTP status 400: Bad Request". I get the same error trying to generate the proxy using the wsewsdl3 utility. I can add a Service Reference using VS 2008. Any solutions or troubleshooting suggestions? Here are the relevant sections from the config file for my WCF service:

    <system.serviceModel>
      <services>
        <service behaviorConfiguration="MyBehavior" name="MyService">
          <endpoint address="" binding="customBinding" bindingConfiguration="wseBinding" contract="IMyService" />
          <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
        </service>
      </services>
      <bindings>
        <customBinding>
          <binding name="wseBinding">
            <security authenticationMode="UserNameOverTransport" />
            <mtomMessageEncoding messageVersion="Soap11WSAddressingAugust2004" />
            <httpsTransport/>
          </binding>
        </customBinding>
      </bindings>
      <behaviors>
        <serviceBehaviors>
          <behavior name="MyBehavior">
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true" />
            <serviceCredentials>
              <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType="MyCustomValidator" />
            </serviceCredentials>
            <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="MyRoleProvider" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
    </system.serviceModel>
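
    As an aside (not from the original question), the same custom binding can be built in code, which can make it easier to tweak the element stack while troubleshooting. A minimal sketch, assuming the config above is the target; the factory class name is hypothetical:

    using System.ServiceModel.Channels;

    public static class WseBindingFactory   // hypothetical name
    {
        // Builds the same stack as <binding name="wseBinding"> above.
        // Element order matters: security, then message encoding, then transport.
        public static Binding Create()
        {
            var security = SecurityBindingElement.CreateUserNameOverTransportBindingElement();

            var encoding = new MtomMessageEncodingBindingElement
            {
                // WSE 3 clients use SOAP 1.1 with the August 2004 WS-Addressing spec.
                MessageVersion = MessageVersion.Soap11WSAddressingAugust2004
            };

            var transport = new HttpsTransportBindingElement();

            return new CustomBinding(security, encoding, transport);
        }
    }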

    Read the article

  • How can I troubleshoot an APPCRASH in Internet Explorer?

    - by Schnapple
    I'm writing an ActiveX control using the FireBreath framework (hi taxilian!) and while it technically works, I'm running into a weird issue that appears to be unique to me. I've followed the instructions to create a simple plugin and then I ran it in Internet Explorer 8 on Windows 7 x64 (FireBreath sets up a test page for the control). But as soon as I try to test it (clicking on a link that fires off JavaScript to interact with the control), IE crashes. Hard. "Internet Explorer has stopped working" style. If I try the control in Firefox (the resulting registered DLL can also be called as a Firefox plugin using a MIME type), it works fine. If I try it on my XP box, it works fine. I emailed the DLL and the testing page to a coworker in the next cube who, like me, is also running Windows 7 x64, and it works for him just fine as well, so it's not something unique to Windows 7 or x64. When it crashes I get this message:

    Problem signature:
      Problem Event Name: APPCRASH
      Application Name: iexplore.exe
      Application Version: 8.0.7600.16385
      Application Timestamp: 4a5bc69e
      Fault Module Name: RPCRT4.dll
      Fault Module Version: 6.1.7600.16385
      Fault Module Timestamp: 4a5bdb3b
      Exception Code: c0000005
      Exception Offset: 000220b1
      OS Version: 6.1.7600.2.0.0.256.1
      Locale ID: 1033
      Additional Information 1: 0a9e
      Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
      Additional Information 3: 0a9e
      Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    Which tells me nothing extremely useful. I can have it attach to a debugger, but it just gives me a long list of DLLs, none of which are the ActiveX control in question. It's almost like it's not even getting there. I ran sfc /scannow yesterday to see if anything on my system is corrupt and nothing came up as wrong. I tried various different security levels in IE, but nothing seems to have any effect. As this is a development machine there has been all manner of crap installed on it, so I figure it's bound to be something I've installed since October (when Win7 was released), but I cannot figure out what it is. I presume the information it gives me when I attach to Visual Studio is useful somehow, but I don't know how to interpret it. Admittedly I'm mainly a C#/.NET developer who's a bit out of his element with C/C++ and troubleshooting native code, but does anyone have any advice on how to proceed in figuring out why this very simple ActiveX control crashes IE on my machine and nowhere else?

    Read the article

  • NSDocument Subclass not closed by NSWindowController?

    - by Nathan Douglas
    Okay, I'm fairly new to Cocoa and Objective-C, and to OOP in general. As background, I'm working on an extensible editor that stores the user's documents in a package. This of course required some "fun" to get around some issues with NSFileWrapper (i.e. a somewhat sneaky writing and loading process to avoid making NSFileWrappers for every single document within the bundle). The solution I arrived at was to essentially treat my NSDocument subclass as just a shell -- use it to make the folder for the bundle, and then pass off writing the actual content of the document to other methods. Unfortunately, at some point I seem to have completely screwed the pooch. I don't know how this happened, but closing the document window no longer releases the document. The document object doesn't seem to receive a "close" message -- or any related messages -- even though the window closes successfully. The end result is that if I start my app, create a new document, save it, then close it, and try to reopen it, the document window never appears. With some creative subclassing and NSLogging, I managed to figure out that the document object was still in memory, and still attached to the NSDocumentController instance, and so trying to open the document never got past the NSDocumentController's "hmm, currently have that one open" check. I did have an NSWindowController and NSDocumentController instance, but I've purged them from my project completely. I've overridden nearly every method for NSDocument trying to find out where the issue is. So far as I know, my Interface Builder bindings are all correct -- "Close" in the main menu is attached to "performClose:" of the First Responder, etc, and I've tried with fresh unsullied MainMenu and Document xibs as well. I thought that it might be something strange with my bundle writing code, so I basically deleted it all and started from scratch, but that didn't seem to work. I took out my init method overrides, and that didn't help either. I don't have the source of any simple document apps here, so I didn't try the next logical step (to substitute known-working code for mine in the readFromURL: and writeToURL: methods). I've had this problem for about sixteen hours of uninterrupted troubleshooting now, and needless to say, I'm at the end of my rope. If I can't figure it out, I guess I'm going to rebuild the project from scratch with a lot more care around the bundle-document mess. Any help would be greatly appreciated.

    Read the article

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned, and Adobe Acrobat tells you: "Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded)."

    I've done some initial troubleshooting on this. If I replace the url in the WebRequest.Create() call with a valid PDF file on my local machine (ie: @"C:temp/validpdf.pdf") then I get a valid PDF. The report itself seems to work fine: if I manually type the URL to the Reporting Services report that should generate the PDF file, I am prompted for user authentication, but after supplying it I get a valid PDF file. I've replaced the actual url, username, password and domain strings in the code below with bogus values for obvious reasons.

    WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah");
    int totalSize = 0;
    request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain");
    request.Timeout = 360000; // 6 minutes in milliseconds.
    request.Method = WebRequestMethods.Http.Post;
    request.ContentLength = 0;
    WebResponse response = request.GetResponse();
    Response.Clear();

    // Set the response headers before streaming the body.
    Response.ContentType = "application/pdf";
    Response.Cache.SetCacheability(HttpCacheability.Private);
    Response.CacheControl = "private";
    Response.Expires = 30;
    Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");

    // Copy the report server's response stream to the outgoing response in 2KB chunks.
    BinaryReader reader = new BinaryReader(response.GetResponseStream());
    Byte[] buffer = new byte[2048];
    int count = reader.Read(buffer, 0, 2048);
    while (count > 0)
    {
        totalSize += count;
        Response.OutputStream.Write(buffer, 0, count);
        count = reader.Read(buffer, 0, 2048);
    }
    Response.AddHeader("Content-Length", totalSize.ToString());
    reader.Close();
    Response.Flush();
    Response.End();

    Read the article

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried using some leak detectors. After fixing a few issues, they do not find any leaks any more. However, I am also monitoring my application using perfmon.exe. Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' are steadily rising when the app is used. To me, this suggests that the program is using more and more memory the longer it runs. Internal resources seem to be stable, however, so this sounds like leaking to me.

    The program loads a DLL at runtime. I suspect that these leaks, or whatever they are, occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Visual Leak Detector to look for memory leaks; both supposedly catch leaks in DLLs. Also, the memory consumption increases in steps, and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should get triggered every single time the actions are performed, and not just most of the time.

    Whenever I am confronted with so much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? Do you have an idea of how to go about troubleshooting a problem like this?

    Edit: I am currently using Microsoft Visual C++ (x86) on Windows 7 x64.

    Edit2: I just used IBM Rational Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory. This cannot be true; I guess it is identifying the whole DLL as leaked or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond with the size increase reported by Performance Monitor. This could be a lead to a leak. Sadly, I am only using the trial version of Purify, so it won't show me the actual location of those leaks. (These leaks only show up at runtime. When the program exits, there are no leaks whatsoever reported by any tool.)

    Read the article

  • Greasemonkey is getting an empty document.body on select Google pages.

    - by Brock Adams
    Hi, I have a Greasemonkey script that processes Google search results, but it's failing in a few instances: xpath searches (and document.body) appear to be empty. Running the code in Firebug's console works every time; it only fails in a Greasemonkey script, where Greasemonkey sees an empty document.body. I've boiled the problem down to the test Greasemonkey script below. I'm using Firefox 3.5.9 and Greasemonkey 0.8.20100408.6 (but earlier versions had the same problem).

    Problem: Greasemonkey sees an empty document.body.

    Recipe to duplicate:
    1. Install the Greasemonkey script.
    2. Open a new tab or window.
    3. Navigate to Google.com (http://www.google.com/).
    4. Search on a simple term like "cats".
    5. Check Firefox's Error console (Ctrl-Shift-J) or Firebug's console. The script will report that the document body is empty.
    6. Hit refresh. The script will show a good result (document body found).

    Note that the failure only reliably appears on Google results obtained this way, and in a new tab/window. Turn JavaScript off globally (javascript.enabled set to false in about:config) and repeat steps 2 through 5: only now will the Greasemonkey script work. It seems that Google's JavaScript is somehow killing the DOM tree for Greasemonkey. I've tried a time-delayed retest and even a programmatic refresh; the script still fails to see the document body.

    Test script:

    // ==UserScript==
    // @name           TROUBLESHOOTING 2 snippets
    // @namespace      http://www.google.com/
    // @description    For code that has funky misfires and defies standard debugging.
    // @include        http://*/*
    // ==/UserScript==

    function LocalMain (sTitle) {
        var sUserMessage = '';
        //var sRawHtml = unsafeWindow.document.body.innerHTML; //-- unsafeWindow makes no difference.
        var sRawHtml = document.body.innerHTML;

        if (sRawHtml) {
            sRawHtml = sRawHtml.replace (/^\s\s*/, '').substr (0, 60);
            sUserMessage = sTitle + ', Doc body = ' + sRawHtml + ' ...';
        }
        else {
            sUserMessage = sTitle + ', Document body seems empty!';
        }

        if (typeof (console) != "undefined") {
            console.log (sUserMessage);
        }
        else {
            if (typeof (GM_log) != "undefined")
                GM_log (sUserMessage);
            else if (!sRawHtml)
                alert (sUserMessage);
        }
    }

    LocalMain ('Preload');
    window.addEventListener ("load", function() {LocalMain ('After load');}, false);

    Read the article

  • Tips on installing Visual Studio 2010 SP1

    - by Jon Galloway
    Visual Studio SP1 went up on MSDN downloads (here) on March 8, and will be released publicly on March 10 here.

    Release announcements:
    - Soma: Visual Studio 2010 enhancements
    - Jason Zander: Announcing Visual Studio 2010 Service Pack 1

    I started on this post with tips on installing VS2010 SP1 when I realized I've been writing these up for Visual Studio and .NET Framework SP releases for a while (e.g. VS2008 / .NET 3.5 SP1 post, VS2005 SP1 post). Looking back over the years of Visual Studio SP installs (and remembering when we'd get up to SP6 for a Visual Studio release), I'm happy to see that it just keeps getting easier. Service Packs are a lot less finicky about requiring beta software to be uninstalled, install more quickly, and are just generally a lot less scary. If I can't have a jetpack, at least my future provided me faster, easier service packs.

    Disclaimer: These tips are just general things I've picked up over the years. I don't have any inside knowledge here. If you see anything wrong, be sure to let me know in the comments. You may want to check the readme file before installing - it's short, and it's in that new-fangled HTML format.

    On with the tips!

    Before starting, uninstall Visual Studio features you don't use. Visual Studio service packs (and other Microsoft service packs as well) install patches for the specific features you've got installed. This is a big reason to always do a custom install when you first install Visual Studio, but it's not difficult to update your existing installation. Here's the quick way to do that:
    1. Tap the Windows key and type "add or remove programs" and press Enter (or click on the "Add or remove programs" link if you must).
    2. Type "Visual Studio 2010" in the search box in the upper right corner, click on the Visual Studio program (the one with the VS infinity-looking logo) and click on Uninstall/Change.
    3. Click on Add or Remove Features.
    4. The next part's up to you - what features do you actually use? I've been doing primarily ASP.NET MVC development in C# lately, so I selected Visual C# and Visual Web Developer. Remember that you can install features later if needed, and can also install the express versions if you want. Selecting everything just because it's there - or you paid for it - means that you install updates for everything, every time. When you've made your changes, click on the Update button to uninstall unused features.

    Shut down all instances of Visual Studio. It probably goes without saying that you should close a program down before installing it, partly to avoid the file-in-use-reboot-after-install horror. Additional "hunch / works on my machine" quality tip: on one computer I saw a note in the setup log about a prompt for user input to close Visual Studio, although I never saw the prompt. Just to be sure, I'd personally open up Task Manager and kill any devenv.exe processes I saw running, as it couldn't hurt.

    Use the web installer. I use the web installers whenever possible. There's no point in downloading the DVD unless you're doing multiple installs or won't have internet access. The DVD is 1.5GB, since it needs to be able to service every possible supported installation option on both x86 and x64. The web installer is 776 KB (smaller than calc.exe), so you can start the installation right away. Like other web installers, the real benefit is that it only installs the updates you need (hence the reason for step 1 - uninstalling unused components). Instead of 1.5GB, my download was roughly 530MB.

    If you're installing from MSDN (this link takes you right to the Visual Studio installs), select the first one on the list. The first step in the installation process is to analyze the machine configuration and tell you what needs to be installed. Since I've trimmed down my features, that's a pretty short list. The time's not far off where I may not install SQL Server on my dev machines, just using SQL Server Compact - that would shorten the list further. When I hit next, you can see that the download size has shrunk considerably. When I start the install, note that the installation begins while other components are downloading - another benefit of the web install. On my mid-range desktop machine, the install took 25 minutes.

    What if it takes longer? According to Heath Stewart (Visual Studio installer guru), average SP1 installs take roughly 45 minutes. An installation which takes hours to complete may be a sign of a problem: see his post "Visual Studio 2010 Service Pack 1 installing for over 2 hours could be a sign of a problem."

    Why so long? Yes, even 25 minutes is a while. Heath's got another blog post explaining why the update can take longer than the initial install (see: "A patch may take as long or longer to install than the target product"), which explains all the additional steps and complexities a patch needs to deal with, as well as some mitigation steps that deployment authors can take to lessen the impact.

    Other things to know about Visual Studio 2010 SP1:

    Installs over Visual Studio 2010 SP1 Beta. That's nice. Previous Visual Studio versions did a number of annoying things when you installed SPs over betas - fail with weird errors, get part way through and tell you that you needed to cancel and uninstall first, etc. I've installed this on two machines that had random beta stuff installed without tears.

    That readme file you didn't read. I mentioned the readme file earlier (http://go.microsoft.com/fwlink/?LinkId=210711). Some interesting things I picked up in there:
    - 2.1.3. Visual Studio 2010 Service Pack 1 installation may fail when a USB drive or other removable drive is connected
    - 2.1.4. Visual Studio must be restarted after Visual Studio 2010 SP1 tooling for SQL Server Compact (Compact) 4.0 is installed
    - 2.2.1. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled to restore certain components
    - 2.2.2. If Visual Studio 2010 Service Pack 1 is uninstalled, Visual Studio 2010 must be reinstalled before SP1 can be installed again
    - 2.4.3.1. Async CTP: If you installed the pre-SP1 version of the Async CTP but did not uninstall it before you installed Visual Studio 2010 SP1, then your computer will be in a state in which the version of the C# compiler in the .NET Framework does not match the C# compiler in Visual Studio. To resolve this issue: after you install Visual Studio 2010 SP1, reinstall the SP1 version of the Async CTP from here.

    Hardware acceleration for Visual Studio is disabled on Windows XP. Visual Studio 2010 SP1 disables hardware acceleration when running on Windows XP (only on XP). You can turn it back on in the Visual Studio options, under Environment / General, as shown below. See Jason Zander's post titled "Performance Troubleshooting Article and VS2010 SP1 Change."

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities, to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements. This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:
    - Oracle Enterprise Manager 11g Grid Control Support: all components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: this release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature reduces the setup and configuration time needed to be up and operational.

    Lifecycle Automation Enhancements. Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: this latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, reducing the setup and configuration time needed to be up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: file source mappings and E-Business Suite mappings can be created and marked as public or private. Only the creator/owner can define/edit his or her own mappings. Users can use public mappings, but cannot edit or change settings.
    - Test Checkout Command for Versions: allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: you can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: you can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: provides a greater, more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: the package readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: the ability to use more criteria within an advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: more detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: automatic check for superseded patches. Allows users to easily add superseded patches into the patch run, for more comprehensive and correct patch runs. Removes many manual and laborious tasks, freeing up Apps DBAs for higher value-added work.
    - Automatic Primary Node Identification: users can now specify the "primary node" (that is, which node hosts the shared APPL_TOP) during the patch run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format - native support for an industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements. Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support:
    - KPI Reporting: configurable decimal precision for reporting of KPI and SLA values (by default, two decimal places), plus KPI numerator and denominator information - it is now possible to view this information and to have it available for export.
    - Content Messages Processing: the application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: the enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (it is possible to configure an alternative database for its storage), and access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements: enhanced dashboard performance (the dashboard facility has been enhanced to support the parallel loading of items; in the case of dashboards containing large numbers of items, this can result in a significant performance improvement); initial period selection within Data Browser and reports (the User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility - the default is the last hour); and a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions. The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:
    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
    SQL Wait Stats. About a year ago, I had a very proud moment: I published my second book, SQL Server Wait Stats, with me as the primary author. It has been a long journey since then. The book got a great response and was widely accepted in the community. It was the first book of its kind, written specifically on wait stats and performance. The book was based on my earlier month-long series written on the same subject, SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to my mind; let me share a few here.

    Idea behind the blog series. A very common question I often receive is why I wrote a 30-day series on wait stats. There were two reasons for it. 1) I have been working with SQL Server for a long time and have troubleshot hundreds of SQL Servers for issues related to performance tuning. It was a great experience and it taught me a lot of new things. I always documented my experience. After a while I found that I was able to completely rely on my own notes when I was troubleshooting any server. It was right then that I decided to document my experience for the community. 2) While working with wait stats there were a few things which I thought I knew well, as they were working. However, there was always a fear in the back of my mind: what happens if what I believed was incorrect and I was on the wrong path all the time? There was only one way to get it validated: put my understanding out in front of the community and request further help to improve it. It worked, it worked beautifully. I received plenty of conversations, emails and comments. I refined my content based on various conversations to make it more relevant and near accurate. I guess the above two are the major reasons for beginning my journey on writing the Wait Stats blog series.

    Idea behind the book. After writing the blog series there was a good amount of requests I kept on receiving that I should convert it to an eBook or a proper book, as reading blog posts is great but it does not give a comprehensive understanding of the subject. The very common feedback from users who were beginning the subject was that they would prefer to read it in a structured manner. After hearing the feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned the book, I wanted to make sure it addresses the wait stats concepts from the fundamentals and fills the gaps of the blogs I wrote earlier.

    Rick Morelan and the Joes 2 Pros team. I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before I published this one. The experience of writing a book is out of this world. Writing blog posts is much, much easier than writing books. The effort it takes to write a book is 100 times more, even though the content is ready. I could not have done it myself without the tremendous support of my co-author and the editor's team. We spent days and days researching and discussing various concepts covered in the book. When we were in doubt we reached out to experts, and we also did practical reproductions of the scenarios to validate the concepts and claims. After 3 continuous months of hard work we were able to get this book out in the community.

    September 1st - the lucky day. Well, we had to select a day to publish the book. When the book was completed in the last week of August we felt very glad. We had all worked hard, and having a sample draft book in hand felt like having a newborn baby in our hands. Every time my books are published I feel the same joy which I had when my daughter was born. The feeling of holding a new book in hand is (almost) the same feeling as holding a newborn baby. I am excited. For me, September 1st has been the luckiest day in my life. My daughter Shaivi was born on September 1st. Since then every September 1st has been an excellent day and has taken me to the next step in life. I believe anything and everything I do on September 1st turns out to be successful and blessed. Rick and I had finished the book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible. We did not decide on any date, as we wanted the book to get out as fast as it could. Interestingly enough, the publisher/printer selected September 1st for publishing the book. He published the book on September 1st, and I knew at the same time that this book would go to the next level.

    Book model - the most beautiful girl. We were done with the book. We had no budget left for marketing. Rick and I had a long conversation regarding how to spread the word about the book so it could reach many people. While we were talking about marketing, Rick came up with the idea that we should hire the most beautiful girl around who would acknowledge our book and genuinely care for it. It was a difficult task, and Rick asked me to find the most beautiful girl. I am a father, and the most beautiful girl for me is my daughter. This was not a difficult task for me. Rick had given me the task of finding the most beautiful girl and I just could not think of anyone other than my own daughter. I still do not know what Rick thought about this idea, but I had already made up my mind. You can see the detailed blog post here.

    The fun experiments: book signing events. We have had lots of fun moments with this book. We have given away more books for free than we have actually sold. We have done book signing events, contests, and just plain giveaways when we found people who could benefit from this book. There was never an intention to make money and get rich. We just wanted more and more people to know about this new concept and learn from it. Today, when I look back at the earnings, there is not much we have earned if you talk about dollars. However, the best reward we have received is the satisfaction and love of the community. The number of emails and conversations we have received so far for this book is over a thousand. We had fun writing this book; it was indeed a very satisfying journey. I have earned lots of friends while learning and exploring.

    Availability. The book is one year old but still very relevant when it comes to performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • Useful software for netbook?

    - by Moayad Mardini
    I'm looking for recommendations for good software that is particularly useful for netbooks: software that runs well on small screens and has low CPU/RAM requirements. I'll start off with the following : Operating Systems: Ubuntu Netbook Remix. Easy Peasy: A fork of Ubuntu Netbook Remix that was once called UBuntu EEE. It isn't just for eeePCs though. Definitely worth a look if vanilla Netbook Remix isn't cutting it. (MarkM) Damn Small Linux (Source) Windows 7: With trimming the installation or compressing the Windows directory to fit on an 8GB SSD. (Will Eddins) nLite: A utility to install a lightweight version of Windows XP without the unnecessary components (like Media Player, Internet Explorer, Outlook Express, MSN Explorer, Messenger...). Utilities: TouchFreeze: To disable the touch pad while typing. (Source) InSSIDer: Not only does it make it easier to find and keep a wireless connection, but it turns a netbook into the perfect mobile tool for troubleshooting wireless networks. (phenry) AltMove: Adds more functionality to your mouse for interacting with windows. (Rob) ASUS Font Resizer Utility and other tools by ASUS, specific to the ASUS Eee PC series. Internet: Run the FileZilla FTP client on a small screen: you can hide a lot of FileZilla's interface parts in the View menu, even the directory trees. Go into Settings > Interface and move the message log next to the transfer queue, if you haven't hidden them both or you want to see them. Select a theme with 16x16 icons. (Source) IDEs and Text Editors: Best lightweight IDE/Text Editor: A question on Stack Overflow that has many good suggestions of IDEs and general text editors for programmers. What's a good Linux C/C++ IDE for a low-res screen?: IDEs for Linux-powered netbooks. Online tools: Dropbox: Since a netbook has limited disk space, you may want to use cloud apps like Dropbox and Ubuntu One so that you don't run out of space, especially if you are on holiday. Later, when you go back to a desktop with a big hard disk, you can take the files out of your Dropbox repo. (Manish Sinha) Google products: like Docs, Calendar and Reader (aviraldg) Web sites and software lists: Netbookfiles.com: Netbook-specific software downloads. Software Apps to Maximise your Netbook Battery Power: Netbooks are known for their portability. Not only are they small and lightweight, but with their increased power efficiency, batteries can last much longer than in conventional laptops. This also means you no longer have to carry a power adapter with you! Several brands emphasize the longevity of the battery as a strong selling point, and for people who travel a lot, it sure is. Free Must-Have Netbook Apps: Finding software for netbooks can present challenges due to limited hard drive space, processor power, RAM, and screen real estate. That doesn't mean you have to do without essential programs. The apps below cover all the bases -- entertainment, productivity, security, and communication -- without compromising on performance or usability. Best of all, they're free! Useful Netbook Software: With short battery lives and low-resolution screens, netbooks, unlike many other computers on the market, could do with some software specific to their needs. Not all of those I've found are specifically designed for netbooks, but all are relevant, and they're designed for Windows XP. The question is community wiki, so feel free to edit it. Updated: thank you all for the suggestions.

    Read the article

  • Cloud Based Load Testing Using TF Service &amp; VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through: what Cloud Based Load Testing is; how I have been using this feature (a success story!); and where you can find more resources on it. What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you confidence that it will work under heavy levels of stress, but also lets you test how scalable your application’s architecture is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing & Performance Testing adoption are: the high infrastructure and administration cost that comes with this phase of testing; the time taken to procure and set up the test infrastructure; and finding a use for this infrastructure investment after testing completes. Is cloud the answer? 100% Visual Studio compatible; scalable and realistic; start testing in under 2 minutes; intuitive; pay only for what you need; use existing on-premise tests in the cloud. There are a lot of vendors out there offering cloud-based load testing, to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others. The question you may want to ask is why you should go with Microsoft’s cloud-based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run in the cloud without requiring any modifications whatsoever. Microsoft’s cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests that invoke the WCF service, and these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If your assets are already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost for the generated traffic, since the traffic comes from the same data centre. The licensing and pricing information for Microsoft’s cloud-based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on the Microsoft Cloud Based Load Test service is to set the test run location to ‘Run tests using Visual Studio Team Foundation Service’. How have I been using Microsoft’s Cloud Based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product take shape from concept to working solution; I was also the first person outside of Microsoft to try this offering out. That let me test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service was an insurance quote generation engine.
    This insurance quote generation engine is hosted in Windows Azure, expected to receive quote requests from across the globe, and expected to handle 5 million quote requests in a day (it was not clear how this load would be distributed across the day). There was no way I could simulate that kind of load from on-premise infrastructure without standing up additional hardware, but Microsoft’s cloud-based Load Test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing, and testing for latency. Simulating expected load: approach to devising a load pattern My approach to devising a load test pattern is to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. For example, generating 1 quote from the quote engine takes 0.5 seconds, so 1 user generates 2 quotes per second. Now if you do the math: quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000 (a short calculator sketch of this arithmetic follows at the end of this post). This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, you can devise your own load pattern. Some examples of load test patterns can be found here. Endurance Testing To test for endurance, I loaded the quote generation engine with the expected fixed user load and ran the test for a very long duration, over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test. Threshold Testing To figure out how much user load the application could cope with before falling over, I opted to step-load the quote generation engine, incrementing the user load at various rates per minute until the application crashed and forced an IIS reset. Testing for Latency Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine to different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box from Visual Studio 2010 onwards), I could see the difference in response times. More resources on Microsoft Cloud based Load Test Service A few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
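    Here is the calculator sketch promised above: a minimal C# back-of-the-envelope version of the load-pattern arithmetic. The 0.5-second response time and 5-million-request daily target are the hypothetical figures from the example, and the class and variable names are my own, not part of any load-testing API.

        using System;

        class LoadPatternCalculator
        {
            static void Main()
            {
                // Hypothetical figures from the example above.
                double secondsPerRequest = 0.5;   // measured single-user response time
                long dailyTarget = 5000000;       // expected quote requests per day

                // One user issuing requests back-to-back for 24 hours.
                double requestsPerUserPerDay = (1.0 / secondsPerRequest) * 60 * 60 * 24; // 172,800

                // Round up so the simulated load meets or exceeds the target.
                int usersNeeded = (int)Math.Ceiling(dailyTarget / requestsPerUserPerDay);

                Console.WriteLine("Requests per user per day: {0:N0}", requestsPerUserPerDay);
                Console.WriteLine("Concurrent users needed:   {0}", usersNeeded); // 29; the post rounds up to 30
            }
        }

    The ceiling gives 29 users (29 * 172,800 = 5,011,200 requests), which the post rounds up to 30 for headroom.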

    Read the article

  • Why is this mod_rewrite RewriteRule directive not working in the .htaccess file?

    - by morgant
    I've got a site that was hosted on an el-cheapo Linux hosting service, and I'm migrating it to my Mac OS X 10.5 Leopard Server box running Apache 2.2.8 & PHP 5.2.5 with rewrite_module enabled and AllowOverride All, but I'm running into an issue with the following lines in the .htaccess file: RewriteEngine On #RewriteRule ^view/([^/\.]+)/?$ /view.php?item=$1 [L] #RewriteRule ^order/([^/\.]+)/?$ /order.php?item=$1 [L] RewriteRule ^category/([^/\.]+)/?$ /category.php?category=$1 [L] As you can see, I've commented out the RewriteRule directives for /view/ and /order/, so I'm only dealing with /category/. When I attempt to load http://domain.tld/category/2/ it runs category.php (I've added debug code to confirm), but $_SERVER['REQUEST_URI'] comes through as /category/2/ and $_GET['category'] comes through as empty. I'm usually fine with troubleshooting .htaccess files and mod_rewrite directives, but this one's got me stumped for some reason. Update: I followed Josh's suggestion and here's what's dumped to mod_rewrite.log when I try to access http://domain.tld/category/2/:
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (2) init rewrite engine with requested uri /category/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (3) applying pattern '.*' to uri '/category/13'
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (1) pass through /category/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6aa98/subreq] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] add path info postfix: /Library/WebServer/Documents/tld.domain.www/category.php -> /Library/WebServer/Documents/tld.domain.www/category.php/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6aa98/subreq] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] strip per-dir prefix: /Library/WebServer/Documents/tld.domain.www/category.php/13 -> category.php/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6aa98/subreq] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] applying pattern '^category/([^/\.]+)/?$' to uri 'category.php/13'
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6aa98/subreq] (1) [perdir /Library/WebServer/Documents/tld.domain.www/] pass through /Library/WebServer/Documents/tld.domain.www/category.php
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] add path info postfix: /Library/WebServer/Documents/tld.domain.www/category.php -> /Library/WebServer/Documents/tld.domain.www/category.php/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] strip per-dir prefix: /Library/WebServer/Documents/tld.domain.www/category.php/13 -> category.php/13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] applying pattern '^category/([^/\.]+)/?$' to uri 'category.php/13'
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b5ea98/initial] (1) [perdir /Library/WebServer/Documents/tld.domain.www/] pass through /Library/WebServer/Documents/tld.domain.www/category.php
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (2) init rewrite engine with requested uri /13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (3) applying pattern '.*' to uri '/13'
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (1) pass through /13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] strip per-dir prefix: /Library/WebServer/Documents/tld.domain.www/13 -> 13
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (3) [perdir /Library/WebServer/Documents/tld.domain.www/] applying pattern '^category/([^/\.]+)/?$' to uri '13'
    65.19.81.253 - - [22/Oct/2009:17:31:53 --0400] [domain.tld/sid#100aae0b0][rid#100b6ea98/subreq] (1) [perdir /Library/WebServer/Documents/tld.domain.www/] pass through /Library/WebServer/Documents/tld.domain.www/13
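    Given the "add path info postfix" entries in that log, one plausible culprit (an educated guess from the log, not confirmed by the original poster) is Apache's MultiViews content negotiation: with MultiViews on, Apache maps /category/13 to category.php with path info /13 before mod_rewrite's per-directory pattern runs, so '^category/([^/\.]+)/?$' is applied to 'category.php/13', never matches the original URI, and the query string is never set. If that is the cause, disabling MultiViews at the top of the .htaccess file should fix it:

        Options -MultiViews
        RewriteEngine On
        RewriteRule ^category/([^/\.]+)/?$ /category.php?category=$1 [L]

    This requires that the server's AllowOverride setting permit Options directives; with AllowOverride All, as described above, it does.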

    Read the article

  • 202 blog articles

    - by mprove
    All my blog articles under blogs.oracle.com since August 2005: 202 blog articles Apr 2012 blogs.oracle.com design patch Mar 2012 Interaction 12 - Critique Mar 2012 Typing. Clicking. Dancing. Feb 2012 Desktop Mobility in Hospitals with Oracle VDI /video Feb 2012 Interaction 12 in Dublin - Highlights of Day 3 Feb 2012 Interaction 12 in Dublin - Highlights of Day 2 Feb 2012 Interaction 12 in Dublin - Highlights of Day 1 Feb 2012 Shit Interaction Designers Say Feb 2012 Tips'n'Tricks for WebCenter #3: How to display custom page titles in Spaces Jan 2012 Tips'n'Tricks for WebCenter #2: How to create an Admin menu in Spaces and save a lot of time Jan 2012 Tips'n'Tricks for WebCenter #1: How to apply custom resources in Spaces Jan 2012 Merry XMas and a Happy 2012! Dec 2011 One Year Oracle SocialChat - The Movie Nov 2011 Frank Ludolph's Last Working Day Nov 2011 Hans Rosling at TED Oct 2011 200 Countries x 200 Years Oct 2011 Blog Aggregation for Desktop Virtualization Oct 2011 Oracle VDI at OOW 2011 Sep 2011 Design for Conversations & Conversations for Design Sep 2011 All Oracle UX Blogs Aug 2011 Farewell Loriot Aug 2011 Oracle VDI 3.3 Overview Aug 2011 Sutherland's Closing Remarks at HyperKult Aug 2011 Surface and Subface Aug 2011 Back to Childhood in UI Design Jul 2011 The Art of Engineering and The Engineering of Art Jul 2011 Oracle VDI Seminar - June-30 Jun 2011 SGD White Paper May 2011 TEDxHamburg Live Feed May 2011 Oracle VDI in 3 Minutes May 2011 Space Ship Earth 2011 May 2011 blog moving times Apr 2011 Frozen tag cloud Apr 2011 Oracle: Hardware Software Complete in 1953 Apr 2011 Interaction Design with Wireframes Apr 2011 A guide to closing down a project Feb 2011 Oracle VDI 3.2.2 Jan 2011 free VDI charts Jan 2011 Sun Founders Panel 2006 Dec 2010 Sutherland on Leadership Dec 2010 SocialChat: Efficiency of E20 Dec 2010 ALWAYS ON Desktop Virtualization Nov 2010 12,000 Desktops at JavaOne Nov 2010 SocialChat on Sharing Best Practices Oct 2010 Globe of Visitors Oct 2010 SocialChat about the Next Big Thing Oct 2010 Oracle VDI UX Story - Wireframes Oct 2010 What's a PC anyway? Oct 2010 SocialChat on Getting Things Done Oct 2010 SocialChat on Infoglut Oct 2010 IT Twenty Twenty Oct 2010 Desktop Virtualization Webcasts from OOW Oct 2010 Oracle VDI 3.2 Overview Sep 2010 Blog Usability Top 7 Sep 2010 100 and counting Aug 2010 Oracle'izing the VDI Blogs Aug 2010 SocialChat on Apple Aug 2010 SocialChat on Video Conferencing Aug 2010 Oracle VDI 3.2 - Features and Screenshots Aug 2010 SocialChat: Don't stop making waves Aug 2010 SocialChat: Giving Back to the Community Aug 2010 SocialChat on Learning in Meetings Aug 2010 iPAD's Natural User Interface Jul 2010 Last day for Sun Microsystems GmbH Jun 2010 SirValUse Celebration Snippets Jun 2010 10 years SirValUse - Happy Birthday! Jun 2010 Wim on Virtualization May 2010 New Home for Oracle VDI Apr 2010 Renaissance Slide Sorter Comments Apr 2010 Unboxing Sun Ray 3 Plus Apr 2010 Desktop Virtualisierung mit Sun VDI 3.1 Apr 2010 Blog Relaunch Mar 2010 Social Messaging Slides from CeBIT Mar 2010 Social Messaging Talk at CeBIT Feb 2010 Welcome Oracle Jan 2010 My last presentation at Sun Jan 2010 Ivan Sutherland on Leadership Jan 2010 Learning French with Sun VDI Jan 2010 Learning Danish with Sun Ray Jan 2010 VDI workshop in Nieuwegein Jan 2010 Happy New Year 2010 Jan 2010 On Creating Slides Dec 2009 Best VDI Ever Nov 2009 How to store the Big Bang Nov 2009 Social Enterprise Tools. Beipiel Sun. 
Nov 2009 Nov-19 Nov 2009 PDF and ODF links on your blog Nov 2009 Q&A on VDI and MySQL Cluster Nov 2009 Zürich next week: Swiss Intranet Summit 09 Nov 2009 Designing for a Sustainable World - World Usabiltiy Day, Nov-12 Nov 2009 How to export a desktop from VDI 3 Nov 2009 Virtualisation Roadshow in the UK Nov 2009 Project Wonderland at EDUCAUSE 09 Nov 2009 VDI Roadshow in Dublin, Nov-26, 2009 Nov 2009 Sun VDI at EDUCAUSE 09 Nov 2009 Sun VDI 3.1 Architecture and New Features Oct 2009 VDI 3.1 is Early-Access Sep 2009 Virtualization for MySQL on VMware Sep 2009 Silpion & 13. Stock Sommerparty Sep 2009 Sun Ray and VMware View 3.1.1 2009-08-31 New Set of Sun Ray Status Icons 2009-08-25 Virtualizing the VDI Core? 2009-08-23 World Usability Day Hamburg 2009 - CfP 2009-07-16 Rising Sun 2009-07-15 featuring twittermeme 2009-06-19 ISC09 Student Party on June-20 /Hamburg 2009-06-18 Before and behind the curtain of JavaOne 2009-06-09 20k desktops at JavaOne 2009-06-01 sweet microblogging 2009-05-25 VDI 3 - Why you need 3 VDI hosts and what you can do about that? 2009-05-21 IA Konferenz 2009 2009-05-20 Sun VDI 3 UX Story - Power of the Web 2009-05-06 Planet of Sun and Oracle User Experience Design 2009-04-22 Sun VDI 3 UX Story - User Research 2009-04-08 Sun VDI 3 UX Story - Concept Workshops 2009-04-06 Localized documentation for Sun Ray Connector for VMware View Manager 1.1 2009-04-03 Sun VDI 3 Press Release 2009-03-25 Sun VDI 3 launches today! 2009-03-25 Sun Ray Connector for VMware View Manager 1.1 Update 2009-03-11 desktop virtualization wiki relaunch 2009-03-06 VDI 3 at CeBIT hall 6, booth E36 2009-03-02 Keyboard layout problems with Sun Ray Connector for VMware VDM 2009-02-23 wikis.sun.com tips & tricks 2009-02-23 Sun VDI 3 is in Early Access 2009-02-09 VirtualCenter unable to decrypt passwords 2009-02-02 Sun & VMware Desktop Training 2009-01-30 VDI at next09? 2009-01-16 Sun VDI: How to use virtual machines with multiple network adapters 2009-01-07 Sun Ray and VMware View 2009-01-07 Hamburg World Usability Day 2008 - Webcasts 2009-01-06 Sun Ray Connector for VMware VDM slides 2008-12-15 mother of all demos 2008-12-08 Build your own Thumper 2008-12-03 Troubleshooting Sun Ray Connector for VMware VDM 2008-12-02 My Roller Tag Cloud 2008-11-28 Sun Ray Connector: SSL connection to VDM 2008-11-25 Setting up SSL and Sun Ray Connector for VMware VDM 2008-11-13 Inspiration for Today and Tomorrow 2008-10-23 Sun Ray Connector for VMware VDM released 2008-10-14 From Sketchpad to ILoveSketch 2008-10-09 Desktop Virtualization on Xing 2008-10-06 User Experience Forum on Xing 2008-10-06 Sun Ray Connector for VMware VDM certified 2008-09-17 Virtual Clouds over Las Vegas 2008-09-14 Bill Verplank sketches metaphors 2008-09-04 End of Early Access - Sun Ray Connector for VMware 2008-08-27 Early Access: Sun Ray Connector for VMware Virtual Desktop Manager 2008-08-12 Sun Virtual Desktop Connector - Insides on Recycling Part 2 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling Part 3 2008-07-20 Sun Virtual Desktop Connector - Insides on Recycling 2008-07-20 lost in wiki space 2008-07-07 Evolution of the Desktop 2008-06-17 Virtual Desktop Webcast 2008-06-16 Woodstock 2008-06-16 What's a Desktop PC anyway? 2008-06-09 Virtual-T-Box 2008-06-05 Virtualization Glossary 2008-05-06 Five User Experience Principles 2008-04-25 Virtualization News Feed 2008-04-21 Acetylcholinesterase - Second Season 2008-04-18 Acetylcholinesterase - End of Signal 2007-12-31 Produkt-Management ist... 
2007-10-22 Usability Verbände, Verteiler und Netzwerke. 2007-10-02 The Meaning is the Message 2007-09-28 Visualization Methods 2007-09-10 Inhouse und Open Source Projekte – Usability verankern und Synergien nutzen 2007-09-03 Der Schwabe Darth Vader entdeckt das Virale Marketing 2007-08-29 Dick Hardt 3.0 on Identity 2.0 2007-08-27 quality of written text depends on the tool 2007-07-27 podcasts for reboot9 2007-06-04 It is the user's itch that need to be scratched 2007-05-25 A duel at reboot9 2007-05-14 Taxonomien und Folksonomien - Tagging als neues HCI-Element 2007-05-10 Dueling Interaction Models of Personal-Computing and Web-Computing 2007-03-01 22.März: Weizenbaum. Rebel at Work. /Filmpremiere Hamburg 2007-02-25 Bruce Sterling at UbiComp 2006 /webcast 2006-11-12 FSOSS 2006 /webcasts 2006-11-10 Highway 101 2006-11-09 User Experience Roundtable Hamburg: EuroGEL 2006 2006-11-08 Douglas Adams' Hyperland (BBC 1990) 2006-10-08 Taxonomien und Folksonomien – Tagging als neues HCI-Element 2006-09-13 Usability im Unternehmen 2006-09-13 Doug does HyperScope 2006-08-26 TED Talks and TechTalks 2006-08-21 Kai Krause über seine Freundschaft zu Douglas Adams 2006-07-20 Rebel At Work: Film Portrait on Weizenbaum 2006-07-04 Gabriele Fischer, mp3 2006-06-07 Dick Hardt at ETech 06 2006-06-05 Weinberger: From Control to Conversation 2006-04-16 Eye Tracking at User Experience Roundtable Hamburg 2006-04-14 dropping knowledge 2006-04-09 GEL 2005 2006-03-13 slide photos of reboot7 2006-03-04 Dick Hardt on Identity 2.0 2006-02-28 User Experience Newsletter #13: Versioning 2006-02-03 Ester Dyson on Choice and Happyness 2006-02-02 Requirements-Engineering im Spannungsfeld von Individual- und Produktsoftware 2006-01-15 User Experience Newsletter #12: Intuition Quiz 2005-11-30 User Experience und Requirements-Engineering für Software-Projekte 2005-10-31 Ivan Sutherland on "Research and Fun" 2005-10-18 Ars Electronica / Mensch und Computer 2005 2005-09-14 60 Jahre nach Memex: Über die Unvereinbarkeit von Desktop- und Web-Paradigma 2005-08-31 reboot 7 2005-06-30

    Read the article

  • Windows 7 Samba issue

    - by abduls85
    We have a strange Samba issue affecting only one user. Our Samba setup is as follows: Red Hat Enterprise Linux Server release 5.4 (Tikanga) - Samba server; Samba version 3.0.33-3.14.el5; WIN2008R2 Standard - Windows domain controller; Windows 7 64-bit - client PCs. The user mentioned that he has faced this problem since he forcibly shut down his PC a few weeks ago. Normally, when any user accesses \\sambaservername in Windows, it shows all the shares on the Samba server; but for this user, once he starts up his PC he is not able to access \\sambaservername at all. The error message is: Windows cannot access \\sambaservername. The current workaround is to access one specific share on \\sambaservername, for instance \\sambaservername\sharedfolder1. Even when doing so, an error is prompted first: Logon failure: unknown user name or bad password. The user needs to enter his credentials again, and then he can access the share. Thereafter, he can access \\sambaservername without any issues, but once he reboots his computer the problem returns. Troubleshooting done so far: Ensured the following settings (go to Control Panel → Administrative Tools → Local Security Policy, then select Local Policies → Security Options): "Network security: LAN Manager authentication level" → Send LM & NTLM responses; "Minimum session security for NTLM SSP" → uncheck: Require 128-bit encryption. Advised the user to reset his password and try again, but the problem persists. Tried my account on the user's PC: no issues. Tried the user's account on several other Windows 7 PCs, including mine, but the problem persists. Windows XP does not have this problem. Ensured that there are no stored credentials on the Windows 7 PC, checking Credential Manager in Control Panel as well as typing the command rundll32.exe keymgr.dll, KRShowKeyMgr. Restarted the winbindd daemon on the Samba server, to no avail. I suspect this is a caching issue, but I am not sure where. Whenever the user gets an error accessing \\sambaservername, the following errors are logged on the Samba server:
    [2012/10/10 17:10:26, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    [2012/10/10 17:10:27, 1] smbd/sesssetup.c:reply_spnego_kerberos(316) Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!
    After the workaround, there are no more errors. Having read the articles listed below, I suspect some amendments need to be made to the \var\samba\cache directory: http://www.linuxquestions.org/questions/linux-server-73/getent-passwd-dont-show-ad-groups-and-users-745829/ http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/tdb.html http://lists.samba.org/archive/samba/2010-May/155521.html http://lists.samba.org/archive/samba/2011-March/161912.html http://lzeit.blogspot.sg/2009/10/samba-shares-inaccessible-after-power.html There are several users on this Samba server, and I would like to solve the problem without impacting them. I saw the following in the smb.conf documentation (http://www.samba.org/samba/docs/man/manpages-3/smb.conf.5.html#WINBINDCACHETIME): "winbind offline logon (G) This parameter is designed to control whether Winbind should allow to login with the pam_winbind module using Cached Credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache. Default: winbind offline logon = false Example: winbind offline logon = true" Any idea how to delete the entry for one user from the local cache?
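    For what it's worth, a sketch of the cache-clearing steps worth trying. The tdb file location is an assumption (distributions vary); confirm the lock directory with smbd -b | grep LOCKDIR before touching anything, and note that removing the whole cache file affects cached entries for all users until winbindd repopulates them.

        # Flush Samba's generic cache (gencache); user-agnostic and safe.
        net cache flush

        # Winbind keeps its own cache in winbindd_cache.tdb; stop winbindd first.
        service winbind stop

        # Back up the cache file before modifying or removing it.
        cp /var/lib/samba/winbindd_cache.tdb /root/winbindd_cache.tdb.bak

        # tdbtool (if available on your distribution) can inspect the file and
        # delete individual records, which is the closest thing to removing a
        # single user's entry:
        #   tdbtool /var/lib/samba/winbindd_cache.tdb
        # Otherwise, removing the file forces winbindd to rebuild it on start:
        rm /var/lib/samba/winbindd_cache.tdb

        service winbind start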

    Read the article

  • Developer’s Life – Disaster Lessons – Notes from the Field #039

    - by Pinal Dave
    [Note from Pinal]: This is the 39th episode of the Notes from the Field series. What is the best solution you have when you encounter a disaster in your organization? Many of you would answer that in this scenario you would have a standby machine or an alternative which you would plug in. Now let me ask a second question: what would you do if you, as an individual, face disaster? In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers, one that is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times. Howdy! When it was my turn to share the Notes from the Field last time, I took a departure from my normal technical content to talk about Attitude and Communication (http://blog.sqlauthority.com/2014/05/08/developers-life-attitude-and-communication-they-can-cause-problems-notes-from-the-field-027/). Pinal said it was a popular topic, so I hope he won't mind if I stick with Professional Development for another of my turns at sharing some information here. Like I said last time, the "soft skills" of the IT world are often just as important – sometimes more important – than the technical skills. As a consultant with Linchpin People, I see so many situations where the professional skills I've gained and use are more valuable to clients than knowing the best way to tune a query. Today I want to continue talking about professional development and tell you about the way I almost got myself hit by a train – and why that matters in our day jobs. Sometimes we can learn a lot from disasters, whether we caused them or someone else did. If you are interested in some of my observations, you can see more where I talk about lessons from disasters on my blog. For now, though, on to how I almost got my vehicle hit by a train… The Train Crash That Almost Was… My family and I own a little schoolhouse building about a 10 mile drive away from our house. We use it as a free resource for families in the area that homeschool their children, so they can have some class space. I go up there a lot to check in on the property, to take care of the trash and to do work on the property. On the way there, there is a very small stop-sign-controlled railroad intersection. Only two small freight trains a day pass there; actually it is the same train, making a journey south and then back north. That's it. The road is a small rural road, with barely ever a second car in the neighborhood when I am there. The stop sign is pretty much there only for the train crossing. When we first bought the building, I was up there a lot doing renovations on the property. Being familiar with the area, I am also familiar with the train schedule and know the tracks are normally free of trains. So I developed a bad habit. You see, I'd approach the stop sign and slow down as I rolled through it. Sometimes I'd take a quick look and come to an "almost" stop but keep on going. I let my impatience and complacency take over, because most of the time I was going there long after the train was done for the day or in between its runs. This habit became pretty well established after a couple of years of driving the route, the behavior reinforced a bit by the success ratio. I saw others from the neighborhood doing it as well whenever I happened to be there at the same time as another car. Well. You already know where this ends up from the title and backstory.
    A few months ago I came to that little crossing and started the normal routine. I'd pretty much stopped looking, in some respects, because of the pattern I'd gotten into. For some reason I looked, heard and saw the train slowly approaching, slammed on my brakes and stopped. It was an abrupt stop, and it was close. I probably would have made it okay, but once I started breathing again I sat there thinking about lessons for IT professionals from the situation and watched the cars loaded with sand and propane slowly labor down the tracks… Here are Those Lessons… It's easy to get stuck in a routine. That isn't always bad, except when it's a bad routine. Momentum and inertia are powerful: once you have a habit and a routine developed, it's really hard to break. Make sure you are setting the right routines and habits TODAY. What almost-dangerous things are you doing today? How are you almost messing up your production environment today? Stop doing that. Be Deliberate (even when you are the only one) – Like I said, a lot of people roll through that stop sign. Perhaps the neighbors or other drivers think "why is he fully stopping and looking… the train only comes two times a day!" They can think that all they want. Through deliberate actions and forcing myself to pay attention, I will avoid that oops again. Slow down. Take a deep breath. Be deliberate in your job. Pay attention to the small stuff and go out of your way to be careful. It will save you later. Be Observant – Keep your eyes open. By looking around, observing the situation and understanding what your servers, databases, users and vendors are doing, you'll notice when something is out of place. But if you don't know what is normal, if you don't look to make sure nothing has changed, that train will come and get you. Where can you be more observant? What warning signs are you ignoring in your environment today? In the IT world, trains are everywhere. Projects move fast. Decisions happen fast. Problems turn from a warning sign to a disaster quickly. If you get stuck in a complacent pattern of "everything is okay, it always has been and always will be", that is when you will most likely get stuck in a bad situation. Don't let yourself get complacent, and don't let your team get complacent. Avoiding complacency leads to being proactive, and a proactive environment spends less money on consultants for troubleshooting problems you should have seen ahead of time; you can spend your money and IT budget on improving things for your customers. If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced, or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:
    1. A new database application is rolled out to a production server from the development or test environment.
    2. A copy of the production database needs to be installed on a test server for troubleshooting purposes.
    3. A copy of the development database is regularly refreshed on a test server during the system development life cycle.
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration.
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server.
    6. A database has to be restored from a backup file provided by a third-party application vendor.
    7. A backup of the database is restored in the same or a different instance for disaster recovery.
    8. A database needs to be migrated within the same instance: a. files are moved from direct attached storage to a storage area network, or b. the same database is copied under a different name for another application.
    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout: the backup file is either provided to the DBA, or the DBA takes the backup and restores it on the target server. Sometimes the database is detached from the source and the files are copied to, and attached in, the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved. Step 1: Make a copy of data and log files when attaching and detaching When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following: Server: Msg 602, Level 21, State 50, Line 1 Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes. Connection Broken If you try to back up the attached database and restore it on the source, it will still fail.
    Similarly, if you restore the database on a newer version, it cannot be backed up or detached and put back on an older version of SQL Server. Unlike the detach-and-attach method, though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file and instruct SQL Server to create a new one; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases. Step 2: Change database compatibility level Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version's compatibility level. The only time you would want to keep a database at an older compatibility level is when code within the database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers use this form of join, you would want to keep the database at the older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you decide whether you should change the compatibility level or not, and you can always change the compatibility mode from the newest back to an older version if necessary. To change the compatibility level, you can either use the database's properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure. Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted on a SQL Server 2005 system and still had compatibility level 80 (SQL 2000).     Continues…
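    A short T-SQL sketch of Step 2, where the database name MyAppDB is a placeholder: sp_dbcmptlevel is the SQL 2005-era method named above, while ALTER DATABASE ... SET COMPATIBILITY_LEVEL is its replacement from SQL Server 2008 onwards.

        -- Check the current compatibility level (80 = SQL 2000, 90 = SQL 2005).
        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = 'MyAppDB';

        -- SQL Server 2005: raise the level with the system stored procedure.
        EXEC sp_dbcmptlevel @dbname = 'MyAppDB', @new_cmptlevel = 90;

        -- SQL Server 2008 and later: the same change via ALTER DATABASE.
        -- ALTER DATABASE MyAppDB SET COMPATIBILITY_LEVEL = 100;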

    Read the article

  • Integrating Flickr with ASP.Net application

    - by sreejukg
    Flickr is the popular photo management and sharing application offered by Yahoo. The services from Flickr allow you to store and share photos and videos online, and Flickr offers strong API support for almost all the services it provides. Using this API, developers can integrate photos into their public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring photos stored on Flickr to your website. Let me explain the scenario this article is trying to address: I have a Flickr account where I upload photos and share them in the many ways Flickr offers. Now I have a public website, and instead of re-uploading the photos to the public website, I want to show them from Flickr. I also need complete control over which photos to display. So I went and read the Flickr documentation, and there is API support ready to address my scenario (and more…). FlickrAPI for ASP.Net To integrate Flickr with ASP.Net applications, there is a library available on CodePlex. You can find it here: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download is a ZIP file; when you unzip it you will get a number of DLLs. Since I am going to use an ASP.Net application, I need FlickrNet.dll. See the screenshot of all the DLLs; there is also a help file (.chm) in the download for your reference. Once you have the DLL, you can use the Flickr API from your website. I assume you have a Flickr account and are familiar with Flickr services. Arrange your photos using Sets in Flickr In Flickr, you can define sets and add your uploaded photos to them. You can compare a set to a photo album: a set is a logical collection of photos, which makes it an excellent option for categorizing your photos. Typically you will have a number of sets, each set having a few photos. You can write an application that brings photos from sets to your website. For the purpose of this article I have already created a set in Flickr and added some photos to it. Once you log in to Flickr, you can see the Sets under the menu. On the Sets page, you will see all the sets you have created. As you will notice, there are certain sample images I have uploaded just to test the functionality. I couldn't create good photos, so please bear with me. I have created 2 photo sets named Blue Album and Red Album. Clicking on the image for a set will take you to the corresponding set page. In the set "Red Album" there are 4 photos, and the set has a unique ID (highlighted in the URL). You can simply retrieve the photos with the set ID from your application. In this article I am going to retrieve the images from the Red Album in my ASP.Net page. For that, I first need to set up the Flickr API for my usage. Configure Flickr API Key As I mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr. In order to get access to the Flickr API, you need an API key. To create an API key, navigate to the URL http://www.flickr.com/services/apps/create/ Click on the Request an API key link. Now you need to tell Flickr whether your application is commercial or non-commercial; I have selected a non-commercial key. Next you need to enter certain information about your application. Once you enter the details, click on the Submit button, and Flickr will create the API key for your application.
    Generating a non-commercial API key is very easy; in a couple of steps the key is generated and you can use it in your application immediately. ASP.Net application for retrieving photos Now we need to write an ASP.Net application that displays pictures from Flickr. Create an empty web application (I named it FlickerIntegration) and add a reference to FlickrNet.dll. Add a web form page to the application where you will retrieve and display photos (I have named it Gallery.aspx). After doing all this, the Solution Explorer will look similar to the following. I have used the below code in the Gallery.aspx page. The output of the above code is as follows. I am going to explain the code line by line here. First it adds a reference to the FlickrNet namespace: using FlickrNet; Then create a Flickr object using your API key: Flickr f = new Flickr("<yourAPIKey>"); Now, when you retrieve photos, you can decide which fields to retrieve from Flickr. Every photo in Flickr carries lots of information, and retrieving all of it will affect performance. For demonstration purposes, I have retrieved all the available fields as follows: PhotoSearchExtras.All But if you want to specify the fields, you can use the logical OR operator (|). For example, the following statement will retrieve the owner name and date taken: PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken; Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method. I have passed the PhotoSearchExtras object created earlier: PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo); The PhotosetsGetPhotos method returns a collection of Photo objects. You can navigate through the collection using a foreach statement: foreach (Photo p in photos) { //access each photo's properties } The Photo class has lots of properties that map to the photo's properties on Flickr. The .chm documentation that comes with the CodePlex download is a great asset for understanding the fields. In the above code I used just the following: p.LargeUrl – retrieves the large image URL for the photo; p.ThumbnailUrl – retrieves the thumbnail URL for the photo; p.Title – retrieves the title of the photo; p.DateUploaded – retrieves the date of upload. Visual Studio IntelliSense will show you all the properties, so it is easy to find the ones you are looking for, and most of them are self-explanatory. In the above code, I just pushed the photos to the page. In a real application you can use the retrieved photos along with jQuery libraries to create animated photo galleries, slideshows, etc. Configuration and Troubleshooting If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the photos to your local disk when they are retrieved. You can specify a cache folder, for which the application needs write permission. You can specify the cache folder in code as follows: Flickr.CacheLocation = Server.MapPath("./FlickerCache/"); If the application does not have write permission to the cache folder, it will throw an access denied error. If you cannot give write permission to the cache folder, then you must disable caching. You can do this from code as follows: Flickr.CacheDisabled = true; Disabling the cache will have an impact on performance, so take care!
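    Putting the fragments above together, here is a minimal Gallery.aspx code-behind sketch. The API key and set ID are the placeholders used throughout the article, and rendering via Response.Write is my own simplification of "pushing the photos to the page", not the article's exact markup.

        using System;
        using FlickrNet;

        public partial class Gallery : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Cache to a writable folder, or set Flickr.CacheDisabled = true.
                Flickr.CacheLocation = Server.MapPath("./FlickerCache/");

                Flickr f = new Flickr("<yourAPIKey>");

                // Only fetch the extra fields the page actually uses.
                PhotoSearchExtras extraInfo =
                    PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken;

                PhotosetPhotoCollection photos =
                    f.PhotosetsGetPhotos("72157629872940852", extraInfo);

                foreach (Photo p in photos)
                {
                    // Render each thumbnail as a link to the large image.
                    Response.Write(string.Format(
                        "<a href=\"{0}\"><img src=\"{1}\" alt=\"{2}\" /></a>",
                        p.LargeUrl, p.ThumbnailUrl, p.Title));
                }
            }
        }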
    You can also define the Flickr settings in the web.config file; the documentation is here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet Flickr is a great place for storing and sharing photos, and the API access allows developers to integrate seamlessly with the photos uploaded on Flickr.

    Read the article

  • What to Do When Windows Won’t Boot

    - by Chris Hoffman
    You turn on your computer one day and Windows refuses to boot — what do you do? “Windows won’t boot” is a common symptom with a variety of causes, so you’ll need to perform some troubleshooting. Modern versions of Windows are better at recovering from this sort of thing. Where Windows XP might have stopped in its tracks when faced with this problem, modern versions of Windows will try to automatically run Startup Repair. First Things First Be sure to think about changes you’ve made recently — did you recently install a new hardware driver, connect a new hardware component to your computer, or open your computer’s case and do something? It’s possible the hardware driver is buggy, the new hardware is incompatible, or that you accidentally unplugged something while working inside your computer. The Computer Won’t Power On At All If your computer won’t power on at all, ensure it’s plugged into a power outlet and that the power connector isn’t loose. If it’s a desktop PC, ensure the power switch on the back of its case — on the power supply — is set to the On position. If it still won’t power on at all, it’s possible you disconnected a power cable inside its case. If you haven’t been messing around inside the case, it’s possible the power supply is dead. In this case, you’ll have to get your computer’s hardware fixed or get a new computer. Be sure to check your computer monitor — if your computer seems to power on but your screen stays black, ensure your monitor is powered on and that the cable connecting it to your computer’s case is plugged in securely at both ends. The Computer Powers On And Says No Bootable Device If your computer is powering on but you get a black screen that says something like “no bootable device” or another sort of “disk error” message, your computer can’t seem to boot from the hard drive that Windows was installed on. Enter your computer’s BIOS or UEFI firmware setup screen and check its boot order setting, ensuring that it’s set to boot from its hard drive. If the hard drive doesn’t appear in the list at all, it’s possible your hard drive has failed and can no longer be booted from. In this case, you may want to insert Windows installation or recovery media and run the Startup Repair operation. This will attempt to make Windows bootable again. For example, if something overwrote your Windows drive’s boot sector, this will repair the boot sector. If the recovery environment won’t load or doesn’t see your hard drive, you likely have a hardware problem. Be sure to check your BIOS or UEFI’s boot order first if the recovery environment won’t load. You can also attempt to manually fix Windows boot loader problems using the fixmbr and fixboot commands. Modern versions of Windows should be able to fix this problem for you with the Startup Repair wizard, so you shouldn’t actually have to run these commands yourself. Windows Freezes or Crashes During Boot If Windows seems to start booting but fails partway through, you may be facing either a software or hardware problem. If it’s a software problem, you may be able to fix it by performing a Startup Repair operation. If you can’t do this from the boot menu, insert a Windows installation disc or recovery disk and use the startup repair tool from there. If this doesn’t help at all, you may want to reinstall Windows or perform a Refresh or Reset on Windows 8. 
If the computer encounters errors while attempting to perform startup repair or reinstall Windows, or the reinstall process works properly and you encounter the same errors afterwards, you likely have a hardware problem. Windows Starts and Blue Screens or Freezes If Windows crashes or blue-screens on you every time it boots, you may be facing a hardware or software problem. For example, malware or a buggy driver may be loading at boot and causing the crash, or your computer’s hardware may be malfunctioning. To test this, boot your Windows computer in safe mode. In safe mode, Windows won’t load typical hardware drivers or any software that starts automatically at startup. If the computer is stable in safe mode, try uninstalling any recently installed hardware drivers, performing a system restore, and scanning for malware. If you’re lucky, one of these steps may fix your software problem and allow you to boot Windows normally. If your problem isn’t fixed, try reinstalling Windows or performing a Refresh or Reset on Windows 8. This will reset your computer back to its clean, factory-default state. If you’re still experiencing crashes, your computer likely has a hardware problem. Recover Files When Windows Won’t Boot If you have important files that will be lost and want to back them up before reinstalling Windows, you can use a Windows installer disc or Linux live media to recover the files. These run entirely from a CD, DVD, or USB drive and allow you to copy your files to another external media, such as another USB stick or an external hard drive. If you’re incapable of booting a Windows installer disc or Linux live CD, you may need to go into your BIOS or UEFI and change the boot order setting. If even this doesn’t work — or if you can boot from the devices and your computer freezes or you can’t access your hard drive — you likely have a hardware problem. You can try pulling the computer’s hard drive, inserting it into another computer, and recovering your files that way. Following these steps should fix the vast majority of Windows boot issues — at least the ones that are actually fixable. The dark cloud that always hangs over such issues is the possibility that the hard drive or another component in the computer may be failing. Image Credit: Karl-Ludwig G. Poggemann on Flickr, Tzuhsun Hsu on Flickr     
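    As a concrete example of the manual boot loader repair mentioned above: from the Command Prompt in the Windows 7/8 recovery environment, the bootrec tool provides the modern equivalents of the old fixmbr and fixboot commands. Run these only if Startup Repair has failed; /RebuildBcd is worth trying when the boot configuration data itself is damaged.

        REM Rewrite the master boot record (equivalent of the old fixmbr).
        bootrec /FixMbr

        REM Write a new boot sector to the system partition (old fixboot).
        bootrec /FixBoot

        REM Scan all disks for Windows installations and rebuild the BCD store.
        bootrec /RebuildBcd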

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is the 27th episode of the Notes from the Field series. The biggest challenge for anyone is to understand human nature. We humans have so many things on our minds at any moment. There are cases where what we say is not what we mean, and cases where what we mean we do not say; we say and do things according to our mood and the agenda on our mind. Sometimes our attitude creates confusion in communication, and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers, one that is not technical but relates more to human nature. Read on; this may be the best blog post you read in recent times. In this week's note from the field, I'm taking a slight departure from technical knowledge and concepts explained. We'll be back to it next week, I'm sure. Pinal wanted us to explain some of the issues we bump into, how we see some of our customers arrive at problem situations, and how we have helped get them back on the right track. Often it is a technical problem we are officially solving, but in a lot of cases as consultants we are really helping fix communication difficulties. This is a technical blog post and not an "advice column" in a newspaper, but the longer I am a consultant and the more years I add to my experience in technology, the more I learn that the vast majority of the problems we encounter have "soft skills" included in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive, but I hope that sharing a few pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and looking at fixing them in ourselves. Or perhaps we can begin looking at resolving them in the teams that we manage. I'll share three statements that I've either heard, read or said, and talk about some of the communication or attitude challenges highlighted by each. 1 – "But that's the SAN Administrator's responsibility…" I heard that early in my consulting career when talking with a customer who had serious corruption and no good recent backups – potentially no good backups at all. The statement doesn't have to be this one exactly, but the attitude here is one of "my job stops here, and I don't care about the intent or principle of why I'm here." It's also the attitude that as long as there is someone else to blame, I'm fine… You see, in this case the DBA had a suspicion that the backups were not being handled right. They were the DBA, and they knew they had responsibility for ensuring SQL backups were good to go – it's a basic requirement of a production DBA. In my "As A DBA Where Do I Start?!" presentation, I argue that it is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility, it was decided to just let it be another team's responsibility. This failed the company and the company's customers, and no one won. As technologists, we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities, and we know it, we should push to get it resolved – especially as DBAs, who should act as the advocates of the data contained in the databases we are responsible for. 2 – "We've always done it this way, it's never caused a problem before!" Complacency.
2 – "We've always done it this way, it's never caused a problem before!"

Complacency. I have to say that many failures I've been paid good money to help recover from would not have happened had it not been for an attitude of complacency. If any thoughts like this have entered your mind about your situation, you may be suffering from it. If, while reading this, you get that sinking feeling in your stomach about the one thing you know should be fixed but haven't fixed – why don't you stop, go fix it, and then come back?

"We should have better backups, but we're on a SAN, so we should be fine really."
"Technically speaking that could happen, but what are the chances?"
"We'll just clean that up as a fast follow."

…and so on. In an age of tightening IT budgets and increased expectations of uptime, availability, and performance, there is no room for complacency. Our customers and business units expect – no, demand – the best. Complacency says, "We will give you second best, or hopefully good enough, and we accept the risk and know this may hurt us later."

Sometimes an organization will opt for "good enough," and I agree that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum, without reporting them up and discussing them as an organization, that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance.

3 – "This device must accept interference from other devices but not create any"

I've paraphrased this one – but it's something the Federal Communications Commission, a federal agency in the United States that regulates electronic communication, requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/), so I won't go into much detail other than to say this: if we all operated more on the premise that we should do our best not to be the cause of conflict, and to be less easily offended and less upset when we perceive offense, life would be easier in many areas!

This doesn't always directly cause the issues we are called in to help with. But where we see it is in unhealthy relationships between the various technology teams at a client. We'll see teams hoarding knowledge, not sharing well with others, and almost working against other teams instead of with them. If you trace these problems back far enough, they often stem from someone, or some group of people, violating this principle from the FCC.

To Sum It Up

Technology problems are easy to solve. At Linchpin People we help many customers get past their toughest technological challenges – and at the end of the day it is really just a repeatable process of pattern-based troubleshooting, logical thinking, and starting at the beginning and carefully stepping through to the end. It's easy at the end of the day.

The tough part of what we do as consultants is the people skills. Being able to help teams work together, take responsibility, and improve team-to-team communication – that is the difficult part, and we get to use those soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and keep improving there, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach.
If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

