Search Results

Search found 14099 results on 564 pages for 'group policy preferences'.


  • Optimising a query for Top 5% of users

    - by Nai
    On my website there is a group of 'power users' who are fantastic and add lots of content to the site. However, their prolific activity has caused their profile pages to slow down a lot. For the other 95% of users, the SPROC that returns the data is very quick; it is only for this group of power users that the very same SPROC is slow. How does one go about optimising the query for this group of users? You can assume that the right indexes have already been constructed. EDIT: OK, I think I have been a bit too vague. To rephrase the question: how can I optimise my site to improve performance for these 5% of users? Given that this SPROC is the same one that is in use for every user, and that it is already well optimised, I am guessing the next steps are to explore caching possibilities at the data and application layers?
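    One way to act on that caching idea, assuming an ASP.NET stack, is to cache the built profile data so the expensive SPROC runs at most once per expiry window instead of on every page view. A minimal sketch using System.Runtime.Caching; the ProfileData type and the GetProfileFromDatabase delegate are hypothetical stand-ins for the existing data-access code that wraps the SPROC:

        using System;
        using System.Runtime.Caching;

        public class ProfileData { /* placeholder for whatever the SPROC returns */ }

        public static class ProfileCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // Hypothetical hook for the existing data-access call that runs the SPROC.
            public static Func<int, ProfileData> GetProfileFromDatabase { get; set; }

            public static ProfileData GetProfile(int userId)
            {
                string key = "profile:" + userId;

                // Serve from cache when possible; heavy profiles are rebuilt at most
                // once every five minutes instead of on every page view.
                var cached = (ProfileData)Cache.Get(key);
                if (cached != null)
                    return cached;

                ProfileData fresh = GetProfileFromDatabase(userId);
                Cache.Set(key, fresh, new CacheItemPolicy
                {
                    AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(5)
                });
                return fresh;
            }
        }

    If stale data is unacceptable, invalidate the key whenever the user adds content rather than relying on the expiry alone.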

    Read the article

  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., but when I restart and switch to the Haskell perspective, things start to go wrong. The installation instructions tell me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything. After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclipse and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems persist. I've tried going into Preferences to see if I could sort any of this out manually. Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the Install from Hackage... button beside each of them with no success; I receive an error message saying:

        Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found!

    (replace buildwrapper with scion-browser and the message is the same). The Eclipse console displays the following exception after doing the above with BuildWrapper:

        src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32: Not in scope: data constructor 'MatchGroup'
        cabal.real: Error: some packages failed to install:
        buildwrapper-0.7.4 failed during the building phase. The exception was: ExitFailure 1

    and after doing it for Scion Browser:

        zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1
        pandoc-1.12.3.3 (latest: 1.13)
        http-conduit (new version)
        Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1
        cabal.real: The following packages are likely to be broken by the reinstalls:
        pandoc-1.12.4.2
        unordered-containers-0.2.4.0
        aeson-0.7.0.4
        scientific-0.2.0.2
        case-insensitive-1.1.0.3
        HTTP-4000.2.10
        Use --force-reinstalls if you want to install anyway.

    After receiving similar results on previous attempts, I've tried using --force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. I should point out that my GHC installation appears to be correctly configured under Preferences -> Haskell -> Haskell Implementations. Apologies if any of this information is irrelevant; I'm just not really sure what is important and what isn't at this point. Any help anyone could provide would be greatly appreciated.

    Read the article

  • HttpWebRequest and Ignoring SSL Certificate Errors

    - by Rick Strahl
    Man I can't believe this. I'm still mucking around with OFX servers and it drives me absolutely crazy how some of these servers are just so unbelievably misconfigured. I've recently hit three major brokerages which fail HTTPS validation with bad or corrupt certificates, at least according to the .NET WebRequest class. What's somewhat odd here though is that WinInet seems to find no issue with these servers - it's only .NET's HTTP client that's ultra finicky. So the question then becomes: how do you tell HttpWebRequest to ignore certificate errors? In WinInet there used to be a host of flags to do this, but it's not quite so easy with WebRequest. Basically you need to configure the certificate policy on the ServicePointManager by supplying a custom validation callback. Not exactly trivial. Here's the code to hook it up:

        public bool CreateWebRequestObject(string Url)
        {
            try
            {
                this.WebRequest = (HttpWebRequest)System.Net.WebRequest.Create(Url);

                if (this.IgnoreCertificateErrors)
                    ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

                return true;
            }
            catch
            {
                return false;
            }
        }

    One thing to watch out for is that this is an application-global setting. There's one global ServicePointManager, and once you set this value any subsequent requests will inherit this policy as well, which may or may not be what you want. So it's probably a good idea to set the policy when the app starts and leave it be - otherwise you may run into odd behavior in some situations, especially in multi-threaded situations. Another way to deal with this is in your application .config file:

        <configuration>
          <system.net>
            <settings>
              <servicePointManager
                  checkCertificateName="false"
                  checkCertificateRevocationList="false"
              />
            </settings>
          </system.net>
        </configuration>

    This seems to work most of the time, although I've seen some situations where it doesn't, but where the code implementation works, which is frustrating. The .config settings aren't as inclusive as the programmatic code, which can ignore any and all cert errors - shrug. Anyway, the code approach got me past the stopper issue. It still amazes me that these OFX servers even require this. After all, this is financial data we're talking about here. The last thing I want to do is disable extra checks on the certificates. Well, I guess I shouldn't be surprised - these are the same companies that apparently don't believe in XML enough to generate valid XML (or even valid SGML for that matter)...

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in .NET  CSharp  HTTP
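    Because the callback is global, one way to limit the blast radius is to relax validation only for the specific hosts you know are broken and keep strict checking for everything else. A minimal sketch, assuming the sender passed to the callback is the originating HttpWebRequest (the common case); the host name in the list is hypothetical:

        using System.Collections.Generic;
        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        public static class CertificatePolicyHelper
        {
            // Hypothetical list of OFX hosts with known-bad certificates.
            private static readonly HashSet<string> TrustedBrokenHosts =
                new HashSet<string> { "ofx.example-brokerage.com" };

            public static void Install()
            {
                ServicePointManager.ServerCertificateValidationCallback = ValidateCertificate;
            }

            private static bool ValidateCertificate(object sender,
                X509Certificate certificate, X509Chain chain, SslPolicyErrors errors)
            {
                // No errors: accept as usual.
                if (errors == SslPolicyErrors.None)
                    return true;

                // Only relax validation for the specific hosts we know about;
                // everything else keeps strict certificate checking.
                var request = sender as HttpWebRequest;
                return request != null && TrustedBrokenHosts.Contains(request.RequestUri.Host);
            }
        }

    Calling CertificatePolicyHelper.Install() once at application startup keeps the behavior predictable across threads, per the caveat above.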

    Read the article

  • vmware network installation problem

    - by shantanu
    After installing from the VMware bundle, it shows a network device error during configuration (first run). Log file (excerpt):

        2012-04-03T20:01:24.881+06:00| vthread-3| I120: Log for VMware Workstation pid=5766 version=8.0.2 build=build-591240 option=Release
        2012-04-03T20:01:24.881+06:00| vthread-3| I120: The process is 64-bit.
        2012-04-03T20:01:24.881+06:00| vthread-3| I120: Host codepage=UTF-8 encoding=UTF-8
        2012-04-03T20:01:24.881+06:00| vthread-3| I120: Host is Linux 3.2.0-19-generic Ubuntu precise (development branch)
        2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/usr/lib/vmware/settings": No such file or directory.
        2012-04-03T20:01:24.880+06:00| vthread-3| I120: PREF Optional preferences file not found at /usr/lib/vmware/settings. Using default values.
        2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/root/.vmware/config": No such file or directory.
        2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/root/.vmware/preferences": No such file or directory.
        2012-04-03T20:01:24.881+06:00| vthread-3| I120: PREF Failed to load user preferences.
        2012-04-03T20:01:24.881+06:00| vthread-3| W110: Logging to /tmp/vmware-root/modconfig-5766.log
        2012-04-03T20:01:25.269+06:00| vthread-3| I120: Your GCC version: 4.6
        2012-04-03T20:01:25.278+06:00| vthread-3| I120: Failed version test: /lib/modules/preferred/build/include/linux/version.h not found.
        2012-04-03T20:01:25.355+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid.
        2012-04-03T20:01:28.542+06:00| vthread-3| I120: Building module vmmon.
        2012-04-03T20:01:36.499+06:00| vthread-3| I120: Installing module vmmon from /tmp/vmware-root/modules/vmmon.o to /lib/modules/3.2.0-19-generic/misc.
        2012-04-03T20:01:58.431+06:00| vthread-3| I120: Building module vmnet.
        2012-04-03T20:02:05.973+06:00| vthread-3| I120: Failed to compile module vmnet!
        2012-04-03T20:02:06.067+06:00| vthread-3| I120: Building module vmblock.
        2012-04-03T20:02:13.531+06:00| vthread-3| I120: Installing module vmblock from /tmp/vmware-root/modules/vmblock.o to /lib/modules/3.2.0-19-generic/misc.
        2012-04-03T20:02:19.173+06:00| vthread-3| I120: Building module vmci.
        2012-04-03T20:02:28.525+06:00| vthread-3| I120: Installing module vmci from /tmp/vmware-root/modules/vmci.o to /lib/modules/3.2.0-19-generic/misc.
        2012-04-03T20:02:33.684+06:00| vthread-3| I120: Building module vsock.
        2012-04-03T20:02:41.050+06:00| vthread-3| I120: Installing module vsock from /tmp/vmware-root/modules/vsock.o to /lib/modules/3.2.0-19-generic/misc.
        2012-04-03T20:03:09.142+06:00| vthread-3| I120: Building module vmnet.
        2012-04-03T20:03:12.072+06:00| vthread-3| I120: Failed to compile module vmnet!

    The same build sequence repeats later in the log (starting at 20:03); vmnet fails to compile both times, while vmmon, vmblock, vmci and vsock build and install cleanly.

    My system details: CPU: AMD APU dual core E450; RAM: 2 GB; Ubuntu 12.04 (64-bit). I have downloaded the latest VMware version. Thanks in advance.

    Read the article

  • Failed to create symbolic link to keytool

    - by mt0s
    Keytool is /usr/bin/keytool, which points to /etc/alternatives/keytool, which in turn points to /usr/lib/jvm/java-6-openjdk-i386/jre/bin/keytool. Now I have installed Java version 1.7.0_45, so I need to point keytool at the new path: /usr/lib/jvm/jdk1.7.0_45/jre/bin/keytool. I tried deleting /usr/bin/keytool with rm -rf and then adding a new link like:

        sudo ln -s /usr/bin/keytool /usr/lib/jvm/jdk1.7.0_45/jre/bin/keytool

    but what I get is:

        ln: failed to create symbolic link `/usr/lib/jvm/jdk1.7.0_45/jre/bin/keytool': File exists

    I also tried:

        sudo update-alternatives --config keytool
        There is only one alternative in link group keytool: /usr/lib/jvm/java-6-openjdk-i386/jre/bin/keytool
        Nothing to configure.
        update-alternatives: warning: forcing reinstallation of alternative /usr/lib/jvm/java-6-openjdk-i386/jre/bin/keytool because link group keytool is broken.

    but that doesn't work either. Any suggestions? Thank you

    Read the article

  • How to Use and Manage Extensions to Safari 5

    - by Mysticgeek
    While there have been hacks to include extensions in Safari for some time now, Safari 5 offers proper support for them. Today we take a look at managing extensions in the latest version of Safari.

    Installation and Setup

    Download and install Safari 5 (link below). Make sure to download the installer that doesn't include QuickTime if you don't want it. Also, uncheck getting Apple updates and news in your email. Then decide whether you want to install Bonjour for Windows and have Safari update automatically or not. Once it's installed, launch Safari and select Show Menu Bar from the Settings Menu. Then go into Preferences \ Advanced and check the box Show Develop menu in the menu bar. Develop will now appear on the Menu Bar... click on it and select Enable Extensions.

    Using Extensions

    Now you can find and start using extensions (link below) that work with Safari 5. In this example we're installing PageSaver, which takes an image of what is showing in your browser. Click on the link for the extension you want to install... Then you'll get a confirmation asking if you want to open or save it. Opening it will install it right away. Click Install in the dialog that asks if you're sure you want to. Here we see the extension was successfully installed, and you can see the camera icon on the Toolbar. When you're on a portion of a webpage you want to take an image of, click on the camera icon and the image is saved to your Downloads folder. Then you can open it up in a browser or image editor. Go into Preferences \ Extensions and from here you can turn extensions on or off, uninstall them, or check for updates. If you're a Safari user, or thinking about trying it, you'll enjoy proper support for extensions in version 5. At the time of this writing we couldn't find any extensions on the Apple site, but you might want to keep your eye on it to see if they do start listing them.

    Download Safari 5 for Mac & PC
    Safari Extensions

    Read the article

  • ASP.NET Cookies

    - by Aamir Hasan
    Cookies are domain specific and cannot be used across different network domains. The only domain that can read a cookie is the domain that sets it. Cookies are used to store small pieces of information on a client machine. A cookie can store only up to 4 KB of information. Generally cookies are used to store data which the user types frequently, such as the user ID and password used to log in to a site. The HttpCookie class defined in the System.Web namespace represents a browser cookie.

    Creating cookies (VB.NET):

        Dim cookie As HttpCookie = New HttpCookie("UID")
        cookie.Value = "id"
        cookie.Expires = #3/30/2010#
        Response.Cookies.Add(cookie)
        cookie = New HttpCookie("username")
        cookie.Value = "username"
        cookie.Expires = #3/31/2010#
        Response.Cookies.Add(cookie)

    Creating cookies (C#):

        HttpCookie cookie = Request.Cookies["Preferences"];
        if (cookie == null)
        {
            cookie = new HttpCookie("Preferences");
        }
        cookie["Name"] = txtName.Text;
        cookie.Expires = DateTime.Now.AddYears(1);
        Response.Cookies.Add(cookie);

    Creating cookies (C#):

        HttpCookie MyCookie = new HttpCookie("Background");
        MyCookie.Value = "value";
        Response.Cookies.Add(MyCookie);

    Reading cookies (VB.NET):

        Dim cookieCols As New HttpCookieCollection
        cookieCols = Request.Cookies
        Dim str As String
        ' Read and add all cookies to the list box
        For Each str In cookieCols
            ListBox1.Items.Add("Value:" & Request.Cookies(str).Value)
        Next

    Reading cookies (C#):

        ArrayList colCookies = new ArrayList();
        for (int i = 0; i < Request.Cookies.Count; i++)
            colCookies.Add(Request.Cookies[i]);
        grdCookies.DataSource = colCookies;
        grdCookies.DataBind();

    Deleting cookies (VB.NET):

        Dim cookieCols As New HttpCookieCollection
        cookieCols = Request.Cookies
        Request.Cookies.Remove("PASS")
        Request.Cookies.Remove("UID")

    Deleting cookies (C#):

        string[] cookies = Request.Cookies.AllKeys;
        foreach (string cookie in cookies)
        {
            ListBox1.Items.Add("Deleting " + cookie);
            Response.Cookies[cookie].Expires = DateTime.Now.AddDays(-1);
        }
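    Note that the VB.NET deletion sample only removes the cookies from the server-side Request.Cookies collection for the current request; the browser never learns about it. To delete a cookie on the client you have to send back a replacement that has already expired, which is what the C# sample does. A small helper capturing that pattern, as a sketch that assumes it runs inside an ASP.NET page or handler where HttpContext.Current is available:

        using System;
        using System.Web;

        public static class CookieHelper
        {
            // Sends an expired cookie back to the browser, which removes it client-side.
            public static void Delete(string name)
            {
                var ctx = HttpContext.Current;
                if (ctx == null || ctx.Request.Cookies[name] == null)
                    return;

                var expired = new HttpCookie(name) { Expires = DateTime.Now.AddDays(-1) };
                ctx.Response.Cookies.Add(expired);
            }
        }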

    Read the article

  • Eclipse Juno - Ubuntu 12 > can't install RadRails, throws error for a gem I have already installed

    - by Bogdan M
    The thing is, I installed Ubuntu 12, Java (for Eclipse), Eclipse, Ruby, RubyGems, and Rails. Everything went smoothly. When I tried to prepare Eclipse for Ruby on Rails, I installed the Ruby dev kit plugin. This worked, but RadRails failed with this error:

        Cannot complete the install because one or more required items could not be found.
        Software currently installed: org.radrails.rails-feature 0.7.2 (org.radrails.rails_feature.feature.group 0.7.2)
        Missing requirement: Rails Core Plug-in 0.7.2 (org.radrails.rails.core 0.7.2) requires 'bundle org.eclipse.update.core 0.0.0' but it could not be found
        Cannot satisfy dependency:
        From: org.radrails.rails-feature 0.7.2 (org.radrails.rails_feature.feature.group 0.7.2)
        To: org.radrails.rails.core [0.7.2]

    Read the article

  • Please recommend citations for source code documentation standards

    - by Aerik
    I'm trying to convince another group in my company that they need to provide more documentation in their source code (they want to hand off the code to my group) but they're treating it as a "nice to have". In my view, it's a necessity. I've run a source code analysis tool and it's showing about 10% comment lines - but looking at the source code, most of that is coming from entire functions that the author has commented out. Can anyone provide some authoritative citations / references for documentation / comment standards for source code? (In case it matters, we're a C# house, with a little Matlab thrown in).

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual.

    ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file.

    There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits.

    We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where:

    - Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on sparc runs just as fast as 64-bit code otherwise.)
    - The code is complex and finely tuned, and the original authors may no longer be available.
    - The code is critical production code, that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

    Design

    The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them.

    - The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files).
    - A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.
    - Allocable sections go in the primary object, and non-allocable ones go into the ancillary object.
    - A small set of non-allocable sections, notably the symbol table, are copied into both objects.

    As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
    This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

    - There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
    - The section contents are identical in either case: there is no need to alter data to accommodate multiple files.
    - It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
    - The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

    Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

    By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

    - To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
    - To support fine grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
    - To stay within the size limit of a 32-bit object, which is 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
    - To limit the exposure of internal implementation details.

    Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option:

        $ ld ... -z ancillary[=outfile] ...
    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

    - All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
    - All remaining non-allocable sections are written to the ancillary object.
    - The following non-allocable sections are written to both the primary object and the ancillary object:

        .shstrtab         The section name string table.
        .symtab           The full non-dynamic symbol table.
        .symtab_shndx     The symbol table extended index section associated with .symtab.
        .strtab           The non-dynamic string table associated with .symtab.
        .SUNW_ancillary   Contains the information required to identify the primary and
                          ancillary objects, and to identify the object being examined.

    The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or any ancillary object.

    The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

        $ cat hello.c
        #include <stdio.h>

        int
        main(int argc, char **argv)
        {
            (void) printf("hello, world\n");
            return (0);
        }
        $ cc -g -zancillary hello.c
        $ file a.out a.out.anc
        a.out:     ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
        a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
        $ ./a.out
        hello, world

    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
        Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
        13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
        20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
        21     .symtab          SYMTAB          0                 0                             0x450         0x450
        22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
        24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
        28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
        29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30

    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

    It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbol table contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

        $ elfdump -T SUNW_ancillary a.out a.out.anc

        a.out:
        Ancillary Section:  .SUNW_ancillary
            index  tag                  value
              [0]  ANC_SUNW_CHECKSUM    0x8724
              [1]  ANC_SUNW_MEMBER      0x1       a.out
              [2]  ANC_SUNW_CHECKSUM    0x8724
              [3]  ANC_SUNW_MEMBER      0x1a3     a.out.anc
              [4]  ANC_SUNW_CHECKSUM    0xfbe2
              [5]  ANC_SUNW_NULL        0

        a.out.anc:
        Ancillary Section:  .SUNW_ancillary
            index  tag                  value
              [0]  ANC_SUNW_CHECKSUM    0xfbe2
              [1]  ANC_SUNW_MEMBER      0x1       a.out
              [2]  ANC_SUNW_CHECKSUM    0x8724
              [3]  ANC_SUNW_MEMBER      0x1a3     a.out.anc
              [4]  ANC_SUNW_CHECKSUM    0xfbe2
              [5]  ANC_SUNW_NULL        0

    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows.

    Debugger Access and Use of Ancillary Objects

    Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and by the single symbol table. The following steps can be used by a debugger to assemble the information contained in these files; a sketch of the merge follows the list.

    - Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined.
    - Create a section header array in memory, using the section header array from the object being examined as an initial template.
    - Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
    - The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program.
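    A compact illustration of that merge, written as a C# sketch over a simplified in-memory model (the SectionHeader type and the per-file header arrays are stand-ins; real code would parse each file's ELF section header table, and the SHF_SUNW_ABSENT value is the one from the ELF definition below):

        using System.Collections.Generic;
        using System.Linq;

        // Simplified stand-in for an ELF section header.
        public class SectionHeader
        {
            public string Name;
            public ulong Flags;
            public ulong Size;
            public byte[] Data;   // section contents; null when absent from this file
        }

        public static class AncillaryMerge
        {
            private const ulong SHF_SUNW_ABSENT = 0x00200000;

            // Each element of perFileHeaders is one file's section header array.
            // Per the design, every array has the same length and section order.
            public static SectionHeader[] Merge(IList<SectionHeader[]> perFileHeaders)
            {
                int count = perFileHeaders[0].Length;
                var merged = new SectionHeader[count];

                for (int i = 0; i < count; i++)
                {
                    // Take the header from whichever file actually holds the data,
                    // i.e. the one without SHF_SUNW_ABSENT; fall back to the template.
                    merged[i] = perFileHeaders
                        .Select(headers => headers[i])
                        .FirstOrDefault(h => (h.Flags & SHF_SUNW_ABSENT) == 0)
                        ?? perFileHeaders[0][i];
                }
                return merged;
            }
        }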
    Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

    ELF Implementation Details (From the Solaris Linker and Libraries Guide)

    To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

    Part IV ELF Application Binary Interface
    Chapter 13: Object File Format

    Object File Format

    Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

    e_type Identifies the object file type, as listed in the following table.

        Name                Value    Meaning
        ET_NONE             0        No file type
        ET_REL              1        Relocatable file
        ET_EXEC             2        Executable file
        ET_DYN              3        Shared object file
        ET_CORE             4        Core file
        ET_LOSUNW           0xfefe   Start operating system specific range
        ET_SUNW_ANCILLARY   0xfefe   Ancillary object file
        ET_HISUNW           0xfefd   End operating system specific range
        ET_LOPROC           0xff00   Start processor-specific range
        ET_HIPROC           0xffff   End processor-specific range

    Sections

    Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

    ...

    sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

    sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.

    ...

    Table 13-5 ELF Section Types, sh_type

        Name                 Value
        . . .
        SHT_LOSUNW           0x6fffffee
        SHT_SUNW_ancillary   0x6fffffee
        . . .

    ...

    SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics.

    SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

    ...

    Table 13-8 ELF Section Attribute Flags

        Name                 Value
        . . .
        SHF_MASKOS           0x0ff00000
        SHF_SUNW_NODISCARD   0x00100000
        SHF_SUNW_ABSENT      0x00200000
        SHF_SUNW_PRIMARY     0x00400000
        SHF_MASKPROC         0xf0000000
        . . .

    ...

    SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

    SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.

    ...

    Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

    Table 13-9 ELF sh_link and sh_info Interpretation

        sh_type              sh_link                                  sh_info
        . . .
        SHT_SUNW_ANCILLARY   The section header index of the          0
                             associated string table.
        . . .

    Special Sections

    Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

    Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

    Table 13-10 ELF Special Sections

        Name              Type                 Attribute
        . . .
        .SUNW_ancillary   SHT_SUNW_ancillary   None
        . . .

    ...

    .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

    ...

    Ancillary Section

    Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

    In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val These objects represent integer values with various interpretations.

a_ptr These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

    Name                 Value   a_un
    ANC_SUNW_NULL        0       Ignored
    ANC_SUNW_CHECKSUM    1       a_val
    ANC_SUNW_MEMBER      2       a_ptr

ANC_SUNW_NULL Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                  Meaning
    ANC_SUNW_CHECKSUM    Checksum of this object
    ANC_SUNW_MEMBER      Name of object #1
    ANC_SUNW_CHECKSUM    Checksum for object #1
    . . .
    ANC_SUNW_MEMBER      Name of object N
    ANC_SUNW_CHECKSUM    Checksum for object N
    ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
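To make the ancillary-section format described above concrete, here is a minimal sketch (Python, standard library only) of how a tool might decode a raw 64-bit .SUNW_ancillary section. It assumes the section bytes and the bytes of the associated string table (located via sh_link) have already been extracted with a generic ELF reader; the function name and its helpers are hypothetical, not part of any Solaris API.

import struct

# Tag values from Table 13-NEW1 above
ANC_SUNW_NULL, ANC_SUNW_CHECKSUM, ANC_SUNW_MEMBER = 0, 1, 2

def read_ancillary64(section_bytes, strtab_bytes, little_endian=True):
    """Decode a 64-bit .SUNW_ancillary section.

    Returns (self_checksum, members) where members is a list of
    (file_name, checksum) pairs, one per MEMBER/CHECKSUM pair.
    """
    fmt = ('<' if little_endian else '>') + 'QQ'   # Elf64_Xword a_tag, a_un
    self_checksum, members, pending_name = None, [], None
    for a_tag, a_un in struct.iter_unpack(fmt, section_bytes):
        if a_tag == ANC_SUNW_NULL:                 # marks the end of the array
            break
        elif a_tag == ANC_SUNW_MEMBER:             # a_un is a string table offset
            end = strtab_bytes.index(b'\0', a_un)
            pending_name = strtab_bytes[a_un:end].decode('ascii')
        elif a_tag == ANC_SUNW_CHECKSUM:
            if pending_name is None:               # precedes first member:
                self_checksum = a_un               # checksum of this object
            else:                                  # follows a member: its checksum
                members.append((pending_name, a_un))
                pending_name = None
    return self_checksum, members

# An object can then identify itself exactly as described above, by
# matching the initial checksum against the per-member checksums:
#   self_checksum, members = read_ancillary64(sec, strtab)
#   me = next(name for name, cs in members if cs == self_checksum)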

    Read the article

  • How to configure a zone cluster on Solaris Cluster 4.0

    - by JuergenS
    This is a short overview on how to configure a zone cluster on Solaris Cluster 4.0. This is a little bit different than in Solaris Cluster 3.2/3.3, because Solaris Cluster 4.0 only runs on Solaris 11. The name of the zone cluster must be unique throughout the global Solaris Cluster and must be configured on a global Solaris Cluster. Please read all the requirements for zone clusters in the Solaris Cluster Software Installation Guide for SC4.0. For Solaris Cluster 3.2/3.3, please refer to my previous blog, Configuration steps to create a zone cluster in Solaris Cluster 3.2/3.3.

    A. Configure the zone cluster into the already running global cluster

    Check if the zone cluster can be created:
    # cluster show-netprops
    To change the number of zone clusters, use:
    # cluster set-netprops -p num_zoneclusters=12
    Note: 12 zone clusters is the default; the value can be customized!

    Create a config file (zc1config) for the zone cluster setup.

    Configure the zone cluster:
    # clzc configure -f zc1config zc1
    Note: If not using the config file, the configuration can also be done manually:
    # clzc configure zc1

    Check the zone configuration:
    # clzc export zc1

    Verify the zone cluster:
    # clzc verify zc1
    Note: The following message is a notice and comes up on several clzc commands:
    Waiting for zone verify commands to complete on all the nodes of the zone cluster "zc1"...

    Install the zone cluster:
    # clzc install zc1
    Note: Monitor the consoles of the global zones to see how the install proceeds! (The output is different on the nodes.) It's very important that all global cluster nodes have the same set of ha-cluster packages installed!

    Boot the zone cluster:
    # clzc boot zc1

    Log in to the non-global zones of zone cluster zc1 on all nodes and finish the Solaris installation:
    # zlogin -C zc1

    Check the status of the zone cluster:
    # clzc status zc1

    Log in to the non-global zones of zone cluster zc1 and configure the shell environment for root (for PATH: /usr/cluster/bin, for MANPATH: /usr/cluster/man):
    # zlogin -C zc1

    If using an additional name service, configure /etc/nsswitch.conf of the zone cluster non-global zones:
    hosts: cluster files
    netmasks: cluster files

    Configure /etc/inet/hosts of the zone cluster zones: enter all the logical hosts of the non-global zones.

    B. Add resource groups and resources to the zone cluster

    Create a resource group in the zone cluster:
    # clrg create -n <zone-hostname-node1>,<zone-hostname-node2> app-rg
    Note 1: Use the command # cluster status for a zone cluster resource group overview.
    Note 2: You can also run all commands for the zone cluster in the global cluster by adding the option -Z to the command, e.g.:
    # clrg create -Z zc1 -n <zone-hostname-node1>,<zone-hostname-node2> app-rg

    Set up the logical host resource for the zone cluster. In the global zone do:
    # clzc configure zc1
    clzc:zc1 add net
    clzc:zc1:net set address=<zone-logicalhost-ip>
    clzc:zc1:net end
    clzc:zc1 commit
    clzc:zc1 exit
    Note: Check that the logical host is in the /etc/hosts file.
    In the zone cluster do:
    # clrslh create -g app-rg -h <zone-logicalhost> <zone-logicalhost>-rs

    Set up the storage resource for the zone cluster. Register HAStoragePlus:
    # clrt register SUNW.HAStoragePlus

    Example 1) ZFS storage pool
    In the global zone, configure the zpool, e.g.:
    # zpool create <zdata> mirror cXtXdX cXtXdX
    and
    # clzc configure zc1
    clzc:zc1 add dataset
    clzc:zc1:dataset set name=zdata
    clzc:zc1:dataset end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1
    In the zone cluster do:
    # clrs create -g app-rg -t SUNW.HAStoragePlus -p zpools=zdata app-hasp-rs

    Example 2) HA filesystem
    In the global zone, configure the SVM diskset and SVM devices, and
    # clzc configure zc1
    clzc:zc1 add fs
    clzc:zc1:fs set dir=/data
    clzc:zc1:fs set special=/dev/md/datads/dsk/d0
    clzc:zc1:fs set raw=/dev/md/datads/rdsk/d0
    clzc:zc1:fs set type=ufs
    clzc:zc1:fs add options [logging]
    clzc:zc1:fs end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1
    In the zone cluster do:
    # clrs create -g app-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/data app-hasp-rs

    Example 3) Global filesystem as loopback file system
    In the global zone, configure the global filesystem and add it to /etc/vfstab on all global nodes, e.g.:
    /dev/md/datads/dsk/d0 /dev/md/datads/dsk/d0 /global/fs ufs 2 yes global,logging
    and
    # clzc configure zc1
    clzc:zc1 add fs
    clzc:zc1:fs set dir=/zone/fs (zc-lofs-mountpoint)
    clzc:zc1:fs set special=/global/fs (globalcluster-mountpoint)
    clzc:zc1:fs set type=lofs
    clzc:zc1:fs end
    clzc:zc1 verify
    clzc:zc1 commit
    clzc:zc1 exit
    Check the setup with # clzc show -v zc1
    In the zone cluster do (create the scalable rg if not already done):
    # clrg create -p desired_primaries=2 -p maximum_primaries=2 app-scal-rg
    # clrs create -g app-scal-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/zone/fs hasp-rs

    More details on adding storage are available in the Installation Guide for zone clusters.

    Switch the resource groups and resources online in the zone cluster:
    # clrg online -eM app-rg
    # clrg online -eM app-scal-rg

    Test: switch the resource groups in the zone cluster:
    # clrg switch -n zonehost2 app-rg
    # clrg switch -n zonehost2 app-scal-rg

    Add supported dataservices to the zone cluster. Documentation for SC4.0 is available here.

    Appendix: To delete a zone cluster do:
    # clrg delete -Z zc1 -F +
    Note: Zone cluster uninstall can only be done if all resource groups are removed in the zone cluster. The command 'clrg delete -F +' can be used in the zone cluster to delete the resource groups recursively.
    # clzc halt zc1
    # clzc uninstall zc1
    Note: If the clzc command does not successfully uninstall the zone, then run 'zoneadm -z zc1 uninstall -F' on the nodes where zc1 is configured.
    # clzc delete zc1

    Read the article

  • Oracle and ATG: The Next Generation of Customer Experience

    - by divya.malik
    Oracle today announced that it has completed the acquisition of Art Technology Group (ATG), Inc. In a webcast this morning, Thomas Kurian, Executive Vice President, Oracle; Anthony Lye, Senior Vice President, CRM at Oracle; and Ken Volpe, Senior Vice President of Products and Technology from ATG, presented the rationale, strategy and future direction for this acquisition. ATG is a leading E-Commerce service provider and Oracle is a leading CRM and Retail Applications provider, which makes it a winning team. There has been a lot of positive feedback from analysts and the press as well as customers. “As a customer of both Oracle and ATG, we view the integration of the two companies as a natural fit,” said Kevin Cunnington, Global Head of Online, Vodafone Group. “We look forward to new efficiencies that address our online and cross-channel business strategies and help us further provide superior customer experiences.” For more information about Oracle and ATG: Overview and FAQs Webcast Press Release Technorati Tags: oracle,oracle siebel crm,atg,crm

    Read the article

  • Why can't I change the permissions of files I have access to?

    - by Erik
    I'm logged into a server as user "ubuntu" and I've got files that look like this: -rw-rw-r-- 1 www-data www-data 33150 2012-06-04 22:17 file-a.png -rw-rw-r-- 1 www-data www-data 36371 2012-06-04 22:15 file-b.png -rw-rw-r-- 1 www-data www-data 41439 2012-06-04 22:16 file-c.png The ubuntu user is a member of the group www-data: > groups ubuntu ubuntu : ubuntu www-data So shouldn't I be able to change the permissions, since I have access to the files? I'm not an expert on the user/group stuff ... so this is just perplexing me. I'm trying to run: > chmod o-r * I realize I can do it with sudo, easily, but I'm trying to understand why I can't modify the files without sudo. Thanks for any help!
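    For the curious, the rule at work here: group write permission governs a file's contents, while chmod(2) is reserved for the file's owner (or root), so group membership alone is not enough to change the mode bits. A minimal Python sketch of how the distinction surfaces, assuming a file owned by www-data and a shell running as ubuntu (the file name is taken from the listing above; this is an illustration, not a fix):

    import os

    # Group write bit (rw-rw-r--): appending to the contents succeeds.
    with open('file-a.png', 'ab'):
        pass

    # Changing the mode requires ownership, so this raises EPERM
    # even though we can read and write the file's data.
    try:
        os.chmod('file-a.png', 0o660)
    except PermissionError as e:
        print('chmod denied:', e)   # Operation not permitted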

    Read the article

  • Female Armor is Horribly Designed [Humorous Video]

    - by Asian Angel
    Our intrepid group of adventurers show up at the blacksmith shop to pick up their new armor, but not all is well. The two gentlemen in the group “provided” the design for their companion’s armor and she is less than pleased with the result. Does she get the armor and revenge she wants? Watch to find out! Note 1: Video contains some language that may be considered inappropriate. Note 2: Make sure to catch the last few seconds of the video for the best part of all! Female Armor Sucks [via Dorkly Bits]

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    - by pinaldave
    For the last several days I have been working on various Denali analytical functions, and it is indeed really fun to refresh the concepts which I studied in school. Earlier I wrote an article where I explained how we can use PERCENTILE_CONT() to find the median, over here: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Today I am going to ask a question based on the same blog post. Again, just like last time, the intention of this puzzle is the following: Learn the new concepts of SQL Server 2012, even if you are on an earlier version of SQL Server. On another note, SQL Server 2012 RC0 has been announced and is available to download: SQL SERVER – 2012 RC0 Various Resources and Downloads. Now let's have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID)
    OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result: The reason we get the median is because we are passing the value 0.5 to the PERCENTILE_CONT() function. Now read the puzzle.

    Puzzle: Run the following T-SQL code:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY ProductID)
    OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    Observe the result and you will notice that MedianCont has a different value than before; the reason is that the PERCENTILE_CONT function has the value 0.9 passed to it. For the first four rows the value is 775.1. Now run the following T-SQL code:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
    PERCENTILE_CONT(0.1) WITHIN GROUP (ORDER BY ProductID)
    OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    Observe the result and you will notice that MedianCont again has a different value; the reason is that the PERCENTILE_CONT function has the value 0.1 passed to it. For the first four rows the value is 709.3. Now, in my example I have explained how the median is found using this function. You have to explain, using mathematics (in easy words), why the values in the last column are 709.3 and 775.1. Hint: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Rules: Leave a comment with your detailed answer by Nov 25. Open world-wide (where Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well. Prizes: A print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books: SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle]. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
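    As a nudge toward the mathematics (without giving the AdventureWorks answer away), here is a hedged Python sketch of the linear-interpolation rule that PERCENTILE_CONT uses within each partition: compute the fractional row number RN = 1 + p*(N-1) over the ordered values, then interpolate between the values at floor(RN) and ceil(RN). The ProductID list below is made up for illustration and is not the puzzle's data.

    import math

    def percentile_cont(values, p):
        """PERCENTILE_CONT(p) WITHIN GROUP (ORDER BY v): linear interpolation."""
        v = sorted(values)
        rn = 1 + p * (len(v) - 1)          # 1-based fractional row number
        lo, hi = math.floor(rn), math.ceil(rn)
        frac = rn - lo
        return v[lo - 1] + frac * (v[hi - 1] - v[lo - 1])

    product_ids = [707, 708, 711, 712, 715]   # hypothetical IDs for one SalesOrderID
    for p in (0.1, 0.5, 0.9):
        print(p, percentile_cont(product_ids, p))
    # 0.5 lands exactly on the median row; 0.1 and 0.9 fall between the
    # outermost pairs of rows, which is why the results are fractional.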

    Read the article

  • Tampa Bay WinDev 1st Meeting Announced!

    - by Nikita Polyakov
    This is very exciting for anyone close to Tampa Bay, FL. Tampa Bay WinDev Meeting #1: Intro to Metro UI. Our first meeting as Tampa Bay WinDev (formerly known as Tampa Bay Silverlight) will show off the new Metro UI (Windows vNext) and will once again have a local development speaker (and celebrity), John Papa. John will take us through the beginnings of Metro development. Additionally, this is our kickoff meeting for the new group, so we will also have some information on what this group is all about. We will also have food and drinks at the meeting (although we are still looking for a sponsor). Please RSVP to this always-FREE event here: http://tbwindev11.eventbrite.com Wednesday, November 30, 2011 from 6:00 PM to 8:00 PM (ET) Microsoft Office (in Tampa) 5426 Bay Center Dr., Suite 700 Tampa, FL 33609

    Read the article

  • Create user in Oracle 11g with the same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to everything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Asignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE 
FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Clase Java Clases de Trabajos Cola Columna de Tabla Columna de Vista Espacio de Trabajo Función Instantánea Origen Java Paquete Planificaciones Procedimiento Programas Secuencia Sinónimo Tabla Tipos Trabajos Vista Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I’m feeling inspired and I’d like to share a technique we’ve implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and creating a manual mapping for each would be really tedious. Each series URL has the format http://www.littelfuse.com/series/series-name.html, for instance, http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Second, we need to implement a handler that will take care of the actual series lookup. It will be amazing after we’ve gone over the details. Let’s start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about are the Pattern and the Action groups. The Pattern is nothing but regex. Basically, I’m telling it to match the regex I have defined. In the Action group, I am telling it what to do, in this case, rewrite to the redirect.aspx webform. In this implementation, I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group’s regex will actually translate to {R:1} in my Action group. This is where the magic begins. Now let’s see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .Net code. The only thing that you will probably ask about is the LFSearch class. It’s our own implementation for finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and a value of true or false depending on whether we want an exact search or not. If eureka, then redirect to that item’s Path (Url). If not, tell the user tough luck; here’s the 404 page as a consolation. Amazing, ain’t it?
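    The capture-group plumbing is easy to verify outside IIS. Here is a minimal Python sketch of the same idea; the pattern below is an approximation of the rule described above (not a copy of the actual rule), and it shows how the captured group becomes the {R:1} value that Redirect.aspx reads as the query string variable s.

    import re

    # Approximation of the rewrite rule's pattern; the group (\w+)
    # is what IIS exposes as {R:1}.
    pattern = re.compile(r'^series/(\w+)\.html$')

    m = pattern.match('series/flnr.html')
    if m:
        series_name = m.group(1)               # -> 'flnr', i.e. {R:1}
        print('redirect.aspx?s=' + series_name)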

    Read the article

  • Save Links for Later Reading in Firefox

    - by Asian Angel
    Do you want a simple way to save and manage links for reading later? The Save-To-Read extension for Firefox makes it easy to do without an account. Using Save-To-Read As soon as you install the extension you will notice two new additions to your UI. You will see a small plus sign in the address bar and a new toolbar button (opens and closes the sidebar shown here). Your bookmarks menu will also have a new folder entry. For our example we chose to save three pages for later reading. Each time you want to save a website click on the small plus sign, and it is automatically added to your read later list. Our second article… And finally the third article. Notice that the small plus sign has become a minus sign after adding the article to our list. Opening the sidebar shows our three entries waiting to be read. Checking the bookmarks menu shows the same articles available there. When you are ready to read your articles simply click on the link in the sidebar, bookmarks menu, etc. Notice that the entry is still available at the moment…there are no automatic deletions until you are finished with an article. This is great if you accidentally click the wrong link before you are ready for it. Removing an article from the list is as simple as clicking on the address bar minus sign. It will revert to a plus sign and the entry is no longer visible in your list. For those who want to avoid using a sidebar there is a different toolbar button available too. The alternate toolbar button provides access to a drop-down article list. Choose the access style that best suits your needs. Preferences The preferences are simple to work with and focus on appearance/ease-of-use. Conclusion If you have been looking for a simpler alternative to other “read later” extensions, then Save-To-Read could be just what you have been waiting for. For another cool option for reading posts later, even on eReaders, check out our article on saving articles to read later with Instapaper. Links Download the Save-To-Read extension (Mozilla Add-ons)

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts.  To start this out, I want to talk about how I use VCS in a team environment.  These come in a series of tips or best practices that I try to follow.  Note: This list is subject to change in the future. Always use some form of version control for all aspects of software development. Development is an evolution.  Looking back at where we were is an invaluable asset in that process.  This includes data schemas and documentation. Reverting / reapplying changes is absolutely critical for efficient development. The tools I use: Code: Hg (preferred), SVN Database: TSqlMigrations Documents: Sometimes in code repository, also SharePoint with versioning Always tag a commit (changeset) with comments This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous. One or two sentences about the task, not the actual changes. Use precommit hooks or set up the central repository to reject changes without comments. Link changesets to documentation If your project management system integrates with version control, or has a way to externally reference stories, tasks, etc., then leave a reference in the commit.  This helps locate more information about the commit and/or related changesets. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget. Ability to work offline is required, including commits and history Yes, this requires a DVCS locally, but it doesn’t require the central repository to be a DVCS.  I prefer to use either Git or Hg, but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository. Never lock resources (files) in a central repository… Rude! We have merge tools for a reason; merging sucked a long time ago, but it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members. Always review everything in your commit. Never ever commit a set of files without reviewing the changes in each. Never add a file without asking yourself, deep down inside, does this belong? If you leave to make changes during a review, start the review over when you come back.  Never assume you didn’t touch a file; double check. This is another reason why you want to avoid large, infrequent commits. Requirements for tools Quickly show pending changes for the entire repository. Default action for a resource with pending changes is a diff. Pluggable diff & merge tool Produce a unified diff or a diff of all changes.  This is helpful to bulk review changes instead of opening each file. The central repository is not your own personal dump yard.  Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times. If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again. Commit (integrate) to the central repository / branch frequently I try to do this before leaving each day, especially without a DVCS.  One never knows when they might need to work remotely the following day. Never commit commented out code If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history.
    If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it. Don’t commit build artifacts, user preferences and temporary files. Build artifacts do not belong in VCS; everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe) User preferences are your settings; stop overriding my preferences files! (ie: *.suo and *.user files) Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file.  Set this up as a first step when creating a new repository (a sample ignore file appears at the end of this post)! Be polite when merging unresolved conflicts. Count to 10, cuss, grab a stress ball and realize it’s not a big deal.  Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them. Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up, we all have to deal with this unintended consequence at times.  Just be careful and GET FAMILIAR with your merge tool.  It’s really not as scary as you think.  I personally prefer KDiff3 as its merging capabilities rock. Don’t blindly merge and then blindly commit your changes; this is rude and unprofessional.  Make sure you understand why the conflict occurred and which parts of the code you want to keep.  Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests). Become intimate with your version control system and the tools you use with it. Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials, etc.  Create test repositories and walk through common scenarios. Find the most efficient way to do your work.  These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and the hg cli to get the job done efficiently. Always tag releases Create a way to find a given release, whether this be in comments or an explicit tag / branch.  This should be readily discoverable. Create release branches to patch bugs and then merge the changes back to other development branch(es). If using feature branches, strive for periodic integrations. Feature branches often cause forked code that becomes irreconcilable.  Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into.  This will avoid merge conflicts in the future. Feature branches are best when they are mutually exclusive of active development in other branches. Use and abuse local commits, at least one per task in a story. This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete. Never commit a broken build or failing tests to the central repository. It’s ok for a local commit to break the build and/or tests.  In fact, I encourage this if it helps group the changes more logically.  This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes). If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
    There are exceptions when maintaining code bases that require shotgun surgery; in that case, it’s a design smell :) Don’t version sensitive information, especially usernames / passwords.   There is one area I haven’t found a solution I like yet: versioning 3rd party libraries and/or code.  I really dislike keeping any assemblies in the repository, but this seems to be a common practice for external libraries.  Please feel free to share your ideas about this below.    -Wes
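    To make the ignore-file tip above concrete, here is a minimal Mercurial .hgignore sketch covering the build artifacts and user-preference files called out in this post (adjust to taste; a Git .gitignore would carry the same patterns without the syntax line):

    syntax: glob

    bin/*
    obj/*
    *.dll
    *.exe
    *.suo
    *.user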

    Read the article

  • Solaris 11 pkg fix is my new friend

    - by user12611829
    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox. Here's what I did. # rm -rf /install/* # rm -rf /var/ai # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 So far so good. Then comes an oops..... setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try. # pkg fix installadm Verifying: pkg://solaris/install/installadm ERROR dir: var/ai Group: 'root (0)' should be 'sys (3)' dir: var/ai/ai-webserver Missing: directory does not exist dir: var/ai/ai-webserver/compatibility-configuration Missing: directory does not exist dir: var/ai/ai-webserver/conf.d Missing: directory does not exist dir: var/ai/image-server Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/cgi-bin Missing: directory does not exist dir: var/ai/image-server/images Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/logs Missing: directory does not exist dir: var/ai/profile Missing: directory does not exist dir: var/ai/service Group: 'root (0)' should be 'sys (3)' dir: var/ai/service/.conf-templ Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_data Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_files Missing: directory does not exist file: var/ai/ai-webserver/ai-httpd-templ.conf Missing: regular file does not exist file: var/ai/service/.conf-templ/AI.db Missing: regular file does not exist file: var/ai/image-server/cgi-bin/cgi_get_manifest.py Missing: regular file does not exist Created ZFS snapshot: 2012-12-11-21:09:53 Repairing: pkg://solaris/install/installadm Creating Plan (Evaluating mediators): | DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 3/3 0.0/0.0 0B/s PHASE ITEMS Updating modified actions 16/16 Updating image state Done Creating fast lookup database Done In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well. # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. 
Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 Refreshing install services Warning: mDNS registry of service solaris11-x86 could not be verified. Creating default-i386 alias Setting the default PXE bootfile(s) in the local DHCP configuration to: bios clients (arch 00:00): default-i386/boot/grub/pxegrub Refreshing install services Warning: mDNS registry of service default-i386 could not be verified. # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 514/514 292.3/292.3 0B/s PHASE ITEMS Installing new actions 661/661 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11u1-x86 Image path: /install/solaris11u1-x86 Refreshing install services Warning: mDNS registry of service solaris11u1-x86 could not be verified. # installadm list Service Name Alias Of Status Arch Image Path ------------ -------- ------ ---- ---------- default-i386 solaris11-x86 on i386 /install/solaris11-x86 solaris11-x86 - on i386 /install/solaris11-x86 solaris11u1-x86 - on i386 /install/solaris11u1-x86 This is way way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.

    Read the article

  • Some of my favourite Visual Studio 2012 things – Teams

    - by Aaron Kowall
    Getting the balance right for when, and how many, team projects to create has always been a bit of a challenge.  On large initiatives, there are often teams who work toward a common system.  These teams often have quite a bit of autonomy, but need to roll up to some higher-level initiative.  In TFS 2010, people were often tempted to create separate Team Projects for each of the sub-teams and then do some magic with reporting and cross-team queries to get the consolidated view.  My recommendation was always to use Areas as a means of separating work across the teams, but that always resulted in a large number of queries that needed to be maintained and just seemed confusing.  When doing anything you had to remember to filter the query or view by Area in order to get correct results. Along with the awesome web access portal that comes in TFS 2012 (which I will cover details of in another post), the product group has introduced the concept of Teams.  A team is a sub-group within a TFS 2012 Team Project which allows us to more easily divide work along team boundaries. Technically, a Team is defined by an Area Path and a TFS Group, both of which could already be set up by hand in TFS 2010.  However, by allowing for creation of a ‘Team’ in TFS 2012, the web portal is able to do a bunch of ‘magic’ for us.  We can view the project site (backlog, taskboard, etc.) for the team, we can assign items to the team and we can view the burndown for the team.  Basically, all the stuff that we had to prepare manually we now get created and managed for us with a nice UI. When you create a Team Project in TFS 2012, a ‘Default’ team is created with the same name as the Team Project.  So, if you only have one team working on the project, you are set.  If you want to divide the work into additional teams, you can create teams by using the Team Web Client. Teams are created using the ‘Administer Server’ icon in the top right of the web site.   You can select the team site by using the team chooser: Once you have selected a team, the Product Backlog, TaskBoard, Burndown Charts, etc. are all filtered to that team. NOTE: You always have the ability to choose the ‘Default’ team to see items for the entire project. PS: It’s been a long while since I shared on this blog.  To help with that I’m in a blogging challenge with some other developer and agilist friends.  Please check out their blogs as well: Steve Rogalsky: http://winnipegagilist.blogspot.ca Dylan Smith: http://www.geekswithblogs.net/optikal Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com   Technorati Tags: TFS 2012,Agile,Team

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MatLab work and wherever other large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, the worker nodes work on individual parts of the calculations, and they return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC cluster, such as with the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, that you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 
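    The head-node/worker-node split is easy to picture with a toy example. Here is a hedged Python sketch of the batch pattern described above: a scheduler splits one job into independent work items, farms them out, and assembles the result when the whole batch finishes. This is a conceptual illustration only; a real HPC scheduler (Windows HPC or otherwise) adds queuing, placement and fault handling, none of which is shown.

    from multiprocessing import Pool

    def worker(item):
        """One compute node's share of the job (a stand-in calculation)."""
        lo, hi = item
        return sum(x * x for x in range(lo, hi))

    def head_node(job_size, nodes=4):
        """Split a job into work items, dispatch them, gather the batch result."""
        step = job_size // nodes
        items = [(i * step, (i + 1) * step) for i in range(nodes)]
        with Pool(processes=nodes) as pool:
            partials = pool.map(worker, items)   # workers run in parallel
        return sum(partials)                     # head node assembles the answer

    if __name__ == '__main__':
        print(head_node(1_000_000))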

    Read the article

  • Updated Technical Best Practices whitepaper

    - by ACShorten
    The Technical Best Practices whitepaper has been updated with the latest advice. This edition of the whitepaper covers advice from the internal management team in the product group that manages our environments. Our product teams manage over 1500 copies of the product, covering every version, every platform and every phase of our development, testing and production cycle. The technical team managing that group of environments has compiled some additional advice that has been incorporated into the Technical Best Practices and other whitepapers (including Performance Troubleshooting and the Software Configuration Management series). The new advice covers installation, advanced settings, new security settings and guidance for both Oracle WebLogic and IBM WebSphere installations. The Technical Best Practices whitepaper is available from My Oracle Support at Doc Id: 560367.1. To assist readers of past editions of the whitepaper, new or updated advice is marked with an appropriate graphic.

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >