Search Results

Search found 28207 results on 1129 pages for 'tfs process template'.

  • Problem understanding templates in C++

    - by hidayat
    Template code is not compiled until the template function is used. But where is the compiled code saved? Is it stored in the object file of the translation unit that used the template function in the first place? For example, if main.cpp calls a template function from the file test.h and the compiler generates an object file main.o, is the template function inside that main.o file? Template code is not necessarily inlined, is it?
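
    One way to see this for yourself (a minimal sketch, not from the question; the print function and file names are invented): instantiate a template from main.cpp and inspect the symbols of the resulting object file.

        // test.h
        #include <iostream>

        template <typename T>
        void print(const T& value) {
            std::cout << value << std::endl;
        }

        // main.cpp
        #include "test.h"

        int main() {
            print(42);   // forces the instantiation print<int>
            return 0;
        }

    Compiling with g++ -c main.cpp and running nm -C main.o | grep print typically shows void print<int>(int const&) as a weak symbol in main.o: each translation unit that uses the template gets its own copy of the instantiation, and the linker discards the duplicates rather than inlining anything by necessity.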

  • javac will not compile enum (Windows Sun 1.6 --> OpenJDK 1.6)

    - by avgvstvs
    package com.scheduler.process;

    public class Process {
        public enum state {
            NOT_SUBMITTED, SUBMITTED, BLOCKED, READY, RUNNING, COMPLETED
        }

        private state currentState;

        public state getCurrentState() { return currentState; }

        public void setCurrentState(state currentState) {
            this.currentState = currentState;
        }
    }

    package com.scheduler.machine;

    import com.scheduler.process.Process;
    import com.scheduler.process.Process.state;

    public class Machine {
        com.scheduler.process.Process p = new com.scheduler.process.Process();
        state s = state.READY;          // fails if I don't also explicitly import Process.state
        p.setCurrentState(s);           // says I need a declarator id after 's'... this is wrong.
        p.setCurrentState(state.READY);
    }

    I modified the example to try to point at the issue. I cannot change the state in this code. Eclipse suggests importing Process.state like I had in my previous example, but this doesn't work either. The import allows state s = state.READY, but the call to p.setCurrentState(s) fails, as does p.setCurrentState(state.READY).
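
    For what it's worth, in the snippet above the two setCurrentState(...) calls sit directly in the class body; Java only allows statements inside a method, constructor, or initializer block, which matches the "declarator id" complaint. A minimal sketch with the statements moved into a constructor (same classes as above):

        package com.scheduler.machine;

        import com.scheduler.process.Process;
        import com.scheduler.process.Process.state;

        public class Machine {
            private final Process p = new Process();

            public Machine() {
                state s = state.READY;           // the nested enum still needs the Process.state import
                p.setCurrentState(s);            // legal here: statements may appear inside a constructor
                p.setCurrentState(state.READY);
            }
        }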

  • How to run a long-running process on a udev event?

    - by neclude
    (Sorry for my bad English.) I want to run a PPP connection when my USB modem is connected, so I use the following udev rule:

        ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="16d8",\
        RUN+="/usr/local/bin/newPPP.sh $env{DEVNAME}"

    (My modem appears in /dev as ttyACM0.) newPPP.sh:

        #!/bin/bash
        /usr/bin/pon prov $1 >/dev/null 2>&1 &

    Problem: the udev event fires and newPPP.sh runs, BUT the newPPP.sh process is killed after ~4-5 s. ppp doesn't have time to connect (its parameters allow a 10 s dial-up timeout). How can I run a long-running process that will not be killed? (I tried nohup; it doesn't work either.) System: Arch Linux.
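
    udev is known to kill lingering children of RUN+= handlers once the event has been processed, so the usual workaround is to hand the job to a daemon outside udev's process group. A hedged sketch using at(1), assuming the atd service is installed and running:

        #!/bin/bash
        # newPPP.sh -- queue the dial-up with at(1) so it runs detached
        # from udev and survives udev's post-event cleanup
        echo "/usr/bin/pon prov $1" | at now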

  • Override template shell on linux system in Active Directory domain?

    - by benizi
    Is there an easy way to override the Samba "template shell = /bin/bash" setting on a per-user basis? This is for Linux systems joined to an Active Directory domain. Some users want /bin/bash; others, including myself, want /bin/zsh. Is there some AD attribute I can set? Anything I've found via googling seems hackish at best (writing a script to replace /bin/sh is a maintenance hassle). A similar serverfault question, "Override LDAP shell", seems OpenLDAP-oriented (but if someone knows how to get it working with AD, please say so).
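
    If the AD domain carries the RFC2307 (Services for UNIX) attributes, winbind can be told to read each user's shell from the AD loginShell attribute instead of the global template. A hedged smb.conf sketch (the EXAMPLE domain name and the idmap range are placeholders):

        [global]
            winbind nss info = rfc2307
            idmap config EXAMPLE : backend = ad
            idmap config EXAMPLE : schema_mode = rfc2307
            idmap config EXAMPLE : range = 10000-99999
        # each user's shell then comes from their AD loginShell attribute,
        # so one account can have /bin/zsh and another /bin/bash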

  • What's the proper way to change a process' scheduling policy to IDLE?

    - by ??O?????
    Hello. I have a long-running process on a server running Ubuntu Server 9.10. I would like to make it run under the SCHED_IDLE policy using the chrt command. However, after reading the man page, I can't work out the proper way to issue the command for a running process. I've tried, unsuccessfully:

        # chrt -i -p 688
        pid 688's current scheduling policy: SCHED_OTHER
        pid 688's current scheduling priority: 0
        # chrt -p -i 688
        pid 688's current scheduling policy: SCHED_OTHER
        pid 688's current scheduling priority: 0
        # chrt -p 688 -i
        chrt: failed to set pid 0's policy: Invalid argument

    I'll keep trying, but do you know how to do what I want?
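
    For reference, when setting a policy chrt expects the priority before the pid (with no priority, -p merely queries, which is what the first two attempts above did), and SCHED_IDLE only accepts priority 0. A sketch, using pid 688 from the question:

        # set pid 688 to SCHED_IDLE; 0 is the only valid priority for it
        sudo chrt -i -p 0 688
        # verify the change
        chrt -p 688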

  • Does template class/function specialization improve compilation/linker speed?

    - by Stormenet
    Suppose the following template class is heavily used in a project, mostly with int as the type parameter, and linker speed has been noticeably slower since the introduction of this class.

        template <typename T>
        class MyClass {
            void Print() {
                std::cout << m_tValue << std::endl;
            }
            T m_tValue;
        };

    Will defining a class specialization benefit compilation speed? E.g.

        template <>
        void MyClass<int>::Print() {
            std::cout << m_tValue << std::endl;
        }
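
    A related technique worth knowing here (my addition, not something the question mentions): C++11 explicit instantiation declarations let you compile MyClass<int> exactly once instead of in every translation unit, which is usually what helps compile and link times.

        // MyClass.h -- after the class definition:
        extern template class MyClass<int>;  // suppress implicit instantiation here

        // MyClass.cpp -- exactly one translation unit owns the instantiation:
        template class MyClass<int>;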

  • Launch MS Word template as new document from IE...

    - by Simon
    I have a requirement to launch .dot files (MS Word templates) as new documents from the browser... Let me explain: if you click on a .dot file in Windows Explorer, it opens a new document and runs any macros; you can right-click and edit the template... I want to link to the files, so I use <a href="file://myserver/templates/letter.dot">Letter</a>... However, this prompts the "Download File" dialogue box, and if I click "Open" it opens the template in edit mode, not the required new-document mode... This may be technically impossible, but can I achieve the desired result with ActiveX or something?
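
    One avenue (a sketch only; IE-specific, dependent on relaxed ActiveX security settings, and the function name is made up): Word automation's Documents.Add(template) opens a new document based on a template, which mirrors the double-click behaviour in Explorer.

        <script type="text/javascript">
        // IE-only: drive Word through COM automation
        function newFromTemplate() {
            var word = new ActiveXObject("Word.Application");
            word.Visible = true;
            // Add(template) creates a NEW document from the .dot,
            // rather than opening the template itself for editing
            word.Documents.Add("\\\\myserver\\templates\\letter.dot");
        }
        </script>
        <a href="#" onclick="newFromTemplate(); return false;">Letter</a>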

  • How to make a custom template in WordPress work as a password protected page?

    - by KaOSoFt
    I'm building a page with a custom template. The thing is, I need this page to be password protected, or at least accessible only to logged-in users, but even if I set it as such (Private/Password protected) in the New Pages section of WordPress Administration, it either won't display the menu entry or the content (if Private), or it shows the page contents immediately (if Password protected). I've read somewhere that the the_content() function is what makes this work, but as you can guess, my custom template doesn't use the_content() at all; it's all based on custom content. Do you happen to know how I can (re)implement these two options?
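
    A hedged sketch of what a custom template can do by hand: WordPress exposes the check that the_content() performs as post_password_required(), so the template can gate its custom markup on it.

        <?php
        // inside the custom page template, within The Loop
        if ( post_password_required() ) {
            // visitor hasn't entered the password yet: show WP's standard form
            echo get_the_password_form();
        } else {
            // password accepted (or page not protected): render the custom content
            // ... custom markup here ...
        }
        ?>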

  • How can I traverse a reverse generic relation in a Django template?

    - by user569139
    I have the following class that I am using to bookmark items:

        class BookmarkedItem(models.Model):
            is_bookmarked = models.BooleanField(default=False)
            user = models.ForeignKey(User)
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField()
            content_object = generic.GenericForeignKey()

    And I am defining a reverse generic relationship as follows:

        class Link(models.Model):
            url = models.URLField()
            bookmarks = generic.GenericRelation(BookmarkedItem)

    In one of my views I generate a queryset of all links and add this to a context:

        links = Link.objects.all()
        context = { 'links': links }
        return render_to_response('links.html', context)

    The problem I am having is how to traverse the generic relationship in my template. For each link I want to be able to check the is_bookmarked attribute and change the add/remove bookmark button according to whether the user already has it bookmarked or not. Is this possible to do in the template? Or do I have to do some additional filtering in the view and pass another queryset?
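
    The reverse side of a GenericRelation is an ordinary related manager, so a template can iterate it with .all (a sketch; per-user filtering, as the question hints, is better done in the view):

        {% for link in links %}
          {% for bookmark in link.bookmarks.all %}
            {% if bookmark.is_bookmarked %}
              {# render the "remove bookmark" button #}
            {% endif %}
          {% endfor %}
        {% endfor %}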

  • Declare variables that depend on an unknown type in template functions

    - by rem
    Suppose I'm writing a template function foo that has type parameter T. It gets an object of type T that must have a method bar(). Inside foo I want to create a vector of objects of the type returned by bar. In GNU C++ I can write something like this:

        template<typename T>
        void foo(T x) {
            std::vector<__typeof(x.bar())> v;
            v.push_back(x.bar());
            v.push_back(x.bar());
            v.push_back(x.bar());
            std::cout << v.size() << std::endl;
        }

    How do I do the same thing in Microsoft Visual C++? Is there some way to write this code so that it works in both GNU C++ and Visual C++?
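
    A sketch of the portable spelling: C++11's decltype does what GNU __typeof does here and is accepted by both g++ and Visual C++ 2010 onwards.

        #include <iostream>
        #include <vector>

        template<typename T>
        void foo(T x) {
            std::vector<decltype(x.bar())> v;  // standard C++11, works in both compilers
            v.push_back(x.bar());
            std::cout << v.size() << std::endl;
        }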

  • MediaWiki : is it possible to add an edit link in a template?

    - by leo
    I have a template on my wiki, kind of a box template. Then there is this page where I use it several times. Can I add an edit link to each of the boxes so I don't have to edit the whole page in order to modify one of the boxes? The boxes contain only text, not other templates. Thanks!

    Edit: Actually there's an easier way to ask my question. Let's say I have a page without sections defined (namely, without == titles ==):

        content A
        content B
        content C

    Is there a way to open an edit form only for content B?
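
    One common workaround (a hedged sketch, not from the question, and it assumes subpages are enabled in the namespace): keep each box's text on its own subpage, transclude the subpages, and give every box an edit link built with {{fullurl:}} so only that subpage opens for editing.

        {{/Box B}}
        <small>[{{fullurl:{{FULLPAGENAME}}/Box B|action=edit}} edit]</small>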

  • is it possible to create a multi-project template that references n number of existing projects and

    - by jcollum
    The situation: I need to create about 40+ solutions that all reference 3 projects and have one project that is unique to each one. I'd like to create a multi-project template that does this, but from what I've read it looks like it's very difficult or impossible (there's a related SO question, but it doesn't answer this). I want my solution to look like this (names changed, of course). These three are used by all solutions created under this "family":

        MyCompany.Extensions
        MyCompany.MyProject.Tests.Shared
        MyCompany.MyProject.Scripts

    This one is the one that makes the solution unique (123, 124, 125, etc.):

        MyCompany.MyProject.Tests.Unit123

    Is it possible to set up a multi-project template that will generate this structure? References: MSDN, Create Multi Project Templates
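
    For the fixed part, at least, the documented multi-project template format looks roughly like this (a sketch based on the MSDN article the question cites; the names and paths are placeholders, and the unique per-solution project would still need a template parameter or a post-generation rename):

        <VSTemplate Version="2.0.0" Type="ProjectGroup"
                    xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
          <TemplateData>
            <Name>MyCompany Solution Family</Name>
          </TemplateData>
          <TemplateContent>
            <ProjectCollection>
              <ProjectTemplateLink ProjectName="MyCompany.Extensions">
                Extensions\MyTemplate.vstemplate
              </ProjectTemplateLink>
              <ProjectTemplateLink ProjectName="MyCompany.MyProject.Tests.Shared">
                Tests.Shared\MyTemplate.vstemplate
              </ProjectTemplateLink>
              <ProjectTemplateLink ProjectName="MyCompany.MyProject.Scripts">
                Scripts\MyTemplate.vstemplate
              </ProjectTemplateLink>
            </ProjectCollection>
          </TemplateContent>
        </VSTemplate>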

  • Restrict the whole system to certain cores, except for a few processes?

    - by icando
    Hi, I am running a latency-sensitive program on a Linux machine (more specifically, CentOS 6), and I don't want the threads of the process to be preempted. So in my plan, the first step is to set the CPU affinity of the threads so that they run on separate cores and don't preempt each other. The second step is to make sure no other processes in the system run on these cores. So my question is: is it possible to restrict the whole system to certain cores, except for this process? This should also apply to any newly created processes in the future.
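
    A hedged sketch with the cset tool (from the cpuset package, assuming it is available for CentOS 6): a "shield" migrates every existing and future task, optionally including kernel threads, off the chosen cores, and only processes you explicitly place in the shield run there.

        # reserve cores 2-3; everything else is moved to the remaining cores
        sudo cset shield --cpu=2-3 --kthread=on
        # run the latency-sensitive program inside the shield
        sudo cset shield --exec ./latency_sensitive_prog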

  • Twitter Feeds in Umbraco using XSLT

    - by Vizioz Limited
    There are currently two packages tagged on the Umbraco forum that can be used to add a Twitter feed to your website. I was playing around with "Twitter for Umbraco" by Warren Buckley and noticed a bug in the way it converted Twitter @names to links, so I thought I would try to solve this using XSLT. It may also be useful for those of you using Darren Ferguson's "Feed Cache" package, as the demo on Darren's site does not add links to the tweets. To use this XSLT you simply call the XSLT template, passing in your Twitter message:

        <xsl:call-template name="formaturl">
          <xsl:with-param name="twitterfeed" select="text"/>
        </xsl:call-template>

    Then add the XSLT template to your XSLT macro (outside of the main template):

        <xsl:template name="formaturl">
          <xsl:param name="twitterfeed"/>
          <xsl:variable name="transform-http" select="Exslt.ExsltRegularExpressions:replace($twitterfeed, '(http\:\/\/\S+)', 'ig', '&lt;a href=&quot;$1&quot;&gt;$1&lt;/a&gt;')"/>
          <xsl:variable name="transform-https" select="Exslt.ExsltRegularExpressions:replace($transform-http, '(https\:\/\/\S+)', 'ig', '&lt;a href=&quot;$1&quot;&gt;$1&lt;/a&gt;')"/>
          <xsl:variable name="transform-AT" select="Exslt.ExsltRegularExpressions:replace($transform-https, '(^|\s)@(\w+)', 'ig', ' &lt;a href=&quot;http://www.twitter.com/$2&quot;&gt;@$2&lt;/a&gt;')"/>
          <xsl:variable name="transform-HASH" select="Exslt.ExsltRegularExpressions:replace($transform-AT, '(^|\s)#(\w+)', 'ig', ' &lt;a href=&quot;http://www.twitter.com/search?q=$2&quot;&gt;#$2&lt;/a&gt;')"/>
          <xsl:value-of select="$transform-HASH" disable-output-escaping="yes"/>
        </xsl:template>

    You should find that this now replaces all the @names, #names and URLs with links!

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ... Unpacking replacement kde-runtime-data ... dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack): trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2 dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe) dpkg-deb: error: subprocess <decompress> returned error exit status 2 Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb 
/var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb /var/cache/apt/archives/software-center_5.2.4_all.deb /var/cache/apt/archives/xdiagnose_2.5_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) maura@pandora:~$ ^C maura@pandora:~$
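
    Two distinct failures are visible in that log: the kde-runtime-data file conflict, and a Python runtime so broken that every package's maintainer scripts die. A hedged first step for the file conflict only (the usual dpkg answer for "trying to overwrite" errors; it will not fix the Python breakage):

        # let dpkg overwrite the conflicting /usr/share/sounds entry
        sudo dpkg -i --force-overwrite \
            /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb
        # then let apt try to finish the rest
        sudo apt-get -f install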

  • ANNOUNCEMENT: Oracle VM 3 Templates Available for Oracle Secure Global Desktop 4.62

    - by Mohan Prabhala
    Today, we are proud to announce the general availability of Oracle VM 3 templates for Oracle Secure Global Desktop version 4.62. With Oracle VM 3 templates, anyone using Oracle VM 3 need not download, install and configure the operating system and product(s) individually. In this case, the supported operating system (Oracle Linux 5.7) and the Oracle Secure Global Desktop 4.62 product are packaged together into a template that one can easily import and clone as a VM into Oracle VM 3. This results in a nearly instant deployment and configuration of Oracle Secure Global Desktop within Oracle VM 3, drastically reducing the evaluation and deployment time when leveraging Oracle VM 3. Feel free to give it a try! Log in to the Oracle VM section at the Oracle Software Delivery Cloud (click on 'Cloud Portal (Main)' at the top-right) and:

    Under Oracle VM templates - x86 64-bit, look for:
        Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86_64 (64 bit)
        Oracle Secure Global Desktop 4.62 template for x86_64 (64 bit) with Oracle Linux 5.7

    Under Oracle VM templates - x86 32-bit, look for:
        Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86 (32 bit)
        Oracle Secure Global Desktop 4.62 template for x86 (32 bit) with Oracle Linux 5.7

    Download any of the above templates. Once you are done, you must first import the assembly (ova) file that you downloaded from the Oracle Software Delivery Cloud, next create a virtual machine template from the assembly, and finally create a virtual machine from the template. Once the virtual machine is created and starts up, be sure to configure the networking parameters (hostname, IP address, netmask, gateway, etc.) and optional user parameters correctly. You must also enter a root password during first boot. And that's it: the Oracle Secure Global Desktop install script will pick up the networking parameters, prompt for confirmation and complete a default installation. Once the installation is complete, you may want to refer to the Oracle Secure Global Desktop Administration Guide to learn more about Oracle Secure Global Desktop and its capabilities.

  • How to properly diagram lambda expressions or traversals through them in Architecture Explorer?

    - by MainMa
    I'm exploring a piece of code in Architecture Explorer in Visual Studio 2010 to study the relations between methods. I noticed a strange behavior. Take the following source code. It generates a hello message based on a template and a template engine, the template engine being a method (a sort of strategy pattern, simplified to the maximum for demo purposes).

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                this.DefaultTemplateEngine,
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

        private string GenerateLocalizedHelloTemplate()
        {
            return "Hello {0}!";
        }

        public string ApplyTemplate(
            Func<string, string, string> templateEngine,
            string template,
            string personName)
        {
            return templateEngine(template, personName);
        }

        public string DefaultTemplateEngine(string template, string personName)
        {
            return string.Format(template, personName);
        }

    The graph generated from this code is this one: [dependency graph not included in this excerpt]

    Change the first method from this:

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                this.DefaultTemplateEngine,
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

    to this:

        public string GenerateHelloMessage(string personName)
        {
            return this.ApplyTemplate(
                (a, b) => this.DefaultTemplateEngine(a, b),
                this.GenerateLocalizedHelloTemplate(),
                personName);
        }

    and the graph becomes: [dependency graph not included in this excerpt]

    While semantically identical, those two versions of code produce different dependency graphs, and Architecture Explorer shows no trace of the lambda expression (while Visual Studio's code coverage, for example, shows them, and Code Analysis seems to be able to understand that the link exists). How would it be possible, without changing the source code, to:
        either force Architecture Explorer to display everything, including lambda expressions,
        or make it traverse lambda expressions while drawing a dependency through them (so in this case, drawing the dependency from GenerateHelloMessage to DefaultTemplateEngine in the second example)?

  • System crashes/lockups + compiz/cairo/gnome-panel crashing due to cached ram, please help?

    - by Kristian Thomson
    Can someone help me troubleshoot system crashes and lockups which result in compiz/Cairo-Dock and gnome-panel crashing? I also get no window borders after the crash and a lot of kernel memory errors. Logs are telling me that apps were killed due to not enough memory, but the system is caching something like 14 GB of my RAM, so I'm a bit stuck on what is happening and how to stop it. I'm running Ubuntu 12.10 on a 2011 Mac Mini with 16 GB RAM. Here are some of the log entries that look like they could be related. I woke up this morning to find chrome/skype/cairo dock and a few others had been killed, and here is what the log said:

        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959890] Out of memory: Kill process 12247 (chromium-browse) score 101 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.959893] Killed process 12247 (chromium-browse) total-vm:238948kB, anon-rss:17064kB, file-rss:20008kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972283] Out of memory: Kill process 10976 (dropbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.972288] Killed process 10976 (dropbox) total-vm:316392kB, anon-rss:115484kB, file-rss:16504kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975890] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9310.975895] Killed process 11515 (tray_icon_worke) total-vm:63336kB, anon-rss:15960kB, file-rss:11436kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281535] Out of memory: Kill process 10887 (rhythmbox) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.281539] Killed process 10887 (rhythmbox) total-vm:528980kB, anon-rss:92272kB, file-rss:36520kB
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283110] Out of memory: Kill process 10889 (skype) score 3 or sacrifice child
        Nov 5 04:00:45 linkandzelda-Macmini kernel: [ 9311.283113] Killed process 10889 (skype) total-vm:415056kB, anon-rss:84880kB, file-rss:22160kB

    I went to look deeper into things and saw that the whole time I'm having these kernel errors with out of memory, and something mentioning radeon. I have a Radeon HD 6600M graphics card using the open source driver, not the proprietary one. I was wondering if perhaps using the proprietary one would solve the problem. Also, while writing this in Chrome, rhythmbox and chrome just got killed, due to out-of-memory errors or so it reports, though I had 7 GB of free RAM at the time, with 7 GB cached as well. Here is a full copy of the kern.log entries from when I began typing this question: http://pastebin.com/cdxxDktG Thanks in advance, Kris

  • Different types of Session state management options available with ASP.NET

    - by Aamir Hasan
    ASP.NET provides In-Process and Out-of-Process state management. In-Process stores the session in memory on the web server. This requires a "sticky server" (or no load balancing) so that the user is always reconnected to the same web server. Out-of-Process session state management stores data in an external data source. The external data source may be either a SQL Server or a State Server service. Out-of-Process state management requires that all objects stored in session are serializable. Link: http://msdn.microsoft.com/en-us/library/ms178586%28VS.80%29.aspx
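
    The mode is chosen in web.config; a hedged sketch of the three variants (the server names and the connection string are placeholders):

        <!-- In-Process (default): session lives in the worker process -->
        <sessionState mode="InProc" />

        <!-- Out-of-Process, State Server service: -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=stateserver:42424" />

        <!-- Out-of-Process, SQL Server: -->
        <sessionState mode="SQLServer"
                      sqlConnectionString="Data Source=sqlserver;Integrated Security=SSPI" />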

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystem when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog.

    This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections
    To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor
    Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and you will only discover when you come to view the MSAS counters that they were not collected. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand.

    The counters used, as Object \ Counter (Instance): Justification...

    System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.

    System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.

    Process \ % Processor Time (sqlservr): Definitely should be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the SQL Server process on the processor.

    Process \ % Processor Time (msmdsrv): Likewise, should be used if Processor \ % Processor Time (_Total) is maxing out at 100%, to assess the effect of the Analysis Services process on the processor.

    Process \ Working Set (sqlservr): If the Memory \ Available Mbytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.

    Process \ Working Set (msmdsrv): As above, for the Analysis Services process.

    Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If multi-proc, be mindful that only an average is provided.

    Processor \ % Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.

    Processor \ % User Time (_Total): To see how applications are interacting from a processor perspective; a high percentage of utilisation suggests the server is dealing with too many applications and may require increased hardware or scaling out.

    Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.

    Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to review the configuration of the page file on the server.

    Memory \ Available Mbytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.

    Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you've got poor response time for your disk.

    Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval. If you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk, which is unable to service these requests in a timely fashion.

    Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.

    Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase or decrease in network utilisation.

    Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).

    MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

    MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.

    MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.

    MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.

    MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): Displays the rate of queries answered from the cache directly.

    MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.

    MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): Displays the rate of queries answered from files.

    MSAS 2005: Storage Engine Query \ Average time/query (N/A): Displays the average time of a query.

    MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.

    MSAS 2005: Connection \ Requests/sec (N/A): Displays the rate of query requests per second.

    MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.

    MSAS 2005: Threads \ Query pool job queue length (N/A): The number of queries in the job queue.

    MSAS 2005: Proc Aggregations \ Temp file bytes written/sec (N/A): Shows the number of bytes of data written to a temporary file.

    MSAS 2005: Proc Aggregations \ Temp file rows written/sec (N/A): Shows the number of rows of data written to a temporary file.

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes (the Process Document and the Java Specification Participation Agreement, or JSPA) and will apply to all new JSRs for all Java platforms.

    The first, JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process, was completed and put into effect in October 2011 as JCP 2.8. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements.

    The second, JSR 355, Executive Committee Merge, is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC elections in November.

    The third, JSR 358, A major revision of the Java Community Process, was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358 is now in process and can be followed on java.net.

    The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above. More candidates for 2012 JCP Star Spec Leads!

        JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
        JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
        JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
        JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle), EG formation, migrated September 2012
        JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
        JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
        JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
        JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
        JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
        JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
        JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
        JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention (pipelines, memory channel contention, destructive interference in the shared caches, etc.) will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) the benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources (sockets, cores, pipelines, etc.) without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though: this description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the first paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects, such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect didn't relate to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process-membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks:

    It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact, and which have precedence under what circumstances, could be the topic of a future blog entry. The scheduler is work-conserving.

    The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted-ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads (a bad idea; the kernel typically handles placement best) or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy.

    Solaris 11 introduces the Power-Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    If your threads communicate heavily (one thread reads cache lines last written by some other thread), then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
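
    If you want to observe this placement behavior on your own box, Solaris ships lgroup observability tools; a hedged sketch (the pid is a placeholder):

        # show the lgroup (NUMA node) hierarchy of the machine
        lgrpinfo
        # show the home lgroup of every thread in process 1234
        plgrp 1234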
