Search Results

Search found 9410 results on 377 pages for 'simulator difference'.


  • Powershell Exchange script returning inconsistent results - PS weirdness

    - by Ian
    Maybe someone can shed some light on a bit of Powershell weirdness I've come across and can't explain. This PS script returns a list of exchange databases, their sizes and the number of mailboxes in each one: Get-MailboxDatabase | Select Server, StorageGroupName, Name, @{Name="Size (GB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$"+ $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1048576KB; [math]::round($size, 2)}}, @{Name="Size (MB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$"+ $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1024KB; [math]::round($size, 2)}}, @{Name="No. Of Mbx";expression={(Get-Mailbox -Database $_.Identity | Measure-Object).Count}} | Format-table –AutoSize If I add a simple 'sort name' before the 'format-table' my resulting table contains blanks where the database sizes and number of mailboxes should appear (not zeros, blank empty space).... but only in some rows, not all rows. Some rows contain numbers! If I put the '|sort name| ' after the initial 'get-mailboxdatabase' it works fine. Whats even weirder is if I do the following: Execute the above command Add the sort before format-table Execute the new command Execute the initial command again What I see is different amounts in each of the three cases - all of which are incorrect. Yet 1 and 3 are the same command and the only difference with 2 is a sort. 1 and 3 should, at a minimum, return the same results. I get blanks where I should have MBs If I add the sort after the get-mailboxdatabase, it always returns identical results (as it should). Can anyone suggest an explanation as to what may be going on? If its of any help in reading the expression, I've reformatted it here to make it a bit more readable: Get-MailboxDatabase | Select Server, StorageGroupName, Name, @{Name="Size (GB)";Expression={ $objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1048576KB; [math]::round($size, 2) }}, @{Name="Size (MB)";Expression={ $objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1024KB; [math]::round($size, 2) }}, @{Name="No. Of Mbx";expression={ (Get-Mailbox -Database $_.Identity | Measure-Object).Count }} | Format-table –AutoSize

    Read the article

  • Explanation of init.d scripts in Fedora

    - by Shahmir Javaid
    Below is a copy of the vsftpd init script; I need some explanations of a few of the constructs used in it:

        #!/bin/bash
        #
        ### BEGIN INIT INFO
        # Provides: vsftpd
        # Required-Start: $local_fs $network $named $remote_fs $syslog
        # Required-Stop: $local_fs $network $named $remote_fs $syslog
        # Short-Description: Very Secure Ftp Daemon
        # Description: vsftpd is a Very Secure FTP daemon. It was written completely from
        #              scratch
        ### END INIT INFO

        # vsftpd    This shell script takes care of starting and stopping
        #           standalone vsftpd.
        #
        # chkconfig: - 60 50
        # description: Vsftpd is a ftp daemon, which is the program \
        #              that answers incoming ftp service requests.
        # processname: vsftpd
        # config: /etc/vsftpd/vsftpd.conf

        # Source function library.
        . /etc/rc.d/init.d/functions

        # Source networking configuration.
        . /etc/sysconfig/network

        RETVAL=0
        prog="vsftpd"

        start() {
            # Start daemons.

            # Check that networking is up.
            [ ${NETWORKING} = "no" ] && exit 1
            [ -x /usr/sbin/vsftpd ] || exit 1

            if [ -d /etc/vsftpd ] ; then
                CONFS=`ls /etc/vsftpd/*.conf 2>/dev/null`
                [ -z "$CONFS" ] && exit 6
                for i in $CONFS; do
                    site=`basename $i .conf`
                    echo -n $"Starting $prog for $site: "
                    daemon /usr/sbin/vsftpd $i
                    RETVAL=$?
                    echo
                    if [ $RETVAL -eq 0 ]; then
                        touch /var/lock/subsys/$prog
                        break
                    else
                        if [ -f /var/lock/subsys/$prog ]; then
                            RETVAL=0
                            break
                        fi
                    fi
                done
            else
                RETVAL=1
            fi
            return $RETVAL
        }

        stop() {
            # Stop daemons.
            echo -n $"Shutting down $prog: "
            killproc $prog
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
            return $RETVAL
        }

        # See how we were called.
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          restart|reload)
            stop
            start
            RETVAL=$?
            ;;
          condrestart|try-restart|force-reload)
            if [ -f /var/lock/subsys/$prog ]; then
                stop
                start
                RETVAL=$?
            fi
            ;;
          status)
            status $prog
            RETVAL=$?
            ;;
          *)
            echo $"Usage: $0 {start|stop|restart|try-restart|force-reload|status}"
            exit 1
        esac

        exit $RETVAL

    Question I: What the hell is the difference between the && and || signs in the commands below, and is it just an easy way to do a simple if check, or is it something completely different from an if [ ... ]; then ... fi block?

        # Check that networking is up.
        [ ${NETWORKING} = "no" ] && exit 1
        [ -x /usr/sbin/vsftpd ] || exit 1

    Question II: I get what -eq and -gt are (equal to, greater than), but is there a simple website that explains what -x, -d and -f are? Any help would be appreciated. Running Fedora 12. Script copied from /etc/init.d/vsftpd.

    Question III: It says the required starts are $local_fs $network $named $remote_fs $syslog, but I can't see anywhere that it checks for those.
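    For reference: "&&" runs the command to its right only when the command to its left exits with status 0, and "||" only when it exits non-zero, so these one-liners are just compact if checks. A short sketch of the equivalence and of the file-test flags (plain bash, nothing Fedora-specific assumed; "man test" or "help test" has the full list):

        # && / || short-circuit on the exit status of the test command [ ... ]
        [ "${NETWORKING}" = "no" ] && exit 1   # same as: if [ "$NETWORKING" = "no" ]; then exit 1; fi
        [ -x /usr/sbin/vsftpd ] || exit 1      # same as: if [ ! -x /usr/sbin/vsftpd ]; then exit 1; fi

        # Common file tests:
        #   -x FILE   true if FILE exists and is executable
        #   -d FILE   true if FILE exists and is a directory
        #   -f FILE   true if FILE exists and is a regular file
        #   -z STR    true if the string STR is empty

    As for Question III, the Required-Start lines are LSB header metadata read by the init system when ordering scripts at boot; the script itself never checks them.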

    Read the article

  • Trouble using gitweb with nginx

    - by Rayne
    I have a git repository in a directory inside of /home/raynes/pubgit/. I'm trying to use gitweb to provide a web interface to it. I use nginx as my web server for everything else, so I don't really want to have to use another just for this. I'm mostly following this guide: http://michalbugno.pl/en/blog/gitweb-nginx, which is the only guide I can find via google and is really recent.

    fcgiwrap apparently isn't in Lucid Lynx's repositories, so I installed it manually. I spawn instances via spawn-fcgi:

        spawn-fcgi -f /usr/local/sbin/fcgiwrap -a 127.0.0.1 -p 9001

    That's all good. My /etc/gitweb.conf is as follows:

        # path to git projects (<project>.git)
        #$projectroot = "/home/raynes/pubgit";
        $my_uri = "http://mc.raynes.me";
        $home_link = "http://mc.raynes.me/";

        # directory to use for temp files
        $git_temp = "/tmp";

        # target of the home link on top of all pages
        #$home_link = $my_uri || "/";

        # html text to include at home page
        $home_text = "indextext.html";

        # file with project list; by default, simply scan the projectroot dir.
        $projects_list = $projectroot;

        # stylesheet to use
        $stylesheet = "/gitweb/gitweb.css";

        # logo to use
        $logo = "/gitweb/git-logo.png";

        # the 'favicon'
        $favicon = "/gitweb/git-favicon.png";

    And my nginx server configuration is this:

        server {
            listen 80;
            server_name mc.raynes.me;

            location / {
                root /usr/share/gitweb;
                if (!-f $request_filename) {
                    fastcgi_pass 127.0.0.1:9001;
                }
                fastcgi_index index.cgi;
                fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The only difference here is that I've set fastcgi_pass to 127.0.0.1:9001. When I go to http://mc.raynes.me I'm greeted with a page that simply says "403" and nothing else. I have not the slightest clue what I did wrong. Any ideas?
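    One way to narrow a bare 403 like this down is to check each hop from the shell; the paths below assume the Ubuntu gitweb package layout (/usr/share/gitweb/gitweb.cgi) and that the web/FastCGI processes run as www-data. Note also that SCRIPT_FILENAME is set to /scripts$fastcgi_script_name, which asks fcgiwrap to execute a file under /scripts - a script path fcgiwrap cannot find or execute is one common source of a plain 403, so that value is worth double-checking.

        # Is fcgiwrap actually listening where fastcgi_pass points?
        netstat -lnpt | grep 9001

        # Does the CGI exist and is it executable by the web/fcgiwrap user?
        ls -l /usr/share/gitweb/gitweb.cgi
        sudo -u www-data test -x /usr/share/gitweb/gitweb.cgi && echo ok || echo "not executable for www-data"

        # Watch the error log while requesting the page; it normally names the file it tried to run.
        tail -f /var/log/nginx/error.log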

    Read the article

  • Can't get virtual desktops to show up on RDWeb for Server 2012 R2

    - by Scott Chamberlain
    I built a test lab using the Windows Server 2012 R2 Preview. The initial test lab has the following configuration (I have replaced our name with "OurCompanyName" because I would like it if Google searches for our name did not cause people to come to this site, please do the same in any responses) Physical hardware running Windows Server 2012 R2 Preview full GUI, acting as Hyper-V host (joined to the test domain as testVwHost.testVw.OurCompanyName.com) with the following VM's running on it VM running 2012 R2 Core acting as domain controller for the forest testVw.OurCompanyName.com (testDC.testVw.OurCompanyName.com) VM running 2012 R2 Core with nothing running on it joined to the test domain as testIIS.testVw.OurCompanyName.com A clean install of Windows 7, all that was done to it was all windows updates where loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it A clean install of Windows 8, all that was done to it was all windows updates where loaded and sysprep /generalize /oobe /shutdown /mode:vm was run on it I then ran "Add Roles and Features" from testVwHost and chose the "Remote Desktop Services Installation", "Standard Deployment", "Virtual machine-based desktop deployment". I choose testIIS for the roles "RD Connection Broker" and "RD Web Access" and testVwHost as "RD Virtualization Host" The Install of the roles went fine, I then went to Remote Desktop Services in server manager and wet to setup Deployment Properties. I set the certificate for all 3 roles to our certificate signed by a CA for *.OurCompanyName.com. I then created a new Virtual Desktop Collection for Windows 7 and Windows 8 and both where created without issue. On the Windows 7 pool I added RemoteApp to launch WordPad, For windows 8 I did not add any RemoteApp programs. Everything now appears to be fine from a setup perspective however if I go to https://testIIS.testVw.OurCompanyName.com/RDWeb and log in as the use Administrator (or any orher user) I don't see the virtual desktops I created nor the RemoteApp publishing of WordPad. I tried adding a licensing server, using testDC as the server but that made no difference. What step did I miss in setting this up that is causing this not to show up on RDWeb? If any additional information is needed pleas let me know. I have tried every possible thing I can think of and I am just groping around in the dark now. The virtual machines running on testVwHost The configuration screen for RD Services The Windows 7 Pool The Windows 8 Pool This is logged in as testVw\Administrator

    Read the article

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). On the email containing links to download the certificate, it mentioned: Certificate Details: SSL Type : InCommon SSL Server : OTHER I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just got signed by InCommon (not working). $ openssl x509 -in sso-current.cer -text shows, with irrelevant information ommitted. Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected] Validity Not Before: Oct 1 00:00:00 2009 GMT Not After : Nov 28 23:59:59 2012 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 CRL Distribution Points: Full Name: URI:http://crl.thawte.com/ThawteServerPremiumCA.crl X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication Authority Information Access: OCSP - URI:http://ocsp.thawte.com Signature Algorithm: sha1WithRSAEncryption and $ openssl x509 -in sso-new.cer -text shows Data: Version: 3 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA Validity Not Before: Nov 8 00:00:00 2012 GMT Not After : Nov 8 23:59:59 2014 GMT Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Authority Key Identifier: keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D X509v3 Subject Key Identifier: 18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02 X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Basic Constraints: critical CA:FALSE X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.5923.1.4.3.1.1 CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf X509v3 CRL Distribution Points: Full Name: URI:http://crl.incommon.org/InCommonServerCA.crl Authority Information Access: CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt OCSP - URI:http://ocsp.incommon.org Nothing jumps out at me as the reason one would not work so I don't have a specific request for the signer for what to do differently when re-signing.
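    The two dumps look structurally equivalent (both are 2048-bit RSA server certificates with the TLS Web Server Authentication EKU), and the "server type" field on a CSR order form does not change the certificate itself, so it may be worth confirming from the shell that the new certificate matches the server's private key and that the InCommon intermediate chain is being supplied to the installer. File names below are placeholders:

        # The certificate and the private key must share the same modulus:
        openssl x509 -noout -modulus -in sso-new.cer | openssl md5
        openssl rsa  -noout -modulus -in sso.key     | openssl md5

        # Verify the new certificate against the CA chain being handed to the installer:
        openssl verify -CAfile incommon-chain.pem sso-new.cer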

    Read the article

  • What is Remote Desktop Services in Windows Server 2008 R2 all about?

    - by fejesjoco
    Seriously, I'm lost in all that sales mumbo-jumbo. Let's say I want 1 or 2 users to be able to remotely log on to a server, run Word, Visual Studio, Firefox, and whatever. Do I gain anything at all if I install Remote Desktop Services? Or do I just install Desktop Experience feature pack, enable remote desktop and voila, nobody will ever notice the difference? Here's what TechNet says about Remote Desktop Session Host: A Remote Desktop Session Host (RD Session Host) server is the server that hosts Windows-based programs or the full Windows desktop for Remote Desktop Services clients. Users can connect to an RD Session Host server to run programs, to save files, and to use network resources on that server. Users can access an RD Session Host server by using Remote Desktop Connection or by using RemoteApp. The good old simple remote desktop can also host a full Windows desktop for remote clients so that they can run programs, save files and do all that stuff. Why do they write about it like it's such a great new invention, besides that they want to sell it? RDSH doesn't seem all that different at all. What do I install when I install RDSH, since all those features are already there in Windows? What's even more confusing is that you need to take special care when you want to install applications to an RDSH so that they will be usable by many concurrent users. Why? All the modern applications install the program files in one directory, store some common settings in the ProgramData folder and the HKLM hive, and store user specific settings in the Users folder and the HKCU hive. They are designed to be usable by many users on the same machine. 2 or 2000 users can use them concurrently without any efforts. I can sign in with 2 users to a server with only remote desktop enabled, and both of us can run Word or anything without any problems, can't we? So what changes if I set RDSH to install mode, or what happens if I don't? Why is the feature to switch between install and execute mode there at all? Yes I know of some advantages in Remote Desktop Services, like there's no 2 user limit, it supports virtualization, video acceleration and stuff, it has a whole infrastructure with gateway, web access, connection broker, etc. But I don't need those, so if you take these away, how are these two technologies different? From the articles it seems like they are completely different technologies, whereas it looks to me that they are completely the same at the core, and Remote Desktop Services just adds some additional features, but doesn't reinvent anything.

    Read the article

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2,3,4 & 5 All control is being exectuted via Webmin Reboot server – MySQl and Apache not running MySQL Database Server MySQL version 5.1.69 MySQL is not running on your system - database list could not be retrieved. ________________________________________ Click this button to start the MySQL database server on your system with the command /etc/rc.d/init.d/mysqld start. This Webmin module cannot administer the database until it is started. Apache Webserver Apache version 2.2.15 Start Apache Search Docs.. Global configuration Existing virtual hosts Create virtual host Select all. | Invert selection. Default Server Defines the default settings for all other virtual servers, and processes any unhandled requests. Address Any Port Any Server Name Automatic Document Root /var/www/drupal Virtual Server Processes all requests on port 443 not handled by other virtual servers. Address Any Port 443 Server Name Automatic Document Root /var/www/drupal Select all. | Invert selection. chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart Apache chkconfig --list httpd httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off Manually Restart MySQL chkconfig --list mysqld mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off Everything now running okay; but no difference in the chkconfig outputs above. I tried: chkconfig --levels 235 httpd on /etc/init.d/httpd start and the same for mysqld but no change in operation. Log files show that the shutdown has been completed successfully; but there is no indication of the service restarting until it is executed manually: 131112 13:59:15 InnoDB: Starting shutdown... 131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021 131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete 131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended 131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M 131112 14:09:52 InnoDB: Completed initialization of buffer pool And the Apache logs: [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 13:59:13 2013] [notice] Digest: done [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ... [Tue Nov 12 14:27:13 2013] [notice] Digest: done [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations Is anyone able to shed any light on this problem?
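    Since chkconfig reports both services as on, a diagnostic sketch of what can be checked next from the shell: whether the S-links really exist for the runlevel the machine boots into, and whether the init scripts bailed out early during boot (for example on a dependency that was not yet available). These commands only inspect, they change nothing:

        # Which runlevel does the box actually come up in?
        runlevel

        # Are the start links present for that runlevel?
        ls -l /etc/rc3.d/ /etc/rc5.d/ | grep -Ei 'mysqld|httpd'

        # Did either service log anything during boot?
        grep -Ei 'mysqld|httpd' /var/log/boot.log /var/log/messages | tail -n 50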

    Read the article

  • How to setup ssh's umask for all type of connections

    - by Unode
    I've been searching for a way to set OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to:

        1. sftp
        2. scp
        3. ssh hostname
        4. ssh hostname program

    The difference between 3 and 4 is that the former starts a shell, which usually reads /etc/profile, while the latter doesn't. In addition, by reading this post I've become aware of the -u option that is present in newer versions of OpenSSH. However, this doesn't work. I must also add that /etc/profile now includes umask 0027.

    Going point by point:

    sftp - Setting -u 0027 in sshd_config as mentioned here is not enough. If I don't set this parameter, sftp uses umask 0022 by default. This means that if I have the two files:

        -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
        -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write

    When I use sftp to put them on the destination machine I actually get:

        -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
        -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    However, when I set -u 0027 in sshd_config on the destination machine I actually get:

        -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
        -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    which is not expected, since it should actually be:

        -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
        -rw-r----- 1 user user 0 2011-01-29 02:04 read-write

    Does anyone understand why this happens?

    scp - Independently of what is set up for sftp, permissions always follow umask 0022. I currently have no idea how to alter this.

    ssh hostname - no problem here, since the shell reads /etc/profile by default, which means umask 0027 in the current setup.

    ssh hostname program - same situation as scp.

    In sum: setting the umask on sftp alters the result, but not as it should; ssh hostname works as expected, reading /etc/profile; and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome.

    EDIT: I would like to avoid patches that require manually compiling OpenSSH. The system is running Ubuntu Server 10.04.01 (lucid) LTS with openssh packages from maverick.

    Answer: As indicated by poige, using pam_umask did the trick. The exact change was the following lines added to /etc/pam.d/sshd:

        # Setting UMASK for all ssh based connections (ssh, sftp, scp)
        session    optional    pam_umask.so umask=0027

    Also, in order to affect all login shells regardless of whether they source /etc/profile or not, the same lines were added to /etc/pam.d/login.

    EDIT: After some of the comments I retested this issue. At least in Ubuntu (where I tested), it seems that if the user has a different umask set in their shell's init files (.bashrc, .zshrc, ...), the PAM umask is ignored and the user-defined umask is used instead. Changes in /etc/profile didn't affect the outcome unless the user explicitly sources those changes in the init files. It is unclear at this point whether this behavior happens in all distros.
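    To confirm what each connection type ends up with once pam_umask is in place, a quick probe of every path ("hostname" is a placeholder for the remote machine):

        # Non-login remote command (does not source /etc/profile):
        ssh hostname umask

        # Interactive login shell: run "ssh hostname", then type "umask" at the prompt.

        # scp and sftp paths: push a file each way and look at the mode it lands with.
        echo probe > /tmp/umask-probe
        scp /tmp/umask-probe hostname:/tmp/umask-probe.scp
        echo "put /tmp/umask-probe /tmp/umask-probe.sftp" | sftp -b - hostname
        ssh hostname 'ls -l /tmp/umask-probe.*'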

    Read the article

  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on the local server, show any differences from the files with the same absolute path on a remote server; e.g. compare the local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times, and I do not want to sync the files (yet); only show the diffs.

    I have rsync 3.0.6 on both local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where:

        "me" = my username; "remote" = remote server hostname
        current working directory is '/'
        test.txt contains one line reading "/etc/init.d/apache"
        OS: Linux 2.6.9

    Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, the rsync output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks /etc is identical except for mod time, /etc/init.d needs to be changed, and /etc/init.d/apache will be sent to the remote server.

    I don't understand how, with the --checksum option and the files having identical checksums, rsync can think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it's run remotely in /, so even relative pathnames will still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable.

    I'm wondering: is rsync still looking at file mod times, even with --checksum? Am I somehow not setting up the path(s) right? What am I doing wrong?
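    One detail worth noting: in itemized output, a run of plus signs (<f+++++++++) normally means the receiver does not have that file at the path being compared at all, i.e. rsync treats it as newly created rather than changed, which points at the path handling (--files-from plus the --rsync-path "cd /" trick) rather than at the checksum logic. A couple of probes that keep the checksum question separate from the path question ("me@remote" as in the question, md5sum assumed to exist on both ends):

        # Compare checksums of the exact path on both ends:
        md5sum /etc/init.d/apache
        ssh me@remote md5sum /etc/init.d/apache

        # Same dry-run comparison, but single file, no --files-from and no --rsync-path:
        rsync -ni --checksum /etc/init.d/apache me@remote:/etc/init.d/apache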

    Read the article

  • Local Apache on Windows XP not finishing page requests

    - by asgeo1
    I have Apache 2.2.11 installed locally on my Windows XP (SP3) dev machine, which I setup about 3 months ago. I have just started having a strange problem in the last few week. Apache is serving some basic PHP applications like phpMyAdmin. When I make a page request, Apache appears to not finish serving all resources for that page. Firefox shows the "Transferring data from servername..." message, and the page never completes. The same problem happens in Internet Explorer too. I can sometimes tell which resource it is waiting on, because most of the page will render except for some image or similar resources. (Not sure why Firebug doesn't show this) It doesn't have the problem every page request - for page requests where most of the resources are cached in my browser, the page request will work with no problems. Or pages that are very light will work with no problems. However, if I "hard" refresh the page, I will have this problem (probably because it is requesting all page resources) Does anyone know what this could be? It is so strange that it has only just started happening - and I did not make any changes to my system (that I am aware of) I tried playing with the Apache ThreadsPerChild setting, but it did not seem to make a difference. UPDATE: I have been doing some more tests. I have been serving the most basic of pages, just a plain HTML file: <html> <body> <h1>testing</h1> </body> </html> If I request this page multiple times in a row, AND each request occurs immediately after the previous has completed, then 50% of the time the request will time out. However, if I put a 1-2 second gap between requests, then there is no problem. This correlates to what I have observed when the brower requests a real application page. When the browser has nothing cached, then all of the page resources are requested from the browser in a short amount of time - this appears to trigger the problem. UPDATE2: Nathan Long has helped me understand the issue a little better with the server-status page (see below). It is weird, it is like the server has a hickup sending data to the client. The client sits there waiting forever for data that never arrives. Closing the client process does not terminate the connection on the server - the server still has active threads for each previously attempted connection, but they just sit there - not sending any data and never terminating. (even though the client is now closed) Only a restart of the server seems to terminate them.

    Read the article

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: Separate zones by query source network and return different records for LAN clients compared to WAN clients. I've implemented this at home on a small alix router with Bind 9.4. One view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records. Querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN as it only existed in the WAN view. Now I'm trying to re-implement this on a larger scale and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list and they suggest I copy all my views into my LAN view. I'm hoping someone here has a better solution because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible. My configuration at home resembles this. acl lanClients { 192.168.22.0/24; 127.0.0.1; }; view "intranet" { match-clients { lanClients; }; recursion yes; notify no; // Standard zones // zone "." { type hint; file "etc/root.hint"; }; zone "domain1.tld" { type master; file "intranet/domain1.tld"; }; }; view "internet" { match-clients { !localnets; any; }; recursion no; allow-transfer { slaveDNS; }; include "master.zones"; }; Requests from the LAN for domain1.tld give local records, requests from the WAN give remote records. This works fine both at home and in my new Bind 9.7 on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records from domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with Bind 9.7 I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration for Bind 9.7.
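    Since match-clients is evaluated against the query's source address, one simple sanity check is to ask for the same names from a host on each side and compare the answers; the server addresses below are placeholders for the DNS server as seen from the LAN and from outside.

        # From a host on 192.168.22.0/24 (should land in the "intranet" view):
        dig +short @192.168.22.1 domain1.tld
        dig @192.168.22.1 domain2.tld          # a zone that exists only in the other view

        # From an external host (should land in the "internet" view):
        dig +short @203.0.113.10 domain1.tld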

    Read the article

  • Daemon process exiting when shell closes

    - by Pace
    I have a script which starts a daemon process and then sleeps for 20 seconds. If I run the script on SLES11 SP1 or RHEL6, then after the script exits the process is still running. If I run the script on SLES11 SP3 or RHEL6.3, the process is no longer running after the script exits: it continues to run for the entire 20-second sleep and is killed when the script exits. The script is run via expect, so the script's entire shell exits with it. Obviously if this wasn't a daemon it was starting I wouldn't be surprised. Also, I suspect the problem isn't the OS version as much as a difference in the way we've set up the newer servers (no idea what those differences are, though; the older servers were set up years ago).

    During the 20 seconds the process runs, if I do a ps I get the following:

        root   4699    1  0 15:14 pts/2 00:00:00 sudo -u openmq /opt/PacketPortal/openmq/default/bin/imqbrokerd -bgnd -autorestart -silent -port 7676 -Dimq.service.activelist=admin,ssljms -D
        openmq 4701 4699  0 15:14 pts/2 00:00:00 /bin/sh /opt/PacketPortal/openmq/default/bin/imqbrokerd -bgnd -autorestart -silent -port 7676 -Dimq.service.activelist=admin,ssljms -Dimq.ssl

    The fact that the parent process of 4699 is 1 seems to suggest that the process has been correctly daemonized. However, after the expect script exits, both 4699 and 4701 are killed. What could be causing this?

    UPDATE: I've printed the same output on the servers that work. During the 20-second sleep I get:

        openmq 18652     1  0 15:44 pts/1 00:00:00 /bin/sh /opt/PacketPortal/openmq/default/bin/imqbrokerd -bgnd -autorestart -silent -port 7676 -Dimq.service.activelist=admin,ssljms -Dimq.ssljms.tls.port=7680
        openmq 18686 18652  8 15:44 pts/1 00:00:02 /usr/java/latest/bin/java -cp /opt/PacketPortal/openmq/default/bin/../lib/imqbroker.jar:/opt/PacketPortal/openmq/default/bin/../lib/imqutil.jar:/opt/PacketPortal/ope

    After the 20-second sleep I get:

        openmq 18652     1  0 15:44 ? 00:00:00 /bin/sh /opt/PacketPortal/openmq/default/bin/imqbrokerd -bgnd -autorestart -silent -port 7676 -Dimq.service.activelist=admin,ssljms -Dimq.ssljms.tls.port=7680
        openmq 18686 18652  5 15:44 ? 00:00:02 /usr/java/latest/bin/java -cp /opt/PacketPortal/openmq/default/bin/../lib/imqbroker.jar:/opt/PacketPortal/openmq/default/bin/../lib/imqutil.jar:/opt/PacketPortal/ope

    After the script exits, the process disconnects from the controlling terminal. I wonder why it doesn't do that on the newer servers.

    UPDATE: Here is the section of the script that actually launches OpenMQ. The -bgnd flag is what is supposed to daemonize it.

        sudo -u openmq $IMQ_HOME/bin/$EXECUTABLE -bgnd $BROKER_OPTIONS $ARGS > /dev/null 2>&1 &
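    Whatever changed between the two builds, one workaround sketch is to detach the broker from the controlling terminal explicitly instead of relying on -bgnd alone; setsid (or nohup) plus redirecting stdin is the usual recipe for surviving the death of the spawning expect session. Variable names are the ones from the launch line above.

        # New session, no controlling terminal, stdin detached:
        setsid sudo -u openmq "$IMQ_HOME/bin/$EXECUTABLE" -bgnd $BROKER_OPTIONS $ARGS \
            < /dev/null > /dev/null 2>&1 &

        # Or, staying closer to the original line:
        nohup sudo -u openmq "$IMQ_HOME/bin/$EXECUTABLE" -bgnd $BROKER_OPTIONS $ARGS \
            < /dev/null > /dev/null 2>&1 &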

    Read the article

  • Mixed sessions with Classic ASP on IIS 7.5 and Windows 2008 R2 64 bit

    - by Marcin
    Recently had an issues with a server upgrade from IIS 6 on Windows 2003 to IIS 7.5 on Windows 2008 R2 64 bit. We have a number of websites running on Classic ASP. All the sites sit under a particular site, e.g. www.example.com/foo and www.example.com/foobar. On IIS 6 each site was set up as a virtual directory and things worked fine. Since moving to the new set up, a lot of websites seem to have mixed Sessions. To be clear, this is not a app pool recycling issue; rather the sessions are populated with information when the user hits the site and while browsing they get sessions from different sites. We've determined this based on - a few customers called up and reported having their shopping cart with items with names of items belonging to a different site - also our own testing showed that some queries being run would try to bring products in from a different site We've tried - disabling dynamic caching - converting each site to be a virtual application (if I understand correctly, the virtual directory/application concepts were changed/refined somewhat in IIS 7 although to be honest, I'm not clear what the difference is) - various application pool changes (using .NET 2 framework), classic and integrated modes, changing the Process model to NetworkIdentity), all to no avail. The only thing we haven't tried is changing it to run as a 32 bit application. We're not using http only cookies, so when I open up a browser and type document.cookie into the dev console in Firefox/Chrome/IE that there will be multiple ASPSESSIONID=... values whereas previously I believe there was only one. Finally, we use server side JScript for the classic ASP pages, not VBScript, so we have code similar to the below. //the user's login account as a jscript object Session("user") = { email : "[email protected]", id : 123 }; and if we execute a line of code like below: Response.Write( typeof(Session("user")) ); When things are running correctly, we get "object" - as expected. When the Session gets trashed, the output is "unknown" and we are also unable to access the fields within the JScript object (e.g. the .email or .id fields). Much appreciated if anyone can provide any pointers about how to resolve this, everything on google seems to point to different issues.

    Read the article

  • Very slow printing from print server

    - by evolvd
    Print server is a VM on Xen The VM is Windows 2003 32bit. During the issue the VM is not being taxed in anyway, cpu, memory, hd read/write, and network speed is all good. The problem that I see is the transfer of the print file from the print server to the printer. The 80Mb file is transferred from the client to the print server in about 2 minutes but then it takes about 2 hours for that file to be sent to the printer. I can't figure out why this would just start to happen. The printer is rebooted every evening and is just used for one large print job in the morning. The server has been rebooted with no effect I changed the spool option to send the entire spool to the server before printing starts and it had no effect. This printer problem did happen to come about after some changes to the Xen environment. The Xen servers changed from using HBA NIC cards to software iscsi and a new switch was put in. I don't think this is related to the problem since all the speeds on the VMs are better now. The changed happened on Saturday and the first print to this printer happened on Monday morning. I'm just putting that out there but like I said I don't think it is related but I don't want to rule it out. At this point I don't have many other options besides the physical layer. I can switch out network cable that goes to the printer and I might be able to print the same job to another printer. I wont be able to test those things out till this afternoon though. Any other ideas or test I could do to try to find the reason for the slow speed? I forgot to say that this is only happening when printing to this one printer. ===Update=== I found out that there are a few printers that currently have this issue, not just the one. There are over 30 printers on the server though so I know it's not happening to all of them. I printed a large pdf doc from the server and it was able to print at the normal speed. If the machine sends the large print request it gets to the server fine but then slow to get from the server to the printer. If sent directly from the printer it gets to the printer at the normal speed. The question now is why is there a speed difference when it comes from the machine and why would it start now?

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web-serving experience than Apache.

        PHP 5.4.6 (PHP54w)
        CentOS 6.2
        Joomla 2.5.6
        PHP54w-fpm.i386 (FastCGI process manager)
        php -m shows: mysql & mysqli modules loaded

    Nginx seems to have installed fine via yum. It can process a PHP-info file via FastCGI perfectly OK (http://37.128.190.241/php.php), but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli, but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available", of course! Can anyone think what the problem might be, please?

    I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference.

    I have a pretty standard nginx default.conf:

        server {
            listen 80 default_server;
            server_name www.MYDOMAIN.com;
            server_name_in_redirect off;
            access_log /var/log/nginx/localhost.access_log main;
            error_log /var/log/nginx/localhost.error_log info;
            root /var/www/html/MYROOT_DIR;
            index index.php index.html index.htm default.html default.htm;

            # Support Clean (aka Search Engine Friendly) URLs
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            # deny running scripts inside writable directories
            location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
                return 403;
                error_page 403 /403_error.html;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
            }

            # caching of files
            location ~* \.(ico|pdf|flv)$ {
                expires 1y;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
                expires 14d;
            }
        }

    Snip of output from phpinfo under nginx:

        Server API                                 FPM/FastCGI
        Virtual Directory Support                  disabled
        Configuration File (php.ini) Path          /etc
        Loaded Configuration File                  /etc/php.ini
        Scan this dir for additional .ini files    /etc/php.d
        Additional .ini files parsed               /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini,
                                                   /etc/php.d/phar.ini, /etc/php.d/zip.ini

    Snip of output from phpinfo under Apache:

        Server API                                 Apache 2.0 Handler
        Virtual Directory Support                  disabled
        Configuration File (php.ini) Path          /etc
        Loaded Configuration File                  /etc/php.ini
        Scan this dir for additional .ini files    /etc/php.d
        Additional .ini files parsed               /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini,
                                                   /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini,
                                                   /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini,
                                                   /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini

    It seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to MySQL (mysql.ini, mysqli.ini, pdo_mysql.ini), than under nginx. Any ideas how I get nginx to also load these additional .ini's?

    Thanks in advance, Steve
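    Both SAPIs scan the same /etc/php.d directory, but php-fpm only reads it at startup, so a sketch of what to check from the shell: that the php54w MySQL sub-package owning those .ini files is actually installed, and that FPM has been restarted since anything changed. The service name php-fpm is an assumption for the php54w-fpm package.

        # Which php54w packages are installed, and which one owns the mysqli ini file?
        rpm -qa | grep -i php54w | sort
        rpm -qf /etc/php.d/mysqli.ini

        # What the CLI loads (already known to include mysql/mysqli):
        php -m | grep -i mysql

        # FPM re-reads php.ini and /etc/php.d only on restart:
        service php-fpm restart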

    Read the article

  • Google Chrome issues

    - by Ben Hooper
    I'm an avid user of Chrome and would defend it to the death but there is no denying that it has issues, and I've been experiencing a lot of them recently. Issues Stuck tabs Issue: Around 60% of the time (higher when the system has just started), when Chrome is launched, some or all of the pinned tabs and startup pages will get stuck in the counter-clockwise-rotating "contacting server" mode and never snap out of it. Fix: Quit and launch Chrome until they load properly (this really isn't good, as quitting and launching Chrome can and has cleared all pinned tabs). Extra Information: If you stop the loading and re-enter the URL then the page will load perfectly. The amount of pinned tabs or startup pages seems to be irrelevant, but I could be wrong.   "Downloaded/out of" Issue: Each item in the download bar has a downloaded status section, formatted as "downloaded/out of". Sometimes Chrome doesn't display the "out of" part.   Collapsed settings windows Issue: In Chrome version 19(ish)+, settings are configured via overlayed / popup windows. Sometimes, the window will open fully collapsed. Fix: Resize Chrome's window or open the developer tools. "Network Error" with large downloads Issue: Sometimes, when downloading large files (500MB+) Chrome will download the entire file, the download status will freeze (for example, "1 GB/1 GB") for a few minutes, report "Network error", and delete the .crdownload file. Extra Information: The same file from the same website on the same computer downloads perfectly in other browsers. The website and file type seem to be irrelevant.   Information: I've experienced some of these issues on my home PC and some on my work PC, both of which are Windows 7 Ultimate x64. The only things that link them are my Google account (all settings are synced). Updating Chrome hasn't worked. Most of these issues presented themselves around about version 17 and have continued right through to 21 (current). Uninstalling Chrome, deleting all data in %programFiles(x86)%\Google and %localAppData%\Google, and reinstalling hasn't worked. I have yet to see whether disabling all extensions would make a difference, but it's hard to diagnose as these issues don't occur 100% of the time.   In my case, I don't know if there's an actual solution. I'm just curious as to whether anyone else is having similar issues to those that I'm experiencing. At least then I'll know it's an issue with Chrome itself.

    Read the article

  • mac + parallels and https site test = router restarts

    - by Erik
    Ok I have an interesting & very frustrating problem happening. I'm going to explain it the best I can. I work as a graphic designer & web designer on a mac and have a Comcast internet connection that comes through a Comcast branded router (SMC8014) which then ties into an Airport Extreme Base Station which runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. Ok, so that the basic background. Here's my issue. I've been collaborating an ecommerce website with another designer/developer and when testing the site on the PC side we have started to run into some sort of network problem. The site is https if that matters at all I suspect it may. Basically when I run parallels for testing I am constantly having to restart the router in order to connect to the test site (it's hosted). Funny thing is I can access the rest of the internet fine, just not this site I'm working on until I restart the router (It's sorta like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making css or HTML edits via something like Coda or CSS edit (connected to the hosting server via ftp). The real problem is that once the problem starts I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes. So, if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple miles away and experiences very similar problems under slightly different setup. He also has Comcast as his internet provider and connects his router to an Airport and primarily works on a mac. The main difference is that rather than using a visualizer like parallels to test the website on the PC he uses a real live PC that is on his network. Once he fire up the PC to do testing he runs into the same issue described above. After a couple of page refreshes in Internet Explorer or other browser on the PC the site becomes unresponsive and the router has to get restarted. Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • Plesk Postfix Mail Server 9.5.4 very heavy load, 1000s of processes

    - by Eugene van der Merwe
    Our Plesk Linux Ubuntu 64-bit mail server has extremely high load and we don't know how to isolate it. The load was okay will two weeks ago but in the last two weeks it's seriously deteriorated. The mail server has been running for years and we have had sporadic performance issues. Normally we reduce the load by turning off all SPAM checks until the problem is sorted (which sometimes resolves itself). Currently we have turned of real time block lists, SPF checking and we have attempted to turn off SpamAssassin. No matter what we do the SpamAssassin check box stays ticked in the GUI. Out of desperation we have done /etc/init.d/psa-spamassassin stop. For years we haven't been able to do SpamAssassin because it kills the server. We would like to use it but performance is more important for now. We cannot turn off Greylisting. The moment we turn off Greylisting our help desk is inandated with calls. Out of desperation we investigated truncating the Greylisting database which is now 2.5 GB big but we abandoned this after noticing turning of Greylisting doesn't improve the performance at all. We have no anti-virus. It's just more load and Dr. Web never really worked that well for us. But we'll try that if it will make a difference. We have implemented Postfix Anvil. This seems to have made the situation worse so we disabled it. We’re not sure if this is the case. Our current mail server is configured to forward all SMTP to a relay server. We did so to reduce the load. This helped a lot because outgoing queues are generally empty. We are running in an Expand configuration. The mail server has about 12 000 accounts of which maybe half are active. We have read through this document: http://www.postfix.org/STRESS_README.html but there are too many settings and we don’t know which ones to choose. Please assist urgently. We need advice on how to fix this problem before all our clients abandon is. The only clue we have is that there are 100s of these processes: 30 13205 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue 30 13207 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue 30 13208 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote 30 13209 1 0 11:38 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10026 before-remote 30 13213 1 0 13:18 ? 00:00:00 /usr/lib/plesk-9.0/postfix-queue 127.0.0.1 10027 before-queue
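    Before tuning Postfix itself, it may help to quantify the backlog and the number of those Plesk before-queue/before-remote handlers, and to capture the current non-default settings for comparison with the STRESS_README suggestions. These are diagnostic commands only; qshape may need the postfix documentation/tools package to be installed.

        # Queue size and per-domain ageing:
        postqueue -p | tail -n 1
        qshape deferred | head

        # How many Plesk queue handlers are running right now?
        ps -ef | grep -c '[p]ostfix-queue'

        # Non-default Postfix parameters currently in effect:
        postconf -n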

    Read the article

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are form a AD domain (not local users). In the properties of a Website, I have set to use the AD domain as the realm. Now, when using Firefox, Safari or Chrome - Everything is fine. When the user tries to open the site, he get's the login box. he enters simply "username" and "password" (let's pretend that it's an actual login and password :P) and he get's into the site. When using IE, however, things get nasty. When the user tries to open the site - he get's the login box. User enters the "username" and "password" again, but those get rejected! And when the second time login box pops up - it has the username filled in as "web-server-domain-name\username" which is wrong, because web-server-domain-name is not the domain where all users reside (it's "ad-domain"). I've spent days trying to figure out what's going on... Note, that if I manually enter "ad-domain\username" - I get accepted into the site without problems. So, my guess is that IE sends wrong username if domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side - is there a client-side fix for this? Thank you. PS: I'm more of a programmer than a sys-admin, so configuring servers isn't the strong side of mine... :P UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: IIS version is 6.0. The authentication methods enabled are: Integrated and digest - all other methods are disabled. As for the security log. I looked at it, when doing "username" and "password" login in Chrome/Firefox and when doing "ad-domain\username" and "password" login from IE - the generated log messages are the same (I see no difference, anyway). When entering "username" and "password" I don't see any errors in the security (or any other) log, so can't tell what method it's trying to use. UPDATE 2: As suggested by Eric in the comments - I played around with Fiddler... While playing with it, I noticed, that when "username" and "password" is entered in FF and IE - the "Authorization" header value (encrypted) sent by IE is longer (almost two times) than one sent by FF. I tried to disable Windows Integrated authentication and only leave the Digest enabled - that fixed the problem (meaning, IE used the right realm just like other browsers), but that caused bazillion other problems with my site, because with Digest - user impersonation on the server doesn't work (that causes problems, when connecting to database etc). Any ideas?

    Read the article

  • SQL Clustering on Hyper-V - is a cluster within a cluster a benefit?

    - by Chris W
    This is a re-hash of a question I asked a while back - after a consultant has come in firing ideas in to other teams in the department the whole issue has been raised again hence I'm looking for more detailed answers. We're intending to set-up a multi-instance SQL Cluster across a number of physical blades which will run a variety of different systems across each SQL instance. In general use there will be one virtual SQL instance running on each VM host. Again, in general operation each VM host will run on a dedicated underlying blade. The set-up should give us lots of flexibility for maintenance of any individual VM or underlying blade with all the SQL instances able to fail over as required. My original plan had been to do the following: Install 2008 R2 on each blade Add Hyper V to each blade Install a 2008 R2 VM to each blade Within the VMs - create a failover cluster and then install SQL Server clustering. The consultant has suggested that we instead do the following: Install 2008 R2 on each blade Add Hyper V to each blade Install a 2008 R2 VM to each blade Create a cluster on the HOST machines which will host all the VMs. Within the VMs - create a failover cluster and then install SQL Server clustering. The big difference is the addition of step 4 whereby we cluster all of the guest VMs as well. The argument is that it improves maintenance further since we have no ties at all between the SQL cluster and physical hardware. We can in theory live migrate the guest VMs around the hosts without affecting the SQL cluster at all so we for routine maintenance physical blades we move the SQL cluster around without interruption and without needing to failover. It sounds like a nice idea but I've not come across anything on the internet where people say they've done this and it works OK. Can I actually do the live migrations of the guests without the SQL Cluster hosted within them getting upset? Does anyone have any experience of this set up, good or bad? Are there some pros and cons that I've not considered? I appreciate that mirroring is also a valuable option to consider - in this case we're favouring clustering since it will do the whole of each instance and we have a good number of databases. Some DBs are for lumbering 3rd party systems that may not even work kindly with mirroring (and my understanding of clustering is that fail overs are completely transparent to the clients). Thanks.

    Read the article

  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is to do with users getting a Cannot Send Item error in Outlook 2007 connecting to Exchange 2007. Basically if there is an image in the email (either one they have pasted in or one from another email in the chain) they get a "Cannot Send Item" error. Initially thought it was a citrix issue but users get it when they RDP to a server as well. Changing the message to Rich Text works 80% of the time but I do not think this is a solution but more of a temporary workaround. After some troubleshooting we found that the error can be fixed by adding the user as a member of the local power users group. of course this is not really a fix. My thoughts were that the ability of a power user to add/remove software may give them more access to the registry which might allow them to get round a restriction that is in place for a normal user. I have tried going through a procmon but the wealth of information is confusing. It initially looked like it may be an Outlook 2007 email security setting but this does not change between power user and normal user (set to 1 in the registry, "Use the security setting from Outlook Security Settings Public Folders"). I am struggling to fine tune my troubleshooting to work out exactly what is blocking it. Has anyone had an experience with an error similar to this? Or are there any tips for trying to track down issues via procmon as I must admit my approach seems somewhat lacking :) EDIT: So I have trawled through the two logs we have from process monitor (one as a power user and one a normal user). annoyingly I can find no obvious difference where something is denied access. There are more access denied events in the normal user log but these are quickly followed by sucessful entries to the same path fractions of a second later. The only thing that does stand out is an access denied to HKCR.html. This does not even appear in the power user version of the log. From what I understand this helps determine the default browser which ties in nicely with the fact that 9 out of 10 times you can send the message as Rich Text. EDIT: Looks like KB2509470 was causing the issue. Not really sure why but when I can work out what it does and why it causes the problem will post here unless anyone beats me to it!

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds/thousands of nodes. What are your best practices to get the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea what kind of answers I would appreciate here are some sub-questions (with links): How to you tune the parameters (N, NB, P, Q, memory-alignment, etc) for the HPL.dat file (without spending too much time trying each possible permutation - esp with large problem sizes N)? Are there any Top500 submission rules to be aware of? What is allowed, what isn't? Which MPI product, which version? Does it make a difference? Any special host order in your MPI machine file? Do you use CPU pinning? How to you configure your interconnect? Which interconnect? Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.) How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes (or is extrapolation allowed)? How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA? Is it worth it to recompile the software stack or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?) How to get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever) How do you prepare the individual nodes (stopping system daemons, freeing memory, etc)? How do you deal with hardware faults (ruining a huge run)? Are there any must-read documents or websites about this topic? E.g. I would love to hear about some background stories of some of the current Top500 systems and how they did their LINPACK benchmark. I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints e.g. for specific CPU models.
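    On the HPL.dat side, one widely used starting point is to size N so the matrix fills roughly 80% of total memory (each double-precision element is 8 bytes), keep NB somewhere around 128-256, and choose P x Q equal to the number of MPI ranks with P <= Q. A back-of-the-envelope calculation, with node count and memory as placeholders:

        # N ~ sqrt(0.80 * total_memory_bytes / 8)
        NODES=512            # placeholder: compute nodes used for the run
        MEM_PER_NODE_GB=64   # placeholder: RAM per node
        awk -v n="$NODES" -v m="$MEM_PER_NODE_GB" 'BEGIN {
            total = n * m * 1024 * 1024 * 1024
            printf "suggested N: %d\n", int(sqrt(0.80 * total / 8))
        }'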

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to setup Wordpress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. As many other people, I am getting stuck in the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get: http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/ http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules It is partially working: http://blog.ssis.edu.vn works. http://blog.ssis.edu.vn/umasse/ works. But other permalinks, like these two to a post or to a static page, don't work: http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/ http://blog.ssis.edu.vn/umasse/sample-page/ They either take you to a 404 error, or to some other blog! Here is my configuration: server { listen 80 default_server; server_name blog.ssis.edu.vn; root /var/www; access_log /var/log/nginx/blog-access.log; error_log /var/log/nginx/blog-error.log; location / { index index.php; try_files $uri $uri/ /index.php?$args; } # Add trailing slash to */wp-admin requests. rewrite /wp-admin$ $scheme://$host$uri/ permanent; # Add trailing slash to */username requests rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent; # Directives to send expires headers and turn off 404 error logging. location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { expires 24h; log_not_found off; } # this prevents hidden files (beginning with a period) from being served location ~ /\. { access_log off; log_not_found off; deny all; } # Pass uploaded files to wp-includes/ms-files.php. rewrite /files/$ /index.php last; if ($uri !~ wp-content/plugins) { rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last; } # Rewrite multisite '.../wp-.*' and '.../*.php'. if (!-e $request_filename) { rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last; rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last; rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last; } location ~ \.php$ { # Forbid PHP on upload dirs if ($uri ~ "uploads") { return 403; } client_max_body_size 25M; try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } } Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.
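
    Two things that tend to make this easier to debug, offered as a sketch rather than a fix: nginx's rewrite_log directive (add "rewrite_log on;" to the server block and raise the error_log level to notice) shows in the error log which rewrite rule fires for each request, and a small curl loop shows which URLs come back as 200, 404 or a redirect after every config change. The URLs below are simply the ones quoted above.

        #!/bin/bash
        # Smoke-test the permalinks after each config change
        BASE="http://blog.ssis.edu.vn"
        for path in "/" "/umasse/" "/umasse/2008/12/12/hello-world-2/" "/umasse/sample-page/"; do
            code=$(curl -s -o /dev/null -w "%{http_code}" "$BASE$path")
            echo "$code  $BASE$path"
        done

    Comparing the output before and after editing the multisite rewrite block at least shows whether a change helped, without clicking through the blogs by hand.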

    Read the article

  • Default permission for newly-created files/folders using ACLs not respected by commands like "unzip"

    - by Ngoc Pham
    I am having trouble setting up a system for multiple users accessing the same set of files. I've read tutorials and docs and played with ACLs but haven't succeeded yet. MY SCENARIO: I have multiple users, for example user1 and user2, which belong to a group called sharedusers. They must all have WRITE permission to the same set of files and directories, say under /userdata/sharing/. I have the folder's group set to sharedusers and the SGID bit set so that all newly created files/dirs inside get the same group. ubuntu@home:/userdata$ ll drwxr-sr-x 2 ubuntu sharedusers 4096 Nov 24 03:51 sharing/ I set ACLs on this directory so that the permissions of sub-dirs/files are inherited from the parent. ubuntu@home:/userdata$ setfacl -m group:sharedusers:rwx sharing/ ubuntu@home:/userdata$ setfacl -d -m group:sharedusers:rwx sharing/ Here's what I've got: ubuntu@home:/userdata$ getfacl sharing/ # file: sharing/ # owner: ubuntu # group: sharedusers # flags: -s- user::rwx group::r-x group:sharedusers:rwx mask::rwx other::r-x default:user::rwx default:group::r-x default:group:sharedusers:rwx default:mask::rwx default:other::r-x Seems okay: when I create a new folder with new files inside, the permissions are correct. ubuntu@home:/userdata/sharing$ mkdir a && cd a ubuntu@home:/userdata/sharing/a$ touch a_test ubuntu@home:/userdata/sharing/a$ getfacl a_test # file: a_test # owner: ubuntu # group: sharedusers user::rw- group::r-x #effective:r-- group:sharedusers:rwx #effective:rw- mask::rw- other::r-- As you can see, the sharedusers group has effective permission rw-. HOWEVER, if I have a zip file and use the unzip -q command to unzip it inside the sharing folder, the extracted folders don't have group write permission. Therefore, users from the sharedusers group cannot modify files under those extracted folders. ubuntu@home:/userdata/sharing$ unzip -q Joomla_3.0.2-Stable-Full_Package.zip ubuntu@home:/userdata/sharing$ ll drwxrwsr-x+ 2 ubuntu sharedusers 4096 Nov 24 04:00 a/ drwxr-xr-x+ 10 ubuntu sharedusers 4096 Nov 7 01:52 administrator/ drwxr-xr-x+ 13 ubuntu sharedusers 4096 Nov 7 01:52 components/ You can spot the difference in permissions between folder a (created earlier) and folder administrator (extracted by unzip). And the ACLs of a file inside administrator: ubuntu@home:/userdata/sharing$ getfacl administrator/index.php # file: administrator/index.php # owner: ubuntu # group: ubuntu user::rw- group::r-x #effective:r-- group:sharedusers:rwx #effective:r-- mask::r-- other::r-- It also has the ubuntu group, not the sharedusers group as expected. Could someone please explain the problem and give me advice? Thank you in advance!
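
    As far as I can tell, the likely cause is that unzip restores the permission bits stored in the archive (effectively running chmod on each extracted entry), and a chmod on a file that has an ACL rewrites the ACL mask, so the inherited group:sharedusers:rwx entry loses its effective rights. A hedged workaround - re-applying the ACLs after extraction rather than preventing the problem - might look like the sketch below; the paths and archive name are the ones from the question.

        #!/bin/bash
        # Extract, then put the access and default ACLs back on the tree
        cd /userdata/sharing
        unzip -q Joomla_3.0.2-Stable-Full_Package.zip
        # Re-add the named group entry; setfacl also recalculates the mask to rwx
        setfacl -R -m group:sharedusers:rwx /userdata/sharing
        # Default ACLs only apply to directories, so set those via find
        find /userdata/sharing -type d -exec setfacl -d -m group:sharedusers:rwx {} +

    This does not stop the problem recurring with the next archive, but it restores the expected effective rights for the sharedusers group on everything already extracted.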

    Read the article

  • Creating a network link between 2 very close buildings

    - by Daniel Johnson
    I have a charity who have two adjacent medium-sized modern detached houses (in the UK): the buildings stand next to each other and are less than 5 metres apart. They have DSL connected to a single computer in one of the buildings. They want to add a network with wireless, and want it to work across both buildings. Being a charity, they need to keep costs down. The network would be used for sharing Word documents, e-mail, browsing and Skype calls. My initial thought was to connect the buildings with fibre. So: Option 1 Use fibre between the buildings. Sufficient cable and two TP-LINK MC100CM Fast Ethernet Media Converters. Cost ~£80.00. But there is the extra cost and hassle of running the cable down and up the external walls, lifting and relaying paving, and burying it underground. Never having fitted fibre, I'm also a little worried about going up the wall and then bending the cable at 90 degrees to go through the wall and into the building. Option 2 Use two TP-Link TL-WA7510N High Powered Outdoor 5GHz 15dBi wireless antennas to connect the buildings. There is a clear line of sight at first-floor level. Cost ~£100. And much easier to fit than fibre! Is using the TL-WA7510Ns overkill? Is there something more suitable? I had hoped to use some Netgear kit, e.g. two DGN2200s, one in each house, and also use them to provide the wireless link between the buildings. However, in bridge mode wireless client association is not available, and repeater mode with client association only supports WEP security, which isn't strong enough. Is there something similar that would be up to the job? Option 3 Connect the buildings with UTP cable. My concerns here are the risk of electric shock due to a difference of potential between the buildings (or are they so close that this shouldn't be an issue?) and protection from lightning strikes. Is fitting lightning arrestors expensive? And what can be done to mitigate the risk of shock? This all falls outside my area of expertise, so I would really appreciate some advice.

    Read the article
