Search Results

Search found 39614 results on 1585 pages for 'value chain planning'.

Page 316/1585 | < Previous Page | 312 313 314 315 316 317 318 319 320 321 322 323  | Next Page >

  • Why doesn't an environment variable get updated in cmd without a restart?

    - by John Nevermore
    CMD commands: setx SOMEVARIABLE "newpath" /M and setx SOMEVARIABLE "%SOMEVARIABLE%;newpath2" /M. Expected output of ECHO %SOMEVARIABLE%: newpath;newpath2. Actual output: %SOMEVARIABLE%. Actual value stored (from the System Properties > Environment Variables GUI): %SOMEVARIABLE%;newpath2. The only way I can get the expected output is to restart the command prompt every time I modify the environment variable. I'm using these commands to automate appending to an environment variable's value several times during the same process. Why doesn't an environment variable get updated in cmd without a restart? Is it possible to get the updated value of %SOMEVARIABLE% without restarting the command prompt?
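    A hedged sketch of the usual workaround, assuming plain cmd.exe: setx only writes the new value to the registry for future shells, so the running shell also needs a plain set for its own copy of the environment (the variable name is the one from the question, the path purely illustrative).
      rem persist the change for future command prompts (HKLM because of /M)
      setx SOMEVARIABLE "%SOMEVARIABLE%;newpath2" /M
      rem update the copy held by the current cmd session as well
      set "SOMEVARIABLE=%SOMEVARIABLE%;newpath2"
      echo %SOMEVARIABLE%
    The set line never touches the registry; it only patches the session that runs it, which is why both commands are needed when appending repeatedly in one process.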

    Read the article

  • php crashes with no core file and this message: apc_mmap failed

    - by greg0ire
    Description of the problem Regularly, cron php processes crash on our production server, which result in mails with the following body : PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 Segmentation fault (core dumped) I think the Segmentation fault (core dumped) should result in core files being handled by apport and then written in /var/crashes, but the files I can see there are there since yesterday, although the last crash occured today : -rw-r----- 1 root whoopsie 1138528 mai 22 04:09 _usr_bin_php5.0.crash -rw-r----- 1 frontoffice whoopsie 1166373 mai 20 18:00 _usr_bin_php5.1005.crash -rw-r----- 1 frontoffice whoopsie 81622658 mai 22 00:05 _usr_sbin_php5-fpm.1005.crash I tried to download the last one anyway, and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's apc configuration : cat /etc/php5/cli/conf.d/20-apc.ini extension=apc.so apc.shm_size=512M apc.ttl=3600 apc.user_ttl=3600 apc.enable_cli=1 I'm mostly worried about the apc.shm_size… isn't it too high or too low ? I understand it has to do with the size of memory segments. Question(s) What could be the problem ? How can I troubleshoot it (how can I get a valid core file ?) ? System information free total used free shared buffers cached Mem: 5081296 4354684 726612 0 374744 959968 -/+ buffers/cache: 3019972 2061324 Swap: 522236 516888 5348 cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS" php -v PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06) Copyright (c) 1997-2013 The PHP Group Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies php -i excerpt : Configuration apc APC Support => enabled Version => 3.1.13 APC Debugging => Disabled MMAP Support => Enabled MMAP File Mask => Locking type => pthread mutex Locks Serialization Support => php Revision => $Revision: 327136 $ Build Date => Nov 20 2012 18:41:36 Directive => Local Value => Master Value apc.cache_by_default => On => On apc.canonicalize => On => On apc.coredump_unmap => Off => Off apc.enable_cli => On => On apc.enabled => On => On apc.file_md5 => Off => Off apc.file_update_protection => 2 => 2 apc.filters => no value => no value apc.gc_ttl => 3600 => 3600 apc.include_once_override => Off => Off apc.lazy_classes => Off => Off apc.lazy_functions => Off => Off apc.max_file_size => 1M => 1M apc.mmap_file_mask => no value => no value apc.num_files_hint => 1000 => 1000 apc.preload_path => no value => no value apc.report_autofilter => Off => Off apc.rfc1867 => Off => Off apc.rfc1867_freq => 0 => 0 apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS apc.rfc1867_prefix => upload_ => upload_ apc.rfc1867_ttl => 3600 => 3600 apc.serializer => default => default apc.shm_segments => 1 => 1 apc.shm_size => 512M => 512M apc.shm_strings_buffer => 4M => 4M apc.slam_defense => On => On apc.stat => On => On apc.stat_ctime => Off => Off apc.ttl => 3600 => 3600 apc.use_request_time => On => On apc.user_entries_hint => 4096 => 4096 apc.user_ttl => 3600 => 3600 apc.write_lock => On => On php -m [PHP Modules] apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm 
tidy tokenizer wddx xml xmlreader xmlwriter zip zlib [Zend Modules] ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 39531 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 39531 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
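    A minimal sketch of how to get a usable core file here, assuming root access and that the ulimit output above (core file size 0) applies to the crashing processes; paths and the PID are illustrative.
      # core dumps are currently disabled (ulimit -c is 0), so raise the limit first
      ulimit -c unlimited
      # write cores to a predictable path instead of handing them to apport
      echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
      # re-run the failing cron job by hand from this shell, then open the core it leaves behind
      gdb /usr/bin/php5 /tmp/core.php5.12345
    For the php5-fpm side there is also an rlimit_core setting in the pool configuration that serves the same purpose for the daemon's children.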

    Read the article

  • Hyper-V Machine drifts time all over, even with NTP

    - by MichaelGG
    Resolved The problem was Hyper-V on that machine. I removed Hyper-V, installed VMware Server, ran the same VM. Time sync issues went away (< 100ms difference after a day). My setup is like this: HYV1 - HyperV machine (non domain) - sync irrelevant AD1 - VM AD server on HYV1, sync'd to time.nist.gov. HyperV time sync off. S1 - Physical machine, sync'd to domain. S2 - Physical machine running HyperV, sync'd to domain. V1 - Linux VM machine on S2, sync'd to AD1. No HyperV integration. AD1 and S1 have fine sync -- stripchart shows less than 100ms difference. S2 drifts like crazy. Here's a bit of the stripchart against AD1: 18:33:22 d:+00.0010138s o:+05.4101899s 18:33:24 d:+00.0010138s o:+05.4319765s 18:33:26 d:+00.0000000s o:+05.4788429s 18:33:28 d:+00.0000000s o:+05.6089942s 18:33:30 d:+00.0010138s o:+05.7240269s 18:33:32 d:+00.0000000s o:+06.0421911s 18:33:34 d:+00.0081104s o:+06.5613708s 18:33:37 d:+00.0000000s o:+06.9096594s 18:33:39 d:+00.0000000s o:+06.8867838s 18:33:41 d:+00.0010127s o:+06.8936401s In 20 seconds, it drifted over a second. If I manually reset it to within 1s, within a few minutes it'll be back drifting about 2 seconds. Overnight it went from ~2s to ~5s. The Linux VM inside S2 has perfect sync with AD1. Here's the config: C:\Users\mgg>w32tm /dumpreg /subkey:Parameters Value Name Value Type Value Data ------------------------------------------------------------ ServiceDll REG_EXPAND_SZ %systemroot%\system32\w32time.dll ServiceMain REG_SZ SvchostEntry_W32Time ServiceDllUnloadOnStop REG_DWORD 1 Type REG_SZ NT5DS NtpServer REG_SZ ad01.mydomain ad02.mydomain C:\Users\mgg>w32tm /dumpreg /subkey:Config Value Name Value Type Value Data ----------------------------------------------------------- FrequencyCorrectRate REG_DWORD 4 PollAdjustFactor REG_DWORD 5 LargePhaseOffset REG_DWORD 50000000 SpikeWatchPeriod REG_DWORD 900 LocalClockDispersion REG_DWORD 9 HoldPeriod REG_DWORD 5 PhaseCorrectRate REG_DWORD 1 UpdateInterval REG_DWORD 30000 EventLogFlags REG_DWORD 2 AnnounceFlags REG_DWORD 5 TimeJumpAuditOffset REG_DWORD 28800 MinPollInterval REG_DWORD 2 MaxPollInterval REG_DWORD 8 MaxNegPhaseCorrection REG_DWORD -1 MaxPosPhaseCorrection REG_DWORD -1 MaxAllowedPhaseOffset REG_DWORD 300 I looked at the event log, and apart from warnings about sync (after it gets way out of sync), there's no other warnings. How can I go about troubleshooting this? It's the only machine that is having this problem. All the other machines (physical and virtual) are doing fine. Edit: To clarify: The VM (AD1) has integration turned off and syncs to time.nist.gov. AD1 is fine. It's the physical machine S1 that can't sync to AD1 and drifts all over. All the other physical servers are able to sync to AD1 just fine. Update So, it appears to be an issue of running the VM. The clock slips slowly with the VM off. Turned on, it immediately starts losing seconds. I swt the VM to only use half the resources, and that seems to have slightly mitigated it, for now. Thanks!
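    A hedged starting point for the troubleshooting question, assuming w32tm's query options are available on S2 (Server 2008 or later); these commands only report and resync, they change nothing permanently.
      rem confirm which source the drifting host is actually using
      w32tm /query /source
      w32tm /query /status
      rem force a resync, then watch whether the offset immediately starts growing again
      w32tm /resync
      w32tm /stripchart /computer:ad01.mydomain /samples:10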

    Read the article

  • conditional formatting for subsequent rows or columns

    - by Trailokya Saikia
    I have data in a range of cells (say six columns and one hundred rows). The first four columns contain data and the sixth column holds a limiting value, which is different for every row; I have one hundred such rows. Conditional formatting works for the first row (e.g. cells in the first five columns containing data less than the limiting value are made red). But how do I copy this conditional formatting so that it applies to all one hundred rows, each against its own limiting value? I tried the format painter, but it keeps referring to the same source cell (the first row's limiting value) in the second and subsequent rows, so at the moment I would have to set up conditional formatting for each row separately.
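    A hedged sketch of the usual fix, assuming the limiting values sit in column F and the data block is A1:E100: select the whole block before creating the rule, then use a formula with the column anchored but the row left relative, so each row is compared against its own limit.
      Formula Is:  =A1<$F1
      Applies to:  A1:E100
    The $ in front of F pins the column, while the unanchored row number lets Excel shift the comparison down one row for each row of the selection, so a single rule covers all one hundred rows.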

    Read the article

  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command: TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'`. After this line executes, TM_LOCAL holds a value like "2012-05-16 23:18". I'd also like to check PIPESTATUS to see whether there was an error; for example, if the file does not exist, ls returns 2, but $? is 0 because it holds the return value of awk. If I run the pipeline on its own, ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }', I can check the return value of ls by looking at ${PIPESTATUS[0]}. But $PIPESTATUS does not behave as I expected when I assign the output to a variable as in the first example: in that case the $PIPESTATUS array has only one element, which is the same as $?. So the question is, how can I get both $PIPESTATUS and the output assigned to a variable at the same time?
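    A minimal bash sketch of one common workaround, assuming only the exit status of ls matters: make the command substitution itself exit with the first pipeline member's status, so that status survives the assignment.
      TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'; exit ${PIPESTATUS[0]})
      ls_status=$?   # exit status of ls, not of awk
      echo "date: $TM_LOCAL, ls returned: $ls_status"
    set -o pipefail is the other common route, if any failing member of the pipeline should make the whole assignment report failure through $?.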

    Read the article

  • Either, nginx+php-fpm bad config or nginx+php-fpm cannot handle high query?

    - by The Wolf
    I have wordpress installed in my server configured(hopefully with nginx+php-fpm+mariaDB). I am trying to import using wordpress importer a 1.5MB xml file. Everytime I try to upload it using the importer, it got cut of... meaning just blank screen result.. Here is my error log: actually I just posted 2 of the errors [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" I don't know what is the reason why it can't process the wordpress export .xml. I already increased max_file_upload & etc., but nothing happens. Hope somebody can help me. Here are my conf: nginx.conf user nginx; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; server_tokens off; keepalive_timeout 65; fastcgi_read_timeout 500; #gzip on; client_max_body_size 2M; php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. include=/etc/php-fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Default Value: none pid = /var/run/php-fpm/php-fpm.pid ; Error log file ; Default Value: /var/log/php-fpm.log error_log = /var/log/php-fpm/error.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf [root@host etc]# vim php-fpm.conf [root@host etc]# vim php-fpm.conf ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. 
; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf ps aux [root@host etc]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308] root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308] root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5 root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com. mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0 root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf) apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www apache 1040 0.0 1.8 39292 19120 ? 
S 00:30 0:00 php-fpm: pool www root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux netstat -l [root@host etc]# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost.localdomain:smtp *:* LISTEN tcp 0 0 localhost.locald:cslistener *:* LISTEN tcp 0 0 *:mysql *:* LISTEN tcp 0 0 *:http *:* LISTEN tcp 0 0 *:ssh *:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux unix 2 [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock Hope somebody can help me to figure out what is the problem.
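    A hedged first check for the 111: Connection refused errors, assuming the stock paths from the configuration above: confirm that php-fpm is really listening where nginx's fastcgi_pass points, and that it stays up while the import runs.
      # is anything listening on the address nginx forwards to?
      netstat -lntp | grep ':9000'
      # which address or socket is the www pool actually configured to listen on?
      grep -R '^listen' /etc/php-fpm.d/
      # watch the php-fpm error log while re-running the import
      tail -f /var/log/php-fpm/error.log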

    Read the article

  • Delete the pen drive contents and also the trash on a Mac?

    - by Warrior
    I am using a Mac Pro. I copied some data from my pen drive to my Mac and deleted the content on the drive by moving it to the Trash. After that, Get Info on the pen drive reports more space used than the original value; only once I empty the Trash does the pen drive show the correct value and let me copy data onto it again. Is the Mac designed to work like that, or is there another way to delete files besides the "move to Trash" option? Thanks.
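    A hedged sketch of the Terminal alternative, assuming the pen drive is mounted under /Volumes with an illustrative name; rm bypasses the Trash entirely, so the space on the drive is freed immediately (and there is no undo).
      rm -rf "/Volumes/MYUSB/old-folder"
    Otherwise the behaviour described is expected: files deleted from a removable drive are kept in a hidden .Trashes folder on that same drive until the Trash is emptied.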

    Read the article

  • Hiera data types won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and Im not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1 on SLES 11. My puppet.conf file at /etc/puppet/puppet.conf consists of: [main] # The Puppet log directory. # The default value is '$vardir/log'. logdir = /var/log/puppet # Where Puppet PID files are kept. # The default value is '$vardir/run'. rundir = /var/run/puppet # Where SSL certificates are kept. # The default value is '$confdir/ssl'. ssldir = $vardir/ssl certificate_revocation = false [master] hiera_config=/etc/puppet/hiera.yaml reporturl = http://puppet2.vvmedia.com/reports/upload ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY # certname = dev-puppetmaster2.vvmedia.com # ca_name = 'dev-puppetmaster2.vvmedia.com' # facts_terminus = rest # inventory_server = localhost # ca = false [agent] # The file in which puppetd stores a list of the classes # associated with the retrieved configuratiion. Can be loaded in # the separate ``puppet`` executable using the ``--loadclasses`` # option. # The default value is '$confdir/classes.txt'. classfile = $vardir/classes.txt # Where puppetd caches the local configuration. An # extension indicating the cache format is added automatically. # The default value is '$confdir/localconfig'. localconfig = $vardir/localconfig my /etc/puppet/hiera.yaml consists of: :backends: yaml :yaml: :datadir: /etc/puppet/hieradata :hierarchy: - common - database I have a directory created in /etc/puppet/hieradata and within it contains: /etc/puppet/hieradata/common.yaml :nameserver: ["dnsserverfoo1", "dnsserverfoo2"] :smtp_server: relay.internalfoo.com :syslog_server: syslogfoo.com :logstash_shipper: logstashfoo.com :syslog_backup_nfs: nfsfoo:/vol/logs :auth_method: ldap :manage_root: true and /etc/puppet/hieradata/database.yaml :enable_graphital: true :mysql_server_package: MySQL-server :mysql_client_package: MySQL-client :allowed_groups_login: extranet_users does anyone have any idea what could be causing Hiera to not load the requested values? I have tried even restarting the Master. Thanks in advance, Cole
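    A hedged way to narrow this down from the shell, assuming the hiera 1.x command-line tool is installed on the master; it exercises the same hiera.yaml and data files without Puppet in the way. One detail worth checking at the same time: keys written as :nameserver: in the YAML become Ruby symbols, so a lookup for the plain string nameserver would come back nil.
      # look a key up directly against the master's hiera configuration
      hiera -c /etc/puppet/hiera.yaml syslog_server
      # add -d for a debug trace showing which data files were consulted
      hiera -d -c /etc/puppet/hiera.yaml mysql_server_package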

    Read the article

  • Make errors when compiling HPL-2.1 on MOSIX-clustered Debian server

    - by tlake
    I'm trying to compile HPL 2.1 on a MOSIX-clustered Debian server, but the make process terminates with errors as seen below. Included are my makefile and two versions of output: one from a standard execution, and one from an execution run with the debug flag. Any help and guidance would be very much appreciated! The makefile: # ---------------------------------------------------------------------- # - shell -------------------------------------------------------------- # ---------------------------------------------------------------------- # SHELL = /bin/bash # CD = cd CP = cp LN_S = ln -s MKDIR = mkdir RM = /bin/rm -f TOUCH = touch # # ---------------------------------------------------------------------- # - Platform identifier ------------------------------------------------ # ---------------------------------------------------------------------- # ARCH = Linux_PII_CBLAS # # ---------------------------------------------------------------------- # - HPL Directory Structure / HPL library ------------------------------ # ---------------------------------------------------------------------- # TOPdir = $(HOME)/hpl-2.1 INCdir = $(TOPdir)/include BINdir = $(TOPdir)/bin/$(ARCH) LIBdir = $(TOPdir)/lib/$(ARCH) # HPLlib = $(LIBdir)/libhpl.a # # ---------------------------------------------------------------------- # - Message Passing library (MPI) -------------------------------------- # ---------------------------------------------------------------------- # MPinc tells the C compiler where to find the Message Passing library # header files, MPlib is defined to be the name of the library to be # used. The variable MPdir is only used for defining MPinc and MPlib. # MPdir = /usr/local MPinc = -I$(MPdir)/include MPlib = $(MPdir)/lib/libmpi.so # # ---------------------------------------------------------------------- # - Linear Algebra library (BLAS or VSIPL) ----------------------------- # ---------------------------------------------------------------------- # LAinc tells the C compiler where to find the Linear Algebra library # header files, LAlib is defined to be the name of the library to be # used. The variable LAdir is only used for defining LAinc and LAlib. # LAdir = $(HOME)/CBLAS/lib LAinc = LAlib = $(LAdir)/cblas_LINUX.a # # ---------------------------------------------------------------------- # - F77 / C interface -------------------------------------------------- # ---------------------------------------------------------------------- # You can skip this section if and only if you are not planning to use # a BLAS library featuring a Fortran 77 interface. Otherwise, it is # necessary to fill out the F2CDEFS variable with the appropriate # options. **One and only one** option should be chosen in **each** of # the 3 following categories: # # 1) name space (How C calls a Fortran 77 routine) # # -DAdd_ : all lower case and a suffixed underscore (Suns, # Intel, ...), [default] # -DNoChange : all lower case (IBM RS6000), # -DUpCase : all upper case (Cray), # -DAdd__ : the FORTRAN compiler in use is f2c. # # 2) C and Fortran 77 integer mapping # # -DF77_INTEGER=int : Fortran 77 INTEGER is a C int, [default] # -DF77_INTEGER=long : Fortran 77 INTEGER is a C long, # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short. 
# # 3) Fortran 77 string handling # # -DStringSunStyle : The string address is passed at the string loca- # tion on the stack, and the string length is then # passed as an F77_INTEGER after all explicit # stack arguments, [default] # -DStringStructPtr : The address of a structure is passed by a # Fortran 77 string, and the structure is of the # form: struct {char *cp; F77_INTEGER len;}, # -DStringStructVal : A structure is passed by value for each Fortran # 77 string, and the structure is of the form: # struct {char *cp; F77_INTEGER len;}, # -DStringCrayStyle : Special option for Cray machines, which uses # Cray fcd (fortran character descriptor) for # interoperation. # F2CDEFS = # # ---------------------------------------------------------------------- # - HPL includes / libraries / specifics ------------------------------- # ---------------------------------------------------------------------- # HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc) HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib) # # - Compile time options ----------------------------------------------- # # -DHPL_COPY_L force the copy of the panel L before bcast; # -DHPL_CALL_CBLAS call the cblas interface; # -DHPL_CALL_VSIPL call the vsip library; # -DHPL_DETAILED_TIMING enable detailed timers; # # By default HPL will: # *) not copy L before broadcast, # *) call the BLAS Fortran 77 interface, # *) not display detailed timing information. # HPL_OPTS = -DHPL_CALL_CBLAS # # ---------------------------------------------------------------------- # HPL_DEFS = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES) # # ---------------------------------------------------------------------- # - Compilers / linkers - Optimization flags --------------------------- # ---------------------------------------------------------------------- # CC = /usr/bin/gcc CCNOOPT = $(HPL_DEFS) CCFLAGS = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops # # On some platforms, it is necessary to use the Fortran linker to find # the Fortran internals used in the BLAS library. # LINKER = ~/BLAS LINKFLAGS = $(CCFLAGS) # ARCHIVER = ar ARFLAGS = r RANLIB = echo # # ---------------------------------------------------------------------- Make output: ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so /bin/bash: /homes/laket/BLAS: Is a directory make[2]: *** [dexe.grd] Error 126 make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' make[1]: *** [build_tst] Error 2 make[1]: Leaving directory `/homes/laket/hpl-2.1' make: *** [build] Error 2 make: Target `all' not remade because of errors. Make -d output: Considering target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Looking for an implicit rule for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a,v'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a'. Trying pattern rule with stem `libhpl.a'. 
Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/s.libhpl.a'. Trying pattern rule with stem `libhpl.a'. Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/SCCS/s.libhpl.a'. No implicit rule found for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. No need to remake target `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'. Finished prerequisites of target file `dexe.grd'. Must remake target `dexe.grd'. ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so Putting child 0x0129a2c0 (dexe.grd) PID 24853 on the chain. Live child 0x0129a2c0 (dexe.grd) PID 24853 /bin/bash: /homes/laket/BLAS: Is a directory make[2]: Reaping losing child 0x0129a2c0 PID 24853 *** [dexe.grd] Error 126 Removing child 0x0129a2c0 PID 24853 from chain. Failed to remake target file `dexe.grd'. Finished prerequisites of target file `dexe'. Giving up on target file `dexe'. Finished prerequisites of target file `all'. Giving up on target file `all'. make[2]: Target `all' not remade because of errors. make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS' Reaping losing child 0x010ce900 PID 24841 make[1]: *** [build_tst] Error 2 Removing child 0x010ce900 PID 24841 from chain. Failed to remake target file `build_tst'. make[1]: Leaving directory `/homes/laket/hpl-2.1' Reaping losing child 0x00d91ae0 PID 24774 make: *** [build] Error 2 Removing child 0x00d91ae0 PID 24774 from chain. Failed to remake target file `build'. Finished prerequisites of target file `install'. make: Target `all' not remade because of errors. Giving up on target file `install'. Finished prerequisites of target file `all'. Giving up on target file `all'. Thanks!
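    The failing command line above begins with /homes/laket/BLAS rather than a compiler, which is exactly what bash's "Is a directory" plus Error 126 (command not executable) means, so the LINKER line in the makefile is the first thing to look at. A hedged sketch, assuming the CBLAS build does not require a separate Fortran linker:
      # LINKER must be an executable linker/compiler, not the BLAS directory
      LINKER     = $(CC)
      LINKFLAGS  = $(CCFLAGS)
    If the BLAS library does pull in Fortran runtime symbols, a Fortran driver such as gfortran (or an MPI wrapper) would go there instead.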

    Read the article

  • CentOS vps is randomly rebooting

    - by develroot
    I have a centos vps (Parallels Virtuozzo container) which has been running for months. However, a few days ago it started to randomly reboot itself, and i can't find out why. And the biggest problem that i don't understand is that it takes 40 minutes to reboot (as far as i can see in the logs) root ~ # cat /var/log/messages | grep shutdown Oct 11 13:52:11 vps27 shutdown[23968]: shutting down for system halt Oct 14 14:55:17 vps27 shutdown[30662]: shutting down for system halt Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt And notice the time difference between shutdown and xinetd's start: Oct 15 06:21:23 vps27 shutdown[20157]: shutting down for system halt Oct 15 06:21:24 vps27 init: Switching to runlevel: 0 Oct 15 06:21:27 vps27 saslauthd[30614]: server_exit : master exited: 30614 Oct 15 06:21:38 vps27 named[30661]: shutting down Oct 15 06:21:47 vps27 exiting on signal 15 Oct 15 07:04:34 vps27 syslogd 1.4.1: restart. Oct 15 07:05:06 vps27 xinetd[1471]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in. Oct 15 07:05:06 vps27 xinetd[1471]: Started working: 0 available services And here's what Parallels Power Panel says in terms of Status Changes: Time Old Status Status Obtained Oct 15, 2011 06:23:46 AM Mounted Down Oct 15, 2011 06:22:31 AM Running Mounted Oct 14, 2011 03:06:48 PM Starting Running Oct 14, 2011 03:06:23 PM Down Starting Oct 14, 2011 03:06:08 PM Mounted Down Oct 14, 2011 02:58:24 PM Running Mounted For some reason it's getting into Mounting mode and then restarts itself. The only problem that i can imagine is disk space utilization, which is now 84%. But can that be a reson for system halt? Time Category Details Type Parameter Oct 15, 2011 07:08:33 AM Resource Resource counter_disk_share_used yellow alert on environment vps27 current value: 82 soft limit: 85 hard limit: 95 Yellow zone counter_disk_share_used Oct 15, 2011 06:27:23 AM Resource Resource counter_disk_share_used yellow alert on environment vps27 current value: 82 soft limit: 85 hard limit: 95 Yellow zone counter_disk_share_used Oct 15, 2011 06:23:50 AM Resource Resource counter_disk_share_used green alert on environment vps27 current value: 0 soft limit: hard limit: 0 Green zone counter_disk_share_used Oct 14, 2011 03:06:24 PM Resource Resource counter_disk_share_used yellow alert on environment vps27 current value: 83 soft limit: 85 hard limit: 95 Yellow zone counter_disk_share_used Oct 14, 2011 03:05:50 PM Resource Resource counter_disk_share_used green alert on environment vps27 current value: 0 soft limit: hard limit: 0 Green zone counter_disk_share_used
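    A hedged check from inside the container, assuming this really is an OpenVZ/Virtuozzo VE: limits enforced by the host show up as non-zero failcnt values, and a host-side action (or the disk usage crossing the 95% hard limit from the alerts above) is a more likely cause of unrequested restarts than anything inside the guest.
      # the last column (failcnt) should be 0 for every counter
      cat /proc/user_beancounters
      # see how close the container actually is to the disk hard limit
      df -h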

    Read the article

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for why this belongs on ServerFault rather than StackOverflow: I already have my program which gets the value; I am asking what the returned value means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly, with some of the speeds being reported incorrectly (for example, CurrentClockSpeed said 1.0GHz, whereas the CPU name said Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz [confirmed it is in fact 1.83GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was that I could change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what this will return. What I mean is: will it return the processor's actual maximum speed (say, if it were overclocked) even though it would not normally be running at that speed, or will it return what I expect, namely its maximum speed under normal (not overclocked) conditions?
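    A quick hedged comparison of the two properties, assuming PowerShell is available on the audited machines; on the T5600 above it should show the 1.0 GHz momentary reading next to the rated figure.
      Get-WmiObject Win32_Processor |
          Select-Object Name, CurrentClockSpeed, MaxClockSpeed
    In general CurrentClockSpeed is a point-in-time reading that power management (SpeedStep and friends) can lower, while MaxClockSpeed reports the maximum speed the firmware advertises for the part, which matches the behaviour the linked blog post describes.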

    Read the article

  • Annotating Graphs From Textual Data

    - by steven
    I've got a graph that was generated from a data set containing (date, value, annotation). The annotation is a constant value (it's either there or blank) and I would like to add that third piece of data into the graph I have. An example of this is in the image: the blue line plots (date, value), and I would like to add the red dots by plotting (date, annotation@value). Is there an easy way to do this in Excel without having to modify the appearance of the data?

    Read the article

  • Odd Suhosin memory alerts

    - by slice
    I am getting a lot of odd Suhosin alerts in my syslog. The following are example entries: Jun 9 08:46:11 suhosin[9764]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '157.55.39.180', file '/var/www/site/index.php') Jun 9 08:46:11 suhosin[9744]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '109.74.2.136', file '/var/www/site/test.php') Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 0 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown') Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown') What is happening here? Why 0 bytes, or 2145386496 bytes (2046 MB, i.e. roughly 2 GB)? Why does it sometimes state the attacker and the requested script, and sometimes 'REMOTE_ADDR not set' and file 'unknown'? How do I proceed to figure this out?

    Read the article

  • Puppet templates and undefined/nil variables

    - by larsks
    I often want to include default values in Puppet templates. I was hoping that given a class like this: class myclass ($a_variable=undef) { file { '/tmp/myfile': content => template('myclass/myfile.erb'), } } I could make a template like this: a_variable = <%= a_variable || "a default value" %> Unfortunately, undef in Puppet doesn't translate to a Ruby nil value in the context of the template, so this doesn't actually work. What is the canonical way of handling default values in Puppet templates? I can set the default value to an empty string and then use the empty? test... a_variable = <%= a_variable.empty? ? "a default value" : a_variable %> ...but that seems a little clunky.
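    One hedged alternative that sidesteps the template logic entirely, assuming the default can live in the manifest rather than the template: give the class parameter a real default instead of undef, and let the ERB print whatever it receives.
      class myclass ($a_variable = 'a default value') {
        file { '/tmp/myfile':
          content => template('myclass/myfile.erb'),
        }
      }
    The template then stays a plain a_variable = <%= a_variable %>, and callers can still override the value when declaring the class.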

    Read the article

  • Get Used rather than Provisioned space in PowerCLI

    - by Matt
    I'm trying to run a capacity report, and when I run the Get-HardDisk cmdlet in PowerCLI, the value it returns for CapacityKB is the provisioned space. For example, if I've thin provisioned a 200GB disk that is currently using, say, 30GB, the cmdlet still returns the 200GB value. Is there any way I can get the other figure? I need to know how much disk space is actually being used on the LUN by the vmdk file. My PowerCLI version is 5.0.1.
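    A hedged sketch at the virtual machine level, assuming PowerCLI 5.x where the VM object exposes both figures; it reports usage per VM rather than per individual vmdk, but that is often enough for a capacity report.
      Get-VM |
          Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB |
          Sort-Object UsedSpaceGB -Descending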

    Read the article

  • Conditionally set an Apache environment variable

    - by Tom McCarthy
    I would like to conditionally set the value of an Apache2 environment variable and assign a default value if none of the conditions is met. This example is a simplification of what I'm trying to do but, in effect, if the subdomain portion of the host name is hr, finance or marketing I want to set an environment variable named REQUEST_TYPE to 2, 3 or 4 respectively. Otherwise it should be 1. I tried the following configuration in httpd.conf: <VirtualHost *:80> ServerName foo.com ServerAlias *.foo.com DocumentRoot /var/www/html SetEnv REQUEST_TYPE 1 SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2 SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3 SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4 </VirtualHost> However, the variable is always assigned a value of 1. The only way I have so far been able to get it to work is to replace SetEnv REQUEST_TYPE 1 with a regular expression containing a negative lookahead: SetEnvIfNoCase Host ^(?!hr.|finance.|marketing.) REQUEST_TYPE=1 Is there a better way to assign the default value of 1? As I add more subdomain conditions the regular expression could get ugly. Also, if I want to allow another request attribute to affect REQUEST_TYPE (e.g. if Remote_Addr = 192.168.1.[100-150] then REQUEST_TYPE = 5), then my current method of assigning a default value (i.e. the regular expression with a negative lookahead) probably won't work.
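    A hedged variant of the configuration, assuming only a per-host default is needed: SetEnv runs later in the request cycle than SetEnvIf, which appears to be why the value always ends up as 1, whereas SetEnvIf directives are evaluated in order and a later match simply overrides an earlier one. That allows a catch-all default with no negative lookahead, and the Remote_Addr case slots in the same way.
      SetEnvIfNoCase Host ^ REQUEST_TYPE=1
      SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
      SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
      SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
      SetEnvIf Remote_Addr ^192\.168\.1\.1([0-4][0-9]|50)$ REQUEST_TYPE=5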

    Read the article

  • what are valid 'ack' values?

    - by WileECanisLatrans
    I'm having an issue with a vendor who claims the cause of a problem is an invalid 'ack' value in the TCP data. I'm using Java, so I didn't write this layer. I used snoop to capture the traffic on the wire and am using Wireshark to display the data. Here is what is happening: after receiving a multi-packet (5) message, I see a multi-packet (3) response. The first packet in the response has an 'ack' value that is different from the 'ack' value in the other two packets, and the vendor claims this data is suspect. I've provided sample data below. I'm not a TCP expert, so I don't know whether this is a problem or not. From what I've found on valid ack values, it seems to me the value should be 80018, but that doesn't mean 78345 is wrong. I found this on the web and it seems to apply, but I'm not sure: "the ack value of any data segment is considered valid as long as it does not acknowledge data ahead of the next segment to send". Thanks for your help. My understanding is that the vendor has written their own TCP layer. * source seq ack len * vendor 75465 10924 0 * vendor 75465 10924 1440 * vendor 76905 10924 1440 * vendor 78345 10924 1440 * vendor 79785 10924 233 * me 10924 78345 0 * me 10924 80018 0 * me 10924 80018 197

    Read the article

  • Filling cells with sequential numbers in an Excel (2003) macro

    - by Fred Hamilton
    I need to fill an excel column with a sequential series, in this case from -500 to 1000. I've got a macro to do it, but it takes a lot of lines for something that seems like it should be a single function [something like FillRange(A2:A1502, -500, 1000, 1)]. But if that function exists, I can't find it. Is the following as simple and elegant as it gets? 'Draw X axis scale Cells(1, 1).Value = "mV" Cells(2, 1).Value = -500 Cells(3, 1).Value = -499 Cells(4, 1).Value = -498 Dim selection1 As Range, selection2 As Range Set selection1 = Sheet1.Range("A2:A4") Set selection2 = Sheet1.Range("A2:A1502") selection1.AutoFill Destination:=selection2
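    There is no single built-in call quite like FillRange, but a loop keeps it short. A minimal VBA sketch, assuming the series always goes down column A starting at row 2 (the offset of 502 maps -500 to row 2 and 1000 to row 1502):
      Dim i As Long
      Cells(1, 1).Value = "mV"
      For i = -500 To 1000
          Cells(i + 502, 1).Value = i
      Next i
    Range.AutoFill (as in the question) or Range.DataSeries are the other common routes; the loop just avoids seeding the first few cells by hand.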

    Read the article

  • Apache mod_header rule to change all cookies to secure

    - by Supowski
    I would like to change all cookies to be secure and http-only. It works fine for one cookie, but doesn't work when multiple cookies are set in the response. The Apache mod_headers rule should change cookies from: Set-Cookie cookie1=value; Path=/somePath Set-Cookie cookie2=value; Path=/somePath to Set-Cookie cookie1=value; Path=/somePath; Secure; Http-Only Set-Cookie cookie2=value; Path=/somePath; Secure; Http-Only I use mod_headers for it with the following rule: Header edit Set-Cookie ^(.*)$ $1;Secure;HttpOnly It works fine when only one cookie is set, but if there is more than one, it just removes all the following cookies and they are not set at all. Any help on how to write a mod_headers rule for multiple values, or is the problem something else?

    Read the article

  • Can Excel show a formula and its result simultaneously?

    - by nhinkle
    I know that it's possible in Excel to toggle between displaying values and displaying formulas. I'm required to turn in assignments for a statistics class as a printed Excel sheet showing both the formula and the result. Right now the instructor makes us either copy the formula and paste it as text next to the computed value, or copy the value and paste it next to the formula. This is very inefficient, prone to error (if you change the formula or values after doing the copy-paste), and generally a waste of time. Is there any way to have Excel show the formula and its value in the same cell? If not, is there any function which will display the formula from a referenced cell as plain text, e.g. =showformula(A1) which would print out =sum(A2:A5) instead of 25 (if those were the formula and value of cell A1)? I'm using Excel 2010, but a general answer that works for any recent edition of Excel would be nice.
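    Excel 2010 has no built-in worksheet function for this (FORMULATEXT only arrives in 2013), but a tiny user-defined function gives exactly the =showformula(A1) behaviour described. A hedged VBA sketch, to be pasted into a regular module of the workbook:
      ' returns the formula of the referenced cell as plain text
      Function ShowFormula(r As Range) As String
          ShowFormula = r.Cells(1, 1).Formula
      End Function
    After that, =ShowFormula(A1) placed next to A1 prints =SUM(A2:A5) beside the computed 25, and it updates whenever the sheet recalculates, so nothing has to be copied by hand before printing.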

    Read the article

  • Disable ActiveMQ demo component

    - by Rich
    Is it possible to disable the demo component of the ActiveMQ console? I have tried removing the following lines from jetty.xml, but the /demo link still works. <bean class="org.eclipse.jetty.webapp.WebAppContext"> <property name="contextPath" value="/demo" /> <property name="resourceBase" value="${activemq.home}/webapps/demo" /> <property name="logUrlOnStart" value="true" /> </bean> I am using ActiveMQ 5.5.1.

    Read the article

  • Is there a feature in Nagios that allows Memory between checks?

    - by Kyle Brandt
    There are various instances where there are values I want to monitor with Nagios, and I don't care as much about the value itself, but rather how it compares to the previous value. For instance, I wrote one to check the fail counters in OpenVZ. In this case, I didn't care about the value that much, but rather I cared if the value increased. Another example might be switch ports, I would be most interested to get alerted about the change of state of a port (Although perhaps a trap would be better for this one). For my OpenVZ script, I used a temp file, but I am wondering if there is a better way? Maybe Nagios has some variables that plugins (check scripts) can access that are persistent across checks?
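    As far as I know, core Nagios does not hand active-check plugins a persistent variable store, so the usual pattern is exactly the temp/state file already used for the OpenVZ check, just made safe for the first run. A minimal bash sketch, with the state path and the "counter passed as $1" convention purely illustrative:
      #!/bin/bash
      # compare the current counter ($1) with the value stored on the previous run
      STATE=/var/tmp/$(basename "$0").state
      current=$1
      previous=$(cat "$STATE" 2>/dev/null || echo "$current")
      echo "$current" > "$STATE"
      if [ "$current" -gt "$previous" ]; then
          echo "WARNING: counter increased from $previous to $current"
          exit 1
      fi
      echo "OK: counter is $current (was $previous)"
      exit 0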

    Read the article

  • Unexpected behaviour in a Lotus Notes programmable table

    - by Mark B
    I'm designing a workflow database in Lotus Notes 6.0.3 (soon upgrading to 8.5), and my OS is Windows XP. I have recently tried converting a tabbed table into a programmable one. This was so that I could control which tab was displayed to the user when it was opened, so that they were presented with the most appropriate one for that document's progress through the workflow. That part of it works! One of the tabs features a radio button that controls visibility of the next tab, and a pair of cascading dialogue boxes. One contains the static list "Person":"Team", and the other has a formula based on the first: view:=@If(PeerReview = "Team"; "GroupNames"; "GroupMembers"); @Unique(@DbColumn(""; ""; view; 1)) The dialogue boxes have the property "Refresh fields on keyword change" selected. The behaviour that I wasn't expecting is this. If the radio button is set to "Yes" and a value is selected in one of the dialogue boxes, the table opens the next tab. If the radio button is set to "No" and a value is selected in one of the dialogue boxes, the entire table is hidden. I can duplicate the latter by switching off the "Refresh fields on keyword change" property on the dialogue boxes and instead pressing F9 after selecting a value. I have no idea why the former occurs, though. The table is called "RFCInfo", and I have a field on the form called "$RFCInfo" which is editable, hidden from all users who aren't me and initially set by a Postopen script, which I can post if necessary - it's essentially a Select Case statement that looks at a particular item value and returns the name of the table row relating to that value. Can anyone offer any pointers?

    Read the article

  • EXCEL function working like SQL group by + count(distinct *)?

    - by Solo
    Suppose I have an EXCEL sheet with below data CODE (COL A) | VALUE (COL B) ============================== A01 | 10 A01 | 20 A01 | 30 A01 | 10 B01 | 30 B01 | 30 Is there an EXCEL function working like .. SELECT CODE, count (Distinct *) FROM TABLE GROUP BY CODE CODE | Distinct Count of Value =================================== A01 | 3 B01 | 1 or, better yet, Can we have an excel formula pasted in Column C to get something like this: CODE (COL A) | VALUE (COL B) | DISTINCT VALUE COUNT WITH MATCHING CODE (COL C) =============================================================================== A01 | 10 | 3 A01 | 20 | 3 A01 | 30 | 3 A01 | 10 | 3 B01 | 30 | 1 B01 | 30 | 1 I know I can use pivot table to get this result easily. However due to reporting requirements I have to append the "distinct count" column to the excel sheet, hence pivot table is not an option. My last resort is to use Excel Macro (Which is fine), but before that I would like to learn whether excel functions can accomplish this kind of task. Many thanks!
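    A hedged formula sketch for the extra column C, assuming Excel 2007 or later (for COUNTIFS) and the sample data in A2:B7: each (code, value) pair contributes 1 divided by its own pair count, so summing those fractions over one code yields the distinct count of values for that code.
      C2:  =SUMPRODUCT(($A$2:$A$7=A2)/COUNTIFS($A$2:$A$7,$A$2:$A$7,$B$2:$B$7,$B$2:$B$7))
    Copied down the column, this returns 3 for every A01 row and 1 for every B01 row, matching the expected output above, with no pivot table or macro involved.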

    Read the article
