Search Results

Search found 3872 results on 155 pages for 'argument deduction'.

  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this:

        Host
            Restricts the following declarations (up to the next Host keyword) to be
            only for those hosts that match one of the patterns given after the
            keyword. If more than one pattern is provided, they should be separated
            by whitespace. A single `*' as a pattern can be used to provide global
            defaults for all hosts. The host is the hostname argument given on the
            command line (i.e. the name is not converted to a canonicalized host
            name before matching). A pattern entry may be negated by prefixing it
            with an exclamation mark (`!'). If a negated entry is matched, then the
            Host entry is ignored, regardless of whether any other patterns on the
            line match. Negated matches are therefore useful to provide exceptions
            for wildcard matches. See PATTERNS for more information on patterns.

        HostName
            Specifies the real host name to log into. This can be used to specify
            nicknames or abbreviations for hosts. If the hostname contains the
            character sequence `%h', then this will be replaced with the host name
            specified on the command line (this is useful for manipulating
            unqualified names). The default is the name given on the command line.
            Numeric IP addresses are also permitted (both on the command line and
            in HostName specifications).

    For example, when I want to create an SSH config for GitHub, what should Host and HostName be, respectively?
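    A minimal sketch of what this looks like in ~/.ssh/config - the alias "gh" is an arbitrary nickname, not anything GitHub requires:

        # Host is the nickname you type; HostName is the real address ssh connects to
        Host gh
            HostName github.com
            User git
            IdentityFile ~/.ssh/id_rsa

    With that in place, `ssh gh` and `git clone gh:user/repo.git` both reach github.com. Using `Host github.com` with no HostName works too: Host then has to match exactly what you type on the command line.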

  • Why is Mac supposedly better than Windows for graphics?

    - by Svish
    Ok, people just keep telling me that if you're going to be working with graphics and design and stuff, you should get a Mac. And I just don't get the logic. Because most of these people would be working with Adobe software, which is available for both Windows and Mac. To me it seems like their whole argument is based on "everyone else does". Like, Mac had some graphics software that Windows didn't earlier in history, so most people were using Mac. And since most people were using Mac, new people also started using Mac. And since most people were using Mac, schools and universities used Mac. Which taught new people to use Mac. So they were using Mac. And told everyone they met that everyone they knew were using Mac. And so on. Anyways... What is the deal really? Is there actually any advantage in using Mac for graphics and design and such things? My take is that you pretty much have the same software, and both Mac and Windows are powerful enough, support enough RAM, are stable (as long as you don't install lots of junk or faulty drivers), et cetera. So, can anyone give me a good explanation of this? Is there a real difference or are people just brainwashed?

  • cygwin sshd fails to allocate pty for some users

    - by user115851
    I have (finally) got sshd working under cygwin on Win7 - well, sort of. The sshd runs as user 'cyg_server'. I'm able to successfully ssh to my computer using that same user name. However, if I attempt to ssh using my normal (Windows) user name, it fails trying to allocate a pty for my login session. For example, the output of 'sshd -D -d -d -d' contains this:

        ...
        debug1: Entering interactive session for SSH2.
        debug2: fd 4 setting O_NONBLOCK
        debug2: fd 5 setting O_NONBLOCK
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 1048576 max 16384
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug2: session_new: allocate (allocated 0 max 10)
        debug3: session_unused: session id 0 unused
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request pty-req reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req pty-req
        debug1: Allocating pty.
        debug1: session_pty_req: session 0 alloc /dev/pty1
        !!! chown(/dev/pty1, 17308, 10513) failed: Invalid argument
        debug1: do_cleanup
        debug1: session_pty_cleanup: session 0 release /dev/pty1

    Currently /dev is owned by my normal account. I've tried changing its ownership to cyg_server as well as SYSTEM. In both cases the problem persists. I've also changed permissions for /dev (e.g. 700 and 777) - again the problem persists.

    [As a side note - it is strange that whenever I do 'ls -al /dev' the ptys do not show up. However, if I 'ls -l /dev/ptyX' for a pty I know to exist, it shows up. Is that normal for cygwin?]

    -Bob
    Andover, MA

  • How to make Exchange 2003 non-authoritative

    - by Romski
    Background

    We are a small company with an internally hosted Exchange 2003. It receives email for 2 domains (the company was renamed a few years back). For the sake of argument, the domains are:

        oldname.com
        newname.com

    We have moved newname.com to a hosted Exchange service, and our DNS record is correctly routing emails. Our internal server still receives email for oldname.com, although we have asked our hosting company to accept emails for that domain.

    Problem

    My problem is that emails generated internally from monitoring software, printers, etc. are being caught by our (defunct) internal server and delivered to the old mailboxes. I believe that what is happening is that our internal Exchange server considers itself to be the authoritative server for newname.com. I think it must be looking in Active Directory for a mailbox and delivering the mail internally without ever going outside.

    Attempt to fix

    I started to follow the article here: http://support.microsoft.com/kb/321721. I removed the SMTP recipient policy for newname.com, added a dummy address and made it primary. I also answered yes to updating the associated emails. I then restarted the Microsoft Exchange Routing Engine and SMTP, but emails are still being routed internally. Is there a way to force the Exchange server to route all emails for the domain newname.com to the new hosted service?

  • Why does just splitting an Ethernet cable not work?

    - by Sin Jeong-hun
    I thought Ethernet is logically a one-line communication bus (for argument's sake, I am excluding hubs). All machines attached to the bus hear the same signals, and the machines themselves try to avoid collisions by randomly backing off. http://computer.howstuffworks.com/ethernet6.htm

    If so, why would splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch to it?

    What the Internet said would not work:

        [4 port home router] ------[one Ethernet cable]-----[simple splitter]======[two computers]

    What the Internet said I should do:

        [4 port home router] ------[one Ethernet cable]-----[switch]======[two computers]

    Is this because of signal degradation (reduced electric current)?

    Thank you for all the answers! The reason why I did not just use the two ports of my home router is... The 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since a wired network is far more reliable and secure, I bought a long Ethernet cable and connected the computer to the router. Now I was thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line already is a minor annoyance, so I wondered whether I could share the one line between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and a switch work; I just have some bit of knowledge I heard in my college class.

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (Again: I don't want to install another version of mysql on the box, I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files?

    This is the error I get (the actual error is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory
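    For what it's worth, DBD::mysql's Makefile.PL accepts flags to point the build at headers and libraries explicitly; a sketch, assuming you can obtain headers matching Apple's bundled mysql (the include path below is hypothetical):

        # a sketch, not a verified recipe: point the build at headers that match
        # the bundled mysql (the include path is a placeholder)
        perl Makefile.PL \
            --cflags="-I/path/to/matching/mysql/include" \
            --libs="$(mysql_config --libs)"
        make && make test && sudo make install

    On the second question: there is no way around compiling the C files - DBD::mysql is an XS module, so a compile step is inherent.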

  • How to install port versions of perl modules for perl5.14 in FreeBSD 9.0

    - by jm666
    Trying to use perl5.14 on FreeBSD with port-based p5-modules.

        # uname -impr
        9.0-RELEASE amd64 amd64 ALTQ

    Delete all installed ports, start with a clean system:

        # pkg_delete -a
        # rm -rf /var/db/pkg /var/db/ports /usr/local

    Install portmaster, check /etc/make.conf (here there is only WITHOUT_X11=YES). Now install perl:

        # portmaster -g --force-config lang/perl5.14
        # perl -v
        This is perl 5, version 14, subversion 2 (v5.14.2) built for amd64-freebsd-multi

    Now the perl modules from the ports:

        # portmaster -g devel/p5-Moose    # install Moose and its deps

    A check with pkg_info gives a zillion errors like:

        # pkg_info
        pkg_info: corrupted record (pkgdep line without argument), ignoring

    A dependency check with portmaster shows dependencies on perl5.12:

        # portmaster --check-depends
        Checking p5-Class-C3-0.24
        ===>>> lang/perl5.12 is listed as a dependency
        ===>>> but there is no installed version
        ===>>> Delete this dependency data? y/n [n]

    When I tried

        # perl-after-upgrade -f

    I got:

        Fixed 0 packages (0 files moved, 0 files modified)

    In short: Moose got installed into /usr/local/lib/perl5/site_perl/5.14.2/ but all its dependencies into /usr/local/lib/perl5/site_perl/5.12.4/. Yes, it is possible to fix this with

        # portmaster p5-

    which reinstalls all installed p5-packages once again, now correctly for 5.14 - but installing them twice is terrible...

    Questions:

    1. What is the correct way to install p5-MODULES from ports with perl5.14 installed on a clean system?
    2. How do I fix the wrong dependency data on perl5.12 without having to install and reinstall everything again?
    3. What am I doing wrong?

    PS: I know about perlbrew and/or local::lib - but for this case I want the port versions.
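    A hedged guess at the root cause: when the p5- ports were built, the ports framework still registered dependencies against its default perl, 5.12. On ports trees of that era the default perl for p5- ports could be pinned in /etc/make.conf before building anything; a sketch (treat the exact knob as an assumption - in the bsd.perl.mk of the time it was PERL_VERSION):

        # /etc/make.conf - the knob name is an assumption about this ports tree
        PERL_VERSION=5.14.2

    With that in place before `portmaster -g devel/p5-Moose`, the dependencies should register against lang/perl5.14 instead of lang/perl5.12.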

  • How to send Content-Disposition headers in apache for files?

    - by Rory McCann
    I have a directory of text files that I'm serving out with Apache 2. Normally when I (or any user) access the files they see them in their browser. I want to 'force'* the web browser to pop up a 'Save as' dialog box. I know this is possible to do with the Content-Disposition header (more info). Is there some way to turn that on for each file? Ideally I'd like something like this:

        <Directory textfiles>
            AutoAddContentDispositionHeaders On
        </Directory>

    And then Apache would set the correct Content-Disposition header, including using the same filename. Something like this might be possible with the Apache Header directive. Bonus points if it's included as standard in Apache on Debian.

    I could do a simple PHP wrapper script that takes a filename argument, makes the call to header(...) and then prints the file, but then I have to validate input etc. - that's work I'm trying to avoid.

    * I know you can't actually force things when it comes to the web
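    A minimal sketch with mod_headers (shipped with Apache on Debian, though it may need `a2enmod headers`); the directory path is illustrative:

        <Directory "/var/www/textfiles">
            <IfModule mod_headers.c>
                Header set Content-Disposition attachment
            </IfModule>
        </Directory>

    With no filename parameter in the header, browsers generally fall back to the last path segment of the URL as the suggested filename, which matches the "same filename" requirement without any per-file configuration.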

  • Launching Installer Via Powershell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a PowerShell script to run some Microsoft hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server, and then runs the installer via Invoke-Command, like so:

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
            }
        }

    If I run the "Start-Process" command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple of seconds later and doesn't run). I've attempted giving Invoke-Command -Credential, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local PowerShell script to run the installer and changing the argument from '/passive' to '/quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?
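    One diagnostic worth trying, sketched under the assumption that the installer is exiting with an error code rather than hanging: make Start-Process wait and surface the exit code, so the remote session reports why it closed. Without -Wait, the remote session can tear down as soon as the script block returns, taking child processes with it - which would also match the "closes a couple of seconds later" symptom.

        Invoke-Command -ComputerName $ServerName -ScriptBlock {
            # -Wait keeps the session alive until the installer exits;
            # -PassThru returns the process object so the exit code is visible.
            # "/norestart" is an assumption about this package's supported flags.
            $p = Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" `
                -ArgumentList "/passive", "/norestart" -Wait -PassThru
            Write-Output "Installer exit code: $($p.ExitCode)"
        }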

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA 5. I'm a pretty clever dude and I planned on building the necessary tools. So, I built a local copy of gcc, binutils, and glibc - everything a CUDA 5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATHs in the .bashrc and logged back in, expecting a productive night ahead.

    To my horror I realized that everything is dynamically linked. Now I can't do simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't do commands to fix the problem like rm or vim!

    Is there a way for me to ssh but also to ignore the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could have administrator support.
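    A sketch of one possible escape hatch, assuming the breakage comes from the LD_LIBRARY_PATH exported in .bashrc: shell builtins still work even after .bashrc has been sourced, so a remote command can clear the variable (unset is a builtin, and a prefix assignment is done by the shell itself) before exec'ing a real binary:

        # unset runs inside the already-loaded shell, so it works even though
        # exec'ing new binaries is broken; mv then starts with a clean environment
        ssh ex@uid377 'unset LD_LIBRARY_PATH; /bin/mv ~/.bashrc ~/.bashrc.broken'

        # equivalent idea with a prefix assignment:
        ssh ex@uid377 'LD_LIBRARY_PATH= /bin/ls'

    After that, a normal login should work again, and the LD_LIBRARY_PATH exports can live in a script that is only sourced on demand.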

  • Nagios command not transmitting all arguments

    - by markus
    I'm using the following service to monitor our Postgres DB from Nagios:

        define service{
            use                     test-service    ; Name of servi$
            host_name               DEMOCGN002
            service_description     Postgres State
            check_command           check_nrpe!check_pgsql!192.168.1.135!test!test!test
            notifications_enabled   1
        }

    On the remote machine I've configured the command:

        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$

    In the syslog I can see that the command is executed, but only one argument is transmitted:

        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Running command: /usr/lib/nagios/plugins/check_pgsql -H 192.168.1.134 -d -l -p
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Command completed with return code 3 and output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Return Code: 3, Output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]

    Why are arguments 2, 3 and 4 missing?
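    For context, a hedged sketch of how check_nrpe usually receives multiple arguments: they have to be forwarded with -a in the check_nrpe command definition on the Nagios side, and nrpe.cfg on the remote host must allow it (dont_blame_nrpe=1). The definition below mirrors the standard sample shipped with NRPE:

        # commands.cfg on the Nagios server - a sketch, assuming the stock layout
        define command{
            command_name    check_nrpe
            command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$ $ARG5$
        }

    If the definition stops at `-c $ARG1$`, everything after the second `!` in the service's check_command is silently dropped, which would match the log above.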

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed, login works fine, but the CSS files are not loaded and Firefox is throwing the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?) or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force CSS and JS content types. That's ugly, and I never had the problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
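    A hedged guess at the underlying cause: nginx maps extensions to MIME types via its `types` table, normally pulled in with an `include mime.types;` in the http block - if that include is missing or the path is wrong, everything falls back to whatever default_type says. A sketch of the usual declaration:

        http {
            include       /etc/nginx/mime.types;    # maps .css -> text/css, .js -> application/x-javascript
            default_type  application/octet-stream;
            # ...
        }

    The query string is not part of the extension match, so `?s=...` by itself should not break the type detection.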

  • Passing two arguments to a command using pipes

    - by firebat
    Usually, we only need to pass one argument:

        echo abc | cat
        echo abc | cat some_file -
        echo abc | cat - some_file

    Is there a way to pass two arguments? Something like

        {echo abc , echo xyz} | cat
        cat `echo abc` `echo xyz`

    I could just store both results in a file first:

        echo abc > file1
        echo xyz > file2
        cat file1 file2

    But then I might accidentally overwrite a file, which is not OK. This is going into a non-interactive script. Basically, I need a way to pass the results of two arbitrary commands to cat without writing to a file.

    UPDATE: Sorry, the example masks the problem. While

        { echo abc ; echo xyz ; } | cat

    does seem to work, the output is due to the echoes, not the cat. A better example would be

        { cut -f2 -d, file1; cut -f1 -d, file2; } | paste -d,

    which does not work as expected. With file1:

        a,b
        c,d

    and file2:

        1,2
        3,4

    the expected output is:

        b,1
        d,3

    RESOLVED: Use process substitution:

        cat <(command1) <(command2)

    Alternatively, make named pipes using mkfifo:

        mkfifo temp1
        mkfifo temp2
        command1 > temp1 &
        command2 > temp2 &
        cat temp1 temp2

    Less elegant and more verbose, but works fine, as long as you make sure temp1 and temp2 don't exist beforehand.
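    Applying the resolved technique to the paste example from the update - each <( ) becomes a /dev/fd path that paste reads like a file:

        paste -d, <(cut -f2 -d, file1) <(cut -f1 -d, file2)
        # b,1
        # d,3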

  • Optimize shell and awk script

    - by bryan
    I am using a combination of a shell script, an awk script and a find command to perform multiple text replacements in hundreds of files. The file sizes vary between a few hundred bytes and 20 kbytes. I am looking for a way to speed up this script. I am using cygwin.

    The shell script:

        #!/bin/bash
        if [ $# = 0 ]; then
            echo "Argument expected"
            exit 1
        fi
        while [ $# -ge 1 ]
        do
            if [ ! -f $1 ]; then
                echo "No such file as $1"
                exit 1
            fi
            awk -f ~/scripts/parse.awk $1 > ${1}.$$
            if [ $? != 0 ]; then
                echo "Something went wrong with the script"
                rm ${1}.$$
                exit 1
            fi
            mv ${1}.$$ $1
            shift
        done

    The awk script (simplified):

        #! /usr/bin/awk -f
        /HHH.Web/{
            if ( index($0,"Email") == 0) {
                sub(/HHH.Web/,"HHH.Web.Email");
            }
            printf("%s\r\n",$0);
            next;
        }

    The command line:

        find . -type f | xargs ~/scripts/run_parser.sh
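    On cygwin the dominant cost is usually process creation, and this pipeline forks one awk plus an mv per file. A sketch of collapsing that into a single awk process, assuming GNU awk 4.1+ for its in-place editing extension:

        # one gawk process edits every file in place; -print0/-0 keeps odd filenames safe
        find . -type f -print0 | xargs -0 gawk -i inplace -f ~/scripts/parse.awk

    One caveat: with -i inplace, only what the script prints ends up in the file, so the full parse.awk would need a catch-all print rule for non-matching lines (the simplified version shown only handles /HHH.Web/ lines).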

  • Why won't this script accept any arguments?

    - by Nate Wagar
    I'm trying to write an SVN post-commit hook and, strangely, am getting hung up on what should be the easiest part.

    The script:

        set REPO="$1"
        set REV="$2"
        set SVNBIN="/opt/CollabNet_Subversion/bin/"
        set SSHBIN="/usr/bin/ssh"
        set HOST="staging.domain.net"
        set timeout=30
        set USERNAME="svn-usr"
        set E_NO_CONNECT=2
        set E_WRONG_PASS=3
        set E_UNKOWN=25
        set CHANGED=`"$SVNBIN"svnlook changed --revision $REV $REPOS`
        echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
        echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing
        set PROJECT ""

    Yet when I call it, it doesn't seem to be seeing the arguments I pass to it:

        /var/svn/repos/www/logs> sudo ../hooks/post-commit /var/svn/repos/www 33
        svnlook: missing argument: --revision
        Type 'svnlook help' for usage.
        /var/svn/repos/www/logs> cat testing
        Here are changes:
        Command: ../hooks/post-commit; Repo: ; Rev: ; Total: 1

    This is on a Solaris 10 SPARC box. I'm a bit of a script newbie, but shouldn't this be really easy??
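    Two things stand out, offered as a hedged reading: `set VAR=value` is csh syntax, and under a POSIX shell (which a hook without a csh shebang will typically get) `set REPO="$1"` replaces the positional parameters instead of assigning REPO - which matches the empty Repo/Rev and "Total: 1" in the log. Also, the svnlook line interpolates $REPOS while the variable defined is $REPO. A sketch of the same head of the script in plain sh:

        #!/bin/sh
        REPO="$1"
        REV="$2"
        SVNBIN="/opt/CollabNet_Subversion/bin/"
        # note: $REPO here, not the undefined $REPOS
        CHANGED=`"${SVNBIN}svnlook" changed --revision "$REV" "$REPO"`
        echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing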

  • Millions of files in php's tmp error - how to delete?

    - by Jonatan Littke
    Hey. I've got a tmp folder with 14 million PHP session files in my home directory. At least that's what I think it is; it's not like I could ls it or anything. How can I empty this folder? I've tried using find with the -exec rm {} \; command, but that didn't work. ls 'sess_0*' | xargs rm didn't work either. I'm currently running rm -rf tmp, but after two hours the folder appears to be the same size.

    REFERENCE INFO: I suddenly encountered an error where sessions could no longer be written to disk:

        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: open(/var/www/clients/client1/web1/tmp/sess_8e12742b62aa68a3f9476ec80222bbfb, O_RDWR) failed: No space left on device (28) in Unknown on line 0
        [Mon Apr 19 19:58:32 2010] [warn] mod_fcgid: stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/www/clients/client1/web1/tmp) in Unknown on line 0

    I ran:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/md0              457G  126G  308G  29% /
        tmpfs                 1.8G     0  1.8G   0% /lib/init/rw
        udev                   10M  664K  9.4M   7% /dev
        tmpfs                 1.8G     0  1.8G   0% /dev/shm

    But as you can see, the disk isn't full. So I had a look in the syslog, which says the following 20 times per second:

        kernel: [19570794.361241] EXT3-fs warning (device md0): ext3_dx_add_entry: Directory index full!

    This led me to thinking of a full folder, obviously, but since my web folder only has 60k files (having counted them), I guessed it was the tmp folder (the local one, for this instance of php) that messed things up. Some commands I ran:

        $ sudo ls sess_a* | xargs rm -f
        bash: /usr/bin/sudo: Argument list too long

        find . -exec rm {} \;
        rm: cannot remove directory '.'
        find: cannot fork: Cannot allocate memory

    I'm running Debian Lenny, php5, ISPConfig, SuEXEC and Fast-CGI.
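    A sketch that avoids both failure modes above, assuming GNU find (standard on Lenny): -delete removes entries from within find itself, so there is no per-file fork (the "cannot fork" error) and no argument list to overflow:

        find /var/www/clients/client1/web1/tmp -maxdepth 1 -type f -name 'sess_*' -delete

    It will still take a long time on 14 million directory entries, but it proceeds incrementally instead of dying up front.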

  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3GHz is in fact a processor running at 12GHz?

    I once got into a "Mac vs. PC" argument (which by the way is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1GHz machines because they were dual-processor G4s each running at 500MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2GHz = 1.2GHz" and that got me thinking again about whether there's a real answer to this.

    So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities:

    1. Each core is in fact doing x calculations per second, thus the total number of calculations is x(cores).
    2. Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz+core2Hz+...)/cores.

  • Network to network VPN on CentOS 5

    - by Atul Kulkarni
    I am trying to follow "http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-vpn.html#s1-ipsec-net2net" and I have come up with the following.

    On the local router machine, in my ifcfg-ipsec0:

        ONBOOT=yes
        IKE_METHOD=PSK
        DSTGW=10.5.27.1
        SRCGW=10.6.159.1
        DSTNET=10.5.27.0/25
        SRCNET=10.6.159.0/24
        DST=205.X.X.X
        TYPE=IPSEC

    I have the /etc/sysconfig/network-scripts/keys-ipsec0 file in place.

    On the remote machine in the cloud I have /etc/sysconfig/network-scripts/ifcfg-ipsec1:

        TYPE=IPSEC
        ONBOOT=yes
        IKE_METHOD=PSK
        SRCGW=10.5.27.1
        DSTGW=10.6.159.1
        SRCNET=10.5.27.124/25
        DSTNET=10.6.159.0/24
        DST=38.x.x.x

    with its respective /etc/sysconfig/network-scripts/key-ipsec1 file. The DST in both cases are NAT'd external IPs. Is that a problem? I have made the changes for port forwarding as well.

    When I try to bring the interfaces up, I get the output "RTNETLINK answers: Invalid argument". I am confused now and don't know what more to do. Is there any place I can dig up which parameters were wrong? I really appreciate any help I can get.

    Thanks and Regards,
    Atul.

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)
        data1 = read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla)+s(sst)+s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, the result is:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) :
          Format of predictions is invalid.
        > auc <- performance( rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found
        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>
        > roc <- performance( rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found
        > plot( roc )
        Error in levels(labels) : argument "labels" is missing, with no default

    Can anybody help me to solve this problem? Thank you in advance.
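    A hedged reading of the transcript: `prediction` here is ROCR's, and it rejects the output of predict.gam because that is a named array rather than a plain numeric vector (the long `auc` output is just R printing the AUC package's auc function, since the variable was never assigned). A sketch of the usual workaround - coerce the predictions and disambiguate the namespaces explicitly:

        # as.numeric() strips the names/array class that predict.gam attaches;
        # ROCR:: avoids any clash with the AUC package's functions
        rp  <- ROCR::prediction(as.numeric(gampred), data1$tuna)
        auc <- ROCR::performance(rp, "auc")@y.values[[1]]
        roc <- ROCR::performance(rp, "tpr", "fpr")
        plot(roc)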

  • How to read cell data in Excel and output to the command prompt

    - by Max Ollerenshaw
    Hi all, I'm a sysadmin and I am trying to learn how to use PowerShell. I have never done any type of scripting or coding before, and I have been teaching myself online by learning from the TechNet script centre and online forums. What I am trying to accomplish is to open an Excel spreadsheet, get information from it (usernames and passwords) and then output it to the command prompt in PowerShell. Whenever I try to do this I get an exception calling "InvokeMember". Anyway, here is the code I have so far:

        function Invoke([object]$m, [string]$method, $parameters)
        {
            $m.PSBase.GetType().InvokeMember(
                $method, [Reflection.BindingFlags]::InvokeMethod,
                $null, $m, $parameters, $ciUS )
        }

        $ciUS = [System.Globalization.CultureInfo]'en-US'
        $objExcel = New-Object -comobject Excel.Application
        $objExcel.Visible = $False
        $objExcel.DisplayAlerts = $False
        $objWorkbook = Invoke $objExcel.Workbooks.Open "C:\PS\User Data.xls"
        Write-Host "Numer of worksheets: " $objWorkbook.Sheets.Count
        $objWorksheet = $objWorkbook.Worksheets.Item(1)
        Write-Host "Worksheet: " $objWorksheet.Name
        $Forename = $objWorksheet.Cells.Item(2,1).Text
        $Surname = $objWorksheet.Cells.Item(2,2).Text
        Write-Host "Forename: " $Forename
        Write-Host "Surname: " $Surname
        $objExcel.Quit()
        If (ps excel) { kill -name excel }

    I have read many different posts on forums and articles on how to try and get around the en-US problem, but I cannot seem to get around it and hope that someone here can help!

    Here is the exception problem I mentioned:

        Exception calling "InvokeMember" with "6" argument(s): "Method 'System.Management.Automation.PSMethod.C:\PS\User Data.xls' not found."
        At C:\PS\excel.ps1:3 char:33
        + $m.PSBase.GetType().InvokeMember <<<< (
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException
        Numer of worksheets:
        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:18 char:45
        + $objWorksheet = $objWorkbook.Worksheets.Item <<<< (1)
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull
        Worksheet:
        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:21 char:37
        + $Forename = $objWorksheet.Cells.Item <<<< (2,1).Text
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull
        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:22 char:36
        + $Surname = $objWorksheet.Cells.Item <<<< (2,2).Text
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull
        Forename:
        Surname:

    This is the first question I have ever asked - try to be nice! :))

    Many thanks,
    Max
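    Two hedged observations: the reflection wrapper is being handed $objExcel.Workbooks.Open - a PSMethod object - where it expects the object and method name separately, which is what the 'System.Management.Automation.PSMethod...' error is complaining about; and the usual way around the en-US issue is to switch the thread culture rather than route every call through InvokeMember. A sketch:

        # assumption: forcing en-US on the current thread sidesteps Excel's
        # localized-COM InvokeMember errors on non-en-US systems
        [System.Threading.Thread]::CurrentThread.CurrentCulture =
            New-Object System.Globalization.CultureInfo 'en-US'

        $objExcel = New-Object -ComObject Excel.Application
        $objWorkbook  = $objExcel.Workbooks.Open("C:\PS\User Data.xls")   # plain COM call, no reflection
        $objWorksheet = $objWorkbook.Worksheets.Item(1)
        $objWorksheet.Cells.Item(2,1).Text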

  • Using proxy.pac to access Apache 2 with a hostname?

    - by leeand00
    Note that I do not have a DNS on my network, and that is why I am resorting to using a proxy.pac file. I would like to be able to access my development Apache 2 server using a name instead of an IP without setting up a full-blown DNS. I am aware of setting names in the C:\Windows\System32\drivers\etc\hosts file and the /etc/hosts files; however, I cannot edit the hosts file on all of the devices that I am testing the site on.

    I've added a proxy.pac file to my Apache 2 server and pointed my browser's settings to it at:

        http://192.168.2.221/proxyutils/proxy.pac

    ...where 192.168.2.221 is thehostname's IP address. I set the above URL in Firefox in the following manner:

    1. From the menu bar, select "Edit - Preferences".
    2. In the resulting "Firefox Preferences" window, click the "Advanced" tab.
    3. Click the "Network" tab.
    4. Click the "Settings" button.
    5. Select the "Automatic proxy configuration URL:" radio button.
    6. Enter http://192.168.2.221/proxyutils/proxy.pac and press OK.

    The contents of the proxy.pac file on the Apache server:

        function FindProxyForURL(url, host)
        {
            if( dnsDomainIs(host, "thehostname") )
                return "PROXY 192.168.2.221:80";
            return "DIRECT";
        }

    In Firefox I then access the following URL:

        http://thehostname/wp-blog/

    And instead of the development version of the WordPress blog I am trying to access, I get a URL of http://thehostnamehttp/thehostname/wp-blog/ in my address bar and a 404 Not Found page in the browser window.

    Looking over proxy.pac, it seems like calling dnsDomainIs shouldn't work considering I don't have a DNS set up on my network, but I've also tried just comparing the host argument with the string "hostname" and it yielded the same result, even after modifying the proxy.pac file and clicking the reload button near the proxy settings. This could also be a WordPress problem, since I've noticed that directories without WordPress seem to function perfectly normally. (See cross post here.)

    Is there any way I can modify my configuration so that I can access the site using http://thehostname/wp-blog/ ?

  • How do I delete hardlinks, symbolic links, junction points, etc., please?

    - by jonny
    I could be wrong, but I'm yet to hear a valid argument that the very dubious / debatable functionality these things deliver outweighs their exploitability. They seem to me to be marginally handy, but I don't think I have any need for them. I do have a need for security, however. How can I delete their entire functionality permanently from my hard drive, please? Microsoft only has pages on how to create them, which seems almost peculiar to the point of being dubious (at least, to me...).

    And just a dumb command line question: am I correct in assuming

        fsutil hardlink list c:

    will enumerate every single hardlink on that drive?

        C:\Windows\system32>fsutil hardlink list c:
        \Windows\System32

    Also, how do I delete symbolic links, please? ;) But I'd just rather have all symbolic linking and recursion-creating stuff removed, if that's possible?

        C:\Windows\system32>fsutil behavior query symlinkevaluation
        Local to local symbolic links are enabled.
        Local to remote symbolic links are enabled.
        Remote to local symbolic links are disabled.
        Remote to remote symbolic links are disabled.
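    For the narrow "how do I delete one" part, a hedged sketch with stock tools (the paths are placeholders):

        rem a junction or a directory symlink: rmdir removes the link itself, not the target
        rmdir C:\path\to\link

        rem a file symlink, or one name of a hardlinked file: del removes just that directory entry
        del C:\path\to\filelink

        rem disable the remaining symlink evaluation classes (L2L, L2R, R2L, R2R)
        fsutil behavior set SymlinkEvaluation L2L:0 L2R:0 R2L:0 R2R:0

    Hardlinks can't be removed as a feature, though: on NTFS every file is at least one hardlink to its data, and deleting a name just decrements the link count.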

  • Openldap startup problems after upgrade

    - by Craig Efrein
    I am trying to synchronize an LDAP slave and master server. The master server is running openldap 2.3.43-12 and the slave server openldap 2.4.23. I copied over the files in /var/lib/ldap, started the server and got this error:

        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb(dc=myserver,dc=fr): Program version 4.7 doesn't match environment version 4.4
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_open: database "dc=myserver,dc=fr" cannot be opened, err -30971. Restore from backup!
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb(dc=myserver,dc=fr): txn_checkpoint interface requires an environment configured for the transaction subsystem
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_close: database "dc=myserver,dc=fr": txn_checkpoint failed: Invalid argument (22).
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: backend_startup_one (type=bdb, suffix="dc=myserver,dc=fr"): bi_db_open failed! (-30971)
        Oct 22 16:16:41 xe-ldap-slave1 slapd[12111]: bdb_db_close: database "dc=myserver,dc=fr": alock_close failed

    I have used the db_upgrade command to upgrade the database files on the new slave server, but I still get the same error when starting slapd.

    The master server is CentOS 5.5 32-bit with openldap 2.3.43-12; the slave server is CentOS 6.3 64-bit with openldap 2.4.23. Everything was installed using yum.

    What is the proper method to synchronize database files from an LDAP master server to a slave server when the slave server is more recent than the master?

    I have followed the suggestion from 84104, but I am getting an error on the slave:

        Oct 23 18:28:30 xe-ldap-slave1 slapd[1415]: slap_client_connect: URI=ldaps://ldap0.lan.myserver.com:636 DN="cn=syncuser,dc=myserver,dc=fr" ldap_sasl_bind_s failed (-1)
        Oct 23 18:28:30 xe-ldap-slave1 slapd[1415]: do_syncrepl: rid=003 rc -1 retrying

    Here is the error on the master:

        Oct 23 18:29:30 ldap0 slapd[15265]: conn=201 fd=35 ACCEPT from IP=192.168.150.100:47690 (IP=0.0.0.0:636)
        Oct 23 18:29:30 ldap0 slapd[15265]: conn=201 fd=35 closed (TLS negotiation failure)

    I can do an ldapsearch on the master just fine with the user configured for synchronization from the new slave server:

        ldapsearch -LLL -x -H ldaps://192.168.150.99:636 -x -W -b dc=myserver,dc=fr -D "cn=syncuser,dc=myserver,dc=fr"
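    A hedged pointer on the TLS negotiation failure: the syncrepl client in slapd validates the provider's certificate using its own TLS settings (the consumer's /etc/openldap/ldap.conf and/or tls_* parameters in the syncrepl stanza), which may differ from what ldapsearch uses. A sketch of what to check on the consumer - the paths are illustrative:

        # /etc/openldap/ldap.conf on the consumer
        TLS_CACERT /etc/openldap/cacerts/ca.pem
        # for testing only, accept any certificate:
        #TLS_REQCERT never

    Also, if the certificate is issued to ldap0.lan.myserver.com, the syncrepl provider URI should use that name rather than a bare IP, or hostname verification can fail the same way.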

  • "Security Warning" comes up when I run via another program

    - by Alexander Bird
    If I execute vmmap from the command line it works fine. However, if I call some other program and pass vmmap as a parameter for this other program to start the execution, then I get this "security warning" popup - which makes it hard to automate scripts. In other words, I want to wrap vmmap via another program. In my case, I want to wrap vmmap because whenever it runs, it brings a window up momentarily and then disappears, so I try passing vmmap as an argument to another program which will start it "headlessly". I tried this program and this program, and in both cases I get the same popup, which defeats the purpose of automation.

    Why does this happen when the program isn't run directly? Does anyone know the internals of what this warning is? And, ultimately, is there a way to stop this from happening, but only for this instance? I don't want to disable this warning system on my whole computer.

    EDIT: I am using Windows Server 2003, and I don't necessarily need solutions for other platforms, but I would like to know what they are if they are platform-dependent solutions.
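    A hedged guess at the internals: this looks like the Attachment Execution Service warning, triggered by the Zone.Identifier alternate data stream that marks files downloaded from the internet. Programs that launch via ShellExecute check that zone marker, while a plain CreateProcess (which is roughly what the command line does) does not - which would explain why the wrapper prompts and the direct run doesn't. If so, removing the stream from that one file silences the prompt without any machine-wide change; a sketch using the Sysinternals streams tool, which works on Server 2003 (the path is a placeholder):

        streams -d C:\tools\vmmap.exe

    (On later Windows, PowerShell's Unblock-File does the same thing.)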

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script that makes a .tar.gz of a folder I define, say e.g. /root/tekkit/world. This .tar.gz file should then be uploaded to an FTP server and named by the time it was uploaded, for example:

        07-10-2012-13-00.tar.gz

    How should such a backup script be written? I have already figured out the .tar.gz part - I just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do.

    Edit: I ended up with this script:

        #!/bin/bash

        # have some path predefined for backup unless one is provided as first argument
        BACKUP_DIR="/root/tekkit/world/"
        TMP_DIR="/tmp/tekkitbackup/"
        FINISH_DIR="/tmp/tekkitfinished/"

        # construct name for our archive
        TIME=$(date +%d-%m-%Y-%H-%M)

        if [ $1 ]; then
            BACKUP_DIR="$1"
        fi

        echo "Backing up dir ... $BACKUP_DIR"
        mkdir $TMP_DIR
        cp -R $BACKUP_DIR $TMP_DIR
        cd $FINISH_DIR
        tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR .

        # create upload script for lftp
        cat <<EOF> lftp.upload.script
        open server
        user user password
        lcd $FINISH_DIR
        mput tekkit-$TIME.tar.gz
        exit
        EOF

        # start backup using lftp and script we created; if all went well print simple message and clean up
        lftp -f lftp.upload.script && ( echo Upload successful ; rm lftp.upload.script )
