Search Results

Search found 9170 results on 367 pages for 'world of goo'.

Page 282/367 | < Previous Page | 278 279 280 281 282 283 284 285 286 287 288 289  | Next Page >

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up in a 1U Ubuntu server running iSCSI-Target with two 300GB drives in RAID-0. We are then using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start ok, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but the CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load. 1) Output of iotop on the SAN: 2) Output of top on the SAN:
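
    A minimal diagnostic sketch (assuming the sysstat package is installed on the SAN and open-iscsi on the hypervisor): a load average of 7+ with an idle CPU usually means processes stuck in I/O wait, so watching per-device utilisation and service times while the compile runs should show whether the RAID-0 array or the iSCSI path is the bottleneck.

        # On the SAN, while the compile is running on the VM:
        vmstat 2 10        # a high 'wa' column means the CPU is waiting on I/O
        iostat -x 2 10     # watch %util and await per device during the load spike
        # On the hypervisor, check the iSCSI session for errors or retransmits:
        iscsiadm -m session -P 3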

    Read the article

  • How to know if your computer is hit by a dnschanger virus?

    - by kira
    The Federal Bureau of Investigation (FBI) is in the final stage of its Operation Ghost Click, which strikes against the menace of the DNSChanger virus and trojan. PCs unknowingly running the DNSChanger malware are in danger of going offline this coming Monday (July 9), when the FBI plans to pull down the online servers that communicate with the virus on host computers. After gaining access to a host PC, the DNSChanger virus tries to modify the DNS (Domain Name System) settings, which are essential for Internet access, to send traffic to malicious servers. These poisoned web addresses in turn point traffic generated through infected PCs to fake or unsafe websites, most of them running online scams. There are also reports that the DNSChanger virus acts as a trojan, allowing the perpetrators of the attack to gain access to infected PCs. Google issued a general advisory for netizens in May earlier this year to detect and remove DNSChanger from infected PCs. According to our report, some 500,000 PCs were still infected by the DNSChanger virus in May 2012. The DNSChanger virus and its affiliation with an international group of hackers first came to light towards the end of last year, and the FBI has been chasing them down ever since. The group behind the DNSChanger virus is estimated to have infected close to 4 million PCs around the world in 2011, until the FBI shut them down in November. In the last stage of Operation Ghost Click, the FBI plans to pull the plug and bring down the temporary rogue DNS servers on Monday, July 9, according to an official announcement. As a result, PCs still infected by the DNSChanger virus will be unable to access the Internet. How do you know if your PC has the DNSChanger virus? Don't worry. Google has explained the attack and the removal tools on its official blog. Trend Micro also has extensive step-by-step instructions to check whether your Windows PC or Mac is infected by the virus. The article is found at http://www.thinkdigit.com/Internet/Google-warns-users-about-DNSChanger-malware_9665.html How to check if my computer is one of those affected?
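
    A minimal check sketch for a Linux machine (the rogue address ranges in the comments are the ones commonly attributed to the DNSChanger operation at the time; verify them against the official DCWG list before relying on them): list the DNS servers the system is actually using and compare them against those ranges.

        # Show the resolvers currently configured
        grep '^nameserver' /etc/resolv.conf
        # Rogue ranges commonly cited for DNSChanger (verify against dcwg.org):
        #   85.255.112.0  - 85.255.127.255
        #   67.210.0.0    - 67.210.15.255
        #   93.188.160.0  - 93.188.167.255
        #   77.67.83.0    - 77.67.83.255
        #   213.109.64.0  - 213.109.79.255
        #   64.28.176.0   - 64.28.191.255
        # On Windows, 'ipconfig /all' shows the DNS servers to compare instead.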

    Read the article

  • Running Mathematica-5 remotely

    - by oxinabox.ucc.asn.au
    I have Mathematica 5 - a powerful CAS. I have a cheap netbook (running Windows XP) which is not only too slow to run Mathematica on, I doubt it has the hard drive space. I do, however, have remote access to a number of very powerful computers (most of which run various Linuxes, but one of which is Windows Server 2008, though I'd rather not use this one*). Mostly over SSH, but other protocols can be arranged for some, I'm sure. So I'd like to install Mathematica onto one of these machines and then run it remotely, either from the command line via PuTTY or via some other method. I glanced through the Mathematica documentation and read something about using some MathLink program, which links the front end installed on my computer to a remote kernel. Anyone have any experience with this? I'm not sure if this belongs here or on Super User. *At the moment it's being tinkered with, and when the tinkering stops it'll likely be used to run multiple thin terminals. As compared to the Linux machines: I have access to a dual 2.4GHz Xeon with 3GB RAM, which the rest of the world seems to have completely forgotten about (it runs FreeBSD!).
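
    A minimal sketch of the two simplest remote options (the hostname and username are placeholders; this assumes the Linux version of Mathematica installs its command-line kernel as math on the remote machine):

        # Text-only kernel session over SSH (works from PuTTY as well):
        ssh -t user@bighost math
        # Or run the full front end remotely and display it locally over X11
        # (requires an X server on the netbook, e.g. Xming on Windows XP):
        ssh -X user@bighost mathematica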

    Read the article

  • Is it possible to be a Linux professional studying on your own?

    - by Marc Jr
    I read economics at university (nothing to do with Linux, is it? :P). I have some basic knowledge about the boot process, compiling the Linux kernel from source and stuff like that. But of course I still have much to learn; sometimes errors appear and "voila", I am lost. I have had Ubuntu, Fedora, openSUSE, Arch... and I'm using Gentoo now. I'd like to know what you Linux users, professionals, administrators... think is the best way to learn Linux in a professional way. Is studying for and passing the LPIC test enough to work in the Linux world, or do I need to go to an IT university? I've heard LFS is a good way of learning about Linux; is that true? I've been thinking about going through LFS to learn more deeply about how Linux works and to learn scripting. Is it possible to do it this way? If anyone has a tip or knows a good way of doing it, maybe someone who did it themselves, any tip is very welcome. Words from a person in love with Linux. :D The best, Marc

    Read the article

  • Is there a simple LDAP-to-HTTP gateway out there?

    - by larsks
    We have a local LDAP directory that provides basic contact information about our user community. We would like to integrate this into some third-party hosted services that allow us to implement widgets that run arbitrary JavaScript. In order to connect JavaScript to our LDAP directory, I would like to set up a simple LDAP-to-HTTP proxy that would accept HTTP GET requests, translate them into an appropriate LDAP query, and respond with directory information as JSON-encoded data. In an ideal world, something like this: GET /[email protected] would get me something like this: { "cn": "Bob Person", "title": "System Administrator", "sn": "Person", "mail": "[email protected]", "telephoneNumber": "617-555-1212", "givenName": "Bob" } (And this obviously assumes that the web application has locally configured information about what base DN to use, how to authenticate, etc.) I guess I could write one... but surely something like this already exists? UPDATE: The consensus seems to be that there isn't a pre-existing solution out there and that I should just get off my lazy derriere and write one. So I did, and it's here. It's not especially pretty, but it works for my prototyping and I figure maybe someone else will find it useful someday.
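
    A minimal sketch of the underlying query such a gateway would wrap (the host, base DN, bind DN and email address are placeholders): ldapsearch already returns exactly the attributes shown above, so the proxy only has to translate its LDIF output into JSON.

        # One lookup, fetching just the attributes from the example response
        ldapsearch -x -LLL \
            -H ldap://ldap.example.com \
            -D 'cn=reader,dc=example,dc=com' -W \
            -b 'ou=people,dc=example,dc=com' \
            '(mail=bob@example.com)' \
            cn title sn mail telephoneNumber givenName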

    Read the article

  • Installing Xen 4.0.1 on Ubuntu 10.10

    - by Hiranth
    make -f buildconfigs/mk.linux-2.6-pvops build
    make[3]: Entering directory `/home/hirantha/xen-4.0.1'
    set -ex; \
      if ! [ -d linux-2.6-pvops.git ]; then \
        rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp; \
        mkdir linux-2.6-pvops.git.tmp; rmdir linux-2.6-pvops.git.tmp; \
        git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp; \
        (cd linux-2.6-pvops.git.tmp; git checkout -b xen/stable-2.6.32.x xen/xen/stable-2.6.32.x ); \
        mv linux-2.6-pvops.git.tmp linux-2.6-pvops.git; \
      fi
    + '[' -d linux-2.6-pvops.git ']'
    + rm -rf linux-2.6-pvops.git linux-2.6-pvops.git.tmp
    + mkdir linux-2.6-pvops.git.tmp
    + rmdir linux-2.6-pvops.git.tmp
    + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp
    Initialized empty Git repository in /home/hirantha/xen-4.0.1/linux-2.6-pvops.git.tmp/.git/
    fatal: Unable to look up git.kernel.org (port 9418) (Name or service not known)
    make[3]: *** [linux-2.6-pvops.git/.valid-src] Error 128
    make[3]: Leaving directory `/home/hirantha/xen-4.0.1'
    make[2]: *** [linux-2.6-pvops-install] Error 2
    make[2]: Leaving directory `/home/hirantha/xen-4.0.1'
    make[1]: *** [install-kernels] Error 1
    make[1]: Leaving directory `/home/hirantha/xen-4.0.1'
    make: *** [world] Error 2
    hirantha@hirantha-desktop:~/xen-4.0.1$
    What is this error? How can I solve this?
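
    The immediate failure is the fatal: Unable to look up git.kernel.org line: the build host cannot resolve (or reach) git.kernel.org, so the kernel clone fails and the whole world target aborts. A minimal sketch of how to diagnose and work around it (the http:// rewrite assumes kernel.org also serves the same repository over HTTP, which it has historically done for hosts where the git protocol port 9418 is blocked):

        # 1. Is it really DNS? "Name or service not known" suggests resolution failed.
        host git.kernel.org || cat /etc/resolv.conf
        # 2. If DNS is fine but port 9418 is filtered, rewrite git:// URLs to http://
        #    so the build's git clone goes out over port 80 instead:
        git config --global url."http://git.kernel.org/pub/".insteadOf "git://git.kernel.org/pub/"
        # 3. Re-run the top-level build (e.g. make world) once the clone can succeed.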

    Read the article

  • Making OpenSSL work on PHP Windows 2008 server with FastCGI

    - by KacieHouser
    I have been researching all day. Here is what I have done: In C:/PHP/php.ini and C:/PHP/php-cgi-fcgi.ini I have set extension_dir = "C:/PHP/ext" and uncommented extension=php_openssl.dll. I went to http://windows.php.net/download/ and got the thread-safe version of the PHP 5.4 (5.4.8) DLLs. In C:/PHP/ext I replaced php_openssl.dll with the one I downloaded. In System32 and SysWOW64 I added the following DLLs: ssleay.dll and libeay.dll. I restarted the IIS server in the Server Manager under Web Server and stopped and started the World Wide Web Publishing Service. That didn't work, so I tried the same thing with the non-thread-safe versions. I still get:
    Fatal error: Call to undefined function ftp_ssl_connect() in C:\inetpub\wwwroot\REMOVED_dev\save_data.php on line 5
    Here are the related entries from phpinfo():
    System: Windows NT DEV-WEB1 6.1 build 7601 (Windows Server 2008 R2 Standard Edition Service Pack 1) i586
    Compiler: MSVC9 (Visual C++ 2008)
    Architecture: x86
    Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=C:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=C:\php-sdk\oracle\instantclient11\sdk,shared" "--with-enchant=shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze" "--with-pgo"
    Server API: CGI/FastCGI
    Configuration File (php.ini) Path: C:\Windows
    Loaded Configuration File: C:\PHP\php-cgi-fcgi.ini
    Scan this dir for additional .ini files: (none)
    Additional .ini files parsed: (none)
    Registered PHP Streams: php, file, glob, data, http, ftp, zip, compress.zlib, compress.bzip2, https, ftps, sqlsrv, phar
    Registered Stream Socket Transports: tcp, udp, ssl, sslv3, sslv2, tls
    FTP support: enabled
    Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
    openssl: OpenSSL support enabled
    OpenSSL Library Version: OpenSSL 0.9.8t 18 Jan 2012
    OpenSSL Header Version: OpenSSL 0.9.8x 10 May 2012
    What am I missing here?
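
    A quick check sketch from a command prompt (only verifies what PHP actually loads; note the CLI binary and the FastCGI binary can read different ini files, as the php-cgi-fcgi.ini entry above suggests): ftp_ssl_connect() is only defined when the FTP extension is built or loaded with SSL support, so confirming which ini file is in effect and whether both openssl and ftp show up helps narrow down whether the right php.ini is being picked up at all.

        php -m | findstr /i "openssl ftp"
        php -i | findstr /c:"Loaded Configuration"
        php -r "var_dump(function_exists('ftp_ssl_connect'));"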

    Read the article

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that Virtual Private Servers would not fit the scope of my project, I have timidly entered the world of dedicated hosting. Unfortunately, this is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server. When you use SSH, they want you to use "su" to switch to the root user. Thus far, I have been able to do everything I have needed via the command line as this root user. However, now I need to upload files to my server. I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation and it seems that this "su" function is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but thus far I have not been able to make sense of what I have found online. I'm going to try and move forward, but any help and/or insight is appreciated.
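
    A minimal sketch of one common approach (run as root over SSH; the username, group name and web root path are placeholders): give the regular SSH account group ownership of the directory it needs to upload into, so WinSCP can write there without needing su.

        # as root on the server
        groupadd webedit                       # group for accounts allowed to upload
        usermod -aG webedit scott              # add the regular SSH account to it
        chgrp -R webedit /var/www/html         # hand the web root to that group
        chmod -R g+rwX /var/www/html           # group may read/write files and traverse dirs
        find /var/www/html -type d -exec chmod g+s {} \;   # new files inherit the group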

    Read the article

  • How to write files in specific order?

    - by Bernie
    Okay, here's a weird problem -- my wife just bought a 2014 Nissan Altima. So I took her iTunes library and converted the .m4a files to .mp3, since the car audio system only supports .mp3 and .wma. So far so good. Then I copied the files to a DOS FAT-32 formatted USB thumb drive and connected the drive to the car's USB port, only to find all of the tracks were out of sequence. All tracks begin with a two-digit numeric prefix, i.e., 01, 02, 03, etc., so you would think they would be in order. I called Nissan Connect support and the rep told me that there is a known problem with reading files in the correct order: the files are read in the same order they were written. So I manually copied a few albums with the tracks in a predetermined order, and sure enough he was correct. I then copied about 6 albums for testing, changed to the top-level directory and did a "find . > music.txt". Then I passed this file to rsync like this:
    rsync -av --files-from=music.txt . ../Marys\ Music\ Sequenced/
    The files looked like they were copied in order, but when I listed the files by modification time, they were in the same sequence as the original files. In ../Marys Music Sequenced/Air Supply/Air Supply Greatest Hits, ls -1rt shows:
    01 Lost In Love.mp3
    04 Every Woman In The World.mp3
    03 Chances.mp3
    02 All Out Of Love.mp3
    06 Here I Am (Just When I Thought I Was Over You).mp3
    05 The One That You Love.mp3
    08 I Want To Give It All.mp3
    07 Sweet Dreams.mp3
    11 Young Love.mp3
    So the question is: how can I take the files listed in music.txt, copy them to a destination, and ensure they are written (and their modification times fall) in the same sequence as they are listed?
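
    A minimal sketch of one way to do it (assumes GNU coreutils; the destination path is the one from the example): copy the files one at a time, in exactly the order they appear in music.txt, so each FAT directory entry is created in that order. The touch is only needed if the modification times should follow the same sequence too.

        #!/bin/bash
        # Copy every file named in music.txt, in the order listed, so the FAT
        # directory entries (and, with touch, the mtimes) follow that sequence.
        dest="../Marys Music Sequenced"
        while IFS= read -r f; do
            [ -f "$f" ] || continue              # skip directory entries from find
            mkdir -p "$dest/$(dirname "$f")"
            cp "$f" "$dest/$f"
            touch "$dest/$f"                     # optional: stamp in copy order
        done < music.txt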

    Read the article

  • remote telnet and email

    - by Mustafa Ismail Mustafa
    This issue has been occupying my work for the last few days, and I will be understating when I say it's driven me up the blasted walls. Essentially, I can ping and tracert the domain jnrcs.org and the subdomains mail.jnrcs.org and mail.jordanredcrescent.org. All three point to the IP address 212.38.147.97. About 4 days ago, when we registered the domain "jnrcs.org", suddenly all external connections to the mail server from outside were lost. Not just mail, but other HTTP-based port-forwarded or NATted services too (such as camera surveillance and PBX services). I tried good old telnet (I'm a Linux user) and I get the following output:
    telnet> o mail.jnrcs.org 25
    Trying 212.38.147.97...
    telnet: Unable to connect to remote host: No route to host
    telnet>
    Traceroute gives me:
    traceroute to mail.jnrcs.org (212.38.147.97), 30 hops max, 60 byte packets
    1 192.168.1.2 (192.168.1.2) 0.869 ms 0.944 ms *
    2 * * *
    3 * * *
    4 * * *
    5 * * *
    6 * 212.38.128.118 (212.38.128.118) 33.875 ms 39.187 ms
    7 * * *
    8 * * *
    9 * * *
    10 * * *
    11 * * 212.38.147.97 (212.38.147.97) 67.621 ms
    I am stumped. Friends from all around the world can telnet in with no problem. What could have possibly happened to make telnet/smtp/pop/imap/http access stop? Please bear in mind I'm primarily a developer, but I [am under the delusion] that I can carry my weight in IT administration :) TIA

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router ceases responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type. They all look normal. The logs complain about VIP servers being unavailable but otherwise contain nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, contrariwise, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem? UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.

    Read the article

  • Running CGI With Perl under Apache Permission Problem

    - by neversaint
    I have the following entry in apache2.conf on my Debian box:
    AddHandler cgi-script .cgi .pl
    Options +ExecCGI
    ScriptAlias /cgi-bin/ /var/www/mychosendir/cgi-bin/
    <Directory /var/www/mychosendir/cgi-bin>
      Options +ExecCGI -Indexes
      allow from all
    </Directory>
    Then I have a Perl CGI script stored under these directories and permissions:
    nvs@somename:/var/www/mychosendir$ ls -lhR
    .:
    total 12K
    drwxr-xr-x 2 nvs nvs 4.0K 2010-04-21 13:42 cgi-bin
    ./cgi-bin:
    total 4.0K
    -rwxr-xr-x 1 nvs nvs 90 2010-04-21 13:40 test.cgi
    However, when I try to access it in the web browser at http://myhost.com/mychosendir/cgi-bin/test.cgi, I get this error:
    [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] (8)Exec format error: exec of '/var/www/mychosendir/cgi-bin/test.cgi' failed
    [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] Premature end of script headers: test.cgi
    What's wrong with it? Update: I also have the following entry in my apache2.conf:
    <Files ~ "^\.ht">
      Order allow,deny
      Deny from all
    </Files>
    And the content of test.cgi is this:
    #!/usr/bin/perl -wT
    print "Content-type: text/html\n\n";
    print "Hello, world!\n";
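
    An (8)Exec format error on a CGI script very often means the kernel could not execute the interpreter line, and the most common cause is DOS (CRLF) line endings on the #! line. A minimal check-and-fix sketch (run on the server; assumes the script is at the path shown above):

        # Show the first line byte-by-byte; a trailing \r before \n is the giveaway
        head -1 /var/www/mychosendir/cgi-bin/test.cgi | od -c
        # Strip carriage returns in place (or use dos2unix if installed)
        sed -i 's/\r$//' /var/www/mychosendir/cgi-bin/test.cgi
        # Verify the script now runs standalone
        /var/www/mychosendir/cgi-bin/test.cgi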

    Read the article

  • Running CGI With Perl under Apache Permission Problem

    - by neversaint
    I have the following entry in apache2.conf on my Debian box:
    AddHandler cgi-script .cgi .pl
    Options +ExecCGI
    ScriptAlias /mychosendir/cgi-bin/ /var/www/mychosendir/cgi-bin/
    <Directory /var/www/mychosendir/cgi-bin>
      Options +ExecCGI -Indexes
      allow from all
    </Directory>
    Then I have a Perl CGI script stored under these directories and permissions:
    nvs@somename:/var/www/mychosendir$ ls -lhR
    .:
    total 12K
    drwxr-xr-x 2 nvs nvs 4.0K 2010-04-21 13:42 cgi-bin
    ./cgi-bin:
    total 4.0K
    -rwxr-xr-x 1 nvs nvs 90 2010-04-21 13:40 test.cgi
    However, when I try to access it in the web browser at http://myhost.com/mychosendir/cgi-bin/test.cgi, I get this error:
    [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] (8)Exec format error: exec of '/var/www/mychosendir/cgi-bin/test.cgi' failed
    [Wed Apr 21 15:26:09 2010] [error] [client 150.82.219.158] Premature end of script headers: test.cgi
    What's wrong with it? Update: I also have the following entry in my apache2.conf:
    <Files ~ "^\.ht">
      Order allow,deny
      Deny from all
    </Files>
    And the content of test.cgi is this:
    #!/usr/bin/perl -wT
    print "Content-type: text/html\n\n";
    print "Hello, world!\n";

    Read the article

  • "Options ExecCGI is off in this directory" When try to run Ruby code using mod_ruby

    - by Itay Moav
    I am on Ubuntu with Apache 2.2. I installed fcgi via apt-get, then removed it via apt-get remove, and installed mod_ruby. The configuration I added to Apache:
    LoadModule ruby_module /usr/lib/apache2/modules/mod_ruby.so
    RubyRequire apache/ruby-run
    <Directory /var/www>
      Options +ExecCGI
    </Directory>
    <Files *.rb>
      SetHandler ruby-object
      RubyHandler Apache::RubyRun.instance
    </Files>
    <Files *.rbx>
      SetHandler ruby-object
      RubyHandler Apache::RubyRun.instance
    </Files>
    I have a file in the www directory containing puts 'baba'. I have other files in that directory, all accessible via Apache. The test file has been chmod 777. In the browser I get a 403. In the Apache error log I get:
    [error] access to /var/www/t.rb failed for (null), reason: Options ExecCGI is off in this directory
    If I move this to a subfolder rubytest and modify the relevant config to be:
    <Directory /var/www/rubytest>
      Options +ExecCGI
    </Directory>
    and make sure the directory has 755 permissions on it, it just tries to download the file, as if it no longer recognizes the *.rb suffix. If I give the directory and files 777 it fails:
    /usr/lib/ruby/1.8/apache/ruby-run.rb:53: warning: Insecure world writable dir /var/www/rubytest in LOAD_PATH, mode 040777
    [Tue May 24 19:39:58 2011] [error] mod_ruby: error in ruby
    [Tue May 24 19:39:58 2011] [error] mod_ruby: /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `load': loading from unsafe file /var/www/rubytest/t.rb (SecurityError)
    [Tue May 24 19:39:58 2011] [error] mod_ruby: from /usr/lib/ruby/1.8/apache/ruby-run.rb:53:in `handler'
    BUT, if I use *.rbx it works like a charm... go figure.

    Read the article

  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual monitor setup. Normally, they both display at 1680x1050. They have been set up this way for about a year. I'm using Windows XP Professional 2003 x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time... in fact, all I had done was close a window (maybe a browser). But the higher resolution is still partially preserved, in that the screen scrolls when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers to no avail. System Restore doesn't help either. I'm unsure of the exact ATI card I'm using... Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself... it forces you to install another driver, which breaks both of my displays, forcing me to go into Safe Mode and run System Restore again. Any ideas? Thanks EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9... what could be causing it to think it needs to scroll?

    Read the article

  • Managing Apache to Compensate for WebDAV's Security Masking

    - by Tohuw
    When a user creates a file via WebDAV, the default behavior is that the file is owned by the user and group running the Apache process, with a umask of 022. Unfortunately, this makes it impossible for unprivileged users to write to the files by other means without being a member of the group Apache runs under (which strikes me as a particularly bad idea). My current solution is to set umask 000 in Apache's envvars and remove all world permissions from the WebDAV parent directory for the user. So, if the WebDAV share is /home/foo/www, then /home/foo/www is owned by www-data:foo with permissions of 770. This keeps other unprivileged users out, more or less, but it's hokey at best and a security disaster waiting to happen at worst. From my research and poking around at mod_dav and Apache, I cannot find a reasonable solution short of a cron job flipping all the permissions back (I'd rather not have the load and increased complexity on the server). SuExec won't work, either, because WebDAV operations are not going to execute as a different user. Any thoughts on this? Thank you.
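
    A minimal sketch of an ACL-based alternative (assumes the filesystem is mounted with ACL support and the acl tools are installed; foo and the path are from the example above): a default ACL is inherited by files mod_dav creates, so the owning user can keep write access without loosening Apache's umask to 000.

        # Give user foo rights on everything that exists now...
        setfacl -R -m u:foo:rwX /home/foo/www
        # ...and a default ACL so files and directories created later
        # (e.g. by mod_dav running as www-data) are also writable by foo:
        setfacl -R -m d:u:foo:rwX /home/foo/www
        getfacl /home/foo/www        # verify the entries took effect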

    Read the article

  • Routing WIFI and LAN for specific traffic

    - by jakebird451
    I have two network devices aboard my MacBook Pro:
    WIFI (en1): used for general traffic; connects to an IP of 192.168.19.* via DHCP.
    LAN (en0): used for specific traffic; connects with a static IP of 192.168.2.10. It does not connect to a router, only to a switch for a direct connection.
    I have 4 IP addresses I need to access on the LAN: 192.168.2.1, 192.168.2.21, 192.168.2.20, 192.168.2.30. The rest of the traffic needs to go over WIFI. I have tried setting up a routing table for the specific IP addresses, but I only managed to mess up my network. I do not venture out into the world of networking too often, but this was the latest command I tried:
    sudo route add -host 192.168.2.30 -interface en0
    This command killed my ability to use ping. It told me that ping could not allocate memory (is that even possible?). It also killed my WIFI access. Logging out and back in fixed the issue. I really do not mind if this solution isn't permanent, so I am fine with temporary routing. EDIT: Currently I have been trying:
    sudo route flush
    sudo route add default 192.168.19.1
    This gets everything to work for about a minute. But after that minute it "forgets" the routing to WIFI while retaining the LAN's (en0) routing. If I unplug and replug my LAN (en0) cable, the process works for another minute.
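
    A minimal sketch (macOS route syntax; the interface names and addresses are the ones from the question): keep the default route pointing at the Wi-Fi gateway and add one host route per LAN address via en0, so only those four destinations use the wired interface.

        # default route stays on Wi-Fi (en1)
        sudo route -n add default 192.168.19.1
        # pin the four LAN hosts to the wired interface (en0)
        for ip in 192.168.2.1 192.168.2.21 192.168.2.20 192.168.2.30; do
            sudo route -n add -host "$ip" -interface en0
        done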

    Read the article

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain or on specific domains only? What I see in the docs is:
    Registry Scripts
    To enable registry scripts add to httpd.conf:
    Alias /perl/ /home/httpd/2.0/perl/
    <Location /perl/>
      SetHandler perl-script
      PerlResponseHandler ModPerl::Registry
      PerlOptions +ParseHeaders
      Options +ExecCGI
    </Location>
    and now assuming that we have the following script:
    #!/usr/bin/perl
    print "Content-type: text/plain\n\n";
    print "mod_perl 2.0 rocks!\n";
    saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody:
    % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl
    Of course the path to the script should be readable by the server too. In the real world you probably want tighter permissions, but for the purpose of testing that things are working, this is just fine. From what I understand, we can run Perl scripts only from the one specific folder the directive above points at. So the question again: can we apply this directive per domain, either for all domains or for a specific set of domains?

    Read the article

  • Logitech webcam device only recognised by one software, without drivers

    - by Ben Franchuk
    A couple of weeks ago I purchased a Logitech webcam at a garage sale; it did not come with any driver DVDs or anything like that. I plugged it in, turned on my computer, and continued work as usual. I did not get any drivers for the device at the time (and still have not). Recently, though, I started up a piece of audio software named Cubase, only to find that it was picking up audio from... something. I checked my sound card, and everything else plugged into my computer, but couldn't find where in the world this audio was being picked up from. There were no microphones listed in the device managers, and no "unknown devices" or anything like that. Everything seemed as it always was. Running out of ideas, I blew an air horn directly in the general area of the webcam, located directly in front of me. Sure enough, the audio peaked, indicating that the microphone was definitely in the webcam and that Cubase was somehow picking up this audio, even without drivers. The software lists the device as a "Universal USB Microphone". Adobe Audition, Soundbooth, and other audio applications cannot find the device either. Why is it that this one program (Cubase) can use this device without a driver, while every other piece of software on the computer can't? Not even the operating system can recognize it. Windows 7 Professional x64

    Read the article

  • Multi-select menu in bash script

    - by am2605
    I'm a bash newbie but I would like to create a script in which I'd like to allow the user to select multiple options from a list of options. Essentially what I would like is something similar to the example below: #!/bin/bash OPTIONS="Hello Quit" select opt in $OPTIONS; do if [ "$opt" = "Quit" ]; then echo done exit elif [ "$opt" = "Hello" ]; then echo Hello World else clear echo bad option fi done (sourced from http://www.faqs.org/docs/Linux-HOWTO/Bash-Prog-Intro-HOWTO.html#ss9.1) However my script would have more options, and I'd like to allow multiples to be selected. So somethig like this: 1) Option 1 2) Option 2 3) Option 3 4) Option 4 5) Done Having feedback on the ones they have selected would also be great, eg plus signs next to ones they ahve already selected. Eg if you select "1" I'd like to page to clear and reprint: 1) Option 1 + 2) Option 2 3) Option 3 4) Option 4 5) Done Then if you select "3": 1) Option 1 + 2) Option 2 3) Option 3 + 4) Option 4 5) Done Also, if they again selected (1) I'd like it to "deselect" the option: 1) Option 1 2) Option 2 3) Option 3 + 4) Option 4 5) Done And finally when Done is pressed I'd like a list of the ones that were selected to be displayed before the program exits, eg if the current state is: 1) Option 1 2) Option 2 + 3) Option 3 + 4) Option 4 + 5) Done Pressing 5 should print: Option 2, Option 3, Option 4 and the script terminate. So my question - is this possible in bash, and if so is anyone able to provide a code sample? Any advice would be much appreciated.

    Read the article

  • How do I deny access to everybody but me in Windows 7?

    - by GregH
    I am trying to set up a file server on my Windows 7 Pro system at home. I set up one common "Share" folder that I have shared/published. Within the share folder I want to have individual folders for me and my wife... that is, only I can read/write my folder and only my wife can read/write her folder, and neither of us can read the contents of the other person's folder. Then I want to have a "public" folder where we can both read/write the contents of the folder as well as any subfolders created, but my "kids" account can only read from this folder and its subfolders. It seems really confusing to set up something like this, and it really shouldn't be. I am really confused by the "allow", "deny", and dimmed check boxes in the Security tab. It seems that if I "Deny" access to "Everyone" on my private folder, then I don't even have access to it myself. Windows security seems backwards from the rest of the world's security models. If I am in two groups and I deny access to one of the groups but allow access to the other group, then Windows security denies me access, as I am in one of the groups that has access disallowed. Very confusing.

    Read the article

  • Read Only Domain Controllers and DNS zone updates

    - by Mike M
    I have a Windows 2003 domain and just added a new DC that runs 2008 R2. I updated the schema accordingly for both forest and domain levels. I also made sure to run /rodcprep at the time I did this. I have a branch office with a 2008 R2 file/print server that is a read-only domain controller (DC). The one problem I have been having is with AD-integrated DNS records updates. In the data center, we had to make an IP address change on a particular server. All our other sites' DCs (2003) updated the record fine. The 2008 R2 DC in the data center also updates its record fine. However, the RODC in the branch office does not. So if I nslookup the target server on a 2003 DC, the IP address is correct. Same with the 2008 R2 DC in the data center. But an nslookup on the branch office RODC still pulls in the old IP address. Moreover, any new records we've created (e.g., just added a new terminal server) do not get updated on the branch RODC either. Is there something simple I'm missing? How do I get the RODC to sync its AD-integrated DNS records with the rest of my world? Thank you in advance for your responses. Mike

    Read the article

  • multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, which are running behind a bastion host. The world knows these sites as http://site1.mydomain.com, http://site2.mydomain.com, http://site3.mydomain.com and http://site4.mydomain.com. To reach the front end, the bastion host uses an Apache reverse proxy. I also have video streaming on each of these websites using RTMP. To reach the streaming server I embed JavaScript in the HTML pages as follows:
    <embed ..... var="rtmp://site1.my_domain.com" >
    The problem is that there are many websites - site1.mydomain.com, site2.mydomain.com, site3.mydomain.com, site4.mydomain.com - each on a separate physical server. Each of these four has its own Red5 installation, and the front end to all four is a common bastion host. If I run RTMP on each of the subdomains on a different port, how will I make sure a request such as rtmp://site1.mydomain.com or rtmp://site2.mydomain.com goes to its respective server from the front-end server? What do I need to handle in this case? iptables came to mind instantly, but when someone on the Internet requests rtmp://site1.mydomain.com from their browser, how will I make sure this RTMP request is mapped to a port different than 1935, given there are three other streaming servers that also have to respond to their respective requests?
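
    RTMP is a plain TCP protocol with no Host header, so the bastion cannot tell the four sites apart when they all arrive on port 1935; a common workaround is to give each site a distinct external port in its embed URL and DNAT each port to the right backend. A minimal iptables sketch (the backend IPs and the 1936-1939 port choices are assumptions for illustration):

        # On the bastion host: forward one external port per site to its Red5 box
        iptables -t nat -A PREROUTING -p tcp --dport 1936 -j DNAT --to-destination 10.0.0.11:1935   # site1
        iptables -t nat -A PREROUTING -p tcp --dport 1937 -j DNAT --to-destination 10.0.0.12:1935   # site2
        iptables -t nat -A PREROUTING -p tcp --dport 1938 -j DNAT --to-destination 10.0.0.13:1935   # site3
        iptables -t nat -A PREROUTING -p tcp --dport 1939 -j DNAT --to-destination 10.0.0.14:1935   # site4
        iptables -t nat -A POSTROUTING -j MASQUERADE
        # The embed URLs then carry the port, e.g. rtmp://site1.mydomain.com:1936/...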

    Read the article

  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uwsgi-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask, in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uwsgi both to run as the www-data user. If I launch uwsgi "manually" from a root shell I can see the uwsgi processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens - the uwsgi processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM under Windows Server 2008. Relevant configuration:
    [uwsgi]
    socket = /var/www/sockets/cowsay.sock
    chmod-socket = 666
    abstract-socket = false
    master = true
    workers = 2
    uid = www-data
    gid = www-data
    chdir = /var/www/cowsay/cowsay
    pp = /var/www/cowsay/cowsay
    pyhome = /var/www/cowsay
    module = cowsay
    callable = app
    supervisor:
    [program:cowsay]
    command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
    directory = /var/www/cowsay/cowsay
    user = www-data
    autostart = true
    autorestart = true
    stdout_logfile = /var/www/cowsay/log/supervisor.log
    redirect_stderr = true
    stopsignal = QUIT
    I'm sure I'm missing some minor detail, but I'm unable to spot it. I would appreciate any suggestions.
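
    A minimal check sketch (process names and the service name are assumptions based on the standard Ubuntu supervisor package): supervisord can only switch a program to another user, and uwsgi can only honour its uid/gid settings, if the parent process itself runs as root; if supervisord was started from the friendzis shell, every child inherits that user.

        # Who is supervisord itself running as?
        ps -eo user,pid,cmd | grep [s]upervisord
        # Who are the uwsgi workers running as?
        ps -eo user,pid,cmd | grep [u]wsgi
        # If supervisord is not root, restart it as root (e.g. via its init script)
        sudo service supervisor restart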

    Read the article
