Search Results

Search found 13605 results on 545 pages for 'mail header'.


  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario is possible:

    Nginx handles a request and routes it to some kind of authentication application where cookies and/or other security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.

    Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (PUT, DELETE, GET, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request by appending another header, for example. Failing authorization returns a 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)

    Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas?

    The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have that application proxy the request back through to Nginx (whether on the same box or another box). I know a number of application frameworks (Django, RoR, etc.) can do a lot of this "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, or even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
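
    For what it's worth, newer Nginx releases ship ngx_http_auth_request_module, which covers most of the first step natively: a subrequest decides allow/deny, and headers set by the auth app can be copied onto the proxied request. A minimal sketch (the backend names and the X-User header are made up for illustration):

        location /app/ {
            auth_request /_auth;                                  # subrequest: 2xx = allow, 401/403 = deny
            auth_request_set $auth_user $upstream_http_x_user;    # grab a header the auth app sets
            proxy_set_header X-Authenticated-User $auth_user;     # forward it to the application
            proxy_pass http://app_backend;
        }

        location = /_auth {
            internal;
            proxy_pass http://auth_backend;
            proxy_pass_request_body off;                          # the auth decision only needs headers
            proxy_set_header Content-Length "";
        }

    Only one auth_request is allowed per location, so chaining several decision services still means stacking them behind a single auth endpoint.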

    Read the article

  • transparently proxying a firewalled web application from a non-standard port to port 80

    - by Terrence Brannon
    I have a web application that serves on port 8088 on $server. However, the only port accessible from remote on $server is port 80. Furthermore, only CGI programs can execute on port 80. I would like to write a CGI program accessible via port 80 that allows one to use the web app running on port 8088.

    From my view, an ideal solution would be some sort of Java web browser that simply opened up a window and allowed me to use the program running on that port. The CGI program would simply initiate a web browser applet or something. I wrote a Perl CGI program that does it (shown below with its missing `use CGI;` restored), but I really would like a more transparent solution:

        use CGI;
        use LWP::Simple;
        use HTML::Tree;

        my $q = new CGI;
        print $q->header;

        my $base    = "http://localhost:8088";
        my $request = $base;
        my $qurl    = $q->param('url');

        if (length($qurl) > 1) {
            warn "long $qurl";
            $request = "$base$qurl";
        } else {
            warn "short $qurl";
        }

        # fetch the backend page and rewrite its links to point back at this CGI
        my $content = get($request);
        my $tree    = HTML::TreeBuilder->new_from_content($content);
        my @a       = $tree->look_down('_tag' => 'a');

        for my $a (@a) {
            my $url = $a->attr('href');
            next if index($url, '#') > -1;
            $url = "?url=$url";
            $a->attr(href => $url);
        }

        print $tree->as_HTML;

    Read the article

  • /usr/bin/python (Python 2.4) was deleted on CentOS 5. I compiled from source but yum is still broken. How can I get everything back to the way it was?

    - by Maxwell
    I saw a lot of other questions like this, but none of them answered the exact part I am having trouble with (actually installing the Python RPM). Someone on my system deleted /usr/bin/python and /usr/bin/python2.4 on my 64-bit CentOS 5.8 installation. I recompiled Python 2.4 from source, but now whenever I try to yum install anything I get the following error:

        [root@cerulean-OW1 ~]# yum install httpd
        There was a problem importing one of the Python modules
        required to run yum. The error leading to this problem was:

            No module named yum

        Please install a package which provides this module, or
        verify that the module is installed correctly.

        It's possible that the above module doesn't match the
        current version of Python, which is:
        2.4 (#1, Dec 16 2012, 09:16:56)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)]

        If you cannot solve this problem yourself, please go to
        the yum faq at:
          http://wiki.linux.duke.edu/YumFaq

    I checked http://wiki.linux.duke.edu/YumFaq and it said the following:

        If you are getting a message that yum itself is the missing module
        then you probably installed it incorreclty (or installed the source
        rpm using make/make install). If possible, find a prebuilt rpm that
        will work for your system like one from Fedora or CentOS. Or, you
        can download the srpm and do a rpmbuild --rebuild yum*.src.rpm

    I tried going to http://rpm.pbone.net/index.php3/stat/4/idpl/17838875/dir/centos_5/com/python-2.4.3-46.el5.x86_64.rpm.html to install Python, which resulted in the following error:

        [root@cerulean-OW1 ~]# rpm -Uvh python-2.4.3-46.el5.x86_64.rpm
        error: Failed dependencies:
            python-libs-x86_64 = 2.4.3-46.el5 is needed by python-2.4.3-46.el5.x86_64

    So I tried installing python-libs-x86_64, which resulted in the following:

        [root@cerulean-OW1 ~]# rpm -Uvh python-libs-2.4.3-46.el5_8.2.x86_64.rpm
        warning: python-libs-2.4.3-46.el5_8.2.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 192a7d7d
        Preparing...            ########################################### [100%]
            package python-libs-2.4.3-46.el5_8.2.x86_64 is already installed
            file /usr/lib64/libpython2.4.so.1.0 from install of python-libs-2.4.3-46.el5_8.2.x86_64 conflicts with file from package python-libs-2.4.3-46.el5_8.2.x86_64
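
    A detail worth noting in those transcripts: the python RPM from pbone is build 2.4.3-46.el5, while the installed python-libs is the newer 2.4.3-46.el5_8.2, so that dependency can never line up. A possible way out (hypothetical package name, assuming the matching el5_8.2 python package can be fetched from a CentOS 5.8 mirror or the vault) is to reinstall the exact matching build over the damaged files:

        # fetch python-2.4.3-46.el5_8.2.x86_64.rpm from a CentOS 5.8 mirror/vault, then:
        rpm -Uvh --replacepkgs --replacefiles python-2.4.3-46.el5_8.2.x86_64.rpm

    --replacepkgs lets rpm reinstall a version it believes is already present, which is also what the "conflicts with file from package python-libs-..." message hints at: the RPM database still claims ownership of files the source build overwrote.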

    Read the article

  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    Hi all, I'm struggling with something in FFmpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:

      - The output stream is insisting on being yuv422p, which doesn't support alpha.
      - 24 fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing.

    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame=   39 fps=  9 q=1.0 Lsize=    7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
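
    Two hedged observations on the trace above. First, the input is being read at the image2 default of 25 fps (note the "25 tbr" on the input stream), while the MXF muxer insists on one of DNxHD's fixed rates; placing -r 24 before -i forces the image sequence itself to 24 fps, which may clear the "unsupported video frame rate" error:

        ffmpeg -y -r 24 -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -vcodec dnxhd -f mxf -b 36Mb /tmp/temp.mxf

    Second, ffmpeg's dnxhd encoder only accepts yuv422p (which is why the output stream is forced to it regardless of -pix_fmt rgb32), so the alpha channel is discarded no matter what; keeping the alpha would mean carrying it as a separate matte or choosing a codec that supports an alpha channel.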

    Read the article

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox. 192.168.17.47
    Network: Only a switch between them - the subnet is 192.168.16/22. (In this test, I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency also.)

    Questions:

      - Does this look normal?
      - Is packet #2 specifying a window size of 65535 and a scale of 512?
      - Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K?
      - Why is the window scale so high?

    Here are the first packets from Wireshark. For packet 5 I've included the details showing the window size and scaling factor being used for the data transfer.

        No.  Time      Source         Destination    Protocol Length Info
        108  6.699922  192.168.17.47  192.168.18.30  TCP       78  49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
        115  6.781971  192.168.18.30  192.168.17.47  TCP       74  http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
        116  6.782218  192.168.17.47  192.168.18.30  TCP       66  49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
        117  6.782220  192.168.17.47  192.168.18.30  HTTP     490  GET /utils/speedtest/large.file.zip HTTP/1.1
        118  6.867070  192.168.18.30  192.168.17.47  TCP      375  [TCP segment of a reassembled PDU]

    Details:

        Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
            Source port: http (80)
            Destination port: 49190 (49190)
            [Stream index: 4]
            Sequence number: 1    (relative sequence number)
            [Next sequence number: 310    (relative sequence number)]
            Acknowledgement number: 425    (relative ack number)
            Header length: 32 bytes
            Flags: 0x018 (PSH, ACK)
            Window size value: 130
            [Calculated window size: 66560]
            [Window size scaling factor: 512]
            Checksum: 0xd182 [validation disabled]
            Options: (12 bytes)
                No-Operation (NOP)
                No-Operation (NOP)
                Timestamps: TSval 2617517423, TSecr 945617490
            [SEQ/ACK analysis]
            TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
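
    As a sanity check on the numbers in that trace (these are general RFC 1323 facts, not specific to this capture): the window field in a SYN or SYN/ACK is never scaled, so packet #2 really does advertise 65535 together with a scale factor of 512 that only takes effect after the handshake. Once scaling is in force, the advertised value is multiplied by the factor, which is exactly what Wireshark shows for packet #5:

        calculated window = window field x scale factor = 130 x 512 = 66,560 bytes (~64K)

    The large factor itself is normally derived from the largest receive buffer the OS is prepared to grow to (on FreeBSD this tracks kern.ipc.maxsockbuf), not from the current window, so a scale of 512 alongside a small initial window is not by itself a fault.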

    Read the article

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found show how to do a fresh install of worker MPM + PHP-FPM. Since my WordPress blog is already up and running with prefork MPM, correct me if I'm wrong about the simulated installation process. I'm on Ubuntu, and according to some tutorials the following lines would do all the tricks:

        apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
        a2enmod actions fastcgi alias

    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:

        <IfModule mod_fastcgi.c>
            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        </IfModule>

    After all these, restart:

        service apache2 restart && service php5-fpm restart

    Questions:

      1. Would it cause any downtime in the whole process for the sites previously running with prefork MPM?
      2. Do you have to change any existing configuration files like PHP, MySQL or Apache2 (or would they take effect immediately after the switch without you doing anything)?
      3. I already have APC up and running; do you have to re-install/re-configure it after the switch?
      4. How do you find out if Apache2 is working in worker MPM mode as expected? (See the sketch below.)

    Thanks a lot!
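
    On question 4, a quick way to see which MPM the running binary was built with (a standard apachectl feature, nothing specific to this setup):

        apache2ctl -V | grep -i mpm     # should report "Server MPM: Worker" after the switch
        apache2ctl -M | grep fastcgi    # confirms mod_fastcgi actually loaded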

    Read the article

  • Why am I getting blank error messages in my Apache error log?

    - by Jason Lamoreux
    I am running Apache 2.2 on 64-bit Windows Server 2008 Std edition with ActivePerl 5.8.9. My error log is filling up with blank error messages like these:

        [Wed Mar 31 14:08:31 2010] [error] [client 10.6.1.164]
        [Wed Mar 31 14:10:32 2010] [error] [client 10.6.1.89]
        [Wed Mar 31 14:13:20 2010] [error] [client 10.6.1.131]

    By looking in the access log I can tell that it occurs when our client machines issue a GET to a very simple Perl script:

        #!perl.exe
        use strict;
        no warnings;
        $| = 1;

        use CGI::Carp('fatalsToBrowser');
        use CGI qw(:standard);

        print header;

        my $CRLF = "\r\n<br>";
        my $Port = '10116';
        print "Success!${CRLF}PollInterval=5${CRLF}LMProMode${CRLF}Version=7${CRLF}ConnectionPort=$Port";
        exit;

    The weird thing is that it does not look like this error message is inserted every time a GET to this Perl script occurs. What could cause this error message to appear in the Apache error log?

    Read the article

  • SQL 2K5 - Multiple databases vs. Multiple files

    - by Bob Palmer
    Hey all, quick question. Our current legacy system was built using multiple distinct databases (about ten of them). These are all part of the same discrete system, and a large number of SPs and much of the functionality span multiple databases. There are also key relationships that span databases (for example, a header table may be in database A with history, etc. in database B). When deploying multiple copies of our app to the same server, therefore, we have to use multiple instances (because the database names are coded into so many sprocs).

    We're evaluating the idea of taking these ten databases (about 30GB total, with individual sizes ranging from 100MB to 10GB) and merging them into a single database. Currently, we have our databases spread across multiple spindles for better IO. The question I have is whether there is any performance loss or benefit to having 10 different databases vs. 10 different database files? i.e. rather than having three databases (A, B, and C):

        Disk D: A.mdf (1gb)
        Disk E: B.mdf (4gb)
        Disk F: C.mdf (10gb)
        Disk G: A_Log.ldf, B_Log.ldf, C_Log.ldf

    have one database (X):

        Disk D: X1.mdf (5gb)
        Disk E: X2.mdf (5gb)
        Disk F: X3.mdf (5gb)
        Disk G: X1_log.ldf, X2_log.ldf, X3_log.ldf

    Thanks! -Bob

    Read the article

  • Installing/enabling PHP Pecl Intl extension on CentOs 5

    - by Marijn Huizendveld
    Original question: I'm having trouble installing the PHP PECL intl extension on my CentOS 5 machine. After installing both icu and libicu with the following commands:

        $ yum install icu
        $ yum install libicu

    I tried to install the intl extension like so:

        $ /usr/bin/pecl install intl

    I chose to search the default location for the ICU libraries and header files. It ends up crashing like this:

        checking whether to enable internationalization support... yes, shared
        checking for icu-config... no
        checking for location of ICU headers and libraries... not found
        configure: error: Unable to detect ICU prefix or no failed. Please verify ICU install prefix and make sure icu-config works.
        ERROR: `/tmp/pear/temp/intl/configure --with-icu-dir=DEFAULT' failed

    Update: After successfully installing the development version of icu as suggested by RusAlex (thanks RusAlex), like so:

        $ yum install libicu-devel

    I ran into a new problem, which I also encountered locally. The following command:

        $ /usr/bin/pecl install intl

    now produces this error:

        /private/tmp/pear/temp/intl/collator/collator_class.c:92: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:96: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:101: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:107: error: duplicate 'static'
        make: *** [collator/collator_class.lo] Error 1
        ERROR: `make' failed

    It appears to have something to do with PHP 5.3 being bundled with intl already. But how can I enable this extension? If I look in my phpinfo I cannot find any reference to it...
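
    If the suspicion about PHP 5.3 bundling intl is right, the extension may only need enabling rather than building from PECL. A quick check, assuming the CLI build matches the web SAPI:

        php -m | grep -i intl                            # is the extension compiled in / loaded?
        php -r 'var_dump(class_exists("Collator"));'     # bool(true) means intl is usable

    If those come back empty/false, the usual enabling step after a successful build is adding "extension=intl.so" to the php.ini that the web server reads, then restarting Apache.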

    Read the article

  • apache2 mysql authentication module and SHA1 encryption

    - by Luca Rossi
    I found myself in a setup where I need to enable an authentication method using MySQL. I already have a user scheme. That user scheme is working like a charm with MD5 passwords and CRYPT, but when I turn to SHA1sum it says:

        [Fri Oct 26 00:03:20 2012] [error] Unsupported encryption type: Sha1sum

    No useful debug information in the log files. This is my setup and some info: debian6 with apache and ssl. Installed packages:

        root@sistemichiocciola:/etc/apache2/mods-available# dpkg --list | grep apache
        ii  apache2                    2.2.16-6+squeeze8  Apache HTTP Server metapackage
        ii  apache2-mpm-prefork        2.2.16-6+squeeze8  Apache HTTP Server - traditional non-threaded model
        ii  apache2-utils              2.2.16-6+squeeze8  utility programs for webservers
        ii  apache2.2-bin              2.2.16-6+squeeze8  Apache HTTP Server common binary files
        ii  apache2.2-common           2.2.16-6+squeeze8  Apache HTTP Server common files
        ii  libapache2-mod-auth-mysql  4.3.9-13+b1        Apache 2 module for MySQL authentication
        ii  libapache2-mod-php5        5.3.3-7+squeeze14  server-side, HTML-embedded scripting language (Apache 2 module)

        root@sistemichiocciola:/etc/apache2/sites-enabled# dpkg --list | grep ssl
        ii  libssl-dev                 0.9.8o-4squeeze13  SSL development libraries, header files and documentation
        ii  libssl0.9.8                0.9.8o-4squeeze13  SSL shared libraries
        ii  openssl                    0.9.8o-4squeeze13  Secure Socket Layer (SSL) binary and related cryptographic tools
        ii  openssl-blacklist          0.5-2              list of blacklisted OpenSSL RSA keys
        ii  ssl-cert                   1.0.28             simple debconf wrapper for OpenSSL

    My vhost setup:

        AuthMySQL On
        Auth_MySQL_Host localhost
        Auth_MySQL_User XXX
        Auth_MySQL_Password YYY
        Auth_MySQL_DB users
        AuthName "Sistemi Chiocciola Sezione Informatica"
        AuthType Basic
        # require valid-user
        require group informatica
        Auth_MySQL_Encryption_Types Crypt Sha1sum
        AuthBasicAuthoritative Off
        AuthUserFile /dev/null
        Auth_MySQL_Password_Table users
        Auth_MYSQL_username_field email
        Auth_MYSQL_password_field password
        AuthMySQL_Empty_Passwords Off
        AuthMySQL_Group_Table http_groups
        Auth_MySQL_Group_Field user_group

    Have I missed a package/configuration or something?
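
    One hedged guess based purely on the error text: mod_auth_mysql's encryption type keywords are matched exactly in some builds, and the spelling the module documents is "SHA1Sum" rather than "Sha1sum". If that is the cause, the fix is a one-word change:

        Auth_MySQL_Encryption_Types Crypt SHA1Sum

    Worth checking against the documentation shipped in /usr/share/doc/libapache2-mod-auth-mysql/, which lists the exact spellings the installed build accepts.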

    Read the article

  • TFS 2012: Backup Plan Fails with empty log file

    - by Vitor
    I have a Team Foundation Server 2012 installation with Power Tools, and I defined a backup plan using the wizard found under "Database Backup Tools" in the Team Foundation Server Administration Console. I set the backup plan to do a full database backup on Sunday mornings, to another server in the network. I followed the wizard with no problems and the backup plan was set successfully. However, when the backup runs it returns Error as the result, and when I go to the log file I only get the header and no further info:

        [Info   @01:00:01.078] ====================================================================
        [Info   @01:00:01.078] Team Foundation Server Administration Log
        [Info   @01:00:01.078] Version  : 11.0.50727.1
        [Info   @01:00:01.078] DateTime : 11/25/2012 02:00:01
        [Info   @01:00:01.078] Type     : Full Backup Activity
        [Info   @01:00:01.078] User     : <backup user>
        [Info   @01:00:01.078] Machine  : <TFS Server>
        [Info   @01:00:01.078] System   : Microsoft Windows NT 6.2.9200.0 (AMD64)
        [Info   @01:00:01.078] ====================================================================

    I can imagine it's a permission problem, but I have no idea where to start... Can anyone help? Thank you for your time!

    EDIT: I'm not sure if it is related, but I logged in with "backup user" on "TFS Server" and there was a crash window open saying "TFS Power Tool Shell Extension (TfsComProviderSvr) has stopped working". The full crash log is here:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         TfsComProviderSvr.exe
          Application Version:      11.0.50727.0
          Application Timestamp:    5050cd2a
          Fault Module Name:        StackHash_e8da
          Fault Module Version:     6.2.9200.16420
          Fault Module Timestamp:   505aaa82
          Exception Code:           c0000374
          Exception Offset:         PCH_72_FROM_ntdll+0x00040DA8
          OS Version:               6.2.9200.2.0.0.272.7
          Locale ID:                1043
          Additional Information 1: e8da
          Additional Information 2: e8dac447e1089515a72386afa6746972
          Additional Information 3: d903
          Additional Information 4: d9036f986c69f4492a70e4cf004fb44d

    Does it help? Thanks everyone!

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have two old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical; I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each one twice, I haven't done many).

    I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in the ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files.

    One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10KB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree.

    I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?" -- but I can't run AllDup since I'm on Linux and it is Windows-only, and from the sounds of it, it would only partially solve my issues anyway.
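
    One way to compare just the audio, assuming ffmpeg is available: ffmpeg can decode each file and emit an MD5 of the raw samples, which ignores ID3v1/ID3v2 tags entirely because only the decoded audio stream is hashed:

        ffmpeg -i old/song.mp3 -map 0:a -f md5 - 2>/dev/null
        ffmpeg -i new/song.mp3 -map 0:a -f md5 - 2>/dev/null

    Matching hashes mean the two files differ only in metadata or other non-audio bytes; differing hashes mean the audio itself changed, which is the corruption case worth investigating.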

    Read the article

  • Installing mod_pagespeed (Apache module) on CentOS

    - by Sid B
    I have a CentOS (5.7 Final) system on which I already have Apache (2.2.3) installed. I installed mod_pagespeed by following the instructions at http://code.google.com/speed/page-speed/download.html and got the following while installing:

        # rpm -U mod-pagespeed-*.rpm
        warning: mod-pagespeed-beta_current_x86_64.rpm: Header V4 DSA signature: NOKEY, key ID 7fac5991
        [  OK  ]
        atd: [  OK  ]

    It does appear to be installed properly:

        # apachectl -t -D DUMP_MODULES
        Loaded Modules:
        ...
        pagespeed_module (shared)

    And I've made the following changes in /etc/httpd/conf.d/pagespeed.conf. Added:

        ModPagespeedEnableFilters collapse_whitespace,elide_attributes
        ModPagespeedEnableFilters combine_css,rewrite_css,move_css_to_head,inline_css
        ModPagespeedEnableFilters rewrite_javascript,inline_javascript
        ModPagespeedEnableFilters rewrite_images,insert_img_dimensions
        ModPagespeedEnableFilters extend_cache
        ModPagespeedEnableFilters remove_quotes,remove_comments
        ModPagespeedEnableFilters add_instrumentation

    Commented out the following lines in the mod_pagespeed_statistics block:

        <Location /mod_pagespeed_statistics>
            # Order allow,deny
            # You may insert other "Allow from" lines to add hosts you want to
            # allow to look at generated statistics.  Another possibility is
            # to comment out the "Order" and "Allow" options from the config
            # file, to allow any client that can reach your server to examine
            # statistics.  This might be appropriate in an experimental setup or
            # if the Apache server is protected by a reverse proxy that will
            # filter URLs in some fashion.
            # Allow from localhost
            # Allow from 127.0.0.1
            SetHandler mod_pagespeed_statistics
        </Location>

    As a separate note, I'm trying to run the prescribed system tests as specified on Google's site, but they give the following error. I'm averse to updating wget on my server, as I'm sure there's no need for it for the actual module to function correctly.

        ./system_test.sh www.domain.com
        You have the wrong version of wget. 1.12 is required.
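
    A lighter-weight check than the wget-dependent system test (assuming default settings, under which mod_pagespeed stamps the responses it rewrites): look for its response header on any HTML page the module handles:

        curl -sI http://localhost/index.html | grep -i x-mod-pagespeed

    A header like "X-Mod-Pagespeed: <version>" shows the module is active, without needing wget 1.12 at all.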

    Read the article

  • Scanning a website for vulnerabilities

    - by Kristen
    I have found that the local school's website installed a Perl calendar - this was years ago, it has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like... The program was by Matt Kruse; here are the details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of admin tools / logins in old versions. For all I know they also have phpBB and the like installed... The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit rather than open ports etc. My guess is that they have little control over the hosting space that they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I worry about how to find out if they are well known / honest - otherwise I will be tipping them a wink with a domain name that may be at risk! Many thanks.
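
    For the "known-vulnerable scripts" angle specifically (as opposed to port scanning), one widely used open-source option - offered here as a possibility, not something the asker mentioned - is Nikto, which tests a site against a database of thousands of dangerous files and outdated CGI programs, exactly the class the Matt Kruse calendar falls into:

        nikto -h http://www.example-school.org/    # hostname is a placeholder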

    Read the article

  • Website cannot be accessed with google DNS because of unsigned DNS

    - by Sinan Samet
    I get this error on http://dnscheck.pingdom.com/?domain=stakeholdergame.com:

        Inconsistent security for stakeholdergame.com - DS found at parent, but no DNSKEY found at child.

    People can't access my site with Google Public DNS because of this. How do I solve this problem? "dig @ns1.haveabyte.nl stakeholdergame.com DS" shows me this:

        ; <<>> DiG 9.8.3-P1 <<>> @ns1.haveabyte.nl stakeholdergame.com DS
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42223
        ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;stakeholdergame.com.   IN  DS

        ;; AUTHORITY SECTION:
        stakeholdergame.com.  14400  IN  SOA  ns1.haveabyte.nl. hostmaster.stakeholdergame.com. 2014030300 14400 3600 1209600 86400

        ;; Query time: 21 msec
        ;; SERVER: 79.170.93.174#53(79.170.93.174)
        ;; WHEN: Tue Jun 10 11:20:41 2014
        ;; MSG SIZE  rcvd: 100
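
    The pingdom message describes the classic "stale DS" failure: the .com parent still publishes a DS record (typically left over from an old signed setup or a hosting migration), while the zone itself no longer serves a DNSKEY, so validating resolvers such as Google Public DNS must treat every answer as bogus. Two quick checks to confirm (the gTLD server named here is just one of the interchangeable .com servers):

        dig stakeholdergame.com DNSKEY +short                    # empty answer = zone is unsigned
        dig @a.gtld-servers.net stakeholdergame.com DS +short    # non-empty = stale DS at the parent

    If that pattern holds, the fix happens at the registrar: either delete the DS record, or re-sign the zone and publish a DNSKEY that matches it.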

    Read the article

  • RPM issues after signing JDK 1.6 64-bit

    - by organicveggie
    I'm trying to sign the Java JDK 1.6u21 64-bit RPM on CentOS 5.5 for use with Spacewalk and I'm running into problems. It seems to sign okay, but when I check the signature it seems to be missing the key I just used to sign it. Yet RPM shows the key in its list...

        # rpm --addsign jdk-6u21-linux-amd64.rpm
        Enter pass phrase:
        Pass phrase is good.
        jdk-6u21-linux-amd64.rpm:
        gpg: WARNING: standard input reopened
        gpg: WARNING: standard input reopened

        # rpm --checksig -v jdk-6u21-linux-amd64.rpm
        jdk-6u21-linux-amd64.rpm:
            Header V3 DSA signature: NOKEY, key ID ecfd98a5
            MD5 digest: OK (650e0961e20d4a44169b68e8f4a1691b)
            V3 DSA signature: OK, key ID ecfd98a5

    Yet I have the key imported (edited for privacy):

        # rpm -qa gpg-pubkey* | grep ecfd98a5
        gpg-pubkey-ecfd98a5-4caa4a4c

        # rpm -qi gpg-pubkey-ecfd98a5-4caa4a4c
        Name        : gpg-pubkey                  Relocations: (not relocatable)
        Version     : ecfd98a5                    Vendor: (none)
        Release     : 4caa4a4c                    Build Date: Mon 04 Oct 2010 10:20:49 PM CDT
        Install Date: Mon 04 Oct 2010 10:20:49 PM CDT    Build Host: localhost
        Group       : Public Keys                 Source RPM: (none)
        Size        : 0                           License: pubkey
        Signature   : (none)
        Summary     : gpg(FirstName LastName <[email protected]>)
        Description :
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        Version: rpm-4.4.2.3 (NSS-3)
        ...key goes here...
        =gKjN
        -----END PGP PUBLIC KEY BLOCK-----

    And I'm definitely running a 64-bit version of CentOS:

        # uname -a
        Linux spacewalk.mycompany.corp 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

    Without a valid signature, Spacewalk refuses to install the RPM unless I completely disable signature checking. I have tried this with two different keys and two different users on the same machine without any success. Any bright ideas?

    Read the article

  • Update Grub on Squeeze - Kernel downgrade due to VMware Server

    - by vodoo_boot
    Hi! I happen to run into various problems regarding grub and kernels. I don't really care about the kernel internals; all I want is VMware Server on that dedicated root-server.

    1.) What is a bzImage vs. vmlinuz?

        kaze:~# ls /boot/
        System.map-2.6.32-5-amd64  bzImage-2.6.33.2       config-2.6.33.2  initrd.img-2.6.32-5-amd64
        System.map-2.6.33.2        bzImage-2.6.35.6       config-2.6.35.6  vmlinuz-2.6.32-5-amd64
        System.map-2.6.35.6        config-2.6.32-5-amd64  grub

    I updated my menu.lst (grub2):

        timeout 5
        default 0
        fallback 1

        title 2.6.32.5
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.35.6
        kernel (hd0,1)/boot//bzImage-2.6.35.6 root=/dev/sda2 panic=60 noapic acpi=off

        title 2.6.32.3
        kernel (hd0,1)/boot//bzImage-2.6.33.2 root=/dev/sda2 panic=60 noapic acpi=off

    That doesn't go well... I think the vmlinuz entry is missing its initrd or so. Dunno - in fact I don't care much about kernel boot voodoo as long as it works. update-grub(2) does not work. Does anybody know what magical trick there is to get 2.6.32-5 booting? (See the sketch below.)

    2.) I thought to follow the Debian wiki... I cannot get header files for the installed 2.6.35.6 or 2.6.33.2 kernels from the repositories, and I cannot build foreign headers because they will not match the running kernel. So how does one deal with that situation? I'd prefer not to have to downgrade the kernel. Thanks for any answers!
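
    On question 1: bzImage and vmlinuz are the same kind of thing - a compressed bootable kernel image. bzImage is the name the kernel build drops into arch/x86/boot/, while vmlinuz is the conventional name the image gets when installed into /boot by a package or `make install`. On the boot failure, the suspicion about the missing initrd looks right: the packaged Debian 2.6.32-5 kernel loads its disk and filesystem drivers from modules inside initrd.img-2.6.32-5-amd64 (visible in the /boot listing above), so its stanza needs an initrd line. A hedged sketch reusing the entry's own paths:

        title 2.6.32-5 (Debian packaged)
        kernel (hd0,1)/boot/vmlinuz-2.6.32-5-amd64 root=/dev/sda2 panic=60 noapic acpi=off
        initrd (hd0,1)/boot/initrd.img-2.6.32-5-amd64

    Also note the file shown is legacy-grub menu.lst syntax; if the installed bootloader is actually grub2 (grub-pc), update-grub manages /boot/grub/grub.cfg instead, which would explain why editing menu.lst appears to "not work".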

    Read the article

  • Monitoring multiple sites on a single server using OpsView

    - by Kev
    We have several web servers. On each of these servers there can be ~250 web sites. I need to add an HTTP check for each site on each server. Each site has a reserved host header that we know can always be resolved, in the format of:

        w10000.hostchecks.mycompany.com
        w10020.hostchecks.mycompany.com
        w11992.hostchecks.mycompany.com
        ..and so on..

    What I want is a master ping check on the web server's main IP address, and then separate HTTP checks for each of the sites on the server. If the master ping test fails, I want the HTTP tests to cease until the master ping check goes OK again.

    I had a stab at this and tried to do the following:

      - Create a parent host that does a ping check on the server's main IP address (e.g. the server is named WEB0001).
      - For each of the sites that reside on WEB0001:
          - Create a separate host with a primary hostname of wXXXXX.hostchecks.mycompany.com
          - Make WEB0001 the parent host
          - Add a monitor (an HTTP check to a special URL that is mapped into each site using a virtual directory):

                -H $HOSTADDRESS$ -u /__hostcheck/IsAlive.aspx -w 5 -c 10 -p 80

    However, I find that if I down the parent server (WEB0001), the HTTP checks seem to continue. Am I going about this completely the wrong way?

    Read the article

  • Unable to compile netmap on Fedora 32 bit

    - by John Elf
    This is the error I get every time I try to build netmap's e1000 driver. Can someone let me know how to install the same on e1000e or ixgbe? I have the kernel headers and source installed.

        [root@localhost e1000]# make KSRC=/usr/src/kernels/2.6.35.6-45.fc14.i686/
        make -C /usr/src/kernels/2.6.35.6-45.fc14.i686/ M=/media/sf_Shared/netmap-linux/net/e1000 modules
        make[1]: Entering directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
          CC [M]  /media/sf_Shared/netmap-linux/net/e1000/e1000_main.o
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_setup_tx_resources':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:2: error: implicit declaration of function 'vzalloc'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_setup_rx_resources':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1680:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_tx_csum':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:2780:2: error: implicit declaration of function 'skb_checksum_start_offset'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_rx_checksum':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:3689:2: error: implicit declaration of function 'skb_checksum_none_assert'
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function 'e1000_restore_vlan':
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: error: 'VLAN_N_VID' undeclared (first use in this function)
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: note: each undeclared identifier is reported only once for each function it appears in
        make[2]: *** [/media/sf_Shared/netmap-linux/net/e1000/e1000_main.o] Error 1
        make[1]: *** [_module_/media/sf_Shared/netmap-linux/net/e1000] Error 2
        make[1]: Leaving directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
        make: *** [all] Error 2
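
    All four missing symbols point the same way: vzalloc(), skb_checksum_start_offset(), skb_checksum_none_assert() and VLAN_N_VID entered the mainline kernel around 2.6.37/2.6.38 (versions from memory, so treat them as approximate), which suggests this netmap e1000 patch targets a newer kernel than the 2.6.35 Fedora 14 headers it is being built against. A quick way to confirm, using the header path from the make output:

        grep -rn "vzalloc" /usr/src/kernels/2.6.35.6-45.fc14.i686/include/linux/vmalloc.h
        # no match = the installed headers predate these APIs

    The clean fixes are either a netmap snapshot that targets 2.6.35, or a kernel (with matching headers) new enough for the patched drivers; the same mismatch would hit the e1000e and ixgbe builds too.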

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found the field in the DSDT table that I want to modify, from here: http://www.ztex.de/misc/c2ctl.e.html

    Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the cpufreq driver interface. I tried these commands to disassemble the DSDT table from my desktop (Linux 2.6.29, Intel Core 2):

        cat /proc/acpi/dsdt > dsdt.aml
        iasl -d dsdt.aml

    Then I have a file dsdt.dsl as follows (very long, so I just show the beginning of the file):

        /*
         * Intel ACPI Component Architecture
         * AML Disassembler version 20090123
         *
         * Disassembly of dsdt.aml, Mon May  6 20:41:40 2013
         *
         *
         * Original Table Header:
         *     Signature        "DSDT"
         *     Length           0x00003794 (14228)
         *     Revision         0x01 **** ACPI 1.0, no 64-bit math support
         *     Checksum         0x46
         *     OEM ID           "DELL"
         *     OEM Table ID     "dt_ex"
         *     OEM Revision     0x00001000 (4096)
         *     Compiler ID      "INTL"
         *     Compiler Version 0x20050624 (537200164)
         */
        DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000)
        {
            Method (DBIN, 0, NotSerialized)
            {
                Noop
            }

            Scope (\)
            {
                Device (_SB.VBTN)
        ...................

    But I cannot find the _PSS field as shown on the website I linked above, and I do not know why. The current cpufreq driver shows 4 frequency levels available, so at least there should be something in the tables describing them... right? Has anybody here played with the DSDT table before? Thanks.
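
    A possible explanation (common on Intel systems, though not guaranteed for this Dell): the _PSS package frequently lives in one of the SSDTs - secondary ACPI tables loaded alongside the DSDT - rather than in the DSDT itself, so disassembling only dsdt.aml would never show it. On a 2.6.29-era kernel the SSDTs are exported under /sys/firmware/acpi/tables and can be disassembled the same way:

        # copy out every SSDT (names like SSDT, SSDT1, ... vary by machine)
        for t in /sys/firmware/acpi/tables/SSDT*; do cp "$t" "$(basename "$t").aml"; done
        iasl -d SSDT*.aml
        grep -l _PSS *.dsl    # find which table actually defines the P-states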

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to login in the pyload through the web api, but wget is not saving the cookies and I don't understand why. I'm using the following command: wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login But the content of my_cookies.txt is: # HTTP cookie file. # Generated by Wget on 2012-06-23 22:31:33. # Edit at your own risk. When I run the same command but in debug mode I get the following output that includes the set cookie in the header response: DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi. --22:31:11-- http://localhost:8000/api/login Resolving localhost... 127.0.0.1 Caching localhost => 127.0.0.1 Connecting to localhost|127.0.0.1|:8000... connected. Created socket 3. Releasing 0x000504d0 (new refcount 1). ---request begin--- POST /api/login HTTP/1.0 User-Agent: Wget/1.10.2 (Red Hat modified) Accept: */* Host: localhost:8000 Connection: Keep-Alive Content-Type: application/x-www-form-urlencoded Content-Length: 32 ---request end--- [POST data: username=USERNAME&password=PASSWORD] HTTP request sent, awaiting response... ---response begin--- HTTP/1.1 200 OK Content-Length: 34 Content-Type: application/json Cache-Control: no-cache, must-revalidate Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/ Connection: Keep-Alive Date: Sat, 23 Jun 2012 21:31:11 GMT Server: CherryPy/3.1.2 WSGI Server ---response end--- 200 OK hs->local_file is: login (not existing) Registered socket 3 for persistent reuse. TEXTHTML is on. Length: 34 [application/json] Saving to: `login' 100%[=======================================>] 34 --.-K/s in 0s 22:31:11 (1.28 MB/s) - `login' saved [34/34] Removing file due to --delete-after in main(): Removing login. Saving cookies to my_cookies.txt. Done saving cookies. Can anyone tell me what am I doing wrong? Thanks in advance!

    Read the article

  • calculate AUC (GAM) in R [migrated]

    - by ahmad
    I used the following script to calculate AUC in R:

        library(mgcv)
        library(ROCR)
        library(AUC)

        data1 <- read.table("d:\\2005.txt", header=T)
        GAM <- gam(tuna ~ s(chla) + s(sst) + s(ssha), family=binomial, data=data1)
        gampred <- predict(GAM, type="response")
        rp <- prediction(gampred, data1$tuna)
        auc <- performance(rp, "auc")@y.values[[1]]
        auc
        roc <- performance(rp, "tpr", "fpr")
        plot(roc)

    But when I run the script, the result is:

        > rp <- prediction(gampred, data1$tuna)
        Error in prediction(gampred, data1$tuna) :
          Format of predictions is invalid.

        > auc <- performance(rp, "auc")@y.values[[1]]
        Error in performance(rp, "auc") : object 'rp' not found

        > auc
        function (x, min = 0, max = 1)
        {
            if (any(class(x) == "roc")) {
                if (min != 0 || max != 1) {
                    x$fpr <- x$fpr[x$cutoffs >= min & x$cutoffs <= max]
                    x$tpr <- x$tpr[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:length(x$fpr)) {
                    ans <- ans + 0.5 * abs(x$fpr[i] - x$fpr[i - 1]) * (x$tpr[i] + x$tpr[i - 1])
                }
            }
            else if (any(class(x) %in% c("accuracy", "sensitivity", "specificity"))) {
                if (min != 0 || max != 1) {
                    x$cutoffs <- x$cutoffs[x$cutoffs >= min & x$cutoffs <= max]
                    x$measure <- x$measure[x$cutoffs >= min & x$cutoffs <= max]
                }
                ans <- 0
                for (i in 2:(length(x$cutoffs))) {
                    ans <- ans + 0.5 * abs(x$cutoffs[i - 1] - x$cutoffs[i]) * (x$measure[i] + x$measure[i - 1])
                }
            }
            return(as.numeric(ans))
        }
        <bytecode: 0x03012f10>
        <environment: namespace:AUC>

        > roc <- performance(rp, "tpr", "fpr")
        Error in performance(rp, "tpr", "fpr") : object 'rp' not found

        > plot(roc)
        Error in levels(labels) : argument "labels" is missing, with no default

    (Since the `auc <-` assignment failed, typing `auc` printed the AUC package's function of that name, and the later errors all cascade from `rp` never being created.) Can anybody help me to solve this problem? Thank you in advance.

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now, %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to.

    I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that this has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
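
    For reference (standard Apache log format semantics): lowercase %v always logs the canonical ServerName of the vhost that answered, while uppercase %V respects the UseCanonicalName setting, and with UseCanonicalName Off it falls back to the client-supplied hostname, effectively behaving like %{Host}i. So an alternative to swapping in the Host header is:

        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog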

    Read the article

  • Hotmail marking messages as junk

    - by Canadaka
    I was having problems with emails sent from my server being blocked completely by Hotmail, but I found out Hotmail had blocked my IP, and by contacting Hotmail I had the block removed. See this question for more info: Email sent from server with rDNS & SPF being blocked by Hotmail.

    But now all emails from my server are going directly to recipients' "Junk" folders on Hotmail and I can't figure out why. Hotmail says "Microsoft SmartScreen marked this message as junk and we'll delete it after ten days." I tried contacting the same people at Hotmail who had my IP block removed, but I haven't received any reply and it's been almost a week. Here are some details:

      - I have a valid SPF record for my domain: "v=spf1 a include:_spf.google.com ~all"
      - I have reverse DNS set up
      - I have a Sender Score of 100: https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14
      - I have signed up for Microsoft's SNDS and was approved. My IP says "All of the specified IPs have normal status."
      - Microsoft added my IP to the JMRP database
      - My IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177
      - My FROM header is being sent in the proper format: "From: CKA <[email protected]>"

    Here is a test email source:

    Read the article

  • ipmi - can't ping or remotely connect

    - by Fidel
    I've tried configuring the IPMI controller to accept remote connections, but I can't even ping it. Here is its status:

        # /usr/local/bin/ipmitool lan print 2
        Set in Progress         : Set Complete
        Auth Type Support       : NONE PASSWORD
        Auth Type Enable        : Callback :
                                : User     : NONE PASSWORD
                                : Operator : PASSWORD
                                : Admin    : PASSWORD
                                : OEM      :
        IP Address Source       : Static Address
        IP Address              : 192.168.1.112
        Subnet Mask             : 255.255.255.0
        MAC Address             : 00:a0:a5:67:45:25
        IP Header               : TTL=0x40 Flags=0x40 Precedence=0x00 TOS=0x10
        BMC ARP Control         : ARP Responses Enabled, Gratuitous ARP Enabled
        Gratituous ARP Intrvl   : 8.0 seconds
        Default Gateway IP      : 192.168.1.1
        Default Gateway MAC     : 00:00:00:00:00:00
        802.1q VLAN ID          : Disabled
        802.1q VLAN Priority    : 0
        RMCP+ Cipher Suites     : 0,1,2,3
        Cipher Suite Priv Max   : uaaaXXXXXXXXXXX
                                :     X=Cipher Suite Unused
                                :     c=CALLBACK
                                :     u=USER
                                :     o=OPERATOR
                                :     a=ADMIN
                                :     O=OEM

        # /usr/local/bin/ipmitool user list 2
        ID  Name   Enabled  Callin  Link Auth  IPMI Msg  Channel Priv Limit
        1          true     false   true       true      USER
        2   admin  true     false   true       true      ADMINISTRATOR

        # /usr/local/bin/ipmitool channel getaccess 2 2
        Maximum User IDs     : 5
        Enabled User IDs     : 2

        User ID              : 2
        User Name            : admin
        Fixed Name           : No
        Access Available     : callback
        Link Authentication  : enabled
        IPMI Messaging       : enabled
        Privilege Level      : ADMINISTRATOR

        # /usr/local/bin/ipmitool channel info 2
        Channel 0x2 info:
          Channel Medium Type   : 802.3 LAN
          Channel Protocol Type : IPMB-1.0
          Session Support       : multi-session
          Active Session Count  : 0
          Protocol Vendor ID    : 7154
          Volatile(active) Settings
            Alerting            : disabled
            Per-message Auth    : disabled
            User Level Auth     : disabled
            Access Mode         : always available
          Non-Volatile Settings
            Alerting            : disabled
            Per-message Auth    : disabled
            User Level Auth     : disabled
            Access Mode         : always available

        # /usr/local/bin/ipmitool chassis status
        System Power         : on
        Power Overload       : false
        Power Interlock      : inactive
        Main Power Fault     : false
        Power Control Fault  : false
        Power Restore Policy : unknown
        Last Power Event     :
        Chassis Intrusion    : inactive
        Front-Panel Lockout  : inactive
        Drive Fault          : false
        Cooling/Fan Fault    : false

        # arp
        Address        HWtype  HWaddress          Flags Mask  Iface
        192.168.1.112  ether   00:A0:A5:67:45:25  C           bond0

        # /usr/local/bin/ipmitool -I lan -H 192.168.1.112 -U admin -P admin chassis power status
        Error: Unable to establish LAN session
        Unable to get Chassis Power Status

    In summary: it exists in the ARP list, so ARPs are being broadcast, but I can't ping it and can't connect to it. Can anyone spot any glaring mistakes in the configuration? Many thanks, Fidel
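
    Two hedged things to try from the local ipmitool session before suspecting the network. First, "Access Available : callback" in the getaccess output suggests user 2 is limited to callback connections on this channel, which alone could explain the failed LAN session; explicitly (re)enabling LAN access and the admin account uses stock ipmitool subcommands, though whether they change anything depends on the BMC:

        ipmitool lan set 2 access on    # make sure LAN access is enabled on channel 2
        ipmitool user enable 2          # make sure user ID 2 (admin) is enabled
        ipmitool user priv 2 4 2        # ADMINISTRATOR (4) privilege for user 2 on channel 2

    Second, on BMCs that share a NIC with the host, the controller often answers only through the primary interface of the pair, so the bond0 setup on this machine is worth ruling out as the cause of the ping failure.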

    Read the article
