Search Results

Search found 11703 results on 469 pages for 'dev to production'.


  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe this issue, which is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Win7 with VMware Workstation installed. The purpose of this domain is to let us try and test different situations before they go into the production network.
    I built a VM with WinSrv2008R2 and I am using that VM as a template to make other servers for the domain by cloning it. I raise a DC with one clone and a member server with another clone, and I add the server to the domain. I am following a standard procedure, as always (it is not my first domain). Then I make an admin account and add it to the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC - no problem there. But on the other server it has somewhat half the privileges, and I can't log in via RDP. I tried with another account - same issues.
    For example (with half the privileges): I can't open the Event Viewer via Start - Administrative Tools - Event Viewer, but I can open the Event Viewer via Server Manager. You can notice this on the image below. I am going crazy; I haven't experienced anything similar in my three years of experience, and I have already lost 3 days troubleshooting this.
    Could this be related to the cloning? Perhaps if I make fresh installs of WinSrv2008 there won't be any problems? I have raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8. I've made clones before on Workstation 7 and it didn't have any problems. Does anyone have any ideas?
    UPDATE: This is the info from the event log when I try to access via RDP:

        An account failed to log on.
        Subject:
            Security ID:    NULL SID
            Account Name:   -
            Account Domain: -
            Logon ID:       0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID:    NULL SID
            Account Name:   pat.coleman
            Account Domain: lab
        Failure Information:
            Failure Reason: Domain sid inconsistent.
            Status:         0xc000006d
            Sub Status:     0xc000019b
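
    The "Domain sid inconsistent" failure status above is what you would expect if the clones still share the template's machine SID, so the cloning suspicion seems plausible. A hedged sketch of one way to rule it out - generalizing the template (or each clone) with Sysprep before joining it to the domain - follows; this is an assumption drawn from the logged status, not a confirmed diagnosis:

        REM Run inside the cloned VM before joining/promoting it; resets the machine SID
        C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

    After the clone boots again it receives a fresh SID and can be renamed and joined to the domain as usual.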

    Read the article

  • After compiling PHP, I get mod_fcgid: error reading data from FastCGI server

    - by user34295
    I'm trying to add multiple PHP versions in Plesk 12. Switching my domain to the new version, PHP 5.4.29, results in this error:

        (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    Here is phpinfo() of the compiled PHP version, obtained by running php54-cgi index.php from the terminal. The same script placed under the document root doesn't work in FastCGI. How can I debug/try to figure out what the error is?
    Currently running CentOS 6.5 x64, Plesk v12.0.18_build1200140529.2, PHP 5.5.13. I've downloaded PHP 5.4.29:

        cd /usr/local/src
        curl -O http://it1.php.net/distributions/php-5.4.29.tar.gz
        cd php-5.4.29

    And configured with:

        ./configure \
            --prefix=/usr/local/php54 \
            --with-bz2 \
            --with-config-file-path=/usr/local/php54/etc \
            --with-config-file-scan-dir=/usr/local/php54/etc/php.d \
            --with-curl \
            --with-gd \
            --with-gettext \
            --with-iconv \
            --with-layout=PHP \
            --with-libxml-dir=/usr/local/php54 \
            --with-mhash \
            --with-mysql=mysqlnd \
            --with-mysqli=mysqlnd \
            --with-openssl \
            --with-pdo-mysql=mysqlnd \
            --with-readline \
            --with-xsl \
            --with-zlib \
            --enable-calendar \
            --enable-cgi \
            --enable-exif \
            --enable-ftp \
            --enable-intl \
            --enable-mbstring \
            --enable-pcntl \
            --enable-shmop \
            --enable-sockets \
            --enable-sockets \
            --enable-sysvmsg \
            --enable-sysvsem \
            --enable-sysvshm \
            --enable-wddx \
            --enable-zip

    Then: make && make install

        Installing PHP CLI binary:        /usr/local/php54/bin/
        Installing PHP CLI man page:      /usr/local/php54/php/man/man1/
        Installing PHP CGI binary:        /usr/local/php54/bin/
        Installing PHP CGI man page:      /usr/local/php54/php/man/man1/
        Installing build environment:     /usr/local/php54/lib/php/build/
        Installing header files:          /usr/local/php54/include/php/
        Installing helper programs:       /usr/local/php54/bin/
          program: phpize
          program: php-config
        Installing man pages:             /usr/local/php54/php/man/man1/
          page: phpize.1
          page: php-config.1
        Installing PEAR environment:      /usr/local/php54/lib/php/
        [PEAR] Archive_Tar      - installed: 1.3.11
        [PEAR] Console_Getopt   - installed: 1.3.1
        warning: pear/PEAR requires package "pear/Structures_Graph" (recommended version 1.0.4)
        warning: pear/PEAR requires package "pear/XML_Util" (recommended version 1.2.1)
        [PEAR] PEAR             - installed: 1.9.4
        Wrote PEAR system config file at: /usr/local/php54/etc/pear.conf
        You may want to add: /usr/local/php54/lib/php to your php.ini include_path
        [PEAR] Structures_Graph - installed: 1.0.4
        [PEAR] XML_Util         - installed: 1.2.1
        /usr/local/src/php-5.4.29/build/shtool install -c ext/phar/phar.phar /usr/local/php54/bin
        ln -s -f /usr/local/php54/bin/phar.phar /usr/local/php54/bin/phar
        Installing PDO headers:           /usr/local/php54/include/php/ext/pdo/

    Copied php.ini-production to /usr/local/php54/etc/php.ini and added a new handler in Plesk:

        /usr/local/psa/bin/php_handler --add -displayname 5.4.29 -path /usr/local/php54/bin/php-cgi -phpini /usr/local/php54/etc/php.ini -type fastcgi -id php54

    Symbolic linking:

        ln -s /usr/local/php54/bin/php /usr/local/bin/php54
        ln -s /usr/local/php54/bin/php-cgi /usr/local/bin/php54-cgi

    New installed version: php54-cgi -m

        [PHP Modules]
        bz2 calendar cgi-fcgi Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext hash iconv intl json libxml mbstring mhash mysql mysqli mysqlnd openssl pcntl pcre PDO pdo_mysql pdo_sqlite Phar posix readline Reflection session shmop SimpleXML sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tokenizer wddx xml xmlreader xmlwriter xsl zip zlib

        [Zend Modules]

    Read the article

  • lacp, cisco 3550, 3560, help with configuration

    - by Flamewires
    Hey all, this is a repost of a question I asked on the Cisco forums but never got a useful reply to.
    I'm trying to convert the FreeBSD servers at work to dual-gig lagg links from regular gigabit links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved fail-over, but am having trouble achieving the speed increase. All servers are running gig Intel (em) cards. The configs for the servers are:

        BSDServer:
        #!/bin/sh
        # bring up both interfaces
        ifconfig em0 up media 1000baseTX mediaopt full-duplex
        ifconfig em1 up media 1000baseTX mediaopt full-duplex
        # create the lagg interface
        ifconfig lagg0 create
        # set lagg0's protocol to lacp, add both cards to the interface,
        # and assign it em1's ip/netmask
        ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0

    The switches are configured as follows:

        # clear out old junk
        no int Po1
        default int range GigabitEthernet 0/15 - 16
        # config ports
        interface range GigabitEthernet 0/15 - 16
          description lagg-test
          switchport
          duplex full
          speed 1000
          switchport access vlan 192
          spanning-tree portfast
          channel-group 1 mode active
          channel-protocol lacp
          **** switchport trunk encapsulation dot1q ****
          no shutdown
          exit
        interface Port-channel 1
          description lagginterface
          switchport access vlan 192
          exit
        port-channel load-balance src-mac
        end

    (Obviously change 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch has 100 Mbit ports.)
    With this config on the 3550, I get failover and 92 Mbit/sec on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this only works with the "switchport trunk encapsulation dot1q" line. First, I do not understand why I need this - I thought it was only for connecting switches. Is there some other setting which this turns on that is actually responsible for the speed increase?
    Second, this config does not work on the 3560. I get failover, but not the speed increase. Speeds drop from gig/sec to 500 Mbit/sec when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing.
    In my tests I am using iperf. I have the server (lagg box) set up as the server (iperf -s), and the client computers are clients (iperf -c server-ip-address), so the source MAC (and IP) are different for both connections.
    Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/MySQL/Postgres sites/applications. We host two different types of site.
    i). PHP/MySQL/Zend Framework

        <VirtualHost *:80>
          DocumentRoot "/home/websites/website.com/public"
          ServerName website.com
          # This should be omitted in the production environment
          SetEnv APPLICATION_ENV development
          <Directory "/home/websites/website.com/public">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} -s [OR]
            RewriteCond %{REQUEST_FILENAME} -l [OR]
            RewriteCond %{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [NC,L]
            RewriteRule ^.*$ index.php [NC,L]
          </Directory>
        </VirtualHost>

    ii). Matrix CMS - PHP/Postgres + loads of PEAR classes

        <VirtualHost *:80>
          ServerName server.example.com
          DocumentRoot /home/websites/mysource_matrix/core/web
          Options -Indexes FollowSymLinks
          <Directory /home/websites/mysource_matrix>
            Order deny,allow
            Deny from all
          </Directory>
          <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
            Order allow,deny
            Allow from all
          </DirectoryMatch>
          <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
            php_flag engine off
          </DirectoryMatch>
          <FilesMatch "\.inc$">
            Order allow,deny
            Deny from all
          </FilesMatch>
          <LocationMatch "/(CVS|\.FFV)/">
            Order allow,deny
            Deny from all
          </LocationMatch>
          Alias /__fudge /home/websites/mysource_matrix/fudge
          Alias /__data /home/websites/mysource_matrix/data/public
          Alias /__lib /home/websites/mysource_matrix/core/lib
          Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>

    My key requirements are:
        - I don't want to worry/know/care about the server/infrastructure
        - Secure/up to date software/OS
        - Good monitoring
        - Automatic scalability
        - SLA
    I apologise for the length of the question. In short, all I want to do is: i). create vhost, ii). create db, iii). install app/site, iv). relax. Thanks.
    Edit: I include the Matrix vhost because that is the only complication that I cannot really do via a .htaccess file.

    Read the article

  • Is my Cisco switch port bad?

    - by ewwhite
    I've been chasing a packet-loss and network stability issue for a handful of end-users on an internal network for the past few days... These issues surfaced last week, however the location was struck by lightning six weeks ago.
    I was seeing 5-10% packet loss between a stack of four Cisco 2960's and several PC's and phones on the other side of a 77-meter run. The PC's were run inline with the phones over a trunked link (switchport configuration pastebin). We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity.
    I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:
        - change cables between the wall jack and device.
        - change patch cables between the patch panel and switch port(s).
        - try different switch ports within the 2960 stack.
        - change end-user devices with known-good equipment (new phones, different PC's).
        - clear switch port interface counters and monitor incrementing errors closely. (Pastebin output of sh int)
        - Pored over the device logs and Observium RRD graphs. No link up/down issues from the switch side.
        - change power strips on the end-user side.
        - test cable runs from the Cisco 2960 using test cable-diagnostics tdr int Gi4/0/9 (clean)*
        - test cable runs with a Tripp-Lite cable tester. (clean)
        - run diagnostics on the switch stack members. (clean)
    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky... Not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner.
    What else can I test or check to determine if these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?
    BTW - show cable-diagnostics tdr int Gi4/0/14 is very cool...

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- --------------------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests. See this question for more info: multiple puppet masters. However, there are a couple of things I have had to do to get this working correctly, and I have an error which I'll get to.
    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf VH file on PM2. Once I had done this, inventory started to work. Does this sound like an acceptable way to do things?
    Secondly, in the puppet.conf file, I am setting the designated PM server for that client. Unless there is a better way, this is how it'll work in my production setup. So PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and then I sign the cert on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in the puppet.conf file back to server = PM2 and everything seems to run normally.
    Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error?
    Cheers, Oli
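
    For what it's worth, a hedged sketch of the kind of ProxyPassMatch rule the last question is asking about - forwarding only CRL lookups from PM2's Apache front end to the CA on PM1 - is shown below. The CA hostname and the 8140 port are assumptions (8140 being the conventional puppet master port), so treat this as a starting point rather than a verified fix:

        # In PM2's rack.conf virtual host, before the generic Passenger handling
        SSLProxyEngine On
        # Hand certificate revocation list requests to the CA master (PM1) instead of answering locally
        ProxyPassMatch ^/([^/]+/certificate_revocation_list/.*)$ https://puppet-master1.test.net:8140/$1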

    Read the article

  • A dusty server room

    - by pauska
    Here's the story. The owners of the building we lease office space from decided to do a renovation of the exterior. This involved some pretty heavy work at the level where our server room is, including exchanging windows which are fit inside a concrete wall.
    My red alert went off when I heard that they were going to do the same thing with our server room (yes, our server room has a window - we're a small shop with 3 racks, and the window is secured with steel bars). I explicitly told the contractor that they needed to put up a temporary wall between our racks and the original wall, and to make sure that the temporary wall was 100% air- and water-tight. They promised to do so.
    The temporary wall has a small door in it, so that workers can go in and out through the day (through our server room, which was the only option...). On several occasions I found the small door half-way shut while working evenings/nights. I locked the door, and thought that they would hopefully get the point soon and keep the door shut. I even gave an electrician a mouthful when I saw that he didn't close the door properly.
    By this point, I bet that most of you get a picture of what happened. Yes, they probably left the door open while drilling in the concrete. I present you our 4-week-old EMC VNX. I'll even put in a little bonus: here is the APC UPS one rack further away from the temporary wall. See the nice little landing strip from my finger?
    What should I do? The only thing that comes to mind is to either call all our suppliers (EMC, HP, Dell, Cisco) and get them to send technicians to check out all the gear in the server room, or get some kind of certified third-party consultant to check all of it. Would you run production systems on this gear? How long?
    Edit: I should also note that our air conditioning isn't exactly enterprise-grade, given the nature of our small room. It's just a single inverter, which failed once before I started working here (failed inverters usually lead to water dripping out).

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000 m² of production and office space. Typical problems to solve using the graph would be:
        - Over what (cable) path is a PC connected to the server?
        - Where to expect devices connected to a switch port?
    I want to generate a graph of the physical network topology:
        - Nodes are endpoint devices, switch ports, wall outlets, patch panel ports etc.
        - Edges are cable connections.
    Ideally, edges (or segments) that pass through the same bundle could be grouped. Also I would like to augment the graph data with automatically gathered data (monitoring state, MAC address, switch port <-> MAC entries to build up parts of the map).
    At the moment I use graphviz for this inside a Confluence wiki, like this:

        layout = "neato"
        overlap = scale
        subgraph {
            rankdir = "TB"
            subgraph cluster_r1pf1 {
                r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
            }
            subgraph cluster_switch1 {
                switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p2> P2 | <p3> P3} }", shape=record]
            }
            r1pf1:p1 -> switch1:p1
        }

    (Obviously there are dozens of entries omitted here.) Problem is: I have a hard time influencing graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram anymore.
    The question is: what other tools (be it interactive like Visio or OmniGraffle, or I/O-oriented like graphviz) exist that would allow an easily versionable (as in: operates on a text file) documentation that is both machine- and human-readable and editable?
    Why not OmniGraffle or Visio? Well, we don't have Macs and Visio is not available at the moment. To buy it I would need good arguments; automation would be one of them. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare.
    Related:
        - Network Mapping Tools basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections)
        - Recommendations for automatic computer inventory brings up links of "all-in-one" solutions
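
    On the graphviz side specifically, a hedged sketch of layout attributes that often tame edge overlap in dense neato/fdp graphs is below; these are stock graphviz options, not a layout tested against this particular map:

        // Sketch: generic attributes for a more readable force-directed layout
        graph network_map {
            layout  = "fdp"      // or neato; fdp copes better with clusters
            overlap = false      // remove node overlaps instead of scaling everything
            splines = true       // route edges around nodes rather than straight through them
            sep     = "+15"      // extra margin around nodes
            node [shape=record, fontsize=10]
            edge [color=gray40]
            // ... port/panel/switch records as in the snippet above ...
        }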

    Read the article

  • Ubuntu 12.04 crash analysis - strange binary data on all open files at the moment of crash

    - by lanbo
    A couple of hours ago we got a system crash on Ubuntu 12.04. We checked all the log files and there is nothing suspicious to blame it on. The last thing that was logged was some dovecot activity. There are no kernel panic messages. Nothing.
    It is a new server (new hardware) we are testing before production. And because it is new hardware, I suspect the problem may be due to some faulty component. We already ran memtester with no problem detected. I'll be happy to hear about other hardware testing tools (the machine has an SSD).
    Anyway, the thing I wanted to ask you is a different one. The strange thing is that in every file that was open at the moment of the crash, we found the following sequence of symbols written: "@^@^@^@^@^@^@...". For example, in the syslog log file we got:

        Apr 16 15:53:56 odyssey dovecot: pop3-login: Aborted login (auth failed, 1 attempts): user=<info>, method=PLAIN, rip=46.29.255.73, lip=5.9.58.177
        ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^ [these continue for about 1000 chars...]
        ^@^@^@^@Apr 16 15:55:12 odyssey kernel: imklog 5.8.6, log source = /proc/kmsg started.

    We got all these symbols in all open files. These include syslog, mail.log, kern.log, ... but also some logs that are written by PHP scripts run in crons from user accounts (not root).
    So, any idea why all open files got these characters written into them during the crash? This is pretty bad since the crash corrupted many files (we don't even know which other ones may be affected). We suspect that all files open (in write mode, maybe) at the moment of the crash got all these symbols inserted. Why is that?
    BTW [in case it helps], the system automatically rebooted after the crash but Apache did not start. There were no traces in /var/apache2/*log of why Apache did not start. After running a "service apache2 start" it started with no problems. Also, we rebooted the machine manually and Apache also started on reboot. But it did not start after the crash and no errors were reported. Thanks guys!

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.)
    Here's the setup: a brand-new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL in it too, not just serving web requests), not a personal desktop computer or similar, running Ubuntu Server 64-bit 10.04 LTS.
    We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:
        - do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
        - do a file-system "freeze" (using XFS) with the usual ebs/ec2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon.
    Here's my question (or series of questions):
        - Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else?
        - Would it then be possible to make snapshots of the system to upload to Amazon?
        - Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations?
    When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/metal dedicated server.
    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it:
        - With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze xfs.
        - With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze xfs; 4) somehow back up the LVM snapshot.
    Thanks a lot in advance. Let me know if I need to clarify something.
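
    A minimal sketch of the first XFS option listed above (freeze, copy, thaw) looks roughly like the following; the mount point and destination are placeholders, and note that writes block while the filesystem is frozen, so in practice you would dump MySQL separately and keep the freeze window short:

        #!/bin/sh
        # Hedged sketch: quiesce an XFS filesystem, copy it off-box, then thaw it
        set -e
        xfs_freeze -f /data                           # flush and block new writes
        rsync -aHAX --delete --exclude 'lost+found' \
            /data/ backup@ec2-host:/backups/data/     # copy the frozen tree (placeholder destination)
        xfs_freeze -u /data                           # thaw the filesystem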

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server to which I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and javascript files, even html pages, which are served with the same Content-Type (text/html) as the PHP output, are compressed!
    The projects use the following rules (from HTML5 Boilerplate) in the .htaccess:

        <IfModule mod_deflate.c>
          # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/
          <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
              SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
              RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
            </IfModule>
          </IfModule>
          # HTML, TXT, CSS, JavaScript, JSON, XML, HTC:
          <IfModule filter_module>
            FilterDeclare   COMPRESS
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $text/html
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $text/css
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $text/plain
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $text/xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $text/x-component
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/javascript
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/json
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/xhtml+xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/rss+xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/atom+xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/vnd.ms-fontobject
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $image/svg+xml
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $image/x-icon
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $application/x-font-ttf
            FilterProvider  COMPRESS  DEFLATE resp=Content-Type $font/opentype
            FilterChain     COMPRESS
            FilterProtocol  COMPRESS  DEFLATE change=yes;byteranges=no
          </IfModule>
        </IfModule>

    We have a testing machine that runs the same Apache, OS and PHP version. On that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files - all the same as far as I can tell. I've tried several ways of outputting the content of the PHP, using output buffering or just plain echoing the content. Same thing, no compression.
    Example response headers for PHP output:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Accept-Ranges: bytes
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: public
        Pragma: no-cache
        Vary: User-Agent
        Keep-Alive: timeout=5, max=98
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

    Example response headers for a css file:

        HTTP/1.1 200 OK
        Date: Wed, 25 Apr 2012 23:30:59 GMT
        Server: Apache
        Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT
        Vary: Accept-Encoding,User-Agent
        Content-Encoding: gzip
        Cache-Control: public
        Expires: Fri, 25 May 2012 23:30:59 GMT
        Content-Length: 714
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/css; charset=utf-8

    Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!
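
    As a narrower test than the FilterProvider chain above, a hedged fallback is to register mod_deflate directly by content type; this uses the stock AddOutputFilterByType directive and is only a diagnostic sketch, not a confirmed fix for the chunked PHP responses:

        <IfModule mod_deflate.c>
          # Minimal type-based alternative to the filter_module chain - useful to see
          # whether mod_deflate can compress the PHP output at all on this server
          AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
        </IfModule>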

    Read the article

  • How to Setup Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain, let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers because I am using Google Apps for your domain to manage it. It's great because everyone gets all the advantages of GMail, but our e-mail addresses aren't @gmail.com.
    I also have a server. Primarily, it's a web server, but it also serves other things. One of the things it serves is the web site for foobar.com and also sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production.
    The thing is, there are various applications running on the server that need the ability to send out emails. Various applications, like the cron jobs, send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm new registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt.
    How can I set up the mail on the web server to go through the Google Apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mails with [email protected] in the from field, and the ecommerce application will have [email protected] in the from field. Also, by sending the mail through the Google servers, we can avoid a lot of the problems with the e-mails being blocked by various spam filters on the web. Google's SMTP servers are trusted a lot more than mine would be.
    I'm pretty good at administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step-by-step directions from beginning to end on how to set this up. I need to know everything to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them were quite right. Either they didn't work at all, or they offered a configuration that is not what I wanted. Please help. Thanks.
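
    For reference, a hedged sketch of the usual Postfix send-only relay setup for this kind of question is below. It assumes Postfix as the MTA and a Google Apps account used purely for SMTP authentication; hostnames, paths and credentials are conventional placeholders, not values from the question:

        # /etc/postfix/main.cf (send-only relay through Google's SMTP servers - sketch)
        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd && service postfix restart)
        [smtp.gmail.com]:587    someuser@foobar.com:password-or-app-password

    One caveat worth knowing: Google may rewrite the From address to the authenticated account unless that account is allowed to "send as" the other @foobar.com addresses, which matters for the forum and ecommerce senders mentioned above.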

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome, but I can usually find my way around things easily.
    Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else.
    We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VoIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were reachable. Also, any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial.
    I had the thought of running two (or more) OpenVPN servers in the network. These would have to be on the same subnet though, as our application relies on the virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses, but that's not vital.
    For simplicity, let's call the current server office and the second server I'm setting up cloud. Call the server on the T1 phone. This proved to be rather complex, because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing), and vice versa. There are no iptables rules that would be blocking the traffic either.
    Recently I came across this article in Linux Journal, but the solution they provide seems to only cover the use of two servers and is somewhat outdated (I can't even find much documentation; their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc.
    So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect. Thanks for reading, and sorry for the long post; I hope it gets the point across :P

    Read the article

  • XCP Project Kronos syslog error: "irq ... : nobody cared" on Dom0 host

    - by Vlad Fedin
    One of our production clusters driven by XCP suddenly went unresponsive. After a restart and some investigation we found entries like these in the dom0 machine's syslog:

        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659040] irq 339: nobody cared (try booting with the "irqpoll" option)
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659058] Pid: 0, comm: swapper/3 Tainted: G         C O 3.2.0-24-generic #37-Ubuntu
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659060] Call Trace:
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659062]  <IRQ>  [<ffffffff810db37d>] __report_bad_irq+0x3d/0xe0
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659071]  [<ffffffff810db605>] note_interrupt+0x135/0x190
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659074]  [<ffffffff810d8e69>] handle_irq_event_percpu+0xa9/0x220
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659078]  [<ffffffff8130ff3b>] ? radix_tree_lookup+0xb/0x10
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659081]  [<ffffffff810d9031>] handle_irq_event+0x51/0x80
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659084]  [<ffffffff810dc187>] handle_edge_irq+0x87/0x140
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659089]  [<ffffffff813a8829>] __xen_evtchn_do_upcall+0x199/0x250
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659092]  [<ffffffff813aa96f>] xen_evtchn_do_upcall+0x2f/0x50
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659096]  [<ffffffff81666d3e>] xen_do_hypervisor_callback+0x1e/0x30
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659097]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659104]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659107]  [<ffffffff8100a1d0>] ? xen_safe_halt+0x10/0x20
        Oct 26 20:32:03 hetzner-2-mrx kernel: [1797931.659110]  [<fff

    IRQ 339 in cat /proc/interrupts:

        339: ... xen-pirq-msi-x eth0

    where eth0 is the hardware NIC. While the host machine seems to hang, guest machines continue to work, so our tiny internal monitoring on one of the virtual hosts logged something like this:

        [2012-10-26 20:31:51] [OK......] 200 OK : 113159149 ns
        [2012-10-26 20:32:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 47763284432 ns
        ...
        [2012-10-26 20:34:40] [DISASTER] 500 Can't connect to [hostname]:80 (No route to host) : 46894835070 ns
        [2012-10-26 20:34:57] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 16821741955 ns
        ...
        [2012-10-26 20:38:18] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 20103298289 ns
        [2012-10-26 20:38:37] [DISASTER] 500 Can't connect to [hostname]:80 (Bad hostname) : 17895754943 ns

    Host and guest OS: Ubuntu 12.04 LTS. The NIC in question:

        05:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
            Subsystem: ASUSTeK Computer Inc. Device 8369
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 17
            Region 0: Memory at fe500000 (32-bit, non-prefetchable) [size=128K]
            Region 2: I/O ports at e000 [size=32]
            Region 3: Memory at fe520000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: e1000e
            Kernel modules: e1000e

    Any hints on how to debug this?

    Read the article

  • amazon ec2 ubuntu with gitlab and nginx - can't load?

    - by thebluefox
    Ok, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html
    The only step I've not been able to complete is running the following check on the status:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    Which I presume is because my EC2 is just running a free-tier setup, so isn't that well spec'd.
    Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purposes of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason though, I get "Oops, Chrome could not connect to git.mydom.co.uk".
    Now, for a period of time I was getting the Nginx holding page (telling me I still needed to perform configuration). That disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading this could be an issue on a troubleshooting page). Since then, I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file sat inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled.
    The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;       # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
          server_name git.mydom.co.uk;      # e.g., server_name source.example.com;
          server_tokens off;                # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # Or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log  /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look ok... I'm about at the end of my knowledge now!
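
    Since the rake failure above is an ENOMEM error rather than a GitLab one, a common workaround on small instances is to add a swap file; this is a hedged sketch (size and path are arbitrary) and it only addresses the memory error, not the separate connectivity problem:

        # One-off 1 GB swap file on a small EC2 instance (sketch)
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # keep it across reboots
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab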

    Read the article

  • Add Machine Key to machine.config in a load-balanced environment with multiple versions of the .NET framework

    - by davidb
    I have two web servers behind an F5 load balancer. Each web server has identical applications to the other. There was no issue until the config of the load balancer changed from source-address persistence to least connections. Now in some applications I receive this error:

        Server Error in '/' Application.
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Source Error: The source code that generated this unhandled exception can only be shown when compiled in debug mode. To enable this, please follow one of the below steps, then request the URL:
        1) Add a "Debug=true" directive at the top of the file that generated the error. Example: <%@ Page Language="C#" Debug="true" %>
        or:
        2) Add the following section to the configuration file of your application: <compilation debug="true"/>
        Note that this second technique will cause all files within a given application to be compiled in debug mode. The first technique will cause only that particular file to be compiled in debug mode.
        Important: Running applications in debug mode does incur a memory/performance overhead. You should make sure that an application has debugging disabled before deploying into production scenario.

    How do I add a machine key to the machine.config file? Do I do it at server level in IIS, or at website/application level for each site? Do the validation and decryption keys have to be the same across both web servers, or are they different? Should they be different for each machine.config version of .NET? I cannot find any documentation of this scenario.
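
    A hedged sketch of the <machineKey> element that error refers to is below; the key values are placeholders (generate your own), and the algorithm names are just common choices:

        <!-- Same element, with identical key values, on every server in the farm -->
        <system.web>
          <machineKey
            validationKey="PLACEHOLDER-128-HEX-CHARACTERS"
            decryptionKey="PLACEHOLDER-48-HEX-CHARACTERS"
            validation="SHA1"
            decryption="AES" />
        </system.web>

    The essential part is that the values are identical across both web servers for a given application; whether you set them once in machine.config or per site in each web.config is largely an administrative choice.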

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour.
    I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit) and is going to take me a while to find an acceptable solution.
    In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), which isn't ideal.
    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all, in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)

        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 Page not Found" string from nginx. I tried many things so far but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration.
    How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
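
    One hedged alternative to the echo module is nginx's stock return directive, which can send a short in-memory body together with the 404 status. This is an untested sketch of that idea, not a benchmarked fix:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            # return carries both the status code and a small response body held in memory
            return 404 "404 - Page not found";
        }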

    Read the article

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd, it is being disabled automatically by OpsCenter.
    From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur if the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly.
    I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought that was because of some configuration error on my part; the problem appeared spontaneously at some point there as well.
    The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with. We did not notice any problems with the applications that depend on it.

    Read the article

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit and the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Apache Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command as above, I get this error after running "make":

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    However, I found out I can solve that problem by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    So it then looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    So I want to solve that. I have experimented a lot, but have not been able to figure out how to specify the "exact location of Expat" in the configure command, or how to find out what that location should be.
    However, after a lot of searching I found this: http://subversion.tigris.org/issues/show_bug.cgi?id=3997 - that is a FreeBSD user saying this:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    However, I am not using FreeBSD; I am running CentOS 6.3 64-bit. Also, remember I said I have installed expat, expat-devel and neon-devel using yum. I tried to use the expat path posted by the FreeBSD user anyway, and it seems to work: it does not give errors when running the configure command, and does not give errors when running "make". This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this server is a production server, and therefore I need your help to advise whether this is also correct to run on a CentOS server. Is the following path in the expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
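
    For comparison only, a hedged guess at the CentOS equivalent: with expat-devel installed from yum, the headers normally land in /usr/include and the 64-bit library in /usr/lib64, so the argument would plausibly look like the lines below - worth verifying with rpm -ql before relying on it on a production box:

        # Check where yum actually put expat first
        rpm -ql expat-devel | grep -E 'expat\.h|libexpat'
        # Assuming /usr/include and /usr/lib64, the configure call would then be:
        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config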

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it".
    Right now, I have the hits-tracking portion completed, and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with gunicorn for the gateway.
    My 2GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1200 static files per second (benchmarked using Apache AB against a single static asset). To compare, if I swap out the static file link with my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets.
    However, when I benchmark with millions of hits, I notice a few things:
        - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
        - Non-constant memory usage -- presumably due to Redis' memory managing, my memory usage will gradually climb up and then drop back down, but it's never once been my bottleneck.
        - System load hovers around 2-4, the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.
    My questions:
        a. Does it look like I'm maxing out this server yet? Is 1,200/s static files nginx performance comparable to what others have experienced?
        b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
        c. Are there any linux-level settings that could be limiting my incoming connections?
        d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, the memory is not maxing out during these tests, and HDD use is nil.
    Thanks in advance, all :)
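
    On the nginx/kernel side (questions b and c), a hedged sketch of the settings usually checked first is below; the numbers are illustrative starting points, not values validated against this workload:

        # nginx.conf (sketch - tune against your own benchmarks)
        worker_processes     4;              # roughly one per CPU core
        worker_rlimit_nofile 65536;          # raise the per-worker fd limit
        events {
            worker_connections 8192;
            multi_accept       on;
        }
        http {
            keepalive_requests 1000;         # let clients reuse connections longer
            access_log         off;
        }

        # /etc/sysctl.conf - kernel limits that commonly cap incoming connections
        # net.core.somaxconn = 4096
        # net.ipv4.ip_local_port_range = 10240 65535
        # net.ipv4.tcp_tw_reuse = 1

    On question d, one frequent culprit for a drop-off that only appears in long runs is TIME_WAIT/ephemeral-port exhaustion on one end of the benchmark, which the last two sysctls are aimed at - a hypothesis to verify (e.g. with netstat) rather than a certain cause.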

    Read the article

  • Need help recovering a corrupt SQL database

    - by user570079
    I have a very special case that I have been working on for several days. I have a very large SQL Server 2008 database (about 2 TB) that contains 500 filegroups to support very large partitioned tables. Recently we had a catastrophic failure on one of the drives and lost several filegroups, and the database became inaccessible.
    We have been doing filegroup backups on a daily basis, but due to other issues, we lost our most recent backup of the log and the primary filegroup. We have all the data backed up, but the primary filegroup backup is old. There have been no schema changes since the primary filegroup backup, but the LSNs are now all out of sync and we cannot recover the data.
    I have tried everything I could think of (and have tried just about every trick and hack I could google), but I still end up at the same point, where I get messages saying that the files for filegroup x do not match the primary filegroup.
    I am now at the point of trying to edit the system tables (we have a separate temporary environment to do this, so we are not worried about corrupting any production databases). I have tried updating sys.sysdbreg, sys.sysbrickfiles, and sys.sysprufiles to try to trick SQL into thinking all the files are online, but a "Select * From OPENROWSET(TABLE DBPROP, 5)" shows a different database state from what I see in sys.sysdbreg. I am now thinking I need to somehow edit the headers of the actual data files to try to line up the LSNs with the primary.
    I appreciate any help anyone can give me here, but please do not respond with things like "you are not supposed to edit mdf, ndf files..." or "see msdn article...", etc. This is an advanced emergency case and I need a real hack so we can just get to the data in this corrupt database and export it to a fresh new database. I know there is a way to do this, but not knowing what the DBPROP system function does (i.e. does it look at system tables or does it actually open the file) is keeping me from figuring out how to fool SQL into allowing me to read these files. Thanks for any help.

    Read the article

  • Incremental RPM package version "numbers" for x.y.z > x.y.z-beta (or alpha, rc, etc)

    - by Jonathan Clarke
    In order to publish RPM packages of several different versions of some software, I'm looking for a way to specify version "numbers" that are considered "upgrades", and include the differentiation of several pre-release versions, such as (in order): "2.4.0 alpha 1", "2.4.0 alpha 2", "2.4.0 alpha 3", "2.4.0 beta 1", "2.4.0 beta 2", "2.4.0 release candidate", "2.4.0 final", "2.4.1", "2.4.2", etc.
    The main issue I have with this is that RPM considers that "2.4.0" comes earlier than "2.4.0.alpha1", so I can't just add the suffix on the end of the final version number. I could try "2.4.0.alpha1", "2.4.0.beta1", "2.4.0.final", which would work, except for the "release candidate", which would be considered later than "2.4.0.final".
    An alternative I considered is using the "epoch:" section of the RPM version number (the epoch: prefix is considered before the main version number, so that "1:2.4.0" is actually earlier than "2:1.0.0"). By putting a timestamp in the epoch: field, all the versions get ordered as expected by RPM, because their versions appear to increment in time. However, this fails when new releases are made on several major versions at the same time (for example, 2.3.2 is released after 2.4.0, but their versions for RPM are "20121003:2.3.2" and "20120928:2.4.0", and systems on 2.3.2 can't get "upgraded" to 2.4.0, because rpm sees it as an older version). In this case, yum/zypper/etc refuse to upgrade to 2.4.0, thus my problem.
    What version numbers can I use to achieve this and make sure that RPM always considers the version numbers to be in order? Or if not version numbers, is there some other mechanism in RPM packaging?
    Note 1: I would like to keep the "Release:" field of the spec file for its original purpose (several releases of packages, including packaging changes, for the same version of the packaged software).
    Note 2: This should work on current production versions of major distributions, such as RHEL/CentOS 6 and SLES 11. But I'm interested in solutions that don't, too, so long as they don't involve recompiling rpm!
    Note 3: On Debian-like systems, dpkg uses a special component in the version number, which is the "~" (tilde) character. This causes dpkg to count the suffix as "negative" ordering, so that "2.4.0~anything" will come before "2.4.0". Then, normal ordering applies after the "~", so "2.4.0~alpha1" comes before "2.4.0~beta1" because "alpha" comes before "beta" alphabetically. I'm not necessarily looking to use the same scheme for RPM packages (I'm pretty sure no such equivalent exists), so this is just FYI.
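
    For reference, a hedged sketch of the Fedora-style pre-release convention usually cited for this problem is below; note that it does lean on the Release field (which Note 1 would prefer to keep clean), using a leading "0." so pre-releases always sort before the final build:

        # Pre-release builds: Release starts with 0. so they order before the final package
        #   2.4.0 alpha 1     ->  Version: 2.4.0    Release: 0.1.alpha1%{?dist}
        #   2.4.0 beta 1      ->  Version: 2.4.0    Release: 0.4.beta1%{?dist}
        #   2.4.0 rc 1        ->  Version: 2.4.0    Release: 0.6.rc1%{?dist}
        # Final release and later packaging-only fixes use Release >= 1
        #   2.4.0 final       ->  Version: 2.4.0    Release: 1%{?dist}
        #   2.4.0, repackaged ->  Version: 2.4.0    Release: 2%{?dist}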

    Read the article

  • Bridging and iptables SNAT conflict

    - by sad_admin
    Hello, I am working on a setup here and have it working with one minor exception: devices on one side of my bridge aren't getting SNAT'd to the Internet. The diagram / overview:

        Primary_Network (Site_A)
                  |
                  |
        Internet ------- Linux_Bridge_GW (GW)
                  |
                  |
        Secondary/CoLo Site (Site_B)

    Here is the setup:
    1.) Site_A has all the production servers and workstations.
    2.) Site_B has a set of servers that we would like to fail over to and also serve our internet-facing services from.
    3.) GW has two interfaces that are trunked and carrying the appropriate VLAN traffic (allowing layer-2 propagation of traffic between sites); this all works perfectly fine.
    4.) The problem being encountered is that hosts at Site_B have their default GW at Site_A (same subnet); GW does not have IPs on the VLANs that are being passed.
    5.) All hosts at Site_A can reach the Internet without problem.
    6.) GW has an address on a subnet that is ONLY for Internet-destined traffic. (This was done so that Websense would not have to parse unnecessary traffic. We use this VLAN as the monitor port's source on the switch where Websense is sitting.)

    What I think is happening:
    1.) A packet/frame comes in on the physdev at Site_B, destined for the Internet.
    2.) The kernel sees the packet and forwards it out the other side of the bridge to that host's default GW.
    3.) Site_A (containing the core network's default GW) sees that the packet is destined for a host it doesn't know about, so it sends it to its default GW (the Linux bridge, since it's Internet bound).
    4.) The kernel says "Hey, I've seen you before" and therefore doesn't SNAT the packet, and sends it out to the Internet where it's black-holed.

    Why I think it's happening:
    1.) A tcpdump on the Internet-facing NIC shows the packet leaving the interface with the private address as its source.

    What I would like:
    1.) Have the packet SNAT'd.
    2.) Something like the below would be awesome:
    a.) a packet comes in from Site_B;
    b.) the kernel sees that the packet is NOT destined for itself or any private address;
    c.) the kernel says "OK, well since you're destined for the Internet I'm going to send you out this interface rather than forward you to your normal default GW that's WAAAY over there.";
    d.) a packet comes in from the Internet and is sent out the appropriate bridge physdev depending on which site the destination host is at.

    Thanks for any assistance or guidance that you are willing to offer. Best Regards, Sad Admin
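    Not a diagnosis of the exact rule set in play (which isn't shown), but the symptom described, bridged traffic from Site_B leaving the Internet-facing NIC with its private source address, is usually handled with a nat/POSTROUTING SNAT rule keyed on the source subnet and the outgoing interface. A minimal sketch, driven from Python purely for illustration; the subnet, interface name and public address are placeholders:

        # snat_site_b.py: sketch of the kind of POSTROUTING rule that would catch
        # Site_B traffic on its way out; every value below is an assumption.
        import subprocess

        SITE_B_SUBNET = "10.20.0.0/24"     # hosts at Site_B (placeholder)
        INTERNET_IF   = "eth2"             # GW's Internet-facing NIC (placeholder)
        PUBLIC_ADDR   = "203.0.113.10"     # GW's address on the Internet-only subnet (placeholder)

        # SNAT anything sourced from Site_B that leaves via the Internet-facing
        # interface, regardless of which bridge port it arrived on.
        subprocess.run(
            ["iptables", "-t", "nat", "-A", "POSTROUTING",
             "-s", SITE_B_SUBNET, "-o", INTERNET_IF,
             "-j", "SNAT", "--to-source", PUBLIC_ADDR],
            check=True,
        )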

    Read the article

  • FTP script needs blank line

    - by Ones and Zeroes
    I am trying to determine the reason why some FTP servers require a blank line in the script, as follows:

        open server.com
        username

        ftp_commands
        bye

    Note the blank line required after the username credentials. Example from: FTP from batch file; another reference to the same: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.as400.misc/2008-05/msg00227.html. Also discussed here: archive.midrange.com/midrange-l/200601/msg00048.html ("The behavior I'm observing is the same as if I didn't specify the password to login."), with an answer referring to our same fix: archive.midrange.com/midrange-l/200601/msg00053.html and archive.midrange.com/midrange-l/200601/msg00065.html. Note: It is my experience that FTP questions attract uncouth responses. Admittedly FTP is outdated, but many clients still have legacy systems which they cannot upgrade or replace; the reason for that should not be discussed here. The intention of this question is to invite a positive response. Please do not respond if you disagree with the above, or if you have never encountered this same issue. I suspect this may be limited to FTP scripts executed from Windows machines, but have been told that this happens often and with many different servers. My specific interest is to understand what may cause this, as I have a real-world example of a production system suddenly requiring this workaround after running for many years without issue. The server belongs to a third party who claims no change on their end; server details are unknown and cannot be determined. Any help or encouragement from someone who has come across the same would be appreciated. PS: Sorry for the many words and references to painful responses, but I have asked similar questions on Server Fault and elsewhere and unfortunately got back knee-jerk responses about FTP and respondents debating the validity of the question. I would truly not ask, or re-post, this question online if I had a better understanding of the issue. I know of people who have seen this issue but don't know what causes it. I am wary that this question would again turn into another irrelevant discussion. Please, I ask very nicely: please do not respond if you have not encountered a similar issue. FURTHER EDIT: Please do not suggest changing the product. The problem is not the blank-line requirement; we know this fixes the issue. The problem is not being able to explain the reason for the blank line in the first place. A slight difference, but a critical point to note with regard to answering this question.
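    Not an explanation of the server-side cause, but one way to take the client's interactive prompt handling out of the equation is to drive the same session from a small script in which USER, PASS and ACCT are sent explicitly rather than read from the next line of input. A sketch using Python's ftplib; the host, user name and commands are placeholders:

        # ftp_explicit_login.py: send USER/PASS/ACCT explicitly instead of relying
        # on the Windows ftp.exe prompt sequence; host and credentials are placeholders.
        from ftplib import FTP

        with FTP("server.com") as ftp:
            # Empty strings here play the same role the extra blank line appears to
            # play in the batch script: they answer a password/ACCT prompt with nothing.
            ftp.login(user="username", passwd="", acct="")
            ftp.retrlines("LIST")      # stand-in for the real ftp_commands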

    Read the article

  • Add Your Own Domain to Your WordPress.com Blog

    - by Matthew Guay
    Now that you’ve got a nice blog on WordPress.com, why not get your own domain to brand your site? Here’s how you can easily register a new domain or move your existing domain to your WordPress site. By default, your free WordPress address is yourblogname.wordpress.com. But whether this is a personal or a company blog, it can be nice to have your own domain to really brand your site and make it your own. Or, if you already have another website and want to use WordPress as a blog for it, you could even add blog.yoursite.com or any other subdomain. Adding a domain to your WordPress.com blog is a paid upgrade; registering and mapping a new domain to your account costs $14.97 a year, while mapping a domain you already own to your WordPress blog costs $9.97 a year.

    Getting Started

    Log in to your blog’s dashboard, click the arrow beside Upgrades in the sidebar, and select Domains. Enter the domain or subdomain you want to add to your site in the text box, and click Add domain to blog. If you entered a new domain you want to register, WordPress will make sure the domain is available and then present you with a registration form. Enter your information, and then click Register Domain. Or, if you enter a domain that’s already registered, you will see a prompt instead. If this is a domain you own, you can map it to WordPress.com. Log in to your domain registrar account and switch your nameservers to:

        NS1.WORDPRESS.COM
        NS2.WORDPRESS.COM
        NS3.WORDPRESS.COM

    Your DNS settings page may look different, depending on your registrar; here’s how our domain settings looked. Alternately, if you want to map a subdomain, such as blog.yoursite.com, to your WordPress blog, create the following CNAME record at your domain registrar. You may have to contact your registrar’s support to do this. Substitute your subdomain, domain, and blog name when creating the record:

        subdomain.yourdomain.com. IN CNAME yourblog.wordpress.com.

    Once your settings are correct, click Try Again in your WordPress dashboard. The DNS settings may take a while to update, but once WordPress can tell your DNS settings point to it, you will see a confirmation screen. Click Map Domain to add this domain to your WordPress blog. Now you’re ready to pay for your domain mapping or registration. Depending on your purchase, the information and price shown may be different; here we’re mapping a domain we already have registered, so it costs $9.97. Select your method of payment, enter your payment information or sign in with your PayPal account, and continue as usual. Once your purchase is finished, you’ll be returned to the Domains page on WordPress. Try going to your new domain and make sure it opens your blog. If it works, click the bullet beside the new domain, and click Update Primary Domain. Now, when people visit your WordPress site, they’ll see your new domain in the address bar. You can still access your blog from your old yourname.wordpress.com address, but it will redirect to your new domain.

    Conclusion

    Having a personalized domain is a great way to make your blog more professional, while still taking advantage of the ease of use that WordPress.com offers. And, if you have your own domain, you can easily move your site traffic to a different hosting provider in the future if you need to. The process is slightly complicated, but for $15/year we found this to be one of the best upgrades you could do to your WordPress.com blog.
    If you want to see an example of a site created with WordPress, check out Matthew’s tech site techinch.com. And, if you’re just getting started with WordPress, check out our series on how to Start your WordPress.com blog, Personalize it, and Easily Post Content to it from anywhere.
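    As a quick follow-up to the mapping steps above: before clicking Try Again in the dashboard, you can confirm that your DNS change has propagated. A small sketch assuming the dnspython 2.x library; example.com, blog.example.com, and yourblog are placeholders for your own names:

        # check_wordpress_dns.py: sanity-check the NS or CNAME records before
        # asking WordPress.com to verify them; all domain names are placeholders.
        import dns.resolver

        EXPECTED_NS = {"ns1.wordpress.com.", "ns2.wordpress.com.", "ns3.wordpress.com."}

        def nameservers(domain):
            return {str(r.target).lower() for r in dns.resolver.resolve(domain, "NS")}

        def cname(host):
            return str(dns.resolver.resolve(host, "CNAME")[0].target).lower()

        # Full domain mapping: the domain's NS set should include WordPress.com's servers.
        print(EXPECTED_NS.issubset(nameservers("example.com")))

        # Subdomain mapping: the CNAME should point at your wordpress.com address.
        print(cname("blog.example.com") == "yourblog.wordpress.com.")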

    Read the article

< Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >