Search Results

Search found 21235 results on 850 pages for 'www'.


  • Setting up logging for a remote backup script

    - by Brian Dainis
    So I wrote up a short script that I am planning to run via a cron job daily to package up my site files and send them to a remote location. I also plan to incorporate DB dumps, but I have not gotten that far yet. My issue today, however, is that I am uncertain how to log the output of each command for errors, warnings, or other pertinent information the command may output. I would also like to install some type of fail-safe, so if something goes horribly wrong the script will stop dead in its tracks and notify me via email or something. OK, the email thing is not as critical, but it would be nice. Does anybody have any ideas for that? Here is what I have so far. By the way, both servers are CentOS 6.2 running standard LAMP.

        #!/bin/sh
        #################################
        ### Set Vars
        #################################
        THEDATE=`date +%m%d%y%H%M`
        #################################
        ### Create Archives
        #################################
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        #################################
        ### Send Data to Remote Server
        #################################
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        #################################
        ### Remove Data from this Server
        #################################
        rm -rf /root/backups/files/server_BAK_${THEDATE}.tar.gz
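
    A hedged sketch of one common pattern covering both requests: run under set -e so the script stops dead at the first failing command, capture all output in a dated log, and mail the log from an ERR trap. The log directory, recipient address, and the presence of a configured mail command are assumptions, not part of the original setup.

        #!/bin/bash
        # Sketch only: abort on the first error and log everything.
        set -e                                    # stop dead if any command fails
        LOG=/root/backups/logs/backup_$(date +%m%d%y%H%M).log   # assumed log location
        exec >"$LOG" 2>&1                         # stdout and stderr both go to the log

        notify_failure() {
            # 'mail' must be installed and configured; the address is a placeholder
            mail -s "Backup FAILED on $(hostname)" admin@example.com < "$LOG"
        }
        trap notify_failure ERR                   # runs only if a command fails

        THEDATE=$(date +%m%d%y%H%M)
        tar -cf /root/backups/files/server_BAK_${THEDATE}.tar -C / var/www/vhosts
        gzip /root/backups/files/server_BAK_${THEDATE}.tar
        scp /root/backups/files/server_BAK_${THEDATE}.tar.gz user@host:/home/bak1/ftp/backups/
        rm -f /root/backups/files/server_BAK_${THEDATE}.tar.gz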

  • mod_rewrite changes case even if not matching RewriteCond?

    - by kirdie
    I have a really strange problem with my MediaWiki, which I want to serve articles of the form mywiki.org/MyArticle. Now I got most of it to work using the following code, but it mysteriously cannot display the logo anymore.

        RewriteEngine On
        # don't rewrite valid requests to files and directories
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # mywiki.org/MyArticle gets rewritten to mywiki.org/index.php/MyArticle
        RewriteRule ^/(.*)$ /index.php/$1 [L,QSA]

    Now when I type mywiki.org/img/logo.jpg into my browser, the address changes to http://wiki.geoknow.eu/Img/logo.jpg (capital I) and I get the empty article page, but the image is definitely there, in my document root under the img folder:

        /var/www/mywiki.org$ ls img
        logo.jpg

    So far so bad. But now it gets really crazy: when I add

        RewriteCond %{REQUEST_URI} !^/.*\.jpg

    my address still gets rewritten, and my access log says:

        - - [05/Dec/2012:16:30:21 +0100] "GET /Img/geoknow_logo.jpg HTTP/1.1" 404 509 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Firefox/17.0"

    Where does that capital I in Img come from? The rule is not even executed, because at least one condition is definitely not met now, and I don't have any to-lowercase transformation defined anywhere. What is happening there, and how can I repair this?

    P.S.: Now all of a sudden the problem went away (the image is displayed as it should be, and there is no capital replacement anymore). What can cause this, and why does it spontaneously appear and disappear?
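
    A hedged way to test whether this is browser-side rather than Apache-side: MediaWiki itself issues 301 redirects to canonical title capitalization (img becomes the article title Img when the path is handled as an article), and browsers cache 301s aggressively, which would explain both the stray capital and the spontaneous recovery. These probes ask the server directly, bypassing any cached redirect; the hostname is the one from the question.

        # Does the server itself answer with a 301, or is the redirect cached client-side?
        curl -sI http://mywiki.org/img/logo.jpg
        # Follow redirects and show each hop's Location header
        curl -sIL http://mywiki.org/img/logo.jpg | grep -i '^Location:'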

  • ASA access lists and Egress Filtering

    - by Nate
    Hello. I'm trying to learn how to use a Cisco ASA firewall, and I don't really know what I'm doing. I'm trying to set up some egress filtering, with the goal of allowing only the minimal amount of traffic out of the network, even if it originated from within the inside interface. In other words, I'm trying to set up dmz_in and inside_in ACLs as if the inside interface were not too trustworthy. I haven't fully grasped all the concepts yet, so I have a few issues. Assume that we're working with three interfaces: inside, outside, and DMZ. Let's say I have a server (X.Y.Z.1) that has to respond to PING, HTTP, SSH, FTP, MySQL, and SMTP. My ACL looks something like this:

        access-list outside_in extended permit icmp any host X.Y.Z.1 echo-reply
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq www
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ssh
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq ftp-data established
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq 3306
        access-list outside_in extended permit tcp any host X.Y.Z.1 eq smtp

    and I apply it like this:

        access-group outside_in in interface outside

    My question is: what can I do for egress filtering? I want to allow only the minimal amount of traffic out. Do I just "reverse" the rules (i.e. the smtp rule becomes access-list inside_out extended permit tcp host X.Y.Z.1 any eq smtp) and call it a day, or can I further cull my options? What can I safely block? Furthermore, when doing egress filtering, is it enough to apply "inverted" rules to the outside interface, or should I also look into making dmz_in and inside_in ACLs? I've heard the term "egress filtering" thrown around a lot, but I don't really know what I'm doing. Any pointers towards good resources and reading would also be helpful; most of the ones I've found presume that I know a lot more than I do.
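
    One hedged sketch of what egress filtering usually means on an ASA: the firewall is stateful, so return traffic for permitted inbound connections is allowed automatically and does not need mirrored rules. An egress ACL instead lists only the connections hosts are allowed to originate, with the implicit deny (made explicit below so it logs) catching everything else. The services chosen here are illustrative assumptions, not taken from the question:

        access-list dmz_in extended permit udp host X.Y.Z.1 any eq domain
        access-list dmz_in extended permit tcp host X.Y.Z.1 any eq smtp
        access-list dmz_in extended deny ip any any log
        access-group dmz_in in interface DMZ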

  • sporadic routing to another website when opening a common url

    - by user226098
    I have a strange problem in our office: sometimes, when opening a URL from one of our projects, not the right website shows up in the browser but some other website. In most cases it redirects to google.com with some parameters, like https://www.google.de/?gfe_rd=cr&ei=krOOU8_kGcSKswadyYDQBw&gws_rd=ssl (or just the ugly Google 404 page). But today it remained on the original URL while showing the content of http://debug.netdna-cdn.com/. This happens about once a week and for no apparent reason. Stranger still, it used to occur on only a single PC in the network; it now happens on two different computers, both running Windows 8. The problem cannot be fixed by clearing the browser cache, but it can by rebooting the PC or running ipconfig /flushdns. So I think it has something to do with the DNS cache of the machine, but I have no idea what the reason is, or how to figure out how to solve it. Any ideas?
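
    A hedged way to catch the culprit in the act from a Windows 8 command prompt, the next time it happens: dump the local resolver cache before flushing it, then compare what the configured DNS server returns against a known-good public resolver. The hostname is a placeholder for one of the affected project URLs.

        rem Dump the resolver cache before flushing, to preserve the bad entry
        ipconfig /displaydns > dns_snapshot.txt

        rem Compare the configured resolver's answer with a known-good public one
        nslookup www.yourproject.example
        nslookup www.yourproject.example 8.8.8.8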

  • VPS with multiple domains, can EXIM send mail from a different domain?

    - by Mike L.
    I am building a site for a client on a VPS running CentOS 5.5 with cPanel/WHM 11.28.60. The original domain is XXXXXinvestmenttrust.com. He has about a dozen domains on this server. The site I am building will send confirmation emails as well as allow users to anonymize their email address (like craigslist). I set up email piping to forward emails, but they are all being trapped in the spam folder. A close look at the headers shows the emails appear to be coming from [email protected] rather than the actual domain. The IP has a rating of Neutral on www.senderbase.com. I believe it is the conflicting information in the header (the fields set by me specify the actual domain, whereas the headers put in place by Exim specify the name of the server). Somewhere I read that SPF & MX entries can fix this, but I have been unable to figure out how. Also, all of the domains use the same IP, and the other websites do not send emails, so I could possibly make the domain in question the primary one (where all emails are sent from that domain by default). Is that possible?
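
    A hedged sketch of the DNS half of the fix: an SPF TXT record telling receivers that this server's IP may send mail for the domain (plus a sane MX and matching A record) is what "SPF & MX entries" usually refers to; cPanel/WHM can also rewrite Exim's sender per domain, but that is a separate setting. Every name and address below is a placeholder:

        ; BIND-style zone snippet; all values are placeholders
        example-domain.com.       IN TXT "v=spf1 a mx ip4:203.0.113.10 -all"
        example-domain.com.       IN MX  10 mail.example-domain.com.
        mail.example-domain.com.  IN A   203.0.113.10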

  • Beginner issue with socat [closed]

    - by ams
    The main question is: how do I begin sending a request to a server with some data (a number) and getting data back from the server? And second, how can I solve this simple exercise (as the author calls it)?

    In this part, you should write a simple shell script which receives the URL of a website by sending your student number to a server, and which, after creating and sending an HTTP request for this URL, receives the desired content. Finally, the content should be saved in an HTML file. Steps:

    1. Connect to port 4000 of the server and send the message which includes your student number (e.g. 89207704) to the server.
    2. Receive the URL in the form of http://www.example.com.
    3. Create the HTTP request and send it to the website's server.
    4. Receive the content of the URL from the website.
    5. Save the content in the HTML file.

    What can I do? How do I begin? Thank you all. (The topology the exercise refers to was attached as an image.) Is there any easy way to do this?
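
    A minimal sketch with socat, assuming the exercise server simply answers the student number with a single URL line; the host name is a placeholder and the reply format is an assumption:

        #!/bin/bash
        HOST=assignment.server.example        # placeholder for the exercise server
        STUDENT=89207704

        # Steps 1-2: send the student number to port 4000 and read the URL reply
        URL=$(echo "$STUDENT" | socat - TCP:$HOST:4000)

        # Steps 3-5: build a raw HTTP request for that URL and save the body.
        # socat speaks plain TCP, so the request is written by hand.
        SITE=${URL#http://}                   # e.g. www.example.com
        SITE=${SITE%%/*}
        printf 'GET / HTTP/1.0\r\nHost: %s\r\n\r\n' "$SITE" \
            | socat - TCP:$SITE:80 \
            | sed '1,/^\r*$/d' > page.html    # drop the response headers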

  • Google Chrome won't install via IE9/Windows7

    - by purir
    Using IE9, I've tried installing Google Chrome on Windows 7 from this URL: http://www.google.com/chrome/eula.html?hl=en-GB&platform=win -- but I get the following error (apologies for uncouthly dumping this nonsense...). Any ideas dearly appreciated!

        PLATFORM VERSION INFO
        Windows : 6.1.7601.65536 (Win32NT)
        Common Language Runtime : 4.0.30319.235
        System.Deployment.dll : 4.0.30319.1 (RTMRel.030319-0100)
        clr.dll : 4.0.30319.235 (RTMGDR.030319-2300)
        dfdll.dll : 4.0.30319.1 (RTMRel.030319-0100)
        dfshim.dll : 4.0.31106.0 (Main.031106-0000)

        SOURCES
        Deployment url : _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser

        ERROR SUMMARY
        Below is a summary of the errors, details of these errors are listed later in the log.
        * Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser resulted in exception. Following failure messages were detected:
          + The system cannot find the file specified. (Exception from HRESULT: 0x80070002)

        COMPONENT STORE TRANSACTION FAILURE SUMMARY
        No transaction error was detected.

        WARNINGS
        There were no warnings during this operation.

        OPERATION PROGRESS STATUS
        * [25/06/2011 11:41:04] : Activation of _http://dl.google.com/update2/1.3.21.57/GoogleInstaller_en-GB.application?appguid%3D%7B8A69D345-D564-463C-AFF1-A69D9E530F96%7D%26iid%3D%7B26C55C3A-B26A-0484-FEDD-78443D269DA1%7D%26lang%3Den-GB%26browser%3D2%26usagestats%3D0%26appname%3DGoogle%2520Chrome%26needsadmin%3Dfalse%26installdataindex%3Ddefaultbrowser has started.

        ERROR DETAILS
        Following errors were detected during this operation.
        * [25/06/2011 11:41:04] System.IO.FileNotFoundException
          - The system cannot find the file specified. (Exception from HRESULT: 0x80070002)
          - Source: System.Deployment
          - Stack trace:
            at System.Deployment.Internal.Isolation.IsolationInterop.GetUserStore(UInt32 Flags, IntPtr hToken, Guid& riid)
            at System.Deployment.Application.ComponentStore..ctor(ComponentStoreType storeType, SubscriptionStore subStore)
            at System.Deployment.Application.SubscriptionStore..ctor(String deployPath, String tempPath, ComponentStoreType storeType)
            at System.Deployment.Application.SubscriptionStore.get_CurrentUser()
            at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
            at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

        COMPONENT STORE TRANSACTION DETAILS
        No transaction information is available.
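
    Not from the question itself, but the failing call (IsolationInterop.GetUserStore throwing 0x80070002) points at the per-user ClickOnce component store rather than at Chrome. A hedged cleanup sketch from an elevated command prompt; the second step only if the first is not enough:

        rem Clear the ClickOnce online application cache
        rundll32 %windir%\system32\dfshim.dll CleanOnlineAppCache

        rem If that does not help, move the per-user ClickOnce store aside
        ren "%LocalAppData%\Apps\2.0" 2.0.bak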

  • iptables logging not working?

    - by vps_newcomer
    OS: Ubuntu 10.04. Logging daemon: rsyslog. For some reason I'm not getting any iptables logs; even though I don't look through them very often, I'd still like to get it working for the sake of it working. Here is my /etc/rsyslog.d/iptables.conf:

        :msg, contains, "[IPTABLES]" -/var/log/iptables.log
        & ~

    My iptables logging prefix is "[IPTABLES]" followed by whatever else (for example, [IPTABLES] Denied xyz). The /var/log/iptables.log file is being created; however, it's not getting any entries. I can see the logging entries in dmesg, but not in syslog or messages. What's going on?

    EDIT: My iptables logging rules:

        # logging limit
        LoggingLimit=5/min
        LoggingPrefix=IPTABLES
        # Logging chain
        iptables -N LOG_REJECT
        iptables -A LOG_REJECT -j LOG
        # join INPUT to LOG_REJECT
        iptables -A INPUT -j LOG_REJECT
        # logging
        iptables -A LOG_REJECT -p tcp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied TCP: " #--log-level 7
        iptables -A LOG_REJECT -p udp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied UDP: " #--log-level 7
        iptables -A LOG_REJECT -p icmp -m limit --limit $LoggingLimit -j LOG --log-prefix "$LoggingPrefix Denied ICMP: " #--log-level 7

    Update: I found a thread that has the same symptoms I do; apparently it is a kernel bug. I am using a VPS, so could anyone point me to how to upgrade my kernel or apply a workaround? I couldn't find a 2.6.34 kernel listed in apt-cache. Thread: http://www.linode.com/forums/viewtopic.php?t=5533
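
    Two things worth checking first, sketched under the assumption of a default Ubuntu 10.04 rsyslog layout. First, the filter matches the literal string "[IPTABLES]", but the rules shown set LoggingPrefix=IPTABLES, so the kernel lines carry no square brackets and the filter can never match. Second, LOG-target messages are kernel messages (hence their appearance in dmesg) and only reach rsyslog if the imklog input module is loaded:

        # /etc/rsyslog.conf -- kernel messages reach rsyslog only via imklog;
        # on Ubuntu this line should already exist, so just confirm it:
        $ModLoad imklog

        # /etc/rsyslog.d/iptables.conf -- match the prefix the rules actually
        # emit ("IPTABLES Denied ...", with no square brackets):
        :msg, contains, "IPTABLES Denied" -/var/log/iptables.log
        & ~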

  • ProxyPass issue in Apache

    - by user116992
    I am reverse proxying one site to another site. It's working nicely, but on some links it displays the original link. Here is my configuration. The original site is "mysite.com"; this is the second site's configuration, which is proxypassed onto "mysite.com":

        <VirtualHost *:80>
            ServerName test.mysite.com
            ServerAlias www.test.mysite.com test.mysite.com
            ErrorLog /var/log/apache2/test.mysite_log.log
            TransferLog /var/log/apache2/test.mysite-access_log.log
            LogLevel info
            LogFormat "%h %l %u %t \"%r\" %>s %b %T" common
            ProxyPass / http://mysite.com/
            ProxyPass / http://mysite.com/
            ProxyPassReverse / http://mysite.com/
        </VirtualHost>

    Now, everything is working well, but the problem is that when I go to some specific links they redirect me to the original site. For example, there are two sections on my page: "about-us" and "inquiry". When I click on "about-us" it takes me to http://test.mysite.com/about-us, which is OK. When I click on "inquiry" it takes me to http://mysite.com/inquiry, which is not correct; it must be http://test.mysite.com/inquiry. I think I've missed something in the configuration file, but I can't figure it out.
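
    One plausible explanation, offered as a sketch rather than a definitive diagnosis: ProxyPassReverse only rewrites HTTP response headers (Location, Content-Location), never links inside the HTML body. If the backend writes the "inquiry" link as an absolute URL while "about-us" is relative, exactly this symptom appears. Rewriting links in the body needs mod_proxy_html, a separate module that must be installed and loaded:

        # Inside the same <VirtualHost>; requires mod_proxy_html
        ProxyHTMLEnable On
        ProxyHTMLExtended On
        ProxyHTMLURLMap http://mysite.com http://test.mysite.com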

  • Developing a high-performance, scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea), and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator -- I understand this will create problems with scalability across multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:

    - What do you think about my approach?
    - What solutions would you recommend in terms of the server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.

    Note: I'm a Zend Framework developer and don't have too much experience with the server part (to determine what would be the best solution for my scalable application).

  • Removing HttpModule for specific path in ASP.NET / IIS 7 application?

    - by soccerdad
    Most succinctly, my question is whether an ASP.NET 4.0 app running under IIS 7 integrated mode should be able to honor this portion of my Web.config file:

        <location path="auth/windows">
          <system.webServer>
            <modules>
              <remove name="FormsAuthentication"/>
            </modules>
          </system.webServer>
        </location>

    I'm experimenting with mixed-mode authentication (Windows and Forms). Using IIS Manager, I've disabled Anonymous authentication for auth/windows/winauth.aspx, which is within the location path above. I have Failed Request Tracing set up to trace various HTTP status codes, including 302s. When I request the winauth.aspx page, a 302 HTTP status code is returned. If I look at the request trace, I can see that a 401 (Unauthorized) was originally generated by the AnonymousAuthenticationModule; however, the FormsAuthenticationModule converts that to a 302, which is what the browser sees. So it seems as though my attempt to remove that module from the pipeline for pages in that path isn't working. But I'm not seeing any complaints anywhere (Event Viewer, yellow pages of death, etc.) that would indicate it's an invalid configuration. I want the 401 returned to the browser, which presumably would include an appropriate WWW-Authenticate header.

    A few other points: (a) I do have <authentication mode="Forms"> in my Web.config, and that is what the 302 redirects to; (b) I got the "name" of the module I'm trying to remove from the inetsrv\config\applicationHost.config file; (c) I have this element in my Web.config file: <modules runAllManagedModulesForAllRequests="false">; (d) I tried a <location> element for the path in which I set the authentication mode to "None", but that gave a yellow exception page saying that property can't be set below the application level. Anyone had any luck removing modules in this fashion?

  • How to track things that SHOULD happen, but might not have

    - by Kamiel Wanrooij
    I am running into a couple of issues with some applications we've deployed and maintain. I have the feeling we have approached this with some anti-patterns up to now, but I would like to see how to make this more flexible and stable.

    In one situation, we have a server at a client which pushes data to us to parse every night (yes, Windows Task Scheduler). This is highly unstable, however, so about once a month this doesn't happen, for reasons outside our control. This heavily impacts our business, since we run with stale data in that situation.

    In another scenario we have a lot of background job processes that should be running. We already keep them up using bluepill (http://www.github.com/arya/bluepill), but obviously restarts happen, both automatically and manually, and people forget things or systems mess up.

    What I would like to track is events that should occur or things that should exist -- the existence of a process, the execution of a program, or the creation/age of a file -- and be alerted when they don't happen or don't exist. We develop most things in Ruby on Rails; use NewRelic, bluepill and Munin; and run on Ubuntu. I've been toying around with counting ps aux | grep processname | wc -l in Munin scripts, or capturing the age of a file and raising alerts when it exceeds 24-26 hours, stuff like that. Is there better tooling to track things that should happen, and raise alerts if they don't?

    P.S. I know some things are suboptimal, like manually having to define bluepill for applications and then forgetting to do so. The same goes for the push-based approach of the first application; a dedicated daemon on the client side that we control, managing the push and letting us track its connection to us, might be a much better solution.
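
    A minimal cron-driven watchdog along the lines described, sketched with placeholders throughout (process name, file path, age threshold, and alert address are all assumptions); Munin or NewRelic could wrap the same checks, but the core is just two tests:

        #!/bin/bash
        # Sketch: alert when an expected process is missing or a data file is stale.
        PROC=background_worker           # placeholder process name
        DATA=/var/data/nightly_dump.sql  # placeholder file the client should push
        MAX_AGE_HOURS=26                 # alert if the file is older than this
        ALERT=ops@example.com            # placeholder address

        if ! pgrep -x "$PROC" >/dev/null; then
            echo "$PROC is not running on $(hostname)" \
                | mail -s "watchdog: $PROC down" "$ALERT"
        fi

        # find -mmin +N matches files modified more than N minutes ago
        if [ ! -e "$DATA" ] || \
           [ -n "$(find "$DATA" -mmin +$((MAX_AGE_HOURS * 60)) 2>/dev/null)" ]; then
            echo "$DATA is missing or older than ${MAX_AGE_HOURS}h" \
                | mail -s "watchdog: stale data" "$ALERT"
        fi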

  • Apache /server-status/ gives a 404 not found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking.

    First, system info for the monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5, Apache/2.2.3.

    The "server-status" directive in httpd.conf (I've cross-compared this with another system where I did a successful parallel install of Munin, which correctly shows Apache stats, and the directive below is the same on both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership of httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www -- not certain that this matters? It's got to be a permission issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status is giving me a 404?
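
    A hedged line of investigation: a 404 with mod_status loaded usually means something else claims the URL before the handler sees it (a RewriteRule, an Alias, or a <Location> living in a context the request never reaches), rather than file permissions. These checks use only stock Apache 2.2 tooling, with CentOS default paths:

        # Is the module loaded into the running binary, and which
        # vhost actually answers requests for localhost?
        httpd -M 2>/dev/null | grep status
        httpd -S

        # Rewrite rules or aliases in a vhost can shadow /server-status
        # before the handler ever runs; look for candidates:
        grep -Rn "RewriteRule\|Alias" /etc/httpd/conf /etc/httpd/conf.d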

  • Custom PCI bracket with support for a custom PCB?

    - by newbiez
    I am considering putting a custom PCB card that I made into my computer. It won't go into any PCI connector; it plugs into a USB connector on the motherboard via a ribbon cable. I need, though, to plug a device into it, which means that either I leave the PCB outside the case, hanging by the ribbon (bad idea), or I put it in a PCI slot using a bracket. The issue is that the brackets I have do not have tabs, so I have no way to screw the PCB onto them. I was hoping to find something that would let me mount the PCB on it and then just fit it into the PCI bracket opening, like this: http://www.idotpc.com/TheStore/pc/viewPrd.asp?idproduct=1203&idcategory=0 -- this one won't fit the bill, since its holes are too close together compared to the ones I already have on the PCB (and I can't make more holes). Do you know if there is a place that makes universal PCI bracket mounting systems for custom PCBs? I just need one, so I can't even order a custom one (they are asking me 120 dollars for one). Thanks in advance!

  • Ways of copying files

    - by Tim
    I sometimes find that when using simple right-click copy-and-paste, some files/directories are not copied completely, or not at all, for various reasons, such as some saved webpage files/directories having strange characters in their names, or names that are too long. For example, in Windows 7 I save this webpage http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ completely, in a very deep directory whose parent directories may have long names; I then cannot copy its top ancestor directory, as Windows complains that the filename for the saved webpage directory is too long. In Ubuntu, sometimes I can save a file with a special character such as a newline in some directory, but when I copy that directory, it complains that the file name has a special character and I have to manually remove the character. Such cases come up in both Windows and Ubuntu. I was wondering what better ways there are to accomplish the copy job on both Windows and Ubuntu. For example, would archiving everything to be copied into a single archive help? If yes, how do I do that? Thanks and regards!
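
    Archiving does generally help, because an archiver stores names as raw bytes and the destination never walks the intermediate paths (on the Windows side, robocopy also tends to cope with over-long paths better than Explorer's copy dialog). A sketch for the Ubuntu side, with placeholder paths:

        # Pack the tree into one archive; tar stores names as raw bytes,
        # so newlines and other odd characters survive the trip intact.
        tar -czf saved_pages.tar.gz -C /path/to/parent directory_to_copy

        # Unpack at the destination
        tar -xzf saved_pages.tar.gz -C /path/to/destination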

  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same directory level, with the same name as the working directory but with a suffix (_a). The new directory will be moved to another collection later on. Something like this:

        #!/bin/bash
        mkdir ../n_a
        for file in *{.JPG|.jpg}; do mogrify -path ../n_a -resize 1200x1200 -quality 96; done

    I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here.

    Edit: "n" needs to be replaced with the syntax for the working directory name. Sorry, there was a typo as well on the third script line; it should have read n, not x.

    Edit 2: This script does exactly what I need, and it's silent:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p $DEST
        mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG

    Thanks to vgoff for the correct PWD syntax and cesareriva (http://www.cesareriva.com/archives/722) for showing me the DEST trick.

    Something else: ${PWD##*/}_a does not care for spaces in the directory name, and the script fails -- an empty dir is created in the same dir as the images. Found it out now: it needs quotation marks around $DEST too, presumably to help mkdir create the dir with a space in the name and mogrify write the files to the right place, like this:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p "$DEST"
        mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG

  • Why does my ftp(e)s server fail about half of the time?

    - by user1092608
    I am having this discussion at work regarding our FTP server, which runs via vsftpd. Initially we opted to serve FTPES instead of SFTP, because this seemed the most flexible and straightforward solution for our server to offer secure file transmission. Since then, our FTP server has seemed to be a source of issues for our end users: half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (in the field, at random times, in random places) and indeed, behind some configurations (no idea how they are configured, because of the 'field' testing), I receive errors. One of them is:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some have started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far"; "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny it. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and our FTP service), I need some advice on: (1) what could potentially be the general cause? (2) do you have some general tips? (3) would you mind having a look at my configuration file?

        # ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks
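
    Two hedged suspects, given the symptoms. First, with FTPES the control channel is encrypted, so NAT devices and firewall helpers in the field cannot read the PASV reply and open or translate the data connection; that makes listings fail behind some routers while working fine at home, and it means the passive range must be reachable end to end. Second, FileZilla's "Failed to retrieve directory listing" against vsftpd is often its SSL session reuse check on the data channel. A sketch of the relevant vsftpd.conf lines (the first two are already in the posted config, repeated for context):

        pasv_min_port=30000       # this 31-port range must be open end to end
        pasv_max_port=30030
        require_ssl_reuse=NO      # relaxes vsftpd's data-channel SSL session reuse check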

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run under a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:
    - MySQL contains a description cut off at the € sign, so it's not PHP's fault.
    - The Access databases contain the full description with the € sign, so it's not the fault of the webmaster writing a bad description, nor of eDisplay cutting it.
    - The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy):

        Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

      which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Trying to manually upload the SQL file via SFTP shows the euro messed up too.
    - [Add 2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
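
    One hedged observation before touching vsftpd: a binary transfer preserves bytes exactly, so FTP may be a red herring. The likelier culprit is that the VB application writes the dump in Windows-1252, where € is the single byte 0x80 -- invalid as UTF-8, which is consistent with both the ^À rendering in nano and MySQL truncating the string at that byte. A server-side normalization step could be hooked in before the import, sketched here under that encoding assumption:

        # Normalize the uploaded dump before it is imported; assumes the file
        # really is Windows-1252, where the euro sign is the single byte 0x80.
        SQL=/srv/www/domains/xxxxxx.it/htdocs/db.sql   # path from the vsftpd log
        iconv -f WINDOWS-1252 -t UTF-8 "$SQL" > "$SQL.utf8" && mv "$SQL.utf8" "$SQL"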

  • Software for RAID Failure Alerts?

    - by QF_Developer
    I have two 256 GB Samsung 840 Pro SSDs in a RAID 1 array. I would like to receive a notification if one of the disks in the array fails. Can anybody recommend an application I can install on the server to fire an email if such an event occurs? Here are some additional specs:

    - Supermicro X9SCM-IIF motherboard utilising the onboard hardware RAID controller
    - OS: Windows 2012 Standard

    Also, is it possible to simulate a disk failure by pulling a disk out of the bay? SSDs appear to fail close together when in a mirrored config, so I'd like to know ASAP if one goes down, so I can swap them out with minimum delay.

    UPDATE 26th June 2013: None of the software that ships with the Supermicro X9SCM-* motherboards offers support for RAID monitoring. As has been pointed out here, these boards are built on an Intel chipset for RAID, so I installed Intel Rapid Storage Technology, which supports automated email notifications on RAID failure: http://www.intel.com/support/chipsets/imsm/sb/cs-020784.htm. One small issue: the software only allows you to send email notifications without SMTP authentication. There's a bunch of different workarounds here: http://communities.intel.com/thread/30771

  • Why does Windows (file) Explorer try to connect to port 80 (HTTP) instead of just using SMB?

    - by Erik
    Background: on an almost freshly installed PC, I get a message along the lines of "Windows cannot find some-file-server-name. Check the spelling and try again..." when trying to access any file share.

    Troubleshooting so far:
    - pinging works, both by IP and by name
    - the almost identical PC next to this one can access the file server
    - everyone else can access the file server
    - the PC in question cannot access other open file shares either, but it can connect to the internet

    And now for what I think is the interesting part: running Wireshark with ip.addr == local.ip.add.ress and ip.addr == server.ip.add.ress tells me that it tries to connect over HTTP. The server replies, but after a few messages back and forth it stops. The other machine, of course, just uses SMB. I guess port 80 just means it defaults to WebDAV, but I haven't been able to find anything that could cause this. Googling, the closest thing I found was this: http://www.techrepublic.com/article/get-vista-and-samba-to-work/6353849 -- but then again, that was an XP PC, and I wasn't able to connect to other native Windows shares either (and I tried the solution anyway and it didn't work).
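
    A hedged pair of checks from an elevated command prompt: UNC access order is governed by the network provider list, and the WebClient (WebDAV) service is what makes Explorer try HTTP at all, so inspecting the former and temporarily disabling the latter isolates the problem:

        :: Which UNC providers does Windows try, and in what order?
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order" /v ProviderOrder

        :: Disabling the WebClient service rules WebDAV out as a test
        sc stop WebClient
        sc config WebClient start= disabled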

  • LAMP -- edited PHP file doesn't change web output -- including die()

    - by Reid W
    The server is a standard Linux server on Amazon Web Services: CentOS 5, Apache, PHP 5.3, no APC. It has worked fine for over a year, but now, when I edit some (but not all) PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. svn updating the file in question doesn't help either. Files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate: this has worked fine for a long time. Restarting Apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files, but not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie, but I am at a complete loss with this, and couldn't find anything on Google either. Any hints would be much appreciated!
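
    A hedged diagnostic sketch: the classic cause for "edits to some files don't show up" with symlinked document roots is that the file being edited is not the file being served (a stray duplicate somewhere along the real path). These checks compare the two directly; myfile.php is the placeholder name from the question:

        # Does the path you edit resolve to the file Apache actually serves?
        readlink -f /var/www/html/myfile.php          # follow the symlink chain
        ls -li /var/www/html/myfile.php               # note the inode number

        # Look for stray duplicate copies that may shadow the edited one
        find / -name myfile.php -not -path "/proc/*" 2>/dev/null

        # Ask PHP which file a request actually executes (run from the web root)
        php -r 'echo realpath("myfile.php"), "\n";'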

  • Force Installing a Radeon HD 2100 on Windows 8

    - by Click Ok
    I'm trying to force-install a Radeon HD 2100 on Windows 8. I've found this link from AMD with the drivers for Windows 7: http://support.amd.com/us/gpudownload/windows/legacy/Pages/legacy-radeonaiw-vista64.aspx. I know, too, that AMD has stopped supporting the Radeon HD 4000 series and older: http://www.techspot.com/news/48321-amd-drops-windows-8-support-for-radeon-hd-4000-and-older.html

    Now, let's get to the problem. If I try to install the 12.6 driver, Windows will stick with the "basic display adapter", and this is bad for 3D games like Minecraft, which now runs really slowly compared with the previous Windows 7 installation. Force-installing the Catalyst driver can help to fix it. So I follow these steps:

    1. Extract the Catalyst driver (C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql)
    2. Right-click the "basic display adapter" in Device Manager, and choose "Update driver"
    3. Browse my computer for driver software
    4. Choose the driver via "Have Disk": C:\AMD\Support\12-6-legacy_vista_win7_64_dd_ccc_whql\Packages\Drivers\Display\W76A_INF
    5. There is a big list of drivers, and the nearest one to the HD 2100 is "Radeon HD 2350 Series"

    My questions: why isn't "Radeon HD 2100 Series" listed? (Or where is it listed?) In theory it must be listed; the first link above says "This article applies to the following configuration(s): (...) AMD Radeon HD 2000 Series". Am I doing something wrong?

  • Linux as a gateway (no NAT)

    - by Hugo
    I'm trying to configure a Linux server as a gateway/router, but I can't get it to work, and all the information I've managed to find is NAT-related. I have a public IP block for the gateway and the devices behind it, so I want the gateway to simply route packets to the internet -- again: no NATing! I've managed to get the gateway to access the internet successfully (that was just a matter of configuring the IP and gateway), and the computers behind it can communicate with it.

    [EDIT: more info] This is actually an IPv6 block (2800:40:403::0/48) (but I've found that most utilities and instructions can be easily adapted from IPv4 to IPv6 with little hassle). The server has two ports:

    - wan: 2800:40:403::1/48
    - lan: 2800:40:403::3/48

    One of the computers behind it is connected via a switch: 2800:40:403::7/48.

    The wan interface on the server can ping6 www.google.com without issues. The lan interface on the server and the client can mutually ping each other without issues (as well as SSH, etc.). I've tried setting the server as the default gateway for the client, with no luck:

        client # route -A inet6 add default gw 2800:40:403::3 dev eth1
        server # cat /proc/sys/net/ipv6/conf/all/forwarding
        1

    I don't want any filtering/firewalling/etc., just plain routing. Thanks.
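
    A hedged sketch of the two usual ways to make this work without NAT, with interface names and prefix sizes as assumptions: with both ports numbered out of the same /48, the upstream router has no reason to send traffic for the inside hosts to the gateway, so either the block must be split into routed subnets, or the gateway must answer neighbor discovery on the inside hosts' behalf:

        # On the gateway: forwarding is already on, per the question
        sysctl -w net.ipv6.conf.all.forwarding=1

        # Option 1 (cleaner): split the /48 so WAN and LAN are distinct subnets
        # (prefix sizes illustrative) and have the upstream route the LAN prefix
        # to the gateway's WAN address. On the client, the default route becomes:
        ip -6 route add default via 2800:40:403:1::1 dev eth1   # assumed LAN gateway addr

        # Option 2: keep the flat /48 and proxy neighbor discovery for each
        # inside host, so the upstream reaches them through the gateway:
        sysctl -w net.ipv6.conf.all.proxy_ndp=1
        ip -6 neigh add proxy 2800:40:403::7 dev eth0           # eth0 = assumed WAN port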

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! this uses lots of system memory very fast!):

        $ cat /dev/zero | less

    From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests it would limit itself in this way. In addition, I couldn't find any documentation via Google on the subject. Any light shed on this quite surprising discovery would be great!

    System information: quad-core Intel i7, 8 GB RAM.

        $ uname -a
        Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        $ less --version
        less 458 (GNU regular expressions)
        Copyright (C) 1984-2012 Mark Nudelman
        less comes with NO WARRANTY, to the extent permitted by law.
        For information about the terms of redistribution,
        see the file named README in the less distribution.
        Homepage: http://www.greenwoodsoftware.com/less
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04 LTS
        Release:        14.04
        Codename:       trusty
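
    Two quick checks worth adding, assuming a stock Ubuntu shell: a per-process virtual memory limit or the kernel's OOM killer would each produce exactly this killed-at-N-gigabytes behavior, and both are visible without special tooling:

        ulimit -v      # 'unlimited' rules out a shell-imposed address-space cap
        dmesg | tail   # the kernel OOM killer logs its kills here if it stepped in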

  • Router as primary DNS server, Server as alternate? (or vice versa)

    - by Jakobud
    We have a very small business network, with a typical cable modem hooked into a DD-WRT router. We also run a basic CentOS server that does a variety of things, including acting as the primary DNS server for the office. The reason we need an internal DNS server is that we do a lot of internal web development and use the DNS server to add/remove local network URLs for internal website testing (like www.testsite.com.local). It's very important for us to be able to add/remove URL aliases easily in the DNS.

    The problem with this setup is that if we ever need to restart the CentOS server, or take it offline for upgrades or whatever, then internet access for all computers on the network is lost. That's because each computer relies on that DNS server to access the internet, I guess? The router is online all the time and very rarely has to be restarted. It would be nice if we could set up the router to be the primary DNS server while still running DNS on the server, so we could still add our local testing URLs to the DNS server in CentOS, but also be able to take down the CentOS server without losing internet access on the network.

    How would this be set up? Would I simply need to add both the router and server IP addresses to each computer's IP settings? Is the router the primary DNS and the server the secondary DNS server, or vice versa? Or can one of the two serve as a fallback for the other? What (if anything) needs to be configured on the router and the server in order for them to recognize that the other DNS server exists on the network? Does anyone have any newb-friendly resources for setting up something like this?
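
    One hedged layout that sidesteps the primary/secondary question entirely: point all clients at the DD-WRT router (which runs dnsmasq), and have dnsmasq forward only the internal test zone to the CentOS server. Clients then keep working internet DNS even when the server is down; only the .local test names go dark. This matters because clients do not treat a secondary DNS entry as a fallback for missing records, only for an unreachable server, so plain primary/secondary ordering would not solve this cleanly. The server IP below is a placeholder:

        # DD-WRT: Services -> DNSMasq -> Additional Options
        # Forward only the internal testing zone to the CentOS box
        server=/local/192.168.1.10

        # Everything else resolves via the upstream servers dnsmasq already uses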
