Search Results

Search found 16628 results on 666 pages for 'setup kit'.

  • Multiple Servers + One MailServer

    - by theomega
    Hi, I have several Linux servers (running Debian) hosting different services: database servers, web servers, application servers, tools and so on. All servers are connected to the same internal network. There is also one special server, the mail server: all mail accounts are stored on it, and it is also the outbound mail server for all the other servers. I want all mail for all servers to be stored on the mail server. For example, if a cron job fails on one of the web servers, the mail should not be delivered to the local user but instead to the mail server, so I get a centralized place for mail storage. How do you set up this scenario? My current setup: postfix as MTA on the mail server, and ssmtp on all the other servers. ssmtp is configured to send the mails to the mail server, and the mail server is configured to let the whole internal network relay mail through it. Is this the right way to go? I also thought about setting up a full MTA (postfix) on every server and configuring it to forward the mail. What would be the advantage of that solution?
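
    For reference, a minimal sketch of the ssmtp side of this setup. The hostnames and domain are placeholders, not values from the question:

        # /etc/ssmtp/ssmtp.conf on each satellite server
        # mail.internal is a placeholder for the central mail server
        mailhub=mail.internal:25
        # rewrite locally generated addresses to a routable domain
        rewriteDomain=example.com
        hostname=web01.example.com
        FromLineOverride=YES

    One real advantage of running postfix on every server instead: postfix queues mail locally and retries, so cron output is not lost while the central server is down, whereas ssmtp fails immediately if the mailhub is unreachable.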

  • USB 3.0 hard disk not detected on a particular host controller?

    - by Alvin Wong
    I have a USB 3.0 hard disk which has always worked on my desktop, which has an XHCI controller. I just bought a notebook with an XHCI controller (an Intel Ivy Bridge setup). The first time I plugged the hard disk into its USB 3.0 port, it was detected and working. A few hours later I tried to connect it again, but the notebook just ignored it: the light on the hard disk didn't blink as usual (instead it stayed on). I then tested it on my desktop again and it worked perfectly. It gets trickier: when I plug it into the notebook's USB 2.0 port, it is detected and works perfectly (despite the slower speed). Then I tried plugging a USB 2.0 flash drive into the USB 3.0 port, and it was detected (as USB 2.0, of course). So, there are two USB 3.0 ports on my notebook's XHCI controller; neither works with my hard disk, but both work fine with my USB 2.0 flash drive. What's wrong? When I plug in the hard disk, Device Manager doesn't change. I've tried re-installing the XHCI driver, but it changed nothing. Could I have broken the USB 3.0-specific pins of both USB 3.0 ports?

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    Since NetBIOS name resolution relies on broadcasts, which do not cross routers, WINS has been installed and configured on two domain controllers on different networks to enable browsing of remote networks. The WINS servers appear to be replicating with each other, and each has 127.0.0.1 set as the primary WINS server in its LAN interface properties, with nothing entered for the secondary WINS server. The DC which holds the PDC Emulator FSMO role has the Computer Browser service running and set to start automatically, and its WINS/NBT node type is 0x8 (H-node, hybrid). Remote network browsing still does not work. Is this WINS/NBT node type correct for this scenario? The reason I think it may not be is that I set the DHCP server's 046 WINS/NBT node type option to 0x8 as well, after which the DHCP clients started to disappear from the Network folders. When that option is not set, do clients default to B-node (broadcast)? Or could it be a problem with the WINS server setup?
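
    For what it's worth, clients with no WINS server configured default to B-node, while clients that get a WINS server default to H-node even without option 046. A quick way to check the effective node type on a client, plus a sketch of forcing H-node via the registry (the key is the standard NetBT parameters key; reboot required):

        C:\> ipconfig /all | findstr /c:"Node Type"
        Node Type . . . . . . . . . . . . : Hybrid

        REM force H-node (0x8) on a client
        reg add HKLM\SYSTEM\CurrentControlSet\Services\NetBT\Parameters /v NodeType /t REG_DWORD /d 8 /f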

  • svn post-commit not performing

    - by davin
    I've been sitting on this for about 7 hours, and I've aged close to 7 years... ahhh, server admin does that to me. I have svn wired through apache2 with WebDAV in the usual manner (basically like http://www.howtoforge.com/setting-up-subversion-with-webdav-post-commit-hook-and-multiple-sites-on-jaunty-jackalope-ubuntu-9.04). I've had endless problems with this (I didn't on my previous Ubuntu server install, although this is Ubuntu 10.10). One problem happened and was fixed as in this post: http://stackoverflow.com/questions/2547400/how-do-you-fix-an-svn-409-conflict-error This looks like my issue, although its solution didn't work for me: http://serverfault.com/questions/135494/apache-svn-on-ubuntu-post-commit-hook-fails-silently-pre-commit-hook-permis My commit to svn works (finally), but the post-commit hook, which is supposed to svn update the working copy of the repo on the server, doesn't. The post-commit hook itself executes and has sudo permissions (as in the setup URL above; testing with "whoami > somelogfile.log" or "sudo whoami > somelogfile.log" shows www-data and root, respectively), but it won't perform the svn update ("sudo svn update /var/www/gameServer >> /var/svn/gameServer.log"). As in the serverfault URL above, when I run the exact command by hand it does update the working copy to the latest revision, just not through the post-commit hook. An age-old question that is 90% of the time a permissions issue, but in pure frustration I chmod 777'd lots of stuff, not to mention that www-data is in /etc/sudoers so it shouldn't even need that. I'm collapsing in front of the screen, partly out of frustration and partly out of sleepiness. Any direction would be appreciated.
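
    For comparison, a minimal post-commit hook for this setup might look like the following sketch (paths taken from the question; the PATH export and 2>&1 capture are additions, since hooks run with an empty environment, which is a common reason they fail silently while the same command works from a shell):

        #!/bin/sh
        # /var/svn/gameServer/hooks/post-commit
        # Subversion passes the repository path and revision as arguments
        REPOS="$1"
        REV="$2"
        # hooks run with an empty environment: set a sane PATH explicitly
        PATH=/usr/local/bin:/usr/bin:/bin
        export PATH
        # capture stderr too, so failures stop being silent
        sudo /usr/bin/svn update /var/www/gameServer >> /var/svn/gameServer.log 2>&1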

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that, with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files; they look the same. Since I am using home directories mounted over NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r--  1 jane  operator   394 Jan 27 16:28 authorized_keys
        -rw-------  1 jane  operator  1675 Jan 27 16:27 id_rsa
        -rw-r--r--  1 jane  operator   394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r--  1 jane  operator  1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r--  1 jack  engineer   394 Jan 27 16:28 authorized_keys
        -rw-------  1 jack  engineer  1675 Jan 27 16:27 id_rsa
        -rw-r--r--  1 jack  engineer   394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r--  1 jack  engineer  1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied the authorized_keys, id_rsa, and id_rsa.pub from jane to jack, and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. It seems there is something different between the two users, but I cannot figure out what it is.
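
    sshd checks more than the key files themselves; a sketch of the usual next steps, assuming root access on the target (the sshd log location varies by OS, e.g. /var/log/secure on RHEL):

        # authorized_keys, ~/.ssh and the home directory itself
        # must not be group- or world-writable when StrictModes is on
        chmod go-w $HOME $HOME/.ssh
        chmod 600 $HOME/.ssh/authorized_keys

        # watch the server side while jack attempts to log in;
        # sshd logs the reason it refused the key
        tail -f /var/log/secure

    Since both users' key files look identical here, comparing the permissions and ownership of the two home directories themselves is the most likely place to find the difference.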

  • nginx error page and internal directives not working as expected

    - by Romain
    I'd like to set up my nginx server to return a specific error page on HTTP 50x status codes, and I'd like this page to be unavailable via a direct request from users (e.g., http://mysite/internalerror). For that I'm using nginx's internal directive, but I must be missing something: when I put that directive on my /internalerror location, nginx returns a custom 404 error (which isn't even my own 404 error page) when a page crashes. To summarize, here's what seems to happen:

    1. GET /Home
    2. nginx passes the query to Python
    3. I'm simulating an application bug to get the 502 error code
    4. nginx tries to return /InternalError from its error_page rule
    5. because of the internal rule, it finally falls back to a custom 404 error code <-- why? the documentation says error_page directives are not affected by internal: http://wiki.nginx.org/HttpCoreModule#internal

    Here's an extract from nginx.conf with a few comments to point things out:

        error_page 404 /NotFound;
        error_page 500 502 503 504 =500 /InternalError;  # HTTP 500 error page declaration

        location / {
            try_files /Maintenance.html $uri @pythonbackend;
        }

        location @pythonbackend {
            include uwsgi_params;
            uwsgi_pass unix:///tmp/uwsgi.sock;
        }

        location ~* \.(py|pyc)$ {
            # This internal location works OK and returns my own 404 error page
            internal;
        }

        location /__Maintenance.html {
            # This one also works fine
            internal;
        }

        location ~* /internalerror {
            # This one doesn't work and returns nginx's 404 error page
            # when I trigger an error somewhere on my site
            internal;
        }

    Thanks very much for your help!!
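
    One pattern worth trying (a sketch, not a confirmed fix): serve the error page from an exact-match location, which nginx always prefers over regex locations, and give it an explicit root so the internal redirect can actually find a file to serve (the path below is a placeholder):

        error_page 500 502 503 504 =500 /InternalError;

        location = /InternalError {
            internal;
            # must point at the directory that contains the InternalError file;
            # a regex location with only "internal;" falls back to the default
            # root, and a missing file there produces exactly this kind of 404
            root /var/www/errorpages;   # hypothetical path
        }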

  • Enabling hardware acceleration and Xinerama for multi-monitor/multi-GPU in Linux

    - by mynameiscoffey
    My current setup is three monitors connected as follows (listed from left to right):

    - GPU0 (nVidia GTX 280): Dell 2405FPW (1920x1200), Dell U2410 (1920x1200)
    - GPU1 (nVidia 210): Dell 2405FPW (1920x1200)

    It works like a charm in Windows 7, not so much in Linux. I seem to have only three real options:

    1. Run all three monitors as separate X screens. I get hardware acceleration, but as they are all independent X sessions I cannot move windows between them and can only have Firefox open on one at any given time.
    2. Run the two on GPU0 in TwinView mode and have GPU1 as a separate X screen. Same limitation as option 1, but at least two monitors work together. I did occasionally have an issue where Linux saw both monitors on GPU0 as a single large monitor, however.
    3. Enable Xinerama and have everything work as I want, except hardware acceleration is gone and the display is Windows 95-style choppy.

    My ideal solution would be to have all screens working as they do under Xinerama, without the limitation of having hardware acceleration disabled. I don't even care if that means rendering all three on GPU0 and somehow farming out the display of the third monitor to GPU1; whatever works. My question is this: is there any way to accomplish this? I don't feel my use case is so rare that there shouldn't be at least some form of support (beyond the three limited options presented above). Or is my best option to just suck it up and pick up a better card that can handle three outputs by itself?
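
    For reference, the Xinerama variant (option 3) is enabled in the ServerLayout section of xorg.conf; a trimmed sketch with the screen identifiers as placeholders:

        Section "ServerLayout"
            Identifier  "ThreeHeads"
            Screen      0 "Screen-GTX280-0" 0 0
            Screen      1 "Screen-GTX280-1" RightOf "Screen-GTX280-0"
            Screen      2 "Screen-210-0"    RightOf "Screen-GTX280-1"
            # Xinerama stitches the screens into one desktop, but on this
            # driver generation it disables hardware acceleration
            Option      "Xinerama" "1"
        EndSection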

  • How do I migrate Exchange 2007 to new hardware?

    - by Graeme Donaldson
    As per my previous question, I have an Exchange 2007 box which is also a DC. Since I can't demote it while Exchange is installed, I want to move Exchange to a different server. Does anyone have any articles, tips or experiences to share? The last time I did this was with Exchange 2003, and even that is a little rusty in my head. The setup is a single Exchange 2007 Hub/Edge/Mailbox/CAS server. It's currently on Windows Server 2008; I can migrate it to the same OS or go to 2008 R2, I'm not really picky on that. We're running OWA/ActiveSync/POP3(S)/IMAP(S) for client access. I already have another fully functional DC/GC/DNS box in the same site, and clients in the site are already using it for DNS. It's also the preferred site bridgehead for AD replication. Update: After reading Evan's answer I realised my original question wasn't worded correctly. I'm not looking to do a swing migration; I actually need to move Exchange completely over to a new box. I have done swing migrations in the past, i.e. moving over to a temporary box and back to the original hardware afterwards, and I'm not really sure why I used that term in the original question, since it's not what I intended. Any tips?

  • CloudFlare dashboards empty, or performance issues

    - by Katafalkas
    I wanted to test CloudFlare performance, so I put my image gallery domain on it and started testing. I added page rules for caching and chose Security: Essentially Off. NS check tools say my domain is propagated to CloudFlare. For testing purposes I created a page that loads 200 images from that server, and used the loads.in website to measure how much faster it is. After trying a few regions, I noticed no improvement in loading speed. So I looked at the dashboards, and they were empty. I am not sure whether I am doing something wrong, made an error in my setup, or it simply takes a few days to start caching and working properly, but at the moment, after a day of testing, the dashboards are empty. The NS check tools say all name servers are propagated to CloudFlare and working fine, so I assume I got bad performance because it is simply not working. I sent a letter to the CloudFlare support team but did not get a straight answer. So essentially my questions are: does anyone have experience with CloudFlare? How long does it take to start caching static content to the CDN? Or is there simply something I am doing wrong?

  • Vista laptop 1 connects to network but Vista laptop 2 doesn't

    - by Jaips
    (Sorry if this is a duplicate; I did a hunt but couldn't find a matching question.) We recently got a new router for our network (Thomson TG585v8). It was pre-configured by the ISP and was easy to set up. Both Vista laptops in this home network connected without trouble, as did an iPod touch, all via wifi. In the last two days one of the laptops has been unable to connect cleanly to the network (it connects as an unidentified network). The other laptop and the iPod have had no issues, and I can't think of anything that has changed to make this happen now. I have rebooted the laptop and the router, updated the laptop's wireless driver, checked that the laptop is set to get an IP automatically, and logged into the router, which correctly identifies all three wireless devices. The problem laptop connects via LAN without issue. Some things that may or may not matter: the problem laptop also sometimes uses a 3G dongle, both laptops use Windows Live Mesh to sync folders, and the laptop is usually set to go to sleep. NB: it seems this is an intermittent issue; an hour after I noticed it, it seems to have fixed itself. I would love ideas to solve this problem for good, though.

  • Solaris 10: cannot ping to/from server

    - by anurag kohli
    All, I have a Solaris 10 server which is not reachable by IP (i.e. I can't ping to/from the server). I believe I have the default route set up correctly. See below:

        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        bge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
                inet 192.168.62.100 netmask ffffff00 broadcast 192.168.62.255
                ether 0:14:4f:b1:9b:30

        # netstat -rn
        Routing Table: IPv4
          Destination           Gateway            Flags  Ref   Use   Interface
        -------------------- -------------------- ----- ----- ------ ---------
        192.168.62.0         192.168.62.100       U         1     40 bge0
        224.0.0.0            192.168.62.100       U         1      0 bge0
        default              192.168.62.1         UG        1      0
        127.0.0.1            127.0.0.1            UH        1      4 lo0

        # cat /etc/defaultrouter
        192.168.62.1

    I have verified that layer 1 and layer 2 are up on the switchport and that it's on the correct VLAN. I have also checked that the default gateway (192.168.62.1) is in fact reachable, since I can ping it from my PC:

        Pinging 192.168.62.1 with 32 bytes of data:
        Reply from 192.168.62.1: bytes=32 time=1ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=1ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=3ms TTL=254
        Reply from 192.168.62.1: bytes=32 time=6ms TTL=254

    I'm at a loss as to what is wrong. I would highly appreciate your assistance. Thank you very much.
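
    A quick way to narrow this down on Solaris is to watch the wire with snoop (the built-in capture tool) while pinging from the PC; a sketch:

        # do the pings even reach the interface?
        snoop -d bge0 icmp

        # watch for an ARP mismatch or a duplicate IP answering for .100
        snoop -d bge0 arp

    If the ICMP requests arrive but no replies leave, the problem is on the server (e.g. ipfilter); if nothing arrives at all, suspect ARP or the switch side.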

  • Persistent routes for DD-WRT PPTP VPN client

    - by Tim Kemp
    My home network in the USA is behind a Buffalo router (G300NH) running their version of DD-WRT. I use the built-in PPTP VPN client to connect to a VPN provider in the UK. I route certain traffic over the VPN (so it has a UK source address, for various entirely legal reasons), which I achieved by following the instructions in the DD-WRT docs and my VPN provider's own instructions. I placed two commands like this in the firewall script:

        route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0
        route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0

    I didn't put any of the iptables rules in, since my setup doesn't seem to need them. It works like a charm: traffic to the xxx subnets goes over the VPN, everything else goes out over my ISP's own pipes. The problem comes when the VPN drops, which it does occasionally. DD-WRT does a fine job of reconnecting it automatically, but the routes are trashed every time that happens. How do I automate the process of re-establishing my routes? I thought about static routes, but the IP address of the VPN connection is dynamically assigned (which is why I'm using dev ppp0). Many thanks, Tim
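
    One approach is to re-add the routes from a script that runs every time the PPP link comes up, rather than only at firewall start. A sketch; the hook location is an assumption to verify against your DD-WRT build:

        #!/bin/sh
        # Hypothetical hook path: many DD-WRT builds execute
        # /jffs/etc/config/*.ipup whenever a PPP interface comes up.
        # Re-adding the routes here survives VPN reconnects.
        route add -net xxx.xxx.0.0 netmask 255.255.0.0 dev ppp0
        route add -net yyy.yyy.0.0 netmask 255.255.0.0 dev ppp0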

  • Web Application Publishing on Citrix with Restricted Access

    - by Kanini
    We have a Citrix setup enabling users to access our applications from home. They log in to our site using Windows authentication, and once they are successfully logged in they see the following icon: Desktop - Full Screen (which gives them the desktop they would see when logging in at the office). We now have a requirement to publish a web application, hxxp://ourlibrary, on Citrix with the following security requirements (the application is already accessible if users launch the desktop, open IE within it, and navigate to it):

    - When they are successfully authenticated to our site, they should see only the Internet Explorer icon, not the Desktop - Full Screen icon.
    - On clicking the icon, Internet Explorer should open and automatically navigate to hxxp://ourlibrary.
    - They should not be able to access any other URL, such as Google or Hotmail.
    - They should not be able to browse via File > Open.
    - They should not be able to browse via File > Save.

    In effect, they should be able to view the site and that should be it. Any ideas on how to accomplish the security features? We have already published the application.
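
    Not a complete lockdown on its own, but for the published-application command line, IE's kiosk switch removes the menus and toolbars (including File > Open/Save); a sketch, with the URL as given in the question:

        REM published application command line (hypothetical example)
        "C:\Program Files\Internet Explorer\iexplore.exe" -k http://ourlibrary

    Restricting which URLs can actually be reached still needs something beyond IE itself, such as a proxy rule or IE administration policies applied to the Citrix session.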

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup-script installation, to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently more is going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance). Another option that has occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: (1) can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing); and (2) would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions; I've been banging my head against this one for a bit...
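
    On the Task Scheduler route, schtasks can create an at-boot task from the command line on both XP and 7; a sketch, with the script path as a placeholder:

        REM create a task that runs as SYSTEM at system startup
        schtasks /create /tn "LocalStartupScript" /tr "C:\scripts\startup.cmd" /sc onstart /ru SYSTEM

    Note this addresses concern (1) but not (2): unlike Group Policy startup scripts, a scheduled task does not hold back the logon screen while it runs.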

  • Cisco Router - Add a missing MIB file

    - by Jonathan Rioux
    I have a Cisco 881W and I would like to set up NBAR in my NetFlow Analyzer, but it says my router is missing the MIB that allows NFA to poll the router over SNMP for NBAR information. The FAQ page of the NetFlow Analyzer website addresses my error: "Q. I am able to issue the command 'ip nbar protocol-discovery' on the router and see the results. But NFA says my router does not support NBAR. Why? A. Earlier versions of IOS support NBAR discovery only on the router, so you can very well execute the command 'ip nbar protocol-discovery' on the router and see the results. But NBAR Protocol Discovery MIB (CISCO-NBAR-PROTOCOL-DISCOVERY-MIB) support came only in later releases. This is needed for collecting data via SNMP. Please verify whether your router's IOS supports CISCO-NBAR-PROTOCOL-DISCOVERY-MIB." The missing MIB is CISCO-NBAR-PROTOCOL-DISCOVERY-MIB, and I found it here: ftp://ftp.cisco.com/pub/mibs/v2/CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.my But how can I add this MIB to the router? The IOS of my router is: c880data-universalk9-mz.151-3.T1.bin
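
    Worth noting: MIB files are not loaded onto IOS routers. The .my file is compiled into the management station (here, NetFlow Analyzer), while support on the router side ships with the IOS release itself. A hedged way to check whether this IOS answers for that MIB from any Linux box with net-snmp (the community string and router IP are placeholders, and the OID, which I believe is the root of the NBAR protocol-discovery tree, should be verified against the .my file):

        snmpwalk -v 2c -c public 192.168.1.1 1.3.6.1.4.1.9.9.244

    If the walk returns values, the router supports the MIB and the problem is on the NFA side; if it returns nothing, an IOS upgrade is the only fix.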

  • NetInstall working on some systems, not working on others

    - by cduruk
    Hi, I'm having an issue where my NetInstall setup works on some computers and fails on others. I am not able to diagnose the issue. I created an image of a Mac Mini and then created a NetRestore image using the System Image Utility found on Snow Leopard Server. NetBoot and NFS all seem to be working fine on the server, which is an XServe. Then I select the NetInstall image from the Startup Disk on a machine. On some of the machines, the process works as expected. On some of them, I see the globe icon blink a few times and then the system boots to the regular hard drive. I have captured the tracedump and the system.log logs from the server on both cases where NetInstall seems to work and fail. Here is the link that has all the logs http://gist.github.com/232232 The gist of the failure seems to be from the lack of BSDP DISCOVER in the failure but I'm not able to identify why that exactly is happening. I'd really appreciate any help on this issue.
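
    BSDP rides on the BOOTP/DHCP ports, so you can watch for the missing DISCOVER on the server directly while a failing client boots; a sketch, with the interface name as a placeholder:

        # on the Xserve: BSDP uses the BOOTP/DHCP ports (67/68)
        tcpdump -i en0 -vv port 67 or port 68

    If the DISCOVER never arrives at the server, the problem is on the client or network side (e.g. a switch not forwarding the broadcast, or Spanning Tree delaying the port during early boot) rather than in the NetInstall image.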

  • Computer won't go into standby

    - by Robert
    When I select Start > Turn Off Computer > Standby, the 'Turn off computer' window closes and then nothing else happens. I can start new applications, and Windows acts like I never selected standby; I ran it for several hours after that. If a TV program is scheduled to record when I select standby, I get a window (from the Pinnacle TV software) asking if I'm sure, since programs are scheduled to record, and the computer just keeps running after I select yes, never going into standby. I added that detail as it shows the standby process is starting. The problem also happens when no TV program is scheduled (so the scheduler task is not running or in memory), regardless of whether I'm watching TV, and regardless of whether Media Center is running (it usually isn't; I'm using Pinnacle to watch TV). I looked at "How to troubleshoot hibernation and standby issues in Windows XP" (http://support.microsoft.com/kb/907477): ACPI is enabled, and standby is an option in Power Options Properties, so it appears to be set up correctly. Windows XP SP3 Media Center Edition, all current updates installed.

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present the information in these databases in a web interface, clearly showing which database each row originated from, and we want to get this data from one single source (for different reasons, one of them being pagination, which gets tricky with multiple sources). The problem: how do we collect data from multiple databases, store it at a central location, and clearly mark the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid making changes in the production environment. Since we can't use MySQL's replication (multiple masters replicating to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone; if there is a solution that solves our other problems without one, we can be flexible. Any help is much appreciated.
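
    One low-tech pattern, assuming a periodic batch copy is acceptable: have the import job on the central server add the origin column, so production stays untouched. A sketch with hypothetical table and site names, showing only the key columns:

        -- central copy: production schema plus an origin column
        CREATE TABLE central.orders (
            id     INT NOT NULL,
            origin VARCHAR(32) NOT NULL,   -- e.g. 'site_london'
            PRIMARY KEY (origin, id)       -- ids are only unique per site
        );

        -- the import job tags each row as it copies from a staging dump
        INSERT INTO central.orders (id, origin)
        SELECT id, 'site_london'
        FROM   staging_london.orders;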

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

    - An on-site file server (Mac OS X Server) used by GFX designers, with a working set of 1TB of data.
    - An off-site server with 2TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the off-site server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day cycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.

    The problem: the on-site tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain in the backside getting people to remember to change the tape; it often gets forgotten. The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost. What I would like:

    - Something that works the way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
    - Data that is sent is compressed and sent over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. As an example, if I have 1TB of working data and 60GB of changes are made each day, I'd expect around 1.84TB of data to be stored on the off-site server.
    - Works with the Mac OS X server and the CentOS 6 server.
    - Doesn't cost an arm and a leg; it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
    - Can be set up to run autonomously.
    - Has some sort of control panel that allows an admin to easily restore a file/folder.

    Any recommendations?
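
    The retention requirement is exactly what hard-linked rsync snapshots provide: unchanged files become hard links between snapshots, so each extra day costs only the changed ~60GB, matching the ~1.84TB estimate. A minimal sketch of the idea, with hypothetical paths (tools like rsnapshot wrap this same mechanism and add the scheduling and pruning):

        #!/bin/sh
        # run on the CentOS box, e.g. daily from cron
        TODAY=/backups/$(date +%F)
        YESTERDAY=/backups/$(date -d yesterday +%F)

        # -z compresses on the wire, -e ssh encrypts in transit;
        # files unchanged since yesterday become hard links, not copies
        rsync -az --delete -e ssh \
              --link-dest="$YESTERDAY" \
              gfxserver:/Volumes/Work/ "$TODAY"

        # drop snapshots older than 14 days
        find /backups -maxdepth 1 -type d -mtime +14 -exec rm -rf {} \;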

  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do CGI graphing it just stops graphing: no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto — and it should be working. Here's my munin.conf setup, at least the parts that matter:

        dbdir   /var/lib/munin
        htmldir /var/www/munin
        logdir  /var/log/munin
        rundir  /var/run/munin
        tmpldir /etc/munin/templates
        graph_strategy cgi
        cgiurl /usr/lib/cgi-bin
        cgiurl_graph /cgi-bin/munin-cgi-graph

    And then the host info, yada yada. graph_strategy cgi and cgiurl are currently commented out in munin.conf, because if I uncomment them, graphing stops working. Again, I get no errors in the logs, just blank images where the graphs used to be. Comment out CGI, and as soon as munin html runs again, everything is back to normal. I'm running the latest version of munin and munin-node; I've tried FastCGI and regular CGI; permissions for all of the directories involved are munin:www-data; and my httpd.conf file looks like this:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory /usr/lib/cgi-bin/>
            AllowOverride None
            SetHandler fastcgi-script
            Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Location /cgi-bin/munin-cgi-graph>
            SetHandler fastcgi-script
        </Location>

    Does anyone have any ideas? Without this working, at least from what I understand, Munin graphs everything even if no one is looking at the graphs; add 100 servers to graph and this starts to become a problem. Hope someone has run into this and can help me out. Thanks!
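
    When the CGI fails silently, running it by hand as the web-server user usually surfaces the real error on stderr; a sketch (the PATH_INFO value is a guess at one of your graph URLs, and munin-cgi-graph reads it from the environment):

        # run the graph CGI exactly as Apache would
        sudo -u www-data env PATH_INFO=/localdomain/localhost/cpu-day.png \
            /usr/lib/cgi-bin/munin-cgi-graph

    Permission problems typically show up here against munin's CGI temp and log paths (e.g. /var/lib/munin/cgi-tmp on Debian-style layouts), which need to be writable by www-data rather than just munin.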

  • When should we choose simple PHP mail() and when SMTP with login+password?

    - by user43353
    Hi, my case: a web application that needs to send 1,000 messages per day to a main Gmail account. (It only needs to send email; it does not need to receive email like an email client.)

    Option 1: use the PHP mail() function + sendmail + php.ini config. PHP example:

        <?php
        $to      = '[email protected]';
        $subject = 'the subject';
        $message = 'hello';
        $headers = 'From: [email protected]' . "\r\n" .
                   'Reply-To: [email protected]' . "\r\n" .
                   'X-Mailer: PHP/' . phpversion();
        mail($to, $subject, $message, $headers);
        ?>

    php.ini config (Ubuntu):

        sendmail_path = /usr/sbin/sendmail -t -i

    Pros: doesn't need an email account, easy to set up. Cons: ?

    Option 2: use Zend_Mail + an SMTP transport with login + password. PHP example (the Zend_Mail classes need to be included):

        $config = array('auth'     => 'login',
                        'username' => 'myusername',
                        'password' => 'password');
        $transport = new Zend_Mail_Transport_Smtp('mail.server.com', $config);
        $mail = new Zend_Mail();
        $mail->setBodyText('This is the text of the mail.');
        $mail->setFrom('[email protected]', 'Some Sender');
        $mail->addTo('[email protected]', 'Some Recipient');
        $mail->setSubject('TestSubject');
        $mail->send($transport);

    Pros: ? Cons: ?

    Questions: Can option 1 be filtered as spam by Gmail's mail server? Please add pros and cons to the options above. Thanks

  • Configuring CakePHP on Hostgator

    - by yaeger
    I have absolutely no idea what I am doing wrong here. I have followed just about every guide there is for installing CakePHP on shared hosting and I am still having problems; I have also started over each time I followed a new guide. Maybe someone can help me out, as I am out of options. Here is my current layout:

        /
            app
                webroot
            vendors
            lib
            cake
            public_html
                .htaccess
                index.php
            plugins

    I have configured the index.php file in public_html to point to the correct files, and I have also done this in the index.php file located in the webroot folder. I am getting an Internal 500 server error, and it says to check my logs for what the error specifically is; however, no logs are being generated. When I removed the .htaccess file from the public_html folder, I got the following errors:

        Warning: require(/app/webroot/index.php) [function.require]: failed to open stream:
        No such file or directory in /home/user/public_html/index.php on line 40

        Fatal error: require() [function.require]: Failed opening required '/app/webroot/index.php'
        (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/user/public_html/index.php on line 40

    Line 40 is:

        require APP_DIR . DS . WEBROOT_DIR . DS . 'index.php';

    where DS = "/" and WEBROOT_DIR = "app". Anyone have any suggestions? I am at a loss as to what I am doing wrong.
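
    The path in the error, /app/webroot/index.php, suggests the ROOT constant resolves to an empty string, so nothing is prefixed before /app. A sketch of what the defines in public_html/index.php typically look like for this layout (CakePHP 1.3-era constants; /home/user stands in for the real account path, and the expected value of WEBROOT_DIR is "webroot", not "app"):

        <?php
        // public_html/index.php (excerpt) -- a hedged sketch, not the stock file
        if (!defined('DS')) {
            define('DS', DIRECTORY_SEPARATOR);
        }
        if (!defined('ROOT')) {
            // absolute path to the directory that contains "app"
            define('ROOT', DS . 'home' . DS . 'user');
        }
        if (!defined('APP_DIR')) {
            define('APP_DIR', 'app');
        }
        // with ROOT prefixed, this resolves to /home/user/app/webroot/index.php
        // instead of the failing /app/webroot/index.php
        require ROOT . DS . APP_DIR . DS . 'webroot' . DS . 'index.php';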

  • PHP-FPM issue on LEMP Stack and WordPress

    - by jw60660
    I'm very much an NGINX and server admin beginner. I used this tutorial to install NGINX / PHP / MySQL / WordPress: C3M Digital Tutorial. In that tutorial the backend php-cgi setup is configured using fastcgi, and php5-fpm was installed during it:

        apt-get install nginx-full php5-fpm php5 php5-mysql php5-apc php5-xsl \
            php5-xmlrpc php5-sqlite php5-snmp php5-curl

    After reading that the NGINX configuration in the WordPress Codex was more secure than most tutorials, I decided to use it: WordPress NGINX configuration in Codex. The Codex configuration uses php-fpm for the backend php-cgi. On opening the browser I got a 502 Bad Gateway error. The error log was:

        2012/06/10 21:18:27 [crit] 14009#0: *4 connect() to unix:/tmp/php-fpm.sock failed
        (2: No such file or directory) while connecting to upstream,
        client: 12.3.456.789, server: mywebsite.com, request: "GET / HTTP/1.1",
        upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "mywebsite.com"

    In the main NGINX configuration file supplied by the Codex, I noticed the "server unix:" line in the upstream php block, which points to a socket that doesn't exist:

        # Upstream to abstract backend connection(s) for PHP.
        upstream php {
            server unix:/tmp/php-fpm.sock;
            # server 127.0.0.1:9000;
        }

    I checked /tmp and it was empty. It seems I missed configuring php-fpm to play with NGINX. Can someone point me in the right direction? Much appreciated!
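
    The socket file has to be created by php-fpm itself; the two sides just need to agree on its path. A sketch of the pool config, assuming Debian/Ubuntu paths for the default www pool:

        ; /etc/php5/fpm/pool.d/www.conf
        ; make php-fpm listen where nginx's upstream expects it
        listen = /tmp/php-fpm.sock
        listen.owner = www-data
        listen.group = www-data

    Then restart php-fpm (service php5-fpm restart). Alternatively, switch the upstream to the commented-out 127.0.0.1:9000 line and set listen = 127.0.0.1:9000 in the pool config to match.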

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

        # Defaults
        AddDefaultCharset UTF-8
        DefaultLanguage en-US
        ServerSignature Off
        FileETag None
        Header unset ETag
        Options -MultiViews
        #Options All -Indexes

        # Force the latest IE version or ChromeFrame
        <IfModule mod_setenvif.c>
            <IfModule mod_headers.c>
                BrowserMatch MSIE ie
                Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
            </IfModule>
        </IfModule>

        # Proxy X-UA Setup
        <IfModule mod_headers.c>
            Header append Vary User-Agent
        </IfModule>

        # Rewrites
        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        # Redirect to non-WWW
        RewriteCond %{HTTPS} !=on
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

        # Redirect to WWW
        RewriteCond %{HTTP_HOST} ^domain.com
        RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

        # Redirect index to root
        RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

        # Caching
        ExpiresActive On
        ExpiresDefault A0
        Header set Cache-Control "public"

        # 1 Year Long Cache
        <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
            ExpiresDefault A31622400
        </FilesMatch>

        # Proxy Caching
        <FilesMatch "\.(css|js|png)$">
            ExpiresDefault A31622400
            Header set Cache-Control "private"
        </FilesMatch>

        # Protect against DOS attacks by limiting file upload size
        LimitRequestBody 10240000

        # Proper SVG serving
        AddType image/svg+xml svg svgz
        AddEncoding gzip svgz

        # GZip Compression
        <IfModule mod_deflate.c>
            <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>

        # Error page
        ErrorDocument 404 /404.html

        # Deny access to sensitive files
        <FilesMatch "\.(htaccess|ini|log|psd)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

  • Tcpreplaying using VMware

    - by Methos
    This is more of a testbed setup question. I want to use VMware to debug some networking code in the Linux kernel in the VM. My VM has two network interfaces. What I want to do is replay a capture file on the host and receive the packets in the VM. My problem is I do not see the replayed packets in the VM. I am running VMware and tcpreplay on the host as sudo, so I don't think there should be any problem accessing device files. I am running VMware Workstation 7.0.

    a. I first began with custom networking, as that provides the option of creating your own virtual network name. I wrote /dev/vmnet3 and /dev/vmnet4 for the two interfaces respectively. However, after booting the guest I did not see any of these interfaces or device files (in /dev) created on the host.
    b. Then I tried 'Host Only', but that does not show which bridge/device file is associated with the interface.
    c. Finally I tried bridged networking mode. I see vmnet1, vmnet8 and vboxnet0 on the host. I have tcpreplayed the capture file on each of these interfaces, for all the above three cases.

    I tried to capture packets in the VM using "tcpdump -i any", but I do not see any packets. Any ideas/pointers?
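
    For what it's worth, a sketch of the two ends of this test (the interface and file names are placeholders; the host interface must be the vmnet the guest NIC is actually attached to, which you can check in the VM's network adapter settings):

        # host: replay the capture onto the virtual switch
        tcpreplay --intf1=vmnet1 capture.pcap

        # guest: capture on a specific interface rather than "any";
        # unlike "-i any", this puts the NIC in promiscuous mode, so
        # frames not addressed to the guest's MAC are still captured
        tcpdump -i eth0 -n

    The replayed frames carry the MAC addresses from the original capture, so without promiscuous mode on the guest side the NIC will silently drop them, which matches the symptom described.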
