Search Results

Search found 11640 results on 466 pages for 'figure'.


  • Understanding what needs to be in place for a server to send outgoing email from a Linux box

    - by Matt
    I am attempting to configure an openSUSE 11.1 box to send outgoing email for a domain that the same server is hosting. I don't understand enough about SMTP servers and the like to know what needs to be in place and working. The system already had Postfix installed, and I confirmed it was running via:

        sudo /etc/init.d/postfix status

    I examined the Postfix config file in /etc/postfix/main.cf and configured a couple of items regarding the domain/host name and such, but left it largely default. I attempted to send an email from the command line with the following command:

        echo "test 123" | mail -s "test subject" [email protected]

    where differentdomain.com was not the same domain as the one being hosted on the server. However, the email never reaches the target account. Any suggestions?

    EDIT: In the Postfix log (/var/log/mail.info; there's nothing in .err) I see that Postfix is trying to connect to what appears to be a different SMTP server on our network, and the connection is refused:

        connect to ourdomain.com.inbound15.mxlogic.net[our ip address]:25: Connection refused

    However, I can't figure out why it is 1) trying to connect to that server and 2) not just sending the messages itself... I mean, isn't Postfix an SMTP server? I did a grep -ri on ourdomain from /etc and see no configuration files anywhere telling it to do this. Why is it doing this?
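
    A few hedged checks that may help narrow this down (mxlogic is typically an external mail-filtering service, so Postfix may simply be following a relayhost/transport setting or the domain's published MX record):

        # a sketch; postconf reads the live Postfix configuration for you
        postconf relayhost transport_maps      # is anything explicitly routing mail through mxlogic?
        postconf -n | grep -i mxlogic          # any reference to it in the non-default settings?
        dig +short MX ourdomain.com            # or is mxlogic just the published MX that Postfix resolves?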

    Read the article

  • email dropbox between two mutually untrusted sites

    - by user52874
    I've an interesting problem that I thought was straightforward, but it turns out I think I'm whistling down the wrong path. It has to do with (shudder) email. I thought I was done with needing to know about email guts ten years ago; I was wrong. Anyway.

    Simply put, I need to figure out how to relay outgoing email that is not targeted at our domain from our domain into a 'dropbox' in a DMZ, so that the Other Guys can retrieve that email from their side of the DMZ and distribute it accordingly, even out to the public internet if need be. There will be no [un-established] traffic coming back to Our side from anywhere; any attempts to do so are dropped with malicious prejudice.

    Our side is Postfix running on Scientific Linux 6.1. The DMZ boxes are Red Hat 5.4. The Other Guys are M$ Exchange. The firewalls are set up such that data can go from Our Side downsec to the DMZ, but not upsec from the DMZ into Our Side. Same for the Other Guys.

    My first thinking was simply to set up Postfix on a box in the DMZ and tell them to set up fetchmail or whatever the M$ equivalent is, but then I started remembering that Postfix wants to actively relay email onwards, rather than hold it and wait for someone to 'reach in' and retrieve it. I'm not sure I've explained this well, but hopefully it's clear enough that someone can point me in the right direction. I seem to remember having done this before, but it was a looong time ago. Thanks!

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5, and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it:

    1. Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting.
    2. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint.

    We've got part 1 working fine. The main problem with part 2 is that in the response rewrite, because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is:

        http:\u002f\u002fservername.company.com\u002f

    And should be changed to:

        https:\u002f\u002fservername.company.com\u002f

    Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
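
    One way to see the quoting behaviour outside the iRule is to test with tclsh from a shell (a sketch only, not the iRule itself): braces keep the backslashes literal, so the six-character \u002f sequences survive and a plain "http:" pattern can still be matched. The same idea applies inside the iRule: write the patterns in braces (or with doubled backslashes) so the interpreter never gets a chance to collapse \u002f into "/".

        printf '%s\n' 'set payload {http:\u002f\u002fservername.company.com\u002f}; puts [string map {http: https:} $payload]' | tclsh
        # prints: https:\u002f\u002fservername.company.com\u002f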

    Read the article

  • A complicated nginx/php-fpm chroot setup

    - by Rsaesha
    I'm running nginx and php-fpm, and I want to set up jails for each host. My setup is a little complicated, so following tutorials on the web gets me nowhere. Each site has a directory /var/www/domain.name/. Inside that directory, there will be a public/ directory which will be the website root, a logs/ directory which will store nginx logs for that site specifically, and the chroot filesystem (etc/, usr/, etc.).

    The first problem I've run into is that no matter how I configure it, PHP-FPM cannot find the files that are passed to it via nginx. They result in a "Primary script unknown" error, and to make matters worse, the error messages from PHP-FPM are no more verbose than that, so I can't figure out what path is being passed by nginx.

    A php-fpm pool configuration for a host looks like this:

        [host]
        user = host
        group = www-data
        chroot = /var/www/domain.name
        chdir = /public
        listen = 127.0.0.1:900x

    'x' is incremented for each pool. The nginx config for this host looks like this:

        server {
            listen 80;
            server_name domain.name *.domain.name;
            root /var/www/domain.name/public;
            index index.php index.html index.html;

            location ~ \.php$ {
                expires epoch;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9001;
            }
        }

    I'm guessing that the problem is the SCRIPT_FILENAME parameter, but I've changed it to just $fastcgi_script_name, and various other combinations, but to no avail. Can anyone help?
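
    For what it's worth, once the pool is chrooted to /var/www/domain.name, any path php-fpm receives is resolved inside that chroot, so a SCRIPT_FILENAME of /var/www/domain.name/public/index.php no longer exists from the worker's point of view; it would need to arrive as /public/index.php. One way to test what the pool actually accepts, bypassing nginx entirely (a sketch, assuming the cgi-fcgi utility is installed; the script name is just an example):

        SCRIPT_FILENAME=/public/index.php \
        SCRIPT_NAME=/index.php \
        REQUEST_METHOD=GET \
        cgi-fcgi -bind -connect 127.0.0.1:9001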

    Read the article

  • ADSL with RFC 2684 Bridging

    - by Axel Isouard
    My new ADSL line is now enabled, and I can finally use my Netgear DM111Pv2 to connect to the Internet. My ISP has told me a big surprise: I don't need to use a login and a password to connect to the Internet, so I must use the RFC 2684 bridging mode. It works pretty fine on the ADSL modem's side, but I've spent one night trying to figure out how to connect to the Internet through this modem. I only have a Fonera 2.0n and a computer running Gentoo Linux.

    I've been trying to use the br2684ctl utility with brctl on my Gentoo box. First I configured my kernel this way:

        CONFIG_PPP=y
        CONFIG_PPP_BSDCOMP=y
        CONFIG_PPP_DEFLATE=y
        # CONFIG_PPP_FILTER is not set
        CONFIG_PPP_MPPE=y
        # CONFIG_PPP_MULTILINK is not set
        CONFIG_PPPOATM=y
        CONFIG_PPPOE=y
        CONFIG_PPP_ASYNC=y
        CONFIG_PPP_SYNC_TTY=y
        [...]
        CONFIG_ATM=y
        CONFIG_ATM_CLIP=y
        CONFIG_ATM_CLIP_NO_ICMP=y
        CONFIG_ATM_LANE=y
        CONFIG_ATM_MPOA=y
        CONFIG_ATM_BR2684=y
        # CONFIG_ATM_BR2684_IPFILTER is not set

    And I still get these messages:

        cirus nais # br2684ctl -b -c 0 -e 0 -a 8.35
        br2684ctl[8041]: Interface "nas0" created sucessfully
        br2684ctl[8041]: Communicating over ATM 0.8.35, encapsulation: LLC
        br2684ctl[8041]: Fatal: failed to connect on socket; No such device

    The brctl utility keeps telling me "Invalid argument" each time I try to add the nas0 interface into my bridge, and I'm honestly hoping I'm doing something wrong. I've been following this README carefully, along with this tutorial on setting up a PPPoE connection with Gentoo, but the PPPoE interface just tries to start and nothing special related to PPP happens; I can't see the interface when I do ifconfig.

    So, I'm asking you if there's something huge I've been missing since the beginning! Maybe I should wait to buy a new router fully supporting the RFC 2684 bridging mode, but I'm more interested in setting up this mode on my Fonera 2.0n and even my Raspberry Pi!
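
    A possible reading of the "No such device" error: br2684ctl only applies when the ATM circuit terminates on the Linux box itself (i.e. an internal ADSL card exposing an ATM interface). If the DM111Pv2 is the one doing the RFC 2684 bridging, the PC behind it should just see plain Ethernet and only needs an address on that interface. A minimal sketch, assuming the ISP assigns addresses over DHCP and eth0 is the NIC facing the modem:

        ip link set eth0 up
        dhcpcd eth0        # or dhclient eth0, whichever DHCP client is installed
        ip route show      # the default route should now point at the ISP's gateway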

    Read the article

  • I can't download or stream for more than 3 sec, and then the connection activity just dies

    - by JMHein
    Just got a new internet connection installed at my sister's place, but it randomly just stops working. At first it was only affecting Flash videos: they would randomly just stop buffering. I did a lot of research on this and found that there can be many things that cause this exact trouble. I then tried IE and some Flash would stream fine, but still with random deaths. So I told my brother-in-law to reset the router and modem, and that fixed the problem for them but not for my laptop.

    I then started trying to fix the Flash problem, only to find that downloads of any kind were affected. Now it is so bad that 50% of page loads never finish, because the connection drops to 0% usage within a split second. I can't get Flash reinstalled because the installer itself is trying to download, and the download dies at 8%. I tried uploading a large file by FTP to a web server with no trouble. Yet any activity on my end that takes longer than about 1 second to finish just never finishes.

    I can watch the network log in the Task Manager, and it spikes for roughly one second, then drops back to zero; when I go back to the web page it says it is still loading, and no matter how long I let it sit it never does anything more until I reload, which again creates a very short spike of activity on the connection and then drops to zero. Also, if I start a download and it does drop off, I can restart the download where it left off and get up to 100 KB/s for around the same one second, then it drops to around 14 KB/s, then zero a second later...

    I am running Win 7 Home Premium x64 with FF11 and IE8. I have tried everything I can short of calling up the ISP, which very likely will get me nowhere fast. Any advice on what steps to take to figure this out would be nice. I am not even sure it is not just an ISP problem. (At least I should be able to get Flash reinstalled once I get back home.)

    Read the article

  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently but no suggested answers solved my issue. I have some disk space usage that I can't find as well. In df:

        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      144183992 136857180      2652 100% /
        udev             2013316         4   2013312   1% /dev
        tmpfs             808848       876    807972   1% /run
        none                5120         0      5120   0% /run/lock
        none             2022116        76   2022040   1% /run/shm
        overflow            1024         0      1024   0% /tmp

    I checked the inodes, I checked lsof for +L1 or deleted files, I rebooted, I checked for files hidden behind mounts but none of them were the issue. It grows periodically and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. In du in ~:

        du -h --max-depth=1
        192K    ./.nv
        2.1M    ./.gconf
        12K     ./Pictures
        1.6M    ./.launchpadlib
        12K     ./Public
        24K     ./.TemporaryItems
        8.9M    ./.cache
        12K     ./Network Trash Folder
        28K     ./.vnc
        11M     ./.AppleDB
        48K     ./.subversion
        1.9G    ./.xbmc
        8.0K    ./.AppleDesktop
        12K     ./.dbus
        81M     ./.mozilla
        12K     ./Music
        160K    ./.gnome2
        44K     ./Downloads
        692K    ./.zsh
        236K    ./.AppleDouble
        64K     ./.pulse
        4.0K    ./.gvfs
        1.4M    ./.adobe
        44K     ./.pki
        44K     ./.compiz-1
        168K    ./.config
        1.4M    ./.thumbnails
        12K     ./Templates
        912K    ./.gstreamer-0.10
        8.0K    ./.emacs.d
        92K     ./Desktop
        1.3M    ./.local
        12K     ./Ubuntu One
        12K     ./Documents
        296K    ./.fontconfig
        12K     ./.qt
        12K     ./.gnome2_private
        20K     ./.ssh
        20K     ./.mission-control
        12K     ./Videos
        12K     ./Temporary Items
        640K    ./.macromedia
        124G    .

    I can't find a way to figure out how it got to that 124G in that directory. There are no mount points in home.
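
    One thing the listing above cannot show: with --max-depth=1 and without -a, du only reports directories, so a huge regular file sitting directly in the home directory (a stray disk image, core dump, growing log, etc.) would be counted in the 124G total but never listed. A quick way to check for that (a sketch; sort -h needs reasonably recent GNU coreutils):

        du -xah --max-depth=1 ~ | sort -h | tail -n 15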

    Read the article

  • Can't Connect To Local MySQL Using IP Address, but CAN connect from remote server

    - by user1782041
    Here's an interesting one that does not seem to fall into any of the MySQL connection issues I've read about or searched for. On an Ubuntu 12.04 box I had some system updates waiting to install, and I took care of that this evening. After the install, I started seeing some errors in my syslog complaining about a particular PHP script that could no longer connect to the MySQL instance on the box. Here is the specific error:

        PHP Warning:  mysql_connect(): Can't connect to MySQL server on '192.168.0.40' (4)

    Now, the server's IP address is 192.168.0.40, and I've checked to make sure that I have MySQL listening on 0.0.0.0 so that I can connect using either "localhost" or "192.168.0.40". Here's where things get odd. From the local machine, if I try the following:

        mysql -uroot -p -h192.168.0.40

    I get this error:

        ERROR 2003 (HY000): Can't connect to MySQL server on '192.168.0.40' (110)

    I've checked, and error 110 indicates an OS timeout, and error 2003 is the MySQL generic "can't connect" error. This indicates that it is not permissions with the user. However, if I do the same thing from a remote machine (say, from 192.168.0.30), I log right in with no problems. Further, other scripts on the local machine that connect to MySQL using "localhost" for the host rather than "192.168.0.40" connect with no problems. Also, I can connect via the MySQL socket with no problems, both from the command line and from PHP scripts.

    So, this feels like a networking issue of some kind on the local box, but there are no iptables rules on this box (it is firewalled externally) and I can't figure out what else may be causing this. This problematic script worked perfectly prior to the latest system update. For now, I'll simply change the script to connect via localhost, but I'd really like to know why it broke, for 2 reasons:

    1. There may be other scripts that connect using 192.168.0.40 that don't run very often which are now broken. Auditing them all will take more time than I feel like devoting at the moment.
    2. I'm curious, and want to know why it broke so I can fix it correctly.

    Any help?
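
    A few local checks that might narrow it down (a sketch; 3306 is assumed to be the port in use):

        sudo netstat -ltnp | grep 3306          # is mysqld bound to 0.0.0.0:3306 or only 127.0.0.1?
        sudo iptables -L -n -v                  # confirm nothing local is filtering that address
        ip addr show                            # is 192.168.0.40 still assigned after the update?
        mysql --protocol=TCP -h 192.168.0.40 -u root -p -e 'SELECT 1;'   # force TCP rather than the socket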

    Read the article

  • Splitting Servers into Two Groups

    - by Matt Hanson
    At our organization, we're looking at implementing some sort of informal internal policy for server maintenance. What we're looking at doing is completing maintenance on our entire server pool every two months; each month we'll do half of the servers. What I'm trying to figure out is some way to split the servers into the two groups. Our naming convention leaves much to be desired (but it's getting better), so splitting by name or number doesn't really work. I can easily take a list of all the servers and split it in two, but with new servers being added constantly, and old ones retired, that list would be a headache to maintain. I'd like to look at any given server and know whether it should have its maintenance done this month or next. For example, it would be nice to look at the serial number: if it started with an even number, then it gets maintenance done on even months, and vice versa. This example won't work, though, as a little over half of the servers are virtual. Any ideas?
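
    One way to get a stable, maintenance-free split that works for physical and virtual machines alike is to hash something every server already has (its hostname) and use the parity of the result, for example (a sketch):

        # even first hex digit of the hostname's md5 -> "even month" group, otherwise "odd month"
        case "$(hostname | md5sum | cut -c1)" in
            [02468ace]) echo "maintenance on even months" ;;
            *)          echo "maintenance on odd months"  ;;
        esac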

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app. Each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script, and each shell script runs its PHP app in a loop, so it runs forever, sleeping after each run. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/term window. I thought simply doing something like the following in a shell script would be sufficient:

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    but it's not working:

    foo.sh runs foo.php once, and it runs correctly.
    dog.sh runs dog.php in a never-ending loop; it runs as expected.
    cat.sh runs cat.php in a never-ending loop *** this never runs!!!

    It appears that the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/term, it runs as expected... Thoughts/comments?
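
    For what it's worth, the snippet above runs the scripts one after another (and "> &2" isn't valid redirection; ">&2" would be), so the parent never reaches cat.sh while dog.sh loops forever. Backgrounding each job is what gives parallelism while their output still goes to the terminal. A minimal sketch:

        #!/bin/sh
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        wait        # keep the parent alive until all three are stopped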

    Read the article

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity. Thus, I'd like to require that SSH keys be used for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box when I was trying to set this up, I'm acutely aware of the chance of screwing myself while trying to accomplish this. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There already exists a very similar question here, but I can't quite figure out how to get to what I want from that information. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, and others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no

        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, answering my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.

    Read the article

  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via their download manager, and it's loaded into their cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying when using my PC, it's utterly UNBEARABLE when I try to listen to it on my BlackBerry. The program is INSANELY slow: it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to finally get to where I was.

    I've looked everywhere on how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to just make the CDs and then rip them to .MP3. In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD burning process.

    Just in case I've not explained myself very well, here is an overview of what I intend to do:

    1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player, otherwise I'll be forced to use the audiobook app)
    2. Mount an .ISO in Virtual CloneDrive
    3. Rip the audio tracks on the mounted .ISO to .MP3s
    4. Repeat steps 2-3 until the entire book is in .MP3 format
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer

    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd expect that when you "insert a blank CD-R" the app would then ask you what file to save to.)

    Read the article

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances, how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like:

        rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA

    for each instance, but this would get tedious if you have 50+ instances. Thanks in advance!

    Edit: The README.Debian file that's in the (Debian) Redmine package states:

        SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES
        This redmine package is designed to automatically configure database BUT NOT
        the web server. The default database instance is called "default".
        A debconf facility is provided for configuring several redmine instances.
        Use dpkg-reconfigure to define the instances identifiers.

    But I can't figure out what to do with the "debconf facility".

    Edit 2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
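
    If the goal is simply "run the documented per-instance command for every instance", a loop may be enough. A sketch, assuming the Debian package keeps one configuration directory per instance under /etc/redmine/ (the instance names in the first loop are the question's own examples):

        for site in InstanceA InstanceB; do
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$site"
        done

        # or, without maintaining a list by hand:
        for dir in /etc/redmine/*/; do
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID="$(basename "$dir")"
        done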

    Read the article

  • Unable to authenticate to Windows Server 2003 for file browsing as non-administrator user.

    - by Fopedush
    I've got a Windows Server 2003 box containing a RAID 5 array I use for mass storage. I want to set up a special non-administrator account that can be used to browse files over the network, with only read access. Ideally I'll map my network drive as this user to avoid accidentally hosing my data, and mount as an administrator user on occasions where I actually need write access. I've created a non-administrator user on the Windows Server box (called "ReadOnly"), and granted the user read permissions on the folders I need. However, when I try to browse to the files and authenticate as this user, I'm told "Permission denied". If I throw the ReadOnly user into the Administrators group, however, I can authenticate and browse just fine. I am, of course, only attempting to browse to folders for which I have given this user read permissions. Obviously my ReadOnly user is missing some privilege here, but I can't figure out what it is. I've been digging around in Group Policy Editor all day to no avail. What am I missing?

    Fake Edit: I'm doing my browsing from a Windows 7 box, but I don't think that is relevant.

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI.

    In short, here's what I'm trying to do:

    1. Be able to run a Windows instance (EBS/S3 instance doesn't matter)
    2. Attach local instance storage as drive D:
    3. Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API.
    4. Be able to take a launched instance, update software, shut down, and create another AMI based upon that. Wash, rinse, repeat.

    One other potential option which isn't horrible, but isn't ideal, is to create an AMI which has 2 EBS volumes already attached (system+apps and data). Essentially, every time I start up an instance based upon the AMI it'll create 2 new EBS volumes of pre-determined size. I'm trying to avoid that scenario if possible.
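
    The usual mechanism for this is a block device mapping that names an ephemeral volume. A hypothetical sketch with the classic EC2 API tools (the AMI id and device name are placeholders; exact flags depend on the tools version in use):

        # attach instance storage at launch time
        ec2-run-instances ami-xxxxxxxx --instance-type m1.large -b "xvdb=ephemeral0"

        # the same kind of -b mapping can be supplied when registering the AMI itself,
        # so every instance launched from it gets the ephemeral volume automatically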

    Read the article

  • Weird .#filename files on remote ssh-connected systems after mcedit

    - by etranger
    I'm using MacFusion sshfs in combination with Midnight Commander, and when I edit remote text files with mcedit, weird symlinks are created on the remote system:

        $ ls -l .*
        lrwxr-xr-x 1 user group 34 Jun 27 01:54 .#filename.txt -> [email protected]

    where etranger is my local login name, and mbp is the hostname of my notebook running MacOS. The symlinks can be removed by running a remote rm command, but they cannot be deleted on the mac-fuse mounted volume and thus pollute the filesystem. I cannot figure out what part of the software is responsible for this, and how I could fix it; any help is appreciated.

    EDIT: This appears to be mcedit behavior as documented here: https://dev.openwrt.org/ticket/8245

    Apparently, sshfs fails to remove the symlink to the lock file for some reason (".#" in the filename, perhaps), and it pollutes the filesystem. A quick workaround is possible, using another bug of Midnight Commander: editing (F4) the broken symlink effectively converts it to the missing lock file it was supposed to point to, and removes the symlink itself. The newly created file may then be deleted normally.

    EDIT 2: Unchecking "Follow symlink" in MacFusion apparently allows sshfs to remove dead symlinks, so the problem disappears completely.
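
    For cleaning up leftovers that sshfs can't unlink, a one-off pass over plain ssh works (a sketch; the host and search depth are placeholders):

        ssh user@remotehost 'find . -maxdepth 2 -type l -name ".#*" -delete'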

    Read the article

  • Using bind (named) as a public proxy server

    - by TrentDavis
    We have a Python DNS server that does a bunch of stuff to figure out values it should return for various DNS records. This works nicely; however, as it is Python, the performance under high load won't be great. What I would like to do is have a "proxy" bind server sit in front of it to return results to the public internet. This will cache the results (typically 15 minutes; some records are a few seconds), so the load on the Python server will be greatly reduced, as it will only see one lookup per domain (only about 100 domains) every 15 minutes. The data in these domains changes a lot, so using a master won't work, as it will constantly be changing.

    I had something set up that looked like it would work great (using a forwarder for the zone), and tested it with dig etc., all going great. However, when we went to go live with it, things weren't working, and we figured out that named is not setting the "Authoritative" bit (fair enough, it is a forwarder). So my question is: can we tell bind to set the Authoritative bit for forwarded domains? I have looked at all the doco I can find, and can't find anything about doing things this way. Most of the doco about using it as a proxy is for a LAN to the internet. Ideally I would like to use bind as it is there and installed (CentOS 5 servers). But at a pinch we could look at a different name server to do the work if it just can't be done with bind. Thanks.

    Read the article

  • find the next due date after today within a group in an Excel PivotTable

    - by Dennis George
    I have got a table set up in one sheet with "transactions". Each row contains a name of a vendor, the amount owed or paid depending on transaction type, and the due date/transaction date. Here is some simplified sample data:

        Vendor     Date   Invoice  Payment
        Vendor A   6/30   $200
        Vendor A   6/30            ($200)
        Vendor B   7/5    $500
        Vendor B   7/5             ($500)
        Vendor C   10/28  $50
        Vendor A   10/30  $100
        Vendor C   11/15  $50

    I have already built a PivotTable from that table to group these transactions by vendor and sum the remainder owed. What I'm trying to figure out is how to, for each vendor, get the next due date (min date of the group, excluding dates < Today()), or if there is no next due date then I want to see the max date for that group. Here is what my PivotTable looks like, plus the date column I'd like to add (assuming Today() = 10/23):

        Vendor     Date   Owed
        Vendor B   7/5    -
        Vendor C   10/28  $100
        Vendor A   10/30  $100

    I know calling it next due date might not be so accurate if I end up with the date of a payment in that column, but I'm ok with that.

    tl;dr: I want to find the next earliest date within each group, or the last date. How do I do this?
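
    A calculated column outside the PivotTable may be the simplest route. A hedged sketch, assuming the source data sits in columns A (Vendor) and B (Date), the pivot's vendor name for the row is in E2, and a recent Excel with MINIFS/MAXIFS (older versions would need an equivalent array formula):

        =IF(COUNTIFS($A:$A,E2,$B:$B,">="&TODAY())>0,
            MINIFS($B:$B,$A:$A,E2,$B:$B,">="&TODAY()),
            MAXIFS($B:$B,$A:$A,E2))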

    Read the article

  • Get Squid to pass X-Requested-With header

    - by tftd
    I have configured a Squid 3.1 proxy server. Everything works great except for the X-Requested-With header. I can't manage to figure out how to pass that header to the site I'm attempting to open via the proxy. This is my current configuration:

        request_header_access Allow allow all
        request_header_access Authorization allow all
        request_header_access WWW-Authenticate allow all
        request_header_access Proxy-Authorization allow all
        request_header_access Proxy-Authenticate allow all
        request_header_access Cache-Control allow all
        request_header_access Content-Encoding allow all
        request_header_access Content-Length allow all
        request_header_access Content-Type allow all
        request_header_access Date allow all
        request_header_access Expires allow all
        request_header_access Host allow all
        request_header_access If-Modified-Since allow all
        request_header_access Last-Modified allow all
        request_header_access Location allow all
        request_header_access Pragma allow all
        request_header_access Accept allow all
        request_header_access Accept-Charset allow all
        request_header_access Accept-Encoding allow all
        request_header_access Accept-Language allow all
        request_header_access Content-Language allow all
        request_header_access Cookie allow all
        request_header_access Mime-Version allow all
        request_header_access Retry-After allow all
        request_header_access Title allow all
        request_header_access Connection allow all
        request_header_access User-Agent allow all
        request_header_access All deny all      # remove all other headers

        # delete "x-forwarder-for.." headers
        forwarded_for delete
        request_header_access Via deny all
        request_header_access X-Forwarded-For deny all

    I tried to add this line:

        request_header_access X-Requested-With allow all

    to the configuration, but apparently X-Requested-With is an unknown header name... Apparently I'm missing something?
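
    One hedged guess: Squid's header-access lists only accept the header names Squid knows internally, plus the special names "All" and "Other", where "Other" stands for every header not known by name. If that is what's biting here, X-Requested-With would fall under "Other", so a line like the following placed before the final "All deny all" may be what's needed:

        request_header_access Other allow all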

    Read the article

  • Linux CentOS trouble with the egrep command on the words file

    - by seth
    I need the commands to list these things for a class, but for the life of me I cannot figure it out. If anyone could offer any insight on how to get so specific with the egrep command, or just answer the questions, it would be highly appreciated. Some I have already figured out, but if they look wrong any corrections may help too.

    1. List all words that have the letter a followed immediately by the letter z.
       egrep {a,}{z,} words
    2. List all words that have the letter a followed sometime later by the letter z (there must be at least one letter in between).
       Egrep {a,?,z} words
    3. List all words that start with the letter a and end with the letter z.
       egrep "^a.*z$" words
    4. List all five letter words that start with the letter a and end with the letter z.
    5. List all words that start with two capital letters followed immediately by at least one lower case letter.
    6. List all words with two consecutive a's or i's or u's. Use {2} to denote “two consecutive” and the pipe character, |, to denote “or”.
       egrep [a|i|o] {2} words
    7. List all words that contain a q where the q is not immediately followed by a u. For instance, queen should not be in your list but Iraqi should be.
    8. List all entries in the file that contain at least one non-letter.
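
    One possible set of answers, offered as a sketch (extended regular expressions as egrep understands them; untested against your particular words file):

        egrep 'az' words                  # 1. a immediately followed by z
        egrep 'a[a-zA-Z]+z' words         # 2. a, at least one letter, then z
        egrep '^a.*z$' words              # 3. starts with a, ends with z
        egrep '^a...z$' words             # 4. five-letter words starting with a, ending with z
        egrep '^[A-Z]{2}[a-z]' words      # 5. two capitals, then at least one lower-case letter
        egrep 'a{2}|i{2}|u{2}' words      # 6. two consecutive a's, i's, or u's
        egrep 'q[^u]|q$' words            # 7. q not immediately followed by u
        egrep '[^a-zA-Z]' words           # 8. at least one non-letter character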

    Read the article

  • fax server farm architecture

    - by Brian Postow
    I'm not sure this is the right forum for this, but it's not Stack Overflow so... I'm trying to figure out an architecture to solve the following problem; maybe someone here can help.

    I have a T1 with 23 fax lines coming into the building. I have a computer (Macintosh Xserve) running HylaFAX. If I had one POTS line, I'd be done. However, I have no idea how to get the T1 into the Mac... Options I've considered:

    1. Some sort of PCI T1 modem (does that exist?)
    2. Splitting the T1 into 23 POTS lines and then connecting 23 analog modems to the Mac, either via an external modem bank (do they still make those?) or via some sort of external PCI bank, which would allow me to use more than 2 4-port modem cards.
    3. Either the T1 or the split POTS lines going into some intermediate device and then transferring the images over IP, or USB, to the Mac.
    4. Really, any other option I can come up with.

    This has GOT to be a problem that someone has already solved, right?

    Read the article

  • MSSQL Server Management Studio Express 2005 license limit error should not be displayed

    - by Erik Vold
    I just tried restoring a 250 MB database from a backup on my local machine, and got the following message:

        TITLE: Microsoft SQL Server Management Studio Express
        ------------------------------
        Restore failed for Server 'MULTIVIS-A0D9F3\SQLEXPRESS'. (Microsoft.SqlServer.Express.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Restore+Server&LinkId=20476
        ------------------------------
        ADDITIONAL INFORMATION:
        System.Data.SqlClient.SqlError: CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 4096 MB per database. (Microsoft.SqlServer.Express.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I had 3 DBs on the machine: one ~4.1 GB DB and two other DBs of less than 10 MB each. So I did some googling on this error and saw the suggestion to try shrinking my other DBs to free up some space. So I did so on the 4.1 GB DB, and now when I go to 'properties' for that DB it says it is taking/using ~2.4 GB. So I should have space now, I figure, but whenever I try to restore the ~250 MB database I still get the error message above. I tried restarting as well, but that wasn't helpful. Any idea what the issue is?

    Read the article

  • Can't add service account to domain group during SQL cluster install

    - by Sam
    I'm installing a 2008 instance on a Server 2003 machine which is already running SQL 2005. I need to set up domain groups for the security setup step (http://msdn.microsoft.com/en-us/library/ms179530.aspx):

        On Windows Server 2003, specify domain groups for SQL Server services. All resource
        permissions are controlled by domain-level groups that include SQL Server service
        accounts as group members.

    Much more info on this here: http://support.microsoft.com/kb/910708

    I've had problems with being able to add the Windows service accounts to the groups at install time. The security admins had to make my account a domain admin, which they were hesitant to do.

        The account under which SQL Server Setup is running must have permissions to add
        accounts to the domain groups.

    Is there a specific security setting which would allow my account to add accounts to a group?

    UPDATE: I'm looking for specific instructions. I have a global group called domain\servicegroup. What do I tell the security folks to do? I'd love to figure it out myself, but I don't have access to this stuff.

    Read the article

  • How to eliminate overscan on Ubuntu HTPC

    - by Norman Ramsey
    I'm using an Ubuntu box with an Nvidia graphics card as an HTPC. My HDTV is a Sony Bravia KDS-R50XBR1; this is a rear-projection unit with many inputs. I am using the HDMI input. I'm using the proprietary Nvidia drivers and they recognize the 1920x1080 resolution just fine. The display is a little fuzzy, but at 50 inches it's perfect for movies.

    My problem is that the TV has three overscan settings, and none of them reduces overscan to zero. When I was using dual-screen this was fine, but I'm moving to where the TV is my only screen, and the Gnome panels are not visible because of the overscan. I'd like to figure out how to eliminate the overscan, without trying to scale my 1080p content down to 920p or something ridiculous like that. Ideally there would be some scurvy trick, perhaps involving the TV service menu, to get rid of the overscan on the TV side. Or I could move the Gnome panels, but I still would be missing the edges of my movies. Suggestions most welcome.
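
    If the TV side can't be fixed, newer NVIDIA drivers can underscan the output themselves via MetaMode viewports. A sketch only, assuming a driver recent enough to support ViewPortIn/ViewPortOut and that the TV shows up as DFP-0; the ViewPortOut numbers are placeholders to be tuned until the panel edges become visible:

        nvidia-settings --assign CurrentMetaMode="DFP-0: 1920x1080 { ViewPortIn=1920x1080, ViewPortOut=1820x1024+50+28 }"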

    Read the article

  • Patching Synaptics Driver In Ubuntu

    - by danben
    I've got an HP Envy with the new Synaptics Clickpad - the device is not supported out of the box by Ubuntu, so I downloaded this patch (https://patchwork.kernel.org/patch/67335/), applied it to synaptics.c in the Linux kernel source, rebuilt the kernel and booted from it. Nothing changed. I'm new to Linux and don't know much about driver configuration; how can I check that the newly-produced synaptics.o file is actually being used somewhere? Or, if it is obvious, what step did I leave out?

    Separately, I tried following the instructions in this thread: http://georgia.ubuntuforums.org/showpost.php?p=8989185&postcount=11

    There, someone posted the patch along with a Makefile and instructions for installation. When I followed those, my mouse stopped working altogether. Not desirable, but at least I know it had some effect. The Makefile is posted below; I get the gist of what it's doing, but don't have enough knowledge to figure out why it would cause the device to stop responding entirely.

        obj-m := psmouse.o
        psmouse-objs := psmouse-base.o synaptics.o
        psmouse-$(CONFIG_MOUSE_PS2_ALPS) += alps.o
        psmouse-$(CONFIG_MOUSE_PS2_ELANTECH) += elantech.o
        psmouse-$(CONFIG_MOUSE_PS2_OLPC) += hgpk.o
        psmouse-$(CONFIG_MOUSE_PS2_LOGIPS2PP) += logips2pp.o
        psmouse-$(CONFIG_MOUSE_PS2_LIFEBOOK) += lifebook.o
        psmouse-$(CONFIG_MOUSE_PS2_TRACKPOINT) += trackpoint.o
        psmouse-$(CONFIG_MOUSE_PS2_TOUCHKIT) += touchkit_ps2.o

        KDIR := /lib/modules/$(shell uname -r)/build
        PWD := $(shell pwd)

        default:
                $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

        install:
                @cp psmouse.ko /lib/modules/$(shell uname -r)/kernel/drivers/input/mouse/
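
    To answer the "is my module actually being used" part, a few checks from a terminal may help (a sketch; on Ubuntu psmouse is normally built as a module, so this won't apply if it was compiled into the kernel instead):

        modinfo -n psmouse                        # path of the .ko that modprobe would load
        cat /sys/module/psmouse/srcversion        # compare with `modinfo psmouse | grep srcversion`
        sudo modprobe -r psmouse && sudo modprobe psmouse   # reload the driver without rebooting
        dmesg | tail -n 20                        # look for the Synaptics probe messages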

    Read the article
