Search Results

Search found 15423 results on 617 pages for 'uses clause'.

Page 421/617

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently.

    Now, when Exchange tries to deliver mail to the client's server, it logs the following:

        A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail'
        could not be established because the validation of the Transport Layer Security (TLS) certificate
        for ourclient.com failed with status 'UntrustedRoot'. Contact the administrator of ourclient.com
        to resolve the problem, or remove the domain from the domain-secured list.

    Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best.

    The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority.

    Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
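    For reference, a remote SMTP server's certificate chain can usually be pulled over STARTTLS with openssl s_client and decoded with openssl x509; a minimal sketch, assuming mail.ourclient.com stands in for the client's real MX host:

        # Fetch the certificate offered over STARTTLS on port 25 and print issuer/subject/validity
        openssl s_client -connect mail.ourclient.com:25 -starttls smtp -showcerts </dev/null 2>/dev/null |
          openssl x509 -noout -issuer -subject -dates

    The -showcerts output also includes any intermediate certificates, which can then be compared against the trusted roots on the Exchange server.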

    Read the article

  • Linux Distro for Beginners

    - by XLR3204S
    Well... I know that's the question arising all over the Internet, but I couldn't find an answer to suit me after googling for quite some time. I'd like to get a Linux distribution and start learning to use the CLI. I'm looking for a distribution that already has GNOME installed, as I'll be using Linux-Command.org as my learning resource, and I'm not very familiar with CLI-based web browsers. I'd mainly like to get to know my way around a UNIX-based system, and then I think I'd like to pick up a CLI-only distribution and start doing more complex stuff.

    I've tried Ubuntu, Fedora Core, OpenSolaris and FreeBSD (the last two aren't Linux distros, I know). Ubuntu and FC are fine, they do come with Firefox, but I'm not really sure they're meant for learning purposes. OpenSolaris was OK as well, but I haven't got to play with it enough. FreeBSD 7.2 did not want to install itself on my 13" MacBook Pro; it generated a kernel panic every time while copying the files to the disk.

    So to sum this up, I'm trying to learn Linux, and I'm willing to invest time into this (that is, not giving up when the first problems arise). I also have intermediate knowledge of C++, if it helps, and I'm already using the CLI vim to write small C++ CLI-based programs, so text editing shouldn't be a problem.

    And... speaking of Macs, how am I going to be limited if I try to learn how to use UNIX-based systems using the OS X Terminal? It uses bash 3.2; isn't this the same shell as the one found on most Linux machines? How does the fact that OS X is based on FreeBSD 4.4, if I'm not mistaken, affect this?

    Thanks in advance, and hopefully, I'll have a starting point ASAP.

    Read the article

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This is creating an insane number of lines in /var/adm/wtmpx, and eventually makes it so big (2.5 GB+) that we can no longer run the last command, or the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this:

        /var/adm/wtmpx: Value too large for defined data type

    I have found ways we can clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works. That is not the issue.

    I was wondering if there would be a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question.

    NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time. Therefore, suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.
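    For context, the fwtmp-based cleanup referred to above generally looks something like the sketch below: convert the binary log to ASCII, drop the monitoring user's lines, convert it back. The monsite username comes from the question; the temp paths and the assumption that the username is the first ASCII field are illustrative, not a tested script.

        #!/bin/sh
        # Rewrite /var/adm/wtmpx without the monitoring user's entries
        /usr/lib/acct/fwtmp < /var/adm/wtmpx > /tmp/wtmpx.ascii
        grep -v '^monsite ' /tmp/wtmpx.ascii > /tmp/wtmpx.filtered   # assumes username is the first field
        /usr/lib/acct/fwtmp -ic < /tmp/wtmpx.filtered > /var/adm/wtmpx
        rm -f /tmp/wtmpx.ascii /tmp/wtmpx.filtered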

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SixXS. My host is connected to the SixXS network over an AICCU tunnel (the "sixxs" interface). My host's address is 2001:::2; the host on the other end has address 2001:::1. On my host, IPv6 is fully accessible. The problem is configuring IPv6 networking for the VMs. I use VirtualBox, and the VM (Ubuntu) uses tap1 (bridged on the host by br0):

        #!/bin/sh
        PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin
        # create a tap
        tunctl -t tap1
        ip link set up dev tap1
        # create the bridge
        brctl addbr br0
        brctl addif br0 tap1
        # set the IP address and routing
        ip link set up dev br0
        ip -6 route del 2001:6a0:200:172::/64 dev sixxs
        ip -6 route add 2001:6a0:200:172::1 dev sixxs
        ip -6 addr add 2001:6a0:200:172::2/64 dev br0
        ip -6 route add 2001:6a0:200:172::2/64 dev br0

    Host routing table:

        2001:6a0:200:172::1 dev sixxs  metric 1024
        2001:6a0:200:172::/64 dev br0  proto kernel  metric 256
        2001:6a0:200:172::/64 dev br0  metric 1024
        2000::/3 dev sixxs  metric 1024
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev sixxs  proto kernel  metric 256
        fe80::/64 dev br0  proto kernel  metric 256
        fe80::/64 dev tap1  proto kernel  metric 256
        default via 2001:6a0:200:172::1 dev sixxs  metric 1024

    Guest interface eth1 (it is connected to tap1):

        auto eth1
        iface eth1 inet6 static
            address 2001:6a0:200:172::3
            netmask 64
            gateway 2001:6a0:200:172::2

    Guest routing table:

        2001:6a0:200:172::/64 dev eth1  proto kernel  metric 256
        fe80::/64 dev eth0  proto kernel  metric 256
        fe80::/64 dev eth1  proto kernel  metric 256
        default via 2001:6a0:200:172::2 dev eth1  metric 1024

    The guest pings the host, the host pings the guest, and the host pings 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host does not send them on to 2001:6a0:200:172::1. What have I missed in the configuration?
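    As an aside, a setup like this depends on the host actually routing between sixxs and br0, so IPv6 forwarding must be enabled on the host. A quick check-and-enable sketch (not necessarily the cause here, just the first thing such a configuration assumes):

        # 0 means the host will not forward the guest's packets out of the tunnel
        sysctl net.ipv6.conf.all.forwarding
        # enable forwarding for the running system
        sysctl -w net.ipv6.conf.all.forwarding=1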

    Read the article

  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey

        /user/bin/ec2-describe-instances | \
        perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
        and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
        and print "$1 $dns\n"' | \
        perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
        xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
        rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses?

    I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
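    One way to narrow a silent pipeline like this down is to run it a stage at a time in the same environment and see where the output disappears; a hedged sketch built from the stages of the script above (the script calls /user/bin/ec2-describe-instances, which is worth double-checking against /usr/bin):

        # Stage 1: does the describe call return anything at all?
        /usr/bin/ec2-describe-instances | head

        # Stages 2-3: what would actually be handed to cli53?
        /usr/bin/ec2-describe-instances \
          | perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ and print "$1 $dns\n"' \
          | perl -ane 'print "$F[0] CNAME $F[1] --replace\n"'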

    Read the article

  • Need a new mouse: clicky scroll wheel and additional thumb-operated scroll wheel

    - by cksubs
    I'm having some carpal tunnel syndrome or RSI issues, and I need to get a new mouse. I've tried a friend's Logitech MX Revolution and it's almost perfect. I LOVE the thumb-operated scroll wheel, and I think that alone would solve many of my hand pain issues.

    The problem is that it uses their newfangled smooth/click scroll wheel. It can be either smooth or clicky, and is switched by pressing in the scroll wheel. I really question Logitech's user research on that one. I've been using middle-click to open new links and close tabs in my browser for years now, and I'm not willing to give that up for this cool technology. In fact I HATE smooth scroll wheels in general, and need a clicky one. The option would be OK, but not if it's activated via the middle-click button.

    In closing, I'm looking for these items in my mouse:

    - Thumb-operated scroll wheel or 'joystick'-like apparatus.
    - Clicky, as-stiff-as-possible scroll wheel on top.
    - Normal middle-click functionality.
    - Comfortable, preferably ergonomic design.

    Anyone have any suggestions? Logitech, want to get rid of that smooth-scrolling crap so I can buy your mouse?

    Read the article

  • Why datacenter water cooling is not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components that use water cooling, while water-cooled rack servers are nearly nonexistent.

    On the other hand, using water could possibly (IMO):

    - Reduce the power consumption of large datacenters, especially if it is possible to create direct-cooled facilities (i.e. the facility is located near a river or the sea).
    - Reduce noise, making it less painful for humans to work in datacenters.
    - Reduce the space needed for the servers:
      - At the server level, I imagine that in both rack and blade servers it's easier to pass water cooling tubes than to waste space to allow air to pass inside.
      - At the datacenter level, if it's still required to keep the alleys between servers for maintenance access, the empty space under the floor and at the ceiling level used for the air can be removed.

    So why are water cooling systems not widespread, either at the datacenter level or at the rack/blade server level? Is it because:

    - Water cooling is hard to make redundant at the server level?
    - The direct cost of a water-cooled facility is too high compared to an ordinary datacenter?
    - It is difficult to maintain such a system (regularly cleaning a water cooling system which uses water from a river is of course much more complicated and expensive than just vacuum-cleaning the fans)?

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires far in the future without knowing (with good certainty) that most/all browsers will check to see if content has changed. What I do not want to happen is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that nearly all browsers will check with the server to see if components have changed.

    I have access to both the .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals for this client's server? Thanks for your help.
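    Whatever combination ends up in .htaccess, the headers the server actually sends can be checked from the outside with curl; a small sketch (the host and file name are placeholders):

        # Show only the caching-related response headers for a static asset
        curl -sI http://example.com/images/logo.png \
          | grep -iE '^(cache-control|expires|etag|last-modified)'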

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my Ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a FirePod, which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but I have some folks who run the recorder that I don't want to have root access. However, I can't figure out which lines set up the permissions.

    I've tried this:

        /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666

    And I have this setup (default install):

        /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
        /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
        /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

    And I find these lines in /var/log/syslog:

        Apr 30 09:11:30 record kernel: [    3.284010] ieee1394: Node added: ID:BUS[0-00:1023]  GUID[000a9200c7062266]
        Apr 30 09:11:30 record kernel: [    3.284195] ieee1394: Host added: ID:BUS[0-01:1023]  GUID[00d0035600a97b9f]
        Apr 30 09:11:30 record kernel: [   18.372791] ieee1394: raw1394: /dev/raw1394 device initialized

    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
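    For what it's worth, on Karmic-era udev the usual way to override device permissions is a local rule in /etc/udev/rules.d rather than the old permissions.d mechanism; a hedged sketch (the file name is arbitrary, and 0666 simply mirrors the requirement above):

        # /etc/udev/rules.d/99-raw1394.rules  (hypothetical file name)
        # When the raw1394 device node is created, make it world read/writable
        KERNEL=="raw1394", MODE="0666"

    After adding the file, running "sudo udevadm control --reload-rules" and reloading the raw1394 module (or rebooting) should apply it.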

    Read the article

  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... Not sure how "out there" this thought is...

    Right now, to send a pager message (aka text message), a user logs into our AS400, logs into the program, enters a user name and message, and hits F10 to send. With a little looking, it seems that you can run remote commands on the AS400 via FTP. So I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command:

        c:\>ftp server
        user: admin
        password: *****
        ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test')
        ftp> quit

    So... what I want to do is:

    - set up an email account on our Exchange server
    - monitor the account for incoming mail
    - upon receipt of incoming mail, parse it... say, for example, the subject is defined as "Recipient" and the email text is defined as "Pager message"
    - run a batch that uses the above-mentioned TOPGR and MSG as parameters... via FTP to the AS400
    - mark the email as "read"

    The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails. I'm sure what I want to do is possible... but where would I start?

    EDIT: Clarification. The main reasons for using this four-part system are logging (messages sent via this are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (for example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one. I then don't have to update 2, soon to be 3, systems with current phone numbers, carriers, on-call schedules, etc. Systems #2 and #3 can just "email" [email protected].

    Read the article

  • How to back up non-standard directories in my user profile with Windows Backup?

    - by James Johnston
    I'm using Windows Backup to back up my Win7 Pro laptop. I'd like to use it to back up my complete user profile, but I only see the standard profile directories (e.g. C:\Users\JohnstonJ\Documents) in the list. Non-standard ones aren't there (e.g. C:\Users\JohnstonJ\MyCustomDirectory).

    What's the best way to handle this? The only thing I can think of is to browse under the "Computer" entry, navigate directly to C:\Users\JohnstonJ, and check off the entire profile (to get what's in there, plus any new directories that come up). But is that going to back up the profile twice? Or cause other unforeseen problems, given that I checked it off by navigating through the computer rather than picking it under the "Data Files" category (e.g. backing up temporary file garbage, files-in-use problems, etc., that the "Data Files" category might be handling better)?

    I'm looking for solutions that other people use, that are known to work well, and that still use the Windows Backup software; I don't really want to fuss with 3rd-party backup software.

    Example: as you can see, I have two directories in my profile that Windows Backup is not offering to back up: "Dropbox" and "New folder". (Link to images album because I don't have enough reputation to directly embed them: http://imgur.com/a/Xyv5u)

    Read the article

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and I also have two ESXi hosts that I have connected as shown in the "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf

    Even though I created the VLANs on the SG200 and I set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NICs are connected), I cannot ping from host 1 to host 2 when configuring the NICs to use VLAN 608. Am I missing something?

    My IPs are all in the 192.168. range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router since that's the case; is my understanding correct? Also sending my switch config screenshot below; all 3 switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition): http://img31.imageshack.us/img31/2503/switch.gif

    Any ideas what to change on the Cisco SG200 to make this work would be appreciated!

    The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range.

    VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition to that, the only requirement from VSA is that my two hosts:

    - are in the same subnet;
    - have static IP addresses set;
    - have the same default gateway configured.

    If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP for each VLAN, and then set routes between them? Thank you for your time!

    Read the article

  • cpanel dns only / rdns questions

    - by Clear.Cache
    I started getting IPs from ARIN directly, instead of from the data center I'm colocated at. Now I have to start applying rDNS myself for my clients upon request, instead of having the NOC at the DC do this. That is obvious, since I am in full control over the IP delegation and therefore have nameserver authority.

    The question is: how do I "create" PTR / rDNS records for my clients? My current server uses cPanel / WHM with ns1/ns2.mycompany.com. I also applied those as DNS nameservers in the ARIN IP's whois record. How do I create rDNS for my clients? Should I install cPanel DNS Only on an entirely separate server and use this method instead? http://layer1.cpanel.net/

    If so, how can I seamlessly transition the DNS records over to that new DNS server, retaining my ns1/ns2.mycompany.com and their ns1 and ns2 IP addresses? Even more important: I have to change the ns1/ns2 IPs to the new ones I retrieve from ARIN. How can this be done while avoiding downtime during the DNS transition?

    On a side note, would it be easier to just install cPanel DNS Only on a dedicated server and use dns1.mycompany.com and dns2.mycompany.com with their own dedicated ns1/ns2 IPs from ARIN, and utilize this DNS server for customers who request rDNS? Would this be a more viable solution than using our current ns1/ns2.mycompany.com nameservers? Is cPanel DNS Only standalone software that does not require cPanel/WHM on another server? Is it possible to have redundant DNS servers set up using this software solely, ns1 on one server and ns2 on another? Thanks.
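    As background for the "create PTR records" part: reverse DNS is just PTR records published in the matching in-addr.arpa zone that ARIN delegates to your nameservers, and the result can be verified with dig. A small sketch using the placeholder block 192.0.2.0/24:

        # A PTR for 192.0.2.10 lives in the zone 2.0.192.in-addr.arpa,
        # e.g.  10  IN  PTR  mail.customer-example.com.
        # Once published, verify it from any host with:
        dig -x 192.0.2.10 +short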

    Read the article

  • NVidia ION and /dev/mapper/nvidia_... issues.

    - by Ritsaert Hornstra
    I have an NVidia ION board with 4 SATA ports and want to use it to run a Linux server (CentOS 5.4). I first hooked up 3 HDs (that will be a RAID 5 array) and a fourth small boot HD. I first started to use the onboard RAID capability, but that does not work correctly under Linux: it is not real RAID but uses lvm to define some arrays.

    After setting the BIOS back to normal SATA mode and wiping the HDs, the first boot hard disk (/dev/sda) is seen as /dev/sda BEFORE mounting, and after mounting as /dev/mapper/nvidia_. CentOS is unable to install on it (and grub is not installable on it either). So somehow the hard disk is still seen as if it belongs to some LVM volume. I tried to clean out the HD by issuing a few dd if=/dev/zero of=/dev/sda commands to wipe the starting cylinders and final cylinders, but to no avail.

    Did anyone see this problem and did anyone find a solution?

    UPDATE: When I create only a single ext3 partition on the first HD (/dev/mapper/nvidia_...), no LVM partitions are seen and I can boot from /dev/mapper/nvidia_.... Now the next step is to see how I can get rid of this folly.
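    For reference, leftover fakeraid signatures like this are normally reported and removed with dmraid rather than dd, since the nvidia metadata sits in a specific block near the end of the disk. A hedged sketch (run from a rescue shell, and double-check the target device before erasing anything):

        # List any BIOS/fakeraid metadata still present on the disks
        dmraid -r
        # Erase the metadata from the boot disk so it stops being mapped as /dev/mapper/nvidia_*
        dmraid -r -E /dev/sda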

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good Day Everyone,

    I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem.

    In my environment we have two NetApp StoreVault 500 appliances to which I normally perform backups over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 minutes to complete. A second job backs up a pool of 5 VMs to the 2nd StoreVault appliance and takes 2 hours.

    We then installed OpenFiler on an old server that has 2-core Xeon processors, with software RAID 5 in place. When performing the same backups to an NFS OpenFiler share, the first backup job, which takes 13 minutes, takes around 4 hours. The second backup job, which takes 2 hours, takes almost 10 hours to complete. This is unacceptable! Especially considering the strain placed on the host ESX server.

    I assumed that, because of the software RAID 5, the CPU overhead explained the long backup times. I then installed OpenFiler on a 2nd server, an IBM x306 machine with a P4 Intel processor. This time no software RAID, or any RAID at all: a single 750 GB hard drive that contains the OS, with the rest of the disk used to back up VMs over an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup job took 1 1/2 hours to complete instead of 13 minutes!

    Is OpenFiler simply poor at being an NFS server?! Has anyone else had these issues with OpenFiler?
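    When comparing NFS targets like this, it can help to separate the export/disk behaviour from the backup job itself; a hedged sketch of two quick checks (the mount path is a placeholder):

        # On the OpenFiler box: is the share exported sync or async?
        # A sync export on slow disks is a common cause of very poor NFS write throughput.
        exportfs -v

        # From an NFS client: measure raw sequential write speed to the share,
        # bypassing ghettoVCB entirely
        dd if=/dev/zero of=/mnt/openfiler-backup/testfile bs=1M count=1024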

    Read the article

  • Apache2/Shibboleth TCP connections stuck in CLOSE_WAIT

    - by RJT
    I run an Apache2 server which uses the Shibboleth daemon (shibd) as its federated authentication module. Certain server connections using Shibboleth seem to get stuck permanently in the CLOSE_WAIT state:

        tcp       38      0 blah.blah:57346      shib.server.:8443     CLOSE_WAIT
        tcp       38      0 blah.blah:45601      shib.server2:8443     CLOSE_WAIT
        tcp       38      0 blah.blah:41737      shib.server3:5057     CLOSE_WAIT

    From what I can find out, CLOSE_WAIT means that when the remote server disconnects, the local application is failing to close the connection, as it should. I suspect shibd is responsible somehow. Needless to say, if enough CLOSE_WAIT connections accumulate, I have a problem.

    Trying to get rid of the CLOSE_WAIT connections by simply using /etc/init.d/networking restart does not work. In fact networking seems to refuse to shut down and restart, and I get a SIOCADDRT: File exists error (i.e. networking is trying to start without having stopped first). Same problem with ifup -a.

    So I have two questions; one may be easy, and one harder.

    - What's a good way to force networking to restart, and force whatever connections are stuck in CLOSE_WAIT to clear?
    - Any ideas about how to fix Shibboleth and force the shibd module to behave?
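    One point on the first question: a socket in CLOSE_WAIT is owned by a process that has not yet called close() on it, so restarting the network stack cannot clear it; it only goes away when the owning process closes the descriptor or exits. Confirming which process is holding them is straightforward (a sketch with standard tools):

        # CLOSE_WAIT sockets with the PID/program that owns them
        netstat -tnp | grep CLOSE_WAIT
        # or, filtered directly by TCP state
        lsof -nP -iTCP -sTCP:CLOSE_WAIT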

    Read the article

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services (printer redirection), so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number, and therefore the Sage layout sees it as a different printer.

    History behind the question:

    - Users connect via Terminal Services to a Sage server on a different site.
    - They use Sage Line 50 v15 on that server.
    - Users want to print invoices (Sage layouts) locally.
    - The Sage server cannot see the users' local printers; to get around this, users use the printer redirection feature of Terminal Services.
    - The individual reports can be edited to point to a specific printer by default. This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the default printer specified.

    The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)"; note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as the user disconnects from the Terminal Services session, then reconnects in the morning and goes to print, it has lost the connection to that printer. I understand why it fails: the printer exists on a per-session basis, and the layout cannot hold on to the connection from a previous session.

    Thanks in advance for any assistance...

    Read the article

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized:

    - Storage: 2 TB
    - Users: ~15
    - File sizes: < 100 GB
    - Should be easily reachable (mountable as a network drive, or at least with solid "communication" software)

    My first question would be: where can I get halfway affordable online storage? And how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster which allows 2 TB on a small 2 GHz dual-core 2 GB RAM machine (that would be enough by far; I just need a lot of space), and connecting it via NFS or FTP over the Internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide me with that space, this question will be irrelevant, but the 2nd one is the more important one anyway; just recommend something based on experience, you don't have to crawl for hours through hosting services.) livedrive, for example, offers 5 TB for 17€/month; I'd be happy with 2 TB for 20€. The caveat is that it doesn't allow multiple users, which leads me to my second question.

    Where are the security problems? Which protocol is sufficient (I want private and "public" folders, etc., the usual "every user has their own space plus a public space" thing), secure and fast? I'd tend towards (S)FTP, but the problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and a single user leads me into "hacking" a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage; things get difficult with a public folder with 644 permissions, though). Is using something like PKI or 802.1X overkill for private use?
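    On the "mount as a network drive" requirement: if the storage provider exposes plain SFTP, it can be mounted on the server with sshfs; a hedged sketch (host name and paths are placeholders):

        # Mount the remote SFTP account as a local directory
        sshfs storageuser@storage.example.com:/data /mnt/storage -o allow_other,reconnect
        # Unmount again
        fusermount -u /mnt/storage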

    Read the article

  • Exim rejects recipient address on my domain

    - by Nicolas
    Hi, I have a dedicated server (Debian) on which I have installed Exim and Dovecot. Everything worked fine until around a month ago. I have tried to reinstall and reconfigure Exim, but all incoming emails keep being rejected.

    Outlook says:

        A message that you sent could not be delivered to one or more of its recipients.
        This is a permanent error. The following address(es) failed:
        [email protected]
        SMTP error from remote mail server after RCPT TO::
        host mail.mydomain.com [94.76.##.##]: 550 relay not permitted

    Gmail:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Google tried to deliver your message, but it was
        rejected by the recipient domain. We recommend contacting the other email provider for
        further information about the cause of this error. The error that the other server
        returned was: 550 550 relay not permitted (state 14).

    On the server side, my rejectlog file shows:

        2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted

    ... and the mainlog file:

        2011-01-04 17:00:01 1PaAEr-0007vN-DX <= root@ETC_MAILNAME U=root P=local S=869
        2011-01-04 17:00:01 1PaAEr-0007vN-DX ** root@etc_mailname: Unrouteable address
        2011-01-04 17:00:01 1PaAEr-0007vY-Kn Error while reading message with no usable sender address (R=1PaAEr-0007vN-DX): at least one malformed recipient address: root@ETC_MAILNAME - malformed address: _MAILNAME may not follow root@ETC
        2011-01-04 17:00:01 1PaAEr-0007vN-DX Process failed (1) when writing error message to root@ETC_MAILNAME (frozen)
        2011-01-04 17:09:21 no IP address found for host MAIN_RELAY_NETS (during SMTP connection from mail-qw0-f53.google.com [209.85.216.53])
        2011-01-04 17:09:21 H=mail-qw0-f53.google.com [209.85.216.53] F=<####@gmail.com rejected RCPT : relay not permitted

    then after the message becomes frozen:

        2011-01-04 17:28:44 1PaAEr-0007vN-DX Message is frozen

    Thank you for your help; any idea or comment is welcome, as I am really running out of ideas to fix this issue. Nicolas.

    Oh, and the PHP mail() function does not do anything either; could that be related? I think mail() uses sendmail as set in my php.ini.
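    A standard way to see why Exim treats a local domain as relaying is to ask it how it would route an address and to replay the SMTP dialogue by hand; a small sketch (on Debian the binary is exim4, and the mailbox below is a placeholder):

        # Which router/transport would handle this address? "Unrouteable" or
        # "relay not permitted" here points at the local_domains / relay configuration.
        exim4 -bt someuser@mydomain.com
        # Simulate an inbound SMTP session from a given remote IP to test the relay checks
        exim4 -bh 209.85.216.53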

    Read the article

  • SQL Server 2005 Express: upgrading to SP3 in mixed-mode installations

    - by Jeroen Pluimers
    I'm having trouble upgrading SQL Server 2005 Express SP1 to SP3. The SP1 install uses mixed-mode authentication (so there is an sa password). This is the message I get:

        TITLE: Microsoft SQL Server Setup
        ------------------------------
        None of the selected features can be installed or upgraded. Setup cannot proceed since no
        effective change is being made to the machine. To continue, click Back and then select
        features to install. To exit SQL Server Setup, click Cancel.
        For help, click:
        http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft+SQL+Server&ProdVer=9.00.4035.00&EvtSrc=setup.rll&EvtID=SQLSetup90&EvtType=28108
        ------------------------------
        BUTTONS: OK
        ------------------------------

    The link then tells me "To continue you must provide a strong sa password."

    I tried some searching and found something about BPAClient.dll, but this batch file does not fix it:

        mkdir "%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\BPA\BPAClient"
        copy "%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\BPA\bin\BPAClient.dll" "%ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\BPA\BPAClient\"

    So I think the clue is the word "strong" in the link above. Am I on the right track? Where do I find more information on what makes an sa password strong?

    --jeroen (who will adjust the question when he has dug further)

    Read the article

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific, more modern sites could override this in their bootstrapping code with:

        <?php
        mysql_set_charset("utf8", $dsn );

    ... and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as their OS of choice.

    In setting up this new server I discovered to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! Now how do I make PHP connect to MySQL with the latin1 charset?

    I thought about setting a default in my.cnf but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the touch 'm and you break 'm kind. We simply don't have the time to go fix them all.

    How would I set a default mysql/mysqli/pdo_mysql connection charset of latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?

    Read the article

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic and unattended process. This is to reset the computers to a default state, as well as update them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

    - The client begins in a powered-down, Wake-on-LAN-accepting state.
    - A Ghost Console task uses Wake-on-LAN and PXE to boot the client into the WinPE environment.
    - The client connects to the Ghost Console and re-images itself successfully.
    - The client closes WinPE and restarts.
    - PROBLEM: the client boots into the WinPE environment again, instead of the newly installed OS (Win7).

    I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the Wake-on-LAN state again. The Ghost Console even reports an error on the process, namely that it never rebooted into the OS.

    Right now it seems there is no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot into the OS after re-imaging? Thank you.

    Read the article

  • Apache, Tomcat 5 and problem with HTTP basic auth

    - by Juha Syrjälä
    I have set up Tomcat with a webapp that uses HTTP basic auth for some of its URLs. There is an Apache server in front of the Tomcat. I have set up Apache as a proxy like this (all traffic should go directly to Tomcat):

        /etc/httpd/conf.d/proxy_ajp.conf:

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    There is a webapp installed at the root of Tomcat (ROOT.war), so I should be able to use http://localhost/ to access my webapp. But it is not working with HTTP basic auth. The problem is that everything works until I try to access a URL that is protected by HTTP basic auth. URLs without authentication work just fine. When accessing such a URL via Apache, I get an error message from Apache. If I access the same URL directly from Tomcat, everything works just fine.

    I am getting this in the Apache error log:

        [Wed Sep 01 21:34:01 2010] [error] proxy: dialog to [::1]:8009 (localhost) failed

    The access log looks like this:

        ::1 - - [01/Sep/2010:21:34:01 +0300] "GET /protected_path/ HTTP/1.0" 503 360 "-" "w3m/0.5.2"

    I am using:

    - Fedora release 13 (Goddard)
    - httpd-2.2.16-1.fc13.x86_64
    - tomcat5-5.5.27-7.4.fc12.noarch

    The basic auth is implemented in the webapp (not in Apache or Tomcat). The webapp is actually implemented in Scala/Lift, but that shouldn't matter. The auth works if I access Tomcat directly. This is the error message I get from Apache; it is curious that the title is "Unauthorized" and not "Internal error":

        Unauthorized
        The server is temporarily unable to service your request due to maintenance downtime or
        capacity problems. Please try again later.
        Apache/2.2.16 (Fedora) Server at my.server.name.com Port 80

    It could be that Apache is seeing something other than a 200 OK response and thinks that it is an error, when it actually should pass the received 401 Unauthorized response directly to the browser. If this is the problem, how do I fix it?
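    A quick way to compare the two paths is to request the protected URL once through Apache and once against Tomcat's own HTTP connector, and look at the raw status lines; a sketch (it assumes Tomcat's default HTTP port 8080 is enabled alongside AJP):

        # Through Apache / mod_proxy_ajp
        curl -sI http://localhost/protected_path/ | head -n 1
        # Directly against Tomcat, bypassing the proxy
        curl -sI http://localhost:8080/protected_path/ | head -n 1
        # With credentials, to see whether the 401 challenge/response survives the proxy
        curl -sI -u user:password http://localhost/protected_path/ | head -n 1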

    Read the article

  • SMB access from XP to Windows 2008 R2

    - by Pablo
    Here's the thing... I have very slow file copy performance from Windows XP clients to Windows 2008 R2 servers. Here are the facts:

    - Windows XP to Windows 2K3: fast
    - Windows XP to Windows 2K8: very slow
    - Windows 7 to Windows (any): fast

    Despite the fact that the obvious solution would be to upgrade to Windows 7, we have 900 desktops, so that is not an option in the short term.

    I have tried everything: disabling SMB 2.0, disabling security signatures, changing the TCP window size, disabling the W2K8 auto-tuning, upgrading the drivers, etc. We eliminated the network; both the server and the client are connected to the same core switch (no hops, no routers, same VLAN). Upon monitoring the network with a packet capture utility, we see that the SMB packets being exchanged between the W2K8 and the XP machines are very small packets (256 bytes), despite the fact that the MTUs are properly set (1500) and there is no fragmentation whatsoever. In fact, those SMB packets show, in the IP datagram, a window of 65535 or close. The same trace, made using the same application but against a Windows XP share instead of a W2K8 share (and that goes FAST), shows SMB packets of 4096 bytes. I can post the traces if necessary.

    So, why does the XP-W2K8 negotiation arrange for a 24-byte SMB payload, whereas the XP-XP negotiation arranges for 4096-byte SMB packets? Any ideas? I am running short of those...

    Read the article

  • process and memory issue on linux server

    - by zapping
    Need some assistance in analyzing the Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4 GB of RAM. When the website on it is in use, top displays this:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        23459 username1  16   0  151m  27m 8388 S 11.3  0.7  0:11.71 php5
        23730 username1  16   0  151m  28m 8388 S 11.3  0.7  0:03.87 php5
        23458 username1  16   0  151m  28m 8388 S  3.0  0.7  0:19.20 php5
        16202 mysql      15   0  459m  38m 4624 S  0.7  1.0 62:33.81 mysqld
        24141 nobody     15   0  311m 5832 2304 S  0.3  0.1  0:00.03 httpd

    Why does the COMMAND column say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on that one the process always shows as httpd, not php5. The site uses a MySQL DB.

    The problem is that the server load goes up to about 5.x when the website is accessed by about 16 users. When the free -m command is run, the output shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    A lot of memory seems to be in cache and free memory is low. Even when the website is not accessed, leaving the server mostly idle for about 2 days, the free memory showed just 190. When the site is accessed, the free memory seems to go down to about 90 MB and then increases to about 150 MB; it always seems to remain at just about 200 MB. Is this somehow related to the server load showing 5.x? Will adding more RAM resolve the load issue?
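    On the memory side, the figure that reflects real application usage is the "-/+ buffers/cache" used value (756 MB here); the large "cached" number is page cache that the kernel gives back as soon as applications need it. A couple of quick commands for watching the actual consumers and the load together (standard tools, nothing server-specific):

        # Top memory consumers by resident size
        ps aux --sort=-rss | head -n 10
        # Refresh load, memory and swap activity every 5 seconds
        vmstat 5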

    Read the article
