Search Results

Search found 38522 results on 1541 pages for 'single source'.


  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and we are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well.

    The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers.

    Our host has proposed to create a network share on a SAN and map this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me.

    Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?
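    A folder as such can't be mapped as a virtual disk, but if the NAS can expose the same storage as a block device (an iSCSI or FC LUN), VMware can hand it to a VM as a raw device mapping. A minimal sketch of the ESXi-side step, assuming an ESX-based host you can reach and a LUN already visible to it (the vml.* identifier is a placeholder):

        # list candidate devices, then create a physical-mode RDM pointing at one
        ls /vmfs/devices/disks/
        vmkfstools -z /vmfs/devices/disks/vml.0200... /vmfs/volumes/datastore1/machine1/nas-rdm.vmdk

    The resulting .vmdk attaches to the VM like any other disk and shows up as a local drive in the guest; an iSCSI initiator inside each Windows guest achieves the same thing without touching the host. Either way, note that presenting one NTFS LUN to two servers at once is unsafe without a cluster-aware file system, so this solves the "looks like D:" part but not concurrent shared writes.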

    Read the article

  • New SSD, is the MBR broken? DISK BOOT FAILURE

    - by Shevek
    I've been running Windows 7 on a WD 500GB SATA single-drive, single-partition setup for some time with no issues. I've just installed a new Kingston V Series 64GB SSD and performed a clean install of Win7 to it, deleting the partitions on the 500GB and using that as a data drive.

    All was well for a few reboots, but then I started to get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" messages. If I put the Win7 install DVD back in the drive, it boots fine. I tried a clean install again, after replacing SATA cables and swapping SATA ports, with a complete partition wipe of both drives. Again, it rebooted fine a few times, then came back with the "DISK BOOT FAILURE" error.

    I looked on the web and found some discussions about it, so I then started from scratch again. This time I wiped the MBR on both drives using MBRWork, disconnected the 500GB and reinstalled to the SSD. I removed the install DVD and installed all the drivers, which involved many reboots, all with no problem. To make sure, I also did a few cold boots. I then reconnected the 500GB, initialised, partitioned and formatted it, copied data to it and did some more reboots and shutdowns. All was OK. Then out of the blue comes another "DISK BOOT FAILURE" and again, if the Win7 install DVD is in the drive, it boots fine. So, is the SSD a bad'un? TIA

    UPDATE: It was a BIOS issue! I found a hidden-away option for HDD boot order, which was separate from the usual HDD/CDRom/FDD boot order option. The WD was set to boot before the SSD... Swapped them round and all is well. Still don't understand how it worked at first, though... Thanks Solaris

    Read the article

  • Are my iptables secure?

    - by Patricia
    I have this in my rc.local on my new Ubuntu server:

        iptables -F
        iptables -A INPUT  -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 9418 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --sport 9418 -m state --state ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 5000 -m state --state NEW,ESTABLISHED -j ACCEPT  # Heroku
        iptables -A INPUT  -i eth0 -p tcp --sport 5000 -m state --state ESTABLISHED -j ACCEPT      # Heroku
        iptables -A INPUT -p udp -s 74.207.242.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT
        iptables -A INPUT -p udp -s 74.207.241.5/32 --source-port 53 -d 0/0 --destination-port 1024:65535 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A INPUT  -i eth0 -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
        iptables -P INPUT DROP
        iptables -P FORWARD DROP

    9418 is Git's port. 5000 is a port used to manage Heroku apps. And 74.207.242.5 and 74.207.241.5 are our DNS servers. Do you think that this is secure? Can you see any holes here?

    Update: Why is it important to block OUTPUT? This machine will be used only by me.
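    On the update question: with INPUT default-deny but OUTPUT left at ACCEPT, a compromised process can phone home freely, which is the usual argument for locking OUTPUT down even on a single-user box. A minimal sketch of the extra rules that would be needed before flipping the OUTPUT policy - note the loopback rules, which the list above is missing, and the outbound DNS rules matching the servers already named:

        iptables -A INPUT  -i lo -j ACCEPT      # keep localhost traffic working
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp -d 74.207.242.5 --dport 53 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp -d 74.207.241.5 --dport 53 -j ACCEPT
        iptables -P OUTPUT DROP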

    Read the article

  • Can two mocha for After Effects X-Spline Layers be merged?

    - by George Profenza
    Hello, I'm new to mocha for After Effects, but I like the auto-tracking feature. There are a few markers I'm adding, but one of them is giving me a bit of a headache. I'm tracking a circle that moves around a larger object, so it moves in front of it initially, then behind, being occluded by the larger object, then in front again. I tried to track the circle in the first part (when it's in front, before being occluded) and in the second part (when it's in front again, coming out on the other side of the large object) using a single X-Spline. The problem is that the second part starts in a different location, and I don't know how to 'move' the X-Spline to the new position and track from there without affecting the previous keyframes. As a workaround I use two X-Spline Layers, export the data to .txt files, then manually merge the two files into a new one containing keyframes from both X-Spline Layers. Is there an easier way to do this (either merging two X-Spline Layers, or using a single X-Spline Layer that can move to a new location without affecting previous keyframes)? Any suggestion would help.

    Read the article

  • Bad Intel DQ965GF motherboard? Fails memtest, but memory is good.

    - by Boden
    I've got a machine with a DQ965GF motherboard. Two days ago it started locking up hard. I ran memtest 3.3, 1.7, and TestMem 4. TestMem just freezes; memtest failed on moving 8-bit inversions. Letting memtest run eventually causes the system to restart. I pulled the memory sticks one by one, and then replaced the memory with a couple of known-good sticks. No luck. I switched power supplies; didn't help. Swapped video cards just to be safe. No help.

    When I start the machine I get a single beep before it POSTs. According to the manual, a single beep means:

        1 beep - Refresh Error (with nothing on the screen and it is not a video problem)

    I'm assuming that the motherboard has failed, since it's obviously not a RAM or power issue. Do you agree?

    NOTE: I also tried resetting BIOS defaults, and even flashed the BIOS to the latest version. I also ran the Mersenne Prime Test and the CPU seems to click along just fine. (Tried logging in to superuser with OpenID but it's not working for me today. Hope this gets through.)

    Read the article

  • Preventing DDoS/SYN attacks (as far as possible)

    - by Godius
    Recently my CentOS machine has been under many attacks. I run MRTG, and the TCP connections graph shoots up like crazy when an attack is going on. It results in the machine becoming inaccessible.

    This is my current /etc/sysctl.conf config:

        # Kernel sysctl configuration file for Red Hat Linux
        #
        # For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
        # sysctl.conf(5) for more details.

        # Controls IP packet forwarding
        net.ipv4.ip_forward = 0

        # Controls source route verification
        net.ipv4.conf.default.rp_filter = 1

        # Do not accept source routing
        net.ipv4.conf.default.accept_source_route = 0

        # Controls the System Request debugging functionality of the kernel
        kernel.sysrq = 1

        # Controls whether core dumps will append the PID to the core filename
        # Useful for debugging multi-threaded applications
        kernel.core_uses_pid = 1

        # Controls the use of TCP syncookies
        net.ipv4.tcp_syncookies = 1

        # Controls the maximum size of a message, in bytes
        kernel.msgmnb = 65536

        # Controls the default maxmimum size of a mesage queue
        kernel.msgmax = 65536

        # Controls the maximum shared segment size, in bytes
        kernel.shmmax = 68719476736

        # Controls the maximum number of shared memory segments, in pages
        kernel.shmall = 4294967296

        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_syncookies = 1
        net.ipv4.icmp_echo_ignore_broadcasts = 1
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv6.conf.all.accept_redirects = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_source_route = 0
        net.ipv4.conf.all.rp_filter = 1
        net.ipv4.tcp_max_syn_backlog = 1280

    Furthermore, in my iptables file (/etc/sysconfig/iptables) I only have this setup:

        # Generated by iptables-save v1.3.5 on Mon Feb 14 07:07:31 2011
        *filter
        :INPUT ACCEPT [1139630:287215872]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [1222418:555508541]

    Together with the settings above, there are about 800 IPs blocked via the iptables file by lines like:

        -A INPUT -s 82.77.119.47 -j DROP

    These have all been added by my hoster when I've emailed them in the past about attacks. I'm no expert, but I'm not sure this is ideal. My question is: what are some good things to add to the iptables file, and possibly other files, which would make it harder for the attackers to attack my machine without locking out any non-attacking users? Thanks in advance!
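    Static per-IP DROP lines don't scale against a SYN flood, since source addresses are usually spoofed. One commonly suggested addition (a sketch, to be tuned to real traffic levels and to whichever services this box actually runs) is to rate-limit new SYNs and cap half-open connections per source, then default-deny everything not explicitly served:

        iptables -N SYN_FLOOD
        iptables -A INPUT -p tcp --syn -j SYN_FLOOD
        iptables -A SYN_FLOOD -m limit --limit 25/s --limit-burst 50 -j RETURN
        iptables -A SYN_FLOOD -j DROP                                       # beyond the budget: drop
        iptables -A INPUT -p tcp --syn -m connlimit --connlimit-above 30 -j DROP
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT  # adjust to your services
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
        iptables -P INPUT DROP

    The tcp_syncookies setting already in sysctl.conf is the other half of the defense; raising net.ipv4.tcp_max_syn_backlog further and lowering net.ipv4.tcp_synack_retries are also commonly recommended.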

    Read the article

  • AWS elastic load balancer basic issues

    - by Jones
    I have an array of EC2 t1.micro instances behind a load balancer, and each node can manage ~100 concurrent users before it starts to get wonky. I would THINK that if I have two such instances my network could manage 200 concurrent users... apparently not. When I really slam the server (blitz.io) with a full 275 concurrent users, it behaves the same as if there were just one node: it goes from a 400 ms response time to 1.6 seconds (which for a single t1.micro is expected, but not for 6). So the question is: am I simply not doing something right, or is ELB effectively worthless? Anyone have some wisdom on this?

    AB logs:

    Loadbalancer (3x m1.medium)

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   11.668 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    4285.10 [#/sec] (mean)
        Time per request:       23.337 [ms] (mean)
        Time per request:       0.233 [ms] (mean, across all concurrent requests)
        Transfer rate:          1661.35 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    2   4.3      2      63
        Processing:     2   21  15.1     19     302
        Waiting:        2   21  15.0     19     261
        Total:          3   23  15.7     21     304

    Single instance (1x m1.medium direct connection)

        Document Path:          /ping/index.html
        Document Length:        185 bytes
        Concurrency Level:      100
        Time taken for tests:   9.597 seconds
        Complete requests:      50000
        Failed requests:        0
        Write errors:           0
        Non-2xx responses:      50001
        Total transferred:      19850397 bytes
        HTML transferred:       9250185 bytes
        Requests per second:    5210.19 [#/sec] (mean)
        Time per request:       19.193 [ms] (mean)
        Time per request:       0.192 [ms] (mean, across all concurrent requests)
        Transfer rate:          2020.01 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        1    9 128.9      3    3010
        Processing:     1   10   8.7      9     141
        Waiting:        1    9   8.7      8     140
        Total:          2   19 129.0     12    3020
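    For anyone reproducing this: the output above is ApacheBench, and a run matching the reported concurrency and request count would look something like this (the hostname is a placeholder):

        ab -n 50000 -c 100 http://my-elb-1234567890.us-east-1.elb.amazonaws.com/ping/index.html

    One detail worth checking in the logs themselves: "Non-2xx responses: 50001" means every request returned an error status, so both tests may be benchmarking an error page rather than the real document.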

    Read the article

  • WWNs,WWPNs and Fibre Channel addresses

    - by user238230
    There is lots of contradictory information on these subjects, and I don't know why. My first question is about the 64-bit WWN. One reference claims the terms WWN and WWPN are synonymous. An online source seems to refute this. They say:

        A WWPN (world wide port name) is the unique identifier for a fibre channel port, where a WWN (world wide name) is the unique identifier for the node itself. A good example is a dual-port HBA. There will be two WWPNs (one for each port) and only a single WWN for the card itself.

    Question #1: Which is correct? I'm almost positive I read that every "port" has a WWN.

    My next question is about the 24-bit FC address that is dynamically allocated to a port when it is introduced to the switch. The Domain ID field is defined as "a unique number provided to each switch in the fabric."

    Question #2: Do Domain IDs only apply to switch ports? For example, what would the Domain ID be for an HBA? None? The same as the switch port it is connected to?

    Question #3: My last question is about the Name Server of a switch. A book example shows the routing of a message through the switch. It uses the WWNs of the source and destination ports to route the message. I am assuming that the Name Server must associate the WWN and the FC address in some way in order to route the message, correct?

    Read the article

  • Diagnosing Logon Audit Failure event log entries

    - by Scott Mitchell
    I help a client manage a website that is run on a dedicated web server at a hosting company. Recently, we noticed that over the last two weeks there have been tens of thousands of Audit Failure entries in the Security event log with a Task Category of Logon - these have been coming in about every two seconds, but interestingly stopped altogether as of two days ago. In general, the event description looks like the following:

        An account failed to log on.

        Subject:
            Security ID:            SYSTEM
            Account Name:           ...The Hosting Account...
            Account Domain:         ...The Domain...
            Logon ID:               0x3e7
        Logon Type:                 10

        Account For Which Logon Failed:
            Security ID:            NULL SID
            Account Name:           david
            Account Domain:         ...The Domain...

        Failure Information:
            Failure Reason:         Unknown user name or bad password.
            Status:                 0xc000006d
            Sub Status:             0xc0000064

        Process Information:
            Caller Process ID:      0x154c
            Caller Process Name:    C:\Windows\System32\winlogon.exe

        Network Information:
            Workstation Name:       ...The Domain...
            Source Network Address: 173.231.24.18
            Source Port:            1605

    The value in the Account Name field differs. Above you see "david", but there are entries with "john", "console", "sys", and even ones like "support83423" and whatnot. The Logon Type field indicates that the logon attempt was a remote interactive attempt via Terminal Services or Remote Desktop.

    My presumption is that these are brute-force attacks attempting to guess username/password combinations in order to log into our dedicated server. Are these presumptions correct? Are these types of attacks pretty common? Is there a way to help stop these types of attacks? We need to be able to access the desktop via Remote Desktop, so simply turning off that service is not feasible. Thanks

    Read the article

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well, and we have a virtual tun0 interface for tunneling.

    To achieve the goal of tunneling only a single program through the VPN, we start the program with

        sudo -u username -g groupname command

    and added an iptables rule to mark all traffic coming from groupname:

        iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42

    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked traffic packets.

    Problem: if the VPN fails, there is a chance that the special to-be-tunneled program communicates over the normal eth0 interface.

    Desired solution: all marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.

    I tried the following commands, which didn't work:

        iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
        iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT

    The problem might be that the above iptables rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that all marked packets may pass eth0 if they were in tun0 before or are going to the gateway of my VPN provider. Does someone have any idea for a solution?

    Some config info:

        iptables -nL -v --line-numbers -t mangle

        Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
        num   pkts bytes target    prot opt in  out  source     destination
        1     591K   50M MARK      all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 MARK set 0x2a
        2    82812 6938K CONNMARK  all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 CONNMARK save

        iptables -nL -v --line-numbers -t nat

        Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
        num   pkts bytes target    prot opt in  out  source     destination
        1       15  1052 SNAT      all  --  *   tun0 0.0.0.0/0  0.0.0.0/0    mark match 0x2a to:VPN_IP

        ip rule add from all fwmark 42 lookup 42

        ip route show table 42
        default via VPN_IP dev tun0
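    A sketch of one possible fix, keeping the mark 42 from the rules above: after OpenVPN encapsulates the marked packets, the encrypted copies leaving eth0 belong to the daemon's own socket and (CONNMARK aside) no longer carry the group owner, so the daemon can be exempted by owner match while everything still marked is refused on any interface but tun0. This assumes the daemon runs as a dedicated user, here called 'openvpn':

        # let the tunnel's own encrypted traffic out
        iptables -A OUTPUT -o eth0 -m owner --uid-owner openvpn -j ACCEPT
        # fence the marked group in: tun0 or nothing
        iptables -A OUTPUT -m mark --mark 42 ! -o tun0 -j DROP

    If the CONNMARK save rule in the mangle table ends up re-marking the daemon's flows, the owner ACCEPT must come before the DROP so the tunnel itself is never cut off.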

    Read the article

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache started through Task Planner

    - by Stefan Pantke
    I am currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.

    Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected. The PHP-based controller script interacts with Excel as expected.

    Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Planner. One runs apache_start.bat, the other runs mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected; specifically, Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I do receive an error. The error seems to come from Excel [not COM itself] and reads like this:

        Excel can't read the XLS-file
        Excel failed to save the file due to an ill-named worksheet

    Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders it immediately. The Windows system logs didn't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except the way Apache was started. At least the PHP controller script is perfectly able to read the file system, since it provides the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem; a single instance of the specific PHP controller interacts with Excel.

    Question: Could someone provide details on why this happens?

    Read the article

  • Streaming music throughout the house on a budget?

    - by greggannicott
    I was wondering whether anyone knew of a way to stream music throughout a house on a budget? I want to avoid spending any money on this (e.g. I don't want to buy a D-Link-style device). It would be ideal if I could use my existing hardware and some open source software. I have three old(ish) PCs knocking around. I'm happy to stick either Windows or Linux on them. They can all be hooked up to speakers. The ideal solution would result in:

    - the same audio being heard from every device (e.g. when you hear a beat on one device, you'd hear it at the exact same time on another, so you don't get any echo)
    - being able to control the source application (e.g. the songs lined up) with my iPhone

    I realise I'm being cheeky with those two wishes - but you never know your luck. Am I asking for too much, or is there a piece of software/protocol out there with this purpose in mind? I've been searching for some time now, but haven't had any joy. Thanks in advance.
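    If the machines end up on Linux, one zero-cost route is PulseAudio's RTP modules: the source box multicasts its audio and every other box plays the stream. A sketch (the module names are the stock PulseAudio ones):

        # on the machine that plays the music:
        pactl load-module module-null-sink sink_name=rtp
        pactl load-module module-rtp-send source=rtp.monitor
        # on each machine with speakers attached:
        pactl load-module module-rtp-recv

    The honest caveat for the "no echo" wish: plain RTP receivers drift, so rooms within earshot of each other may not stay perfectly in sync. Control from an iPhone can then be as simple as running a remotely controllable player (e.g. MPD) on the source box and using any client app for it.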

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices, and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted) - but only as a last resort. Hopefully it's something very simple I'm missing?
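    A likely catch: libvirt only picks up /etc/libvirt/qemu/*.xml when the domain is (re)defined, and a reboot initiated inside the guest keeps the old device set. A sketch of the usual route, which creates the storage and lets libvirt update the definition itself (size and target name are examples; --persistent assumes a reasonably recent libvirt, otherwise use virsh edit or virsh define after hand-editing the XML):

        # create the new backing file
        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G
        # attach it persistently as hdc (note hdc may be taken by a CD-ROM on the IDE bus)
        virsh attach-disk machine1 /vserver/machine1/disk2.qcow2 hdc \
            --driver qemu --subdriver qcow2 --persistent
        # then a full stop/start from the host, not a guest-side reboot:
        virsh shutdown machine1
        virsh start machine1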

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. File systems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes, and the greater your risk of losing another drive while the rebuild is taking place.

    Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation?

    There may be some other reasons I'm missing, so please enlighten me. Can we not expect file systems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

    Read the article

  • Unable to logon using terminal server connection

    - by satch
    I have several W2K3 SP2 servers with admin TS enabled. I discovered this morning that I was unable to log on to some of them. I have a couple of Citrix servers in different farms, a SAP (IA64) app server and a CVS server. All of them show the same symptoms: remote connections are refused. I've been able to log on locally, and the Terminal Server service is up; there are no users (so connections are not depleted). There are no errors in the log on most servers. One of the Citrix ones reported the following errors:

        Event ID 50, Source TermDD, Type Error:
        The RDP protocol component X.224 detected an error in the protocol stream and has disconnected the client.

        Event ID 1006, Source TermService, Type Error:
        The terminal server received large number of incomplete connections. The system may be under attack.

    Anyway, I suppose these errors appear because the server isn't working and Citrix users try to log on massively. (I nmap'ed the server and the port seems up.) I've solved this problem by rebooting before, but with so many servers affected that seems like a crappy workaround. Any idea how to troubleshoot it properly? Thanks in advance

    Read the article

  • SQL Error (1064) when importing data from SQL file

    - by mejpark
    I have a MySQL database which was originally set up with the default latin1 character set and latin1_swedish_ci collation. I was using the database like this for some time, until I noticed strange characters on my production web site, which is powered by a database exported from my development machine. At this point, I changed the default character set of the database and tables to utf8 and the collation to utf8_unicode_ci, converted the latin1 data inside each table to utf8 (using the 'convert data' option) and exported the database as a single SQL file using HeidiSQL.

    When the resulting SQL file is opened in Notepad++, several characters are rendered incorrectly. For example, en dashes (-) are displayed as – and e with accent (é) is displayed as é. I changed the encoding of the file from ANSI to UTF-8 (using the encoding menu option in Notepad++) and the offending characters are rendered correctly. I saved the new utf8-encoded SQL file and attempted to import the contents into the MySQL database on my production server. The import process fails with the following error:

        /* SQL Error (1064): You have an error in your SQL syntax; check the manual that
           corresponds to your MySQL server version for the right syntax to use near
           '?# --------------------------------------------------------
           # Host: ' at line 1 */
        /* Error with snippets directory: The specified path was not found */

    The head of the SQL file:

        # --------------------------------------------------------
        # Host:                 127.0.0.1
        # Server version:       5.1.33-community
        # Server OS:            Win32
        # HeidiSQL version:     6.0.0.3773
        # Date/time:            2011-04-20 09:48:36
        # --------------------------------------------------------

    It chokes on the first line of the file, which is commented out. Why is this happening? I didn't have a problem loading data from SQL files until I changed the character set and collation of the database. I came up with an ugly workaround to this problem by performing the following steps:

    1. Export the database as a single SQL file using HeidiSQL
    2. Open the resulting file in Notepad++ and convert from ANSI to UTF-8 encoding
    3. Create a new empty file in Notepad++, paste in the UTF-8 content and save the file normally

    What am I missing here?
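    The stray character before the first # in the error text is the giveaway: Notepad++'s plain "UTF-8" conversion writes a byte-order mark (the bytes EF BB BF), and the MySQL client trips over those bytes before it ever sees the comment. The copy-paste workaround succeeds because the freshly created file is saved without a BOM. Two hedged shortcuts around that workaround:

        # strip the BOM in place (GNU sed):
        sed -i '1s/^\xEF\xBB\xBF//' dump.sql

        # or skip the Notepad++ round-trip entirely and let MySQL handle the charsets:
        mysqldump --default-character-set=utf8 -u user -p dbname > dump.sql
        mysql --default-character-set=utf8 -u user -p dbname < dump.sql

    In Notepad++ itself, choosing "Encode in UTF-8 without BOM" before saving should have the same effect.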

    Read the article

  • PHP 5.4.9 Mysqli issue

    - by Vitaly
    On an Ubuntu 12.04 server I had PHP 5.4.9 installed from source:

        ./configure --prefix=/etc/php --with-apxs2=/etc/apache2/bin/apxs \
            --with-config-file-path=/etc/php --with-config-file-scan-dir=/etc/php/conf.d \
            --with-libxml-dir=/usr/local/libxml2 --with-xsl=/usr/local/libxslt \
            --with-mysql --with-zlib --with-pdo-mysql --enable-calendar --with-gd \
            --with-iconv-dir --enable-mbstring --enable-soap --enable-sockets \
            --enable-zip --with-curl --with-openssl --with-kerberos --with-tidy

    Then, using apt-get, I had the MySQL server and phpMyAdmin installed. Unfortunately, phpMyAdmin keeps saying that 'mysqli' and 'mcrypt' are not installed; php -m | grep mysqli just confirms it. So I tried to install mysqli with "apt-get install php5-mysqli", but just got a message to do it by means of "php5-mysqlnd" or "php5-mysql" - even though they are already installed (according to phpinfo()). I tried anyway; it doesn't work. However, in php.ini there's mysqli stuff like "extension=php_mysqli.dll", but no "extension=mysqli.so", and a [MySQLi] block with some uncommented settings is also present. Since this is my first attempt to build PHP from source, I reckon I made some silly mistake. Any help is greatly appreciated.
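    The configure line itself explains it: --with-mysql builds only the legacy mysql extension, and neither mysqli nor mcrypt was requested, so neither got compiled. The extension=php_mysqli.dll lines in php.ini are Windows-only leftovers and do nothing on Linux. A sketch of the rebuild (the ellipsis stands for the original flags above; --with-mcrypt assumes the libmcrypt headers, e.g. the libmcrypt-dev package, are present):

        ./configure ... --with-mysqli=mysqlnd --with-pdo-mysql=mysqlnd --with-mcrypt
        make && make install
        php -m | grep -E 'mysqli|mcrypt'    # both should now be listed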

    Read the article

  • Exchange 2010 sends out spam.

    - by Magnus Gladh
    Hi. I have an Exchange Server 2010 that uses a smart host to send out mail. A day ago the owner of the smart host contacted us and told us that we are sending out spam. I have tried different open relay tests on the net, and all of them come back saying that this server is secured and cannot be used as a relay server. But I can see in my Exchange Queue Viewer that new messages keep coming in. Here is an example of how it looks:

        Identity: mailserver\3874\13128
        Subject: Olevererbart:: [email protected] Pfizer -75% now
        Internet Message ID: <[email protected]>
        From Address: <>
        Status: Ready
        Size (KB): 6
        Message Source Name: DSN
        Source IP: 255.255.255.255
        SCL: -1
        Date Received: 2010-12-09 21:46:22
        Expiration Time: 2010-12-11 21:46:22
        Last Error:
        Queue ID: mailserver\3874
        Recipients: [email protected]

    How can I secure our Exchange server more, to stop this from happening? Could I have got a virus that hooks up to our Exchange server and sends mail through it? As I can see, the From Address is always <>; is there some way I can stop sending mails that don't have a from address, like the ones I describe? Please help

    Read the article

  • diskmgmt.msc: Cannot delete volume from USB

    - by Notinlist
    I have a USB drive of about 8GB in size. It has a single partition of size 169MB; don't know why, I got it that way. I wanted to delete this small (FAT32) partition and create a single NTFS volume on it.

    First, I noticed that the "Delete volume" option is disabled (grayed out). I then tried "Change drive letter and paths..." and removed "F:"; that way I made sure that there are no open files on it. "Delete volume" was still disabled. Then I got suspicious and right-clicked on the "Unallocated" area, and noticed that I did not have any useful option there either: all "New * volume" items are disabled. I exited diskmgmt.msc, ran cmd.exe with administrator privileges, and ran diskmgmt.msc from it. Same experience.

    Why can't I do anything with this disk? I've read some advice about downloading alternative free software, but I'd rather not do that if possible. I still hope that Windows 7 Enterprise 64-bit alone can reinitialize a USB drive without external help. I also cannot do anything with my other 8GB pendrive: it's all one NTFS volume, and I tried to delete it, but the option is disabled there too. Maybe I have some setting somewhere that prevents me from partitioning USB disks. (I do have the freedom to remove my D: partition, which is the second one - not counting the "System reserved" partition - on my SSD disk.)

    Read the article

  • Is there a screen sharing/remote desktop app for mac that lets you use a different host screen resolution?

    - by MarqueIV
    Ok, there are tons and tons of questions about remote desktop for Mac, and they're all being closed as duplicates. I however am specifically looking for one that will let me use a different resolution than the host, the way you can with Remote Desktop for Windows.

    For instance, when I connect to my 11" MacBook Air booted into Windows 7 from my quad-screen desktop, also booted into Win7, using Microsoft's Remote Desktop Client, it blanks out the screen on the notebook, then virtualizes the video across all four of my desktop's monitors at their native resolutions (2560x1600, 2 x 1920x1200 and 1600x1200), and the notebook now acts as if it has four physical monitors connected to it. All of this from a notebook that only has a 1366x768 native resolution. Even when running OS X on the client running RDC, while it doesn't support multi-monitors like its Win counterpart, it still lets me run at the native resolution of the client screen of 2560x1600. Again, it just blanks out the host screen while doing so.

    However, when using Mac's Screen Sharing, since that is just glorified VNC, it just mirrors what's already on the host's screen, meaning it will always be a single screen with the resolution of 1366x768. This of course makes sense, since VNC is a mirroring solution, not a video-virtualizing one like RDC, but it means that on my quad-monitor setup the remote window isn't even large enough to fill up a single monitor, let alone four (unless you have a client that can scale it up, but that's video scaling - it's still only 1366x768).

    So what I'm looking for is whether there is a solution on the Mac that lets me do the same thing as RDC in a Win environment. Don't care if I have to pay; I'd gladly pay several hundred dollars for this. I just need that specific feature.

    Note: people have suggested various VNC clients, but the VNC host still runs at 1366x768, so that will not work here. Ever. Also, people have suggested Synergy/Synergy+/Teleport and such, which share the keyboard and mouse, not video. Completely different animal, unrelated to what I'm looking for.

    Read the article

  • Does IIS Sometimes Allocate More Worker Processes Than Configured?

    - by Paul Williams
    We have an IIS 7.5 web service on Windows Server 2008 that handles WCF requests from C# clients. This service is configured to have Maximum Worker Processes = 1, so it is not a web garden. IIS is set up to recycle itself at the same time every day (3 AM).

    I am trying to debug gnarly connection issues, so I wanted to be sure the application pool was not recycling itself. I configured the pool to log an event when it recycles itself. To my surprise, I see the following entries in the System event log:

        Level: Information
        Date/Time: 3/23/2012 3:00:00 AM - Source: WAS - Event ID: 5076
        A worker process with process id of '6636' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

        Level: Information
        Date/Time: 3/23/2012 2:59:39 AM - Source: WAS - Event ID: 5076
        A worker process with process id of '9364' serving application pool 'MyAppPool' has requested a recycle because it reached its scheduled recycle time.

    IIS is correctly recycling the application pool at 3 AM. However, I do not understand why I would be getting two recycle events in the log within a few seconds of each other. The maximum number of processes is 1. Does IIS sometimes allocate multiple processes for an application pool that is specified as having 1 process?

    -- edit --
    I connected at about 4 PM today and only saw 1 w3wp.exe process. There are no other event log entries that would indicate a crash.

    Read the article

  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online. I use SSH a lot, and on a variety of *nix servers. However, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script and has command line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go?

    My first thought was to assign the source to an environment variable like BAR = "cat -v" and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled.

    I guess I could even do with an ssh option to write a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas?

    P.S. PuTTY-specific solutions, though I doubt any would exist, are also fine.
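    One stock OpenSSH trick that needs nothing installed server-side is feeding the local script to a remote shell on stdin; a sketch, assuming the local copy lives at ~/bin/bar:

        # run the local copy of bar remotely without installing it:
        ssh user@host 'sh -s' < ~/bin/bar

    This won't leave a function behind for an interactive session, though. For that, a semi-volatile compromise is dropping it into the remote /tmp just for the session:

        scp ~/bin/bar user@host:/tmp/bar && ssh -t user@host 'PATH=/tmp:$PATH exec bash -l'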

    Read the article

  • Exchange 2010 issuing NDRs to Hotmail/Live & few other domains on receipt of message

    - by John Patrick Dandison
    I'm working through a beast of an issue at the moment:

    - Exchange 2010, single server, on-prem
    - Hybrid deployment to Office 365
    - ESMTP filtering turned off on the ASA
    - Certain domains (most consistently, Hotmail/Live) cannot send us mail

    At one point we couldn't send out either, but I created a new Send Connector that forces HELO instead of EHLO. I turned on SMTP logging; an example of a failed inbound message connection is below. I've read that it could be that reverse DNS is the problem, i.e., the Exchange banner SMTP address needs to reverse-DNS back to the same IP. Since it's the default Exchange connector, its banner is the server's name, but the DNS name of the MX record is different. I'm waiting for the PTR records to update to reflect the internal name as well. Is that the right direction? Is this all DNS or something different?

    SMTP session log (single failed session for illustration):

        SMTPSubmit SMTPAcceptAnySender SMTPAcceptAuthoritativeDomainSender AcceptRoutingHeaders
        220 ExchangeServerName.internalSubDomain.example.com Microsoft ESMTP MAIL Service ready at Mon, 15 Oct 2012 09:57:24 -0400
        EHLO col0-omc3-s4.col0.hotmail.com
        250-ExchangeServerName.internalSubDomain.example.com Hello [65.55.34.142]
        250-SIZE
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-STARTTLS
        250-X-ANONYMOUSTLS
        250-AUTH NTLM LOGIN
        250-X-EXPS GSSAPI NTLM
        250-8BITMIME
        250-BINARYMIME
        250-CHUNKING
        250-XEXCH50
        250-XRDST
        250 XSHADOW
        MAIL FROM:<[email protected]> 08CF5268DABBD9AA;2012-10-15T13:57:24.564Z;1
        250 2.1.0 Sender OK
        RCPT TO:<[email protected]>
        250 2.1.5 Recipient OK
        XXXX 1282 LAST
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        XXXXXXXXX from COL002-W38 ([65.55.34.135]) by col0-omc3-s4.col0.hotmail.com with Microsoft SMTPSVC(6.0.3790.4675);
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        " XXXX 15 Oct 2012 06:57:24 -0700"
        Tarpit for '0.00:00:05'
        500 5.3.3 Unrecognized command
        XXXXXXXXXXX <[email protected]>
        Tarpit for '0.00:00:05'

    Read the article

  • Is a Hyperthreaded CPU more powerful and more efficient than a Dual-core CPU? [closed]

    - by user1811864
    Hello. They are getting rid of the old computer equipment in the office, and I have to choose a computer to take home; I get first pick. The choices:

    - 15-inch LCD screen, 4 GB of RAM, Core 2 Duo E8400 dual-core at 3.00 GHz, DVD writer, Windows Vista/Linux
    - 15-inch CRT monitor, 2 GB of RAM, Pentium 4 single-core at 2 GHz with HT technology, Windows XP

    Both hard disks are 250 GB.

    My friend is telling me to choose the second one, the Pentium single core with HT, because he told me it runs faster because of HT technology, runs cooler, and consumes less electricity so it won't get overheated; because it has HT technology, he says it's "high definition" for encoding and watching HD movies and HD sound, and is like a gaming PC for playing internet games. He also said the dual-core E8400 runs at 3 GHz compared to the 2 GHz, so it heats up very much because of the two extra cores, takes more current (raising electricity bills), and is not good for gaming and watching HD movies and internet flash animations and games, because it gets heated every time. And he wants to choose and take the E8400 himself, because he has air conditioning at home, so it will be safe from heating.

    So which computer should I take? Is it really faster because of the HT technology? Will I be able to play internet flash card games better, watch good HD movies (YouTube etc.), and play all my music and songs?

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem:

        Source control operation failed: svn: OPTIONS of 'https://trunkURL':
        Server certificate verification failed: issuer is not trusted

    So I tried the following solution: run the CC.NET service (server running as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the cert permanently from a command prompt under that user by using svn log/list on the repo. Doesn't help :(. I am getting the following in my artifact/log files (or the dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed:
        issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL
            -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}"
            --verbose --xml --no-auth-cache --non-interactive
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
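    Two things usually bite here. Subversion caches certificate acceptance per user profile (under %APPDATA%\Subversion\auth on Windows), so accepting the certificate in your own session does nothing for the account the CC.NET service actually runs as; the acceptance has to happen while running as that exact account. And on svn 1.6+ there is a switch for precisely this unattended case. A sketch using the URL from the question:

        # as the service account: accept once, answering 'p' (permanently) at the prompt
        svn list https://TrunkURL

        # or let non-interactive runs trust the certificate outright (svn 1.6+):
        svn log https://TrunkURL --non-interactive --trust-server-cert

    Longer term, pointing VisualSVN Server at a certificate from an issuer the build machine already trusts makes the whole problem disappear.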

    Read the article
