Search Results

Search found 11226 results on 450 pages for 'reverse thinking'.

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage, either NFS or iSCSI, which will be determined by how well it performs when we test it (we need good throughput, and it must continue to work through link and switch failure tests). We may add another couple of Xen servers at some point when this is done. I don't know much about the design and operation of storage networks, so I would really appreciate some hints from those with experience.

    The budget is around $3,800 excluding the storage appliance. I am currently thinking these are my options to remain on budget:

    1) Go for used InfiniBand hardware and aim for 10Gb performance.
    2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN. Upgrade to 10GbE later, but try to use hardware capable of it where possible to reduce upgrade costs.

    I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10Gb Ethernet?), and the promise of cheap 10Gb is attractive. I know nothing about IB, so here come the questions: Can I buy 2 switches and put multiple HBAs in my Xen and storage nodes to get redundancy and increased performance, without complexity or expensive management software costs? If so, can you point me to some examples? Do NFS and iSCSI work just the same regardless? Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget? I'm keen not to get boxed into a corner for future upgrades, however.

    For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks and SSDs, and add another server to provide a failover option at the storage level. An HP LeftHand starter SAN is also under consideration. Thanks in advance.
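
    For the gigabit Ethernet option, link redundancy on the Linux/Xen side is normally done with NIC bonding rather than anything switch-specific; a minimal sketch for a Debian/Ubuntu-style /etc/network/interfaces (interface names and addresses are assumptions):

        # Active-backup bond across two switches (requires the ifenslave package)
        auto bond0
        iface bond0 inet static
            address 10.0.0.11
            netmask 255.255.255.0
            bond-slaves eth1 eth2
            bond-mode active-backup   # survives a single link or switch failure
            bond-miimon 100           # link-state polling interval in ms

    With each host's two storage NICs cabled to different switches, either NFS or iSCSI traffic should keep flowing through the failure tests described above.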

  • Suggestions for Windows 8 migration [closed]

    - by Big Endian
    I'm thinking of migrating to Windows 8. At first I hated it, but I'm pretty sure the Windows 8 model is the future, and I don't particularly want to end up hating the future like my parents, frustrated and bewildered by anything past Windows XP.

    I'm currently running Windows 7, and my system has been accumulating some problems. It's probably an accumulation of issues from installing too much software, changing firewall settings, installing Ubuntu alongside Windows, and... well, I'm not sure, but my computer has been buggy in unexpected ways lately (freezing and unfreezing, the display driver crashing and recovering, and what I call a "deep freeze/thaw cycle" where the mouse won't even move for a while). I'm good at solving computer problems, but I can't seem to get to the root of these, and my best idea for fixing them is making sure I've backed up every file and then re-installing the entire OS. Luckily for me, a new OS is just around the corner, so this would be a good time to get two things out of the way at once.

    The problem I see is that the upgrade options I see are all "seamless". I don't want a seamless upgrade. I want to wipe the slate clean and start all over. Does this mean I will have to buy a full, new copy of Windows 8 rather than one of the cheaper upgrade options? Or does it not make sense for me to go to Windows 8, given that I have a laptop, not a tablet? Maybe I should just re-install Windows 7, or even call "good enough" good enough, try to eliminate the bugs, and start with a fresh slate in 2-3 years after this computer eventually dies entirely from (inevitable) hardware failure.

    What would be the advantages, disadvantages, and costs of each option? How would I go about upgrading to Windows 8 if that's the option I choose? And what is your personal opinion about my situation?

  • WSUS trying to download all updates again

    - by Tim Alexander
    The server hosting WSUS had a catastrophic failure, and we have had to rebuild the system drives. Luckily the DB and content store for WSUS are on a separate drive, so they were unaffected. During the rebuild process we thought it was time to update the server to 2008 R2 (from 2003 R2).

    I got the server running and installed the WSUS role, detached the DB from SQL Express 2008 R2, and attached the original. I then ran the wsusutil.exe movecontent command with the -skipcopy switch, pointing at the original content store. All looked good until I saw the front page stating that it is trying to download files for 6,436 updates, at around 344,565 MB! Oops, I thought, something is not right here. The content store I have on disk is only 75 GB, so I am thinking some vital step has been missed in the restoration process. Either way, is there a way to make WSUS re-index its local content store, or something similar? I am unsure that downloading 344 gigabytes is a viable way forward!

    EDIT: It never rains but it pours. I am now getting a "CLSID: FX {8b6499ed-0241-e032-6508-da4b1c879d7e}" error - could not create snap-in. I think a reinstall of WSUS is in order.
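
    For what it's worth, the command I am looking at for the re-index step is wsusutil reset, which - as I understand the WSUS docs, so treat the details as an assumption - verifies every update against the local content store and downloads only files that are actually missing:

        REM Default WSUS 3.x tools path; adjust if installed elsewhere
        cd "C:\Program Files\Update Services\Tools"
        wsusutil.exe reset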

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:

    1.) Push a changeset to a site in IIS.
    2.) Not interrupt the users.
    3.) Be able to roll back effortlessly.

    There are a few things that I know have to happen, and which are already handled:

    1.) Out-of-proc session - handled
    2.) Out-of-proc cache - handled

    So the questions that remain:

    1.) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online.
    2.) How do I roll back effortlessly?

    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
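
    A rough sketch of the swap mechanics using appcmd (the site name and release paths are placeholders): instead of literally swapping two sites, repoint the live site's root at the already-warmed release folder, which makes rollback the same one-line operation:

        REM Cut over to the new, warmed-up release
        %windir%\system32\inetsrv\appcmd set vdir /vdir.name:"MySite/" /physicalPath:"D:\releases\v42"

        REM Roll back by pointing at the previous release folder
        %windir%\system32\inetsrv\appcmd set vdir /vdir.name:"MySite/" /physicalPath:"D:\releases\v41"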

  • Selective pointer device remapping in Linux

    - by user6368
    I just got an HP 2710p (HP tablet, with digitizer), and I've played around with Linux for a while now, and thought I would go ahead and install it. Everything works fine, excepting normal tablet functions, which is to be expected. I'm working on the screen rotation, and there are on-screen keyboards, etc., but I'm having issues with the stylus.

    I can tap and left-click with the stylus as normal, but the side button (which in Windows functions as a right mouse button) appears as 'button 2' to xev (a middle/scroll-wheel button). I can switch 'button 2' and 'button 3' universally using xmodmap, but I'd like to do so exclusively for the stylus so I don't screw up regular pointing devices. Altering xorg.conf (which is surprisingly bare) with the recommended sections (adding sections for each of the stylus buttons) does nothing.

    I'm running CrunchBang, which is an Ubuntu/Debian variant with Openbox as the window manager. Thanks.

    Also, as a separate note, does anybody know how to detect when I rotate and/or latch the lid shut? I was thinking maybe I could run a script to switch the buttons when I close it, but I can't find any information.
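
    One per-device route that avoids xmodmap's global remap is xinput; a sketch (the device name below is an assumption - take the real one from the first command's output):

        # Find the stylus entry, then swap buttons 2 and 3 for that device only
        xinput list
        xinput set-button-map "Wacom ISDv4 Tablet stylus" 1 3 2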

  • Printing from a parallel port over an Ethernet cable

    - by Crudler
    I have a bit of an interesting challenge :) I have a machine with a parallel printer output, and I want it to be able to print instead to a printer in a different room, and I know that parallel isn't great over big distances. I found this: http://www.amazon.com/over-Cat5-Extension-Cable-Adapter/dp/B002WJ9S6Y%3FSubscriptionId%3DAKIAINHICTCYYZGJWT4Q%26tag%3Dusbprintercables.net-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB002WJ9S6Y which will let me connect over Cat5, but it's USB to Cat5. My machine can only output on parallel (it's not a computer), so what I was thinking of getting is a parallel (F) to USB adapter and a USB to parallel (M) adapter for either side, i.e.:

        machine - parallel - USB - Cat5 - USB - parallel - printer

    It just seems a bit messy :) Suggestions?

    Another thing I would like to try is to get rid of the old-school parallel printer and instead use a network-based multifunction. Would this be possible? i.e.:

        machine - parallel - USB - Cat5 - Ethernet print server - network printer

    This might be rougher, because the machine cannot "know" that we are using a network printer. It can ONLY print to LPT1. Thanks!

  • How to serve Rails application with Passenger/Apache without domain name?

    - by grifaton
    I am trying to serve a Rails application using Passenger and Apache on an Ubuntu server. The Passenger installation instructions say I should add the following to my Apache configuration file - I assume this is /etc/apache2/httpd.conf:

        <VirtualHost *:80>
            ServerName www.yourhost.com
            DocumentRoot /somewhere/public   # <-- be sure to point to 'public'!
            <Directory /somewhere/public>
                AllowOverride all            # <-- relax Apache security settings
                Options -MultiViews          # <-- MultiViews must be turned off
            </Directory>
        </VirtualHost>

    However, I do not yet have a domain pointing at my server, so I'm not sure what I should put for the ServerName parameter. I have tried the IP address, but when I do that, restarting Apache gives:

        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:26 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Sun Jan 17 12:49:36 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results

    and pointing the browser at the IP address gives a 500 Internal Server Error. The closest I have got to something sensible is with:

        <VirtualHost efate:80>
            ServerName efate
            DocumentRoot /root/jpf/public
            <Directory /root/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    where "efate" is my server's host name. But now pointing my browser at the server's IP address just gives a page saying "It works!" - presumably this is a default page, but I'm not sure where this is being served from.

    I might be wrong in thinking that the reason I have been unable to get this to work is related to not having a domain name. This is the first time I have used Apache directly - any help would be most gratefully received!
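
    For comparison, a sketch of a domain-less setup using Ubuntu's stock Apache layout (the IP is a placeholder, and disabling the default "It works!" site is an assumption about where that page comes from):

        # /etc/apache2/sites-available/myapp
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName 203.0.113.10          # the server's public IP
            DocumentRoot /home/deploy/jpf/public
            <Directory /home/deploy/jpf/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

        # Enable it and drop the default site that serves "It works!":
        #   a2dissite 000-default && a2ensite myapp && apache2ctl graceful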

  • Hard drive benchmark values show very slow writes

    - by John
    I recently started to have issues with my laptop being very slow. I ran a hard-drive benchmarking tool (by ATTO) that showed that the write speed was very, very slow on my boot drive. I ran the same benchmark on my USB drive, and its write speed was 650 times faster than my boot drive's. Reading is very fast/normal on both. I swapped in an identical drive and ran the same benchmark; this time the drive showed proper write speed. Thinking that I had a hard drive going bad, I cloned the old one onto the new one - and managed to clone the problem too.

    Anyone have any ideas on what in WinXP SP3 might be causing the write issues? I am on a corporate network, and we have commercial anti-virus software installed (AVG, I think). I regularly run Defraggler and have about 40 GB free on a 100 GB drive. The machine has 4 GB of memory. Any ideas? TIA, J

  • Who has files open on a Linux server

    - by Robert
    I have the fairly common task of finding out who has files open on our Linux (Ubuntu) file server in our Windows environment. We use Samba on the network, and I use PuTTY from my workstation to establish a shell window to run bash scripts. I have been using something like this to find what files are open (it returns a list of process IDs, one per open file):

        Robert:$ sudo lsof | grep "/srv/office/some/folder"

    Then I follow up with something like this to show who owns the process (it returns the name of the machine on the network, via the IPv4 protocol, that owns the process):

        Robert:$ sudo lsof -p 27295 | grep "IPv4"

    Now I know the Windows client who has a file open and can take action from there. As you can tell, this is not difficult, but it is time-consuming. I would prefer to have a Windows application I can run that would just give me what I want. So I have been thinking about creating some process I can run on Linux that listens on a port and then returns a clean list of all open files with the IP address of the host that has each file open, plus a small Windows client application that can send the request on the port. It seems like this should be a very common need, but I cannot find anything like this that has been done before. Any suggestions?
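
    The two manual steps above collapse into one loop, which would also be the core of the listener; a sketch (the path is the example from above; +D makes lsof descend the directory tree):

        #!/bin/bash
        # For each PID holding a file open under the share, print its IPv4 peer
        for pid in $(sudo lsof -t +D /srv/office/some/folder); do
            sudo lsof -a -n -p "$pid" -i4 | awk 'NR>1 {print $9}'
        done | sort -u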

  • How seriously should I take ECC correctable error warnings?

    - by David Mackintosh
    I have a pile of Sun X2200-M2 servers. These servers have ECC memory. In some of these servers I am getting warnings in the eLOM about "correctable ECC errors detected", e.g.:

        # ssh regress11 ipmitool sel elist
        1 | 05/20/2010 | 14:20:27 | Memory CPU0 DIMM2 | Correctable ECC | Asserted
        2 | 05/20/2010 | 14:33:47 | Memory CPU0 DIMM2 | Correctable ECC | Asserted

    ...some more frequently than others. The kernel on this particular system is throwing EDAC errors as well, although with far more frequency than the eLOM is recording ECC events:

        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x42a194, offset 0x60, grain 8, syndrome 0xf654, row 4, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error
        EDAC k8 MC0: general bus error: participating processor(local node response), time-out(no timeout) memory transaction type(generic read), mem or i/o(mem access), cache level(generic)
        MC0: CE page 0x48cb94, offset 0x10, grain 8, syndrome 0xf654, row 5, channel 1, label "": k8_edac
        MC0: CE - no information available: k8_edac Error Overflow set
        EDAC k8 MC0: extended error code: ECC chipkill x4 error

    Now, if the server detects an uncorrectable ECC error, the system resets, so clearly that's bad, and removing/replacing the identified stick or pair corrects the issue. But I am thinking that if the error is correctable, then there's no immediate issue - I can treat this as a warning and be prepared to pull the stick/pair if an uncorrectable error starts occurring?
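
    To keep it a warning rather than a surprise, a cron-able sketch that watches the rate of correctable events (the threshold and mail address are arbitrary assumptions):

        #!/bin/bash
        # Alert when the SEL accumulates too many correctable ECC events
        THRESHOLD=50
        count=$(ipmitool sel elist | grep -c "Correctable ECC")
        if [ "$count" -gt "$THRESHOLD" ]; then
            echo "$count correctable ECC events" | mail -s "ECC warning: $(hostname)" admin@example.com
        fi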

  • Ruby Passenger + Nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up Passenger + nginx, and I plan to offer free non-commercial hosting (or, in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now, this is not inherently bad, as in theory I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in nginx, e.g. for user.domain.com. As this will mainly be used to deploy apps, the behavior I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger.

    As such, I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance.

    Scenario details:
    - On nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server.
    - Wished-for behavior with the new setup: a user can add a new folder with a new app, and it will run on lighttpd + FastCGI without any extra configuration of the web server.
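
    One workaround I am considering for the Passenger side, sketched below (paths are assumptions): regenerate an included config fragment from the directory listing and reload nginx, so "deployment" is just creating a folder plus one cron tick:

        #!/bin/bash
        # Rebuild the passenger_base_uri list from the apps directory
        APPS_DIR=/srv/apps
        OUT=/etc/nginx/passenger_apps.conf    # referenced via 'include' in the server block
        for app in "$APPS_DIR"/*/; do
            echo "passenger_base_uri /$(basename "$app");"
        done > "$OUT"
        nginx -s reload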

  • Samba "username map" stopped working after upgrade to 3.6

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going, as usual, with CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with Samba 3.6, where the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow, and the Samba accounts with pdbedit. Homes were on an NFS drive. The translations of unix accounts to Samba accounts are located in /etc/samba/smbusers.

    Strangely enough, on some Windows clients there was a problem connecting to the Samba shares. In one case the only thing that worked was to give the unix account name instead of the Windows name. In another, it was possible to mount the network drive and open it in Windows Explorer, but other applications like Total Commander, on attempting to open this drive, gave the message "Cannot connect to z:" (sometimes at this moment user/pass were requested).

    The smb.conf has the following entries:

        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...

        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775

    The smbusers:

        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris

    Of course testparm runs without any errors. With Samba 3.5 I was used to log outputs of the form "Mapped user kris to krisr". Nothing like this happens now - just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some guys had problems with 3.6 and security = ADS, but those reports were not helpful for me. I'm seriously thinking about downgrading back to Samba 3.5, but before that step I wanted to ask if somebody knows the solution to these problems.
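
    For anyone reproducing this, the mapping can be tested locally without a Windows client; a sketch (run on the server itself, password prompted interactively):

        # Should succeed by mapping the SMB name 'Kris' to the unix user 'krisr'
        smbclient //localhost/Kris -U Kris
        # Watch the mapping decision while connecting (with log level raised in smb.conf)
        tail -f /var/log/samba/log.smbd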

  • Home media storage solution

    - by Dan
    I record lots of personal HD film footage and am looking for a cheap way to store all of this. I shoot ~120 GB of footage each month, so something expandable would be nice... something that might be able to hold 6+ SATA drives. There is a low load requirement, as there is never more than a user or two, but it should be able to keep up with streaming 2 simultaneous HD videos.

    I don't really want to spend more than $200-$300 on top of the $900 I am thinking of spending for 6 x 2 TB SATA drives at $150 apiece, but I am willing to pay extra for a quality solution. Should I get a cheap NAS server? A cheap multi-drive external enclosure? Should I just get some used systems off Craigslist? If it is an independent system, I'll probably just throw Ubuntu on it, since I can maintain that well. It's easy to do a software RAID from Ubuntu too, if I choose to go that way. Thanks.
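
    If the Ubuntu software-RAID route wins, the core of it is a few commands; a sketch assuming the six drives show up as /dev/sdb through /dev/sdg:

        # RAID 5 over six 2 TB drives: ~10 TB usable, survives one drive failure
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
        sudo mkfs.ext4 /dev/md0
        sudo mount /dev/md0 /srv/media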

  • Looking for recommendations on OCR problem - tabular numeric data

    - by ldigas
    I have 20 pages of experimental measurement data which I need to digitize. The results are in tabular form, scanned at 600 dpi resolution, and as far as scans go, they came out pretty clean and readable. For an example of how it looks, see here (but beware: it is a rather big scan, about 5 MB; no problem for any broadband connection, but dial-ups should approach with caution!).

    ...and I need it finished by Sunday afternoon (:-o) <-- smiley in a state of panic. (Then why didn't you start sooner?) Yea, yeah... I know... but it came up late, and I wasn't expecting to need this data as well.

    So, I'm looking for recommendations. I haven't much experience with OCR programs, save scanning a page or two of pure text, and I have no wish to test out every OCR program out there, so this isn't a "name your OCR favourite" question. What I'm looking for is advice from someone who's done something like this, and his or her experience of the best way to undertake it. I need the data in txt form, but since it will have to be checked (by plotting it and simply watching whether some points "jump out"), I'll probably be entering it into Excel at first.

  • exFAT to NTFS formatting troubles

    - by user1083734
    I recently ran a chkdsk on a 2.5" 230 GB SATA HDD, but the plug was pulled before the end of the chkdsk, and since then it wouldn't boot up. Deciding to scrap all data on the HDD (I no longer needed it), I fitted it into an external HDD caddy and (in diskpart) cleaned the disk and created a new partition and volume, then tried to format it to NTFS. It couldn't do this with either long or short formats, so I went with the less-appreciated alternative, exFAT (I run Win7). It quick-formats to exFAT fine but encounters errors during a long format. At the moment it is exFAT. Of course I would really like it to be NTFS, as I will probably need to use it on Win XP too.

    Could anyone suggest a method of trying to reformat to NTFS? Do you think that when chkdsk was interrupted the first time, the disk was corrupted and is irretrievable? I find this situation slightly odd, as it HAS formatted to exFAT and DOES seem to work when I copy files across! Also, I CAN use the disk management console to create several partitions: e.g. a 50 GB partition and then a large 180 GB partition. The 50 GB one WILL long-format to NTFS, but the 180 GB one will not! I'm thinking hardware fault, but then I notice that it WILL format to exFAT! Much confusion!
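
    A sketch of what an elevated command prompt could try next (the drive letter F: is an assumption): surface-scan the volume so bad sectors get mapped out, then attempt the full NTFS format again:

        REM Surface scan; /r locates bad sectors and recovers readable data
        chkdsk F: /r

        REM Full (non-quick) NTFS format of the whole volume
        format F: /FS:NTFS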

  • Is encryption really needed for having network security? [closed]

    - by Cawas
    I welcome better key-wording here, both on tags and title. I'm trying to conceive of a free, open, and secure network environment that would work anywhere, from big enterprises to small home networks of just one machine. Since wireless access points are the biggest, if not the only, true weak point of a Local Area Network (let's not consider every other security aspect of having internet), I think there are basically two points to consider here:

    1. Having an open AP for anyone to use the internet through.
    2. Leaving the whole LAN also open for guests to be able to easily read (only) files on it, and even giving them a place to drop files on.

    Considering these two aspects, once everything is done properly... what is the most secure option: having that, or having just an encrypted, password-protected wifi? Of course "both" would seem more secure, but it shouldn't actually add anything substantial.

    I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering. I understand it's a natural way of thinking: with cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what building walls is for cables, and requiring pass-phrases would be adding a door with a key. So, what do you think?

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The thing I'm trying to cost-optimize involves their reserved vs. on-demand instances.

    Situation 1: Purchase a reserved multi-AZ setup with an extra-large high-memory (17 GB RAM) instance for $5,200/yr and have my application query the master all the time. The problem is, I don't need all the resources of the 17 GB of RAM all the time, and therefore especially not a hot standby of that size. What savings could a better topology create - for example, situation 2 below?

    Situation 2: Purchase a reserved multi-AZ setup using smaller master instances than above, with the master-master hot standby receiving the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1,000 plus the on-demand usage of the read slaves.

    My thinking is: if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology where I don't need them. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and/or out at the read level according to demand as needed. Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice are always welcome, of course. Thanks in advance, R

  • Using UDF on a USB flash drive

    - by CesarB
    After failing to copy a file bigger than 4 GB to my 8 GB USB flash drive, I formatted it as ext3. While this is working fine for me so far, it will cause problems if I want to use it to copy files to someone who does not use Linux. I am thinking of formatting it as UDF instead, which I hope would allow it to be read (and possibly even written) on the three most popular operating systems (Windows, Mac OS, and Linux) without having to install any extra drivers. However, from what I have found on the web already, there seem to be several small gotchas related to which parameters are used to create the filesystem, which can reduce the compatibility (but most of the pages I found are about optical media, not USB flash drives). I would like to know:

    1. Which utility should I use to create the filesystem? (So far I have found mkudffs and genisoimage, and mkudffs seems the best option.)
    2. Which parameters should I use with the chosen utility for maximum compatibility?
    3. How compatible with the most common versions of these three operating systems is UDF, actually?
    4. Is using UDF actually the best idea? Is there another filesystem which would have better compatibility, with no problematic restrictions like the FAT32 4 GB file-size limit, and without having to install special drivers on every single computer which touches it?
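
    For reference, the mkudffs recipe I have pieced together so far, as a sketch (the device name /dev/sdX is a placeholder, and this wipes the drive): clear the first blocks so no stale partition table confuses anything, then build UDF over the whole device with 512-byte blocks to match the drive's sector size:

        # DANGER: destroys all data on /dev/sdX
        sudo dd if=/dev/zero of=/dev/sdX bs=512 count=4096
        sudo mkudffs --media-type=hd --blocksize=512 /dev/sdX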

  • Need info on the hidden SET switch "/S" - how to implement it

    - by ttyl
    I am having some problems doing a proper search for "SET/S" or "SET /S" on Google and other search providers. The difficulty arises with the slash "/": it is commonly used in search engines to add "nearness" to the search parameter, and I have found no way to escape the slash when searching for one. For those on this community, try searching this domain with the two search terms listed above - it just doesn't work; it ends up looking for "SET S" instead. But I digress.

    So I'm asking the uber-gurus on this board to help me find the documentation for /S and how to implement SET /S in a batch file. SET is an internal DOS/cmd command and allows many things, including prompting the user, integer math, and writing environment strings. Looking at this link: http://www.robvanderwoude.com/os2set.php it appears that /S is only for OS/2, but I'm thinking that might not be the case, due to this: http://www.dostips.com/forum/viewtopic.php?t=2704 - apparently it is used with substrings and macros. Any help is much appreciated.

  • Can't connect to research.microsoft.com on home Qwest DSL connection

    - by rakingleaves
    I have a puzzling issue regarding accessing research.microsoft.com from my home Qwest DSL connection. By default, I frequently get timeouts when accessing research.microsoft.com from Firefox, Safari, or Chrome on my Mac. I also cannot access the site from Internet Explorer in a Windows VM. However, I am able to access the site through proxify.com, so I know the site is not down. Furthermore, I haven't noticed problems accessing other sites (in particular, www.microsoft.com works fine). Also, I can access research.microsoft.com when I'm connected to networks other than my home Qwest DSL connection.

    Together, the above make me suspect a problem with either my router (AirPort Express) or, more likely, my ISP. Anyone have any thoughts on how I can narrow down the problem further? I could call my ISP and tell them the above, but my feeling is that probably won't get me very far. I can get by browsing research.microsoft.com through a proxy, but it would be nice to figure out what's going on here and fix the problem. Oh, the only relevant discussion I found via Google was here: http://forums.whirlpool.net.au/forum-replies-archive.cfm/1311734.html

    Update: Thanks to those who have tried to help! I found one other thing while Googling that may be vaguely relevant: http://thedaneshproject.com/posts/supportmicrosoftcom-not-working-behind-squid/ - disabling the Accept-Encoding headers in Firefox actually didn't make a difference for me. I just thought the above might spark some other ideas about how mishandling of HTTP headers somewhere might be causing this problem. Thanks again!

    Another update: In case anyone is still thinking about this: I've found that I can't surf research.microsoft.com using the links text-based browser, but I can reliably download individual files with wget. Maybe that helps?
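
    One classic cause on DSL/PPPoE lines that fits "some sites hang, but proxies work" is a path-MTU black hole; a sketch of the test (Linux ping flags shown; on Mac OS X the don't-fragment flag is -D):

        # 1472 data bytes + 28 bytes of headers = a full 1500-byte packet
        ping -M do -s 1472 research.microsoft.com
        # If that blackholes, shrink -s until replies appear; the largest
        # working value + 28 is the real path MTU to set on the router.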

  • Monitoring outgoing messages using EXIM

    - by dashmug
    I work as an IT guy in a law firm. I have recently been asked to build a system wherein all the outgoing emails coming from our server to our clients will be put on hold first and wait for approval before they get sent to the client. Our mail server uses Exim (that's what it says in cPanel). I am planning to create filters where the outgoing emails will be forwarded to an editor account. The editor will then review and edit the contents of the email. When the editor approves the email, it will be sent to the client by the editor, but still using the original sender in the "From:" and "Reply-To:" fields. I found some pointers at http://www.devco.net/archives/2006/03/24/saving_copies_of_all_email_using_exim.php.

    Once the filters are in place, I want to make a simple PHP interface for the editor to check the forwarded emails and edit them if necessary. The editor can then click on an "Approve" button that will finally deliver the message using the original sender. I'm also thinking that maybe a PHP-less system will be enough: the editor can receive the emails in his own email client, edit them, and simply send the email as if he were the original sender.

    Is my plan feasible? Are there issues that I have overlooked? Does it have the danger of being treated as spam by the other mail servers, since I'll be messing with the headers?
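
    A sketch of the hold step as an Exim system filter (the header name and domain are assumptions, and this is untested): freeze matching messages on the queue, then have the approval step thaw or remove them with the standard queue commands:

        # Exim filter -- hold outgoing client mail for review
        if first_delivery
           and $header_X-Approved: is ""
           and $sender_address_domain is "ourfirm.com"
        then
            freeze
        endif

    Queue handling from the shell would then be roughly: exim -bp to list the queue (frozen messages are marked), exim -Mvb <id> to view a body, exim -M <id> to approve and force delivery, and exim -Mrm <id> to reject.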

  • Shared firewall or multiple client specific firewalls?

    - by Tauren
    I'm trying to determine whether I can use a single firewall for my entire network, including customer servers, or whether each customer should have their own firewall. I've found that many hosting companies require each client with a cluster of servers to have their own firewall: if you need a web node and a database node, you also have to get a firewall and pay another monthly fee for it.

    I have colo space with several KVM virtualization servers hosting VPS services for many different customers. Each KVM host is running a software iptables firewall that only allows specific ports to be accessed on each VPS. I can control which ports any given VPS has open, allowing a web VPS to be accessed from anywhere on ports 80 and 443, while blocking a database VPS completely from the outside and only allowing a certain other VPS to access it. The configuration works well for my current needs. Note that there is no hardware firewall protecting the virtualization hosts in place at this time. However, the KVM hosts only have port 22 open, are running nothing except KVM and SSH, and even port 22 cannot be accessed except from inside the netblock.

    I'm looking at possibly rethinking my network now that I have a client who needs to transition from a single VPS onto two dedicated servers (one web and one DB). A different customer already has a single dedicated server that is not behind any firewall except iptables running on the system. Should I require that each dedicated-server customer have their own dedicated firewall? Or can I utilize a single network-wide firewall for multiple customer clusters? I'm familiar with iptables, and am currently thinking I'll use it for any firewalls/routers that I need. But I don't necessarily want to use up 1U of rack space for each firewall, nor pay for the power each firewall server will consume, so I'm considering a hardware firewall. Any suggestions on what would be a good approach?
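
    On a shared iptables box, per-customer policy is just per-address rules on the FORWARD chain; a sketch for one customer's web/DB pair (addresses and the MySQL port are placeholders):

        #!/bin/bash
        WEB=10.0.5.10; DB=10.0.5.11
        # Web node: public on 80/443 only
        iptables -A FORWARD -d "$WEB" -p tcp -m multiport --dports 80,443 -j ACCEPT
        # DB node: reachable only from the customer's own web node
        iptables -A FORWARD -s "$WEB" -d "$DB" -p tcp --dport 3306 -j ACCEPT
        iptables -A FORWARD -d "$DB" -j DROP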

  • Network bridge does not work on Windows 7

    - by D. Strout
    I am trying to use Windows 7 to bridge my wireless and wired connections together to get wireless on a second Windows XP computer (fresh install). The network bridge is created successfully, but when I connect the cable from the first computer (with the bridged connections) to the second one (with just a wired connection), nothing happens: the second computer doesn't connect, and the first computer shows no sign of anything different.

    I tried bridging the connections of a third computer and connecting this computer to the second computer. This worked (the third computer bridged the wireless to the second computer). Thus, the problem must be with the first (Win 7) computer. However, I have no idea what the problem could be. All Internet Connection Sharing is turned off. HomeGroup is disabled (it was originally enabled; I thought that might be a problem, so I disabled it). Also, I had VMware Fusion installed, and that created extra items in the "This connection uses the following items" box in the properties dialog. Thinking this might be causing issues, I uninstalled that too. Still, with everything I tried, I can't get it to work. Any suggestions?

    Edit: Another thing I noticed that might be worth mentioning: the network icon in the Win7 taskbar has a red X on it that means it's not connected, but when I click the icon, it says "connected" next to my wireless connection, and I am able to access the internet.

  • `sh` access denied over ssh connection

    - by inspectorG4dget
    I have an Ubuntu server and a Windows XP client running Cygwin. The server ssh's into the client and tries to execute a shell script with some params, with the following command (where IP_ADDR is the IP address of the client):

        ssh user@IP_ADDR 'sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR'

    However, while doing so, I get the following error:

        Access is denied.

    Thinking this might be a user-permissions error, I tried running sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR on the client, in Cygwin, while logged in as user. This works as expected. Then I thought that this might be an error with the login that I use when I ssh into the client, so I executed ssh user@IP_ADDR 'whoami' instead, and got back user. This happened even after I did chmod -R 777 /home/user/project on the client, in Cygwin.

    For kicks, I got on Cygwin on the client, did ssh localhost, and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR. This worked as expected. However, when I did ssh IP_ADDR from Cygwin, then ssh localhost from there, and manually executed sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR, I get the same "Access is denied." error. Why is this happening? How can I fix this? By the way, both the server and the client have each other's RSA public key for passwordless ssh.
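
    A sketch of the next diagnostic worth logging, since a non-interactive ssh command gets a different environment (and, on Cygwin, potentially different Windows access rights) than an interactive login:

        # Compare what the non-interactive session actually sees
        ssh user@IP_ADDR 'echo PATH=$PATH; type sh; id'
        # Then repeat the script under a forced login shell
        ssh -t user@IP_ADDR 'bash -l -c "sh /home/user/project/clientside 2 5 7 6 9 5 7 IP_ADDR"'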

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    The boss asked me to check whether I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to generalize):

        Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22 GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately.

    Hmm... it happened at five in the morning, and I'm thinking this is a pretty good hint that it leads to the culprit. The thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard edition, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
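
    If this is the configurable Exchange 2003 SP2 logical-size cap (Standard edition can reportedly go up to 75 GB), the knob is, as far as I can tell from KB 912375, a registry value; the exact server name and store GUID vary, so the path below is a placeholder:

        REM Value is in decimal GB; the store re-checks the limit when mounted
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\Private-GUID" ^
            /v "Database Size Limit in Gb" /t REG_DWORD /d 40 /f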
