Search Results

Search found 8893 results on 356 pages for 'stored'.

Page 261/356

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application: We will be using 4 machines running Microsoft Windows Server (most probably 2003). All four will always run our application, which is essentially a web service. Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers. Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive. The DB data is stored on shared storage. No copying data between servers. Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data. Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all? Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate data on a couple of DB servers. There are two issues with this option: Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch? Switching back: Suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
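
    For the log-shipping variant, the sketch below shows the trigger-file pattern that pg_standby (shipped with PostgreSQL 8.2+) uses for failover; paths, hostnames and the shared archive location are placeholders, and on Windows the copy commands would be their cmd/robocopy equivalents.

      # Primary, postgresql.conf: archive every completed WAL segment to shared storage.
      #   archive_mode = on
      #   archive_command = 'cp %p /mnt/wal_archive/%f'
      # Standby, recovery.conf: replay segments until a trigger file appears.
      #   restore_command = 'pg_standby -t /tmp/pgsql.trigger /mnt/wal_archive %f %p'
      # Failover = your monitoring script (the failure-detection part is up to you)
      # creating the trigger file on the standby:
      touch /tmp/pgsql.trigger
      # Switching back is indeed not seamless: the old master must be re-seeded from
      # a fresh base backup of the new master before it can follow it as a standby.
      psql -c "SELECT pg_start_backup('reseed')"
      rsync -a --exclude=pg_xlog /var/lib/postgresql/data/ old-master:/var/lib/postgresql/data/
      psql -c "SELECT pg_stop_backup()"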

    Read the article

  • Are Windows Domain Service Accounts Really Necessary?

    - by Zach Bonham
    One of the biggest problems we have in automating application deployments is the idea that running IIS AppPools and Windows Services under domain service accounts is a 'best practice'. Unfortunately, this best practice sometimes causes deployment headaches in that either we need to provision a new domain level service account quickly, or once we have the account, we now need to manage the account credentials. I had a great conversation about not making domain level service accounts a requirement and effectively taking one of two approaches: (1) secure at the node level using the machine account (domain\machine$) and add the node to the appropriate ActiveDirectory/Sql groups/roles, or (2) create local app-specific accounts on each machine (machine\myapp) and add that account to the appropriate ActiveDirectory/Sql groups/roles (the password here can change per deployment, it doesn't need to be stored). In both cases, it seems that it's easier to manage either adding an account to the appropriate group/role, or even standing up a new local account, than it is to have to provision a new domain level account and manage those credentials. This would hopefully ease the management burden on ActiveDirectory, Sql Server and Operations teams as there would be no more password management. We've not actually been able to implement this in practice yet. I am coming from a development background, so I'm curious as to how many ways this approach could go wrong. Can we really get rid of domain level service accounts with this direction? I'd appreciate any thoughts from anyone who has taken this path! Thanks! Zach

    Read the article

  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around) I am now evaluating my strategy to protect confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2 GHz dual core CPU). It runs Windows 7 fine but it is no speed demon. It doesn't support the AES instruction set. I've been using a TrueCrypt volume mounted on demand for really important stuff like financial statements forever. Nothing else is encrypted. I just finished my evaluation of EFS, Bitlocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot partition encryption via Bitlocker or TrueCrypt is not worth the hassle. I may decide in the future to use Bitlocker or TrueCrypt to encrypt one of the data volumes but at this point I intend to use EFS to encrypt parts of my hard drive that contain data that I wouldn't want exposed. The purpose of this post is to get your feedback about what folders should be encrypted from the general point of view (of course everyone will have something specific in addition). Here is what I thought of so far (will update if I think of something else): 1) AppData\Local\Microsoft\Outlook - Outlook files 2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles, not sure yet where exactly data is stored. 3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backup. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet. 4) Bookmarks for Chrome (don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them but why not secure those as well. 5) Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt, say, the latest service pack for Visual Studio. So I will probably create a subfolder called "Secure" in all of these folders and set it to "Encrypted". Anything sensitive I will save in this folder. Any other suggestions? Again, this is from the point of view of your "regular office user".

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables when I performed an iptables -F and my ssh pipe broke. This is the last output of my session: root@alfapaints:~# iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state NEW,ESTABLISHED tcp dpt:2222 ACCEPT tcp -- li465-68.members.linode.com anywhere state NEW,ESTABLISHED tcp dpt:nrpe ACCEPT tcp -- anywhere anywhere tcp dpt:9200 state NEW,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:http state NEW,ESTABLISHED ACCEPT udp -- anywhere anywhere udp spt:domain Chain FORWARD (policy DROP) target prot opt source destination Chain OUTPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state ESTABLISHED tcp spt:2222 ACCEPT tcp -- anywhere anywhere state ESTABLISHED tcp spt:nrpe ACCEPT tcp -- anywhere anywhere tcp spt:9200 state ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp spt:http state ESTABLISHED ACCEPT udp -- anywhere anywhere udp dpt:domain root@alfapaints:~# iptables -F Write failed: Broken pipe I tested my connection just before and I was able to connect with ssh. Now I did an nmap scan and not a single port is open anymore. I know my VPS is running on VMWare ESXi, could a reboot help? Or if not, could I attach and mount the disk to another VM to save the data? Does anybody have some advice? And maybe an explanation of what happened or what could have caused my pipe to break? PS: I didn't save my rules in the config directories of iptables, but used a file stored in ~/rules.config to apply my rules like this: iptables-restore < rules.config So probably a reboot would help? Thanks a lot in advance.
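
    What broke the pipe: with a default policy of DROP on INPUT and OUTPUT, flushing the chains removed every ACCEPT rule, so the kernel started dropping the SSH packets themselves. Since the rules live only in kernel memory and nothing reloads ~/rules.config at boot, a reboot from the ESXi console should bring the box back with empty chains and default-ACCEPT policies. A safer pattern for the next experiment (a hedged sketch using stock iptables commands):

      # Reset default policies to ACCEPT *before* flushing, so an empty rule set
      # cannot lock you out:
      iptables -P INPUT ACCEPT
      iptables -P OUTPUT ACCEPT
      iptables -P FORWARD ACCEPT
      iptables -F
      # Better still: snapshot a known-good rule set and schedule an automatic
      # rollback, then cancel the at-job once you confirm you can still log in.
      iptables-save > /root/known-good.rules
      echo 'iptables-restore < /root/known-good.rules' | at now + 5 minutes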

    Read the article

  • Files on ext4 on Drobo with corrupt, zero-ed out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've not had problems on the first drive (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives). I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations. Recently we've been experiencing some data corruption in multiple files. In many cases I have an uncorrupted original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to split a corrupt and an uncorrupted file into 4096-byte chunks (this is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases, and in every case where they did not match, the corrupt chunk was a solid chunk of zeroes (620f0b67a91f7f74151bc5be745b7110 for what it's worth). I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
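
    The chunk-by-chunk comparison can be scripted directly; a minimal sketch (file names are placeholders, and the zero-block hash is the one quoted above):

      # Which 4096-byte blocks differ between the known-good original and the
      # corrupt copy? cmp prints 1-based byte offsets; convert them to block numbers.
      cmp -l original.db corrupt.db | awk '{print int(($1 - 1) / 4096)}' | uniq
      # How many blocks of the corrupt copy are all zeroes? 4096 zero bytes hash
      # to 620f0b67a91f7f74151bc5be745b7110, as noted in the question.
      split -a 6 -b 4096 corrupt.db chunk_
      md5sum chunk_* | grep -c 620f0b67a91f7f74151bc5be745b7110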

    Read the article

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive id questions.

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, so on and so forth) or is it absolutely a read-only direct correlation to the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (Also I'm curious if SMART's bad sector count is zeroed afterward or if it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking them as bad sectors? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for SuperUser.)
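
    On the first question, it may help to note that S.M.A.R.T. attributes are maintained by the drive firmware itself, not by the host, so recovery tools and file transfers cannot alter the counters directly; they can only trigger events the firmware then records. A sketch of reading the relevant counters on Linux with smartmontools (device name is a placeholder):

      # Reallocated and pending sector counts come straight from the drive firmware:
      smartctl -A /dev/sdX | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
      # Drive identity is the firmware's answer to an ATA IDENTIFY DEVICE command,
      # not data read from user-addressable sectors, so a garbled "hdparm -I" points
      # at the firmware or electronics rather than at ordinary bad sectors:
      hdparm -I /dev/sdX | head -n 20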

    Read the article

  • Convert Mac font to Windows format?

    - by Moshe
    I have a font that was used on Mac OS X 10.5. Mac recognizes it as a font file. Windows Vista shows it as a "File". Changing the file extension doesn't work. I tried both .otf and .ttf and neither worked. (Surprise! I didn't think so, but I'd be a fool for asking if that was the answer, so I tried...) Perhaps I need to try a different extension? Are there any utilities to convert the font? A Windows equivalent? It's called Franco. (FrancoNormal, actually.) Thanks. EDIT: DFontSplitter didn't work. I saw something online about a "data fork" and "resource fork" that have to match up. Can someone please explain that? Do I need both for a font conversion? What do I tell my graphic designer to send me? EDIT 2: The font doesn't work when downloaded to Mac OS X either. (A different machine from the original.) "Get Info" reveals that there is no file extension, leading me to believe that this is the newer font format. Where would I find the "data fork" and "resource fork" on the Mac? I want to be able to tell the designer exactly where to look. EDIT 3: DFontSplitter works on the original computer but not on my PC. I converted and emailed myself the fonts from the graphic designer's laptop. I guess it has to do with the data being stored in a "fork" or whatnot. Thanks, Matt, for that article. Thank you again for your help!
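
    On the forks: in the classic Mac font format the outlines live in the file's resource fork, which is silently stripped when the file is copied to Windows or attached to most e-mails, leaving an empty-looking "File". A hedged sketch of what the designer could run on the original Mac (fondu is FontForge's companion converter and is an assumption here; it may need to be installed first, e.g. via MacPorts or Homebrew):

      # Convert the Mac font (suitcase/dfont) to Windows-usable .ttf files; if the
      # font data lives only in the resource fork, point fondu at the fork explicitly:
      fondu FrancoNormal || fondu FrancoNormal/..namedfork/rsrc
      # To send the raw, unconverted font instead, zip it on the Mac so the resource
      # fork survives the trip as AppleDouble data:
      ditto -c -k --sequesterRsrc FrancoNormal FrancoNormal.zip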

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which is able to transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD). The rest of the time, the HDD should not be spinning due to noise concerns. Now, the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU and an SSD. The problem is: it also has (and needs) a noisy HDD and I want to prevent it from spinning up during the night. All writes should be cached on the SSD; reads are not needed at night. Throughout every day, this computer will automatically download about 5 GB of data which will be retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes, I'll need to access some data from several months ago. However, most of the time I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD, else the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
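
    For reference, the bcache tunables mentioned above are set through sysfs; a hedged sketch of that plus a once-a-day flush (the device name is a placeholder, and whether bcache will reliably hold dirty data this long is exactly what the question is unsure about):

      echo writeback > /sys/block/bcache0/bcache/cache_mode
      echo 0 > /sys/block/bcache0/bcache/writeback_running   # stop background flushing to the HDD
      echo 0 > /sys/block/bcache0/bcache/sequential_cutoff   # cache even large sequential writes
      # Once a day (e.g. from cron): let the HDD spin up, drain the dirty data, stop again.
      echo 1 > /sys/block/bcache0/bcache/writeback_running
      sleep 3600   # crude; polling the dirty_data statistic would be cleaner
      echo 0 > /sys/block/bcache0/bcache/writeback_running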

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 X64 Enterprise SQL: SQL Server 2008 R2 Enterprise X64 I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct. The file system is NTFS. The error message I get is: Msg 1823, Level 16, State 2, Line 1 A database snapshot cannot be created because it failed to start. Msg 5119, Level 16, State 1, Line 1 Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files. The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable: (1) add the SQL Server service (domain) account to the local Administrators group and restart the SQL service, or (2) grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\. I have tried changing the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER to no avail. I have no issue creating a new database. Why can I not create a snapshot by giving permission only on the DATA folder? Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but I learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2 but cannot find anything that would suggest it shouldn't work in a 2000 domain. I disjoined the test server from the domain and changed the service accounts to the local service account and I still have the same issue. I will try to re-install the server without joining the domain and without the hotfix and see if the issue persists.

    Read the article

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a MindMap (my software of choice being Freemind) or Wiki. To maintain an email audit-trail, I've considered saving individual emails as files using a MindMap or Wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a Wiki/MindMap) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?

    Read the article

  • Concerns about Apache per-Vhost logging setup

    - by etienne
    I'm both senior developer and sysadmin in my company, so I'm trying to deal with the needs of both activities. I've set up our Apache box, which deals with 30-50 domains atm (and hopefully will grow larger) and hosts both production and development sites, with this directory structure: domains/ domains/domain.ext/ #FTPS chroot for user domain.ext domains/domain.ext/public #the DocumentRoot of http://domain.ext domains/domain.ext/logs domains/domain.ext/subdomains/sub.domain.ext domains/domain.ext/subdomains/sub.domain.ext/public #DocumentRoot of http://sub.domain.ext Each domain.ext Vhost runs with its dedicated user and group via mpm-itk, umask being 027, and the logs are stored via a piped sudo command, like this: ErrorLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_error.log" CustomLog "| /usr/bin/sudo -u nobody -g domain.ext tee -a domains/domain.ext/logs/sub.domain.ext_access.log" combined Now, I've read a lot about not letting the logs out of a very restricted directory, but the developers often need to take a quick look at a particular subdomain's error log, and I don't really want to give them admin rights to look into /var/logs. Having the logs available in the FTP account is REALLY handy during development stages. Do you think this setup is viable and safe enough? To me it looks good, but I'm concerned about 3 security issues: Is the sudo pipe enough to deal with symlink exploits? Any catches I'm missing? Log DoS: the logs are on the same partition as all the domains. I've got hundreds of gigs, but still, if one site gets disk-space DoS'd, everything will break. Any workaround? Will a short-timed logrotate suffice? File descriptor limits: AFAIK the default limit for Apache on Ubuntu Server is currently 8192, which should be plenty to handle 2 log files per subdomain. Is it? Am I missing something? I hope to read some thoughts on the matter!
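
    On the log-DoS point, a hedged sketch of a size-capped rotation policy plus a quick check of the Apache file-descriptor ceiling (the glob follows the layout described above; the path prefix is a placeholder):

      # /etc/logrotate.d/vhost-logs -- rotate per-vhost logs daily and by size:
      #   /path/to/domains/*/logs/*.log {
      #       daily
      #       rotate 14
      #       size 100M
      #       compress
      #       missingok
      #       notifempty
      #   }
      # Check the open-file ceiling of the running Apache parent process:
      grep 'Max open files' /proc/$(pgrep -o apache2)/limits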

    Read the article

  • How does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work? For example, I have a computer with a Network Interface Card (NIC), which is an Ethernet card that is connected by Ethernet cables to a router. There is also another computer with a cable that is connected to another port of the router. This is a Belkin router operating over Ethernet in the LAN. When I connect to serverfault.com, it maps to an IP address. My computer now has the task of connecting to that IP address. But my computer itself cannot connect to the serverfault IP address. Only the router can. So the task of my computer is to find the IP address associated with the node that will do the routing to the public internet. How does my computer know that a particular IP address in the local network belongs to the router, and is not another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know that it must send Ethernet frames to the router with the expectation that the router will then send the packet to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I enter a public IP address, my computer first knows to connect to 192.168.2.6?
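
    In short, the router's address is the "default gateway": ordinary host configuration (typed in manually or handed out by DHCP) stored in the routing table, and ARP then resolves that IP to the router's MAC address. A quick way to see both on Linux (the gateway address shown is illustrative):

      ip route show    # e.g. "default via 192.168.2.1 dev eth0" names the router
      ip neigh show    # the ARP cache: which MAC answers for each local IP
      # On Windows the equivalents are "route print" and "arp -a".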

    Read the article

  • passing on command line options in bash

    - by bryan
    I have a question. I have a script, a rather long script written in bash, approx. 370 lines, that has several functions; in those functions the user has to enter information which is then stored in files. (This is supposed to represent a MySQL database, with the functions INSERT, UPDATE, SELECT, SELECT where x=y.) I created this myself in bash, and now the only thing that remains is that I need to be able to pass arguments on the command line to the script that will do the same as the script does. I know that bash has positional parameters such as $1 $2 $3 $* $@ $0 (which refers to the name of the script) etc. I know how I can use these parameters in a simple if statement. This is not enough for my script. I basically need to do the same thing that the script does, but from the command line. I have been struggling with this for a couple of days now and I cannot think of a way to get it to work. Maybe someone here can help me with this? If you want to see the script, that is possible, but I don't think I can paste it in here... EDIT: Link to script, http://pastebin.com/Hd5VsDv2 Note, I am a beginner in bash scripting. EDIT: This is in reply to Answer 1. As I said, I hope I can just replace the if [ "$1" = "one" ] ; then echo "found one" with if [ "$1" = "one" ] ; then echo SELECT where SELECT is the function I previously had in my script (above) http://pastebin.com/VFMkBL6g LINK to testing script
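
    A common pattern for this is a case statement (or getopts) at the top of the script that maps command-line options onto the functions that already exist; a minimal sketch, where the function names follow the question and the option letters are made up:

      while getopts "i:u:s:w:" opt; do
          case "$opt" in
              i) INSERT "$OPTARG" ;;          # insert a record
              u) UPDATE "$OPTARG" ;;          # update a record
              s) SELECT "$OPTARG" ;;          # select everything from a "table"
              w) SELECT_WHERE "$OPTARG" ;;    # select with a condition such as x=y
              *) echo "usage: $0 [-i data] [-u data] [-s table] [-w table:cond]" >&2
                 exit 1 ;;
          esac
      done
      # With no options at all, fall back to the script's existing interactive menu
      # (menu is a hypothetical name for that function):
      [ "$OPTIND" -eq 1 ] && menu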

    Read the article

  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing the actual data into a Google account. Over the last few years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Really, synchronizing between my work and home PCs freed me from manual sync. But in recent months I have been experiencing strange glitches. I guess it may be caused by a lot of stored bookmarks (potentially about 3K [in estimate], but please don't ask why :)) and extensions (about 130 installed but only 10-15 daily used). I can mention the following strange things: Recently added bookmarks sometimes are not synchronized (e.g. I put a bookmark at work, but it's not guaranteed I can see it that evening), even though about:sync indicates a good sync process. Sometimes recently modified bookmarks appear in either (let's call them) the last-at-home or last-at-work bookmark folders. Sometimes bookmarks are not synced at all. (Moreover, Chromium versions may even crash.) Extensions are not synced now at all. Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news. ... I'm not sure, but it looks like I might be exceeding Chrome's internal sync limits... Is that right? Are there any workarounds, or should I make a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance. Update #1 (2001-04-19): I removed about 50 extensions that I'm not interested in (or that I consider trash), and got some decent results: the extensions count is below 100 (exactly 97); the chrome://extensions page does not get slow (or even frozen) any more on enabling/disabling/uninstalling extensions; the extensions seem to be synchronized again now.

    Read the article

  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish: Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process. Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again. It would be ideal if it were easy to determine which ranges of objects were stored in each chunk. Do you have any advice for how to accomplish these goals? If the files were in the database as BLOBs these tasks would be much easier since normal database backup and restore functionality would handle this. I am not sure how to accomplish the same thing when file data is linked to database rows.
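
    A hedged sketch of one common approach: dump first, copy files second, so every file a dumped row references should already exist in the file copy (the database name, table and paths are placeholders):

      # Nightly: logical dump of the database, then an incremental copy of the uploads.
      pg_dump -Fc myappdb > /backup/db/myappdb-$(date +%F).dump
      rsync -a /srv/myapp/uploads/ /backup/files/current/
      # Archiving an "old" chunk: export a manifest of the rows in the range, which
      # also records exactly which files belong to that chunk.
      psql -d myappdb -c "\copy (SELECT id, file_path FROM uploads WHERE created_at < '2010-01-01') TO '/backup/archive/pre-2010-manifest.csv' CSV HEADER"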

    Read the article

  • Is there a free XML viewer to do "pivot tables"?

    - by FumbleFingers
    I have an *.xml file on a PC running Windows XP, with a structure something like...

      <movies id="my movies">
        <movie name="Unforgiven">
          <people star="Clint Eastwood" director="Clint Eastwood"/>
        </movie>
        <movie name="A Fistful of Dollars">
          <people star="Clint Eastwood" director="Sergio Leone"/>
        </movie>
        <movie name="J Edgar">
          <people star="Leonardo DiCaprio" director="Clint Eastwood"/>
        </movie>
      </movies>

    I want to open this file in a viewer utility which will show one line per movie, filtered by a condition such as director="Clint Eastwood", including the relevant value for movie name on each line. Note - that's just an example. My actual file has thousands of lines, each possibly several hundred bytes long, and there are many more levels and named values. The important thing is I want to apply a filter for a specified value of some field at some level. And I want to see one line for every case where that value occurs, showing the values for any fields I specify, even if they're stored at higher levels. I may be mistaken in saying what I want is a "pivot table" - I don't know what I should call it.
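
    As an aside on tooling, this kind of filtered flat listing is a single XPath expression; a hedged sketch with xmllint from libxml2 (Windows builds of libxml2 exist, and the file name is a placeholder):

      # Every movie whose director attribute (one level down, on <people>) is Clint Eastwood:
      xmllint --xpath '//movie[people/@director="Clint Eastwood"]/@name' movies.xml
      # Prints the matching name attributes, roughly: name="Unforgiven" name="J Edgar"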

    Read the article

  • Virtual disk image - file or partition?

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image in VM use. The common knowledge is that partition-based images are faster than file-based images because of a decreased overhead. It makes sense, but I've never seen any actual numbers. My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and then benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So perhaps throwing out the performance justification, is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or perhaps is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
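
    The benchmark comparison described above can be reproduced with plain dd, using O_DIRECT so the host page cache doesn't flatter the file-backed case; a hedged sketch (the device and mount point are placeholders, and the first command destroys whatever is on that partition):

      # Raw partition, bypassing the page cache:
      dd if=/dev/zero of=/dev/sdb1 bs=1M count=4096 oflag=direct conv=fdatasync
      # Same partition, now with a filesystem and an image file on top:
      mkfs.ext4 /dev/sdb1 && mount /dev/sdb1 /mnt/imgtest
      dd if=/dev/zero of=/mnt/imgtest/disk.img bs=1M count=4096 oflag=direct conv=fdatasync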

    Read the article

  • Alternative software for Pinnacle PCTV 100e

    - by Stijn Sanders
    I have a Pinnacle PCTV 100e external USB cable television receiver. I've been using Pinnacle's software that came with the card (TVCenter Pro) to record things at given times. Things I don't like are the extremely high CPU load, and that it doesn't seem to stop the screensaver from running when watching in full screen. Also, I was away the last two weeks, and the schedules went terribly bust. Some items were recorded hours before or after the actual scheduled time (and now I missed some shows), and some recurring schedules weren't converted into the next occurrence correctly! Is there good alternative software that would work with my PCTV 100e? (Preferably cheap or free.) I've tried VLC Player, which gets video, but no audio. I've tried MediaPortal, which crashes when trying to scan for channels. When I select a channel manually, the stored MPG has big encoding errors and is also missing audio. There's VirtualDub, but that doesn't have ready-made scheduled-recording options. This I can conjure some scheduled scripts for, but I've noticed the sync goes awfully wrong after some time. I've tried Windows Media Center, but it doesn't seem to support the PCTV 100e.

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example: 1) The user's browser requests http://www.example.com/files/file1.zip 2) Request goes to server A, based on the DNS A record for example.com. 3) Server A analyzes the request and works out that /files/file1.zip is stored on server B. 4) Server A forwards the request to server B. 5) Server B returns file1.zip directly to the user without going through server A. Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
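
    To make step 4 concrete, the Linux building blocks for direct server return are an IPVS virtual server in direct-routing mode on server A (it rewrites only the destination MAC and re-emits the frame) plus ARP suppression for the shared IP on server B; a hedged sketch with placeholder addresses. Note that stock IPVS picks real servers by scheduler rather than by requested URL, so the content-aware "which server has this file" decision in step 3 would still need a custom layer on top.

      # Server B: bind the service IP to loopback and refuse to ARP for it.
      ip addr add 203.0.113.10/32 dev lo
      sysctl -w net.ipv4.conf.all.arp_ignore=1
      sysctl -w net.ipv4.conf.all.arp_announce=2
      # Server A: forward port 80 for the service IP to B by MAC rewriting (-g),
      # or with IP-in-IP tunneling (-i) when B sits in another data center.
      ipvsadm -A -t 203.0.113.10:80 -s rr
      ipvsadm -a -t 203.0.113.10:80 -r 192.0.2.20:80 -g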

    Read the article

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code they document. My Subversion client (TortoiseSVN) comes with a little VBA script that allows me to leverage the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source doc tagged as revisions, so I can then review differences one by one and confirm or discard changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the only actual change. My question is... Do you know of an application or plug-in that's able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old but that's what my company gives me; I'm open to solutions that use newer versions though.)

    Read the article

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to use DriveImage XML to image a SATA laptop hard drive that is attached to my computer via a USB adapter. I'm running Win7 Ultimate 64 bit. DriveImage XML is returning: Could not initialize Windows Volume Shadow Service (VSS). ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start. ERROR TIMEOUT Make sure VSSVC.EXE is running in your task manager. Click Help for more information. VSSVC.EXE is running in Task Manager, as is VSS64.exe. Looking at the FAQ on the Runtime webpage, this turned up: Please verify in Settings-Control Panel-Administrative Tools-Services that the following services are enabled: MS Software Shadow Copy Provider Volume Shadow Copy Also make sure you are able to stop and start these services. Possible reasons for VSS failures: For VSS to work, at least one volume in your computer must be NTFS. If you use only FAT drives, VSS will not function. The required NTFS volume does not need to be identical with the volume you want to image. You should make sure that VSSVC.EXE is running in your task manager. If the problems persist, registering "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help. Both of those services are running and can be started and stopped without issue. Using "regsvr32" to register "oleaut32.dll" returns successful, but "oleaut.dll" returns: The module "oleaut.dll" failed to load. Make sure the binary is stored at the specified path or debug it to check for problems with the binary or dependent .DLL files. The specified module could not be found. Some other information that might be relevant: Browsing to the drive is successful, but accessing certain folders returns an "access" error. Windows runs a permissions adder that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?

    Read the article

  • Why do my backups fail when I target a network share hosted by a Synology DS211 disk station?

    - by Larry
    My backups are failing when I try to use a network share hosted by a Synology DS211 disk station. They work fine if I target a different network share (i.e. \server1\data\larry). When I run the following command: Wbadmin start backup -backupTarget:\\diskstation\backup-larry -include:C: This is what I get: wbadmin 1.0 - Backup command-line tool (C) Copyright 2004 Microsoft Corp. Note: The backed up data cannot be securely protected at this destination. Backups stored on a remote shared folder might be accessible by other people on the network. You should only save your backups to a location where you trust the other users who have access to the location or on a network that has additional security precautions in place. Retrieving volume information... This will back up volume WIN7(C:) to \\diskstation\backup-larry. Do you want to start the backup operation? [Y] Yes [N] No y Note: The list of volumes included for backup does not include all the volumes that contain operating system components. This backup cannot be used to perform a system recovery. However, you can recover other items if the destination media type supports it. The backup operation to \\diskstation\backup-larry is starting. Creating a shadow copy of the volumes specified for backup... Creating a shadow copy of the volumes specified for backup... The backup operation stopped before completing. Summary of the backup operation: ------------------ The backup operation stopped before completing. Detailed error: Access is denied. Windows Backup failed to write the file: '<backup location>\WindowsImageBackup\<Computer Name>\MediaId'. Access is denied. The backup creates the following path \\diskstation\backup-larry\WindowsImageBackup\LARRY-MYDOMAIN\ but its empty. I definitely have read/write access on the target directory (\diskstation\backup-larry). I have verified this by looking at the permission and by actually copying files to this location. Any suggestions?

    Read the article

  • Allow incoming connections on Windows Server 2008 R2

    - by Richard-MX
    Good day, people. First, I'm new to Windows Server. I've always used the Linux/Apache combo, but my client has an AWS EC2 Windows Server 2008 R2 instance and he wants everything in there. I'm working with IIS and PHP enabled as FastCGI and everything is working, but I can't see the websites stored on it from the internet. The public DNS that AWS gave us for that instance is: http://ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com/ But if I copy-paste that address, I get nothing, no IIS logo or anything like that. My common sense tells me that maybe the firewall could be blocking the access. Can anyone help me and tell me where to enable some rules to get this thing working? I don't want to start enabling rules at random and make the system insecure. If you need any additional info, you can ask me and I will provide it. Thanks in advance. UPDATE: Amazon EC2 displays this: Public DNS: ec2-XX-XXX-XXX-121.us-west-2.compute.amazonaws.com Private DNS: ip-XX-XXX-XX-252.us-west-2.compute.internal Private IPs: XX.XXX.XX.25 In my test micro instance, I just use the Public DNS address (the one that starts with "ec2") and it works like a charm (of course, the micro instance has its own Public DNS; I'm not assuming the same address for both instances...) However, for the large instance, I tried to do the same: set up everything as in the micro instance, but if I use the Public DNS, it doesn't load anything. I'm suspicious of the Windows Firewall, but the HTTP-related stuff is enabled. What should I do to get access to the large instance? I don't want to set up the domain yet, I want access from an Amazon URL. 2ND EDIT: All fixed. Charles pointed out that maybe the Security Group was not properly set up for the instance. He was right. I just added the HTTP service to the rules and all works well.
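
    For reference, opening port 80 in the instance's EC2 security group is the fix the final edit describes; a hedged sketch with the AWS CLI (the group name is a placeholder, and the same change can be made from the console's Security Groups page):

      aws ec2 authorize-security-group-ingress \
          --group-name my-windows-web-sg \
          --protocol tcp --port 80 --cidr 0.0.0.0/0
      # The Windows-side rule (already in place here, per the question) would be:
      #   netsh advfirewall firewall add rule name="HTTP 80" dir=in action=allow protocol=TCP localport=80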

    Read the article

  • Multiple Servers + One MailServer

    - by theomega
    Hi, I have several Linux servers (running Debian) where different services run: database servers, web servers, application servers, tools and so on. All servers are connected to the same internal network. There is also one special server which is the mail server: all mail accounts are stored on this server, and it is also the outbound mail server for all the other servers. I want all mail for all servers to be saved on the mail server. For example, if a cron job fails on one of the web servers, the mail should not be delivered to the local user but instead to the mail server, so I get a centralized place for mail storage. How do you set up this scenario? My current setup is: using Postfix as the MTA on the mail server and ssmtp on all the other servers. ssmtp is configured to send the mail to the mail server. The mail server is configured to allow the whole internal network to relay mail through it. Is this the right approach? I also thought about setting up an MTA (Postfix) on every server and configuring it somehow to forward the mail. What would be the advantage of this solution?
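
    If the per-server Postfix route is chosen, the usual trick is a null-client configuration that relays everything, including mail addressed to local users such as root, to the central host; a hedged sketch (the hostname and network are placeholders):

      # On every non-mail server:
      postconf -e 'relayhost = [mailserver.internal.lan]'
      postconf -e 'mydestination ='                 # nothing is local, so even root@thishost is relayed
      postconf -e 'inet_interfaces = loopback-only' # only accept mail from local processes
      postfix reload
      # On the central mail server, permit relaying from the internal network:
      postconf -e 'mynetworks = 127.0.0.0/8 10.0.0.0/24'
      postfix reload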

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to setup a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here: HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance)... Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold: Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing). Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts? Thanks in advance for any suggestions, I've been banging my head against this one for a bit...

    Read the article
