Search Results

Search found 8190 results on 328 pages for 'separate'.

Page 221/328

  • Remote desktop connection to network printer

    - by andand
    I'm trying to print a document from a remote WinXP machine to a network printer I use on a local Win7 machine using Remote Desktop. The network printer does not appear in the list of those available on the WinXP box. In more detail, the local machine runs Windows 7 (no admin rights) and connects to a network printer managed by a print server (i.e. not using a local TCP/IP port). I have access to a Windows XP host on a separate network which I access using Remote Desktop. I would like to have print requests from the remote XP box forwarded to the network printer I use on the Windows 7 machine. The XP machine cannot access the print server I use on the Win7 machine, nor can it create a TCP/IP port to connect directly to the printer (network configuration issues). After consulting KB312135 I confirmed the "Printers" option was selected in the Remote Desktop Client, Local Resources tab, yet the network printer does not appear in the list of available printers on the XP box. Is this a lost cause, or is there something else I haven't managed to locate yet?


  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this and I believe it is possible to have Apache respond to a subdomain over SSL. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for SSL is for the subdomain sub.domain.com. So my site should be:

    http://domain.com
    http://www.domain.com
    https://sub.domain.com
    https://www.sub.domain.com

    My CNAME records are as follows:

    sub.domain.com    xxx.xx.xx.xxx
    *.sub.domain.com  xxx.xx.xx.xxx

    The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com:

        NameVirtualHost xxx.xx.xx.xxx:443
        <VirtualHost xxx.xx.xx.xxx:443>
            SSLEngine on
            SSLStrictSNIVHostCheck on
            SSLProtocol -ALL +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM
            ServerAlias sub.domain.com
            DocumentRoot /usr/local/www/ssl/documents/
            SSLCertificateFile /root/sub.domain.com.crt
            SSLCertificateKeyFile /root/sub.domain.com.key
            Alias /robots.txt /usr/local/www/ssl/documents/robots.txt
            Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico
            Alias /js/libs /usr/local/www/ssl/documents/js/libs
            Alias /media/ /usr/local/www/documents/media/
            Alias /img/ /usr/local/www/ssl/documents/img/
            Alias /css/ /usr/local/www/ssl/documents/css/
            <Directory /usr/local/www/ssl/documents/>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP}
            WSGIProcessGroup sub.domain.com
            WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi
            <Directory /usr/local/www/wsgi-scripts>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above, instead of it responding on https://sub.domain.com. It does not respond to sub.domain.com at all; checking https://sub.domain.com causes a 105 error. This is a DNS error, but I am convinced the DNS does not have a problem with the CNAME records; they just point to my IP. Am I doing something that Apache can not do?
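
    One way to see which virtual host Apache is actually selecting is to compare the certificates returned for different SNI names. The sketch below is not from the original post; it uses the question's placeholder IP and host names, and only compares what the server sends back. If both names return the same certificate, requests are falling through to the first :443 vhost rather than matching sub.domain.com.

        # Diagnostic sketch: compare the certificate Apache serves for two SNI names.
        # "xxx.xx.xx.xxx" and the domain names are the placeholders from the question.
        import socket
        import ssl

        def served_cert(ip, server_name, port=443):
            """Connect to ip:port, send server_name via SNI, and return the peer
            certificate in DER form (no verification - we only compare bytes)."""
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            with socket.create_connection((ip, port)) as sock:
                with ctx.wrap_socket(sock, server_hostname=server_name) as tls:
                    return tls.getpeercert(binary_form=True)

        a = served_cert("xxx.xx.xx.xxx", "sub.domain.com")
        b = served_cert("xxx.xx.xx.xxx", "domain.com")
        print("same certificate for both names:", a == b)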


  • Can you share offline files cache with two user accounts?

    - by Joel Coehoorn
    I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate, and is joined to the domain at work. It is okay to use this laptop for both work and personal activities, and I even have an account set up on the local machine in addition to the work domain account specifically for this, to help keep the two separate. At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop which will now become the family machine. My mp3 library is on there, among other things. What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could do them both separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to be able to sync from the local machine account only, and have the domain account be able to use the synced files. I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up strangely, and so using that cache directly is not trivial. Any ideas appreciated.


  • MySQL-Cluster or Multi-Master for production? Performance issues?

    - by Phillip Oldham
    We are expanding our network of webservers on EC2 to a number of different regions and currently use master/slave replication. We've found that over the past couple of months our slave has stopped replicating a number of times, which required us to clear the db and initialise the replication again. As we're now looking to have servers in 3 different regions we're a little concerned about these MySQL replication errors. We believe they're due to auto_increment values, so we're considering a number of approaches to quell these errors and stabilise replication:

    - Multi-Master replication: 3 masters (one in each region), with the relevant auto_increment offsets, regularly backing up to S3.
    - MySQL-Cluster: 3 nodes (one in each region) with a separate management node which will also aggregate logs and statistics.

    After investigating, it seems they both have down-sides (replication errors for the former, performance issues for the latter). We believe the cluster approach would allow us to manage and add new nodes more easily than the Multi-Master route, and would reduce/eliminate the replication issues we're currently seeing. But performance is a priority. Are the performance issues of MySQL-Cluster as bad as people say?


  • If I scp a file through an intermediate server, is the file stored temporarily on the server?

    - by Blacklight Shining
    For the sake of simplicity (I find it easier to remember names than arbitrary letters), I will dispense with letters and use names to refer to the machines in this scenario. Say I have two machines, applejack and pinkie-pie, each on their own separate LANs and not in the same physical location. I also have a server, cadance, with a direct Internet-facing connection. I want to copy a file from applejack to pinkie-pie, so to avoid dealing with port forwarding and such, I set up an ssh tunnel from pinkie-pie to cadance (ssh -R etc cadance). Now I can connect to pinkie-pie from anywhere, by connecting to cadance and specifying an alternate port to use. I can also easily copy files to pinkie-pie with scp -P $that_port $some_file cadance:$some_path. My understanding of how it works is this:

    1. A secure connection is made from applejack to cadance.
    2. I am authenticated to cadance.
    3. A secure connection is made from applejack to pinkie-pie that spans the existing reverse tunnel and the new connection from step 1.
    4. I am authenticated to pinkie-pie.
    5. Files are copied directly from applejack to pinkie-pie over this connection.

    Am I correct here? How secure is this approach? If I'm wrong…are files copied this way decrypted at cadance before being passed on to pinkie-pie? Is there a possibility that traces of unencrypted data could remain on cadance?


  • VoIP setup for one external PSTN line

    - by Jcl
    I'm completely new to VoIP and the likes, and I'm trying to find information about what could be the best setup for this. I need 4 (maybe more in the future, but at most 5 or 6) wireless extensions, connected to 1 PSTN line, and maybe 2 in the future. I've been trying to gather information about the gear needed, but everything I find seems way over-the-top (and extremely expensive). The main problem is that the physical place we are in doesn't have the possibility of a decent internet connection, so using an external VoIP "virtual PBX" is not an option. Thing is, even if small, phone is critical to this organization. I currently have an analog DECT/GAP PBX which does what I need, however the PBX is very bad and the call quality is horrible, and that's why I want to change it. The requirements would be:

    - 4 wireless terminals (routing cable is not an option), all of them ringing on incoming PSTN calls.
    - Ability to do internal calls (4 separate offices) and ability to pass calls between terminals.
    - The 4 terminals should be able to access the external PSTN line without dialing any special codes.
    - Very important: terminals should be able to issue commands on the PSTN line to the external operator in the form *nn*nnnnnnnn#. I don't know whether this could turn out to be a problem, but I've had problems with analog PBXs which would take any * as a PBX command and wouldn't allow terminals to send it to the external lines.
    - Not so important, but would be nice to have: call waiting music.

    Could anyone recommend such a setup? I need to be able to do this on an EXTREMELY LIMITED budget (that is: I don't have a limit, but it should all get as close to zero as possible). I have enough spare powerful computers and a 300mbps wireless network which works just fine, so that's not to be included in the budget. I don't really know if this is the best place to ask, but it's the most StackExchange-related site I've found for this subject.


  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
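
    The rough numbers in the question line up with the usual rule-of-thumb formula for random IOPS: raw spindle IOPS divided by the blended read/write cost of the RAID level. A small sketch of that arithmetic; the per-disk IOPS figures and the six-disk RAID 10 size are assumptions, since the question doesn't state them:

        # Rule-of-thumb estimate only; per-disk IOPS (100 for 7200 RPM, 125 for 10K)
        # and the six-disk RAID 10 array size are assumptions, not figures from the question.
        def effective_random_iops(disks, iops_per_disk, write_penalty, read_ratio=0.7):
            """Raw IOPS divided by the blended cost of reads (1 I/O) and writes (penalty I/Os)."""
            raw = disks * iops_per_disk
            return raw / (read_ratio + (1.0 - read_ratio) * write_penalty)

        print(round(effective_random_iops(6, 100, 6)))   # RAID 6, 7200 RPM: ~240 (question cites ~245)
        print(round(effective_random_iops(6, 125, 2)))   # RAID 10, 10K RPM: ~577 (question cites 583)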


  • Safely removing my linux dual boot

    - by de1337ed
    Currently, I dual boot with win7 and openSUSE 12.1. Is it okay if I first restore the MBR for win7, and then format the linux drives? Or would it be better for me to first format the linux drives, and then restore the MBR? The reason I ask is that I sometimes get nasty errors when I try to boot into my win7 CD with the latter method. Is it possible to restore the MBR without having to boot into the win7 CD? Like, can I remove linux using the Disk Management utility in win7, and then fix the MBR while I'm still in win7, or do I have to boot into the win7 CD? If I can do this, how do I go about doing so? Thank you.

    EDIT: The spamming of F8 didn't work, it just made loud beeping noises, so I decided to simply boot into my Windows disk and use the bootsect /nt60 SYS /mbr command. Note: I haven't formatted my linux partition yet. After I did that, I restarted my computer, and nothing happened. Basically, GRUB is still in the MBR and I'm still able to access openSUSE. I think this is happening because my GRUB is on a separate partition from the linux OS. Here is a picture of my diskmgmt in win7. openSUSE did all the partitioning for me. All I know is that the 40GB that is there is where openSUSE is installed, but I have no clue what's on the 6.05GB and the 14.75GB partitions. Can anyone help me find which partition GRUB is on, and then remove it so I can restore the Windows MBR? Thank you.


  • Setup for a live (low-latency) audio video broadcast over Wi-Fi?

    - by Majal Mirasol
    The Upgrade

    We are capturing audio (from the mixer) and video (from a camera) in a main auditorium and passing it to separate rooms within the building. We used to do this via manual audio/video cables and wires. We wanted to "upgrade" the system and broadcast the stream wirelessly via Wi-Fi.

    The Problem

    In our current setup (Wirecast running on A10 on a Wireless-N network), we have the problem of delay. Our streams are delayed by anywhere from one minute up to five minutes on the clients (laptop/iPad/Android). This had not been a problem with the previous wired connections. Since the wireless network is local, we thought that a delay of less than a second should be achievable.

    Our Question

    And so it goes: does anybody have experience with a setup that has both low latency and is at the same time user-friendly for clients streaming the program? Any recommendations would be highly appreciated. (Our current setup is on Windows 7, but a setup on a dedicated Linux box is preferred, if achievable.)


  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like they are some base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision. I set up the repo as follows:

        git svn init http://svn.ip.addr/repo
        git svn fetch svn-remote

    My script looks like this:

        cd /gitsvn/dir
        git svn fetch svn-remote
        git svn push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:

        fatal: unable to run 'git-svn'
        Counting objects: 21, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (10/10), done.
        Writing objects: 100% (11/11), 59.08 KiB, done.
        Total 11 (delta 8), reused 0 (delta 0)
        To /gitpub/repo.git
           360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.


  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high. One server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and if I'm really in a position to bother with this kind of thing at all.


  • How to send T.38 from a Mac?

    - by Brian Postow
    I'm trying to set up a fax server on a Macintosh. I have HylaFAX, and we're going to use an internet FoIP fax provider (haven't decided who yet, that may be another question). The problem is how to get from HylaFAX to T.38. I know of two options, but I'm not sure how to decide between them:

    - T38modem. Advantages: it's only one extra program, and I know that I can compile it for the Mac (well, at least I can get the H323 version working on a Mac). Disadvantages: it is mostly undocumented and seems to be supported only by one guy in Russia.
    - IAXModem/Asterisk. Advantages: it's well known and well supported, and we can pay for support. It presumably does T.38 with SIP correctly, so we don't have to worry about it. Disadvantages: it's two separate programs. While I know how to get Asterisk on a Mac, I'm not sure about IAXModem (it's SourceForge, and Linux, but compiling things for a Mac isn't always straightforward...). It's also mostly undocumented.

    Do these seem like an accurate listing of the pros/cons? Does anyone have any other suggestions? Thanks.


  • Can't ping my Windows 7 machine from within a Windows XP virtual machine

    - by Jonathan Conway
    I have Windows 7 installed as my primary operating system, on a laptop that's on my home network (wireless). I'm using Microsoft Virtual PC 2007 SP1 to run a virtual machine of Windows XP SP3, in which I want to access the Windows 7 instance, both to browse a shared folder and to access the local Apache server. So far I can ping my Windows 7 IP address (IPv4) and access the Apache server through the web browser via HTTP. However, using my machine name never seems to work. Pinging it fails, and I can't access my Apache server using it either. The problem seems to be something to do with my machine's name being registered under IPv6 rather than IPv4. I'm at a loss what to do. Should I try to set up IPv6 on the virtual machine? Not sure how to go about that. Or maybe I should somehow get my machine name on Windows 7 to work with IPv4? Although I think it already does, because I can ping it from a separate box (running Ubuntu), which is only registered under IPv6.


  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that:

    - 6 x Exchange 2010 Standard licenses
    - 2 x separate locations that are supposed to support redundancy in case of link problems
    - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
    - Multiple locations worldwide using those Exchange servers; most locations will be connected with a VPN tunnel (the ones hosting Exchange for sure).

    I was thinking something like this:

    Location MAIN (about 70-100 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (Active + Passive)

    Location SUPPORT (about 20 people):
    - 2x TMG 2010 in NLB
    - 1x Exchange 2010 CAS/HUB role
    - 2x Exchange 2010 Mailbox role (Active + Passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, but like 10+ people per location). I do know that CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to do some redundancy on that. What do you think about this approach? What would be a better approach according to you?


  • SSL setup: UCC or wildcard certificates?

    - by quanza
    I've scoured the web for a clear and concise answer to my SSL question, but to no avail. So here goes: I have a web service requiring SSL support for authentication pages. The root-level domain does not have the "www" - i.e., secure://domain.com - but localized pages use "language-code.domain.com", i.e. secure://ja.domain.com. So I need at least a wildcard SSL certificate that supports secure://*.domain.com. However, we also have a public sandbox environment at sandbox.domain.com, which we also need to support under localized domains - so secure://ja.sandbox.domain.com needs to work as well. The previous admin managed to purchase a wildcard SSL certificate for *.domain.com, but with a Subject Alternative Name for "domain.com". So, I'm thinking of trying to get a wildcard certificate with SANs defined as "domain.com" and "*.*.domain.com". But now I'm getting confused because there seem to be separate SAN certificates, also called UCC certificates. Can someone clarify whether it's possible to get a wildcard certificate with additional SAN fields, and ultimately what the best way is to support:

    secure://domain.com
    secure://*.domain.com
    secure://*.*.domain.com

    with the fewest (and cheapest!) number of SSL certificates? Thanks!
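
    For reference, a single-label wildcard only covers one additional DNS label, which is why ja.sandbox.domain.com is generally not covered by *.domain.com. A small sketch of that matching rule, run against the names on the existing certificate mentioned above (this is an illustration of the rule, not a recommendation for a particular certificate):

        # Single-label wildcard matching, roughly as browsers apply it (sketch only).
        def wildcard_matches(pattern, hostname):
            """'*' stands for exactly one DNS label: '*.domain.com' matches
            'ja.domain.com' but not 'domain.com' or 'ja.sandbox.domain.com'."""
            p, h = pattern.lower().split("."), hostname.lower().split(".")
            return len(p) == len(h) and all(a == "*" or a == b for a, b in zip(p, h))

        # Names on the existing cert: CN *.domain.com plus SAN domain.com
        sans = ["*.domain.com", "domain.com"]
        for name in ["domain.com", "ja.domain.com", "ja.sandbox.domain.com"]:
            covered = any(wildcard_matches(s, name) for s in sans)
            print(name, "covered" if covered else "NOT covered")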


  • ssh use with netcat to forward connections via bastion host to inside machine

    - by Registered User
    Hi, I have a server in a corporate data centre whose sysadmin is me. There are some virtual machines running on it. The main server is accessible from the internet via SSH. There are some people within the LAN who access the virtual machines, whose IPs on the LAN are:

    192.168.1.1
    192.168.1.2
    192.168.1.3
    192.168.1.4

    The main machine, which is a bastion host for the internet, has IP 192.168.1.50 and only I have access to it. I have to give people on the internet access to the internal machines whose IPs I mentioned above. I know a tunnel is a good way, but the people are fairly non-technical and do not want to get into tunnels and similar jargon. So I came across a solution as explained on this link. On the gateway machine, which is 192.168.1.50, in the .ssh/config file I add the following:

        Host securehost.example.com
        ProxyCommand ssh [email protected] nc %h %p

    Now my question is: do I need to create separate accounts on the bastion host (gateway) for those users who can SSH to the inside machines, and do I need to make the above entry in each of the users' .ssh/config, or where exactly do I put the .ssh/config on the gateway? Also, ssh [email protected] where user1 exists only on the inside machine 192.168.1.1 and not on the gateway - is that the right syntax? Because the internal machines are accessible to the outside world as:

    site1.example.com
    site2.example.com
    site3.example.com
    site4.example.com

    But SSH is only for example.com and only one user. So how should I go about .ssh/config?

    1) What is the correct syntax for ProxyCommand in the gateway's .ssh/config? Should I use ProxyCommand ssh [email protected] nc %h %p, or should I use ProxyCommand ssh [email protected] in nc %h %p?

    2) Should I create new user accounts on the gateway, or is adding them in AllowedUsers in ssh_config sufficient?


  • What folders to encrypt with EFS on Windows 7 laptop?

    - by Joe Schmoe
    Since I've been using my laptop more as a laptop recently (carrying it around), I am now evaluating my strategy to protect confidential information in case it is stolen. Keep in mind that my laptop is 6 years old (Lenovo T61 with 8 GB of RAM, 2GHz dual core CPU). It runs Windows 7 fine but it is no speed demon. It doesn't support the AES instruction set. I've been using a TrueCrypt volume mounted on demand for really important stuff like financial statements forever. Nothing else is encrypted. I just finished my evaluation of EFS and BitLocker and took a closer look at TrueCrypt again. I've come to the conclusion that boot partition encryption via BitLocker or TrueCrypt is not worth the hassle. I may decide in the future to use BitLocker or TrueCrypt to encrypt one of the data volumes, but at this point I intend to use EFS to encrypt the parts of my hard drive that contain data I wouldn't want exposed. The purpose of this post is to get your feedback about what folders should be encrypted from a general point of view (of course everyone will have something specific in addition). Here is what I have thought of so far (will update if I think of something else):

    1) AppData\Local\Microsoft\Outlook - Outlook files.
    2) AppData\Local\Thunderbird\Profiles and AppData\Roaming\Thunderbird\Profiles - Thunderbird profiles; not sure yet where exactly the data is stored.
    3) AppData\Roaming\Mozilla\Firefox\Profiles\djdsakdjh.default\bookmarkbackups - Firefox bookmark backups. Is there a separate location for the "main" Firefox bookmark file? I haven't figured it out yet.
    4) Bookmarks for Chrome (don't know where its bookmarks are) and Internet Explorer ($Username\Favorites) - I don't really use them, but why not secure those as well.
    5) The Downloads\, My Documents\ and My Pictures\ folders I don't think I need to encrypt wholesale - say, the latest service pack for Visual Studio doesn't need it. So I will probably create a subfolder called "Secure" in each of these folders and set it to "Encrypted". Anything sensitive I will save in this folder.

    Any other suggestions? Again, this is from the point of view of your "regular office user".


  • Sync clock on Windows XP machine to external (non-domain, non-workgroup) Windows Server 2008 R2 machine

    - by Eric
    I have two machines and I'd like their clocks to be in sync for various reasons. Machine 1 is an XP machine located in the office. Machine 2 is a VPS hosted by a third party running Windows Server 2008 R2. These machines are not in any kind of workgroup or on a domain together. They are completely separate machines. Machine 2 is currently syncing once a week to time.windows.com. The clock on Machine 2 does seem to wander a bit within that week interval. What I would like to do is have Machine 1 set its clock based on the clock of Machine 2. I have tried configuring w32tm on the XP machine. This is what I used for configuration: w32tm /config /syncfromflags:manual /manualpeerlist:"<ip address of machine 2>" However, whenever I issue the /resync command I get "The computer did not resync because no time data was available". I have made sure to start the windows time service on machine 2, and I have added firewall exceptions for UDP port 123. Is there something I need to configure on Machine 2 (other than just starting the time service) in order to get it to respond? Edit: I have also run w32tm /config /reliable:YES /update on Machine 2. I am still getting "The computer did not resync because no time data was available". Is there something else I'm missing?
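
    One way to narrow this down is to check whether Machine 2 actually answers NTP on UDP 123 at all, separately from the w32tm configuration on Machine 1. A small SNTP sketch; the address below is a placeholder for Machine 2's IP, and a timeout would suggest the request is not being answered:

        # Minimal SNTP query (diagnostic sketch; 192.0.2.10 is a placeholder address).
        import socket
        import struct
        from datetime import datetime, timezone

        def query_sntp(server, timeout=5):
            packet = b"\x1b" + 47 * b"\x00"      # LI=0, VN=3, Mode=3 (client request)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.settimeout(timeout)            # raises socket.timeout if nothing answers
                s.sendto(packet, (server, 123))
                data, _ = s.recvfrom(512)
            secs = struct.unpack("!I", data[40:44])[0] - 2208988800   # NTP -> Unix epoch
            return datetime.fromtimestamp(secs, tz=timezone.utc)

        print(query_sntp("192.0.2.10"))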


  • How do I dp or do just the lines, not the entire block, in Vim diff?

    - by hobbes3
    I'm currently using MacVim (Snapshot 64) and its "Split Diff by..." menu option. The files are Django's settings.py: my file from version 1.3.1 against a fresh file from version 1.4. (Open the image in a separate window/tab to enlarge.) I know two basic commands: do to "obtain" (and replace) a block from the other side, and dp to "put" (and replace) a block to the other side. But those two commands write the entire block, which in MacVim is the purple highlight. If you look at the 2nd block, you can see that lines 2 and 3 only have 2 words that are different: mysite and hobbes3. I just want to replace per line, not per block. So is there a command to do do and dp per line as opposed to an entire block, or do I have to type it out manually? Bonus question: I noticed that once I manually edit a block, I lose the purple highlighting. How do I "refresh" the diff to include the highlights again without reopening the file? Please try to keep the answers Vim-general as opposed to MacVim-specific. Thanks!


  • Sorting/grouping when there are multiple values in one cell

    - by ngm
    I have an Excel 2007 spreadsheet, where each row of the dataset describes a feature of a piece of software. One of the columns in the spreadsheet is Relevant Users, which describes which users of the software the feature is of interest to. There may be a couple of different users interested in a feature, in which case I've been filling in the cell with the two user types separated by a semicolon, e.g. 'Usertype A; Usertype D'. Occasionally, I'd like to sort my data by the Relevant Users column. However, the way I'm populating the column means the sorting isn't very smart. If I have a feature where Relevant Users is 'Usertype A; Usertype D', and then I sort by Relevant Users, that feature will be grouped at the end of all the other features relevant to Usertype A, as it's just sorting alphabetically. But I want it to be listed in the two separate groups of Usertype A and Usertype D. Or, if I have a pivot table that groups the features together under the heading of Relevant User, I'll get all the features for 'Usertype A', then 'Usertype B', then 'Usertype C', then 'Usertype D', then 'Usertype A; Usertype D', etc. Whereas I really want a feature with Relevant Users of 'Usertype A; Usertype D' to show up in both the Usertype A group and the Usertype D group. I guess if this information was in a database I might have a many-to-many table linking Relevant Users to features. But is there a way to go about having this kind of many-to-many relationship in Excel?
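
    Outside of Excel itself, the many-to-many expansion described above is easy to do on an exported CSV: duplicate each row once per user type listed in the cell, then pivot on the result. A sketch; the file names are assumptions, and the column name is taken from the question:

        # Expand "Usertype A; Usertype D" style cells into one row per user type.
        # "features.csv" / "features_expanded.csv" are assumed file names.
        import csv

        with open("features.csv", newline="") as src, \
             open("features_expanded.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                for user in (u.strip() for u in row["Relevant Users"].split(";")):
                    writer.writerow({**row, "Relevant Users": user})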


  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some web services from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP for each of them. At present, I generate two separate packages for deployment - one for each machine - using the sync operation and specifying a DestinationBinding parameter as follows (split across multiple lines to make it easier to read):

        msdeploy -verb:sync -source:WebServer,computerName=localhost
                 -dest:package="machinename.zip"
                 -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:"

    I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply do a sync from each package to its respective live site. I know, I know - I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers - believe me I'd like to, but the documentation on doing this is frankly non-existent. Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including the SSL cert that is to be used. I've added an SSL binding for the site on the staging box - which uses a development cert (which will need to be replaced - or should the staging box be using the live cert?) - and now the above command line has the effect of replacing the target IP on both the http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.


  • Files on ext4 on Drobo with corrupt, zeroed-out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've not had problems on the first drive (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives). I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations. Recently we've been experiencing some data corruption in multiple files. In many cases I have an uncorrupted original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to split a corrupt and an uncorrupted file into 4096-byte chunks (this is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases, and in every case where they did not match, the corrupt chunk was a solid chunk of zeroes (620f0b67a91f7f74151bc5be745b7110 for what it's worth). I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
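
    The split/md5sum comparison described above can also be done in one pass, which makes it easier to record exactly which offsets were zeroed (useful for spotting whether the damage lands on aligned boundaries). A sketch; the file names are placeholders:

        # Compare a known-good copy against the corrupt one in 4096-byte blocks and
        # report differing blocks, flagging those that are entirely zeroes.
        # File names below are placeholders.
        def compare_blocks(good_path, bad_path, block_size=4096):
            with open(good_path, "rb") as good, open(bad_path, "rb") as bad:
                index = 0
                while True:
                    g = good.read(block_size)
                    b = bad.read(block_size)
                    if not g and not b:
                        break
                    if g != b:
                        kind = "all zeroes" if b and b == b"\x00" * len(b) else "differs"
                        print(f"block {index} (byte offset {index * block_size}): {kind}")
                    index += 1

        compare_blocks("original.sqlite", "corrupt.sqlite")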


  • Is there a quick way of undoing a folder change in Far Manager?

    - by Johannes Rössel
    I love Far Manager. However, it has a feature to quickly go to the root directory of a drive with Ctrl+\. I do sometimes need and use this feature, but more frequently I use Ctrl+? to quickly insert the file name under the cursor into the command line. As it so happens, the ? key is located dangerously close to \, which is why I sometimes erroneously go to the root directory (which then is doubly unfortunate since I originally wanted to work with a file in the directory I was in). Now I could probably just redefine Ctrl+\ to do nothing, although I still sometimes need that (it can be replicated with a quick cd\, though). But Windows Explorer, in the wake of the WWW, provided us with a handy directory history and two separate ways of navigating backwards: backwards through the history and backwards through the hierarchy. Is there something quick and easy to get back to the folder I was in? This is less of an issue in C:\Users\Me (still nagging) but more so in deeper hierarchies.


  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregular changes/updates) 'core-model' and one for every user (about 20 persons). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks is arriving - all with the same performance as the old ones. I'm looking for the best way to distribute the data on these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems to be a much better idea to move the WAL files (the pg_xlog directory) to their own partition. http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documentation / benchmarks?


  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus.

    Setup:

    - Source machine: Windows Server 2003 R2 with a local hard drive holding a 40GB VHD file. 1 x 1Gbps network card, Cat6 cable, switch.
    - Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1TB, RAID5). 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine. A second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target.
    - Switches are Netgear JGS524 models (web managed).

    If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive, I get 40GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target), I get 40GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine, I get 40GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box:

        XCOPY <source> <target> /J

    (XCOPY /J copies using unbuffered I/O; recommended for very large files.) So, what's the bit I'm missing here? Why does a back-to-back copy take in total 1 hour, 23 minutes, 32 seconds when a "straight through" copy takes almost 3 times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?
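
    Expressed as throughput, the figures above work out to roughly 15 MB/s and 18 MB/s for the two single-hop copies and about 3 MB/s for the relayed copy - all well below what a 1Gbps link can carry. The arithmetic, as a small sketch using only the times quoted above:

        # Convert "40GB in h:m:s" into MB/s, using the copy times quoted above.
        def mb_per_sec(gigabytes, hours, minutes, seconds):
            return gigabytes * 1024 / (hours * 3600 + minutes * 60 + seconds)

        print(round(mb_per_sec(40, 0, 45, 36)))  # 2003R2 -> 2008R2 local drive: ~15 MB/s
        print(round(mb_per_sec(40, 0, 37, 56)))  # 2008R2 local drive -> iSCSI:  ~18 MB/s
        print(round(mb_per_sec(40, 3, 50, 24)))  # 2003R2 -> iSCSI via 2008R2:   ~3 MB/s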

