Search Results

Search found 16903 results on 677 pages for 'single responsibility'.


  • Rename Active Directory domain following Windows 2000 -> 2008 migration.

    - by ewwhite
    I'm working with a site that needs an internal DNS domain rename. It currently has a DNS name of domain.abc.com and an NT name of ABC. I'm trying to get to a DNS name of abctrading.com and an NT name of ABCTRADING. Split DNS would be used.

    The site originally ran from a single Windows 2000 domain controller hosting AD, file, print, DHCP and DNS services. There was no Exchange system in the environment. The 50 client PCs are all Windows XP, with a handful of users using roaming profiles. All users are in a single OU and there are no group policies/GPOs.

    I'm a Linux engineer, but have been trying to guide another group of consultants to reach a more suitable setup. With the help of this group, we were able to move the single Windows 2000 system to a set of Windows 2008 R2 servers separated into domain controller and file/print systems (virtualized). We are also trying to add an Exchange 2010 system to this mix. The Windows 2000 server was demoted and is no longer in the picture.

    This is the tricky part, as the client wants the domain renamed and the consultants aren't quite sure how to get through it without another 32-40 hours of testing/implementation. They say that there's considerable risk in doing the rename without a completely isolated test environment. However, this rename has to be done before installing Exchange, so we're stuck at this point.

    I'd like to know what's involved in renaming the domain at this point. We're on Windows Server 2008 and the AD is healthy now. Coming from a Linux background, it seems as though there should be a reasonable path to this. Also, since the original domain appears to be a child/subdomain, would that be a problem here? I'd appreciate any guidance.
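
    For context, the supported path on Server 2008 is the domain rename tool (rendom). A rough outline is sketched below; the exact switches should be checked against the rendom documentation for your build, and the sequence still assumes a trial run somewhere, so treat it as a sketch rather than a tested procedure. It also matches the ordering above: the rename has to be finished before Exchange 2010 is introduced.

        rendom /list                     :: dump the forest structure to Domainlist.xml
        :: edit Domainlist.xml, replacing domain.abc.com with abctrading.com (and ABC with ABCTRADING)
        rendom /showforest               :: sanity-check the edited file
        rendom /upload                   :: push the rename script to the domain naming master
        rendom /prepare                  :: verify every DC is reachable and ready
        rendom /execute                  :: perform the rename; DCs reboot, member machines reboot twice
        gpfixup /olddns:domain.abc.com /newdns:abctrading.com /oldnb:ABC /newnb:ABCTRADING
        rendom /clean
        rendom /end

    The DNS zones (and the split-DNS entries) for abctrading.com need to exist before /execute, and the fact that the current name sits under abc.com should be handled by the same procedure, since this is a rename rather than a migration.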

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software. This content is downloaded by users through HTTP. Now free space is coming to an end and we are exploring different ways of extending our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

    1. Add a couple of cheap servers with 4 disks each (RAID 1?) and run some distributed file system on top, like GlusterFS. Pros: hopefully we will see all our disks as a single flat file system, just dump content into it and be done. Cons: could be tricky to configure and to handle faults.

    2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers. Pros: we don't have to deal with RAID, as content is duplicated; a single server failure does not bring down any part of the content; doubled distribution capacity (as any single file can be downloaded from either of the two servers hosting it). Cons: requires some scripting for distributing content and for adding/removing servers (see the toy sketch below).

    Do we miss any other ways? Which of the aforementioned options seems to be the best?
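
    As a rough illustration of how small the placement scripting for option 2 can be, here is a toy sketch. The server names, destination path and use of scp are all assumptions, not a recommendation:

        #!/bin/sh
        # Toy sketch: copy one new file to two randomly chosen storage nodes.
        SERVERS="node1 node2 node3 node4"
        FILE="$1"
        # shuf -n 2 picks two distinct servers from the list
        for s in $(echo $SERVERS | tr ' ' '\n' | shuf -n 2); do
            scp "$FILE" "$s:/srv/content/"
        done

    Tracking which two servers hold each file (and rebalancing when a server is added or removed) is the part that actually needs thought; the copy itself is trivial.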

    Read the article

  • iSCSI performance questions

    - by RyanLambert
    Hi everyone, apologies for the long-winded post in advance... Attempting to troubleshoot some iSCSI sluggishness on a brand new vSphere deployment (still in test). Layout is as such: 3 VSphere hosts, each with 2x 10GB NICs plugged into a pair of Nexus 5020s with a 10gig back-to-back between them. NICs are port-channeled in an active/active redundant fashion (using vPC-mac pinning for those of you familiar with N1KV) Both NICs carry service console, vmotion, iSCSI, and guest traffic. iSCSI is on a single subnet/single VLAN that is not routed through our IP network (strictly layer2) Had this been a 1gig deployment, we probably would have split the iSCSI traffic off onto separate NICs, but the price/port gets rather ridiculous when you start throwing 4+ NICs to a server in a 10gigabit infrastructure, and I'm not really convinced it's necessary. Open to dialogue/tech facts re: this, though. At this point even a single VM guest will boot slowly to iSCSI storage (EMC CX4 on the same Nexus 5020 10gig switches), and restores of VMs from iSCSI take about twice as long as we'd expect them to. Our server folks mentioned that if we split the iSCSI off onto its own NIC, performance seems significantly better. From a network perspective, I've run through the variables I can think of (port configuration errors, MTU problems, congestion etc.) and I'm coming up dry. There really is no other traffic on these hosts other than the very specific test being performed at the time. Important thing to note is that guest traffic works just fine... it seems storage is the only thing affected by whatever gremlin exists. Concluding that we're not 'overutilizing' the network infrastructure since we're doing hardly anything, I'm just looking for some helpful tips/ideas we can use to resolve this... preferably without hurling extra 10gig NICs that are going to sit around 10% utilization while we've got 70+% left on our others.

    Read the article

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
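
    In case it helps the comparison, NIC bonding on Ubuntu is only a few lines of configuration. The sketch below assumes the ifenslave package, interface names eth0/eth1 and balance-alb mode (which needs no switch support); all of that is illustrative, and note that a single client session still tops out at one link's worth of throughput with most modes:

        # /etc/network/interfaces -- minimal bonding sketch (requires the ifenslave package)
        auto bond0
        iface bond0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            bond-slaves eth0 eth1
            bond-mode balance-alb     # adaptive load balancing; no switch-side LACP needed
            bond-miimon 100           # check link state every 100 ms

    Whether that is worth holding out for a dual-NIC board really comes down to how often six clients will saturate the single gigabit link at the same time.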

    Read the article

  • Connecting multiple access points

    - by mohsen farahanipoor
    I'm working on a big project. We want to create a wireless network throughout a building with 15 floors. My idea is that we should set up at least one unified wireless access point on each floor; where the signal attenuates, we would use an access point extender/repeater. I selected the DWL-6600AP from among D-Link's industrial access points. I want to implement a single wireless LAN throughout the building. Is it possible to combine multiple DWL-6600 access points to achieve just a single WLAN? Can a wireless switch controller do this task? Can these access points interfere with each other? What is the solution? I read the learning materials on D-Link's website, but I am still confused. My other question is about connecting these APs to the wireless switch controller: is it possible to use power line networking to connect the DWL-6600s to the wireless switch controller device? My main goal is that clients with portable devices such as laptops should be able to connect to the network easily, to share and communicate without any further manual configuration, since they are all on a single network.

    Read the article

  • ProCurve ACL to prevent a subnet from leaving the switch

    - by kce
    I have a single HP ProCurve 2610 in a remote location that is connected to the rest of the network via SHDSL. There are two layer-3 networks on this segment. ACLs are set up to deny one subnet (192.0.2.0/24) from ever being able to leave the switch, by virtue of being applied to the port attached to the upstream connection. The other subnet should be permitted to leave the switch freely. Both subnets are on the same VLAN. Unfortunately, sFlow very clearly shows broadcast traffic from 192.0.2.0/24 on the upstream connection. ProCurve ACLs are not my strong suit, but I feel like I'm missing something very simple here.

        ip access-list extended "Filter for Camera Network"
           deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
           permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
           exit
        interface 24
           name "DSL - UPLINK"
           access-group "Filter for Camera Network" in
           exit

    Unless I am mistaken, traffic from 192.0.2.0/24 should be dropped as it crosses the uplink port (int 24), whereas all other traffic will be permitted by the default allow rule that follows. What exactly am I missing here?

    EDIT: Firstly, why do you have two subnets contained in the same VLAN? Because that's how it was configured by a previous administrator, and while it makes conceptual sense that a single subnet is "mapped" to a single VLAN, there's no technical constraint that I am aware of that makes this have to be the case. Instead of filtering inbound traffic on your uplink, you should be filtering outbound traffic. The HP 2600 series can only filter inbound traffic on interfaces. Should I change my filter to deny any to 192.0.2.0/24?
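
    If filtering has to stay inbound-only, one direction worth sketching is to apply the deny where the camera subnet enters the switch rather than on the uplink. The ACL below reuses the syntax already shown above; the port range 1-8 and the extra camera-to-camera permit are assumptions, and it would need testing so that local camera traffic (including DHCP, if used) still works:

        ip access-list extended "Camera No-Exit"
           permit ip 192.0.2.0 0.0.0.255 192.0.2.0 0.0.0.255
           deny ip 192.0.2.0 0.0.0.255 0.0.0.0 255.255.255.255 log
           permit ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
           exit
        interface 1-8
           access-group "Camera No-Exit" in
           exit

    Changing the existing uplink filter to deny traffic destined to 192.0.2.0/24 would only block the return path. The broadcasts seen in sFlow enter the switch on the access ports and merely leave through port 24, which would explain why an "in" ACL on port 24 never touches them.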

    Read the article

  • DVI splitter not working as expected/confusion between DVI-D and -I

    - by Freakishly
    Hey guys, thanks for looking. I have an ATI FirePro™ V3700 in my desktop machine, and I have been running a dual-monitor setup quite effortlessly, thanks to the two DVI ports on the card. I came upon a third monitor, and wanted to extend my desktop to 3 screens, so I purchased a DVI splitter from Amazon. Now, I can only duplicate the second monitor onto the third, not extend it. I've tried all possible combinations of input to no avail. Here's the setup: The ATI FirePro™ V3700 has two Dual-Link DVI-I outputs The splitter splits a single Dual-Link DVI-I port into two Dual-Link DVI-I outputs Two of the monitors are NEC E222W, and the third monitor is a Dell 2001FP. Each monitor has one D-Sub and one Dual-Link DVI-D input. Cables going from the video card to the monitors are two Dual-Link DVI-D to the NECs and one Single-Link DVI-D to the Dell. Is the problem likely with the DVI-D/DVI-I mismatch? Or is it with the cable on the Dell that is only a Single-Link? The cables are easily replaceable, the monitors not so much. Thanks for your time, I really appreciate it. http://www.amd.com/us/products/workstation/graphics/ati-firepro-3d/v3700/Pages/v3700-specs.aspx http://www.amazon.com/Cables-Unlimited-DVI-D-Splitter-PCM-2260/product-reviews/B000H09RFM/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1 www dot newegg dot com/Product/Product.aspx?Item=N82E16824002495 accessories dot us dot dell dot com/sna/PopupProductDetail.aspx?cs=19&l=en&c=us&sku=320-1578 Apologies for the fudged links, I'm new here and they won't let me post more than two :P

    Read the article

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:

    - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
    - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
    - a security StackExchange question which takes it as read that NOPASSWD is required
    - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
    - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't". My thoughts exactly, but then how to extend beyond a single server?
    - Ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing a password across hosts, or enabling root SSH login all seem like rather drastic reductions in security.
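
    One pattern that avoids all three of those compromises is giving each host its own sudo password as an inventory variable and encrypting those files with ansible-vault (which ships with Ansible 1.5 and later). The variable name and layout below are a sketch: on older releases the variable is ansible_sudo_pass, on newer ones ansible_become_pass, so check which your version honours.

        # host_vars/host1.yml   (encrypt with: ansible-vault encrypt host_vars/host1.yml)
        ansible_sudo_pass: "host1-secret"

        # host_vars/host2.yml
        ansible_sudo_pass: "host2-secret"

    Runs then need only the vault password, e.g. ansible-playbook -i ansible_hosts site.yml --ask-vault-pass, and no host ends up with NOPASSWD or root logins.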

    Read the article

  • Weird Apache behaviour and with files again

    - by afifio
    Hi, and thanks for stopping by. I have read "Weird Apache problem with file" ... and it's not the problem.

    Setup: a single XAMPP installation on Windows, a single Windows user, two hard disks, one of which is a portable USB drive. All was fine until I moved the XAMPP install to the new portable drive.

    Symptoms: old PHP files work fine, new ones don't.
    - http://127.0.0.1/Ajax/index.php - yay
    - http://127.0.0.1/test2/t.php - displays the source code
    - http://127.0.0.1/Ajax/test2/t.php - displays the source code
    - http://127.0.0.1/Ajax/t.php - displays the source code

    Extra info: IIS plus the MS web development stack (.NET 4, ASP, etc.) has been installed and the machine hasn't been rebooted yet. .htaccess also seems not to work. The Apache conf file was modified to AllowOverride All and it still doesn't care. One of the directories is supposed to treat .htm as PHP, yet I got plain text; I created another directory and added a phpinfo file, still plain text; browsed to phpMyAdmin and, voilà, it works fine.

    Suspects: Does Apache honour XP security and permissions? If so, this is a single-user computer. Does Apache dislike my new hard disk/new location? Why does it not execute PHP in the new directories but happily execute it in the old folders? Thanks for any answers to this riddle.
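
    One thing worth checking after a move between drives, offered only as a guess and with example paths, is that the PHP handler and module lines in XAMPP's Apache configuration still point at the new drive letter, and that the directory meant to run .htm as PHP still has its handler mapping:

        # apache\conf\extra\httpd-xampp.conf -- example lines only; adjust the drive letter/paths
        LoadModule php5_module "E:/xampp/php/php5apache2_2.dll"
        PHPIniDir "E:/xampp/php"
        AddHandler application/x-httpd-php .php .htm

    If the module line were broken Apache would usually refuse to start, so the more likely culprits for "source code displayed" are a missing AddHandler/AddType mapping for the affected directories, or an .htaccess that is being ignored because AllowOverride isn't applied to the new DocumentRoot path.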

    Read the article

  • Why database partitioning didn't work? Extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that the performance is non-optimal. Then the DBA suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk, single partition and saves the day. The size of the disks was not mentioned, but given today's typical disk sizes (of the order of 100 GB) the partitions would be huge, so it surprises me that a single disk with a single partition outperformed. Initially I suspected that the data was segregated and hence reads were faster. But how come the performance didn't degrade as time went by, with all the inserts and updates happening? Saw this on reddit, but the explanation there was largely spindle/platter centered, and there was no mention of this in the article. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning); this would increase fetch times. Any thoughts?

    Read the article

  • Rackspace Ubuntu 12.04 server stuck in initramfs after kernel upgrade

    - by Znarkus
    Can't boot after I did an aptitude full-upgrade and let it update menu.lst (did a diff first and it looked good). This is what I've done so far in the BusyBox shell:

        mkdir /tmp/xvda1
        mount /dev/xvda1 /tmp/xvda1
        chroot /dev/xvda1
        nano /boot/grub/menu.lst

    This file looks like this:

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-31-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-31-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-31-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-virtual (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-virtual root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-virtual

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro quiet splash
        initrd /boot/initrd.img-3.2.0-24-generic

        title Ubuntu 12.04.1 LTS, kernel 3.2.0-24-generic (recovery mode)
        root (hd0,0)
        kernel /boot/vmlinuz-3.2.0-24-generic root=UUID=/dev/xvda1 ro single
        initrd /boot/initrd.img-3.2.0-24-generic

        title Chainload into GRUB 2
        root (hd0,0)
        kernel /boot/grub/core.img

        title Ubuntu 12.04.1 LTS, memtest86+
        root (hd0,0)
        kernel /boot/memtest86+.bin

    From what I remember, the upgrade added the UUID= string. Should I remove these? Or rather, how do I get my system back online again? Thanks.

    Update: Seems like I can't even edit the file. [ Error writing /boot/grub/menu.lst: Read-only file system ]
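
    Two small things stand out from the transcript above, noted here as a sketch of a next step rather than a verified fix: the chroot targets the device node instead of the mount point, and the filesystem is mounted (or has fallen) read-only, which explains the nano error:

        mount -o remount,rw /dev/xvda1 /tmp/xvda1   # make the mounted filesystem writable
        chroot /tmp/xvda1 /bin/sh                   # chroot into the mount point, not /dev/xvda1
        nano /boot/grub/menu.lst                    # edits should now be saved

    Whether root=UUID=/dev/xvda1 is what actually stops the kernel from finding its root (UUID= normally expects a filesystem UUID, not a device path) is a separate question, but it is at least worth comparing against the entries that booted before the upgrade.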

    Read the article

  • Relayout LVM Disk

    - by Tom
    I have an Ubuntu 11.10 system with two 500GB disks. The partition tables look like this:

        /dev/sda1   primary    465.52GB
        /dev/sda2   extended   243.17MB
          -> /dev/sda5  logical  243.14MB
        /dev/sdb1   primary    465.76GB

    sda1 and sdb1 are in a single LVM volume group containing a single logical volume containing a single filesystem, which is mounted as /. sda5 is mounted as /boot.

    The problem comes when I want to upgrade to Ubuntu 12.04, which requires at least 247MB free on /boot. So I need to reduce the size of sda1 so that I can increase the size of sda2 and sda5. How the heck do I do that? I can find how to shrink the logical volume, but I'm not at all clear on how to clear out the end part of sda1 so that I can reduce the physical volume. Does pvresize just deal with this automagically? Or is that wild wishful thinking? I guess the alternatives are to back everything up onto something or other and recreate the thing from scratch, or find out whether GRUB2 supports using LVM for /boot.
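
    For what it's worth, the usual order of operations when pulling space off the end of a PV looks roughly like the sketch below. It has to be done from a live/rescue environment because / is in use, the volume names and sizes are placeholders, and pvresize does not move data for you (if used extents sit beyond the new end, they have to be pvmoved first). A verified backup comes before any of it:

        e2fsck -f /dev/vgname/root                          # filesystem must be clean first
        resize2fs /dev/vgname/root 440G                     # 1. shrink the filesystem
        lvreduce -L 445G /dev/vgname/root                   # 2. shrink the LV, leaving a safety margin
        pvresize --setphysicalvolumesize 460G /dev/sda1     # 3. shrink the PV (pvmove first if it refuses)
        # 4. shrink the sda1 partition and grow sda2/sda5 with fdisk or parted,
        #    then run resize2fs again to grow the filesystem back out to the LV size

    The margins between the filesystem, LV and PV sizes are deliberate: each container must never end up smaller than what it holds, or data is lost.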

    Read the article

  • How to get just value from database query in Excel?

    - by Corin
    I'm creating a spreadsheet as a collection point for information from a number of MS Access databases. I will run a query on each database to get a count of records in a particular table. Each database has the same structure but different content, as they are used in different situations. So the query returns a single value, rec_count. I've figured out how to create that query, save it and then use it as the data source. So far so good. The problem is that Excel treats the query results as a table, so instead of getting just the single value the query returns, I also get the field name. Thus the result takes up two cells instead of one. When linking in the data source, I only see Table, PivotTable Report and PivotChart as options for viewing the data. I don't want any of those. I just want the single value without any formatting, column headers, etc. Is there a way to do this in Excel 2007?

    Read the article

  • Port forwarding: combining several ports

    - by kiraitachi
    Hi, I've got a Raspberry Pi at A.A.A.B on my local network and I have set up a DMZ on my router so that any incoming traffic that reaches the router gets redirected to the Pi, which I can connect to via a NO-IP address. The problem is that I want to set up port forwarding instead, since I've got several services running on the Pi: SSH, a torrent web GUI, a web album, etc. I had this set up a long time ago, but I've forgotten the syntax a bit and can't manage to set it up again. The router help says:

    The Application allows you to do port forwarding, but only have the ports open when data is flowing out of the trigger ports. When a program sends data out on outgoing ports called trigger ports, the device then allows incoming data on the open ports specified in your port triggering configuration.

    1. Trigger Port Start: specify the start port on the device that would trigger the device to open ports for incoming data.
    2. Trigger Port End: specify the end port on the device that would trigger the device to open ports for incoming data. You can enter a port number the same as the trigger port start, or enter a larger port number to specify a port range.
    3. Trigger Traffic Protocol Type: select the trigger traffic type.
    Open Port: specify all the ports to be opened. Its content could be: a single port only; a port range only (start and end open port numbers separated by "-"); or a combination of several single ports and several port ranges, each separated by ",".
    Open Traffic Protocol Type: select the open traffic type.

    These are the fields: http://es.tinypic.com/view.php?pic=n5lv1k&s=8

    I think the syntax is 1-7999,8001-9090,9092-65535, but each time I try to add it, it gives me an error. Any ideas?

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and it has passed the test suite. If possible I'd love to have the ability to configure it to stop after x total runs or if the entire batch has taken over y hours. I've tried using Visual Studio's built in test framework since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available and I'm open to any other software that would get the job done presuming it runs on a 64-bit Windows OS. Any suggestions?

    Read the article

  • Nginx and Wordpress side-by-side with static directory alias?

    - by user117161
    I'm an Nginx novice, but I have it set up with WordPress Multisite (subdirectories) and php-fpm, and it's working great as is. This lets me set up WordPress sites off the web root:

    - domain.com/site1 - a WordPress network single site, which renders as expected
    - domain.com/site2 - ditto
    - etc.

    Concurrently, I can easily create static files in the web root that don't conflict or interact with WordPress, and they are also rendered normally:

    - domain.com/hello.html - rendered normally
    - domain.com/hello.php - rendered normally, including PHP processing
    - domain.com/static/hello.php - rendered normally (as long as "static" isn't a WP single-site name)

    What I'd like to do, and this is where I'm out of my depth with nginx.conf, is create a root directory domain.com/static and put static sites in there:

    - domain.com/static/site3
    - domain.com/static/site4

    and have Nginx check each request that comes in at the root. A request comes in for domain.com/site3; before handing off to WordPress, Nginx checks whether it exists in the /static folder (domain.com/static/site3 - static content exists there) and if so, serves that content while maintaining the root URI, i.e. it serves domain.com/site3 with the content from domain.com/static/site3. If not, it lets WordPress check whether /site3 is a WordPress single network site, as it does now, and the process continues normally.

    In nginx.conf, in the server section, I start with this try_files rule:

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

    I then include a bunch of WordPress-specific rules as identified at http://codex.wordpress.org/Nginx under the subdirectory section. I can see that rewrite rules might take care of it easily, but in my experimentation I've only achieved a bunch of looping (/static/static/static, etc.) and managed to bypass WordPress when the looping stopped. Sorry if this is a very long-winded way of asking a simple question, but I'm definitely learning some of this stuff for the first time. Thanks!
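
    A minimal sketch of the "check /static first, then fall through to WordPress" idea, untested against the multisite rules and with the /static prefix plus the index.html fallback as assumptions, is to add the prefixed paths to the front of the existing try_files list rather than using rewrites (which is where the /static/static looping tends to come from):

        location / {
            # look for the file under /static first, then a static index,
            # then the literal path, then hand off to WordPress
            try_files /static$uri /static$uri/index.html $uri $uri/ /index.php?q=$uri&$args;
        }

    With this, domain.com/site3 would be served from /static/site3/index.html while keeping /site3 in the address bar, and asset paths like /site3/css/main.css resolve through the first pattern; anything with no match under /static behaves exactly as before.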

    Read the article

  • How to pass an enum to Html.RadioButtonFor to get a list of radio buttons in MVC 2 RC 2, C#

    - by Matt W
    Hi, I'm trying to render a radio button list in MVC 2 RC 2 (C#) using the following line:

        <%= Html.RadioButtonFor(model => Enum.GetNames(typeof(DataCarry.ProtocolEnum)), null) %>

    but it's just giving me the following exception at runtime:

        Templates can be used only with field access, property access, single-dimension array index, or single-parameter custom indexer expressions.

    Is this possible and if so, how, please? Thanks, Matt.

    Read the article

  • Sharepoint: Convert a SPFieldMultilineText to SPFieldText

    - by driAn
    Hi, is it possible to programmatically change a "multi-line text field" to a "single-line text field"?

        SPFieldMultiLineText field = list.Fields["sample"] as SPFieldMultiLineText;
        // how to change the type to 'single line' now?

    Or do I need to create an additional field (with a similar name) and migrate the content? Thanks for any help.

    Read the article

  • replace symbol in javascript

    - by Jin Yong
    Does anyone know how I can replace the two symbols below in a string, in code?

    - straight single quote ' into a left single quotation mark ‘
    - straight single quote ' into a right single quotation mark ’
    - straight double quote " into a left double quotation mark “
    - straight double quote " into a right double quotation mark ”

    Read the article

  • External iPhone Cryptography Libs

    - by AO
    Are there any legal problems using external crypto libs in my iPhone application? I know that Apple has to comply to US cryptography export rules but do I as a developer have any responsibility? How does it work?

    Read the article

  • singleton vs factory?

    - by fayer
    I've got 3 Log classes that all implement the ILog interface: DatabaseLog, FileLog and ScreenLog. There can only be one instance of each. Initially I thought of using the singleton pattern for each class, but then I thought: why not use a factory for instantiation instead? Then I won't have to implement the singleton pattern for each of them and for all future Log classes, and maybe someone will want them as multiple objects in the future. So my question is: should I use the factory or the singleton pattern here?

    Read the article

  • Does EF 4 Code First's ContextBuilder Dispose its SqlConnection?

    - by Eric J.
    Looking at Code First in ADO.NET EF 4 CTP 3 and wondering how the SqlConnection in their walkthrough is disposed. Is that the responsibility of ContextBuilder? Is it missing from the example?

        var connection = new SqlConnection(DB_CONN);
        var builder = new ContextBuilder<BloggingModel>();
        using (var ctx = builder.Create(connection))
        {
            //...
        }

    Read the article

  • Do Programmers create BUGS?

    - by Diallo
    Recently I had a not-very-friendly discussion with a client, who stated that "he can't pay for fixing bugs in the application I've built for him". His reason is simple: "I can't pay for bugs that you yourself created." He thinks that a bug in an application originates with its author, so it should be the author's responsibility to fix it. Do you share that idea?

    Read the article

  • Silverlight OOB Application Path Portability

    - by jws
    Silverlight Out-of-browser applications get installed to a seemingly random location: AppData\LocalLow\Microsoft\Silverlight\OutOfBrowser\2333572144.www.microsoft.com for example. Currently, I am simply storing this path, which works perfectly well on a single machine and a single install, but how can I refer to this application between different installations?

    Read the article
