Search Results

Search found 4103 results on 165 pages for 'party mcfly'.

  • What do I need in order to extract and combine text files from multiple ZIP files via the command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There are some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:

        C:\Reports\FirstJob-1.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\FirstJob-2.zip
        |-MyName
          |-FirstJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

        C:\Reports\SecondJob-1.zip
        |-MyName
          |-SecondJob
            |-1
              |-[Some other folders]
              |-TXTReports
                |-English
                  |-[Some other files]
                  |-Report.txt

    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like type filename.txt >> Consolidated.txt on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows; I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
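
    For illustration, a minimal PowerShell sketch of one approach, assuming the WinZip Command Line Support Add-On (wzunzip.exe) is installed; the install path and the staging folder are assumptions, not details from the question:

        # Extract every archive into a uniquely named staging folder (wzunzip's
        # -d switch recreates the folder structure stored in the archive), then
        # append each Report.txt to a single consolidated file.
        $staging = 'C:\Reports\Staging'
        Get-ChildItem 'C:\Reports' -Filter *.zip | ForEach-Object {
            $dest = Join-Path $staging $_.BaseName
            New-Item -ItemType Directory -Path $dest -Force | Out-Null
            & 'C:\Program Files\WinZip\wzunzip.exe' -d $_.FullName "$dest\"
        }
        Get-ChildItem $staging -Recurse -Filter Report.txt |
            ForEach-Object { Get-Content $_.FullName } |
            Add-Content 'C:\Reports\Consolidated.txt'

    Because each archive lands in its own folder named after the ZIP, the identically named Report.txt files never collide.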

  • Joining two routers together, but I have no access to the second router, although I know its IP address and gateway

    - by JohnnyVegas
    I have temporarily moved into a rented apartment for 4 months, which has wireless. The trouble I am having is that the access points here are Wi-Fi only with no RJ45, and I need RJ45 to connect some equipment that I am working with. I have purchased an RT-N66U, installed Tomato (Shibby ver. 1.28), and successfully replaced the existing access point, but now I want to re-enable the access point that I replaced, as it links wirelessly to 3 others. Can I plug a cable from that access point into my RT-N66U and have it reach the internet via my router? I have no admin access to the existing wireless access point, and I don't want to reset it as it's not mine. There is another router situated in the roof somewhere which I also have no access to, but it supplies my RT-N66U with internet, so I most definitely have a double NAT; that isn't the best way of doing things, but I am limited in what I can do. Any suggestions on routing tables, VLANs, etc. would be helpful. I have no prior experience in these fields, but I know the Tomato firmware can cater for this. My router is set to IP 10.0.1.1 and its DHCP range is 10.0.1.100-200. The wireless access point's address was 192.168.1.2, assigned by the router in the roof, which has the address 192.168.1.1. There is a cable from that router going to a wall socket, to which my RT-N66U is now attached via its WAN port. I understand it's scruffy and it isn't the way to do things, but I have tried to ask for the admin details, and as the wireless network is looked after by a third party and nobody knows their details, I am stuck with this dilemma. I could buy three wireless access points and replace the existing ones, but that isn't what I want to do, and although I have installed plenty of DD-WRT wireless repeater bridges, they simply don't work here for some unknown reason. The phone line here is very noisy too, I don't have the rights to install ADSL in a building that isn't mine, and 3G coverage isn't good enough either. Thanks for your time

  • Apache+PHP on Windows Server 2008

    - by Álvaro G. Vicario
    I've installed Apache/2.2 and PHP/5.3 lots of times under Windows XP, Windows Vista and Windows Server 2003. The official *.msi installers work fine and configure everything. Now I need to install them on a Windows Server 2008 R2 Standard 64-bit box and I'm facing nothing but problems: there are no official 64-bit binaries for Apache, and no 64-bit binaries at all for PHP (official or third-party). That's alright, I'll make do with good old 32 bits, but it's kind of surprising. Official documentation is vague, generic and completely unaware of UAC or any recent Windows security feature. The PHP installer is unable to configure mod_php, and the Apache installer is unable to configure... well, Apache. After three hours I've finally reached the point where I'm installing everything in the root folder and assigning full control access to all users on all files and directories, and all I've got is a PHP-less Apache server that's able to serve static pages. So I guess it's time to stop and think. My question is: has anyone installed an Apache+PHP production server under Windows Server 2008 in a serious, secure and reliable way and documented the whole process? Or should I just find a bundle like XAMPP and the like that requires no installation? === EDIT === I've installed XAMPP Lite 1.7.3 and everything was working in 5 minutes. I'd still like to find some documentation about installing the original packages: XAMPP installs tons of stuff I don't need and offers no tool to enable and disable PHP extensions.
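
    For reference, wiring PHP into Apache by hand (skipping both installers) usually comes down to a few httpd.conf lines. This is a sketch assuming the thread-safe PHP 5.3 ZIP build was unpacked to C:\php; the paths are assumptions, not details from the question:

        # Load PHP 5.3 as an Apache 2.2 module (the DLL name is specific to
        # the Apache 2.2 builds of PHP) and point PHP at its php.ini folder.
        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"
        DirectoryIndex index.php index.html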

  • Terminal server performance over high latency links

    - by holz
    Our datacenter and head office is currently in Brisbane, Australia, and we have a branch office in the UK. We have a private WAN with a 768k link to our UK office, and the latency is about 350ms. The terminal server performance is really bad. Applications that don't have much animation or any images seem to be okay, but as soon as they do, the session is almost unusable. PowerPoint and Internet Explorer are good examples of apps that make it run slowly, and if there is an image in your email signature, Outlook will hang for about 10 seconds each time a new line is inserted, while the image gets moved down a few pixels. We are currently running Server 2003. I have tried Server 2008 R2 RDS, and also a third-party solution called Blaze by a company called Ericom, but it is still not much better. We currently have a five-level dynamic class of service with priority in the following order:

        VoIP
        Video
        Terminal Services
        Printing
        Everything else

    When testing the terminal server performance, we monitored the link using NetFlow and had plenty of bandwidth available, so I believe it is a latency issue rather than a bandwidth one. Is there anything that can be done to improve performance? Would Citrix help at all?

  • Options for synchronizing Palm Desktop calendars

    - by Al Everett
    My wife and I each have Palm Centro phones and very full calendars. We've been using Palm products for years, are happy with them, and are not looking to switch (we don't have the budget for it even if we wanted to). What is a viable way we can synchronize our calendars? Back when we had Palm Z22s we used AirSet, and it worked great for synchronizing our desktop calendars. Unfortunately, their sync software does not support Palm Desktop v6 and there is nothing in the pipeline to support it (the third-party vendor is apparently not interested in updating for the newer Palm Desktop). I would love to get back to having each other's appointments appear on the other's calendar. What can we do? Some limitations:

        A data plan is not an option (so this precludes over-the-air synchronization)
        No Microsoft Outlook

    Edit: Syncing my Palm to my Palm Desktop is not the issue. Being able to sync, in some way, the Palm calendar databases of my wife's and my calendars is what's desired.

  • How to monitor a process hogging CPU on a remote server

    - by sergeb
    I have a remotely hosted (virtual, VMware) dedicated server (Windows Server 2008 Web edition w/ SP1) that I can only connect to over Remote Desktop. Lately, a process hogs the CPU for ~40 minutes almost every day (at a random hour) and brings all web sites on the server down. While this is going on I also cannot connect using Remote Desktop to investigate what that process is. Promptly after 40 minutes I can RD in, and the first thing I see in Performance Monitor is that something was topping the CPU at 100% and stopped just before I was able to RD. I'm aware of the beginning and end of each episode because I have monitors set up that email me the up/down status of the web sites, but I'm locked out while it's happening - I can't RD to the server until it's over (and by then it's too late to see the Task Manager/Process Explorer picture). What is the best way/tool to set up on the server to continuously monitor all processes, so that when this happens I can log in and "replay" it to find the process causing the trouble? (I have no control over the virtual/VMware setup, as it is hosted by a 3rd party, but I have nearly full control over my dedicated machine.) Thanks in advance!
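
    For illustration, one low-tech way to get that "replay" is a PowerShell loop, run as a scheduled task, that snapshots the heaviest processes to a log file every minute; a minimal sketch, with the log path and the interval as assumptions:

        # Note: the CPU property is cumulative CPU seconds, not an instantaneous
        # percentage, but a process pinning a core shows a rapidly growing value.
        while ($true) {
            $stamp = Get-Date -Format s
            Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 |
                ForEach-Object { "$stamp`t$($_.Name)`t$($_.Id)`t$([math]::Round($_.CPU, 1))s" } |
                Add-Content 'C:\PerfLogs\cpu-snapshots.log'
            Start-Sleep -Seconds 60
        }

    Alternatively, a Performance Monitor Data Collector Set on Server 2008 can record per-process counters continuously without any scripting.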

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and also which ones you would not recommend. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are currently facing is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution, so very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple, and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked some at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: what are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

  • Windows roaming profile when creating a new user profile

    - by molecule
    Hi all, When a particular user is having a lot of problems with Windows XP (e.g. applications crashing, or applications which used to work becoming unresponsive), and as a general troubleshooting practice for a domain user, I normally rename that user's old profile and get him/her to log on to create a "fresh" profile (on the same PC). More often than not, this will solve the problem, albeit with some reconfiguration (Outlook, Excel add-ins, etc.). As I took over the systems admin role from another administrator, I would like to know the easiest way to find out (either through a third party or some Windows administrative tool) what settings are carried over if the profile is a roaming profile. I tested creating a new user profile for one of my users, and it seems basic Outlook settings such as the user's mailbox and PSTs are carried over automatically when I create a new user profile. I suspect this is done through a batch file loaded as part of the login script. However, my knowledge of scripting is limited, and I don't want any corruption to be carried over to the new profile. Can someone share their experiences on this? Thanks in advance.

  • Throttle CPU usage consumed by a process

    - by Brett Powell
    We run a game-server company where we basically have large amounts of customers sharing a single machine, each on their own instance of a Java process (Minecraft) managed by our web control panels. In the last few game updates released, we have noticed that many of the third-party plugins our customers use are poorly written, and we frequently see huge CPU spikes from certain servers until we manually kill the process. Our game panel automatically restarts processes, so killing them is not really an issue. Our problem is that once one of these servers starts consuming 50%+ CPU usage, it takes at least 5 minutes to RDP into the machine, locate who it belongs to, shut it down and notify them. Are there any current solutions for Server 2008 which allow for the throttling of CPU usage or, worst case, just auto-kill a process stuck using that much? As Minecraft is essentially a single-threaded application, we have investigated using affinity, although with the variations in our packages and fluctuations in usage, this doesn't work well for us. Some option to throttle the maximum usage a process can consume would be perfect, or at least the option to kill a process using that much. Thanks!
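
    For illustration, a minimal PowerShell watchdog sketch that samples every java.exe twice and force-kills any instance whose CPU-time delta implies a sustained share above a threshold; the interval and threshold values are assumptions. (Windows System Resource Manager, included with Server 2008 Enterprise/Datacenter, may be the scripting-free route for capping per-process CPU.)

        $interval = 60      # seconds between the two samples
        $threshold = 50     # sustained CPU share (%) that triggers a kill
        $before = @{}
        Get-Process java -ErrorAction SilentlyContinue |
            ForEach-Object { $before[$_.Id] = $_.CPU }
        Start-Sleep -Seconds $interval
        Get-Process java -ErrorAction SilentlyContinue | ForEach-Object {
            if ($before.ContainsKey($_.Id)) {
                # CPU is cumulative seconds; delta/interval approximates core share,
                # which is meaningful here because Minecraft is single-threaded.
                $share = (($_.CPU - $before[$_.Id]) / $interval) * 100
                if ($share -gt $threshold) {
                    Stop-Process -Id $_.Id -Force   # the panel restarts it automatically
                }
            }
        }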

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base, and have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from locations geographically distant from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native CF upload tag and a third-party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (< 10MB) are more likely to succeed than large ones. I tried the test upload with both IE and FF and did notice a difference in the way the browsers seemed to handle packet errors: IE seemed to have a tough time continuing an upload after dropped / bad packets, whereas FF seemed to be able to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error - a different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet sniffing may lead to a smoother upload)? Thanks.

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our websites' source code on Ubuntu 8.04 (dir: /var/www/websites); what we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live / public environment. We use this command: rm -r websites This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact, these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, and then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?

        Our .tar backup does not include these files
        We do not want to use the shred command
        We do not want to use 3rd-party applications
        The solution should be functional via terminal (SSH)

    Many thanks! EDIT Er... we fixed it. Turns out the files that were reappearing came from a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)

  • Apache2 with lighttpd as proxy

    - by andrzejp
    Hi, I am using apache2 as my web server, and I would like lighttpd to help it out as a proxy for static content. Unfortunately, I cannot get lighttpd and apache2 set up correctly. (OS: Debian) The important parts of lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_proxy",
            "mod_status",
        )
        server.document-root = "/www/"
        server.port = 82
        server.bind = "localhost"
        $HTTP["remoteip"] =~ "127.0.0.1" {
            alias.url += (
                "/doc/"    => "/usr/share/doc/",
                "/images/" => "/usr/share/images/"
            )
            $HTTP["url"] =~ "^/doc/|^/images/" {
                dir-listing.activate = "enable"
            }
        }

    I would like to use lighttpd on only one site, operating as a virtual directory on apache2. Configuration of this virtual directory:

        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass /images http://0.0.0.0:82/
        ProxyPass /imagehosting http://0.0.0.0:82/
        ProxyPass /pictures http://0.0.0.0:82/
        ProxyPassReverse / http://0.0.0.0:82/
        ServerName MY_VALUES
        ServerAlias www.MY_VALUES
        UseCanonicalName Off
        DocumentRoot /www/MYAPP/forum
        <Directory "/www/MYAPP/forum">
            DirectoryIndex index.htm index.php
            AllowOverride None
            ...

    As you can see (or not ;)) my site physically lives at /www/MYAPP/forum, and I would like lighttpd to handle the folders:

        /www/MYAPP/forum/images
        /www/MYAPP/forum/imagehosting
        /www/MYAPP/forum/pictures

    and leave the rest (the PHP scripts) to apache. After starting lighttpd and apache2 everything runs, but no images from these locations show up. What is wrong?
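
    For what it's worth, two things in the config above stand out: Apache proxies to http://0.0.0.0:82/ while lighttpd binds to localhost, and every ProxyPass maps its prefix to lighttpd's root, so lighttpd never sees the /images, /imagehosting or /pictures paths. A hedged sketch of one possible correction (directory paths assumed from the question):

        # Apache side: proxy to the loopback address lighttpd actually listens on,
        # and keep the URL prefix so lighttpd can tell the three folders apart.
        ProxyPass        /images       http://127.0.0.1:82/images
        ProxyPassReverse /images       http://127.0.0.1:82/images
        ProxyPass        /imagehosting http://127.0.0.1:82/imagehosting
        ProxyPassReverse /imagehosting http://127.0.0.1:82/imagehosting
        ProxyPass        /pictures     http://127.0.0.1:82/pictures
        ProxyPassReverse /pictures     http://127.0.0.1:82/pictures

        # lighttpd side: map those URL prefixes onto the real directories.
        alias.url += (
            "/images/"       => "/www/MYAPP/forum/images/",
            "/imagehosting/" => "/www/MYAPP/forum/imagehosting/",
            "/pictures/"     => "/www/MYAPP/forum/pictures/"
        )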

  • New virtualization project and old SAN

    - by Chris
    Hi, We'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part: we need some sort of shared storage to enable live motion and other HA features. We have an IBM DS4300 SAN that we can use for that, but since it has been in production since 2005, I'm not sure about giving such a critical role to a 5-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with no or few problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example, IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust a 3rd-party reseller? Thanks, Chris

  • Is Windows Server 2008 R2's NAP solution for NAC (endpoint security) valuable enough to be worth the hassles?

    - by Warren P
    I'm learning about Windows Server 2008 R2's NAP features. I understand what network access control (NAC) is and what role NAP plays in that, but I would like to know what limitations and problems it has that people wish they had known before they rolled it out. Secondly, I'd like to know if anyone has had success rolling it out in a mid-size environment (a multi-city corporate network with around 15 servers and 200 desktops) with mostly (99%) Windows XP SP3 and newer Windows clients (Vista and Win7). Did it work with your anti-virus? (I'm guessing NAP works well with the big-name anti-virus products, but we're using Trend Micro.) Let's assume that the servers are all Windows Server 2008 R2. Our VPNs are Cisco gear and have their own NAC features. Has NAP actually benefited your organization, and was it wise to roll it out? Or is it yet another item on the long list of things that Windows Server 2008 R2 does, but that you're probably not going to want to use even if you do move your servers up to it? In what particular ways might the built-in NAP solution be the best one, and in what particular ways might no solution at all (the status quo pre-NAP) or a third-party endpoint security or NAC solution be considered a better fit? I found an article where a panel of security experts in 2007 said NAC is maybe "not worth it". Are things better now in 2010 with Windows Server 2008 R2?

  • When to use TrueCrypt, and when not to?

    - by tm77
    I have about 30 unencrypted laptops (this number will most likely grow to 50 or more over the next few years) that I have been tasked to encrypt (entire drive). These machines will be used off-site regularly by my users. They are running Windows 7 and XP (about 50/50), but more Windows 7 every month. I have experience with TrueCrypt and have had no issues; it appears to be THE free solution. My concern with TrueCrypt is that my users will need 2 passwords to log in to their machines. Also, I need to choose either to have 1 password for my whole organization, or to carefully document each machine's password (a management nightmare). In my mind, choosing between a managed and a free encryption solution is primarily based on the NUMBER of machines that will be encrypted and supported. Two questions:

        From a management standpoint, what is the tipping point (in number of users) where a managed solution would pay for itself over TrueCrypt?
        What are some good third-party solutions? (I will consider BitLocker, but the price of upgrading the Windows 7 licenses is a turn-off.)

    I would love to hear from some admins with experience in supporting encrypted machines in a corporate environment. Many thanks in advance!

  • SocketException (timeout) only when running as a scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session, the application runs successfully, but when I schedule it to run under the same user/security context outside of a desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even joined to a domain. I cannot find anything in any firewall or security configuration which would prevent this, but maybe I have overlooked something - can anyone be of any assistance?

        System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
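
    As a diagnostic sketch (not a fix): running a bare TCP probe from the same scheduled task can confirm whether the non-interactive session, rather than the application, is the variable. This assumes PowerShell 2.0 is installed on the box; the log path is an assumption, and 'X.X.X.X' stands in for the question's redacted host:

        $client = New-Object System.Net.Sockets.TcpClient
        try {
            # Same host and port the scraper fails against.
            $client.Connect('X.X.X.X', 443)
            Add-Content 'C:\probe.log' "$(Get-Date -Format s) connected"
        } catch {
            Add-Content 'C:\probe.log' "$(Get-Date -Format s) $($_.Exception.Message)"
        } finally {
            $client.Close()
        }

    If the raw probe succeeds where the application fails, per-session state such as proxy settings (which HttpWebRequest picks up from the user's Internet Explorer configuration, and which can differ for non-interactive logons) becomes the prime suspect.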

  • Recommendations for a good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard:

        In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account.
        Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is. I've read all the official articles on it and configured it a hundred different ways.
        It doesn't play well with passive connection types; passive mode has to be disabled on the client in order for it to work.
        It doesn't offer any way to allow one user to see multiple sites, no matter which binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7.

    So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla and zFTPServer. Does anyone know of any pros/cons of these? Any other recommendations?

  • Calendar sharing between unrelated domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives who is also a board member of several other organisations, so he floats around between 4 different unrelated companies, each with its own domain, Exchange setup, etc. He has domain accounts in each organisation, and an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all of his calendars, as they are booking things for him in their respective organisations' calendars without knowing about bookings made in his calendars by the other organisations... are you with me? So for example: Company A books a meeting into his Company A calendar at 9am Monday, Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday on their Company C calendar. He gets all of those on his iPad, but he would like either a 'global' calendar that all of them can see and book into, or the ability for the receptionists at Companies A, B and C to see all the calendars, to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure, and quite frankly I don't want any part of it... but he's whining, and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software/service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation just so they can get access to his calendar, and I am sure their sysadmins feel the same. As usual - any advice is greatly appreciated ;)

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files; essentially these are just text files that are kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned. Yet when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or maybe do I have the indexing configured wrong? I really don't care what I need to use to search the files - even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
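
    For illustration, if PowerShell happens to be available on the server, Select-String reads the files directly and bypasses the indexer entirely; a minimal sketch, where the mail-store path and the address are assumptions:

        # Search every log and mail file under the store for a literal address.
        Get-ChildItem 'C:\MerakMail' -Recurse -Include *.log, *.tmp |
            Select-String -Pattern 'someone@example.com' -SimpleMatch |
            Select-Object Path, LineNumber, Line

    (Windows' own behaviour here depends on per-extension settings under Indexing Options > Advanced > File Types, where each extension can be set to index file contents rather than just properties.)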

  • RemoteApp .rdp embed creds?

    - by Chris_K
    Windows 2008 R2 server running Remote Desktop Services (what we used to call Terminal Services back in the olden days). This server is the entry point into a hosted application - you could call it Software as a Service, I suppose. We have 3rd-party clients connecting to use it. We're using RemoteApp Manager to build RemoteApp .rdp shortcuts to distribute to client workstations. These workstations are not in the same domain as the RDS server. There is no trust relationship between domains (nor will there be). There is a tightly controlled site-to-site VPN between the workstations and the RDS server; we're quite confident we have access to the server locked down. The RemoteApp being run is an ERP application with its own authentication scheme. The issue? I'm trying to avoid the need to create AD logins for every end user connecting to the RemoteApp server. In fact, since we're doing a RemoteApp and they have to authenticate to that app anyway, I'd rather just not prompt them at all for AD creds. I certainly don't want them caught up in managing AD passwords (and periodic expirations) for accounts they only use to get to their ERP login. However, I can't figure out how to embed AD creds in a RemoteApp .rdp file. I don't really want to turn off all authentication on the RDS server at that level. Any good options? My goal is to make this as seamless as possible for the end users. Clarifying questions are welcome.

  • Understanding Security Certificates (and their pricing)

    - by John Robertson
    I work at a very small company, so certificate costs need to be absolutely minimal. However, for some applications we do need our customers to get that warm, fuzzy, not-using-a-self-signed-certificate feeling. Since creating a "certificate authority" with makecert really just means creating a public/private key pair, it seems pretty clear that issuing a certificate FROM such a "certificate authority" really just means generating a second public/private key pair and signing it with the private key that belongs to the "certificate authority". Since the keys are signed, anyone can verify they came from the certificate authority I created; likewise, if VeriSign gave me the pair, they would sign it with one of their own private keys, and anyone could use VeriSign's corresponding public key to confirm VeriSign as the source of the keys. Given this, I don't understand why, when I go to VeriSign or GoDaddy, they have rates only for yearly plans, when all I really want from them is a single public/private key pair signed with one of their private keys (so that anyone else can use their public keys to confirm that, yes, they gave me that key pair and they confirmed I was who I said I was, so you can trust my key pair as belonging to a legitimate third party). Clearly I am misunderstanding something - what is it? Does VeriSign retire their public/private key pairs periodically, so that my VeriSign-signed key pair "expires" and I need new ones? Edit: I learned that the certificate has an internal expiration date, and that it also maintains an internal value stating whether it can be used to sign other certificates (i.e. sign other public/private key pairs stored as certificates). Can't I get a few (even one) non-signing certificates signed by someone like VeriSign that I can use for authentication/encryption without a yearly subscription?

  • Changing dual-monitor settings without closing the laptop lid on OS X

    - by hekevintran
    I have a Unibody MacBook hooked up to an external display. By default, when I boot up, the system goes into dual-monitor mode, but I want to use only the external display. The Apple-supplied solution to this problem is to close the lid of the laptop, which puts the machine into sleep mode, and then move the mouse around to wake it up again. Because the machine is woken with the lid closed, only the external display is detected. After the system is functional again, you can open the lid if you want, and the laptop screen will stay off until you either tell the system to detect displays from System Preferences or turn off the external display. Every time I want to use only the external display, I must reach over to close the lid, wait for the machine to sleep, jiggle the mouse, wait for the machine to wake up, and finally open the lid again because I don't want the machine to overheat. I feel that this is very stupid to have to do. Why is there no button or menu option that says "don't use this screen"? Is there any third-party software way to change the screen setup that does not involve physically closing the lid and playing a game of "are you sleeping" in order to switch such a simple software setting? We are in the 21st century, and honestly this is childish.

  • No digital audio output with Asus Xonar DG

    - by Lunatik
    I've purchased an Asus Xonar DG as a replacement for the faulty onboard audio in a Medion 8822, as it has an optical output, which is all I really need to feed my HTPC. I uninstalled the previous drivers/devices, switched the PC off, inserted the Asus card, powered up, disabled the onboard audio in the BIOS, then installed the driver that came on the CD (the same version as on Asus' website as of today), and everything went perfectly - no errors. I set the audio devices up in Windows and in the Asus utility (S/PDIF enabled, 6-channel audio) as I would expect them to work, but I get no digital audio at all: no test tones from within Windows or the Asus utility, no PCM audio, and no Dolby Digital from DVD. Analogue audio is fine. I've uninstalled and reinstalled things a couple of times now, and tried almost all combinations of analogue/digital outputs, but can't get it sorted. Does anyone have any tips on how to get this working? This card has just been released, so there isn't much out there to go on. Notes:

        The light on the TOSLINK port is lit.
        The OS is Vista 32-bit SP2, all up to date - pretty much a fresh install with almost no 3rd-party applications installed.
        This page seems to suggest that a digital output device in Windows is not needed with Xonar cards as it was with the previous Realtek, so I have it set to Analog. The only other output device is S/PDIF pass-thru.

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third-party applications hard-coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers, and it's not clear whether i86pc platforms even use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to make devfsadm start numbering from 0 again (or how to force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and /dev/rdsk to point to the correct /devices entries. I feel like I am going way off the beaten path here. Any suggestions? Thanks Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already-linked controllers at their current IDs.
