Search Results

Search found 1351 results on 55 pages for 'pain'.

Page 35/55 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • Pros and cons of server management GUI tools for managing Linux web servers

    - by ajsie
    I have stumbled upon these GUI tools that can help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel etc. I wonder what the pros and cons of these solutions are. A lot of people say they are not as good as using the pure command line (SSH) to manage your server, but I think that's yet another "Linux is for advanced users" argument. I agree that a lot of things can only be done on the command line by editing the configuration files directly, but I don't really want to do that every time and for everything, especially the basic configuration these tools could handle. It's like not having phpMyAdmin for managing MySQL - it would be a pain in the ass, right? So if one wants to throw up a web server serving a self-developed PHP site and wants all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV etc.), are these tools the right way to go, with the terminal kept for the more advanced features, like in the old days? Is this a smart way of managing a Linux server? Which one would you choose? Have you used any of these, and could you share your thoughts about them?

    Read the article

  • How to scope access to a service to a set of users, using OpenLDAP and only OUs

    - by JDS
    Okay, here goes. Solving this will solve several problems for me (as I can reapply this knowledge to several extant, similar problems), but luckily I have a very specific, concise problem to describe. Enough preamble. Our hosting partner is setting up VPN access for us and is connecting it to our LDAP server. They are using Cisco VPN; the docs on setting this up are here: http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808c3c45.shtml#maintask1 Specifically, note the screenshot in (5), under "ASDM". Now, I do NOT want to provide access to all of our users. I only want to provide access to our IT group. But I do not see a configuration option for LDAP groups on that web reference for the Cisco VPN. We are using: OpenLDAP 2.4, static groups (i.e. "Group has the following members..."), and a single user OU, "ou=users,dc=mycompany,dc=com". Is it possible to provide an alias of some kind in OpenLDAP that creates another OU, "itusers", say, and lets me alias the members of that OU somehow? Something like: "cn=Jeff Silverman,ou=itusers,dc=mycompany,dc=com" is an alias for "cn=Jeff Silverman,ou=users,dc=mycompany,dc=com" and is NOT a separate, unique user account. Alternatively, should I just create a separate OU and manage it separately? It is a pain, but only 12-15 users would have to be managed that way, with two separate user accounts. But I hate this option - messy, unmanageable, unscalable. You know what I mean. I am open to any options. I've searched and read all over but I can't quite find a directly analogous example. I can't possibly be the only one who's had this problem! Thanks!
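
    For what it's worth, a minimal sketch of the static-group route: keep everyone in ou=users and add a groupOfNames the VPN can filter on, so nobody needs a duplicate account. The ou=groups container, the cn=itusers name and the admin bind DN below are made-up examples, and the memberOf filter assumes the memberof overlay is loaded on the OpenLDAP server.

        # itusers.ldif - hypothetical names throughout
        dn: ou=groups,dc=mycompany,dc=com
        objectClass: organizationalUnit
        ou: groups

        dn: cn=itusers,ou=groups,dc=mycompany,dc=com
        objectClass: groupOfNames
        cn: itusers
        member: cn=Jeff Silverman,ou=users,dc=mycompany,dc=com

        # load it, then check that a membership filter returns only IT staff
        ldapadd -x -D "cn=admin,dc=mycompany,dc=com" -W -f itusers.ldif
        ldapsearch -x -b "ou=users,dc=mycompany,dc=com" \
          "(memberOf=cn=itusers,ou=groups,dc=mycompany,dc=com)" dn

    On the Cisco side the group membership would then be mapped to the VPN policy rather than pointing the ASA at a separate OU; the exact knob (LDAP attribute map / group search settings) depends on the firmware, so treat this as a direction rather than a recipe.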

    Read the article

  • SQL Server 2008 Logshipping not Restoring

    - by Nai
    I am getting the following errors during the restore part of the log shipping process on my secondary server:

        2010-04-01 10:00:01.85 Error: The file 'F:\UK_20100327090001.trn' is too recent to apply to the secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.85 Error: The log in this backup set begins at LSN 55408000007387500001, which is too recent to apply to the database. An earlier log backup that includes LSN 55147000001788900001 can be restored. RESTORE LOG is terminating abnormally.(.Net SqlClient Data Provider)
        2010-04-01 10:00:01.87 Searching for an older log backup file. Secondary Database: 'UK_Backup'
        2010-04-01 10:00:01.90 Skipped log backup file. Secondary DB: 'UK_Backup', File: 'F:\UK_20100324090000.trn'
        2010-04-01 10:00:01.93 Error: Could not find a log backup file that could be applied to secondary database 'UK_Backup'.(Microsoft.SqlServer.Management.LogShipping)
        2010-04-01 10:00:01.93 Deleting old log backup files. Primary Database: 'UK'
        2010-04-01 10:00:01.96 The restore operation completed with errors. Secondary ID: 'c066bb63-930c-4b73-861c-f59f0a38c12c'

    It was happily humming along until I checked it this morning. Some additional details: in the log shipping folder there is one file, UK_20100324090001.trn, dated 2009-3-24. The next most recent .trn file is UK_20100374090001.trn, which is the file that was applied during the restore. Why is there an older .trn file seemingly on its own? How can I fix this problem? It'll be a real pain to restart the entire log shipping process. x_x
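
    In case it helps anyone hitting the same wall: the two LSNs in the error describe a gap - the secondary's last restored log ends around LSN 55147..., while the file being applied starts at 55408..., so at least one log backup in between was never copied or restored. A rough sketch of the two usual ways out (server names and file names below are placeholders, not taken from the post):

        rem Option 1: find the missing .trn files and restore them in LSN order, oldest first
        sqlcmd -S SECONDARY -Q "RESTORE LOG [UK_Backup] FROM DISK = N'F:\UK_missing_backup.trn' WITH NORECOVERY"
        rem Option 2: if the intermediate .trn files are gone, re-seed the secondary from a fresh full backup
        sqlcmd -S PRIMARY   -Q "BACKUP DATABASE [UK] TO DISK = N'F:\UK_full.bak'"
        sqlcmd -S SECONDARY -Q "RESTORE DATABASE [UK_Backup] FROM DISK = N'F:\UK_full.bak' WITH NORECOVERY, REPLACE"

    Whether the secondary uses NORECOVERY or STANDBY depends on how the log shipping job was set up, so check the existing restore job before running anything like the above.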

    Read the article

  • Developer's PC - worth getting more than 8GB RAM?

    - by Borek
    I'm building a developer PC and am wondering whether to get 8GB or 12GB. It's a Core i7-860 system, i.e., a socket 1156 motherboard with 4 slots for RAM sticks, dual channel, usually up to 16GB (as opposed to socket 1366 boards, where 6 banks / triple channel are used). 8GB would be cheaper to get, especially because the price per GB is lower with 4x2GB compared to 2x4GB; also, the availability of 4GB DIMMs is worse where I live. Those are the main practical advantages of 8GB. (Edit: I should have stressed the price difference more - in the e-shop I'm buying from, the difference between 12GB and 8GB is so big that I could almost buy a whole new netbook for it.) However, I understand that more RAM can never do harm, which is the point of this question - how much of a difference will 12GB make as opposed to 8GB? Honestly, I've always been on 3.2GB systems (4GB, but a 32-bit system) and never felt much pain from having too little memory - of course more could help, but, for instance, the compiler's performance was usually held back by slow I/O or by not utilizing multiple cores on my CPU. Still, I'm not questioning that 8GB will be useful; I'm just not sure about the additional 4GB between 8 and 12 gig. Does anyone have experience with 8GB / 12GB systems? The software I usually run all the time: Visual Studio or Eclipse (both should be fine with ~2GB RAM; after that I feel their performance is I/O bound), Firefox (it can never have enough RAM, can it? :), Office (~500MB RAM should be enough), and then some smaller apps like Skype, other browsers, some background services etc.

    Read the article

  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me... I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.) Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
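
    If anyone wants to turn the anecdote into numbers, a crude micro-benchmark is enough to expose the per-file overhead; run the same thing from bash on each filesystem (Cygwin/Git Bash or similar on the NTFS side). The file count and size below are arbitrary.

        # create 5000 small files, then time a whole-tree copy and delete
        mkdir -p src
        for i in $(seq 1 5000); do
          head -c 4096 /dev/urandom > "src/file_$i.bin"
        done
        time cp -r src copy_test      # per-file open/create cost dominates here, not raw throughput
        time rm -rf copy_test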

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application onto an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is being taken more or less constantly at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's best practice to make sure that tempdb is never mirrored on a SAN, simply because writes to it never need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks
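
    On the "should tempdb be mirrored at all" part: tempdb is recreated from scratch every time SQL Server starts, so there is nothing in it worth block-level replication, and moving its files to a local or non-replicated volume is a common way to take that write load off the SAN. A hedged sketch (server name and target path are placeholders; the move only takes effect after the next SQL Server restart):

        sqlcmd -S MYSERVER -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLTemp\tempdb.mdf')"
        sqlcmd -S MYSERVER -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\SQLTemp\templog.ldf')"

    On the second question: simple recovery plus SAN mirroring can be workable, but only if the SAN snapshots are application-consistent (VSS/SQL-aware); a crash-consistent block copy of a busy database is not a guaranteed restore point, which is an argument for keeping native full and log backups as well.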

    Read the article

  • Need a new mouse: clicky scroll wheel and additional thumb-operated scroll wheel

    - by cksubs
    I'm having some carpal tunnel syndrome or RSI issues, and I need to get a new mouse. I've tried a friend's Logitech MX Revolution and it's almost perfect. I LOVE the thumb-operated scroll wheel, and I think that alone would solve many of my hand pain issues. The problem is that it uses their newfangled smooth/click scroll wheel: it can be either smooth or clicky, and it's switched by pressing in the scroll wheel. I really question Logitech's user research on that one. I've been using middle-click to open new links and close tabs in my browser for years now, and I'm not willing to give that up for this cool technology. In fact I HATE smooth scroll wheels in general, and need a clicky one. The option would be OK, but not if it's activated via the middle-click button. In closing, I'm looking for these things in my mouse: a thumb-operated scroll wheel or 'joystick'-like apparatus; a clicky, as-stiff-as-possible scroll wheel on top; normal middle-click functionality; and a comfortable, preferably ergonomic design. Anyone have any suggestions? Logitech, want to get rid of that smooth-scrolling crap so I can buy your mouse?

    Read the article

  • Citrix Access Gateway with Citrix Receiver

    - by vm370
    I'm currently using a Citrix Access Gateway with firmware 5.0.4 to provide access over an SSL VPN to an isolated environment. The connection is made with the Citrix Access Gateway Client, which the device delivers by default. However, we have run into several problems with it, e.g. it's somehow not possible to remove it from autostart (it isn't registered as a service or in the autostart entries?!), and it broke the Cisco VPN Client, which is used in the company and unfortunately cannot be replaced. The Cisco client can only be used again after cleaning every trace of the CAG client out of the registry, which takes a lot of effort. Because of that, I'd like to find out whether there is an alternative to this client, since this is a real pain... Unfortunately I haven't found a way to use the Receiver with the CAG yet, but if you have any resources on how to build this workaround, I'd be very happy. Thanks a lot in advance. UPDATE: If there are other alternatives I'd be even happier, since using the Receiver would also mean a conflict with the ICA Client, which is also used in our environments. From my experience, the Receiver and the ICA Client are no good friends either...

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI): all NFS-based VMs remain up and working perfectly through the failover and after, while all VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only. To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).

    If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server the VM was previously running on, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error mentioned under #4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
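
    One thing worth checking alongside the XenServer side: the open-iscsi timeouts in /etc/iscsid.conf decide how long the initiator queues I/O before returning errors to the VMs, and if the Nexenta takeover takes longer than that window you get exactly this "writes fail, journal aborts, filesystem goes read-only" pattern. Example values only - the right numbers depend on how long the failover actually takes, and sessions need to be logged in again (or the host rebooted) before they apply:

        # /etc/iscsid.conf
        node.session.timeo.replacement_timeout = 120   # hold I/O this long instead of failing it
        node.conn[0].timeo.noop_out_interval = 10
        node.conn[0].timeo.noop_out_timeout = 30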

    Read the article

  • Performance of virtual machines on very low-end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful servers, and I don't have the money lying around to invest in a server to prepare for a larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes, most likely with Xen rather than VMware vSphere Hypervisor (AKA ESXi), since ESXi has too strict an HCL and my hardware is too old. Big reasons for doing so:

    • Ability to upgrade and scale hardware rapidly - this is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, etc. Doing this manually by reinstalling the entire OS would be a big pain.
    • Safety from me - I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    • Safety from other factors - as I scale, servers might go down, and a backup VM can be started instantly.
    • Various other reasons.

    However, the limiting factor here is hardware, and I mean very depressing hardware. The current servers run off a Pentium 3 and a Pentium 4, with 512 MB and 768 MB of RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't acceptable)? Does it leave enough RAM for the Linux OS? Is this even feasible?

    Read the article

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting, and setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key under my home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem. Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or, rather, Virtualmin did), but I've been unable to get FileZilla to link up as this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of files to the original user)? Or is it standard practice to create a separate key pair for each user, and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated. The "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome -
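
    The usual pattern here is one key pair per person, with each public key appended to that user's own ~/.ssh/authorized_keys on the server; the original user's key keeps working untouched. A sketch for a second, Virtualmin-created account - the user name "site2" and the key file name are made up - including the permission fixes that most often cause the "rejected the key" error:

        # on the VPS, as the admin user
        sudo mkdir -p /home/site2/.ssh
        sudo sh -c 'cat site2_key.pub >> /home/site2/.ssh/authorized_keys'
        sudo chown -R site2:site2 /home/site2/.ssh
        sudo chmod 700 /home/site2/.ssh
        sudo chmod 600 /home/site2/.ssh/authorized_keys
        sudo chmod go-w /home/site2    # sshd refuses keys if the home dir is group/world-writable

    Sharing one key pair across accounts also works mechanically (paste the same public key into each account's authorized_keys), but separate pairs make it possible to revoke one person later without touching the others.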

    Read the article

  • Facebook Chat through XMPP protocol on Pidgin Portable - Will not Authorize

    - by Sara Neff
    I heard you can use facebook chat on desktops now. Thats awsome! What i didn't hear is that it is a pain in the butt! Not awsome! I've followed six nearly identical sets of instructions from six different websides, including the one that facebook generates for you, to get facebook chat connected through Pidgin. Its the latest portable version, so from what i hear the plugin is out of the question. Whenever I go to try and connect i get a message saying "Not Authorized" and buttons to either modify the account info, or retry. NOTHING i have done has fixed this, and I can't find anything remotely usefull anywhere. I am running windows xp, and running pidgin (portable) off of a flash drive. Someone please tell me what i have to do. I read about authorizing the chat on my actual facebook page. I'd have tried that if i could find out how to do it, but if its there they hid it good. HELP?!

    Read the article

  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then, Outlook Anywhere kicks in and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (WireShark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious troubles handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown in the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions: Make Squid handle this correctly, if it is at all possible. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?). Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer, looks like Outlook ignores that setting and chooses on its own which protocol to use. Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
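
    One idea that might sit between options 2 and 3, offered as an untested assumption rather than a known fix: Outlook Anywhere tunnels RPC over HTTP using the RPC_IN_DATA and RPC_OUT_DATA request methods, and a Squid method ACL can match those regardless of user-agent or destination. If Squid 3.1 accepts these extension methods in an acl (worth verifying on a test box first), the exemption could look roughly like this in squid.conf:

        # let RPC-over-HTTP (Outlook Anywhere) through without NTLM auth
        acl rpc_over_http method RPC_IN_DATA RPC_OUT_DATA
        http_access allow rpc_over_http
        # the existing authenticated rule stays in place for everything else, e.g.
        # http_access allow authenticated_users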

    Read the article

  • What configuration changes can I make to speed up extremely slow Windows VMs in ESXi 4.0?

    - by Shawn Anderson
    I've recently moved from VMware Server to ESXi 4.0, running on a Dell T310. My VMs have been restored, but they are running dog slow compared to VMware Server. I loaded ESXi 4.0 using only default values. Where are some areas where I can tweak the performance? Even logging on to the VMs can be extremely sluggish, and trying to install software on any of them is a new experience in pain. The host is a Dell PowerEdge T310, Xeon X3460 2.80 GHz, 32 GB RAM, 1 HD (2 TB). I have 16 VMs on this server, but only six or so will be running during my testing. I keep an eye on the Resource Allocation and Performance tabs for the host and I never see CPU or RAM getting anywhere close to pegged. The Events tab does show some notices about video RAM issues and some hints about Windows activation issues, but nothing that would point to the sort of sluggishness I'm experiencing. The VMs are: 1 Windows Server 2008 R2 (64-bit) - 4 GB RAM; 1 Windows 7 (32-bit) - 2 GB RAM; 1 Vista (32-bit) - 1 GB RAM; 3 XP (32-bit) - 1 GB RAM. Over to you! Thanks - Shawn
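
    With 16 VMs (even six running) sharing one 2 TB disk, storage latency is the first thing worth ruling out, and the Resource Allocation tab won't show it. From the ESXi console or SSH, esxtop makes it visible in a couple of minutes; the thresholds below are rough rules of thumb, not hard limits:

        esxtop        # press 'u' for disk devices, 'v' for per-VM disk stats
        # watch DAVG/cmd (device latency) and KAVG/cmd (kernel queueing);
        # sustained DAVG in the tens of milliseconds on a single spindle would
        # explain exactly this kind of sluggishness

    It is also worth confirming VMware Tools is installed in every guest, since without it the emulated NIC and SVGA drivers make console sessions feel much slower than they need to be.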

    Read the article

  • Migrating Windows 2003 File Server Cluster to Windows 2008 R2 Standalone?

    - by Tatas
    We have a situation where we have an aging Windows 2003 File Server Cluster that we'd like to move to a standalone Windows Server 2008 R2 VM that resides in our Hyper-V R2 installation. We see no need to keep the Clustering as Hyper-V is now providing our Failover/Redundancy. Usually, in a standalone file server migration we migrate the data, preserving NTFS permissions and then export the sharing permissions from the registry and import them on the new server. This does not appear possible in this instance, as the 2003 cluster stores the sharing permissions quite differently. My question is, how would one perform this type of migration? Is it even possible? My current lead is the File Server Migration Toolkit, however I can find no information on the net about migrating from cluster to standalone, only the opposite. Please help. UPDATE: We ended up getting the data copied over (permissions intact), but had to recreate the shares manually by hand. It was a bit of a pain but it did in the end work out.
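
    For anyone comparing notes, a sketch of the pieces that usually make up this kind of move (paths and share names are examples). The registry trick only applies to standalone sources - a 2003 cluster keeps its share definitions in the cluster database, which is why they end up being recreated by hand or scripted with net share / FSMT on the new box.

        rem copy data with NTFS permissions, ownership and auditing intact
        robocopy \\oldcluster\share1 D:\Shares\share1 /E /COPYALL /R:1 /W:1 /LOG:share1.log
        rem on a standalone source these two lines would carry the share definitions across
        reg export HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares shares.reg
        reg import shares.reg
        rem on the new server, recreate each clustered share explicitly
        net share share1=D:\Shares\share1 /GRANT:Everyone,FULL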

    Read the article

  • On a dual-boot system, is it possible to use VirtualBox to boot the other installed OS?

    - by Derek Ziemba
    I currently run Windows 7 from a 256GB SSD as my operating system. Lately, for school, I've been using openSUSE Linux inside a VirtualBox PC and I'm really liking it. I'm starting to hate even working in Windows, but I can't just abandon Windows. I've been considering dual-booting openSUSE and will likely purchase another SSD for it. Once I have the dual boot set up, there are going to be times when I need to do something quick in the operating system that I'm not currently in. It would be a pain to have to reboot the computer each time I need to switch the OS for a simple task, especially from Windows since it doesn't let you save its state. From openSUSE, I want to be able to start a VirtualBox machine using my Windows drive, and in Windows, I want to be able to start a VirtualBox machine from my openSUSE drive. Would this be possible? The issue I'm worried about is drivers. For instance, the OS will be installed on native hardware and have the native hardware's drivers configured. When I try to boot that OS in a virtual machine, I feel like it is not going to know what to do, and will have to reconfigure itself or just not work.

    Read the article

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it. The first time this happened, I copied the content of each sheet into a new, blank workbook, and saved the new workbook; this brought it back down to about 6MB. Now, it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month. As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:

        <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
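
    As far as I can tell, those Z_<GUID>_.wvu.FilterData entries are the per-user custom view and filter settings that the shared-workbook ("Sharing") feature keeps accumulating for every editor, which is why the bloat comes back after each clean-up. The VBScripts floating around generally just enumerate Workbook.Names and delete the *.wvu.* ones; a hedged VBA equivalent is below - run it on a copy of the file first, and expect it to take a while with millions of names.

        ' delete the hidden per-user view/filter names a shared workbook accumulates
        Sub RemoveHiddenFilterNames()
            Dim i As Long, removed As Long
            Application.ScreenUpdating = False
            For i = ThisWorkbook.Names.Count To 1 Step -1   ' loop backwards, since we delete as we go
                If InStr(ThisWorkbook.Names(i).Name, "wvu.") > 0 Then
                    ThisWorkbook.Names(i).Delete
                    removed = removed + 1
                End If
            Next i
            Application.ScreenUpdating = True
            MsgBox removed & " hidden names removed"
        End Sub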

    Read the article

  • Import exponential fixed-width format data into Excel

    - by Tom Daniel
    I've received a bunch of text data files consisting of lots of records (30K per file) of 3 fields each, holding 5-place numbers in exponential format: s0.nnnnnEsee (where s is +/-, n is a digit and ee is the exponent, always 2 digits). When I open a file in Notepad, the format is perfectly uniform throughout, but when I import it into Excel using Data|Import|Fixed Width, many of the data values get messed up, no matter what format (text, exponential, various custom tries) I assign to the cells. Looking at the Notepad version, it appears that leading + signs were replaced with a space in the data file, but the sign of the exponent is always there. This means that some fields begin with a space, and this appears to confuse the Excel import routine. I get the same result in Excel 2003 and 2007. I'm sure there's a straightforward solution (hopefully without a messy VBA routine), but I can't figure out what to try next. :-) To clarify (hopefully), here are some input records and the corresponding text imported into Excel:

        Notepad:  -0.11311E+01 0.10431E-04 0.27018E-03      Excel:  -0.11311E   1.0431E-05   2.7018E-04
        Notepad:   0.19608E+00-0.81414E-02-0.89553E-02      Excel:   0.19608E  -8.1414E-03   8.9553E-03

    etc. Whoopee! Solved my own problem - in the spirit of Jeopardy, now that I've begun the question, here's the answer: use a different "File Origin" - several other than the default "Unicode UTF..." work fine! What a pain. Hope this helps somebody else avoid a few unpleasant hours! Aloha from Kona, Tom

    Read the article

  • Why do I have 55 local area connections in ipconfig?

    - by RMorrisey
    Windows Vista Home Premium. I should mention that I am having no problem whatever getting an internet connection. When I type "ipconfig" in the console, I get (55!) messages of 3 lines each, listing a ton of disconnected network connections. My PC only has one network card. Each message looks like this:

        Tunnel adapter Local Area Connection* 55:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :

    These don't cause a major problem; they make it a pain, though, to fish upward and find my IP address. How can I get rid of them? Edit: Actually, a few connection numbers are randomly missing from the sequence; so, it's really more like 30 or 40 connection messages, rather than all 55. Not sure why that is, either.
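
    Those "Tunnel adapter" entries are Vista's IPv6 transition interfaces (ISATAP, Teredo, 6to4) rather than real hardware, which is why only one NIC shows up in Device Manager. Two hedged options: filter the output, or turn the tunnel interfaces off from an elevated prompt if nothing on the network relies on them.

        rem just show the addresses instead of scrolling
        ipconfig | findstr /c:"IPv4 Address"
        rem or stop Windows creating the tunnel interfaces (elevated prompt)
        netsh interface teredo set state disabled
        netsh interface 6to4 set state disabled
        netsh interface isatap set state disabled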

    Read the article

  • Windows XP mounting USB drive to same letter as previously mapped network drive

    - by GAThrawn
    Why does Windows always mount a USB drive as the next drive letter after the last physical drive, even when that letter is already taken by a mapped network drive, and is there any way to improve this behaviour? I tend to use a few different flash drives on my PC, as well as having both a BlackBerry and a personal phone that mount as USB drives when I plug them in to charge. Being on a corporate PC I also have a number of mapped network drives (some set by login script, some set as persistent mappings in my profile). When I first log in I have drive letters like this:

        C: - Local Drive
        D: - DVD Drive
        G: - Login script mapped drive
        J: - Login script mapped drive

    When I plug the BlackBerry in it mounts two drives (one for onboard storage, one for the SD card) as E: and F:. If I then plug in another USB drive, it mounts as G:, even though that letter is already taken by a network mapped drive. This leaves me with the following drives:

        C: - Local Drive
        D: - DVD Drive
        E: - USB drive (BlackBerry)
        F: - USB drive (BlackBerry)
        G: - Login script mapped drive
        [G: - USB drive - mounted but not visible in Explorer or command prompt]
        J: - Login script mapped drive

    I then have to go into Disk Management, find the new USB drive that's mounted to G: and re-assign it to another letter, e.g. Z:. Once this is done, AutoPlay detects it and throws up its normal dialog, and it's browseable in Explorer. While this is OK to do if you only use one or two USB drives and have admin access to your PC with your login account, it's a total pain in the proverbial if you regularly use a whole load of different USB devices, and corporate policy means you have one account for your normal login (with only User access to workstations) but have to use a different account for any privileged action. I realize that one possible reason for this is the difference between hardware, which is mounted and assigned drive letters at the system level, and mapped drives, which are handled at the user level. For USB devices that are already plugged in before login, they're obviously mounted before Windows knows what network drives may be mapped. However, if you plug the USB devices in after you're fully logged in and have drives mapped, then Windows must know which letters are available?
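
    For the one-off fix, the letter can also be reassigned without the Disk Management GUI (it still needs an elevated prompt, which is admittedly the real sticking point here); the volume number and the Z: target below are examples. The longer-term workaround most places use is simply mapping network drives from the far end of the alphabet so hot-plugged devices never collide with them.

        rem reassign the colliding USB volume
        diskpart
          list volume
          select volume 5
          assign letter=Z
          exit
        rem and/or remap login-script drives to high letters, e.g.
        net use G: /delete
        net use V: \\fileserver\share /persistent:yes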

    Read the article

  • Cannot send email outside of network using Postfix

    - by infmz
    I've set up an Ubuntu server with Request Tracker following this guide (the section about inbound mail would be relevant). However, while I'm able to send mail to other users within the network/domain, I cannot seem to reach beyond that - such as my personal accounts etc. I have no idea what is causing this; I thought that all it takes is for the system to pass mail through our Exchange server and have it delivered in the same way. However, that hasn't been the case. I have found another server set up in a similar fashion (CentOS 5, Request Tracker, but using Sendmail); however, it is a dated server and whoever built it has kindly left no documentation on how it works, making it a pain to use as a reference system! :) At one point I was told I needed to set up a relay between the local server's email address and our AD server, but this didn't seem to work. Sorry, I know next to nothing about mail servers, and my colleagues know nothing about Linux, so it's a hard one for me. Thank you! EDIT: Result of postconf -N with details masked =)

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = myhost.mydomain.com, localhost.mydomain.com, , localhost
        myhostname = myhost.mydomain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost = EXCHANGE IP
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Sample log message:

        Sep 4 12:32:05 theedgesupport postfix/smtp[9152]: 2147B200B99: to=<[email protected]>, relay= RELAY IP :25, delay=0.1, delays=0.05/0/0/0.04, dsn=5.7.1, status=bounced (host HOST IP said: 550 5.7.1 Unable to relay for [email protected] (in reply to RCPT TO command))
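
    The bounce is coming from Exchange rather than Postfix: "550 5.7.1 Unable to relay" means the connector Postfix hands mail to will accept messages for local mailboxes but won't relay to outside domains for an anonymous source. The usual fix is a dedicated relay receive connector on Exchange scoped to the RT server's IP; a hedged Exchange 2007/2010 Management Shell sketch, with the connector name, server name and IP as placeholders:

        # run on the Exchange server
        New-ReceiveConnector -Name "RT Relay" -Server EXCH01 -Usage Custom `
            -Bindings 0.0.0.0:25 -RemoteIPRanges 10.0.0.50 -PermissionGroups AnonymousUsers
        Get-ReceiveConnector "EXCH01\RT Relay" | Add-ADPermission `
            -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"

    If changing Exchange isn't an option, the alternative is to stop relaying external mail through it and let Postfix deliver directly (or via an authenticated ISP smarthost), though that brings its own SPF and reputation considerations.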

    Read the article

  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now and it looks like it is some sort of user account problem. Specifics of the system are: Dell laptop, Windows XP Pro SP3, non-domain member computer, DLP projector connected to the laptop via VGA. I use this setup almost daily to do presentations, always in mirrored display mode where I can see on the laptop monitor the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in, it switches to Extended Desktop (like two desktops side-by-side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally - the mirrored display works as usual. I have run into this about 4 times now and it can always be solved by creating a new user account on the computer, and then all is well. I would like to either: 1. find a way to reset the customized settings for a specific user account, which would hopefully make this go away, or 2. find the specific setting that causes this so that I can easily fix it when the problem comes up. Creating new user accounts is kind of a pain and an easy fix must be out there somewhere.

    Read the article

  • Set up a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyways, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN using my user account on the new Amazon EC2 machine. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good. The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the other services we run (like IIS) give 404 errors until the 6-8 minutes have passed. It's almost like the domain controller is attempting to reach the other domain controllers first, and after 6-8 minutes it falls back to the one located on the local machine? I don't think that's what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN when VPN was enabled. In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, ELB, etc. Operational network access will be via either an SSH bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use SSH to access the individual instances. From what I can tell, all of the docs seem to depend on a single user with sudo rights and a single public SSH key. But is that really best practice? Isn't it much better to have each user access each host under their own name? So I can deploy accounts and SSH public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at:

    • IAM: it doesn't look like IAM has a method for automatically distributing accounts and SSH keys to VPC instances.
    • IAM via LDAP: IAM doesn't have an LDAP API.
    • LDAP: set up my own LDAP servers (redundant, of course). A bit of a pain to manage, but still better than managing accounts on every host, especially as we grow.
    • Shared SSH key: rely on the VPN/bastion to track user activities. I don't love it, but...

    What do people recommend? NOTE: I moved this over from accidentally posting it on StackOverflow.
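
    Short of running LDAP, a middle ground a lot of shops use is keeping the admin list and their public keys in one place (a small repo or S3 bucket) and pushing them out with a script or config management. A bare-bones sketch of the push approach - the keys/ directory, hosts.txt and the bootstrap "admin" account are all invented for illustration, and real setups usually hand this job to cloud-init, Chef or Puppet instead:

        #!/bin/bash
        # keys/ holds one <username>.pub per admin; hosts.txt lists instance addresses
        while read -r host; do
          for pub in keys/*.pub; do
            user=$(basename "$pub" .pub)
            ssh "admin@$host" "
              sudo useradd -m -s /bin/bash '$user' 2>/dev/null || true
              sudo mkdir -p /home/$user/.ssh
              echo '$(cat "$pub")' | sudo tee /home/$user/.ssh/authorized_keys >/dev/null
              sudo chown -R $user:$user /home/$user/.ssh
              sudo chmod 700 /home/$user/.ssh && sudo chmod 600 /home/$user/.ssh/authorized_keys
            "
          done
        done < hosts.txt

    Revoking someone is then just deleting their .pub file and re-running the push, which is the main thing the shared-key option can't give you.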

    Read the article

  • In GNU Screen, recalled bash history command displays one character position to the left of its actual location

    - by vergueishon
    I am running Red Hat 5 32-bit (2.6.18-194.26.1.el5). The issue is that when I recall any previous command in bash's history, the first character of the command is displayed immediately after the shell prompt, without any intervening space, like so:

        [me@mymachine tmp]$man mysql

    If I enter a Ctrl-C and retype the command, it looks like so:

        [me@mymachine tmp]$ man mysql

    This makes recalling a command and editing it before re-entering it a real pain. Basically, if I try to edit a recalled command, my changes occur one character position to the left (I believe) of what I see on the screen. It's a bit tedious to describe, and it appears to only happen with commands with a large number of arguments. UPDATE: These are the contents of /etc/sysconfig/bash-prompt-screen:

        #!/bin/bash
        echo -n $'\033'"_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"$'\033\\\\'

    and the relevant part of /etc/bashrc:

        screen)
            if [ -e /etc/sysconfig/bash-prompt-screen ]; then
                PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
            else
                PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"; echo -ne "\033\\"'
            fi
            ;;

    I've disabled bash-prompt-screen by renaming it - this fixed it. It's entirely possible that there is a fix for the bash-prompt-screen prompt line in the latest version of screen for RHEL 5. The error is seen under Screen version 4.00.03 (FAU) 23-Oct-06. (I noticed an update in the queue, which is installing as I write this.)

    Read the article
