Search Results

Search found 15129 results on 606 pages for 'orientation changes'.

Page 465/606

  • Changing PATH Environment Variable for all Users. (Ubuntu)

    - by Wally Glutton
    I recently compiled Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like to update my PATH to ensure this new version of Ruby (found in /opt/ruby_ee/bin) supersedes the older version in /usr/local/bin. (I still want the old version around, though.) I would like these PATH changes to affect all users and crontabs.

    Attempted Solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH in this file to read:

        PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    This did not affect my PATH at all.

    Attempted Solution #2: Next I followed these instructions and updated the PATH setting in /etc/login.defs and /etc/crontab. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.

    Other information: I seem to be having the same problem described here. I'm testing using the commands "echo $PATH" and "ruby -v". My shell is bash. My .bashrc doesn't override my PATH. Yes, I have heard of the Ruby Version Manager project. ;)
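    One approach worth trying (a sketch, not from the original post; the file name ruby-ee.sh is made up, the REE path follows the question above): put the PATH override in a profile script for interactive login shells, and set PATH explicitly in /etc/crontab, since cron does not read shell profiles or, on some setups, /etc/environment.

        # /etc/profile.d/ruby-ee.sh -- hypothetical file; sourced by login shells for all users
        export PATH="/opt/ruby_ee/bin:$PATH"

        # /etc/crontab -- affects jobs defined here and in /etc/cron.d;
        # per-user crontabs need their own PATH line
        PATH=/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

    A quick check after logging in again would be "which ruby" and "ruby -v" both interactively and from a test cron job.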

    Read the article

  • How do I restore to a delta file (disk) on VMware ESXi?

    - by Oscar
    Using VMware ESXi (the free version) I have a virtual machine (Win 2k3 R2 server). When I first provisioned it I took a snapshot of it. I recently tried to clone the primary drive using my standard hardware-based method to grow a Windows disk (using Knoppix, clone the drive to a new drive, make it bootable, then extend the partition via diskpart from within Windows). This process failed; I tried setting the cloned drive (via the VMware GUI) to replace the original drive, boot, and be done. This didn't work out so well. The machine never booted. I checked the boot order, the disk location and all the basics I usually do.

    As a failsafe, I then tried changing all the settings back so the machine would boot to the original drive and I could figure out (as I eventually did) a better way of growing the disk. However, when I powered on the machine with the original drive, it reverted back to that initial snapshot I created; it lost all the changes made since. I looked in the file system and found a few files; I think the key file here is one named "delta", and I'm assuming that's the disk I want, but I can't find a way to have the virtual machine actually use that drive/file. It isn't available to add when I go to add an existing drive.

    Do I need to somehow commit that delta to the original drive and then boot from it again? Can you point me in the right direction? I've since discovered the proper way of growing drives using "vmkfstools", but I need to get back to the original state of the machine to try this out. Any help would be greatly appreciated.

    Read the article

  • Windows Server - share files without access for administrator

    - by Pawel
    We have an MS Windows Server 2008 R2 based server that is administered by our IT department. We would like to achieve two things simultaneously:

    1. A folder on the server, containing several thousand files (new files added frequently), that is accessible to some Active Directory users (e.g. the board of directors) but is not accessible by IT department employees.
    2. IT department employees still maintain rights to administer the server, including installing new software and services.

    We already checked some solutions:

    - Using NTFS access rights. Unfortunately IT (members of the "Administrators" group) can set themselves as new owners of the files and change the permissions so that they gain access to the files.
    - Enabling EFS. Unfortunately, even if you do not allow IT to access files, they can still disable EFS completely because they have administrative rights. Moreover, as far as I know you have to manually add permissions for all users but the owner for each new file - very inconvenient.
    - Creating a new role for the IT department that has all the privileges apart from taking ownership of files. Unfortunately, if you're not a member of the Administrators group, you cannot install new software, no matter what privileges you add to the role.
    - TrueCrypt - nice free encryption software, but with poor sharing capabilities. You can either mount an encrypted container on the server (and then IT has access to its contents) or mount it locally, but only one user can mount it for writing.
    - AxCrypt - free encryption software that enables file-by-file encryption on the server. There are some disadvantages though: you have to manually encrypt each new file added, the files have their extensions changed, and you can only set one password for all files (so all users have to know this one password).

    Any other ideas? Our budget is limited, so enterprise-class software from Symantec or PGP would probably not be an option.

    Read the article

  • Device CAL, User CAL or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard Edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on our servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard Edition, which is fine. We also have 4 servers running Standard Edition processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:

        40 -- 3 -- [2] -- 2 -- 1
        nodes -- aggregators -- raw (which we want to run EE) -- calculators -- datawarehouse

    Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to datawarehouse. I am specifying the push vs. pull in case that changes how the number of licenses is calculated.

    Q1) How many device (or user) CALs do we need?
    Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
    Q3) What happens if another device tries to access a server with the limited number of device CALs?
    Q4) Are the device CALs a simultaneous number of devices or a total?
    Q5) Do Device and User CALs cost the same or is there a difference?
    Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the handler mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example: I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error message in the body of the response. When the server response actually comes back to me I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others where I can change or eliminate the X-Powered-By header element, but I would like IIS to leave it alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body and replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
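    One setting that is often relevant here on IIS 7.x is the existingResponse attribute of httpErrors, which tells IIS not to replace the body of an error-status response with its own error page. A sketch of a site-level web.config (assumes nothing else about the handler setup, which may need adjusting):

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <!-- Leave responses with 4xx/5xx status codes from the CGI handler untouched -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>

    Custom headers such as X-Powered-By are a separate knob (customHeaders) and would still need removing on their own.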

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers, they handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am thinking cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan.

    I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in house, but I think that would be overkill for our needs.

    I am not exactly sure what we need. I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very often once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • Win 7 x64 explorer.exe stops responding on copy/paste

    - by Sourabh
    A few months ago I changed my Win 7 orb using this software, but it changed only the explorer.exe in C:\Windows. To solve that, I copied explorer.exe from there to C:\Windows\system32 and C:\Windows\SysWOW64 (probably a bad idea). Things were working fine until a few days ago, when it started giving me problems while copying. Sometimes when I copy/paste it works fine, but sometimes it stays on that dialog box which looks like it is doing pre-copying things like counting the number of files/file size etc. But it's not; it stays on that screen forever. Now if I try to close it, the title changes to "Cancelling" but the rest stays the same. If I try to close it too many times, explorer.exe crashes. And it's not related to file size: it may crash for a 4 MB file and work fine for a 10 GB file, and some other time it'll do the opposite.

    Is there any way to reset my explorer.exe, and is this because of explorer.exe or something else?

    PS: I forgot to take a backup of explorer.exe.
    PS2: I didn't install any software or download anything that could harm my computer.

    EDIT: I tried System Restore and SFC /scannow but they didn't help.

    Read the article

  • Second monitor stopped being recognized by Windows

    - by Eric J.
    One of the developer PCs running Windows Vista Ultimate had the second monitor stop being recognized by Windows overnight. There were no hardware or driver changes at the time, though I have subsequently updated to the latest nVidia drivers (the card is an NVIDIA GeForce 210). The non-recognized monitor IS recognized during the boot sequence. In fact, only the "bad" one shows the POST or the Windows loading screen. At some point during Windows initialization, after the loading screen disappears and before the logon screen appears, the active monitor switches. Any thoughts?

    UPDATE: When I open the Vista monitor properties window, I see my primary display and secondary display depicted. The primary one is portrayed as the regular blue box, but the secondary one is portrayed greyed-out. I have the option to "Extend desktop to this monitor", the only resolution is 800x600, and all of the advanced monitor properties are greyed out as well. If I opt to extend the desktop, the greyed-out box turns blue; when I then select Apply, the screens flash as usual and I'm given the 15 second countdown to accept the new settings, and when I do, everything returns to the previously broken state... the secondary monitor is portrayed greyed-out again. At no point is the desktop shown on the secondary monitor.

    Read the article

  • Reference an entire Excel 2007 sheet from another sheet

    - by Keikoku
    I have one sheet with 100 rows of data. I have a second sheet that will use the same data but with filters applied. In fact, I will have a dozen sheets, each with different filters. My goal is to have references from the every sheet to the first so that prior to filtering, they all contain exactly the same data. This way I only have to modify one sheet and all sheets will reflect the changes. The purpose is to create external links from word to excel to display specific rows, but there appears to be a limitation to linking where it displays absolutely everything that you see on the sheet itself (and each view must be different). I can manually reference the first cell and then drag the black box to easily expand it to the required number of rows, but that would require me to go into each sheet and drag the black box again whenever I add new entries to the master copy. Is there an easy way to do this? Note that the issue is the same as Easy way for users to update linked documents Excel 2007, except this time I am using a different approach. Solutions to both would be welcome.
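    One way to avoid re-dragging the reference range on every sheet (a sketch; "Master" and the range size are placeholders) is to pre-fill a generously sized range on each secondary sheet with a formula that mirrors the master sheet and shows blanks for rows that don't exist yet:

        =IF(Master!A1="", "", Master!A1)

    Entered in A1 and filled across and down over, say, A1:Z1000, this means rows added to the master sheet within that range appear on the other sheets automatically, with no further dragging.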

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this:

    - The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network).
    - Everyone who has the ability to set up a man-in-the-middle attack already has root on the machines.
    - The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use.

    I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine and blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate this. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
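    A sketch of the whitelist idea described above (the script name and the ~/.ssh/qa-whitelist file are made up for illustration): rather than accepting keys blindly, drop the stale entry and re-learn the new key explicitly, but only for hosts on the known-reinstalled list.

        #!/bin/sh
        # refresh-hostkey.sh <host> -- hypothetical helper for whitelisted QA machines only
        host="$1"
        grep -qx "$host" ~/.ssh/qa-whitelist || { echo "$host not whitelisted" >&2; exit 1; }
        ssh-keygen -R "$host"                               # remove the old key from known_hosts
        ssh-keyscan -t rsa "$host" >> ~/.ssh/known_hosts    # fetch and record the new key

    Run as "refresh-hostkey.sh qa-box-03" after a reinstall; the next ssh connection then sees the new key without a prompt.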

    Read the article

  • Migrating to AWS Cloud with auto-scaling - where to put Redis and ElasticSearch?

    - by RobMasters
    I've been trying to research this topic but haven't found anywhere that recommends where to install services such as Redis and ElasticSearch when migrating to a cloud framework. I'm currently running a Symfony2 application on 2 static servers - one is running MySQL and the other is the public-facing web server, which also has Redis and ElasticSearch running on it. Both of these servers are virtualised, but they're static in terms of not being able to replicate at present (various aspects are still dependent on the local filesystem).

    The goal is to migrate to AWS and use auto-scaling to be able to spin up and kill web servers as required, but I'm not clear on what I should put on each EC2 instance. Should they be single-responsibility only? I.e. set up individual instances for the web server(s), Redis, and ElasticSearch, most likely an RDS instance for MySQL, and only set up auto-scaling on the web server(s)? I don't foresee having to scale the ElasticSearch server anytime soon as it's only driving the search functionality, but it's possible that Redis may need to be replicated at some point - but should this be done manually? I'm not sure how this could be done automatically, as each instance needs to be configured to know about its master/slave(s) as far as I know. I'd appreciate advice on this.

    One more quick question while I'm here - how would I be able to deploy code changes when there are X web servers currently active? I'm using a Capifony deployment script (the Symfony2 version of Capistrano), which I think can handle multiple servers easily enough by specifying an array of :domain addresses... but how should this be handled when the number of web servers can vary?
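    On the deployment part, Capistrano (which Capifony builds on) addresses a set of servers by role; a sketch with placeholder host names for the fixed-list case:

        # deploy.rb (Capistrano v2 style) -- host names are placeholders
        role :web, "web1.example.com", "web2.example.com"
        role :app, "web1.example.com", "web2.example.com"
        role :db,  "db1.example.com", :primary => true

    This only covers a known list; with auto-scaling the list would have to be generated before each deploy (for example from the EC2 API by tag), or the instances would need to pull the release themselves at boot, which is a different deployment model.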

    Read the article

  • Arch Linux: How to handle patches which only you will use?

    - by user12932
    I'm using freerdp together with xmonad and it has been giving me a lot of trouble. The super key (or "windows key") is my mod key in xmonad and it has been interfering with my freerdp usage rather annoyingly. Whenever I switched workspaces (or did anything else in xmonad involving the super key), windows (controlled by the freerdp instance in focus) registered a keypress as well. This event combined with the loss of focus got the super key stuck in windows indefinitely: the press of the keys d and r would first show my desktop, then open the run dialog (as if I was pressing the windows key constantly). I've tried several versions of freerdp, but all exhibited this annoying behavior. So I resorted to patching freerdp myself to just ignore the left super key on my keyboard. I love free software for a lot of reasons (especially the ability to alter things like this myself), however I still find it annoying to patch and rebuild freerdp on all version (and dependency) changes. How do you deal with situations like this? Is there even a "right way" to resolve this issue?
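    On Arch specifically, one common way to carry a private patch (a sketch; the patch file name is hypothetical) is to rebuild the package with makepkg and apply the patch from the PKGBUILD, so the change survives as an installable package instead of an ad-hoc source tree:

        # PKGBUILD excerpt (sketch): add the patch to source=() and apply it before building
        source=("freerdp-$pkgver.tar.gz"          # whatever the original PKGBUILD already lists
                "ignore-super-key.patch")         # hypothetical local patch
        sha256sums=('SKIP' 'SKIP')

        prepare() {
          cd "$srcdir/freerdp-$pkgver"            # adjust to the unpacked directory name
          patch -p1 < "$srcdir/ignore-super-key.patch"
        }

    A rebuild then comes down to re-running makepkg -si after each upstream bump; the patch still has to be rebased when upstream changes the code it touches, but nothing else in the workflow changes.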

    Read the article

  • All application passwords lost on Windows 7

    - by Rynardt
    A couple of days ago I changed my Windows 7 login password. My laptop is on my company's domain, so password changes are done over the internal network. Since changing the password I noticed that all my saved Chrome passwords are missing. Skype, Windows Live, Internet Explorer and Outlook have also lost their saved passwords. I guess there could be more applications with lost passwords, but I have not opened them yet. This makes me think that most applications save their passwords to a general password vault on the Windows system, and this vault somehow got corrupted when I changed my domain login password for Windows. Does anyone have any idea how to fix this and prevent it from happening again?

    EDIT - More info: I do development work at the office, so most of the time I bypass the firewall and connect directly to the internet gateway. Now and then I connect to the company wifi network to do printing and access files on a NAS. So by default my laptop does not connect to the wifi hotspot. On this occasion, to update the password, I had to connect to the wifi. So, referring to the comment by OmnipotentEntity below, could this have happened when the system rebooted without a connection to the network, as the laptop does not auto-connect to the wifi hotspot?

    Read the article

  • How to set umask globally?

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop, and open a terminal there, I still get to see the default umask 022. Same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component responsible) does source some different setting than a console login does, and damned if I could tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set umask globally (and for all users), so that the X11 desktop environment gets the memo as well? (The system is Linux Mint / Ubuntu, in case that changes anything...)
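    One route that tends to cover both console and X sessions on Debian-derived systems (a sketch; confirm pam_umask is present on your Mint/Ubuntu release) is to set the umask through PAM rather than through shell startup files, since display-manager logins also go through a PAM session:

        # /etc/pam.d/common-session -- applies to every PAM session, including graphical logins
        session optional pam_umask.so umask=002

    After adding the line, a full log out and back in (not just a new terminal) is needed for desktop sessions to pick it up.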

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web services. Our database is a multi-az RDS MySQL server running 5.1.57 and 3-4 app servers talk to it. Today, we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site. There have been no schema changes. We haven't had a big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago, Amazon had some outages. As part of the recovery from that, our RDS server, and our app servers ended up in different availability zones, but all within the same region. But yesterday, everything was fine so I'm not convinced that's related. The lock timeouts are in different types of requests and occur in different InnoDB tables. I have noticed the number of open connections jumped when we started seeing problems, but they may be a symptom and not a cause. What are my next steps in debugging this?
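    Some first diagnostics worth capturing the next time the errors appear (standard MySQL/InnoDB commands, sketched here; on RDS they run through the normal client connection):

        -- the TRANSACTIONS section shows which transaction holds the lock and what the waiters want
        SHOW ENGINE INNODB STATUS\G
        -- long-running or idle-in-transaction connections that may be holding locks
        SHOW FULL PROCESSLIST;
        -- confirm how long a waiter is allowed to wait (the default is 50 seconds)
        SELECT @@innodb_lock_wait_timeout;

    Comparing the blocking transaction against the jump in open connections mentioned above usually tells whether the connections are the cause (transactions left open) or just a symptom of requests queueing behind the lock.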

    Read the article

  • I would like to edit the layout of my keyboard just a bit - what's the best way?

    - by Codemonkey
    I'm using an Apple keyboard which has some annoyances compared to other keyboards. Namely, the Alt_L and Super_L keys are swapped, and the bar and less keys are swapped ("|" and "<"). I've written an Xmodmap file to swap the keys back:

        keycode 49 = less greater less greater onehalf threequarters
        keycode 64 = Super_L NoSymbol Super_L
        keycode 94 = bar section bar section brokenbar paragraph
        keycode 108 = Super_R NoSymbol Super_R
        keycode 133 = Alt_L Meta_L Alt_L Meta_L
        keycode 134 = Alt_R Meta_R Alt_R Meta_R

    I did this by identifying the keys using xev and the default modmap (xmodmap -pke) and swapping the keycodes. xev now identifies all my keys as correct, which is awesome! I can also use the correct keys to type the bar and less-than symbols. (I followed this answer on Ask Ubuntu: http://askubuntu.com/q/24916/52719)

    But it seems the change isn't very deep. For instance, the Super key is now broken in the Compiz Settings Manager. No shortcuts involving the Super key work (but the Alt key does). Also, the settings dialog for Gnome Do doesn't heed the changes in xmodmap, and I can't open the Gnome Do window anymore if I use any of the remapped keys. So to summarize, everything broke.

    I would like a deeper way of telling Ubuntu (or any other Linux distro for that matter) which keys are which on the keyboard. Is there a way to edit the keyboard layout directly? I'm using the Norwegian Bokmål keyboard layout. Does it reside in a file somewhere I could edit? Any comments, previous experiences or relevant stray thoughts would be greatly appreciated - thanks!
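    For at least the Alt/Super swap there is a ready-made XKB option, which applies at the layout level and is therefore honoured by components that ignore xmodmap; a sketch (the Apple-specific "|"/"<" swap may still need separate handling, for example via a keyboard model setting):

        # try it for the current X session
        setxkbmap -option altwin:swap_alt_win

        # make it persistent on Ubuntu/Mint (sketch): set the option in /etc/default/keyboard
        #   XKBOPTIONS="altwin:swap_alt_win"
        # then apply with:
        sudo dpkg-reconfigure keyboard-configuration

    Because this changes the layout itself rather than remapping keycodes afterwards, Compiz and Gnome Do see the swapped keys the same way the rest of the desktop does.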

    Read the article

  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test the piping of mail to my perl script, where I can make use of it and filter the mails. I wrote a test script for that which just logs some information to a txt file, but I don't see any changes when sending mail. My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport file:

        [email protected] email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
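    A few checks worth running, sketched here, since a hash: transport map and a pipe target both have easy-to-miss prerequisites (paths follow the configuration above; the log file name differs between distributions):

        postmap /etc/postfix/transport          # hash: maps must be recompiled after every edit
        postfix reload
        chmod 755 /etc/postfix/test.php         # the pipe target must be executable by user=nobody
        echo test | /etc/postfix/test.php       # run the script by hand to rule out PHP CLI issues
        tail -f /var/log/maillog                # or /var/log/mail.log; watch what Postfix does with the message

    One likely snag with the script as written: it runs as user=nobody, which normally cannot create or write /etc/postfix/testmail.txt; pointing the log at a world-writable location such as /tmp is an easy way to test whether that is the problem.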

    Read the article

  • SQL Server Instance login issue

    - by reallyJim
    I've just brought up a new installation of SQL Server 2008. I installed the default instance as well as one named instance. I'm having a problem connecting to the named instance from anywhere besides the server itself with any user besides 'sa'. I am running in mixed mode. I have a login/user with a known username. Using that user/login, I can properly connect when directly on the server. When I attempt to log in from anywhere else, I receive a "Login failed for user ''" with error 18456. In the log file on the server, I see a reason that doesn't seem to help: "Reason: Could not find a login matching the name provided." However, that user/login DOES exist, as I can use it locally. There are no further details about the error. Where can I start to find something to help me with this? I've tried deleting and recreating the user, as well as just creating a new one from scratch - same result: locally fine, remotely an error.

    EDIT: Partially resolved. I'm now past the base issue - the clients were trying to connect via the default instance; I don't know why. So, once the proper ports were opened in the firewall and a static port was assigned to the named instance, I can now connect - BUT ONLY if I specify the connection as Server,Port. SQLBrowser is apparently not helping/working in this case. I've verified it IS running, and done a stop/restart after my config changes, but no difference yet.
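    A sketch of what to compare from a client (instance name, port and login are placeholders): connecting by instance name relies on the SQL Browser service answering on UDP 1434, while Server,Port connections bypass the browser entirely, which matches the behaviour described above.

        REM name-based connection: needs the SQL Browser service and UDP 1434 open on the server
        sqlcmd -S tcp:MYSERVER\MYINSTANCE -U appuser

        REM port-based connection: bypasses SQL Browser (this is the form that already works)
        sqlcmd -S tcp:MYSERVER,1435 -U appuser

        REM on the server, one way to let browser traffic through the Windows firewall (sketch)
        netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434

    If the name-based form starts working once UDP 1434 is reachable, the remaining clients can keep using the instance name without hard-coding the port.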

    Read the article

  • Routing / binding 128 IPs to one server

    - by Andrew
    I have an Ubuntu server with 128 IPs (static external IPs, 86.xx.xx.16), and I want to crawl pages through different IPs. The gateway is xx.xxx.xxx.1, the main IP is xx.xxx.xxx.16, and the other 128 IPs are xx.xxx.xxx.129/255. I tried this configuration in /etc/network/interfaces but it doesn't work. It does work if I remove the gateway from the aliases eth0:0 and eth0:1, so I think this is a routing problem.

        auto lo
        iface lo inet loopback

        auto eth0
        auto eth0:0
        auto eth0:1

        iface eth0 inet static
            address xx.xxx.xxx.16
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:0 inet static
            address xx.xxx.xxx.129
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

        iface eth0:1 inet static
            address xx.xxx.xxx.130
            netmask 255.255.255.128
            gateway xx.xxx.xxx.1

    Also, please tell me how to "reset" every change that I made to networking and routing.

    Update: I removed the gateway and now it works; I can reach the website through all 128 IPs. But when I try to bind a socket connection in PHP to a specific IP, I get no answer:

        socket_bind($sock, "xx.xxx.xx.xxx");
        socket_connect($sock, 'google.com', 80);

    I tried using a sniffer to see the packets, and I see the packet sent from the bound IP to google.com, but the connection can't be established. I don't know anything about the "route" command, but I have a feeling that this is the solution.
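    One thing worth checking, sketched here under the assumption that the replies to the aliased source address are being routed or filtered away: confirm what the outgoing SYN and its reply actually look like, and if per-source routing turns out to be needed, iproute2 can pin each alias to its own route (addresses are the placeholders from the configuration above).

        # watch the handshake for the bound source address
        tcpdump -ni eth0 host google.com and port 80

        # optional source-based routing: give the alias its own table and default route
        ip route add default via xx.xxx.xxx.1 dev eth0 table 100
        ip rule add from xx.xxx.xxx.129 table 100

    If tcpdump shows the SYN leaving from the alias but no SYN-ACK coming back, the problem is upstream (the provider may only route replies to the primary address) rather than in the local routing table.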

    Read the article

  • Large, high performance object or key/value store for HTTP serving on Linux

    - by Tommy
    I have a service that serves images to end users at a very high rate using plain HTTP. The images vary between 4 and 64 kbytes, and there are 1,300,000,000 of them in total. The dataset is about 30 TiB in size, and changes (new objects, updates, deletes) make up less than 1% of the requests. The number of requests per second varies from 240 to 9000 and is dispersed pretty much all over, with few objects being especially "hot". As of now, these images are files on an ext3 filesystem distributed read-only across a large number of mid-range servers. This poses several problems:

    - Using a filesystem is very inefficient since the metadata size is large, the inode/dentry cache is volatile on Linux, and some daemons tend to stat()/readdir() their way through the directory structure, which in my case becomes very expensive.
    - Updating the dataset is very time consuming and requires remounting between set A and set B.
    - The only reasonable handling is operating on the block device for backup, copying, etc.

    What I would like is a daemon that:

    - speaks HTTP (get, put, delete and perhaps update)
    - stores data in an efficient structure; the index should remain in memory, and considering the number of objects, the overhead must be small
    - can handle massive numbers of connections with slow (if any) ramp-up time, reading the index into memory at startup
    - offers statistics (nice, but not mandatory)

    I have experimented a bit with riak, redis, mongodb, kyoto and varnish with persistent storage, but I haven't had the chance to dig in really deep yet.

    Read the article

  • How do you optimize your Outlook Exchange + IMAP setup?

    - by Mike
    My company provides an Outlook/Exchange account we must use for mail/calendar. Like many companies, they unfortunately also provide a ridiculously small mail quota. I got tired of managing and backing up .pst files (since I'm always in my e-mail there is never a good time to back it up), so I started storing my archived mail "in the cloud", using an IMAP server I set up on my Linux box. This has a few drawbacks for me:

    (1) IMAP (at least the implementation in Outlook) is *very slow*. Furthermore, if I move a large number of messages to the IMAP server, it sometimes blocks the entire Outlook client for hours, which is quite annoying.
    (2) I can't use Exchange over HTTP to do mail without launching a VPN session, because the client-side rules I have which organize my mail fail and disable themselves if the IMAP server can't be reached.
    (3) If I reply to a message from my IMAP store, I have to specify an SMTP server willing to relay for me in order to send e-mail, unless I always remember to select my Exchange account while composing.

    ...but the main advantage is that it is very easy to back up, with a couple of cron jobs that essentially do an 'rsync'. Short of moving the IMAP server to my local host (which seems like it might have the same file locking problems as using a .pst), my options seem limited for solving (1). I'd like to come up with a solution for (2) and (3) though.

    For problem (2), would it be possible to somehow tell Outlook that the IMAP server is "offline", and have it synchronize my changes during a periodic "send and receive"? If so, I wonder if it would block the Outlook client, like it does in problem (1), and if it would be compatible with the client-only rules I use to sort my mail into folders. I've looked all over the options menu and have not found a way to tell Outlook not to use a certain account for sending mail, which would solve (3). Is anyone else crazy enough to be doing something like this? Any ideas?

    Read the article

  • Windows File Access Denied

    - by Tom
    I seem to have a general problem with "access denied" on Windows. It manifests itself every time, for example, when:

    - my bat file calls a compiler that creates a file on disk
    - my bat file renames a file

    But I also have files downloaded (with Firefox) to the Windows desktop where Windows gives me "access denied" if I try to delete the file. I tried disabling AVG and making an exception in the AVG resident shield. (I have also checked with Task Manager and Winternals Process Explorer that it is not a still-running process that is causing the locks.)

    Windows 7. My user account is an administrator. All files are created by the same user account. The problem is recent, but some things I first noticed yesterday (when I started calling .bat files again, which I have used for many years).

    I have tried:

    - Starting e.g. Windows Explorer with "run as administrator", but that makes no difference.
    - Right-click - Properties - Security and changing permissions/ownership (I also get "access denied" when trying this, so it does not help).

    Here is a screenshot of trying to change the security of a "locked" file (the problem is that the locking occurs continuously, every time the file is created). If I click on it, it states I am not the owner, which baffles me as I just created it - yes, through a .bat file calling executables that create the file, but all running under my administrator user account. Interestingly, after having this dialog open, the file sometimes suddenly seems to allow me to delete it.
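    From an elevated command prompt, ownership and permissions can be reasserted without the Explorer dialog (a sketch; the path is a placeholder), which also helps tell a permissions problem apart from an open-handle lock:

        takeown /F "C:\path\to\locked-file.ext"
        icacls "C:\path\to\locked-file.ext" /grant %USERNAME%:F

        REM if that still fails, something probably holds an open handle on the file;
        REM Sysinternals' handle.exe can name the owning process:
        handle.exe locked-file.ext

    If takeown succeeds from an elevated prompt but the same action fails in Explorer, UAC token splitting is the likely explanation; if handle.exe names a process, the lock is the real culprit rather than the ACLs.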

    Read the article

  • Synchronize folders on different computers without a cloud service and without a local network, just the internet

    - by theimmortalbg
    I have two computers with Windows 7, one in my home town and one in another town, so they are not on a private network, but I have internet on both. They have exactly the same file structure. I am searching for a program that can keep the data identical. I know about Dropbox and Google Drive, but they are cloud services and I don't want to use them; also, they use a special folder that you have to copy your data into. There are other programs that work like a server - you upload something and can later download it - but I don't need those either. I just want to point at which folders should be synchronized and have the program do the synchronization. The sync can happen in real time if both computers are powered on, or some time later when they are powered on; or it can lock the other computer's synced folder until the update has been applied. Basically, these are my documents, which I want synced on all my computers and editable from wherever I want. I could move the updates around with a flash drive, but it would make my work much easier if a program saved the changes and applied them on the other computer with one click.

    Read the article

  • Removing an extended partition without deleting the logical partitions in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created an extended partition with GParted which contains a bunch of logical ones. Now, after quite a long time with this setup, I've changed my mind, because it leaves too little storage space for my data partition. I want to keep one distro alone, as is normal, and eventually store some other operating systems on external media to plug in and use when I want. The partition I want to keep (and to enlarge a little, too) is itself just a logical partition inside the extended one I want to remove.

    As far as the number of partitions goes, I'm OK: I currently have this big distro-dedicated extended partition, plus the swap and data partitions, so there is room for another primary before I delete the extended one. But I don't know how to delete the extended partition without touching the logical partition inside it. I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition around for a single logical one.

    How can I do this? Do I have to create a new primary partition, copy the logical's contents into it and then delete everything else? Will the system still boot and keep exactly the features it has now? Or is there a way to convert an extended partition into a primary once it contains just one logical? Or can I directly move a logical out of an extended, turning it into a primary? Or, again, am I screwed?

    Read the article

  • Ubuntu 12.10 64bit host reboots when trying to install any guest system using VirtualBox

    - by gts123
    I am having a really nasty problem with VirtualBox: every time I try to install any guest OS (using an ISO file as the CD for the installation media), the installation starts normally, but as soon as it is about to start installing to the virtual hard drive or loading (e.g. as a live filesystem), it causes the host system to reboot abruptly. The configuration is as follows:

    Host system: Ubuntu 12.10 64bit - Intel® Core i7-2640M CPU @ 2.80GHz × 4
    VirtualBox version: 4.1.18_Ubuntu r78361
    Guest OS systems tried: 32bit versions of FreeBSD 9, Debian 6, Tails 0.14

    VM setup: I tried to keep the setup as minimal as possible for each system in the hope of avoiding conflicts, but to no avail. I've tried different values and combinations of the settings below, but the problem still persists:

    - Shared Clipboard: Disabled
    - Show in fullscreen/seamless: Disabled
    - Remember runtime changes: Disabled
    - Base Memory: 2048 MB
    - Chipset: PIIX3
    - IO APIC: Disabled
    - EFI: Disabled
    - Absolute pointing device: Disabled
    - Processor(s): 1 CPU
    - PAE/NX: Disabled
    - VT-x/AMD-V: Disabled
    - Video Memory: 12 MB
    - 3D/2D acceleration: Disabled
    - Storage IDE controller: PIIX3 (same as chipset, instead of PIIX4)
    - Use host I/O cache: No
    - Audio: Disabled
    - Network adapter: NAT
    - USB controller: Disabled
    - No shared folders

    Another side effect of the reboot is that it appears not to log any information in the error log files, which does not make things any easier. Please help.

    Read the article
