Search Results

Search found 10661 results on 427 pages for 'move semantics'.

Page 329 of 427

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is to copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc therein to coworkers. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would need to see \\cartman\users\emueller\foo.doc to be able to follow the link. Explorer clearly knows the real path - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man, copy that path as text with \\cartman\users\emueller in it instead of Q:"? I know I could set up mapped network locations instead of mapped drives for the ones I set up personally and avoid this problem, but most of the mapped drives, like the "users" share, come from our IT policy. I could make a separate network location and then ignore my Q: drive, but that's inconvenient (and they use drive mappings so they can move accounts across servers). Sure, my emailed path might eventually break because I'm losing the drive letter indirection, but that's OK with me.
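
    A rough sketch of one way to script this, assuming the pywin32 package is available: WNetGetUniversalName asks Windows for the UNC path behind a mapped-drive path. The path below is only an example.

    # unc_copy.py - sketch: resolve a mapped-drive path to its UNC form (assumes pywin32)
    import win32wnet

    local_path = r"Q:\foo.doc"  # example path; replace with the file you want to share
    # 1 = UNIVERSAL_NAME_INFO_LEVEL: ask for the \\server\share\... form of the path
    unc_path = win32wnet.WNetGetUniversalName(local_path, 1)
    print(unc_path)  # e.g. \\cartman\users\emueller\foo.doc

    Wired to a SendTo shortcut or a clipboard helper, this could make "copy as UNC path" a one-click action.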

    Read the article

  • Handling the Outlook 2007 AutoArchive PST file

    - by Doug Luxem
    We encourage our users to enable AutoArchive in Outlook 2007 as a way to manage their mailbox sizes. However, we frequently run into problems with the archive.pst file that is generated. The two main problems are:

    1. The archive.pst file is located in the user's local profile directory and is never backed up. A dead hard drive or stolen laptop could result in months or years of missing email. All other personal data is stored on network shares, but we can't do that for Outlook PST files.
    2. Without some sort of manual intervention, the archive will grow to an enormous size. Although Outlook 2007 SP2 handles large files better than before, it still results in slow response times from Outlook and an increased likelihood of a corrupt PST file.

    To mitigate these problems personally, I move the archives to a c:\Outlook folder and manually back that folder up to a shared drive every month or so. Additionally, I rotate archive files every year so that I have one file for each year (archive2008.pst, etc.). Obviously, asking our users to do the same wouldn't help much. We need some sort of automated solution to take care of points 1 and 2. I have to imagine this is a common problem for Exchange organizations, so what is the best method to handle this?
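
    As an illustration of automating point 1, a minimal sketch of a scheduled copy job. The paths, the share name, and the idea of running it while Outlook is closed (e.g. from a logon script or Task Scheduler) are assumptions, not a tested recipe:

    # pst_backup.py - sketch: copy local Outlook archive PSTs to a per-user network share
    import glob
    import os
    import shutil

    # Assumed locations; adjust to your environment
    local_dir = os.path.join(os.environ["LOCALAPPDATA"], "Microsoft", "Outlook")
    share_dir = os.path.join(r"\\fileserver\pstbackup", os.environ["USERNAME"])  # hypothetical share

    os.makedirs(share_dir, exist_ok=True)
    for pst in glob.glob(os.path.join(local_dir, "archive*.pst")):
        # Outlook must be closed, or the PST will be locked; run this from a logon script
        shutil.copy2(pst, share_dir)
        print("backed up", pst)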

    Read the article

  • Writing scripts that work with my emails

    - by queueoverflow
    I currently use Thunderbird as my email client and it has some filters, but that seems to be all I can program in it. On several occasions, I have heard people talk about their automated email workflow. One example: when I do not get a reply to an email, the script sends a “nag” email asking why I have not gotten a response yet. Or another one: I get so much mail that I cannot read it all. After a week, unread email is put on hold and the sender gets an “if it was important, reply to this email and it will be taken off hold” email. The script then takes the answer and moves the message back into the important folder. I read about FiltaQuilla, which seems nice, but it does not seem to be the kind of programming that I am looking for. How can I write general-purpose scripts like those? Do I need to write my own Python IMAP/SMTP client (if that is even possible) to do this, or can I script it in, say, JavaScript, in Thunderbird?
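
    You don't need a full mail client for this; the standard library's imaplib and smtplib go a long way. A rough sketch of the “nag” idea, with the server names, folder names, and date threshold all made up for illustration:

    # nag.py - sketch: find sent messages with no reply and send a reminder
    import email
    import imaplib
    import smtplib
    from email.message import EmailMessage

    IMAP_HOST, SMTP_HOST, USER, PASSWORD = "imap.example.org", "smtp.example.org", "me", "secret"

    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(USER, PASSWORD)
    imap.select("Sent")
    # Messages I sent before a cutoff date (hard-coded here just to keep the sketch short)
    _, data = imap.search(None, '(BEFORE "01-Jan-2024")')
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        sent = email.message_from_bytes(msg_data[0][1])
        msg_id = sent["Message-ID"]
        # Look in the inbox for anything referencing that Message-ID (i.e. a reply)
        imap.select("INBOX")
        _, replies = imap.search(None, '(HEADER In-Reply-To "%s")' % msg_id)
        if not replies[0]:
            nag = EmailMessage()
            nag["To"], nag["From"] = sent["To"], USER
            nag["Subject"] = "Re: " + (sent["Subject"] or "")
            nag.set_content("Just checking in - I have not seen a reply to my earlier mail yet.")
            with smtplib.SMTP_SSL(SMTP_HOST) as smtp:
                smtp.login(USER, PASSWORD)
                smtp.send_message(nag)
        imap.select("Sent")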

    Read the article

  • CloudFront for dynamic content CDN

    - by Elad Lachmi
    I would like to use CF as a CDN for my entire site, including static and dynamic content. I have been using CF for static content for a while and I am very happy with the results. I am now doing a POC of putting the web server completely behind CF. For the dynamic content I created a new distribution and set the origin to be my web server. Right now I'm looking to test the solution, so I have the web server on the original domain and the CF distribution on the amazon domain. This works with the exception of HTTPS URLs and POST requests. For HTTPS requests, I see the requests are forwarded to the original site domain for now, but how will CF handle them when I move the distribution to the www cname? What configuration changes should I make so that CF forwards HTTPS requests to the origin? For POST requests, I want the post to be made to the origin server. Can I set this up in CF? Finally, the site has membership. Can I configure CF to pull all content from the origin if the user is logged in? Sorry for the long question. I'm a little lost and documentation for dynamic CF is still kind of scarce. Thank you!

    Read the article

  • Permanently deleting files on Mac OS

    - by Jonik
    A while back, as a relatively new Mac OS X user, I was surprised to learn that you cannot easily delete files - directly, that is, without moving them to the trash first. On Windows and Linux this can obviously be done with ease, but not so on the Mac. I noticed this when trying to clear files off a USB memory stick - removing the files ("move to trash") does not free up space; that happens only after emptying the whole system-wide Trash. Not particularly convenient! (It seems stupid to have to empty the whole trashcan just to make some space on the USB stick. There might be gigabytes of stuff in there, and this sort of defeats its purpose - what if you actually need to restore something from the trash some day?) So, what's your way of getting around this? Have you bought a 3rd party application like RAW Trash for $16.95 just to delete files, or do you diligently empty the trashcan whenever needed? Or did I miss something? Also, can you convince me that this is actually the way it should be - that users shouldn't be able to fiddle with the filesystem easily? :)
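
    For the USB-stick case specifically, one workaround (an assumption about how per-volume trash works, not official advice) is that files "moved to trash" from an external volume actually sit in a hidden .Trashes folder at the root of that volume, so clearing just that folder frees the stick's space without emptying the system-wide Trash. A hedged sketch:

    # purge_stick_trash.py - sketch: empty only the per-volume trash on a USB stick (Mac OS X)
    import os
    import shutil

    volume = "/Volumes/MYSTICK"            # example mount point; adjust to your stick's name
    trash = os.path.join(volume, ".Trashes", str(os.getuid()))  # per-user trash on that volume

    if os.path.isdir(trash):
        shutil.rmtree(trash)               # permanently deletes; there is no undo after this
        print("freed the space used by trashed files on", volume)
    else:
        print("no per-volume trash found at", trash)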

    Read the article

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing that IP so that the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds:

    ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work on my laptop though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 onto the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports: 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end. That still didn't work. One other note... perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
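
    On the CIDR point: /32 simply means "exactly this one address", so old-server-ip/32 is normally the right thing to authorize - but it has to be the server's public IP as seen from the internet, not the private interface that ifconfig and its 255.255.255.0 mask describe. A small sketch using Python's ipaddress module to make the notation concrete (the addresses are examples):

    # cidr_demo.py - sketch: what /32 and /24 actually cover
    import ipaddress

    host = ipaddress.ip_network("203.0.113.45/32")         # exactly one host
    subnet = ipaddress.ip_network("203.0.113.0/24")        # 256 addresses, mask 255.255.255.0

    print(host.num_addresses)                              # -> 1
    print(subnet.num_addresses, subnet.netmask)            # -> 256 255.255.255.0
    print(ipaddress.ip_address("203.0.113.45") in subnet)  # -> True

    A quick way to find the public IP the old server actually presents is to ask an external "what is my IP" service from that box and authorize that address as /32.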

    Read the article

  • Trying to limit IMAP folders/mailboxes my iPhone/iPad sees

    - by QuantumMechanic
    (Note: I am using dovecot 1.0.10 on Ubuntu 8.04.4 LTS. Yes, I know I need to upgrade before next year :) (Note: The SMTP/IMAP server in question only serves my family, so there are only a very few users. Certainly what I propose below, even if it works, would be a logistical nightmare with any significant number of users.) I have noticed (and have confirmed via Google) that the iOS mail app is terrible in its handling of IMAP subscriptions, namespaces, etc. For example, my iPhone and iPad will see EVERYTHING (all mailboxes, folders, etc.), whereas clients like Thunderbird, alpine, etc. only see what I tell them to see. This makes it an incredible pain to move mail between mailboxes because I have to scroll through a gazillion things. The mail_location in dovecot.conf is:

    mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u

    To get around this, I've been considering doing the following for user foo:

    1. Create a dovecot userdb with a foo-ios virtual user in it, whose UID is identical to that of the real (in /etc/passwd) foo user and with a homedir of /home/foo-ios.
    2. ln -s /var/mail/foo /var/mail/foo-ios
    3. mkdir -p /home/foo-ios/Mail
    4. cd /home/foo-ios/Mail
    5. ln -s /home/foo/Mail/mailbox-i-want-visible mailbox-i-want-visible
    6. Make symlinks for the rest of the limited set of mailboxes/folders I want visible to the iOS mail app.
    7. chown -R foo:foo /home/foo-ios
    8. Change the iOS mail app settings to log in as user foo-ios instead of user foo.

    Will this work, or will there be some index/file corruption hell because there will be two sets of indexes (one set living in /home/foo/Mail/.imap and the other set living in /home/foo-ios/Mail/.imap) indexing the same underlying mbox files? And I'd be more than happy to hear of a better way to do this with dovecot! (Or to hear that dovecot 2.x works better with iOS devices.)

    Read the article

  • Why AltGr+P doesn't work (AutoHotkey)

    - by voodoomsr
    Hi guys, I try and try to find the bug in this script but I can't. Maybe some of you can give me a hint... Problem: when I press AltGr+P it is supposed to trigger the Delete key, but the weird thing is that after one successful delete, if I keep pressing AltGr+P a "p" appears and Delete isn't triggered any more. In the meantime I tested another solution (move right and then delete with Backspace), which works, but when I have text selected this alternative isn't good.... Here is the code:

    #InstallKeybdHook

    ; frequently used characters
    RAlt & e::SendInput []{Left}
    RAlt & w::SendInput <>{Left}
    RAlt & d::SendInput (){Left}
    RAlt & s::
        SendRaw {}
        SendInput {Left}
    Return
    RAlt & x::SendInput ""{Left}
    RAlt & c::SendInput ''{Left}
    RAlt & f::SendInput *
    RAlt & r::SendRaw +
    RAlt & v::SendInput -

    ; start and end of line
    RAlt & a::SendInput {Home}
    RAlt & z::SendInput {End}

    ; movements while editing
    /*
    RAlt & p::SendInput {Right}{BackSpace}
    */
    <^>!p::Send {Del}
    RAlt & o::SendInput {Up}
    RAlt & l::SendInput {Down}
    RAlt & k::SendInput {Left}
    RAlt & ñ::SendInput {Right}
    RAlt & ,::SendInput {Enter}
    RAlt & i::SendInput {BackSpace}

    ;; clipx
    ^mbutton::sendinput ^+{insert}

    ^+k::^+Left
    +k::+Left
    ^k::Left
    +l::+Down
    ^+l::^+Down
    ^l::^Down
    +ñ::+Right
    ^+ñ::^+Right
    ^ñ::^Right
    +o::+Up
    ^+o::^+Up
    ^o::^Up
    +a::+Home
    ^+a::^+Home
    +z::+End
    ^+z::^+End

    Read the article

  • How can I prevent Apache from asking for credentials on a non-SSL site

    - by Scott
    I have a web server with several virtual hosts. Some of those hosts have an associated SSL site. I have a DirectoryMatch directive in my main config file which requires basic authentication for any directory with _secured_ as part of the directory path. On sites that have an SSL site, I have a rewrite rule (located in the non-SSL config for that site) that redirects to the SSL site, same URI. The problem is the http (80) site asks for credentials first, and then the https (443) site asks for credentials again. I would like to prevent the http site from asking and thus avoid the potential for someone entering credentials and having them sent in clear text. I know I could move the DirectoryMatch down to the specific site, and just put the auth statement in the SSL config, but that would introduce the possibility of forgetting to protect critical directories when creating new sites. Here are the pertinent declarations:

    httpd.conf (all sites):

    <DirectoryMatch "_secured_">
        AuthType Basic
        AuthName "+ + + Restrcted Area on Server + + +"
        AuthUserFile /home/websvr/.auth/std.auth
        Require valid-user
    </DirectoryMatch>

    site.conf (specific to an individual site):

    <DirectoryMatch "_secured_">
        RewriteEngine On
        RewriteRule .*(_secured_.*) https://site.com/$1
    </DirectoryMatch>

    Is there a way to leave DirectoryMatch in the main config file and prevent the request for authorization from the http site? Running Apache 2 on Ubuntu 10.04 server from the default package. I have AllowOverride set to None - I prefer to handle things in the config files instead of .htaccess.

    Read the article

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our 2 sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for user-uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:

    2 websites (1 forum, 1 ecommerce)
    1 LB
    1 app server (to scale out to as many as needed)
    1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and according to what I am learning about Scalr, that means as new instances load up, I need to run a script to set the basics up on that server (git, php mods, pull site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, etc. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mount just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
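
    One common alternative to a shared EBS/s3fs mount is to keep the app servers stateless and push each upload straight to S3, then serve it from the bucket (or a CDN in front of it). A hedged sketch using boto3; the bucket name and key layout are made up:

    # upload_avatar.py - sketch: push a user upload to S3 instead of local disk (assumes boto3 + credentials)
    import mimetypes
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-site-uploads"  # hypothetical bucket

    def save_avatar(user_id, local_path):
        key = f"avatars/{user_id}/{local_path.rsplit('/', 1)[-1]}"
        content_type = mimetypes.guess_type(local_path)[0] or "application/octet-stream"
        s3.upload_file(local_path, BUCKET, key, ExtraArgs={"ContentType": content_type})
        # Every autoscaled app server sees the same object; no shared filesystem needed
        return f"https://{BUCKET}.s3.amazonaws.com/{key}"

    print(save_avatar(42, "/tmp/avatar.png"))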

    Read the article

  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post The Asset Pipeline, from development to production and tweaked them to my environment. The two important files are:

    /etc/apache/site-available/example.com

    <VirtualHost *:80>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot "/var/www/sites/example.com/current/public"
        ErrorLog "/var/log/apache2/example.com-error_log"
        CustomLog "/var/log/apache2/example.com-access_log" common

        <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
        </Directory>

        <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
        </LocationMatch>

        RewriteEngine On
        # Remove the www
        RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
    </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess

    RewriteEngine on
    RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
    RewriteCond %{REQUEST_FILENAME}.gz -s
    RewriteRule ^(.+) $1.gz [L]

    <FilesMatch \.css\.gz$>
        ForceType text/css
        Header set Content-Encoding gzip
    </FilesMatch>

    <FilesMatch \.js\.gz$>
        ForceType text/javascript
        Header set Content-Encoding gzip
    </FilesMatch>

    But Apache seems to send empty gzip files, because the test site loses all styles and Firebug doesn't find any content for the CSS files. Although if I call the assets path directly I get some gibberish that looks like binary data. If I move the .htaccess file, everything is back to normal. How could I find out where/what went wrong, or do you have any suggestions for what error I made?

    > apache2 -v
    Server version: Apache/2.2.14 (Ubuntu)
    Server built: Mar 5 2012 16:42:17

    > uname -a
    Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
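
    A quick way to narrow down where it breaks is to request one stylesheet twice, with and without Accept-Encoding: gzip, and compare what comes back. A small sketch (assuming the requests library; the URL is an example):

    # check_gzip.py - sketch: see what the server actually returns for a precompressed asset
    import gzip
    import requests

    url = "http://example.com/assets/application.css"  # replace with a real asset URL

    plain = requests.get(url, headers={"Accept-Encoding": "identity"})
    gzipped = requests.get(url, headers={"Accept-Encoding": "gzip"}, stream=True)
    raw = gzipped.raw.read()  # undecoded bytes, as sent on the wire

    print("plain:", plain.status_code, len(plain.content), "bytes")
    print("gzip :", gzipped.status_code, gzipped.headers.get("Content-Encoding"), len(raw), "bytes")

    # If Content-Encoding says gzip but the body is empty or won't decompress,
    # the .gz file on disk (or the rewrite to it) is the problem, not the headers.
    try:
        print("decompressed:", len(gzip.decompress(raw)), "bytes")
    except OSError as exc:
        print("body is not valid gzip:", exc)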

    Read the article

  • Windows ACL inheritance issues for FTP server and automated tools

    - by Martin Sall
    I have set up Cerberus FTP server. By default, the Cerberus FTP service runs under the SYSTEM account. I also have some console applications which run as scheduled tasks. They run under a dedicated "Utilities" user account which has "Log on as batch job" permissions. These console applications take uploaded FTP files, process them, and then move them to a dedicated archive folder. The problem is that my console apps are throwing security exceptions when trying to access the uploaded files. I tried to give Full Control permissions on the ftproot folder to my "Utilities" account and I checked the "Replace all child object permissions with inheritable permissions from this object" checkbox, but it affects only current files. When new files are uploaded, they again are not accessible by my "Utilities" account. I tried to go another way and put the Cerberus FTP service under the "Utilities" account. Then I also needed to give the "Utilities" account permissions on the Cerberus Data folder in ProgramData. Still no luck - after this operation, the Cerberus internal SOAP web service stopped working (although everything else seems to work). I need that SOAP service to be available, so running Cerberus FTP under the "Utilities" account seems to be not an option, unless I find out what else I need to set up for that "Utilities" account to stop Cerberus from complaining. I guess Cerberus is uploading files to some temporary folder, so those files get the permissions from that folder and keep the same permissions even after being moved to the ftproot. What would be the right solution which would grant the Cerberus FTP server and the "Utilities" account the minimal permissions needed to access the contents of the ftproot folder?
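
    The "moved files keep the permissions of the temp folder" theory matches how NTFS behaves: a move within the same volume keeps the file's existing ACL instead of inheriting from the destination. As a stopgap, a scheduled task could re-apply inheritance on anything sitting in ftproot before the console apps run. A hedged sketch shelling out to the built-in icacls tool (the path is an example):

    # reset_acl.py - sketch: force files under ftproot to re-inherit permissions from the folder
    import subprocess

    FTPROOT = r"C:\ftproot"  # example path; point this at the real upload root

    # /reset replaces explicit ACLs with inherited ones; /t recurses; /c keeps going on errors
    subprocess.run(["icacls", FTPROOT, "/reset", "/t", "/c"], check=True)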

    Read the article

  • Prevent Windows from resizing all the apps on the desktop when switching monitors

    - by Greg Hewgill
    Short version: When moving my laptop and sleeping between using different monitors, all my open windows are crammed into the upper left corner as if they tried to fit on the laptop internal screen resolution. I plug in and switch to the external monitor before unlocking my session. Is there a way to prevent this automatic resizing?

    Longer version: I have a laptop that I move between two locations. I have one docking station, and the same kind of monitor configured for 1600x1200, in both locations. The internal laptop screen is awful so I don't use it.

    Location A: Docking station, monitor connected via DVI.

    Location B: No docking station, external monitor connected via VGA cable. In this location I have the laptop lid open for keyboard access but I don't use the laptop screen.

    When moving from Location A to Location B, the laptop wakes up from sleep, displaying the screen on the internal monitor. I switch to the external monitor display (using Fn+F8 on this laptop), and only after that do I unlock my session with my password. However, Windows has crammed all my nicely arranged windows into the upper left corner as if it were trying to fit them all on the laptop internal screen resolution.

    When moving from Location B to Location A, I have the laptop lid closed when using the docking station, so Windows apparently concludes the screen resolution is 1600x1200 and doesn't resize any windows.

    The laptop is a Dell Latitude running Windows 7 Professional.
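
    I'm not aware of a supported switch that stops the shuffle, but as a workaround the window layout can be captured before undocking and put back after unlocking. A rough sketch assuming the pywin32 package (run it with "save" before sleep, and with no argument after the monitor switch):

    # winpos.py - sketch: save and restore top-level window positions (assumes pywin32)
    import json
    import sys
    import win32gui

    STATE = "winpos.json"

    def snapshot():
        layout = {}
        def visit(hwnd, _):
            if win32gui.IsWindowVisible(hwnd) and win32gui.GetWindowText(hwnd):
                layout[str(hwnd)] = win32gui.GetWindowRect(hwnd)  # (left, top, right, bottom)
        win32gui.EnumWindows(visit, None)
        with open(STATE, "w") as f:
            json.dump(layout, f)

    def restore():
        with open(STATE) as f:
            layout = json.load(f)
        for hwnd, (left, top, right, bottom) in layout.items():
            hwnd = int(hwnd)
            if win32gui.IsWindow(hwnd):
                win32gui.MoveWindow(hwnd, left, top, right - left, bottom - top, True)

    if sys.argv[1:] == ["save"]:
        snapshot()
    else:
        restore()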

    Read the article

  • Linux - preventing an application from failing due to lack of disk space [migrated]

    - by Jernej
    Due to an unforeseen scenario I am currently in need of finding a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context, I have an application in Python that uses multiprocessing.Pool to start 5 workers. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU intensive and has been running for months. It still has a few days to go to write all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30 GB of remaining disk space, and at the current rate of work it will surely be exhausted before the program finishes. Given the above points, I see the following solutions, with respective problems:

    1. Given that process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file?
    2. Is there a "command line" way to stop 4 of the 5 spawned workers and wait until one of them finishes and then resume their work? (I am sure one single worker would avoid hogging the disk.)
    3. Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files so as to remove the redundant lines?

    Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before its finish.
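
    On point 1, "moving" a file to another disk is really copy-then-delete, and because the worker still holds the file open, the space is not freed and new writes go to the now-unlinked file, so that option would lose data. On points 2 and 3: individual workers can be paused without killing anything by sending SIGSTOP (and later SIGCONT), and Ctrl+Z sends SIGTSTP to the whole foreground process group, which normally includes the pool's children. A hedged sketch of pausing all but one worker, given the parent PID (found with ps or pgrep):

    # pause_workers.py - sketch: pause all but one worker of a running Python program (Linux)
    # Usage: python pause_workers.py <parent_pid>
    import os
    import signal
    import subprocess
    import sys

    parent = sys.argv[1]
    # pgrep -P lists the direct children of the given parent PID
    kids = subprocess.check_output(["pgrep", "-P", parent]).split()
    workers = [int(pid) for pid in kids]

    for pid in workers[1:]:            # keep the first worker running, pause the rest
        os.kill(pid, signal.SIGSTOP)
        print("paused", pid)

    # Later, resume them one at a time with:
    #   os.kill(pid, signal.SIGCONT)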

    Read the article

  • Production deployment to EC2 with minimal downtime

    - by jensendarren
    I have a simple web application deployed on a large instance with EC2. I now want to deploy the latest code to this server, but I want to do this in a way which minimizes downtime and is as smooth as possible for the end user. Here is my plan:

    1. Fire up another large instance.
    2. Install all the software layers on that instance.
    3. Restore and attach an EBS drive to the instance.
    4. Deploy our latest production-ready code on the new instance.
    5. Run all tests (including manual testing of the application).
    6. (If tests pass) Put a "Site Under Maintenance" notice on the live site.
    7. Back up the EBS volume on the live site.
    8. Detach the EBS volume from the new server and replace it with the latest backup.
    9. Use ec2-associate-address to move the IP address to the new instance.
    10. Sit back and wait for traffic to start flowing through the new instance.
    11. Terminate the old instance.

    Does this seem like a good strategy? Are there any tutorials or books that might cover this topic? I have already read Cloud Application Architectures by George Reese, which is an excellent book, but it does not cover deployment. Additionally, I know that there are tools that can help with this like RightScale or enStratus, which I will use when I start using more than one instance.
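
    Step 9 is the actual cutover, and it can be scripted so the switchover window stays as short as possible. A hedged sketch of the same operation with boto3 (the instance ID and Elastic IP are placeholders; the ec2-associate-address command-line tool does the equivalent):

    # cutover.py - sketch: move an Elastic IP from the old instance to the new one (assumes boto3)
    import boto3

    ec2 = boto3.client("ec2")

    ELASTIC_IP = "203.0.113.10"            # placeholder Elastic IP
    NEW_INSTANCE = "i-0123456789abcdef0"   # placeholder instance id

    # EC2-Classic style; in a VPC you would pass AllocationId and AllowReassociation=True instead
    ec2.associate_address(PublicIp=ELASTIC_IP, InstanceId=NEW_INSTANCE)
    print("traffic now flows to", NEW_INSTANCE)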

    Read the article

  • Remote paging with Nagios when network is down and email won't work -- cellular modems and alternatives

    - by Quinten
    What is the best option for remote paging when network services are down? I'm looking for a solution that can let me know when network services are down during off-hours only, and especially when email/SMTP services are out. Therefore, it needs to be redundant to our network and power supply. I'm imagining a cellular modem is one option. What's the price range for these? Is anybody using them, and do you feel they are worth the cost? I'm imagining that we would end up sending an emergency page about once a month at most, so I'd like the pricing to reflect that - I don't mind a high per-page cost as long as it has a low recurring cost. Another option would be to expose at least one server to remote ping, and run a check script on a remote server. Are there paid options for this? Currently, we run Nagios on a Linux VM on a Windows 2008 Hyper-V host. It would be great if the solution would work in that environment, but I know it's tricky with external devices, and we could move Nagios to a standalone workstation if needed.
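
    For the "check from outside" option, the script can live on any cheap external host and page through a carrier's email-to-SMS gateway, which keeps the recurring cost near zero. A hedged sketch (the target host, gateway address, and SMTP relay are all placeholders), meant to run from cron on the external box:

    # outside_check.py - sketch: run from an external host; pages if the office network is down
    import smtplib
    import subprocess
    from email.message import EmailMessage

    TARGET = "203.0.113.20"                  # placeholder: a pingable host on the office network
    PAGE_TO = "5551234567@txt.example.com"   # placeholder: carrier email-to-SMS gateway address

    # Three pings, wait up to 5 seconds each; a non-zero return code means no replies
    down = subprocess.run(["ping", "-c", "3", "-W", "5", TARGET],
                          stdout=subprocess.DEVNULL).returncode != 0

    if down:
        msg = EmailMessage()
        msg["From"], msg["To"], msg["Subject"] = "check@example.com", PAGE_TO, "Office network DOWN"
        msg.set_content("External check cannot reach %s" % TARGET)
        with smtplib.SMTP("smtp.example.com") as smtp:  # the external host's own mail relay
            smtp.send_message(msg)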

    Read the article

  • Lookup Multiple Results for Multiple Criteria

    - by Matt
    I've got a list of parent SKUs for items I need to create in my inventory system. This list has been pared down to the 165 products we would like to carry. However, each one of these 165 SKUs has between 2 and 8 child SKUs of different colors, sizes, etc. Those are stored on a different worksheet, mixed in with around 2500 items. Those are the SKUs I need to input into my inventory system. Here is what it looks like.

    Sheet 1 is just parent SKUs (column A):

    A
    1
    2
    3
    4

    Sheet 2 is comprised of all the child SKUs (column A), with parent SKUs in column B. Not all parents have the same number of children:

    A       B
    1BLKM   1
    1BLKL   1
    1BLUM   1
    2BLKM   2
    2BLKL   2
    2BLUM   2
    2ORAM   2
    3BLKM   3
    3BLUM   3

    I want to look up all of the child SKUs for the fine-tuned parent SKU list. The parent SKU is included as a column on the child SKU worksheet. I need to look up all matches of a parent SKU, then continue down the parent SKU list until all matches for all 165 parent items have been found. It seems like every function I try can't use an array for input. Is there a way to do this with LOOKUP or some combination of INDEX, MATCH, ROW, etc.? Any way at all to do it without VBA? Or maybe even a VBA solution with code that I can understand, as someone who hasn't used VBA before.
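
    Not an Excel-formula answer, but if stepping outside Excel is acceptable, this is a one-line filter in pandas: keep every row of the child sheet whose parent column appears in the 165-row list. A hedged sketch (the file, sheet, and column names are assumptions about the workbook layout):

    # filter_skus.py - sketch: pull all child SKUs whose parent is on the approved list (assumes pandas + openpyxl)
    import pandas as pd

    parents = pd.read_excel("skus.xlsx", sheet_name="Sheet1")   # column "Parent" holds the 165 SKUs
    children = pd.read_excel("skus.xlsx", sheet_name="Sheet2")  # columns "Child" and "Parent"

    wanted = children[children["Parent"].isin(parents["Parent"])]
    wanted.to_excel("children_to_import.xlsx", index=False)
    print(len(wanted), "child SKUs matched")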

    Read the article

  • What USB key would you recommend using for running a Windows 7 VM off of?

    - by Darryl Hein
    Because I can't find a good PHP editor for OS X, I develop in Windows with PhpEd. At the moment, my development time is split between a desktop and a laptop. To partially solve the problem of having 2 different environments, I have installed a virtual machine (through VirtualBox) and put the hard drive file on an external hard drive. At the moment, I've been connecting it through FireWire 800. I have 2 problems with this setup: (1) the hard drive is fairly large, so to carry the laptop and hard drive I pretty much require a backpack; (2) the hard drive requires quite a bit of power and therefore reduces the battery life (by about 40%). My thought is to move the VM hard drive onto a USB key. I realize it will be slower, but as I'm just using it for PHP development, there isn't a lot of disk activity in the VM. The only really intense time is boot-up; otherwise, it just about sits idle. Does anyone have any suggestions on a USB key to use for the VM? It would need to be a minimum of 32 GB.

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they said the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind:

    1. Get a 1 TB external HDD.
    2. Make an image of the existing 500 GB HDD and store it on the external disk.
    3. Install the new HDD, install a brand new copy of Windows, and then install the imaging software on it.
    4. Using the same software I used to make the image, restore the old HDD image onto the new drive.

    However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who comes to change the HDD will have a pirated copy. But I have genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (genuine) or the one on the new HDD (pirated)? I am ready to purchase good software for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.

    Read the article

  • Metacity/Compiz not starting upon login on Ubuntu 10.10

    - by Ryan Lanciaux
    TLDR: As of this afternoon, I do not have a window manager when I log in to Ubuntu 10.10. I would like to have a window manager on login without needing to add it to startup. Just started using Linux again as my home OS (used it for a long time years ago but have been on Windows up until this past weekend), so this may be kind of n00b-ish :) Anyway, up until today, everything on my machine was running okay. I did not have Compiz running as the default WM because I'm running NVIDIA drivers and Xinerama (and as I understand it, Xinerama and Compiz don't work well together). I made no changes to my xorg config, etc., but today when I logged in, I had to manually start metacity from the command line to get any window manager. Really not sure what would be causing this or what I can do to get it working again. My xorg.conf is available here: https://gist.github.com/845618. My default window manager is set to /usr/bin/metacity in Configuration Editor under /desktop/gnome/applications/window_manager. P.S. Any tips on how to run 3 monitors where I can move windows between screens without Xinerama would be appreciated, but that's prolly for another thread :)

    Read the article

  • Windows Explorer Hangs on Right-Click

    - by Bryan
    I am not sure if this is the right site to post this on, as I typically post coding questions on Stack Overflow. But I'll ask anyway, and hopefully someone can move it if it's incorrect. I have a custom-built PC with an Intel i7 chip, a 1300 W PSU, 8 GB of RAM, and two video cards. Originally I had one video card (NVIDIA) that used PSU connectors and had two DVI outputs. After purchasing a third monitor I installed another (ATI) graphics card that does not need any PSU connectors. After installing it and restarting, I noticed that when I right-click on my desktop or in Windows Explorer, Explorer will hang, freeze, then restart. Sometimes after Windows Explorer restarts the problem dissipates. I checked to make sure everything was connected properly, and it was. I repaired the ATI Catalyst Control Center to see if that had an issue, and I checked whether either video card required updated drivers. Nothing worked. I tried restarting my PC and that didn't work. I tried using ShellExView (I forgot what it's actually called) and tried closing processes, but that didn't work. Does anyone have any idea what could have caused this, or possible solutions I should try? Thanks in advance.

    Read the article

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite having to pay for it. I need at least the following batch editing functions:

    - Resize using either height or width and maintain aspect ratio
    - Auto contrast
    - Text overlays
    - Occasionally, cropping
    - Oh, and I make extensive use of renaming features as well

    A couple of issues with ACDSee are: I always need to highlight the Exposure section or auto contrast will not be done despite it being saved in the preset; and I can't define and move around the cropping box, forcing me to manually crop tons of images. I'm not an advanced, or "power", photo editor. I only require the basic stuff I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlay based on the image names (images are named as image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlay would be Image-1 and Image-2 C1 and Image-2 C2). I tried digiKam, but damn that thing is huge. It runs very slowly on my Pentium 4 and 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow regardless of whether it's running on Windows or Linux.
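
    The wish-list item (overlay text derived from the file name) is actually easy to script, which may matter more than finding the perfect GUI tool. A hedged sketch with Pillow that does the resize, auto contrast, and name-based overlay in one pass over a folder; the folder names and the name-parsing rule are assumptions based on the examples above:

    # batch_edit.py - sketch: resize, auto-contrast and label images based on their file names (assumes Pillow)
    import os
    from PIL import Image, ImageDraw, ImageOps

    SRC, DST, MAX_WIDTH = "originals", "edited", 800  # example folders and target width

    os.makedirs(DST, exist_ok=True)
    for name in os.listdir(SRC):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        img = Image.open(os.path.join(SRC, name)).convert("RGB")

        # Resize to a fixed width, keeping the aspect ratio
        ratio = MAX_WIDTH / img.width
        img = img.resize((MAX_WIDTH, round(img.height * ratio)))

        img = ImageOps.autocontrast(img)

        # "image-2_c1_3.jpg" -> overlay "Image-2 C1" (drop the trailing per-shot counter)
        parts = os.path.splitext(name)[0].split("_")
        label = " ".join(p.capitalize() for p in parts[:-1])
        ImageDraw.Draw(img).text((10, 10), label, fill="white")

        img.save(os.path.join(DST, name))
        print("processed", name)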

    Read the article

  • Transferring an SQL Processor License to a virtual hosted environment

    - by Andrew Shepherd
    My company is currently hosting a service in-house, and we want to move to an externally hosted environment. We would then be using a virtual server. I understand that this might be spread across multiple machines, but from my perspective as a customer, this layer is abstracted away - I shouldn't know or care about the hardware that the OS is hosted on. We have a licensed edition of SQL Server 2008. This is one processor license. Will it be a violation of the licensing agreement to use this in a virtual environment? From the reference guide here it says: "When licensed Per Processor - With Workgroup, Web, and Standard editions, for each server to which you have assigned the required number of per processor licenses, you may run, at any one time, any number of instances of the server software in physical and virtual operating system environments on the licensed server. However, the total number of physical and virtual processors used by those operating system environments cannot exceed the number of software licenses assigned to that server." For Enterprise edition there is an added option: if all physical processors in a machine have been licensed, then you may run unlimited instances of SQL Server 2008 in one physical and an unlimited number of virtual operating environments on that same machine. I'm having trouble getting my head around this. Would I theoretically have to get a license for every processor in this virtual environment (which is effectively impossible because I have no way of knowing how many processors there actually are)? Or can I just say that it's hosted on one "virtual" server, so that's OK?

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times with other services (mapping from Dreamhost to WebFaction). But for some reason, when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing talking about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using? P.S. Meta-question - why are things so much more difficult with AWS? When I search Google for "Move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?

    Read the article
