Search Results

Search found 16911 results on 677 pages for 'top hat'.


  • Ubuntu 10.10 - disaster - what other Linux for a beginner?

    - by A-ha
    Guys, I've tried to install Ubuntu (desktop and notebook editions) on my laptop and unfortunately, despite the installation process being supposed to be easy, I couldn't finish installing the system - it didn't detect my keyboard, or rather lost the keyboard as soon as I tried to toggle the touchpad on/off. After I discovered that, I started all over again (this time without touching the touchpad during installation) and yes, it eventually got to the end of the installation. Unfortunately, when I tried to toggle the touchpad (sometimes I just do not want to use a mouse) the whole system froze. So I had to restart it with the power button; this time I didn't touch the pad at all, plugged in a mouse, tried to rearrange the taskbars to my liking (all taskbars on the top edge of the screen with auto-hide on) and gave up. It is so unfinished that I just can't be bothered to use it.
    I would like to have one Linux system on my machine, so I started googling, and most of the links point either to Ubuntu (which I just do not want to touch for now), to SUSE, or to commercial versions of Linux. I do not really mind paying for something (and after my experience with Ubuntu I'd rather pay and have something professional than get it for free and discover it's unusable). So could someone please provide a short list of Linux distros that would be appropriate for a beginner? I don't mind paying; I just want a professional product.

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives on which I want to keep my data. Some of my files are more important and some of them are less, so some of them I wish to put on RAID-6, while for some RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare.
    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of, e.g., RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on that logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would sit completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that reflects the logical structure of the data rather than that of the storage.
    But now that I've heard about ZFS, with all its performance, data-healing and compression capabilities, I cannot stop wondering whether it can help me. If so, what do you think would be the best setup?
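    A minimal sketch of the md + LVM growth step described above, assuming hypothetical partition and volume-group names (sda7-sde7, vg_raid6, data - none of these names come from the post):

        # assemble one 100GB partition from each of the five drives into a new RAID-6 md device
        mdadm --create /dev/md2 --level=6 --raid-devices=5 /dev/sda7 /dev/sdb7 /dev/sdc7 /dev/sdd7 /dev/sde7
        # hand the new device to LVM and grow the RAID-6 volume group with it
        pvcreate /dev/md2
        vgextend vg_raid6 /dev/md2
        # grow an existing logical volume and the file system on it
        lvextend -L +200G /dev/vg_raid6/data
        resize2fs /dev/vg_raid6/data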

    Read the article

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users including root on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However, the comments at the top of that file say:

        It's NOT a good idea to change this file unless you know what you are doing.
        It's much better to create a custom.sh shell script in /etc/profile.d/ to make
        custom changes to your environment, as this will prevent the need for merging
        in future updates.

    So I went ahead and created the following file, /etc/profile.d/myapp.sh, with the single line:

        umask 002

    Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions...

        $ touch newfile (as root):                      Result = 664 (Works)
        $ sudo touch newfile:                           Result = 644 (Doesn't work)
        Files created by Apache wsgi app:               Result = 644 (Doesn't work)
        Files created by Python's RotatingFileHandler:  Result = 644 (Doesn't work)

    Why is this happening, and how can I ensure 664 file permissions system-wide, no matter what creates the file? UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
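    A hedged sketch of the per-directory ACL approach mentioned in the update, assuming a hypothetical shared directory /srv/app and a group named webdata (both names are illustrative, not from the post):

        # give the group rwx on the directory itself
        setfacl -m g:webdata:rwx /srv/app
        # default ACL: anything created inside inherits group rw (directories get rwx)
        setfacl -d -m g:webdata:rwx /srv/app
        # inspect the effective and default ACLs
        getfacl /srv/app

    Unlike umask, the default ACL is a property of the directory, so it applies regardless of which process (Apache, sudo, a Python logger) creates the file.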

    Read the article

  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... Just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to essentially have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely.
    Now for some details: we've got two connections here at work - our office connection for day-to-day business, and a fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our web sites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs. Recently-ish, we established a LAN connection to one of our servers, which makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via an SSH tunnel for VNC.
    However, in spite of that new LAN connection, we still synchronize over the 'net - out the office connection and in on the fancy one upstairs. I'm looking to change this. I'd like Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs. Am I asking too much?
    The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection if need be. The LAN connection is to 1 of 5 servers running CentOS 5.
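    A minimal sketch of the tunnel side of this, assuming a hypothetical LAN-reachable host "lan-server" and an internal web server "web-internal" (both names are placeholders), with an OpenSSH client available to the machine running Dreamweaver:

        # forward local port 2222 through the LAN server to the web server's SSH/SFTP port
        ssh -f -N -L 2222:web-internal:22 user@lan-server
        # any SFTP client, Dreamweaver included, would then be pointed at localhost, port 2222

    Whether Dreamweaver will happily treat localhost:2222 as the site's SFTP server is exactly the open question here; the tunnel itself is standard OpenSSH port forwarding.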

    Read the article

  • What may the reason for the slowness be (details in message body)?

    - by Ivan
    I've got a really weird situation I'm struggling to solve: a performance problem which really looks as if there were an empty busy-wait loop somewhere in the code (while it probably isn't so). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the functional services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server and one for the accounting/CRM server (Firebird + a proprietary app server working with Delphi-made clients).
    CPU load (as "top" says) is almost always near zero. Host system RAM is close to 100% usage but not overloaded (very little swapping gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused.
    But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works hellishly slowly against this server (and works much better against an inside-LAN Windows server). I just can't imagine what can make it slow when there is such an abundance of CPU, RAM, HDD and bandwidth resources available on the clients and on the server, even at their hardest moments. When I say the bandwidth is underused, I don't just mean that the clients and the server are connected to the Internet with bigger channels than are really used (which still leaves a chance of a bottleneck of some sort on the route between them); I've also tested the bandwidth between the clients and the server by copying files among them.

    Read the article

  • Why can't I browse my D: drive, even if I'm in the Administrators group?

    - by Nic Waller
    My fileserver running Windows Server 2008 has two logical drives; the C: drive contains all of the system and application data, and the D: drive contains all of the business data. There are several shares on the top level of the D: drive that are working fine. However...
    When logged into the fileserver interactively via Remote Desktop, only the Domain Administrator and local Administrator accounts can browse the D: drive. I set up an account called "Maintenance" and added it to the local Administrators group, but when logged in with this user, I can't browse into the D: drive. The D: drive has the following permissions ACL:

        Full Access - SYSTEM
        Full Access - MACHINE\Administrators

    It won't even let me view the ACL for the E: drive. So I tried taking ownership of the E: drive; then I can read the ACL, and "Effective Permissions" says that I have full access. But I still get this error message:

        Location is not available
        D:\ is not accessible.
        Access is denied.

    Here's a screenshot proving that I get access denied even when I have Full Access. http://www.getdropbox.com/gallery/2319942/1/errors?h=2bd644

    Read the article

  • AutoHotKey temporarily rebind Winkey

    - by wes
    I've got a wireless keyboard that puts some media keys on top of the Function keys, so that by default F4 is actually lock (Rwin & l) and Fn+F4 is a real F4. So I'd like to basically switch those around. Here's what the key history shows:

        VK  SC   Type Up/Dn Elapsed Key
        -------------------------------------
        73  03E        d    17.32   F4              ; Fn+F4
        73  03E        u     0.16   F4
        5C  15C        d     2.96   Right Windows   ; F4
        4C  026        d     0.00   L
        5C  15C        u     0.13   Right Windows
        4C  026        u     0.00   L

    This doesn't do anything:

        SC15C & SC026::MsgBox,Pressed F4

    But this prints that I hit F4 and then goes to the login screen:

        Rwin & l::MsgBox,Pressed F4

    So how can I stop it from switching to the login screen? Ideally I'd like F4 (which registers as Rwin & l) to just send F4, Fn+F4 to send Rwin & l, and also have them work with other keys (e.g., a manual !F4 should still close a window). Is this possible?

    Read the article

  • PC won't boot from IDE HDD when SATA data drive connected

    - by Kevin
    I have an old Pentium 4 system running XP. The machine is set up as an HTPC. It was set up and running well with 1 SATA drive as a boot drive, another SATA drive to store TV recordings, and an IDE drive to store more recordings. Last week the original boot drive (a SATA drive) failed. The BIOS would no longer recognize it. I had a disused IDE drive hanging around that was large enough for the OS, so I reformatted it and installed XP on it. Now the system will only boot if I do not connect the remaining healthy SATA data drive. All three drives are recognized by the BIOS, and I have set the boot order so that the IDE drive with XP on it has top priority, but after the BIOS recognizes the drives, etc. I just get a black screen. I know the SATA drive is functional, because if I hot plug the drive AFTER the system is booted (I know I'm not supposed to do this), I can go into the control panels and mount the drive, and see all the files and folders on it in Windows Explorer. Any suggestions on what is going on and how to fix it? Many thanks.

    Read the article

  • High availability virtual machines

    - by Jeremy
    I've been reading a lot about high availability virtualization, via either Hyper-V or VMware. In that context, high availability essentially means that the VM is hosted by a cluster of physical servers (nodes), so if one of the physical servers goes down, the VM can still be served by other physical servers. So far so good: the physical cluster and the VM itself are highly available.
    However, the services being provided, let's say SQL Server, MSDTC, or any other service, are actually provided by the VM image and the virtualized operating system. So I imagine that there is still a point of failure at the virtual layer that isn't accounted for. Something could happen within the virtual machine itself that the physical cluster cannot account for, correct? In that instance the physical failover cluster (Hyper-V) or VMware host cannot fail over, because the issue is not with one of the servers in the physical cluster - failing over a physical node would not do any good.
    Does this necessitate building a virtual failover cluster on top of the physical one, or is this not necessary? Alternatively, I suppose you could skip the physical clustering and just cluster at the virtual layer (child-based failover clustering), because that should still survive a physical failure. See the image below showing parent-based (left), child-based (right) and a combination (center). Is parent-based as far as you need to go, or is child-based more appropriate?

    Read the article

  • Magical moving desktop icons

    - by Nathan Taylor
    I have encountered a very strange behavior in Windows 7 that I cannot seem to identify and have never seen or heard of on any system configuration. Whenever I move my mouse to the left-most edge of my primary display (centered in a 3-display setup), my desktop icons magically move away from the cursor (up or down and to the right). It only happens when my desktop has focus and the mouse is positioned on the left, top or bottom edge of the main display. Moving the mouse all the way to the right edge of my right secondary display causes the desktop icons to snap back into their correct position. Ridiculous video of the issue.
    My setup is 3 displays on two display adapters. The main display is running at 2560x1600, connected to the machine via a USB-powered DVI-D to DisplayPort adapter and driven by an NVIDIA NVS 3100M video card. The secondary displays are running at 1440x900 and 1200x1920 and are driven by integrated Intel HD Graphics (mobile). It seems like some kind of panning behavior, but it's obviously not working as expected. I have updated all of my drivers, but no change. It's probably worth noting that the desktop icons are set to auto-arrange.

    Read the article

  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole-drive encryption from my Ubuntu installation. I've run Ubuntu from a Live CD, mounted the crypt partition and copied it to another partition, /dev/sda3:

        sudo cryptsetup luksOpen /dev/sda5 crypt1
        sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M

    After that I've run boot-repair: https://help.ubuntu.com/community/Boot-Repair

    Added an entry to /etc/fstab:

        UUID=<uuid> / ext4 errors=remount-ro 0 1

    Of course I've replaced <uuid> with the blkid result for my /dev/sda3. I've also deleted the overlayfs and tmpfs lines from /etc/fstab. (I've just compared it to the content of /etc/fstab in a non-encrypted Ubuntu installation and could not find overlayfs and tmpfs there.)

    I've chrooted from the Live CD into my system and rebuilt the initramfs: http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html

    I've also removed cryptsetup using apt-get remove. Basically I can easily mount my system partition from the Live CD (without setting up the encryption and LVM stuff), but I can not boot from it. Instead I see:

        cryptsetup: evms_activate is not available

    When I chose Recovery mode I saw this:

        Begin: Mounting root file system ...
        Begin: Running /script/local-top ...
        Reading all physical volumes. This may take a while ...
        No volume groups found
        cryptsetup: evms_activate is not available
        Begin: Waiting for encrytpted source device ...

    My /etc/crypttab is empty. I am pretty sure that the system still tries to find an encrypted partition, search for LVMs etc. Do you have any ideas what the problem could be, or how I can fix it? Thanks
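    A rough sketch of the Live CD chroot and initramfs rebuild mentioned above, assuming the plain root partition is /dev/sda3 as in the post (this is only the mechanical part of the procedure; it does not by itself guarantee the evms_activate message goes away):

        sudo mount /dev/sda3 /mnt
        for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
        sudo chroot /mnt
        # inside the chroot: rebuild the initramfs and GRUB config against the plain, unencrypted root
        update-initramfs -u -k all
        update-grub
        exit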

    Read the article

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with django on a micro EC2 instance with 613MB memory, and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat" as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()
        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID   USER   NI  CPU%  VIRT  SHR   RES  MEM%  Command
        1065  wuser  10  0.0   283M  4548  85m  14.3  python manage_prod.py celeryd --beat
        1025  wuser  10  1.0   577M  6368  67m  11.2  python manage_prod.py celeryd --beat
        1071  wuser  10  0.0   578M  2384  62m  10.6  python manage_prod.py celeryd --beat

    That's about 214MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;)

    Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that. So I think it isn't a matter of duplicate startup.

        root   34580  1556   sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser  577M   67548  +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  578M   63784   +- python manage_prod.py celeryd --beat --concurrency=1
        wuser  271M   76260   +- python manage_prod.py celeryd --beat --concurrency=1

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID-6, connected via external SAS. We use XFS with LVM over that to create 100TB of usable storage.
    All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab).
    So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with glusterfs running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
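    A minimal sketch of the "different pools with different operating characteristics" idea on ZFS, using hypothetical disk and pool names (all identifiers here are illustrative, not from the post):

        # capacity-oriented pool: one wide RAID-Z2 vdev
        zpool create tank_archive raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
        # performance-oriented pool: striped mirrors
        zpool create tank_scratch mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
        # per-dataset tuning, e.g. compression for mostly-text genomics data
        zfs create -o compression=on tank_archive/sequencing

    Datasets inherit the redundancy and performance characteristics of the pool they live in, so the trade-off is made per pool (and per dataset for things like compression) rather than per directory.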

    Read the article

  • Apache using 100% CPU, once again

    - by CBenni
    Recently, apache2 started using 100% of CPU power, as top shows. From other, similar threads, I took the tip to use mod_status. Aside from HUGE amounts of NULL requests, it gives:

        CPU Usage: u2.16 s1.32 cu0 cs0 - .0835% CPU load
        1.2 requests/sec - 17.6 kB/second - 14.6 kB/request
        8 requests currently being processed, 42 idle workers

    The access and error logs do not show anything surprising or intriguing at all. Note the .8% CPU usage. Another tip was to use strace:

        root@server:~# strace -p 1956
        Process 1956 attached - interrupt to quit
        restart_syscall(<... resuming interrupted call ...>

    And it remains like this for at least half an hour, without producing any additional output. Restarting apache fixed the problem for less than a second. The server runs a few custom Python scripts as well as a django-powered website on apache2 (up to date), but even turning the scripts off (or not having them active in the first place) did not change anything.
    After I stopped apache and powered my server off, powered it on a few minutes afterwards and restarted all my services, the CPU usage remained low for several hours, just to pop up again randomly (?). The DigitalOcean CPU stats on my server show how the CPU usage was super high for almost half a day until I restarted the bot - just to remain stable for several hours and then pop up again.
    I am completely at a loss and don't know what I could do to find out what piece of my code is giving me these problems, or whether apache itself is the cause... Therefore I would greatly appreciate any hints on these questions: What else can I try? Which things might I not have checked? Is this definitely in my own code? How do you find what part of Python code hangs an app via an infinite loop or similar?

    Read the article

  • Terminal is not letting me enter commands unless I hit Enter a bunch of times

    - by ninja08
    Whenever I open Terminal it normally allows me to immediately begin typing commands. Only earlier today I did the setup for GitHub here: https://help.github.com/articles/set-up-git
    And then all of a sudden the prompt won't let me give it commands unless I hit Enter a few times. This is what it looks like:

        Last login: Fri Nov 9 11:43:28 on ttys001
        mysql.save: Permission denied
        mysql.save: Permission denied
        /Users/Nick/.zshrc:32: command not found:  .

        ~ git: ?
        ~ git: ?
        ~ git: ?

    See the big space? That's because it simply will never show the "~ git:" prompt unless I hit Enter 3-4 times. Also, it never used to say "~ git:" before I did the git setup. I'm not sure what I changed.
    I've checked the .zshrc file and commented everything out to find the line causing the problem. I've done that, and it turns out it was the line source $ZSH/oh-my-zsh.sh. Within the oh-my-zsh.sh file I've commented out each block of code, starting at the top, and I've found that this block is causing it:

        # Load the theme
        if [ "$ZSH_THEME" = "random" ]
        then
          themes=($ZSH/themes/*zsh-theme)
          N=${#themes[@]}
          ((N=(RANDOM%N)+1))
          RANDOM_THEME=${themes[$N]}
          source "$RANDOM_THEME"
          echo "[oh-my-zsh] Random theme '$RANDOM_THEME' loaded..."
        else
          if [ ! "$ZSH_THEME" = "" ]
          then
            if [ -f "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme" ]
            then
              source "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme"
            else
              source "$ZSH/themes/$ZSH_THEME.zsh-theme"
            fi
          fi
        fi

    Read the article

  • How can I create a separate toolbar from the Task Bar?

    - by Iszi
    In Windows XP, you could separate toolbars from the Task Bar by dragging them to the desktop. They could then be left lying about anywhere on your screen or, my preferred option, docked to any side of the screen. I found this particularly useful to keep a handy list of common phone numbers quickly accessible. I'd create a new toolbar pointing to a custom folder, and put a bunch of dead shortcuts in the folder that had names and numbers as their file names. I'd then dock the toolbar to the left side, set it to auto-hide and always on top (options which could be set separately from the Task Bar's own settings), and it would be readily available no matter what else I was doing on my system.
    However, on my Windows 7 system, I seem unable to perform the crucial step of pulling the new toolbar off of the Task Bar. This is of course with the Task Bar "unlocked" so that I can move all my toolbars around. Is there something I'm missing here, or is this a feature that's been disabled in Windows 7? Is there any way to re-enable it, or otherwise achieve similar functionality? I'd rather be able to do this without additional software, if possible.

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch.
    If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens? If the local branch is master and the remote branch is DeveloperA and I pull, what happens? If I am on the master branch, right-click, select Merge and change the From to be my DeveloperA branch, what happens?
    If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers - is that correct?
    We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the Out Commit list than he's edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified. Yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and push, those changes get clobbered.
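    A hedged sketch, on the plain git command line, of the two flows being compared (the branch name DeveloperA is from the post; the remote name origin is an assumption):

        # flow 1: update master locally, merge the feature branch into it, then push master
        git checkout master
        git pull origin master
        git merge DeveloperA
        git push origin master

        # flow 2: push the local feature branch directly onto the remote master ref
        git push origin DeveloperA:master

    In flow 2, a plain push is rejected as non-fast-forward when the remote master has commits that DeveloperA does not contain; it only "clobbers" if the push is forced or if DeveloperA already includes (a rewritten version of) those commits.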

    Read the article

  • Can nginx be a mail proxy for a backend server that does not accept cleartext logins?

    - by 84104
    Can nginx be a mail proxy for a backend server that does not accept cleartext logins? Preferably I'd like to know what directive to include so that it will invoke STARTTLS/STLS, but communication via IMAPS or POP3S is sufficient. The relevant(?) section of nginx.conf:

        mail {
            auth_http localhost:80/mailproxy/auth.php;
            proxy on;

            ssl_prefer_server_ciphers on;
            ssl_protocols TLSv1 SSLv3;
            ssl_ciphers HIGH:!ADH:!MD5:@STRENGTH;
            ssl_session_cache shared:TLSSL:16m;
            ssl_session_timeout 10m;
            ssl_certificate /etc/ssl/private/hostname.crt;
            ssl_certificate_key /etc/ssl/private/hostname.key;

            imap_capabilities "IMAP4rev1" "UIDPLUS";
            server {
                protocol imap;
                listen 143;
                starttls on;
            }
            server {
                protocol imap;
                listen 993;
                ssl on;
            }

            pop3_capabilities "TOP" "USER";
            server {
                protocol pop3;
                listen 110;
                starttls on;
                pop3_auth plain;
            }
            server {
                protocol pop3;
                listen 995;
                ssl on;
                pop3_auth plain;
            }
        }

    Read the article

  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine and I have to migrate to a new server in a different country. The old server is on 11.10, the new server on 12.04 LTS.
    My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install. It is Apache with FastCGI, suEXEC, the GD library and the worker MPM, all sitting on top of an mhddfs file system. There are also other configs I've changed and I cannot recall what they are. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permission errors, CGI problems etc.
    Therefore my question is: is there a sane way to simply tar a full backup of the current web server installation, including MySQL, PHP and the Apache server with all configs, and then move it to the new machine? I shall be forever thankful for any advice. So far none of those I found here gave me an answer. Thanks!
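    A rough sketch of the "tar it all and move it" idea, assuming stock Debian/Ubuntu paths and package names (the names are from memory and may differ; an 11.10 to 12.04 move generally still needs the packages reinstalled first, since binaries and module versions differ between releases):

        # on the old server: archive configs and web roots, dump the databases
        sudo tar czpf /tmp/webstack-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql /var/www
        mysqldump -u root -p --all-databases > /tmp/all-databases.sql
        scp /tmp/webstack-backup.tar.gz /tmp/all-databases.sql user@newserver:/tmp/

        # on the new server: install matching packages first, then restore configs and data
        sudo apt-get install apache2 apache2-mpm-worker apache2-suexec libapache2-mod-fastcgi php5-cgi php5-gd mysql-server mhddfs
        sudo tar xzpf /tmp/webstack-backup.tar.gz -C /
        mysql -u root -p < /tmp/all-databases.sql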

    Read the article

  • All computers on the network get stuck waiting for some sites indefinitely

    - by zacaj
    This happens across three computers running Windows 7 and Ubuntu, in Firefox, Opera, and Chrome (all latest versions). I am connected to the internet through a Verizon wireless USB modem. When I try to open some web pages they will never finish loading (and usually never even show anything). The status bar at the bottom of the browser displays "Waiting for X". The servers it gets stuck on include:

        platform.twitter.com
        s7.addthis.com
        connect.facebook.net
        ajax.googleapis.com
        2mdn.net

    I've been getting away with just blocking them in AdBlock up until now, however the last two have been causing problems. There are some sites which require googleapis.com to load correctly, and some that won't ever load unless it's blocked. eBay requires access to 2mdn.net to load pictures. On top of this it's getting really annoying having to update AdBlock across all these computers whenever a new site pops up. I'm hoping there's some easier way to fix this? The different sites causing the freeze indicate to me that it's either a problem on my end (somehow?) or some server-side software that got updated with a new bug?
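    A hedged sketch of one way to centralize the blocking described above, by pointing the hostnames at an unroutable address in each machine's hosts file (on Ubuntu this is /etc/hosts; on Windows 7 the equivalent file is C:\Windows\System32\drivers\etc\hosts). This only centralizes the workaround; it does not explain the underlying stalls:

        # /etc/hosts
        0.0.0.0 platform.twitter.com
        0.0.0.0 s7.addthis.com
        0.0.0.0 connect.facebook.net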

    Read the article

  • Thunderbird: export email account settings

    - by zpea
    I'd like to create a new profile for Thunderbird using the same mail accounts I already configured in my old profile. As it is quite a number of accounts, it would be great to have a way to export/import them instead of writing down the settings just to fill them in again in the new profile. Using web search and search here, I mainly found the following suggestions, none of which match what I need:
    Copy the whole profile: Not possible for me, as I don't want to copy other settings, the downloaded mail data etc., and the old profile broke when it ran out of space in the home folder anyway.
    Use mozBackup: There seem to be several programs by that name (forks?). In any case, it's Windows-only and hence no option (I am mostly on Linux and prefer platform-independent solutions anyway).
    Use accountex: Seems to do what I want, but it is not compatible with the current Thunderbird version (it supports only up to version 3.1).
    Posts with various tips from 4 years ago: Top results in the web search with the G. But they do not work in current versions of Thunderbird either.
    Did I overlook anything? After all, it doesn't sound like I'm looking for something nobody ever looked for.

    Read the article

  • How can I make my Super keys (Windows Key) behave more like Ctrl/Alt/Shift in Linux

    - by deltaray
    After using Ctrl + "arrow keys" for 13 years to switch virtual desktops in X Windows, I've recently been convinced to switch to using the Super keys instead (the Windows key and the context menu key, which I've remapped). This all works fine for the most part. However, something is still picking up the key events that these keys send as if they were normal alphanumeric keys.
    For example, I first noticed this in a Google Docs spreadsheet: if I press the Windows key alone over a cell, it starts editing that cell. It doesn't insert anything, it just sends a key event that Firefox sees and starts editing the cell. This caused problems on a collaborative document I was working on, as with the way Google Docs works, it led to me accidentally erasing the data in a few fields before I realised what was going on.
    I like using the Super keys, but I want them to behave more like a Ctrl or Alt key does, in that they are modifier keys and don't send anything until a second key is pressed. My setup is the following:

        Ubuntu 10.10
        XFCE 4
        Microsoft Natural Ergo 4000 keyboard (with the logo scratched out)

    The following is my .Xmodmap file:

        remove Lock = Caps_Lock
        keycode 66 = Escape
        ! The below maps my other windows context menu key.
        keycode 135 = Super_R

    Edit: As requested, here is the relevant output from xev for a keypress and keyrelease of my Super_L (left Windows key):

        KeyPress event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849342, (177,174), root:(182,228),
            state 0x10, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 34, synthetic NO, window 0x8200001,
            root 0x15d, subw 0x0, time 2428849430, (177,174), root:(182,228),
            state 0x50, keycode 133 (keysym 0xffeb, Super_L), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False
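    A hedged sketch of checking whether Super is actually assigned to a modifier, which is usually what makes X treat it as a pure modifier key rather than a normal key (whether this alone stops Google Docs in Firefox from reacting to the bare press is not guaranteed):

        # show the current modifier map; Super_L/Super_R normally sit on mod4
        xmodmap -pm
        # if they are missing from the modifier map, put them back on mod4
        xmodmap -e "add mod4 = Super_L Super_R"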

    Read the article

  • Safe use of Update-FormatData?

    - by Steve B
    In a custom PowerShell module, I have this code at the top of my module definition:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    This works fine, as all .ps1xml files are loaded. However, the module is sometimes loaded using Import-Module MyModule -Force (actually, this is in the install script of the module). In this case, the call to Update-FormatData fails with this error:

        Update-FormatData : There were errors in loading the format data file:
        Microsoft.PowerShell, c:\pathto\myfile.Types.ext.ps1xml :
        File skipped because it was already present from "Microsoft.PowerShell".
        At line:1 char:18
        + Update-FormatData <<<< -AppendPath "c:\pathto\myfile.Types.ext.ps1xml"
            + CategoryInfo          : InvalidOperation: (:) [Update-FormatData], RuntimeException
            + FullyQualifiedErrorId : FormatXmlUpateException,Microsoft.PowerShell.Commands.UpdateFormatDataCommand

    Is there a way to safely call this command? I know I can call Update-FormatData with no parameters, and it will update any known .ps1xml file, but this would work only if the file has already been loaded. Can I list somewhere the loaded format data files?

    Here is a bit of background: I'm building a custom module that is installed using a script. The install script looks like this:

        [CmdletBinding(SupportsShouldProcess=$true,ConfirmImpact="High")]
        param()
        process {
            $target = Join-Path $PSHOME "Modules\MyModule"
            if ($pscmdlet.ShouldProcess("$target","Deploying MyModule module")) {
                if(!(Test-Path $target)) {
                    new-Item -ItemType Directory -Path $target | Out-Null
                }
                get-ChildItem -Path (Split-Path ((Get-Variable MyInvocation -Scope 0).Value).MyCommand.Path) |
                    copy-Item -Destination $target -Force
                Write-Host -ForegroundColor White @"
        The module has been installed. You can import it using :
            Import-Module MyModule
        Or you can add it in your profile ($profile)
        "@
                Write-Warning "To refresh any open PowerShell session, you should run ""Import-Module MyModule -Force"" to reload the module"
                Import-Module MyModule -Force
                Write-Warning "This session has been refreshed."
            }
        }

    MyModule defines, as its first statement, this line:

        Update-FormatData -AppendPath (Join-Path $psscriptroot "*.ps1xml")

    As I updated my $profile to always load this module, Update-FormatData has already been called by the time I run the install script. The install script then force-imports the module, which fires the module code again, and with it the Update-FormatData call.

    Read the article

  • Which default Database Systems come installed in Microsoft VS2010 Express?

    - by Tonygts
    I'd appreciate all advice on the following questions:
    1. Which database systems (MS SQL 2008, MS SQL Compact, or others) come installed with the VS2010 Express edition?
    2. SQL Server 2008 R2 Express is free; can we install it and integrate it with VS2010 Express?
    3. How do I uninstall those databases that already come installed?
    I have installed VS2010 Express on Windows 7; just the VS2010 components (VB, C#, C++ and Web Developer), without installing any other things like SQL Express. In the Control Panel's "Programs & Features" window, the installed list is shown below:

        Microsoft SQL Server 2008 Setup Support Files
        Microsoft SQL Server 2008 Browser
        Microsoft SQL Server VSS Writer
        Microsoft SQL Server Database Publishing Wizard 1.4
        Microsoft ASP.NET MVC2 - VWD Express 2010 Tools
        Microsoft SQL Server 2008 Management Objects
        Microsoft SQL Server Compact 3.5 SP2 ENU
        Microsoft SQL Server System CLR Types
        Microsoft Silverlight 3 SDK
        Microsoft ASP.NET MVC 2
        Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools
        Visual Studio 2010 Tools for SQL Server Compact 3.5 SP2 ENU
        Web Deployment Tool
        Microsoft Visual Web Developer 2010 Express - ENU
        Microsoft Visual C++ 2010 Express - ENU
        Microsoft Visual C# 2010 Express - ENU
        Microsoft Visual Basic 2010 Express - ENU
        Microsoft SQL Server 2008

    As you can see, Microsoft SQL Server 2008 (last line) and, near the top, Microsoft SQL Server Compact 3.5 SP2 ENU and many related SQL components such as Microsoft SQL Server 2008 R2 Management Objects are also installed. These are actually installed by installing VS2010 Express, but I have no idea how to use them or verify their valid existence from VS2010. Also, do I have to uninstall them before I install SQL Server 2008 R2, which I believe is the latest version? And what tool is needed to manage and create data sources and tables?

    Read the article

  • Error at the end of APC install

    - by cinqoTimo
    I need to get APC running for a Drupal install of mine. I found a fairly concise guide at http://blog.4rev.net/2009-09/installing-apc-accelerator-into-php5-fedora-core-11/ for installing on FC11, but I am using FC12. I figured I would give it a shot. I was able to run the following commands successfully - and yum installed FC12 versions of everything in the FC11 guide:

        yum install php-pear
        yum install php-devel httpd-devel
        yum groupinstall 'Development Tools'
        yum groupinstall 'Development Libraries'

    Then I tried pecl install apc. Everything looked good until it got to the end, where it output the following error:

        /var/tmp/APC/php_apc.c: In function 'zif_apc_compile_file':
        /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_class_table'
        /var/tmp/APC/php_apc.c:881: warning: unused variable 'eg_function_table'
        /var/tmp/APC/php_apc.c: At top level:
        /var/tmp/APC/php_apc.c:959: error: duplicate 'static'
        make: *** [php_apc.lo] Error 1
        ERROR: `make' failed

    Some people have had success with installing apc-beta, but that didn't work for me. Any suggestions? Is there something I missed that is critical in the FC12 version?
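    For reference, a hedged sketch of the apc-beta route mentioned above (whether the beta available from PECL at the time actually builds against FC12's PHP is an open question; the paths follow the usual Fedora layout):

        pecl install apc-beta
        # enable the extension for PHP and restart Apache
        echo "extension=apc.so" > /etc/php.d/apc.ini
        service httpd restart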

    Read the article
