Search Results



  • MacBook Pro battery capacity 65K mAh

    - by Alexander Gladysh
    I have a 15" MacBook Pro 3.1 (that is Late 2007 model AFAIR). I've bought it new a couple of years ago. Recently its on-battery power lifespan became very short (30 to 10 minutes). When my notebook turns itself off due to "low battery" and I press the small button on the battery itself, all LED lights are alight, indicating full charge. When I plug in the power adapter, my Mac displays that "battery is fully charged, finishing charging process" (I have a Russian OS X 10.5.7, so that is a rough translation), but the LEDs on battery itself display (seemingly accurate) status that there are one or two "LEDs still not charged". My battery have as few as 37 recharge cycles (yes, I've neglected calibration over the time I've used it). Battery info programs like iBatt2 report battery capacity of 65 337 mAh (with by-design capacity of 5600 mAh). I get it that something went wrong with battery electronics. I've tried resetting my Mac's PRAM and SMC, it did not changed anything. Now I'm trying to recalibrate the battery, but looks like it does not help as well. Will try to recalibrate it several times in a row. I'd buy a new battery if I knew if it is battery fault, not a notebook's. Any suggestions? Update: After recalibration, my battery status now displays battery capacity of 1500 mAh. But with every recalibration (or simply when I use notebook without power adapter plugged in) this number changes in the range from 200 mAh to 1700 mAh. LEDs on battery now are synchronous with what nodebook thinks on the charge level. Also I've noticed that cycle count changes rather slowly. It is now 39, it was 37 when I've started recalibration, and I went through the process at least ten times... So, the main question is: does it look like that replacing the battery would help me (or does it look like this is notebook's problem)? I guess I should try replacing the battery.


  • Configure APE Server on an Ubuntu 10.10 web server

    - by sadmicrowave
    I'm having problems configuring my APE server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server, and my IT group provided uslonsweb003.us.mycompany.com. My website therefore works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php. I followed the instructions at ape-project.org and ran the Check Tool at the end, only to get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server :
        http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf virtual host looks as follows:

        <VirtualHost *:80>
            ServerName uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be 'ape.mydomain.com:6969', but that does not work (I'm assuming because the corporate DNS does not resolve the 'ape' subdomain, since 'ape' was never registered with the IT DNS). I therefore changed it to what you see above. Please help!! Thanks in advance.

    Update 1: Per the installation instructions at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration under 'Configure your Server/Computer' (I'm running it on a server, obviously), I need to add some lines to my DNS configuration. Since I'm inside a corporate network, it sounds like I should ask my IT group to add the following lines to the DNS zone file on their end:

        ape    IN  A      x.x.x.x   ; IP address of my APE server
        *.ape  IN  CNAME  ape

    I just want to make sure this is all I need to have them add (and that it is even correct) before I ask them.
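
    Once the IT group adds the records, the wildcard can be verified from an intranet client before re-running the Check Tool. A quick sketch using dig (hostnames from the question; any numeric label should resolve, since APE's JS client prepends rotating numeric subdomains, which is why the wildcard is needed):

        dig +short ape.uslonsweb003.us.mycompany.com
        dig +short 0.ape.uslonsweb003.us.mycompany.com
        dig +short 42.ape.uslonsweb003.us.mycompany.com
        # all three should return the APE server's IP address

    APE.Config.server would then be set back to 'ape.uslonsweb003.us.mycompany.com:6969', so the Check Tool's http://0.<domain>:6969 probe has a wildcard record to land on.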


  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup? My problem: I boot my server and get:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!

    This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. To fix this and boot my server I do:

        (initramfs) mdadm --stop /dev/md1
        (initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        (initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
        (initramfs) exit

    It then boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to do it manually. I've checked /etc/mdadm/mdadm.conf, and the UUIDs of the two arrays listed in that file match the UUIDs from mdadm --detail /dev/md[0,1]. Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1.

    Update: I have a feeling it has to do with superblocks. mdadm --examine /dev/sda outputs the same thing as mdadm --examine /dev/sda2, while mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it would fit with /dev/md1 getting assembled from /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]; this removed the superblock from /dev/sd[abcd]2 as well and prevented me from assembling /dev/md1 at all. I had to run mdadm --create to get it back, which also put the superblocks back the way they were.
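
    For what it's worth, Ubuntu's initramfs carries its own baked-in copy of /etc/mdadm/mdadm.conf, so edits to the file on the root filesystem do not take effect at boot until the image is rebuilt. A sketch of the usual sequence, plus a DEVICE filter that would stop the whole-disk devices from being considered during assembly (the ARRAY lines are placeholders; take the real ones from mdadm --examine --scan). This matches the symptom above if the metadata is the 0.90 format stored at the end of the device, where a superblock at the end of /dev/sda2 is also visible as one at the end of /dev/sda:

        # /etc/mdadm/mdadm.conf
        # Restrict scanning to partitions, so /dev/sd[abcd] (whose end-of-disk
        # view aliases the superblocks at the end of sd[abcd]2) are never used:
        DEVICE /dev/sd*[0-9]
        ARRAY /dev/md0 UUID=<md0-uuid>
        ARRAY /dev/md1 UUID=<md1-uuid>

        # Then rebuild the boot image so the initramfs copy matches:
        update-initramfs -u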


  • Windows 8 unable to connect to WPA2 AES Wireless Network

    - by user170193
    I'm running Windows 8 and am unable to connect to my home wireless network. I've tried restarting the router, patching the wireless drivers up to the newest version, rolling them back to the previous version, running Windows Update, and updating the chipset drivers to the latest version. So far nothing has worked. My computer can get on the internet via USB tethering on my phone or over an open WiFi connection, but it cannot connect to my home WPA2 AES secured wireless network. It sees the network, attempts to connect, gets a limited connection, and then drops it. All the other wireless devices in my household have no problems.

    I have the new Dell XPS 12, running Windows 8 with an Intel Centrino Advanced-N 6235 wireless adapter. I've refreshed Windows twice now to try different driver configurations. I've tried uninstalling all the Dell software; I've tried uninstalling all the Intel software and reinstalling just the drivers. I've tried toggling the power-management option that allows Windows to turn off the wireless adapter. I've tried setting up the connection manually from desktop mode. I've tried switching the adapter on and off, both with the wireless button on the keyboard and in the software. So far nothing has allowed me to connect to the secured network; it just keeps getting a limited connection, dropping it, and retrying. It's driving me crazy. Any ideas, anything I missed? Thanks.
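
    One more thing worth trying from an elevated command prompt is deleting the cached wireless profile, so Windows renegotiates the security settings from scratch, and confirming the driver actually advertises WPA2-Personal with CCMP (the SSID below is a placeholder):

        netsh wlan show drivers
        rem look for "WPA2-Personal" with "CCMP" under the supported
        rem authentication and cipher list for infrastructure mode
        netsh wlan delete profile name="HomeSSID"
        rem then reconnect and re-enter the passphrase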


  • Installing Python's rpm module for yum

    - by vito
    I've installed Python and yum from source (configure, make, make install), not from RPMs, because the RPM route was leading to several other issues. So when I execute:

        # yum update

    I get the following error:

        Traceback (most recent call last):
          File "/usr/bin/yum", line 22, in <module>
            import yummain
          File "/usr/share/yum/yummain.py", line 22, in <module>
            import clientStuff
          File "/usr/share/yum/clientStuff.py", line 18, in <module>
            import rpm
        ImportError: No module named rpm

    Since I've installed yum and Python from source, do I need to install Python's rpm module from source too? Installing the RPM for this module leads to the following error:

        # rpm -vih rpm-python-3.0.4-6x.i386.rpm
        warning: rpm-python-3.0.4-6x.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
        error: Failed dependencies:
            python >= 1.5.2 is needed by rpm-python-3.0.4-6x.i386
            libbz2.so.0 is needed by rpm-python-3.0.4-6x.i386
            librpm.so.0 is needed by rpm-python-3.0.4-6x.i386
        Suggested resolutions:
            /var/spool/up2date/python-2.3.4-14.7.el4.x86_64.rpm

    I tried searching for the source of this module but couldn't find it. Any help installing it is appreciated. Thanks for your time. Other info:

        # python -V
        Python 2.6.5
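
    The rpm bindings are not distributed separately; they are built from the rpm source tree itself (as the rpm-python subpackage), and they must be compiled against the same Python that yum runs under. A diagnostic sketch for finding out which interpreter yum uses and whether any rpm bindings are visible to it (paths are illustrative):

        head -1 /usr/bin/yum                      # which python does yum invoke?
        python -c "import sys; print sys.path"    # where that python searches (2.x syntax)
        find / -type d -name rpm -path "*packages*" 2>/dev/null

    If the bindings exist under a different interpreter's site-packages, pointing PYTHONPATH at them can confirm the diagnosis; the durable fix is to rebuild rpm from source with its Python bindings enabled against the self-built Python 2.6.5 (the exact configure flag varies by rpm version).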


  • Windows 7 search doesn’t find text strings

    - by Hugh Tash
    Windows 7 search cannot find a text string unless it starts at the beginning of a word, whether in a filename or in file content. My Windows 7 search configuration is the default, with indexing of file contents enabled.

    Say I'm searching for documents containing the word "content". I can find those documents when searching for "content", "conte", or "con", as long as the search string includes the beginning of the word. But if I search for "ontent", "tent", or any other fragment that doesn't include the beginning of the word, Windows search won't find it.

    I've tried other indexing search software such as Copernic Desktop Search and Google Desktop Search. Those programs also can't find a fragment that starts in the middle of a word: they find "conte" but not "onte". On the other hand, when I use non-indexing content search tools such as Agent Ransack or FileSeek, I get the same results whether I search for "conte" or "onte".

    Why do all pre-indexing content search applications (Windows search, Google Desktop, Copernic Desktop Search) fail to find a string inside a word, while non-indexing applications find text strings wherever they appear: beginning, middle, or end of the word? I've tried wildcards and other constructions with no luck:

        *onte
        onte
        "onte"
        content:onte
        content:~onte

    None of these searches finds the word "content". How can I make Windows search match strings anywhere within words? Could you try these searches and see if they work for you, or is this normal behavior? Thank you.

    Update: Using wildcards before or after "onte" doesn't return any results, and content:~=onte doesn't find anything either.
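
    This is expected with word-based indexers: the index stores whole tokens, so prefix lookups are cheap, but matching an infix would require scanning every token, which indexers decline to do. For the occasional infix hunt, a non-indexed command-line search sidesteps the issue entirely. A sketch using the built-in findstr, which only works on plain-text formats; path and pattern are examples:

        findstr /s /i /m "onte" "C:\Users\Me\Documents\*.txt"
        rem /s = recurse subdirectories, /i = case-insensitive,
        rem /m = print matching file names only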


  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site colocated backup server. One of the issues we currently have is that our file server holds 1 TB of mostly smaller files (in the 10-100 KB range), and when we're transferring this much data we often end up with the connection being dropped several hours into the transfer. rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off; you need to go through the full file-comparison process, which ends up being very lengthy with the number of files we have. The recommended workaround is to split one large rsync transfer into a series of smaller ones. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give a perfectly even distribution but is good enough. I'd like to confirm whether my methodology is sane, or whether there's a simpler way to accomplish the goal.

    To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example; we have a lengthier exclude list for removing things like temporary files.) The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around that, I'm instead trying the following (with double quotes around the first two filters so $prefix actually expands):

        rsync \
            --filter "S /$prefix*" \
            --filter "R /$prefix*" \
            --filter 'H /*' \
            --filter 'P /*' \
            -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded would delete anything that doesn't match $prefix. Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag I've missed, that might make this simpler?
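
    For the outer loop, a retry wrapper around each prefix means a dropped connection only costs one chunk rather than the whole run. A sketch under the question's assumptions (same filters and excludes; the retry limit and sleep interval are arbitrary choices):

        #!/bin/bash
        for prefix in {A..Z} {a..z} {0..9}; do
            tries=0
            until rsync \
                    --filter "S /$prefix*" \
                    --filter "R /$prefix*" \
                    --filter 'H /*' \
                    --filter 'P /*' \
                    -av --delete --delete-excluded --exclude "*.mp3" \
                    src/ dest/; do
                tries=$((tries + 1))
                if [ "$tries" -ge 5 ]; then
                    echo "giving up on prefix $prefix" >&2
                    break
                fi
                sleep 60    # give the link time to come back before retrying
            done
        done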


  • Failed to connect from slave to master with error "error connecting to master (1045)"

    - by Victor Lin
    I'm trying to set up replication from a slave to the master:

        CHANGE MASTER TO
            MASTER_HOST = 'master',
            MASTER_PORT = 3306,
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'xxx';

    I did grant privileges to the user on the master, and I can connect with the mysql command from the slave machine to the master:

        mysql -h master -u repl -p

        mysql> show grants;
        GRANT RELOAD, SUPER, REPLICATION SLAVE, CREATE USER ON *.* TO 'repl'@'xxx' IDENTIFIED BY PASSWORD 'xxx'

        mysql> select 1;
        +---+
        | 1 |
        +---+
        | 1 |
        +---+
        1 row in set (0.04 sec)

    As you can see, the privileges are correct and the connection works fine; however, the replication connection to the master always fails:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Connecting to master
        Master_Host: master
        Master_User: repl
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File:
        Read_Master_Log_Pos: 4
        Relay_Log_File: slave-replay-bin.000002
        Relay_Log_Pos: 4
        Relay_Master_Log_File:
        Slave_IO_Running: Connecting
        Slave_SQL_Running: Yes
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 0
        Last_Error:
        Skip_Counter: 0
        Exec_Master_Log_Pos: 0
        Relay_Log_Space: 107
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        Master_SSL_Verify_Server_Cert: No
        Last_IO_Errno: 1045
        Last_IO_Error: error connecting to master 'repl@master:3306' - retry-time: 60  retries: 86400
        Last_SQL_Errno: 0
        Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
        Master_Server_Id: 0
        1 row in set (0.00 sec)

    Is this caused by the different MySQL server versions? The master is 5.0.77 and the slave is 5.5.13, but all the articles I can find say it's fine to replicate to a newer slave from an older master. How do I solve this problem?

    Update: I even tried upgrading the old MySQL, but the problem remains:

        mysql> show slave status\G
        *************************** 1. row ***************************
        Slave_IO_State: Connecting to master
        Master_Host: master
        Master_User: repl
        Master_Port: 3306
        Connect_Retry: 60
        Master_Log_File: master-bin.000007
        Read_Master_Log_Pos: 107
        Relay_Log_File: slave-replay-bin.000001
        Relay_Log_Pos: 4
        Relay_Master_Log_File: master-bin.000007
        Slave_IO_Running: Connecting
        Slave_SQL_Running: Yes
        Replicate_Do_DB:
        Replicate_Ignore_DB:
        Replicate_Do_Table:
        Replicate_Ignore_Table:
        Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
        Last_Errno: 0
        Last_Error:
        Skip_Counter: 0
        Exec_Master_Log_Pos: 107
        Relay_Log_Space: 107
        Until_Condition: None
        Until_Log_File:
        Until_Log_Pos: 0
        Master_SSL_Allowed: No
        Master_SSL_CA_File:
        Master_SSL_CA_Path:
        Master_SSL_Cert:
        Master_SSL_Cipher:
        Master_SSL_Key:
        Seconds_Behind_Master: NULL
        Master_SSL_Verify_Server_Cert: No
        Last_IO_Errno: 1045
        Last_IO_Error: error connecting to master 'repl@master' - retry-time: 60  retries: 86400
        Last_SQL_Errno: 0
        Last_SQL_Error:
        Replicate_Ignore_Server_Ids:
        Master_Server_Id: 0
        1 row in set (0.00 sec)
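
    Since Last_IO_Errno 1045 is an access-denied error raised when the I/O thread authenticates, the two classic culprits are a grant that doesn't cover the host the slave actually connects from (the grant above is scoped to 'repl'@'xxx') and, when a 5.5 client talks to a 5.0 master, a password still stored in the pre-4.1 16-byte format. A sketch of both checks, run on the master with a placeholder slave hostname:

        -- Scope the replication user to the slave's address
        -- (or use '%' temporarily to rule host matching out):
        GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com' IDENTIFIED BY 'xxx';
        FLUSH PRIVILEGES;

        -- Old-format password hashes are 16 hex characters (new ones are 41);
        -- if the hash is old, SET PASSWORD again with old_passwords=0:
        SHOW VARIABLES LIKE 'old_passwords';
        SELECT Host, User, LENGTH(Password) FROM mysql.user WHERE User = 'repl';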


  • DNAT to 127.0.0.1 with iptables / Destination access control for transparent SOCKS proxy

    - by cdauth
    I have a server on my local network that acts as a router for the computers in my network. I now want outgoing TCP requests to certain IP addresses to be tunnelled through an SSH connection, without giving the people on my network the ability to use that SSH tunnel to connect to arbitrary hosts.

    The approach I had in mind was to have an instance of redsocks listening on localhost and to redirect all outgoing requests for the relevant IP addresses to that redsocks instance. I added the following iptables rule:

        iptables -t nat -A PREROUTING -p tcp -d 1.2.3.4 -j DNAT --to-destination 127.0.0.1:12345

    Apparently, the Linux kernel considers packets coming from a non-127.0.0.0/8 address to a 127.0.0.0/8 address to be "Martian packets" and drops them. What did work was to have redsocks listen on eth0 instead of lo and have iptables DNAT the packets to the eth0 address instead (or use a REDIRECT rule). The problem with that is that every computer on my network can then use the redsocks instance to connect to any host on the internet, whereas I want to limit its use to a certain set of destination IP addresses only.

    Is there any way to make iptables DNAT packets to 127.0.0.1? Otherwise, does anyone have an idea how I could achieve my goal without opening up the tunnel to everyone?

    Update: I have also tried to change the source of the packets, without any success:

        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 1.2.3.4 -j SNAT --to-source 127.0.0.1
        iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.0/24 -d 127.0.0.1 -j SNAT --to-source 127.0.0.1
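
    For reference, newer kernels expose exactly the knob this needs: route_localnet (added in Linux 3.6) relaxes the martian treatment of 127.0.0.0/8 per interface, after which DNAT to loopback works and redsocks can stay bound to 127.0.0.1, reachable only through the NAT rule's destination list. A sketch, assuming eth0 is the LAN-facing interface and the question's addresses:

        # Allow routing to 127/8 for traffic arriving on eth0 (kernel >= 3.6):
        sysctl -w net.ipv4.conf.eth0.route_localnet=1

        # The original rule then works as written:
        iptables -t nat -A PREROUTING -i eth0 -p tcp -d 1.2.3.4 \
            -j DNAT --to-destination 127.0.0.1:12345

    On older kernels, one possibility is keeping redsocks on the eth0 address but dropping direct connections with a conntrack match on the original destination, e.g. iptables -A INPUT -p tcp --dport 12345 -m conntrack ! --ctorigdst 1.2.3.4 -j DROP.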


  • Jetty - 401 Unauthorized when using basic authentication

    - by JP.
    I am running Solr on Jetty in Ubuntu (a Bitnami VM, if that helps) and am trying to lock down access to both the admin pages and the update/delete/etc. handlers using basic authentication. When I attempt to connect to the admin console via a web browser I am prompted for a user name and password, but the credentials I enter simply do not work. For test purposes I am using foo:bar, but I receive a '401 Unauthorized' response. I see the following in my request log:

        127.0.0.1 - - [10/Nov/2013:05:35:46 +0000] "GET /solr/ HTTP/1.1" 401 1376

    Am I doing something wrong, and/or is there anything obviously incorrect with the configuration below? Any help is greatly appreciated.

    jetty.xml:

        <Call name="addBean">
          <Arg>
            <New class="org.eclipse.jetty.security.HashLoginService">
              <Set name="name">solr</Set>
              <Set name="config"><SystemProperty name="jetty.home" default="."/>/etc/realm.properties</Set>
              <Set name="refreshInterval">5</Set>
            </New>
          </Arg>
        </Call>

    /etc/realm.properties:

        foo: bar, solr_admin

    webdefault.xml:

        <security-constraint>
          <web-resource-collection>
            <url-pattern>/</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>solr_admin</role-name>
          </auth-constraint>
        </security-constraint>
        <login-config>
          <auth-method>BASIC</auth-method>
          <realm-name>solr</realm-name>
        </login-config>
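
    Two details in webdefault.xml may be worth checking: the servlet schema expects a <web-resource-name> inside <web-resource-collection>, and a url-pattern of / only covers the default-servlet path, whereas /* protects the whole context. A sketch of the constraint with both adjusted (role name as in the question); the realm.properties line itself looks fine, since HashLoginService expects entries of the form user: password[,role...]:

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>Solr</web-resource-name>
            <url-pattern>/*</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>solr_admin</role-name>
          </auth-constraint>
        </security-constraint>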


  • Cannot Change "Log on through Terminal Services" in Local Security Policy XP from Server 2008 GP

    - by Campo
    This is a mixed AD environment, Server 2003 R2 and 2008 R2: I have a 2003 R2 AD and a 2008 R2 AD, and GPOs are usually managed from the 2008 R2 machine. I have an RD Gateway on another server as well. I set up the CAP and RAP to allow a normal user to log on to the department's workstations, and I also adjusted the GPO for that OU to allow "log on through Remote Desktop Gateway" for the user group. This worked on my Windows 7 workstation. Unfortunately, the policy has a different name on XP: "Allow log on through Terminal Services". I can get right through into the machine, but when the logon actually happens on the local machine I get the "Cannot log on interactively" error. On the local machine this is set in secpol.msc (Local Security Policy, "User Rights Assignment"), but it is controlled by the GPO under Computer Configuration > Policies > Security Settings > Local Policies > User Rights Assignment. Do I simply need to adjust the same setting in the same GPO, but with a Server 2003 GP editor? It feels like that could cause issues... Looking for some direction, or for anyone who has run into this issue.

    Update: Should support.microsoft.com/kb/186529 work? It still seems I would have the issue, as the actual GP setting for "log on through Terminal Services" differs between Server 2008 R2 and 2003 R2...

    Another thought: should I delete the GPO made for the department and remake it with the 2003 R2 server? I have no 2008-specific settings, since the whole department runs XP other than myself. If that's a solution, I will move my computer out of the department... Thoughts?


  • Windows 7 install detects SSD but doesn't list it to install to

    - by Mohamed Meligy
    I'm having quite a weird problem when trying to install Windows 7 SP1 on a new Corsair Force Series 3 SSD, replacing a failing HDD in my wife's laptop. When I boot into Windows Setup, it shows no disks to install to and tells me to provide a driver for any custom disks I may have. If I instead go to the repair option on the first install window and open a command prompt, I can see the disk using diskpart, partition it, format the partitions, and then access them from the command prompt and copy files to them. After creating partitions, clicking the "browse" button in the Setup screen that lists no available disks does show the partitions created by diskpart. So Setup detects the disk and partitions, but refuses to list them as install targets.

    People on the interwebs seem to suggest that just running diskpart "clean" solved the issue for most people, and creating an "active" "primary" partition is all most tutorials suggest. Both got me only as far as described above. The BIOS has no RAID option, and changing between "ATA" and "AHCI" (the only available options) made no difference. It might be worth mentioning that this laptop has a SATA III controller for the main drive (where I connected the SATA 3 SSD) and SATA II for the DVD drive (which I used for the Windows install media). That's what googling brings up, at least (Dell XPS 15 L502). Any ideas?

    Update: The SSD is 460 GB. I tried setting it up as one big partition, and also as a 70-90 GB partition (NTFS). More importantly, Windows doesn't list the partition as one it cannot install to (which it does with disks that are too small, for example). What happens here is different: it doesn't list anything at all. It shows an empty list of drives.
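
    Given that Setup asks for a driver and lists nothing at all, the most likely missing piece is the storage driver itself; Intel's RST/AHCI driver for this chipset can be loaded from a USB stick via Setup's "Load Driver" button. Separately, since diskpart is already in play from the repair console, it can cheaply rule out a GPT/MBR mismatch, because a BIOS-booted Windows 7 installer will not install to a GPT-style disk. A transcript sketch; the disk number is a placeholder and clean is destructive:

        diskpart
        DISKPART> list disk        (an asterisk under "Gpt" = GPT-style disk)
        DISKPART> select disk 0
        DISKPART> clean            (destructive: wipes the partition table)
        DISKPART> convert mbr
        DISKPART> exit

    Setup is then handed raw unpartitioned MBR space rather than pre-made partitions.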


  • How to install VMware tools for Ubuntu 11.04 hosted on VMware ESXi?

    - by Dmitri Toubelis
    I'm running VMware ESX 4.1 and I have a development VM that I recently upgraded from Ubuntu 10.04 to 11.04. I then tried to re-install VMware Tools, but some of the modules gave errors and would not compile. As a result I'm now having problems backing up this virtual machine, and I suspect VMware Tools is the reason. I installed the latest patches for the VMware host, which included an update to VMware Tools (v8.3.7 build-381511), but I'm still getting the same errors:

        /tmp/vmware-root/modules/vmhgfs-only/super.c:73:4: error: unknown field 'clear_inode' specified in initializer
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/super.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'

    and also this:

        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: error: unknown field 'ioctl' specified in initializer
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: warning: initialization from incompatible pointer type
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c: In function 'vmci_init':
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:151:4: error: implicit declaration of function 'init_MUTEX'
        make[2]: *** [/tmp/vmware-root/modules/vmci-only/vmci_drv.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmci-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** [vmci.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmci-only'

    Any ideas?
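
    These are the usual symptoms of the bundled Tools modules lagging behind the guest kernel: init_MUTEX was removed and the clear_inode superblock operation replaced around kernels 2.6.36-2.6.37, and 11.04 ships 2.6.38. Until a Tools build that supports the newer kernel is available, one option is the distribution-packaged open-vm-tools, which are built against Ubuntu's own kernels. A sketch, assuming the package names in the Ubuntu 11.04 archive:

        # remove the half-installed bundled Tools first
        sudo vmware-uninstall-tools.pl

        # then install the packaged equivalent plus its kernel modules
        sudo apt-get update
        sudo apt-get install open-vm-tools open-vm-dkms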


  • Multiple contacts with shared information

    - by Keith Thompson
    Background: I currently have several hundred contacts, synchronized between a Microsoft Exchange server and several mobile devices. I also save exported copies of the contacts in .vcf format.

    Is there a good way (application, file format, whatever) to maintain contacts with shared information? A very common scenario is that I have contacts for two or more people who live in the same house, for example:

        John Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-2222
        Mobile: 555-555-3333
        E-mail: [email protected]

        Jane Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-4444
        Mobile: 555-555-5555
        E-mail: [email protected]

    As you can see, both contacts have the same home address and phone number, but distinct names and work and mobile phone numbers. (Other information might also be either shared or distinct.) The applications and file formats I'm familiar with don't seem to have a good way to deal with this. If I use a single "John & Jane Doe" contact for both, it's difficult to distinguish the distinct information (if I want to call Jane's mobile phone rather than John's). If I use a separate contact for each, I have to remember to update both of them (or all of them, for N > 2) when they move or change their home phone number.

    An ideal solution would let me create a record containing information for their household and have each of their contact records contain a reference to the household record, so that when I view John's contact record I see both shared and distinct information. Is there anything out there with good support for this kind of thing? (I would think there would be, since it's a very common scenario.) I suppose I could roll my own system that generates merged .vcf files from some extended format, but that wouldn't play well with synchronizing across multiple devices.
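
    For reference, the closest thing in a standard format is vCard 4.0 (RFC 6350), whose RELATED property lets one card point at another by UID; a "household" could be modelled as a card that each member references. In practice, though, Exchange and most mobile sync stacks still speak vCard 2.1/3.0 and would likely drop the property. A sketch of the linkage, with all values illustrative:

        BEGIN:VCARD
        VERSION:4.0
        UID:urn:uuid:00000000-0000-0000-0000-000000000001
        FN:John Doe
        TEL;TYPE=cell:555-555-3333
        RELATED;TYPE=co-resident:urn:uuid:00000000-0000-0000-0000-00000000000a
        END:VCARD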


  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:

    1. Running OS X 10.6 with a normal, local account
    2. Work binds my account to Active Directory
    3. Much time passes with no issues
    4. Set up rvm to manage Ruby installs (this becomes important later)
    5. Upgraded to OS X 10.7 a few days ago
    6. After a successful install, attempted to log in and was presented with a "Must reset password" dialog that never allowed a password to be reset; it would simply shake the box after the new password was entered
    7. Much googling was done. Much more googling was done. Swearing was had.
    8. Logged in as root; created a new account and set it as admin; deleted /Users/[new account]; renamed /Users/[old account] to /Users/[new account]
    9. Logged out of root and logged into the new account with no issues

    After OS X asked for my account password a few times to update the Keychain and other system-level stuff, it was back to business as usual. Then I opened Terminal, cd'd to a project folder, tried rails server, and was presented with:

        /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
            from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
            from /usr/local/bin/rails:18:in `<main>'

    I ran through a few exercises and decided to rm -rf ~/.rvm and reinstall. Running the rvm installer with --trace shows it dies on this line:

        mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log I see many more mentions of /Users/[old account]. Inspecting the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion, I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
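
    Since the account was bound to Active Directory and the home directories were then shuffled by hand as root, a plausible culprit is that Directory Services still records the old NFSHomeDirectory; anything that asks the directory for the home path (as parts of the login machinery do) then sees the stale value even though the shell may show the new one. A diagnostic sketch, with newaccount/oldaccount as placeholders:

        # What does Directory Services think the home directory is?
        dscl . -read /Users/newaccount NFSHomeDirectory

        # If it still points at /Users/oldaccount, update it:
        sudo dscl . -change /Users/newaccount NFSHomeDirectory \
            /Users/oldaccount /Users/newaccount

        # And make sure the renamed directory is owned by the new user:
        sudo chown -R newaccount /Users/newaccount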


  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: Is it safe to run mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), a filesystem, and data? Will the partition still be mountable and the data still there?

    Longer version: I used to have a RAID 6 array but decided to dismantle it, and the disks from the array are now used as non-RAID disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems were created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks fails to be recognized when I try to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there is a partition on it:

        sudo fdisk -l /dev/sdd

        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      243201  1953512001   83  Linux

    Of course mount is right: the device node /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm metadata still on the disk:

        sudo mdadm --examine /dev/sdd

        /dev/sdd:
                 Magic : a92b4efc
               Version : 1.2
           Feature Map : 0x0
            Array UUID : b164e513:c0584be1:3cc53326:48691084
                  Name : pringle:0  (local to host pringle)
         Creation Time : Sat Jun 16 21:37:14 2012
            Raid Level : raid6
          Raid Devices : 6
        Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
            Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
         Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
           Data Offset : 2048 sectors
          Super Offset : 8 sectors
                 State : clean
           Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
           Update Time : Sun Aug 12 22:20:39 2012
              Checksum : 4c329db0 - correct
                Events : 1238535
                Layout : left-symmetric
            Chunk Size : 512K
           Device Role : Active device 3
           Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) workaround.
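
    On the safety question: the metadata shown above is version 1.2, which lives near the start of the device (Super Offset: 8 sectors, i.e. 4 KiB in), inside the gap before /dev/sdd1 begins at sector 63 in the fdisk listing, so zeroing it should not touch the partition's filesystem. Backing up the partition table first makes the operation easy to undo if anything goes sideways. A sketch, on the assumption the situation is exactly as shown above:

        # save the MBR partition table somewhere off this disk first
        sudo sfdisk -d /dev/sdd > sdd-partition-table.backup

        # wipe the stale whole-disk superblock
        sudo mdadm --zero-superblock /dev/sdd

        # verify: the examine should fail and the partition should reappear
        sudo mdadm --examine /dev/sdd
        sudo fdisk -l /dev/sdd

        # if the table were ever damaged, restore it with:
        # sudo sfdisk /dev/sdd < sdd-partition-table.backup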


  • Preventing access to files if a user types the full URL in the address bar

    - by bogha
    I have a website, and some folders on it contain images and files like .pdf, .doc and .docx. A user can easily just type the address in the URL bar to get the file or display the photo:

        http://site/folder1/img/pic1.jpg

    then boom, he can see the image or just download the file. My question is: how can I prevent this kind of action and guarantee secure access to the files? Any suggestions?

    Update, to clarify my idea: I don't want any user browsing the website to get access to these files simply by typing the file's URL. The files are CVs, uploaded by users to a specific folder on the server, which we host outside the company. They are only meant to be viewed by the HR people through a dedicated system. That's the scenario we want. I don't want some web geek who just wants to see what files have been uploaded to this folder to easily download them to his or her computer, view them, or publish them on the internet.
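
    The usual pattern for this is to move the upload folder out of the web-accessible document root entirely and have the HR-facing system stream each file only after an authentication check; at minimum, direct requests to the folder can be refused at the web server. A sketch in Apache 2.2 syntax, with the path as a placeholder:

        # in the vhost config (or an .htaccess file in the folder):
        <Directory "/var/www/site/folder1">
            Order deny,allow
            Deny from all
        </Directory>

    The HR application then reads each CV from a path outside the document root and sends it with the appropriate headers once the user's session checks out, so no direct URL to the files ever exists.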


  • Debugging UI Problems in IE8 (Was IE8 on Windows 7 Authentication Mess)

    - by alharaka
    Update: I think the real question I need to ask here is: how does a technician debug UI problems with Internet Explorer, as opposed to HTML rendering issues, which have pretty good tools? I am aware of the SysInternals tools and the others mentioned below, but maybe I am not harnessing their power properly. Someone else in the TechNet forum I mention had a similar issue. Again, I have lots of data; I am just not sure how to interpret it properly.

    Original post: I tried the venerable TechNet forums to solve this issue. In short, the Windows Security dialog offers no place to put credentials, rendering it pretty much useless. This applies to a whole bunch of our intranet websites, and only a select number of users with a few laptops have the problem. Things I have tried so far:

    - Disabling local Group Policy (not domain connected)
    - Disabling local Security Policy
    - Resetting IE settings
    - A few system restores
    - Re-registering a bunch of IE DLLs and all the other steps here
    - Reinstalling IE8 (dism /online /disable-feature /featurename:Internet-Explorer-Optional-x86, reboot, then dism /online /enable-feature /featurename:Internet-Explorer-Optional-x86, and reboot)
    - An SFC scan, which found nothing

    Still, nothing. Not only am I fed up, but I have begun to really work with APIExplorer and Procmon, as mentioned in the TechNet thread, because I want to know WHAT is happening, not just fix it. Any thoughts?


  • ATI radeon graphics card and screen freeze problem

    - by Thomas
    I recently upgraded my machine with new hardware components. The motherboard is a Gigabyte, the processor an Intel i3 3.6 GHz, with 4 GB of RAM and an ATI Radeon 4350 1 GB graphics card. My OS is Windows XP. When I try to play Call of Duty: Black Ops, the screen freezes; when I play another game such as Medal of Honor, the game suddenly closes after 15 or 20 minutes. I have not been able to find out whether the problem is in the RAM or in the graphics card. I asked a few hardware people, and one of them told me I should install Windows 7 rather than Windows XP. Is that true? Please help me understand the problem and tell me what I should do to fix it, in as much detail as possible. Thanks in advance.

    Update: Yes, I have already installed the latest driver for the ATI Radeon 4350, but the problem persists. Do I need to install Windows 7 instead of Windows XP because my processor is an Intel i3?


  • Visual Studio 2012 Very Slow Typing

    - by DaoCacao
    I have a problem: some time after the SP1 update, VS 2012 became very, very slow when typing text. The solution is not big, and the PC is quite powerful: 16 GB of RAM, an SSD drive, and an i7-2600. I attached to it with another VS instance, and in the debugger I see a lot of exceptions:

        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC in devenv.exe: Microsoft C++ exception: CVcsException at memory location 0x0027DF0C.
        First-chance exception at 0x753BB9BC (KernelBase.dll) in devenv.exe: 0xE0434352 (parameters: 0x80131509, 0x00000000, 0x00000000, 0x00000000, 0x64BF0000).
        The thread 0x288c has exited with code 0 (0x0).

    Does anyone have any idea what CVcsException is? Googling it turns up almost nothing. How do I get rid of this problem?


  • Problems configuring a DB2 CLI/ODBC System DSN in the ODBC Data Source Administrator

    - by Komyg
    I am trying to create a System DSN ODBC connection to a DB2 9.5 database, but I am hitting a very strange problem. I looked around the internet and found a page with instructions on how to proceed: http://www.ryslander.com/how-to-install-and-configure-db2-odbc-driver/. I followed these instructions and am able to create a new System DSN; however, when I try to configure it, my settings don't stick. For example, when I click the "Configure" button on my System DSN, add a TCP/IP protocol configuration on the "Advanced Settings" tab and click "OK", no error appears, but when I click "Configure" again my TCP/IP setting has vanished. The same happens with all my other settings, such as database name, username, password, etc. Could you help me figure out what I am doing wrong? Note: my user is in the administrator group and I am using Windows Server 2008 R2 Enterprise x64.

    Update: I managed to create a User DSN and connect to the database. However, the problem with the System DSN remains.
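
    Two things may be worth ruling out here. On x64 Windows there are two ODBC Administrators, and 32-bit drivers only show their DSNs in the 32-bit one (C:\Windows\SysWOW64\odbcad32.exe); settings that silently "vanish" are also a classic registry-virtualization symptom when the dialog isn't run elevated. Independently of the GUI, the database alias can be registered from an elevated DB2 command window, and the CLI/ODBC driver picks up cataloged databases directly. A sketch with placeholder host, port and names:

        db2 catalog tcpip node MYNODE remote db2host.example.com server 50000
        db2 catalog database MYDB as MYDB at node MYNODE
        db2 terminate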


  • Getting Perl DBD::mysql working on OS X 10.7?

    - by Bart B
    I can't seem to get Perl and MySQL talking to each other on OS X 10.7 Lion. I did all the installs by the book: I used Oracle's PKG installer for the latest MySQL Community Server, and I installed DBI and DBD::mysql via CPAN. There were no problems at all during the install, but when I try to use DBD::mysql to connect to my local DB server I get the following error:

        install_driver(mysql) failed: Can't load '/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql:
        dlopen(/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204.
        at (eval 3) line 3
        Compilation failed in require at (eval 3) line 3.
        Perhaps a required shared library or dll isn't installed where expected

    After a lot of googling, all I could find were suggested hacks, so I gave this one a go: http://arkoftech.wordpress.com/2011/02/10/fixing-dbdmysql-for-mysql-5-5-89-under-macos-10-6-x/. I had to update some of the paths in the instructions, since on Lion it's Perl 5.12 rather than 5.10. After doing that I got a new error:

        dyld: lazy symbol binding failed: Symbol not found: _mysql_init
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Expected in: flat namespace

        dyld: Symbol not found: _mysql_init
          Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
          Expected in: flat namespace

        Trace/BPT trap: 5

    There must be a simple way to get MySQL and Perl working on OS X? HELP!
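
    The first error is the dynamic linker failing to find libmysqlclient at the path recorded inside mysql.bundle, and the second (the unresolved _mysql_init after the hack) suggests the bundle ended up paired with a client library it wasn't built against. Rather than patching paths by hand, it is usually cleaner to compare what the bundle expects with what the PKG actually installed, then force a rebuild of DBD::mysql against the installed library. A diagnostic sketch, with paths taken from the error message:

        # what the bundle was linked against:
        otool -L /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle

        # what the MySQL PKG actually installed (e.g. libmysqlclient.18.dylib):
        ls -l /usr/local/mysql/lib/

        # rebuild DBD::mysql from source against the current client:
        sudo cpan
        cpan> force install DBD::mysql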


  • Outlook 2007 OST File Indexing and OneNote 2007 Indexing are Broken

    - by Matt
    I'm running Outlook 2007 under Windows 7 Home Premium RTM. My OST file was previously being indexed properly, but eventually searches slowed down significantly, so I suspected a problem. Searching and indexing appear broken in OneNote 2007 as well; search there now takes significantly longer too. I brought up the Outlook 2007 Search Options dialog and noticed that my mailbox (on an Exchange 2007 server) wasn't listed in the "Index messages in these data files:" list box. Next I ran the Windows "Find and fix problems with Windows Search" troubleshooter, which reported no errors. Then I brought up the Windows Indexing Options dialog, which does show Outlook listed, clicked Advanced, and rebuilt the index. No dice: the list box in the Outlook 2007 dialog still didn't show my mailbox.

    When I click the Modify button in the Indexing Options dialog, I see a "oneindex://..." entry whose hover text indicates "This location is currently unavailable". When I delete it and rebuild the index, this entry returns.

    Update: Comparing that dialog with a working PC shows that on the broken PC the lower half of the dialog lists Outlook, but neither Outlook nor OneNote appears in the upper half. The working PC has Outlook and OneNote in both parts of the dialog.


  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not, and different environments all have slightly different sudoers files. Most of the time 90% of the users are the same and 10% vary, so we cannot have a single sudoers file for everything. Right now we are using Puppet with 10 different files, with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files.

    I'd like to autogenerate the sudoers files based on the server's domain and keep only one big source with all the sudoers permissions for all users and all environments. Something that looks like the following (a template version is sketched below):

        User_Alias ADMINS = abe, bob, carol, dave

        case $domain {
            "staging1.acme.com" {
                # add dev1, dev2, tester1, tester2 to the sudoers file
            }
            "testing2.acme.com" {
                # add tester1, tester3, tester4 to the sudoers file
            }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either three (prod/stage/test) or preferably one. That file would (somehow) generate sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose is to be able to search for a username in a single file and remove it more quickly than across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chance of an omission. Our sudo version is 1.6.9p8, so we can't use a /etc/sudoers.d folder, only a single sudoers file.
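
    One way to collapse this into a single source is one ERB template plus one file resource: the template branches on the same $domain fact Puppet already exposes, so all environments live in one searchable file on the puppetmaster. A sketch under those assumptions (usernames and domains from the question; module layout and the command specifiers are illustrative):

        # manifests: one resource serves every environment
        file { '/etc/sudoers':
            ensure  => file,
            owner   => 'root',
            group   => 'root',
            mode    => '0440',
            content => template('sudoers/sudoers.erb'),
        }

        # templates/sudoers.erb
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS ALL=(ALL) ALL
        <% if domain == 'staging1.acme.com' -%>
        User_Alias STAGING1 = dev1, dev2, tester1, tester2
        STAGING1 ALL=(ALL) ALL
        <% elsif domain == 'testing2.acme.com' -%>
        User_Alias TESTING2 = tester1, tester3, tester4
        TESTING2 ALL=(ALL) ALL
        <% end -%>

    Since a syntactically broken sudoers can lock everyone out, running the generated file through visudo -c -f on a scratch copy before rolling it out widely is cheap insurance.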


  • WDS: updating RAID drivers in an existing WIM image

    - by Tim
    Here is my current setup: WDS installed on Server 2008 R2 for the new driver store and multicast features; a Windows Server 2003 32-bit Standard image built to support previous DL360 models; and a new HP DL360 G6, which has a new RAID controller in it. I need to add the driver for the RAID controller to my Server 2003 32-bit Standard install image, but I can't seem to figure out the correct method. So far I've tried the following:

    - Mounting the image, placing the drivers into the Sysprep drivers folder, adding the PCI device codes into the sysprep.inf file, and committing the changes to the image (see the sysprep.inf sketch below).
    - Pushing the image to a DL360 G4, ensuring the driver is in the correct locations, and re-sysprepping the image.
    - Hoping that the new driver store feature would magically work with 2003 (a guy can dream, can't he?).

    Is there some standard method I can use to update this image with the new drivers, or do I need to start from scratch with an entirely new build? Thanks in advance.
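
    For a Server 2003-era image, boot-critical storage drivers go through Sysprep's mass-storage handling rather than the 2008-style driver store, so besides having the files on disk the controller has to be declared in [SysprepMassStorage] before the reseal. A sketch of the relevant sysprep.inf fragments; the path, PCI ID and .inf name are placeholders to be taken from the actual HP Smart Array driver package:

        [Unattended]
        OemPnPDriversPath = "Drivers\RAID"

        [SysprepMassStorage]
        ; map the controller's hardware ID to the .inf that installs it
        PCI\VEN_103C&DEV_323A = "C:\Drivers\RAID\hpcisss2.inf"

    Note that sysprep -bmsd only populates [SysprepMassStorage] from the in-box mass-storage .infs; third-party controllers like this one still need their entries added by hand, pointing at the OEM .inf.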

