Search Results

Search found 23614 results on 945 pages for 'for update'.


  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

      Deadlock graph:
                             ---------Blocker(s)--------  ---------Waiter(s)---------
      Resource Name          process session holds waits  process session holds waits
      TM-0000cb52-00000000        22     131   S                23     143          SS
      TM-0000ceec-00000000        23     143   SX               32     138   SX    SSX
      TM-0000cb52-00000000        30     138   SX               22     131           S

      session 131: DID 0001-0016-00000D1C   session 143: DID 0001-0017-000055D5
      session 143: DID 0001-0017-000055D5   session 138: DID 0001-001E-000067A0
      session 138: DID 0001-001E-000067A0   session 131: DID 0001-0016-00000D1C

      Rows waited on:
      Session 143: no row
      Session 138: no row
      Session 131: no row

    There are no bitmap indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the Waiter waits column likely indicates that this is an ITL deadlock. The table is also written to quite often (roughly 8 concurrent inserts or updates, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility. I've increased the INITRANS parameter of the table and its indexes to 100 and increased PCTFREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times.

    My questions are twofold:
    1) Is this actually an ITL deadlock?
    2) If it is an ITL deadlock, what else can be done to avoid it?

    Cross-posted from Stack Overflow. Deadlocks are normally a programming problem, but ITL deadlocks relate directly to how Oracle writes to disk, so this may be an area where DBAs have more experience.
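
    A useful first check is whether Oracle is actually recording ITL contention against these segments. A minimal diagnostic sketch (the view and the 'ITL waits' statistic name are standard; the filter and threshold are illustrative):

      -- run from SQL*Plus as a suitably privileged user
      SELECT owner, object_name, object_type, value AS itl_waits
        FROM v$segment_statistics
       WHERE statistic_name = 'ITL waits'
         AND value > 0
       ORDER BY value DESC;

    One caveat worth noting: changing INITRANS with ALTER TABLE only affects newly formatted blocks, so existing blocks keep their old ITL setting until the segment is rebuilt (for example with ALTER TABLE ... MOVE followed by index rebuilds).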


  • Outlook 2007 will not send/receive using RPC over HTTP to an Exchange server; works for other users

    - by bob franklin smith harriet
    I have an incredibly frustrating user issue that I have been unable to resolve for over a week; any ideas would be greatly appreciated. The user is having trouble using Outlook 2007 to send or receive emails over RPC over HTTP (Outlook Anywhere) to an Exchange server. Basically what happens: the connection is established and the user is prompted for the username and password; those are submitted, then Outlook tries to download emails, which fails, and the connection to the Exchange server remains unavailable. The machine can ping the Exchange server, so there is no connectivity issue there. The setup worked fine for him up until now, and the exact same settings work for possibly hundreds of other users. The same settings also work from the user's iPhone on the same internet connection, and from my own system using Outlook. The Exchange server has the HTTPS webmail feature, and that can be logged into to send and receive emails fine.

    Steps taken so far:
    - removing the .ost file for the account and allowing Office to rebuild it (fixes the issue for a short period of time, then the same symptoms occur)
    - deleted the Exchange profile and recreated it (no change in issue)
    - uninstalled all antivirus and firewalls (no change in issue)
    - removed all cached passwords (keymgr.dll) (no change in issue)
    - removed all entries from the hosts file (no change in issue)
    - uninstalled and reinstalled Office 2007 (temporary fix of issue)

    Installing Symantec Endpoint Client caused a lot of email scan popups to be displayed; after a reboot this stopped, and a scan picked up a few trojans, which it removed. This fixed the issue temporarily as well, but the issue is back again now. I am completely out of ideas; there seems to be nothing that can be done to fix this issue outside of rebuilding the PC, which is a massive Pandora's box I don't want to enter with this user.

    --- Update ---
    Malware scans from multiple products have been run on the machine and all updates were installed. The real problem with this user is his distance from us; there is no way to supply a spare laptop or rebuild the machine currently.
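
    Since the failure happens right after authentication, it may help to watch the RPC/HTTP connections as Outlook makes them. Outlook has a documented diagnostic switch for this (a sketch; run it on the affected machine):

      REM opens Outlook with the Exchange connection status window visible
      outlook.exe /rpcdiag

    Each proxy connection attempt is listed there with its type (Directory or Mail) and status, which at least shows which interface the failure is on.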


  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third-party applications hard-coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0.

    I've tried rebuilding /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks

    Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already-linked controllers at their current ids.
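
    If it comes to hand-made links, a minimal sketch (the physical device path below is hypothetical; copy the real one from the existing c1 links):

      # find the physical /devices path behind the current link
      ls -l /dev/dsk/c1t0d0s0
      # suppose it shows ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a
      # create a c0 alias for the same device (repeat per slice, and in /dev/rdsk)
      ln -s ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a /dev/dsk/c0t0d0s0

    Be aware that a later devfsadm run may treat such manual aliases as stale and clean them up, so this is more of a stopgap than a fix.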


  • Deploying workstations - best practices?

    - by V. Romanov
    Hi guys,

    I've been researching the subject of workstation deployment for a while, and found a ton of info and dozens of different methods and tools, but no "best practice" method that doesn't lack at least one feature I consider required for the solution to be perfect. I'm currently interested in Windows workstation deployment, but if the tools can be extended to Linux, that's added value.

    I want the deployment tools I use to be able to do the following:
    - hardware independent: I want my image or installation to have a minimum of hardware and driver dependency, so that I can use a single image/package for all workstations
    - easily updatable: I want to be able to update my image as easily as possible without redeploying/rebuilding/reimaging all configurations
    - PXE bootable deployment: I want the tools to be bootable off the network, so that I don't need a boot CD/DOK
    - scriptable for minimum human input: ideally, the tool should run automatically after being booted and perform a "default" deployment (including partitioning) unless prompted otherwise, i.e. take a PC, hook it up, power on, PXE boot and forget about it until the OS is deployed

    I found no single product or environment that does all this. The closest I came to is Windows Deployment Services and the WIM image format (see the sketch below). I also checked out numerous imaging and deployment tools including Clonezilla, Ghost, g4u, WPKG and others, but most of them lack the hardware independence and updatability features. We currently have a Symantec Ghost server setup that does imaging over the network, but I'm not satisfied with it, as it has all the drawbacks I listed above.

    Do you have suggestions how to optimize the process of workstation deployment? How do you deploy them in your organization? Thanks!

    Vadim.
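
    For reference, importing a master image into WDS so it can be PXE-deployed is a one-liner (a sketch; wdsutil ships with the WDS server role, and the path and image group here are examples):

      REM import an install image into Windows Deployment Services
      WDSUTIL /Add-Image /ImageFile:"D:\images\install.wim" /ImageType:Install /ImageGroup:"Workstations"

    Updating then means servicing or replacing the single WIM rather than re-imaging machines one by one.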


  • Lenovo B450 won't boot on battery only?

    - by Mywiki Witwiki
    We bought a Lenovo B450 laptop almost a year ago. It comes with NVIDIA GeForce with CUDA graphics, so the battery life is terrible: it will only last 1:30 hours max. We try to run it on battery as much as possible, but because the battery life is so short, sometimes we don't notice that the battery is low until the computer blacks out. Because of the short battery life, the laptop is always plugged into AC power.

    One night the computer froze. Because it was already late, I just reset the laptop by pressing the power button for 10 seconds. The laptop shut off, but I did not bother restarting it. The next morning, the laptop wouldn't turn on on battery only; it will only turn on on AC power. The computer instantly shuts down (improperly) once the adapter is removed, even though the battery was at 100% then. Now it is slowly losing charge (currently at 74%). The battery indicator says, "Plugged in, not charging". I want to bring the laptop to school but I can't, because it won't be portable at all.

    Just to summarize it all:
    1) The laptop suffered some blackouts already.
    2) The laptop was on AC power most of the time.
    3) When the computer froze, it was reset (hard shutdown).
    4) The laptop won't boot with battery only since then.
    5) The laptop will shut down instantly when the AC adapter is removed.
    6) The battery won't charge and is gradually losing charge.

    ======================= UPDATE =============================
    We got the battery replaced. Unfortunately, it delivers only 2 hours max of power.


  • New DVDs with 99 title tracks: which one is the correct track?

    - by Mike Fielden
    I have embarked on the task of backing up my DVD collection. I have noticed that some of the newer movies I am attempting to rip contain 99 title tracks, all with approximately equal overall run times. I use MacTheRipper to rip the DVDs and HandBrake to encode them. My question is: is there a site somewhere that has information regarding which title track to select?

    Disclaimer: I cannot stress this enough, I legally own these DVDs. I am merely making a digital copy. Two examples of such DVDs are Star Trek and Carriers.

    UPDATE: Just an FYI, most of these 99 tracks appear to be full-length tracks; their times look very similar to the overall movie run time (within a few seconds of each other), so using the time isn't a valid way to tell which is the correct track. Opening the movie with VLC seems to be the best way to tell. Thank you all.
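
    Since HandBrake is already in the toolchain, its CLI can dump every title's duration and chapter layout in one pass, which makes comparing the 99 candidates quicker (a sketch; the volume path is an example, and -t 0 means "scan all titles"):

      HandBrakeCLI -i /Volumes/MOVIE_DISC -t 0 2>&1 | grep -E '\+ (title|duration)'

    The title number that VLC ends up playing can then be matched against this list before encoding.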


  • Installing sqlite gem fails on AWS Linux instance with sqlite-devel libraries installed

    - by Scott
    Hi, I'm running an instance built off ami-595a0a1c. I am trying to install the sqlite3 (or sqlite) gem and it's failing with the below error:

      $ sudo gem install sqlite3
      Building native extensions.  This could take a while...
      ERROR:  Error installing sqlite3:
              ERROR: Failed to build gem native extension.

      /usr/bin/ruby extconf.rb
      checking for sqlite3.h... no
      sqlite3.h is missing. Try 'port install sqlite3 +universal'
      or 'yum install sqlite3-devel' and check your shared library search path (the
      location where your sqlite3 shared library is located).
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of
      necessary libraries and/or headers.  Check the mkmf.log file for more
      details.  You may need configuration options.

      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/bin/ruby
        --with-sqlite3-dir
        --without-sqlite3-dir
        --with-sqlite3-include
        --without-sqlite3-include=${sqlite3-dir}/include
        --with-sqlite3-lib
        --without-sqlite3-lib=${sqlite3-dir}/lib

      Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3 for inspection.
      Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-1.3.3/ext/sqlite3/gem_make.out

    Typically, this just means you need to install the development libraries and everything is cool. However, I have installed the sqlite-devel packages and still no dice. Since this is the Amazon Linux instance, I'd rather not add more repositories than the ones Amazon provides if possible. What can I do to get this thing to compile? Thanks for any insight!

    From a brand new instance, here's what I've done:

      $ sudo yum install rubygems ruby-devel
      $ sudo gem update --system
      $ sudo gem install rails
      $ rails new app
      $ cd app
      $ rails server
      Could not find gem 'sqlite3 (= 0)' in any of the gem sources listed in your Gemfile.
      $ sudo yum install sqlite-devel
      $ sudo gem install sqlite    (or sqlite3 -- same result)

    See breakage above.
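
    One thing worth trying before touching the repos: point the build straight at wherever sqlite-devel put the header and library. A sketch, assuming the Amazon package installs under /usr (with 64-bit libs in /usr/lib64):

      # confirm where the devel package placed sqlite3.h
      rpm -ql sqlite-devel | grep sqlite3.h
      # pass the locations through to extconf.rb
      sudo gem install sqlite3 -- --with-sqlite3-include=/usr/include --with-sqlite3-lib=/usr/lib64

    The flags after the bare -- are the same --with-sqlite3-* options listed in the gem's own error output.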


  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to my Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC.

    This MacWorld article explains how it works. Basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address, then connect to it using a private VPN. The "Wide Area Bonjour" part seems to make it a little more complicated than a simple regular domain name, though.

    Note that I'm not interested in using the methods described by LifeHacker, which don't use the MobileMe service at all. I don't want to use a totally different dynamic DNS service; I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my Mac; once I've got that, I know how to enable services so that Windows can talk to it.

    UPDATE: Based on some additional research, it appears that Apple is only assigning IPv6 addresses to the hostname.username.members.mac.com names. So any solution will require enabling IPv6 support on Windows, if possible.
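
    The IPv6-only observation is easy to verify from the Windows side with the stock tools (a sketch; the hostname below is the pattern from the article, not a real record):

      nslookup -type=AAAA hostname.username.members.mac.com
      REM if an AAAA record is returned, check IPv6 reachability
      ping -6 hostname.username.members.mac.com

    If the lookup resolves but ping -6 fails, the missing piece is IPv6 connectivity on the Windows side rather than name resolution.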


  • cannot log into mysql locally

    - by Lostsoul
    When I try to log into MySQL locally using the command:

      mysql -u root -p

    I get this error:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    I can access the server remotely (not as root) and my web pages are using MySQL fine, but locally I cannot log on (which I need, because I need to create some users). The only change I made was to attach another drive to the server and move the SQL data there. Here's my.cnf:

      [mysqld]
      datadir=/media/ephemeral0/data/mysql
      socket=/media/ephemeral0/data/mysql/mysql.sock
      user=mysql
      # Disabling symbolic-links is recommended to prevent assorted security risks
      symbolic-links=0
      # adding more config
      skip-external-locking
      long_query_time=1
      slow_query_log
      slow_query_log_file=/var/log/log-slow-queries.log
      log-bin=mysql-bin
      server-id= 1

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid
      myisam_recover_options

    I read that I need to edit the socket info in my.cnf to make sure it points to the right socket file. I double-checked, and the file exists (it shows type "s" in ls -l, as sockets do: "srwxrwxrwx 1 mysql mysql 0 Jun 21 03:43 mysql.sock"). I'm not really sure how to resolve this. I have tried a reboot, and I ran yum update to make sure I was running the latest packages. Please help!
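
    Note what the error message says: the client is still looking for the socket at the old path (/var/lib/mysql/mysql.sock), while the [mysqld] section now creates it under /media/ephemeral0. The client reads its own socket setting, so it needs to be told the new location too. A sketch using the paths from the my.cnf above:

      mysql -u root -p --socket=/media/ephemeral0/data/mysql/mysql.sock

    or persistently, by adding a client section to my.cnf:

      [client]
      socket=/media/ephemeral0/data/mysql/mysql.sock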


  • Why can't Logman start?

    - by Bill Paetzke
    I'm setting up my first logman counter, but it's not working! There is some file or folder permissions problem, or maybe I wrote the create-counter statement wrong. Here are my counter commands:

      logman create counter BillTest -si 30 -v nnnnnn -max 200 -o "C:\Temp" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
      logman start BillTest

    The first command works; it says counter creation was successful. The second command fails:

      Collection "BillTest" did not start, check the application event log for any errors

    Here's the error in the Event Viewer:

      The service was unable to open the log file C:\Temp_000001.blg for log BillTest and will be stopped. Check the log folder for existence, spelling, permissions, and ensure that no other logs or applications are writing to this log file. You can reenter the log file name using the configuration program. This log will not be started. The error returned is: Access is denied.

    I verified that C:\Temp exists. I'm not a permissions guru, but I did set all the accounts in the security tab of that folder to "full control." Still, the logman start command failed with the same error. I noticed that it was trying to write to C:\Temp_000001.blg instead of C:\Temp\000001.blg, which might be part of the problem. So I tried to update my counter to use "C:\Temp\" instead of "C:\Temp", but that failed with a path-invalid error. Also, all the examples I saw online did not put a trailing slash, so no dice there. I tried this on my machine (Windows XP) and my dev server (Windows Server 2003); both failed with the same error. How can I fix this?
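
    The C:\Temp_000001.blg name is a hint: logman treats -o as the base path of the output file, not a directory, and -v nnnnnn appends the serial number to that base. A sketch of the likely fix (counter set name from the question; the file base name is an example):

      logman update counter BillTest -o "C:\Temp\BillTest"
      logman start BillTest

    That should produce files like C:\Temp\BillTest_000001.blg instead of trying to create a log named Temp_000001.blg in the root of C:, which is where the access-denied error comes from.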


  • How do I create a VBA macro that will copy data from an entry sheet into a summary sheet by date

    - by Mukkman
    I'm trying to create a macro that will copy data from a data entry sheet into a summary sheet. The entry sheet is going to be cleared daily, so I can't just use a formula to reference it. I want the user to be able to enter a date, run a macro, and have the macro copy the data from the entry sheet into the cells for the corresponding date on the summary sheet. I've looked around and found bits and pieces of how to do this, but I can't put it all together.

    Update: Thanks to the information below, I was able to find some additional data. I have a pretty crude macro that works if the user manually selects the correct cell. Now I just need to figure out how to automatically select the correct cell relative to the current date.

      Sub Update_Deposits()
      '
      ' Update_Deposits Macro
      '
          ' date entered by the user on the summary sheet
          Dim selectedDate As String
          Dim rangeFound As Range
          selectedDate = Sheets("Summary Sheet").Range("F3")
          ' locate the cell for that date on the Deposits sheet
          Set rangeFound = Sheets("Deposits").Cells.Find(CDate(selectedDate))

          Dim Total1 As Double
          Dim Total2 As Double
          Dim Total3 As Double
          Dim Total4 As Double
          Dim Total5 As Double
          Total1 = Sheets("Summary Sheet").Range("E6")
          Total2 = Sheets("Summary Sheet").Range("E7")
          Total3 = Sheets("Summary Sheet").Range("E8")
          Total4 = Sheets("Summary Sheet").Range("E9")
          Total5 = Sheets("Summary Sheet").Range("E10")

          ' write the totals into the columns to the right of the found date
          If Not (rangeFound Is Nothing) Then
              rangeFound.Offset(0, 2) = Total1
              rangeFound.Offset(0, 3) = Total2
              rangeFound.Offset(0, 4) = Total3
              rangeFound.Offset(0, 6) = Total4
              rangeFound.Offset(0, 7) = Total5
          End If
      End Sub


  • cfengine3 file_copy only on source side change

    - by megamic
    I am using the 'digest' copy method for all file copy promises because, given the way we package and deploy software, I can't rely on mtime as the criterion for updating files. For various reasons, I am not employing the client-server approach with a central configuration server: rather, we package and deploy our entire configuration module to each server, so from CFEngine's perspective, the source and target are both local on the server where it is running.

    The problem I am having with this approach is that the source will always update the target when they differ, which is what I want most of the time, usually because the source has been updated. However, like many other CFEngine users, we are running an operational environment where occasionally emergency fixes have to be applied immediately, meaning we don't have time to rebuild and redeploy a configuration module, and the fix will often be applied by deploying a tarball with specific changes. Of course this is problematic if CFEngine comes along 5 minutes later and reverts the changes.

    What we would like is to be able to make small, incremental changes to our servers without them being reverted, until the next deployment cycle, at which time the new source files would be copied. We do not consider random file corruption or mistaken changes to involve enough risk to warrant having CFEngine constantly revert deployments to their source copy; the ability to deploy emergency fixes and have them stay that way until the next deployment would be of much greater value and utility.

    So, after all that, my question is this: is CFEngine capable of detecting whether it was the source or the target that changed when the files differ, and if so, is there a way to use the 'digest' copy method but only if the source side changed? I am very open to other ideas and approaches as well, as I am still quite new to this whole configuration management thing.
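
    For context, the promise being described looks roughly like this (a sketch assuming the standard library's local_dcp copy_from body, which compares source and target by hash; the paths are hypothetical):

      bundle agent deploy_config
      {
      files:
          "/etc/myapp/app.conf"
            copy_from => local_dcp("/opt/config-module/files/app.conf");
      }

    As far as I understand it, the digest comparison is direction-blind: cf-agent only sees that the two hashes differ and converges the target to the source, which is exactly why the emergency fixes keep being reverted.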


  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been set up with a regular internet connection, after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. So that they can still print and scan from the local multi-function printer, we have set up a split-tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems.

    Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it might be a NIC driver issue, I sent her another computer, this time a different model; the same problem occurred.

    If I update the hosts file to point to the correct paths, things will work (see the sketch below), and if I connect via a normal VPN connection everything works, but then the user cannot scan or print, which is a problem. I have tried to find ways to create another tunnel on a normal VPN, and to force the correct tunnel on the split-tunnel VPN. It appears that there is something related to the ISP, because if I connect to Comcast or Verizon it is fine, but once she connects to Insite then she has problems. I have been unable to get any support from Insite, as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.
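
    For reference, the hosts-file workaround looks like this (the names and addresses are placeholders for the internal sites in question; the file has to be edited as administrator):

      # C:\Windows\System32\drivers\etc\hosts
      10.0.5.20    intranet.example.local
      10.0.5.21    timesheets.example.local

    That pins name resolution to the internal addresses, presumably sending the traffic down the tunnel, which at least keeps the user working while the routing question is chased down with the ISP.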


  • NIS user not being added to NIS group

    - by Brian
    I have set up a NIS server and several NIS clients. I have a user and a group on the NIS server like so:

      /etc/passwd:
      myself:x:5000:5000:,,,:/home/myself:/bin/bash

      /etc/group:
      fishy:x:3001:otheruser,etc,myself,moreppl

    I imported the users and groups on the NIS client by adding +:::::: to /etc/passwd and +::: to /etc/group. I can log in to the NIS client, but when I run groups, fishy is not listed. But getent group fishy shows that it was imported correctly and lists me as a member. And if I do sudo su - myself, then suddenly groups says I am in the group!

    I also had nscd installed, and the groups worked correctly for a while. It seemed like after being logged in for a while, I would silently be dropped out of the group. If I restarted nscd and logged in again, then the groups worked correctly... for a while. There are no UID or GID conflicts with local users or groups.

    Update: Contents of /etc/nsswitch.conf:

      passwd:         compat
      group:          compat
      shadow:         compat
      hosts:          files nis dns
      networks:       files
      protocols:      db files
      services:       db files
      ethers:         db files
      rpc:            db files
      netgroup:       nis
      aliases:        nis files
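
    A couple of checks that can help localize where the membership is being lost (standard NIS tools; the user and group names are the ones from above):

      # query the NIS map directly, bypassing libc and nscd
      ypmatch fishy group.byname
      # compare with what the resolver stack reports
      getent group fishy
      id myself

    If ypmatch and getent agree but id (run in the problem session) does not, the group list was fixed at login time, and suspicion falls on nscd caching or the compat entries rather than on the maps themselves.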


  • Set up WLAN in 3-level house

    - by Balint Erdi
    I'm having a hard time setting up the network in our house. It has three levels (basement, ground floor, first level). The WLAN is set up by an ASUS RT-N12 router, which provides perfect coverage for the ground floor and the basement. However, I set up my "home office" in the basement, where the signal barely arrived. So I purchased a TP-Link TL-WA901ND (300 Mbps) access point, which I set up in the other corner of the ground floor to expand the ASUS router's range, using the AP's Repeater mode. The distance between my computer and the TP-Link AP is 6-7 meters. There is a staircase going down from the ground floor to the basement, so there are no solid walls between the computer and the AP.

    This setup mostly works (I am writing this from the basement), but it is not reliable (the signal strength sometimes goes down to ~40% of the max), so I wonder if I am doing it correctly or if there is a better way. Screenshots of the router's and the AP's dashboard screens follow. Any comments on what I am doing wrong or hints for improvement are appreciated. Thank you.

    UPDATE: Tried one more thing: setting up the TP-Link AP in Access Point mode. That way, I can make it use a different SSID. I enabled WDS/Bridge so that it expands the range of the ASUS router (see screenshot). That does not work either; if I connect to the network set up by the TP-Link device (PELSTER-2), I can't reach the external network (the Internet). It seems the problem always comes back to this: the TP-Link does not have access to the external network, whatever its mode of operation.


  • Passwordless SSH not working - keys copied and permissions set

    - by Comcar
    I know this question has been asked, but I'm certain I've done what all the other answers suggest.

    Machine A:
    - used ssh-keygen -t rsa to create id_rsa.pub in ~/.ssh/
    - copied Machine A's id_rsa.pub to Machine B user's home directory
    - made the file permissions of id_rsa.pub 600

    Machine B:
    - added Machine A's pub key to authorised_keys and authorised_keys2: cat ~/id_rsa.pub >> ~/.ssh/authorised_keys2
    - made the file permissions of id_rsa.pub 600

    I've also ensured both .ssh directories have permission 700 on both machine A and B. If I try to log in to machine B from machine A, I get asked for the password, not the SSH passphrase. I've got the root users on both machines to talk to each other using passwordless SSH, but I can't get a normal user to do it. Do the user names have to be the same on both sides? Or is there some setting elsewhere I've missed? Machine A is an Ubuntu 10.04 virtual machine running inside VirtualBox on a Windows 7 PC; Machine B is a dedicated Ubuntu 9.10 server.

    UPDATE: I've run ssh with the option -vvv, which produces many, many lines of output; these are the last few:

      debug3: check_host_in_hostfile: filename /home/pete/.ssh/known_hosts
      debug3: check_host_in_hostfile: match line 1
      debug1: Host '192.168.1.19' is known and matches the RSA host key.
      debug1: Found key in /home/pete/.ssh/known_hosts:1
      debug2: bits set: 504/1024
      debug1: ssh_rsa_verify: signature correct
      debug2: kex_derive_keys
      debug2: set_newkeys: mode 1
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug3: Wrote 16 bytes for a total of 1015
      debug2: set_newkeys: mode 0
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug3: Wrote 48 bytes for a total of 1063
      debug2: service_accept: ssh-userauth
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug2: key: /home/pete/.ssh/identity ((nil))
      debug2: key: /home/pete/.ssh/id_rsa (0x7ffe1baab9d0)
      debug2: key: /home/pete/.ssh/id_dsa ((nil))
      debug3: Wrote 64 bytes for a total of 1127
      debug1: Authentications that can continue: publickey,password
      debug3: start over, passed a different list publickey,password
      debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password
      debug3: authmethod_lookup publickey
      debug3: remaining preferred: keyboard-interactive,password
      debug3: authmethod_is_enabled publickey
      debug1: Next authentication method: publickey
      debug1: Trying private key: /home/pete/.ssh/identity
      debug3: no such identity: /home/pete/.ssh/identity
      debug1: Offering public key: /home/pete/.ssh/id_rsa
      debug3: send_pubkey_test
      debug2: we sent a publickey packet, wait for reply
      debug3: Wrote 368 bytes for a total of 1495
      debug1: Authentications that can continue: publickey,password
      debug1: Trying private key: /home/pete/.ssh/id_dsa
      debug3: no such identity: /home/pete/.ssh/id_dsa
      debug2: we did not send a packet, disable method
      debug3: authmethod_lookup password
      debug3: remaining preferred: ,password
      debug3: authmethod_is_enabled password
      debug1: Next authentication method: password
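
    The debug output shows the client offering the key and the server still falling through to password, which on Ubuntu is very often a server-side file check failing. Two things worth verifying on Machine B (note that sshd reads authorized_keys with the US spelling by default): the permissions along the whole chain, and what sshd itself logs. A sketch:

      # on Machine B, as the target user
      chmod go-w ~
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys ~/.ssh/authorized_keys2
      # then watch the server log during a login attempt
      sudo tail -f /var/log/auth.log

    If the file is currently named authorised_keys (UK spelling), sshd will simply never look at it unless AuthorizedKeysFile in sshd_config says so.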


  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows, for work and from home. There are good tools that allow me to manage the many required logins/passwords platform-independently, but mostly they expect me to carry a thumb drive around or require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable; besides, company policy rightly prevents this often. Is there a tool that allows me to add passwords locally and then syncs its DB with the "mother ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature lists of) some good cross-platform tools, but need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new". I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.


  • Java max heap size: how much is too much?

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

    I'm on a Small EC2 instance, which has 1.7 GB RAM. I have the following JAVA_OPTS:

      -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7 GB and I allocated 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. During the last one (which took 25s to complete) I had GC log lines such as:

      1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
      1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    About 300 of them on a single request!!! Now, I don't totally understand why it's always GC'ing from ~28m down to 0 over and over.
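
    To make runs like that easier to study after the fact, the collector activity can be written to a file with timestamps (these are standard HotSpot flags of this vintage; the heap sizes shown are illustrative, not a tuned recommendation for a 1.7 GB instance):

      JAVA_OPTS="-Xms512m -Xmx512m -XX:MaxPermSize=256m \
                 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
                 -Xloggc:/var/log/tomcat/gc.log"

    As an aside, the DefNew entries indicate the serial young-generation collector is in use; -XX:+CMSClassUnloadingEnabled on its own does not switch the JVM to CMS without -XX:+UseConcMarkSweepGC.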


  • Networking problems in VMWare with wireless bridge

    - by Robert Koritnik
    Barebone data:

    - Virtualization: VMware Workstation 6.5 (latest)
    - Host: Windows Server 2008 x64
    - Guest: Windows Server 2008 x86
    - Host network adapter: Ethernet (see comment)
    - Host network adapter: Wireless (see comment)
    - Guest ethernet network adapter 1: Bridged VMNet (automatic)
    - Guest ethernet network adapter 2: Host-only VMNet

    Comment: my host has LAN and WiFi, but only one at the same time; I'm either wired or wireless, never both. So the bridged connection on the VM goes either via wire or air.

    Problem: When I'm wirelessly connected on the host and I access the internet within the VM, my connection just gets stalled (not dropped). It doesn't experience any timeout whatsoever; it just stops downloading/communicating. For instance: I start downloading a file with a browser (IE/FF/CR, doesn't matter) and I have to pause/restart the download when the speed drops to 0. I could wait indefinitely, but the connection won't pick up automatically. What did I miss in my network configuration?

    Update 1: I've tested this in various combinations. This works fine when the host is connected via Ethernet. But when the host is connected via WiFi, the connection on the guest works as previously described. It connects fine, it gets a valid IP from DHCP... everything is cool as long as you don't start doing some intensive network traffic (i.e. download a 2 MB file). In that case it starts downloading and stops after a while; the speed just drops to 0 B/s... Sometimes it picks back up, sometimes it doesn't. The connection still stays up and works; I can ping around with no problem.


  • Why is wget so much faster than Firefox at some downloads?

    - by Earlz
    Recently, I needed to do an update of Xilinx WebPack; mind you, this is one hefty piece of software. It weighs in at 6 gigs, which definitely isn't "quick" on any internet connection I've ever had available to me. So, when I went to download it (using Firefox, of course), I was very... unsettled by the fact that the download was only going at 110 kByte/s. My internet connection is capable of about 2200 kByte/s download, so what gives!?

    My workaround in the past for this issue has been to take the link to my Linode Linux server and download it there with wget, where the download will zip along at 14 MByte/s, and then either copy it to my website directory and download it that way through HTTP, or use sftp. Both ways work about as well and will sufficiently max out my connection. However, I recently figured out the missing variable: I tried doing the download locally with wget and was able to max out my connection!

    TL;DR: Now, my question. Why is wget so much faster than Firefox at downloading this file? I hardly ever see such a difference in download speeds except with this one file.


  • Linksys router cannot change default password

    - by Jessica M.
    My wireless internet suddenly stopped working today. I have Windows 7 and a Linksys WRT54G router. I tried to log into the Linksys router setup page by typing 192.168.1.1 into Firefox, and it prompted me for a username and password as usual. The problem is, when I tried to enter my regular username and password, it did not work. I finally solved my problem when I came across this post here, and the very last post solved it: it suggested I try username: root / password: admin. For some reason the username and password had been changed. When I tried username: root / password: admin, it worked and allowed me to get into the Linksys setup page.

    The problem is I can't change the username or password anymore. Every time I want to log into my Linksys setup page I have to enter username: root / password: admin. I can't change the "WPA shared key" (password). For the security settings I selected WPA2 Personal + AES. Also, the post said "If the firmware was upgraded to non-Linksys firmware, the default will be different". The problem is I didn't update anything, and I'm worried that someone installed a virus or something, or somehow changed the firmware on my router. How did I get non-Linksys firmware on my router?

    EDIT: I figured out how to change the password when I log into the Linksys setup page: Administration -- Management -- password. I still don't understand if my router firmware was changed, or who changed it, or if it happened by mistake.


  • Updating modules on VPS hosted under OpenVZ

    - by tertle
    Been trying to install OpenVPN on a VPS, but have run into a few problems when trying to start the OpenVPN server:

      Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=1]:
      ['FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
       'FATAL: Could not load /lib/modules/2.6.18-028stab070.14/modules.dep: No such file or directory',
       'iptables-restore: line 46 failed']:
      internet/base:1175,internet/base:752,internet/process:45,internet/process:306,internet/_baseprocess:48,internet/process:775,internet/_baseprocess:60,svc/pp:116,svc/svcnotify:26,internet/defer:238,internet/defer:307,internet/defer:323,sagent/ipts:105,sagent/ipts:39,util/error:52,util/error:32
      service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
      service failed to start due to unresolved dependencies: set(['user', 'iptables_openvpn'])
      service failed to start due to unresolved dependencies: set(['iptables_openvpn'])

    Anyway, after a bit of playing around and some advice, I found that the Linux kernel and modules don't match on my server: uname -r returns 2.6.18-028stab070.14, and ls /lib/modules returns 2.6.18-028stab070.7.

    The server is running OpenVZ and my container uses Ubuntu 9.10. So my question is: is it possible for me to update my modules on a VPS, and if so, how would I do this? Or is this something I'll need to get my host to do? Thanks in advance.
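
    One relevant piece of background (stated here as general OpenVZ behaviour, not something from the original question): under OpenVZ the kernel belongs to the host node and is shared by every container, and containers normally cannot load kernel modules themselves. The mismatch is easy to confirm side by side:

      uname -r          # kernel actually running: 2.6.18-028stab070.14
      ls /lib/modules   # module tree present in the container: 2.6.18-028stab070.7

    So if iptables-restore needs modules matching the running kernel, that is typically something the hosting provider has to install or load on the node.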


  • What is causing a Null Pointer Exception in the following code in Java? [migrated]

    - by Joe
    When I run the following code I get a Null Pointer Exception. I cannot figure out why that is happening. Need help.

      public class LinkedList<T> {
          private Link head = null;
          private int length = 0;

          public T get(int index) {
              return find(index).item;
          }

          public void set(int index, T item) {
              find(index).item = item;
          }

          public int length() {
              return length;
          }

          public void add(T item) {
              Link<T> ptr = head;
              if (ptr == null) {
                  // empty list so append to head
                  head = new Link<T>(item);
              } else {
                  // non-empty list, so locate last link
                  while (ptr.next != null) {
                      ptr = ptr.next;
                  }
                  ptr.next = new Link<T>(item);
              }
              length++; // update length cache
          }

          // traverse list looking for link at index
          private Link<T> find(int index) {
              Link<T> ptr = head;
              int i = 0;
              while (i++ != index) {
                  if (ptr != null) {
                      ptr = ptr.next;
                  }
              }
              return ptr;
          }

          private static class Link<S> {
              public S item;
              public Link<S> next;

              public Link(S item) {
                  this.item = item;
              }
          }

          public static void main(String[] args) {
              new LinkedList<String>().get(1);
          }
      }


  • Is there a way to determine which service (in svchost.exe) does an outgoing connection?

    - by fluxtendu
    I'm redoing my firewall configuration with more restrictive policies, and I would like to determine the provenance (and/or destination) of some outgoing connections. I have an issue because they come from svchost.exe and go to web content/application delivery providers, or similar:

      5 IPs in range 82.96.58.0 - 82.96.58.255     --> Akamai Technologies (akamaitechnologies.com)
      3 IPs in range 93.150.110.0 - 93.158.111.255 --> Akamai Technologies (akamaitechnologies.com)
      2 IPs in range 87.248.194.0 - 87.248.223.255 --> LLNW Europe 2 (llnw.net)
      205.234.175.175                              --> CacheNetworks, Inc. (cachefly.net)
      188.121.36.239                               --> Go Daddy Netherlands B.V. (secureserver.net)

    So is it possible to know which service makes a particular connection? Or what's your recommendation about the rules applied to these ones? (Comodo Firewall & Windows 7)

    Update: netstat -ano and tasklist /svc help me a little, but there are many services in one svchost.exe, so it's still an issue. Moreover, the service names returned by "tasklist /svc" are not easily readable. (All the connections are HTTP (port 80), but I don't think that's relevant.)
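
    One standard way to get per-service resolution is to split the suspect services out of the shared svchost process, so each one gets its own PID (sc.exe is built in; the service name below is just an example, and the space after type= is required by sc's parser):

      REM give the service a dedicated svchost instance, then reboot
      sc config wuauserv type= own
      REM afterwards, the connection's PID maps to exactly one service
      netstat -ano | findstr :80
      tasklist /svc /fi "PID eq 1234"

    Once the culprit is identified, the service can be moved back to the shared process with sc config <name> type= share.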


  • Unsigned lenny packages with aptitude safe-upgrade

    - by Liam
    I have several Debian lenny computers. Two have nearly identical sources.list files. On both, I do regular update/safe-upgrades. On one it always goes smoothly; on the other, much of the time I get the following:

      sudo aptitude safe-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Reading extended state information
      Initializing package states... Done
      Reading task descriptions... Done
      The following packages will be upgraded:
        krb5-clients krb5-ftpd krb5-rsh-server krb5-telnetd krb5-user libimlib2
        libkadm55 libkrb53 libpng12-0 libpulse0 xpdf xpdf-common xpdf-reader
      13 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 2906kB of archives. After unpacking 36.9kB will be used.
      Do you want to continue? [Y/n/?]
      WARNING: untrusted versions of the following packages will be installed!

      Untrusted packages could compromise your system's security.
      You should only proceed with the installation if you are certain that
      this is what you want to do.

        krb5-rsh-server krb5-user krb5-ftpd krb5-clients libkrb53 xpdf-reader
        libpng12-0 libkadm55 xpdf libpulse0 libimlib2 krb5-telnetd xpdf-common

      Do you want to ignore this warning and proceed anyway?
      To continue, enter "Yes"; to abort, enter "No": no
      Abort.

    Needless to say, I don't proceed. What is going on? How do I fix it? These are the non-comment lines in the sources.list for this computer:

      deb ftp://ftp.debian.org/debian/ lenny main contrib non-free
      deb-src ftp://ftp.debian.org/debian/ lenny main contrib
      deb http://security.debian.org/ lenny/updates main contrib non-free

    Thank you.
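
    A common cause of this on lenny-era systems is a missing or stale archive signing key rather than genuinely untrusted packages. A sketch of the usual first steps (run as root; the keyring package name is the standard Debian one):

      apt-get install debian-archive-keyring
      aptitude update
      # then retry
      aptitude safe-upgrade

    If the warning persists on only one of two machines with identical sources, comparing the apt-key list output between them should show which key the problem machine is missing.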

