Search Results

Search found 15661 results on 627 pages for 'protected mode'.

Page 190/627

  • Cast exception being generated when using the same type of object

    - by David Tunnell
    I was previously using static variables to hold variable data that I want to save between postbacks. I was having problems and found that the data in these variables is lost when the appdomain ends. So I did some research and decided to go with ViewStates: static Dictionary<string, linkButtonObject> linkButtonDictonary; protected void Page_Load(object sender, EventArgs e) { if (ViewState["linkButtonDictonary"] != null) { linkButtonDictonary = (Dictionary<string, linkButtonObject>)ViewState["linkButtonDictonary"]; } else { linkButtonDictonary = new Dictionary<string, linkButtonObject>(); } } And here is the very simple class I use: [Serializable] public class linkButtonObject { public string storyNumber { get; set; } public string TaskName { get; set; } } I am adding to linkButtonDictionary as a gridview is databound: protected void hoursReportGridView_OnRowDataBound(Object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { LinkButton btn = (LinkButton)e.Row.FindControl("taskLinkButton"); linkButtonObject currentRow = new linkButtonObject(); currentRow.storyNumber = e.Row.Cells[3].Text; currentRow.TaskName = e.Row.Cells[5].Text; linkButtonDictonary.Add(btn.UniqueID, currentRow); } } It appears that my previous issues are resolved; however, a new one has arisen. Sometimes when I post back I am getting this error: [A]System.Collections.Generic.Dictionary`2[System.String,linkButtonObject] cannot be cast to [B]System.Collections.Generic.Dictionary`2[System.String,linkButtonObject]. Type A originates from 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' in the context 'LoadNeither' at location 'C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll'. Type B originates from 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' in the context 'LoadNeither' at location 'C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll'. I don't understand how there can be a casting issue when I am using the same class everywhere. What am I doing wrong and how do I fix it?
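
    A cast error between two identically named types in different load contexts is usually attributed to the [Serializable] class living in a dynamically recompiled page/App_Code assembly, so the dictionary is serialized against one copy of linkButtonObject and deserialized against another. One commonly suggested workaround, shown here only as a minimal sketch (the property name and the two-string layout are illustrative assumptions, not the poster's code), is to keep nothing but framework types in ViewState so type identity cannot drift between postbacks:

        // Sketch: store story number and task name as a plain string[] keyed by the button's UniqueID.
        private Dictionary<string, string[]> LinkButtonMap
        {
            get
            {
                var map = ViewState["linkButtonMap"] as Dictionary<string, string[]>;
                if (map == null)
                {
                    map = new Dictionary<string, string[]>();
                    ViewState["linkButtonMap"] = map;   // captured when view state is persisted
                }
                return map;
            }
        }

        protected void hoursReportGridView_OnRowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                LinkButton btn = (LinkButton)e.Row.FindControl("taskLinkButton");
                LinkButtonMap[btn.UniqueID] = new[] { e.Row.Cells[3].Text, e.Row.Cells[5].Text };
            }
        }

    Because Dictionary<string, string[]> and its contents are framework types, the ViewState round-trip no longer depends on which page assembly defined a user type.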

    Read the article

  • Generic Func<> as parameter to base method

    - by WestDiscGolf
    I might be losing the plot, but I hope someone can point me in the right direction. What am I trying to do? I'm trying to write some base methods which take Func<> and Action so that these methods handle all of the exception handling etc. so it's not repeated all over the place but allow the derived classes to specify what actions they want to execute. So far this is the base class. public abstract class ServiceBase<T> { protected T Settings { get; set; } protected ServiceBase(T setting) { Settings = setting; } public void ExecAction(Action action) { try { action(); } catch (Exception exception) { throw new Exception(exception.Message); } } public TResult ExecFunc<T1, T2, T3, TResult>(Func<T1, T2, T3, TResult> function) { try { /* what goes here?! */ } catch (Exception exception) { throw new Exception(exception.Message); } } } I want to execute an Action in the following way in the derived class (this seems to work): public void Delete(string application, string key) { ExecAction(() => Settings.Delete(application, key)); } And I want to execute a Func in a similar way in the derived class but for the life of me I can't seem to work out what to put in the base class. I want to be able to call it in the following way (if possible): public object Get(string application, string key, int? expiration) { return ExecFunc(() => Settings.Get(application, key, expiration)); } Am I thinking too crazy or is this possible? Thanks in advance for all the help.
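
    A minimal sketch of one common answer (not necessarily the accepted one): because the lambda in the derived class already captures application, key and expiration, the base method never needs to know about them, so a parameterless Func<TResult> is enough:

        public TResult ExecFunc<TResult>(Func<TResult> function)
        {
            try
            {
                return function();
            }
            catch (Exception exception)
            {
                // Passing the original exception as InnerException is an editorial tweak so the
                // stack trace is not lost; the original code rethrew only the message.
                throw new Exception(exception.Message, exception);
            }
        }

    The derived class's return ExecFunc(() => Settings.Get(application, key, expiration)); then compiles unchanged, with TResult inferred from the return type of Settings.Get.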

    Read the article

  • Why does my SQL database not work in Android?

    - by user1426967
    In my app a click on an image button brings you to the gallery. After clicking on an image I want to call onActivityResult to store the image path. But it does not work. In my LogCat it always tells me that it crashes when it tries to save the image path. Can you find the problem? My onActivityResult method: mImageRowId = savedInstanceState != null ? savedInstanceState.getLong(ImageAdapter.KEY_ROWID) : null; @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if(requestCode == PICK_FROM_FILE && data != null && data.getData() != null) { Uri uri = data.getData(); if(uri != null) { Cursor cursor = getContentResolver().query(uri, new String[] { android.provider.MediaStore.Images.ImageColumns.DATA}, null, null, null); cursor.moveToFirst(); String image = cursor.getString(0); cursor.close(); if(image != null) { // HERE IS WHERE I WANT TO SAVE THE IMAGE. HERE MUST BE THE ERROR! if (mImageRowId == null) { long id = mImageHelper.createImage(image); if (id > 0) { mImageRowId = id; } } // Set the image and display it in the edit activity Bitmap bitmap = BitmapFactory.decodeFile(image); mImageButton.setImageBitmap(bitmap); } } } } This is my onSaveInstanceState method: private static final Long DEF_ROW_ID = 0L; @Override protected void onSaveInstanceState(Bundle outState) { outState.putLong(ImageAdapter.KEY_ROWID, mImageRowId != null ? mImageRowId : DEF_ROW_ID); super.onSaveInstanceState(outState); } This is a part from my DbAdapter: public long createImage(String image) { ContentValues cv = new ContentValues(); cv.put(KEY_IMAGE, image); return mImageDb.insert(DATABASE_TABLE, null, cv); }

    Read the article

  • Why is onCreate() called multiple times when I use a Thread()?

    - by RajaReddy PolamReddy
    In my app I faced a problem with threads. I am using native code in my app. I try to load the library and then call native functions from the Android code. 1. By using Thread(): PjsuaThread pjsuaThread = new PjsuaThread(); pjsuaThread.start(); Thread code: class PjsuaThread extends Thread { public void run() { if (pjsua_app.initApp() != 0) { // native function calling return; } else { } pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME); // native function calling finished = true; } When I use code like this, the onCreate() function is called multiple times; it is able to load the library and call some functions properly, but after some seconds onCreate is called again and because of that it's crashing. 2. Using AsyncTask(): I also used AsyncTask<> for this requirement; it crashes the application (crashing in lib code) and is not able to open any functions. class SipTask extends AsyncTask<Void, String, Void> { protected Void doInBackground(Void... args) { if (pjsua_app.initApp() != 0) { return null; } else { } pjsua_app.startPjsua(ApjsuaActivity.CFG_FNAME); finished = true; return null; } @Override protected void onPostExecute(Void result) { super.onPostExecute(result); Log.i(TAG, "On POst "); } } What is annoying is that in most cases it is not a missing library; it is able to load the lib but crashes somewhere in between. Anyone know the reason?

    Read the article

  • Cannot understand the behaviour of the .NET compiler while instantiating a class through an interface (C#)

    - by Newbie
    I have a class that implements an interface. The interface is public interface IRiskFactory { void StartService(); void StopService(); } The class that implements the interface is public class RiskFactoryService : IRiskFactory { } Now I have a console application and one Windows service. From the console application if I write the following code static void Main(string[] args) { IRiskFactory objIRiskFactory = new RiskFactoryService(); objIRiskFactory.StartService(); Console.ReadLine(); objIRiskFactory.StopService(); } It is working fine. However, when I write the same piece of code in the Windows service public partial class RiskFactoryService : ServiceBase { IRiskFactory objIRiskFactory = null; public RiskFactoryService() { InitializeComponent(); objIRiskFactory = new RiskFactoryService(); <- ERROR } /// <summary> /// Starts the service /// </summary> /// <param name="args"></param> protected override void OnStart(string[] args) { objIRiskFactory.StartService(); } /// <summary> /// Stops the service /// </summary> protected override void OnStop() { objIRiskFactory.StopService(); } } It throws the error: Cannot implicitly convert type 'RiskFactoryService' to 'IRiskFactory'. An explicit conversion exists (are you missing a cast?) When I type cast to the interface type, it started working objIRiskFactory = (IRiskFactory)new RiskFactoryService(); My question is: why so? Thanks. (C#)
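
    One hedged reading of the compiler error (a sketch, not a confirmed diagnosis): inside the Windows service, the unqualified name RiskFactoryService resolves to the containing ServiceBase-derived class rather than to the library class that implements IRiskFactory, because both classes share the same name. Qualifying or aliasing the library type removes the ambiguity; the RiskFactoryLib namespace below is an assumption for illustration:

        using System.ServiceProcess;
        // alias the worker class that actually implements IRiskFactory (namespace is assumed)
        using Worker = RiskFactoryLib.RiskFactoryService;

        public partial class RiskFactoryService : ServiceBase
        {
            private IRiskFactory objIRiskFactory;

            public RiskFactoryService()
            {
                InitializeComponent();
                // An unqualified "new RiskFactoryService()" here would construct this service class;
                // the alias makes it unambiguous that we want the IRiskFactory implementation.
                objIRiskFactory = new Worker();
            }
        }

    Renaming either class would achieve the same result without needing a cast.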

    Read the article

  • What is wrong with this asynchronous task?

    - by bluebrain
    The method onPostExecute simply is not executed: I can see 16 in LogCat, but none of the log lines from onPostExecute ever appear. I tried to debug it; it seemed that it goes to the first line of the class (the package line) after the return statement. private class Client extends AsyncTask<Integer, Void, Integer> { protected Integer doInBackground(Integer... params) { Log.e(TAG,10+""); try { socket = new Socket(target, port); Log.e(TAG,11+""); oos = new ObjectOutputStream(socket.getOutputStream()); Log.e(TAG,14+""); ois = new ObjectInputStream(socket.getInputStream()); Log.e(TAG,15+""); } catch (UnknownHostException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } Log.e(TAG,16+""); return 1; } protected void onPostExecute(Integer result) { Log.e(TAG,13+""); try { Log.e(TAG,12+""); oos.writeUTF(key); Log.e(TAG,13+""); if (ois.readInt() == OKAY) { isConnected = true; Log.e(TAG,14+""); }else{ Log.e(TAG,15+""); isConnected = false; } } catch (IOException e) { e.printStackTrace(); isClosed = true; } } }

    Read the article

  • How to implement the disposable pattern in a class that inherits from another disposable class?

    - by TheRHCP
    Hi, I have often used the disposable pattern in simple classes that reference a small amount of resources, but I have never had to implement this pattern on a class that inherits from another disposable class, and I am starting to be a bit confused about how to free all of the resources. I start with a little sample code: public class Tracer : IDisposable { bool disposed; FileStream fileStream; public Tracer() { //Some fileStream initialization } public void Dispose() { this.Dispose(true); GC.SuppressFinalize(this); } protected virtual void Dispose(bool disposing) { if (!disposed) { if (disposing) { if (fileStream != null) { fileStream.Dispose(); } } disposed = true; } } } public class ServiceWrapper : Tracer { bool disposed; ServiceHost serviceHost; //Some properties public ServiceWrapper () { //Some serviceHost initialization } //protected override void Dispose(bool disposing) //{ // if (!disposed) // { // if (disposing) // { // if (serviceHost != null) // { // serviceHost.Close(); // } // } // disposed = true; // } //} } My real question is: how do I implement the disposable pattern inside my ServiceWrapper class to be sure that when I dispose an instance of it, it will dispose the resources in both the derived and the base class? Thanks.
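
    A sketch of the standard answer (the canonical dispose pattern applied to the classes above): the derived class overrides only Dispose(bool), releases its own resources, and then calls base.Dispose(disposing) so the base class can release the FileStream. It is essentially the commented-out override plus the base call:

        public class ServiceWrapper : Tracer
        {
            bool disposed;
            ServiceHost serviceHost;

            protected override void Dispose(bool disposing)
            {
                if (!disposed)
                {
                    if (disposing && serviceHost != null)
                    {
                        serviceHost.Close();   // release resources owned by this class
                    }
                    disposed = true;
                }
                base.Dispose(disposing);       // then let Tracer release its FileStream
            }
        }

    Callers only ever call the public Dispose() that Tracer already exposes; the virtual chain takes care of both levels.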

    Read the article

  • How to stream authenticated content with MediaPlayer on Android

    - by 102790073222983779908
    I've seen quite a few posts asking this question on SO but there doesn't seem to be a definitive answer (or at least an answer I like!) I've got content protected behind basic auth (username/password) -- I can download it fine using the various HTTP download classes but for the life of me I can't sort out how to tell MediaPlayer to stream it (and provide the authentication). I saw one post that suggested it wasn't possible since the MediaPlayer is all native code and doesn't use things like the Authenticator. There are plenty of examples of how to first download to a cached copy and then play that back but... that sort of sucks (and the files may be hundreds of MBs). I saw at least one proposal to download it in smallish chunks and then start & stop the playback (redirecting to the new file) but that sort of sucks also since there would (I presume) be a stutter (I haven't tried it though). The best idea I have at this point is to start downloading to a cache file and then when it's 'full enough' start up playback while I continue to fill the file.... I hope that this works (but again, haven't tried it). Am I missing something obvious? It's so painful to have all the various pieces almost working and I sort of convinced myself that there had to be a way to natively stream protected content (or have it take an already established & qualified InputStream) but it appears no joy. BTW I'm a Mac/iPhone guy and a newb at Android so I'm still fighting a bit of Java learning.... Excuse me if I'm missing something obvious. -john

    Read the article

  • Why doesn't my OracleParameter work?

    - by user1824356
    I'm a .NET developer and this is the first time I have worked with the Oracle provider (Oracle 10g and Framework 4.0). When I add parameters to my command in this way: objCommand.Parameters.Add("pc_cod_id", OracleType.VarChar, 4000).Value = codId; objCommand.Parameters.Add("pc_num_id", OracleType.VarChar, 4000).Value = numId; objCommand.Parameters.Add("return_value", OracleType.Number).Direction = ParameterDirection.ReturnValue; objCommand.Parameters.Add("pc_email", OracleType.VarChar, 4000).Direction = ParameterDirection.Output; I have no problem with the result. But when I add parameters in this way: objCommand.Parameters.Add(CreateParameter(PC_COD_ID, OracleType.VarChar, codId, ParameterDirection.Input)); objCommand.Parameters.Add(CreateParameter(PC_NUM_ID, OracleType.VarChar, numId, ParameterDirection.Input)); objCommand.Parameters.Add(CreateParameter(RETURN_VALUE, OracleType.Number, ParameterDirection.ReturnValue)); objCommand.Parameters.Add(CreateParameter(PC_EMAIL, OracleType.VarChar, ParameterDirection.Output)); The implementation of that function is: protected OracleParameter CreateParameter(string name, OracleType type, ParameterDirection direction) { OracleParameter objParametro = new OracleParameter(name, type); objParametro.Direction = direction; if (type== OracleType.VarChar) { objParametro.Size = 4000; } return objParametro; } All my results are empty strings. My question is: are these ways of adding parameters not the same? And if not, what is the difference? Thanks :) Added: Sorry, I forgot to mention that "CreateParameter" is a function with multiple implementations; the base is the function above, and the others use it. protected OracleParameter CreateParameter(string name, OracleType type, object value, ParameterDirection direction) { OracleParameter objParametro = CreateParameter(name, type, value); objParametro.Direction = direction; return objParametro; } The last parameters don't need a value because they are output parameters; they bring me data from the database.
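
    A hedged sketch (the diagnosis is an assumption, since the three-argument value overload is not shown): this kind of drift usually means one of the CreateParameter overloads never assigns Size or Value the way the working in-line version does. Collapsing everything into a single helper that mirrors the working calls exactly removes that possibility; the optional-parameter signature below is illustrative, not the poster's API:

        protected OracleParameter CreateParameter(string name, OracleType type,
                                                  ParameterDirection direction, object value = null)
        {
            OracleParameter parameter = new OracleParameter(name, type);
            if (type == OracleType.VarChar)
            {
                parameter.Size = 4000;            // the working version always passed 4000 for VarChar
            }
            parameter.Direction = direction;
            if (direction == ParameterDirection.Input)
            {
                parameter.Value = value;          // output/return parameters carry no value going in
            }
            return parameter;
        }

    Comparing the Name, Size, Direction and Value of the parameters the command actually ends up with against the working version is usually the quickest way to spot which overload drops something.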

    Read the article

  • Flash Builder 4 "includeIn" property causing design view error

    - by Chris
    I am creating a custom TextInput component that will define an "error" state. I have extended the TextInput class to change the state to "error" if the errorString property's length is greater than 0. In the skin class, I have defined an "error" state, and added some logic to detect the size and position of the error icon. However, if I have this code at the same time I use the "includeIn" property in the bitmap image tag, I get a design view error. If I either A) Only include that code with no "includeIn" property set, it works or B) dont include the code to set the icon size and position and only use the "includeIn" property, it works. Any ideas what could be causing the design view problem when I use both the "includeIn" property and the icon size/position code at the same time? TextInput Class: package classes { import spark.components.TextInput; public class TextInput extends spark.components.TextInput { [SkinState("error")]; public function TextInput() { super(); } override public function set errorString( value:String ):void { super.errorString = value; invalidateSkinState(); } override protected function getCurrentSkinState():String { if (errorString.length>0) { return "error"; } return super.getCurrentSkinState(); } } } TextInput Skin File: override protected function updateDisplayList(unscaledWidth:Number, unscaledHeight:Number):void { //THIS IS THE CODE THAT SEEMS TO BE CAUSING THE PROBLEM if(getStyle("iconSize") == "large") { errorIcon.right = -12; errorIcon.source = new errorIconLg(); } else { errorIcon.right = -5; errorIcon.source = new errorIconSm(); } super.updateDisplayList(unscaledWidth, unscaledHeight); } </fx:Script> <s:states> <s:State name="normal"/> <s:State name="disabled"/> <s:State name="error"/> </s:states> //If I remove the problem code above or if I take out the includeIn //property here, it works <s:BitmapImage id="errorIcon" verticalCenter="0" includeIn="error" /> </s:SparkSkin>

    Read the article

  • destructor being called by subclass

    - by zero
    I'm currently learning more about PHP objects and constructors/destructors, but I've noticed in my code that the parent class's destructor is being called twice. I thought it was because I was extending the first class with my second class and that the second class was calling it, but this is what the PHP docs say about that: Like constructors, parent destructors will not be called implicitly by the engine. In order to run a parent destructor, one would have to explicitly call parent::__destruct() in the destructor body. So if it is not being called by the subclass, then is it because, by extending the first class, I've made a reference to the parent class, making it call itself twice, or am I way off base here? The code: <?php class test{ public $test1 = "this is a test of a pulic property"; private $test2 = "this is a test of a private property"; protected $test3 = "this is a test of a protected property"; const hello = 900000; function __construct($h){ //echo 'this is the constructor test '.$h; } function x($x2){ echo ' this is fn x'.$x2; } function y(){ print "this is fn y"; } } $obj = new test("this is an \"arg\" sent to instance of test"); class hey extends test{ function hey(){ $this->x('<br>from the host with the most'); echo ' <br>from hey class'.$this->test3; } } $obj2 = new hey(); echo $obj2::hello; ?>

    Read the article

  • PHP class extends not working: why, and is this how to correctly extend a class?

    - by Matthew
    Hi, so I'm trying to understand how inheritance works in PHP using object-oriented programming. The main class is Computer; the class that is inheriting is Mouse. I'm extending the Computer class with the Mouse class. I use __construct in each class; when I instantiate the class I pass the PC type first and whether it has a mouse after. For some reason Computer returns null. Why is this? class Computer { protected $type = 'null'; public function __construct($type) { $this->type = $type; } public function computertype() { $this->type = strtoupper($this->type); return $this->type; } } class Mouse extends Computer { protected $hasmouse = 'null'; public function __construct($hasmouse){ $this->hasmouse = $hasmouse; } public function computermouse() { if($this->hasmouse == 'Y') { return 'This Computer has a mouse'; } } } $pc = new Computer('PC', 'Y'); echo $pc->computertype; echo $pc->computermouse;

    Read the article

  • moving image to bounce or rotate in a circle

    - by BlueMonster
    I found this small application that I've been playing around with for the past little while. I was wondering: if I wanted to simply rotate the image in a circle, or make the entire image just bounce up and down, how would I modify this program to do so? Everything I've tried just stretches the image, even if I do get it to move to the left or to the right. Any ideas on what I can do? Code is below: public partial class Form1 : Form { private int width = 15; private int height = 15; Image pic = Image.FromFile("402.png"); private Button abort = new Button(); Thread t; public Form1() { abort.Text = "Abort"; abort.Location = new Point(190, 230); abort.Click += new EventHandler(Abort_Click); Controls.Add(abort); SetStyle(ControlStyles.DoubleBuffer| ControlStyles.AllPaintingInWmPaint| ControlStyles.UserPaint, true); t = new Thread(new ThreadStart(Run)); t.Start(); } protected void Abort_Click(object sender, EventArgs e) { t.Abort(); } protected override void OnPaint( PaintEventArgs e ) { Graphics g = e.Graphics; g.DrawImage(pic, 10, 10, width, height); base.OnPaint(e); } public void Run() { while (true) { for(int i = 0; i < 200; i++) { width += 5; Invalidate(); Thread.Sleep(30); } } } }

    Read the article

  • Cannot understand the behaviour of the C# compiler while instantiating a class through an interface

    - by Newbie
    I have a class that implements an interface. The interface is public interface IRiskFactory { void StartService(); void StopService(); } The class that implements the interface is public class RiskFactoryService : IRiskFactory { } Now I have a console application and one Windows service. From the console application if I write the following code static void Main(string[] args) { IRiskFactory objIRiskFactory = new RiskFactoryService(); objIRiskFactory.StartService(); Console.ReadLine(); objIRiskFactory.StopService(); } It is working fine. However, when I write the same piece of code in the Windows service public partial class RiskFactoryService : ServiceBase { IRiskFactory objIRiskFactory = null; public RiskFactoryService() { InitializeComponent(); objIRiskFactory = new RiskFactoryService(); <- ERROR } /// <summary> /// Starts the service /// </summary> /// <param name="args"></param> protected override void OnStart(string[] args) { objIRiskFactory.StartService(); } /// <summary> /// Stops the service /// </summary> protected override void OnStop() { objIRiskFactory.StopService(); } } It throws the error: Cannot implicitly convert type 'RiskFactoryService' to 'IRiskFactory'. An explicit conversion exists (are you missing a cast?) When I type cast to the interface type, it started working objIRiskFactory = (IRiskFactory)new RiskFactoryService(); My question is: why so?

    Read the article

  • Enums and inheritance

    - by devoured elysium
    I will use (again) the following class hierarchy: Event and all the following classes inherit from Event: SportEventType1 SportEventType2 SportEventType3 SportEventType4 I have originally designed the Event class like this: public abstract class Event { public abstract EventType EventType { get; } public DateTime Time { get; protected set; } protected Event(DateTime time) { Time = time; } } with EventType being defined as: public enum EventType { Sport1, Sport2, Sport3, Sport4 } The original idea would be that each SportEventTypeX class would set its correct EventType. Now that I think of it, I think this approach is totally incorrect for two reasons: If I want to later add a new SportEventType class I will have to modify the enum If I later decide to remove one SportEventType that I feel I won't use I'm also in big trouble with the enum. I have a class variable in the Event class that makes, afterall, assumptions about the kind of classes that will inherit from it, which kinda defeats the purpose of inheritance. How would you solve this kind of situation? Define in the Event class an abstract "Description" property, having each child class implement it? Having an Attribute(Annotation in Java!) set its Description variable instead? What would be the pros/cons of having a class variable instead of attribute/annotation in this case? Is there any other more elegant solution out there? Thanks
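
    A minimal sketch of the first option the poster lists (an abstract Description member on Event, implemented by each subclass), which removes the enum entirely so adding or dropping an event type never touches shared code; the wording of the descriptions is an assumption:

        public abstract class Event
        {
            public abstract string Description { get; }
            public DateTime Time { get; protected set; }

            protected Event(DateTime time)
            {
                Time = time;
            }
        }

        public class SportEventType1 : Event
        {
            public SportEventType1(DateTime time) : base(time) { }

            public override string Description
            {
                get { return "Sport event type 1"; }
            }
        }

    Code that previously switched on EventType can usually become a virtual method on Event for the same reason; an attribute (annotation) based Description mainly pays off when the text must be discoverable without instantiating the class, for example via reflection.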

    Read the article

  • BlackBerry Field class extension will not paint.

    - by jlindenbaum
    Using JRE 5.0.0, simulator device is an 8520. On a screen I am using a FlowFieldManager(Manager.VERTICAL_SCROLL) and adding Fields to it to show data. When I do this.flowManager = new FlowFieldManager(Manager.VERTICAL_SCROLL); Field field = new Field() { protected void paint(Graphics graphics) { graphics.drawTest("Test", 0, 0); } protected void layout(int width, int height) { this.setExtend(300, 300); // just testing } } this.flowManager.add(field); The screen renders correctly and 'Test' appears on the screen. If, on the other hand, I try and abstract this into a class called CustomField with the same properties and add it to the flow manager the render will not happen. Debugging shows that the device enters into the Object, into the layout function, but not the paint function. I can't figure out why the paint function is not called when I extend Field. The 4.5 API says that layout and paint are the only functions that I really need to extend. (getPreferredWidth and getPreferredHeight will be used to calculate screen sizes etc.) Thanks in advance.

    Read the article

  • Fortigate Remote VPN : no matching gateway for new request

    - by Kedare
    I am trying to configure a FortiGate 60C to act as an IPSec endpoint for remote VPN. I configured it like this: SCR-F0-FGT100C-1 # diagnose vpn ike config vd: root/0 name: SCR-REMOTEVPN serial: 7 version: 1 type: dynamic mode: aggressive dpd: enable retry-count 3 interval 5000ms auth: psk dhgrp: 2 xauth: server-auto xauth-group: VPN-group interface: wan1 distance: 1 priority: 0 phase2s: SCR-REMOTEVPN-PH2 proto 0 src 0.0.0.0/0.0.0.0:0 dst 0.0.0.0/0.0.0.0:0 dhgrp 5 replay keep-alive dhcp policies: none Here is the configuration: config vpn ipsec phase1-interface edit "SCR-REMOTEVPN" set type dynamic set interface "wan1" set dhgrp 2 set xauthtype auto set mode aggressive set proposal aes256-sha1 aes256-md5 set authusrgrp "VPN-group" set psksecret ENC xxx next config vpn ipsec phase2-interface edit "SCR-REMOTEVPN-PH2" set keepalive enable set phase1name "SCR-REMOTEVPN" set proposal aes256-sha1 aes256-md5 set dhcp-ipsec enable next end But when I try to connect from a remote device (I tested with an Android phone), the phone fails to connect and the FortiGate returns this error: 2012-07-20 13:08:51 log_id=0101037124 type=event subtype=ipsec pri=error vd="root" msg="IPsec phase 1 error" action="negotiate" rem_ip=xxx loc_ip=xxx rem_port=1049 loc_port=500 out_intf="wan1" cookies="xxx" user="N/A" group="N/A" xauth_user="N/A" xauth_group="N/A" vpn_tunnel="N/A" status=negotiate_error error_reason=no matching gateway for new request peer_notif=INITIAL-CONTACT I tried searching on the web, but I did not find anything relevant to this. Do you have any idea of what the problem could be? I tried many combinations of settings on the FortiGate without success.

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 fail (operating in redundancy mode): WDC 1 TB WDC 1 TB WDC 1 TB We removed the failed hard drive, and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Manager asked permission to "initialize" the disk as either: Master Boot Record (MBR) Guid Partition Table (GPT) We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command - except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers). I tried from the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode: DISKPART> list volume Volume ### Ltr Label Fs Type Size Status Info ---------- --- ----------- ----- ---------- ------- --------- -------- Volume 0 E VMs (Raid5) NTFS RAID-5 1863 GB Failed Rd Volume 1 D DVD-ROM 0 B No Media Volume 2 System Rese NTFS Partition 100 MB Healthy System Volume 3 C NTFS Partition 1862 GB Healthy Boot There, Volume 0. Make that our active context: DISKPART> select volume 0 Volume 0 is the selected volume. Now we need to find the disk we will be repairing the volume with: DISKPART> list disk Disk ### Status Size Free Dyn Gpt -------- ------------- ------- ------- --- --- Disk 0 Online 931 GB 0 B * Disk 1 Online 931 GB 931 GB * Disk 2 Online 1863 GB 0 B Disk 3 Online 931 GB 0 B * Disk M0 Missing 0 B 0 B * The disk with 931 GB free, Disk 1. Now we just need to repair the volume: DISKPART> repair disk=1 Virtual Disk Service error: The size of the plex member is invalid.

    Read the article

  • MultiView 2000 terminal emulator not printing correctly to Generic/Text Printer on Windows 7

    - by FantaFan
    Guys & gals, Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • How do I upgrade Django on Ubuntu 9.04?

    - by Lorin Hochstein
    I've got Django 1.0.2 installed on Ubuntu 9.04. I'd like to upgrade Django, because I have an app that needs Django 1.1 or greater. I tried using pip to do the upgrade, but got the following: $ sudo pip install Django==1.1 Downloading/unpacking Django==1.1 Downloading Django-1.1.tar.gz (5.6Mb): 5.6Mb downloaded Running setup.py egg_info for package Django Installing collected packages: Django Found existing installation: Django 1.0.2-final Not uninstalling Django at /var/lib/python-support/python2.6, outside environment /usr Running setup.py install for Django changing mode of build/scripts-2.6/django-admin.py from 644 to 755 changing mode of /usr/local/bin/django-admin.py to 755 Successfully installed Django It seems like it worked, but it refuses to remove the original Django 1.02, and sure enough: $ pip freeze | grep -i django Django==1.0.2-final django-debug-toolbar==0.8.3 django-sphinx==2.2.3 $ /usr/local/bin/django-admin.py --version 1.0.2 final The problem, apparently, is that pip won't uninstall files outside of /usr. I'd like to remove the existing Django files manually, but I have no idea how to do that, because I'm unfamiliar with how Python packages are laid out in Ubuntu. It looks pretty complicated. The site-packages directory is: $ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" /usr/lib/python2.6/dist-packages However, that's not where the django files live: $ ls -ld /usr/lib/python2.6/dist-packages/[Dd]jango* ls: cannot access /usr/lib/python2.6/dist-packages/[Dd]jango*: No such file or directory There's a /var/lib/python-support/python2.6/django directory, and the __init__.py file in that directory points to /usr/share/python-support/python-django/django/__init__.py. Clearly, pip is able to figure out where the files live. Is there any way to retrieve the list of files associated with the django package so I can just delete them manually?

    Read the article

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file...but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling but this only serves to hang the terminal session too. I've also tried to run the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer) but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix? Remote OS: AIX 5 Client OS: Windows 7 Pro (32-bit) Printer: Generic/Text on a FILE port TE Software: MultiView 2000 (32-bit) Thanks in advance.

    Read the article

  • Linux e1000e (Intel networking driver) problems galore, where do I start?

    - by Evan Carroll
    I'm currently having a major problem with e1000e (not working at all) in Ubuntu Maverick (1.0.2-k4), after resume I'm getting a lot of stuff in dmesg: [ 9085.820197] e1000e 0000:02:00.0: PCI INT A disabled [ 9089.907756] e1000e: Intel(R) PRO/1000 Network Driver - 1.0.2-k4 [ 9089.907762] e1000e: Copyright (c) 1999 - 2009 Intel Corporation. [ 9089.907797] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9089.907827] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9089.907857] e1000e 0000:02:00.0: setting latency timer to 64 [ 9089.908529] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9089.908922] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9089.908954] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9090.024625] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9090.024630] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9090.024712] e1000e 0000:02:00.0: eth0: MAC: 2, PHY: 2, PBA No: 005302-003 [ 9090.109492] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9090.164219] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X and, a bunch of [ 2128.005447] e1000e 0000:02:00.0: eth0: Detected Hardware Unit Hang: [ 2128.005452] TDH <89> [ 2128.005454] TDT <27> [ 2128.005456] next_to_use <27> [ 2128.005458] next_to_clean <88> [ 2128.005460] buffer_info[next_to_clean]: [ 2128.005463] time_stamp <6e608> [ 2128.005465] next_to_watch <8a> [ 2128.005467] jiffies <6f929> [ 2128.005469] next_to_watch.status <0> [ 2128.005471] MAC Status <80080703> [ 2128.005473] PHY Status <796d> [ 2128.005475] PHY 1000BASE-T Status <4000> [ 2128.005477] PHY Extended Status <3000> [ 2128.005480] PCI Status <10> I decided to compile the latest stable e1000e to 1.2.17, now I'm getting: [ 9895.678050] e1000e: Intel(R) PRO/1000 Network Driver - 1.2.17-NAPI [ 9895.678055] e1000e: Copyright(c) 1999 - 2010 Intel Corporation. [ 9895.678098] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9895.678129] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9895.678162] e1000e 0000:02:00.0: setting latency timer to 64 [ 9895.679136] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.679160] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9895.679192] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9895.791758] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9895.791766] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9895.791850] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 2, PBA No: 005302-003 [ 9895.892464] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.948175] e1000e 0000:02:00.0: irq 44 for MSI/MSI-X [ 9895.949111] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 9895.954694] e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: RX/TX [ 9895.954703] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO [ 9895.955157] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 9906.832056] eth0: no IPv6 routers present With 1.2.20 I get: [ 9711.525465] e1000e: Intel(R) PRO/1000 Network Driver - 1.2.20-NAPI [ 9711.525472] e1000e: Copyright(c) 1999 - 2010 Intel Corporation. 
[ 9711.525521] e1000e 0000:02:00.0: Disabling ASPM L1 [ 9711.525554] e1000e 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9711.525586] e1000e 0000:02:00.0: setting latency timer to 64 [ 9711.526460] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9711.526487] e1000e 0000:02:00.0: Disabling ASPM L0s [ 9711.526523] e1000e 0000:02:00.0: (unregistered net_device): PHY reset is blocked due to SOL/IDER session. [ 9711.639763] e1000e 0000:02:00.0: eth0: (PCI Express:2.5GB/s:Width x1) 00:0a:e4:3e:ce:74 [ 9711.639771] e1000e 0000:02:00.0: eth0: Intel(R) PRO/1000 Network Connection [ 9711.639854] e1000e 0000:02:00.0: eth0: MAC: 3, PHY: 2, PBA No: 005302-003 [ 9712.060770] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9712.116195] e1000e 0000:02:00.0: irq 45 for MSI/MSI-X [ 9712.117098] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 9712.122684] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX [ 9712.122693] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO [ 9712.123142] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 9722.920014] eth0: no IPv6 routers present But, I'm still getting these [ 9982.992851] PCI Status <10> [ 9984.993602] e1000e 0000:02:00.0: eth0: Detected Hardware Unit Hang: [ 9984.993606] TDH <5d> [ 9984.993608] TDT <6b> [ 9984.993611] next_to_use <6b> [ 9984.993613] next_to_clean <5b> [ 9984.993615] buffer_info[next_to_clean]: [ 9984.993617] time_stamp <24da80> [ 9984.993619] next_to_watch <5d> [ 9984.993621] jiffies <24f200> [ 9984.993624] next_to_watch.status <0> [ 9984.993626] MAC Status <80080703> [ 9984.993628] PHY Status <796d> [ 9984.993630] PHY 1000BASE-T Status <4000> [ 9984.993632] PHY Extended Status <3000> [ 9984.993635] PCI Status <10> [ 9986.001047] e1000e 0000:02:00.0: eth0: Reset adapter [ 9986.176202] e1000e: eth0 NIC Link is Up 10 Mbps Full Duplex, Flow Control: RX/TX [ 9986.176211] e1000e 0000:02:00.0: eth0: 10/100 speed: disabling TSO I'm not sure where to start troubleshooting this. Any ideas? 
Here is the result of ethtool -d eth0 MAC Registers ------------- 0x00000: CTRL (Device control register) 0x18100248 Endian mode (buffers): little Link reset: reset Set link up: 1 Invert Loss-Of-Signal: no Receive flow control: enabled Transmit flow control: enabled VLAN mode: disabled Auto speed detect: disabled Speed select: 1000Mb/s Force speed: no Force duplex: no 0x00008: STATUS (Device status register) 0x80080703 Duplex: full Link up: link config TBI mode: disabled Link speed: 10Mb/s Bus type: PCI Express Port number: 0 0x00100: RCTL (Receive control register) 0x04048002 Receiver: enabled Store bad packets: disabled Unicast promiscuous: disabled Multicast promiscuous: disabled Long packet: disabled Descriptor minimum threshold size: 1/2 Broadcast accept mode: accept VLAN filter: enabled Canonical form indicator: disabled Discard pause frames: filtered Pass MAC control frames: don't pass Receive buffer size: 2048 0x02808: RDLEN (Receive desc length) 0x00001000 0x02810: RDH (Receive desc head) 0x00000001 0x02818: RDT (Receive desc tail) 0x000000F0 0x02820: RDTR (Receive delay timer) 0x00000000 0x00400: TCTL (Transmit ctrl register) 0x3103F0FA Transmitter: enabled Pad short packets: enabled Software XOFF Transmission: disabled Re-transmit on late collision: enabled 0x03808: TDLEN (Transmit desc length) 0x00001000 0x03810: TDH (Transmit desc head) 0x00000000 0x03818: TDT (Transmit desc tail) 0x00000000 0x03820: TIDV (Transmit delay timer) 0x00000008 PHY type: IGP2 and ethtool -c eth0 Coalesce parameters for eth0: Adaptive RX: off TX: off stats-block-usecs: 0 sample-interval: 0 pkt-rate-low: 0 pkt-rate-high: 0 rx-usecs: 3 rx-frames: 0 rx-usecs-irq: 0 rx-frames-irq: 0 tx-usecs: 0 tx-frames: 0 tx-usecs-irq: 0 tx-frames-irq: 0 rx-usecs-low: 0 rx-frame-low: 0 tx-usecs-low: 0 tx-frame-low: 0 rx-usecs-high: 0 rx-frame-high: 0 tx-usecs-high: 0 tx-frame-high: 0 Here is also the lspci -vvv for this controller 02:00.0 Ethernet controller: Intel Corporation 82573L Gigabit Ethernet Controller Subsystem: Lenovo ThinkPad X60s Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 45 Region 0: Memory at ee000000 (32-bit, non-prefetchable) [size=128K] Region 2: I/O ports at 2000 [size=32] Capabilities: [c8] Power Management version 2 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+) Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME- Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Address: 00000000fee0300c Data: 415a Capabilities: [e0] Express (v1) Endpoint, MSI 00 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset- DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+ RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Latency L0 <128ns, L1 <64us ClockPM+ Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt- LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol- 
UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- AERCap: First Error Pointer: 14, GenCap- CGenEn- ChkCap- ChkEn- Capabilities: [140 v1] Device Serial Number 00-0a-e4-ff-ff-3e-ce-74 Kernel driver in use: e1000e Kernel modules: e1000e I filed a bug on this upstream, still no idea how to get more useful information. Here is a the result of the running that script EEPROM FIX UPDATE $ sudo bash fixeep-82573-dspd.sh eth0 eth0: is a "82573L Gigabit Ethernet Controller" This fixup is applicable to your hardware Your eeprom is up to date, no changes were made Do I still need to do anything? Also here is my EEPROM dump $ sudo ethtool -e eth0 Offset Values ------ ------ 0x0000 00 0a e4 3e ce 74 30 0b b2 ff 51 00 ff ff ff ff 0x0010 53 00 03 02 6b 02 7e 20 aa 17 9a 10 86 80 df 80 0x0020 00 00 00 20 54 7e 00 00 14 00 da 00 04 00 00 27 0x0030 c9 6c 50 31 3e 07 0b 04 8b 29 00 00 00 f0 02 0f 0x0040 08 10 00 00 04 0f ff 7f 01 4d ff ff ff ff ff ff 0x0050 14 00 1d 00 14 00 1d 00 af aa 1e 00 00 00 1d 00 0x0060 00 01 00 40 1f 12 07 40 ff ff ff ff ff ff ff ff 0x0070 ff ff ff ff ff ff ff ff ff ff ff ff ff ff 4a e0 I'd also like to note that I used eth0 every day for years and until recently never had an issue.

    Read the article

  • SSH error 114 when connecting with FinalBuilder 7

    - by mamcx
    I'm testing FB 7 and try to connect to my Mac OS X Snow Leopard machine. I can connect with paramiko (python SSH library) but not FB7. The only thing I get is: SSH error encoutered: 114 I try stopping & restarting the share session on Mac OS X. update: I enable server debug and get this log: debug1: sshd version OpenSSH_5.2p1 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: rexec_argv[0]='/usr/sbin/sshd' debug1: rexec_argv[1]='-Dd' debug1: Bind to port 22 on ::. Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. Server listening on 0.0.0.0 port 22. debug1: fd 5 clearing O_NONBLOCK debug1: Server will not fork when running in debugging mode. debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8 debug1: inetd sockets after dupping: 3, 3 Connection from 10.3.7.135 port 49457 debug1: Client protocol version 2.0; client software version SecureBlackbox.8 debug1: no match: SecureBlackbox.8 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: privsep_preauth: successfully loaded Seatbelt profile for unprivileged child debug1: permanently_set_uid: 75/75 debug1: list_hostkey_types: ssh-rsa,ssh-dss debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: client->server aes128-ctr [email protected] none debug1: kex: server->client aes128-ctr [email protected] none debug1: expecting SSH2_MSG_KEXDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user mamcx service ssh-connection method none debug1: attempt 0 failures 0 debug1: PAM: initializing for "mamcx" Connection closed by 10.3.7.135 debug1: do_cleanup debug1: PAM: setting PAM_RHOST to "10.3.7.135" debug1: do_cleanup debug1: PAM: cleanup debug1: audit_event: unhandled event 12

    Read the article

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    Hi mates, we have 2 HTTP load balancers with HAProxy and Heartbeat. There are 4 Apache nodes in this cluster. It's doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO. We need sticky connection support in our HAProxy. Also we need load balancing for HTTPS traffic. Here's our HAProxy conf file. global # to have these messages end up in /var/log/haproxy.log you will # need to: # # 1) configure syslog to accept network log events. This is done # by adding the '-r' option to the SYSLOGD_OPTIONS in # /etc/sysconfig/syslog # # 2) configure local2 events to go to the /var/log/haproxy.log # file. A line like the following can be added to # /etc/sysconfig/syslog # # local2.* /var/log/haproxy.log # log 127.0.0.1 local0 log 127.0.0.1 local1 notice chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 4000 user haproxy group haproxy daemon # turn on stats unix socket stats socket /var/lib/haproxy/stats #--------------------------------------------------------------------- # common defaults that all the 'listen' and 'backend' sections will # use if not designated in their block #--------------------------------------------------------------------- defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 #--------------------------------------------------------------------- # main frontend which proxys to the backends #--------------------------------------------------------------------- frontend main *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app #--------------------------------------------------------------------- # static backend for serving up images, stylesheets and such #--------------------------------------------------------------------- backend static balance roundrobin server static 127.0.0.1:4331 check #--------------------------------------------------------------------- # round robin balancing between the various backends #--------------------------------------------------------------------- backend app listen ha-http 10.190.1.28:80 mode http stats enable stats auth admin:xxxxxx balance roundrobin cookie JSESSIONID prefix option httpclose option forwardfor option httpchk HEAD /haproxy.txt HTTP/1.0 server apache1 portal-04:80 cookie A check server apache2 im-01:80 cookie B check server apache3 im-02:80 cookie B check server apache4 im-03:80 cookie B check Please advise. Thanks in advance for your help.

    Read the article

  • Ubuntu 9.10 RSA authentication: SSH fails, FileZilla runs fine

    - by MariusPontmercy
    This is quite a mistery for me. I usually use passwordless RSA authentication to login into my remote *nix servers with ssh and sftp. Never had any problem until now. I cannot connect to an Ubuntu 9.10 machine: user@myclient$ ssh -i .ssh/Ganymede_key [email protected] [...] debug1: Host 'ganymede.server.com' is known and matches the RSA host key. debug1: Found key in /home/user/.ssh/known_hosts:14 debug2: bits set: 494/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: .ssh/Ganymede_key (0xb96a0ef8) debug2: key: .ssh/Ganymede_key ((nil)) debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: .ssh/Ganymede_key debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: .ssh/Ganymede_key debug1: read PEM private key done: type RSA debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password,keyboard-interactive debug2: we did not send a packet, disable method debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 Then it falls back to password authentication. If I disable password authentication on the remote machine my connection attempt just fails with a "Permission denied (publickey)." state. Same thing for sftp from command line. The "funny" thing is that the exact same RSA key works like a charm with a Filezilla sftp session instead: 12:08:00 Trace: Offered public key from "/home/user/.filezilla/keys/Ganymede_key" 12:08:00 Trace: Offer of public key accepted, trying to authenticate using it. 12:08:01 Trace: Access granted 12:08:01 Trace: Opened channel for session 12:08:01 Trace: Started a shell/command 12:08:01 Status: Connected to ganymede.server.com 12:08:02 Trace: CSftpControlSocket::ConnectParseResponse() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Retrieving directory listing... 12:08:02 Trace: CSftpControlSocket::SendNextCommand() 12:08:02 Trace: CSftpControlSocket::ChangeDirSend() 12:08:02 Command: pwd 12:08:02 Response: Current directory is: "/root" 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Trace: CSftpControlSocket::ParseSubcommandResult(0) 12:08:02 Trace: CSftpControlSocket::ListSubcommandResult() 12:08:02 Trace: CSftpControlSocket::ResetOperation(0) 12:08:02 Trace: CControlSocket::ResetOperation(0) 12:08:02 Status: Directory listing successful Any thoughts? M

    Read the article
