Search Results

Search found 971 results on 39 pages for 'disconnect'.

  • How to delay redo apply on a Data Guard standby database

    - by JaneZhang
    To delay redo apply on the standby, set the "DELAY=" attribute on the corresponding log_archive_dest_n parameter on the primary, in minutes. For example, DELAY=360 delays apply by 360 minutes (6 hours):
    SQL> alter system set log_archive_dest_2='SERVICE=standby LGWR SYNC AFFIRM DELAY=360 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) COMPRESSION=ENABLE DB_UNIQUE_NAME=standby';
    If DELAY is specified without a value, the default is 30 minutes. Note that if the standby is running real-time apply, the DELAY setting is ignored and the standby alert log records a warning:
    WARNING: Managed Standby Recovery started with USING CURRENT LOGFILE
    DELAY 360 minutes specified at primary ignored <<<<<<<<<
    For the delay to take effect, cancel real-time apply and restart managed recovery (MRP) without it:
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    A quick test of delayed apply:
    1. Check the current setting:
    SQL> show parameter log_archive_dest_2
    NAME                 TYPE        VALUE
    -------------------- ----------- ------------------------------
    log_archive_dest_2   string      SERVICE=STANDBY LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY
    2. Set the delay to 5 minutes:
    SQL> alter system set log_archive_dest_2='SERVICE=STANDBY LGWR SYNC AFFIRM delay=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBY';
    3. Switch logs on the primary and watch the standby alert log:
    SQL> alter system switch logfile;
    System altered.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
    --------------
                28
    Standby alert log:
    Wed Jun 13 19:48:53 2012
    Archived Log entry 14 added for thread 1 sequence 28 ID 0x4c9d8928 dest 1:
    ARCb: Archive log thread 1 sequence 28 available in 5 minute(s)
    Wed Jun 13 19:48:54 2012
    Media Recovery Delayed for 5 minute(s) (thread 1 sequence 28)  <<<<<<<< apply delayed
    Wed Jun 13 19:53:54 2012
    Media Recovery Log /home/oracle/arch1/standby/1_28_757620395.arc  <<<<< applied 5 minutes later
    Media Recovery Waiting for thread 1 sequence 29 (in transit)
    For details, see the documentation: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_apply.htm Oracle® Data Guard Concepts and Administration 11g Release 2 (11.2) Part Number E25608-03

  • Socket.ReceiveAsync problem

    - by bartol
    Hi, I have a problem using SocketAsyncEventArgs model with .net sockets. Everything works great until the moment that the server wishes to close a client connection. I use following code for this: try { socket.Shutdown(SocketShutdown.Both); } catch { } // throws if client process has already closed finally { socket.Close(); } socket = null; Each connection is using two SocketAsyncEventArgs (one for send and one for receive) and after closing the connection they are returned to a pool from which they can be later reused. And here the problem starts, because when another connection is established and receive args are reused from the pool we get an exception: System.InvalidOperationException: "An asynchronous socket operation is already in progress using this SocketAsyncEventArgs instance."; at System.Net.Sockets.SocketAsyncEventArgs.StartOperationCommon(Socket socket) at System.Net.Sockets.Socket.ReceiveAsync(SocketAsyncEventArgs e) I've done some debugging and it appears that the connection closing code from the beginning of the question does not cancel Socket.ReceiveAsync operation that is in progress when the connection is closed. I've tried many combinations of Shutdown, Disconnect and Linger options for the socket but nothing worked. Any suggestions? Thanks
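
    One approach that avoids this exception is to stop returning a SocketAsyncEventArgs to the pool at disconnect time and instead recycle it from its own Completed callback, after the pending ReceiveAsync has finished (closing the socket typically completes the outstanding operation with an error). A minimal sketch, not the poster's code; the pool class, buffer size and method names are assumptions:

      using System;
      using System.Collections.Generic;
      using System.Net.Sockets;

      class ReceiveArgsPool
      {
          private readonly Stack<SocketAsyncEventArgs> _pool = new Stack<SocketAsyncEventArgs>();
          private readonly object _lock = new object();

          public SocketAsyncEventArgs Rent()
          {
              lock (_lock)
              {
                  if (_pool.Count > 0)
                      return _pool.Pop();
              }
              var args = new SocketAsyncEventArgs();
              args.SetBuffer(new byte[8192], 0, 8192);
              args.Completed += OnReceiveCompleted;
              return args;
          }

          private void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
          {
              if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
              {
                  // The connection is gone and the outstanding ReceiveAsync has now
                  // completed, so the instance is genuinely free to be reused.
                  lock (_lock) { _pool.Push(args); }
                  return;
              }

              // ... process args.Buffer[args.Offset .. args.Offset + args.BytesTransferred - 1] ...

              var socket = (Socket)sender;
              if (!socket.ReceiveAsync(args))          // completed synchronously
                  OnReceiveCompleted(socket, args);
          }
      }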

  • Difference between FQL query and Graph API object access

    - by jwynveen
    What's the difference between accessing user data with the Facebook Graph API (http://graph.facebook.com/btaylor) and using the Graph API to make a FQL query of the same user (https://api.facebook.com/method/fql.query?query=QUERY). Also, does anyone know which of them the Facebook Developer Toolkit (for ASP.NET) uses? The reason I ask is because I'm trying to access the logged in user's birthday after they begin a Facebook Connect session on my site, but when I use the toolkit it doesn't return it. However, if I make a manual call to the Graph API for that user object, it does return it. It's possible I might have something wrong with my call from the toolkit. I think I may need to include the session key, but I'm not sure how to get it. Here's the code I'm using: _connectSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY); try { if (!_connectSession.IsConnected()) { // Not authenticated, proceed as usual. statusResponse = "Please sign-in with Facebook."; } else { // Authenticated, create API instance _facebookAPI = new Api(_connectSession); // Load user user user = _facebookAPI.Users.GetInfo(); statusResponse = user.ToString(); ViewData["fb_user"] = user; } } catch (Exception ex) { //An error happened, so disconnect session _connectSession.Logout(); statusResponse = "Please sign-in with Facebook."; }
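
    For comparison, a bare Graph API call that requests only the birthday field looks like the sketch below and is independent of the toolkit. The access-token parameter and the need for the user_birthday extended permission are assumptions based on the Graph API of that period, so treat this as illustrative:

      using System;
      using System.IO;
      using System.Net;

      class GraphApiSample
      {
          // Returns raw JSON such as {"id":"...","birthday":"01/31/1980"}.
          static string GetBirthdayJson(string accessToken)
          {
              // "me" is the user who authorised the token; birthday is only present
              // if the app asked for the user_birthday permission at Connect time.
              string url = "https://graph.facebook.com/me?fields=birthday&access_token="
                           + Uri.EscapeDataString(accessToken);

              var request = (HttpWebRequest)WebRequest.Create(url);
              using (var response = (HttpWebResponse)request.GetResponse())
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  return reader.ReadToEnd();
              }
          }
      }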

  • NHibernate, Databinding to DataGridView, Lazy Loading, and Session management - need advice

    - by Tom Bushell
    My main application form (WinForms) has a DataGridView, that uses DataBinding and Fluent NHibernate to display data from a SQLite database. This form is open for the entire time the application is running. For performance reasons, I set the convention DefaultLazy.Always() for all DB access. So far, the only way I've found to make this work is to keep a Session (let's call it MainSession) open all the time for the main form, so NHibernate can lazy load new data as the user navigates with the grid. Another part of the application can run in the background, and Save to the DB. Currently, (after considerable struggle), my approach is to call MainSession.Disconnect(), create a disposable Session for each Save, and MainSession.Reconnect() after finishing the Save. Otherwise SQLite will throw "The database file is locked" exceptions. This seems to be working well so far, but past experience has made me nervous about keeping a session open for a long time (I ran into performance problems when I tried to use a single session for both Saves and Loads - the cache filled up, and bogged down everything - see http://stackoverflow.com/questions/2526675/commit-is-very-slow-in-my-nhibernate-sqlite-project). So, my question - is this a good approach, or am I looking at problems down the road? If it's a bad approach, what are the alternatives? I've considered opening and closing my main session whenever the user navigates with the grid, but it's not obvious to me how I would do that - hook every event from the grid that could possibly cause a lazy load? I have the nagging feeling that trying to manage my own sessions this way is fundamentally the wrong approach, but it's not obvious what the right one is.
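
    A minimal sketch of the disconnect / save-in-a-throwaway-session / reconnect pattern described above, assuming a long-lived main session bound to the grid and an available ISessionFactory; the names are placeholders rather than the poster's code:

      using NHibernate;

      public class SaveHelper
      {
          private readonly ISessionFactory _sessionFactory;
          private readonly ISession _mainSession;   // long-lived session backing the grid

          public SaveHelper(ISessionFactory sessionFactory, ISession mainSession)
          {
              _sessionFactory = sessionFactory;
              _mainSession = mainSession;
          }

          public void Save(object entity)
          {
              // Release the SQLite file lock held by the main session's connection.
              _mainSession.Disconnect();
              try
              {
                  using (ISession saveSession = _sessionFactory.OpenSession())
                  using (ITransaction tx = saveSession.BeginTransaction())
                  {
                      saveSession.SaveOrUpdate(entity);
                      tx.Commit();
                  }
              }
              finally
              {
                  // Hand the connection back so lazy loads from the grid keep working.
                  _mainSession.Reconnect();
              }
          }
      }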

  • SQL Server Express 2005 Merge Replication using RMO causes Null Reference exception

    - by Craig Shearer
    I'm trying to use RMO to programmatically perform merge synchronization. I've basically copied the SQL Server example code, as follows: // Create a connection to the Subscriber. ServerConnection conn = new ServerConnection(subscriberName); MergePullSubscription subscription; try { // Connect to the Subscriber. conn.Connect(); // Define the pull subscription. subscription = new MergePullSubscription(subscriptionDbName, publisherName, publicationDbName, publicationName, conn, false); // If the pull subscription exists, then start the synchronization. if (subscription.LoadProperties()) { // Check that we have enough metadata to start the agent. if (subscription.PublisherSecurity != null || subscription.DistributorSecurity != null) { subscription.SynchronizationAgent.Synchronize(); } else { throw new ApplicationException("There is insufficent metadata to " + "synchronize the subscription. Recreate the subscription with " + "the agent job or supply the required agent properties at run time."); } } else { // Do something here if the pull subscription does not exist. throw new ApplicationException(String.Format( "A subscription to '{0}' does not exist on {1}", publicationName, subscriberName)); } } catch (Exception ex) { // Implement appropriate error handling here. throw new ApplicationException("The subscription could not be " + "synchronized. Verify that the subscription has " + "been defined correctly.", ex); } finally { conn.Disconnect(); } I've got the server merge publication defined correctly, but when I run the above code, I get a null reference exception on the call to: subscription.SynchronizationAgent.Synchronize(); The stack trace is as follows: at Microsoft.SqlServer.Replication.MergeSynchronizationAgent.StatusEventSinkMethod(String message, Int32 percent, Int32* returnValue) at Test.ConsoleTest.Program.SynchronizePullSubscription() in F:\Visual Studio Projects\Test\source\Test.ConsoleTest\Program.cs:line 124 It seems, from the stack trace, like something to do with the Status event, but I don't have a handler defined, and defining one makes no difference.
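
    For what it's worth, the usual way to surface the agent's progress is to attach a Status handler before calling Synchronize(), as in the sketch below. The event and property names follow the documented RMO API, but this is an illustration rather than a confirmed fix for the NullReferenceException above:

      using System;
      using Microsoft.SqlServer.Replication;

      class MergeSync
      {
          static void SynchronizeWithStatus(MergePullSubscription subscription)
          {
              MergeSynchronizationAgent agent = subscription.SynchronizationAgent;

              // Handle the status callbacks ourselves instead of relying on the
              // default status sink.
              agent.Status += delegate(object sender, StatusEventArgs e)
              {
                  Console.WriteLine("{0}% - {1}", e.PercentCompleted, e.Message);
              };

              agent.Synchronize();
          }
      }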

  • ASP MVC2 model binding issue on POST

    - by Brandon Linton
    So I'm looking at moving from MVC 1.0 to MVC 2.0 RTM. One of the conventions I'd like to start following is using the strongly-typed HTML helpers for generating controls like text boxes. However, it looks like it won't be an easy jump. I tried migrating my first form, replacing lines like this: <%= Html.TextBox("FirstName", Model.Data.FirstName, new {maxlength = 30}) %> ...for lines like this: <%= Html.TextBoxFor(x => x.Data.FirstName, new {maxlength = 30}) %> Previously, this would map into its appropriate view model on a POST, using the following method signature: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Registration(AccountViewInfo viewInfo) Instead, it currently gets an empty object back. I believe the disconnect is in the fact that we pass the view model into a larger aggregate object that has some page metadata and other fun stuff along with it (hence x.Data.FirstName instead of x.FirstName). So my question is: what is the best way to use the strongly-typed helpers while still allowing the MVC framework to appropriately cast the form collection to my view-model as it does in the original line? Is there any way to do it without changing the aggregate type we pass to the view? Thanks!
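
    One low-friction option is to keep the action's parameter type and tell the binder about the prefix the helpers now emit, since TextBoxFor(x => x.Data.FirstName) renders an input named "Data.FirstName". A sketch, with AccountViewInfo taken from the question and everything else assumed:

      using System.Web.Mvc;

      public class AccountController : Controller
      {
          [AcceptVerbs(HttpVerbs.Post)]
          public ActionResult Registration([Bind(Prefix = "Data")] AccountViewInfo viewInfo)
          {
              // viewInfo.FirstName is bound from the "Data.FirstName" form field.
              return View(viewInfo);
          }
      }

    The alternative is to change the parameter to the aggregate page-model type itself and read its Data property, which keeps the binder and the view expression in full agreement without any attribute.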

  • infinite loop shutting down ensime

    - by Jeff Bowman
    When I run M-X ensime-disconnect I get the following forever: string matching regex `\"((?:[^\"\\]|\\.)*)\"' expected but `^@' found and I see this exception when I use C-c C-c Uncaught exception in com.ensime.server.SocketHandler@769aba32 java.net.SocketException: Broken pipe at java.net.SocketOutputStream.socketWrite0(Native Method) at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) at java.net.SocketOutputStream.write(SocketOutputStream.java:153) at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:220) at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:290) at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:294) at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:140) at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229) at java.io.BufferedWriter.flush(BufferedWriter.java:253) at com.ensime.server.SocketHandler.write(server.scala:118) at com.ensime.server.SocketHandler$$anonfun$act$1$$anonfun$apply$mcV$sp$1.apply(server.scala:132) at com.ensime.server.SocketHandler$$anonfun$act$1$$anonfun$apply$mcV$sp$1.apply(server.scala:127) at scala.actors.Actor$class.receive(Actor.scala:456) at com.ensime.server.SocketHandler.receive(server.scala:67) at com.ensime.server.SocketHandler$$anonfun$act$1.apply$mcV$sp(server.scala:127) at com.ensime.server.SocketHandler$$anonfun$act$1.apply(server.scala:127) at com.ensime.server.SocketHandler$$anonfun$act$1.apply(server.scala:127) at scala.actors.Reactor$class.seq(Reactor.scala:262) at com.ensime.server.SocketHandler.seq(server.scala:67) at scala.actors.Reactor$$anon$3.andThen(Reactor.scala:240) at scala.actors.Combinators$class.loop(Combinators.scala:26) at com.ensime.server.SocketHandler.loop(server.scala:67) at scala.actors.Combinators$$anonfun$loop$1.apply(Combinators.scala:26) at scala.actors.Combinators$$anonfun$loop$1.apply(Combinators.scala:26) at scala.actors.Reactor$$anonfun$seq$1$$anonfun$apply$1.apply(Reactor.scala:259) at scala.actors.ReactorTask.run(ReactorTask.scala:36) at scala.actors.ReactorTask.compute(ReactorTask.scala:74) at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:147) at scala.concurrent.forkjoin.ForkJoinTask.quietlyExec(ForkJoinTask.java:422) at scala.concurrent.forkjoin.ForkJoinWorkerThread.mainLoop(ForkJoinWorkerThread.java:340) at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:325) Is there something else I'm missing in my config or I should check on? Thanks, Jeff

  • EAAccessory Notification problem

    - by Deepak
    Hi, I am using a POS device for card swipe. its working good. i have used following codes. (id) init { self = [super init]; if (self != nil) { NSNotificationCenter *notificationCenter = [NSNotificationCenter defaultCenter]; EAAccessoryManager *accessoryMamaner = [EAAccessoryManager sharedAccessoryManager]; [accessoryMamaner registerForLocalNotifications]; [notificationCenter addObserver: self selector: @selector (accessoryDidConnect:) name: EAAccessoryDidConnectNotification object: nil]; [notificationCenter addObserver: self selector: @selector (accessoryDidDisconnect:) name: EAAccessoryDidDisconnectNotification object: nil]; NSArray *accessories = [accessoryMamaner connectedAccessories]; accessory = nil; session = nil; for (EAAccessory *obj in accessories) { if ([[obj protocolStrings] containsObject:@"com.XXXXX"] || [[obj protocolStrings] containsObject:@"com.YYYYYY"] ) { accessory = obj; break; } } if (accessory) { session = [[EASession alloc] initWithAccessory:accessory forProtocol:@"com.dailysystems.DS247"]; if (!session) session = [[EASession alloc] initWithAccessory:accessory forProtocol:@"com.usaepay.ipos"]; if (session) { self.deviceConnected = YES; [[session inputStream] setDelegate:self]; [[session inputStream] scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [[session inputStream] open]; [[session outputStream] setDelegate:self]; [[session outputStream] scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; [[session outputStream] open]; } else { UIAlertView *accessoryInfo = [[UIAlertView alloc] initWithTitle:@"Alert!" message:@"Hardware is not connected." delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil]; [accessoryInfo show]; [accessoryInfo release]; } } } return self; } When i disconnect the accessory it gives me accessoryDidDisconnect and when i connect it gives me accessoryDidConnect, But Problem is after that accessory stop working it does not respond to command. i tried to release the alloc and alloc again but no use. Please tell me if some one have any idea how to get the accessory work again. Thanks in advance.

  • C# Windows Mobile 6.5 and TCP connections

    - by Phillip
    Hello, I am developing an application which makes a TCP connection out to our company server to pull down data and provide real-time data updates when the information changes. I am using the .NET Compact Framework for the development and the .NET Framework 3.5 (soon to update to 4.0) for the server-side TCP connection. I want to leave the connection open after the initial data is sent to the device from the server in order to keep the server in contact with the device should data updates need to be sent to the device. We already considered doing a WCF or connect/disconnect type of connection but we believe the overhead on the server for creating the session, transmitting and session cleanup would be unacceptable. (each device would be connecting every 60-90 seconds.) So, leaving the connection open is the best option. What I need to know is, when I leave the TCP connection open, do I need to manually transmit a heartbeat (and if so how do I do that with the .NET Compact Framework) or will the framework/stack do that for me? We have code that allows up to reconnect if the device gets disconnected (from network switching or a voice call) so that is handled. Thanks,
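
    Left to itself, the stack sends nothing on an idle connection (TCP keep-alive is off by default), so dead links can go unnoticed for a long time; an application-level heartbeat is the portable answer on the Compact Framework. A minimal device-side sketch, assuming an already-connected TcpClient and an arbitrary one-byte ping every 30 seconds that the server ignores:

      using System;
      using System.IO;
      using System.Net.Sockets;
      using System.Threading;

      class HeartbeatSender : IDisposable
      {
          private readonly NetworkStream _stream;
          private readonly Timer _timer;

          public HeartbeatSender(TcpClient client)
          {
              _stream = client.GetStream();
              // Force a small send periodically so a broken link surfaces as an
              // exception instead of silently idling.
              _timer = new Timer(SendPing, null, 30000, 30000);
          }

          private void SendPing(object state)
          {
              try
              {
                  _stream.Write(new byte[] { 0x00 }, 0, 1);
              }
              catch (IOException)
              {
                  // The connection is gone; kick off the existing reconnect logic.
              }
          }

          public void Dispose()
          {
              _timer.Dispose();
          }
      }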

  • signalR groups - connecting/disconnecting and sending - am I missing something?

    - by Terry_Brown
    very new to signalR, and have rolled up a very simple app that will take questions for moderation at conferences (felt like a straight forward use case) I have 2 hubs at the moment: - Question (for asking questions) - Speaker (these should receive questions and allow moderation, but that will come later) Solution lives at https://github.com/terrybrown/InterASK After watching a video (by David Fowler/Damian Edwards) (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR) and another that I can't find the URL for atm, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people. I implemented IConnected, IDisconnect as I'd seen in one of the videos, and upon debugging I can see Connect fire (and on reload I can see disconnect fire), but it seems nothing I do adds a person to a group. The signalR documentation suggests "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved" which I guess is telling me that I need to keep some method (static or otherwise?) of tracking who is in a group? Certainly I don't seem able to send to groups currently, though I have no problem distributing to anyone currently connected to the app and implementing the same JS method (2 machines on the same page). I suspect I'm just missing something - I read a few of the other questions on here, but none of them seem to mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or that they're old (and nobody is using them). I know this could be considered a subjective question, though what I'm looking for is just a simple means of managing the groups so that I can do what I want to - send a question from one hub, and have people connected to a different hub receive it - groups felt the cleanest solution for this? Many thanks folks. Terry
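
    A sketch of the bookkeeping the documentation is hinting at: a static dictionary in the hub tracks which connection ids joined which group, and the hub adds the connection to the SignalR group at the same time. The Groups.Add / Clients.Group / OnDisconnected calls follow the later 1.x hub API and may be spelt differently in the IConnected/IDisconnect-era build, so treat the SignalR-specific lines as assumptions:

      using System.Collections.Concurrent;
      using System.Collections.Generic;
      using System.Threading.Tasks;
      using Microsoft.AspNet.SignalR;

      public class SpeakerHub : Hub
      {
          // Group name -> connection ids. Static because SignalR does not persist
          // group membership on the server for us.
          private static readonly ConcurrentDictionary<string, HashSet<string>> Members =
              new ConcurrentDictionary<string, HashSet<string>>();

          public Task JoinTalk(string talkId)
          {
              HashSet<string> set = Members.GetOrAdd(talkId, _ => new HashSet<string>());
              lock (set) set.Add(Context.ConnectionId);
              return Groups.Add(Context.ConnectionId, talkId);
          }

          public void AskQuestion(string talkId, string question)
          {
              // Every moderator who joined this talk's group receives the question.
              Clients.Group(talkId).addQuestion(question);
          }

          public override Task OnDisconnected()
          {
              foreach (HashSet<string> set in Members.Values)
                  lock (set) set.Remove(Context.ConnectionId);
              return base.OnDisconnected();
          }
      }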

  • WPF custom BalloonTips problem with multithreading

    - by Erika
    Hi, I have read other related question but i cant really get them to relate to this so I thought it were best to ask, Im pretty new to WPF and so on so please bear with me. I am using this http://www.codeproject.com/KB/WPF/wpf_notifyicon.aspx api to work with custom WPF Windows (in particular FancyBalloon). However, i'm coming across the following problem, I seem unable to start off BalloonTips in a separate thread ( i need this because i'm parsing emails and hence if there are 3 emails for instance, it displays the first email (that works fine), but when it comes to the second email it crashes with a TargetInvocationException , {"Specified element is already the logical child of another element. Disconnect it first."}. Thing is, im supposedly working with the same instance and i have attempted calling it to close it before, disposing it etc but to no avail. (then again if i dispose it, i cant create another instance as apparently WPF UI components must be called from a static thread so throughout the looping of emails + displaying balloon, i am trying to use the same BalloonTip. Any suggestions please? I am really at a loss here and i've been on it for quite a while now :/ I was wondering if there was anyone
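
    One way around the "already the logical child of another element" error is to marshal the call onto the UI thread with the Dispatcher and create a fresh FancyBalloon per notification instead of re-showing one instance from the worker thread. A sketch only: ShowCustomBalloon's signature and the BalloonText property are taken from the notifyicon sample this is built on, and the class names are placeholders:

      using System;
      using System.Windows;
      using System.Windows.Controls.Primitives;
      using Hardcodet.Wpf.TaskbarNotification;

      public class BalloonNotifier
      {
          private readonly TaskbarIcon _notifyIcon;

          public BalloonNotifier(TaskbarIcon notifyIcon)
          {
              _notifyIcon = notifyIcon;
          }

          // Safe to call from the e-mail parsing thread.
          public void ShowEmail(string subject)
          {
              Application.Current.Dispatcher.BeginInvoke(new Action(() =>
              {
                  // New instance each time, so no element ends up with two parents.
                  var balloon = new FancyBalloon { BalloonText = subject };
                  _notifyIcon.ShowCustomBalloon(balloon, PopupAnimation.Slide, 4000);
              }));
          }
      }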

  • Static factory pattern with EJB3/JBoss

    - by purecharger
    I'm fairly new to EJBs and full blown application servers like JBoss, having written and worked with special purpose standalone Java applications for most of my career, with limited use of JEE. I'm wondering about the best way to adapt a commonly used design pattern to EJB3 and JBoss: the static factory pattern. In fact this is Item #1 in Joshua Bloch's Effective Java book (2nd edition) I'm currently working with the following factory: public class CredentialsProcessorFactory { private static final Log log = LogFactory.getLog(CredentialsProcessorFactory.class); private static Map<CredentialsType, CredentialsProcessor> PROCESSORS = new HashMap<CredentialsType, CredentialsProcessor>(); static { PROCESSORS.put(CredentialsType.CSV, new CSVCredentialsProcessor()); } private CredentialsProcessorFactory() {} public static CredentialsProcessor getProcessor(CredentialsType type) { CredentialsProcessor p = PROCESSORS.get(type); if(p == null) throw new IllegalArgumentException("No CredentialsProcessor registered for type " + type.toString()); return p; } However, in the implementation classes of CredentialsProcessor, I require injected resources such as a PersistenceContext, so I have made the CredentialsProcessor interface a @Local interface, and each of the impl's marked with @Stateless. Now I can look them up in JNDI and use the injected resources. But now I have a disconnect because I am not using the factory anymore. My first thought was to change the getProcessor(CredentialsType) method to do a JNDI lookup and return the SLSB instance that is required, but then I need to configure and pass the proper qualified JNDI name. Before I go down that path, I wanted to do more research on accepted practices. How is this design pattern treated in EJB3 / JEE?

  • How to avoid timeouts in WCF?

    - by Jader Dias
    I use netNamedPipeBinding, and my service methods return nothing (void), but they timeout: TimeoutException: "The open operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout." Server stack trace: at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout) at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade) at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation) at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type) To avoid this I turned my service into a OneWay operation. But the timeout still occurs. I expected that it solved my problem. Its the netMsmqBinding the only one that could avoid such timeout? I also tried to make all processing in a separate thread, so the service can disconnect earlier, with no success.
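
    Marking the contract one-way only stops the client from waiting for a reply; the channel still has to Open and the transport send still has to complete, which is why the stack trace shows the timeout inside OnOpen. A sketch of simply widening the client-side timeouts on the binding (contract name, address and values are placeholders):

      using System;
      using System.ServiceModel;

      [ServiceContract]
      public interface IMyService
      {
          [OperationContract(IsOneWay = true)]
          void DoWork();
      }

      class Client
      {
          static void Main()
          {
              var binding = new NetNamedPipeBinding();
              binding.OpenTimeout = TimeSpan.FromMinutes(5);    // the timeout in the stack trace
              binding.SendTimeout = TimeSpan.FromMinutes(5);
              binding.ReceiveTimeout = TimeSpan.FromMinutes(10);

              var factory = new ChannelFactory<IMyService>(
                  binding, new EndpointAddress("net.pipe://localhost/MyService"));

              IMyService proxy = factory.CreateChannel();
              proxy.DoWork();    // one-way, but Open still has to succeed first
              factory.Close();
          }
      }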

  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed. I'm doing this in Powershell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give! function Invoke-SqlRestore { param( [string]$backup_file_name, [string]$server_name, [string]$database_name, [switch]$norecovery=$false ) # Get a new connection to the server [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name Write-Host "Starting restore to $database_name on $server_name." Try { $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File") # Get local paths to the Database and Log file locations If ($server.Settings.DefaultFile.Length -eq 0) {$database_path = $server.Information.MasterDBPath } Else { $database_path = $server.Settings.DefaultFile} If ($server.Settings.DefaultLog.Length -eq 0 ) {$database_log_path = $server.Information.MasterDBLogPath } Else { $database_log_path = $server.Settings.DefaultLog} # Load up the Restore object settings $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore $restore.Action = 'Database' $restore.Database = $database_name $restore.ReplaceDatabase = $true if ($norecovery.IsPresent) { $restore.NoRecovery = $true } Else { $restore.Norecovery = $false } $restore.Devices.Add($backup_device) # Get information from the backup file $restore_details = $restore.ReadBackupHeader($server) $data_files = $restore.ReadFileList($server) # Restore all backup files ForEach ($data_row in $data_files) { $logical_name = $data_row.LogicalName $physical_name = Get-FileName -path $data_row.PhysicalName $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile") $restore_data.LogicalFileName = $logical_name if ($data_row.Type -eq "D") { # Restore Data file $restore_data.PhysicalFileName = $database_path + "\" + $physical_name } Else { # Restore Log file $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name } [Void]$restore.RelocateFiles.Add($restore_data) } $restore.SqlRestore($server) # If there are two files, assume the next is a Log if ($restore_details.Rows.Count -gt 1) { $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log $restore.FileNumber = 2 $restore.SqlRestore($server) } } Catch { $ex = $_.Exception Write-Output $ex.message $ex = $ex.InnerException while ($ex.InnerException) { Write-Output $ex.InnerException.message $ex = $ex.InnerException } Throw $ex } Finally { $server.ConnectionContext.Disconnect() } Write-Host "Restore ended without any errors." }
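
    For reference, the same restore reduced to its SMO essentials in C#, with one extra step worth trying when an existing database refuses to be overwritten: killing the connections that hold it. Every name and path here is a placeholder, and this is an illustration of the API rather than a diagnosis of the script above:

      using Microsoft.SqlServer.Management.Common;
      using Microsoft.SqlServer.Management.Smo;

      class SmoRestoreSample
      {
          static void RestoreOver(string serverName, string dbName, string bakPath)
          {
              var server = new Server(new ServerConnection(serverName));

              // Open connections keep the target database in use and can block the
              // overwrite; drop them before restoring.
              if (server.Databases.Contains(dbName))
                  server.KillAllProcesses(dbName);

              var restore = new Restore();
              restore.Action = RestoreActionType.Database;
              restore.Database = dbName;
              restore.ReplaceDatabase = true;
              restore.NoRecovery = false;
              restore.Devices.AddDevice(bakPath, DeviceType.File);

              restore.SqlRestore(server);
              server.ConnectionContext.Disconnect();
          }
      }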

  • How do I get Google protocol buffer messages over a socket connection without disconnecting the client

    - by Dan
    Hi there, I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However so far I'm running into an issue when it comes to the server receiving the data; it only seems to process it after the client connection has been terminated. This points to me that the data is getting sent, but the server is keeping its inputstream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows: iPhone: Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build]; RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build]; NSData *data = [request data]; AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self]; if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){ [self updateLabel:@"Problem connecting to socket!"]; } else { [self updateLabel:@"Sending data to server..."]; [socket writeData:data withTimeout:-1 tag:0]; [self updateLabel:@"Data sent, disconnecting"]; //[socket disconnect]; } Java: try { RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream()); Person person = wrapper.getPerson(); if (person != null) { System.out.println("Persons name is " + person.getName()); socket.close(); } On running this, it seems to hang on the line where the RequestWrapper is processing the inputStream. I did try replacing the socket writedata method with [request writeToOutputStream:[socket getCFWriteStream]]; Which I thought might work, however I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain that it doesn't contain an invalid tag as the message works when sending it via the writedata method. Any help on the matter would be greatly appreciated! Cheers! Dan (EDIT: I should mention, I am using the metasyntactic gpb code; and the cocoaasyncsocket implementation)

  • Perl DBI doesn't work with Oracle 11g

    - by John
    I am getting the following error connecting to an Oracle 11g database using a simple perl script: failed: ERROR OCIEnvNlsCreate. Check ORACLE_HOME (Linux) env var or PATH (Windows) and or NLS settings, permissions, etc. at The script is as follows: #!/usr/local/bin/perl use strict; use DBI; if ($#ARGV < 3) { print "Usage: perl testDbAccess.pl dataBaseUser dataBasePassword SID dataBasePort\n"; exit 0; } my ($user, $pwd, $sid, $port) = @ARGV; my $host = `hostname`; my $dbh; my $sth; my $dbname = "dbi:Oracle:HOST=$host;SID=$sid;PORT=$port"; openDbConnection(); closeDbConnection(); sub openDbConnection() { $dbh = DBI->connect ($dbname, $user ,$pwd , { RaiseError => 1}) || die "Database connection not made: $DBI::errstr"; } sub closeDbConnection() { #$sth->finish(); $dbh->disconnect(); } Anyone seen this problem before? /john

  • Indy IdSMTP and attachments in Thunderbird

    - by Lobuno
    Hello! Using the latest snapshot of Indy tiburon on D2010. A very simple project like: var stream: TFileStream; (s is TidSMTP and m is TidMessage) begin s.Connect; Stream := TFileStream.Create('c:\Test.zip', fmOpenRead or fmShareExclusive); try with TIdAttachmentMemory.Create(m.MessageParts, Stream) do begin ContentType := 'application/x-zip-compressed'; Name := ExtractFilePath('C:\'); //' FileName := 'Test.zip'; end; finally FreeAndNil(Stream); end; s.Send(m); s.Disconnect(); end; Everything works Ok in Outlook, The bat!, OE, yahoo, etc... but in Thunderbird the attachment is not shown. Looking at the source of the message in Thunderbird, the attachment is there. The only difference I can find between messages send by indy and other clients is that Indy messages have this order: Content-Type: multipart/mixed; boundary="Z\=_7oeC98yIhktvxiwiDTVyhv9R9gwkwT1" MIME-Version: 1.0 while any other clients have the order: MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="Z\=_7oeC98yIhktvxiwiDTVyhv9R9gwkwT1" Don't know if THAT is the source of the problem, but if so: is this a bug on Thunderbird or is this a problem with indy which "malforms" the headers of the messages? Is this order a problem? Does that matter anyway?

  • Viewing a large-resolution VNC server through a small-resolution viewer in Ubuntu

    - by Madiyaan Damha
    I have two Ubuntu computers, one with a large screen resolution (1920x1600) that is running default ubuntu vnc server. I have another computer that has a resolution of about 1200x1024 that I use to vnc into the server (I use the default ubuntu vnc viewer). Now everything works fine except there are annoying scrollbars in the viewer because the server's desktop resolution is so much higher than the viewer's. Is there a way to: 1) Scale the server's desktop down to the viewer's resolution. I know there will be a loss of image quality, but I am willing to try it out. This should be something like how windows media player or vlc scales down the window (and does some interpolation of pixels). 2) Automatically shrink the resolution of the server to the client's when I connect and scale the resolution back when I disconnect. This seems like a less attractive solution. 3) Any other solution that gurus out there use? I am sure someone has experienced this before (annoying scroll bars) so there must be a solution out there. Thanks,

  • Nonblocking Tcp server

    - by hoodoos
    It's not a question really, i'm just looking for some guidelines :) I'm currently writing some abstract tcp server which should use as low number of threads as it can. Currently it works this way. I have a thread doing listening and some worker threads. Listener thread is just sits and wait for clients to connect I expect to have a single listener thread per server instance. Worker threads are doing all read/write/processing job on clients socket. So my problem is in building efficient worker process. And I came to some problem I can't really solve yet. Worker code is something like that(code is really simple just to show a place where i have my problem): List<Socket> readSockets = new List<Socket>(); List<Socket> writeSockets = new List<Socket>(); List<Socket> errorSockets = new List<Socket>(); while( true ){ Socket.Select( readSockets, writeSockets, errorSockets, 10 ); foreach( readSocket in readSockets ){ // do reading here } foreach( writeSocket in writeSockets ){ // do writing here } // POINT2 and here's the problem i will describe below } it works all smothly accept for 100% CPU utilization because of while loop being cycling all over again, if I have my clients doing send-receive-disconnect routine it's not that painful, but if I try to keep alive doing send-receive-send-receive all over again it really eats up all CPU. So my first idea was to put a sleep there, I check if all sockets have their data send and then putting Thread.Sleep in POINT2 just for 10ms, but this 10ms later on produces a huge delay of that 10ms when I want to receive next command from client socket.. For example if I don't try to "keep alive" commands are being executed within 10-15ms and with keep alive it becomes worse by atleast 10ms :( Maybe it's just a poor architecture? What can be done so my processor won't get 100% utilization and my server to react on something appear in client socket as soon as possible? Maybe somebody can point a good example of nonblocking server and architecture it should maintain?
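
    Two details of Socket.Select explain most of the CPU burn: the last argument is in microseconds (so the 10 above returns almost immediately), and any socket placed in the write list comes back "ready" whenever its send buffer has room, which on an idle connection is always. Blocking with a real timeout and only polling for writability when there is queued outgoing data lets the loop sleep in the kernel. A simplified sketch; the list handling and the hand-off from the listener thread are assumptions:

      using System.Collections.Generic;
      using System.Net.Sockets;
      using System.Threading;

      class Worker
      {
          private readonly List<Socket> _clients = new List<Socket>();

          public void Run()
          {
              while (true)
              {
                  if (_clients.Count == 0) { Thread.Sleep(50); continue; }

                  // Select mutates the lists, so rebuild them on every pass. Only put
                  // a socket in a write list when data is actually queued for it.
                  var readable = new List<Socket>(_clients);
                  var errored  = new List<Socket>(_clients);

                  // Block for up to 1 second (1,000,000 microseconds) until something
                  // is readable or broken; -1 would block indefinitely. No spinning.
                  Socket.Select(readable, null, errored, 1000000);

                  foreach (Socket s in readable)
                  {
                      // Receive here; a zero-byte read means the peer disconnected.
                  }
                  foreach (Socket s in errored)
                  {
                      // Drop the connection and remove it from _clients.
                  }
              }
          }
      }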

  • HttpURLConnection: What is the minimum best-practice implementation?

    - by stormin986
    I've come across a lot of HttpURLConnection examples that are nothing more than openConnection(), getInputStream(), and then they just read the buffer and are done. It's simple but seems like it's not the best implementation ... it handles no problems. I don't yet know much about http, so I keep thinking I have everything covered until a new problem arises. I'm currently experiencing a similar problem to this one. Most times I try to read the same resource a second time (from a different HttpURLConnection object, after I .disconnect()'ed the previous one), the response code returns as -1 (but no exception is thrown!!). Before I knew to check the response code, I was baffled since I was throwing no exceptions. So, is there a minimum 'best practice' HttpURLConnection implementation? What are notable exceptions to handle? Request code checking? Any other error checks? What connection parameters do and don't need to be set (like doInput / doOutput, are these even necessary? Some examples have em, some don't). Etc. I realize this is kind of a broad question but I think it has potential to be a very useful resource if many of the common use cases and FAQs are addressed in one central place. This seems like the kind of thing a community wiki would be good for...

  • C# StreamReader.EndOfStream produces IOException

    - by Ziplin
    I'm working on an application that accepts TCP connections and reads in data until an </File> marker is read and then writes that data to the filesystem. I don't want to disconnect, I want to let the client sending the data to do that so they can send multiple files in one connection. I'm using the StreamReader.EndOfStream around my outter loop, but it throws an IOException when the client disconnects. Is there a better way to do this? private static void RecieveAsyncStream(IAsyncResult ar) { TcpListener listener = (TcpListener)ar.AsyncState; TcpClient client = listener.EndAcceptTcpClient(ar); // init the streams NetworkStream netStream = client.GetStream(); StreamReader streamReader = new StreamReader(netStream); StreamWriter streamWriter = new StreamWriter(netStream); while (!streamReader.EndOfStream) // throws IOException { string file= ""; while (file!= "</File>" && !streamReader.EndOfStream) { file += streamReader.ReadLine(); } // write file to filesystem } listener.BeginAcceptTcpClient(RecieveAsyncStream, listener); }
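
    One way to avoid touching EndOfStream at all is to treat ReadLine() returning null as the end-of-connection signal and wrap the loop in a try/catch for the abrupt-disconnect case. A sketch, assuming the </File> marker arrives on a line of its own:

      using System.IO;
      using System.Net.Sockets;
      using System.Text;

      static class FileReceiver
      {
          public static void ReadFiles(TcpClient client)
          {
              using (NetworkStream netStream = client.GetStream())
              using (StreamReader reader = new StreamReader(netStream))
              {
                  var file = new StringBuilder();
                  try
                  {
                      string line;
                      while ((line = reader.ReadLine()) != null)   // null = client closed cleanly
                      {
                          if (line == "</File>")
                          {
                              // write file.ToString() to the filesystem, then reset
                              file.Length = 0;
                              continue;
                          }
                          file.Append(line);
                      }
                  }
                  catch (IOException)
                  {
                      // Client dropped mid-transfer; log and discard the partial file.
                  }
              }
          }
      }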

  • Smack API giving error while logging into Tigase Server setup locally

    - by Ameya Phadke
    Hi, I am currently developing android XMPP client to communicate with the Tigase server setup locally.Before starting development on Android I am writing a simple java code on PC to test connectivity with XMPP server.My XMPP domain is my pc name "mwbn43-1" and administrator username and passwords are admin and tigase respectively. Following is the snippet of the code I am using class Test { public static void main(String args[])throws Exception { System.setProperty("smack.debugEnabled", "true"); XMPPConnection.DEBUG_ENABLED = true; ConnectionConfiguration config = new ConnectionConfiguration("mwbn43-1", 5222); config.setCompressionEnabled(true); config.setSASLAuthenticationEnabled(true); XMPPConnection con = new XMPPConnection(config); // Connect to the server con.connect(); con.login("admin", "tigase"); Chat chat = con.getChatManager().createChat("aaphadke@mwbn43-1", new MessageListener() { public void processMessage(Chat chat, Message message) { // Print out any messages we get back to standard out. System.out.println("Received message: " + message); } }); try { chat.sendMessage("Hi!"); } catch (XMPPException e) { System.out.println("Error Delivering block"); } String host = con.getHost(); String user = con.getUser(); String id = con.getConnectionID(); int port = con.getPort(); boolean i = false; i = con.isConnected(); if (i) System.out.println("Connected to host " + host + " via port " + port + " connection id is " + id); System.out.println("User is " + user); con.disconnect(); } } When I run this code I get following error Exception in thread "main" Resource binding not offered by server: at org.jivesoftware.smack.SASLAuthentication.bindResourceAndEstablishSession(SASLAuthenticatio n.java:416) at org.jivesoftware.smack.SASLAuthentication.authenticate(SASLAuthentication.java:331) at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:395) at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:349) at Test.main(Test.java:26) I found this articles on the same problem but no concrete solution here Could anyone please tell me the solution for this problem.I checked the XMPPConnection.java file in the Smack API and it looks the same as given in the link solution. Thanks, Ameya

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no probs. I'm now trying to get this working in Azure. I can't for the life of me get it to work. This is my latest error: Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D ---> Microsoft.Window.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown. at ThrowIfFailed(UInt32 hr) at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags) at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options) Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure so it's nothing to do with the dev simulation environment disconnect. This is my code to mount the snapshot: CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size); var container = _blobStorage.GetContainerReference(containerName); var blob = container.GetPageBlobReference(driveName); CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri); string snapshotUri; try { snapshotUri = cloudDrive.Snapshot().AbsoluteUri; Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri); } catch (Exception ex) { throw new InvalidCloudDriveException(string.Format( "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.", cloudDrive.Uri.AbsoluteUri), ex); } cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri); Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive); string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None); The .Mount() method at the end is what's now failing. Please help as this has me royally stumped! Thanks in advance. Dave

  • AContext.data can be nil?

    - by waza123
    In this code, as you see on Connect, AContext.Data is filled with something TmyTThreadList = class(TThreadList) id: integer; end; var unique_id:integer; procedure TfrmTestIdTCPServer.IdTCPServerConnect(AContext: TIdContext); begin CS.Enter; try inc(unique_id); finally CS.Leave; end; AContext.Data := myTThreadList.Create; list := myTThreadList(AContext.Data).LockList; try myTThreadList(AContext.Data).id := my_unique_id; list.Add(myTThreadList(AContext.Data)); finally myTThreadList(AContext.Data).UnlockList; end; end; then on disconnect, coder is checking here for Acontext.Data < nil procedure TfrmTestIdTCPServer.IdTCPServerDisconnect(AContext: TIdContext); var begin if AContext.Data <> nil then begin The question is, why he is checking for nil ? Thanks. EDIT: I'm asking this, because when I do the same, onExecute I access AContext.Data , and sometimes (when in same time is connecting many clients) AContext.Data is empty, access violation appears.

  • DBD::SQLite::st execute failed: datatype mismatch

    - by Barton Chittenden
    Here's a snippit of perl code: sub insert_timesheet { my $dbh = shift; my $entryref = shift; my $insertme = join(',', @_); my $values_template = '?, ' x scalar(@_); chop $values_template; chop $values_template; #remove trailing comma my $insert = "INSERT INTO timesheet( $insertme ) VALUES ( $values_template );"; my $sth = $dbh->prepare($insert); debug("$insert"); my @values; foreach my $entry (@_){ push @values, $$entryref{$entry} } debug("@values"); my $rv = $sth->execute( @values ) or die $dbh->errstr; debug("sql return value: $rv"); $dbh->disconnect; } The value of $insert: [INSERT INTO timesheet( idx,Start_Time,End_Time,Project,Ticket_Number,Site,Duration,Notes ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ? );] Here are @values: [null '1270950742' '1270951642' 'asdf' 'asdf' 'adsf' 15 ''] Here's the schema of 'timesheet' timesheet( idx INTEGER PRIMARY KEY AUTOINCREMENT, Start_Time VARCHAR, End_Time VARCHAR, Duration INTEGER, Project VARCHAR, Ticket_Number VARCHAR, Site VARCHAR, Notes VARCHAR) Here's how things line up: ---- Insert Statement Schema @values ---- idx idx INTEGER PRIMARY KEY AUTOINCREMENT null: # this is not a mismatch, passing null will allow auto-increment. Start_Time Start_Time VARCHAR '1270950742' End_Time End_Time VARCHAR '1270951642' Project Project VARCHAR 'asdf' Ticket_Number Ticket_Number VARCHAR 'asdf' Site Site VARCHAR 'adsf' Duration Duration INTEGER 15 Notes Notes VARCHAR '' ... I can't see the data-type mis-match.
