Search Results

Search found 6520 results on 261 pages for 'sent'.

  • Why am I unable to telnet to a local port that has a listening service?

    - by Skip Huffman
    I suspect this is either a very simple question or a very complex one. I have a headless server running Ubuntu 10.04 that I can ssh into, and I have full root access to the system. I am trying to set up an ssh tunnel to allow me to VNC to the system (but that isn't my question). I have VNC running on port 5903; here is the netstat output for that:

        Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
        tcp   0      0      0.0.0.0:5903    0.0.0.0:*         LISTEN   7173/Xtightvnc
        tcp   0      0      0.0.0.0:22      0.0.0.0:*         LISTEN   465/sshd

    But when I try to telnet to that port, from within the same system and login, I get "unable to connect" errors:

        # telnet localhost 5903
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection timed out

    I am able to telnet to port 22 (as a verification):

        ~# telnet localhost 22
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7

    I have tried to open up any possible ports using ufw (probably in clumsy fashion):

        # ufw status numbered
        Status: active
             To     Action     From
             --     ------     ----
        [ 1] 5903   ALLOW IN   Anywhere
        [ 2] 22     ALLOW IN   Anywhere

    What else might be blocking this connection locally? Thank you.

    Edit: The only reference to port 5903 in iptables -L -n is this:

        Chain ufw-user-input (1 references)
        target   prot opt source      destination
        ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:5903
        ACCEPT   udp  --  0.0.0.0/0   0.0.0.0/0   udp dpt:5903
        ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:22
        ACCEPT   udp  --  0.0.0.0/0   0.0.0.0/0   udp dpt:22
        ACCEPT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:8080
        ACCEPT   udp  --  0.0.0.0/0   0.0.0.0/0   udp dpt:8080

    I can post the whole output if that will be useful. hosts.allow and hosts.deny both contain only comments.

    Re-edit: Some other questions pointed me to nmap, so I ran a port scan through that utility:

        # nmap -v -sT localhost -p1-65535
        Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-09 09:58 PST
        NSE: Loaded 0 scripts for scanning.
        Warning: Hostname localhost resolves to 2 IPs. Using 127.0.0.1.
        Initiating Connect Scan at 09:58
        Scanning localhost (127.0.0.1) [65535 ports]
        Discovered open port 22/tcp on 127.0.0.1
        Connect Scan Timing: About 18.56% done; ETC: 10:01 (0:02:16 remaining)
        Connect Scan Timing: About 44.35% done; ETC: 10:00 (0:01:17 remaining)
        Completed Connect Scan at 10:00, 112.36s elapsed (65535 total ports)
        Host localhost (127.0.0.1) is up (0.00s latency).
        Interesting ports on localhost (127.0.0.1):
        Not shown: 65533 filtered ports
        PORT   STATE  SERVICE
        22/tcp open   ssh
        80/tcp closed http
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 112.43 seconds
        Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

    I think this shows that 5903 is blocked somehow, which I pretty much knew. The question remains: what is blocking it, and how do I change that?

    Re-re-edit: To check Paul Lathrop's suggested answer, I first verified my IP address with ifconfig:

        eth0   Link encap:Ethernet  HWaddr 02:16:3e:42:28:8f
               inet addr:10.0.10.3  Bcast:10.0.10.255  Mask:255.255.255.0

    Then I tried to telnet to 5903 from that address:

        # telnet 10.0.10.3 5903
        Trying 10.0.10.3...
        telnet: Unable to connect to remote host: Connection timed out

    No luck.

    Re-re-re-re-edit: OK, I think I have isolated it a bit to vncserver, not the firewall, darn it. I shut off vncserver and had netcat listen on port 5903. My VNC client was then able to establish a connection and sit and wait for a response. Looks like I should be chasing a VNC problem. At least that is progress. Thanks for the help.
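
    A minimal sketch of the isolation test described in the final edit, assuming the default port and a netcat build whose -l flag takes the port directly (some builds want nc -l -p 5903):

        # Stop the VNC server first so port 5903 is free, then listen on it:
        nc -l 5903

        # From a second shell on the same machine, repeat the failing test:
        telnet localhost 5903
        # If this connects, the firewall is innocent and Xtightvnc itself
        # (how it binds or gates incoming sessions) is what rejects you.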

  • SVN Error 403 Forbidden

    - by Chris
    I can't figure this out. I try to import a new project into an SVN repository from NetBeans and get 403 Forbidden. I just set up SVN on my server box today. I can get to it through a browser just fine, though it's empty as I haven't imported my project yet. Apache's path for HTML files is /var/www; I set up the SVN repo in /var/svn. This is the structure of /var/svn:

        [root@localhost svn]# ls -lR /var/svn
        /var/svn:
        total 4
        drwxrwxrwx 7 apache apache 4096 2010-03-26 10:18 repo

        /var/svn/repo:
        total 36
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 conf
        drwxrwxrwx 3 apache apache 4096 2010-03-26 10:18 dav
        drwxrwsrwx 6 apache apache 4096 2010-03-26 11:19 db
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 format
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 hooks
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 locks
        -rwxrwxrwx 1 apache apache  229 2010-03-26 09:47 README.txt
        -rwxrwxrwx 1 apache apache   15 2010-03-26 09:47 svnauth
        -rwxrwxrwx 1 apache apache   43 2010-03-26 09:48 svnpass

        /var/svn/repo/conf:
        total 12
        -rwxrwxrwx 1 apache apache 1080 2010-03-26 09:47 authz
        -rwxrwxrwx 1 apache apache  309 2010-03-26 09:47 passwd
        -rwxrwxrwx 1 apache apache 2279 2010-03-26 09:47 svnserve.conf

        /var/svn/repo/dav:
        total 4
        drwxrwxrwx 2 apache apache 4096 2010-03-26 11:19 activities.d

        /var/svn/repo/dav/activities.d:
        total 0

        /var/svn/repo/db:
        total 48
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 current
        -rwxrwxrwx 1 apache apache   22 2010-03-26 09:47 format
        -rwxrwxrwx 1 apache apache 1920 2010-03-26 09:47 fsfs.conf
        -rwxrwxrwx 1 apache apache    5 2010-03-26 09:47 fs-type
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 min-unpacked-rev
        -rwxrwxrwx 1 apache apache 4096 2010-03-26 09:47 rep-cache.db
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revprops
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revs
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 transactions
        -rwxrwxrwx 1 apache apache    2 2010-03-26 11:19 txn-current
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 txn-current-lock
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 txn-protorevs
        -rwxrwxrwx 1 apache apache   37 2010-03-26 09:47 uuid
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 write-lock

        /var/svn/repo/db/revprops:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revprops/0:
        total 4
        -rwxrwxrwx 1 apache apache 50 2010-03-26 09:47 0

        /var/svn/repo/db/revs:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revs/0:
        total 4
        -rwxrwxrwx 1 apache apache 115 2010-03-26 09:47 0

        /var/svn/repo/db/transactions:
        total 0

        /var/svn/repo/db/txn-protorevs:
        total 0

        /var/svn/repo/hooks:
        total 36
        -rwxrwxrwx 1 apache apache 1955 2010-03-26 09:47 post-commit.tmpl
        -rwxrwxrwx 1 apache apache 1638 2010-03-26 09:47 post-lock.tmpl
        -rwxrwxrwx 1 apache apache 2267 2010-03-26 09:47 post-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 1567 2010-03-26 09:47 post-unlock.tmpl
        -rwxrwxrwx 1 apache apache 3404 2010-03-26 09:47 pre-commit.tmpl
        -rwxrwxrwx 1 apache apache 2410 2010-03-26 09:47 pre-lock.tmpl
        -rwxrwxrwx 1 apache apache 2764 2010-03-26 09:47 pre-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 2100 2010-03-26 09:47 pre-unlock.tmpl
        -rwxrwxrwx 1 apache apache 2758 2010-03-26 09:47 start-commit.tmpl

        /var/svn/repo/locks:
        total 8
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db.lock
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db-logs.lock

    I've got httpd.conf loading svn.conf, which contains:

        <Location /svn>
            DAV on
            DAV svn
            #SVNParentPath /var/svn
            SVNPath /var/svn/repo
            Authtype Basic
            AuthName "Subversion"
            AuthUserFile /var/svn/repo/svnpass
            Require valid-user
            AuthzSVNAccessFile /var/svn/repo/svnauth
        </Location>

    The full error message is:

        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to
        CHECKOUT request for '/svn/!svn/bln/0'

    Sorry for the incredibly long post, but I thought more info would be better than less. I've been fiddling with this problem for a long time now.
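
    One thing worth checking, given this setup (a guess, since the contents of the access file aren't shown): with AuthzSVNAccessFile in play, an access file that never grants the authenticated user write access produces exactly this 403 on the first write operation, even when read access through a browser works. A minimal /var/svn/repo/svnauth granting a hypothetical user full access would look like:

        # /var/svn/repo/svnauth -- illustrative contents only
        [/]
        chris = rw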

  • Invalid or expired security context token in WCF web service

    - by Damian
    All, I have a WCF web service (let's call it service "B") hosted under IIS using a service account (VM, Windows 2003 SP2). The service exposes an endpoint that uses WSHttpBinding with the default values, except for maxReceivedMessageSize, maxBufferPoolSize, maxBufferSize and some of the timeouts, which have been increased. The web service has been load tested using the Visual Studio Load Test framework with around 800 concurrent users and successfully passed all tests with no exceptions being thrown. The proxy in the unit test was created from configuration.

    There is a SharePoint application that uses the Office SharePoint Server Search service to call web services "A" and "B". The application gets data from service "A" to create a request that is then sent to service "B". The response coming from service "B" is indexed for search. The proxy is created programmatically using the ChannelFactory.

    When service "A" takes less than 10 minutes, the calls to service "B" are successful. But when service "A" takes more time (~20 minutes), the calls to service "B" throw the following exception:

        Exception Message: An unsecured or incorrectly secured fault was received from
        the other party. See the inner FaultException for the fault code and detail.

        Inner Exception Message: The message could not be processed. This is most
        likely because the action 'namespace/OperationName' is incorrect or because
        the message contains an invalid or expired security context token or because
        there is a mismatch between bindings. The security context token would be
        invalid if the service aborted the channel due to inactivity. To prevent the
        service from aborting idle sessions prematurely increase the Receive timeout
        on the service endpoint's binding.

    The binding settings are the same, and the time on both the client server and the web service server is synchronized with the Windows Time service, in the same time zone. When I look at the server where web service "B" is hosted, I can see the following security errors being logged:

        Source: Security
        Category: Logon/Logoff
        Event ID: 537
        User: NT AUTHORITY\SYSTEM
        Logon Failure:
          Reason: An error occurred during logon
          Logon Type: 3
          Logon Process: Kerberos
          Authentication Package: Kerberos
          Status code: 0xC000006D
          Substatus code: 0xC0000133

    After reading some blogs online: the status code means STATUS_LOGON_FAILURE and the substatus code means STATUS_TIME_DIFFERENCE_AT_DC. But I already checked both server and client clocks and they are synchronized.

    I also noticed that the security token seems to be cached somewhere on the client server, because they have another process that calls web service "B" using the same service account and successfully gets data the first time it is called. Then they start the process to update the Office SharePoint Server Search service indexes and it fails. Then if they call the first process again, it fails too.

    Has anyone experienced this type of problem or have any ideas?

    Regards, --Damian
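
    Following the hint in the fault text itself, a mitigation sketch (the attributes are standard wsHttpBinding settings, but the binding name and the one-hour value are illustrative): raise receiveTimeout on the service endpoint's binding past the longest expected idle gap, or sidestep the cached session token entirely by disabling the security context, so each call is secured independently and no token can expire between long-running calls:

        <wsHttpBinding>
            <binding name="longRunningBinding" receiveTimeout="01:00:00">
                <security mode="Message">
                    <message clientCredentialType="Windows"
                             establishSecurityContext="false" />
                </security>
            </binding>
        </wsHttpBinding>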

  • Using multiple distinct TCP security binding configurations in a single WCF IIS-hosted WCF service a

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet:

        WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated: the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view; when it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though: using multiple TCP bindings is what's giving me trouble. I tried setting it up like this:

        <bindings>
            <netTcpBinding>
                <binding>
                    <security mode="TransportWithMessageCredential">
                        <message clientCredentialType="UserName"/>
                    </security>
                </binding>
                <binding name="public">
                    <security mode="Transport">
                        <message clientCredentialType="Windows"/>
                    </security>
                </binding>
            </netTcpBinding>
        </bindings>

    The thing is that the two bindings don't seem to want to live together in my host. When I remove either of them, all is fine, but together they produce the following exception on the client:

        The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'.
        This could be due to mismatched bindings (for example security enabled on the
        client and not on the server).

    In the server trace log, I find the following exception:

        Protocol Type application/negotiate was sent to a service that does not
        support that type of upgrade.

    Am I looking in the right direction, or is there a better way to solve this?
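
    The snippet above defines a default (nameless) netTcpBinding configuration plus a named one, but nothing shown here ties the "public" configuration to a particular endpoint; endpoints that don't name a configuration fall back to the default one. A sketch of the missing wiring (service and contract names are placeholders):

        <services>
            <service name="MyApp.LogonService">
                <!-- the pre-logon endpoint explicitly picks the named configuration -->
                <endpoint address="" binding="netTcpBinding"
                          bindingConfiguration="public"
                          contract="MyApp.ILogonService" />
            </service>
            <service name="MyApp.Service2">
                <!-- no bindingConfiguration: uses the nameless default above -->
                <endpoint address="" binding="netTcpBinding"
                          contract="MyApp.IService2" />
            </service>
        </services>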

  • Linux - serial port read returning EAGAIN...

    - by Andre
    Hello all! I am having some trouble reading data from a serial port that I opened the following way. I've used this piece of code plenty of times and it has always worked fine, but now, for some reason I can't figure out, I am completely unable to read anything from the serial port. I am able to write, and everything is correctly received on the other end, but the replies (which are correctly sent) are never received. (No, the cables are all OK ;) ) The code I used to open the serial port is the following:

        fd = open("/dev/ttyUSB0", O_RDWR | O_NONBLOCK | O_NOCTTY);
        if (fd == -1) {
            Aviso("Unable to open port");
            return (fd);
        } else {
            /* Get the current options for the port... */
            bzero(&options, sizeof(options));   /* clear struct for new port settings */
            tcgetattr(fd, &options);

            /*-- Set baud rate ---------------------------------------------*/
            if (cfsetispeed(&options, SerialBaudInterp(BaudRate)) == -1)
                perror("On cfsetispeed:");
            if (cfsetospeed(&options, SerialBaudInterp(BaudRate)) == -1)
                perror("On cfsetospeed:");

            /* Enable the receiver and set local mode... */
            options.c_cflag |= (CLOCAL | CREAD);
            options.c_cflag &= ~PARENB;                  /* parity disabled */
            options.c_cflag &= ~CSTOPB;
            options.c_cflag &= ~CSIZE;                   /* mask the character size bits */
            options.c_cflag |= SerialDataBitsInterp(8);  /* CS8 - selects 8 data bits */
            options.c_cflag &= ~CRTSCTS;                 /* disable hardware flow control */
            options.c_iflag &= ~(IXON | IXOFF | IXANY);  /* disable XON/XOFF (transmit and receive) */
            options.c_cflag |= CRTSCTS;                  /* enable hardware flow control */
            options.c_cc[VMIN] = 0;                      /* min characters to be read */
            options.c_cc[VTIME] = 0;                     /* time to wait for data (tenths of seconds) */

            /* Set the new options for the port... */
            tcflush(fd, TCIFLUSH);
            if (tcsetattr(fd, TCSANOW, &options) == -1) {
                perror("On tcsetattr:");
            }
            PortOpen[ComPort] = fd;
        }
        return PortOpen[ComPort];

    After the port is initialized, I write some stuff to it with a simple write call:

        int nc = write(hCom, txchar, n);

    where hCom is the file descriptor (and it's OK), and (as I said) this works. But when I do a read afterwards, I get a "Resource temporarily unavailable" error from errno. I tested select to see when the file descriptor had something to read... but it always times out! I read data like this:

        ret = read(hCom, rxchar, n);

    and I always get EAGAIN, and I have no idea why. All help would be appreciated. Cheers
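
    Worth noting when debugging this: with O_NONBLOCK set and VMIN = VTIME = 0, read() returning EAGAIN only means "no byte has arrived yet", so the real question is why no data arrives at all. Notice that the final |= CRTSCTS line re-enables the hardware flow control disabled a few lines earlier, and a stuck RTS/CTS handshake is a classic way to stall a link whose handshake lines aren't wired. A small sketch (assuming fd and options are the variables from the code above) that switches to blocking reads and rules out flow control:

        #include <fcntl.h>
        #include <termios.h>

        /* Clear O_NONBLOCK so read() blocks instead of returning EAGAIN. */
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) & ~O_NONBLOCK);

        options.c_cflag &= ~CRTSCTS;   /* rule out a stuck RTS/CTS handshake */
        options.c_cc[VMIN]  = 1;       /* block until at least one byte arrives... */
        options.c_cc[VTIME] = 20;      /* ...then allow up to 2 s between bytes */
        tcsetattr(fd, TCSANOW, &options);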

  • Spring mail support - no subject

    - by Trick
    I have updated my libraries, and now e-mails are sent without a subject. I don't know where this went wrong... Mail API is 1.4.3, Spring is 2.5.6 and Spring Integration Mail is 1.0.3.RELEASE.

        <!-- Definitions for SMTP server -->
        <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
            <property name="host" value="${mail.host}" />
            <property name="username" value="${mail.username}" />
            <property name="password" value="${mail.password}" />
        </bean>

        <bean id="adminMailTemplate" class="org.springframework.mail.SimpleMailMessage">
            <property name="from" value="${mail.admin.from}" />
            <property name="to" value="${mail.admin.to}" />
            <property name="cc">
                <list>
                    <value>${mail.admin.cc1}</value>
                </list>
            </property>
        </bean>

        <!-- Mail service definition -->
        <bean id="mailService" class="net.bbb.core.service.impl.MailServiceImpl">
            <property name="sender" ref="mailSender"/>
            <property name="mail" ref="adminMailTemplate"/>
        </bean>

    The properties mail.host, mail.username, mail.password, mail.admin.from, mail.admin.to and mail.admin.cc1 are all set. The Java class:

        /** The sender. */
        private MailSender sender;

        /** The mail. */
        private SimpleMailMessage mail;

        public void sendMail() {
            this.mail.setSubject("Subject");
            this.mail.setText("msg body");
            try {
                getSender().send(this.mail);
            } catch (MailException e) {
                log.error("Error sending mail!", e);
            }
        }

        public SimpleMailMessage getMail() {
            return this.mail;
        }

        public void setMail(SimpleMailMessage mail) {
            this.mail = mail;
        }

        public MailSender getSender() {
            return this.sender;
        }

        public void setSender(MailSender mailSender1) {
            this.sender = mailSender1;
        }

    Everything worked before; I am wondering if there may be a conflict with the new libraries.
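
    One pattern worth trying (SimpleMailMessage is intended to be used as a template that is copied per send, and its copy constructor is part of the class; whether this is the actual regression here is a guess): build a fresh message from the injected template on every call instead of mutating the shared bean:

        public void sendMail() {
            // copy the template so the shared, injected bean is never mutated
            SimpleMailMessage msg = new SimpleMailMessage(getMail());
            msg.setSubject("Subject");
            msg.setText("msg body");
            try {
                getSender().send(msg);
            } catch (MailException e) {
                log.error("Error sending mail!", e);
            }
        }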

  • iPhone 3DES encryption key length issue

    - by Russell Hill
    Hi, I have been banging my head on a wall with this one. I need to code my iPhone application to encrypt a 4-digit PIN using 3DES in ECB mode for transmission to a web service which I believe is written in .NET.

        + (NSData *)TripleDESEncryptWithKey:(NSString *)key dataToEncrypt:(NSData *)encryptData
        {
            NSLog(@"kCCKeySize3DES=%d", kCCKeySize3DES);
            char keyBuffer[kCCKeySize3DES + 1];   // room for terminator (unused)
            bzero(keyBuffer, sizeof(keyBuffer));  // fill with zeroes (for padding)
            [key getCString:keyBuffer maxLength:sizeof(keyBuffer) encoding:NSUTF8StringEncoding];

            size_t numBytesEncrypted = 0;
            size_t returnLength = ([encryptData length] + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
            char *returnBuffer = malloc(returnLength * sizeof(uint8_t));

            CCCryptorStatus ccStatus = CCCrypt(kCCEncrypt, kCCAlgorithm3DES, kCCOptionECBMode,
                                               keyBuffer, kCCKeySize3DES, nil,
                                               [encryptData bytes], [encryptData length],
                                               returnBuffer, returnLength, &numBytesEncrypted);

            if (ccStatus == kCCParamError) NSLog(@"PARAM ERROR");
            else if (ccStatus == kCCBufferTooSmall) NSLog(@"BUFFER TOO SMALL");
            else if (ccStatus == kCCMemoryFailure) NSLog(@"MEMORY FAILURE");
            else if (ccStatus == kCCAlignmentError) NSLog(@"ALIGNMENT");
            else if (ccStatus == kCCDecodeError) NSLog(@"DECODE ERROR");
            else if (ccStatus == kCCUnimplemented) NSLog(@"UNIMPLEMENTED");

            if (ccStatus == kCCSuccess) {
                NSLog(@"TripleDESEncryptWithKey encrypted: %@",
                      [NSData dataWithBytes:returnBuffer length:numBytesEncrypted]);
                return [NSData dataWithBytes:returnBuffer length:numBytesEncrypted];
            }
            return nil;
        }

    I do get a value encrypted using the above code; however, it does not match the value from the .NET web service. I believe the issue is that the encryption key I have been supplied by the web service developers is 48 characters long, while the iPhone SDK constant kCCKeySize3DES is 24. So I SUSPECT, but don't know, that the CommonCrypto call is only using the first 24 characters of the supplied key. Is this correct? Is there ANY way I can get this to generate the correct encrypted PIN?

    I have output the data bytes from the encryption PRIOR to base64-encoding them and have attempted to match them against those generated by the .NET code (with the help of a .NET developer who sent the byte array output to me). Neither the non-base64-encoded byte arrays nor the final base64-encoded strings match.
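
    A guess worth testing, sketched below: a 48-character key that drives 3DES on the .NET side is often a hex string encoding exactly 24 raw bytes (the full 3DES key size), in which case the key must be hex-decoded rather than copied as UTF-8 characters. The decoding uses standard Foundation calls; whether the key really is hex-encoded is the assumption.

        // Decode a hex key string into kCCKeySize3DES (24) raw bytes.
        NSMutableData *keyData = [NSMutableData dataWithCapacity:kCCKeySize3DES];
        for (NSUInteger i = 0; i + 1 < [key length]; i += 2) {
            unsigned int byteValue = 0;
            NSString *pair = [key substringWithRange:NSMakeRange(i, 2)];
            [[NSScanner scannerWithString:pair] scanHexInt:&byteValue];
            uint8_t b = (uint8_t)byteValue;
            [keyData appendBytes:&b length:1];
        }
        // Then hand [keyData bytes] (length kCCKeySize3DES) to CCCrypt
        // in place of keyBuffer.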

  • Problems with transitionWithView and animateWithDuration

    - by MusicMathTech
    I have problems with transitionWithView and animateWithDuration. One of my animateWithDuration blocks doesn't animate — the change is sudden — and transitionWithView does not temporarily disable user interaction. I have checked the docs and believe I am doing everything correctly, but obviously something is wrong. Here are the two blocks of code.

    This is in my main view controller, ViewController, which has three container views/child view controllers. This block moves one of the container views, but does not block the user from other interactions in ViewController while the transition is occurring:

        [UIView transitionWithView:self.view
                          duration:0.5
                           options:UIViewAnimationOptionCurveEaseOut
                        animations:^{
            CGRect frame = _containerView.frame;
            frame.origin.y = self.view.frame.size.height - _containerView.frame.size.height;
            _containerView.frame = frame;
        } completion:^(BOOL finished) {
            // do something
        }];

    This is in one of my container view controllers. The animation seems to have no effect, as the text of productTitleLabel and productDescriptionTextView changes suddenly, as if the animation block did not exist:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            [self.viewController toggleFlavoredOliveOilsTableView];
            if (indexPath.row > 0) {
                NSDictionary *selectedCellDict = [[_flavoredOliveOilsDict objectForKey:@"Unflavored Olive Oils"] objectAtIndex:indexPath.row - 1];
                [UIView animateWithDuration:0.5
                                      delay:0
                                    options:UIViewAnimationOptionTransitionCrossDissolve
                                 animations:^{
                    self.viewController.productTitleLabel.text = [_flavoredOliveOilsTableView cellForRowAtIndexPath:indexPath].textLabel.text;
                    self.viewController.productDescriptionTextView.text = [selectedCellDict objectForKey:@"Description"];
                } completion:nil];
                if (indexPath.row == 1) {
                    [self.viewController setProductDescriptionTextViewFrameForInformationTab];
                } else {
                    [self.viewController setProductDescriptionTextViewFrameForNonInformationTab];
                    //self.viewController.productImageView.image = [UIImage imageNamed:[selectedCellDict objectForKey:@"Image"]];
                }
            }
        }

    I think the problems are somewhat related, as most of my animation and transition blocks don't work completely as expected.

    Edit: What I am trying to accomplish is moving a container view in ViewController and setting the text and image properties of a label, a text view, and an image view, all of which are in the main view. The details of these properties are sent via the child view controller. The transitionWithView is in a method called toggleFlavoredOliveOilsTableView, which is called in didSelectRowAtIndexPath. I think the problem is that I am trying to call two different animation/transition blocks at the same time.

    Thanks for any help.
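
    One detail that would explain the second block (hedged, but it matches the symptom exactly): text is not an animatable property, so animateWithDuration: applies it instantly, and UIViewAnimationOptionTransitionCrossDissolve is only honored by the transition API. A sketch of cross-fading the label with transitionWithView: instead (newTitle stands in for the cell's text):

        [UIView transitionWithView:self.viewController.productTitleLabel
                          duration:0.5
                           options:UIViewAnimationOptionTransitionCrossDissolve
                        animations:^{
            self.viewController.productTitleLabel.text = newTitle;
        } completion:nil];

    For the first block, if every touch must be blocked while the container moves, the explicit route is [[UIApplication sharedApplication] beginIgnoringInteractionEvents] before the animation and a matching endIgnoringInteractionEvents in the completion block.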

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server-based component to periodically access POP3 email boxes and retrieve emails. The service then needs to process the emails, which will involve:

    - Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.)
    - Analysing and saving any attachments to disk
    - Taking the email body and attachment details and creating a new item in the database
    - Or updating an existing item where the reference matches the incoming email subject line

    What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc., ready for my processing...

    [UPDATE: Reviews]

    OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries, so I thought I'd provide a short review of some of those mentioned below and a few others:

    - Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straightforward - probably a good introduction.
    - Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs; I wouldn't bother with this.
    - C#Mail - free - works well, comes with a MIME parser and an SMTP client; however, the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor, after which it worked with no problem.
    - OpenPOP - free - hasn't been updated for about 5 years, so its current state is .NET 1.0. It doesn't support SSL, but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with a MIME parser.

    Of the free libraries, I'd go for C#Mail or OpenPOP. I looked at a few commercial libraries: Chilkat, Rebex, RemObjects, JMail.net. Based on features, price and my impression of the company, I would probably go for Rebex, and I may in the future if my requirements change or I run into production issues with either C#Mail or OpenPOP.

    In case anyone needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP:

        SslStream stream = new SslStream(clientSocket.GetStream(), false,
            delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
            {
                return true;
            });

  • James Server connection failure exception

    - by John
    Hi, I'm trying to connect a JavaMail application to a James server, but I'm getting:

        javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 4555;
          nested exception is:
            java.net.SocketException: Invalid argument: connect

    Here's the code, which is creating a little problem for me:

        import java.security.Security;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.util.Properties;

        import javax.mail.*;
        import javax.mail.internet.*;

        public class mail {
            public static void main(String[] argts) {
                Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider());
                String mailHost = "your.smtp.server";
                String to = "blue@localhost";
                String from = "red@localhost";
                String subject = "jdk";
                String body = "Down to wind";
                if ((from != null) && (to != null) && (subject != null) && (body != null)) {
                    // we have mail to send
                    try {
                        // Get system properties
                        Properties props = System.getProperties();
                        props.put("mail.smtp.user", "red");
                        props.put("mail.smtp.host", "localhost");
                        props.put("mail.debug", "true");
                        props.put("mail.smtp.port", 4555);
                        props.put("mail.smtp.socketFactory.port", 4555);
                        props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
                        props.put("mail.smtp.socketFactory.fallback", "false");

                        Session session = Session.getInstance(props, null);
                        Message message = new MimeMessage(session);
                        message.setFrom(new InternetAddress(from));
                        message.setRecipients(Message.RecipientType.TO,
                                new InternetAddress[] { new InternetAddress(to) });
                        message.setSubject(subject);
                        message.setContent(body, "text/plain");
                        message.setText(body);
                        Transport.send(message);
                        System.out.println("<b>Thank you. Your message to " + to
                                + " was successfully sent.</b>");
                    } catch (Throwable t) {
                        System.out.println("Error: " + t);
                    }
                }
            }
        }

    Thanks in advance. :)
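
    A configuration detail that may explain the failure (assuming a default James 2.x install): 4555 is James's remote-administration telnet port, while its SMTP service listens on 25, and plain SMTP on 25 does not speak SSL. A sketch of the property changes to try first:

        props.put("mail.smtp.host", "localhost");
        props.put("mail.smtp.port", "25");   // James's default SMTP port
        // drop the mail.smtp.socketFactory.* entries unless James has been
        // explicitly configured for SMTP over SSL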

  • UML assignment question

    - by waitinforatrain
    Hi guys, sorry, I know this is a very lame question to ask and not of any use to anyone else. I have an assignment in UML due tomorrow and I don't even know the basics (all-nighter ahead!). I'm not looking for a walkthrough; I simply want your opinion on something. The assignment is as follows (you only need to skim over it!):

    =============

    Gourmet Surprise (GS) is a small catering firm with five employees. During a typical weekend, GS caters fifteen events with twenty to fifty people each. The business has grown rapidly over the past year, and the owner wants to install a new computer system for managing the ordering and buying process. GS has a set of ten standard menus. When potential customers call, the receptionist describes the menus to them. If the customer decides to book an event (dinner, lunch, picnic, finger food, etc.), the receptionist records the customer information (e.g., name, address, phone number) and the information about the event (e.g., place, date, time, which one of the standard menus, total price) on a contract. The customer is then faxed a copy of the contract and must sign and return it along with a deposit (often by credit card or check) before the event is officially booked. The remaining money is collected when the catering is delivered.

    Sometimes the customer wants something special (e.g., a birthday cake). In this case, the receptionist takes the information and gives it to the owner, who determines the cost; the receptionist then calls the customer back with the price information. Sometimes the customer accepts the price; other times, the customer requests some changes, which have to go back to the owner for a new cost estimate.

    Each week, the owner looks through the events scheduled for that weekend and orders the supplies (e.g., plates) and food (e.g., bread, chicken) needed to make them. The owner would like to use the system for marketing as well. It should be able to track how customers learned about GS and identify repeat customers, so that GS can mail special offers to them. The owner also wants to track the events for which GS sent a contract but the customer never signed it and never actually booked with GS.

    Exercise: Create an activity diagram and a use case model (complete with a set of detailed use case descriptions) for the above system. Produce an initial domain model (class diagram) based on these descriptions. Elaborate the use cases into sequence diagrams, and include any state diagrams necessary. Finally, use the information from these dynamic models to expand the domain model into a full application model.

    =============

    In your opinion, do you think this question is asking me to come up with a package for an online ordering system to replace the system described above, or to create UML diagrams that facilitate the existing telephone-based system?

  • How to get BinarySecurityToken into the WCF SOAP request

    - by Mr Bell
    I need to sign my SOAP request to a third party. They provided an example of what the call should look like, and I am trying, rather unsuccessfully, to make this call with WCF. I need to make a WCF SOAP call where the header contains BinarySecurityToken, Signature, and SecurityTokenReference elements. Here is the example they sent me (with some of the values omitted). I have a certificate for signing, but I can't for the life of me figure out how to make this work:

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <wsse:BinarySecurityToken
                  EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                  ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                  wsu:Id="SecurityToken-..omitted.."
                  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">..omitted..</wsse:BinarySecurityToken>
              <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:SignedInfo>
                  <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                  <ds:Reference URI="#Body">
                    <ds:Transforms>
                      <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </ds:Transforms>
                    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <ds:DigestValue>..omitted..</ds:DigestValue>
                  </ds:Reference>
                </ds:SignedInfo>
                <ds:SignatureValue>..omitted..</ds:SignatureValue>
                <ds:KeyInfo>
                  <wsse:SecurityTokenReference>
                    <wsse:Reference URI="#SecurityToken-..omitted.."
                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/>
                  </wsse:SecurityTokenReference>
                </ds:KeyInfo>
              </ds:Signature>
            </wsse:Security>
          </soapenv:Header>
          <soapenv:Body wsu:Id="Body">
            <in0 xmlns="http://test.3rdParty.com">123</in0>
          </soapenv:Body>
        </soapenv:Envelope>
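
    A direction that may help (this is the standard WCF route to an X.509-signed message, not necessarily an exact match for this third party's profile): configure certificate message credentials and let WCF emit the BinarySecurityToken/Signature/SecurityTokenReference headers itself. The service contract, endpoint URL and certificate subject below are placeholders:

        var binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportWithMessageCredential);
        binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.Certificate;

        var factory = new ChannelFactory<IThirdPartyService>(
            binding, new EndpointAddress("https://test.3rdParty.com/service"));
        factory.Credentials.ClientCertificate.SetCertificate(
            StoreLocation.CurrentUser, StoreName.My,
            X509FindType.FindBySubjectName, "my-signing-cert");

        IThirdPartyService channel = factory.CreateChannel();

    If the partner requires the exact token layout shown above (for example, the exclusive-C14N signature over #Body only), a custom binding built around an AsymmetricSecurityBindingElement gives finer control over what gets signed.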

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of self-tracking entities that are returned from service calls. The idea is that we want to be able to get entities, add, update and remove them from the collection client-side, and then send these changes to the server side, where they will be persisted to the database. Self-tracking entities, as their name might suggest, track their state themselves.

    When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state. It can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior, you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along and Entity Framework knows to delete them. Something along the lines of:

        protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections[collection.GetHashCode()] = deletedEntities;

            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

    Now if the user decides to save his changes to the server, I can get the list of removed items and send them along:

        ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
        customers.RemoveAt(0);
        MyServiceProxy.UpdateCustomers(customers);

    At this point the UpdateCustomers method will check my shadow collection for any removed items and send them along to the server side.

    This approach works fine until you start to think about the life cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected, there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection, and every few seconds I check whether the reference is inactive, in which case I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of Stack Overflow can shed light on a better one. Thanks!
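
    One way to make the shadow list's lifetime track the collection's automatically (a sketch; requires .NET 4): key the shadow lists with ConditionalWeakTable, which holds its keys weakly, so each entry dies with its ObservableCollection and no polling or manual cleanup is needed. It also avoids the collision risk of keying a dictionary on GetHashCode() as above.

        using System.Collections;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Linq;
        using System.Runtime.CompilerServices;

        ConditionalWeakTable<IList, IList> deletedTable = new ConditionalWeakTable<IList, IList>();

        void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            deletedTable.Add(collection, deletedEntities);   // entry is collected with 'collection'
            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
            };
        }

        // At save time:
        IList removed;
        if (deletedTable.TryGetValue(customers, out removed))
        {
            // send 'removed' to the server along with the collection
        }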

  • getting autodiscover URL from Exchange email address

    - by Anthony
    I'm starting with an address for an Exchange 2007 server:

        [email protected]

    and I attempted to send an autodiscover request, as documented on MSDN, using the generic autodiscover address documented in the TechNet white paper. So, using curl from PHP, I sent the following request:

        <Autodiscover xmlns="http://schemas.microsoft.com/exchange/autodiscover/outlook/requestschema/2006">
          <Request>
            <EMailAddress>[email protected]</EMailAddress>
            <AcceptableResponseSchema>
              http://schemas.microsoft.com/exchange/autodiscover/outlook/responseschema/2006a
            </AcceptableResponseSchema>
          </Request>
        </Autodiscover>

    to the following URL:

        https://domain.exchangeserver.org/autodiscover/autodiscover.xml

    but got no response, just an eventual timeout. I also tried:

        https://autodiscover.domain.exchangeserver.org/autodiscover/autodiscover.xml

    with the same result. Now, since my larger goal is to use autodiscover with Exchange Web Services, and since all of the EWS URLs start with the same sub-domain as the Outlook Web Access address, I thought I'd give that a try.

    OWA:

        https://wmail.domain.exchangeserver.org

    So I tried:

        https://wmail.domain.exchangeserver.org/autodiscover/autodiscover.xml

    and sure enough, I got back the expected response. However, I only knew the OWA sub-domain because it's the server I have access to and am using to test everything. I would not know it for sure, or be able to guess it, if this were a live app and the user were entering their own Exchange email address.

    I know that whatever generic autodiscover settings exist must be turned on, because I can enter [email protected] into Apple Mail on Snow Leopard and it finds everything without trouble.

    So the question is: should https://domain.exchangeserver.org/autodiscover/autodiscover.xml have worked, and I just missed a step when trying to connect to it? Or is there some trick (maybe involving pinging the email address?) that Apple Mail and other clients use to resolve the address to the OWA subdomain before sending the autodiscover request? Thanks to anyone who knows or can take a wild guess.
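
    One piece of the clients' fallback chain that fits this symptom (standard Exchange autodiscover behavior; the domain below is a placeholder): after the two well-known URLs fail, clients look up a DNS SRV record that names the real autodiscover host, which is frequently the OWA name. A quick check from a shell:

        # Ask DNS where autodiscover actually lives for the domain
        dig +short SRV _autodiscover._tcp.exchangeserver.org
        # A reply like "0 0 443 wmail.domain.exchangeserver.org." would explain
        # why Apple Mail finds the OWA host without being told.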

  • WCF ReliableSession and Timeouts

    - by user80108
    I have a WCF service used mainly for managing documents in a repository. I used the chunking channel sample from Microsoft so that I could upload/download huge files. I have now implemented reliable sessions on the service and am seeing some strange behavior. Here are the timeout values I am using:

        this.SendTimeout    = new TimeSpan(0, 10, 0);
        this.OpenTimeout    = new TimeSpan(0, 1, 0);
        this.CloseTimeout   = new TimeSpan(0, 1, 0);
        this.ReceiveTimeout = new TimeSpan(0, 10, 0);
        reliableBe.InactivityTimeout = new TimeSpan(0, 2, 0);

    I have the following issues.

    1. If the service is not up and running, clients do not get disconnected after the OpenTimeout. I tried it with my test client.

    Scenario 1, without a reliable session, I get the following exception:

        Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1.
        The connection attempt lasted for a time span of 00:00:00.9848790.
        TCP error code 10061: No connection could be made because the target
        machine actively refused it 127.0.0.1:8788

    This is the correct behavior, given the OpenTimeout I set.

    Scenario 2, with a reliable session, I get the same exception:

        Could not connect to net.tcp://localhost:8788/MediaManagementService/ep1.
        The connection attempt lasted for a time span of 00:00:00.9692460.
        TCP error code 10061: No connection could be made because the target
        machine actively refused it 127.0.0.1:8788.

    But this message comes after around 10 minutes (I believe after the SendTimeout). So here I have just enabled the reliable session, and now it looks as if OpenTimeout = SendTimeout for the client. Is this the desired behavior?

    2. Issue while uploading huge files with a reliable session: the general rule is that you have to set a huge value for maxReceivedMessageSize, SendTimeout and ReceiveTimeout. But in the case of the chunking channel, maxReceivedMessageSize doesn't matter, as the data is sent in chunks. So I set a huge value for the send and receive timeouts, say 10 hours. Now the upload goes fine, but it has the side effect that, even if the service is not up, it takes 10 hours to time out the client connection, due to the behavior described in (1).

    Please let me know your thoughts on this behavior.
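
    A possible way out of the 10-hour side effect (a sketch; OperationTimeout is a standard IContextChannel property, while the service contract and method names here are placeholders): keep the binding's timeouts short so that failed opens surface quickly, and stretch only the per-call timeout on the channel that performs the long upload:

        // Binding keeps short Open/Send timeouts so a dead service fails fast.
        IMediaManagementService channel = factory.CreateChannel();
        ((IContextChannel)channel).OperationTimeout = TimeSpan.FromHours(10); // long calls only
        channel.UploadChunk(chunk);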

  • Where to stop/destroy threads in Android Service class?

    - by niko
    Hi, I have created a threaded service the following way:

        public class TCPClientService extends Service {
            ...
            @Override
            public void onCreate() {
                ...
                Measurements = new LinkedList<String>();
                enableDataSending();
            }

            @Override
            public IBinder onBind(Intent intent) {
                // TODO: Replace with service binding implementation
                return null;
            }

            @Override
            public void onLowMemory() {
                Measurements.clear();
                super.onLowMemory();
            }

            @Override
            public void onDestroy() {
                Measurements.clear();
                super.onDestroy();
                try {
                    SendDataThread.stop();
                } catch (Exception e) {
                }
            }

            private Runnable backgrounSendData = new Runnable() {
                public void run() {
                    doSendData();
                }
            };

            private void enableDataSending() {
                SendDataThread = new Thread(null, backgrounSendData, "send_data");
                SendDataThread.start();
            }

            private void addMeasurementToQueue() {
                if (Measurements.size() <= 100) {
                    String measurement = packData();
                    Measurements.add(measurement);
                }
            }

            private void doSendData() {
                while (true) {
                    try {
                        if (Measurements.isEmpty()) {
                            Thread.sleep(1000);
                            continue;
                        }

                        //Log.d("TCP", "C: Connecting...");
                        Socket socket = new Socket();
                        socket.setTcpNoDelay(true);
                        socket.connect(new InetSocketAddress(serverAddress, portNumber), 3000);
                        //socket.connect(new InetSocketAddress(serverAddress, portNumber));
                        if (!socket.isConnected()) {
                            throw new Exception("Server Unavailable!");
                        }
                        try {
                            //Log.d("TCP", "C: Sending: '" + message + "'");
                            PrintWriter out = new PrintWriter(new BufferedWriter(
                                    new OutputStreamWriter(socket.getOutputStream())), true);
                            String message = Measurements.remove();
                            out.println(message);
                            Thread.sleep(200);
                            Log.d("TCP", "C: Sent.");
                            Log.d("TCP", "C: Done.");
                            connectionAvailable = true;
                        } catch (Exception e) {
                            Log.e("TCP", "S: Error", e);
                            connectionAvailable = false;
                        } finally {
                            socket.close();
                            announceNetworkAvailability(connectionAvailable);
                        }
                    } catch (Exception e) {
                        Log.e("TCP", "C: Error", e);
                        connectionAvailable = false;
                        announceNetworkAvailability(connectionAvailable);
                    }
                }
            }
        }

    After I close the application, the phone works really slowly, and I guess this is due to a thread termination failure. Does anyone know what the best way is to terminate all threads before terminating the application?
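
    A sketch of the usual cooperative-shutdown pattern (the flag name is illustrative): Thread.stop() is deprecated and routinely fails to kill a thread that is sleeping or connecting, which matches the symptom here. Let the loop observe a volatile flag and interrupt the thread so sleep() wakes up:

        private volatile boolean running = true;

        @Override
        public void onDestroy() {
            Measurements.clear();
            running = false;               // loop condition below goes false
            SendDataThread.interrupt();    // breaks out of Thread.sleep(...)
            super.onDestroy();
        }

        private void doSendData() {
            while (running) {              // instead of while (true)
                ...
            }
        }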

  • EXC_BAD_ACCESS error on a UITableView with ARC?

    - by Arnaud Drizard
    I am building a simple app with a custom tab bar which loads the content of the view controller from a UITableView located in another view controller. However, every time I try to scroll the table view, I get an EXC_BAD_ACCESS error. I enabled NSZombies and Guard Malloc to get more info on the issue. In the console I get:

        message sent to deallocated instance 0x19182f20

    and after profiling I get:

        #   Address     Category                   Event Type  RefCt  Timestamp      Size  Responsible Library  Responsible Caller
        56  0x19182f20  FirstTabBarViewController  Zombie      -1     00:16.613.309  0     UIKit                -[UIScrollView(UIScrollViewInternal) _scrollViewWillBeginDragging]

    Here is a bit of the code of the view controller in which the error occurs.

    .h file:

        #import <UIKit/UIKit.h>
        #import "DataController.h"

        @interface FirstTabBarViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> {
            IBOutlet UITableView *tabBarTable;
        }

        @property (strong, nonatomic) IBOutlet UIView *mainView;
        @property (strong, nonatomic) IBOutlet UITableView *tabBarTable;
        @property (nonatomic, strong) DataController *messageDataController;

        @end

    .m file:

        #import "FirstTabBarViewController.h"
        #import "DataController.h"

        @interface FirstTabBarViewController ()
        @end

        @implementation FirstTabBarViewController

        @synthesize tabBarTable = _tabBarTable;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
        {
            self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
            if (self) {
                // Custom initialization
            }
            return self;
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
            // Dispose of any resources that can be recreated.
        }

        - (void)awakeFromNib
        {
            [super awakeFromNib];
            self.messageDataController = [[DataController alloc] init];
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
        {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
        {
            return [self.messageDataController countOfList];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CellIdentifier = @"mainCell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:CellIdentifier];
            }
            NSString *expenseAtIndex = [self.messageDataController objectInListAtIndex:indexPath.row];
            [[cell textLabel] setText:expenseAtIndex];
            return cell;
        }

        - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
        {
            // Return NO if you do not want the specified item to be editable.
            return NO;
        }

        @end

    This FirstTabBarViewController is loaded in the MainViewController with the following custom segue:

        #import "customTabBarSegue.h"
        #import "MainViewController.h"

        @implementation customTabBarSegue

        - (void)perform
        {
            MainViewController *src = (MainViewController *)[self sourceViewController];
            UIViewController *dst = (UIViewController *)[self destinationViewController];
            for (UIView *view in src.placeholderView.subviews) {
                [view removeFromSuperview];
            }
            src.currentViewController = dst;
            [src.placeholderView addSubview:dst.view];
        }

        @end

    The DataController class is just a simple NSMutableArray containing strings. I am using ARC, so I don't understand where the memory-management error comes from. Does anybody have a clue? Any help much appreciated ;) Thanks!!
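
    The zombie being FirstTabBarViewController itself suggests the controller is deallocated while its view (and the scrolling table that calls back into it as delegate/data source) stays on screen; under ARC, nothing in the segue shown retains the destination controller unless currentViewController happens to be a strong property. A sketch of the iOS 5+ containment calls that keep it alive and correctly parented (the frame line is an assumption about the desired layout):

        - (void)perform
        {
            MainViewController *src = (MainViewController *)[self sourceViewController];
            UIViewController *dst = (UIViewController *)[self destinationViewController];

            [src addChildViewController:dst];          // retains dst while it is embedded
            dst.view.frame = src.placeholderView.bounds;
            [src.placeholderView addSubview:dst.view];
            [dst didMoveToParentViewController:src];
        }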

  • Detecting which UIButton was pressed in a UITableView

    - by rein
    Hi, I have a UITableView with 5 UITableViewCells. Each cell contains a UIButton which is set up as follows:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSString *identifier = @"identifier";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:identifier];
                [cell autorelease];
                UIButton *button = [[UIButton alloc] initWithFrame:CGRectMake(10, 5, 40, 20)];
                [button addTarget:self action:@selector(buttonPressedAction:)
                 forControlEvents:UIControlEventTouchUpInside];
                [button setTag:1];
                [cell.contentView addSubview:button];
                [button release];
            }

            UIButton *button = (UIButton *)[cell viewWithTag:1];
            [button setTitle:@"Edit" forState:UIControlStateNormal];

            return cell;
        }

    My question is this: in the buttonPressedAction: method, how do I know which button has been pressed? I've considered using tags, but I'm not sure that's the best route. I'd like to be able to somehow tag the indexPath onto the control:

        - (void)buttonPressedAction:(id)sender
        {
            UIButton *button = (UIButton *)sender;
            // how do I know which button sent this message?
            // processing the button press for this row requires an indexPath.
        }

    What's the standard way of doing this?

    Edit: I've kind of solved it by doing the following, but I would still like an opinion on whether this is the standard way of doing it, or whether there is a better way:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSString *identifier = @"identifier";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:identifier];
                [cell autorelease];
                UIButton *button = [[UIButton alloc] initWithFrame:CGRectMake(10, 5, 40, 20)];
                [button addTarget:self action:@selector(buttonPressedAction:)
                 forControlEvents:UIControlEventTouchUpInside];
                [cell.contentView addSubview:button];
                [button release];
            }

            UIButton *button = (UIButton *)[cell.contentView.subviews objectAtIndex:0];
            [button setTag:indexPath.row];
            [button setTitle:@"Edit" forState:UIControlStateNormal];

            return cell;
        }

        - (void)buttonPressedAction:(id)sender
        {
            UIButton *button = (UIButton *)sender;
            int row = button.tag;
        }

    What's important to note is that I can't set the tag when the cell is created, since the cell might be dequeued instead. It feels very dirty. There must be a better way.
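
    A common alternative that survives cell reuse, row insertion and deletion without any tag bookkeeping (a sketch; it assumes the controller has access to the table view, e.g. a tableView property or outlet): convert the button's position into an index path at tap time.

        - (void)buttonPressedAction:(id)sender
        {
            UIButton *button = (UIButton *)sender;
            // Where is the button, in the table view's coordinate space?
            CGPoint point = [button convertPoint:CGPointZero toView:self.tableView];
            NSIndexPath *indexPath = [self.tableView indexPathForRowAtPoint:point];
            // indexPath.row identifies the row whose button was tapped
        }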

  • Using GraphicsServices.h/GSEvent as well as compiling CLI iPhone tools with XCode

    - by Peter Hajas
    I sent this to KennyTM (who has all the private framework headers on GitHub), but I figured I'd ask here too, just in case someone has some good ideas or any way to help me out.

    I'm trying to write a command-line utility that sends GSEvents to operate the keyboard, touch/drag elements onscreen, and operate hardware buttons (volume, home, sleep, etc.). I grabbed the MouseSupport code and tried to look through it, but I couldn't find the easiest way to send GSEvents. I'm hoping someone here can help me.

    First, what's the simplest way to declare a GSEvent and send it? I looked at the iPhone development wiki, but the documentation was very vague. I understand that there's a purple event port (?) that I have to send these events to, but I don't understand how to do that. Could someone offer examples for, say, touching at a coordinate, typing a certain key, or pressing a hardware button?

    Also, do I have to write or do anything special if I want this utility to operate all applications as well as SpringBoard? I don't know if this is a special case because I want it at the OS level. Ideally, I would SSH into the phone, start the program, and it would send GSEvents that would be handled by whatever application was open.

    As far as compiling this code goes: is there any way to do so under Xcode? I don't know what sort of project template I should use (if any), and this is throwing me off. I don't need "build and go" support; I'm more than happy to scp the program over to the phone. I understand that compiling the code is also feasible on the phone. I have all of the headers from the SDK on my phone along with iphone-gcc, but when compiling some test programs I still get errors about not finding Mach headers and CoreFoundation. Is there an easier way to do this?

    Lastly, are there other guides or pieces of literature that anyone can point me towards for learning more about this? I'm excited to get into open iPhone development (I have experience with the official SDK, but I want to go deeper). Thanks for any and all help people can offer!

  • Can't see why JavaMail doesn't work in a Spring MVC application on Tomcat

    - by Kartoch
    I have written a small web application using Spring MVC. It runs on Tomcat and uses the JavaMail library. At the present time I can't find out why the mails are not sent, and I have no pertinent log messages in the Tomcat log files to find where the problem is. My logging is configured in a log4j.properties file in the root of my CLASSPATH:

        log4j.rootLogger=DEBUG, CONSOLE
        log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
        log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
        log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

    How can I see the JavaMail log messages in debug mode? I think it is related to the JDK logging system (as I'm using Sun's JavaMail), but I don't know how to configure it.

    Edit: Well, one problem solved and another arises. I changed my bean definition to include debug support for JavaMail:

        mail.debug=true

    But still no pertinent info in it:

        DEBUG: JavaMail version 1.4.1ea-SNAPSHOT
        DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers
        DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers (No such file or directory)
        DEBUG: !anyLoaded
        DEBUG: not loading resource: /META-INF/javamail.providers
        DEBUG: successfully loaded resource: /META-INF/javamail.default.providers
        DEBUG: Tables of loaded providers
        DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]}
        DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]}
        DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map
        DEBUG: !anyLoaded
        DEBUG: not loading resource: /META-INF/javamail.address.map
        DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map
        DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map (No such file or directory)
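
    For completeness, a sketch of where mail.debug typically goes when the sender is Spring's JavaMailSenderImpl (javaMailProperties is the standard property for this; note the debug stream is written to System.out — catalina.out on Tomcat — not through log4j):

        <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
            <property name="host" value="${mail.host}"/>
            <property name="javaMailProperties">
                <props>
                    <prop key="mail.debug">true</prop>
                </props>
            </property>
        </bean>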

  • How to send audio stream via UDP in java?

    - by Nob Venoda
    Hi to all :) I have a problem. I have set a MediaLocator to the microphone input and then created a Player. I need to grab the sound from the microphone, encode it to some lower-quality stream, and send it as datagram packets via UDP. Here's the code; I found most of it online and adapted it to my app:

        public class AudioSender extends Thread {

            private MediaLocator ml = new MediaLocator("javasound://44100");
            private DatagramSocket socket;
            private boolean transmitting;
            private Player player;
            TargetDataLine mic;
            byte[] buffer;
            private AudioFormat format;

            private DatagramSocket datagramSocket() {
                try {
                    return new DatagramSocket();
                } catch (SocketException ex) {
                    return null;
                }
            }

            private void startMic() {
                try {
                    format = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED,
                            8000.0F, 16, 2, 4, 8000.0F, true);
                    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
                    mic = (TargetDataLine) AudioSystem.getLine(info);
                    mic.open(format);
                    mic.start();
                    buffer = new byte[1024];
                } catch (LineUnavailableException ex) {
                    Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex);
                }
            }

            private Player createPlayer() {
                try {
                    return Manager.createRealizedPlayer(ml);
                } catch (IOException ex) {
                    return null;
                } catch (NoPlayerException ex) {
                    return null;
                } catch (CannotRealizeException ex) {
                    return null;
                }
            }

            private void send() {
                try {
                    mic.read(buffer, 0, 1024);
                    DatagramPacket packet = new DatagramPacket(
                            buffer, buffer.length,
                            InetAddress.getByName(Util.getRemoteIP()), 91);
                    socket.send(packet);
                } catch (IOException ex) {
                    Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex);
                }
            }

            @Override
            public void run() {
                player = createPlayer();
                player.start();
                socket = datagramSocket();
                transmitting = true;
                startMic();
                while (transmitting) {
                    send();
                }
            }

            public static void main(String[] args) {
                AudioSender as = new AudioSender();
                as.start();
            }
        }

    The only thing that happens when I run the receiver class is that I hear the Player from the sender class, and I can't seem to see the connection between the TargetDataLine and the Player. Basically, I need to get the sound from the player and somehow convert it to byte[] so I can send it as a datagram. Any ideas? Everything is acceptable, as long as it works :)
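
    One observation that may untangle the two APIs (hedged, but it follows from what the code above actually does): the TargetDataLine already delivers the raw capture bytes, so nothing needs to be pulled out of the Player; the JMF Player is what renders the microphone to the speakers, which is exactly the symptom described. A sketch of send() without the Player, shipping only the bytes actually read:

        private void send() {
            try {
                int n = mic.read(buffer, 0, buffer.length);   // bytes actually captured
                if (n > 0) {
                    DatagramPacket packet = new DatagramPacket(
                            buffer, n,                        // send n, not buffer.length
                            InetAddress.getByName(Util.getRemoteIP()), 91);
                    socket.send(packet);
                }
            } catch (IOException ex) {
                Logger.getLogger(AudioSender.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

    With that in place, createPlayer() and player.start() can simply be dropped from run().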

    Read the article

  • How to convert an NSArray of NSManagedObjects to NSData

    - by carok
    Hi, I'm new to Objective-C and was wondering if anyone can help me. I am using Core Data with a SQLite database to hold simple profile objects, which have a name and a score attribute (both of type NSString). What I want to do is fetch the profiles and store them in an NSData object; please see my code below:

    NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
    NSEntityDescription *entity = [NSEntityDescription entityForName:@"GamerProfile"
                                              inManagedObjectContext:managedObjectContext];
    [fetchRequest setEntity:entity];

    NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Name" ascending:YES];
    NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
    [fetchRequest setSortDescriptors:sortDescriptors];
    [sortDescriptor release];
    [sortDescriptors release];

    NSError *error;
    NSArray *items = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error];
    NSData *data = [NSKeyedArchiver archivedDataWithRootObject:items];
    [self SendData:data];
    [fetchRequest release];

    When I run the code I get the error "Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[GamerProfile encodeWithCoder:]: unrecognized selector sent to instance 0x3f4b530'". I presume I have to add an encodeWithCoder: method to my Core Data NSManagedObject subclass (GamerProfile), but I'm not sure how to do this even after reading the documentation. My attempt is below; I'm not sure if I'm on the right track, as I get a warning stating "'NSManagedObject' may not respond to '-encodeWithCoder:'". I would really, really appreciate it if someone could point me in the right direction!! Thanks, C

    Here is the code for my GamerProfile (Core Data NSManagedObject subclass) with my attempt at adding an encodeWithCoder: method...

    Header file:

    #import <CoreData/CoreData.h>

    @interface GamerProfile : NSManagedObject <NSCoding> {
    }

    @property (nonatomic, retain) NSString *GamerScore;
    @property (nonatomic, retain) NSString *Name;

    - (void)encodeWithCoder:(NSCoder *)coder;

    @end

    Implementation file:

    #import "GamerProfile.h"

    @implementation GamerProfile

    @dynamic GamerScore;
    @dynamic Name;

    - (void)encodeWithCoder:(NSCoder *)coder
    {
        // NSManagedObject does not adopt NSCoding, so there is no
        // superclass implementation to call here; and @dynamic
        // properties have no backing ivars, so the attributes must
        // be read through the accessors.
        [coder encodeObject:self.GamerScore forKey:@"GamerScore"];
        [coder encodeObject:self.Name forKey:@"Name"];
    }

    @end

    Read the article

  • Lots of questions about file I/O (reading/writing message strings)

    - by Nazgulled
    Hi, for this university project I'm doing (for which I've made a couple of posts in the past), which is some sort of social network, users must be able to exchange messages. At first I designed my data structures to hold ALL messages in a linked list, limiting the message size to 256 chars. However, I think my instructors would prefer me to save the messages on disk and read them only when I need them. Of course, they won't say what they prefer; I need to make a choice and justify it as best I can. One thing to keep in mind is that I only need to save the latest 20 messages from each user, no more.

    Right now I have a hash table that acts as an inbox, stored inside the user profile. The hash table is indexed by name (the user that sent the message). The value for each element is a data structure holding an array of size_t with 20 elements (the 20 messages mentioned above). The idea is to keep track of the disk file offsets and bytes written. Then, when I need to read a message, I just use fseek() and read the necessary bytes. I think this could work nicely...

    I could use a single file to hold all messages from all users in the network. I'm saying one single file because a colleague asked an instructor about saving the messages of each user independently, and he replied that it might not be the best approach because the file system has its limits. That's why I'm thinking of going the single-file route.

    However, this presents a problem... Since I only need to save the latest 20 messages, I need to discard the older ones when I reach this limit, and I have no idea how to do this. All I know about is fread() and fwrite() to read/write bytes from/to files. How can I go to a file offset and say "hey, delete the following X bytes"? Even if I could do that, there's another problem: every offset past that point would shift, and I would have to process every user's mailbox to fix them. Which would be a pain...

    So, any suggestions to solve my problems? What do you suggest?
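    One way to sidestep in-place deletion entirely is a fixed-slot ring buffer: give each user 20 fixed-size slots in the single shared file and let a wrapping write counter overwrite the oldest slot. Here is a minimal sketch of the idea in Java; the same seek-and-overwrite pattern maps directly onto fseek()/fwrite() in C, and every name and size in it is an illustrative assumption rather than part of the project.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class MailboxFile {
        static final int SLOTS = 20;       // the "latest 20 messages" requirement
        static final int SLOT_SIZE = 256;  // fixed message size from the original design

        private final RandomAccessFile file;

        public MailboxFile(String path) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
        }

        // userIndex identifies the user's region of the shared file; writeCount is
        // how many messages that user has ever received (kept in the profile).
        public void writeMessage(int userIndex, long writeCount, byte[] msg) throws IOException {
            long base = (long) userIndex * SLOTS * SLOT_SIZE;
            long slot = writeCount % SLOTS;          // message 21 overwrites slot 0
            byte[] record = new byte[SLOT_SIZE];     // pad (or truncate) to the fixed size
            System.arraycopy(msg, 0, record, 0, Math.min(msg.length, SLOT_SIZE));
            file.seek(base + slot * SLOT_SIZE);
            file.write(record);
        }

        public byte[] readMessage(int userIndex, int slot) throws IOException {
            byte[] record = new byte[SLOT_SIZE];
            file.seek((long) userIndex * SLOTS * SLOT_SIZE + (long) slot * SLOT_SIZE);
            file.readFully(record);
            return record;
        }
    }

    Because every slot has a fixed size and a computable offset, nothing ever needs to be deleted or shifted, and the per-user offsets tracked in the hash table stay valid for the life of the file.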

    Read the article

  • C#.NET Socket Programming: Connecting to remote computers.

    - by Gio Borje
    I have a typical server on my end and a friend using a client to connect to my IP/port, and he consistently receives the exception: "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond {MY_IP}:{MY_PORT}" (you don't need to know my IP). The client and server, however, work fine on the loopback address (127.0.0.1). I also do not have any firewall, nor is Windows Firewall active.

    Server:

    static void Main(string[] args)
    {
        Console.Title = "Socket Server";
        Console.WriteLine("Listening for messages...");

        Socket serverSock = new Socket(
            AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        IPAddress serverIP = IPAddress.Any;
        IPEndPoint serverEP = new IPEndPoint(serverIP, 33367);

        // Declared but never demanded or applied anywhere below
        SocketPermission perm = new SocketPermission(
            NetworkAccess.Accept, TransportType.Tcp, "98.112.235.18", 33367);

        serverSock.Bind(serverEP);
        serverSock.Listen(10);

        while (true)
        {
            Socket connection = serverSock.Accept();
            Byte[] serverBuffer = new Byte[8];
            String message = String.Empty;

            // Available can be 0 immediately after Accept(), before any
            // data has arrived, so this loop may read nothing
            while (connection.Available > 0)
            {
                int bytes = connection.Receive(serverBuffer, serverBuffer.Length, 0);
                message += Encoding.UTF8.GetString(serverBuffer, 0, bytes);
            }

            Console.WriteLine(message);
            connection.Close();
        }
    }

    Client:

    static void Main(string[] args)
    {
        // Design the client a bit
        Console.Title = "Socket Client";
        Console.Write("Enter the IP of the server: ");
        IPAddress clientIP = IPAddress.Parse(Console.ReadLine());
        String message = String.Empty;

        while (true)
        {
            Console.Write("Enter the message to send: ");
            message = Console.ReadLine(); // the message to send

            IPEndPoint clientEP = new IPEndPoint(clientIP, 33367);

            // Set up the socket
            Socket clientSock = new Socket(
                AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

            // Attempt to establish a connection to the server
            Console.Write("Establishing connection to the server... ");
            try
            {
                clientSock.Connect(clientEP);

                // Send the message
                clientSock.Send(Encoding.UTF8.GetBytes(message));
                clientSock.Shutdown(SocketShutdown.Both);
                clientSock.Close();
                Console.Write("Message sent successfully.\n\n");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }

    Read the article
