Search Results

Search found 9208 results on 369 pages for 'mail archive'.


  • How to deal with overly aggressive "Link Take Down Demands"?

    - by Eoin
    I've been receiving a large number of emails recently requesting I clean up link spam on my forum. Initially the emails were very polite and professional, and I was happy to remove the links. Recently the emails have gotten very abrasive; here is a particularly rude example: From: [email protected] To: [email protected] Hi, This is the second time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We really do need to remove this link. We have to report to Google any link we were unable to remove, and I wouldn't want to have to include your site in the list. Could you please remove our link from this page and any other page on your site? Thank You, Name Changed Behind the superficial pleasantries I feel there is some very real maliciousness. Note the email address: DMCA Violations. I don't see how the DMCA is involved here, except as a term which tends to strike fear into many people. Also relating to the email address, it doesn't match the company being linked to at all. How am I to trust they are truly operating on behalf of company-two when they don't even use one of its email addresses? My email is hidden by privacypost. While a service with legitimate uses, I feel it's highly unprofessional for communications between two companies. The claim "This is the second time...": every email I've received has started like this, but a check of my spam filters has never revealed a first mail. Initially I gave them the benefit of the doubt; by now, though, it's clear this is a cheap ploy to start me off on the defensive. And finally, worst of all: the threats of reporting me to Google if I don't do everything they ask. I sent a polite reply asking for more information. I have no idea if the email address was even valid, but I never received any response. Much later I got this follow-up mail: From: [email protected] To: [email protected] Hi, This is the final time we are reaching out to you regarding your link to our site hxxp://www.company-two.com from hxxp://www.my-forum.com/some-topic-id. We will soon be reporting to Google any link we were unable to remove, and currently your site will have to be on the list. Could you please remove our link from this page and any other page on your site? I appreciate your urgent attention to this matter. Thank You, Name Changed This time the from address was more personal, though still not obviously connected to the spammed company. Let's be honest, I don't for one second believe that the companies were the victim of a 3rd-party spammer as they claim. The links in question were generated well over a year ago, and I firmly believe the companies were directly responsible for the spam links, a type of spam that has plagued my forum. Now they have the audacity to demand I spend my time cleaning up their mess, using threats to ensure they get their way. Have recent changes in Google's algorithms meant all the cash they spent spamming the web has now turned into a liability? If so I can see why these companies are all of a sudden running scared. Frankly, cleaning up my forum is a good thing, but the threats they are using sicken me. So my question here is specifically about the threats: Are they valid, and would such reports to Google destroy my page rankings? Is there a way I can report this abusive behaviour to Google?

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line of business applications to external partners and SaaS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications. When we were looking at the operational monitoring side of the solution it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service would allow us to make a call which would flex some of the working parts of the system to prove it was working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx In our solution we wanted to take the same approach but we had to consider that the service clients were external to the service. We also had to consider that when going through the Windows Azure Service Bus, most standard monitoring solutions don't give you an easy way to do this. In a previous article I have described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our service bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx   The Monitoring Solution: BizTalk 360 currently has an easy way to hook up the endpoint manager to a URL which it will then call; if a successful response is returned it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.NET web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.NET page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.   The diagnostics service which is hosted in the line of business application allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application and get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code depending on the error condition returned, or a 200 if it was successful.
One of the limitations of this approach is that the competing consumer pattern for listening to messages from service bus means that you cannot guarantee which server would process your diagnostics check message but with BizTalk 360 you could simply add multiple endpoint checks so that it could access the individual on-premise web servers directly to ensure that each server is working fine and then check that messages can also be processed through the cloud. Conclusion It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services which had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise class monitoring solution for our cloud enabled API.
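    To make the health-check page described above concrete, here is a rough sketch of the pattern: call the diagnostics service, time the call, and map the outcome to an HTTP status that BizTalk 360's endpoint check can interpret. The original solution used an ASP.NET page; this version is written as a Java servlet purely for illustration, DiagnosticsCall is a hypothetical stand-in for the real Service Bus relay client, and the 5-second threshold is an assumed SLA value.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ServiceBusHealthCheckServlet extends HttpServlet {

        // Stand-in for the real call through the Azure Service Bus relay to the
        // LOB diagnostics service. Hypothetical: replace with the actual client proxy.
        interface DiagnosticsCall {
            boolean ping() throws Exception;
        }

        private DiagnosticsCall diagnostics = () -> true; // placeholder wiring
        private static final long MAX_ACCEPTABLE_MILLIS = 5000; // assumed SLA threshold

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            long start = System.currentTimeMillis();
            boolean healthy;
            try {
                healthy = diagnostics.ping(); // flexes the working parts of the LOB service
            } catch (Exception e) {
                healthy = false;
            }
            long elapsed = System.currentTimeMillis() - start;

            if (healthy && elapsed <= MAX_ACCEPTABLE_MILLIS) {
                // the monitoring tool treats a successful (200) response as healthy
                resp.setStatus(HttpServletResponse.SC_OK);
                resp.getWriter().write("Diagnostics OK in " + elapsed + " ms");
            } else if (healthy) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                        "Diagnostics succeeded but took " + elapsed + " ms");
            } else {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
                        "Diagnostics call through the Service Bus relay failed");
            }
        }
    }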

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it.  The resulting script was amazingly succinct and even building the flexibility for parameterization and adding line breaks for readability was only about 25 lines long.  Here's the code with discussion following: param(     [Parameter(         Mandatory = $false,         Position = 0,         HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"     )]     [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",       [Parameter(         Mandatory = $false,         Position = 1     )]     [int] $days = 1 ) dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{     [string]$year=$([string]$_.lastwritetime.year)     [string]$month=$_.lastwritetime.month     [string]$day=$_.lastwritetime.day     $dir=$folderRoot+$year+"\"+$month+"\"+$day     if(!(test-path $dir)){         new-item -type container $dir     }     Write-output $_     move-item $_.fullname $dir } The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and of those, only entries that meet the age requirement based on the last modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files need further divided, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.  
In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.

    Read the article

  • Ask the Readers: How Many Monitors Do You Use with Your Computer?

    - by Asian Angel
    Most people have a single monitor for their computers, many have two, and some individuals enjoy “3 monitor plus” goodness. This week we would like to know how many monitors you use with your computer. Photo by DamnedNice. A good majority of people have a single monitor that they use with their computers and that single monitor serves their needs very well. It could be that these individuals do not engage in a heavy amount of work or play on their computers…they just need to do the basics like checking e-mail, using I.M., working with photos, etc. Another possibility is the use of virtual desktop software such as Dexpot, Yodm 3D, or Sysinternals Desktops on Windows systems. Linux systems such as Ubuntu already have that wonderful multi-desktop functionality built in. The wonderful part about virtual desktops is that a single monitor can feel equivalent to a small army of monitors. The ability to separate your open windows into “categories” and spread them out across multiple desktops is definitely nice. With each passing year dual monitor setups are becoming more common. Having twice the screen real-estate visible at the same time can be extremely convenient when you are multi-tasking. Perhaps you like to monitor your system’s stats and an e-mail account on the second monitor while working with software on the first. It certainly beats having windows popping up and down on your screen constantly while keeping on top of everything! Next we have the people who have three or more monitors in use with their computers. This may be a result of the type of work they do, an experiment to see if multiple monitors are right for them, or the cool, geeky factor that comes with having all those monitors. Needless to say these individuals can induce a good amount of envy and/or inspiration in the rest of us when we see their awesome setups. Are you perfectly content with a single monitor? Do you have two or more monitors that you use? If you have two or more monitors are they actually that useful to you? Perhaps you are getting ready even now to add additional monitors to your system. Whatever your situation may be at the moment, let us know your thoughts (and possible multi-monitor plans) in the comments!

    Read the article

  • Webcast On-Demand: Building Java EE Apps That Scale

    - by jeckels
    With some awesome work by one of our architects, Randy Stafford, we recently completed a webcast on scaling Java EE apps efficiently. Did you miss it? No problem. We have a replay available on-demand for you. Just hit the '+' sign drop-down for access. Topics include: domain object caching, service response caching, session state caching, JSR-107, HotCache, and more! Further, we had several interesting questions asked by our audience, and we thought we'd share a sampling of those here for you - just in case you had the same queries yourself. Enjoy! What is the largest Coherence deployment out there? We have seen deployments with over 500 JVMs in the Coherence cluster, and deployments with over 1000 JVMs using the Coherence jar file, in one system. On the management side there is an ecosystem of monitoring tools from Oracle and third parties with dashboards graphing values from Coherence's JMX instrumentation. For lifecycle management we have seen a lot of custom scripting over the years, but we've also integrated closely with WebLogic to leverage its management ecosystem for deploying Coherence-based applications and managing process life cycles. That integration introduces a new Java EE archive type, the Grid Archive or GAR, which embeds in an EAR and can be seen by a WAR in WebLogic. That integration also doesn't require any extra WebLogic licensing if Coherence is licensed. How is Coherence different from a NoSQL Database like MongoDB? Coherence can be considered a NoSQL technology. It pre-dates the NoSQL movement, having been first released in 2001 whereas the term "NoSQL" was coined in 2009. Coherence has a key-value data model primarily but can also be used for document data models. Coherence manages data in memory currently, though disk persistence is in a future release currently in beta testing. Where the data is managed yields a few differences from the most well-known NoSQL products: access latency is faster with Coherence, though well-known NoSQL databases can manage more data. Coherence also has features that well-known NoSQL databases lack, such as grid computing, eventing, and data source integration. Finally, Coherence has had 15 years of maturation and hardening from usage in mission-critical systems across a variety of industries, particularly financial services. Can I use Coherence for local caching? Yes, you get additional features beyond just a java.util.Map: you get expiration capabilities, size-limitation capabilities, eventing capabilities, etc. Are there APIs available for GoldenGate HotCache? It's mostly a black box. You configure it, and it just puts objects into your caches. However you can treat it as a glass box, and use Coherence event interceptors to enhance its behavior - and there are use cases for that. Are Coherence caches updated transactionally? Coherence provides several mechanisms for concurrency control. If a project insists on full-blown JTA / XA distributed transactions, Coherence caches can participate as resources. But nobody does that because it's a performance and scalability anti-pattern. At finer granularity, Coherence guarantees strict ordering of all operations (reads and writes) against a single cache key if the operations are done using Coherence's "EntryProcessor" feature. And Coherence has a unique feature called "partition-level transactions" which guarantees atomic writes of multiple cache entries (even in different caches) without requiring JTA / XA distributed transaction semantics.
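    To make the "EntryProcessor" feature mentioned in that last answer concrete, here is a minimal sketch. It assumes the Coherence jar is on the classpath; the cache name, key, and balance arithmetic are invented for illustration, and a production processor would normally use POF/PortableObject serialization rather than plain java.io.Serializable.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // Atomic read-modify-write against a single cache key: the "strict ordering"
    // case described above. Serializable keeps the sketch short; real processors
    // are usually made portable via POF.
    public class DebitProcessor extends AbstractProcessor implements java.io.Serializable {
        private final int amount;

        public DebitProcessor(int amount) {
            this.amount = amount;
        }

        @Override
        public Object process(InvocableMap.Entry entry) {
            Integer balance = (Integer) entry.getValue();
            int newBalance = (balance == null ? 0 : balance) - amount;
            entry.setValue(newBalance); // applied atomically for this key
            return newBalance;
        }

        public static void main(String[] args) {
            NamedCache accounts = CacheFactory.getCache("accounts"); // invented cache name
            accounts.put("acct-1", 100);
            Object result = accounts.invoke("acct-1", new DebitProcessor(30));
            System.out.println("New balance: " + result); // expect 70
        }
    }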

    Read the article

  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspxThere were only a few post on this and I felt like really important information was left out: What the defaults are How to create the filter string Here’s the code to create the subscription. Get the Collection public TfsTeamProjectCollection GetCollection(string collectionUrl) { try { //connect to the TFS collection using the active user TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl)); tpc.EnsureAuthenticated(); return tpc; } catch (Exception) { return null; } } Use Impersonation Because my app is used to create “support tickets” as stories in TFS, I use impersonation so the subscription is setup for the “requester.”  That way I can take all the defaults for the subscription delivery preferences. public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount) { // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx try { TfsTeamProjectCollection tpc = GetCollection(collectionUrl); if (!(tpc == null)) { //get the TFS identity management service (v2 is 2012 only) IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>(); //look up the user we want to impersonate TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None); //create a new connection using the impersonated user account //note: do not ensure authentication because the impersonated user may not have //windows authentication at execution if (!(identity == null)) { TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor); return itpc; } else { //the user account is not found return null; } } else { return null; } } catch (Exception) { return null; } } Create the Alert Subscription public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount) { bool setSuccessful = false; try { //use impersonation so the event service creating the subscription will default to //the correct account: otherwise domain ambiguity could be a problem TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount); if (!(itpc == null)) { IEventService es = itpc.GetService(typeof(IEventService)) as IEventService; DeliveryPreference deliveryPreference = new DeliveryPreference(); //deliveryPreference.Address = emailAddress; deliveryPreference.Schedule = DeliverySchedule.Immediate; deliveryPreference.Type = DeliveryType.EmailHtml; //the following line does not work for two reasons: //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId); //1. the create fails because there is a space between Authorized As //2. 
the explicit query criteria are all incorrect anyway // see uncommented line for what does work: you have to create the subscription mannually // and then get it to view what the filter string needs to be (see following commented code) //this works string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" + " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" + " <> '@@MyDisplayName@@'", projectName, wiId); string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId); es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName); ////use this code to get existing subscriptions: you can look at manually created ////subscriptions to see what the filter string needs to be //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>(); //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, // userAccount, // MembershipQuery.None, // ReadIdentityOptions.None); //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor); setSuccessful = true; return setSuccessful; } else { return setSuccessful; } } catch (Exception) { return setSuccessful; } }

    Read the article

  • Specifying the bounce-back address for email

    - by Kirk Broadhurst
    I'm having a problem getting emails to bounce to a specific email address, different from the From address. A particular client requires that we send emails from a specific email address (call it [email protected]). Our Exchange admins have created an account on the Exchange box so that we can log in and send from that address. Our Exchange server is spoofing that address / domain. This works fine. Unfortunately the emails sent from [email protected] are not bouncing back to us. They are presumably bouncing back to the contact account at clientcompany.com (which may or may not exist). I've inserted a Return-Path header ([email protected]) on the assumption that this field determines where bouncebacks are sent. Other documents indicate that this field should never be populated by the originating SMTP system. Other websites again talk about a field called Errors-To, which is apparently non-standard. So - which field is the correct one, and what does it depend on? Any ideas why my Return-Path is not working? I'd really like to get Exchange to correctly bounce a message addressed to an invalid server! Update: Continuing to dig, I found that my Return-Path work was only adding an extended property at the end of the header block, but Exchange appears to still be adding its own Return-Path value at the top. Delivered-To: [email protected] Received: by 1.1.1.1 with SMTP ... Return-Path: <[email protected]> Received: from ... ... ... Subject: Test Message-ID: ... Return-Path: [email protected] According to Microsoft.com, I cannot set the Return-Path as it is determined by the MAIL FROM - which seems consistent with what I've previously read. But now I'm stuck - how do I change this MAIL FROM value programmatically within Exchange 2007?
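    This doesn't answer the Exchange 2007 part, but as an illustration of the Return-Path / MAIL FROM relationship, here is a minimal JavaMail sketch: the envelope sender (set here via the mail.smtp.from session property) is what the receiving server records as Return-Path, independently of the visible From: header. The host and addresses below are placeholders.

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class BounceAddressDemo {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com");     // placeholder relay
            // Envelope sender (SMTP MAIL FROM); the receiving server writes this
            // into Return-Path, so bounces go here rather than to the From: header.
            props.put("mail.smtp.from", "bounces@example.com");

            Session session = Session.getInstance(props);
            MimeMessage msg = new MimeMessage(session);
            msg.setFrom(new InternetAddress("contact@clientcompany.example")); // visible From: header
            msg.setRecipients(Message.RecipientType.TO, "someone@example.org");
            msg.setSubject("Bounce handling test");
            msg.setText("Testing where bounces end up.");

            Transport.send(msg);
        }
    }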

    Read the article

  • jQuery: AJAX umlauts & special characters are a mess

    - by rayne
    I've just created my first AJAX function with jQuery which actually works, but unfortunately the character encoding (for characters like ä, ö, ü, ß, c, c, å, ø) is a nightmare. My files and my database are all UTF-8. I've tried a multitude of options in the AJAX function and the PHP function, none of which were satisfactory. This is my AJAX call: var dataString = { 'name': name, 'mail': mail // other stuff } $.ajax({ type: "POST", url: "/post.php", data: dataString, contentType: "application/x-www-form-urlencoded;charset=UTF-8", cache: false, success: function(html){ // do stuff } I've tried it without contentType: "application/x-www-form-urlencoded;charset=UTF-8" and I've tried to wrap the affected data in encodeURIComponent(), none of which worked. When I use that AJAX with htmlentities() in my PHP, my umlauts look like this in plain text: UE Ã?, AE Ã?, OE Ã?, ue ü, ae ä, oe o And like this in the database: UE Ãœ , AE Ä, OE Ö, ue ü, ae ä, oe o If I don't use htmlentities() but mysql_real_escape_string() instead (or neither), they look good in plain text, but they look like this in the database: AE Ä, OE Ö, UE Ãœ, ae ä oe ö ue ü I've been trying tons of options for hours now, but I can't find a solution that works. So far the only option I seem to have is having them look like a total mess in the database, but that would be very counterproductive if those data sets need to be edited.

    Read the article

  • JQuery Validate: only takes the last addMethod?

    - by Neuquino
    Hi, I need to add multiple custom validations to one form. I have 2 definitions of addMethod. But it only takes the last one... here is the code. $(document).ready(function() { $.validator.addMethod("badSelectionB",function(){ var comboValues = []; for(var i=0;i<6;i++){ var id="comision_B_"+(i+1); var comboValue=document.getElementById(id).value; if($.inArray(comboValue,comboValues) == -1){ comboValues.push(comboValue); }else{ return false; } } return true; },"Seleccione una única prioridad por comisión."); $.validator.addMethod("badSelectionA",function(){ var comboValues = []; for(var i=0;i<6;i++){ var id="comision_A_"+(i+1); var comboValue=document.getElementById(id).value; if($.inArray(comboValue,comboValues) == -1){ comboValues.push(comboValue); }else{ return false; } } return true; },"Seleccione una única prioridad por comisión."); $("#inscripcionForm").validate( { rules : { nombre : "required", apellido : "required", dni : { required: true, digits: true, }, mail : { required : true, email : true, }, comision_A_6: { badSelectionA:true, }, comision_B_6: { badSelectionB: true, } }, messages : { nombre : "Ingrese su nombre.", apellido : "Ingrese su apellido.", dni : { required: "Ingrese su dni.", digits: "Ingrese solo números.", }, mail : { required : "Ingrese su correo electrónico.", email: "El correo electrónico ingresado no es válido." } }, }); }); Do you have any clue of what is happening? Thanks in advance,

    Read the article

  • Indy 10 - IdSMTP.Send() hangs when sending messages from GMail account

    - by LukLed
    I am trying to send an e-mail using gmail account (Delphi 7, Indy 10) with these settings: TIdSmtp: Port = 587; UseTLS := utUseExplicitTLS; TIdSSLIOHandlerSocketOpenSSL: SSLOptions.Method := sslvTLSv1; Everything seems to be set ok. I get this response: Resolving hostname smtp.gmail.com. Connecting to 74.125.77.109. SSL status: "before/connect initialization" SSL status: "before/connect initialization" SSL status: "SSLv3 write client hello A" SSL status: "SSLv3 read server hello A" SSL status: "SSLv3 read server certificate A" SSL status: "SSLv3 read server done A" SSL status: "SSLv3 write client key exchange A" SSL status: "SSLv3 write change cipher spec A" SSL status: "SSLv3 write finished A" SSL status: "SSLv3 flush data" SSL status: "SSLv3 read finished A" SSL status: "SSL negotiation finished successfully" SSL status: "SSL negotiation finished successfully" Cipher: name = RC4-MD5; description = RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 ; bits = 128; version = TLSv1/SSLv3; And then it hangs and doesn't finish. E-mail is not sent. What can be the problem?

    Read the article

  • Getting Error System.Runtime.InteropServices.COMException

    - by Savan Parmar
    Hey All, I am using Vb.Net to Create Labels in Microsoft. For that i am using below mantioend Code. Public Sub CreateLabel(ByVal StrFilter As String, ByVal Path As String) WordApp = CreateObject("Word.Application") ''Add a new document. WordDoc = WordApp.Documents.Add() Dim oConn As SqlConnection = New SqlConnection(connSTR) oConn.Open() Dim oCmd As SqlCommand Dim oDR As SqlDataReader oCmd = New SqlCommand(StrFilter, oConn) oDR = oCmd.ExecuteReader Dim intI As Integer Dim FilePath As String = "" With WordDoc.MailMerge With .Fields Do While oDR.Read For intI = 0 To oDR.FieldCount - 1 .Add(WordApp.Selection.Range, oDR.Item(intI)) Next Loop End With Dim objAutoText As Word.AutoTextEntry = WordApp.NormalTemplate.AutoTextEntries.Add("MyLabelLayout", WordDoc.Content) WordDoc.Content.Delete() .MainDocumentType = Word.WdMailMergeMainDocType.wdMailingLabels FilePath = CreateSource(StrFilter) .OpenDataSource(FilePath) Dim NewLabel As Word.CustomLabel = WordApp.MailingLabel.CustomLabels.Add("MyLabel", False) WordApp.MailingLabel.CreateNewDocument(Name:="MyLabel", Address:="", AutoText:="MyLabelLayout") objAutoText.Delete() .Destination = Word.WdMailMergeDestination.wdSendToNewDocument WordApp.Visible = True .Execute() End With oConn.Close() WordDoc.Close() End Sub Private Function CreateSource(ByVal StrFilter As String) As String Dim CnnUser As SqlConnection = New SqlConnection(connSTR) Dim sw As StreamWriter = File.CreateText("C:\Mail.Txt") Dim Path As String = "C:\Mail.Txt" Dim StrHeader As String = "" Try Dim SelectCMD As SqlCommand = New SqlCommand(StrFilter, CnnUser) Dim oDR As SqlDataReader Dim IntI As Integer SelectCMD.CommandType = CommandType.Text CnnUser.Open() oDR = SelectCMD.ExecuteReader For IntI = 0 To oDR.FieldCount - 1 StrHeader &= oDR.GetName(IntI) & " ," Next StrHeader = Mid(StrHeader, 1, Len(StrHeader) - 2) sw.WriteLine(StrHeader) sw.Flush() sw.Close() StrHeader = "" Do While oDR.Read For IntJ As Integer = 0 To oDR.FieldCount - 1 StrHeader &= oDR.GetString(IntJ) & " ," Next Loop StrHeader = Mid(StrHeader, 1, Len(StrHeader) - 2) sw = File.AppendText(Path) sw.WriteLine(StrHeader) CnnUser.Close() sw.Flush() sw.Close() Catch ex As Exception MessageBox.Show(ex.Message, "TempID", MessageBoxButtons.OK, MessageBoxIcon.Error) End Try Return Path End Function Now when i am running the programm i am getting this error.I tried hard but not able to locate what could be the problem the error is:- System.Runtime.InteropServices.COMException --Horizontal and vertical pitch must be greater than or equal to the label width and height, respectively. Even though i tried to set the Horizontal and vertical pitch programatically but it gives same err. Plz if any one can help

    Read the article

  • Some JPEG images are not working in IE

    - by jwandborg
    Hello, I have a press archive. The press archive displays automatically created thumbnails as links to a PDF document. This is what I get in IE 6, 7 & 8 (it works fine in Chrome). The thumbnails are automatically created by ImageMagick: $cmd = 'convert ' . $_FILES['file']['tmp_name'] # This is a PDF file . '[0]' # This indicates that it is the first page that should be converted . ' -resize "120x120" ' # This is the size of the thumbnail . $thumb_path; # This is the destination $resize_output = exec($cmd); A command can look like this: convert /tmp/AcXDYe[0] -resize "120x120>" /var/www[...] However, I looked a little closer at the images, and the failing ones seem to be a little different; this is a common theme among all the failing images. Image that doesn't work in IE: http://regex.info/exif.cgi?url=http://bit.ly/aFTL3T Image that works fine: http://regex.info/exif.cgi?url=http://bit.ly/b71B7R So, can I change my ImageMagick command so that it creates IE-compatible JPEGs?

    Read the article

  • Starting Java applet directly from jar file

    - by Thomas
    The goal is to have an applet run from a jar file. The problem is that the applet only seems to want to run from an exploded jar file. Samples on the Internet suggest this applet tag: <applet code="com.blabla.MainApplet" archive="applet.jar" width="600" height="600"> This will not even try to look in the jar file and fails with: Caused by: java.io.IOException: open HTTP connection failed:http://localhost:8080/helloWord/com/blabbla/MainApplet.class at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) ... 7 more Setting the codebase instead of the archive attribute to the jar file. Looks a bit better. However, the JVM does not realize that it has to open the jar file: <applet code="com.blabla.MainApplet" codebase="applet.jar" width="600" height="600"> Caused by: java.io.IOException: open HTTP connection failed:http://localhost:8080/helloWord/applet.jar/com/blabbla/MainApplet.class at sun.plugin2.applet.Applet2ClassLoader.getBytes(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader.access$000(Unknown Source) at sun.plugin2.applet.Applet2ClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) ... 7 more How does the applet tag have to be formulated to start an applet class from inside of a jar file?

    Read the article

  • UIImageJPEGRepresentation - memory release issue

    - by FredM
    On an iPhone app, I need to send a jpg by mail with a maximum size of 300Ko (I don't know the maximum size Mail.app can handle, but that's another problem). To do that, I'm trying to decrease the quality until I obtain an image under 300Ko. In order to obtain the quality value (compressionLevel) that gives me a jpg under 300Ko, I have made the following loop. It's working, but each time the loop is executed, the memory grows by the original size of my jpg (700Ko) despite the "[tmpImage release];". float compressionLevel = 1.0f; int size = 300001; while (size > 300000) { UIImage *tmpImage =[[UIImage alloc] initWithContentsOfFile:[self fullDocumentsPathForTheFile:@"countOpix_imageToAnalyse.jpg"]]; size = [UIImageJPEGRepresentation(tmpImage, compressionLevel) length]; [tmpImage release]; //In the following line, the 0.001f decrement is chosen just to test the increase of the memory //compressionLevel = compressionLevel - 0.001f; NSLog(@"Compression: %f",compressionLevel); } Any ideas about how I can get rid of it, or why it happens? Thanks

    Read the article

  • Adding new record to a VFP data table in VB.NET with ADO recordsets

    - by Gerry
    I am trying to add a new record to a Visual FoxPro data table using an ADO dataset with no luck. The code runs fine with no exceptions but when I check the dbf after the fact there is no new record. The mDataPath variable shown in the code snippet is the path to the .dbc file for the entire database. A note about the For loop at the bottom; I am adding the body of incoming emails to this MEMO field so thought I needed to break the addition of this string into 256 character Chunks. Any guidance would be greatly appreciated. cnn1.Open("Driver={Microsoft Visual FoxPro Driver};" & _ "SourceType=DBC;" & _ "SourceDB=" & mDataPath & ";Exclusive=No") Dim RS As ADODB.RecordsetS = New ADODB.Recordset RS.Open("select * from gennote", cnn1, 1, 3, 1) RS.AddNew() 'Assign values to the first three fields RS.Fields("ignnoteid").Value = NextIDI RS.Fields("cnotetitle").Value = "'" & mail.Subject & "'" RS.Fields("cfilename").Value = "''" 'Looping through 254 characters at a time and add the data 'to Ado Field buffer For i As Integer = 1 To Len(memo) Step liChunkSize liStartAt = i liWorkString = Mid(mail.Body, liStartAt, liChunkSize) RS.Fields("mnote").AppendChunk(liWorkString) Next 'Update the recordset RS.Update() RS.Requery() RS.Close()

    Read the article

  • Bash scripting problem

    - by komidore64
    I'm writing a bash script to sync my iTunes music directory to a directory on a removable hard drive. The script works fine when there is absolutely nothing in the folder on the external hard drive. Once all files have been copied to the external drive, then the script begins to act strange. Even though i just sync'd everything over, it proceeds to recopy certain files again. After the initial sync, it chooses the same files to resync each consecutive time the script is executed without any changes being made to the source directory. #!/bin/bash # shell script to sync music with gigabeat and/or firewire drive musicdir="/Users/komidore64/Music/iTunes/iTunes Media/Music" gigadir="/Volumes/GIGABEAT/music" # fwdir="/Volumes/" remove() { find "$1" \ ! \( -name "*.wav" \ -o -name "*.ogg" \ -o -name "*.flac" \ -o -name "*.aac" \ -o -name "*.mp3" \ -o -name "*.m4a" \ -o -name "*.wma" \ -o -name "*.m4p" \ -o -name "*.ape" \ -o -type d \) \ -exec rm -i {} \; } if [ $# == 0 ]; then echo "no device argument present" echo "specify '-g' for gigabeat" echo "or '-f' for firewire drive" else remove "$musicdir" while [ $1 ]; do case $1 in -g | --gigabeat ) rsync --archive --verbose --delete "$musicdir/" "$gigadir" ;; -f | --firewire ) rsync --archive --verbose --delete "$musicdir/" "$fwdir" esac shift done echo "music synced" fi

    Read the article

  • Creating a (ClickOnce) setup for VSTO Outlook Add-in

    - by Ward Werbrouck
    So I created an Outlook Add-in and used the click-once setup to deploy it. The setup runs fine when the user is administrator, but otherwise: no go. Running the setup with "run as..." and logging in as admin works, but than the add-in is installed under the admin, not the current user. The addin doesn't show up in outlook. I tried following this guide: http://blogs.msdn.com/mshneer/archive/2008/04/24/deploying-your-vsto-add-in-to-all-users-part-iii.aspx But I get stuck at part I: http://blogs.msdn.com/mshneer/archive/2007/09/04/deploying-your-vsto-add-in-to-all-users-part-i.aspx I follow the examples and start excel as described: Now start Excel application. Examine the registry keys in HKCU hive e.g. you will find two interesting registry keys that appear under your HKCU hive: HKCU\Software\Microsoft\Office\TestKey registry key containing registry value TestValue You now also have HKCU\Software\Microsoft\Office\12.0\User Settings\TestPropagation registry key with Count value set to 1 But on my machine, the keys are not created... What can I try next?

    Read the article

  • How to mock HTTPSession/FlexSession with TestNG and some Mocking Framework

    - by ifischer
    I'm developing a web application running on Tomcat 6, with Flex as the frontend. I'm testing my backend with TestNG. Currently, I'm trying to test the following method in my Java backend: public UserPE login(String mail, String password) { UserPE dbuser = findUserByMail(mail); if (dbuser == null || !dbuser.getPassword().equals(password)) throw new RuntimeException("Invalid username and/or password"); // Save logged in user FlexSession session = FlexContext.getFlexSession(); session.setAttribute("user", dbuser); return dbuser; } The method needs access to the FlexContext which only exists when I run it on the servlet container (don't worry if you don't know Flex, it's more a Java mocking question in general). Otherwise I get a NullPointerException when calling session.setAttribute(). Unfortunately, I cannot set the FlexContext from outside, which would allow me to set it from my tests. It's just obtained inside the method. What would be the best way to test this method with a mocking framework, without changing the method or the class which includes the method? And which framework would be the easiest for this use case (there are hardly any other things I have to mock in my app, it's pretty simple)? Sorry, I could try out all of them for myself and see how I could get this to work, but I hope that I'll get a quick start with some good advice!
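    One possible approach (a sketch, not a definitive answer): mock the static FlexContext.getFlexSession() call. This assumes PowerMock with its Mockito API and TestNG module are on the test classpath; LoginService is a placeholder name for the class that owns login(), and findUserByMail() would still need a known user to return, e.g. from a test database or a mocked persistence layer.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.powermock.api.mockito.PowerMockito.mockStatic;
    import static org.powermock.api.mockito.PowerMockito.when;

    import flex.messaging.FlexContext;
    import flex.messaging.FlexSession;
    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.testng.PowerMockTestCase;
    import org.testng.annotations.Test;

    @PrepareForTest(FlexContext.class)
    public class LoginServiceTest extends PowerMockTestCase {

        @Test
        public void loginStoresUserInFlexSession() {
            // Replace the static FlexContext.getFlexSession() lookup with a mock session
            FlexSession session = mock(FlexSession.class);
            mockStatic(FlexContext.class);
            when(FlexContext.getFlexSession()).thenReturn(session);

            // "LoginService" is a placeholder for the backend class containing login();
            // findUserByMail() must be able to find this user, e.g. via test data.
            LoginService service = new LoginService();
            UserPE user = service.login("someone@example.org", "secret");

            verify(session).setAttribute("user", user);
        }
    }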

    Read the article

  • Error while dynamically loading mapi32.dll

    - by The_Fox
    Our application uses Simple MAPI to send e-mails. One of our clients has problems sending e-mail from a session on his terminal server. The mapi32.dll is loaded with a call to LoadLibrary which succeeds, but then our application tries to get the addresses of the functions MAPILogon, MAPILogOff, MAPISendMail, MAPIFreeBuffer and MAPIResolveName. The problem is that GetProcAddress fails for those functions with an ERROR_ACCESS_DENIED (code: 5) except for MAPIFreeBuffer. It looks like some sort of security thing. How can I fix this or should I use another method to send mail? FWI, here some more information about OS and contents of registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows Messaging Subsystem: OS info: 5.2.3790 VER_PLATFORM_WIN32_NT Service Pack 2 Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem InstallCmd: rundll32 setupapi,InstallHinfSection MSMAIL 132 msmail.inf MAPI: 1 CMCDLLNAME: mapi.dll CMCDLLNAME32: mapi32.dll CMC: 1 MAPIX: 1 MAPIXVER: 1.0.0.1 OLEMessaging: 1 Contents of SOFTWARE\Microsoft\Windows Messaging Subsystem\MSMapiApps inetsw95.exe: choosusr.dll: msab32.dll: nwab32.dll: outstore.dll: Microsoft Outlook CDOEXM.DLL: EMSMDB32.DLL: EMSABP32.DLL: newprof.exe: Microsoft Outlook outlook.exe: wfxmsrvr.exe: Microsoft Outlook msexcimc.exe: exchng32.exe: schdmapi.dll: Microsoft Outlook pilotcfg.exe: Microsoft Outlook mailmig.exe: Microsoft Outlook admin.exe: msspc32.dll: Microsoft Outlook cnfnot32.exe: Microsoft Outlook ilpilot.exe: Microsoft Outlook events.exe: I'm on Delphi 7.0, but that shouldn't matter. Edit, added version information: Fileversion info of C:\WINDOWS\system32\mapi32.dll Fileversion: 6.5.7226.0 FileDescription=Extended MAPI 1.0 for Windows NT CompanyName=Microsoft Corporation InternalName=MAPI32 Comments=Service Pack 1 LegalCopyRight=Copyright (C) 1986-2003 Microsoft Corp. All rights reserved. LegalTradeMarks=Microsoft(R) and Windows(R) are registered trademarks of Microsoft Corporation. OriginalFileName=MAPI32.DLL ProductName=Microsoft Exchange ProductVersion=6.5 Fileversion info of C:\Program Files\Common Files\SYSTEM\MSMAPI\1043\msmapi32.dll Fileversion: 11.0.5601.0 FileDescription=Extended MAPI 1.0 for Windows NT CompanyName=Microsoft Corporation InternalName=MAPI32.DLL LegalCopyRight=Copyright © 1995-2003 Microsoft Corporation. All rights reserved. OriginalFileName=MAPI32.DLL ProductName=MAPI32 ProductVersion=11.0.5601

    Read the article

  • Parse MIME messages

    - by Abhimanyu
    Hi, For my new project which has email module.i need to show all the email information on web.when i m making a call to server i m getting the base64 encoded mime data. after applying base64 decoding technique i m getting the mime data as follows: /*****************Mime data start *******************************/ From [email protected] Tue Jun 23 12:01:02 2009 Date: Tue, 23 Jun 2009 12:01:02 +0530 From: Prashant R Naik <[email protected]> To: [email protected] Subject: This is a test mail Message-ID: <[email protected]> Reply-To: Prashant R Naik <[email protected]> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="ReaqsoxgOBHFXBhH" Content-Disposition: inline User-Agent: Mutt/1.5.18 (2008-05-17) Status: RO Content-Length: 1912 Lines: 52 --ReaqsoxgOBHFXBhH Content-Type: text/plain; charset=us-ascii Content-Disposition: inline Test mail. Initiated by prashant Regards, -- Prashant R Naik Principal Technologist | Symbian & Web2.0 Geodesic Limited | www.geodesic.com Tel: +91-80-66551000 --ReaqsoxgOBHFXBhH Content-Type: image/gif Content-Disposition: attachment; filename="trash.gif" Content-Transfer-Encoding: base64 R0lGODlhEAAQANUoADJ8wTqU2DmR1TqV2DN9wTSBxTWFyTaGyTJ9wTWGyTaKzjmS1TOAxTuV 2DaFyTN8wDiN0jiO0jSAxTeKzjqS1DN8wTqR1TWFyjB4vTOBxTmO0TmS1DaKzTeJzTqV1zSA xDJ8wDqS1TeKzTF4vDF4vTiO0f///zuX2gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAACgALAAA AAAQABAAAAaDQNRpSCwWhcakcsk8mZ5Qpik5pUKvT2W1uDVWp+BiYNAImAZmz/lcDoQEFoFp QTFtTPKFQLCAREolJiURJhCCJhqAJRMiIhwmjSYdJgqUjQoODgkJJgecBp0mBgYXBx8ZBQxY UAUSDAUACLEPDwgEAAAEIBUEtygkIyMkwMMYw8EjKEEAOw== --ReaqsoxgOBHFXBhH Content-Type: image/jpeg Content-Disposition: attachment; filename="bx.jpg" Content-Transfer-Encoding: base64 /9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEB AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/2wBDAQEBAQEBAQEBAQEBAQEB AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQH/wAAR CAAUAAoDAREAAhEBAxEB/8QAFQABAQAAAAAAAAAAAAAAAAAAAAn/xAAYEAEAAwEAAAAAAAAA AAAAAAAAGWen5//EABQBAQAAAAAAAAAAAAAAAAAAAAD/xAAUEQEAAAAAAAAAAAAAAAAAAAAA /9oADAMBAAIRAxEAPwCb4AJHym0Vp3PQJTaK07noJHgA/9k= --ReaqsoxgOBHFXBhH Content-Type: image/png Content-Disposition: attachment; filename="day_bg.png" Content-Transfer-Encoding: base64 iVBORw0KGgoAAAANSUhEUgAAAGQAAAApCAYAAADDJIzmAAAABmJLR0QA/wD/AP+gvaeTAAAA CXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH2AwCCS0kTriU2QAAAB10RVh0Q29tbWVudABD cmVhdGVkIHdpdGggVGhlIEdJTVDvZCVuAAAAXElEQVR42u3bQQEAMAgDMZiqiZtP5AwbfeQk NO/WvPtLMR0TABEQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECACAgQAREQIAICRECA CAgQARGQ7NpPPasFT+0FZPjBRwYAAAAASUVORK5CYII= --ReaqsoxgOBHFXBhH-- /*****************Mime data end *******************************/ now the problem is i have to parse this data and use it in my application.since this data is not a xml so it difficult to parse it (because parsing with some tag is easy).so any one who knows how to parse mime data help be.i m using erlang to parse this data. Thank you in advance

    Read the article

  • AppletClassLoader exception : class not found

    - by gautam
    Hi, I am getting an exception when I am trying to open an applet in my JSP. My JSP and applet are in the same directory. My code is like: <object classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93" width="720" height="500" type="application/x-java-applet;version=1.5"> <param name="type" value="application/x-java-applet;version=1.5"> <param name="code" value="com.timer.AppletGenerator.class"> <param name="codebase" value="."> <param name="ARCHIVE" value="Applet.jar, jcommon-1.0.0.jar, jfreechart-1.0.0.jar, log4j-1.2.13.jar, itext-1.3.1.jar"> <embed width="720" height="500" type="application/x-java-applet;version=1.5" code="com.timer.AppletGenerator.class" codebase="." ARCHIVE='Applet.jar, jcommon-1.0.0.jar, jfreechart-1.0.0.jar, log4j-1.2.13.jar, itext-1.3.1.jar' > </embed> </object> My AppletGenerator class is like: public class AppletGenerator extends JApplet { public void init() { //Some code } } Earlier it was working fine. But now when I sign and verify the new jar it's giving this exception: load: class com.timer.AppletGenerator.class not found. java.lang.ClassNotFoundException: com.timer.AppletGenerator.class at sun.applet.AppletClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.applet.AppletClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.applet.AppletClassLoader.loadCode(Unknown Source) at sun.applet.AppletPanel.createApplet(Unknown Source) at sun.plugin.AppletViewer.createApplet(Unknown Source) at sun.applet.AppletPanel.runLoader(Unknown Source) at sun.applet.AppletPanel.run(Unknown Source) at java.lang.Thread.run(Unknown Source) Caused by: java.io.IOException: open HTTP connection failed. at sun.applet.AppletClassLoader.getBytes(Unknown Source) at sun.applet.AppletClassLoader.access$100(Unknown Source) at sun.applet.AppletClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) ... 10 more I have searched a lot but didn't get any correct answer. There are a few posts regarding this on Stack Overflow but those also didn't work. Thanks and Regards,

    Read the article

  • Outlook Interop Send Message from Account

    - by Reiste
    Okay, the specs have changed on this one somewhat. Maybe someone can help me with this new problem. Manually, what the user is doing is opening an new message in Outlook (2007 now) which has the "From..." field exposed. They open this up, select a certain account from the Global Address List, and send the message on behalf of that account. Is this possible to do? I can get the AddressEntry from the Global address list like so: AddressList list = null; foreach (AddressList addressList in _outlookApp.Session.AddressLists) { if (addressList.Name.ToLower().Equals("global address list")) { list = addressList; break; } } if (list != null) { AddressEntry entry = null; foreach (AddressEntry addressEntry in list.AddressEntries) { if (addressEntry.Name.ToLower().Equals("outgoing mail account")) { entry = addressEntry; break; } } } But I'm not sure I can make an Account type from the Address Entry. It seems to happen manually, when they select the address to send from. How do I mirror this in the Interop? Thanks! (My Original Question): I developed a small C# program to send email using the Outlook 2007 interop. The client required that the mail not be send using the default account - they had a secondary account they needed used. No problem - I used the Microsoft.Office.Interop.Outlook.Account class to access the availabled accounts, and choose the correct one. Now, it turns out they need this to work in Outlook 2003. Of course, the Account class doesn't exist in the Outlook interop 11.0. How can I achieve the same thing with Outlook 2003? Thanks in advance.

    Read the article

  • Swift Mailer Error

    - by Selom
    Hi, I'm using the following code to send a message: try { require_once "lib/Swift.php"; require_once "lib/Swift/Connection/SMTP.php"; $smtp =& new Swift_Connection_SMTP("mail.somedomain.net", 587); $smtp->setUsername("username"); $smtp->setpassword("password"); $swift =& new Swift($smtp); //Create the sender from the details we've been given $sender =& new Swift_Address($email, $name); $message =& new Swift_Message("message title"); $message->attach(new Swift_Message_Part("Hello")); //Try sending the email $sent = $swift->send($message, "$myEmail", $sender); //Disconnect from SMTP, we're done $swift->disconnect(); if($sent) { print 'sent'; } else { print 'not sent'; } } catch (Exception $e) { echo"$e"; } The issue is that it works fine on my local server (which is my XAMPP server) but not when the file is uploaded to the real server. It throws this error: 'The SMTP connection failed to start [mail.somedomain.net:587]: fsockopen returned Error Number 110 and Error String 'Connection timed out'' Please, what should I do to correct this error? Thanks for reading

    Read the article

  • Extracting a specific value from command-line output using powershell

    - by Andrew Shepherd
    I've just started learning PowerShell this morning, and I'm grappling with the basics. Here's the problem: I want to query subversion for the revision number of a repository, and then create a new directory with this revision number. The command svn_info outputs a number of label\value pairs, delimited by colons. For example Path: c# URL: file:///%5Copdy-doo/Archive%20(H)/Development/CodeRepositories/datmedia/Development/c%23 Repository UUID: b1d03fca-1949-a747-a1a0-046c2429f46a Revision: 58 Last Changed Rev: 58 Last Changed Date: 2011-01-12 11:36:12 +1000 (Wed, 12 Jan 2011) Here is my current code which solves the problem. Is there a way to do this with less code? I'm thinking that with the use of piping you could do this with one or two lines. I certainly shouldn't have to use a temporary file. $RepositoryRoot = "file:///%5Cdat-ftp/Archive%20(H)/Development/CodeRepositories/datmedia" $BuildBase="/Development/c%23" $RepositoryPath=$RepositoryRoot + $BuildBase # Outputing the info into a file svn info $RepositoryPath | Out-File -FilePath svn_info.txt $regex = [regex] '^Revision: (\d{1,4})' foreach($info_line in get-content "svn_info.txt") { $match = $regex.Match($info_line); if($match.Success) { $revisionNumber = $match.Groups[1]; break; } } "Revision number = " + $revisionNumber;

    Read the article

  • sermyadmin 2.01

    - by lepricon123
    I am able to install sermyadmin 2.0.1 and got everything working; only user registration is not working. When I register a user, the entry goes into the DB etc. fine, but the email is not sent to the user. We are using our own SMTP server with no authentication, and I am sure there is nothing wrong with the SMTP server. I can see that it has received a request from sermyadmin, but in the logs I see that the FROM address is missing, so it is throwing 553 5.1.8 Domain of sender address does not exist. The domain does exist. Here is my resources.xml with all the valid entries. Since I don't need the SMTP authentication, which the server is not using, I removed that. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spr...ns-2.0.xsd"> <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl"> <property name="host" value="10.100.22.365" /> </bean> <bean id="mailMessage" class="org.springframework.mail.SimpleMailMessage"> <property name="from" value="[email protected]" /> </bean> </beans>
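    One guess, offered purely as an assumption and not as sermyadmin's actual code: if the sending code builds a fresh SimpleMailMessage instead of copying the configured mailMessage template, the from value is never applied, which would leave the sender empty. A minimal sketch of the intended usage with the two beans above (class and method names are invented):

    import org.springframework.mail.MailSender;
    import org.springframework.mail.SimpleMailMessage;

    public class RegistrationMailer {
        private final MailSender mailSender;             // the "mailSender" bean above
        private final SimpleMailMessage templateMessage; // the "mailMessage" bean above

        public RegistrationMailer(MailSender mailSender, SimpleMailMessage templateMessage) {
            this.mailSender = mailSender;
            this.templateMessage = templateMessage;
        }

        public void sendActivation(String to, String activationLink) {
            // Copying the template carries the configured "from" value over;
            // a bare new SimpleMailMessage() would have no sender set at all.
            SimpleMailMessage msg = new SimpleMailMessage(templateMessage);
            msg.setTo(to);
            msg.setSubject("Account activation");
            msg.setText("Please activate your account: " + activationLink);
            mailSender.send(msg);
        }
    }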

    Read the article
