Search Results

Search found 1188 results on 48 pages for 'thomas miller'.


  • How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

    - by Ruud
    This white paper demonstrates the architected latency hiding features of Oracle’s UltraSPARC T2+ and SPARC T4 processors. That is the first sentence from this technical white paper, but what exactly does it mean? Let's consider a very simple example, the computation of a = b + c. This boils down to the following (pseudo-assembler) instructions that need to be executed: load @b, r1; load @c, r2; add r1, r2, r3; store r3, @a. The first two instructions load variables b and c from an address in memory (here symbolized by @b and @c respectively). These values go into registers r1 and r2. The third instruction adds the values in r1 and r2; the result goes into register r3. The fourth instruction stores the contents of r3 into the memory address symbolized by @a. If we're lucky, both b and c are in a nearby cache and the load instructions only take a few processor cycles to execute. That is the good case, but what if b or c, or both, have to come from very far away? Perhaps both of them are in main memory, and then it easily takes hundreds of cycles for the values to arrive in the registers. Meanwhile the processor is doing nothing and simply waits for the data to arrive. Actually, it does do something: it burns cycles while waiting. That is a waste of time and energy. Why not use these cycles to execute instructions from another application, or another thread in the case of a parallel program? That is exactly what latency hiding on the SPARC T-Series processors does. It is a hardware feature totally transparent to the user and application. As soon as there is a delay in the execution, the hardware uses these otherwise idle cycles to execute instructions from another process. As a result, the throughput capacity of the system improves because idle cycles are no longer wasted, and therefore more jobs can be run per unit of time. This feature has been in the SPARC T-series from the beginning, so why this paper? The difference from previous publications on this topic is in the amount of detail given. How this all works under the hood is fully explained using two example programs. Starting from the assembly language instructions, it is demonstrated how these programs execute. To really see what is happening we go down to the processor pipeline level, where the gaps in the execution are, and show how these idle cycles are filled by other copies of the same program running simultaneously. Both the SPARC T4 and the older UltraSPARC T2+ processor are covered. You may wonder why the UltraSPARC T2+ is included. The focus of this work is on the SPARC T4 processor, but to explain the basic concept of latency hiding at this very low level, we start with the UltraSPARC T2+ processor because it is architecturally a much simpler design. From the single-issue, in-order pipelines of this processor we then shift gears and cover how this all works on the much more advanced dual-issue, out-of-order architecture of the T4. The analysis and performance experiments have been conducted on both processors. The results depend on the processor, but in all cases the theoretical estimates are confirmed by the experiments. If you're interested in reading a lot more about this and finding out how things really work under the hood, you can download a copy of the paper here. A paper like this could not have been produced without the help of several other people. I want to thank the co-author of this paper, Jared Smolens, for his very valuable contributions and our highly inspiring discussions.
I'm also indebted to Thomas Nau (Ulm University, Germany), Shane Sigler and Mark Woodyard (both at Oracle) for their feedback on earlier versions of this paper. Karen Perkins (Perkins Technical Writing and Editing) and Rick Ramsey at Oracle were very helpful in providing editorial and publishing assistance.
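    Going back to the a = b + c example at the start of this post: the latency-hiding argument is about hardware threads, not about any particular API, but a toy program can make the throughput effect tangible. The Java sketch below is my own illustration and is not taken from the white paper; the array size, thread counts and the two-second timing window are arbitrary assumptions. Each worker chases pointers through a large shuffled array so that nearly every load can stall on memory; on a latency-hiding processor such as the SPARC T-series the aggregate load rate should keep scaling as threads are added, because the hardware fills one thread's stall cycles with another thread's instructions.

        import java.util.Random;
        import java.util.concurrent.atomic.LongAdder;

        public class LatencyHidingToy {
            // Big enough that most loads miss the caches; assumed size, adjust for your machine.
            static final int SIZE = 1 << 25;
            static final int[] next = new int[SIZE];

            public static void main(String[] args) throws InterruptedException {
                Random rnd = new Random(42);
                for (int i = 0; i < SIZE; i++) next[i] = i;
                for (int i = SIZE - 1; i > 0; i--) {          // shuffle so loads are dependent and random
                    int j = rnd.nextInt(i + 1);
                    int t = next[i]; next[i] = next[j]; next[j] = t;
                }
                for (int threads = 1; threads <= 8; threads *= 2) {
                    System.out.printf("%d thread(s): %.1f M loads/s%n", threads, measure(threads));
                }
            }

            static double measure(int threadCount) throws InterruptedException {
                LongAdder total = new LongAdder();
                long deadline = System.nanoTime() + 2_000_000_000L;      // 2 s per data point
                Thread[] workers = new Thread[threadCount];
                for (int t = 0; t < threadCount; t++) {
                    workers[t] = new Thread(() -> {
                        int p = 0;
                        long n = 0;
                        while (System.nanoTime() < deadline) {
                            for (int k = 0; k < 1024; k++) p = next[p];  // dependent loads: each one can stall
                            n += 1024;
                        }
                        total.add(n + (p & 1));                          // use p so the loop cannot be optimized away
                    });
                    workers[t].start();
                }
                for (Thread w : workers) w.join();
                return total.sum() / 2e6;                                // million loads per second over the 2 s window
            }
        }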

    Read the article

  • Lead, Follow, or Get out of the way

    - by Daniel Moth
    This is one of the sayings (attributed to Thomas Paine) that totally resonated with me from the first time I heard it, which was only 3 years ago during some training course at work: "Lead, Follow, or Get out of the way" You'll find many books with this title and you'll find it quoted by politicians and other leaders in various countries at various times... the quote is open to interpretation and works on many levels. To set the tone of what this means to me, I'll use a simple micro example: In any given conversation, you are either leading it or following it, at different times/snapshots of the conversation. If you are not willing or able to lead it, and you are not willing or able to follow it, then you should depart. The bad alternative which this guidance encourages you NOT to do is to stick around and obstruct progress by not following, not leading, and simply complaining or trying to derail the discussion in no particular direction. The same pattern applies at your position/role at work. Either follow your management/leadership team, or try to lead them to what you think is a better place, or change jobs. Don't stick around complaining about the direction things are going, while not actively trying to either change things or make peace with it. In the previous paragraph you can replace the word "your management" with "the people reporting to you" and the guidance still holds. Either lead your direct reports to where you think they should go, or follow their lead, or change jobs. Complaining about folks not taking direction while doing nothing is not a maintainable state. To me this quote is not about a permanent state, it is not about some people always leading and some always following: It is about a role/hat that anybody can play/wear at any given moment. One minute I am leading you, the next I am following you, and the next we are both following someone else and so on... When there is disagreement, debate the different directions for as long as it takes for you to be comfortable that you can either follow or lead. If you don't become comfortable with either of those, get out of the way. Something to remember is that it is impossible to learn how to lead well, without learning how to follow well (probably deserves its own blog entry)... Things go wrong when someone thinks that they must always be leading, or when everybody wants to follow and nobody steps up to lead... Things go wrong when more than one person wants to lead and they don't try to reach agreement on a shared direction, stubbornly sticking to their guns pulling the rest of the team in multiple directions... Things go wrong when more than one person wants to lead and after numerous and lengthy discussions, none of them decides to follow or get out of the way... Things go wrong when people don't want to lead, don't want to follow, and insist on sticking around... While there are a few ways things that can go wrong as enumerated in the previous paragraph, the most common one in my experience is the last one I mentioned. You'll recognize these folks as the ones that always complain about everything that is wrong with their company/product but do nothing about it. Every time you hear someone giving feedback on how something is wrong or suboptimal, ask them "So now that you identified the problem, what do you think the solution is and what are you doing to drive us to that solution?" The next time things start going wrong, step up and remind everyone: Lead, Follow, or Get out of the way. 
For more perspectives, and for input to help you form your own interpretation, search the web for this phrase to see in what contexts it is being used (bing, google). Finally, regardless of your political views, I hope you can appreciate if only as an example this perspective of someone leading by actually getting out of the way. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    This behavior is interesting, and it is not always what we expect. An orchestration can be enlisted with many subscriptions; in other words, it can have several Receive shapes. Usually the first Receive uses the Activation subscription, while the other Receives create Instance subscriptions. [See “Publish and Subscribe Architecture” in MSDN.] Here is a sample process. This orchestration has two Receives; it is a typical Sequential Convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas.] Let's get the experiment started. There are three typical scenarios. First scenario: everything is OK. The Activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The Instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good: the Message_2 was delivered exactly within this time interval and was consumed. Second scenario: no consumers. Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created. The second message was delivered within the correct time interval. The third one was delivered after the SampleProcess orchestration had ended and the instance subscription was removed. Note: the first Sample_2 was not consumed. It was first in the queue, but it was not waiting; it was suspended when it was delivered to the Message Box because it did not have any subscribers at that moment. The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we get two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description. The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description. Third scenario: something goes wrong. Two Sample_2 messages were delivered. Both were delivered in the same interval, while the SampleProcess orchestration was working and the instance subscription existed. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration has ended (and only after; I will discuss this in the next article), it is suspended (Nonresumable). In this case only one service instance associated with this scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description. In the Message tab we will see the Sample_2 message in the Suspended (Resumable) status. Note: this behavior looks ambiguous. The orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in the sense of “processed by the orchestration”, but they are consumed in the sense of “delivered to the subscriber”. The Receive shape in the orchestration has not received these extra messages, but the messages are routed to the orchestration. Unified Sequential Convoy. Now one more scenario: the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity the SampleProcess orchestration consumes only two Sample messages.
    Usually the orchestration consumes many messages inside a loop, but here it is only two of them. The first message starts the orchestration; the second message goes into this orchestration. Then the next pair of messages follows, and so on. But if the input messages arrive at shorter intervals, we have a problem: we lose messages in an unpredictable manner. Note: perhaps the better behavior would be for the orchestration to remove the instance subscription after the message is consumed, not at the end of the orchestration. Right now this is a “feature” of the BizTalk subscription mechanism.

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage - employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve. Life, Liberty and the Pursuit of Delight? How appropriate during this short week of the US Independence Day holiday that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry Brits!), it's interesting to think back to 1776 and the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men — Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston — to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a "fair copy" of his draft declaration, which became the basic text of his "original Rough draught." The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft." (http://www.constitution.org) If anything was an attempt at engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. ...Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type for a printing press to distribute to the colonies, all of the original drafts were handwritten. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision control process, or lack thereof? Collaborative process? Time delays? Would implementing a better process have helped our founding fathers collaborate better? A rough draft of the Declaration of Independence - one of many produced during the creation process - is shown below, and there is a great comparison across multiple versions of the document here (from http://www.ushistory.org/). While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees that have better tools for communication, collaboration and getting their job done are more engaged employees. Better engaged employees create more engaged customers and partners.

    Read the article

  • OPN Exchange General Sessions –Fowler, Kurian & More!

    - by Kristin Rose
    With so much excitement about to take place at OPN Exchange @ OpenWorld, it’s hard to decide what to attend, where to go, who to meet and what to eat! Let us help you decide by first asking a question… How often do you get to choose between seven key Oracle Executives as they address the five biggest topics facing the industry today? After the Partner Keynote with Judson Althoff, join us for the OPN Exchange General Sessions: DATE: Sunday September 30th TIME: 3:30-4:30 pm LOCATION: Moscone South, Esplanade Level John Fowler & Tom LaRocca (Technology for Partners: Room 306): Learn how to grow your top and bottom line by selling Oracle on Oracle. Chris Leone (Applications for Partners: Room 303): Catch the partner-only value prop, selling secrets and competitive compares to win with the Fusion Applications product family. Angelo Pruscino & Sohan DeMel (Engineered Systems for Partners: Room 301): Get the secrets to selling and implementing Oracle’s transformational Engineered Systems products. You won’t want to miss the Oracle Database Appliance Unplugged demonstration! Sonny Singh (Industry Solutions: Room 302): Develop profitable practices answering the challenges faced by companies operating in discrete industries and the opportunity represented by Machine2Machine Java. Thomas Kurian (Cloud for Partners: Room 304): Today it is all about the Cloud. Oracle offers both traditional cloud infrastructure solutions, as well as cloud platform and software services. Attend this session to learn more about Oracle’s Platform, Application, and Social cloud services. Put on your thinking caps, because these speakers are ready to blow your mind with five tracks of exclusive content catered to you, our partners. Boom! The OPN Communication Team

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions. Understanding and Improving Your Java Process Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts if you want to affect the performance of your Java process - from the basics of the JVM specification to the interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory - you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (with a size set by -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and look at the allocation rate and the frequency of pause times. My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the session much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and also on where to download early versions of Java 8 to report problems early on. Software Development in teams Being a scout leader, I'm drilled in different team building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania, with concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes). Memory Master Keynote The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by forcing the audience to learn a memory mastering technique of remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this - I bought the book and will get back on the subject. Continuous Delivery The most impressive presenter was Axel Fontaine on Continuous Delivery, with very well prepared slides carrying the key images of his message, and he moved about the stage like a rock star. The topic is of course highly interesting: how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them. To conclude, we had a great time and hope to see you at jDays next year :)
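    As a small companion to the heap-sizing notes above, here is a minimal Java sketch (my own illustration, not from the talk) that prints the initial and maximum heap the JVM was started with, plus current usage; running it with different -Xms/-Xmx values shows the numbers move accordingly.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryUsage;

        public class HeapSettings {
            public static void main(String[] args) {
                // Heap as seen by the memory manager (reflects -Xms/-Xmx).
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("heap init:      %d MB%n", heap.getInit() / (1024 * 1024));
                System.out.printf("heap used:      %d MB%n", heap.getUsed() / (1024 * 1024));
                System.out.printf("heap committed: %d MB%n", heap.getCommitted() / (1024 * 1024));
                System.out.printf("heap max:       %d MB%n", heap.getMax() / (1024 * 1024));

                // Same upper bound via Runtime, for comparison.
                System.out.printf("Runtime.maxMemory: %d MB%n",
                        Runtime.getRuntime().maxMemory() / (1024 * 1024));
            }
        }

    For example, java -Xms256m -Xmx1g HeapSettings should report an initial heap of roughly 256 MB and a maximum of roughly 1 GB.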

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement. Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events to real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events such as a breach of a weighted moving average on a real-time data stream. Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent low latency reads. For example, when large sensor networks are generating data that need to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes: how do these things work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
As your sliding windows get larger, holding all the data in the stream can become difficult and out of band data may need to be offloaded into persistent storage.  OEP needs an extreme speed database like Oracle NoSQL Database to help it continue to perform for the real time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together.  OEP & Oracle NoSQL Database.
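    To make the "Coherence for hot data, NoSQL for overflow" read path described above a bit more concrete, here is a small cache-aside sketch in Java. The KeyValueStore interface and its get/put methods are hypothetical stand-ins, not the real Coherence or Oracle NoSQL Database APIs; the point is only the shape of the lookup: check the low-latency cache first, fall back to the key-value store on a miss, then repopulate the cache for the next reader.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical stand-in for a remote key-value tier such as Oracle NoSQL Database.
        interface KeyValueStore {
            byte[] get(String key);
            void put(String key, byte[] value);
        }

        // Cache-aside read path: in-memory cache first (the Coherence role), key-value store on a miss.
        public class OverflowCache {
            private final Map<String, byte[]> hotCache = new ConcurrentHashMap<>(); // stand-in for the grid cache
            private final KeyValueStore store;                                      // stand-in for the NoSQL tier

            public OverflowCache(KeyValueStore store) {
                this.store = store;
            }

            public byte[] read(String key) {
                byte[] value = hotCache.get(key);
                if (value != null) {
                    return value;             // memory-latency hit
                }
                value = store.get(key);       // overflow lookup in the key-value store
                if (value != null) {
                    hotCache.put(key, value); // repopulate the hot cache
                }
                return value;
            }

            public void write(String key, byte[] value) {
                store.put(key, value);        // the key-value store is the system of record here
                hotCache.put(key, value);
            }
        }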

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context) The problem 1.1-5 in the book Introduction to Algorithms by Thomas Cormen et al. is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough." I'm interested in its first statement. And (from my understanding) it asks to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution? Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model. Otherwise the model becomes, at the least, complex and unsolvable. Virtually any particle in the universe has its influence on the fluid flow, but not all particles are equal: those that form the permeable medium are much more influential than the ones located light years away. Then, when the mathematical model needs to be solved, an exact solution can rarely be found unless the mathematical model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with the program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem to some degree, that is a good-enough solution. It's worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computation machines, I believe the solutions of all physical problems where any calculations are taken cannot be exact, because universal physical constants are represented approximately in the computer. Any numbers are represented with limited precision, at least limited by the amount of memory available to the computing machine. I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work, like train scheduling, automated trading, satellite orbit calculation, health care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations in computer memory, or due to the nature of the problems. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (a little note here: because the question is taken from the book "Introduction to algorithms", the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that's probably more of theoretical interest. So I would like to narrow down the question to: What are the real-world practical problems where only the best (exact) solution algorithm or program will do (but not the good-enough solution)? There are problems like the breaking of cryptographic ciphers where only the exact solution matters in practice, and again in practice the process of deciphering without knowing the secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there is no practical need for an instant crack, though it is desired. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having the minimal possible network traffic, etc. And still I want this question to be theoretical if possible.
    In the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that's the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical purposes are required to be good enough. In rare cases really very, very good, but still not the best ones. Isn't it? :) If it's not, can you provide an example? Or can you name any such unsolved problem of practical interest?

    Read the article

  • Cannot get net 4.5rc to work

    - by ThomasD
    I have installed .NET 4.5 RC from http://msdn.microsoft.com/en-us/netframework/hh854779.aspx because I would like to use the new spatial features when developing with Visual Web Developer 2010 Express. But when I want to change the target framework to .NET 4.5 in the project properties, it is not there. I have checked the directory C:\Windows\Microsoft.NET\Framework64 and can see that there is no v4.5 folder, but the v4.0 directory has been updated with a timestamp corresponding to when I installed v4.5. The version of the v4.0 directory is v4.0.30319. I am running Windows 7 on my computer. Any ideas why I cannot find v4.5? Thanks, Thomas. UPDATE: Based on the comment below I have found out that I am running 4.5. For others reading this post: .NET 4.5 replaces the files in the .NET 4.0 directory without renaming the directory to .NET 4.5 (a bit confusing). To check whether your assemblies have been updated, check the product version of e.g. system.dll (right click - details). According to this post http://www.west-wind.com/weblog/posts/2012/Mar/13/NET-45-is-an-inplace-replacement-for-NET-40 if the product version is above 17000 it is running 4.5.

    Read the article

  • How to prepopulate an Outlook MailItem and avoiding a com Exception from the object model guard

    - by torrente
    Hi, I work for a company that develops a CRM tool and offers integration with MS Office (2003 & 2007) from Windows XP to 7 (I'm working on Win7). My task is to call an Outlook instance (using C#) from this CRM tool when the user wants to send an email, and prepopulate it with data from the CRM tool (email, recipient, etc.). All of this already works just fine. The problem I'm having is that Outlook's "object model guard" throws a COM exception (Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))) the moment I try to read a protected value from the MailItem (such as mail.bodyHTML). Example snippet:

        using MSOutlook = Microsoft.Office.Interop.Outlook;

        // untrusted instance
        _outlook = new MSOutlook.Application();
        MSOutlook.MailItem mail = (MSOutlook.MailItem)_outlook.CreateItem(MSOutlook.OlItemType.olMailItem);

        // this is where the exception occurs
        string outlookStdHTMLBody = mail.HTMLBody;

    I've done quite a bit of reading and know that my Outlook instance (derived by using new Application()) is considered untrusted and therefore the "omg" kicks in. I do have a workaround for development: I'm running VS2010 as Administrator, and if I run Outlook as Administrator as well - all is good. I suppose this is due to them having the same integrity level (high) and the UAC not complaining. But that just ain't the way to go for deployment. Now the question is: is there a way to obtain a trusted instance of Outlook so that I can avoid this exception? I've already read that when developing an Office Add-In using VSTO one can obtain a trusted instance from the OnComplete event and/or using "ThisAddin". But I "merely" want to start an Outlook instance and prepopulate it, and I do not want to develop an Add-In since this is not the requirement. And to make it clear - I have no problem with pop-ups informing the user that Outlook is being accessed - I just want to get rid of the exception! So how can I get around this problem using code? Any help is highly appreciated! Thomas

    Read the article

  • using ontouch to zoom in

    - by user357032
    I have used some sample code and am trying to tweak it to let the user touch the screen and zoom in. The code runs fine with no errors, but when I touch the screen nothing happens.

        package com.thomas.zoom;

        import android.content.Context;
        import android.graphics.Canvas;
        import android.graphics.drawable.Drawable;
        import android.view.KeyEvent;
        import android.view.MotionEvent;
        import android.view.View;

        public class Zoom extends View {
            private Drawable image;
            private int zoomControler = 20;

            public Zoom(Context context) {
                super(context);
                image = context.getResources().getDrawable(R.drawable.icon);
                setFocusable(true);
            }

            @Override
            protected void onDraw(Canvas canvas) {
                // TODO Auto-generated method stub
                super.onDraw(canvas);
                // here u can control the width and height of the images........ this line is very important
                image.setBounds((getWidth()/2)-zoomControler, (getHeight()/2)-zoomControler,
                        (getWidth()/2)+zoomControler, (getHeight()/2)+zoomControler);
                image.draw(canvas);
            }

            public boolean onTouch(int action, MotionEvent event) {
                action = event.getAction();
                if (action == MotionEvent.ACTION_DOWN) {
                    zoomControler += 10;
                }
                invalidate();
                return true;
            }
        }
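    A likely reason nothing happens is that onTouch(int, MotionEvent) is not a callback the Android framework ever invokes on a View: touches arrive through onTouchEvent(MotionEvent) (or through a registered View.OnTouchListener). The fragment below is a hedged sketch of how that handler could look inside the same Zoom class; it is a suggested fix, not part of the original post.

        // Suggested replacement for the unused onTouch(int, MotionEvent) method above.
        @Override
        public boolean onTouchEvent(MotionEvent event) {
            // Called by the framework for every touch on this view.
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                zoomControler += 10;   // grow the drawn bounds on each tap
                invalidate();          // schedule onDraw() with the new size
            }
            return true;               // consume the event
        }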

    Read the article

  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in C# Console Application. Here's the code I'm using: static void Main(string[] args) { DatastoreManager dm = new DatastoreManager(1033); Collection<Platform> platforms = dm.GetPlatforms(); foreach (var p in platforms) { Console.WriteLine("{0} {1}", p.Name, p.Id); } Platform platform = platforms[3]; Console.WriteLine("Selected {0}", platform.Name); Device device = platform.GetDevices()[0]; device.Connect(); Console.WriteLine("Device Connected"); SystemInfo info = device.GetSystemInfo(); Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo); Console.ReadLine(); } My question: Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell. Here's the stack trace as requested: System.IO.DirectoryNotFoundException was unhandled Message="The system cannot find the path specified.\r\n" Source="Device Connection Manager" StackTrace: at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice() at Microsoft.SmartDevice.Connectivity.Device.Connect() at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • How to get a volume measurement of iPhone recording in dB, with a limit of at least 120dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone. I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS range for 16-bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function:

        dataFormat.mSampleRate = 44100.0f;
        dataFormat.mFormatID = kAudioFormatLinearPCM;
        dataFormat.mFramesPerPacket = 1;
        dataFormat.mChannelsPerFrame = 1;
        dataFormat.mBytesPerFrame = 2;
        dataFormat.mBytesPerPacket = 2;
        dataFormat.mBitsPerChannel = 16;
        dataFormat.mReserved = 0;

    I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording: dataFormat.mBitsPerChannel = 32; With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial I posted above. Thanks in advance, Thomas

    Read the article

  • String method crashes program...

    - by TimothyTech
    Alright, so I have two identical string methods:

        string CreateCust() {
            string nameArray[] = {"Tom","Timo","Sally","Kelly","Bob","Thomas","Samantha","Maria"};
            int d = rand() % (8 - 1 + 1) + 1;
            string e = nameArray[d];
            return e;
        }

        string CreateFood() {
            string nameArray[] = {"spagetti", "ChickenSoup", "Menudo"};
            int d = rand() % (3 - 1 + 1) + 1;
            string f = nameArray[d];
            return f;
        }

    However, no matter what I do in the guts of CreateFood, it will always crash. I created a test chassis for it and it always fails at the cMeal = CreateFood(); line:

        Customer Cnow;
        cout << "test1" << endl;
        cMeal = Cnow.CreateFood();
        cout << "test1" << endl;
        cCustomer = Cnow.CreateCust();
        cout << "test1" << endl;

    I even switched CreateCust with CreateFood and it still fails at the CreateFood function... NOTE: if I make CreateFood an int method it does work...

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it? EDIT: I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT: Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any two somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
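    The usual non-LINQ way to do this is a single pass over the rows with a dictionary keyed by category that accumulates the two sums. The sketch below is written in Java purely to show the shape of the algorithm; the original question is .NET 2.0 C#, where a Dictionary<string, decimal[]> filled while iterating the rows follows the same pattern. The Row type and the sample values are stand-ins of mine, not the real resultset.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class GroupAndSum {
            // Hypothetical stand-in for one row of the resultset.
            record Row(String category, int column2, double column3) {}

            public static void main(String[] args) {
                Row[] rows = {
                    new Row("A", 2, 3.50), new Row("A", 3, 2),
                    new Row("B", 3, 2),    new Row("B", 1, 5)
                };

                // category -> { sum of Column2, sum of Column3 }; LinkedHashMap keeps first-seen order.
                Map<String, double[]> totals = new LinkedHashMap<>();
                for (Row r : rows) {
                    double[] sums = totals.computeIfAbsent(r.category(), k -> new double[2]);
                    sums[0] += r.column2();
                    sums[1] += r.column3();
                }

                for (Map.Entry<String, double[]> e : totals.entrySet()) {
                    System.out.printf("%s  %.0f  %.2f%n", e.getKey(), e.getValue()[0], e.getValue()[1]);
                }
            }
        }

    Each category gets exactly one accumulator, so any set of category values is handled without knowing them in advance.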

    Read the article

  • should variable be released or not? iphone-sdk

    - by psebos
    Hi, In the following piece of code (from a book) data is an NSDictionary *data; defined in the header (no property). In the viewDidLoad of the controller the following occurs: - (void)viewDidLoad { [super viewDidLoad]; NSArray *keys = [NSArray arrayWithObjects:@"home", @"work", nil]; NSArray *homeDVDs = [NSArray arrayWithObjects:@"Thomas the Builder", nil]; NSArray *workDVDs = [NSArray arrayWithObjects:@"Intro to Blender", nil]; NSArray *values = [NSArray arrayWithObjects:homeDVDs, workDVDs, nil]; data = [[NSDictionary alloc] initWithObjects:values forKeys:keys]; } Since I am really new to objective-c can someone explain to me why I do not have to retain the variables keys,homeDVDs,workDVDs and values prior exiting the function? I would expect prior the data allocation something like: [keys retain]; [homeDVDs retain]; [workDVDs retain]; [values retain]; or not? Does InitWithObjects copies (recursively) all objects into a new table? Assuming we did not have the last line (data allocation) should we release all the NSArrays prior exiting the function (or we could safely assumed that all NSArrays will be autoreleased since there is no alloc for each one?) Thanks!!!!

    Read the article

  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc version (2.4.3) through Ruby gems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I can still only use RDoc 1.0.1 when I call run "rdoc" at the command line. rdoc -v returns: RDoc V1.0.1 - 20041108 I tried amending the $PATH variable to point the first entry to the RDoc 2.4.3 folder but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers! Update: Running "gem list -d --version 1.0.1 rdoc" returns: *** LOCAL GEMS *** rdoc (2.4.3) Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss Rubyforge: http://rubyforge.org/projects/rdoc Homepage: http://rdoc.rubyforge.org Installed at: /usr/local/lib/ruby/gems/1.8 RDoc is an application that produces documentation for one or more Ruby source files Therefore, it's definitely the Mac OSX version of RDoc that's interfering with the Gems version. Update 2: I found out, using: `bash --debugger rdoc` that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH `export PATH=/usr/local/lib/ruby/gems/1.8/gems/` I now have a fresh working copy of the latest RDoc!

    Read the article

  • How do I [legally] get the current first responder on the screen on an iPhone?

    - by Anthony D
    I submitted my app a little over a week ago and got the dreaded rejection email today. It reads as follows: Dear -----------, Thank you for submitting --------- to the App Store. Unfortunately it cannot be added to the App Store because it is using a private API. Use of non-public APIs, which as outlined in the iPhone Developer Program License Agreement section 3.3.1 is prohibited: "3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs." The non-public API that is included in your application is firstResponder. Regards, iPhone Developer Program Now, the offending API call is actually a solution I found here on SO: UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow]; UIView *firstResponder = [keyWindow performSelector:@selector(firstResponder)]; So this is my question; How do I get the current first responder on the screen? I'm looking for a legal way that won't get my app rejected. Thanks. I figured this out based on the solution provided by Thomas below. Here is what the final code looks like: @implementation UIView (FindFirstResponder) - (UIView *)findFirstResonder { if (self.isFirstResponder) { return self; } for (UIView *subView in self.subviews) { UIView *firstResponder = [subView findFirstResonder]; if (firstResponder != nil) { return firstResponder; } } return nil; } @end

    Read the article

  • sorting names in a linked list

    - by sil3nt
    Hi there, I'm trying to sort names into alphabetical order inside a linked list but am getting a runtime error. What have I done wrong here?

        #include <iostream>
        #include <string>
        using namespace std;

        struct node {
            string name;
            node *next;
        };

        node *A;

        void addnode(node *&listpointer, string newname) {
            node *temp;
            temp = new node;
            if (listpointer == NULL) {
                temp->name = newname;
                temp->next = listpointer;
                listpointer = temp;
            } else {
                node *add;
                add = new node;
                while (true) {
                    if (listpointer->name > newname) {
                        add->name = newname;
                        add->next = listpointer->next;
                        break;
                    }
                    listpointer = listpointer->next;
                }
            }
        }

        int main() {
            A = NULL;
            string name1 = "bob";
            string name2 = "tod";
            string name3 = "thomas";
            string name4 = "kate";
            string name5 = "alex";
            string name6 = "jimmy";
            addnode(A, name1);
            addnode(A, name2);
            addnode(A, name3);
            addnode(A, name4);
            addnode(A, name5);
            addnode(A, name6);
            while (true) {
                if (A == NULL) { break; }
                cout << "name is: " << A->name << endl;
                A = A->next;
            }
            return 0;
        }

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into postscript files with a program, to be distilled to pdf with Adobe Distiller. I am using this code that I grabbed from Thomas Merz's "Web Publishing with Acrobat-PDF": /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse [ /Title (mot accenté) /Author (mot accenté) /Subject (mot accenté) /Keywords (mot accenté) /DOCINFO pdfmark When you look at the metadata, the accented characters turn into "?" in the Subject and Keyword fields, but not the Title and Author fields. The characters are the same ascii 233 I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). file encoding is latin-1,unix eol I found a mention on adobe forums, but the answer didn't make sense to me. http://forums.adobe.com/message/1165593 I changed the encoding to utf-8, inserted the characters binarily (in VIM : <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places, it didn't work. This is with Acrobat Pro 9 I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a postscript file, or tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have 2 questions related to each other: I have an object (class) called, say MyClass which holds data from my database. Currently I have a list of these objects ( List < MyClass ) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program and so I have to update the data every so often. To update the list of the MyClass I have a method called say, Update, written in another class which accepts a list of MyClass. This updates all the instances of MyClass in the list. However would it be better instead to encapulate the Update() method inside the MyClass object, so instead I would say foreach(MyClass obj in MyClassList) { obj.update(); } What is a better implementation and why? The update method requires a XML reader. I have written an XML reader class which is basically a wrapper over the standard XML reader the language natively provides which provides application specific data collection. Should the XML reader class be in anyway in the "inheritance path" of the MyClass object - the MyClass objects inherits from the XML reader because it uses a few methods. I can't see why it should. I don't like the idea of declaring an instance of the XML Reader class inside of MyClass and an MyClass object is meant to be a simple "record" from the database and I feel giving it loads of methods, other object instances is a bit messy. Perhaps my XML reader class should be static but C#'s native XMLReader isn't static.? Any comments would be greatly appreciated Thanks Thomas

    Read the article

  • making clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make it so the page has no extraneous html output, only what I want. You can view the page here, http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this, <?php // $Id$ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language ?>" lang="<?php print $language->language ?>" dir="<?php print $language->dir ?>"> <head> </head> <body> <?php print $content; ?> </body> </html> Where is that script tag coming from? and how do I get rid of it? If you need more information just ask.

    Read the article

  • MySQL count and sum from two different tables

    - by Agent_x
    Hi all, I have a problem with some queries in PHP and MySQL. I have two different tables with one field in common:

        table 1
        id | hits | num_g | cats | usr_id | active
        1  | 10   | 11    | 1    | 53     | 1
        2  | 13   | 16    | 3    | 53     | 1
        1  | 10   | 22    | 1    | 22     | 1
        1  | 10   | 21    | 3    | 22     | 1
        1  | 2    | 6     | 2    | 11     | 1
        1  | 11   | 1     | 1    | 11     | 1

        table 2
        id | usr_id | points
        1  | 53     | 300

    Now I use this statement to sum just the totals from table 1 (every id counts +1 too):

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0, 15

    and I get the total for each usr_id:

        usr_id | tot_h
        53     | 50
        22     | 63
        11     | 20

    Up to here all is OK. Now I have a second table with extra points (table2). I tried this:

        SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) +
               (SELECT points FROM table2 WHERE usr_id != '0') AS tot_h
        FROM table1
        WHERE usr_id != '0'
        GROUP BY usr_id ASC
        LIMIT 0, 15

    but it seems to add the 300 extra points to all users:

        usr_id | tot_h
        53     | 350
        22     | 363
        11     | 320

    Now, how can I get the total like in the first try, but plus the second table, in one statement? Right now I have just one entry in the second table, but there can be more. Thanks for all the help.
    ===============================================================================
    Hi Thomas, thanks for your reply. I think it is in the right direction, but I'm getting weird results, like:

        usr_id | tot_h
        22     | NULL  <== I think the NULL is because that usr_id has no value in table2
        53     | 1033

    It's like the second user is getting all the values. Then I tried this one:

        SELECT table1.usr_id, COUNT( table1.id ) + SUM( table1.num_g + table1.hits + table2.points ) AS tot_h
        FROM table1
        LEFT JOIN table2 ON table2.usr_id = table1.usr_id
        WHERE table1.usr_id != '0' AND table2.usr_id = table1.usr_id
        GROUP BY table1.usr_id ASC

    Same result: I just get the sum of all values and not a total per user. I need something like this result:

        usr_id | tot_h
        53     | 53   <==== plus 300 points in table2
        22     | 56   <==== plus 100 points in table2

    ///////// the result I need ////////////

        usr_id | tot_h
        53     | 353  <==== plus 300 points in table2
        22     | 156  <==== plus 100 points in table2

    I think the structure needs to be something like these pseudo-statements ;): from table 1, count all ids to get the number of records for each usr_id, then sum hits + num_g; and from table 2 select the extra points where the usr_id is the same as in table 1, to get the result:

        usr_id | tot_h
        53     | 353
        22     | 156

    Read the article

  • How to store multiple variables from a File Input of unknown size in Java?

    - by AlphaOmegaStrife
    I'm a total beginner with my first programming assignment in Java. For our programming assignment, we will be given a .txt file of students like so: 3 345 Lisa Miller 890238 Y 2 <-(Number of classes) Mathematics MTH345 4 A Physics PHY357 3 B Bill Wilton 798324 N 2 English ENG378 3 B Philosophy PHL534 3 A Dandy Goat 746333 Y 1 History HIS101 3 A" The teacher will give us a .txt file on the day of turning it in with a list of unknown students. My problem is: I have a specific class for turning the data from the file into variables to be used for a different class in printing it to the screen. However, I do not know of a good way to get the variables from the input file for the course numbers, since that number is not predetermined. The only way I can think of to iterate over that unknown amount is using a loop, but that would just overwrite my variables every time. Also, the teacher has requested that we not use any JCL classes (I don't really know what this means.) Sorry if I have done a poor job of explaining this, but I can't think of a better way to conceptualize it. Let me know if I can clarify. Edit: public static void analyzeData() { Scanner inputStream = null; try { inputStream = new Scanner(new FileInputStream("Programming Assignment 1 Data.txt")); } catch (FileNotFoundException e) { System.out.println("File Programming Assignment 1 Data.txt could not be found or opened."); System.exit(0); } int numberOfStudents = inputStream.nextInt(); int tuitionPerHour = inputStream.nextInt(); String firstName = inputStream.next(); String lastname = inputStream.next(); String isTuitionPaid = inputStream.next(); int numberOfCourses = inputStream.nextInt(); String courseName = inputStream.next(); String courseNumber = inputStream.next(); int creditHours = inputStream.nextInt(); String grade = inputStream.next(); To show the methods I am using now, I am just using a Scanner to read from the file and for Scanner inputStream, I am using nextInt() or next() to get variables from the file. Obviously this will not work when I do not know exactly how many classes each student will have.
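    Since the file itself says how many courses each student has (the number right before the course list), a growable collection isn't strictly needed: you can size a plain array from that count and fill it in a loop, one student block at a time. Below is a hedged sketch of that idea, meant to slot into analyzeData() after numberOfCourses has been read; the Course holder class and its field names are stand-ins of mine, not part of the assignment.

        // Hypothetical holder for one course line: name, number, credit hours, grade.
        class Course {
            String name;
            String number;
            int creditHours;
            String grade;
        }

        // Inside the per-student part of the read loop, after numberOfCourses is known:
        Course[] courses = new Course[numberOfCourses];   // sized from the value in the file
        for (int i = 0; i < numberOfCourses; i++) {
            Course c = new Course();
            c.name = inputStream.next();          // e.g. "Mathematics"
            c.number = inputStream.next();        // e.g. "MTH345"
            c.creditHours = inputStream.nextInt();
            c.grade = inputStream.next();         // e.g. "A"
            courses[i] = c;
        }
        // An outer loop repeats the whole student block numberOfStudents times.

    The same trick works for the students themselves, since the first number in the file is the student count.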

    Read the article

  • Emails not being delivered

    - by Tomtiger11
    Comment pointed out that this may fix my problem, and it did: Why don't mails show up in the recipient's mailspool? I use Postfix with Dovecot, and when I send an email from my gmail to my server, it is received at the server, but not at my email client using POP3. I can verify it being received at the server using the mail command. This is my main.cf: queue_directory = /var/spool/postfix command_directory = /usr/sbin daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix mail_owner = postfix myhostname = tom4u.eu myorigin = $myhostname inet_interfaces = all inet_protocols = all unknown_local_recipient_reject_code = 550 relay_domains = $mydomain alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 sendmail_path = /usr/sbin/sendmail.postfix newaliases_path = /usr/bin/newaliases.postfix mailq_path = /usr/bin/mailq.postfix setgid_group = postdrop html_directory = no manpage_directory = /usr/share/man sample_directory = /usr/share/doc/postfix-2.6.6/samples readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES smtpd_tls_cert_file = /etc/postfix/certs/cert.pem milter_protocol = 2 milter_default_action = accept smtpd_milters = inet:localhost:8891 non_smtpd_milters = inet:localhost:8891 smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $myhostname smtpd_recipient_restrictions = reject_non_fqdn_recipient,permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,permit broken_sasl_auth_clients = yes smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth If you could help me with this, I'd be most grateful, if you need any more information, please ask. var/log/maillog: May 30 22:44:25 tom4u postfix/smtpd[18626]: connect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/smtpd[18626]: 318F679B7F: client=mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/cleanup[18631]: 318F679B7F: message-id=<CAA_0zdxY-WUFGOC57K_yVn0G+5hN=8KSXuohJqMDB5Rm7bqu8w@mail.gmail.com> May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: mail-we0-f181.google.com [74.125.82.181] not internal May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: not authenticated May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: DKIM verification successful May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: s=20120113 d=gmail.com SSL May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: from=<[email protected]>, size=1720, nrcpt=1 (queue active) May 30 22:44:25 tom4u postfix/smtpd[18626]: disconnect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/local[18632]: 318F679B7F: to=<[email protected]>, relay=local, delay=0.17, delays=0.12/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox) May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: removed May 30 22:45:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18679 May 30 22:45:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0 May 30 22:46:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18725 May 30 22:46:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0

    Read the article
