Search Results


  • SCons and dependencies for python function generating source

    - by elmo
    I have an input file data, a python function parse and a template. What I am trying to do is use the parse function to get a dictionary out of data and use that to replace fields in template. To make this a bit more generic (I perform the same action in a few places) I have defined a custom builder to do so. Below is the definition of the custom builder; values is a dictionary of the form { 'name': (data_file, parse_function) } (you don't really need to read through this, I simply put it here for completeness):

        def TOOL_ADD_FILL_TEMPLATE(env):
            def FillTemplate(env, output, template, values):
                out = output[0]
                subs = {}
                for name, (node, process) in values.iteritems():
                    def Process(env, target, source):
                        with open(env.GetBuildPath(target[0]), 'w') as out:
                            out.write(process(source[0]))
                    builder = env.Builder(action=Process)
                    subs[name] = builder(env, env.GetBuildPath(output[0]) + '_' + name + '_processed.cpp', node)[0]

                def Fill(env, target, source):
                    values = dict((name, n.get_contents()) for name, n in subs.iteritems())
                    contents = template[0].get_contents().format(**values)
                    open(env.GetBuildPath(target[0]), 'w').write(contents)

                builder = env.Builder(action=Fill)
                builder(env, output[0], template + subs.values())
                return output

            env.Append(BUILDERS={'FillTemplate': FillTemplate})

    It works fine when it comes to checking whether data or the template changed: if either did, it rebuilds the output. It even works if I edit the process function directly. However, if my process function looks like this:

        def process(node):
            return subprocess(node)

    and I edit subprocess, the change goes unnoticed. Is there any way to get correct builds without having the process functions invoked on every build?
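
    One approach worth sketching (hedged - add_helper_dependency and the use of inspect.getsource are illustrative names, not part of the question's code or the SCons API): SCons only rebuilds when something in its dependency graph changes, so the helper's source text can be placed into the graph as a Value node.

        import inspect

        def add_helper_dependency(env, targets, *helpers):
            # Hash each helper's source text into the build signature,
            # so editing e.g. subprocess forces a rebuild of the targets.
            for fn in helpers:
                env.Depends(targets, env.Value(inspect.getsource(fn)))

        # e.g. after calling the builder:
        #   out = env.FillTemplate(output, template, values)
        #   add_helper_dependency(env, out, process, subprocess)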

  • How to process a large post array in PHP where item names are all different and not known in advance

    - by Salnajjar
    I have a PHP page that queries a DB to populate a form for the user to modify the data and submit. The query returns a number of rows which contain 3 items: ImageID, ImageName, ImageDescription. The PHP page titles each box in the form with a generic name and appends the ImageID to it, i.e.: ImageID_03, ImageName_34, ImageDescription_22. As it's unknown which images are going to have been retrieved from the DB, I can't know in advance what the names of the form entries will be. The form deals with a large number of entries at the same time. My backend PHP form processor that gets the data just sees it as one big array:

        [imageid_2] => 2
        [imagename_2] => _MG_0214
        [imageid_10] => 10
        [imagename_10] => _MG_0419
        [imageid_39] => 39
        [imagename_39] => _MG_0420
        [imageid_22] => 22
        [imagename_22] => Curly Fern
        [imagedescription_2] => Wibble
        [imagedescription_10] => Wobble
        [imagedescription_39] => Fred
        [imagedescription_22] => Sally

    I've tried to do an array walk on it to split it into 3 arrays, but am stuck:

        // define empty arrays
        $imageidarray = array();
        $imagenamearray = array();
        $imagedescriptionarray = array();

        // our function to call when we walk through the posted items array
        function assignvars($entry, $key) {
            if (preg_match("/imageid/i", $key)) {
                array_push($imageidarray, $entry);
            } elseif (preg_match("/imagename/i", $key)) {
                // echo " ImageName: $entry";
            } elseif (preg_match("/imagedescription/i", $key)) {
                // echo " ImageDescription: $entry";
            }
        }
        array_walk($_POST, 'assignvars');

    This fails with the error: array_push(): First argument should be an array in... Am I approaching this wrong?
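
    The immediate error is a scope problem: inside assignvars the callback sees its own, undefined $imageidarray rather than the global one, so array_push receives a non-array (declaring global $imageidarray; inside the function would silence it). A plain foreach avoids the scope issue and also recovers the ImageID from each field name - a hedged sketch with illustrative variable names:

        <?php
        $ids = array();
        $names = array();
        $descriptions = array();

        foreach ($_POST as $key => $value) {
            // field names look like imagename_34: prefix, underscore, ImageID
            $parts = explode('_', $key, 2);
            if (count($parts) !== 2) {
                continue;
            }
            list($field, $id) = $parts;
            switch (strtolower($field)) {
                case 'imageid':          $ids[$id] = $value; break;
                case 'imagename':        $names[$id] = $value; break;
                case 'imagedescription': $descriptions[$id] = $value; break;
            }
        }
        ?>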

  • Is it possible to make an online registration process for a course in Google Docs with a cap on registrations?

    - by user61551
    I am trying to build an online registration form for an institute I intern at that is running a training programme with multiple courses and hundreds of applicants. I've done registration forms with Google Docs Forms before, but the tricky spot this time round is that I'm trying to put a registration cap on each of the courses. Every person needs to register for one course in every timeslot, and there are seven or so timeslots. However, as seating space in every venue is limited, we would like to cap registrations per course (venue) at 200, after which either a) the applicant will be informed that they need to register for another course in that timeslot if they apply to a full course, or b) in the ideal scenario, the option will be greyed out. Is this possible using Google Docs Forms at all? If not, is there a free alternative that might do the job properly?
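
    Google Forms itself can't grey out a full choice, but a small bound Google Apps Script can at least close registration once a cap is reached. A hedged sketch against today's FormApp API (which postdates the old Docs Forms this question is about); 'FORM_ID' is a placeholder:

        // Install as an "on form submit" trigger so it runs after each response.
        function closeWhenFull() {
          var form = FormApp.openById('FORM_ID');
          if (form.getResponses().length >= 200) {
            form.setAcceptingResponses(false);
            form.setCustomClosedFormMessage('Registration for this course is full.');
          }
        }

    A per-course cap (rather than a whole-form cap) would mean counting responses per choice and editing the choice list from the same trigger, which the API also allows.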

  • How to process more than one XML document in XSLT?

    - by brain_pusher
    Is there any trick to process two XML documents with one XSLT? I mean, a way I can apply the XSLT to a parameter passed in. For example (I omitted declarations for brevity) - XML1, the XML to be transformed:

        <myData>
            <Collection>
            </Collection>
        </myData>

    The XSLT to be applied to the previous XML:

        <xsl:param name='items' />
        <xsl:template match='Collection'>
            <!-- some transformation here -->
        </xsl:template>

    XML2, the XML data passed as the parameter 'items':

        <newData>
            <Item>1</Item>
            <Item>2</Item>
            <Item>3</Item>
        </newData>

    I need to create a set of nodes in the 'Collection' node of XML1 for each 'Item' element of XML2, using XSLT. And I do not know what XML2 contains exactly at design time - it is generated at runtime, so I can't place it inside the XSLT; I only know its schema.
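
    If the processor's API allows node-set parameters (most do - e.g. passing a DOM node as the value of $items), the second document can be walked directly; alternatively the document() function can load it by URI. A hedged sketch of the parameter approach, assuming $items holds XML2's document node:

        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <!-- $items is expected to hold the XML2 tree as a node-set -->
            <xsl:param name="items" />
            <xsl:template match="Collection">
                <Collection>
                    <xsl:for-each select="$items/newData/Item">
                        <node>
                            <xsl:value-of select="." />
                        </node>
                    </xsl:for-each>
                </Collection>
            </xsl:template>
        </xsl:stylesheet>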

  • Iterating over a large data set in long running Python process - memory issues?

    - by user1094786
    I am working on a long-running Python program (part of it is a Flask API, and the other part a realtime data fetcher). Both of my long-running processes iterate, quite often (the API one might even do so hundreds of times a second), over large data sets (second-by-second observations of certain economic series, for example 1-5MB worth of data or even more). They also interpolate, compare and do calculations between series, etc. What techniques can I practice when iterating over / passing as parameters / processing these large data sets, for the sake of keeping my processes alive? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!
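
    The usual first line of defence is to stream instead of materialise: keep each series behind a generator so one pass holds only a small window in memory, and let reference counting do the freeing (a manual gc.collect() mostly pays off only when reference cycles are involved). A hedged sketch of the pattern, with illustrative names:

        from collections import deque

        def observations(rows):
            # rows can be a DB cursor, file, or API page iterator;
            # nothing is accumulated, each pair is yielded and forgotten
            for ts, value in rows:
                yield ts, float(value)

        def moving_average(series, window=60):
            buf = deque(maxlen=window)
            total = 0.0
            for ts, v in series:
                if len(buf) == window:
                    total -= buf[0]   # drop the value about to fall out
                buf.append(v)
                total += v
                yield ts, total / len(buf)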

  • PHP exec() return code is -1 in a forked process, but 0 in a normal script

    - by fb1
    I am using exec() inside a script that runs as a daemon and forks child processes using the PEAR class Net_Server. I am getting a strange issue whereby the return code (the third param of exec) comes back as -1. When I run the command on the command line, or with exec() in a normal PHP script, the return code is 0 as it should be. Does anyone have any idea why this is happening, and how to fix it?
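
    One common culprit (hedged, since the post doesn't show the daemon's signal handling): if the parent ignores SIGCHLD or reaps children in its own handler, the waitpid() inside exec() never sees the child's exit status and the return code comes back as -1. Restoring the default handler before the call is a cheap test (assumes the pcntl extension):

        <?php
        // Let exec() reap its own child instead of the daemon's SIGCHLD handler.
        pcntl_signal(SIGCHLD, SIG_DFL);

        exec('/bin/true', $output, $returnCode);
        var_dump($returnCode);   // 0 again, if SIGCHLD handling was the culprit
        ?>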

  • Using jQuery, what is the simplest function to post some JSON data and process a returned JSON response?

    - by Chris Boesch
    When users click on an element in my webpage, I would like to call a javascript function that reads the values of a few text boxes on the page, wraps their contents as json where the keys are the ids for the text boxes and the values are the contents of each text box, and then posts the resulting json to a url. I would then like the same function to expect back a json response and call another javascript function with the returned json data. Question: What is the best way to write the javascript function to create a json structure from html elements, post the json with jquery, and call another javascript function with the resulting json response from the server?
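
    A hedged sketch of one way to wire that up with $.ajax (the selector, URL and handleReply are illustrative; older browsers need json2.js for JSON.stringify):

        function postBoxes() {
            var payload = {};
            // key = text box id, value = the box's contents
            $('input[type=text]').each(function () {
                payload[this.id] = $(this).val();
            });
            $.ajax({
                url: '/your/endpoint',
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify(payload),
                dataType: 'json',        // parse the reply as JSON
                success: handleReply     // called with the decoded object
            });
        }

        function handleReply(data) {
            // work with the server's JSON response here
        }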

  • How can I remove Ruby, Rails & MySQL to start the installation process again?

    - by ben
    I've been trying to get set up with Ruby on Rails today, but I think I've followed some bad instructions along the way, and nothing seems to work. I've now borrowed the book "Agile Web Development with Rails, Third Edition" from a friend, and want to follow the setup instructions in that. Firstly, do I need to remove what I've set up previously? If yes, how do I do it? I can't seem to find instructions anywhere. I'm running OSX 10.6.1, so I know that it came with some stuff already set up, but I've been installing customized stuff over the top, which I think I'll have to remove. Thanks for reading!
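
    A hedged sketch of the usual cleanup on 10.6 (paths vary with how the earlier install was done, so inspect before deleting anything; Apple's own Ruby lives under /System and /usr/bin and should be left alone):

        # see what was installed on top of the system Ruby
        gem list
        which -a ruby gem rails mysql

        # gems installed with sudo can be removed the same way
        sudo gem uninstall rails mysql

        # anything under /usr/local (or /opt/local for MacPorts) is user-installed
        ls /usr/local/bin /usr/local/lib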

  • Join our webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate

    - by Irem Radzik
    The data integration team has organized a series of webcasts for this summer. We are kicking it off this Thursday, June 30th, at 10am PT with a product update webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate. In this webcast you will hear from product management about the new patch updates to both GoldenGate 11gR1 and ODI 11gR1. Jeff Pollock, Sr. Director of Product Management for ODI, will talk about the new features in Oracle Data Integrator 11.1.1.5, including the data lineage integration with OBI EE, enhanced web services to support flexible architectures, as well as capabilities for efficient object execution such as Load Plans. Jeff will discuss support for complex files and performance enhancements. Chris McAllister, Sr. Director of Product Management for Oracle GoldenGate, will cover the new features of Oracle GoldenGate 11.1.1.1, such as increased data security by supporting the Oracle Database Advanced Security option, deeper integration with Oracle Database, and the expanded list of heterogeneous databases GoldenGate supports. Chris will also talk about the new Oracle GoldenGate 11gR1 release for the HP NonStop platform and will provide information on our strategic direction for product development. Join us this Thursday at 10am PT / 1pm ET to hear directly from Data Integration Product Management. You can register here for the June 30th webcast as well as for the upcoming ones in our summer webcast series.

  • In-Application Support Made Easier

    - by matt.hicks
    With the availability of Oracle UPK 3.6.1 and Enablement Service Pack 1 for Oracle UPK 3.6.1 (Oracle Support login required for both), there are quite a few changes for content admins to absorb. In addition to the support added for dozens of application releases, patches and new target applications, we've also added features to make implementing and using In-Application Support even easier. First, the old Help Menu Integration Guides have been updated and combined into a single In-Application Support Guide. If you integrate UPK content for user assistance, or if you're interested in doing so, read the new guide! It covers all the integration steps, including a section on the new In-Application Support Configuration Utility. If you've integrated content in multiple languages, or if you've ever had to make configuration changes for UPK Help Integration, then you know how cumbersome it was to manually edit javascript files. No longer! The Player now includes a configuration utility that provides a web browser interface for setting all In-Application Support options. From the main screen, you see a list of applications covered by the published content. Clicking on an application name takes you to the edit configuration screen where you can set all Player options for that application. No more digging through the Player folders to find the right javascript file to edit. No complicated javascript syntax to make changes. And with Enablement Service Pack 1 we've added a new feature we're calling the Tabbed Gateway. The Tabbed Gateway is a top-level navigation bar for Help Integration, and all tabs, links, and text are controlled with the Configuration Utility. I think the Tabbed Gateway is a really cool and exciting feature for content launch, and I can't wait to hear your ideas for how to use it with your content. Let me know in comments or email!

  • You Need BRM When You have EBS – and Even When You Don’t!

    - by bwalstra
    Here is a list of criteria for testing your business systems (Oracle E-Business Suite, EBS) or otherwise to support your lines of digital business - if you score low, you need Oracle Billing and Revenue Management (BRM).

    Functions: Scalability; High Availability (99.999%); Performance; Extensibility (e.g. APIs, Tools); Upgradability; Maintenance; Security; Standards Compliance; Regulatory Compliance (e.g. SOX); User Experience; Implementation Complexity.

    Features: Customer Management; Real-Time Service Authorization; Pricing/Promotions Flexibility; Subscriptions; Usage Rating and Pricing; Real-Time Balance Mgmt.; Non-Currency Resources; Billing & Invoicing; A/R & G/L; Payments & Collections; Revenue Assurance.

    Integration with Key Enterprise Applications: Reporting; Business Intelligence; Order & Service Mgmt (OSM); Siebel CRM; E-Business Suite; On-/Off-line Mediation; Payment Processing; Taxation; Royalties & Settlements; Operations Management; Disaster Recovery.

    Overall Evaluation: Implementation; Configuration; Extensibility; Maintenance; Upgradability; Functional Richness; Feature Richness; Usability; OOB Integrations; Operations Management; Leveraging Oracle Technology; Overall Fit for Purpose.

    You need Oracle BRM because it is built for high-volume transaction processing; monetizes any service or event based on any metric; supports high-volume usage rating, pricing and promotions; provides real-time charging, service authorization and balance management; supports any account structure (e.g. corporate hierarchies etc.); scales from low volumes to extremely high volumes of transactions (e.g. billions of transactions per hour); and exposes every single function via APIs (e.g. Java, C/C++, PERL, COM, Web Services, JCA).

    Immediate business benefits of BRM: improved business agility and performance - it supports the flexibility, innovation, and customer-centricity required for current and future business models; faster time to market for new products and services - a 360 view of the customer in real time means products can be launched to targeted customers at a record-breaking pace; streamlined deployment and operation - productized integrations, standards-based APIs, and OOB enablement lower deployment and maintenance costs; an extensible and scalable solution - risk is minimized, with the initial phase deployed rapidly and the solution extended and scaled seamlessly per business requirements.

    Key considerations: productized integration with key Oracle applications (lower integration risks and cost, an efficient order-to-cash process); an engineered solution with certification on the Exa platform (Exadata tested at PayPal in the re-platforming project; optimal performance of Oracle assets on Oracle hardware); a productized solution in Rapid Offer Design and Order Delivery (fast offer design and implementation, significantly shorter order cycle time); productized integration with Oracle Enterprise Manager (visibility into system operability for optimal uptime).

  • The emergence of Atlassian's Bamboo (and a free SQL Source Control license offer!)

    - by David Atkinson
    The rise in demand for database continuous integration has forced me to skill up in various new tools and technologies, particularly build servers. We have been using JetBrains' TeamCity here at Red Gate for a couple of years now, having replaced the ageing CruiseControl.NET, so it was a natural choice for us to use this for our database CI demos. Most of our early adopter customers have also transitioned away from CruiseControl, the majority to TeamCity and Microsoft's TeamBuild. However, more recently, for reasons we've yet to fully comprehend, we've observed a significant surge in the number of evaluators for Atlassian's Bamboo. I installed this a couple of weeks back to satisfy myself that it works seamlessly with Red Gate tools. As you would expect, Bamboo's UI has the same clean feel found in any Atlassian tool (we use JIRA extensively here at Red Gate). In the coming weeks I will post a short step-by-step guide to setting up SQL Server continuous integration using the Red Gate command lines. To help us further optimize the integration between these tools, I'd be very keen to hear from any Bamboo users who also use Red Gate tools and who might be willing to participate in usability tests and other similar research in exchange for Amazon vouchers. If you are interested in helping out please contact me at David dot Atkinson at red-gate.com. I recently spoke with Sarah, the product marketing manager for Bamboo, and we ended up having a detailed conversation about database CI, which has been meticulously documented in the form of a blog post on Atlassian's website: http://blogs.atlassian.com/2012/05/database-continuous-integration-redgate/ We've also managed to persuade Red Gate marketing to provide a great free-tool offer: a free SQL Source Control or SQL Connect license to Atlassian users, provided it is claimed before the end of June! Full details are at the bottom of the post. Technorati Tags: sql server

  • Separation of development responsibilities in a new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months - 1 year (although not all developers are likely to stay on, and some might filter off towards the end). Our team is going to be small, so this will help out a bit, I believe. The team will essentially consist of: 3 x developers (all different levels, i.e. more senior, intermediate and junior) and 1 x project manager / product owner / tester, with an external company responsible for doing our design work. General project/development decisions so far have included: develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company); use MVVM architecture; use Ninject and DI where possible; attempt to use TDD as much as possible to drive development; keep our controllers as skinny as possible; keep our views as simple as possible. During our discussions two approaches have been broached as to how to separate the workload given our objectives outlined above.

    OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required.

        View prototypes (**Graphic designer**)
            |
            - Mockups
            |
        Views (Razor and view helpers etc) & Javascript (**Developer 1**)
            |
            - View models (Integration point)
            |
        Controllers and Application logic (**Developer 2**)
            |
            - Models (Integration point)
            |
        Domain model and persistence (**Developer 3**)

    OPTION 2: A more task-orientated approach where each person is responsible for the completion of an entire task (story) from view - controller - model.

    QUESTION: For those who have worked in small teams developing MVC projects, how have you managed the workload distribution in this situation? I can't imagine the junior would be responsible for building parts of the underlying architecture, so would giving them responsibility for the views make sense, considering we are trying to keep it simple?

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends, and then thought: why don't they know this? Such as this article here - in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that: improves the out of the box experience - just build the mapping and the appropriate KM is used; and improves out of the box performance for file to file data movement. This improvement to the out of the box handling for File to File data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration handling. In the past I had seen some consultants write perl versions of the file to file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure java to perform the integration, using java.io classes to read and write the file in a pipe. It uses java threading to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop - the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration; plus, and it's a big plus, it was much faster than before. So if you are doing any file to file transformations, check it out!
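
    For a feel of the mechanism described above (a hedged illustration only, not the KM's actual code): streaming a file through java.io buffers with one worker thread per source file looks roughly like this:

        import java.io.*;

        class CopyTask implements Runnable {
            private final File src, dst;
            CopyTask(File src, File dst) { this.src = src; this.dst = dst; }

            public void run() {
                try (InputStream in = new BufferedInputStream(new FileInputStream(src));
                     OutputStream out = new BufferedOutputStream(new FileOutputStream(dst))) {
                    byte[] buf = new byte[64 * 1024];
                    int n;
                    while ((n = in.read(buf)) > 0) {
                        out.write(buf, 0, n);   // a transform/filter could sit here
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        }

        // one thread per matched source file, e.g.:
        // new Thread(new CopyTask(new File("a.src"), new File("a.dst"))).start();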

  • ArchBeat Link-o-Rama for 2012-09-18

    - by Bob Rhubart
    Eye on Architecture This week the Oracle Technology Network Solution Architect Homepage features an Oracle Reference Architecture for Software Engineering, a new podcast focusing on why IT governance is important whether you like it or not, and information on the next free OTN Architect Day event. Enabling WebLogic Administrator Group Inside Custom ADF Application | Andrejus Baranovskis A short but informative technical post from Oracle ACE Director Andrejus Baranovkis. Oracle OpenWorld 2012 Hands-on Lab: Leading Your Everyday Application Integration Projects with Enterprise SOA Yet another session to squeeze into your already-jammed Oracle OpenWorld schedule. This hands-on lab focuses on how "Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects." Mass Metadata Updates with Folders | Kyle Hatlestad "With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced," explains Fusion Middleware A-Team blogger Kyle Hatlestad. "This is meant to replace the folder architecture of 'Folders_g'. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten." Creating your first OAM 11g R2 domain | Chris Johnson Prolific Fusion Middleware A-Team Blogger Chris Johnson reads the Oracle Identity and Access Management Installation Guide so you don't have to (though you probably should). Thought for the Day "Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice." — Christopher Alexander Source: SoftwareQuotes.com

  • The latest Oracle Social Network News from Open World

    - by me
    Highlights: Oracle and partners showcase the latest development around Oracle Social Network (OSN); integration of the OSN Social Fabric into business applications like Finance, HCM and Customer Experience; partners like Cisco WebEx, Avaya, Weemo, Lingotek and HarQen showcase OSN integration; Oracle shares details of its internal OSN deployment. Please visit us at 2413 Moscone South Exhibition Hall and experience a live OSN demo.

    Social Fabric: Oracle Social Network socializes your applications, processes and content within your enterprise. Here are some examples of what is shown at Oracle Open World.

    Socialize the Finance department: enable Finance departments to collaborate instantly during quarter close with real-time information access; enable finance professionals in the back office to easily interact with the rest of the company; provide privacy when discussing sensitive financial results within Conversations.

    Socialize Human Capital Management (HCM): promotes attainable performance goals that achieve the business objectives of the enterprise; captures expertise across the network; provides a continuous feedback loop that results in productivity and innovation improvements tied to higher employee engagement.

    OSN and Customer Experience: find the person with the best skills to assist with the issue; collaborate in real time in the context of the issue; track an agent's collaboration contributions; identify and contribute relevant knowledge back to the system.

    Cisco/WebEx integration: the web conferencing tool of your choice can be integrated with OSN. In the example below you can see the integration of the Cisco WebEx solution into OSN - and sure, this works on mobile devices as well.

    OSN @ Oracle: Oracle has deployed OSN as part of the internal Fusion CRM application rollout. After just 4 months we can see impressive usage patterns.

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
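
    For illustration only (my sketch, not the author's schema): the Resync storage record described above maps naturally onto a single table, e.g. in T-SQL:

        CREATE TABLE ResyncRecord (
            IdA            nvarchar(50) NOT NULL,
            IdB            nvarchar(50) NULL,      -- unknown until B answers
            EntityType     nvarchar(50) NOT NULL,
            Operation      nvarchar(10) NOT NULL,  -- Create / Update / Delete
            IsSyncStarted  bit NOT NULL DEFAULT 0,
            IsSyncFinished bit NOT NULL DEFAULT 0
        );

        -- the Resync() scan: synchronizations that never finished
        SELECT IdA, EntityType, Operation
        FROM ResyncRecord
        WHERE IsSyncStarted = 1 AND IsSyncFinished = 0;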

  • Any clue in these logs why keyboard, audio and internet are messed up?

    - by mmj
    Jun 7 00:01:18 Isis lightdm: pam_unix(lightdm-autologin:session): session opened for user mimi by (uid=0) Jun 7 00:01:18 Isis lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0 Jun 7 00:01:26 Isis polkitd(authority=local): Registered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session1 (system bus name :1.36 [/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) Jun 7 00:01:29 Isis dbus[610]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.44" (uid=1000 pid=1763 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.15" (uid=0 pid=1219 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jun 7 00:07:55 Isis sudo: pam_unix(sudo:auth): authentication failure; logname=mimi uid=1000 euid=0 tty=/dev/pts/1 ruser=mimi rhost= user=mimi Jun 7 00:08:11 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/add-apt-repository ppa:colingille/freshlight Jun 7 00:08:11 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 00:08:32 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 00:11:20 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/apt-get install gparted Jun 7 00:11:20 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 00:11:59 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 00:17:02 Isis CRON[2651]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 00:17:02 Isis CRON[2651]: pam_unix(cron:session): session closed for user root Jun 7 00:17:32 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain ONE-SHOT authorization for action com.ubuntu.pkexec.gparted for unix-process:2655:96838 [/bin/sh /usr/bin/gparted-pkexec] (owned by unix-user:mimi) Jun 7 00:17:32 Isis pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000) Jun 7 00:17:32 Isis pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device Jun 7 00:17:32 Isis pkexec[2657]: mimi: Executing command [USER=root] [TTY=unknown] [CWD=/home/mimi] [COMMAND=/usr/sbin/gparted] Jun 7 00:48:15 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain ONE-SHOT authorization for action com.ubuntu.pkexec.gparted for unix-process:3813:281120 [/bin/sh /usr/bin/gparted-pkexec] (owned by unix-user:mimi) Jun 7 00:48:15 Isis pkexec: pam_unix(polkit-1:session): session opened for user root by (uid=1000) Jun 7 00:48:15 Isis pkexec: pam_ck_connector(polkit-1:session): cannot determine display-device Jun 7 00:48:15 Isis pkexec[3815]: mimi: Executing command [USER=root] [TTY=unknown] [CWD=/home/mimi] [COMMAND=/usr/sbin/gparted] Jun 7 01:17:01 Isis CRON[3960]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 01:17:01 Isis CRON[3960]: pam_unix(cron:session): session closed for user root Jun 7 02:08:52 Isis gnome-screensaver-dialog: gkr-pam: unlocked login keyring Jun 7 02:17:01 Isis CRON[4246]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 02:17:01 Isis CRON[4246]: pam_unix(cron:session): session closed for user root Jun 7 02:17:05 Isis sudo: 
mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/apt-get install unetbootin Jun 7 02:17:05 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 02:17:57 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:18:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:18:59 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:18:59 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:19:26 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:19:26 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:19:26 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 02:33:21 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 02:33:21 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no' Jun 7 02:33:21 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory Jun 7 02:40:04 Isis sudo: mimi : TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin rootcheck=no Jun 7 02:40:04 Isis sudo: pam_unix(sudo:session): session opened for user root by mimi(uid=1000) Jun 7 03:17:01 Isis CRON[5506]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 03:17:01 Isis CRON[5506]: pam_unix(cron:session): session closed for user root Jun 7 03:33:24 Isis sudo: pam_unix(sudo:session): session closed for user root Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): conversation failed Jun 7 03:33:43 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi] Jun 7 03:33:43 Isis sudo: mimi : 3 incorrect password attempts ; TTY=pts/1 ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin showall=yes 'rootcheck=no' Jun 7 03:33:43 Isis sudo: unable 
to execute /usr/sbin/sendmail: No such file or directory Jun 7 04:17:01 Isis CRON[6119]: pam_unix(cron:session): session opened for user root by (uid=0) Jun 7 04:17:01 Isis CRON[6119]: pam_unix(cron:session): session closed for user root Jun 7 04:18:35 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.79 [/usr/bin/python /usr/bin/landscape-client-ui-install] (owned by unix-user:mimi) Jun 7 04:19:11 Isis groupadd[6702]: group added to /etc/group: name=landscape, GID=127 Jun 7 04:19:11 Isis groupadd[6702]: group added to /etc/gshadow: name=landscape Jun 7 04:19:11 Isis groupadd[6702]: new group: name=landscape, GID=127 Jun 7 04:19:11 Isis useradd[6706]: new user: name=landscape, UID=115, GID=127, home=/var/lib/landscape, shell=/bin/false Jun 7 04:19:12 Isis usermod[6711]: change user 'landscape' password Jun 7 04:19:12 Isis chage[6716]: changed password expiry for landscape Jun 7 04:19:37 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6146:1543697 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:20:20 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6832:1555313 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:21:04 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:6827:1555123 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:21:08 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:6827:1555123 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:21:44 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.87 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:22:27 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:7830:1567424 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi) Jun 7 04:25:50 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:7876:1584865 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi) Jun 7 04:25:52 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action 
com.ubuntu.languageselector.setsystemdefaultlanguage for unix-process:7876:1584865 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi)
Jun 7 05:11:57 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action org.debian.apt.install-or-remove-packages for system-bus-name::1.95 [/usr/bin/python /usr/bin/gnome-language-selector] (owned by unix-user:mimi)
Jun 7 05:17:02 Isis CRON[8708]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 05:17:02 Isis CRON[8708]: pam_unix(cron:session): session closed for user root
Jun 7 05:28:03 Isis lightdm: pam_unix(lightdm-autologin:session): session opened for user mimi by (uid=0)
Jun 7 05:28:03 Isis lightdm: pam_ck_connector(lightdm-autologin:session): nox11 mode, ignoring PAM_TTY :0
Jun 7 05:28:17 Isis polkitd(authority=local): Registered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session1 (system bus name :1.32 [/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jun 7 05:28:32 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.44" (uid=1000 pid=1736 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ")
Jun 7 06:17:01 Isis CRON[2391]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 06:17:02 Isis CRON[2391]: pam_unix(cron:session): session closed for user root
Jun 7 06:25:02 Isis CRON[2492]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 06:25:02 Isis CRON[2492]: pam_unix(cron:session): session closed for user root
Jun 7 07:17:01 Isis CRON[3174]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 07:17:01 Isis CRON[3174]: pam_unix(cron:session): session closed for user root
Jun 7 07:30:01 Isis CRON[3397]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 07:30:01 Isis CRON[3397]: pam_unix(cron:session): session closed for user root
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:01 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:01 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output
Jun 7 08:09:01 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:09:59 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:09:59 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output
Jun 7 08:09:59 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 08:10:55 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 08:10:55 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/share/checkbox/backend --path=/usr/share/checkbox/scripts:/usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games /tmp/checkboxQbuE6V/input /tmp/checkboxQbuE6V/output
Jun 7 08:10:55 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory
Jun 7 08:17:01 Isis CRON[4215]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 08:17:01 Isis CRON[4215]: pam_unix(cron:session): session closed for user root
Jun 7 09:17:02 Isis CRON[4766]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 09:17:02 Isis CRON[4766]: pam_unix(cron:session): session closed for user root
Jun 7 10:17:02 Isis CRON[5046]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 10:17:02 Isis CRON[5046]: pam_unix(cron:session): session closed for user root
Jun 7 11:17:02 Isis CRON[5325]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 11:17:02 Isis CRON[5325]: pam_unix(cron:session): session closed for user root
Jun 7 12:17:01 Isis CRON[5617]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 12:17:01 Isis CRON[5617]: pam_unix(cron:session): session closed for user root
Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): authentication failure; logname= uid=1000 euid=1000 tty=:0.0 ruser= rhost= user=mimi
Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): getting password (0x00000388)
Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): pam_get_item returned a password
Jun 7 13:07:51 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_USER_UNKNOWN (10), NTSTATUS: NT_STATUS_NO_SUCH_USER, Error message was: No such user
Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): conversation failed
Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_unix(gnome-screensaver:auth): auth could not identify password for [mimi]
Jun 7 13:08:03 Isis gnome-screensaver-dialog: pam_winbind(gnome-screensaver:auth): getting password (0x00000388)
Jun 7 13:08:08 Isis lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
Jun 7 13:08:08 Isis lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :1
Jun 7 13:08:13 Isis lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "mimi"
Jun 7 13:08:16 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.91" (uid=104 pid=5961 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ")
Jun 7 13:08:18 Isis dbus[660]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.98" (uid=104 pid=5999 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1333 comm="/usr/sbin/console-kit-daemon --no-daemon ")
Jun 7 13:10:15 Isis lightdm: pam_unix(lightdm:session): session closed for user lightdm
Jun 7 13:17:02 Isis CRON[6181]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 13:17:02 Isis CRON[6181]: pam_unix(cron:session): session closed for user root
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): conversation failed
Jun 7 13:55:14 Isis sudo: pam_unix(sudo:auth): auth could not identify password for [mimi]
Jun 7 13:55:14 Isis sudo: mimi : 3 incorrect password attempts ; TTY=unknown ; PWD=/home/mimi ; USER=root ; COMMAND=/usr/bin/unetbootin 'rootcheck=no'
Jun 7 13:55:14 Isis sudo: unable to execute /usr/sbin/sendmail: No such file or directory
Jun 7 14:02:33 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6736:3087856 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi)
Jun 7 14:02:51 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 FAILED to authenticate to gain authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6752:3089992 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi)
Jun 7 14:03:14 Isis polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session1 successfully authenticated as unix-user:mimi to gain TEMPORARY authorization for action com.canonical.LandscapeClientSettings.configure for unix-process:6763:3092515 [/usr/bin/python /usr/bin/landscape-client-settings-ui] (owned by unix-user:mimi)
Jun 7 14:17:01 Isis CRON[6933]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 14:17:01 Isis CRON[6933]: pam_unix(cron:session): session closed for user root
Jun 7 15:17:02 Isis CRON[7611]: pam_unix(cron:session): session opened for user root by (uid=0)
Jun 7 15:17:02 Isis CRON[7611]: pam_unix(cron:session): session closed for user root

    Read the article

  • MySQL - help me optimize this query

    - by sandeepan-nath
    About the system: The system has a total of 8 tables:
    - Users
    - Tutor_Details (tutors are a type of user; Tutor_Details is linked to Users)
    - learning_packs (stores packs created by tutors)
    - learning_packs_tag_relations (holds tag relations meant for search)
    - tutors_tag_relations and tags
    - orders (containing purchase details of tutors' packs) and order_details, linked to orders and tutor_details
    For a clearer idea of the tables involved, please check "The tables" section at the end. A tags-based search approach is followed: tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searchable). For details, please check the section "How tags work in this system?" below.

    Following is a simpler representation (not the actual) of the more complex query I am trying to optimize. Placeholders in backquotes stand for parts of the query I have summarized:

    select
        SUM(DISTINCT(t.tag LIKE "%Dictatorship%")) as key_1_total_matches,
        SUM(DISTINCT(t.tag LIKE "%democracy%")) as key_2_total_matches,
        td.*, u.*, count(distinct(od.id_od)),
        if (lp.id_lp > 0) then `some conditional logic on lp fields` else 0 as tutor_popularity
    from Tutor_Details AS td
    JOIN Users as u on u.id_user = td.id_user
    LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
    LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
    LEFT JOIN `some other tables on lp.id_lp - let's call them the learning pack tables set (including Learning_Packs)`
    LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
    LEFT JOIN Orders as o on od.id_order = o.id_order
    LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
    JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag)
    where `some condition on Users table's fields`
    AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
        THEN `some conditions on learning pack tables set`
        ELSE 1 END
    AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
        THEN `some conditions on webclasses tables set`
        ELSE 1 END
    AND CASE WHEN (od.id_od > 0)
        THEN od.id_author = td.id_tutor and `some conditions on Orders table's fields`
        ELSE 1 END
    AND (t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%")
    group by td.id_tutor
    HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
    order by tutor_popularity desc, u.surname asc, u.name asc
    limit 0,20

    =====================================================================

    What does the above query do? It does an AND-logic search on the search keywords (two in this example: "Democracy" and "Dictatorship") and returns only those tutors for whom both keywords are present in the union of two sets: the tutor's own details and the details of all the packs created by that tutor. To make things clear, suppose a tutor named "Sandeepan Nath" has created a pack "My first pack". Then:
    - Searching "Sandeepan Nath" returns Sandeepan Nath.
    - Searching "Sandeepan first" returns Sandeepan Nath.
    - Searching "Sandeepan second" does not return Sandeepan Nath.

    =====================================================================

    The problem: the results returned by the above query are correct (the AND logic works as expected), but on heavily loaded databases the query takes around 25 seconds, against normal query timings in the order of 0.005-0.0002 seconds, which makes it totally unusable. Some of the delay may be caused by fields that have not yet been indexed, but I would appreciate a better query as a solution, optimized as much as possible and returning the same results.

    =====================================================================

    How tags work in this system? When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details (name, surname, etc.). When tutors create packs, tags are again entered and tag relations are created with respect to the pack's details (pack name, description, etc.). Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table.

    =====================================================================

    The tables. Most of the following tables contain many other fields which I have omitted here.

    CREATE TABLE IF NOT EXISTS users (
        id_user int(10) unsigned NOT NULL AUTO_INCREMENT,
        name varchar(100) NOT NULL DEFAULT '',
        surname varchar(155) NOT NULL DEFAULT '',
        PRIMARY KEY (id_user)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=636;

    CREATE TABLE IF NOT EXISTS tutor_details (
        id_tutor int(10) NOT NULL AUTO_INCREMENT,
        id_user int(10) NOT NULL DEFAULT '0',
        PRIMARY KEY (id_tutor),
        KEY Users_FKIndex1 (id_user)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=51;

    CREATE TABLE IF NOT EXISTS orders (
        id_order int(10) unsigned NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (id_order),
        KEY Orders_FKIndex1 (id_user)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=275;

    ALTER TABLE orders ADD CONSTRAINT Orders_ibfk_1 FOREIGN KEY (id_user) REFERENCES users (id_user) ON DELETE NO ACTION ON UPDATE NO ACTION;

    CREATE TABLE IF NOT EXISTS order_details (
        id_od int(10) unsigned NOT NULL AUTO_INCREMENT,
        id_order int(10) unsigned NOT NULL DEFAULT '0',
        id_author int(10) NOT NULL DEFAULT '0',
        PRIMARY KEY (id_od),
        KEY Order_Details_FKIndex1 (id_order)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=284;

    ALTER TABLE order_details ADD CONSTRAINT Order_Details_ibfk_1 FOREIGN KEY (id_order) REFERENCES orders (id_order) ON DELETE NO ACTION ON UPDATE NO ACTION;

    CREATE TABLE IF NOT EXISTS learning_packs (
        id_lp int(10) unsigned NOT NULL AUTO_INCREMENT,
        id_author int(10) unsigned NOT NULL DEFAULT '0',
        PRIMARY KEY (id_lp),
        KEY Learning_Packs_FKIndex2 (id_author),
        KEY id_lp (id_lp)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=23;

    CREATE TABLE IF NOT EXISTS tags (
        id_tag int(10) unsigned NOT NULL AUTO_INCREMENT,
        tag varchar(255) DEFAULT NULL,
        PRIMARY KEY (id_tag),
        UNIQUE KEY tag (tag),
        KEY id_tag (id_tag),
        KEY tag_2 (tag),
        KEY tag_3 (tag)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3419;

    CREATE TABLE IF NOT EXISTS tutors_tag_relations (
        id_tag int(10) unsigned NOT NULL DEFAULT '0',
        id_tutor int(10) DEFAULT NULL,
        KEY Tutors_Tag_Relations (id_tag),
        KEY id_tutor (id_tutor),
        KEY id_tag (id_tag)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    ALTER TABLE tutors_tag_relations ADD CONSTRAINT Tutors_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;

    CREATE TABLE IF NOT EXISTS learning_packs_tag_relations (
        id_tag int(10) unsigned NOT NULL DEFAULT '0',
        id_tutor int(10) DEFAULT NULL,
        id_lp int(10) unsigned DEFAULT NULL,
        KEY Learning_Packs_Tag_Relations_FKIndex1 (id_tag),
        KEY id_lp (id_lp),
        KEY id_tag (id_tag)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    ALTER TABLE learning_packs_tag_relations ADD CONSTRAINT Learning_Packs_Tag_Relations_ibfk_1 FOREIGN KEY (id_tag) REFERENCES tags (id_tag) ON DELETE NO ACTION ON UPDATE NO ACTION;

    =====================================================================

    Following is the exact query (this also includes classes - tutors can create classes, and search terms are matched against classes created by tutors):

    select
        count(distinct(od.id_od)) as tutor_popularity,
        CASE WHEN (IF((wc.id_wc > 0),
            (wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))), 0))
            THEN 1 ELSE 0 END as 'classes_published',
        CASE WHEN (IF((lp.id_lp > 0),
            (lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))), 0))
            THEN 1 ELSE 0 END as 'packs_published',
        td.*, u.*
    from Tutor_Details AS td
    JOIN Users as u on u.id_user = td.id_user
    LEFT JOIN Learning_Packs_Tag_Relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor
    LEFT JOIN Learning_Packs AS lp ON lptagrels.id_lp = lp.id_lp
    LEFT JOIN Learning_Packs_Categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat
    LEFT JOIN Learning_Packs_Categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent
    LEFT JOIN Learning_Pack_Content as lpct on (lp.id_lp = lpct.id_lp)
    LEFT JOIN Webclasses_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
    LEFT JOIN WebClasses AS wc ON wtagrels.id_wc = wc.id_wc
    LEFT JOIN Learning_Packs_Categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat
    LEFT JOIN Learning_Packs_Categories AS wccp ON wccp.id_lp_cat = wcc.id_parent
    LEFT JOIN Order_Details as od on td.id_tutor = od.id_author
    LEFT JOIN Orders as o on od.id_order = o.id_order
    LEFT JOIN Tutors_Tag_Relations as ttagrels ON td.id_tutor = ttagrels.id_tutor
    JOIN Tags as t on (t.id_tag = ttagrels.id_tag) OR (t.id_tag = lptagrels.id_tag) OR (t.id_tag = wtagrels.id_tag)
    where (u.country='IE' or u.country IN ('INT'))
    AND CASE WHEN ((t.id_tag = lptagrels.id_tag) AND (lp.id_lp > 0))
        THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND (lpcp.country_code='IE' or lpcp.country_code IN ('INT'))
        ELSE 1 END
    AND CASE WHEN ((t.id_tag = wtagrels.id_tag) AND (wc.id_wc > 0))
        THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND (wccp.country_code='IE' or wccp.country_code IN ('INT'))
        ELSE 1 END
    AND CASE WHEN (od.id_od > 0)
        THEN od.id_author = td.id_tutor and o.order_status = 'paid' and CASE WHEN (od.id_wc > 0) THEN od.can_attend_class = 1 ELSE 1 END
        ELSE 1 END
    AND 1
    group by td.id_tutor
    order by tutor_popularity desc, u.surname asc, u.name asc
    limit 0,20

    Please note: the provided database structure does not show all the fields and tables used in this query.
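
    One direction worth trying for this kind of "match ALL keywords" requirement is to split the work in two steps: first resolve only the ids of tutors whose combined tag set (own tags plus pack tags) hits every keyword, which is a cheap grouped query over the two narrow relation tables, and only then run the heavy joins for that short id list. The sketch below illustrates the idea in Python with embedded SQL against the simplified tables shown above; the connection details are placeholders and the second query keeps only the popularity/ordering core, so treat it as an outline of the approach rather than a drop-in replacement for the full query.

    import MySQLdb  # assumption: any DB-API 2.0 driver works the same way

    # Placeholder credentials - adjust to the real database.
    conn = MySQLdb.connect(host="localhost", user="dbuser", passwd="dbpass", db="dbname")
    cur = conn.cursor()

    keywords = ["Dictatorship", "democracy"]

    # Step 1: relational division over the union of tutor tags and pack tags.
    # In MySQL, (t.tag LIKE '%kw%') evaluates to 0 or 1, so requiring every
    # per-keyword SUM to be >= 1 keeps only tutors matching ALL keywords.
    having = " AND ".join(["SUM(t.tag LIKE %s) >= 1"] * len(keywords))
    step1 = (
        "SELECT rel.id_tutor "
        "FROM (SELECT id_tutor, id_tag FROM tutors_tag_relations "
        "      UNION ALL "
        "      SELECT id_tutor, id_tag FROM learning_packs_tag_relations) AS rel "
        "JOIN tags AS t ON t.id_tag = rel.id_tag "
        "GROUP BY rel.id_tutor "
        "HAVING " + having
    )
    cur.execute(step1, ["%" + kw + "%" for kw in keywords])
    ids = [row[0] for row in cur.fetchall()]

    # Step 2: fetch details and popularity only for the matching tutors.
    # Grouping by td.id_tutor while selecting td.*/u.* mirrors the original
    # query and relies on MySQL's permissive GROUP BY of that era.
    if ids:
        placeholders = ", ".join(["%s"] * len(ids))
        step2 = (
            "SELECT td.*, u.*, COUNT(DISTINCT od.id_od) AS tutor_popularity "
            "FROM tutor_details AS td "
            "JOIN users AS u ON u.id_user = td.id_user "
            "LEFT JOIN order_details AS od ON od.id_author = td.id_tutor "
            "WHERE td.id_tutor IN (" + placeholders + ") "
            "GROUP BY td.id_tutor "
            "ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC "
            "LIMIT 0, 20"
        )
        cur.execute(step2, ids)
        results = cur.fetchall()

    The point of the split is that step 1 touches only the two narrow relation tables and tags, while the expensive LEFT JOINs run for a handful of tutors in step 2. Note that the leading-wildcard LIKE still prevents the tag indexes from helping with the text match itself; if that remains the bottleneck, a FULLTEXT index on tags.tag (which, on MySQL versions of that era, required a MyISAM table) is the usual next step.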

    Read the article

  • System Account Logon Failures every 30 seconds

    - by floyd
    We have two Windows 2008 R2 SP1 servers running in a SQL failover cluster. On one of them we are getting the following events in the security log every 30 seconds. The parts that are blank are actually blank. Has anyone seen similar issues, or can anyone assist in tracking down the cause of these events? No other event logs show anything relevant that I can tell.

    Log Name: Security
    Source: Microsoft-Windows-Security-Auditing
    Date: 10/17/2012 10:02:04 PM
    Event ID: 4625
    Task Category: Logon
    Level: Information
    Keywords: Audit Failure
    User: N/A
    Computer: SERVERNAME.domainname.local
    Description: An account failed to log on.
    Subject:
        Security ID: SYSTEM
        Account Name: SERVERNAME$
        Account Domain: DOMAINNAME
        Logon ID: 0x3e7
    Logon Type: 3
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name:
        Account Domain:
    Failure Information:
        Failure Reason: Unknown user name or bad password.
        Status: 0xc000006d
        Sub Status: 0xc0000064
    Process Information:
        Caller Process ID: 0x238
        Caller Process Name: C:\Windows\System32\lsass.exe
    Network Information:
        Workstation Name: SERVERNAME
        Source Network Address: -
        Source Port: -
    Detailed Authentication Information:
        Logon Process: Schannel
        Authentication Package: Kerberos
        Transited Services: -
        Package Name (NTLM only): -
        Key Length: 0

    A second event follows every one of the above events:

    Log Name: Security
    Source: Microsoft-Windows-Security-Auditing
    Date: 10/17/2012 10:02:04 PM
    Event ID: 4625
    Task Category: Logon
    Level: Information
    Keywords: Audit Failure
    User: N/A
    Computer: SERVERNAME.domainname.local
    Description: An account failed to log on.
    Subject:
        Security ID: NULL SID
        Account Name: -
        Account Domain: -
        Logon ID: 0x0
    Logon Type: 3
    Account For Which Logon Failed:
        Security ID: NULL SID
        Account Name:
        Account Domain:
    Failure Information:
        Failure Reason: An Error occured during Logon.
        Status: 0xc000006d
        Sub Status: 0x80090325
    Process Information:
        Caller Process ID: 0x0
        Caller Process Name: -
    Network Information:
        Workstation Name: -
        Source Network Address: -
        Source Port: -
    Detailed Authentication Information:
        Logon Process: Schannel
        Authentication Package: Microsoft Unified Security Protocol Provider
        Transited Services: -
        Package Name (NTLM only): -
        Key Length: 0

    EDIT UPDATE: I have a bit more information to add. I installed Network Monitor on this machine, filtered for Kerberos traffic, and found the following, which corresponds to the timestamps in the security audit log:

    A Kerberos AS_Request
    Cname: CN=SQLInstanceName
    Realm: domain.local
    Sname: krbtgt/domain.local
    Reply from DC: KRB_ERROR: KDC_ERR_C_PRINCIPAL_UNKNOWN

    I then checked the security audit logs of the DC which responded and found the following:

    A Kerberos authentication ticket (TGT) was requested.
    Account Information:
        Account Name: X509N:<S>CN=SQLInstanceName
        Supplied Realm Name: domain.local
        User ID: NULL SID
    Service Information:
        Service Name: krbtgt/domain.local
        Service ID: NULL SID
    Network Information:
        Client Address: ::ffff:10.240.42.101
        Client Port: 58207
    Additional Information:
        Ticket Options: 0x40810010
        Result Code: 0x6
        Ticket Encryption Type: 0xffffffff
        Pre-Authentication Type: -
    Certificate Information:
        Certificate Issuer Name:
        Certificate Serial Number:
        Certificate Thumbprint:

    So this appears to be related to a certificate installed on the SQL machine, but I still don't have any clue why, or what's wrong with that certificate. It's not expired, etc.
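
    Since the failing logon is generated locally by lsass.exe via Schannel, one way to watch these paired 4625 events without clicking through Event Viewer is to query the Security log and pull out the logon process and authentication package fields. The sketch below is one possible way to do that with wevtutil and Python 3.8+ (the {*} namespace wildcard needs 3.8); run it from an elevated prompt on the affected node, and treat the event count and field selection as assumptions to adjust.

    import subprocess
    import xml.etree.ElementTree as ET

    # Pull the 20 most recent 4625 events from the Security log as XML.
    xml_out = subprocess.run(
        ["wevtutil", "qe", "Security", "/q:*[System[(EventID=4625)]]",
         "/c:20", "/rd:true", "/f:xml"],
        capture_output=True, text=True, check=True,
    ).stdout

    # wevtutil emits a bare sequence of <Event> elements, so wrap them
    # in a root element to get a well-formed document.
    root = ET.fromstring("<Events>" + xml_out + "</Events>")

    for event in root.iter("{*}Event"):
        # Each 4625 carries its fields as <Data Name="...">value</Data>.
        data = {d.get("Name"): (d.text or "").strip()
                for d in event.iter("{*}Data")}
        print(data.get("LogonProcessName"),
              data.get("AuthenticationPackageName"),
              data.get("Status"), data.get("SubStatus"))

    For the events above you would expect alternating Schannel/Kerberos and Schannel/Microsoft Unified Security Protocol Provider rows, one pair every 30 seconds.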

    Read the article

  • BPM Business Value Patterns

    - by JuergenKress
    Together with Matthias Ziegler from Accenture we presented the BPM Business Value Patterns at the SOA & BPM Integration Days in Germany in October: BPM Business Value Patterns, a presentation by Jürgen Kress. Please visit http://soa-bpm-days.de/ for the next SOA & BPM Integration Days III, February 29th & March 1st in Munich. If you'd like to learn more, please feel free to contact us any time: Matthias Ziegler, Jürgen Kress. For regular information on Oracle SOA Suite become a member of the SOA Partner Community. To register please visit www.oracle.com/goto/emea/soa (OPN account required). Technorati Tags: Matthias Ziegler, Jürgen Kress, SOA & BPM Integration Days, BPM, BPM Value Patterns, BPM ROI, Oracle, OPN, Accenture

    Read the article
