Search Results

Search found 2359 results on 95 pages for 'transaction'.


  • Want to book a hotel stay using Bitcoin? Book using Expedia.com

    - by Gopinath
    The online travel booking leader Expedia has announced that it now accepts Bitcoin for booking hotels on its website. For those new to Bitcoin, it is a digital currency in which transactions can be performed without the need for a central bank – more like an internet of currency. At the moment Expedia accepts Bitcoin payments only for hotel bookings; in the future it may allow flight and vacation package bookings as well. When Expedia customers want to pay for a hotel using Bitcoin, they are transferred to Coinbase, a third-party Bitcoin payment processor, to complete the payment and are then redirected back to Expedia.com to finish the booking. This simple process could drive mainstream adoption of Bitcoin – a win-win for digital currency users as well as for the travel company, which saves considerably on card processing fees. Online retailers pay around 3% of the transaction amount in fees to credit card companies such as Visa and MasterCard, whereas Coinbase charges a fee of just 1 percent for processing Bitcoin. Regardless of how quickly customers adopt Bitcoin-based payments on Expedia, or how much is saved on transaction fees, this move gives Expedia bragging rights as the first e-commerce giant to accept digital currency! Image credit: Jonathan Caves

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem, and I am introducing that approach in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, taken together, make up the whole whitepaper.
    Abstract: With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge to the customer, thus making it possible for the customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage.
    Introduction: Fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster bypasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1, that serves as a score for evaluating whether the transaction is fraudulent. A fraud detection mechanism cannot detect fraud with 100% certainty; therefore, manual transaction checking must also be available. With fraud detection, this manual work can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example random sampling.
    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques, or data mining models, use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. Once the patterns and rules that lead to frauds are learned through the model training process, they can be used to predict the fraud flag on new incoming transactions.
    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for checking whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.
    There are many difficulties in developing a fraud detection system. As already mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.
    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications.
    SSIS is typically used for loading a DW, and in addition it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) mitigates the data cleansing effort by maintaining a knowledge base. Master Data Services (MDS) is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization. For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley. SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. Together with the add-ins, PowerPivot for Excel (freely downloadable for Excel 2010 and already included in Excel 2013) brings OLAP functionality directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
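    To make the notion of lift concrete, here is a small illustrative sketch (not part of the whitepaper) that compares the fraud rate in the top-scored transactions selected by a hypothetical model against the overall fraud rate; the transactions, scores, and the 30% checking budget are all made-up assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LiftDemo
{
    // A transaction with a known fraud flag and a score assigned by some model.
    record Transaction(decimal Amount, double FraudScore, bool IsFraud);

    static void Main()
    {
        // Hypothetical scored transactions; in practice these would come from a scored data set.
        var transactions = new List<Transaction>
        {
            new(120m, 0.91, true),  new(35m, 0.10, false), new(540m, 0.85, true),
            new(60m, 0.05, false),  new(15m, 0.40, false), new(980m, 0.77, false),
            new(220m, 0.95, true),  new(42m, 0.02, false), new(75m, 0.20, false),
            new(310m, 0.66, false)
        };

        double overallFraudRate = transactions.Count(t => t.IsFraud) / (double)transactions.Count;

        // Manually "check" only the top 30% of transactions, ordered by model score.
        int checkedCount = (int)Math.Ceiling(transactions.Count * 0.3);
        var flagged = transactions.OrderByDescending(t => t.FraudScore).Take(checkedCount).ToList();
        double flaggedFraudRate = flagged.Count(t => t.IsFraud) / (double)flagged.Count;

        // Lift = fraud rate in the model-selected subset relative to random selection
        // (random sampling is expected to match the overall fraud rate).
        double lift = flaggedFraudRate / overallFraudRate;

        Console.WriteLine($"Overall fraud rate:  {overallFraudRate:P1}");
        Console.WriteLine($"Flagged fraud rate:  {flaggedFraudRate:P1}");
        Console.WriteLine($"Lift over random:    {lift:F2}x");
    }
}
```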

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open source preferred, since the final product is to be FOSS, too.

    Read the article

  • SQL SERVER – Using MAXDOP 1 for Single Processor Query – SQL in Sixty Seconds #008 – Video

    - by pinaldave
    Today’s SQL in Sixty Seconds video is inspired by my presentation at TechEd India 2012 on Speed up! – Parallel Processes and Unparalleled Performance. There are always special cases when it comes to SQL Server. There are always a few queries which give optimal performance when they are executed on a single processor, and there are always queries which give optimal performance when they are executed on multiple processors. I will be presenting how to identify such queries as well as the best practices related to them. In this quick video I demonstrate how, if a query gives optimal performance when running on a single CPU, one can restrict it to a single CPU by using the hint OPTION (MAXDOP 1). More on Errors: Difference Temp Table and Table Variable – Effect of Transaction Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT Debate – Table Variables vs Temporary Tables – Quiz – Puzzle – 13 of 31 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
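    For readers who prefer text to video, the sketch below shows roughly where the hint goes. The OPTION (MAXDOP 1) clause is standard T-SQL, but the connection string, the AdventureWorks query, and the use of ADO.NET here are illustrative assumptions rather than the script used in the video.

```csharp
using System;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older projects

class MaxDopExample
{
    static void Main()
    {
        // Hypothetical connection string and table; only the OPTION (MAXDOP 1) hint matters here.
        const string connectionString =
            "Server=.;Database=AdventureWorks;Integrated Security=true;TrustServerCertificate=true";

        const string query = @"
            SELECT SalesOrderID, SUM(LineTotal) AS OrderTotal
            FROM Sales.SalesOrderDetail
            GROUP BY SalesOrderID
            OPTION (MAXDOP 1);  -- force a serial plan for this query only";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand(query, connection);
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"Order {reader.GetInt32(0)}: {reader.GetDecimal(1):C}");
        }
    }
}
```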

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said: You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app. Even though I very much trust and respect the knowledge of our web developers, what they are saying raises some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like those PayPal or Google Checkout offer (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true that our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering chart of accounts to
enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.
    What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, may require consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. The following table contains an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    | Types & Codes | Taxonomies | Relationships / Mappings | Standards |
    | --- | --- | --- | --- |
    | Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601) |
    | Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO) |
    | Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN) |
    | Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601) |
    | Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates |

    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.
    Sample Use Cases of Reference Data Management
    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings, either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.
    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.
    Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations.
    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and to assess which applications are impacted by a given change.
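    As a toy illustration of the ICD-9 to ICD-10 cross-walk scenario described above, the sketch below models a mapping table in which one-to-many mappings require review by a data steward; the specific codes, flags and names are made up for the example and are not taken from the article or from any official mapping file.

```csharp
using System;
using System.Collections.Generic;

class CrosswalkDemo
{
    // A single cross-walk entry; real cross-walk data is far richer, this is illustrative only.
    record CodeMapping(string Icd10Code, bool ApproximateMatch, string ReviewedBy);

    static void Main()
    {
        // Hypothetical ICD-9 -> ICD-10 mappings; the codes and flags here are made up for the sketch.
        var crosswalk = new Dictionary<string, List<CodeMapping>>
        {
            // One-to-one mapping: no human judgment needed.
            ["250.00"] = new() { new("E11.9", ApproximateMatch: false, ReviewedBy: "system") },
            // One-to-many mapping: a data steward has to pick or approve the target code.
            ["414.00"] = new()
            {
                new("I25.10", ApproximateMatch: true, ReviewedBy: "j.smith"),
                new("I25.119", ApproximateMatch: true, ReviewedBy: "j.smith")
            }
        };

        // "Cross-walk" a legacy ICD-9 code forward to ICD-10 for reporting.
        string legacyCode = "414.00";
        if (crosswalk.TryGetValue(legacyCode, out var candidates))
        {
            Console.WriteLine($"ICD-9 {legacyCode} maps to {candidates.Count} ICD-10 candidate(s):");
            foreach (var m in candidates)
            {
                Console.WriteLine($"  {m.Icd10Code} (approximate: {m.ApproximateMatch}, reviewed by: {m.ReviewedBy})");
            }
        }
        else
        {
            Console.WriteLine($"No mapping on file for ICD-9 {legacyCode}; route to a data steward for review.");
        }
    }
}
```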

    Read the article

  • Congratulations to 2012 Innovation Award winners in BPM category

    - by Manoj Das
    Last year many of our customers went live on BPM 11g. It is my extreme pleasure to congratulate two of them – Amadeus and Navistar – for being awarded the Oracle Fusion Middleware Innovation Award at Oracle OpenWorld 2012. We invited our customers to submit their most innovative BPM implementations that have delivered substantiated value to them. This year we received more than 20 submissions from customers seeing significant business value from their live BPM 11g deployments. The submissions came from across the world, spanning various industry verticals including manufacturing, healthcare, logistics, hi-tech, public sector and education, and covering many process usage patterns. Award submissions were evaluated based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.
    [Image: Amadeus team receiving the Innovation Award from Hasan Rizvi]
    Congratulations to Amadeus and Navistar and their teams on being recognized from among some very strong submissions and, more importantly, for the business value delivered. It is an honor to be part of your success and to play a small role in the innovation you drive. Navistar is a leading truck manufacturing company which produces International® brand commercial and military trucks, MaxxForce® brand diesel engines, IC Bus™ brand school and commercial buses, and Navistar RV brands of recreational vehicles. The company also provides truck and diesel engine service parts. Amadeus is a leading transaction processor for the global travel and tourism industry, providing transaction processing power and technology solutions to both travellers and travel providers. Both Navistar and Amadeus have leveraged Oracle BPM Suite to improve visibility into their business and to make their businesses more agile and efficient. We congratulate them again and wish them continued success in their business and future BPM initiatives.

    Read the article

  • Authenticate native mobile app using a REST API

    - by Supercell
    I'm starting a new project soon, targeting mobile applications on all major mobile platforms (iOS, Android, Windows). It will be a client-server architecture. The app is both informational and transactional. For the transactional part, users are required to have an account and log in before a transaction can be made. I'm new to mobile development, so I don't know how the authentication part is done on these platforms. The clients will communicate with the server through a REST API, using HTTPS of course. I haven't yet decided if I want the user to log in when they open the app, or only when they perform a transaction. I have the following questions: 1) Like the Facebook application, you only enter your credentials when you open the application for the first time. After that, you're automatically signed in every time you open the app. How does one accomplish this? Simply by encrypting and storing the credentials on the device and sending them every time the app starts? 2) Do I need to authenticate the user for each (transactional) request made to the REST API, or should I use a token-based approach? Please feel free to suggest other ways for authentication. Thanks!
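    Not an authoritative answer, but a minimal client-side sketch of the token-based approach mentioned in question 2: exchange the credentials once for a short-lived token, persist it in the platform's secure storage, and send only the token with each transactional request. The endpoint URLs, JSON shapes and class names are assumptions for illustration.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Client-side sketch of the token-based approach: exchange credentials once for a
// short-lived token, then send only the token with each transactional request.
// The endpoint URLs and JSON shapes below are assumptions, not a real API.
class ApiClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://api.example.com/") };
    private string? _accessToken;

    // Step 1: called only when no valid token is stored (e.g., first launch or after expiry).
    public async Task LoginAsync(string username, string password)
    {
        var response = await _http.PostAsJsonAsync("auth/token", new { username, password });
        response.EnsureSuccessStatusCode();

        var payload = await response.Content.ReadFromJsonAsync<TokenResponse>();
        _accessToken = payload!.AccessToken;
        // In a real app the token would be persisted in the platform's secure storage
        // (Keychain on iOS, Keystore on Android, etc.), not kept only in memory.
    }

    // Step 2: every transactional call carries the token instead of the raw credentials.
    public async Task<string> CreateTransactionAsync(decimal amount)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "transactions")
        {
            Content = JsonContent.Create(new { amount })
        };
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", _accessToken);

        var response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    private record TokenResponse(string AccessToken, DateTime ExpiresAtUtc);
}

class Program
{
    static async Task Main()
    {
        var api = new ApiClient();
        await api.LoginAsync("demo-user", "demo-password");  // once, at first launch
        Console.WriteLine(await api.CreateTransactionAsync(42.50m));
    }
}
```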

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:
    ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
    ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON
    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to very strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
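    As a small illustration of what the two ALTER DATABASE options above enable, the following sketch opens a reader under IsolationLevel.Snapshot; such a reader sees the row versions that existed when its transaction began instead of blocking on a concurrent writer. The connection string and table name are assumptions, not part of the Data Relationship Management schema.

```csharp
using System;
using System.Data;
using Microsoft.Data.SqlClient;

// A reader opened under IsolationLevel.Snapshot is served older row versions from tempdb
// while another session's long-running UPDATE is still in progress, instead of blocking.
class SnapshotIsolationDemo
{
    static void Main()
    {
        // Hypothetical connection string and table, for illustration only.
        const string connectionString =
            "Server=.;Database=DRM_Repository;Integrated Security=true;TrustServerCertificate=true";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var transaction = connection.BeginTransaction(IsolationLevel.Snapshot);
        using var command = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.Nodes;", connection, transaction);

        int rowCount = (int)command.ExecuteScalar();
        Console.WriteLine($"Rows visible to this snapshot: {rowCount}");

        transaction.Commit();
    }
}
```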

    Read the article

  • Application that provides unique keys to multiple threads

    - by poly
    Thanks, all, for your help before. So, this is what I came up with so far. The requirements are: the application has two or more threads, and each thread requires a unique session/transaction ID. Is the code below considered thread safe? Thread 1 will register itself with get_id by sending its pid, thread 2 will do the same, and then threads 1 and 2 will call the function to get a unique ID.

    #include <stdbool.h>
    #include <sys/types.h>

    #define MAX_THREADS 15

    /* Shared lookup table: column 0 holds the registered pid, column 1 holds that
       thread's transaction-ID counter. Note that these statics are shared,
       unsynchronized state, so concurrent callers still need external locking. */
    int get_id(bool register_thread /* true = register thread, false = get id */, pid_t pid)
    {
        static int table[MAX_THREADS][2] = {{0}};
        static int total_threads = 0;
        int x;

        if (register_thread) {  /* thread registration part */
            for (x = 0; x < MAX_THREADS; x++) {
                if (table[x][0] == 0) {
                    table[x][0] = (int) pid;
                    /* Initiate the counter for this pid by shifting the slot index to
                       bit 24, which gives each thread its own range: slot 0 gets
                       0x0000000-0x0FFFFFF, slot 1 gets 0x1000000-0x1FFFFFF, and so on. */
                    table[x][1] = x << 24;
                    total_threads++;
                    return table[x][1];
                }
            }
            return -1;  /* no free slot */
        }

        /* Search whether the pid is registered; if yes, return the next transaction id. */
        for (x = 0; x < MAX_THREADS; x++) {
            if (table[x][0] == (int) pid) {
                /* TODO: reset the low 24 bits to 0 when they reach 0x0FFFFFF. */
                return ++table[x][1];
            }
        }
        return -1;  /* pid not registered */
    }

    Read the article

  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique on my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over HTTP binding, JSON, and REST for internal apps as well as third-party integration. I've got a rough idea below that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build this out quickly. Here's the current idea for our structure:
    Front-end WCF routing service (spread across multiple IIS servers via hardware load balancer): load balancing of the services behind the routing service is handled within the routing service, probably round-robin; one of the services will be a token service; multiple bindings per service are exposed to address JSON, REST, and whatever else comes up later; all in/out is handled via POCO DTOs; Unity is used to scan for what services are available and expose them.
    Front-end services behind the routing service: these do nothing more than expose the API and do conversion of DTO <-> Entity; Unity injects the service implementation to allow mocking; AutoMapper handles DTO/Entity conversion; WF services are invoked where a response is required immediately; otherwise the request is queued to the ESB for async WF (the ESB will invoke WF later).
    Business logic WF layer: exposes the same API as the front-end services; implements the business logic; wraps a transaction context where needed; calls out to the composite/atomic services.
    Composite/atomic services: exposed as WCF; one service per back-end system; standard atomic CRUD operations plus composite operations; supports transaction context.
    The questions I have are: Is the separation of concerns outlined above beneficial? The current thought is that each of these layers is its own project, except the back-end stuff, where each system gets one project. Each project has a service host, and all the services are under a services folder. Interfaces live in a separate project at each layer. DTOs and entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and to overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?
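    To make the "front-end service only exposes the API and converts DTO <-> Entity" idea more tangible, here is a compile-only sketch of what one such contract might look like; all type names are assumptions, and the mapping is written by hand rather than with AutoMapper to keep the example self-contained.

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

// DTO exposed over the wire by the front-end service.
[DataContract]
public class OrderDto
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

// Domain entity used by the business-logic (WF) layer.
public class OrderEntity
{
    public Guid Id { get; set; }
    public decimal Total { get; set; }
}

// Business-logic layer contract; the front-end service delegates to an implementation
// of this interface, which is injected (e.g., via Unity) so it can be mocked in tests.
public interface IOrderWorkflow
{
    OrderEntity GetOrder(Guid id);
}

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [WebGet(UriTemplate = "orders/{id}", ResponseFormat = WebMessageFormat.Json)]
    OrderDto GetOrder(string id);
}

public class OrderService : IOrderService
{
    private readonly IOrderWorkflow _workflow;

    public OrderService(IOrderWorkflow workflow) => _workflow = workflow;

    public OrderDto GetOrder(string id)
    {
        // The front-end service only translates between DTOs and entities and delegates.
        OrderEntity entity = _workflow.GetOrder(Guid.Parse(id));
        return new OrderDto { Id = entity.Id, Total = entity.Total };
    }
}
```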

    Read the article

  • @ContextConfiguration in Spring 3.0 gives me "No default constructor found"

    - by atomsfat
    I have already do the test using AbstractDependencyInjectionSpringContextTests and it works but in spring 3 it is deprecated, so I decided to try @ContextConfiguration but spring say that default constructor is not found, I check and the class doesn't have any constructor. If I use this test spring give the object. package atoms.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import java.util.List; import javax.persistence.EntityManager; import org.springframework.test.AbstractDependencyInjectionSpringContextTests; /** * * @author tsalazar */ public class ClienteServiceImplDeTest extends AbstractDependencyInjectionSpringContextTests{ private ClienteService clienteService; public ClienteService getClienteService() { return clienteService; } public void setClienteService(ClienteService clienteService) { this.clienteService = clienteService; } public ClienteServiceImplDeTest(String testName) { super(testName); } @Override protected String[] getConfigLocations() { return new String[]{"PersistenceAppCtx.xml", "ServicesAppCtx.xml"}; } /** * Test of buscaCliente method, of class ClienteServiceImplDeTest. */ public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); String nombre = ""; System.out.println(clienteService); System.out.println("======================================="); } } But if I use this, spring say that default constructor is not found. package atoms.config.portales.servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import org.junit.runner.RunWith; import org.junit.Test; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.test.context.ContextConfiguration; import org.springframework.test.context.junit4.SpringJUnit4ClassRunner; import org.springframework.test.context.transaction.TransactionConfiguration; import org.springframework.transaction.annotation.Transactional; /** * * @author tsalazar */ @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations = {"/PersistenceAppCtx.xml", "/ServicesAppCtx.xml"}) @TransactionConfiguration(transactionManager = "transactionManager") @Transactional public class ClienteServiceImplTest { @Autowired private ClienteService clienteService; /** * Test of buscaCliente method, of class ClienteServiceImpl. */ @Test public void testBuscaCliente() { System.out.println("======================================="); System.out.println("buscaCliente"); System.out.println(clienteService); System.out.println("======================================="); } } This how I do the implementacion: package atoms.portales.servicios; import atoms.portales.model; /** * Una interface para obtener clientes, con sus surcursales, servicios, layouts * y contratos. Tambien soporta operaciones CRUD. 
* @author tsalazar */ public interface ClienteService { /** * Busca clientes a partir del nombre * @param nombre */ public Cliente buscaCliente(String nombre); } the implemetacion package atoms.portales..servicios.impl; import atoms.portales.model.Cliente; import atoms.portales.servicios.ClienteService; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import org.springframework.stereotype.Repository; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; /** * A JPA-based implementation.Delegates to a JPA entity manager to issue data access calls * against the backing repository. The EntityManager reference is provided by the managing container (Spring) * automatically. */ @Service("clienteSerivice") @Repository public class ClienteServiceImpl implements ClienteService { public ClienteServiceImpl() { } private EntityManager em; @PersistenceContext public void setEntityManager(EntityManager em) { this.em = em; } @Transactional(readOnly = true) public Cliente buscaCliente(String nombre) { Cliente cliente = em.getReference(Cliente.class, 1l); return cliente; } } spring configuration: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd"> <!-- Instructs Spring to perfrom declarative transaction management on annotated classes --> <tx:annotation-driven /> <!-- Drives transactions using local JPA APIs --> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <!-- Creates a EntityManagerFactory for use with the Hibernate JPA provider and a simple in-memory data source populated with test data --> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" /> </property> </bean> <!-- Deploys a in-memory "booking" datasource populated --> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver" /> <property name="url" value="jdbc:hsqldb:hsql://localhost/test" /> <property name="username" value="sa" /> <property name="password" value="" /> </bean> <context:component-scan base-package="atoms.portales.servicios" /> </beans> This is the persistence.xml <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="configuradorPortales" transaction-type="RESOURCE_LOCAL"> <provider>org.hibernate.ejb.HibernatePersistence</provider> <class>atoms.portales.model.Cliente</class> <properties> <property name="hibernate.dialect" value="org.hibernate.dialect.HSQLDialect"/> <property name="hibernate.hbm2ddl.auto" value="create-drop"/> <property name="hibernate.show_sql" value="true"/> <property 
name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider"/> </properties> </persistence-unit> </persistence> This is the error that give me:

    Read the article

  • These are a few objective-type questions for which I was not able to find the solution [closed]

    - by Tarun
    1. Which of the following advantages does System.Collections.IDictionaryEnumerator provide over System.Collections.IEnumerator? a. It adds properties for direct access to both the Key and the Value b. It is optimized to handle the structure of a Dictionary. c. It provides properties to determine if the Dictionary is enumerated in Key or Value order d. It provides reverse lookup methods to distinguish a Key from a specific Value 2. When Implementing System.EnterpriseServices.ServicedComponent derived classes, which of the following statements are true? a. Enabling object pooling requires an attribute on the class and the enabling of pooling in the COM+ catalog. b. Methods can be configured to automatically mark a transaction as complete by the use of attributes. c. You can configure authentication using the AuthenticationOption when the ActivationMode is set to Library. d. You can control the lifecycle policy of an individual instance using the SetLifetimeService method. 3. Which of the following are true regarding event declaration in the code below? class Sample { event MyEventHandlerType MyEvent; } a. MyEventHandlerType must be derived from System.EventHandler or System.EventHandler<TEventArgs> b. MyEventHandlerType must take two parameters, the first of the type Object, and the second of a class derived from System.EventArgs c. MyEventHandlerType may have a non-void return type d. If MyEventHandlerType is a generic type, event declaration must use a specialization of that type. e. MyEventHandlerType cannot be declared static 4. Which of the following statements apply to developing .NET code, using .NET utilities that are available with the SDK or Visual Studio? a. Developers can create assemblies directly from the MSIL Source Code. b. Developers can examine PE header information in an assembly. c. Developers can generate XML Schemas from class definitions contained within an assembly. d. Developers can strip all meta-data from managed assemblies. e. Developers can split an assembly into multiple assemblies. 5. Which of the following characteristics do classes in the System.Drawing namespace such as Brush,Font,Pen, and Icon share? a. They encapsulate native resource and must be properly Disposed to prevent potential exhausting of resources. b. They are all MarshalByRef derived classes, but functionality across AppDomains has specific limitations. c. You can inherit from these classes to provide enhanced or customized functionality 6. Which of the following are required to be true by objects which are going to be used as keys in a System.Collections.HashTable? a. They must handle case-sensitivity identically in both the GetHashCode() and Equals() methods. b. Key objects must be immutable for the duration they are used within a HashTable. c. Get HashCode() must be overridden to provide the same result, given the same parameters, regardless of reference equalityl unless the HashTable constructor is provided with an IEqualityComparer parameter. d. Each Element in a HashTable is stored as a Key/Value pair of the type System.Collections.DictionaryElement e. All of the above 7. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. A Nullable type is a structure. c. An implicit conversion exists from any non-nullable value type to a nullable form of that type. d. An implicit conversion exists from any nullable value type to a non-nullable form of that type. e. A predefined conversion from the nullable type S? to the nullable type T? 
exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 8. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 9. Which of the following does using Initializer Syntax with a collection as shown below require? CollectionClass numbers = new CollectionClass { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 }; a. The Collection Class must implement System.Collections.Generic.ICollection<T> b. The Collection Class must implement System.Collections.Generic.IList<T> c. Each of the Items in the Initializer List will be passed to the Add<T>(T item) method d. The items in the initializer will be treated as an IEnumerable<T> and passed to the collection constructor+K110 10. What impact will using implicitly typed local variables as in the following example have? var sample = "Hello World"; a. The actual type is determined at compilation time, and has no impact on the runtime b. The actual type is determined at runtime, and late binding takes effect c. The actual type is based on the native VARIANT concept, and no binding to a specific type takes place. d. "var" itself is a specific type defined by the framework, and no special binding takes place 11. Which of the following is not supported by remoting object types? a. well-known singleton b. well-known single call c. client activated d. context-agile 12. In which of the following ways do structs differ from classes? a. Structs can not implement interfaces b. Structs cannot inherit from a base struct c. Structs cannot have events interfaces d. Structs cannot have virtual methods 13. Which of the following is not an unboxing conversion? a. void Sample1(object o) { int i = (int)o; } b. void Sample1(ValueType vt) { int i = (int)vt; } c. enum E { Hello, World} void Sample1(System.Enum et) { E e = (E) et; } d. interface I { int Value { get; set; } } void Sample1(I vt) { int i = vt.Value; } e. class C { public int Value { get; set; } } void Sample1(C vt) { int i = vt.Value; } 14. Which of the following are characteristics of the System.Threading.Timer class? a. The method provided by the TimerCallback delegate will always be invoked on the thread which created the timer. b. The thread which creates the timer must have a message processing loop (i.e. be considered a UI thread) c. The class contains protection to prevent reentrancy to the method provided by the TimerCallback delegate d. You can receive notification of an instance being Disposed by calling an overload of the Dispose method. 15. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 16. Which of the following scenarios are applicable to Window Workflow Foundation? a. Document-centric workflows b. Human workflows c. User-interface page flows d. Builtin support for communications across multiple applications and/or platforms e. 
All of the above 17. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is a private instance member with a leading underscore that can be programmatically referenced. c. The compiler generates a backing field that is accessible via reflection d. The compiler generates a code that will store the information separately from the instance to ensure its security. 18 While using the capabilities supplied by the System.Messaging classes, which of the following are true? a. Information must be explicitly converted to/from a byte stream before it uses the MessageQueue class b. Invoking the MessageQueue.Send member defaults to using the System.Messaging.XmlMessageFormatter to serialize the object. c. Objects must be XMLSerializable in order to be transferred over a MessageQueue instance. d. The first entry in a MessageQueue must be removed from the queue before the next entry can be accessed e. Entries removed from a MessageQueue within the scope of a transaction, will be pushed back into the front of the queue if the transaction fails. 19. Which of the following are true about declarative attributes? a. They must be inherited from the System.Attribute. b. Attributes are instantiated at the same time as instances of the class to which they are applied. c. Attribute classes may be restricted to be applied only to application element types. d. By default, a given attribute may be applied multiple times to the same application element. 20. When using version 3.5 of the framework in applications which emit a dynamic code, which of the following are true? a. A Partial trust code can not emit and execute a code b. A Partial trust application must have the SecurityCriticalAttribute attribute have called Assert ReflectionEmit permission c. The generated code no more permissions than the assembly which emitted it. d. It can be executed by calling System.Reflection.Emit.DynamicMethod( string name, Type returnType, Type[] parameterTypes ) without any special permissions Within Windows Workflow Foundation, Compensating Actions are used for: a. provide a means to rollback a failed transaction b. provide a means to undo a successfully committed transaction later c. provide a means to terminate an in process transaction d. achieve load balancing by adapting to the current activity 21. What is the proper declaration of a method which will handle the following event? Class MyClass { public event EventHandler MyEvent; } a. public void A_MyEvent(object sender, MyArgs e) { } b. public void A_MyEvent(object sender, EventArgs e) { } c. public void A_MyEvent(MyArgs e) { } d. public void A_MyEvent(MyClass sender,EventArgs e) { } 22. Which of the following controls allows the use of XSL to transform XML content into formatted content? a. System.Web.UI.WebControls.Xml b. System.Web.UI.WebControls.Xslt c. System.Web.UI.WebControls.Substitution d. System.Web.UI.WebControls.Transform 23. To which of the following do automatic properties refer? a. You declare (explicitly or implicitly) the accessibility of the property and get and set accessors, but do not provide any implementation or backing field b. You attribute a member field so that the compiler will generate get and set accessors c. The compiler creates properties for your class based on class level attributes d. 
They are properties which are automatically invoked as part of the object construction process 24. Which of the following are true about Nullable types? a. A Nullable type is a reference type. b. An implicit conversion exists from any non-nullable value type to a nullable form of that type. c. A predefined conversion from the nullable type S? to the nullable type T? exists if there is a predefined conversion from the non-nullable type S to the non-nullable type T 25. When using an automatic property, which of the following statements is true? a. The compiler generates a backing field that is completely inaccessible from the application code. b. The compiler generates a backing field that is accessible via reflection. c. The compiler generates a code that will store the information separately from the instance to ensure its security. 26. When using an implicitly typed array, which of the following is most appropriate? a. All elements in the initializer list must be of the same type. b. All elements in the initializer list must be implicitly convertible to a known type which is the actual type of at least one member in the initializer list c. All elements in the initializer list must be implicitly convertible to common type which is a base type of the items actually in the list 27. Which of the following is false about anonymous types? a. They can be derived from any reference type. b. Two anonymous types with the same named parameters in the same order declared in different classes have the same type. c. All properties of an anonymous type are read/write. 28. Which of the following are true about Extension methods. a. They can be declared either static or instance members b. They must be declared in the same assembly (but may be in different source files) c. Extension methods can be used to override existing instance methods d. Extension methods with the same signature for the same class may be declared in multiple namespaces without causing compilation errors

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a World Record batch throughput for single system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2. The workload includes both online and batch workload. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute). In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were executed each in separate Oracle Solaris Containers which enabled optimal system resources distribution and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance. This configuration with SPARC T4-2 server has achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported only 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 and SPARC T4-1. The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance. Performance Landscape JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark Consolidated Online with Batch Workload System Rack Units BatchRate(UBEs/m) Online Users Users /Units Users /Core Version SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 3 95.5 8,000 2,667 500 9.0.2 IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores) 8 65 12,000 1,500 375 9.0.2 Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute Configuration Summary Hardware Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 4 x 300 GB 10K RPM SAS internal disk 2 x 300 GB internal SSD 2 x Sun Storage F5100 Flash Arrays Software Configuration: Oracle Solaris 10 Oracle Solaris Containers JD Edwards EnterpriseOne 9.0.2 JD Edwards EnterpriseOne Tools (8.98.4.2) Oracle WebLogic Server 11g (10.3.4) Oracle HTTP Server 11g Oracle Database 11g Release 2 (11.2.0.1) Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. 
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE – Universal Business Engine workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. Oracle's UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets to demonstrate consolidation of multiple applications, web servers and the database with best resource utilizations. Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. A Oracle WebLogic vertical cluster was configured on each WebServer Container with twelve managed instances each to load balance users' requests and to provide the infrastructure that enables scaling to high number of users with ease of deployment and high availability. The database log writer was run in the real time RT class and bound to a processor set. The database redo logs were configured on the raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload. The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner. See Also SPARC T4-2 Server oracle.com OTN JD Edwards EnterpriseOne oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Oracle Fusion Middleware oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, sharing information becomes unavoidable. Sharing large amounts of data via snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations rely heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on available resources and data transfer needs. Sometimes it involves simple home-grown applications; other times, large investments are made in products like BizTalk, TIBCO, etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity.

One of the most important factors in any business is transparency of data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data can become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same types of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or it could take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it is extremely important to be aware of the state of the data.

Architects of messaging systems can take several steps to aid data transparency. Some form of data handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these to be a single concept; however, I like to treat them as two distinct categories. Handshaking serves as a message receipt or acknowledgment. When one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use web services, handshaking is often achieved with a request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred. I mentioned earlier that it could take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, processing time for each data element inside the batch can also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. Partner organizations can share and compare ad hoc reports to achieve reconciliation. Alternatively, partners can agree on some type of systematic reconciliation messages, where systems within the responsible parties trigger messages to their partners as soon as data processing completes.

The next step in data transparency is extensive data tracking. Some products, such as BizTalk and TIBCO, provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and more. If these types of data are captured, they can be presented to business users in any form or fashion. When business users are empowered with such information, the reliance on developers and support teams decreases dramatically.

In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change. However, when people have easier access to the various states of data, it allows them to make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.
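To make the handshaking and reconciliation ideas above concrete, here is a minimal Java sketch; the MessageTracker class and its method names are illustrative, not taken from any particular product. The sender records every transmitted message ID, records each acknowledgment as it arrives, and a reconciliation pass reports anything that was never acknowledged.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal sketch of handshaking (acknowledgments) plus reconciliation.
    public class MessageTracker {

        private final Set<String> sent = new HashSet<String>();                   // messages we transmitted
        private final Map<String, String> acks = new HashMap<String, String>();   // messageId -> ack status

        public void recordSent(String messageId) {
            sent.add(messageId);
        }

        // Called when the partner's handshake/acknowledgment arrives.
        public void recordAck(String messageId, String status) {
            acks.put(messageId, status);
        }

        // Reconciliation: report anything sent but never acknowledged.
        public Set<String> unacknowledged() {
            Set<String> missing = new HashSet<String>(sent);
            missing.removeAll(acks.keySet());
            return missing;
        }

        public static void main(String[] args) {
            MessageTracker tracker = new MessageTracker();
            for (String id : Arrays.asList("MSG-001", "MSG-002", "MSG-003")) {
                tracker.recordSent(id);
            }
            tracker.recordAck("MSG-001", "RECEIVED");
            tracker.recordAck("MSG-003", "RECEIVED");
            System.out.println("Needs follow-up: " + tracker.unacknowledged()); // prints [MSG-002]
        }
    }

In a real system the same idea would be backed by a tracking database rather than in-memory collections, so that business users can query the state of any transaction directly.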

    Read the article

  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
    Do you like auditing your database, or can you think of about a million other things you'd rather do?  Unfortunately, auditing is incredibly important.  As with tax audits, it is important to audit databases to ensure they are following all the rules, but audits are also important for troubleshooting and security. There are several ways to audit SQL Server.  There is manual auditing, which is going through your database "by hand"; it obviously takes a long time and is quite inefficient.  SQL Server also provides programs to help you audit your systems.  Different administrators will have different opinions about best practices and which tools to use, and each tool will be suited to certain systems and certain users.

Today, though, I would like to talk about ApexSQL Audit.  It is an auditing tool that acts like "track changes" in a word processing document.  It will log what has changed in the database, who made the changes, and what effects those changes have had (i.e. what objects were affected down the line).  All this information is logged and can be easily viewed or printed for easy access.

One of the best features of Apex is that it is so customizable (and easy to use!).  First, start Apex.  Then connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you'd like to audit, and then select which types of actions you'd like tracked – insert, delete, or update.  Repeat these steps for every database you want monitored. To create the logs, choose "Create triggers" in the menu.  The script written here is what logs each insert, delete, and update.  Press F5 to execute.  All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables.  View these tables using ApexSQL Audit reports.

These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced.  Reading them can be overwhelming, to say the least.  Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display the data and objects that affect your server – even things that were happening on your server before you installed Apex! To read these logs, open Apex and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down the results, use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid.  You can export these results to CSV, HTML, XML or SQL files and save them to disk. One advantage is that since no triggers are involved here, no extra processes affect SQL Server performance.  This method is also how you view history from your database that occurred before Apex was installed.  This type of tracking does require storage space for the data sources, the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable).

Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply.  It can do fault-tolerant auditing, centralized reporting, and "who saw what" reporting in an easy-to-use interface. The tracking settings can be altered by the user, or the default options will cover the most common auditing needs. To get started, open ApexSQL Comply and select Database Filter Settings to choose which database you'd like to audit.  You can select which operations you'd like tracked under Operation Types – DML, DDL, queries executed, execute statements, and more.  Then click Start Auditing. After this, every action will be stored in the central repository database (ApexSQLCrd).  You can view the audit and create a report (or view the standard default report) using a wizard.

You can see how easy it is to use ApexSQL Comply.  You can easily set up audits, including the type and time, and create customized reports.  Remote users can easily access the reports through the user interface (available online as well), and security concerns are all taken care of by the program.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
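As a rough illustration of reading the repository tables mentioned above outside the Apex tools, here is a hypothetical Java/JDBC sketch. The AUDIT_LOG_TRANSACTIONS table name comes from the article, but the connection string and credentials are placeholders, and the column layout is not documented in the post, so the sketch simply prints whatever columns the table exposes.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    // Hypothetical sketch: list recent rows from the audit log table created by the tool.
    public class AuditLogReader {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=MyAuditedDb"; // placeholder connection string
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT TOP 10 * FROM AUDIT_LOG_TRANSACTIONS")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnName(i)).append('=').append(rs.getObject(i)).append(' ');
                    }
                    System.out.println(row);
                }
            }
        }
    }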

    Read the article

  • 7 Reasons for Abandonment in eCommerce and the need for Contextual Support by JP Saunders

    - by Tuula Fai
    Shopper confidence, or more accurately the lack thereof, is the bane of the online retailer. There are a number of questions that influence whether a shopper completes a transaction, and all of those attributes revolve around knowledge. What products are available? What products are on offer? What would be the cost of the transaction? What are my options for delivery? In general, most online businesses do a good job of answering basic questions around the products as the shopper engages in the online journey, navigating the product catalog and working through the checkout process. The needs that are harder to address for the shopper are those that are less concerned with product specifics and more concerned with deciding whether the transaction met their needs and delivered value. A recent study by the Baymard Institute [1] finds that more than 60% of ecommerce site visitors will abandon their shopping cart. The study also identifies seven reasons for abandonment out of the commerce process [2]. Most of those reasons come down to poor usability within the commerce experience. Distractions. External distractions within the shopper’s external environment (TV, Children, Pets, etc.) or distractions on the eCommerce page can drive shopper abandonment. Ideally, the selection and check-out process should be straightforward. One common distraction is to drive the shopper away from the task at hand through pop-ups or re-directs. The shopper engaging with support information in the checkout process should not be directed away from the page to consume support. Though confidence may improve, the distraction also means abandonment may increase. Poor Usability. When the experience gets more complicated, buyer’s remorse can set in. While knowledge drives confidence, a lack of understanding erodes it. Therefore it is important that the commerce process is streamlined. In some cases, the number of clicks to complete a purchase is lengthy and unavoidable. In these situations, it is vital to ensure that the complexity of your experience can be explained with contextual support to avoid abandonment. If you can illustrate the solution to a complex action while the user is engaged in that action and address customer frustrations with your checkout process before they arise, you can decrease abandonment. Fraud. The perception of potential fraud can be enough to deter a buyer. Does your site look credible? Can shoppers trust your brand? Providing answers on the security of your experience and the levels of protection applied to profile information may play as big a role in ensuring the sale, as does the support you provide on the product offerings and purchasing process. Does it fit? If it is a clothing item or oversized furniture item, another common form of abandonment is for the shopper to question whether the item can be worn by the intended user. Providing information on the sizing applied to clothing, physical dimensions, and limitations on delivery/returns of oversized items will also assist the sale. A photo alone of the item will help, as it answers some of those questions, but won’t assuage all customer concerns about sizing and fit. Sometimes the customer doesn’t want to buy. Prospective buyers might be browsing through your catalog to kill time, or just might not have the money to purchase the item! You are unlikely to provide any information in contextual support to increase the likelihood to buy if the shopper already has no intentions of doing so. The customer will still likely abandon. 
Ensuring that any questions are proactively answered as they browse through your site can only increase their likelihood to return and buy at a future date. Can’t Buy. Errors or complexity at checkout can be another major cause of abandonment. Good contextual support is unlikely to help with severe errors caused by technical issues on your site, but it will have a big impact on customers struggling with complexity in the checkout process and needing a question answered prior to completing the sale. Embedded support within the checkout process to patiently explain how to complete a task will help increase conversion rates. Additional Costs. Tax, shipping and other costs or duties can dramatically increase the cost of the purchase and when unexpected, can increase abandonment, particularly if they can’t be adequately explained. Again, a lack of knowledge erodes confidence in the purchase, and cost concerns in particular, erode the perception of your brand’s trustworthiness. Again, providing information on what costs are additive and why they are being levied can decrease the likelihood that the customer will abandon out of the experience. Knowledge drives confidence and confidence drives conversion. If you’d like to understand best practices in providing contextual customer support in eCommerce to provide your shoppers with confidence, download the Oracle Cloud Service and Oracle Commerce - Contextual Support in Commerce White Paper. This white paper discusses the process of adding customer support, including a suggested process for finding where knowledge has the most influence on your shoppers and practical step-by-step illustrations on how contextual self-service can be added to your online commerce experience. Resources: [1] http://baymard.com/checkout-usability [2] http://baymard.com/blog/cart-abandonment

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I &ndash; Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments now use SOA clusters.  So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post explains the settings required to prepare the cluster for installation and configuration.

Software Required

The following software is required for an 11.1.1.3 SOA/BPM install (Software / Version / Notes):

Oracle Database / Certified databases are listed here / SOA & BPM Suites require a working database installation.
Repository Creation Utility (RCU) / 11.1.1.3 / If upgrading an 11.1.1.2 repository then a separate script is available.
Web Tier Utilities / 11.1.1.3 / Provides Web Server; 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
Web Tier Utilities / 11.1.1.3 / Web Server 11.1.1.3 patch.  You can use the 11.1.1.2 version without problems.
Oracle WebLogic Server 11gR1 / 10.3.3 / This is the host platform for the 11.1.1.3 SOA/BPM Suites.
SOA Suite / 11.1.1.2 / SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
SOA Suite / 11.1.1.3 / SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed.

My installation was performed on Oracle Enterprise Linux 5.4 64-bit.

Database

I will not cover setting up the database in this series other than to identify the database requirements.  If setting up a SOA cluster then ideally we would also be using a RAC database.  I assume that this is running on separate machines to the SOA cluster.  Section 2.1, "Database", of the EDG covers the database configuration in detail.

Settings

The database should have processes set to at least 400 if running SOA/BPM and BAM:

alter system set processes=400 scope=spfile

Run RCU

The Repository Creation Utility creates the necessary database tables for the SOA Suite.  The RCU can be run from any machine that can access the target database.  In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix.  This allows you to have multiple 11g installations in the same database. After running the RCU you need to grant some additional privileges to the soainfra user.  The soainfra user should have privileges on the transaction tables:

grant select on sys.dba_pending_transactions to prefix_soainfra
grant force any transaction to prefix_soainfra

Machines

The cluster will be built on the following machines. EDG Name is the name used for this machine in the EDG; Notes are a description of the purpose of the machine.

LB - External load balancer to distribute load across and fail over between web servers.
WEBHOST1 - Hosts a web server.
WEBHOST2 - Hosts a web server.
SOAHOST1 - Hosts SOA components.
SOAHOST2 - Hosts SOA components.
BAMHOST1 - Hosts BAM components.
BAMHOST2 - Hosts BAM components.

Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers, in which case we add SOAHOST3, SOAHOST4, etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine.

Software Components

The cluster will use the following software components. EDG Name is the name used for this component in the EDG.
Type is the type of component, generally a WebLogic component. Notes are a description of the purpose of the component. EDG Name Type Notes AdminServer Admin Server Domain Admin Server WLS_WSM1 Managed Server Web Services Manager Policy Manager Server WLS_WSM2 Managed Server Web Services Manager Policy Manager Server WLS_SOA1 Managed Server SOA/BPM Managed Server WLS_SOA2 Managed Server SOA/BPM Managed Server WLS_BAM1 Managed Server BAM Managed Server running Active Data Cache WLS_BAM2 Managed Server BAM Manager Server without Active Data Cache   Node Manager Will run on all hosts with WLS servers OHS1 Web Server Oracle HTTP Server OHS2 Web Server Oracle HTTP Server LB Load Balancer Load Balancer, not part of SOA Suite The above assumes a 2 node cluster. Network Configuration The SOA cluster requires an extensive amount of network configuration.  I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.168.x.x) to the cluster for use by addresses that only need to be accessible to the Load Balancer or other cluster members.  Section 2.2, "Network", of the EDG covers the network configuration in detail. EDG Name is the hostname used in the EDG. IP Name is the IP address name used in the EDG. Type is the type of IP address: Fixed is fixed to a single machine. Floating is assigned to one of several machines to allow for server migration. Virtual is assigned to a load balancer and used to distribute load across several machines. Host is the host where this IP address is active.  Note for floating IP addresses a range of hosts is given. Bound By identifies which software component will use this IP address. Scope shows where this IP address needs to be resolved. Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section.  These addresses are only used for inter-cluster communication or for access by the load balancer. Internal scope addresses Notes are comments on why that type of IP is used. EDG Name IP Name Type Host Bound By Scope Notes ADMINVHN VIP1 Floating SOAHOST1-SOAHOSTn AdminServer Cluster Admin server, must be able to migrate between SOA server machines. SOAHOST1 IP1 Fixed SOAHOST1 NodeManager, WLS_WSM1 Cluster WSM Server 1 does not require server migration. SOAHOST2 IP2 Fixed SOAHOST1 NodeManager, WLS_WSM2 Cluster WSM Server 2 does not require server migration SOAHOST1VHN VIP2 Floating SOAHOST1-SOAHOSTn WLS_SOA1 Cluster SOA server 1, must be able to migrate between SOA server machines SOAHOST2VHN VIP3 Floating SOAHOST1-SOAHOSTn WLS_SOA2 Cluster SOA server 2, must be able to migrate between SOA server machines BAMHOST1 IP4 Fixed BAMHOST1 NodeManager Cluster   BAMHOST1VHN VIP4 Floating BAMHOST1-BAMHOSTn WLS_BAM1 Cluster BAM server 1, must be able to migrate between BAM server machines BAMHOST2 IP3 Fixed BAMHOST2 NodeManager, WLS_BAM2 Cluster BAM server 2 does not require server migration WEBHOST1 IP5 Fixed WEBHOST1 OHS1 Cluster   WEBHOST2 IP6 Fixed WEBHOST2 OHS2 Cluster   soa.mycompany.com VIP5 Virtual LB LB Public External access point to SOA cluster. admin.mycompany.com VIP6 Virtual LB LB Internal Internal access to WLS console and EM soainternal.mycompany.com VIP7 Virtual LB LB Internal Internal access point to SOA cluster Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster.  For example in the event of failure of SOAHOST1 then WLS_SOA1 will need to be migrated to another server.  
In this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine.  Once set up the node manager will manage registration and removal of the floating IP addresses with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers.  The hostnames don’t have to be the ones given in the EDG, but they must be distinct in the same way as the ETC names are distinct.  If the type is a fixed IP then if the addresses are the same you can use the same hostname, for example if you collapse the soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address, however SOAHOST1VHN can never be the same as BAMHOST1VHN because these are floating IP addresses. Notes on DNS IP addresses that are of scope “Cluster” just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer.  IP addresses that are of scope “Internal” need to be available on the internal DNS servers, whilst IP addresses of scope “Public” need to be available on external and internal DNS servers. Shared File System At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores.  It is also possible to place the software itself on a shared server.  I strongly recommend that all machines have the same file structure for their SOA installation otherwise you will experience pain!  Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail. The following shorthand is used for locations: ORACLE_BASE is the root of the file system used for software and configuration files. MW_HOME is the location used by the installed SOA/BPM Suite installation.  This is also used by the web server installation.  In my installation it is set to <ORACLE_BASE>/SOA11gPS2. ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components.  This directory is installed under the the MW_HOME but the name is decided by the user at installation, default values are Oracle_SOA1 and Oracle_Web1.  In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle _WEB. ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory.  This is always <MW_HOME>/oracle_common. ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache.  It is recommended to create it under <ORACLE_BASE>/admin.  In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1. WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3. Key file locations are shown below. Directory Notes <ORACLE_BASE>/admin/domain_name/aserver/domain_name Shared location for domain.  Used to allow admin server to manually fail over between machines.  When creating domain_name provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/aserver/applications Shared location for deployed applications.  Needs to be provided when creating the domain. 
In my install this is <ORACLE_BASE>/admin/aserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/domain_name Either unique location for each machine or can be shared between machines to simplify task of packing and unpacking domain.  This acts as the managed server configuration location.  Keeping it separate from Admin server helps to avoid problems with the managed servers messing up the Admin Server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/applications Either unique location for each machine or can be shared between machines.  Holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/soa_cluster_name Shared directory to hold the following   dd – deployment descriptors   jms – shared JMS file stores   fadapter – shared file adapter co-ordination files   tlogs – shared transaction log files In my install this is <ORACLE_BASE>/admin/soa_cluster. <ORACLE_BASE>/admin/instance_name Local folder for web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2. I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer. <ORACLE_BASE>/product/fmw This can be a shared or local folder for the SOA/BPM Suite software.  I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2 All the shared files need to be put onto a shared storage media.  I am using NFS, but recommendation for production would be a SAN, with mirrored disks for resilience. Collapsing Environments To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine.  This will require more memory but memory is a lot cheaper than additional machines.  For environments that require higher security then stay with a separate WEBHOST tier as per the EDG.  Similarly for high volume environments then keep a separate set of machines for BAM and/or Web tier as per the EDG. Notes on Dev Environments In a dev environment it is acceptable to use a a single node (non-RAC) database, but be aware that the config of the data sources is different (no need to use multi-data source in WLS).  Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer.  To test a cluster properly we will need at least 2 machines. For my test environment I used Oracle Web Cache as a load balancer.  I ran it on one of the SOA Suite machines and it load balanced across the Web Servers on both machines.  This was easy for me to set up and I could administer it from a web based console.

    Read the article

  • Hibernate unable to instantiate default tuplizer - cannot find getter

    - by ZeldaPinwheel
    I'm trying to use Hibernate to persist a class that looks like this: public class Item implements Serializable, Comparable<Item> { // Item id private Integer id; // Description of item in inventory private String description; // Number of items described by this inventory item private int count; //Category item belongs to private String category; // Date item was purchased private GregorianCalendar purchaseDate; public Item() { } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public int getCount() { return count; } public void setCount(int count) { this.count = count; } public String getCategory() { return category; } public void setCategory(String category) { this.category = category; } public GregorianCalendar getPurchaseDate() { return purchaseDate; } public void setPurchasedate(GregorianCalendar purchaseDate) { this.purchaseDate = purchaseDate; } My Hibernate mapping file contains the following: <property name="puchaseDate" type="java.util.GregorianCalendar"> <column name="purchase_date"></column> </property> When I try to run, I get error messages indicating there is no getter function for the purchaseDate attribute: 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Using Hibernate built-in connection pool (not for production use!) 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - Hibernate connection pool size: 20 577 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - autocommit mode: false 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/home_inventory 592 [main] INFO org.hibernate.connection.DriverManagerConnectionProvider - connection properties: {user=root, password=****} 1078 [main] INFO org.hibernate.cfg.SettingsFactory - RDBMS: MySQL, version: 5.1.45 1078 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-5.1.12 ( Revision: ${bzr.revision-id} ) 1103 [main] INFO org.hibernate.dialect.Dialect - Using dialect: org.hibernate.dialect.MySQLDialect 1107 [main] INFO org.hibernate.engine.jdbc.JdbcSupportLoader - Disabling contextual LOB creation as JDBC driver reported JDBC version [3] less than 4 1109 [main] INFO org.hibernate.transaction.TransactionFactoryFactory - Using default transaction strategy (direct JDBC transactions) 1110 [main] INFO org.hibernate.transaction.TransactionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic flush during beforeCompletion(): disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - Automatic session close at end of transaction: disabled 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch size: 15 1110 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC batch updates for versioned data: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Scrollable result sets: enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - JDBC3 getGeneratedKeys(): enabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Connection release mode: auto 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Maximum outer join fetch depth: 2 1111 [main] INFO 
org.hibernate.cfg.SettingsFactory - Default batch fetch size: 1 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Generate SQL with comments: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL updates by primary key: disabled 1111 [main] INFO org.hibernate.cfg.SettingsFactory - Order SQL inserts for batching: disabled 1112 [main] INFO org.hibernate.cfg.SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.hql.ast.ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query language substitutions: {} 1113 [main] INFO org.hibernate.cfg.SettingsFactory - JPA-QL strict compliance: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Second-level cache: enabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Query cache: disabled 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Cache region factory : org.hibernate.cache.impl.NoCachingRegionFactory 1113 [main] INFO org.hibernate.cfg.SettingsFactory - Optimize cache for minimal puts: disabled 1114 [main] INFO org.hibernate.cfg.SettingsFactory - Structured second-level cache entries: disabled 1117 [main] INFO org.hibernate.cfg.SettingsFactory - Echoing all SQL to stdout 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Statistics: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Deleted entity synthetic identifier rollback: disabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Default entity-mode: pojo 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Named query checking : enabled 1118 [main] INFO org.hibernate.cfg.SettingsFactory - Check Nullability in Core (should be disabled when Bean Validation is on): enabled 1151 [main] INFO org.hibernate.impl.SessionFactoryImpl - building session factory org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer] at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:110) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructDefaultTuplizer(EntityTuplizerFactory.java:135) at org.hibernate.tuple.entity.EntityEntityModeToTuplizerMapping.<init>(EntityEntityModeToTuplizerMapping.java:80) at org.hibernate.tuple.entity.EntityMetamodel.<init>(EntityMetamodel.java:323) at org.hibernate.persister.entity.AbstractEntityPersister.<init>(AbstractEntityPersister.java:475) at org.hibernate.persister.entity.SingleTableEntityPersister.<init>(SingleTableEntityPersister.java:133) at org.hibernate.persister.PersisterFactory.createClassPersister(PersisterFactory.java:84) at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:295) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1385) at service.HibernateSessionFactory.currentSession(HibernateSessionFactory.java:53) at service.ItemSvcHibImpl.generateReport(ItemSvcHibImpl.java:78) at service.test.ItemSvcTest.testGenerateReport(ItemSvcTest.java:226) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at 
junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.hibernate.tuple.entity.EntityTuplizerFactory.constructTuplizer(EntityTuplizerFactory.java:107) ... 29 more Caused by: org.hibernate.PropertyNotFoundException: Could not find a getter for puchaseDate in class domain.Item at org.hibernate.property.BasicPropertyAccessor.createGetter(BasicPropertyAccessor.java:328) at org.hibernate.property.BasicPropertyAccessor.getGetter(BasicPropertyAccessor.java:321) at org.hibernate.mapping.Property.getGetter(Property.java:304) at org.hibernate.tuple.entity.PojoEntityTuplizer.buildPropertyGetter(PojoEntityTuplizer.java:299) at org.hibernate.tuple.entity.AbstractEntityTuplizer.<init>(AbstractEntityTuplizer.java:158) at org.hibernate.tuple.entity.PojoEntityTuplizer.<init>(PojoEntityTuplizer.java:77) ... 34 more I'm new to Hibernate, so I don't know all the ins and outs, but I do have the getter and setter for the purchaseDate attribute. I don't know what I'm missing here - does anyone else? Thanks!
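For reference, and not part of the original question: Hibernate's default POJO tuplizer resolves properties by JavaBean convention, so the name given in the mapping's property element must match a getX()/setX() pair exactly, including capitalization. A minimal sketch of the naming it looks for, assuming a mapped property called purchaseDate:

    import java.util.GregorianCalendar;

    // Sketch only: for <property name="purchaseDate" ...>, Hibernate's default accessor
    // resolves getPurchaseDate()/setPurchaseDate(...) by JavaBean convention, so the
    // mapping name and the accessor capitalization have to line up.
    public class ItemNamingSketch {
        private GregorianCalendar purchaseDate;

        public GregorianCalendar getPurchaseDate() {
            return purchaseDate;
        }

        public void setPurchaseDate(GregorianCalendar purchaseDate) {
            this.purchaseDate = purchaseDate;
        }
    }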

    Read the article

  • pre-commit hook in svn: could not be translated from the native locale to UTF-8

    - by Alexandre Moraes
    Hi everybody, I have a problem with my pre-commit hook. This hook test if a file is locked when the user commits. When a bad condition happens, it should output that the another user is locking this file or if nobody is locking, it should show "you are not locking this file message (file´s name)". The error happens when the file´s name has some latin character like "ç" and tortoise show me this in the output. Commit failed (details follow): Commit blocked by pre-commit hook (exit code 1) with output: [Erro output could not be translated from the native locale to UTF-8.] Do you know how can I solve this? Thanks, Alexandre My shell script is here: #!/bin/sh REPOS="$1" TXN="$2" export LANG="en_US.UTF-8" /app/svn/hooks/ensure-has-need-lock.pl "$REPOS" "$TXN" if [ $? -ne 0 ]; then exit 1; fi exit 0 And my perl is here: !/usr/bin/env perl #Turn on warnings the best way depending on the Perl version. BEGIN { if ( $] >= 5.006_000) { require warnings; import warnings; } else { $^W = 1; } } use strict; use Carp; &usage unless @ARGV == 2; my $repos = shift; my $txn = shift; my $svnlook = "/usr/local/bin/svnlook"; my $user; my $ok = 1; foreach my $program ($svnlook) { if (-e $program) { unless (-x $program) { warn "$0: required program $program' is not executable, ", "edit $0.\n"; $ok = 0; } } else { warn "$0: required program $program' does not exist, edit $0.\n"; $ok = 0; } } exit 1 unless $ok; unless (-e $repos){ &usage("$0: repository directory $repos' does not exist."); } unless (-d $repos){ &usage("$0: repository directory $repos' is not a directory."); } foreach my $user_tmp (&read_from_process($svnlook, 'author', $repos, '-t', $txn)) { $user = $user_tmp; } my @errors; foreach my $transaction (&read_from_process($svnlook, 'changed', $repos, '-t', $txn)){ if ($transaction =~ /^U. (.*[^\/])$/){ my $file = $1; my $err = 0; foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ $err = 1; if($locks=~ /Owner: (.*)/){ if($1 != $user){ push @errors, "$file : You are not locking this file!"; } } } if($err==0){ push @errors, "$file : You are not locking this file!"; } } elsif($transaction =~ /^D. (.*[^\/])$/){ my $file = $1; my $tchan = &read_from_process($svnlook, 'lock', $repos, $file); foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ push @errors, "$1 : cannot delete locked Files"; } } elsif($transaction =~ /^A. (.*[^\/])$/){ my $needs_lock; my $path = $1; foreach my $prop (&read_from_process($svnlook, 'proplist', $repos, '-t', $txn, '--verbose', $path)){ if ($prop =~ /^\s*svn:needs-lock : (\S+)/){ $needs_lock = $1; } } if (not $needs_lock){ push @errors, "$path : svn:needs-lock is not set. Pleas ask TCC for support."; } } } if (@errors) { warn "$0:\n\n", join("\n", @errors), "\n\n"; exit 1; } else { exit 0; } sub usage { warn "@_\n" if @_; die "usage: $0 REPOS TXN-NAME\n"; } sub safe_read_from_pipe { unless (@_) { croak "$0: safe_read_from_pipe passed no arguments.\n"; } print "Running @_\n"; my $pid = open(SAFE_READ, '-|'); unless (defined $pid) { die "$0: cannot fork: $!\n"; } unless ($pid) { open(STDERR, ">&STDOUT") or die "$0: cannot dup STDOUT: $!\n"; exec(@_) or die "$0: cannot exec @_': $!\n"; } my @output; while (<SAFE_READ>) { chomp; push(@output, $_); } close(SAFE_READ); my $result = $?; my $exit = $result >> 8; my $signal = $result & 127; my $cd = $result & 128 ? 
"with core dump" : ""; if ($signal or $cd) { warn "$0: pipe from @_' failed $cd: exit=$exit signal=$signal\n"; } if (wantarray) { return ($result, @output); } else { return $result; } } sub read_from_process { unless (@_) { croak "$0: read_from_process passed no arguments.\n"; } my ($status, @output) = &safe_read_from_pipe(@_); if ($status) { if (@output) { die "$0: @_' failed with this output:\n", join("\n", @output), "\n"; } else { die "$0: @_' failed with no output.\n"; } } else { return @output; } }

    Read the article

  • Memory Leak Issue in Weblogic, SUN, Apache and Oracle classes

    - by Amit
    Hi All, Please find below the description of memory leaks issues. Statistics show major growth in the perm area (static classes). Flows were ran for 8 hours , Heap dump was taken after 2 hours and at the end. A growth in Perm area was identified Statistics show from our last run 240MB growth in 6 hour,40mb growth every hour 2GB heap –can hold ¾ days ,heap will be full in ¾ days Heap dump show –growth in area as mentioned below JMS connection/session Area Apache org.apache.xml.dtm.DTM[] org.apache.xml.dtm.ref.ExpandedNameTable$ExtendedType org.jdom.AttributeList org.jdom.Content[] org.jdom.ContentList org.jdom.Element SUN * ConstantPoolCacheKlass * ConstantPoolKlass * ConstMethodKlass * MethodDataKlass * MethodKlass * SymbolKlass byte[] char[] com.sun.org.apache.xml.internal.dtm.DTM[] com.sun.org.apache.xml.internal.dtm.ref.ExtendedType java.beans.PropertyDescriptor java.lang.Class java.lang.Long java.lang.ref.WeakReference java.lang.ref.SoftReference java.lang.String java.text.Format[] java.util.concurrent.ConcurrentHashMap$Segment java.util.LinkedList$Entry Weblogic com.bea.console.cvo.ConsoleValueObject$PropertyInfo com.bea.jsptools.tree.TreeNode com.bea.netuix.servlets.controls.content.StrutsContent com.bea.netuix.servlets.controls.layout.FlowLayout com.bea.netuix.servlets.controls.layout.GridLayout com.bea.netuix.servlets.controls.layout.Placeholder com.bea.netuix.servlets.controls.page.Book com.bea.netuix.servlets.controls.window.Window[] com.bea.netuix.servlets.controls.window.WindowMode javax.management.modelmbean.ModelMBeanAttributeInfo weblogic.apache.xerces.parsers.SecurityConfiguration weblogic.apache.xerces.util.AugmentationsImpl weblogic.apache.xerces.util.AugmentationsImpl$SmallContainer weblogic.apache.xerces.util.SymbolTable$Entry weblogic.apache.xerces.util.XMLAttributesImpl$Attribute weblogic.apache.xerces.xni.QName weblogic.apache.xerces.xni.QName[] weblogic.ejb.container.cache.CacheKey weblogic.ejb20.manager.SimpleKey weblogic.jdbc.common.internal.ConnectionEnv weblogic.jdbc.common.internal.StatementCacheKey weblogic.jms.common.Item weblogic.jms.common.JMSID weblogic.jms.frontend.FEConnection weblogic.logging.MessageLogger$1 weblogic.logging.WLLogRecord weblogic.rjvm.BubblingAbbrever$BubblingAbbreverEntry weblogic.rjvm.ClassTableEntry weblogic.rjvm.JVMID weblogic.rmi.cluster.ClusterableRemoteRef weblogic.rmi.internal.CollocatedRemoteRef weblogic.rmi.internal.PhantomRef weblogic.rmi.spi.ServiceContext[] weblogic.security.acl.internal.AuthenticatedSubject weblogic.security.acl.internal.AuthenticatedSubject$SealableSet weblogic.servlet.internal.ServletRuntimeMBeanImpl weblogic.transaction.internal.XidImpl weblogic.utils.collections.ConcurrentHashMap$Entry Oracle XA Transaction oracle.jdbc.driver.Binder[] oracle.jdbc.driver.OracleDatabaseMetaData oracle.jdbc.driver.T4C7Ocommoncall oracle.jdbc.driver.T4C7Oversion oracle.jdbc.driver.T4C8Oall oracle.jdbc.driver.T4C8Oclose oracle.jdbc.driver.T4C8TTIBfile oracle.jdbc.driver.T4C8TTIBlob oracle.jdbc.driver.T4C8TTIClob oracle.jdbc.driver.T4C8TTIdty oracle.jdbc.driver.T4C8TTILobd oracle.jdbc.driver.T4C8TTIpro oracle.jdbc.driver.T4C8TTIrxh oracle.jdbc.driver.T4C8TTIuds oracle.jdbc.driver.T4CCallableStatement oracle.jdbc.driver.T4CClobAccessor oracle.jdbc.driver.T4CConnection oracle.jdbc.driver.T4CMAREngine oracle.jdbc.driver.T4CNumberAccessor oracle.jdbc.driver.T4CPreparedStatement oracle.jdbc.driver.T4CTTIdcb oracle.jdbc.driver.T4CTTIk2rpc oracle.jdbc.driver.T4CTTIoac oracle.jdbc.driver.T4CTTIoac[] 
oracle.jdbc.driver.T4CTTIoauthenticate oracle.jdbc.driver.T4CTTIokeyval oracle.jdbc.driver.T4CTTIoscid oracle.jdbc.driver.T4CTTIoses oracle.jdbc.driver.T4CTTIOtxen oracle.jdbc.driver.T4CTTIOtxse oracle.jdbc.driver.T4CTTIsto oracle.jdbc.driver.T4CXAConnection oracle.jdbc.driver.T4CXAResource oracle.jdbc.oracore.OracleTypeADT[] oracle.jdbc.xa.OracleXAResource$XidListEntry oracle.net.ano.Ano oracle.net.ns.ClientProfile oracle.net.ns.ClientProfile oracle.net.ns.NetInputStream oracle.net.ns.NetOutputStream oracle.net.ns.SessionAtts oracle.net.nt.ConnOption oracle.net.nt.ConnStrategy oracle.net.resolver.AddrResolution oracle.sql.CharacterSet1Byte we are using Oracle BEA Weblogic 9.2 MP3 JDK 1.5.12 Oracle versoin 10.2.0.4 (for oracle we found one path which is needed to applied to avoid XA transaction memory leaks). But we are stuck to resolve SUN, BEA Weblgogic and Apache leaks. please suggest... regards, Amit J.
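One frequent cause of the weblogic.jms.frontend.FEConnection and JMS connection/session growth listed above is client code that opens JMS connections or sessions per request and never closes them. The following is a generic, hypothetical Java sketch of the usual remedy (close in a finally block); it is not taken from the application being analyzed.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;

    // Sketch: always release JMS resources so the server-side connection/session
    // objects do not accumulate between requests.
    public class JmsSendExample {
        public static void send(ConnectionFactory factory, Queue queue, String text) throws Exception {
            Connection connection = null;
            Session session = null;
            try {
                connection = factory.createConnection();
                session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(text));
            } finally {
                if (session != null) session.close();       // closing the session also closes its producers
                if (connection != null) connection.close(); // closing the connection closes its sessions too
            }
        }
    }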

    Read the article

  • How to avoid circular relationship in SQL-Server?

    - by Shimmy
    I am creating a self-referencing table. Table Item has the following columns: ItemId int - PK; Amount money - not null; Price money - a computed column using a UDF that derives its value from the ancestors' Amount; ParentItemId int - nullable, a reference to another ItemId in this table. I need to avoid a loop, meaning an item cannot become an ancestor of its own ancestors: if ItemId = 2 has ParentItemId = 1, then setting ItemId = 1 to ParentItemId = 2 shouldn't be allowed. I don't know what the best practice is in this situation. I think I should add a check constraint that gets a scalar value from a UDF, or something along those lines. EDIT: Another option is to create an INSTEAD OF trigger and, in one transaction, update the ParentItemId field and then select the Price field for the affected row (via @@RowIdentity); if that fails, roll back the transaction. But I would prefer validating with a UDF. Any ideas are sincerely welcome.
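For illustration only (this is not from the original question, and it does not replace an in-database check constraint or trigger): the validation logic amounts to walking up the proposed parent chain and rejecting the update if it ever reaches the row being changed. A hypothetical Java/JDBC sketch against the Item table described above:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.HashSet;
    import java.util.Set;

    // Sketch: before setting ParentItemId, walk the proposed parent's ancestor chain.
    // If we ever reach the item being updated (or detect an existing loop), refuse the change.
    public class ItemCycleCheck {
        public static boolean wouldCreateCycle(Connection con, int itemId, Integer newParentId) throws Exception {
            Set<Integer> seen = new HashSet<Integer>();
            Integer current = newParentId;
            while (current != null) {
                if (current == itemId || !seen.add(current)) {
                    return true; // reached the item itself, or walked into an existing loop
                }
                PreparedStatement ps = con.prepareStatement("SELECT ParentItemId FROM Item WHERE ItemId = ?");
                ps.setInt(1, current);
                ResultSet rs = ps.executeQuery();
                current = rs.next() ? (Integer) rs.getObject(1) : null;
                rs.close();
                ps.close();
            }
            return false;
        }
    }

The same walk can be expressed inside the database (in a UDF used by a check constraint, or in the INSTEAD OF trigger mentioned in the question); the sketch only shows the shape of the check.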

    Read the article

  • Disable Autocommit in H2 with Hibernate/C3P0 ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 (as well as other databases). Currently I am using Atomikos for transaction and C3P0 for connection pooing. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing): [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation: this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true"</prop> <prop key="hibernate.use_sql_comments">true</prop> </property> </bean>
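One way to see which autocommit mode Hibernate actually hands to your code is to inspect the pooled JDBC connection inside a unit of work. The sketch below is illustrative only; it assumes Hibernate 3.2 or later (for Session.doWork) and an already-open Session, and it uses plain JDBC calls rather than anything Atomikos- or H2-specific (in a JTA/XA setup the transaction manager normally controls this for you).

    import java.sql.Connection;
    import java.sql.SQLException;
    import org.hibernate.Session;
    import org.hibernate.jdbc.Work;

    // Sketch: log, and if necessary switch off, autocommit on the pooled JDBC
    // connection that Hibernate uses for this unit of work.
    public class AutocommitCheck {
        public static void logAutocommit(Session session) {
            session.doWork(new Work() {
                public void execute(Connection connection) throws SQLException {
                    System.out.println("autocommit = " + connection.getAutoCommit());
                    if (connection.getAutoCommit()) {
                        connection.setAutoCommit(false); // plain JDBC; only meaningful outside a managed JTA transaction
                    }
                }
            });
        }
    }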

    Read the article

  • How to stop UITableView moveRowAtIndexPath from leaving blank rows upon reordering

    - by coneybeare
    I am having an issue where in reordering my UITableViewCells, the tableView is not scrolling with the cell. Only a blank row appears and any subsequent scrolling gets an Array out of bounds error without any of my code in the Stack Trace. Here is a quick video of the problem. Here is the relevant code: - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { return indexPath.section == 1; } - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { BOOL ret = indexPath.section == 1 && indexPath.row < self.count; DebugLog(@"canMoveRowAtIndexPath: %d:%d %@", indexPath.section, indexPath.row, (ret ? @"YES" : @"NO")); return ret; } - (void)delayedUpdateCellBackgroundPositionsForTableView:(UITableView *)tableView { [self performSelectorOnMainThread:@selector(updateCellBackgroundPositionsForTableView:) withObject:tableView waitUntilDone:NO]; } - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { if (fromIndexPath.row == toIndexPath.row) return; DebugLog(@"Moved audio from %d:%d to %d:%d", fromIndexPath.section, fromIndexPath.row, toIndexPath.section, toIndexPath.row); NSMutableArray *audio = [self.items objectAtIndex:fromIndexPath.section]; [audio exchangeObjectAtIndex:fromIndexPath.row withObjectAtIndex:toIndexPath.row]; [self performSelector:@selector(delayedUpdateCellBackgroundPositionsForTableView:) withObject:tableView afterDelay:kDefaultAnimationDuration/3]; } And here is the generated Stack Trace of the crash: Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: iPhone Simulator 3.2 (193.3), iPhone OS 3.0 (7A341) *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray removeObjectsInRange:]: index (6) beyond bounds (6)' Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 CoreFoundation 0x302ac924 ___TERMINATING_DUE_TO_UNCAUGHT_EXCEPTION___ + 4 1 libobjc.A.dylib 0x93cb2509 objc_exception_throw + 56 2 CoreFoundation 0x3028e5fb +[NSException raise:format:arguments:] + 155 3 CoreFoundation 0x3028e55a +[NSException raise:format:] + 58 4 Foundation 0x305684e9 _NSArrayRaiseBoundException + 121 5 Foundation 0x30553a6e -[NSCFArray removeObjectsInRange:] + 142 6 UIKit 0x30950105 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 862 7 UIKit 0x30947715 -[UITableView layoutSubviews] + 250 8 QuartzCore 0x0090bd94 -[CALayer layoutSublayers] + 78 9 QuartzCore 0x0090bb55 CALayerLayoutIfNeeded + 229 10 QuartzCore 0x0090b3ae CA::Context::commit_transaction(CA::Transaction*) + 302 11 QuartzCore 0x0090b022 CA::Transaction::commit() + 292 12 QuartzCore 0x009132e0 CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) + 84 13 CoreFoundation 0x30245c32 __CFRunLoopDoObservers + 594 14 CoreFoundation 0x3024503f CFRunLoopRunSpecific + 2575 15 CoreFoundation 0x30244628 CFRunLoopRunInMode + 88 16 GraphicsServices 0x32044c31 GSEventRunModal + 217 17 GraphicsServices 0x32044cf6 GSEventRun + 115 18 UIKit 0x309021ee UIApplicationMain + 1157 19 XXXXXXXX 0x0000278a main + 104 (main.m:12) 20 XXXXXXXX 0x000026f6 start + 54 NOte that the array out of bounds length is not the length of my elements (I have 9), but always something smaller. I have been trying to solve this for many hours days without avail… any ideas? 
UPDATE: More code as requested In my delegate: - (UITableViewCellEditingStyle)tableView:(UITableView *)tableView editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath { return UITableViewCellEditingStyleNone; } - (NSIndexPath *)tableView:(UITableView *)tableView targetIndexPathForMoveFromRowAtIndexPath:(NSIndexPath *)sourceIndexPath toProposedIndexPath:(NSIndexPath *)proposedDestinationIndexPath { int count = [(UAPlaylistEditDataSource *)self.dataSource count]; if (proposedDestinationIndexPath.section == 0) { return [NSIndexPath indexPathForRow:0 inSection:sourceIndexPath.section]; }else if (proposedDestinationIndexPath.row >= count) { return [NSIndexPath indexPathForRow:count-1 inSection:sourceIndexPath.section]; } return proposedDestinationIndexPath; } …thats about it. I am using the three20 framework and I have not had any issues with reordering till now. The problem is also not in the updateCellBackgroundPositionsForTableView: method as it still crashes when this is commented out.

    Read the article

  • MDX Except function in where clause

    - by rfders
    Hi, I'm having a problem restricting a query in MDX using the Except function in the where clause. I need to retrieve a set of data that is not in a specific set, so I created the following query: select {[Measures].[Amount], [Measures].[Transaction Cost], [Measures].[Transaction Number]} ON COLUMNS, {[ManualProcessing].[All ManualProcessings].[MAGNETICSTRIPE], [ManualProcessing].[All ManualProcessings].[MANUAL]} ON ROWS FROM [Transactions] where except([Product].[All Products].Children, {[Product].[All Products].[Debit]}) Apparently this works fine, but when I try to add another restriction to the slicer, I get this error: No function matches signature (Set, Member). I'm currently working on Mondrian 3.1. Is it possible to add multiple restrictions to the slicer when using the Except function? Is there any other way to achieve this? Thanks in advance.

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >