Search Results

Search found 2359 results on 95 pages for 'transaction'.


  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back. At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights: 1. Checkin We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive that will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome message using Starmount's digital sign. 2. Receive Offers There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign. 3. Scan Products Rather than having consumers scan product barcodes, we used Store Inventory Management to print QR codes on shelf labels, then provided access to a scanner in the Store Gateway iPhone app. When the consumer scans the shelf label they are shown product information provided by the retailer. 4. Checkout While we don't have an NFC-enabled mobile phone, we have an NFC chip that can attach to a phone. We're using this to check out using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer#, coupons, and payment information. This really speeds up the checkout process. 5. Digital Receipt After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts, where consumers can store all their digital receipts. There's even an iPhone app that provides easy access to the receipts. This covers about half of what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC. First retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR codes in a catalog to create a shopping cart of items. Then do some further research on the retailer's Web site and be told about related items that might interest me. Be able to easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In that scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for the new class of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database but rather a label for vendors who supply emerging data products with relational database properties (like ACID transactions) along with high performance. Products from NewSQL vendors usually rely on in-memory data for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL‘ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words - NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture – In this framework each node owns a subset of the data, and queries are split into smaller queries that are sent to the nodes that process the data. E.g. NuoDB, Clustrix, VoltDB. MySQL Engines – Highly optimized storage engines that keep the MySQL interface are examples of this category. E.g. InnoDB, Akiban. Transparent Sharding – These systems automatically split the database across multiple nodes. E.g. ScaleArc. Summary In simple words – NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow In tomorrow's blog post we will discuss the Role of Cloud Computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-05

    - by Bob Rhubart
    Why is enterprise software often so complicated? | Rajesh Raheja rraheja.wordpress.com Rajesh Raheja shares "a few examples of requirements that lead to creation of complex platform infrastructures that make up the complex enterprise software." Educause Top-Ten IT Issues - the most change in a decade or more | Cole Clark blogs.oracle.com Cole Clark discusses why "higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia." Oracle VM RAC template - what it took | Wim Coekaerts blogs.oracle.com Wim Coekaerts shares an example that shows how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. Oracle Cloud and Oracle Platinum Services Announcements oracle.com Featuring Larry Ellison and Mark Hurd. Wednesday, June 06, 2012. 1:00 p.m. PT – 2:30 p.m. PT Creating an Oracle Endeca Information Discovery 2.3 Application Part 1 : Scoping and Design | Mark Rittman www.rittmanmead.com Oracle ACE Director Mark Rittman launches a new series that dives into "the various stages in building a simple Oracle Endeca Information Discovery application, using the recent Endeca Information Discovery 2.3 release." Introducing Decision Tables in the SOA Suite 11g | Lucas Jellema technology.amis.nl Oracle ACE Director Lucas Jellema demonstrates how "the decision table can be put to good use to implement the business logic behind the classical game of Rock, Paper and Scissors." Application integration: reorganise, recycle, repurpose | Andrew Clarke radiofreetooting.blogspot.com "Integration is a topic which is in everybody's bailiwick," says Oracle ACE Andrew Clarke. "The business people want to get the best value from their existing IT investments. The architects need to understand the interfaces between the silos and across the layers. The developers have to implement it." Using XA Transactions in Coherence-based Applications | Jonathan Purdy blogs.oracle.com Purdy shares "a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager." Thought for the Day "The difficulty lies, not in the new ideas, but in escaping from the old ones..." — John Maynard Keynes (June 5, 1883 - April 4, 1946) Source: Quotations Page

    Read the article

  • Application Integration Future &ndash; BizTalk Server &amp; Windows Azure (TechEd 2012)

    - by SURESH GIRIRAJAN
    I am really excited to see a lot of news around BizTalk at TechEd 2012. I was recently watching the session presented by Sriram and Rajesh on "Application Integration Futures: The Road Map and What's Next on Windows Azure". It was a great session with a lot of interesting stuff about the feature updates for BizTalk and Azure integration. I have highlighted them below; customers who haven't started using the Microsoft BizTalk ESB Toolkit should definitely start using it, since it is going to be part of the core BizTalk product in a future release, which is cool… BizTalk Server feature enhancements Manageability: ESB Toolkit is going to be part of the core BizTalk product and setup. Visualize BizTalk artifact dependencies in the BizTalk administration console. HIS administration using configuration files. Performance: Improvements in ordered send port scenarios. Improved performance in dynamic send ports and ESB, plus the ability to configure the BizTalk host handler for dynamic send ports. Right now they run under the default host, which does not allow them to scale. MLLP adapter enhancements and DB2 client transaction load balancing / client bulk insert. Platform Support: Visual Studio 2012, Windows 8 Server, SQL Server 2012, Office 15 and System Center 2012. B2B enhancements to support the latest industry standards natively. Connectivity Improvements: Consume REST services directly in BizTalk. SharePoint integration made easier. Improvements to the SMTP adapter, to add macros for sending the same email with different content to different parties. Connectivity to Azure Service Bus relay, queues and topics. DB2 client connectivity to SQL Server and SQL Server connectivity to Informix. CICS HTTP client connectivity to Windows. Azure: Use Azure as IaaS/PaaS for the BizTalk environment. Use Azure to provision a BizTalk environment for testing / development, then later move to on-premises or build a hybrid cloud approach. Eliminates HW procurement for BizTalk environments for testing / demos / development etc. Enables creating a BizTalk farm easily and removing/adding servers as needed. EAI Service: EAI Bridge Protocol transformation Message Transformation Running custom code Message Enrichment Hybrid Connectivity LOB Applications On-premises Application On-premises Connectivity to Applications in the cloud Queues/ Topics Ftp Devices Web Services The bridge can be customized based on the service needs to provide the different capabilities needed as part of the bridge. Look at the EDI bridge for an EDI service sample, also with tracking enabled through the portal. http://msdn.microsoft.com/en-us/library/windowsazure/hh674490 Adapters: Service Bus Messaging adapter - New adapter added. WebHttp adapter - For REST services. NetTcpRelay adapter - New adapter added. I will start posting more once I start playing with this…

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits Fast ad-hoc analytics without the need to pre-create indexes Completely transparent to existing applications Faster mixed workload OLTP No database size limit Industrial strength availability and security Robustness and maturity of Oracle Database 12c To find out more see Oracle Database In-Memory Comment from Rittman Mead on Oracle In-Memory Option Launch  ... and I will let you know how this unfolds with regard to advantages for OBI11g and Exalytics and Big Data over the coming months.
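
    As a rough illustration only (not from the original post): the sketch below shows how an existing application might opt a table into the in-memory column store without changing its queries. It assumes an Oracle Database 12c instance with the In-Memory option enabled, INMEMORY_SIZE already set, and cx_Oracle installed; the credentials, DSN, and table name are placeholders.

        # Hedged sketch: mark a table for the in-memory column store and run an
        # unchanged query against it. Credentials, DSN, and table are placeholders.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
        cursor = conn.cursor()

        # Population into the columnar in-memory store is transparent to the app;
        # only this one-time DDL flag is needed.
        cursor.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

        # Existing SQL is untouched -- the optimizer decides when to scan the
        # in-memory copy instead of the row store.
        cursor.execute("SELECT COUNT(*) FROM sales WHERE amount > :1", [1000])
        print(cursor.fetchone()[0])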

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload. # vmstat 1  kthr      memory            page            disk          faults      cpu  r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id  1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9  12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9  11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7  8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27  0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97  0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95  0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92  14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8  9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11 This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including: Unexpected JMeter behavior Network contention Java Garbage Collection Application Server thread pool problems Connection pool problems Database transaction processing Database I/O contention Graphing the CPU %idle led to a breakthrough: Several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:                     extended device statistics     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0     0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0     0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0     0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0     0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0     0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0     0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0 Here is a little more information about my database configuration: The database and application server were running on two different SPARC servers. Storage for the database was on a storage array connected via 8 gigabit Fibre Channel Data storage and log file were on different physical disk drives Reliable low latency I/O is provided by battery backed NVRAM Highly available: Two Fibre Channel links accessed via MPxIO Two Mirrored cache controllers The log file physical disks were mirrored in the storage device Database log files on a ZFS Filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming Why would I be getting service time spikes in my high-end storage? 
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did: At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device: # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0 The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs: Here is the zpool configuration: # zpool status ZFS-db-41   pool: ZFS-db-41  state: ONLINE  scan: none requested config:         NAME                                     STATE         ZFS-db-41                                ONLINE           c3t60080E5...F4F6d0  ONLINE         logs           c3t60080E5...F8A7d0  ONLINE Now, the I/O spikes look like this:                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0     0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0     0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0 We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log Solaris ZFS, Synchronous Writes and the ZIL Explained ZFS Evil Tuning Guide: Cache Flushes ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • As the current draft stands, what is the most significant change the "National Strategy for Trusted Identities in Cyberspace" will provoke?

    - by mfg
    A current draft of the "National Strategy for Trusted Identities in Cyberspace" has been posted by the Department of Homeland Security. This question is not asking about privacy or constitutionality, but about how this act will impact developers' business models and development strategies. When the post was made I was reminded of Jeff's November blog post regarding an internet driver's license. Whether that is a perfect model or not, both approaches are attempting to handle a shared problem (of both developers and end users): How do we establish an online identity? The question I ask here is, with respect to the various burdens that would be imposed on developers and users, what are some of the major, foreseeable implementation issues that will arise from the current U.S. Government's proposed solution? For a quick primer on the setup, jump to page 12 for infrastructure components; here are two stand-outs: An Identity Provider (IDP) is responsible for the processes associated with enrolling a subject, and establishing and maintaining the digital identity associated with an individual or NPE. These processes include identity vetting and proofing, as well as revocation, suspension, and recovery of the digital identity. The IDP is responsible for issuing a credential, the information object or device used during a transaction to provide evidence of the subject's identity; it may also provide linkage to authority, roles, rights, privileges, and other attributes. The credential can be stored on an identity medium, which is a device or object (physical or virtual) used for storing one or more credentials, claims, or attributes related to a subject. Identity media are widely available in many formats, such as smart cards, security chips embedded in PCs, cell phones, software based certificates, and USB devices. Selection of the appropriate credential is implementation specific and dependent on the risk tolerance of the participating entities. Here are the first considered actionable components of the draft: Action 1: Designate a Federal Agency to Lead the Public/Private Sector Efforts Associated with Achieving the Goals of the Strategy Action 2: Develop a Shared, Comprehensive Public/Private Sector Implementation Plan Action 3: Accelerate the Expansion of Federal Services, Pilots, and Policies that Align with the Identity Ecosystem Action 4: Work Among the Public/Private Sectors to Implement Enhanced Privacy Protections Action 5: Coordinate the Development and Refinement of Risk Models and Interoperability Standards Action 6: Address the Liability Concerns of Service Providers and Individuals Action 7: Perform Outreach and Awareness Across all Stakeholders Action 8: Continue Collaborating in International Efforts Action 9: Identify Other Means to Drive Adoption of the Identity Ecosystem across the Nation

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I've been meaning to write for a while, good that it fits with this month's T-SQL Tuesday, hosted by Joey D'Antoni (@jdanton) Ever since I got into databases, I've been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I've also spent a long time as a developer, and appreciate that databases don't exactly fit within the stuff I learned in my first year of uni, particularly the "Algorithms and Data Structures" subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it's why I'm a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we're getting to the data as efficiently as possible. Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than using the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I'd looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn't exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance. SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you'd've made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it's durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and using bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they're coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don't support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
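
    To make the Hekaton description above concrete, here is a minimal sketch (not Rob's code) of creating and using a memory-optimized table from Python via pyodbc; it assumes a SQL Server 2014 instance whose database already has a MEMORY_OPTIMIZED_DATA filegroup, and the connection string, schema, and table are placeholders.

        # Hedged sketch: a SQL Server 2014 memory-optimized (Hekaton) table.
        # Assumes pyodbc and a database with a MEMORY_OPTIMIZED_DATA filegroup.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server Native Client 11.0};SERVER=myserver;"
            "DATABASE=SalesDemo;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # DURABILITY = SCHEMA_AND_DATA keeps rows durable via a small write to the
        # transaction log on commit, without the usual locking and latching.
        cursor.execute("""
            CREATE TABLE dbo.SessionState (
                SessionId INT NOT NULL
                    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
                Payload   NVARCHAR(200) NOT NULL
            ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        """)
        conn.commit()

        # Rows live as versioned structures in memory; updates behave as
        # delete-and-insert under the covers, as described in the post.
        cursor.execute("INSERT INTO dbo.SessionState VALUES (?, ?)", 1, "cart: 3 items")
        conn.commit()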

    Read the article

  • Oracle GoldenGate 11gR2 Event Marker System

    - by Doug Reid
    Oracle GoldenGate 11gR2 includes a number of refinements to the Event Marker system. Using event markers enables GoldenGate processes to take a defined action based on an event in the data stream. This feature within Oracle GoldenGate simplifies methods to embed specific custom processing in the areas of error handling, alerts, and notification. The event marker system effectively allows for DML driven workflows to be created within GoldenGate and enables customers to craft non-standard processing based on special events. There are a number of supported event actions including: trace, log, checkpoint before, suspend, abort, and several others. With 11gR1 events can now be triggered by DDL operations, plus variables can be passed in and out of the system to shell scripts. Some good use cases for this feature are Automatic switchover to the secondary system during planned outages Better monitoring over source systems’ performance and automated switchover to the standby system in case of an outage with the primary system Automatic switchover from initial load to changed data movement Automatic synchronization of any type of batch processing taking place on both the source and target databases for database consistency Automatic stoppage of the Delivery module to allow end-of-day reporting Finding, tracking, and reporting on transactions that are of interest including the ones that do not have primary keys or transaction record numbers If you would like to see a demo, please visit our youtube channel (http://youtube.com/oraclegoldengate)  To learn more about the new features of Oracle GoldenGate 11gR2 and to ask questions to the PM team, please join us on September 12th  8am or 10am PST for our live webcast. Click here to register.

    Read the article

  • PASS Summit for SQL Starters

    - by Davide Mauri
    I've received a bunch of emails from PASS Summit "First Timers" that are also somehow new to SQL Server (by "somehow" I mean people with less than 6 months' experience but with some basic knowledge of the SQL Server engine) or are catching up from SQL Server 2000. The common question regards the sessions one should not miss in order to have a broad view of the entire SQL Server platform or some insight into specific areas of SQL Server. Given that I'm on (semi-)vacation and that I have more free time (not true, I have to prepare slides & demos for several conferences, PASS Summit - Building the Agile Data Warehouse with SQL Server 2012 - and PASS 24H - Agile Data Warehousing with SQL Server 2012 - among them…but let's pretend it to be true), I've decided to make a post to answer these common questions. Of course this is my personal point of view, and given that the number and quality of sessions that will be delivered at PASS Summit is so high that it is very difficult to make a choice, feel free to jump into the discussion and leave your feedback or – even better – answer with another post. I'm sure it will be very helpful to all the SQL Server beginners out there. I've imposed on myself a maximum of 6 sessions for each track. Why 6? Because it's the maximum number of sessions you can follow in one day, and given that all the sessions will be on the Summit DVD, they are the answer to the following question: "If I have one day to spend in training, which sessions should I watch?". Of course a Summit is not like a course, so a lot of very basic concepts of well-established technologies won't be found here. Analysis Services, Integration Services, MDX are not part of the Summit this time (at least for the basic part of them). Enough with that, let's start with the session list ideal for a good overview of the entire SQL Server platform: Geospatial Data Types in SQL Server 2012 Inside Unstructured Data: SQL Server 2012 FileTable and Semantic Search XQuery and XML in SQL Server: Common Problems and Best Practice Solutions Microsoft's Big Play for Big Data Dashboards: When to Choose Which MSBI Tool Microsoft BI End-User Tools 360° For what concerns Database Development, I recommend the following sessions: Understanding Transaction Isolation Levels What to Look for in Execution Plans Improve Query Performance by Fixing Bad Parameter Sniffing A Window into Your Data: Using SQL Window Functions Practical Uses and Optimization of New T-SQL Features in SQL Server 2012 Taking MERGE Beyond the Basics For Business Intelligence Information Delivery: Analyzing SSAS Data with Excel Building Compelling Power View Reports Managed Self-Service BI PowerPivot 101  SharePoint for Business Intelligence The Best Microsoft BI Tools You've Never Heard Of And for Business Intelligence Architecture & Development: BI Power Hour Building a Tabular Model Database Enterprise Information Management: Bringing Together SSIS, DQS, and MDS SSIS Design Patterns Storing Columnstore Indexes Hadoop and Its Ecosystem Components in Action Beside the listed sessions, First Timers should also take a look at the page PASS set up for them: http://www.sqlpass.org/summit/2012/Connect/FirstTimers.aspx See you at PASS Summit!

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part II

    - by Ted Davis
    As we draw closer to the first day of Oracle OpenWorld, starting in less than a week, we continue to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033). We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. In today's post we highlight three additional Oracle Linux / Oracle VM Partners from the pavilion. Micro Focus delivers mainframe solutions software and software delivery tools with its Borland products. These tools are grouped under the following solutions: Analysis and testing tools for JDeveloper Micro Focus Enterprise Analyzer is key to the success of application overhaul and modernization strategies by ensuring that they are based on a solid knowledge foundation. It reveals the reality of enterprise application portfolios and the detailed constructs of business applications. COBOL for Oracle Database, Oracle Linux, and Tuxedo Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment. It brings the productivity of the Eclipse IDE to COBOL, and provides the ability to deploy key business-critical COBOL applications to Oracle Linux both natively and under a JVM. Migration and Modernization tooling for mainframes Enterprise application knowledge, development, test and workload re-hosting tools significantly improve the efficiency of business application delivery, enabling CIOs and IT leaders to modernize application portfolios and target platforms such as Oracle Linux. When it comes to Oracle Linux database environments, supporting high transaction rates with minimal response times is no longer just a goal. It's a strategic imperative. The "data deluge" is impacting the ability of databases and other strategic applications to access data and provide real-time analytics and reporting. As such, customer demand for accelerated application performance is increasing. Visit LSI at the Oracle Linux Pavilion, #733, to find out how LSI Nytro Application Acceleration products are designed from the ground up for database acceleration. Our intelligent solid-state storage solutions help to eliminate I/O bottlenecks, increase throughput and enable Oracle customers to achieve the highest levels of DB performance. Accelerate Your Exadata Success With Teleran. Teleran's software solutions for Oracle Exadata and Oracle Database reduce the cost, time and effort of migrating and consolidating applications on Exadata. In addition Teleran delivers visibility and control solutions for BI/data warehouse performance and user management that ensure service levels and cost efficiency. Teleran will demonstrate these solutions at the Oracle Open World Linux Pavilion: Consolidation Accelerator - Reduces the cost, time and risk of migrating and consolidating applications on Exadata. Application Readiness – Identifies legacy application performance enhancements needed to take advantage of Exadata performance features. Workload Accelerator – Identifies and clusters workloads for faster performance on Exadata. Application Visibility and Control - Improves performance, user productivity, and alignment to business objectives while reducing support and resource costs. Thanks for reading today's Partner Spotlight. Three more partners will be highlighted tomorrow. If you missed our first Partner Spotlight check it out here.

    Read the article

  • Social Networks & the Cloud

    - by kellsey.ruppel
    It’s no secret that millions of people are connected to the Internet. And it also probably doesn’t come as a surprise that a lot of those people are connected on social networking sites.  Social networks have become an excellent platform for sharing and communication that reflects real world relationships and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another. Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn’t just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It’s hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality of service concerns are no longer at the forefront; rather, it’s about focusing on the right mix of capabilities for the business. Cloud vs. On-Premise? Policies & governance models? Social in the cloud? Cloud’s increasing sophistication, security in applications, mobility, transaction processing and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network. Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization’s collective expertise to make informed decisions and drive business forward. Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud. Learn more about how you can use Oracle Social Network to revolutionize how you create, understand, and achieve true value through enterprise social networking. And be sure to check out the following sessions here at Oracle OpenWorld, where you can learn more about Oracle Cloud and Oracle Social Network. Tuesday, Oct 2 – Oracle WebCenter’s Cloud Strategy: From Social and Platform Services to Mashups, 1:15pm - 2:15pm, Moscone West – 3001  Wednesday, Oct 3 – Oracle Social Network: Your Strategy for Socially Enabled Oracle Fusion Applications, 11:45am - 12:45pm, Moscone West – 3002/3004

    Read the article

  • Is it reasonable to insist on reproducing every defect before diagnosing and fixing it?

    - by amphibient
    I work for a software product company. We have large enterprise customers who implement our product and we provide support to them. For example, if there is a defect, we provide patches, etc. In other words, it is a fairly typical setup. Recently, a ticket was issued and assigned to me regarding an exception that a customer found in a log file and that has to do with concurrent database access in a clustered implementation of our product. So the specific configuration of this customer may well be critical in the occurrence of this bug. All we got from the customer was their log file. The approach I proposed to my team was to attempt to reproduce the bug in a configuration similar to that of the customer and get a comparable log. However, they disagree with my approach, saying that I should not need to reproduce the bug (as that is overly time-consuming and will require simulating a server cluster on VMs) and that I should simply "follow the code" to see where the thread- and/or transaction-unsafe code is and make the change working off a simple local development setup, which is not a clustered implementation like the environment where the bug occurred. To me, working out of an abstract blueprint (program code) rather than a concrete, tangible, visible manifestation (runtime reproduction) seems like a difficult working environment (for a person of normal cognitive abilities and attention span), so I wanted to ask a general question: Is it reasonable to insist on reproducing every defect and debugging it before diagnosing and fixing it? Or: If I am a senior developer, should I be able to read (multithreaded) code and create a mental picture of what it does in all use case scenarios rather than require to run the application, test different use case scenarios hands on, and step through the code line by line? Or am I a poor developer for demanding that kind of work environment? Is debugging for sissies? In my opinion, any fix submitted in response to an incident ticket should be tested in an environment simulated to be as close to the original environment as possible. How else can you know that it will really remedy the issue? It is like releasing a new model of a vehicle without crash testing it with a dummy to demonstrate that the air bags indeed work. Last but not least, if you agree with me: How should I talk with my team to convince them that my approach is reasonable, conservative and more bulletproof?

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straight-forward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site etc). Intention & Problem description We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile. Examples User abort: The user inputs credit card details and hits submit. An XML message to the provider's gateway is sent and we are waiting for a response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the user's connection? Database goes away: sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider but the response cannot be stored in the database. What approach could I use, using PHP, to sort of start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback have happened, I could sort of "lock" the user to prevent him from paying again or to improperly account for payments - but how? And what else do I need to consider technically? None of the integration examples of e.g. Worldpay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
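
    For illustration only (the question is about PHP, and the gateway call below is a hypothetical placeholder, not a real provider API): one common answer to the "commit or roll back at the very end" idea is to persist a 'pending' payment record before contacting the gateway and reconcile it afterwards, so a user abort or lost connection leaves an auditable pending row rather than a double charge. A minimal Python sketch of that pattern:

        # Hedged sketch of "record intent, then reconcile". The payments table and
        # the send_to_gateway callable are hypothetical placeholders.
        import sqlite3
        import uuid

        def charge_with_audit_trail(db_path, order_id, amount, send_to_gateway):
            ref = str(uuid.uuid4())              # our own idempotency reference
            conn = sqlite3.connect(db_path)
            try:
                # 1. Persist intent BEFORE talking to the gateway.
                conn.execute(
                    "INSERT INTO payments (ref, order_id, amount, status) "
                    "VALUES (?, ?, ?, 'pending')",
                    (ref, order_id, amount),
                )
                conn.commit()

                # 2. Call the gateway (placeholder supplied by the caller).
                result = send_to_gateway(ref=ref, amount=amount)

                # 3. Reconcile: mark confirmed or failed based on the response.
                status = "confirmed" if result.get("approved") else "failed"
                conn.execute("UPDATE payments SET status = ? WHERE ref = ?", (status, ref))
                conn.commit()
                return status
            except Exception:
                # Anything left 'pending' (user abort, DB or gateway outage) is later
                # picked up by a reconciliation job that queries the gateway by ref.
                conn.rollback()
                raise
            finally:
                conn.close()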

    Read the article

  • Oracle EBS ????“????”???????

    - by Steve He(???)
    [The original post is in Chinese and most of its text was lost to character-encoding damage. Recoverable details: it describes a troubleshooting aid for Oracle E-Business Suite documented in Doc ID 1501724.1. The worked example concerns Receivables Transactions: under "Entering / Updating Transactions", for the symptom "Transaction numbers are not in sequence", the suggested reference is ID 197212.1: How To Setup Gapless Document Sequencing in Receivables. EBS product areas listed: Advanced Pricing, Applications Technology, Configurator, General Ledger, Human Capital Management, Inventory Management, Order Management, Payables, Process Manufacturing, Purchasing, Receivables, Shipping, Value Chain Planning.]

    Read the article

  • Occasional disk I/O errors in SQLite

    - by Alix Axel
    I have a very simple website running PHP and SQLite 3.7.9 (with PDO). After establishing the SQLite connection I immediately execute the following queries: PRAGMA busy_timeout=0; PRAGMA cache_size=8192; PRAGMA foreign_keys=ON; PRAGMA journal_size_limit=67110000; PRAGMA legacy_file_format=OFF; PRAGMA page_size=4096; PRAGMA recursive_triggers=ON; PRAGMA secure_delete=ON; PRAGMA synchronous=NORMAL; PRAGMA temp_store=MEMORY; PRAGMA journal_mode=WAL; PRAGMA wal_autocheckpoint=4096; This website only has one writer and a few occasional readers, so I don't expect any concurrency problems (and I'm even using WAL). Every couple of days, I've seen this error being reported by PHP: Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 10 disk I/O error' in ... Stack trace: #0 ...: PDO->exec('PRAGMA cache_si...') There are several things that make this error very weird to me: it's not a transient problem - no matter how many times I refresh the page, it won't go away the database file is not corrupted - the sqlite3 executable can open the database without problems If the following pragmas are commented out, PHP stops throwing the disk I/O exception: PRAGMA cache_size=8192; PRAGMA synchronous=NORMAL; PRAGMA journal_mode=WAL; Then, after successfully reconnecting to the database, I'm able to reintroduce these pragmas and the code will run smoothly for days - until eventually, the same error will occur without any apparent reason. I wasn't able to reproduce this error so far, so I'm clueless about the origin of it. I'm really curious what may be causing this problem... Any ideas? Environment: Ubuntu Server 12.04 LTS PHP 5.4.15 SQLite 3.7.9 Database size: ≈ 10MiB Transaction (write) size: ≈ 1KiB EDIT: Might these symptoms have something to do with busy_timeout?
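
    For illustration, here is a minimal Python sqlite3 mirror of the PHP/PDO setup above (not a fix for the root cause): it applies the same pragmas on connect and, instead of keeping the broken handle, retries once on "disk I/O error" with a fresh connection.

        # Python sqlite3 mirror of the PHP/PDO setup, for illustration only.
        import sqlite3

        PRAGMAS = [
            "PRAGMA busy_timeout=0",
            "PRAGMA cache_size=8192",
            "PRAGMA foreign_keys=ON",
            "PRAGMA journal_size_limit=67110000",
            "PRAGMA synchronous=NORMAL",
            "PRAGMA temp_store=MEMORY",
            "PRAGMA journal_mode=WAL",
            "PRAGMA wal_autocheckpoint=4096",
        ]

        def open_db(path):
            conn = sqlite3.connect(path)
            for pragma in PRAGMAS:
                conn.execute(pragma)
            return conn

        def open_db_with_retry(path, retries=1):
            for attempt in range(retries + 1):
                try:
                    return open_db(path)
                except sqlite3.OperationalError as exc:
                    # "disk I/O error" is SQLITE_IOERR; a fresh connection sometimes
                    # clears it, but the root cause (stale -wal/-shm files, filesystem,
                    # permissions) still needs investigating.
                    if "disk I/O error" not in str(exc) or attempt == retries:
                        raise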

    Read the article

  • My SMTP's outgoing mail gets bounced

    - by BloodPhilia
    I've got an ISPconfig 3 production server set up, running Ubuntu Server 9.04. My e-mail gets delivered ok to almost every other server I send mail to, except for one (smtp.chello.nl, which bounces my email). In my /var/log/mail.err I found the below error. Sep 23 08:59:33 <MYHOSTNAME> postfix/smtp[26944]: 3DB2B1456149: to=<<RECIPIENT>@chello.nl>, relay=smtp.chello.nl[213.46.255.2]:25, delay=2, delays=0.02/0.01/1.9/0.04, dsn=5.1.0, status=bounced (host smtp.chello.nl[213.46.255.2] said: 550 5.1.0 Dynamic/Generic hostnames are blocked. Please contact your Email Provider. Your IP was <MY IP>. Your hostname was ??. (in reply to MAIL FROM command)) What could be the cause of this? I did an SMTP check on mxtools.com and got the following: OK - Not an open relay OK - 0 seconds - Good on Connection time OK - 1.482 seconds - Good on Transaction time OK - 83.161.xx.xx resolves to a83-161-xx-xx.xxx.xxx.nl WARNING - Reverse DNS does not match SMTP Banner

    Read the article

  • uninstall google chrome in fedora

    - by tbleckert
    Yesterday I installed Fedora 15 Beta with GNOME 3 - it works well. One problem though is that I installed Chrome 32-bit (which was wrong, should have been the 64-bit version) and now I can't uninstall it. I can't find it in Add/Remove Software, and I also can't install the correct version of Chrome because it complains about my other copy of Chrome. Any ideas how I can remove the existing copy and get the 64-bit version installed? Here's the message I get when trying to install: Test Transaction Errors: file /etc/cron.daily/google-chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/chrome-sandbox from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libffmpegsumo.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libpdf.so from install of google-chrome-stable-11.0.696.65-84435.x86_64 conflicts with file from package google-chrome-stable-11.0.696.65-84435.i386 file /opt/google/chrome/libppGoogleNaClPluginChrome.so from install of google-chrome-stable-11.0.696.65-84435.x8...

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server, with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be between 1 MB and 4 MB. So out of the 1.6 million rows, about 16000 of them will be these large rows. The reason they are this big is because we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike? Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, the checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!
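
    As a small illustration of the diagnosis (not from the original question), the sketch below uses pyodbc to confirm the recovery model and watch log-space usage; under the simple recovery model the automatic checkpoint fires as the log fills, which matches the once-a-minute spike described above. Server and database names are placeholders.

        # Hedged sketch: check the recovery model and log-space usage with pyodbc.
        # Server and database names are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        cursor.execute(
            "SELECT name, recovery_model_desc FROM sys.databases WHERE name = ?", "MyAppDb"
        )
        print(cursor.fetchone())      # e.g. ('MyAppDb', 'SIMPLE')

        # Log Size (MB) and Log Space Used (%) per database; a sawtooth in the used
        # percentage lines up with the periodic checkpoint/truncation described above.
        cursor.execute("DBCC SQLPERF(LOGSPACE)")
        for row in cursor.fetchall():
            print(row)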

    Read the article

  • PostgreSQL update problems with uuid

    - by kdl
    Hi, I am trying to run yum update on my CentOS 5.2 box and keep getting this message: Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib I ran yum update postgresql separately and now it's 8.3.8. I also downloaded uuid-1.6.2 and built it from source, but I still get the same result. yum update -d6 uuid gives me this at the end: --> Running transaction check ---> Package uuid.i386 0:1.6.1-3.el5.kb set to be updated Checking deps for uuid.i386 0-1.6.1-3.el5.kb - u Checking deps for uuid.i386 0-1.5.1-4.rhel5 - None postgresql-contrib requires: libossp-uuid.so.15 --> Processing Dependency: libossp-uuid.so.15 for package: postgresql-contrib Needed Require is not a package name. Looking up: libossp-uuid.so.15 Potential Provider: uuid.i386 0:1.5.1-4.rhel5 Mode is u for provider of libossp-uuid.so.15: uuid.i386 0:1.5.1-4.rhel5 Mode for pkg providing libossp-uuid.so.15: u Cannot find an update path for dep for: libossp-uuid.so.15 Searching pkgSack for dep: libossp-uuid.so.15 Potential match for libossp-uuid.so.15 from uuid - 1.5.1-4.rhel5.i386 Matched uuid - 1.5.1-4.rhel5.i386 to require for libossp-uuid.so.15 uuid - 1.5.1-4.rhel5.i386 is in providing packages but it is already installed, removing. --> Finished Dependency Resolution Dependency Process ending Error: Missing Dependency: libossp-uuid.so.15 is needed by package postgresql-contrib How can I resolve this situation? Thanks

    Read the article

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into mongodb. We really like mongodb because: Blazing insert perf document oriented Ability to let the engine drop inserts when needed for performance But there is this big problem with mongodb: The index must fit in physical RAM. In practice, this limits us to 80-150gb of raw data (we currently run on a system with 16gb RAM). Sooooo, for us to have 500gb or a tb of data, we would need 50gb or 80gb of RAM. Yes, I know this is possible. We can add servers and use mongo sharding. We can buy a special server box that can take 100 or 200 gb of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!) We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we gotta spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: Where do we go next? (We already run Mongo v2) Thanks!!

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi Folks, When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the following error: '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); then, when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception. This error does not occur immediately. I can pump a fair amount of data into the db, but then it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, then it resumes where it left off, pumps in some data, then falls over again. Before recommendations to change time-out timings, does anyone know why I am not able to establish a new connection after the initial failure? - Even if I try a couple of times, waiting a couple of seconds between each. (btw, I'm running Windows 7, MySQL Server 5.1.48, mysqldb 1.2.3.gamma.1, python 2.6)
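
    Not a root-cause fix, but as a sketch of the reconnect logic being described (connection details and table are placeholders): discard the dead connection entirely and open a fresh one before retrying the batch. Note that error 2006 during large inserts is often the server dropping the connection because a packet exceeded max_allowed_packet, in which case reconnecting alone won't help.

        # Sketch: retry a batch insert on MySQL error 2006 by discarding the dead
        # connection and opening a fresh one. Connection details, table, and the
        # shape of `rows` (a sequence of 1-tuples) are placeholders.
        import time
        import MySQLdb

        def get_conn():
            return MySQLdb.connect(host="localhost", user="loader",
                                   passwd="secret", db="imports")

        def insert_with_retry(rows, retries=3):
            conn = get_conn()
            for attempt in range(retries):
                try:
                    cur = conn.cursor()
                    cur.executemany("INSERT INTO records (payload) VALUES (%s)", rows)
                    conn.commit()
                    return
                except MySQLdb.OperationalError as exc:
                    if exc.args[0] != 2006 or attempt == retries - 1:
                        raise
                    try:
                        conn.close()          # drop the dead connection entirely
                    except MySQLdb.Error:
                        pass
                    time.sleep(2 ** attempt)  # brief backoff before reconnecting
                    conn = get_conn()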

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer, I've just completed a new application and put it on an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, there's a mirror of the SAN being taken, usually constantly, at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether the tempdb should ever be mirrored or backed up on the SAN and whether anyone else has gone through this sort of pain already? I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN simply because any writes to it don't need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, SQL Server is in simple recovery mode and never backed up since the SAN is mirrored and backed up? Many thanks
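
    If the decision is to keep tempdb off the mirrored LUN, a minimal sketch of relocating its files is below. The paths, server name, and the default logical file names tempdev/templog are assumptions; the move takes effect after the instance restarts, and tempdb is rebuilt empty at every startup, so its contents never need backing up.

        # Hedged sketch: relocate tempdb's files to a volume excluded from the SAN
        # mirror. Paths and server name are placeholders; tempdev/templog are the
        # default logical file names and may differ. Takes effect after a restart.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=dbserver;DATABASE=master;Trusted_Connection=yes;",
            autocommit=True,
        )
        cursor = conn.cursor()
        cursor.execute(
            "ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = tempdev, FILENAME = 'T:\\tempdb\\tempdb.mdf')"
        )
        cursor.execute(
            "ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = templog, FILENAME = 'T:\\tempdb\\templog.ldf')"
        )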

    Read the article

  • Performance mitigations serving content from a UNC share via IIS 6

    - by codepoke
    I have a quad processor vmware instance running Windows 2003 and 1gb ethernet. I'm comparing serving the exact same heavy .NET 2.0 content from the local hard drive versus serving it from a UNC drive. If I use WCAT to load it down, I see about a 40% reduction in transactions/sec while serving from the UNC. Processor time barely moves from 45% and the NIC sits around 40% either way. I don't see any significant memory loading either way. Context Switches/Transaction, though, more than doubles when serving from the UNC. Pathlengths more than double as well, but I believe that's just an expression of the effect of context switching. All told, it looks like the bottleneck is processor switching while waiting on content from the UNC share. Is my experience about the norm? Is there some mitigation I might try? I twiddled HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds a little bit per http://technet.microsoft.com/en-us/library/dd296694(WS.10).aspx, but to no obvious effect. I kind of doubt my problem is lack of connections, but rather just the act of switching from thread to thread while waiting on data.

    Read the article
