Search Results

Search found 1054 results on 43 pages for 'replication'.

Page 35/43

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we’re talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often arrives in unstructured formats and/or too fast for us to process effectively in real time. Collectively, we call these the four big data V’s: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by conventional relational database management systems (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these NoSQL tools in combination with traditional tools such as SQL-based relational databases, warehouses, and other business intelligence software.

    Drivers: petabyte-scale data collection and storage; business intelligence and insight.

    Solution: The sketch below shows one of many big data solutions, using Hadoop’s uniquely scalable storage and parallel processing capabilities combined with Microsoft Office’s business intelligence components to access the data in the cluster.

    Ingredients:
    Hadoop - this big data industry heavyweight provides both large-scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. The key pieces of the environment are:
    - Pig - a platform for analyzing large data sets, consisting of a high-level language for expressing data analysis programs coupled with infrastructure for evaluating those programs.
    - Mahout - a machine learning library with algorithms for clustering, classification, and batch-based collaborative filtering, implemented on top of Apache Hadoop using the map/reduce paradigm.
    - Hive - data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC driver.
    - Pegasus - a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and provides algorithms for important graph mining tasks such as degree, PageRank, Random Walk with Restart (RWR), radius, and connected components.
    - Sqoop - a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
    - Flume - a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    Database - directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, so data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    Reporting - provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training: These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    - Hadoop Learning Resources (20+ tutorials and labs): a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    - SQL Azure (7 labs): Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
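
    In the post itself, the cluster's data is consumed from Microsoft Office through the Hive add-ins and ODBC driver. For a programmatic flavor of the same hand-off, here is a minimal, hypothetical sketch that queries Hive over JDBC instead; the driver class assumes a HiveServer2 endpoint, and the host name and sensor_events table are invented placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class HiveQuery {
            public static void main(String[] args) throws Exception {
                // HiveServer2 JDBC driver; host, port, and table are placeholders
                Class.forName("org.apache.hive.jdbc.HiveDriver");
                Connection con = DriverManager.getConnection(
                        "jdbc:hive2://headnode:10000/default", "hadoop", "");
                Statement stmt = con.createStatement();
                // The heavy lifting runs as map-reduce jobs inside the cluster;
                // only the aggregated result comes back to the client.
                ResultSet rs = stmt.executeQuery(
                        "SELECT region, COUNT(*) FROM sensor_events GROUP BY region");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
                con.close();
            }
        }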

    Read the article

  • Devoxx 2011: Java EE 6 Hands-on Lab Delivered

    - by arungupta
    With Alexis's help, I delivered a Java EE 6 hands-on lab to a packed room of 40+ attendees at Devoxx 2011. The lab was derived from the OTN Developer Days 2012 version but added a lot more content to showcase several Java EE 6 technologies. The problem statement from the lab document states: This hands-on lab builds a typical 3-tier Java EE 6 web application that retrieves customer information from a database and displays it in a web page. The application also allows new customers to be added to the database. String-based and type-safe queries are used to query and add rows to the database. Each row in the database table is published as a RESTful resource and is then accessed programmatically. Typical design patterns required by a web application, like validation, caching, observer, and partial page rendering, and cross-cutting concerns like logging, are explained and implemented using different Java EE 6 technologies. The lab covered Java Persistence API 2, Servlet 3, Enterprise JavaBeans 3.1, JavaServer Faces 2, Java API for RESTful Web Services 1.1, Contexts and Dependency Injection 1.0, and Bean Validation 1.0 over 47 pages of detailed self-paced instructions. The lab can be downloaded from here and requires only the NetBeans IDE "All" or "Java EE" bundle, which includes GlassFish anyway. All the feedback received from the lab has been incorporated into the instructions and bugs filed (Updated 49559, 205232, 205248, 205256). 80% of the attendees completed the lab easily, and some finished in much less than 3 hours. That indicates either that more content needs to be added to the lab or that the intellectual level of the attendees at the conference was pretty high. I think the lab has enough content for 3 hours, but we moved at a much faster pace, so I conclude it was the latter. Truly a joy to conduct a lab for 40 Devoxxians! Another related lab that might be handy is "Develop, Deploy, and Monitor your Java EE 6 applications using GlassFish 3.1 Cluster". It explains how to: create a 2-instance GlassFish cluster; front-end it with a web server and a load balancer; demonstrate session replication and failover; and monitor the application using JavaScript. The complete lab instructions and source code are available, so you can try them. I plan to keep evolving the contents of the Java EE 6 hands-on lab to cover more technologies and features, and will announce updates on this blog. Let me know what else you would like to see in future versions.
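
    As a taste of what attendees built, here is a minimal sketch of the lab's RESTful tier: a JAX-RS 1.1 resource backed by JPA 2, publishing each database row as a resource. The Customer entity and its fields are invented stand-ins for the lab's actual schema, not the lab's code.

        import javax.ejb.Stateless;
        import javax.persistence.Entity;
        import javax.persistence.EntityManager;
        import javax.persistence.Id;
        import javax.persistence.PersistenceContext;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.Produces;
        import javax.xml.bind.annotation.XmlRootElement;

        // Invented stand-in for the lab's customer table.
        @Entity
        @XmlRootElement
        class Customer {
            @Id
            public Long id;
            public String name;
        }

        // Each row of the table becomes a RESTful resource: GET /customers/{id}.
        @Stateless
        @Path("/customers")
        public class CustomerResource {

            @PersistenceContext
            private EntityManager em;

            @GET
            @Path("{id}")
            @Produces("application/xml")
            public Customer getCustomer(@PathParam("id") Long id) {
                // JPA 2 primary-key lookup; JAXB marshals the entity to XML
                return em.find(Customer.class, id);
            }
        }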

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011
    Time: 4:30pm - 8:15pm ET (note that parking at 475 Sansome closes at 8:30pm)
    Where: Oracle Office, 475 Sansome Street, San Francisco, CA (Google Map)
    We will be providing snacks and beverages. Register! Registration is required for building security.

    Presentation line-up:

    5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle
    Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern.

    6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5
    F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world’s largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and show how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated by Coherence*Extend clients across any number of servers in a cluster, and to hardware-accelerate CPU-intensive SSL encryption.

    7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC-Level Fault Tolerance - Alexei Ragozin, Independent Consultant, and Brian Oliver, Oracle
    Partitioning is a very powerful technique for scaling database-centric applications. One tricky part of a partitioned architecture is routing requests to the right database. The routing layer (routing table) should know the right database instance for each attribute that may be used for routing (e.g. account id, login, email, etc.): it should be fast, it should be fault tolerant, and it should scale. All of the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross-cluster replication, and other aspects of implementing a robust partitioned architecture. A rough sketch of the routing-table idea appears after the list below.

    Additional info:
    - Download past presentations: the presentations from previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files.
    - Join the Coherence online community in our Oracle Coherence Users Group on LinkedIn.
    - Contact BACSIG with any comments, questions, presentation proposals, and content suggestions.
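
    The routing-table idea from the third talk is easy to picture in code. Below is a rough, hypothetical sketch, not the presenters' code: the cache name, key format, and connection strings are invented. The grid holds a map from a routing attribute to the database that owns it, and any node can consult it with a fast get.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class ShardRouter {
            public static void main(String[] args) {
                // The routing table lives in the grid: it maps a routing
                // attribute (account id, login, email, ...) to the database
                // instance that owns that partition of the data.
                NamedCache routing = CacheFactory.getCache("db-routing");
                routing.put("account:42", "jdbc:oracle:thin:@dbnode3:1521/shard3");

                // Any application node can now route a request for account 42
                // with a single fast, fault-tolerant lookup.
                String target = (String) routing.get("account:42");
                System.out.println("Route account 42 to " + target);
                CacheFactory.shutdown();
            }
        }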

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise that included restoring the MSDB database onto our DR server. A quick Google search for any special considerations turned up the MS article Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It says both the original and replacement servers must be on the same version; I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database, and restarted the agent. I checked the jobs and they were all there; everything looked great, and it was, until the server was rebooted a few days later. Then the syspolicy_purge_history job started failing on the 3rd step with the error message “Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed.” A bit more googling pointed me to the msdb.dbo.syssubsystems table:

        SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point = 'PowerShellStart';

    And in particular to the value of subsystem_dll: it still had the path to SQLPOWERSHELLSS.DLL on the old server. The DR instance has a different name to the live instance, so the paths are different. This was quickly fixed with the following SQL:

        USE msdb;
        GO
        sp_configure 'allow updates', 1;
        RECONFIGURE WITH OVERRIDE;
        GO
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
        WHERE start_entry_point = 'PowerShellStart';
        GO
        sp_configure 'allow updates', 0;
        RECONFIGURE WITH OVERRIDE;
        GO

    I stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken:

        SELECT subsystem_dll FROM msdb.dbo.syssubsystems;

    This shows a further 10 wrong paths; fortunately they are for parts of SQL Server (replication, SSIS, etc.) we aren’t using!

    Lessons learnt:
    1. DR exercises are a good thing!
    2. Keep the live and DR environments as similar as possible.

    Read the article

  • What is the difference between development and R&D?

    - by MainMa
    I was asked by a colleague to explain clearly the difference between ordinary development and research and development (R&D), and I was unable to do it. After reading Wikipedia, I still don't have a precise answer. According to Wikipedia (slightly modified): There are two primary models: in one model, the primary function is to develop new products; in the other model, the primary function is to discover and create new knowledge about scientific and technological topics for the purpose of uncovering and enabling the development of valuable new products, processes, and services. The first model is confusing. Does it mean that development (not R&D) consists exclusively of adding new features to a product, solving bugs, and doing maintenance? What if something which was previously developed as a new feature becomes a separate product? The second model is less confusing, but still: how do you qualify whether something is new knowledge or existing knowledge which is merely rediscovered? Later, Wikipedia adds that ordinary development is different from R&D because of its "nearly immediate profit or immediate improvement". That's still not clear enough. How do you qualify "nearly immediate profit"? What if a task has an immediate profit but requires heavy research? Or if it is basic but has uncertain profit, like the enforcement of a common style over the codebase?

    For example, does it belong to development or to R&D to:
    - Develop an engine which abstracts access to the database, enormously simplifying and shortening the code of other applications (existing ones, or ones which will be written in the future) which have to access the database?
    - Establish a new service-oriented architecture for the entire organization of company resources, in order to move from a bunch of separate and autonomous applications to a set of well-organized, interconnected web services, like what is used by Amazon?
    - Design a new communication protocol to allow faster replication of data between two data centers of the company?
    - Conceive a new type of software testing while working on a specific product, knowing that this type of testing will improve/simplify the testing process?
    - Prove that functional programming is more appropriate than OOP for a specific application, based on evidence, logic, and previous experience?
    - Enhance an existing application by adding gestures on touch screens, after doing studies and testing showing that those gestures improve the productivity of the users by a ratio of at least 1.4 for a precise set of tasks?
    - Find a way to strongly enhance the power usage effectiveness (PUE) of a data center?
    - Create a Domain-Specific Language (DSL)?

    In short, how can I determine whether I'm doing R&D while working on something?

    Read the article

  • Becoming an Expert MySQL DBA Across Five Continents

    - by Antoinette O'Sullivan
    You can take Oracle's MySQL Database Administrator training on five continents. In this 5-day, live, instructor-led course, you learn to install and optimize the MySQL Server, set up replication and security, perform database backups and performance tuning, and protect MySQL databases. Below is a selection of the in-class events already on the schedule for the MySQL for Database Administrators course.

    AFRICA
    - Nairobi, Kenya: 22 July 2013 (English)
    - Johannesburg, South Africa: 9 December 2013 (English)

    AMERICA
    - Belmont, California, United States: 22 July 2013 (English)

    ASIA
    - Dehradun, India: 11 July 2013 (English)
    - Grogol - Jakarta Barat, Indonesia: 16 September 2013 (English)
    - Makati City, Philippines: 5 August 2013 (English)
    - Pasig City, Philippines: 12 August 2013 (English)
    - Istanbul, Turkey: 12 August 2013 (Turkish)

    AUSTRALIA and OCEANIA
    - Sydney, Australia: 15 July 2013 (English)
    - Auckland, New Zealand: 5 August 2013 (English)
    - Wellington, New Zealand: 15 July 2013 (English)

    EUROPE
    - London, England: 9 September 2013 (English)
    - Aix-en-Provence, France: 2 December 2013 (French)
    - Bordeaux Merignac, France: 2 December 2013 (French)
    - Puteaux, France: 16 September 2013 (French)
    - Dresden, Germany: 26 August 2013 (German)
    - Hamburg, Germany: 16 November 2013 (German)
    - Munich, Germany: 19 August 2013 (German)
    - Munster, Germany: 9 September 2013 (German)
    - Budapest, Hungary: 4 November 2013 (Hungarian)
    - Belfast, Ireland: 16 December 2013 (English)
    - Milan, Italy: 7 October 2013 (Italian)
    - Rome, Italy: 16 September 2013 (Italian)
    - Utrecht, Netherlands: 16 September 2013 (English)
    - Warsaw, Poland: 5 August 2013 (Polish)
    - Lisbon, Portugal: 16 September 2013 (European Portuguese)
    - Barcelona, Spain: 30 October 2013 (Spanish)
    - Madrid, Spain: 4 November 2013 (Spanish)
    - Bern, Switzerland: 27 November 2013 (German)
    - Zurich, Switzerland: 27 November 2013 (German)

    You can also take this course from your own desk as a live-virtual class, choosing from a wide selection of events already on the schedule suiting different timezones. To register for this course or to learn more about the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

    As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as is often the case with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter, or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
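
    To make the contrast in that quotation concrete, here is a rough sketch, in Java with invented class and field names (Udi's own examples are in C# and differ in detail), of the two command granularities being weighed:

        // Coarse-grained: an Excel-style "save the whole customer" message.
        // Any two users editing the same customer will collide on it.
        class UpdateCustomer {
            long customerId;
            String name;
            String address;
            boolean preferred;
            String maritalStatus;
            // ...dozens more fields, any subset of which may have changed
        }

        // Intent-capturing: each business action is its own small unit of
        // work, so concurrent commands rarely touch the same attributes.
        class MakeCustomerPreferred {
            long customerId;
        }

        class RecordCustomerMove {
            long customerId;
            String newAddress;
        }

    The fine-grained commands conflict far less often, but a screen that edits fifty fields now emits up to fifty messages, which is exactly the chatter concern raised above.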

    Read the article

  • MySQL Connect in Only 5 Days – Some Fun Stuff!

    - by Bertrand Matthelié
    We’ve recently blogged about the various MySQL Connect sessions focused on MySQL Cluster, InnoDB, the MySQL Optimizer, and MySQL Replication. But we also wanted to draw your attention to some great opportunities to network and have fun! That’s also part of what makes a good conference...

    MySQL Connect Reception
    San Francisco Hilton - Continental Ballroom, 6:30 p.m.–8:30 p.m.
    A great opportunity to network with Oracle’s MySQL engineers, partners with a booth in the exhibition hall, and just about everyone at MySQL Connect. Long-time MySQL users will see many familiar faces, and new users will be able to build valuable relationships. A must-attend reception for sure!

    Taylor Street Open House
    7:00 p.m.–9:00 p.m.
    After two intense days at MySQL Connect, you’ll get the chance to relax and continue networking at the Taylor Street Café Open House on Sunday evening. Perhaps recharging batteries for a full week at Oracle OpenWorld…

    The Oracle OpenWorld Music Festival
    Starting on Sunday evening and running for the entire duration of Oracle OpenWorld, the first Oracle OpenWorld Music Festival features some of today’s breakthrough musicians. It’s five nights of back-to-back performances in the heart of San Francisco. Registered Oracle conference attendees get free admission, so remember your badge when you head to a show. More information here.

    You can check out the full MySQL Connect program here, as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$300 over the on-site fee - Register Now!

    Read the article

  • Will you share your SQL Server configuration?

    - by Bill Graziano
    I regularly visit client sites and review their SQL Server configurations.  I come across all kinds of strange settings.  I’ve been thinking about a way to aggregate people’s configurations and see what’s common and what’s unique.  I used to do that with polls on SQLTeam.com.  I think we can find out more interesting things if we look at combinations of settings in relation to size and volume. I’ve been working on an application for another project that is similar.  It will be fairly easy to use that code for this.  I can have something up and running in a few days - if people are interested in it.  I admit that I often come up with ideas that just don’t make sense.  This may be one of them.  One of your biggest concerns has to be how secure your data is.  My solution is not to store anything identifying.  The instance name and database names can both be “anonymized” and I don’t store the machine name or IP address or anything to do with logins.

    Some of the questions I’m curious about are:
    - At what size database does the Enterprise Edition become prevalent?
    - Given the total size of the databases, how much RAM is common?
    - How many people have multiple data files?  At what size does that become prevalent?
    - How common is database mirroring?  Replication?  Log shipping?
    - How common is full recovery mode?  At what data size does it become prevalent?

    I think those are all questions that are easy to answer -- with the right data.  The big question is whether or not people will share their SQL Server configurations.  I understand that organizations in regulated or high-security environments can’t participate.  But I think that leaves many, many people that can.  Are you willing to share your configuration and learn about others?  I have a simple sign-up form here.  It’s actually a mailing list signup that also captures your edition, number of servers, and largest database.  The list will only be used for this project.  Is your SQL Server configured correctly?  Do you wonder what the next step is as your data grows?  Take a second and sign up.

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting - Graphs and Event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer - Works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.

    Thanks and Happy Monitoring!

    - The MySQL Enterprise Tools Development Team

    Read the article

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting - Graphs and Event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer - Works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.

    Thanks and Happy Monitoring!

    - The MySQL Enterprise Tools Development Team

    Read the article

  • QCon Tokyo 2010 Conference Report

    - by rika.tokumichi
    Hello, this is the OTN administrator. QCon Tokyo 2010 was held on April 19-20, 2010, and Oracle took part as a sponsor with sessions of its own, covering BI/SOA and Oracle Coherence. Here is a quick report on the event.

    QCon is a conference for developers run by the technical news site InfoQ, and QCon Tokyo brought its international line-up of speakers to Japan. Day 1 opened with a keynote by Twitter's Nick Kallen, "Data Architecture at Twitter Scale", on how Twitter's data architecture copes with the enormous volume of tweets flowing through the service. Patrick Peralta of the Oracle Coherence team then presented "Connected Clouds: A Platform for Globally Distributed Service", introducing Coherence as a distributed key-value store and showing how its Push Replication Pattern can keep data synchronized across clouds.

    Day 2 opened with a keynote by Facebook's Marc Kwiatkowski, "Scaling Memcache at Facebook", on the challenges of operating memcache at Facebook's scale. Other sessions covered topics such as Java 7 and Ruby.

    Session materials:
    - Slides for "Connected Clouds: A Platform for Globally Distributed Service" by Patrick Peralta
    - Introductory Oracle Coherence resources and session reports (parts 1 and 2) on OTN

    Read the article

  • Mounting an Azure blob container in a Linux VM Role

    - by djechelon
    I previously asked a question about this topic, but now I prefer to rewrite it from scratch because I was very confused back then. I currently have a Linux XS VM Role in Azure. I basically want to create a self-managed, evolved hosting service using VMs rather than Azure's more expensive Web Roles. I also want to take advantage of load balancing (between VM Roles) and geo-replication (of Storage Roles), making sure that the "web files" of customers are located in a defined and manageable place. One way I found to "mount" a drive in a Linux VM is described here and involves mounting a VHD onto the virtual machine. From what I could learn, the VHD is reliably stored in a storage role and is exclusively locked by the VM that uses it. Once the VM Role has its drive, I can format the partition to any size I want. I don't want that!! I would like each hosted site to have its own blob directory, and then each replicated/load-balanced VM Role to mount that blob directory read-write, NFS-style, to read HTML and script files. The database is obviously courtesy of Microsoft :) My questions are: Is it possible to actually mount a blob storage container into a directory in the Linux FS? And is it possible in Windows Server 2008?

    Read the article

  • Windows 2003 Domain Controller Very Upset about NIC Teaming

    - by Kyle Brandt
    I set up BACS (Broadcom teaming) to team two NICs on a Windows 2003 Active Directory domain controller. Networking still works okay - I can ping the gateway, etc. - but both DNS and Active Directory fail to start with various 40xx errors. The team that I created is Smart Load Balancing with Failover, with one backup and only one NIC in smart load balancing (so really it is just failover). I gave the team the same IP address that the single active NIC had before. Has anyone seen this before, or have any ideas what the problem might be?

    Event Type: Error
    Event Source: DNS
    Event Category: None
    Event ID: 4015
    Date: 3/7/2010
    Time: 10:33:03 AM
    User: N/A
    Computer: ADC
    Description: The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.

    Event Type: Error
    Event Source: DNS
    Event Category: None
    Event ID: 4004
    Date: 3/7/2010
    Time: 10:33:03 AM
    User: N/A
    Computer: ADC
    Description: The DNS server was unable to complete directory service enumeration of zone .. This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is "". The event data contains the error.

    Event Type: Error
    Event Source: NTDS Replication
    Event Category: DS RPC Client
    Event ID: 2087
    Date: 3/7/2010
    Time: 10:40:28 AM
    User: NT AUTHORITY\ANONYMOUS LOGON
    Computer: ADC
    Description: Active Directory could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory from replicating between one or more domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Half way through the actual installation, I get a popup with this error:

    Could not continue scan with NOLOCK due to data movement.

    The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":
    - Database Engine Services
    - SQL Server Replication
    - Full-Text Search
    - Reporting Services

    How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix the issue. But I am not touching any queries. Also, now that I have installed the app, when I log in, I keep getting this message:

    TITLE: Connect to Server
    ------------------------------
    Cannot connect to MSSQLSERVER.
    ------------------------------
    ADDITIONAL INFORMATION:
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
    ------------------------------
    BUTTONS: OK
    ------------------------------

    I went into the Configuration Manager, enabled named pipes, and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think the two are related. Thanks

    Read the article

  • ldap_modify: Insufficient access (50)

    - by Lynn Owens
    I am running an OpenLDAP 2.4 server that uses the SSL service for communication. It works for lookups. I am trying to add mirror mode replication. So this is the command that I'm executing:

    ldapmodify -D "cn=myuser,dc=mydomain,dc=com" -H ldaps://myloadbalancer -W -f /etc/ldap/ldif/server_id.ldif

    where this is my server_id.ldif:

    dn: cn=config
    changetype: modify
    replace: olcServerID
    olcServerID: 1 myserver1
    olcServerID: 2 myserver2

    and this is my cn\=config.ldif in the slapd.d tree of text files:

    dn: cn=config
    objectClass: olcGlobal
    cn: config
    olcArgsFile: /var/run/slapd/slapd.args
    olcPidFile: /var/run/slapd/slapd.pid
    olcToolThreads: 1
    structuralObjectClass: olcGlobal
    entryUUID: ff9689de-c61d-1031-880b-c3eb45d66183
    creatorsName: cn=config
    createTimestamp: 20121118224947Z
    olcLogLevel: stats
    olcTLSCertificateFile: /etc/ldap/certs/ldapscert.pem
    olcTLSCertificateKeyFile: /etc/ldap/certs/ldapskey.pem
    olcTLSCACertificateFile: /etc/ldap/certs/ldapscert.pem
    olcTLSVerifyClient: never
    entryCSN: 20121119022009.770692Z#000000#000#000000
    modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
    modifyTimestamp: 20121119022009Z

    But unfortunately I'm getting this:

    Enter LDAP Password:
    modifying entry "cn=config"
    ldap_modify: Insufficient access (50)

    If I try to specify the config database I get this:

    ldapmodify -H 'ldaps://myloadbalancer/cn=config' -D "cn=myuser,cn=config" -W -f ./server_id.ldif
    Enter LDAP Password:
    ldap_bind: Invalid credentials (49)

    Does anyone know how I can add the server ID to the config database so that I can complete the setup of mirror mode?

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings to skip reverse hostname lookup on connecting and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, and when I connect to mysqld remotely or locally it connects okay; the slight problem is that mysqld crashes whenever you try to issue any kind of statement. The client session looks like:

    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    Welcome to the MySQL monitor. Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.1.31-1ubuntu2-log
    Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
    mysql> show tables;
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    Connection id: 1
    Current database: mydb
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
    ERROR: Can't connect to the server
    ERROR 2006 (HY000): MySQL server has gone away
    No connection. Trying to reconnect...
    ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
    ERROR: Can't connect to the server
    ERROR 2006 (HY000): MySQL server has gone away
    Bus error

    The mysqld error log looks like:

    101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
    101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
    InnoDB: Failing assertion: error == DB_SUCCESS
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
    InnoDB: about forcing recovery.
    101210 16:35:51 - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
    key_buffer_size=16777216
    read_buffer_size=131072
    max_used_connections=3
    max_threads=600
    threads_connected=3
    It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.
    thd: 0x18209220
    Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
    stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
    /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
    /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
    /lib/libpthread.so.0 [0x7f902a76a080]
    /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
    /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
    /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
    /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
    /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
    /usr/sbin/mysqld [0x63f281]
    /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
    /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
    /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
    /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
    /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
    /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
    /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
    /lib/libpthread.so.0 [0x7f902a7623ba]
    /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
    Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
    thd->query at 0x18213c70 = thd->thread_id=3
    thd->killed=NOT_KILLED
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
    101210 16:35:51 mysqld_safe Number of processes running now: 0
    101210 16:35:51 mysqld_safe mysqld restarted
    InnoDB: The log sequence number in ibdata files does not match
    InnoDB: the log sequence number in the ib_logfiles!
    101210 16:35:54 InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
    InnoDB: Reading tablespace information from the .ibd files...
    InnoDB: Restoring possible half-written data pages from the doublewrite
    InnoDB: buffer...
    101210 16:35:56 InnoDB: Started; log sequence number 456 143528628
    101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
    101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
    101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
    101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
    101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
    101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
    InnoDB: Failing assertion: error == DB_SUCCESS
    InnoDB: We intentionally generate a memory trap.
    InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
    InnoDB: If you get repeated assertion failures or crashes, even
    InnoDB: immediately after the mysqld startup, there may be
    InnoDB: corruption in the InnoDB tablespace. Please refer to
    InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
    InnoDB: about forcing recovery.
    101210 16:36:11 - mysqld got signal 6 ;
    This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured.
    This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
    key_buffer_size=16777216
    read_buffer_size=131072
    max_used_connections=1
    max_threads=600
    threads_connected=1
    It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.
    thd: 0x18588720
    Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
    stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
    /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
    /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
    /lib/libpthread.so.0 [0x7f4c8a73f080]
    /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
    /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
    /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
    /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
    /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
    /usr/sbin/mysqld [0x63f281]
    /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
    /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
    /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
    /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
    /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
    /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
    /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
    /lib/libpthread.so.0 [0x7f4c8a7373ba]
    /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
    Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
    thd->query at 0x18599950 = thd->thread_id=1
    thd->killed=NOT_KILLED
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
    101210 16:36:11 mysqld_safe Number of processes running now: 0
    101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

    [mysqld_safe]
    socket = /var/run/mysqld/mysqld.sock
    nice = 0

    [mysqld]
    innodb_file_per_table
    innodb_buffer_pool_size=10G
    innodb_log_buffer_size=4M
    innodb_flush_log_at_trx_commit=2
    innodb_thread_concurrency=8
    skip-slave-start
    server-id=3
    #
    # * IMPORTANT
    # If you make changes to these settings and your system uses apparmor, you may
    # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
    #
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /DB2/mysql
    tmpdir = /tmp
    skip-external-locking
    #
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #bind-address = 127.0.0.1
    #
    # * Fine Tuning
    #
    key_buffer = 16M
    max_allowed_packet = 16M
    thread_stack = 128K
    thread_cache_size = 8
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam-recover = BACKUP
    max_connections = 600
    #table_cache = 64
    #thread_concurrency = 10
    #
    # * Query Cache Configuration
    #
    query_cache_limit = 1M
    query_cache_size = 32M
    #
    skip-federated
    slow-query-log
    skip-name-resolve

    Update: I followed the instructions as per http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4, and the logs are showing a different error but the behavior is still the same:

    101210 19:14:15 mysqld_safe mysqld restarted
    101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
    InnoDB: !!! innodb_force_recovery is set to 4 !!!
    101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
    101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
    101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
    101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
    Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
    101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
    InnoDB: space id 1602 did not exist in memory. Retrying an open.
    101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
    InnoDB: space id 1318 did not exist in memory. Retrying an open.
    101210 19:14:32 InnoDB: error: space object of table mydb/activity,
    InnoDB: space id 1595 did not exist in memory. Retrying an open.
    101210 19:14:32 - mysqld got signal 11 ;
    This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
    key_buffer_size=16777216
    read_buffer_size=131072
    max_used_connections=1
    max_threads=600
    threads_connected=1
    It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
    Hope that's ok; if not, decrease some variables in the equation.
    thd: 0x1753c070
    Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
    stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
    /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
    /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
    /lib/libpthread.so.0 [0x7f7cbc350080]
    /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
    /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
    /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
    /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
    /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
    /usr/sbin/mysqld [0x63f281]
    /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
    /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
    /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
    /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
    /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
    /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
    /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
    /lib/libpthread.so.0 [0x7f7cbc3483ba]
    /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
    Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
    thd->query at 0x1754d690 = thd->thread_id=1
    thd->killed=NOT_KILLED
    The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    Hi, I am running Windows Server 2008 Standard Edition R2 x64, and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Halfway through the actual installation, I get a popup with this error: "Could not continue scan with NOLOCK due to data movement." The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed": Database Engine Services, SQL Server Replication, Full-Text Search, and Reporting Services. How do I know whether this means anything is missing from my installation? This is a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades. I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue suggests changing my query's locking/transactions in a certain way - but I am not running any queries myself. Also, now that I have installed the app, when I log in I keep getting this message:

    TITLE: Connect to Server
    Cannot connect to MSSQLSERVER.
    ADDITIONAL INFORMATION: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476

    I went into the Configuration Manager, enabled named pipes, and restarted the service (something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily, and I have checked the instance name against the error logs. Please advise on both of these errors - I think they are related. Thanks

    Read the article

  • Exchange 2007 two node cluster setup on Windows 2008 Enterprise, install error

    - by Shadow00Caster
    I am installing Microsoft Exchange 2007 x64 in a two-node environment on Microsoft Windows 2008 Enterprise x64. The failover cluster is set up properly, following best practices for Windows clustering with Exchange 2007. All the validation tests pass on the cluster, and that portion is working fine. The problem comes when I install the first Exchange node as the active mailbox in a two-node CCR configuration. It gets through the first three steps (Copy Exchange Files, Management Tools, Mailbox Role) and then fails on the fourth step, 'Clustered Mailbox Server', with the following error: "Error: The clustered mailbox server's group 'XXXX' was not found, and should already exist." Firewalls are all disabled, DNS is set up properly, the environment has three domain controllers (all 2k8 Enterprise x64), and all replication works. The name I pick for the CCR cluster (XXX) does not exist in AD or in DNS. I have attempted this install from both of the Exchange nodes, multiple times, with different names. I have been banging my head against the wall for days and would appreciate any feedback on the issue.

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance - I am a JS programmer given the task of setting up DB replication with Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The setup is: Lubuntu Linux; two PostgreSQL databases, test_master and test_slave; each DB has a table named test containing two columns, id (PK) and test (int); I use pgAdmin3. I have already added the databases to Bucardo's list and added all tables:

    Table: public.test DB: test_slave PK: id (int4)
    Table: public.test DB: test_master PK: id (int4)

    As you can see, because the DBs are identical, even the schema names are identical. So I do:

    bucardo_ctl add herd sample_herd public.test

    OK, so it got added to the herd - but this command gives no way to say which database public.test comes from. So when I add a sync:

    $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
    Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
    CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If Bucardo got confused about which public.test to use as the source, how do I differentiate them?
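    For reference, one way out of this ambiguity: Bucardo lets you pin a table to a specific database when adding it, so the herd's source stops being ambiguous. A sketch using the names from the question - the exact db=/herd= parameters should be verified against your bucardo_ctl version:

        # Re-add the table bound explicitly to the master database
        bucardo_ctl add table public.test db=test_master herd=sample_herd
        # The sync source now points unambiguously at test_master
        bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy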

    Read the article

  • Sticky sessions not sticky on coldfusion cluster

    - by GreatSeaSpider
    We're trying to deploy a legacy ColdFusion site onto a new CF8 cluster. The cluster consists of three CF instances running under JRun4 on a single Windows 2008 server. I've got the cluster set not to replicate sessions, and sticky sessions turned on. Each instance is set to use J2EE session variables. The application tag for the site has: sessionmanagement="Yes" setclientcookies="Yes" setdomaincookies="Yes". When each instance starts, no errors are reported in the instance log, and they join the cluster without any issues - though the instances do log:

    16/10 08:31:25 info SessionReplicationService successfully joined a JINI lookup service (assigned JINI-ID .....)

    and

    16/10 08:31:25 info Clusterable service SessionReplicationService discovered a SessionReplicationService peer on a JRun server named "xxxx" on host xxxx

    which is interesting, since session replication is definitely off - is the SessionReplicationService responsible for sticky sessions as well? That's enough background. The problem is that sticky sessions simply do not work: each request is bounced to a different instance, and the sessions seem to be lost on each instance anyway. As soon as the cluster is down to a single instance, the web app works exactly as expected and the sessions are fine. Has anybody got any ideas for me? I've been trawling the web and can't seem to find any answers.

    Read the article

  • Umount stale glusterfs partition

    - by Khaled
    I am using GlusterFS on several Ubuntu servers; two of them run GlusterFS servers in replication mode. Without any clear error, the GlusterFS partition became stale, and the system shows this error when I try to access the stale partition:

    Transport endpoint is not connected

    Also, when running ls -l on the parent folder I get:

    d????????? ? ? ? ? ? myfolder

    I have tried every command I could find to unmount this partition, but could not get it done:

    umount -l /path/to/mount/point
    umount -f /path/to/mount/point

    Using the fuser command to show processes accessing this folder did not work either. Unloading the fuse kernel module is not possible, since the kernel config shows that fuse is built into the kernel rather than being a loadable module - I found this line in /boot/config-2.6.32-24-server:

    CONFIG_FUSE_FS=y

    That leaves me with two options: (1) reboot the system, or (2) create another mount point such as myfolder2 and mount the volume again with sudo glusterfs -f /etc/glusterfs/glusterfs.vol /path/to/folder2. Of course, I have chosen option 2. Has anyone faced such an issue before, or does anyone have a better solution for such a case?
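    For the record, a small sketch of option 2, with a FUSE-level lazy detach attempted first, since it sometimes releases mounts that umount -f cannot (the paths are the placeholders from the question):

        # Lazy-unmount at the FUSE layer; this often works when umount -f does not
        sudo fusermount -uz /path/to/mount/point
        # If the old mount point stays wedged, remount the volume elsewhere
        sudo mkdir -p /path/to/folder2
        sudo glusterfs -f /etc/glusterfs/glusterfs.vol /path/to/folder2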

    Read the article

  • Kerberos service on win2k dc will not start following disk failure

    - by iwilson68
    Hi, I have a Win2k mixed-mode domain with 4 DCs. One of these also acts as an Exchange 2000 server, which uses two logical volumes from an MSA 2000 array; AD etc. is stored on local drives. We had a problem last week when the RAID array failed over to a redundant controller, which temporarily left the two logical drives invisible to the server for around 5 minutes and a couple of reboots. The log records these events as:

    Event Type: Warning
    Event Source: Disk
    Event Category: None
    Event ID: 51
    Date: 06/11/2009 Time: 11:46:23
    User: N/A
    Computer: server1
    Description: An error was detected on device \Device\Harddisk1\DR1 during a paging operation.

    Following these problems, the server's Kerberos Key Distribution service refuses to start with "error 31: a device attached to the system is not functioning". All other automatic-start services (including Net Logon) are running, and there are no DNS issues. All devices are functioning, but the two logical MSA disks are now numbered 2 and 4 in the Windows Disk Management MMC; I suspect they were previously identified as disks 1 and 2, and perhaps Windows still sees this as an ongoing failure. Replication has not been affected, but there are many audit failures in the security log relating to users and workstations, presumably linked to the Kerberos issue. Attempting to manually start the Kerberos service generates the following in the system log:

    Event Type: Error
    Event Source: Service Control Manager
    Event Category: None
    Event ID: 7023
    Date: 09/11/2009 Time: 09:46:55
    User: N/A
    Computer: Server1
    Description: The Kerberos Key Distribution Center service terminated with the following error: A device attached to the system is not functioning.

    DCDIAG passes all tests except "Advertising" and "Services", which I believe relate directly to the Kerberos failure. Any advice would be appreciated.

    Read the article

  • Problems starting autossh on boot [ubuntu]

    - by Ken
    I'm trying to automatically start an SSH tunnel to my server on boot from an Ubuntu box. The box is mounted on an 18-wheeler and is networked behind an air card. It hosts a MySQL database that I'm trying to have replicated whenever the air card is connected. Since I can never be sure of my IP, or how many or which routers I'm behind, I connect to my replication server through a reverse SSH tunnel. I got that working with the following command:

    ssh -R 3307:localhost:3307 [email protected]

    Now I'd like the tunnel to start whenever the box does, and stay alive, so I installed autossh and set up this little script:

    ID=xkenneth
    HOST=erdosmiller.com
    AUTOSSH_POLL=15
    AUTOSSH_PORT=20000
    AUTOSSH_GATETIME=30
    AUTOSSH_DEBUG=yes
    AUTOSSH_PATH=/usr/bin/ssh
    export AUTOSSH_POLL AUTOSSH_DEBUG AUTOSSH_PATH AUTOSSH_GATETIME AUTOSSH_PORT
    autossh -2 -fN -M 20000 -R 3307:localhost:3306 ${ID}@${HOST}

    I've tried putting this script in /etc/init.d/ and calling it from a post-up command in /etc/network/interfaces, as well as putting it in /etc/network/if-up.d/. In both cases the script starts on boot, but the tunnel doesn't appear to be correctly established. The script works when run manually.
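    One plausible culprit worth noting: scripts under /etc/network/if-up.d/ run as root, so autossh never sees the user's SSH keys and the tunnel fails silently even though the script "ran". A sketch of a wrapper that drops privileges first - the filename, key path, and user below are assumptions based on the question, not confirmed details:

        #!/bin/sh
        # Hypothetical /etc/network/if-up.d/autossh-tunnel
        [ "$IFACE" = "lo" ] && exit 0    # ignore loopback interface events
        # Run the tunnel as the user who owns the SSH keys, not as root
        su - xkenneth -c '/usr/bin/autossh -2 -fN -M 20000 \
            -i /home/xkenneth/.ssh/id_rsa \
            -R 3307:localhost:3306 xkenneth@erdosmiller.com'
        exit 0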

    Read the article

< Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >