Search Results

Search found 11836 results on 474 pages for 'cloud dev'.


  • top Tweets SOA Partner Community – August 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity Lucas Jellema ?Published an article about organizing Fusion Middleware Administration: http://technology.amis.nl/2012/07/31/organizing-fusion-middleware-administration-in-a-smart-and-frugal-way … - many organizations are struggling with this. ServiceTechSymposium Countdown to the Early Bird Registration Discount deadline. Only 4 days left! http://ow.ly/cBCiv demed ?Good chatting w Bob Rhubart, Thomas Erl & Tim Hall on SOA & Cloud Symposium https://blogs.oracle.com/archbeat/entry/podcast_show_notes_thomas_erl … @soaschool @OTNArchBeat -- CU in London! SOA Community top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/07/30/top-tweets-soa-partner-community-july-2012/ … #soacommunity SOA Community ?Are You a facebook member - do You follow http://www.facebook.com/soacommunity ? #soacommunity #soa SOA Community ?SOA 24/7 - Home Page: http://soa247.com/#.UBJsN8n3kyk.twitter … #soacommunity OracleBlogs ?Handling Large Payloads in SOA Suite 11g http://ow.ly/1lFAih OracleBlogs ?SOA Community Newsletter July 2012 http://ow.ly/1lFx6s OTNArchBeat Podcast Show Notes: Thomas Erl on SOA, Cloud, and Service Technology http://bit.ly/OOHTUJ SOA Community SOA Community Newsletter July 2012 http://wp.me/p10C8u-s7 OTNArchBeat ?OTN ArchBeat Podcast: Thomas Erl on SOA, Cloud, and Service Technology - Part 1 http://pub.vitrue.com/fMti OProcessAccel ?Just released! White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … OTNArchBeat ?SOA, Cloud, and Service Technologies - Part 1 of 4 - A conversation with SOA, Cloud, and Service Technology Symposiu... http://ow.ly/1lDyAK OracleBlogs ?SOA Suite 11g PS5 Bundled Patch 3 (11.1.1.6.3) http://ow.ly/1lCW1S Simon Haslam My write-up of the virtues of the #ukoug App Server & Middleware SIG http://bit.ly/LMWdfY What's important to you for our next meeting? SOA Community SOA Partner Community Survey 2012 http://wp.me/p10C8u-qY Simone Geib ?RT @jswaroop: #Oracle positioned in the Leader's quadrant - Gartner Magic Quadrants for Application Infrastructure (SOA & SOA Gov)... ServiceTechSymposium New Supporting Organization, IBTI has joined the Symposium! http://www.servicetechsymposium.com/ orclateamsoa ?A-Team Blog #ateam: BPM 11g Task Form Version Considerations http://ow.ly/1lA7XS OTNArchBeat Oracle content at SOA, Cloud and Service Technology Symposium (and discount code!) http://pub.vitrue.com/FPcW OracleBlogs ?BPM 11g Task Form Version Considerations http://ow.ly/1lzOrX OTNArchBeat BPM 11g #ADF Task Form Versioning | Christopher Karl Chan #fusionmiddleware http://pub.vitrue.com/0qP2 OTNArchBeat Lightweight ADF Task Flow for BPM Human Tasks Overview | @AndrejusB #fusionmiddleware http://pub.vitrue.com/z7x9 SOA Community Oracle Fusion Middleware Summer Camps in Lisbon report by Link Consulting http://middlewarebylink.wordpress.com/2012/07/20/oracle-fusion-middleware-summer-camps-in-lisbon/ … #ofmsummercamps #soa #bpm SOA Community ?Clemens Utschig-Utschig & Manas Deb The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts… Simone Geib ?RT @oprocessaccel: Just released! 
White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf … jornica ?Report from Oracle Fusion Middleware Summer Camps in Munich: SOA Suite 11g advanced training experiences @soacommunity http://bit.ly/Mw3btE Simone Geib ?Bruce Tierney: Update - SOA & BPM Customer Insights Webcast Series: | https://blogs.oracle.com/SOA/entry/update_soa_bpm_customer_insights … OTNArchBeat Business SOA: Thinking is Dead | @mosesjones http://pub.vitrue.com/k8mw esentri ?had 3 great days in Munich at #Oracle #soacommunity Summercamp! Special thanks to Geoffroy de Lamalle from eProseed! Danilo Schmiedel ?Used my time in train to setup the ps5 soa/bpm vbox-image.Works like a dream. Setup-Readme is perfect! Saves a lot of time!!! @soacommunity 18 Jul SOA Community ?THANKS for the excellent OFM summer camps - save trip home - share your pictures at http://www.facebook.com/soacommunity #ofmsummercamps #soacommunity doors BBQ-party with Oracle @soacommunity. 5Star! #lovemunich #ofmsummercamps pic.twitter.com/ztfcGn2S leonsmiers ?New #Capgemini blog post "Continuous Improvement of Business Agility" http://bit.ly/Lr0EwG #bpm #yam Eric Elzinga ?MDS Explorer utility, http://see.sc/4qdb43 #soasuite ServiceTechSymposium ?@techsymp New speaker Demed L’Her from Oracle has been added to the symposium calendar. http://ow.ly/cjnyw SOA Community ?Last day of the Fusion Middleware summer camps - we continue at 9.00 am. send us your barbecue pictures! #ofmsummercamps #soacommunity SOA Community ?Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting http://middlewarebylink.wordpress.com/2012/06/26/delivering-soa-governance-with-eams-and-oracle-enterprise-repository/ … #soacommunity #soa #oer OracleBlogs ?Process Accelerator Kit http://ow.ly/1loaCw 15 Jul SOA Community ?Sun is back in Munich! Send your pictures Middleware summer camps! #ofmsummercamps We start tomorrow 11.00 at Oracle pic.twitter.com/6FStxomk Walter Montantes ?Gracias, Obrigado, Thank you, Danke a Lisboa y a @soacommunity @wlscommunity. From the Mexican guys!! cc @mikeintoch #ofmsummercamps Andrejus Baranovskis Tips & Tricks How to Run Oracle BPM 11g PS5 Workspace from Custom ADF 11g Application http://fb.me/1zOf3h2K8 JDeveloper & ADF ?Fusion Apps Enterprise Repository - Explained http://dlvr.it/1rpjWd Steve Walker ?Oracle #Exalogic is the logical choice for running business applications. Exalogic Software 2.0 launches 7/25. Reg at http://bit.ly/NedQ9L A. Chatziantoniou ?Landed in rainy Amsterdam after a great week in Lisbon for the #ofmsummercamps - multo obrigado for Jürgen for another fantastic event SOA Community ?Teams present #BPM11g POC results at #ofmsummercamps - great job! #soacommunity pic.twitter.com/0d4txkWF Sabine Leitner ?#DOAG SIG Middleware 29.08.2012 Köln über MW, Administration, Monitoring http://bit.ly/P47w82 @soacommunity @OracleMW @OracleFMW 12 Jul philmulhall ?Thanks @soacommunity for a great week at the #ofmsummercamps. Hard work done so time for a few cold ones in Lisboa. pic.twitter.com/LVUUuwTh peter230769 ?RT: andrea_rocco_31: RT @soacommunity: Enjoy the networking event at #ofmsummercamps want to attend next time ... pic.twitter.com/D1HRndi4 Niels Gorter ?#ofmsummercamps dinner in Lisbon. Great weather, scenery, training, people, on and on. 
Big THANKS @soacommunity JDeveloper & ADF ?Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Domain http://dlvr.it/1r0c2j Andrea Rocco ?RT @soacommunity: Jamy pastry at cafe Belem - who is the ghost there?!? http://via.me/-2x33uk6 Simon Haslam ?Sounds great - sorry I couldn't make it. RT @soacommunity: 6pm BPM advanced training hard work to build the POC #ofmsummercamps philmulhall ?A well earned rest after a hard days work @soacommunity #summercamps pic.twitter.com/LKK7VOVS philmulhall ?Some more hard working delegates @soacommunity #summercamps pic.twitter.com/gWpk1HZh SOA Community ?Error message at the BPM POC - will The #ace director understand the message and solve it? #ofmsummercamps pic.twitter.com/LFTEzNck Daniel Kleine-Albers ?posted on the #thecattlecrew blog: Assigning more memory to JDeveloper http://thecattlecrew.wordpress.com/2012/07/10/jdeveloper-quicktip-assigning-more-memory/ … OTNArchBeat ?BAM design pointers | Kavitha Srinivasan http://pub.vitrue.com/TOhP SOA Community ?Did you receive the July SOA community newsletter? read it! Want to become a member http://www.oracle.com/goto/emea/soa #soacommunity #soa #opn OracleBlogs ?Markus Zirn, Big Data with CEP and SOA @ SOA, Cloud &amp; Service Technology Symposium 2012 http://ow.ly/1lcSkb Andrejus Baranovskis Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Eric Elzinga ?Service Facade design pattern in OSB, http://bit.ly/NnOExN Eric Elzinga ?New BPEL Thread Pool in SOA 11g for Non-Blocking Invoke Activities from 11.1.1.6 (PS5), http://bit.ly/NnOc2G Gilberto Holms New Post: Siebel Connection Pool in Oracle Service Bus 11g http://wp.me/pRE8V-2z Oracle UPK & Tutor ?UPK Pre-Built Content Update: UPK pre-built content development efforts are always underway and growing. Ove... http://bit.ly/R2HeTj JDeveloper & ADF ?Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove... http://ow.ly/1kYqES SOA Community BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 … #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1 OTNArchBeat ?A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ OTNArchBeat ?BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0 SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA Community twitter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting to multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure the TCP port that we open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service

    Two virtual machines are configured in the same cloud service. Both VMs run different SQL Server instances, and both are configured with remote PowerShell turned on so that we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default so that we can connect to them remotely and check the connections to the SQL instances for demo purposes only; it is not actually required.

    Step 1 – Provision VMs and Configure Ports

    Provision VM1, named DemoVM1, as follows (see example screenshots below if using the portal). Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal). After provisioning the two VMs above, here are the default port configurations, for example:

    Step 2 – Verify / Confirm the TCP Port Used by the Database Engine

    By default, the port is configured to be 1433 – this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above – this also ensures the VMs complete SysPrep and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server Engine; in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output: Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1

    Step 6 – Close port 1433 in the Windows firewall

    netsh advfirewall firewall delete rule name=DemoVM1Port

    Output: Deleted 1 rule(s). Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.

    Scenario 2: VMs provisioned in different Cloud Services

    Two virtual machines are configured in different cloud services. Both VMs run different SQL Server instances, and both are configured with remote PowerShell turned on so that we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default so that we can connect to them remotely and check the connections to the SQL instances for demo purposes only; it is not actually needed.

    Step 1 – Provision new VM3

    Provision VM3, named DemoVM3, as follows (see example screenshots below if using the portal). After provisioning is complete, here are the default port configurations:

    Step 2 – Add a public port to VM1 for connecting to its DB instance from VM3

    Since VM3 and VM1 are not in the same cloud service, we will need to specify the full DNS address, which includes the public port, when connecting between the machines. We shall add a public port, 57000 in this case, that is linked to private port 1433 and will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1

    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows firewall

    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output: Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    Rule Name:      DemoVM1Port
    ----------------------------------------------------------------------
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1

    RDP into VM3, launch SSMS (SQL Server Management Studio) and connect to VM1's DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows firewall

    netsh advfirewall firewall delete rule name=DemoVM1Port

    Output: Deleted 1 rule(s). Ok.

    netsh advfirewall firewall show rule name=DemoVM1Port

    No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
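    As an aside, the same "DNS address plus public port" rule applies to any client, not just SSMS. Here is a minimal, hypothetical sketch using the node-sqlserver module featured elsewhere on this page; the host names, the 57000-to-1433 port mapping, the database name and the credentials are illustrative assumptions, not values taken from this walkthrough.

        // check_connectivity.js - quick connectivity probe against a SQL Server
        // instance running in a Windows Azure Virtual Machine (hypothetical values)
        var sql = require("node-sqlserver");

        // Same cloud service (Scenario 1): the private port 1433 is reachable directly.
        var sameService = "Driver={SQL Server Native Client 10.0};" +
            "Server=tcp:DemoVM1,1433;Database=master;Uid=demo;Pwd={PASSWORD};";

        // Different cloud services (Scenario 2): use the public DNS name and the
        // public port (57000) that the endpoint maps to private port 1433.
        var crossService = "Driver={SQL Server Native Client 10.0};" +
            "Server=tcp:condemo.cloudapp.net,57000;Database=master;Uid=demo;Pwd={PASSWORD};";

        sql.open(crossService, function (err, conn) {
            if (err) {
                // Expected once the firewall rule from Step 6 has been deleted.
                console.log("Cannot open connection: " + err);
            }
            else {
                conn.queryRaw("SELECT @@SERVERNAME AS server_name", function (err, results) {
                    if (err) {
                        console.log("Query failed: " + err);
                    }
                    else {
                        console.log("Connected to: " + results.rows[0][0]);
                    }
                });
            }
        });

    The only difference between the two connection strings is the server part; everything else about the client stays the same whether or not the VMs share a cloud service.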
    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and how to re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References

    SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends. These are the techniques used to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications onto fewer, larger servers. There's a move to standardized services – even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overheads are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch. New Architecture & Benefits In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term “pluggable database”: "pluggable", which is new, and "database", which is familiar.  Before we get to the exciting new stuff let’s discuss what hasn’t changed. A pluggable database is a fully functional Oracle database. It’s not watered down in any way. From the perspective of an application or an end user it hasn’t changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – “managing many as one” – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the “pluggability” capability gives us portability and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.  Use Cases We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones. 
    Development / Testing – individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    Consolidation – of disparate applications using fewer, more powerful servers
    Software as a Service – deploying separate copies of identical applications to individual tenants
    Database as a Service – typically self-service provisioning of databases on the private cloud
    Application Distribution from ISV / Installation by Customer – eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) – just plug in a PDB!
    High volume data distribution – literally via disk drives in envelopes distributed by truck! – distribution of things like GIS or MDM master databases
    …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

    Minimize CapEx – more applications per server
    Minimize OpEx – manage many as one; standardized procedures and services; rapid provisioning
    Maximize Agility – cloning for development and testing; portability through pluggability; scalability with RAC
    Ease of Adoption – applications run unchanged; it's a pure deployment choice. Neither the database backend nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.

    Follow me – I tweet @OraclePDB #OracleMultitenant

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available through NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Sites (a.k.a. WAWS) as well as Windows Azure Cloud Services (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and the service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an EntryPoint element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use a Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it's very simple and easy to learn and deploy, and it uses a single-threaded, non-blocking I/O model, Node.js has become more and more popular for web application and web service development, especially for I/O-intensive projects. And as Node.js is very good at scaling out, it's all the more useful on cloud computing platforms. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

    PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Web site not responding

    - by Subhransu
    I have website working fine before. But now its not able to connect to the server(I believe that is the problem). But its strange that the message not able to connect to the server is not coming and its keep connecting... for infinite time. Here is the screenshot. Here are some of the useful details about the status of the server. Application starts when server wakes up are: cd /etc/init.d/ Application server running in my server : Traceroute: UPDATE: ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 19204 744 ? Ss Aug07 0:01 /sbin/init root 2 0.0 0.0 0 0 ? S Aug07 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S Aug07 0:00 [migration/0] root 4 0.0 0.0 0 0 ? S Aug07 7:15 [ksoftirqd/0] root 5 0.0 0.0 0 0 ? S Aug07 0:00 [migration/0] root 6 0.0 0.0 0 0 ? S Aug07 0:00 [watchdog/0] root 7 0.0 0.0 0 0 ? S Aug07 0:05 [events/0] root 8 0.0 0.0 0 0 ? S Aug07 0:00 [cpuset] root 9 0.0 0.0 0 0 ? S Aug07 0:00 [khelper] root 10 0.0 0.0 0 0 ? S Aug07 0:00 [netns] root 11 0.0 0.0 0 0 ? S Aug07 0:00 [async/mgr] root 12 0.0 0.0 0 0 ? S Aug07 0:00 [pm] root 13 0.0 0.0 0 0 ? S Aug07 0:00 [sync_supers] root 14 0.0 0.0 0 0 ? S Aug07 0:00 [bdi-default] root 15 0.0 0.0 0 0 ? S Aug07 0:00 [kintegrityd/0] root 16 0.0 0.0 0 0 ? S Aug07 0:24 [kblockd/0] root 17 0.0 0.0 0 0 ? S Aug07 0:00 [kacpid] root 18 0.0 0.0 0 0 ? S Aug07 0:00 [kacpi_notify] root 19 0.0 0.0 0 0 ? S Aug07 0:00 [kacpi_hotplug] root 20 0.0 0.0 0 0 ? S Aug07 0:00 [ata/0] root 21 0.0 0.0 0 0 ? S Aug07 0:00 [ata_aux] root 22 0.0 0.0 0 0 ? S Aug07 0:00 [ksuspend_usbd] root 23 0.0 0.0 0 0 ? S Aug07 0:00 [khubd] root 24 0.0 0.0 0 0 ? S Aug07 0:00 [kseriod] root 25 0.0 0.0 0 0 ? S Aug07 0:00 [md/0] root 26 0.0 0.0 0 0 ? S Aug07 0:00 [md_misc/0] root 27 0.0 0.0 0 0 ? S Aug07 0:00 [khungtaskd] root 28 0.0 0.0 0 0 ? S Aug07 0:19 [kswapd0] root 29 0.0 0.0 0 0 ? SN Aug07 0:00 [ksmd] root 30 0.0 0.0 0 0 ? SN Aug07 1:36 [khugepaged] root 31 0.0 0.0 0 0 ? S Aug07 0:00 [aio/0] root 32 0.0 0.0 0 0 ? S Aug07 0:00 [crypto/0] root 37 0.0 0.0 0 0 ? S Aug07 0:00 [kthrotld/0] root 38 0.0 0.0 0 0 ? S Aug07 0:00 [pciehpd] root 40 0.0 0.0 0 0 ? S Aug07 0:00 [kpsmoused] root 41 0.0 0.0 0 0 ? S Aug07 0:00 [usbhid_resumer] root 71 0.0 0.0 0 0 ? S Aug07 0:00 [kstriped] root 203 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_0] root 206 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_1] root 213 0.0 0.0 0 0 ? S Aug07 0:00 [mpt_poll_0] root 214 0.0 0.0 0 0 ? S Aug07 0:00 [mpt/0] root 215 0.0 0.0 0 0 ? S Aug07 0:00 [scsi_eh_2] root 317 0.0 0.0 0 0 ? S Aug07 0:00 [kdmflush] root 319 0.0 0.0 0 0 ? S Aug07 0:00 [kdmflush] root 338 0.0 0.0 0 0 ? S Aug07 4:30 [jbd2/dm-0-8] root 339 0.0 0.0 0 0 ? S Aug07 0:00 [ext4-dio-unwrit] root 411 0.0 0.0 11060 224 ? S<s Aug07 0:00 /sbin/udevd -d root 591 0.0 0.0 0 0 ? S Aug07 0:00 [vmmemctl] root 732 0.0 0.0 0 0 ? S Aug07 0:00 [jbd2/sda1-8] root 733 0.0 0.0 0 0 ? S Aug07 0:00 [ext4-dio-unwrit] root 770 0.0 0.0 0 0 ? S Aug07 0:00 [kauditd] root 907 0.0 0.0 0 0 ? S Aug07 0:02 [flush-253:0] root 963 0.0 0.0 93180 528 ? S<sl Aug07 0:00 auditd root 979 0.0 0.0 248680 1132 ? Sl Aug07 0:04 /sbin/rsyslogd -i /var/run/syslogd.pid -c 4 dbus 991 0.0 0.0 31740 348 ? Ssl Aug07 0:00 dbus-daemon --system root 1023 0.0 0.0 64032 456 ? Ss Aug07 0:01 /usr/sbin/sshd root 1031 0.0 0.0 22076 592 ? Ss Aug07 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 1107 0.0 0.0 78652 744 ? Ss Aug07 0:01 /usr/libexec/postfix/master postfix 1116 0.0 0.0 78904 852 ? S Aug07 0:00 qmgr -l -t fifo -u qpidd 1129 0.0 0.0 234596 1488 ? 
Ssl Aug07 1:54 /usr/sbin/qpidd --data-dir /var/lib/qpidd --daemon root 1181 0.0 0.0 117176 532 ? Ss Aug07 0:04 crond root 1217 0.0 0.0 108152 412 ? S Aug07 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/m mysql 1306 0.0 1.8 792636 72640 ? Sl Aug07 6:51 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log- root 1334 0.0 0.1 739156 5520 ? Ssl Aug07 0:34 /usr/sbin/shibd -p /var/run/shibboleth/shibd.pid -f -w 30 root 1355 0.0 0.0 4048 272 tty2 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty2 root 1357 0.0 0.0 4048 272 tty3 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty3 root 1360 0.0 0.0 12336 264 ? S< Aug07 0:00 /sbin/udevd -d root 1361 0.0 0.0 12336 240 ? S< Aug07 0:00 /sbin/udevd -d root 1362 0.0 0.0 4048 272 tty4 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty4 root 1364 0.0 0.0 4048 272 tty5 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty5 root 1366 0.0 0.0 4048 272 tty6 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty6 root 1394 0.0 0.0 574892 436 ? Sl Aug07 0:00 /usr/sbin/console-kit-daemon --no-daemon root 1495 0.0 0.0 4048 264 tty1 Ss+ Aug07 0:00 /sbin/mingetty /dev/tty1 root 7665 0.0 0.1 296304 6244 ? Ss Aug16 2:33 /usr/sbin/httpd apache 10298 0.0 0.2 457756 10472 ? Sl Sep07 3:35 /usr/sbin/httpd apache 11684 0.0 0.5 465352 20708 ? Sl Sep12 0:02 /usr/sbin/httpd apache 14570 0.0 0.7 475592 30628 ? Sl Sep12 0:02 /usr/sbin/httpd apache 14877 0.0 0.5 467868 22696 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15128 0.0 0.4 464628 19096 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15151 0.0 0.4 464624 18980 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15169 0.0 0.6 470268 24636 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15238 0.0 0.4 464628 19108 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15266 0.0 0.4 464624 18920 ? Sl Sep12 0:02 /usr/sbin/httpd apache 15312 0.0 0.4 464624 18724 ? Sl Sep12 0:01 /usr/sbin/httpd apache 15427 0.0 0.6 470268 24644 ? Sl Sep12 0:00 /usr/sbin/httpd apache 15814 0.0 0.4 464884 19296 ? Sl 00:14 0:01 /usr/sbin/httpd apache 15830 0.0 0.4 464628 19028 ? Sl 00:24 0:00 /usr/sbin/httpd apache 15859 0.0 0.7 475524 30320 ? Sl 00:31 0:00 /usr/sbin/httpd apache 15897 0.0 0.6 471876 26056 ? Sl 00:42 0:00 /usr/sbin/httpd apache 15926 0.0 0.4 464884 18936 ? Sl 00:46 0:01 /usr/sbin/httpd apache 15970 0.0 0.6 470268 24216 ? Sl 00:57 0:00 /usr/sbin/httpd apache 16010 0.0 0.4 464884 18912 ? Sl 01:04 0:00 /usr/sbin/httpd apache 16023 0.0 0.3 457756 12300 ? Sl 01:05 0:02 /usr/sbin/httpd apache 16176 0.0 0.4 464624 18568 ? Sl 02:01 0:01 /usr/sbin/httpd apache 16213 0.0 0.4 464624 18900 ? Sl 02:22 0:01 /usr/sbin/httpd apache 16240 0.0 0.4 464884 18828 ? Sl 02:35 0:00 /usr/sbin/httpd root 16313 0.0 0.0 19372 968 ? Ss 03:01 0:00 /usr/sbin/anacron -s apache 16361 0.0 0.4 464624 18572 ? Sl 03:17 0:00 /usr/sbin/httpd apache 16364 0.0 0.4 464884 19284 ? Sl 03:19 0:01 /usr/sbin/httpd root 16421 0.0 0.0 9180 1300 ? SN 03:37 0:00 /bin/bash /usr/bin/run-parts /etc/cron.daily root 16426 0.0 0.0 9312 1404 ? SN 03:37 0:00 /bin/bash /etc/cron.daily/backupdb root 16427 0.0 0.0 9064 820 ? SN 03:37 0:00 awk -v progname /etc/cron.daily/backupdb progname {????? print progname ":\n" root 16434 0.0 0.0 50776 2420 ? SN 03:37 0:00 mysqldump --opt --quote-names -u root -px xxx inamiriziv_dokeos_user personal_a root 16435 0.0 0.0 4280 536 ? SN 03:37 0:00 gzip --rsyncable apache 16484 0.0 0.2 457584 11432 ? Sl 03:55 0:04 /usr/sbin/httpd apache 16492 0.0 0.4 464884 19320 ? Sl 03:58 0:02 /usr/sbin/httpd apache 16496 0.0 0.4 464624 18704 ? Sl 04:00 0:02 /usr/sbin/httpd apache 16529 0.0 0.6 470268 24608 ? 
Sl 04:06 0:02 /usr/sbin/httpd apache 16533 0.0 0.4 464624 18532 ? Sl 04:10 0:00 /usr/sbin/httpd apache 16536 0.0 0.4 464884 18908 ? Sl 04:10 0:00 /usr/sbin/httpd apache 16556 0.0 0.4 464884 18924 ? Sl 04:18 0:02 /usr/sbin/httpd apache 16563 0.0 0.3 457756 12384 ? Sl 04:19 0:07 /usr/sbin/httpd apache 16598 0.0 0.3 457756 12344 ? Sl 04:28 0:02 /usr/sbin/httpd apache 16633 0.0 0.4 464624 18492 ? Sl 04:41 0:00 /usr/sbin/httpd apache 16637 0.0 0.6 470268 24300 ? Sl 04:41 0:02 /usr/sbin/httpd apache 16654 0.0 0.3 457756 12296 ? Sl 04:47 0:02 /usr/sbin/httpd apache 16665 0.0 0.6 470268 24308 ? Sl 04:50 0:03 /usr/sbin/httpd apache 16738 0.0 0.6 470268 24312 ? Sl 05:10 0:02 /usr/sbin/httpd apache 17388 0.0 0.2 457584 11440 ? Sl 08:56 0:01 /usr/sbin/httpd apache 17391 0.0 0.3 457756 12296 ? Sl 08:57 0:00 /usr/sbin/httpd apache 17397 0.0 0.3 457756 12312 ? Sl 08:59 0:00 /usr/sbin/httpd apache 17401 0.0 0.3 457756 12284 ? Sl 09:00 0:00 /usr/sbin/httpd apache 17420 0.0 0.2 457584 11436 ? Sl 09:04 0:01 /usr/sbin/httpd apache 17426 0.0 0.3 457756 12324 ? Sl 09:07 0:01 /usr/sbin/httpd apache 17431 0.0 0.3 457756 12276 ? Sl 09:08 0:03 /usr/sbin/httpd apache 17434 0.0 0.3 457756 12308 ? Sl 09:08 0:00 /usr/sbin/httpd apache 17437 0.0 0.2 457584 11440 ? Sl 09:09 0:01 /usr/sbin/httpd apache 17442 0.0 0.2 457584 11436 ? Sl 09:10 0:01 /usr/sbin/httpd apache 17445 0.0 0.3 457756 12328 ? Sl 09:11 0:01 /usr/sbin/httpd apache 17449 0.0 0.3 457756 12292 ? Sl 09:12 0:01 /usr/sbin/httpd apache 17454 0.0 0.2 457584 11444 ? Sl 09:15 0:01 /usr/sbin/httpd apache 17457 0.0 0.2 457584 11436 ? Sl 09:15 0:01 /usr/sbin/httpd apache 17461 0.0 0.3 457756 12304 ? Sl 09:16 0:01 /usr/sbin/httpd apache 17465 0.0 0.2 457584 11444 ? Sl 09:18 0:01 /usr/sbin/httpd apache 17468 0.0 0.2 457584 11436 ? Sl 09:18 0:01 /usr/sbin/httpd apache 17473 0.0 0.4 464884 18940 ? Sl 09:19 0:00 /usr/sbin/httpd apache 17476 0.0 0.4 464628 18736 ? Sl 09:20 0:00 /usr/sbin/httpd apache 17479 0.0 0.2 457584 11440 ? Sl 09:20 0:01 /usr/sbin/httpd apache 17483 0.0 0.2 457584 11416 ? Sl 09:21 0:00 /usr/sbin/httpd apache 17486 0.0 0.3 457756 12296 ? Sl 09:21 0:01 /usr/sbin/httpd apache 17489 0.0 0.4 464884 18928 ? Sl 09:21 0:00 /usr/sbin/httpd apache 17492 0.0 0.2 457584 11260 ? Sl 09:22 0:00 /usr/sbin/httpd apache 17496 0.0 0.3 457756 12372 ? Sl 09:22 0:01 /usr/sbin/httpd apache 17500 0.0 0.2 457584 11428 ? Sl 09:23 0:00 /usr/sbin/httpd apache 17504 0.0 0.2 457584 11432 ? Sl 09:25 0:00 /usr/sbin/httpd apache 17509 0.0 0.3 457756 12336 ? Sl 09:27 0:01 /usr/sbin/httpd apache 17513 0.0 0.2 457584 11436 ? Sl 09:29 0:01 /usr/sbin/httpd apache 17517 0.0 0.2 457584 11448 ? Sl 09:31 0:00 /usr/sbin/httpd apache 17520 0.0 0.3 457584 12128 ? Sl 09:32 0:00 /usr/sbin/httpd apache 17525 0.0 0.4 464884 18960 ? Sl 09:34 0:00 /usr/sbin/httpd apache 17529 0.0 0.2 457584 11420 ? Sl 09:36 0:00 /usr/sbin/httpd apache 17533 0.0 0.2 457584 11436 ? Sl 09:38 0:00 /usr/sbin/httpd apache 17537 0.0 0.2 457584 11436 ? Sl 09:38 0:00 /usr/sbin/httpd apache 17542 0.0 0.4 464884 18840 ? Sl 09:40 0:00 /usr/sbin/httpd apache 17546 0.0 0.3 457756 12320 ? Sl 09:41 0:00 /usr/sbin/httpd apache 17550 0.0 0.2 457584 11440 ? Sl 09:42 0:00 /usr/sbin/httpd apache 17554 0.0 0.2 457584 11436 ? Sl 09:43 0:00 /usr/sbin/httpd apache 17557 0.0 0.2 457584 11436 ? Sl 09:44 0:00 /usr/sbin/httpd apache 17560 0.0 0.2 457584 11428 ? Sl 09:44 0:01 /usr/sbin/httpd apache 17568 0.0 0.4 464884 18824 ? Sl 09:48 0:00 /usr/sbin/httpd apache 17572 0.0 0.2 457584 11428 ? 
Sl 09:48 0:00 /usr/sbin/httpd apache 17575 0.0 0.2 457584 11428 ? Sl 09:48 0:01 /usr/sbin/httpd apache 17583 0.0 0.2 457584 11432 ? Sl 09:50 0:00 /usr/sbin/httpd apache 17586 0.0 0.3 457756 12264 ? Sl 09:50 0:00 /usr/sbin/httpd apache 17589 0.0 0.2 457584 11420 ? Sl 09:51 0:00 /usr/sbin/httpd apache 17597 0.0 0.2 457584 11420 ? Sl 09:53 0:02 /usr/sbin/httpd apache 17600 0.0 0.3 457756 12376 ? Sl 09:54 0:00 /usr/sbin/httpd apache 17604 0.0 0.2 457584 11436 ? Sl 09:55 0:00 /usr/sbin/httpd apache 17610 0.0 0.2 457584 11420 ? Sl 09:59 0:00 /usr/sbin/httpd apache 17615 0.0 0.2 457584 11424 ? Sl 10:00 0:00 /usr/sbin/httpd apache 17618 0.0 0.4 464884 19288 ? Sl 10:00 0:00 /usr/sbin/httpd apache 17635 0.0 0.2 457584 11416 ? Sl 10:01 0:00 /usr/sbin/httpd apache 17639 0.0 0.2 457584 11440 ? Sl 10:02 0:00 /usr/sbin/httpd apache 17643 0.0 0.2 457584 11448 ? Sl 10:03 0:00 /usr/sbin/httpd apache 17648 0.0 0.4 464884 18868 ? Sl 10:06 0:00 /usr/sbin/httpd apache 17651 0.0 0.2 457584 11416 ? Sl 10:07 0:00 /usr/sbin/httpd apache 17655 0.0 0.3 457756 12268 ? Sl 10:08 0:01 /usr/sbin/httpd apache 17658 0.0 0.2 457584 11440 ? Sl 10:08 0:00 /usr/sbin/httpd apache 17663 0.0 0.3 457756 12292 ? Sl 10:11 0:00 /usr/sbin/httpd apache 17666 0.0 0.2 457584 11432 ? Sl 10:11 0:00 /usr/sbin/httpd apache 17672 0.0 0.2 457584 11428 ? Sl 10:14 0:00 /usr/sbin/httpd apache 17676 0.0 0.2 457584 11424 ? Sl 10:16 0:00 /usr/sbin/httpd apache 17680 0.0 0.4 464884 18884 ? Sl 10:16 0:00 /usr/sbin/httpd apache 17683 0.0 0.2 457584 11420 ? Sl 10:19 0:00 /usr/sbin/httpd apache 17689 0.0 0.2 457584 11424 ? Sl 10:23 0:00 /usr/sbin/httpd apache 17692 0.0 0.2 457584 11428 ? Sl 10:23 0:00 /usr/sbin/httpd apache 17696 0.0 0.3 457584 11980 ? Sl 10:25 0:00 /usr/sbin/httpd apache 17699 0.0 0.2 457584 11436 ? Sl 10:25 0:00 /usr/sbin/httpd apache 17704 0.0 0.2 457584 11232 ? Sl 10:27 0:00 /usr/sbin/httpd apache 17711 0.0 0.2 457584 11412 ? Sl 10:30 0:01 /usr/sbin/httpd postfix 17714 0.0 0.0 78732 3216 ? S 10:30 0:00 pickup -l -t fifo -u apache 17715 0.0 0.2 457584 11436 ? Sl 10:30 0:00 /usr/sbin/httpd apache 17718 0.0 0.2 457584 11428 ? Sl 10:31 0:00 /usr/sbin/httpd apache 17726 0.0 0.2 457584 11420 ? Sl 10:36 0:00 /usr/sbin/httpd apache 17731 0.0 0.2 457584 11168 ? Sl 10:37 0:00 /usr/sbin/httpd apache 17734 0.0 0.4 464884 18796 ? Sl 10:37 0:00 /usr/sbin/httpd apache 17743 0.0 0.2 457584 11220 ? Sl 10:43 0:00 /usr/sbin/httpd apache 17746 0.0 0.2 457584 11172 ? Sl 10:44 0:00 /usr/sbin/httpd apache 17750 0.0 0.3 457756 12288 ? Sl 10:44 0:00 /usr/sbin/httpd apache 17753 0.0 0.2 457584 11220 ? Sl 10:45 0:00 /usr/sbin/httpd apache 17756 0.0 0.2 457584 11424 ? Sl 10:46 0:00 /usr/sbin/httpd apache 17763 0.0 0.3 457756 12204 ? Sl 10:51 0:00 /usr/sbin/httpd apache 17766 0.0 0.2 457584 11428 ? Sl 10:51 0:00 /usr/sbin/httpd apache 17771 0.0 0.2 457584 11180 ? Sl 10:54 0:00 /usr/sbin/httpd apache 17774 0.0 0.2 457584 11416 ? Sl 10:54 0:00 /usr/sbin/httpd apache 17779 0.0 0.2 457584 11428 ? Sl 10:58 0:00 /usr/sbin/httpd apache 17784 0.0 0.2 457584 11380 ? Sl 11:00 0:00 /usr/sbin/httpd apache 17805 0.0 0.2 457584 11380 ? Sl 11:05 0:00 /usr/sbin/httpd apache 17818 0.0 0.2 457584 11156 ? Sl 11:11 0:00 /usr/sbin/httpd apache 17823 0.0 0.2 457584 11416 ? Sl 11:12 0:00 /usr/sbin/httpd apache 17827 0.0 0.2 457584 11412 ? Sl 11:13 0:00 /usr/sbin/httpd apache 17831 0.0 0.2 457584 11132 ? Sl 11:13 0:00 /usr/sbin/httpd root 17835 0.0 0.0 97780 3792 ? S 11:14 0:00 sshd: smaity [priv] smaity 17839 0.0 0.0 97780 1748 ? 
S 11:15 0:00 sshd: smaity@pts/0 smaity 17840 0.0 0.0 108288 1928 pts/0 Ss 11:15 0:00 -bash apache 17858 0.0 0.4 464884 18856 ? Sl 11:16 0:00 /usr/sbin/httpd apache 17862 0.0 0.3 457584 11904 ? Sl 11:17 0:00 /usr/sbin/httpd apache 17866 0.0 0.2 457584 11212 ? Sl 11:19 0:00 /usr/sbin/httpd apache 17871 0.0 0.2 457584 11144 ? Sl 11:20 0:00 /usr/sbin/httpd apache 17875 0.0 0.2 457584 11416 ? Sl 11:23 0:00 /usr/sbin/httpd apache 17880 0.0 0.2 457584 11408 ? Sl 11:23 0:00 /usr/sbin/httpd apache 17883 0.0 0.2 457584 11412 ? Sl 11:24 0:00 /usr/sbin/httpd apache 17888 0.0 0.2 457584 11412 ? Sl 11:25 0:00 /usr/sbin/httpd apache 17891 0.0 0.2 457584 11140 ? Sl 11:26 0:00 /usr/sbin/httpd apache 17899 0.0 0.2 457584 10984 ? Sl 11:32 0:00 /usr/sbin/httpd apache 17902 0.0 0.2 457584 11680 ? Sl 11:33 0:00 /usr/sbin/httpd apache 17906 0.0 0.2 457584 10980 ? Sl 11:33 0:00 /usr/sbin/httpd

    Output of wget http://mydomain.com/
        --2012-09-13 13:35:17-- http://mydomain.com/
        Resolving mydomain.com... 127.0.0.1
        Connecting to mydomain.com|127.0.0.1|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45 [text/html]
        Saving to: “index.html”
        0% [ ] 0 --.-K/s in 0s
        Cannot write to “index.html” (No space left on device).

    UPDATE3: output of df -h
        Filesystem                      Size  Used Avail Use% Mounted on
        /dev/mapper/vg_inamivm-lv_root   18G   17G     0 100% /
        tmpfs                           1.9G     0  1.9G   0% /dev/shm
        /dev/sda1                       485M   71M  389M  16% /boot

    output of wget -O /dev/null http://127.0.0.1/
        --2012-09-13 13:47:49-- http://127.0.0.1/
        Connecting to 127.0.0.1:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45 [text/html]
        Saving to: “/dev/null”
        100%[==========================================================================================>] 45 --.-K/s in 0s
        2012-09-13 13:47:54 (8.57 MB/s) - “/dev/null” saved [45/45]
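    The wget failure above comes down to the root filesystem being 100% full. A generic first step (a sketch only; the directory names below are common suspects, not taken from this server) is to find what is actually consuming the space:
        du -xh --max-depth=1 / | sort -rh | head
        du -sh /var/log /var/lib/mysql /tmp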

    Read the article

  • How do I stop and repair a RAID 5 array that has failed and has I/O pending?

    - by Ben Hymers
    The short version: I have a failed RAID 5 array which has a bunch of processes hung waiting on I/O operations on it; how can I recover from this? The long version: Yesterday I noticed Samba access was being very sporadic; accessing the server's shares from Windows would randomly lock up explorer completely after clicking on one or two directories. I assumed it was Windows being a pain and left it. Today the problem is the same, so I did a little digging; the first thing I noticed was that running ps aux | grep smbd gives a lot of lines like this: ben 969 0.0 0.2 96088 4128 ? D 18:21 0:00 smbd -F root 1708 0.0 0.2 93468 4748 ? Ss 18:44 0:00 smbd -F root 1711 0.0 0.0 93468 1364 ? S 18:44 0:00 smbd -F ben 3148 0.0 0.2 96052 4160 ? D Mar07 0:00 smbd -F ... There are a lot of processes stuck in the "D" state. Running ps aux | grep " D" shows up some other processes including my nightly backup script, all of which need to access the volume mounted on my RAID array at some point. After some googling, I found that it might be down to the RAID array failing, so I checked /proc/mdstat, which shows this: ben@jack:~$ cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdb1[3](F) sdc1[1] sdd1[2] 2930271872 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> And running mdadm --detail /dev/md0 gives this: ben@jack:~$ sudo mdadm --detail /dev/md0 /dev/md0: Version : 00.90 Creation Time : Sat Oct 31 20:53:10 2009 Raid Level : raid5 Array Size : 2930271872 (2794.53 GiB 3000.60 GB) Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB) Raid Devices : 3 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Mar 7 03:06:35 2011 State : active, degraded Active Devices : 2 Working Devices : 2 Failed Devices : 1 Spare Devices : 0 Layout : left-symmetric Chunk Size : 64K UUID : f114711a:c770de54:c8276759:b34deaa0 Events : 0.208245 Number Major Minor RaidDevice State 3 8 17 0 faulty spare rebuilding /dev/sdb1 1 8 33 1 active sync /dev/sdc1 2 8 49 2 active sync /dev/sdd1 I believe this says that sdb1 has failed, and so the array is running with two drives out of three 'up'. Some advice I found said to check /var/log/messages for notices of failures, and sure enough there are plenty: ben@jack:~$ grep sdb /var/log/messages ... Mar 7 03:06:35 jack kernel: [4525155.384937] md/raid:md0: read error NOT corrected!! (sector 400644912 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644920 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389686] md/raid:md0: read error not correctable (sector 400644928 on sdb1). Mar 7 03:06:35 jack kernel: [4525155.389688] md/raid:md0: read error not correctable (sector 400644936 on sdb1). Mar 7 03:06:56 jack kernel: [4525176.231603] sd 0:0:1:0: [sdb] Unhandled sense code Mar 7 03:06:56 jack kernel: [4525176.231605] sd 0:0:1:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Mar 7 03:06:56 jack kernel: [4525176.231608] sd 0:0:1:0: [sdb] Sense Key : Medium Error [current] [descriptor] Mar 7 03:06:56 jack kernel: [4525176.231623] sd 0:0:1:0: [sdb] Add. Sense: Unrecovered read error - auto reallocate failed Mar 7 03:06:56 jack kernel: [4525176.231627] sd 0:0:1:0: [sdb] CDB: Read(10): 28 00 17 e1 5f bf 00 01 00 00 To me it is clear that device sdb has failed, and I need to stop the array, shutdown, replace it, reboot, then repair the array, bring it back up and mount the filesystem. 
I cannot hot-swap a replacement drive in, and don't want to leave the array running in a degraded state. I believe I am supposed to unmount the filesystem before stopping the array, but that is failing, and that is where I'm stuck now: ben@jack:~$ sudo umount /storage umount: /storage: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) It is indeed busy; there are some 30 or 40 processes waiting on I/O. What should I do? Should I kill all these processes and try again? Is that a wise move when they are 'uninterruptable'? What would happen if I tried to reboot? Please let me know what you think I should do. And please ask if you need any extra information to diagnose the problem or to help!
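    For reference, a sketch of the mdadm steps usually involved once the machine can actually be taken down (this does not answer the hung-I/O part of the question, and the device names simply follow the output above):
        sudo mdadm --manage /dev/md0 --fail /dev/sdb1     # may already be flagged faulty
        sudo mdadm --manage /dev/md0 --remove /dev/sdb1
        # after the drive has been physically replaced and partitioned:
        sudo mdadm --manage /dev/md0 --add /dev/sdb1      # triggers the rebuild
        cat /proc/mdstat                                  # watch rebuild progress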

    Read the article

  • lsattr: Inappropriate ioctl for device While reading flags

    - by rchhe
    For one of our Linux servers running CentOS 6.0, if I run lsattr /home as root, I get something like this:
        $ lsattr /home
        lsattr: Inappropriate ioctl for device While reading flags on /home/user
        lsattr: Inappropriate ioctl for device While reading flags on /home/user
        lsattr: Inappropriate ioctl for device While reading flags on /home/DIR
    Now, if I try to change something with chattr:
        $ chattr -R -i /home
        chattr: Inappropriate ioctl for device while reading flags on /home
    mount returns:
        $ mount
        /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        /dev/sda3 on /boot type ext3 (rw)
        tmpfs on /dev/shm type tmpfs (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
        sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
        nfsd on /proc/fs/nfsd type nfsd (rw)
    I have no clue how to fix this. Could somebody help?
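    This error is typically what lsattr and chattr report when the backing filesystem does not implement the ext2/3/4 flags ioctl (NFS-mounted home directories are a common case). A quick diagnostic sketch to confirm what actually backs /home:
        df -T /home
        stat -f /home
        mount | grep /home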

    Read the article

  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two Ubuntu servers running Heartbeat and DRBD. The servers are directly connected with a 1000Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start up heartbeat and provide access to the drbd resource from a cold boot. I have to manually restart heartbeat with sudo service heartbeat restart to get everything up and running. How can I get it to start fine from a cold start, when only one server is present? Here is the ha.cf:
        debug /var/log/ha-debug
        logfile /var/log/ha-log
        logfacility none
        keepalive 2
        deadtime 10
        warntime 7
        initdead 60
        ucast eth1 192.168.2.2
        ucast eth0 10.1.10.201
        node EMserver1
        node EMserver2
        respawn hacluster /usr/lib/heartbeat/ipfail
        ping 10.1.10.22 10.1.10.21 10.1.10.11
        auto_failback off
    Some material from the syslog:
        harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status
        mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
        mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2.
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us))
        Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete.
        IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped
        Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed.
        harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp
        ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server
        IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0
        IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255
        IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start
        Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped
        ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start
        Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr
        Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success
        ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none)
        Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed.
        Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586)
        Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources.
ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished. Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete. 
Here is some more from the log ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources. Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead. Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive! Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count. Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes. Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695) Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources. ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished. Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 
10 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete. If I restarted heartbeat at this point... the resources heartbeat controls would start up fine.... please help!
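    To narrow down how far heartbeat gets on a cold boot before giving up, the debug log configured in ha.cf above is the first place to look; a small diagnostic sketch:
        sudo service heartbeat status
        grep -Ei "initdead|dead node|resource|ERROR" /var/log/ha-debug | tail -n 50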

    Read the article

  • HP D2D 4312 Bacula configuration

    - by krisdigitx
    I have configured 5 libraries on the HP D2D system. Discovery on the Bacula server shows only the last library and not all libraries. Why?
        [root@server bacula]# iscsiadm --mode discovery --type sendtargets --portal 10.66.59.114
        10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dca5e.library12.drive1
        10.66.59.114:3260,1 iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dcaf2.library12.robotics
    I can query it fine using...
        [root@server bacula]# mtx -f /dev/sg2 inquiry
        Product Type: Tape Drive
        Vendor ID: 'HP '
        Product ID: 'Ultrium 5-SCSI '
        Revision: 'ED51'
        Attached Changer API: No
        [root@bray bacula]# mtx -f /dev/sg3 inquiry
        Product Type: Medium Changer
        Vendor ID: 'HP '
        Product ID: 'MSL G3 Series '
        Revision: 'EL41'
        Attached Changer API: No
        [root@server bacula]# mtx -f /dev/sg3 status
        Storage Changer /dev/sg3:1 Drives, 97 Slots ( 1 Import/Export )
        Data Transfer Element 0:Empty
        Storage Element 1:Full :VolumeTag=50507F82
        Storage Element 2:Full :VolumeTag=50507F83
        Storage Element 3:Full :VolumeTag=50507F84
        Storage Element 4:Full :VolumeTag=50507F85
        Storage Element 5:Full :VolumeTag=50507F86
        Storage Element 6:Full :VolumeTag=50507F87
        Storage Element 7:Full :VolumeTag=50507F88
    Does anyone have any good documentation for implementing Bacula with an HP D2D tape drive for server backups, and how to allocate libraries?
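    A sketch of logging in to one of the discovered targets by hand, to check whether the drive and robotics then show up as devices (the IQN and portal are copied from the discovery output above):
        iscsiadm --mode node --targetname iqn.1986-03.com.hp:storage.d2dbs.czj2020vvy.50014380075dca5e.library12.drive1 --portal 10.66.59.114:3260 --login
        iscsiadm --mode session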

    Read the article

  • Must partprobe before using drive?

    - by Jeff Welling
    This is a follow-up question to "Cannot mount /dev/sdc1 on Debian 5.0, special device /dev/sdc1 doesn't exist". Basically, I have 6 SATA hard drives in a machine and I'm trying to create a RAID 6 array with them. When I run the mdadm create command (with the verbose option), I see messages like "mdadm: super1.x cannot open /dev/sdf1: No such device or address", which are resolved by running partprobe /dev/sdf and then re-running the mdadm command. The problem is that I have to run partprobe after each reboot, and from experience I don't think this is normal behaviour -- on no other Linux machine do I have to partprobe a device before I can use it. Something must be going wrong, but how do I troubleshoot this to find out what? Could it be caused by a hardware problem? Edit: previously I seemed to have this problem with only one drive, but now I'm having it with three drives.
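    As a stopgap until the underlying cause is found, the partprobe step can at least be scripted; a sketch (the /dev/sd[b-g] glob is an assumption, adjust it to the actual six drives):
        for d in /dev/sd[b-g]; do sudo partprobe "$d"; done
        cat /proc/partitions    # the sd?1 partitions should now be listed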

    Read the article

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by BlueBill
    Hi all, I have a netbook (eMachines e250 - equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message: "fsck from util-linux-ng 2.16". There is no disk activity, no activity whatsoever. I have let the machine sit for over an hour and nothing. It takes a couple of hard resets to be able to boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)! I have spent the last couple of weeks researching the problem and the only thing that seems to work is setting nolapic in the boot string in GRUB - then it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume. At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:
        /dev/sda1 - an NTFS recovery partition
        /dev/sda2 - /boot
        /dev/sda3 - swap
        /dev/sda5 - /
        /dev/sda6 - /home
    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06). What logs should I be looking at? Is there a log for failed boots? Thanks, Troy
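    For the fstab part, the field that controls boot-time checking is the sixth one (fs_passno); setting it to 0 skips fsck for that entry. A sketch of what such a line could look like (the mount point and options here are only an example, not taken from the netbook):
        # <device>   <mount point>      <type>  <options>   <dump>  <pass>
        /dev/sda1    /mnt/xp-recovery   ntfs    ro,noauto   0       0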

    Read the article

  • Mysql server crashes Innodb

    - by martin
    Today we got some DB crash. The DB is InnoDB. At firstin log: 120404 10:57:40 InnoDB: ERROR: the age of the last checkpoint is 9433732, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:58:48 InnoDB: ERROR: the age of the last checkpoint is 9825579, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:59:04 InnoDB: ERROR: the age of the last checkpoint is 13992586, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. 120404 10:59:20 InnoDB: ERROR: the age of the last checkpoint is 18059881, InnoDB: which exceeds the log group capacity 9433498. InnoDB: If you are using big BLOB or TEXT rows, you must set the InnoDB: combined size of log files at least 10 times bigger than the InnoDB: largest such row. after manual service stop and normal PC restart : 120404 11:12:35 InnoDB: Error: page 3473451 log sequence number 105 802365904 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1103869440 120404 11:12:37 InnoDB: Error: page 0 log sequence number 105 834817616 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: Last MySQL binlog file position 0 3710603, file name .\mysql-bin.000336 InnoDB: Starting in background the rollback of uncommitted transactions 120404 11:12:38 InnoDB: Rolling back trx with id 0 1103866646, 1 rows to undo 120404 11:12:38 InnoDB: Started; log sequence number 105 796344770 120404 11:12:38 InnoDB: Error: page 2097163 log sequence number 105 803249754 InnoDB: is in the future! Current system log sequence number 105 796344770. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. InnoDB: Rolling back of trx id 0 1103866646 completed 120404 11:12:39 InnoDB: Rollback of non-prepared transactions completed 120404 11:12:39 [Note] Event Scheduler: Loaded 0 events 120404 11:12:39 [Note] wampmysqld: ready for connections. Version: '5.1.53-community' socket: '' port: 3306 MySQL Community Server (GPL) 120404 11:12:40 InnoDB: Error: page 2097162 log sequence number 105 803215859 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. 
120404 11:12:40 InnoDB: Error: page 2097156 log sequence number 105 803181181 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. 120404 11:12:40 InnoDB: Error: page 2097157 log sequence number 105 803193066 InnoDB: is in the future! Current system log sequence number 105 796345097. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: for more information. when tried to recover data get : key_buffer_size=16777216 read_buffer_size=262144 max_used_connections=0 max_threads=151 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 0000000140262AFC mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AAFA1 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AB33A mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140268219 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014027DB13 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A909F mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A91B6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014025B9B0 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014022F9C6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140219979 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014009ABCF mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z() 000000014003308C mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z() 00000001400375A9 mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z() 000000014001DACE mysqld.exe!handle_shutdown() 000000014001E285 mysqld.exe!?win_main@@YAHHPEAPEAD@Z() 000000014001E632 mysqld.exe!?mysql_service@@YAHPEAX@Z() 00000001402EA477 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402EA545 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000007712652D kernel32.dll!BaseThreadInitThunk() 000000007725C521 ntdll.dll!RtlUserThreadStart() The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 120404 14:17:49 [Note] Plugin 'FEDERATED' is disabled. 120404 14:17:49 [Warning] option 'innodb-force-recovery': signed value 8 adjusted to 6 InnoDB: The user has set SRV_FORCE_NO_LOG_REDO on InnoDB: Skipping log redo InnoDB: Error: trying to access page number 4290979199 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 120404 14:17:52 InnoDB: Assertion failure in thread 3928 in file .\fil\fil0fil.c lin23 InnoDB: We intentionally generate a memory trap. 
InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html InnoDB: about forcing recovery. 120404 14:17:52 - mysqld got exception 0xc0000005 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=262144 max_used_connections=0 max_threads=151 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 133725 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd: 0x0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 0000000140262AFC mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AAFA1 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402AB33A mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140268219 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014027DB13 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A909F mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402A91B6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014025B9B0 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014022F9C6 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 0000000140219979 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000014009ABCF mysqld.exe!?ha_initialize_handlerton@@YAHPEAUst_plugin_int@@@Z() 000000014003308C mysqld.exe!?plugin_lock_by_name@@YAPEAUst_plugin_int@@PEAVTHD@@PEBUst_mysql_lex_string@@H@Z() 00000001400375A9 mysqld.exe!?plugin_init@@YAHPEAHPEAPEADH@Z() 000000014001DACE mysqld.exe!handle_shutdown() 000000014001E285 mysqld.exe!?win_main@@YAHHPEAPEAD@Z() 000000014001E632 mysqld.exe!?mysql_service@@YAHPEAX@Z() 00000001402EA477 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 00000001402EA545 mysqld.exe!?check_next_symbol@Gis_read_stream@@QEAA_ND@Z() 000000007712652D kernel32.dll!BaseThreadInitThunk() 000000007725C521 ntdll.dll!RtlUserThreadStart() The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. Any suggestion how to get DB working ????
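    The route the linked manual page describes is forced recovery followed by a dump and reload. A hedged sketch of that sequence (take a copy of the data directory first, and start with the lowest innodb_force_recovery value that lets mysqld stay up -- 4 here is only an example):
        # my.cnf / my.ini
        [mysqld]
        innodb_force_recovery = 4

        # with the server up in this crippled mode:
        mysqldump -u root -p --all-databases > all_databases.sql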

    Read the article

  • AWS Large Instance: /mnt does not show all the space that should be available

    - by Emile Baizel
    I just created a Large (m1.large) 64-bit instance, which comes with 850 GB of instance storage (see the Large Instance specs at http://aws.amazon.com/ec2/instance-types/). A 'df -h' from the root folder gives me the output below. /mnt is where I believe the instance storage is, but it is only showing 414G. I have set up two servers and both show the same numbers.
        root@ip-11-11-11-11:/# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             7.9G  1.1G  6.5G  14% /
        none                  3.7G  112K  3.7G   1% /dev
        none                  3.7G     0  3.7G   0% /dev/shm
        none                  3.7G   48K  3.7G   1% /var/run
        none                  3.7G     0  3.7G   0% /var/lock
        /dev/sdb              414G  199M  393G   1% /mnt
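    One thing worth checking (a guess, not a confirmed diagnosis) is whether the 850 GB is split across two ephemeral volumes of which only the first is attached to /mnt; a sketch, with /dev/sdc as an assumed device name:
        cat /proc/partitions                 # look for a second large, unused device
        sudo mkfs.ext4 /dev/sdc              # ONLY if such an unused ephemeral device exists
        sudo mkdir -p /mnt2 && sudo mount /dev/sdc /mnt2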

    Read the article

  • Linux partitioning problem

    - by Claudiu
    I am using cfdisk to repartition my hard drive, since from the OS install I only got one big partition and a swap. I wanted to resize the big partition to a 1 GB /boot and use the rest of the space for an extended partition. After running cfdisk, I recheck the partitions with fdisk -l and get this:
        Disk /dev/sda: 320 GB, 320070320640 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Device Boot Start End Blocks Id System
        /dev/sda3 1 38455 308881755 f Extended LBA
        Warning: Partition 3 does not end on cylinder boundary.
        /dev/sda2 38455 38698 1951897 82 Linux swap
        /dev/sda1 * 38699 38913 311349654 83 Linux
    My problem is the warning message. I think I know the cause: the Blocks figure for sda1. How could it be so big if the Start/End interval is so small?
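    fdisk's cylinder-based arithmetic is what produces both the warning and the confusing block counts; re-reading the same table in plain sectors (a sketch) usually makes any overlap or mis-sized partition easier to spot:
        sudo parted /dev/sda unit s print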

    Read the article

  • How to reinstall Mac OS X on OS X/Linux dual-boot system?

    - by strangeronyourtrain
    My setup: I have a MacBook Pro 5,5 with a Mac OS X Snow Leopard partition and a Linux partition. I use rEFIt to boot into Linux. I didn't use Boot Camp when I originally installed Linux; instead, I manually created the partition (with either Disk Utility in OS X or Gparted on a Linux live CD--I don't recall which one) and then installed Linux on it from a live CD. The problem: My OS X partition is corrupt, and I need to reinstall Snow Leopard. Since I installed rEFIt from within OS X, I'm concerned that wiping the OS X partition will prevent me from booting into my Linux partition. How can I do this without losing access to my Linux partition? Is it possible to install Snow Leopard on the partition I reserved for it, or will it automatically overwrite the entire drive? And if I do the fresh OS X install and then install rEFIt again, will it automatically recognize my Linux partition? Thanks for any tips! Specs: MacBook Pro 5,5 (Mid-2009); Snow Leopard 10.6.7/64-bit Sabayon Linux, 2.6.36 kernel EDIT/UPDATE: Thanks, but the situation has taken a more complicated turn: I tried to reinstall Snow Leopard from the DVD, but it refused to install onto my Mac partition, claiming: "The disk cannot be used to start up your computer." Disk Utility wouldn't let me resize the partition or create a new one, and it doesn't see my Linux partition. It only displays the two partitions "Macintosh HD" and Linux Swap. I can, however, see all the partitions from Linux. This is the partition table as shown in Gparted: And the output of "fdisk -l" is: WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 1 409639 204819+ ee GPT /dev/sda2 409640 349590464 174590412+ af HFS / HFS+ /dev/sda3 483122745 488392064 2634660 82 Linux swap / Solaris /dev/sda4 * 349590465 483122744 66766140 83 Linux Partition table entries are not in disk order I wonder if this is because I originally partitioned my disk with Gparted instead of OS X's Disk Utility (at this point, I don't recall whether I used Gparted or Disk Utility). In any case, it doesn't seem safe to do any reformatting with Disk Utility now, as I'm afraid it will wipe sda2 ("Macintosh HD") as well as sda4 (my Linux partition). So... I'm hoping to find a solution that doesn't involve wiping my entire hard disk. Would it be safe/possible to use Gparted to erase sda2 ("Macintosh HD") and then use the Snow Leopard DVD to install OS X onto [I]just[/I] sda2 without touching the other partitions? Thanks for any insight!
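    Since the disk is GPT and fdisk only shows the protective/hybrid MBR view (as its own warning says), it may help to inspect the real GPT before deciding what to erase; a sketch, assuming gdisk (from the gptfdisk package) is available on the Linux side:
        sudo gdisk -l /dev/sda
        sudo parted /dev/sda unit s print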

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
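    A sketch of one commonly used ordering for shrinking this stack (back everything up first; the 90G intermediate size is just a safety margin below the 100GiB target, and LUKS1 keeps its header at the start of the device, which is what makes shrinking from the end workable):
        umount /srv/backup
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup 90G     # shrink the fs below the final size
        lvreduce -L 100G /dev/moab/backup    # shrink the LV underneath the LUKS container
        cryptsetup resize backup             # let the mapping pick up the new, smaller size
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup         # grow the fs to fill the 100GiB mapping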

    Read the article

  • How to create a Windows 7 installation USB from Linux (to install Windows 7)? - Help needed; is there a better method?

    - by Abel Coto
    I have been reading some web pages and posts here and in other forums about how to create Windows 7 installation USB media (to install Windows 7 from a USB stick) from Linux. I asked on TechNet about this, and they gave me the general idea of how to do it. I am personally not very familiar with Linux, but basically all you need to do, whichever way you do it, is the following: format a USB flash drive (either FAT32 or NTFS), create a partition large enough to host the Windows installation (give or take 3 GB for 64-bit, around 2.5 GB for 32-bit), and mark that partition as active/bootable. Since this can be done with Windows, but just as well with a tool like GParted, you should be able to do the same in Debian. Once you have created that partition, mount the ISO that you downloaded and copy all files, starting from the root, into the root of the USB flash drive. That's all there is to it.
    There is a method I found in various places that is almost the same as what the TechNet person described, but it includes one step that I don't know whether it is really necessary or not (and dd does not always work). Basically, the missing step is to write a proper boot sector to the USB stick, which can be done from Linux with ms-sys. This works with the Win7 retail version. Here is the complete rundown again:
        Install ms-sys.
        Check what device your USB media is assigned - here we will assume it is /dev/sdb.
        Delete all partitions, create a new one taking up all the space, set its type to NTFS, and set it bootable: # cfdisk /dev/sdb
        Create the NTFS filesystem: # mkfs.ntfs -f /dev/sdb1
        Mount the ISO and the USB media: # mount -o loop win7.iso /mnt/iso  and  # mount /dev/sdb1 /mnt/usb
        Copy over all files: # cp -r /mnt/iso/* /mnt/usb/
        Write the Windows 7 MBR to the USB stick: # ms-sys -7 /dev/sdb
    ...and you're done. Shouldn't the USB work without the last step (# ms-sys -7 /dev/sdb), or is writing the boot sector a must to make the USB bootable, rather than only marking the partition as bootable? Would it be better to use rsync instead of cp -r? All these steps should be done as root, I suppose; or, if not, chmod to 664 and chown the directories where the USB and the ISO are mounted, no? But I suppose the easier thing is to copy the data as root, and that this will not affect the data. Has anyone tried this method, or something similar like copying the ISO with dd?
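    On the rsync question, a minimal sketch of the copy step (the trailing slashes matter: they copy the contents of the ISO into the root of the stick rather than into a subdirectory):
        rsync -rv --progress /mnt/iso/ /mnt/usb/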

    Read the article

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using sudo smartctl /dev/sda -d megaraid,0 -a And I see that SMART is available and enabled on all the drives. I have tried to run self tests using sudo smartctl /dev/sda -d megaraid,0 -t short and sudo smartctl /dev/sda -d megaraid,0 -t long I have also tried it on all of the drives 0-5. No matter what I try, when I run: sudo smartctl /dev/sda -d megaraid,0 -l selftest I always get the same result, which seems to always report that I have never run a self test. /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat' ===START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 No self-tests have been logged. [To run self-tests, use: smartctl -t] From what I read, I should have no problem running the short and long self tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i raid array who could lend some insight into what is causing the problem? (smartmontools release 5.40 dated 2009-12-09 at 21:00:32 UTC)
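    One way to see whether a drive actually accepted the test request, independent of the (empty) self-test log, is the self-test execution status in the capabilities output; a diagnostic sketch for the first drive:
        sudo smartctl /dev/sda -d megaraid,0 -t short
        sudo smartctl /dev/sda -d megaraid,0 -c | grep -A 2 "Self-test execution status"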

    Read the article

  • Why do I get xfs_freeze "Operation not supported" error with ec2-consistent-snapshot? Debian Squeeze w/ext4 filesystem

    - by Michael Endsley
    I'm running the following command: [root@somehost ~]# ec2-consistent-snapshot --aws-credentials-file '/some/dir/file' --mysql --mysql-socket '/var/run/mysqld/mysql.sock' --mysql-username 'backup' --mysql-password 'password' --freeze-filesystem '/dev/xvda1' vol-xxxxxx It returns this error: xfs_freeze: cannot freeze filesystem at /dev/xvda1: Operation not supported ec2-consistent-snapshot: ERROR: xfs_freeze -f /dev/xvda1: failed(256) snap-eeb66393 xfs_freeze: cannot unfreeze filesystem mounted at /dev/xvda1: Invalid argument ec2-consistent-snapshot: ERROR: xfs_freeze -u /dev/xvda1: failed(256) This is being run on Debian Squeeze with the ext4 Linux filesystem. Can anyone explain this error to me, or what might be its cause? When googling, I found information about it needing to be executed with sudo, but I'm performing the entire operation as root. I also found some posts about trying to run it after a CentOS upgrade using yum, but the situation appeared dissimilar. It's difficult to find things referring to this situation exactly. xfs_freeze is available for use on the filesystem. Is it possible that the filesystem, despite being ext4, somehow doesn't support freezing? Sorry if I've missed some bit of StackExchange etiquette with this post -- it's my first venture here!
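    xfs_freeze expects the mount point of a mounted filesystem rather than a block device node, so one thing worth testing by hand (a sketch; it does not change whether the kernel on this box supports freezing ext4) is:
        sudo xfs_freeze -f /    # freeze the root filesystem by mount point
        sudo xfs_freeze -u /    # and thaw it again immediately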

    Read the article

  • How do I USB tether my Cyanogen-modded G1's internet connection to my Toshiba Tecra 8000 running Xubuntu?

    - by atticus
    I have usb-tethering enabled in my phone. It works fine with Vista. When I plug my phone into my Tecra 8000 laptop running Xubuntu, dmesg shows: "usb 1-1: new full speed USB device using uhci_hcd and address 8". I see that the OS has detected it as a storage device, but I can't get it to function correctly as a network device. /dev/us* shows no usb0, but it does show /dev/usbdev1.1_ep00 /dev/usbdev1.1_ep81 /dev/usbdev1.8_ep00 ... usbdev1.8_ep83. I could just use the wireless tether app for android, but I can't get my Netgear wg511 v2 (made in China) wireless card to work in this laptop either. But that's another post for later.
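    Android USB tethering of that era normally shows up as an RNDIS network interface rather than a storage device, so a hedged diagnostic sketch is to check whether the kernel can bind the right driver and bring up a usb0 interface:
        lsusb
        sudo modprobe rndis_host
        dmesg | tail -n 20
        ifconfig -a    # look for a new usb0 interface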

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've setup a DRBD test and measured throughput of disk and network without DRBD: dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s / is a logical volume on the disk I'm testing with, mounted without DRBD iperf: [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s which is slower than I would expect. Then, once I format the device with ext4, I get dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct 536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s This doesn't seem right. There must be some other factor playing into this that I'm not aware of. global_common.conf global { usage-count yes; } common { protocol C; } syncer { al-extents 1801; rate 33M; } data_mirror.res resource data_mirror { device /dev/drbd1; disk /dev/sdb1; meta-disk internal; on cluster1 { address 192.168.33.10:7789; } on cluster2 { address 192.168.33.12:7789; } } For the hardware I have two identical machines: 6 GB RAM Quad core AMD Phenom 3.2Ghz Motherboard SATA controller 7200 RPM 64MB cache 1TB WD drive The network is 1Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference? Edited I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got: avg ~450Mbits writing to ext4 avg ~800Mbits writing to raw device It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device so there's a bottleneck that is not the network.
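    One factor often discussed for exactly this raw-device-fast/filesystem-slow pattern is barrier and flush handling: the ext4 journal issues flushes that DRBD honours across the link. For benchmarking only (it is unsafe in production without battery-backed write cache), DRBD 8.3 allows disabling them; a sketch of the disk section, not a recommendation:
        resource data_mirror {
          disk {
            no-disk-barrier;
            no-disk-flushes;
          }
          ...
        }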

    Read the article

  • Can't manage iPod from linux anymore

    - by kemp
    I used to be able to see and manage my iPod with different softwares: Amarok, Rhythmbox, GTKPod. The device is a nano 1st generation 4gb. Currently it mounts regularly and can be accessed from the file system, but I get this in dmesg: [ 1547.617891] scsi 11:0:0:0: Direct-Access Apple iPod 1.62 PQ: 0 ANSI: 0 [ 1547.619103] sd 11:0:0:0: Attached scsi generic sg2 type 0 [ 1547.620478] sd 11:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 [ 1547.620494] sd 11:0:0:0: [sdb] 7999487 512-byte hardware sectors: (4.09 GB/3.81 GiB) [ 1547.621718] sd 11:0:0:0: [sdb] Write Protect is off [ 1547.621726] sd 11:0:0:0: [sdb] Mode Sense: 68 00 00 08 [ 1547.621732] sd 11:0:0:0: [sdb] Assuming drive cache: write through [ 1547.623591] sd 11:0:0:0: [sdb] Adjusting the sector count from its reported value: 7999488 [ 1547.624993] sd 11:0:0:0: [sdb] Assuming drive cache: write through [ 1547.625003] sdb: sdb1 sdb2 [ 1547.629686] sd 11:0:0:0: [sdb] Attached SCSI removable disk [ 1548.084026] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ 1548.369502] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ 1548.504358] FAT: invalid media value (0x2f) [ 1548.504363] VFS: Can't find a valid FAT filesystem on dev sdb1. [ 1548.945173] FAT: utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ 1548.945179] FAT: invalid media value (0x2f) [ 1548.945182] VFS: Can't find a valid FAT filesystem on dev sdb1. [ 1610.092886] usb 2-6: USB disconnect, address 9 The only application that can access it (partially) is Rhythmbox. I say partially because I can transfer files to the iPod but can't remove or modify them. Also one transfer didn't finish and only 9 out of 16 songs were delivered to the device. All other softwares I tried (GTKPod, Amarok, Songbird) don't even detect it. What can I do to troubleshoot this? EDIT: # fdisk -l /dev/sdb Disk /dev/sdb: 4095 MB, 4095737344 bytes 241 heads, 62 sectors/track, 535 cylinders Units = cylinders of 14942 * 512 = 7650304 bytes Disk identifier: 0x20202020 Device Boot Start End Blocks Id System /dev/sdb1 1 11 80293+ 0 Empty Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 1, 1) logical=(0, 1, 2) Partition 1 has different physical/logical endings: phys=(9, 254, 63) logical=(10, 181, 8) Partition 1 does not end on cylinder boundary. /dev/sdb2 11 536 3919415+ b W95 FAT32 Partition 2 has different physical/logical beginnings (non-Linux?): phys=(10, 0, 7) logical=(10, 181, 15) Partition 2 has different physical/logical endings: phys=(497, 240, 62) logical=(535, 88, 61) EDIT2: The "before" state is hard to tell, it was a lot of updates ago. Haven't been using my iPod for a while so I can't say when exactly it stopped working. I'm sure Amarok was still at version 1.X but can't remember when it was. My current system is debian testing fully updated. NOTE: just noticed that if I mount the device manually instead of letting nautilus automount it, I can see it again on GTKPod but still not on Banshee AND it's vanished from Rhythmbox...
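    Since the applications only see the iPod when it is mounted by hand, a sketch of mounting the FAT32 data partition (sdb2 in the fdisk output) at a fixed location that GTKPod or Banshee can then be pointed at:
        sudo mkdir -p /media/ipod
        sudo mount -t vfat -o rw,uid=$(id -u),utf8 /dev/sdb2 /media/ipod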

    Read the article

  • Limiting bandwidth on internal interface on Linux gateway

    - by Jack Scott
    I am responsible for a Linux-based (it runs Debian) branch office router that takes a single high-speed Internet connection (eth2) and turns it into about 20 internal networks, each with a separate subnet (192.168.1.0/24 to 192.168.20.0/24) and a separate VLAN (eth0.101 to eth0.120). I am trying to restrict bandwidth on one of the internal subnets that is consistently chewing up more bandwidth than it should. What is the best way to do this? My first try at this was with wondershaper, which I heard about on SuperUser here. Unfortunately, this is useful for exactly the opposite situation that I have... it's useful on the client side, not on the Internet side. My second attempt was using the script found at http://www.topwebhosts.org/tools/traffic-control.php, which I modified so the active part is:
        tc qdisc add dev eth0.113 root handle 13: htb default 100
        tc class add dev eth0.113 parent 13: classid 13:1 htb rate 3mbps
        tc class add dev eth0.113 parent 13: classid 13:2 htb rate 3mbps
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip dst 192.168.13.0/24 flowid 13:1
        tc filter add dev eth0.113 protocol ip parent 13:0 prio 1 u32 match ip src 192.168.13.0/24 flowid 13:2
    What I want this to do is restrict the bandwidth on VLAN 113 (subnet 192.168.13.0/24) to 3mbit up and 3mbit down. Unfortunately, it seems to have no effect at all! I'm very inexperienced with the tc command, so any help getting this working would be appreciated.
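    Two hedged observations rather than a definitive fix: in tc's unit syntax "mbps" means megabytes per second ("mbit" is megabits), and an htb tree attached to eth0.113 only shapes traffic the router sends into that VLAN -- traffic coming from the subnet arrives there as ingress and has to be policed separately or shaped on the Internet-facing interface. A sketch of ingress policing for the upload direction:
        tc qdisc add dev eth0.113 handle ffff: ingress
        tc filter add dev eth0.113 parent ffff: protocol ip prio 1 u32 \
            match ip src 192.168.13.0/24 police rate 3mbit burst 100k drop flowid :1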

    Read the article

  • 6to4 tunnel: cannot ping6 to ipv6.google.com?

    - by quanta
    Hi folks, Follow the Setup of 6to4 tunnel guide, I want to test ipv6 connectivity, but I cannot ping6 to ipv6.google.com. Details below: # traceroute 192.88.99.1 traceroute to 192.88.99.1 (192.88.99.1), 30 hops max, 40 byte packets 1 static.vdc.vn (123.30.53.1) 1.514 ms 2.622 ms 3.760 ms 2 static.vdc.vn (123.30.63.117) 0.608 ms 0.696 ms 0.735 ms 3 static.vdc.vn (123.30.63.101) 0.474 ms 0.477 ms 0.506 ms 4 203.162.231.214 (203.162.231.214) 11.327 ms 11.320 ms 11.312 ms 5 static.vdc.vn (222.255.165.34) 11.546 ms 11.684 ms 11.768 ms 6 203.162.217.26 (203.162.217.26) 42.460 ms 42.424 ms 42.401 ms 7 218.188.104.173 (218.188.104.173) 42.489 ms 42.462 ms 42.415 ms 8 218.189.5.10 (218.189.5.10) 42.613 ms 218.189.5.42 (218.189.5.42) 42.273 ms 42.300 ms 9 d1-26-224-143-118-on-nets.com (118.143.224.26) 205.752 ms d1-18-224-143-118-on-nets.com (118.143.224.18) 207.130 ms d1-14-224-143-118-on-nets.com (118.143.224.14) 206.970 ms 10 218.189.5.150 (218.189.5.150) 207.456 ms 206.349 ms 206.941 ms 11 * * * 12 10gigabitethernet2-1.core1.lax1.he.net (72.52.92.121) 214.087 ms 214.426 ms 214.818 ms 13 192.88.99.1 (192.88.99.1) 207.215 ms 199.270 ms 209.391 ms # ifconfig tun6to4 tun6to4 Link encap:IPv6-in-IPv4 inet6 addr: 2002:x:x::/16 Scope:Global inet6 addr: ::x.x.x.x/128 Scope:Compat UP RUNNING NOARP MTU:1480 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:11 dropped:0 overruns:0 carrier:11 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) # iptunnel sit0: ipv6/ip remote any local any ttl 64 nopmtudisc tun6to4: ipv6/ip remote any local x.x.x.x ttl 64 # ip -6 route show ::/96 via :: dev tun6to4 metric 256 expires 21332777sec mtu 1480 advmss 1420 hoplimit 4294967295 2002::/16 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295 fe80::/64 dev eth0 metric 256 expires 15674592sec mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev eth1 metric 256 expires 15674597sec mtu 1500 advmss 1440 hoplimit 4294967295 fe80::/64 dev tun6to4 metric 256 expires 21332794sec mtu 1480 advmss 1420 hoplimit 4294967295 default via ::192.88.99.1 dev tun6to4 metric 1 expires 21332861sec mtu 1480 advmss 1420 hoplimit 4294967295 # ping6 -n -c 4 ipv6.google.com PING ipv6.google.com(2404:6800:8005::68) 56 data bytes From 2002:x:x:: icmp_seq=0 Destination unreachable: Address unreachable From 2002:x:x:: icmp_seq=1 Destination unreachable: Address unreachable From 2002:x:x:: icmp_seq=2 Destination unreachable: Address unreachable From 2002:x:x:: icmp_seq=3 Destination unreachable: Address unreachable --- ipv6.google.com ping statistics --- 4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms What is my problem? Thanks,
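    A hedged next step is to test whether the tunnel can reach the 6to4 relay at all, independent of Google; 2002:c058:6301::1 is simply the anycast relay 192.88.99.1 written as a 6to4 address:
        ping6 -n -c 4 2002:c058:6301::1
        traceroute6 -n ipv6.google.com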

    Read the article

  • perl hide system output

    - by Chris
    Using Perl 5.8.8 on Linux, I need the output of a Perl 'system' command to be hidden. The command in my code is: system("wget", "$url", "-Omy_folder/$date-$target.html", "--user-agent=$useragent"); I've tried adding "> /dev/null 2>&1" in different places in the system call, like this: system("wget", "$url", "-Omy_folder/$date-$target.html", "--user-agent=$useragent", "> /dev/null 2>&1"); Can anyone tell me where the redirection to /dev/null should go?
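    Two hedged sketches: the list form of system() bypasses the shell entirely, so a redirection passed as an extra argument just becomes a literal argument to wget. Either keep the list form and use wget's own --quiet flag, or hand the whole command to the shell as a single string so the redirection is interpreted:
        system("wget", "--quiet", $url, "-Omy_folder/$date-$target.html", "--user-agent=$useragent");

        system("wget '$url' -O 'my_folder/$date-$target.html' --user-agent='$useragent' >/dev/null 2>&1");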

    Read the article

< Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >