Search Results

Search found 4170 results on 167 pages for 'dnfs 11g mount'.

Page 149/167 | < Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >

  • NFSv3 + ACL: mask is gone on clients

    - by Jorge Suárez de Lis
    I'm sharing an NFS folder among a user group. The default umask on the clients is 0700, which is a problem because newly created files won't be readable or writable by other users in the group. So I'm using ACLs to force 0770 permissions on the shared folder. This works fine on the server, but not on the clients.

        server # getfacl /export/proyectos
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::r-x

        server # getfacl /export/proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: export/proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:mask::rwx
        default:other::---

    As you can see, the default mask ACLs (and also a specific mask on the second directory) are applied on the server. I mount the whole share on the client:

        172.16.54.56:/export/proyectos on /proyectos type nfs (rw,noatime,rsize=131072,wsize=131072,acregmin=10,acl,nfsvers=3,addr=172.16.54.56)

    But the mask and default:mask ACLs are gone:

        client $ getfacl /proyectos/
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/
        # owner: root
        # group: root
        user::rwx
        group::rwx
        other::r-x
        default:user::rwx
        default:group::rwx
        default:other::r-x

        client $ getfacl /proyectos/innovacion
        getfacl: Removing leading '/' from absolute path names
        # file: proyectos/innovacion
        # owner: root
        # group: proyecto-innovacion
        # flags: ss-
        user::rwx
        group::rwx
        other::---
        default:user::rwx
        default:group::rwx
        default:other::---

    The mask and default:mask entries, the only ones I've set, are missing on the client, so the proposed solution to enforce the umask won't work for me. Why is this happening?
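    A quick way to check whether ACLs are actually crossing the wire is to confirm the server registers the NFSv3 ACL side-band protocol (RPC program 100227) and to re-read the ACL from the client after setting the mask explicitly. This is only a diagnostic sketch, reusing the server address from the mount output above:

        # On the client: is the NFSACL side-band service registered on the server?
        rpcinfo -p 172.16.54.56 | grep 100227

        # On the server: (re)apply the access and default masks explicitly...
        setfacl -m mask:rwx -m default:mask:rwx /export/proyectos/innovacion

        # ...then on the client, see whether the mask survives the round trip:
        getfacl /proyectos/innovacion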

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    I currently have 2 cPanel web servers that are small 1RU dual-CPU quad-core Xeons. They have plenty of resources for processing and handling web requests, never exceeding 10% CPU usage, and they have plenty of RAM. The problem is that they both have RAID 1 160 GB SAS hard disk drives in them that are 75% full, and growing by the day. I didn't think disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12 TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:
    - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went? :)
    - Any recommendations on networking? Is gigabit Ethernet enough? Is TCP/IP going to be a noticeable performance problem? Has anyone used a TOE card?
    - Has anyone benchmarked or had any performance issues with SAN over NAS?
    Any help greatly appreciated. Scott
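    For reference, mounting /home from a central filer over NFS is typically just an fstab entry on each web server; the sketch below uses hypothetical host and export names, and cPanel quota behavior over NFS would still need testing:

        # /etc/fstab on each web server (storage01 and the export path are placeholders):
        # storage01:/export/home  /home  nfs  rw,hard,intr,noatime,rsize=32768,wsize=32768  0 0

        mount /home
        df -h /home    # confirm the share is mounted before restarting cPanel services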

    Read the article

  • Copying large files from USB devices to the internal hard drive fails on Mac OS

    - by John M. P. Knox
    I have a second-generation 13" MacBook Air running Mac OS X 10.6.6 with a 2.13 GHz processor, 4 GB of RAM, and a 256 GB SSD. I often get failures when I attempt to copy a large file or large collection of files from an external USB drive (typically a "FireWire generation" Drobo) to the internal drive. The failure behaves almost exactly as if I had pulled the USB cable from the computer in mid-transfer: I get a warning that I have removed the hard disk improperly. After this event, the drive no longer appears mounted in the Finder, and I have to unplug and reinsert the USB cable to mount the drive again. I have also seen a similar problem when using Aperture 3 to import a large number of photos and videos from a USB CompactFlash card reader; the import will fail and I have to unplug the card reader and import the missing items. Oddly, reversing the direction of the copy seems pretty reliable. I've never had a problem copying a large file to a USB device, which means I have quite a few large files stranded on my Drobo.

        Model Identifier: MacBookAir3,2
        Boot ROM Version: MBA31.0061.B01

    I have seen a similar issue reported on Apple's website: http://discussions.apple.com/thread.jspa?threadID=2648590&tstart=0 The only suggested resolutions there seem to be switching to another form of connectivity (e.g. FireWire, which does not exist on the MacBook Air), downgrading to Mac OS 10.6.4, or reverting the USB kernel extensions to the 10.6.4 versions: http://discussions.apple.com/message.jspa?messageID=12566073#12582956 I'm not too keen on the idea of downgrading kernel extensions. Does anyone know of a hardware revision without this issue that I can trade up to? Are there any other potential solutions out there?
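    As a stopgap while the USB problem is unresolved, a resumable copier avoids losing a whole transfer to one disconnect; re-running the same command picks up where the last attempt failed. A workaround sketch (the volume and folder names are placeholders):

        # -a preserves metadata; --partial keeps half-copied files so a re-run can resume
        rsync -av --partial --progress /Volumes/Drobo/BigFiles/ ~/Rescued/BigFiles/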

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31 GiB disk that it wanted to install to, I was a bit confused, since I had freed up 30 GB on my primary disk for it, but I clicked continue. After installation, I tried to boot back into Windows and it brought up an Intel RAID disk utility saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.), I just couldn't access the drive, which was apparently using the SSD as a cache. Since then I've been stuck. I tried setting the 'raid' flag on the disk from GParted; still nothing. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as RAW type, and I can't access anything. I can, however, mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install. How do I go about solving this issue? Precisely, I would like to know how to boot into the drive again. Thanks!

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer faced the challenge of decentralized, manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%: from 20 to 30 days down to just 3 to 5 days, on average.

    Company Overview
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.

    Business Challenges
    TXI faced the challenge of decentralized, manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were:
    - Increase the efficiency of core business processes, such as invoice processing, to support the organization's desire to maintain its role as the Southwest's leader in delivering high-quality, low-cost products to the construction industry
    - Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance
    - Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities

    Solution Deployed
    TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.

    Business Results
    - Significantly lowered resource needs for payables processing through centralization and improved efficiency
    - Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts
    - Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%: from 20 to 30 days down to just 3 to 5 days, on average
    - Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals

    "Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%, a very important efficiency gain." Terry Marshall, Vice President of Information Services, Texas Industries, Inc.

    Additional Information
    - TXI Customer Snapshot
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Forms Recognition

    Read the article

  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at Super User, as my other sources (mainly Google ;-)) didn't prove very helpful... Basically, I would like to create a VHD image of a physical disk to be archived, accessed, and maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve:
    - I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...)
    - The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like a solution that doesn't require booting that Windows XP install (i.e. it keeps the disk read-only)
    - Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network)
    However, all the solutions I've come across either make use of disk2vhd or use the dd command under Linux, which does a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool or program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
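    For what it's worth, qemu-img can write VHD images directly (its "vpc" format), and the dynamic subformat only stores allocated blocks, which sidesteps the empty-space problem with dd. A sketch, assuming the source disk is /dev/sda and qemu-utils is available on the live system:

        # Image the raw disk straight into a dynamically-allocated VHD:
        qemu-img convert -p -f raw -O vpc -o subformat=dynamic /dev/sda /mnt/target/disk.vhd

        # Or convert an existing dd image after the fact:
        qemu-img convert -f raw -O vpc -o subformat=dynamic disk.img disk.vhd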

    Read the article

  • ADF Enterprise Application Development - Made Simple (Book Review)

    - by Frank Nimphius
    Sten E. Vesterli wrote the "Oracle ADF Enterprise Application Development - Made Simple" book, published by Packt Publishing in 2011: http://www.packtpub.com/oracle-adf-enterprise-application-development/book A common question on OTN, but also when talking to clients or customers, is where and how to start your ADF application development. Especially when the current programming background is not in Java but 4GL or PL/SQL, developers often look for answers to the following questions:
    - How long does it take to learn Oracle ADF?
    - How long does it take to replace a Forms application with ADF?
    - How many developers do I need?
    - Do I need to know Java to use ADF, and if yes, how well do I need to know it?
    - How do I structure my programming files, organizing them in JDeveloper workspaces, projects, and libraries?
    - What are best practices for naming Java packages, and how do I avoid naming conflicts in ADF in general?
    - How many Application Modules do I need or should I create?
    - How do I test applications?
    Sten Vesterli answers all of the above questions and more in his book, which makes it a great addition to the 3 existing Oracle ADF books. In order of complexity (which is also the order in which reading the available Oracle ADF books makes sense), in my opinion, Sten's book should come second, though it is also useful to those who are already more advanced with Oracle ADF. So if you are absolutely new to Oracle ADF, the order of books to read to get you up to an expert level should be:
    1. Grant Ronald; "Quick Start Guide to Oracle Fusion Development: Oracle JDeveloper and Oracle ADF" (McGraw-Hill 2010)
    2. Sten Vesterli; "Oracle ADF Enterprise Application Development - Made Simple" (Packt Publishing 2011)
    3. Duncan Mills, Peter Koletzke; "Oracle JDeveloper 11g Handbook: A Guide to Fusion Web Development" (McGraw-Hill 2009)
    4. Frank Nimphius, Lynn Munsinger; "Oracle Fusion Developer Guide: Building Rich Internet Applications with Oracle ADF Business Components and Oracle ADF Faces" (McGraw-Hill 2010)
    If you are not new to Oracle ADF and Oracle JDeveloper, buy Sten Vesterli's book anyway. It is worth it, and you want to have it on your bookshelf. See the table of contents below to get a better idea of what this book covers:
    - Chapter 1: The ADF Proof of Concept
    - Chapter 2: Estimating the Effort
    - Chapter 3: Getting Organized
    - Chapter 4: Productive Teamwork
    - Chapter 5: Prepare to Build
    - Chapter 6: Building the Enterprise Application
    - Chapter 7: Testing your Application
    - Chapter 8: Look and Feel
    - Chapter 9: Customizing the Functionality
    - Chapter 10: Securing your ADF Application
    - Chapter 11: Package and Deliver
    - Appendix: Internationalization
    The book is written with a lot of good humor, which makes the read very enjoyable (from a geek's perspective, of course). My favorite quote, just in case you are interested, is from page 97, where Sten talks about getting organized: "Stop sending e-mails to your team. Just stop it. E-mail is so last century..." So true, so true! This quote's runner-up is the "boss key" on page 128, where Sten talks about productivity and how Oracle Team Productivity Center (TPC) can help you with it. Quotes like these stick in your brain and make sure you never forget. Go for it!

    Read the article

  • Oracle OpenWorld Update: Oracle GoldenGate for High Availability

    - by Doug Reid
    One of our primary themes this year for the Oracle OpenWorld sessions featuring Oracle GoldenGate is High Availability. This is a pretty wide theme, but the focus will be on ways of maximizing uptime for critical systems during planned and unplanned events. We have a number of very informative sessions dedicated to exploring this theme in detail, from deep product implementation strategies up to lessons learned by our customers when using Oracle GoldenGate to meet strict SLAs. We kick this track off with our Customer Panel on Zero Downtime Operations on Monday, which I overviewed in my last posting. This is followed by Comcast, who will be hosting a session at 1:45 PM in Moscone West 3014. Their session will discuss using Oracle GoldenGate to reduce downtime during a database upgrade. Here's an overview:

    CON8571 - Oracle Database Upgrade with Oracle GoldenGate: Best Practices from Comcast
    Does your business demand high availability? In this session, Comcast, among the largest telecom firms in the world, shares tips on how to achieve zero downtime while upgrading to Oracle Database 11g Release 2, using a combination of Oracle technologies: Oracle Real Application Clusters (Oracle RAC), Oracle Database's Grid Infrastructure and Oracle Recovery Manager (Oracle RMAN) features, Oracle GoldenGate, and Oracle Active Data Guard. This successful upgrade took place on a mission-critical system that handles more than 60 million business requests and service calls a day. You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to maximize performance and availability of its Oracle technologies.

    On Tuesday, Joydip Kundu (Director of Software Development) will be presenting "Oracle GoldenGate and Oracle Data Guard: Working Together Seamlessly" at 10:15 AM in Moscone South 3005. This session focuses on how both modes of Oracle GoldenGate extract (Classic and Integrated Capture) can be used with Oracle Data Guard for disaster recovery purposes or to offload extract processing. That afternoon at 1:15 PM, Comcast takes the stage again to discuss firsthand lessons learned implementing Oracle GoldenGate in a heterogeneous, highly available environment. Here's a rundown of their session:

    CON8750 - High-Volume OLTP with Oracle GoldenGate: Best Practices from Comcast
    Does your business demand high availability in a mission-critical environment? In this session, Comcast, one of the largest telecom firms, shares best practices for leveraging Oracle GoldenGate to replicate high-volume online transaction processing data from Tandem NSK SQL/MX to Teradata. Hear critical success factors from Comcast for overall platform and component architectures as well as configuration and tuning techniques. Learn how it met the challenges of replication in a complex heterogeneous environment.
    You'll also hear how Comcast leverages Oracle Advanced Customer Support Services, including an Oracle Solution Support Center, to provide mission-critical support for maximized performance and availability of its Oracle environment.

    The final session on the high availability track will be hosted by Patricia Mcelroy (Distinguished Product Manager) and Stephan Haisly (Principal Member of Technical Staff). Their session (CON8401 - Tuning and Troubleshooting Oracle GoldenGate on Oracle Database) covers techniques for performance tuning and troubleshooting of Oracle GoldenGate on Oracle Database. Using various types of workloads (OLTP, batch, Oracle's PeopleSoft Enterprise), the presentation steps through the process of monitoring and troubleshooting the configuration to maximize performance and replication throughput within and between Oracle clouds. Join us at our sessions or stop by our demo pods in Moscone South and meet the product management and development teams.

    Read the article

  • Drive Mapped On Amazon EC2 Instance Startup Disconnected After Logon

    - by jsn
    I am launching an Amazon EC2 instance (Windows Server 2012), and on startup (through the User Data field) it runs this PowerShell script:

        <powershell>
        NET CONFIG SERVER /AUTODISCONNECT:-1

        # clear all prior connections
        net use * /delete /y > C:\delLog.txt 2>&1

        # mount new drive
        net use R: \\dbHost\share /user:username pass /persistent:yes > C:\useLog.txt 2>&1

        ipconfig /all > C:\ipLog.txt
        </powershell>

    When it launches and I connect to it through RDP, Explorer shows "Disconnected Network Drive (R:)". If I double-click it, an error message displays "R:\ is not accessible. Access is denied." Normally, it would ask me for credentials to reconnect. I need this drive to stay connected for the duration of the instance. delLog.txt contents:

        You have these remote connections:
            T: \\dbHost\share
        Continuing will cancel the connections.
        The command completed successfully.

    useLog.txt contents:

        The command completed successfully.

    ipLog.txt contents are as expected. The net use command works fine by itself; it connects. Anyone have any idea what could be wrong? There is only one account on these machines: Administrator. It is a Samba share to a Linux server on a private network.

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g:
    1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". If the user chooses "Empty Composite", he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen where he or she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template.
    2) Create the inbound service and outbound references and wire them with the BPEL component.
    3) Finally, create the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload.
    This is how most BPEL processes that use adapters are modeled. And if we scrutinize the generated WSDL, we can clearly see that it is one-way, and that makes the BPEL process asynchronous. In other words, the inbound FileAdapter polls for files in the directory, and for every file that it finds there, it translates the content into XML and publishes it to BPEL. But since the BPEL process is asynchronous, the adapter returns immediately after the publish and performs the required post-processing, e.g. deletion or archival. The disadvantage of such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter keeps sending messages to BPEL without waiting for the downstream business processes to complete. This can lead to several issues, including higher memory and CPU usage. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter automatically throttles itself, since the adapter is forced to wait for the downstream process to complete with a <reply> before processing the next file or message. To make the WSDL synchronous, convert the one-way WSDL into a two-way WSDL (i.e. give the operation an output message in addition to its input), then add a <reply> activity to the inbound adapter partner link at the end of your BPEL process. You are done. Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.

    Read the article

  • OSX root user keeps re-enabling itself on reboot

    - by geodave
    Running Snow Leopard. Completely inexplicably, I seem to have enabled the OS X root user by accident. I honestly have no idea how it happened, but if memory serves, I was looking at the login pane (with my two user accounts) when I must have hit something, and suddenly the two accounts were replaced by one that just said "Other..." Clicking the "Other..." account allows me to type a username and password, but neither of the normal two accounts would work. Since I never set a root password, it wouldn't let me in that way either. So I booted into single-user mode and ran these commands:

        /sbin/mount -uw /
        fsck -fy
        launchctl load /System/Library/LaunchDaemons/com.apple.DirectoryServices.plist
        dscl . -passwd /Users/root newpassword

    That let me log in as root. Then I went to System Preferences > Accounts > Login Options, clicked Join, opened Directory Utility, and lastly, in the Edit menu, clicked "Disable Root User". Great, I thought, back to normal. Except after rebooting, I still only have the "Other..." account visible, and the root password I set beforehand doesn't work anymore! I have to reboot into single-user mode and go through the whole process again just to get back into the system (as root). How on Earth did I accidentally enable this? I didn't even know about Directory Utility before now. And most importantly, why the heck would it be re-enabling the root user on boot? Thanks in advance for any help!
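    As an alternative to the Directory Utility checkbox, the root account can also be toggled from the command line once you are in, which is easier to verify across reboots; a sketch, assuming a shell is reachable as root or an admin user:

        # Disable the root account (prompts for an administrator's name and password):
        dsenableroot -d

        # Verify what the directory actually records for root afterwards:
        dscl . -read /Users/root AuthenticationAuthority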

    Read the article

  • Exchange Server 2010: move mailboxes from recoveded and mounted edb to user's mailbox

    - by user36090
    One of our Exchange servers crashed, and I am trying to recover the mailboxes. We had one Exchange 2003 server named "apex" and one Exchange 2010 server named "2008Enterprise". The Exchange 2010 server "2008Enterprise" crashed, so I created a new Exchange 2010 server named "Providence". I ran this command on Providence:

        New-MailboxDatabase -Recovery -Name JBCMail -Server Providence -EdbFilePath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147\Mailbox Database 0579285147.edb" -LogFolderPath "c:\data\Exchange\Mailbox\Mailbox Database 0579285147"

    This command executed and finished without error. I then ran:

        eseutil /p E00

    This command was executed from the directory c:\data\Exchange\Mailbox\Mailbox Database 0579285147. I then mounted JBCMail with the mount command (note: I do not have the full command as I typed it). Inside my Exchange Management Console (EMC) I can view the new mailbox database named JBCMail. The JBCMail database is shown as mounted on the Exchange server named Providence. I can also see the crashed Exchange server "2008Enterprise"; in the EMC, the crashed server's Copy Status under Server Configuration > Mailbox is ServiceDown. From here I need to recover three mailboxes. The mailboxes are on the apex server. How do I move the mailboxes from apex to Providence? How do I restore the mailboxes from the mounted JBCMail database to the users' mailboxes? I do not fully understand how to use the Restore-Mailbox command, because when I use it, it tries to restore the mailbox to the dead apex server:

        Restore-Mailbox -ID 'Jason Young' -RecoveryDatabase JBCMail

    Read the article

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't ssh in to scp the files I need, can't mount a USB thumbdrive (usbfs is unsupported), and can't get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT, and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access; Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install. Is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,
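    Since HTTP works but SSH and Samba don't, it's worth confirming from the VM's own console whether those services even exist and are listening before blaming the network modes; a quick diagnostic sketch (service names assumed for an Ubuntu-based appliance):

        ip addr show               # what address does the guest actually have?
        sudo netstat -tlnp         # anything listening on ports 22, 139, 445?
        sudo service ssh status    # is an SSH daemon installed at all?
        sudo service smbd status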

    Read the article

  • ephemeral vs EBS partitions

    - by hortitude
    I launched an EBS-backed AMI with all the defaults. I noticed that it automatically had an ephemeral disk attached. I was just wondering if there is a good programmatic way to know that this particular device is ephemeral vs. some EBS volume I had decided to attach:

        ubuntu@-----:~$ df -ahT
        Filesystem     Type        Size  Used Avail Use% Mounted on
        /dev/xvda1     ext4        7.9G  867M  6.7G  12% /
        proc           proc           0     0     0    - /proc
        sysfs          sysfs          0     0     0    - /sys
        none           fusectl        0     0     0    - /sys/fs/fuse/connections
        none           debugfs        0     0     0    - /sys/kernel/debug
        none           securityfs     0     0     0    - /sys/kernel/security
        udev           devtmpfs    1.9G   12K  1.9G   1% /dev
        devpts         devpts         0     0     0    - /dev/pts
        tmpfs          tmpfs       751M  172K  750M   1% /run
        none           tmpfs       5.0M     0  5.0M   0% /run/lock
        none           tmpfs       1.9G     0  1.9G   0% /run/shm
        /dev/xvdb      ext3        394G  199M  374G   1% /mnt

        ubuntu@-----:~$ mount
        /dev/xvda1 on / type ext4 (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/xvdb on /mnt type ext3 (rw,_netdev)
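    One programmatic route is the EC2 instance metadata service, which lists the launch-time block-device mapping; ephemeral devices appear under ephemeralN keys, so anything not listed there (other than the root volume) is an attached EBS volume. A sketch, assuming the standard metadata endpoint:

        # List the block-device mappings the instance was launched with:
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

        # 'ephemeral0' names the instance-store device (here, sdb maps to xvdb):
        curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0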

    Read the article

  • Have an Input/output error when connecting to a server via ssh

    - by Shehzad009
    Hello, I seem to be having a problem when connecting to an Ubuntu server via ssh. When I log in, I get this error:

        Could not chdir to home directory /home/username: Input/output error

    It seems like my home folder is corrupt or something. I cannot ls in the /home directory, and I can't cd into my username directory. As root I cannot ls in the /home directory either, or in any directory under it. I also notice that when I save or quit in vim, I get this error at the bottom of the page:

        E138: Cannot write viminfo file /home/root/.viminfo!

    Any ideas? EDIT: this is what happens when I run these commands:

        mount
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        /dev/mapper/RAID1-lvvar on /var type xfs (rw)
        /dev/mapper/RAID5-lvsrv on /srv type xfs (rw)
        /dev/mapper/RAID5-lvhome on /home type xfs (rw)
        /dev/mapper/RAID1-lvtmp on /tmp type reiserfs (rw)

        dmesg | tail
        [1213273.364040] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213274.084081] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213309.364038] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213310.084041] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213345.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213346.084042] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213381.365036] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213382.084047] Filesystem "dm-4": xfs_log_force: error 5 returned.
        [1213417.364039] Filesystem "dm-3": xfs_log_force: error 5 returned.
        [1213418.084063] Filesystem "dm-4": xfs_log_force: error 5 returned.

        fdisk -l /dev/sda
        Cannot open /dev/sda
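    Error 5 is EIO: the kernel is reporting I/O failures on dm-3 and dm-4, and XFS has shut those filesystems down, which matches /home (and likely /srv) going unreadable while fdisk cannot even open /dev/sda. A rough triage sequence, sketched under the assumption that the underlying array is still readable at all:

        ls -l /dev/mapper/            # map dm-3 / dm-4 back to their LV names
        cat /proc/mdstat              # check the software RAID for failed members
        dmesg | grep -i 'sd[a-z]'     # look for the disk errors underneath

        # If the array is intact, unmount and repair the affected XFS volume:
        umount /home
        xfs_repair /dev/mapper/RAID5-lvhome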

    Read the article

  • Windows Share authentication from Active Directory Linux login

    - by Kenny
    I'm using Active Directory to log into RHEL. To do this, I followed the steps outlined here: http://www.markwilson.co.uk/blog/2007/05/using-active-directory-to-authenticate-users-on-a-linux-computer.htm I'd like to be able to read data from Windows servers' shared folders without being prompted for a password. On Windows, I log into an AD domain, and when I access Windows file shares on a server on the LAN (also part of the AD domain), I can just access them with no authentication step. I've used smbclient on Linux to access these shares, but it asks for my password. I would like to be able to script access to the data on the shares, but I can't if there's a password prompt in the way. Well, I could, but it's not how I want to do it. Now, since I'm logged in using my Active Directory username and password, can't I just access the shares without jumping through that extra hoop? I know I can mount the share using something like:

        //192.168.0.5/share /mnt/windows cifs auto,username=steve,password=secret,rw 0 0

    but access will depend on who is logged in... each user logging in should have their own unique AD access privileges. Thanks for reading!
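    Since the box is already joined to AD, one password-free avenue is Kerberos: the ticket obtained at AD login (or via kinit) can authenticate both smbclient and a cifs mount. A sketch; option support depends on the RHEL version and on the cifs.upcall/keyutils plumbing being in place:

        klist                                  # confirm a valid TGT from the AD login
        smbclient -k //server/share -c 'ls'    # -k: use Kerberos instead of a password

        # Kerberos-authenticated mount (requires cifs-utils with cifs.upcall):
        mount -t cifs //server/share /mnt/windows -o sec=krb5,user=steve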

    Read the article

  • Why am I seeing MailSlot Browse messages on unrouted ports of my Linux box?

    - by nmichaels
    I have a Linux box (Debian squeeze) with several NICs. The ones of interest are:
    - eth3 - my main link to the network (DHCP on 10.20.30.0/24)
    - eth0 - the first connection to my test network (static: 192.168.1.2)
    - eth4 - the second connection to my test network (static: 192.168.1.1)
    My routing table looks like this:

        $ sudo route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.20.30.0      *               255.255.255.0   U     0      0        0 eth3
        default         10.20.30.254    0.0.0.0         UG    0      0        0 eth3

    I have the two test-net ports connected to each other with a crossover cable and an instance of Wireshark running on each port. Every once in a while, I'll see a MailSlot Browse packet show up on them. Who could be sending these, and how do I convince them to stop? I do have Samba running on the machine (for a CIFS mount) but don't see why it would be sending packets out unrouted ports. I had a Windows VM running in VMware and thought that might be causing it, but it still happens without it. What I want is totally silent interfaces so I can run some tests with Scapy over them.
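    If the culprit is the local Samba (by default, nmbd broadcasts browser-election datagrams on every interface, routed or not), binding it to the main NIC should silence the test ports; a sketch, assuming Debian's stock init script:

        # Add to the [global] section of /etc/samba/smb.conf:
        #   interfaces = lo eth3
        #   bind interfaces only = yes

        testparm                        # sanity-check the edited config
        sudo /etc/init.d/samba restart  # Debian squeeze restarts smbd and nmbd together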

    Read the article

  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently utilizing GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server itself. I then checked the normal mount (the directory that is exported on the NFS server) and received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error; however, multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but the shutdown then appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was stuck trying to unmount the GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would be greatly appreciated. Everything appears to be up and functional now.
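    For a fibre-attached LUN on RHEL 5, the evidence is usually split between the kernel ring buffer, syslog, and the multipath state; a post-mortem checklist sketch:

        dmesg | egrep -i 'scsi|fc|rport|i/o error'    # HBA and target events since boot
        grep -iE 'multipath|dm-' /var/log/messages    # path failures, with timestamps
        multipath -ll                                 # current path and LUN health
        cat /sys/class/fc_host/host*/port_state       # FC link state per HBA port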

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables rules when I performed an iptables -F and my SSH pipe broke. This is the last output of my session:

        root@alfapaints:~# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere     state NEW,ESTABLISHED tcp dpt:2222
        ACCEPT     tcp  --  li465-68.members.linode.com  anywhere     state NEW,ESTABLISHED tcp dpt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere     tcp dpt:9200 state NEW,ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere     tcp dpt:http state NEW,ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere     udp spt:domain

        Chain FORWARD (policy DROP)
        target     prot opt source                       destination

        Chain OUTPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere     state ESTABLISHED tcp spt:2222
        ACCEPT     tcp  --  anywhere                     anywhere     state ESTABLISHED tcp spt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere     tcp spt:9200 state ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere     tcp spt:http state ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere     udp dpt:domain

        root@alfapaints:~# iptables -F
        Write failed: Broken pipe

    I tested my connection just before, and I was able to connect with ssh. Now I did an nmap scan and not a single port is open anymore. I know my VPS is running on VMware ESXi. Could a reboot help? Or if not, could I attach and mount the disk on another VM to save the data? Does anybody have some advice? And maybe an explanation of what happened, or what could have caused my pipe to break? P.S. I didn't save my rules to the config directories of iptables, but applied them from a file I stored in ~/rules.config, like this: iptables-restore < rules.config So would a reboot probably help? Thanks a lot in advance.
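    The lockout is expected behavior: iptables -F removes the ACCEPT rules but leaves the chain policies at DROP, so every packet, including the established SSH session, is silently dropped. Since the rules were applied from ~/rules.config and never persisted, a reboot should come back with empty chains and the default ACCEPT policies. The recovery from the ESXi console, and the safe flush order for next time, sketched:

        # From the VM console (out of band), restore connectivity first:
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -F

        # Then re-apply the saved ruleset:
        iptables-restore < ~/rules.config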

    Read the article

  • DMG image containing Mac OS X 10.6.5 update not recognized

    - by Lawliet
    Hello, I want to update my Mac OS X 10.6.3 using the manual package update (not Software Update), and I have been having this problem for two days now. First I downloaded the Mac OS X 10.6.5 Combo update, and on first opening it popped up a message saying "Unable to mount: disk image not recognized". I thought the image had somehow become corrupt during the download, so I downloaded it again. Same result. And I downloaded it again. And again... No use; it still states that the .dmg is not recognized. Then I tried to update my system from 10.6.3 to 10.6.4 using the same method (package installer). It worked. And of course I downloaded the 10.6.5 update (not the Combo package, but the 10.6.4-to-10.6.5 update package)... guess what. Not recognized. I don't understand how this is possible. Has anyone else had this problem? Please don't advise me to use Software Update, as I need to do this on multiple Macs. Thanks in advance.
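    To separate download corruption from a mounting problem, the image's embedded checksums can be verified and the attach attempted from the command line, which reports far more detail than the Finder dialog; a quick sketch (the filename stands in for whatever the downloaded update is called):

        hdiutil verify MacOSXUpdCombo10.6.5.dmg   # validates the image's internal checksums
        hdiutil attach MacOSXUpdCombo10.6.5.dmg   # CLI mount; prints a specific error on failure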

    Read the article

  • samba "username map" stopped to work

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going with CentOS as usual, I upgraded the whole system from 6.3 to 6.4. The latter comes with Samba 3.6, while the old one had 3.5. I transferred most users by copying /etc/passwd, /etc/shadow, and the Samba accounts with pdbedit. Homes were on an NFS drive. The translations of Unix accounts to Samba accounts are located in /etc/samba/smbusers. Strangely enough, on some Windows clients there were problems connecting to the Samba shares. In one case, the only thing that worked was to give the Unix account name instead of the Windows name. In another, it was possible to map the network drive and open it in Windows Explorer, but other applications, like Total Commander, gave the message "Cannot connect to z:" when trying to open the drive (sometimes a username/password prompt appeared at this point). smb.conf has the following entries:

        [global]
        security = user
        passdb backend = tdbsam
        username map = /etc/samba/smbusers
        ...
        [Kris]
        comment = Kris's Private
        path = /SMB/Users/Kris
        writeable = yes
        read only = no
        browseable = yes
        users = krisr
        printable = no
        security mask = 0777
        force security mode = 0
        directory security mask = 0777
        force directory security mode = 0
        force create mode = 0775
        force directory mode = 6775

    The smbusers file:

        # Unix_name = SMB_name1 SMB_name2 ...
        krisr = Kris

    Of course, testparm runs without any errors. With Samba 3.5 I was used to log output of the form "Mapped user Kris to krisr". Nothing like this happens now; there is just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some people had problems with 3.6 and security = ADS, but that wasn't helpful in my case. I'm seriously thinking about downgrading back to Samba 3.5, but before taking that step I wanted to ask if somebody knows the solution to these problems. P.S. I asked this question at Server Fault but no answer came. Maybe I'll have more luck with this forum. Sorry for the duplicate, if any of you reads both.
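    Before downgrading, it may help to watch where the mapping is (or isn't) applied at higher log verbosity; Samba 3.6 changed parts of the authentication path, so seeing exactly which name reaches passdb narrows things down. A diagnostic sketch:

        # Temporarily add to [global]:  log level = 3
        testparm -s | grep -i 'username map'   # confirm the map file is picked up

        smbclient //localhost/Kris -U Kris     # reproduce the failure locally...
        tail -f /var/log/samba/log.smbd        # ...and watch for 'Mapped user' lines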

    Read the article

  • Oracle WebLogic Server and Oracle Database: A Robust Infrastructure for your Applications

    - by Ruma Sanyal
    It has been said that a chain is only as strong as its weakest link. This is also true for your application infrastructure: not only are the various components that constitute your infrastructure, like the database and application server, critical, the integration between these things [whether coming out of the box from your vendor or done in-house] is paramount. Imagine your database being down and your application server not knowing about it, and as a result your application waiting indefinitely for a database response; not a great situation if high availability is critical to your application. Or one of your database nodes is very busy, but your application server doesn't have the intelligence to decipher that, so it keeps pinging the busy node when it could in fact get a response from another, idle node much faster. This is what Oracle WebLogic and Database integration provides: intelligent integration out of the box. Tight integration between Oracle WebLogic and the Oracle Database makes your infrastructure robust enough that not only does each of your infrastructure components provide improved RASP [reliability, availability, scalability, and performance], but these components work together to offer improved performance and availability, better resource sharing, inherent scalability, ease of configuration, and automated management for your entire infrastructure. Oracle WebLogic Server is the only application server with this degree of integration to Oracle Database. With Oracle WebLogic Server 11g, we introduced Active GridLink for Real Application Clusters (RAC). In conjunction with Oracle Database, this powerful software technology simplifies management, increases availability, and ensures fast connection failover with runtime connection load balancing and affinity capabilities. With the release of Oracle Database 12c this summer, even tighter integration between Oracle WebLogic Server 12c (12.1.2) and Oracle Database 12c has been achieved, and this further optimizes the integration for a global cloud environment. Read about these capabilities in detail in the Oracle WebLogic-Database Integration Whitepaper. Get in-depth 'how-to' details from this YouTube video on the topic from our resident expert, Monica Roccelli.

    Read the article

  • Recover LVM2 volume group after one HDD failed

    - by Bernd
    I had two HDDs, each containing an LVM partition; together they formed a volume group with two LVs, one for my / directory and one for my /home/ directory. Yesterday, the HDD holding my / directory failed. I'm trying to recover at least my /home/ directory. What I've done so far:
    1. Boot a live system
    2. Extract the LVM2 metadata from the working HDD using dd
    3. Copy the metadata to /etc/lvm/backup/vg0
    Now I'm trying to do this:

        pvcreate --restore /etc/lvm/backup/vg0 --uuid "[uuid of my working hdd]" /dev/sdb2

    But I always get:

        Couldn't find device with uuid '[uuid of broken hdd]'.
        Couldn't find device with uuid '[uuid of working hdd]'.
        Device /dev/sdb2 not found (or ignored by filtering).

    I confirmed that /dev/sdb2 exists, and I've commented out all filtering settings from /etc/lvm/lvm.conf, so I don't know what might be causing pvcreate not to find the device. So: What might be the problem? Is it even possible to restore this partition? (As I'm writing this, I'm starting to think it's impossible D:) Edit: Okay, looks like I've got it figured out. I was using an Ubuntu 8.10 CD (yeah, I know it's not supported anymore) and it seems that was the problem. When I started from an Ubuntu 10.04 CD, everything worked 'fine': I could mount my LVM partitions partially without problems. (Will answer the question in 4 hours. But if anyone still has some hints/tips, please share! :)
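    For anyone landing here later, the usual single-PV recovery sequence with current LVM2 tools spells the option --restorefile and follows up with vgcfgrestore before activation; a sketch with the UUID elided exactly as in the question:

        pvcreate --uuid "[uuid of working hdd]" \
                 --restorefile /etc/lvm/backup/vg0 /dev/sdb2

        vgcfgrestore -f /etc/lvm/backup/vg0 vg0   # write the saved VG metadata back
        vgchange -ay vg0                          # activate whatever LVs survive
        lvs                                       # then mount read-only and copy data off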

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a RAID5 array and need to stop it, but when I try to stop it I get an error:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1]
              2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU]

        unused devices: <none>

        # mdadm --stop
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: No devices given.

        # mdadm --stop /dev/md0
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: fail to stop array /dev/md0: Device or resource busy

    and:

        # lsof | grep md0
        md0_raid5  965  root  cwd  DIR      8,1  4096  2  /
        md0_raid5  965  root  rtd  DIR      8,1  4096  2  /
        md0_raid5  965  root  txt  unknown                /proc/965/exe

        # grep md0 /proc/mdstat
        md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1]

        # grep md0 /proc/partitions
           9        0    2120320 md0

    While booting, md1 is mounted OK, but md0 fails for some unknown reason:

        # dmesg | grep md[0-9]
        [    4.399658] raid5: allocated 3179kB for md1
        [    4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2
        [    4.400678] md1: detected capacity change from 0 to 2121793536
        [    4.403135] md1: unknown partition table
        [   38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed
        [   38.941969] XFS mounting filesystem md1
        [   41.058808] Ending clean XFS mount for filesystem: md1
        [   46.325684] raid5: allocated 3179kB for md0
        [   46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
        [   46.330620] md0: detected capacity change from 0 to 2171207680
        [   46.335598] md0: unknown partition table
        [   46.410195] md: recovery of RAID array md0
        [  117.970104] md: md0: recovery done.

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sde1[0] sdf1[2] sdd1[1]
              2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU]

        md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1]
              2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
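    mdadm refuses to stop an array while anything holds it open, and the md0_raid5 entry in lsof is just the array's own kernel thread, not the real holder; the opener is usually a mount, swap, or a device-mapper/LVM layer on top. (The "metadata format 00.90 unknown" warnings typically come from a metadata=00.90 line in mdadm.conf that newer mdadm spells 0.90; they are cosmetic.) A checklist sketch:

        grep md0 /proc/mounts      # still mounted somewhere?
        swapon -s                  # swap living on md0 keeps it busy too
        fuser -vm /dev/md0         # userspace processes holding the device
        ls -l /dev/md0             # note the major:minor pair (9, 0), then:
        dmsetup table              # any dm/LVM mapping referencing 9:0?

        # Once nothing references it:
        umount /dev/md0 2>/dev/null
        mdadm --stop /dev/md0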

    Read the article
