Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.


  • Making a hidden TrueCrypt volume with existing data

    - by Bill Grey
    I have a 1 TB HDD that I would like to encrypt. I want an outer volume containing almost nothing but some decoy data, with everything else in a hidden volume. However, my drive is over 95% full. Is it still possible to do this in place, or would it have to be done on an empty drive, copying the data over afterwards? I could not find the answer to this question in the documentation. Also, how easy would it be to undo this, i.e. decrypt the drive? Would that again require another empty drive to begin with?

    Read the article

  • Putting our OLTP and OLAP services on the same cluster

    - by Dynamo
    We're currently in a bit of a debate about what to do with our scattered SQL environment. We are definitely setting up a cluster for our data warehouses and are now deciding whether our OLTP databases should go on the same one. The cluster will be active/active, with database services running on one node and reporting and analytical services on the other. From a technical standpoint I don't see an issue: with the services running on different nodes, they shouldn't compete too heavily for resources. The only physical resource that may be an issue is the shared disk space. Our environment is also quite small: our biggest OLAP database is only about 40 GB at the moment, and our OLTP databases are all under 10 GB. I can see a potential political issue here, since different groups are involved, but strictly speaking I'm wondering whether any major technical issues could arise from this setup.

    Read the article

  • Removing information about deleted volume

    - by Pravin
    In order to increase the space on my C: drive, I deleted all my volumes and recreated them, allocating more space to C:. After that, my G: drive no longer existed. Before this I used to install all my software to G:. Now that the drive does not exist, I want to remove all information about the software I had installed on G:, since it was deleted along with the volume. I also want to install Cilk++, but the installer gives me an error: invalid drive G:. If I plug in a pen drive so that a volume named G: appears, the Cilk++ installer runs, but it says it will be integrated with the Visual Studio 2008 I previously had on the G: drive (which no longer exists) and does not offer the Visual Studio 2010 I recently installed on C:. How do I fix this? Please help.

    Read the article

  • Growing an EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID 0 configuration with two 1 GB EBS volumes, assembled at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its 2 GB capacity. I then created snapshots of both volumes using ec2-consistent-snapshot and created new volumes from those snapshots, specifying the volume size as 2 GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID 0 array on /dev/md0 from the two new volumes, and mounted it at /vol. df -hT showed /vol as 2 GB, as expected. Then I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2 GB instead of the expected 4 GB. I rebooted, remounted and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields mdadm: raid0 array /dev/md0 cannot be reshaped. Is there any other way I can grow a RAID 0 array?

    Read the article

  • Hyper-V cluster - one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V on Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume, and 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades, apart from one single VM that runs fine on its current blade but will not migrate anywhere else. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information in any of the logs. Edit: I know the usual culprit is mismatched resource names. We've already been there, with the NICs named differently on some of the blades. As far as we can tell, everything now looks identical on each piece of metal.

    Read the article

  • Mysterious "media" volume mounted on desktop Mac OS X

    - by Allen
    I have a mysterious volume mounted on my desktop that I can't seem to forcibly unmount. I've tried using umount and also diskutil, but it seems to remount itself automatically. I've copied my HDD with Time Machine onto a new computer, and it also has the drive mounted. It's not pointing to anything and I can't open it, nor can I forcibly remove it by hand with rm -Rf. Any ideas? I noticed this problem after I upgraded from Lion to Mountain Lion. It causes problems because when I try to select a file using the built-in Finder dialog box, it freezes for a few minutes while it tries to cache or read the mounted "media" volume.

    Read the article

  • HP Pavilion dv3 volume control display driver on Windows 7

    - by Farinha
    I've recently bought an HP Pavilion dv3-2150ep and I'm having a hard time getting the volume control display to work as expected. The control is a back-lit, touch-sensitive bar above the keyboard. The buttons to turn the volume up and down actually do so, but the lighting does not change at all. The mute button does change color when toggled. I'm not sure if I'm missing a driver (I've installed everything on the HP support page that seems to have something to do with sound and/or display) or if I have to activate this somewhere. Any ideas?

    Read the article

  • Where does TrueCrypt store the backup volume header?

    - by happygolucky
    When using whole-disk encryption (WDE), where, if anywhere, does TrueCrypt store a backup volume header? As far as I know there is always a backup header for regular TrueCrypt volumes, but I am not sure whether this applies when system encryption is used, because if I damage the volume header in track 0, my password won't boot my system anymore. So is there no backup header on the drive? I read on a forum that TrueCrypt might keep a backup header at some position relative to the END of the HDD, but this doesn't make sense, as it could easily be overwritten by programs running in Windows. And how would TrueCrypt know where this backup is anyway?

    Read the article

  • Encrypted TrueCrypt volume in file with decoy content?

    - by penyuan
    I've been reading about the encryption features of TrueCrypt, including the ability to keep an encrypted volume inside a file with any name. For example, I can create an encrypted volume inside a file named music.mp3. However, the file won't actually play when I open it in a music player. Is there a way to add "decoy" content to music.mp3 so that someone who doesn't know it holds encrypted content can double-click on it and music will play? It doesn't have to be music; it could also be images, a decoy text document, and so on.

    Read the article

  • Windows shows incorrect free space on RAID 10 volume

    - by Adenverd
    I have four 1 TB hard drives in a RAID 1+0 configuration, so theoretically I should have about 2 TB of available space. Windows says the drive has a total size of 1.81 TB, which I'm fine with. As for files on the volume, I used WinDirStat to determine that I have 552.8 GB of files there, which means I should have somewhere around 1.3 TB of free space at minimum. Yet Windows shows the drive as having only 492 GB free. Are there hidden files somewhere that I can't see (I have show hidden files/folders turned on)? Does Windows not recognize that old files have been deleted? Is there any way to correct this problem?

    Read the article

  • Can I use a Windows 2008 R2 cluster for file redundancy?

    - by JERiv
    I'm researching a server clustering architecture as a redundancy and backup solution for a client, and something that isn't made clear is whether I can use server clustering to replace a file server plus backup solution. Forgive my elementary understanding of server clustering, but suppose:

    2 sites (NJ, CA)
    Identical servers at each site, set up as remote-site cluster nodes running Windows Server 2008 R2 Enterprise
    Services: File, Terminal, AD, and maybe DNS

    Will the following be true? Files (including data drives) will be synced between the two servers, eliminating the need for third-party backup/mirroring software to sync/back up files. Also, supposing I use roaming profiles with folder redirection, how will client computers on the WAN access their data through the cluster (i.e. will they automatically choose the best route)?

    Read the article

  • How to recover logical volume deleted with lvremove

    - by John P
    I am on CentOS 5.5 running Xen. I have a large volume group on which I create logical volumes using lvcreate. Today a customer cancelled her account, then changed her mind about an hour later. Unfortunately I had already removed the logical volume her Xen image resided on (just a standard lvremove). There has been no other LVM activity on this disk since then (nothing else added or deleted). Is it possible to undo an lvremove, or otherwise recover the logical volume? If so, how would I go about it?

    Read the article

  • OCFS2 Now Certified for E-Business Suite Release 12 Application Tiers

    - by sergio.leunissen
    Steven Chan writes that OCFS2 is now certified for use as a clustered filesystem for sharing files between all of your E-Business Suite application tier servers. OCFS2 (Oracle Cluster File System 2) is a free, open source, general-purpose, extent-based clustered file system which Oracle developed and contributed to the Linux community. It was accepted into Linux kernel 2.6.16. OCFS2 is included in Oracle Enterprise Linux (OEL) and supported under Unbreakable Linux support.

    Read the article

  • Focus On Systems Admins and Developers

    - by rickramsey
    Even if you're not going to Oracle OpenWorld, you might find it interesting to hear what the different technology groups at Oracle are going to be talking about. And if you are going, here's your systems schedule (all links go to PDF files):

    Focus On: Oracle Linux
    Focus On: Oracle Solaris
    Focus On: Oracle Solaris Cluster
    Focus On: Oracle Solaris Studio
    Focus On: Desktop Virtualization
    Focus On: Oracle VM Server Virtualization
    Focus On: SPARC Servers
    Focus On: Storage
    Focus On: SPARC Supercluster

    - Rick

    Read the article

  • Strategy for avoiding duplicate object ids for data shared across devices using iCloud

    - by rmaddy
    I have a data-intensive iOS app that does not use Core Data and does not (yet) support iCloud syncing. All of my objects are created with unique keys. I use a simple long long initialized with the current time; then, as I need a new key, I increment the value by 1. This has all worked well for a few years with the app running in isolation on a single device. Now I want to add support for automatic data sync across devices using iCloud. As my app is written, there is the possibility that two objects created on two different devices could end up with the same key, and I need to avoid that. I'm looking for ideas for solving this issue. The solution must meet two requirements:

    1) The key needs to remain a single integral data type. Converting all existing keys to a compound key, a string, or another type would affect the entire code base and likely result in more bugs than it's worth.

    2) The solution can't depend on an Internet connection. A user must be able to run the app and add data even with no Internet connection; the data should still resolve properly later, once a connection is available and the data syncs through iCloud. I'll accept one exception to this rule: if no other option is available, I may be open to requiring an Internet connection the first time the app's data is initialized.

    One idea I have been toying with is logically splitting the integer key into two parts: the high 4 or 5 bits could hold some sort of device id, while the rest represents the actual key. The fuzzy part is figuring out how to come up with non-conflicting device ids that fit in a few bits. This should be viable, since I don't need to deal with millions of devices, only the few devices shared by a given iCloud account. I'm open to suggestions. Thanks.
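    For illustration, here is a minimal C++ sketch of the bit-splitting idea described above (not the poster's code): the top 5 bits carry a per-device id and the low 59 bits the counter. The bit widths, names, and sample values are all assumptions.

        // Pack a small device id into the high bits of a 64-bit key so that
        // counters generated independently on different devices cannot collide.
        #include <cstdint>
        #include <cassert>
        #include <iostream>

        constexpr int kDeviceBits = 5;                    // up to 32 devices per account
        constexpr int kCounterBits = 64 - kDeviceBits;    // 59 bits left for the counter
        constexpr uint64_t kCounterMask = (1ULL << kCounterBits) - 1;

        uint64_t makeKey(uint64_t deviceId, uint64_t counter) {
            assert(deviceId < (1ULL << kDeviceBits));
            assert(counter <= kCounterMask);
            return (deviceId << kCounterBits) | counter;
        }

        uint64_t deviceOf(uint64_t key)  { return key >> kCounterBits; }
        uint64_t counterOf(uint64_t key) { return key & kCounterMask; }

        int main() {
            // Device 3, counter seeded from a current-time value as in the question.
            uint64_t key = makeKey(3, 1690000000000ULL);
            std::cout << deviceOf(key) << " " << counterOf(key) << "\n";  // 3 1690000000000
        }

    A convenient property of this layout is that legacy keys (plain time-based counters) have zeros in their high bits, so existing data effectively belongs to device id 0 and needs no migration, as long as id 0 stays reserved and the old values stay below 2^59 (true for second- or millisecond-scale timestamps).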

    Read the article

  • Debugging into shared library source from a consuming app, using Qt Creator

    - by morpheous
    I am using Qt Creator (1.3.1) on Ubuntu Karmic. I have built two projects: a shared library, and an application that links to the shared library. I am debugging the application and need to step into the implementation (i.e. the source) of one of the functions exported by the shared library. Does anyone know how to set up Qt Creator to let me step into the source of a shared library?

    Read the article

  • How do I use waf to build a shared library?

    - by James Morris
    I want to build a shared library using waf, as it looks much easier and less cluttered than GNU autotools. I actually have several questions so far related to the wscript I've started to write:

        VERSION = '0.0.1'
        APPNAME = 'libmylib'
        srcdir = '.'
        blddir = 'build'

        def set_options(opt):
            opt.tool_options('compiler_cc')

        def configure(conf):
            conf.check_tool('compiler_cc')
            conf.env.append_value('CCFLAGS', '-std=gnu99 -Wall -pedantic -ggdb')

        def build(bld):
            bld.new_task_gen(
                features = 'cc cshlib',
                source = '*.c',
                target = 'libmylib')

    The line containing source = '*.c' does not work. Must I specify each and every .c file instead of using a wildcard? And how can I enable a debug build, for example? Currently the wscript always uses the debug CFLAGS, but I want to make that optional for the end user. The plan is for the library sources to live in a subdirectory, with the programs that use the lib each in their own subdirectory.

    Read the article

  • How to set a breakpoint on a function in a shared library which has not been loaded, in gdb

    - by pierr
    Hi, I have a shared library, libtest.so, which is loaded into the main program using dlopen. The function test() resides in libtest.so and is called in the main program through dlsym. Is there any way I can set a breakpoint on test? Note that the main program is not linked against libtest.so at link time; otherwise I would be able to set the breakpoint, albeit as a pending one. As it is, when I do b test, gdb tells me Function "test" not defined.
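    For context, a minimal C++ sketch of the loading pattern the question describes; the library path and the signature of test() are assumptions:

        // Build WITHOUT linking against libtest.so; link with -ldl instead.
        #include <dlfcn.h>
        #include <cstdio>

        int main() {
            void *handle = dlopen("./libtest.so", RTLD_NOW);  // library loaded only here, at run time
            if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

            // Before this dlopen returns, gdb has no symbols for libtest.so,
            // which is why "b test" reports: Function "test" not defined.
            auto test = reinterpret_cast<void (*)()>(dlsym(handle, "test"));
            if (test) test();

            return dlclose(handle);
        }

    gdb can be told to accept such a breakpoint anyway with set breakpoint pending on; the breakpoint then resolves once dlopen brings the library in.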

    Read the article

  • How do I use a shared library (in this case JsonCpp) in my C++ program on Linux?

    - by Not Joe Bloggs
    I'm a new-ish C++ programmer writing my first program on my own. I decided I would like to use JSON to store some of my data, and I've found a library to handle JSON, JsonCpp. I've installed the library using my Linux system's package manager, and in my source file I've used #include <json>. I compile with g++, using its -ljson and -L/usr/lib options (libjson.so is located in /usr/lib). However, the first use of Json::Value, an object provided by the library, gives a compilation error of "Json has not declared". I'm sure my mistake is something simple, so could someone explain what I'm doing wrong? None of the books I have mention how to use shared libraries, so I've had to google to find this much.
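    For comparison, a minimal sketch of typical JsonCpp usage. The header path and linker flag vary by distro (often <json/json.h> with -ljsoncpp, sometimes <jsoncpp/json/json.h>), so treat both as assumptions to check against your package:

        // g++ example.cpp -ljsoncpp   (adjust the flag to match your install)
        #include <json/json.h>
        #include <iostream>
        #include <string>

        int main() {
            Json::Value root;                // holds the parsed document
            Json::Reader reader;
            std::string doc = "{\"name\": \"example\", \"count\": 3}";

            if (!reader.parse(doc, root)) {
                std::cerr << reader.getFormattedErrorMessages();
                return 1;
            }
            std::cout << root["name"].asString() << " "
                      << root["count"].asInt() << "\n";   // example 3
        }

    An error along the lines of "Json has not been declared" usually means the JsonCpp header was never actually included, since the Json namespace only exists once that header is pulled in.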

    Read the article

  • PHP: Using browscap.ini on a shared host - ini_set() failing

    - by GreybeardTheUnready
    I'm trying to use get_browser(), but unfortunately my page is on a shared host and I have no access to php.ini. I have downloaded the latest version of browscap.ini and placed it in my document root. I have then added the following:

        if (!ini_set('browscap', '/home/private stuff/browscap.ini')) {
            echo "Failed to set browscap";
        } else {
            echo "browscap = [" . ini_get('browscap') . "]";
        }
        exit();

    But this fails. (NB: the echo statement always shows [] for the browscap value; even if I didn't have the browscap.ini file, the setting should still show up in ini_get()... shouldn't it?) I have looked at the previous questions on this and they don't seem to help. Any ideas?

    Read the article

  • How do shared hosting, domain names and DNS work together?

    - by vtortola
    Hi, I have a little doubt, but I couldn't find information about it, probably because I'm not searching for the right thing. When a browser asks for "www.mydomain.com", the DNS server returns an IP address and the browser goes there... but what happens then? I mean, that IP address could belong to a shared host that serves hundreds of web pages and domains, so how does it know which site the request is for? Is that something the web server does? Is it something I could implement in a web application? For example, I have a web application that contains accounts, and each account has a default web page. You can access that page by passing the account name, for example "www.mydomain.com/myaccount", but now I want to register "www.myaccount.com" and have it serve the "www.mydomain.com/myaccount" content. Is that possible? Kind regards.
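    What makes this work is the HTTP/1.1 Host header: the browser reaches the shared IP, then tells the server which domain it wanted. A toy C++ illustration of that dispatch (all names and paths are made up):

        // One IP address, many sites: pick a docroot from the Host header.
        #include <iostream>
        #include <map>
        #include <string>

        std::string docrootFor(const std::string &host) {
            static const std::map<std::string, std::string> vhosts = {
                {"www.mydomain.com",  "/var/www/mydomain"},
                // an extra domain can simply alias onto a sub-path of another site
                {"www.myaccount.com", "/var/www/mydomain/myaccount"},
            };
            auto it = vhosts.find(host);
            return it != vhosts.end() ? it->second : "/var/www/default";
        }

        int main() {
            // A request arrives as "GET / HTTP/1.1\r\nHost: www.myaccount.com\r\n..."
            std::cout << docrootFor("www.myaccount.com") << "\n";  // /var/www/mydomain/myaccount
        }

    Web servers implement exactly this dispatch as name-based virtual hosting, and an application can do the same thing itself by inspecting the Host header of each request.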

    Read the article

  • How do you detach an array of strings from shared memory in C?

    - by Tim
    I have:

        #include <stdio.h>
        #include <sys/shm.h>

        int array_id;
        char *records[10];

        /* get the shared segment */
        if ((array_id = shmget(IPC_PRIVATE, 1, 0666)) == -1) {
            perror("Array Creating");
        }

        /* attach; shmat returns (void *) -1 on failure, so compare the
           returned pointer itself rather than dereferencing it */
        records[0] = (char *) shmat(array_id, (void *) 0, 0);
        if (records[0] == (char *) -1) {
            perror("Array Attachment");
        }

    which works fine, but when I try to detach I get an "Invalid argument" error:

        /* detach */
        int error;
        if ((error = shmdt((void *) records[0])) == -1) {
            perror("Array Detachment");
        }

    Any ideas? Thank you.

    Read the article

  • Properly setting up shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a huge problem with setting up permissions correctly for shared folders. My setup:

    Windows 7 x64 Enterprise (name: backupfb), joined to the domain, with a shared folder on drive E: (E:\backup)
    50 clients/laptops with Tivoli Storage Manager FastBack for Workstations, which save files to the shared folder

    I need to configure the permissions on my shared folders so that only the owner of a folder can access it. The folder structure is:

    E:\backup <- shared as the "backup" folder (\\backupfb\backup\)
    E:\backup\BackupAdmin <- this directory is used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations. Nodes require read-only access to these directories.
    E:\backup\RealTimeBackup <- client accounts must be able to create directories that are only accessible by the account that created them. As a result, the directory that contains data for a node is not created until that node connects to the server.

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parent DISABLED:

    Permission entries for \\backupfb\backup\BackupAdmin:
    Allow Users (this folder, subfolders, and files): Traverse Folder / Execute File; List Folder / Read Data; Read Attributes; Read Extended Attributes; Delete Subfolders and Files; Delete; Read Permissions
    Allow Administrators (this folder, subfolders, and files): Full Control

    Both folders have the option "Apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    Permission entries for \\backupfb\backup\RealTimeBackup:
    Allow Administrators (this folder, subfolders, and files): Full Control
    Allow CREATOR OWNER (this folder, subfolders, and files): Full Control
    Allow Users from the domain (this folder only): Traverse Folder / Execute File; List Folder / Read Data; Read Attributes; Read Extended Attributes; Create Files / Write Data; Create Folders / Append Data; Delete Subfolders and Files; Read Permissions
    Allow OWNER RIGHTS (this folder, subfolders, and files): Full Control

    Here I have a huge problem with CREATOR OWNER. I am able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the properties to "This folder, subfolders and files" and save, it reverts to "Subfolders and files only".

    So I tried using icacls to set up the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    After that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist.

    Any ideas? Please help. Thanks! ;-)

    Read the article
