Search Results

Search found 8843 results on 354 pages for 'partition master'.

Page 25/354 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • I’m now a SQL Server Microsoft Certified Master

    - by simonsabin
    What a day; well, what a week really. I found out last week that I passed the Microsoft Certified Master exam and that I needed to decide when to sit the Lab part next. I decided to get it done in the few days of down time I had before going to the MVP summit, so I scheduled it for today. Five and a half hours later I had finished, after a few visits to the toilet (lesson learnt: don’t drink too much during your lab). The lab is marked by humans, which I think is a great thing given the various ways of doing...(read more)

    Read the article

  • Automatic Storage Management (ASM)

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Automatic Storage Management (ASM) (Doc ID 1187723.1). This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Automatic Storage Management (ASM) environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories: Automatic Storage Management (ASM) Concepts and Overview; Automatic Storage Management (ASM) Installation; Automatic Storage Management (ASM) Configuration; Automatic Storage Management (ASM) Administration; Automatic Storage Management (ASM) Migration and Upgrade; Automatic Storage Management (ASM) Monitoring; Automatic Storage Management (ASM) Troubleshooting and Debugging; Automatic Storage Management (ASM) Best Practices; Automatic Storage Management (ASM) Versions and Patches; ASMLIB; Database Machine, Exadata Storage Server and RAC; Documentation; Using My Oracle Support Effectively.

    Read the article

  • SQL SERVER - Simple Installation of Master Data Services (MDS) and Sample Packages - Very Easy

    I tweeted recently: ‘Installing #sql Server 2008 R2 – Master Data Services. Painless.’ After doing so, I got quite a few emails from other users asking why I thought it was painless. The reason was very simple: I was able to install it rather quickly on my laptop without any issues. There were [...]

    Read the article

  • Why ISO master (editor) does not read Windows images

    - by Jacek Blocki
    I have the following problem with the ISO Master software: I try to edit a Windows 7 ISO image with $ isomaster windows7.iso The file does open; unfortunately, all I get is a README with the message: This disc contains a "UDF" file system and requires an operating system that supports the ISO-13346 "UDF" file system specification. isomaster comes from the Ubuntu repository; I am using 12.04. The system has kernel support for UDF, and I can mount the above ISO (mount -o loop) and see its contents read-only. Any idea how to fix it? Using a tool other than isomaster is also an option. Regards, Jacek
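
    A possible workaround, sketched here rather than taken from the question: since isomaster cannot open UDF-formatted images, the contents can be pulled out with 7z (from the p7zip-full package, which understands UDF) or inspected through a read-only loop mount. The image name is the one used above; the target directories are placeholders.

        # extract the image's contents so they can be edited as ordinary files
        7z x windows7.iso -o./windows7-extracted

        # or just inspect it read-only via a loop mount
        sudo mkdir -p /mnt/win7iso
        sudo mount -o loop,ro windows7.iso /mnt/win7iso

    Rebuilding a bootable Windows 7 image afterwards would still need a UDF-capable mastering tool, which isomaster is not.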

    Read the article

  • Slides and Scripts from Metalogix Webcast Master Your SharePoint Migration With PowerShell

    - by Brian Jackett
    Thanks to everyone who attended the Metalogix webcast “Master Your SharePoint Migration with PowerShell” I guest presented on today.  We had great attendance and no technical hitches which is always a plus.  A number of attendees asked for my slide deck which you can find at the link below.  As a bonus I am including a set of demo scripts that I typically use with the longer version of this presentation.  If you have any questions or comments please feel free to reach out to me.  A big thanks once again to Metalogix for giving me the opportunity to work with them. Scripts and Slidedeck Click Here         -Frog Out

    Read the article

  • SQLAuthority News - List of Master Data Services White Paper

    Since my TechEd India 2010 presentation I have been very excited about SQL Server 2010 MDS. I just came across some very interesting white papers on the Microsoft site related to this subject. Here is the list, along with the locations where you can download them. They are all written by top experts at Microsoft. Master Data [...]

    Read the article

  • New Video: Master/Detail in WinPhone 7 with oData

    The companion video to my mini-tutorial on Windows Phone 7 animation, Master/Detail and accessing an oData web service is now available. I am currently working on four video/tutorial series: Getting Started with Silverlight; Windows Phone 7 Programming; Blend for Developers; and the HyperVideo Platform project. These correspond to the Key Topics folders in the sidebar. Please feel free to [...]

    Read the article

  • Oracle Master Data Management at OOW 2012: A Look Back

    - by Mala Narasimharajan
    Oracle Master Data Management had a great showing at OOW 2012! Special thanks to our customers and partners for presenting with us, sharing their use cases and successes, and co-sponsoring events. Almost every session at the show featured a customer and the tremendous success or transformation Oracle MDM brought about at their organization. At the DemoGrounds, Oracle MDM saw tremendous interest, with many individuals asking to see demos and have their technical questions answered. The demos provided a perfect opportunity to showcase technical enhancements as well as the features on the horizon. The MDM customer appreciation dinner was a smashing success as customers and partners enjoyed a spectacular water view, fine dining and cocktails at one of San Francisco's finest restaurants, The Water Bar. In a short while the planning for next year's OpenWorld will be in full swing, and we can't wait to get started. See you at OOW 2013!

    Read the article

  • How to remove geoclue-master?

    - by dunderhead
    Looking in System Monitor I saw a process called geoclue-master. As far as I can tell, this reports my precise location to applications and the internet. This computer is sitting in the same office all the time so I have no need whatsoever for such a thing, and would rather have the memory and processor cycles that this unnecessary service uses up - however small - free for things that I do need and want. I could not find instructions for disabling and removing this service so I renamed the file, but then the calendar disappeared from the notification area. Is there a way to remove this service but leave the calendar? I'm using 11.10.
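
    A few read-only checks, sketched rather than taken from the question, for finding out what ships geoclue-master and what would be dragged out with it (package names are assumptions based on a stock 11.10 install):

        # which package ships the geoclue-master binary?
        dpkg -S geoclue-master

        # what depends on that package?
        apt-cache rdepends geoclue

        # dry-run the removal to see what apt would actually take out
        sudo apt-get --simulate remove geoclue

    On 11.10 the clock/calendar indicator (indicator-datetime) uses geoclue for time-zone detection, which would explain the calendar disappearing; the simulated removal above shows whether that dependency applies before anything is actually removed.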

    Read the article

  • How do I mount a "GPT Protective Partition" in Windows XP?

    - by Michael Haren
    I formatted an external USB hard drive while it was connected to a 32-bit Windows 2003 Server Standard Edition server. After loading it up with files, I moved it to my Windows XP SP3 machine, where it didn't show up automatically in My Computer. I opened Computer Management > Disk Management and see it listed as "Healthy (GPT Protective Partition)". What's up with that? Can I mount it?

    Read the article

  • SharePoint 2010 Server Configuration Error -> "Cannot connect to database master"

    - by Chrish Riis
    I receive the following error when I try to configure SharePoint 2010 Server: "Cannot connect to the database master at SQL server at [computer.domain]. The database might not exist, or the current user does not have permission to connect to it." I run the following setup: Windows Server 2008 R2 Standard with SP1 and all the updates; SQL Server 2008 R2 with SP1; SharePoint Server 2010 with SP1. Everything is installed on the same server (it's a test server). I have tried the following: rebooting the server; checking the install account's DB rights (dbcreator, securityadmin - I even let it have sysadmin); opening the firewall on ports 1433 and 1434; uninstalling both SQL and SP, then reinstalling them both; enabling all client protocols in SQL Server Configuration; making sure I used the correct account for installing SharePoint (local admin). Useful links: TCP/IP settings – http://blog.vanmeeuwen-online.nl/2010/10/cannot-connect-to-database-master-at.html http://ybbest.wordpress.com/2011/04/22/cannot-connect-to-database-master-at-sql-server-at-sql2008r2/ Wrong slash - http://yakimadev.com/2010/11/cannot-connect-to-database-master-at-sql-server-at-serverdbname-error-during-sharepoint-2010-products-configuration-wizard-and-installation/ Port error - http://www.knowsharepoint.com/2011/08/error-connecting-to-database-server.html

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger; works when master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not strictly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any The Puppet master, however, does not seem to be following this, as I get this error on the client: [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show: XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" The fileserver.conf file is as follows (and, going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all): [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3. Nginx has been compiled using the following options: nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf is: server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet: [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work? 
Update: I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs: Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine, facter fqdn and hostname both return a fully qualified host name: [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates and signed them on the master using the option --allow-dns-alt-names: [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?
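
    Two checks worth running here, sketched rather than taken from the post, given that the master falls back to matching auth.conf rules by IP once the reverse lookup fails (hostnames, ports and certificate paths below are the masked ones from the question; the exact layout of /var/lib/puppet/ssl on the agent is an assumption):

        # can the master resolve the agent's address back to a name?
        dig -x 10.209.47.31 +short

        # hit one failing endpoint directly with the agent's certificate, bypassing
        # the puppet agent, to see exactly how the master's auth.conf treats it
        curl -v \
          --cert   /var/lib/puppet/ssl/certs/blramisr195602.XXXXXX.com.pem \
          --key    /var/lib/puppet/ssl/private_keys/blramisr195602.XXXXXX.com.pem \
          --cacert /var/lib/puppet/ssl/certs/ca.pem \
          "https://bangvmpllda02.XXXXX.com:8140/production/certificate_revocation_list/ca"

    If reverse DNS for the agents cannot be fixed, Puppet 3's auth.conf also accepts allow_ip entries, which match addresses explicitly instead of relying on the hostname comparison seen in the log.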

    Read the article

  • Sporadic crash of master-slave MySQL replication process

    - by obarshay
    Hello, I was wondering if someone has experienced this and can perhaps provide some insight into this issue. We have a plain-vanilla MySQL master-slave replication set up. The tables are MyISAM and the master can get quite busy with reads and writes. We use the slave instance to perform full daily backups in order to avoid bringing down the master server. The backup process does the following: STOP SLAVE SQL_THREAD; mysqlhotcopy all tables; START SLAVE SQL_THREAD. Every once in a while (once a month or so) the replication breaks with varying error messages indicating a corrupt query or log file. Here's one that happened last night: mysql> show slave status \G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: server8.propreports.com Master_User: nexus8 Master_Port: 3306 Connect_Retry: 60 Master_Log_File: bin.000045 Read_Master_Log_Pos: 581644327 Relay_Log_File: relay.000086 Relay_Log_Pos: 94131 Relay_Master_Log_File: bin.000045 Slave_IO_Running: Yes Slave_SQL_Running: No Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 1064 Last_Error: Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '138070603'£' at line 1' on query. Default database: 'wtsdb'. Query: 'UPDATE fill SET clearing_fee='0.0E id='138070603'£' Skip_Counter: 0 Exec_Master_Log_Pos: 4164743 Relay_Log_Space: 577574251 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL I follow this procedure to recover from the above error and resume replication: stop slave; change master to MASTER_LOG_POS = 4164743, MASTER_LOG_FILE = 'bin.000045'; start slave; We have multiple servers set up this way and they all sporadically stop replicating with a similar error. Any advice on how to resolve this would be greatly appreciated.
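
    A minimal sketch of the backup window described above, assuming a local slave, a database named wtsdb (as in the error) and a mysql client that can authenticate without prompting; it is not the poster's actual script:

        #!/bin/bash
        # pause the SQL thread so the MyISAM tables stop changing on the slave
        mysql -e "STOP SLAVE SQL_THREAD;"

        # copy the tables while they are quiescent (backup path is a placeholder)
        mysqlhotcopy wtsdb /backups/wtsdb-$(date +%F)

        # resume applying the relayed events
        mysql -e "START SLAVE SQL_THREAD;"

    The recovery step quoted above (CHANGE MASTER TO the last executed position) forces the slave to re-fetch the relay log from the master, which suggests the corruption is happening in the slave's relay log rather than in the master's binary log.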

    Read the article

  • Passenger throwing undefined method `-@' for "master":String after Puppet 3.0.0 upgrade

    - by Andy Shinn
    My Puppet master is using Passenger to serve. After upgrading to Puppet 3.0.0 I am getting the following error: [ pid=17576 thr=70231398486460 file=utils.rb:176 time=2012-10-01 17:37:12.892 ]: *** Exception NoMethodError in PhusionPassenger::Rack::ApplicationSpawner (undefined method `-@' for "master":String) (process 17576, thread #): from config.ru:7 from /usr/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval' from /usr/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize' from config.ru:1:in `new' from config.ru:1 My config.ru is as follows: # a config.ru, for use with every rack-compatible webserver. # SSL needs to be handled outside this, though. # if puppet is not in your RUBYLIB: # $LOAD_PATH.unshift('/opt/puppet/lib') $0 = "master" # if you want debugging: # ARGV << "--debug" ARGV << "--rack" # Rack applications typically don't start as root. Set --confdir to prevent # reading configuration from ~/.puppet/puppet.conf ARGV << "--confdir" << "/etc/puppet" # NOTE: it's unfortunate that we have to use the "CommandLine" class # here to launch the app, but it contains some initialization logic # (such as triggering the parsing of the config file) that is very # important. We should do something less nasty here when we've # gotten our API and settings initialization logic cleaned up. # # Also note that the "$0 = master" line up near the top here is # the magic that allows the CommandLine class to know that it's # supposed to be running master. # # --cprice 2012-05-22 require 'puppet/util/command_line' # we're usually running inside a Rack::Builder.new {} block, # therefore we need to call run *here*. run Puppet::Util::CommandLine.new.execute Any idea what may be happening?

    Read the article

  • Host not visible in ECC after pushing master agent on server

    - by wildchild
    Hi all! The ECC Master Agent has been pushed to one of the servers. Once a Master Agent is installed, the host should appear within ECC. However, I am unable to see the host. I asked the server team to stop and start the Master Agent, and to check whether the required port is enabled so that the server can talk to the ECC server. After restarting the service I get an error. I even checked whether the port is used by any other service; it is not. Could somebody please help with what should be done here? It’s a Windows server. Here's the error: "The EMC ControlCenter Master Agent service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service." Thanks in advance!

    Read the article

  • Windows authenticated users have lost access to master (default) database

    - by Rob Nicholson
    Something very strange has occurred on our production SQL database. Users connecting via Windows authentication appear to have lost all access to the master database. By default, all logins have the default database set to master. So when users connect using SQL Server Management Studio, they get the error: "Cannot open user default database. Login failed error 4064". What's also worrying is that we have a group called "COMPANY - SQL Administrator" which has sysadmin rights, and users in this group also get the same error. Worse, they don't appear to be system administrators anymore... If they change their default database to something else, they can connect and then work on the database; it's just the master database that is problematic. I'm not even sure by what mechanism Windows authenticated users get access to the master database. Is it something hard coded in, or some property that's got changed? Any ideas? Cheers, Rob.
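
    A hedged illustration, not taken from the question: the immediate symptom can usually be worked around by connecting to an explicit database and re-pointing the logins' default database. The login name below is a guess at how the Windows group appears in sys.server_principals; run this from a PowerShell prompt on the server as a login that still has permission to connect and alter logins.

        # list logins and their default databases, connecting to tempdb instead of master
        sqlcmd -S localhost -E -d tempdb -Q "SELECT name, default_database_name FROM sys.server_principals WHERE type IN ('U','G');"

        # point the affected login (or group) at a database it can actually open
        sqlcmd -S localhost -E -d tempdb -Q "ALTER LOGIN [COMPANY\SQL Administrator] WITH DEFAULT_DATABASE = tempdb;"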

    Read the article

  • Filter data in sheets from a master sheet

    - by sam
    I have a 'master sheet' with lots of furniture data in it; in column A are the suppliers' names. What I would like is to have my master sheet with all the info and then sub-sheets named by supplier; in these sub-sheets I would like to reference the master sheet and pull out all of the items that are from that supplier. For example: I would have a sheet called 'Ikea' which would look in the master sheet and search column A for all entries of 'Ikea'. If present, copy or reference that row 1:12 in the 'Ikea' sheet. I would like to do it all dynamically using references rather than copying the data. Also, I would like it to update automatically rather than having to run a macro to recalculate it each time. Can this be done with formulas rather than macros?

    Read the article

  • Configure fallback redis server

    - by snøreven
    I am using redis as a cache server. Can I somehow configure multiple redis servers so that the cache stays fully functional (read/write) even if some of them go offline? I looked into master-slave, but the problem I see there is that if the master fails and I allow writes to the slaves, those writes get overwritten once the master is up again; the master then just serves the old data. The only solution I could come up with was disabling write-to-disk, but that's no good, as I lose everything if I have to restart the master. And I guess the slaves wouldn't be synced anymore if the master is gone.
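
    A sketch of the manual failover sequence that avoids the overwrite problem described above; hostnames are placeholders and none of this comes from the question:

        # promote the surviving slave so it accepts writes
        redis-cli -h slave1 SLAVEOF NO ONE

        # when the old master comes back, attach it as a slave of the promoted node,
        # so it pulls the newer data set instead of overwriting it
        redis-cli -h old-master SLAVEOF slave1 6379

        # keep append-only persistence on whichever node is currently the master,
        # so a restart does not come back empty
        redis-cli -h slave1 CONFIG SET appendonly yes

    The key point is that the old master must rejoin as a slave first; promoting it straight back is what causes the stale data to win.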

    Read the article

  • Master does not appear to be a git repository error

    - by EmmyS
    I've inherited a position and instructions for creating a new git repository. Unfortunately I've run into problems and no one here knows what to do. Hoping someone can help me out. Here are the instructions I was left: Create a new repository: For these steps you need to be in the gitosis-admin repository, if you don't have it, in a suitable parent folder do: git clone [email protected]:gitosis-admin.git Edit gitosis.conf file - in gitosis-admin root, under [group base-repo] section, add the name of the new repo to the end of the "writable =" section. Commit change and push back to gitosis-admin master. For the next commands, my_new_project represents the name of your project mkdir my_new_project cd my_new_project git init Copy in any files you want to use to start the repo git commit -a -m "Initializing new repository" git remote add origin [email protected]:my_new_project.git git push master git push master:qa So I did 1 and 2, with no problem. It created a local folder on my machine called gitosis-admin. I edited the gitosis.conf file as indicated. But when I try to do step 3 (which I assume is git push gitosis-admin master) bash tells me that fatal: 'master' does not appear to be a git repository fatal: The remote end hung up unexpectedly What am I doing wrong?
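
    A hedged guess at where step 3 and the final pushes go wrong, since 'master' is a branch name rather than a remote; git push expects the remote (origin, added two lines earlier) as its first argument:

        # step 3: push the gitosis.conf change back from the gitosis-admin clone
        git push origin master

        # for the new project, after 'git remote add origin ...':
        git push origin master
        git push origin master:qa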

    Read the article

  • Can SysPrep (or anything else) be used to make a Win XP partition from another computer bootable?

    - by chris5gd
    I've used Paragon Backup and Recovery Free, as recommended to me in my other question, to back up my C: (Win XP) and D: (installed apps) partitions. Before taking the rather scary step of breaking the RAID 0 array on which it's currently installed, and restoring to one of the individual drives, I'd quite like to test the restorability of the imaged partitions. I've restored them onto a spare disk in another computer, which of course won't boot from them in their current state. Is it possible to use SysPrep (or another tool) on the restored partitions to make them bootable?

    Read the article

  • How can I create a 4TB partition on my software RAID5 device?

    - by Kris Harper
    I have set up a RAID5 device with three 2TB hard drives using mdadm. The device was successfully created, but I cannot seem to create a partition on the device. When I try to make an ext3 or ext4 partition via Disk Utility, I get the following error: Error creating partition: helper exited with exit code 1: In part_add_partition: device_file=/dev/md0, start=0, size=4000526106624, type= Entering MS-DOS parser (offset=0, size=4000526106624) MSDOS_MAGIC found found partition type 0xee => protective MBR for GPT Exiting MS-DOS parser Entering EFI GPT parser GPT magic found partition_entry_lba=2 num_entries=128 size_of_entry=128 Leaving EFI GPT parser EFI GPT partition table detected containing partition table scheme = 3 got it got disk new partition guid '' is not valid type '' for GPT appear to be malformed I have seen this question, but that seems to suggest using gparted to do the partitioning. I'm fine with doing that, but my RAID device doesn't show up in the list of gparted devices. I suspect that's because this is a RAID array and not a regular disk. I have already created a GPT partition table on the device. How can I add a partition to my device?
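
    One route around the Disk Utility failure, sketched on the assumptions that the array really is /dev/md0 and that nothing on it needs to be preserved: either put the filesystem straight on the array with no partition table at all, or write a fresh GPT with parted, which can be pointed at the md device directly even when it does not appear in gparted's device list.

        # option 1: no partition table, filesystem directly on the array
        sudo mkfs.ext4 /dev/md0

        # option 2: a single GPT partition spanning the array
        sudo parted /dev/md0 mklabel gpt
        sudo parted -a optimal /dev/md0 mkpart primary ext4 1MiB 100%
        sudo mkfs.ext4 /dev/md0p1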

    Read the article

  • Permission denied after creating home partition

    - by Magnus
    I have recently created a separate home partition following this tutorial: https://help.ubuntu.com/community/Partitioning/Home/Moving. Since I’m still a newbie in Linux (struggling to learn) I felt happy when everything seemed to work smoothly. However, I realised after a while that I had lost all permissions to the subfolders in my home folder. I can still read/write the files placed directly in /home/magnus, but I'm denied access to any of the subfolders. I just realised one more disturbing thing, probably related to the home-partition story above: when I try cd ~/Music/ I get the message bash: cd: /home/magnus/Music/: Permission denied When I try sudo cd ~/Music/ I get the result sudo: cd: command not found It seems strange that the cd command has been lost? What have I done wrong, and is there a way to fix this? By the way, I use Ubuntu 12.04 LTS. Thanks for all the help! Magnus
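
    Two things worth spelling out here, neither taken from the question: sudo cd fails because cd is a shell builtin rather than a program sudo can execute, and the usual cause of this symptom after copying a home directory as root is that the subfolders now belong to root. A hedged fix, assuming the username really is magnus:

        # check who owns the subfolders; root:root would explain the denials
        ls -l /home/magnus

        # hand ownership of the whole home directory back to the user
        sudo chown -R magnus:magnus /home/magnus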

    Read the article

  • Dual Boot Installation with Win7 - Install Ubuntu in New Partition

    - by RC Russell
    Under Win 7 I created a new 100 GB disk partition (L:) to install Ubuntu 12.04. I then rebooted from the Ubuntu install CD, selected "Install side by side" and now I'm stuck. I end up at the Advanced Partitioning Tool and I do not know how to tell the installer to use the L: partition. Any help would be appreciated. Thanks! Thank you. I have successfully installed Ubuntu 12.04 alongside Win 7. However, now when I reboot the laptop it goes directly to Win 7 with no option to choose Ubuntu. Any thoughts on how to get the boot-time choice to show up? Thanks!
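
    A sketch of the usual checks when Windows boots straight through after a side-by-side install; the device name is an assumption, since the firmware may simply be booting a disk GRUB was not installed to:

        # from the Ubuntu side (reach it via a live USB and chroot if it cannot be booted):
        # regenerate the menu so the Windows 7 entry is picked up
        sudo update-grub

        # reinstall GRUB to the disk the machine actually boots from (assumed /dev/sda here)
        sudo grub-install /dev/sda

    The Boot-Repair live tool automates the same steps if setting up the chroot by hand is awkward.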

    Read the article

  • Prevent nautilus showing partition mounted in bash script

    - by bcbc
    In my bash script I mount partitions, check them, copy files to them, and unmount. When the script mounts the partition, Nautilus pops up with a window showing the partition and stealing focus. This is something I want to avoid. Note: I know I can change this behaviour in System Settings > Details > Removable Media > "Never prompt or start programs on media insertion", but I don't want to change the behaviour globally (e.g. when a USB stick is plugged in); I just want to prevent it in my bash script. Actually, this auto-display doesn't seem consistent: if I run the exact same command from the terminal, Nautilus doesn't show, and I know there are other mounts in my script that don't show. So what could be causing this? Here's an example of the code: mkdir -p $target/home mount $target/home $homedev Thanks in advance
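
    One way the script itself could suppress the window, sketched on the assumptions that Nautilus is reacting through the org.gnome.desktop.media-handling settings and that $homedev is the device while $target/home is the mount point:

        # remember the user's setting and silence auto-open while the script works
        old=$(gsettings get org.gnome.desktop.media-handling automount-open)
        gsettings set org.gnome.desktop.media-handling automount-open false

        mkdir -p "$target/home"
        mount "$homedev" "$target/home"
        # ... check the partition and copy files here ...
        umount "$target/home"

        # put the setting back the way it was
        gsettings set org.gnome.desktop.media-handling automount-open "$old"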

    Read the article
