Search Results

Search found 11961 results on 479 pages for 'boot arguments'.


  • Hard drive works fine, then on the next boot it shows up as unformatted.

    - by evolvd
    The drive is a Samsung Spinpoint F3 HD103SJ 1TB. There are no SMART errors and the drive sounds healthy. I restarted the computer while the drive was working fine and then noticed that the drive had the same drive letter but now shows RAW as the file system. I have a few file/partition recovery software titles available, but since any scan on this drive takes about 2.5 hours I wanted to know if anyone had any advice.

    Read the article

  • Is it possible to create a Mirror or Stripe volume for the boot partition in Windows 2008/R2?

    - by Georgios
    Hello, I have a server with two identical disks and I have installed Windows Server 2008 R2 on C:, which is a 60GB volume on Disk 0. Using Disk Management, I have attempted to create both a Mirror and a Striped volume on Disk 1, but every time I get the same error: "No extents found in the plex". This error occurs after Windows has converted both disks to Dynamic. The fact that the manager allows me to attempt this suggests that it is possible, but I have been unable to find any solutions to this error. Any ideas on how to solve this? Thanks, Georgios

    Read the article

  • What''s the "earliest" place one can set an environment variable during Linux boot process?

    - by amn
    I know I can set a variable in a shell startup file, but the thing is, I am trying to set up a POSIX-compatible environment, and a POSIX shell does not parse any startup files other than the one specified by the environment variable ENV. This presents a problem: currently my login starts the shell as bash, which I will try to replace with sh so that Bash runs as a POSIX shell. However, it will then not parse the default startup files, and I need ENV set to point to them. As far as I understand, that means I need to specify ENV before login starts the shell, correct? Now, how would I do that? I hope my question is clear; if not, I will gladly rephrase it.
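    For illustration, a minimal sketch of one place that is read before the login shell starts: pam_env, which (when enabled in the relevant /etc/pam.d files) reads /etc/environment and ~/.pam_environment at login. The target file name used for ENV here is only a placeholder.

        # /etc/environment -- read by pam_env before the login shell runs (sketch)
        ENV=/etc/shinit

        # or per user, if user_readenv is enabled:
        # ~/.pam_environment
        # ENV=/home/amn/.shinit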

    Read the article

  • Why is my Samba printer not visible (after boot) until I restart smb?

    - by j23tom
    I have an old HP 1100 printer and Ubuntu 9.10, now upgraded to the Lucid prerelease. I can't see my printer on the network (using smb://mycomputer in Nautilus or \\mycomputer from XP). Until I restart smbd (on Lucid: sudo restart smbd), my printer is not visible as a network share. All file shares are always visible, and my printer is visible and working after the smbd restart. Any clue what might cause this?

    Read the article

  • How can I install a new boot splash theme in Ubuntu 9.10?

    - by gcc
    I want to change my splash screen, but when I download a splash screen to my computer, I cannot install it. Every time, the computer gives me the same warning, something like "that package is not in a wanted format". Is there any other way to install splash screens? Note: I have also tried Art Manager, but it did not work properly.

    Read the article

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    For MySQL to use large pages, I need to set a ulimit, which I've done in limits.conf. However, limits.conf (pam_limits.so) doesn't get read for init, only for "real" login shells. I solved this before by adding a "ulimit -l" to the initscript's start function, but I need some sort of repeatable way to do this now that the boxes are managed with Chef, and we don't want to take over a file that's actually owned by the RPM.
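    For illustration, a hedged sketch of the kind of drop-in that keeps the RPM-owned initscript untouched. It assumes the packaged initscript sources /etc/sysconfig/mysqld before starting the daemon (common on RPM-based systems, but verify for your package); the file itself can then be managed by Chef.

        # /etc/sysconfig/mysqld -- hypothetical Chef-managed drop-in; only useful if
        # the packaged initscript actually sources this file in its start function.
        ulimit -l unlimited   # raise the locked-memory limit so large pages can be used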

    Read the article

  • Mount a remote Linux hard drive as another Windows 7 partition during boot?

    - by zhuanyi
    I would like to mount a hard drive on a remote computer (running CentOS 6) as a Windows drive so that I can install programs to that drive. The primary hard drive of my Windows machine (which is at home) is pretty small; I have a Linux server sitting in a remote data center with a much larger hard drive, which would let me install more stuff. I know most of you are going to say Samba; unfortunately, the biggest problem for me in this case is that I cannot mount a Samba network share unless I start OpenVPN or SSH tunneling first, which is no good in my case because I will install some startup programs to the remote drive as well. Therefore, the remote drive has to be ready and working just like another drive BEFORE any of the startup programs start to load. Is that possible? My home PC has Windows 7 Professional 32-bit installed and the remote server is a Xen virtual server running CentOS 6. I have admin/root permissions on both. Thanks a lot!

    Read the article

  • Is there any way to make an external monitor the primary under Boot Camp?

    - by mmc
    This may be a general problem for all Windows XP portables; I don't know, as I don't have a dedicated Windows portable. I'm running a MacBook Pro Unibody (so it's using the NVIDIA 9600M chip under Windows XP SP2). Is there any way to get my external monitor to be the "main" one? Even when I use the NVIDIA Control Panel to move the task bar to my external monitor, games still refuse to run on anything other than the internal monitor. (In full-screen mode, of course; if a game runs in a window, I can drag it to the other screen, no problem.) I know I'm missing something elementary here.

    Read the article

  • Ubuntu USB flash boot drive gets a spontaneous "Unhandled sense code" error and the drive switches to write-protected

    - by Steve
    What happens is that the system runs fine for several days or even a week and then suddenly the root file-system / goes read-only. Looking at the syslog it shows that there was an 'Unhandled sense code'. This is under Ubuntu 10.04 but I saw the same thing with Ubuntu 9 with different flash media.

        /dev/sdg1 on / type ext4 (rw,errors=remount-ro)

        Jun 26 08:50:04 host1 kernel: [926247.565090] sd 5:0:0:0: [sda] Unhandled sense code
        Jun 26 08:50:04 host1 kernel: [926247.565094] sd 5:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 26 08:50:04 host1 kernel: [926247.565098] sd 5:0:0:0: [sda] Sense Key : Data Protect [current]
        Jun 26 08:50:04 host1 kernel: [926247.565103] sd 5:0:0:0: [sda] Add. Sense: Write protected
        Jun 26 08:50:04 host1 kernel: [926247.565108] sd 5:0:0:0: [sda] CDB: Write(10): 2a 00 00 46 29 18 00 00 08 00
        Jun 26 08:50:04 host1 kernel: [926247.565117] end_request: I/O error, dev sda, sector 4598040
        Jun 26 08:50:04 host1 kernel: [926247.569788] Buffer I/O error on device sda1, logical block 574499
        Jun 26 08:50:04 host1 kernel: [926247.574677] lost page write due to I/O error on sda1
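    For illustration, once the write-protect condition clears (typically after a power cycle or re-plugging the stick), the usual recovery is to check the filesystem and remount read-write; device names below are taken from the post and may differ on your system.

        sudo fsck -f /dev/sdg1        # check the filesystem, ideally from a live/rescue boot
        sudo mount -o remount,rw /    # put / back into read-write mode once it is clean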

    Read the article

  • Weird behavior of fork() and execvp() in C

    - by ron
    After some remarks from my previous post, I made the following modifications:

        int main() {
            char errorStr[BUFF3];
            while (1) {
                int i, errorFile;
                char *line = malloc(BUFFER);
                char *origLine = line;
                fgets(line, 128, stdin);                  // get a line from stdin
                // get complete diagnostics on the given string
                lineData info = runDiagnostics(line);
                char command[20];
                sscanf(line, "%20s ", command);
                line = strchr(line, ' ');                 // here I remove the command from the line; the command is stored in "command" above
                printf("The Command is: %s\n", command);
                int currentCount = 0;                     // number of elements in the line
                int *argumentsCount = &currentCount;      // pointer to that
                // get the elements separated
                char **arguments = separateLineGetElements(line, argumentsCount);
                printf("\nOutput after separating the given line from the user\n");
                for (i = 0; i < *argumentsCount; i++) {
                    printf("Argument %i is: %s\n", i, arguments[i]);
                }
                // here we call a method that would execute the commands
                pid_t pid;
                if (-1 == (pid = fork())) {
                    sprintf(errorStr, "fork: %s\n", strerror(errno));
                    write(errorFile, errorStr, strlen(errorStr + 1));
                    perror("fork");
                    exit(1);
                }
                else if (pid == 0)                        // fork was successful
                {
                    printf("\nIn son process\n");
                    // if (execvp(arguments[0], arguments) < 0)   // for the moment I ignore this line
                    if (execvp(command, arguments) < 0)   // execute the command
                    {
                        perror("execvp");
                        printf("ERROR: execvp failed\n");
                        exit(1);
                    }
                }
                else                                      // parent
                {
                    int status = 0;
                    pid = wait(&status);
                    printf("Process %d returned with status %d.", pid, status);
                }
                // print each element of the line
                for (i = 0; i < *argumentsCount; i++) {
                    printf("Argument %i is: %s\n", i, arguments[i]);
                }
                // free all the elements from the memory
                for (i = 0; i < *argumentsCount; i++) {
                    free(arguments[i]);
                }
                free(arguments);
                free(origLine);
            }
            return 0;
        }

    When I enter in the console:

        ls > out.txt

    I get:

        The Command is: ls
        execvp: No such file or directory
        In son process
        ERROR: execvp failed
        Process 4047 returned with status 256.
        Argument 0 is: >
        Argument 1 is: out.txt

    So I guess that the son (child) process is active, but for some reason execvp fails. Why? Regards

    REMARK: The ls command is just an example. I need to make this work with any given command.

    EDIT 1: User input: ls > qq.out

    Program output:

        The Command is: ls
        Output after separating the given line from the user
        Argument 0 is: >
        Argument 1 is: qq.out
        In son process
        >: cannot access qq.out: No such file or directory
        Process 4885 returned with status 512.
        Argument 0 is: >
        Argument 1 is: qq.out
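    The output itself hints at the likely cause: redirection with ">" is a shell feature, not something execvp() provides, so ls receives ">" and "out.txt" as literal arguments; a program has to open() the target file and dup2() it onto stdout itself before calling execvp(). A quick shell illustration of what the child effectively runs:

        # the ">" reaches ls as a literal argument, exactly as if it were quoted away from the shell
        ls '>' out.txt
        # ls: cannot access >: No such file or directory   (wording varies slightly by ls version)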

    Read the article

  • What functionality should a (basic) mock framework have?

    - by user1175327
    If I were to start writing a simple mock framework, what are the things a basic mock framework MUST have? Obviously mocking any object, but what about assertions and perhaps other things? When I think about how I would write my own mock framework I realise how much I really know (or don't know) and what I would trip up on, so this is more for educational purposes. Of course I did research, and this is what I've come up with that a minimal mocking framework should be able to do. My question is: am I missing some important details in my ideas?

    Mocking

    Mocking a class: it should be able to mock any class. The mock should preserve the properties and their original values as they were set in the original class. All method implementations are empty.

    Calls to methods of the mock: the mock framework must be able to define what a mocked method must return, i.e.: $MockObj->CallTo('SomeMethod')->Returns('some value');

    Assertions

    To my understanding mocking frameworks also have a set of assertions. These are the ones I think are most important (taken from SimpleTest):

    expect($method, $args) - Arguments must match if called
    expectAt($timing, $method, $args) - Arguments must match when called on the $timing'th time
    expectCallCount($method, $count) - The method must be called exactly this many times
    expectMaximumCallCount($method, $count) - Call this method no more than $count times
    expectMinimumCallCount($method, $count) - Must be called at least $count times
    expectNever($method) - Must never be called
    expectOnce($method, $args) - Must be called once and with the expected arguments if supplied
    expectAtLeastOnce($method, $args) - Must be called at least once, and always with any expected arguments

    And that's basically, as far as I understand, what a mock framework should be able to do. But is this really everything? It currently doesn't seem like a big deal to build something like this, which is exactly why I have the feeling I'm missing some important details about such a framework. So is my understanding of a mock framework right, or am I missing a lot of details?

    Read the article

  • GNU ld removes section

    - by Jonatan
    I'm writing a boot script for an ARM Cortex-M3 based device. If I compile the assembler boot script and the C application code and then combine the object files and transfer them to my device, everything works. However, if I use ar to create an archive (libboot.a) and combine that archive with the C application there is a problem. I've put the boot code in a section:

        .section .boot, "ax"
        .global _start
        _start:
            .word 0x10000800 /* Initial stack pointer (FIXME!) */
            .word start
            .word nmi_handler
            .word hard_fault_handler
            ... etc ...

    I've found that ld strips this from the final binary (the section "boot" is not available). This is quite natural, as there is no dependency on it that ld knows about, but it causes the device to not boot correctly. So my question is: what is the best way to force this code to be included?
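    For illustration, two common ways to force the archive member to be linked in, plus KEEP() for the case where --gc-sections is discarding it; the toolchain prefix and file names below are assumptions.

        # create an undefined reference so ld pulls the member containing _start out of libboot.a
        arm-none-eabi-ld -T link.ld -u _start main.o -L. -lboot -o image.elf

        # or include every member of the archive regardless of references
        arm-none-eabi-ld -T link.ld main.o --whole-archive -lboot --no-whole-archive -o image.elf

        # if --gc-sections is in use, also protect the section in the linker script:
        #   .text : { KEEP(*(.boot)) *(.text*) } > FLASH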

    Read the article

  • Ubuntu 13.XX unable to mount USB HDD. Tried everything. I/O error boot sector/file system

    - by XaviGG
    I know that there are many related posts, but none of them helped me. I will jump to the last test because it is the one that should work, but it does not. The drive is an external HDD with a single partition, slow-formatted as NTFS in Windows, empty and clean. When checked for errors, Windows reports that no errors were found. Moving to Ubuntu 13.04... GParted throws the first error when trying to read the disk: Input/output error. The content of the disk shows up as unknown, and I am unable to create a partition table or format it; I get the same error when trying. If I try to mount it in the terminal it tells me the same, adding that there is also an I/O error reading the boot sector. I have had this problem since I upgraded (always with a fresh install) to 13.04. I thought it would be solved by 13.10, but it behaves the same. I tried with two different drives (an HDD and an SSHD) that work perfectly in Windows 7. In 13.04 at least I got a mount attempt, where the icon of the drive kept appearing and disappearing until finally it disappeared; now it doesn't even try. Possible cause: the HDD was my old main HDD, so it had WIN, RECOVERY, SYSTEM, UBU and SWAP partitions. Maybe the way or place where the partition table is defined is not the best for an external HDD, but I don't know a lot about that topic. I would really appreciate it if someone could give me a guideline to turn one of these drives into a working external HDD. There are no files to recover, nothing to care about - just completely format the disk so I can use it for storing backups without having to move the files to the Windows partition first, load Windows and then copy them to the external HDD, because I want to use a file comparator for the backups. Thanks a lot. Edit 1: I found an option in Windows to convert it to a dynamic disk, which warns me that I won't be able to run an OS from it after the change. I suppose that is what I need, because in the current mode I cannot safely remove it. But it gives me an error saying it couldn't change the mode.
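    For what it's worth, a hedged first-pass check before trying to re-partition; the device name is an assumption, so substitute whatever dmesg reports when the disk is plugged in.

        dmesg | tail -n 30            # look for USB resets or read errors right after plugging the disk in
        sudo smartctl -a /dev/sdb     # confirm the drive itself still reports healthy (smartmontools)
        sudo parted /dev/sdb print    # see whether the partition table can be read at all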

    Read the article

  • ASP.NET exception gives irrelevant stack trace on YSOD, very challenging!

    - by pootow
    Here is the YSOD: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +428 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 ECommerce.PMethod.Sql.SqlConns.Open() +78 ECommerce.PMethod.Sql.SqlConns..ctor() +120 ECommerce.login.DatasInfo.Proc.UserCenter.IsLogin(String UserGUID, Int32 UserID) +49 ECommerce.login.Rules.Users.UserLogin.isLogin() +44 Config.isUserLogined() +5 Shopping_Shopping.Page_Load(Object sender, EventArgs e) +10 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 [TypeInitializationException: The type initializer for 'ECommerce.ERP.DAL.DBConn' threw an exception.] ECommerce.ERP.DAL.DBConn.get_ConnString() +0 [ObjectDefinitionStoreException: Factory method 'System.String get_ConnString()' threw an Exception.] Spring.Objects.Factory.Support.SimpleInstantiationStrategy.Instantiate(RootObjectDefinition definition, String name, IObjectFactory factory, MethodInfo factoryMethod, Object[] arguments) +257 Spring.Objects.Factory.Support.ConstructorResolver.InstantiateUsingFactoryMethod(String name, RootObjectDefinition definition, Object[] arguments) +624 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateUsingFactoryMethod(String name, RootObjectDefinition definition, Object[] arguments) +60 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.CreateObjectInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +56 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) +436 [ObjectCreationException: Error thrown by a dependency of object 'styleService' defined in 'assembly [ECommerce.Services.Impl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Services.Impl.AppContext.xml] line 56' : Initialization of object failed : Factory method 'System.String get_ConnString()' threw an Exception. 
while resolving 'constructor argument with name promotionservice' to 'promotionService' defined in 'assembly [ECommerce.Services.Impl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Services.Impl.AppContext.xml] line 31' while resolving 'constructor argument with name domainservice' to 'promotionDomainService' defined in 'assembly [ECommerce.Domain, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Domain.AppContext.xml] line 20' while resolving 'constructor argument with name promotionrepos' to 'promotionRepos' defined in 'assembly [ECommerce.Data.AdoNet, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Data.AdoNet.AppContext.xml] line 34' while resolving 'constructor argument with name connstr' to 'ECommerce.ERP.DAL.DBConn#389F399' defined in 'assembly [ECommerce.Data.AdoNet, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null], resource [ECommerce.Data.AdoNet.AppContext.xml] line 34'] Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveReference(IObjectDefinition definition, String name, String argumentName, RuntimeObjectReference reference) +394 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolvePropertyValue(String name, IObjectDefinition definition, String argumentName, Object argumentValue) +312 Spring.Objects.Factory.Support.ObjectDefinitionValueResolver.ResolveValueIfNecessary(String name, IObjectDefinition definition, String argumentName, Object argumentValue) +17 Spring.Objects.Factory.Support.ConstructorResolver.ResolveConstructorArguments(String objectName, RootObjectDefinition definition, ObjectWrapper wrapper, ConstructorArgumentValues cargs, ConstructorArgumentValues resolvedValues) +993 Spring.Objects.Factory.Support.ConstructorResolver.AutowireConstructor(String objectName, RootObjectDefinition rod, ConstructorInfo[] chosenCtors, Object[] explicitArgs) +171 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.AutowireConstructor(String name, RootObjectDefinition definition, ConstructorInfo[] ctors, Object[] explicitArgs) +65 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.CreateObjectInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +161 Spring.Objects.Factory.Support.AbstractAutowireCapableObjectFactory.InstantiateObject(String name, RootObjectDefinition definition, Object[] arguments, Boolean allowEagerCaching, Boolean suppressConfigure) +636 Spring.Objects.Factory.Support.AbstractObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +174 Spring.Objects.Factory.Support.WebObjectFactory.CreateAndCacheSingletonInstance(String objectName, RootObjectDefinition objectDefinition, Object[] arguments) +150 Spring.Objects.Factory.Support.AbstractObjectFactory.GetObjectInternal(String name, Type requiredType, Object[] arguments, Boolean suppressConfigure) +990 Spring.Objects.Factory.Support.AbstractObjectFactory.GetObject(String name) +10 Spring.Context.Support.AbstractApplicationContext.GetObject(String name) +20 ECommerce.Common.ServiceLocator.GetService() +334 ECommerce.Mvc.Controllers.StylesController..ctor() +72 [TargetInvocationException: Exception has been thrown by the target of an invocation.] 
System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86 System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230 System.Activator.CreateInstance(Type type, Boolean nonPublic) +67 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +80 [InvalidOperationException: An error occurred when trying to create a controller of type 'ECommerce.Mvc.Controllers.StylesController'. Make sure that the controller has a parameterless public constructor.] System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +190 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677954 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Version Information: Microsoft .NET Framework Version:2.0.50727.3082; ASP.NET Version:2.0.50727.3082 Question is: the first stack trace is irrelevant to others, what happened? Any ideas? Let me make this more clear: a MVC page uses the spring part trying to load a lazy-init service which constructor wants a connection string through a static property like this: <object id="promotionRepos" type="ECommerce.Data.AdoNet.Promotions.PromotionRepos, ECommerce.Data.AdoNet" lazy-init="true"> <constructor-arg name="provider"> <null /> </constructor-arg> <constructor-arg name="connStr"> <object type="ECommerce.ERP.DAL.DBConn, ECommerce.ERP.DAL" factory-method="get_ConnString" /> </constructor-arg> <property name="RefreshInterval" value="00:00:10" /> </object> the timeout part is some what irrelevent to all others. see this in the first exception: Shopping_Shopping.Page_Load(Object sender, EventArgs e) +10 it's another page at all. And also, ECommerce.PMethod.Sql.SqlConns.Open() uses its own connection string, not the one loaded by spring, it's different module from diffrent team. And I am sure the connection string is correct. And, this ysod cames up randomly. Sometimes nothing is wrong, and sometimes, it appears. I thought there could be something wrong with my database or the network/firewall, I will check it later, but now I want understand this tricky stack trace.

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback of running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is outputted instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment, should also be happen during an automated deployment. 
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • General Purpose ASP.NET Data Source Control

    - by Ricardo Peres
    OK, you already know about the ObjectDataSource control, so what’s wrong with it? Well, for once, it doesn’t pass any context to the SelectMethod, you only get the parameters supplied on the SelectParameters plus the desired ordering, starting page and maximum number of rows to display. Also, you must have two separate methods, one for actually retrieving the data, and the other for getting the total number of records (SelectCountMethod). Finally, you don’t get a chance to alter the supplied data before you bind it to the target control. I wanted something simple to use, and more similar to ASP.NET 4.5, where you can have the select method on the page itself, so I came up with CustomDataSource. Here’s how to use it (I chose a GridView, but it works equally well with any regular data-bound control): 1: <web:CustomDataSourceControl runat="server" ID="datasource" PageSize="10" OnData="OnData" /> 2: <asp:GridView runat="server" ID="grid" DataSourceID="datasource" DataKeyNames="Id" PageSize="10" AllowPaging="true" AllowSorting="true" /> The OnData event handler receives a DataEventArgs instance, which contains some properties that describe the desired paging location and size, and it’s where you return the data plus the total record count. Here’s a quick example: 1: protected void OnData(object sender, DataEventArgs e) 2: { 3: //just return some data 4: var data = Enumerable.Range(e.StartRowIndex, e.PageSize).Select(x => new { Id = x, Value = x.ToString(), IsPair = ((x % 2) == 0) }); 5: e.Data = data; 6: //the total number of records 7: e.TotalRowCount = 100; 8: } Here’s the code for the DataEventArgs: 1: [Serializable] 2: public class DataEventArgs : EventArgs 3: { 4: public DataEventArgs(Int32 pageSize, Int32 startRowIndex, String sortExpression, IOrderedDictionary parameters) 5: { 6: this.PageSize = pageSize; 7: this.StartRowIndex = startRowIndex; 8: this.SortExpression = sortExpression; 9: this.Parameters = parameters; 10: } 11:  12: public IEnumerable Data 13: { 14: get; 15: set; 16: } 17:  18: public IOrderedDictionary Parameters 19: { 20: get; 21: private set; 22: } 23:  24: public String SortExpression 25: { 26: get; 27: private set; 28: } 29:  30: public Int32 StartRowIndex 31: { 32: get; 33: private set; 34: } 35:  36: public Int32 PageSize 37: { 38: get; 39: private set; 40: } 41:  42: public Int32 TotalRowCount 43: { 44: get; 45: set; 46: } 47: } As you can guess, the StartRowIndex and PageSize receive the starting row and the desired page size, where the page size comes from the PageSize property on the markup. There’s also a SortExpression, which gets passed the sorted-by column and direction (if descending) and a dictionary containing all the values coming from the SelectParameters collection, if any. All of these are read only, and it is your responsibility to fill in the Data and TotalRowCount. 
The code for the CustomDataSource is very simple: 1: [NonVisualControl] 2: public class CustomDataSourceControl : DataSourceControl 3: { 4: public CustomDataSourceControl() 5: { 6: this.SelectParameters = new ParameterCollection(); 7: } 8:  9: protected override DataSourceView GetView(String viewName) 10: { 11: return (new CustomDataSourceView(this, viewName)); 12: } 13:  14: internal void GetData(DataEventArgs args) 15: { 16: this.OnData(args); 17: } 18:  19: protected virtual void OnData(DataEventArgs args) 20: { 21: EventHandler<DataEventArgs> data = this.Data; 22:  23: if (data != null) 24: { 25: data(this, args); 26: } 27: } 28:  29: [Browsable(false)] 30: [DesignerSerializationVisibility(DesignerSerializationVisibility.Visible)] 31: [PersistenceMode(PersistenceMode.InnerProperty)] 32: public ParameterCollection SelectParameters 33: { 34: get; 35: private set; 36: } 37:  38: public event EventHandler<DataEventArgs> Data; 39:  40: public Int32 PageSize 41: { 42: get; 43: set; 44: } 45: } Also, the code for the accompanying internal – as there is no need to use it from outside of its declaring assembly - data source view: 1: sealed class CustomDataSourceView : DataSourceView 2: { 3: private readonly CustomDataSourceControl dataSourceControl = null; 4:  5: public CustomDataSourceView(CustomDataSourceControl dataSourceControl, String viewName) : base(dataSourceControl, viewName) 6: { 7: this.dataSourceControl = dataSourceControl; 8: } 9:  10: public override Boolean CanPage 11: { 12: get 13: { 14: return (true); 15: } 16: } 17:  18: public override Boolean CanRetrieveTotalRowCount 19: { 20: get 21: { 22: return (true); 23: } 24: } 25:  26: public override Boolean CanSort 27: { 28: get 29: { 30: return (true); 31: } 32: } 33:  34: protected override IEnumerable ExecuteSelect(DataSourceSelectArguments arguments) 35: { 36: IOrderedDictionary parameters = this.dataSourceControl.SelectParameters.GetValues(HttpContext.Current, this.dataSourceControl); 37: DataEventArgs args = new DataEventArgs(this.dataSourceControl.PageSize, arguments.StartRowIndex, arguments.SortExpression, parameters); 38:  39: this.dataSourceControl.GetData(args); 40:  41: arguments.TotalRowCount = args.TotalRowCount; 42: arguments.MaximumRows = this.dataSourceControl.PageSize; 43: arguments.AddSupportedCapabilities(DataSourceCapabilities.Page | DataSourceCapabilities.Sort | DataSourceCapabilities.RetrieveTotalRowCount); 44: arguments.RetrieveTotalRowCount = true; 45:  46: if (!(args.Data is ICollection)) 47: { 48: return (args.Data.OfType<Object>().ToList()); 49: } 50: else 51: { 52: return (args.Data); 53: } 54: } 55: } As always, looking forward to hearing from you!

    Read the article

  • Part 14: Execute a PowerShell script

    In the series the following parts have been published Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application With PowerShell you can add powerful scripting to your build to for example execute a deployment. If you want more information on PowerShell, please refer to http://technet.microsoft.com/en-us/library/aa973757.aspx For this example we will create a simple PowerShell script that prints “Hello world!”. To create the script, create a new text file and name it “HelloWorld.ps1”. Add to the contents of the script: Write-Host “Hello World!” To test the script do the following: Open the command prompt To run the script you must change the execution policy. To do this execute in the command prompt: powershell set-executionpolicy remotesigned Now go to the directory where you have saved the PowerShell script Execute the following command powershell .\HelloWorld.ps1 In this example I use a relative path, but when the path to the PowerShell script contains spaces, you need to change the syntax to powershell "& '<full path to script>' " for example: powershell "& ‘C:\sources\Build Customization\SolutionToBuild\PowerShell Scripts\HellloWorld.ps1’ " In this blog post, I create a new solution and that solution includes also this PowerShell script. I want to create an argument on the Build Process Template that holds the path to the PowerShell script. In the Build Process Template I will add an InvokeProcess activity to execute the PowerShell command. This InvokeProcess activity needs the location of the script as an argument for the PowerShell command. Since you don’t know the full path at the build server of this script, you can either specify in the argument the relative path of the script, but it is hard to find out what the relative path is. I prefer to specify the location of the script in source control and then convert that server path to a local path. To do this conversion you can use the ConvertWorkspaceItem activity. So to complete the task, open the Build Process Template CustomTemplate.xaml that we created in earlier parts, follow the following steps Add a new argument called “DeploymentScript” and set the appropriate settings in the metadata. See Part 2: Add arguments and variables  for more information. Scroll down beneath the TryCatch activity called “Try Compile, Test, and Associate Changesets and Work Items” Add a new If activity and set the condition to "Not String.IsNullOrEmpty(DeploymentScript)" to ensure it will only run when the argument is passed. 
Add in the Then branch of the If activity a new Sequence activity and rename it to “Start deployment” Click on the activity and add a new variable called DeploymentScriptFilename (scoped to the “Start deployment” Sequence Add a ConvertWorkspaceItem activity on the “Start deployment” Sequence Add a InvokeProcess activity beneath the ConvertWorkspaceItem activity in the “Start deployment” Sequence Click on the ConvertWorkspaceItem activity and change the properties DisplayName = Convert deployment script filename Input = DeploymentScript Result = DeploymentScriptFilename Workspace = Workspace Click on the InvokeProcess activity and change the properties Arguments = String.Format(" ""& '{0}' "" ", DeploymentScriptFilename) DisplayName = Execute deployment script FileName = "PowerShell" To see results from the powershell command drop a WriteBuildMessage activity on the "Handle Standard Output" and pass the stdOutput variable to the Message property. Do the same for a WriteBuildError activity on the "Handle Error Output" To publish it, check in the Build Process Template This leads to the following result We now go to the build definition that depends on the template and set the path of the deployment script to the server path to the HelloWorld.ps1. (If you want to see the result of the PowerShell script, change the Logging verbosity to Detailed or Diagnostic). Save and run the build. A lot of the deployment scripts you have will have some kind of arguments (like username / password or environment variables) that you want to define in the Build Definition. To make the PowerShell configurable, you can follow the following steps. Create a new script and give it the name "HelloWho.ps1". In the contents of the file add the following lines: param (         $person     ) $message = [System.String]::Format(“Hello {0}!", $person) Write-Host $message When you now run the script on the command prompt, you will see the following So lets change the Build Process Template to accept one parameter for the deployment script. You can of course make it configurable to add a for-loop that reads through a collection of parameters but that is out of scope of this blog post. Add a new Argument called DeploymentScriptParameter In the InvokeProcess activity where the PowerShell command is executed, modify the Arguments property to String.Format(" ""& '{0}' '{1}' "" ", DeploymentScriptFilename, DeploymentScriptParameter) Check in the Build Process Template Now modify the build definition and set the Parameter of the deployment to any value and run the build. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Error on `gksu nautilus` because Nautilus cannot create folder "/root/.config/nautilus"

    - by luciehn
    I have a problem running Nautilus in root mode on a fresh installation. I just installed Ubuntu 12.04, and the first thing I did after the first boot was run the command gksudo nautilus; I got this error message:

        Nautilus could not create the required folder "/root/.config/nautilus".
        Before running Nautilus, please create the following folder, or set permissions such that Nautilus can create it.

    I am triple booting Windows 7, Fedora 17 and Ubuntu 12.04. My partition configuration is this:

        sda1 --> (ntfs) Windows 7 boot partition
        sda2 --> (ntfs) Windows 7
        sda3 --> (ext4) Fedora 17 /boot partition
        sda4 --> Extended partition
        sda5 --> LVM Fedora 17, with 3 partitions inside (/, /home and swap)
        sda6 --> (ext4) Ubuntu 12.04 /boot partition
        sda7 --> (ext4) Ubuntu 12.04 swap partition
        sda8 --> (ext4) Ubuntu 12.04 / partition
        sda9 --> (ext4) Ubuntu 12.04 /home partition

    The MBR belongs to Windows, so it is Windows that controls the machine's boot menu. It looks like Nautilus does not have permission to write in /root/.config, but it should, right? I prefer to ask before doing anything wrong. Any ideas?
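    For illustration, a hedged workaround that simply creates the folder Nautilus is asking for, assuming /root itself is intact and owned by root.

        sudo mkdir -p /root/.config/nautilus     # create the folder named in the error
        sudo chown -R root:root /root/.config    # make sure root owns its own config tree
        gksudo nautilus                          # then try again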

    Read the article

  • Some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    It gives the following details of the package operation failing after using the sofware updater on 12.10: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 217566 files and directories currently installed.) Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-36-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ... 
locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-42-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ... Unpacking replacement memtest86+ ... dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack): unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb Error in function: Output of df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda6 38G 6.5G 30G 19% / udev 3.5G 4.0K 3.5G 1% /dev tmpfs 1.5G 924K 1.5G 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 3.6G 296K 3.6G 1% /run/shm none 100M 64K 100M 1% /run/user /dev/sda1 197M 87M 111M 44% /boot /dev/sda5 243G 971M 230G 1% /home
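    "Operation not permitted" while making a backup link under /boot is often either an immutable file attribute or a read-only /boot; a quick hedged check, with file names taken from the error messages above.

        mount | grep /boot                                               # confirm /boot is mounted read-write
        lsattr /boot/System.map-3.5.0-42-generic /boot/memtest86+.bin    # an 'i' flag would block the hard link
        # sudo chattr -i <file>    # clear the attribute if present, then re-run the update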

    Read the article

  • Ubuntu on the Sony Vaio VPCY 21S1E

    - by tiax
    My girlfriend got a new subnotebook for Christmas (Sony Vaio VPCY21S1E), which comes with an "Intel HD Graphics" VGA adapter. When I try to boot the Ubuntu installer from USB, the screen goes blank after a short while, before I even see the Ubuntu logo. However, when I select "nomodeset" in the boot options, I can boot to the CLI login prompt. When I start X, though, it only works in VESA mode (I've read Intel eventually got rid of user-mode setting and only offers KMS now, which I've disabled to get it to boot). What can I do to enable a) a higher resolution than 1024x768 in VESA, and b) hardware acceleration for Compiz, video playback, etc.?
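    For illustration, once the system is installed the same workaround can be made permanent in GRUB; a sketch only, since whether nomodeset is still needed after driver updates is a separate question.

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
        # then regenerate the configuration:
        sudo update-grub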

    Read the article

  • How do I use LTSP with a Dell FX170 thin client?

    - by v4169sgr
    Just got a reconditioned Dell OptiPlex FX170 thin client delivered, without an image. On power-up, I see the message:

        DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER

    I am very sure all the wires etc. are connected properly. I would like it to PXE boot and find my LTSP installation. I have an HP T5525 with the same connectivity that works fine. On F12 there's a BIOS boot order menu, but the only options available are LS120, the HDD, CD/DVD [via USB presumably], and various USB options. I do not see 'PXE' or any network card, and I don't know how to change the boot order :( Any help appreciated!

    Read the article

  • Ubuntu only boots with USB plugged in

    - by Ben
    I'm new to the Linux world so please bear with me! :-) I installed Ubuntu from a USB drive onto my hard drive. If I boot the PC without the USB drive I used, Ubuntu will not load; after booting I can unplug it without any consequences. I looked on the hard drive and there is a boot folder. On the USB drive, this is the tree content:

        /media/disk$ tree
        .
        |-- adtext.cfg
        |-- boot.cat
        |-- f10.txt
        |-- f1.txt
        |-- f2.txt
        |-- f3.txt
        |-- f4.txt
        |-- f5.txt
        |-- f6.txt
        |-- f7.txt
        |-- f8.txt
        |-- f9.txt
        |-- initrd.gz
        |-- isolinux.bin
        |-- isolinux.cfg
        |-- ldlinux.sys
        |-- linux
        |-- menu.c32
        |-- menu.cfg
        |-- po4a.cfg
        |-- prompt.cfg
        |-- splash.png
        |-- stdmenu.cfg
        |-- syslinux.cfg
        |-- text.cfg
        |-- ubnfilel.txt
        |-- ubnpathl.txt
        `-- vesamenu.c32

    Am I correct in my assumption that the boot loading is tied to the USB drive? If so, how do I get it to boot without the USB? I'm guessing copying something into some location and modifying GRUB?
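    That layout strongly suggests the bootloader ended up on the USB stick rather than on the hard drive. A hedged sketch of the usual fix, run from the installed system; the device name /dev/sda is an assumption, so confirm it with sudo fdisk -l first.

        sudo grub-install /dev/sda    # put GRUB in the hard drive's MBR
        sudo update-grub              # regenerate /boot/grub/grub.cfg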

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

        sda1 ext3 /boot
        sda2 ext4 / (current version)
        sda3 ext4 / (old version)
        sda4 swap /swap
        sda5 ntfs (contains folders symbolically linked to /home on /)

    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue: I'd like to be able to drop the install media into /distro on the current version and install a new version on / of the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have:

        downloaded the install.iso
        created a folder /distro
        copied the install.iso into /distro
        extracted vmlinuz and initrd.lz into /distro

    Then I modified /boot/grub/menu.lst with the following entry:

        title Install Linux
        root (hd0,1)
        kernel /distro/vmlinuz
        initrd /distro/initrd.lz

    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

        unlzma < initrd.lz > initrd.img

    and updating the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the ISO, and launches the installer. Am I missing something here?

    Update: First, I wanted to say that the accepted answer would have been the best option if I were doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed in order to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used UNetbootin to drop the LiveCD onto a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken to use it.

    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used GParted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.

    Step Two: Mount the newly formatted partition (we'll call the mount point ubuntu for simplicity):

        sudo mkdir /mnt/ubuntu
        sudo mount /dev/sda3 /mnt/ubuntu

    Step Three: Get debootstrap:

        sudo apt-get install debootstrap

    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):

        sudo mkdir /media/cdrom
        sudo mount -o loop ~/ubuntu.iso /media/cdrom

    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):

        sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

    The settings here vary. While I loaded debootstrap using an install ISO, you can also have debootstrap automatically download and install from a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.
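    For reference, the missing piece the update alludes to is usually the casper boot parameters; a hedged menu.lst sketch for an Ubuntu/Mint live ISO, with file names taken from the post (exact parameters vary by release).

        title Install Linux (loopback ISO)
        root (hd0,1)
        kernel /distro/vmlinuz boot=casper iso-scan/filename=/distro/install.iso quiet splash
        initrd /distro/initrd.lz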

    Read the article

  • Macbook Pro 13" Retina (10,2): Keyboard and Touchpad don't work correcltly

    - by Dirk
    I have been dealing with Ubuntu for about 5 years and have installed it on several laptops. Now I'm stuck trying to install Ubuntu 12.04.1 on a brand new MacBook Pro 13" Retina (10,2). I can successfully start Ubuntu from a USB stick; the Ubuntu desktop is visible and a mouse cursor is visible, but there is no response to keyboard or touchpad input, so I cannot really install Ubuntu on the MacBook. The details of my approach:

    - Prepare an empty USB stick.
    - Download "ISO 2 USB EFI Booter for Mac" and copy the file bootX64.efi to the USB drive as /efi/boot/bootX64.efi.
    - Download Ubuntu 12.04.1 Desktop for Mac from http://cdimage.ubuntu.com/releases/1-amd64+mac.iso and copy the ISO to the USB drive as /efi/boot/boot.iso.
    - Put the USB stick into the MacBook.
    - Press and hold the "alt" key while switching the MacBook on.
    - Select "EFI Boot" from the boot menu that appears and press the Return / Enter key.
    - Immediately a black terminal screen appears with the headline "Welcome to the Ubuntu ISO << - EFI booter". 30 seconds later the familiar Ubuntu startup graphics screen is showing. A further 20 seconds later Ubuntu has started and the desktop is visible - in wonderfully fine resolution.
    - Now the computer does not respond to any actions on the touchpad or the keyboard.

    Has anyone installed Ubuntu on this MacBook Pro 13" Retina (10,2) successfully? On the site https://help.ubuntu.com/community/MacBookPro this unit is not listed yet, anyway. Any help would be greatly appreciated! Dirk

    PS: I could now install Ubuntu with an external USB keyboard/mouse set. But now, after showing the GRUB menu, a kernel panic appears and booting stops :-/ It seems that the Ubuntu images don't fit the MacBook Pro Retina 13" (10,2) yet.

    PPS: OK, there are new facts: if I edit the boot options and enter "nomodeset noapic", Ubuntu starts and keyboard and touchpad work! Now I have to enable WiFi...

    PPPS: After installing the Broadcom firmware from the USB live stick as described in other posts, WiFi was enabled. Then I could update Ubuntu normally to 12.10. After this, I no longer need to enter "nomodeset noapic" in the GRUB menu. The last thing now is the touchpad: the driver seems not to be there and the touchpad only shows up as a mouse. t.b.c.

    Read the article
