Search Results

Search found 25718 results on 1029 pages for 'external hard drive'.


  • how to use a batch file to delete a line of text in a bunch of text files? [on hold]

    - by wbt
    I have a bunch of .txt files on my D: drive, placed randomly in different locations; some of the files also contain symbols. I want a batch file that deletes a specific line from all of them at the same time, instead of editing each file one by one. Please suggest an approach that does not create a new text file at some other location with the changes incorporated - I do not want the input.txt/output.txt thing. I just need the original files to be replaced with the changed versions as soon as I run the batch file. For example, given D:\abc\1.txt and D:\xyz\2.txt, I want the 3rd line of both erased completely with a single click, and each changed file saved under the same name in the same location, replacing the old one. Ideally it would use some sort of *.txt wildcard, so a single batch file (perhaps sitting on another drive) could change every file with the .txt extension on the drive, without me placing the batch file into each and every folder separately and running it there. Alternatively, a VBS file is also welcome. Sorry for the long and thorough message, but I have been wandering all over the internet for the last two days just for this one batch file, and all the information I find is jargon to me, as I am not experienced with scripting. Please describe the code too. Your help is much appreciated.
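    A minimal sketch of the kind of script being asked for, offered with caveats: it is untested here, it assumes plain text files, and although it writes through a temporary file internally, the temporary copy immediately replaces the original under the same name in the same folder, so no separate output file is left behind. Lines containing special shell characters (such as ! or ^) may need extra care.

        @echo off
        setlocal EnableExtensions
        rem Delete line 3 of every .txt file under D:\ -- test on copies first!
        for /r "D:\" %%F in (*.txt) do (
            rem findstr /n prefixes each line with its number; echo back all but line 3
            >"%%F.tmp" (
                for /f "usebackq tokens=1* delims=:" %%A in (`findstr /n "^" "%%F"`) do (
                    if not "%%A"=="3" echo(%%B
                )
            )
            rem Replace the original file in place: same name, same location
            move /y "%%F.tmp" "%%F" >nul
        )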

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected the question for the longest time because frankly I think that JSON as a configuration store doesn't buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, in JSON into JSON property delimiters and formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want to force a hard dependency on JSON.NET on you unless you want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to v4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - it really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary one is to avoid forcing JSON.NET to load unless you actually use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or IntelliSense
    - The host application must hold a reference to JSON.NET or you get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it: if you want to use JSON configuration settings, JSON.NET needs to be loaded in the project, and if this is a Web project it'll likely be there already. So there are a few things needed to make this work:

    - Dynamically create an instance and optionally attempt to load the assembly (if not loaded yet)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically, it doesn't deal with object activation, truly dynamic (string-based) member activation, or access to non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process: loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly first. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet, since nothing has accessed it yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects assemblies are loaded just in time, only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;

            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;

                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }

            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through the assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find it manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading the assembly when instantiation fails, since a missing assembly is the main reason the load would fail. Finally it caches the loaded instance for reuse (according to James, the JSON.NET serializer instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling =
                    (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance, and when that fails it tries to load the assembly and then retries the instantiation. Once the instance is loaded, some configuration occurs on it. Specifically, I set the ReferenceLoopHandling option so the serializer doesn't blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an enum value to be set. There's no real easy way (short of using the underlying numeric value) to set a property or pass parameters from static values or enums, which means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me: the function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have enum values serialized as strings rather than the default numeric values. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.
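    The ReflectionUtils.GetStaticProperty() helper used above isn't shown in the article. A hedged sketch of what such a helper might look like - a plausible reconstruction based on the Type.InvokeMember() description, not the article's actual code - assuming a using for System.Reflection and the GetTypeFromName() method shown earlier:

        /// <summary>
        /// Hypothetical reconstruction: reads a static property or field
        /// from a type given by its full type name.
        /// </summary>
        public static object GetStaticProperty(string typeName, string property)
        {
            Type type = GetTypeFromName(typeName);
            if (type == null)
                return null;

            try
            {
                // GetField is included because enum members such as
                // Newtonsoft.Json.ReferenceLoopHandling.Ignore are static fields.
                return type.InvokeMember(property,
                                         BindingFlags.Static | BindingFlags.Public |
                                         BindingFlags.GetField | BindingFlags.GetProperty,
                                         null, null, null);
            }
            catch
            {
                return null;
            }
        }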
    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, one each for string and file. Here's what the file serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();

                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter, configure a couple of enum settings on it, and call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable. For full-circle operation, here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to deserialize from</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }

            return result;
        }

    This code is a little more compact since there are no prettifying options to set.
    Here the JsonTextReader is created dynamically, and it receives the output of the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations; the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code needed to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the configuration file where configuration settings are
            /// stored. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                    JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Reads configuration settings into a new instance of the configuration type.
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(
                                 JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to the JsonConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);
                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);
                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - implementing three methods will do in most cases.
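    To make the wiring concrete, here is a rough usage sketch. The MyAppConfig class, the file path, and the Initialize() call are illustrative assumptions about the configuration base class rather than verbatim library API:

        // Hypothetical configuration class - illustration only
        public class MyAppConfig : AppConfiguration
        {
            public string ApplicationName { get; set; }
            public int MaxRetries { get; set; }
        }

        // Hook the JSON provider up and load (or create) the backing file
        var provider = new JsonFileConfigurationProvider<MyAppConfig>()
        {
            JsonConfigurationFile = @"C:\temp\MyAppConfig.json"   // hypothetical path
        };

        var config = new MyAppConfig();
        config.Initialize(provider);   // assumed base-class hookup; the exact call may differ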
    Note that the provider code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In some informal testing it looks like native code is nearly twice as fast as the dynamic code, with most of the slowness attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will vary with the size of the objects serialized, though - the larger the object, the more processing time is spent inside the actual dynamically activated components, and the smaller the difference will be. Dynamic code is always slower, but how much it really affects your application depends primarily on how frequently the dynamic code is called relative to the non-dynamic code. In most situations where dynamic code is used "to get the process rolling", as I do here, the overhead is small enough not to matter.

    All that being said, I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly sluggish performance. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer handling AJAX requests on a Web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    Dynamic loading is not something you need often, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit that depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial: it avoids shipping extra files, adding dependencies, and loading down distributions. These days, when new projects in Visual Studio pull in 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them), but I seem to be using the wrong keywords to search for it... :(

    A common task I face while programming is dealing with lists of objects from different sources that I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: for instance, the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might not always be feasible either. Another characteristic of the problem is that neither list can be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing it and rebuilding it from scratch is not an option.

    So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example), but it always felt like there should be a cleaner and more efficient way. Here's an example approach:

    1. Iterate over the master list
    2. Look up each item in the "slave list"
    3. Add items that do not yet exist
    4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list)
    5. When done, iterate once more over the slave list
    6. Remove all objects that have not been tagged (see 4.)

    (A rough code sketch of this approach appears at the end of this question.)

    Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worth noting: in many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list is a COM interface that exposes nothing but GetFirst-/GetNext-style methods. I mention this because of the suggestions to either sort the list or subclass it, both of which are unfortunately not practical in these cases unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: in particular, the elements in the master list might be available as interface references only.
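    A minimal C# sketch of the mark-and-sweep approach described in the question, assuming the caller can supply a stable key for items on both sides (the key delegates are hypothetical; they stand in for whatever identity survives a refresh, since the master instances themselves may be recreated each time):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ListSync
        {
            // Sync 'slave' to match 'master': add missing items, remove stale ones,
            // and never rebuild the slave list from scratch.
            public static void Sync<TMaster, TSlave>(
                IEnumerable<TMaster> master,
                IList<TSlave> slave,
                Func<TMaster, object> masterKey,   // stable identity of a master item
                Func<TSlave, object> slaveKey,     // stable identity of a slave item
                Func<TMaster, TSlave> createSlave) // expensive, so called only for new items
            {
                var existing = new HashSet<object>(slave.Select(slaveKey));
                var seen = new HashSet<object>();

                foreach (var m in master)
                {
                    var key = masterKey(m);
                    seen.Add(key);                   // step 4: "tag" via a key set
                    if (!existing.Contains(key))
                        slave.Add(createSlave(m));   // step 3: add new items
                }

                // steps 5-6: sweep out slave items whose key was never seen
                for (int i = slave.Count - 1; i >= 0; i--)
                    if (!seen.Contains(slaveKey(slave[i])))
                        slave.RemoveAt(i);
            }
        }

    With hashed lookups this runs in O(n + m) on average, which avoids the quadratic cost of looking each master item up in the slave list by linear search.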

    Read the article

  • How to use Crtl in a Delphi unit in a C++Builder project? (or link to C++Builder C runtime library)

    - by Craig Peterson
    I have a Delphi unit that statically links a C .obj file using the {$L xxx} directive. The C file is compiled with C++Builder's command-line compiler. To satisfy the C file's runtime library dependencies (_assert, memmove, etc.), I'm including the crtl unit Allen Bauer mentioned here.

        unit FooWrapper;

        interface

        implementation

        uses
          Crtl;  // Part of the Delphi RTL

        {$L FooLib.obj}  // Compiled with "bcc32 -q -c foolib.c"

        procedure Foo; cdecl; external;

        end.

    If I compile that unit in a Delphi project (.dproj), everything works correctly. If I compile that unit in a C++Builder project (.cbproj), it fails with the error:

        [ILINK32 Error] Fatal: Unable to open file 'CRTL.OBJ'

    And indeed, there isn't a crtl.obj file in the RAD Studio install folder. There is a .dcu, but no .pas. Trying to add crtdbg to the uses clause (the C header where _assert is defined) gives an error that it can't find crtdbg.dcu. If I remove the uses clause, it instead fails with errors that __assert and _memmove aren't found. So, in a Delphi unit in a C++Builder project, how can I export functions from the C runtime library so they're available for linking? I'm already aware of Rudy Velthuis's article. I'd like to avoid manually writing Delphi wrappers if possible, since I don't need them in Delphi, and C++Builder must already include the necessary functions.

    Edit: For anyone who wants to play along at home, the code is available in Abbrevia's Subversion repository at https://tpabbrevia.svn.sourceforge.net/svnroot/tpabbrevia/trunk. I've taken David Heffernan's advice and added an "AbCrtl.pas" unit that mimics crtl.dcu when compiled in C++Builder. That got the PPMd support working, but the Lzma and WavPack libraries both fail with link errors:

        [ILINK32 Error] Error: Unresolved external '_beginthreadex' referenced from ABLZMA.OBJ
        [ILINK32 Error] Error: Unresolved external 'sprintf' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external 'strncmp' referenced from ABWAVPACK.OBJ
        [ILINK32 Error] Error: Unresolved external '_ftol' referenced from ABWAVPACK.OBJ

    AFAICT, all of them are declared correctly, and the _beginthreadex one is actually declared in AbLzma.pas, so it's used by the pure Delphi compile as well. To see it yourself, just download the trunk (or just the "source" and "packages" directories), disable the {$IFDEF BCB} block at the bottom of AbDefine.inc, and try to compile the C++Builder "Abbrevia.cbproj" project.

    Read the article

  • Log4j: Issues with the FallbackErrorHandler

    - by rdogpink
    I am working on a client-server application and wanted to implement a flexible logging framework, so I chose log4j, which doesn't really evolve anymore but is still a handy framework. Because the logging happens across the network, I wanted a solution for the case where the network drive isn't available, so the logger has to change its destination file(s). I used the FallbackErrorHandler (configured with an XML file) from the log4j library, and the implementation worked: when my network drive isn't available, it switches to a local log file, so no logging should be lost. But I have hit two problems since yesterday and couldn't figure out how to solve them.

    No return to the initial logging configuration: When the network drive is available again and the logger could write to the old destinations, log4j still logs to the local drive, and I can't figure out how to notify the original (primary) logger to start again. I also tried to attach a second appender to the ErrorHandler, which would mirror the failed primary logger: it would keep trying to write to the network destination, and once the network is back it would log to both files, the local one and the one on the network drive. But unfortunately it didn't work out; I only got a failure message that the errorHandler content doesn't fit:

        log4j:WARN The content of element type "errorHandler" must match "(param*,root-ref?,logger-ref*,appender-ref?)".

    This is the responsible code:

        <appender name="TraceAppender" class="org.apache.log4j.DailyRollingFileAppender">
            <!-- The second appender-ref "TestAppender" leads to the error. -->
            <errorHandler class="org.apache.log4j.varia.FallbackErrorHandler">
                <logger-ref ref="com.idoh"/>
                <appender-ref ref="TraceFallbackAppender"/>
                <appender-ref ref="TestAppender"/>
            </errorHandler>
            <param name="datePattern" value=".yyyy-MM-dd" />
            <param name="file" value="logs/Trace.txt" />
            <layout class="org.apache.log4j.PatternLayout">
                <param name="ConversionPattern" value="%-6r %d{HH:mm:ss,SSS} [%t] %-5p - %m%n"/>
            </layout>
        </appender>

    So, how could I trigger log4j to reset to the initial configuration, or hold a second appender in parallel to the "local logger"? My application should work by itself and shouldn't have to be restarted often.

    The first error message is swallowed: I noticed that the first message that triggers the switch between the primary logger and the FallbackErrorHandler (for example, a logging request to a read-only file) is swallowed, so neither the primary logger logs it (because it can't) nor does the backup logger know what it missed.

    Has anybody else run into these problems and solved them? Or has any suggestions?
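    For reference, the quoted warning is log4j's DTD speaking: the content model "(param*,root-ref?,logger-ref*,appender-ref?)" allows at most one appender-ref inside an errorHandler element, so a configuration that validates has to settle on a single fallback appender, roughly like this (a sketch derived from the config above; it addresses only the validation error, not the switch-back problem):

        <errorHandler class="org.apache.log4j.varia.FallbackErrorHandler">
            <logger-ref ref="com.idoh"/>
            <!-- only one fallback appender is permitted by the DTD -->
            <appender-ref ref="TraceFallbackAppender"/>
        </errorHandler>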

    Read the article

  • Error #1009 Cannot access a property or method of a null object reference.

    - by user288920
    Hey everyone, I'm trying to import an external SWF with a scrollbar (which calls out to an external .as file) into my main SWF. Someone told me the issue is that my scrollbar isn't instantiated yet, but stopped short of helping me fix the problem. Here's the error:

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at Scrollbar/init()
            at Sample2_fla::MainTimeline/scInit()
            at flash.display::DisplayObjectContainer/addChild()
            at Sample2_fla::MainTimeline/frame1()

    In my main SWF I click a button to load the external SWF. I then want to click another button in the external SWF and reveal my scrollbar (alpha=1;). The scrollbar is the issue. Here's my script:

    Sample1.swf (main):

        this.addEventListener(MouseEvent.CLICK, clickListener);
        var oldSection = null;

        function clickListener(evt:Event) {
            if (evt.target.name == "button_btn") {
                loadSection("Sample2.swf");
            }
        }

        function loadSection(filePath:String) {
            var url:URLRequest = new URLRequest(filePath);
            var ldr:Loader = new Loader();
            ldr.contentLoaderInfo.addEventListener(Event.COMPLETE, sectionLoadedListener);
            ldr.load(url);
        }

        function sectionLoadedListener(evt:Event) {
            var section = evt.target.content;
            if (oldSection) {
                removeChild(oldSection);
            }
            oldSection = section;
            addChild(section);
            section.x = 0;
            section.y = 0;
        }

    Sample2.swf (external):

        import com.greensock.*;
        import com.greensock.easing.*;
        import com.greensock.plugins.*;

        scroll_mc.alpha = 0;

        import Scrollbar;
        var sc:Scrollbar = new Scrollbar(scroll_mc.text, scroll_mc.maskmc, scroll_mc.scrollbar.ruler,
            scroll_mc.scrollbar.background, scroll_mc.area, true, 6);
        sc.addEventListener(Event.ADDED, scInit);
        addChild(sc);

        function scInit(e:Event):void {
            sc.init();
        }

        button2_btn.addEventListener(MouseEvent.CLICK, clickListener);

        function clickListener(evt:MouseEvent) {
            TweenMax.to(this.scroll_mc, 1, {alpha:1});
        }

    I really appreciate your help. Cheers!
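    One observation, offered as a guess rather than a confirmed fix: Event.ADDED fires as soon as the scrollbar is added to any display object container, which can be before the loaded SWF itself is on the display list; if Scrollbar.init() touches the stage or a not-yet-attached parent, deferring the call until Event.ADDED_TO_STAGE may avoid the null reference:

        // Hypothetical variation: wait until the instance is actually on the
        // stage before running init(), instead of plain Event.ADDED.
        sc.addEventListener(Event.ADDED_TO_STAGE, scInit);
        addChild(sc);

        function scInit(e:Event):void {
            sc.removeEventListener(Event.ADDED_TO_STAGE, scInit);
            sc.init();
        }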

    Read the article

  • Tunnel is up but cannot ping directly connected network

    - by drmanalo
    We configured a site-to-site VPN, and here is the topology. I control the network on the left but not the one on the right. All devices in our network have public IPs.

        Server---ASA5505---Cisco887======Internet======ASA5510---devices

    I can see the tunnel is up and can do an extended ping using a loopback interface. From the 10.175 and 10.165 networks they can also ping my loopback address. I can also dial in using a Cisco VPN client and connect to the devices on the right.

        #show crypto session
        Crypto session current status
        Interface: Vlan3
        Profile: xxx-profile
        Session status: UP-ACTIVE
        Peer: 213.121.x.x port 500
          IKEv1 SA: local 77.245.x.x/500 remote 213.121.x.x/500 Active
          IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.175.0.0/255.255.128.0
                Active SAs: 0, origin: crypto map
          IPSEC FLOW: permit ip 10.0.20.0/255.255.255.240 10.165.0.0/255.255.192.0
                Active SAs: 2, origin: crypto map

        #ping 10.165.29.39 source loopback 2
        Type escape sequence to abort.
        Sending 5, 100-byte ICMP Echos to 10.165.29.39, timeout is 2 seconds:
        Packet sent with a source address of 10.0.20.1
        !!!!!
        Success rate is 100 percent (5/5), round-trip min/avg/max = 16/17/20 ms

    My problem is that the devices on the right cannot reach my server; they can only ping the loopback address and nothing else. I'm pasting some diagnostics related to routing, thinking perhaps routing is my issue. I can paste the whole running-config on my side of the network if needed.

        #show ip int brief
        Interface          IP-Address    OK? Method Status                Protocol
        ATM0               unassigned    YES NVRAM  administratively down down
        Ethernet0          unassigned    YES NVRAM  administratively down down
        FastEthernet0      unassigned    YES unset  up                    up    (connected to ASA)
        FastEthernet1      unassigned    YES unset  administratively down down
        FastEthernet2      unassigned    YES unset  administratively down down
        FastEthernet3      unassigned    YES unset  up                    up
        Loopback1          10.0.20.65    YES NVRAM  up                    up
        Loopback2          10.0.20.1     YES NVRAM  up                    up
        Virtual-Template1  77.245.x.x    YES unset  up                    down
        Virtual-Template2  77.245.x.x    YES unset  up                    down
        Vlan1              unassigned    YES unset  down                  down
        Vlan3              77.245.x.x    YES NVRAM  up                    up    (connected to the Internet)

        #show run | section ip route
        ip route 0.0.0.0 0.0.0.0 77.245.x.x
        ip route 213.121.240.36 255.255.255.255 Vlan3

        #show access-list
        Extended IP access list 102
            10 permit ip 10.0.20.0 0.0.0.15 10.175.0.0 0.0.127.255 (3332 matches)
            20 permit ip 10.0.20.0 0.0.0.15 10.165.0.0 0.0.63.255 (3498 matches)

        #show vlan-switch
        VLAN Name                             Status    Ports
        ---- -------------------------------- --------- -------------------------------
        1    default                          active
        3    VLAN0003                         active    Fa0, Fa1, Fa2, Fa3
        1002 fddi-default                     act/unsup
        1003 token-ring-default               act/unsup
        1004 fddinet-default                  act/unsup
        1005 trnet-default                    act/unsup

        #show ip route
        Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
               D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
               N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
               E1 - OSPF external type 1, E2 - OSPF external type 2
               i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
               ia - IS-IS inter area, * - candidate default, U - per-user static route
               o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
               + - replicated route, % - next hop override

        Gateway of last resort is 77.245.x.x to network 0.0.0.0

        S*    0.0.0.0/0 [1/0] via 77.245.x.x
              10.0.0.0/8 is variably subnetted, 5 subnets, 3 masks
        C        10.0.20.0/28 is directly connected, Loopback2
        L        10.0.20.1/32 is directly connected, Loopback2
        C        10.0.20.64/28 is directly connected, Loopback1
        L        10.0.20.65/32 is directly connected, Loopback1
        S        10.165.0.0/18 [1/0] via 213.121.x.x
              77.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
        S        77.0.0.0/8 [1/0] via 77.245.x.x
        C        77.245.x.x/29 is directly connected, Vlan3
        L        77.245.x.x/32 is directly connected, Vlan3
              213.121.x.0/32 is subnetted, 1 subnets
        S        213.121.x.x is directly connected, Vlan3

    I read some posts here which point to a NATing issue, but I'm not sure of my next step. Should I translate my public address to a private one and route it to the loopback address? (Only guessing.)

    CISCO VPN site to site
    Site-to-Site VPN between two ASA 5505s only working in one direction

    Hope someone can help. Thanks in advance!

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API implemented with ServiceStack and hosted in IIS. While load testing the API we discovered that the response times are good, but that they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500 ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrent users per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times. Increasing the load to 5,000 concurrent users per server gives us an average response time per endpoint of around 7 seconds.

    Memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At the peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. The screenshot below shows some key counters in perfmon during a load test with a total of 10,000 concurrent users. The highlighted counter is requests/second. To the right of the screenshot you can see the requests-per-second graph becoming really erratic. This is the main indicator for slow response times: as soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

        <system.web>
            <applicationPool
                maxConcurrentRequestsPerCPU="5000"
                maxConcurrentThreadsPerCPU="0"
                requestQueueLimit="5000" />
        </system.web>

    More details:

    - The purpose of the API is to combine data from various external sources and return it as JSON.
    - It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the required data; any subsequent requests for the same resource get results from the cache.
    - We have a "cache runner", implemented as a background process, that updates the information in the cache at certain set intervals.
    - We have added locking around the code that fetches data from the external resources.
    - We have also implemented the services that fetch data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class.

    Could we be hitting a limitation in terms of the number of threads available to the process?
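    One avenue related to that last question, offered purely as a hedged illustration (the values are placeholders and would need to be measured, not copied): on .NET 4.0 the CLR thread pool injects new threads above the configured minimum only gradually, which can show up as a latency cliff under a sudden burst of concurrent requests. Raising the thread-pool floor at application startup is one way to test that theory:

        using System.Threading;

        public static class ThreadPoolTuning
        {
            // Sketch: raise the minimum worker/IOCP thread counts so the pool
            // doesn't throttle thread injection during bursts. Call once at
            // startup (e.g. from Application_Start). Multiplier is a placeholder;
            // measure response times before and after under the same load.
            public static void RaiseFloor()
            {
                int workers, iocp;
                ThreadPool.GetMinThreads(out workers, out iocp);
                ThreadPool.SetMinThreads(workers * 4, iocp * 4);
            }
        }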

    Read the article

  • Ubuntu 12.04 KVM running Ubuntu 12.04 with linux-image-virtual crash on boot

    - by D.Mill
    One of my VMs is stuck on "pause" in virsh. If I destroy and restart it, it will go to pause after a bit of time as "running". I can at best enter my username at login if I'm quick but it'll then shutdown. I don't know where to start with this so any help would be great!! I can access the VMs files via guestfish. the kern.log and syslog don't populate new lines. This is the last input I get from kern.log: Dec 13 11:21:08 soft201 kernel: imklog 5.8.6, log source = /proc/kmsg started. Dec 13 11:21:08 soft201 kernel: [ 0.000000] Initializing cgroup subsys cpuset Dec 13 11:21:08 soft201 kernel: [ 0.000000] Initializing cgroup subsys cpu Dec 13 11:21:08 soft201 kernel: [ 0.000000] Linux version 3.2.0-34-virtual (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #53-Ubuntu SMP Thu Nov 15 11:08:40 UTC 2012 (Ubuntu 3.2.0-34.53-virtual 3.2.33) Dec 13 11:21:08 soft201 kernel: [ 0.000000] Command line: root=UUID=61d48b48-a06a-48fb-842e-b38014086a93 ro quiet splash Dec 13 11:21:08 soft201 kernel: [ 0.000000] KERNEL supported cpus: Dec 13 11:21:08 soft201 kernel: [ 0.000000] Intel GenuineIntel Dec 13 11:21:08 soft201 kernel: [ 0.000000] AMD AuthenticAMD Dec 13 11:21:08 soft201 kernel: [ 0.000000] Centaur CentaurHauls Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-provided physical RAM map: Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 0000000000100000 - 00000000dfffc000 (usable) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000dfffc000 - 00000000e0000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] BIOS-e820: 0000000100000000 - 0000000a20000000 (usable) Dec 13 11:21:08 soft201 kernel: [ 0.000000] NX (Execute Disable) protection: active Dec 13 11:21:08 soft201 kernel: [ 0.000000] DMI 2.4 present. Dec 13 11:21:08 soft201 kernel: [ 0.000000] DMI: Bochs Bochs, BIOS Bochs 01/01/2007 Dec 13 11:21:08 soft201 kernel: [ 0.000000] e820 update range: 0000000000000000 - 0000000000010000 (usable) ==> (reserved) Dec 13 11:21:08 soft201 kernel: [ 0.000000] e820 remove range: 00000000000a0000 - 0000000000100000 (usable) Dec 13 As you can see the last line gets cut off. I don't even know if this is that relevant. dmesg logs are empty. 
    The qemu log for the VM returns this:
    2012-12-13 12:29:47.584+0000: starting up
    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 40960 -smp 14,sockets=14,cores=1,threads=1 -name numerink201 -uuid f4a889ed-a089-05d0-cc9d-9825ab1faeba -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/numerink201.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/client.soft.fr/tmpcZAD9U.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -fsdev local,security_model=none,id=fsdev-fs0,path=/home/shared_folders/soft201 -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=hostshare,bus=pci.0,addr=0x5 -netdev tap,fd=18,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:00:1d:b9:e7,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
    char device redirected to /dev/pts/3
    qemu: terminating on signal 15 from pid 28248
    2012-12-13 12:30:14.455+0000: shutting down
    I've added more logging; libvirt.log gives me this:
    2012-12-13 13:24:38.525+0000: 27694: info : libvirt version: 0.9.8
    2012-12-13 13:24:38.525+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory
    2012-12-13 13:24:38.525+0000: 27694: warning : qemuCapsInit:856 : Failed to get host power management capabilities
    2012-12-13 13:24:39.865+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory
    2012-12-13 13:24:39.865+0000: 27694: warning : lxcCapsInit:77 : Failed to get host power management capabilities
    2012-12-13 13:24:39.866+0000: 27694: error : virExecWithHook:328 : Cannot find 'pm-is-supported' in path: No such file or directory
    2012-12-13 13:24:39.866+0000: 27694: warning : umlCapsInit:87 : Failed to get host power management capabilities
    I don't really know where to go from here. I'll post whatever info you require.

    Read the article

  • Lag spikes at full CPU usage, lagy mouse, maybe video card

    - by Roberts
    My PC specs: Motherboard Name - Gigabyte GA-945PL-S3; CPU Type - DualCore Intel Core 2 Duo E4300, 1800 MHz (9 x 200); OS - Microsoft Windows 7 Ultimate; OS Kernel Type - 32-bit; OS Version - 6.1.7601. I bought a new video card one month ago, a GeForce 210, and didn't have any problems with it. I wanted to overclock it - in other words, "play with it". So I installed Gigabyte EasyBoost from the CD and overclocked the GPU by 110 MHz (from 590 MHz) and the memory to its maximum, 960 MHz (from 800 MHz). Benchmarks showed a slightly higher score. Then I overclocked the shader clock from 1405 to [..] (don't really remember). So I was playing Modern Warfare 2 when all of a sudden the computer froze as I went to select a team; I had been AFK just before that. I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again. Now I use EVGA Precision X. Now, after a month, I cleaned the computer and closed the case; it had been open all the time. I started to overclock the GPU clock only (just a bit), because there were no problems that would stop me. So sometimes, under heavy CPU load, the graphics start to lag. Dragging a window is painful to watch too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is at its maximum). You may say it's the CPU's fault, isn't it? But sometimes the lag spikes start randomly when the CPU load is at maximum. All 3 benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that the performance of my video card has dropped by about 25%. BUT! The CPU score is lower too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked. A month before (in Latvian, but not so hard to understand): Now, 01.04.2012 (upgraded RAM): This happened when I tried to capture Minecraft with Fraps with the GPU underclocked to 580 MHz (default: 590 MHz): All drivers are up to date. The average CPU temperature runs from 55°C to 75°C (at around 70°C these lag spikes sometimes start). The video card's temperatures run from 45°C to 60°C (it is very hard to reach 60°C). So my hope is that the video card is fine, because the card is very new, and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am trying to type this as fast as I can).
    Update 02.04.2012 - 7:21
    Forgot one thing: my hard disk is extremely slow, and I will upgrade it this week or next week, so I will be installing the same OS again. I am a multi-tasker, but I can't do much because of the 1.8 GHz CPU and the slow hard drive (Model ID - WDC WD800JD-60JRC0). The Windows Experience Index is back to normal. Actually, "Spelu grafika" (gaming graphics) is higher than a month ago. During this test the mouse was very laggy, but a month ago there weren't any problems. WHY!?

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700. In all, 12x 1TB 7200 SATA disks and 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI. All was well, and I had a ZFS memory usage graph looking like this: a fairly healthy ARC size of around 23GB, making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out. Now, my graph looks like this: Partial output of arc_summary.pl is:
    System Memory:
    Physical RAM: 30701 MB
    Free Memory : 26719 MB
    LotsFree: 479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
    Current Size: 915 MB (arcsize)
    Target Size (Adaptive): 119 MB (c)
    Min Size (Hard Limit): 64 MB (zfs_arc_min)
    Max Size (Hard Limit): 29677 MB (zfs_arc_max)
    It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?
    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:
    my $mru_size = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
    my $target_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
    my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
    my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
    my $arc_size = ${Kstat}->{zfs}->{0}->{arcstats}->{size};
    The Kstat entries are there; I'm just getting odd values out of them.
    Edit 2: I've just re-measured the ARC size with arc_summary.pl, and I've verified these numbers with kstat:
    System Memory:
    Physical RAM: 30701 MB
    Free Memory : 26697 MB
    LotsFree: 479 MB
    ZFS Tunables (/etc/system):
    ARC Size:
    Current Size: 744 MB (arcsize)
    Target Size (Adaptive): 119 MB (c)
    Min Size (Hard Limit): 64 MB (zfs_arc_min)
    Max Size (Hard Limit): 29677 MB (zfs_arc_max)
    The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it has targeted exactly the same value (124.91M according to cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and it appears equilibrium is between 700 and 1000MB. So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg
    Edit 3: Ok, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied this patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday, the box crashed. As in, completely stopped responding to everything. When I rebooted it, it came back up fine, but here's what my graph now looks like. It seems to have fixed the problem. This is proper la la land stuff now. I've literally no idea what's going on. :(
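    As a side note for anyone trying the "raise the min size" experiment mentioned above: on Solaris the ARC limits can be pinned with ZFS tunables in /etc/system (the same file arc_summary.pl labels "ZFS Tunables"). A hedged example follows; the 4 GB floor is purely illustrative rather than a recommendation, values are in bytes, and a reboot is required for the change to take effect.

    * /etc/system: force a minimum ARC size of 4 GB (value is illustrative)
    set zfs:zfs_arc_min = 0x100000000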

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS something else... ?

    - by privatehuff
    We are a small company that does video editing, among other things, and we need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger), and it is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now, and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.). So, I need a way to combine hard drives such that I avoid complete data loss on the failure of a single drive, and such that users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK, but it seems like no one uses it. ZFS seems interesting, but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users?
    Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again, but I was left with the feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small. (Still an option for at least one server, however.) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (while storing most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.

    Read the article

  • Tulsa SharePoint Interest Group - How SharePoint 2010 Business Connectivity Services could change your life

    - by dmccollough
    Bio: Corey Roth is a consultant at Stonebridge specializing in SharePoint solutions in the Oil & Gas industry. He has ten-plus years of experience delivering solutions in the energy, travel, advertising and consumer electronics verticals. Corey has always focused on rapid adoption of new Microsoft technologies, including Visual Studio 2010, SharePoint 2010, .NET Framework 4.0, LINQ, and Silverlight. He also contributed greatly to the beta phases of Visual Studio 2005. For his contributions, he was awarded the Microsoft Award for Customer Excellence (ACE). Corey is a graduate of Oklahoma State University. Corey is a member of the .NET Mafia (www.dotnetmafia.com), where he blogs about the latest technology and SharePoint.
    Abstract: How SharePoint 2010 Business Connectivity Services could change your life - The New BDC. How many hours have you wasted building simple ASP.NET applications to do nothing more than simple CRUD operations against a database? Many tools have made this easier, but now it's so easy, you'll be up and running in minutes. This session will show you how easy it is to get started integrating external data from your line-of-business systems in SharePoint 2010. You will learn how to register an external content type using SharePoint Designer, based upon a database table or web service, and then build an external list. With external lists, you will see how you can perform CRUD operations on your line-of-business data directly from SharePoint without ever having to do manual configuration in XML files. Finally, we will walk through how to create custom edit forms for your list using InfoPath 2010.
    Agenda:
    6pm - 6:30 Pizza and Mingle - sponsored by TekSystems
    6:30 - 6:45 Announcements
    6:45 - 7:45 Presentation!
    7:45 - 8:00 Drawings and Door Prizes
    Location: TCC (Tulsa Community College) Northeast Campus, 3727 East Apache, Tulsa, OK 74115, 918-594-8000. Campus Map | Live | Yahoo | Google | MapQuest
    Door Prizes: We will be giving away one of each of these: XBox 360 - Halo 3 ODST; Telerik Premium Collection ($1300.00 value); ReSharper ($199.00 value); SQLSets ($149.00 value); 64-bit Windows 7; Introducing Windows 7 for Developers; Developing Service-Oriented AJAX Applications on the Microsoft Platform
    Sponsors: Thanks to our sponsors:
    TekSystems - Thanks for purchasing the pizza for our meetings.
    ISOCentric - Thanks for providing us hosting for the group's web site.
    Tulsa Community College - Thanks for providing us a place to have our meetings.
    NEVRON - Thanks for providing us prizes to give away.
    INETA.org - For allowing us to be a Charter Member and providing awesome speakers!
    PERPETUUM Software - Thanks for providing us prizes to give away.
    Telerik - Thanks for providing us prizes to give away.
    GrapeCity - Thanks for providing us prizes to give away.
    SQLSets - Thanks for providing us prizes to give away.
    K2 - Thanks for providing us prizes to give away.
    Microsoft - For providing us with a lot of support and product giveaways!
    O'Reilly books - For providing us with books and discounts.
    Wrox books - For providing us with books and discounts.
    Have any special requests? Let us know at this link: http://tinyurl.com/lg5o38. RSVP for this month's meeting by responding to this thread: http://tinyurl.com/yafkzel (must be logged in to the site). Be SURE to RSVP no later than noon on April 12th and you will get an extra entry for the prize drawings! So do it now, before you forget and miss out! Show up for the first time or bring a new buddy and you both get TWO extra entries!

    Read the article

  • Another sound not working post

    - by Thomas Smart
    Tried all the other "sound not working" posts, I think - lost count. Purged/reinstalled alsa and pulse, rebooted, added my user to the audio group, tried various lines in the alsa config file such as "options snd-hda-intel model=" with different options like generic, auto, basic, default, etc. Tried pulseaudio -k && sudo alsa force-reload a few times, with and without rebooting.
    Hardware: 16GB RAM, Core i7-4790, Intel Haswell motherboard with onboard sound and graphics. Multimedia: Audio Adapter: HDA-Intel - HDA Intel HDMI. OS: Ubuntu Server 14.04 with ubuntu-desktop installed. The GUI sound settings list only the dummy sound card.
    alsamixer -c 0
    | Card: HDA Intel HDMI          F1: Help               |
    | Chip: Intel Haswell HDMI      F2: System information |
    | View: F3:[Playback] F4: Capture F5: All  F6: Select sound card |
    | Item: S/PDIF |
    | +--+ |
    | |OO| |
    | +--+ |
    | < S/PDIF > |
    aplay -l
    **** List of PLAYBACK Hardware Devices **** card 0: HDMI [HDA Intel HDMI], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0
    aplay -L
    default Playback/recording through the PulseAudio sound server null Discard all samples (playback) or generate zero samples (capture) pulse PulseAudio Sound Server hdmi:CARD=HDMI,DEV=0 HDA Intel HDMI, HDMI 0 HDMI Audio Output dmix:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample mixing device dsnoop:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct sample snooping device hw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Direct hardware device without any conversions plughw:CARD=HDMI,DEV=3 HDA Intel HDMI, HDMI 0 Hardware device with all software conversions
    cat /proc/asound/cards
    0 [HDMI ]: HDA-Intel - HDA Intel HDMI HDA Intel HDMI at 0xf7d14000 irq 46
    cat /proc/asound/devices
    1: : sequencer 2: [ 0- 3]: digital audio playback 3: [ 0- 0]: hardware dependent 4: [ 0] : control 33: : timer
    mplayer -ao alsa:device=hdmi /usr/share/sounds/ubuntu/stereo/system-ready.ogg
    MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected.
[lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] alsa-lib: confmisc.c:768:(parse_card) cannot find card '1' [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_card_driver returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:392:(snd_func_concat) error evaluating strings [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_concat returned error: No such file or directory [AO_ALSA] alsa-lib: confmisc.c:1251:(snd_func_refer) error evaluating name [AO_ALSA] alsa-lib: conf.c:4248:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directory [AO_ALSA] alsa-lib: conf.c:4727:(snd_config_expand) Evaluate error: No such file or directory [AO_ALSA] alsa-lib: pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM hdmi [AO_ALSA] Playback open error: No such file or directory Failed to initialize audio driver 'alsa:device=hdmi' Could not open/initialize audio device -> no sound. Audio: no sound Video: no video Exiting... (End of file) mplayer -ao alsa:device=hw=0.3 /usr/share/sounds/ubuntu/stereo/system-ready.ogg MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team mplayer: could not connect to socket mplayer: No such file or directory Failed to open LIRC support. You will not be able to use your remote control. Playing /usr/share/sounds/ubuntu/stereo/system-ready.ogg. libavformat version 54.20.4 (external) Mismatching header version 54.20.3 libavformat file format detected. [lavf] stream 0: audio (vorbis), -aid 0 Load subtitles in /usr/share/sounds/ubuntu/stereo/ ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders libavcodec version 54.35.0 (external) AUDIO: 44100 Hz, 1 ch, floatle, 80.0 kbit/5.67% (ratio: 10000->176400) Selected audio codec: [ffvorbis] afm: ffmpeg (FFmpeg Vorbis) ========================================================================== [AO_ALSA] Format floatle is not supported by hardware, trying default. AO: [alsa] 44100Hz 2ch s16le (2 bytes per sample) Video: no video Starting playback... A: 0.4 (00.4) of 0.8 (00.7) 0.1% Exiting... (End of file) Thank you for your time and help :)

    Read the article

  • BlueTooth not working on my HP Probook 4720s

    - by mtrento
    The Bluetooth on my Ubuntu 11.10 machine does not work. When I try to add a device, it scans indefinitely and never finds anything. Wireless is working perfectly, and under Windows 7 the Bluetooth is detected. As I read somewhere, the Bluetooth is not listed in the USB devices. Is it supported under Ubuntu? Here is the output of the various debug commands I tested:
    hciconfig -a
    hci0: Type: BR/EDR Bus: USB BD Address: E0:2A:82:7A:8B:04 ACL MTU: 310:10 SCO MTU: 64:8 UP RUNNING PSCAN ISCAN RX bytes:1895 acl:0 sco:0 events:70 errors:0 TX bytes:1986 acl:0 sco:0 commands:64 errors:0 Features: 0xff 0xff 0x8f 0xfe 0x9b 0xff 0x59 0x83 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: RSWITCH HOLD SNIFF PARK Link mode: SLAVE ACCEPT Name: 'PC543host-0' Class: 0x5a0100 Service Classes: Networking, Capturing, Object Transfer, Telephony Device Class: Computer, Uncategorized HCI Version: 2.1 (0x4) Revision: 0x149c LMP Version: 2.1 (0x4) Subversion: 0x149c Manufacturer: Cambridge Silicon Radio (10)
    hcitool scan
    Scanning ...
    lsusb
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 003: ID 04f2:b1ac Chicony Electronics Co., Ltd Bus 002 Device 003: ID 413c:3010 Dell Computer Corp. Optical Wheel Mouse Bus 002 Device 004: ID 148f:1000 Ralink Technology, Corp.
    lsmod | grep -i bluetooth
    bluetooth 166112 23 bnep,rfcomm,btusb
    dmesg | grep -i bluetooth
    [ 18.543947] Bluetooth: Core ver 2.16 [ 18.544017] Bluetooth: HCI device and connection manager initialized [ 18.544020] Bluetooth: HCI socket layer initialized [ 18.544021] Bluetooth: L2CAP socket layer initialized [ 18.545469] Bluetooth: SCO socket layer initialized [ 18.548890] Bluetooth: Generic Bluetooth USB driver ver 0.6 [ 30.204776] Bluetooth: RFCOMM TTY layer initialized [ 30.204782] Bluetooth: RFCOMM socket layer initialized [ 30.204784] Bluetooth: RFCOMM ver 1.11 [ 30.247291] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 30.247295] Bluetooth: BNEP filters: protocol multicast
    lspci
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02) 00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) 00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05) 00:1c.3 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4 (rev 05) 00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 05) 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller #1 (rev 05) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 05) 00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset
Thermal Subsystem (rev 05) 01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series] 01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series] 44:00.0 Network controller: Ralink corp. RT3090 Wireless 802.11n 1T/1R PCIe 45:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03) ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02) ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02) ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02) ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02) ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02) ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02) rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: hci0: Bluetooth Soft blocked: no Hard blocked: no 2: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: no 3: hp-bluetooth: Bluetooth Soft blocked: no Hard blocked: no

    Read the article

  • Booting the liveCD/USB in EFI mode fails on Samsung Tablet XE700T1A

    - by F.L.
    My tablet is a Samsung Series 7 Slate (XE700T1A-A02FR (French language)). It is built on an Intel Sandy Bridge architecture. The main issue with this tablet is that it ships with Windows 7 installed in (U)EFI mode (GPT partition table, etc.), so I'd like to get an EFI dual boot with Ubuntu. But it seems I can't boot the LiveCD in EFI mode. It starts loading (up to initrd), but I then get a blank (black) screen. I've tried the nomodeset kernel option (as well as removing quiet and splash) with no luck.
    [2012-09-27] I have used the Ubuntu 12.04.1 Desktop ISO (I have read somewhere that it is the only one that can boot in EFI mode). I'd say this has something to do with UEFI, since the LiveCD boots in BIOS mode but not in EFI mode. Besides, I am not sure my boot info will help, since I can't boot the LiveCD in EFI mode. As a result I can't install Ubuntu in EFI mode, so it would be the boot info from the LiveCD booted in BIOS mode. This happens with a ubuntu-12.04.1-desktop-amd64 ISO used on a LiveUSB. The LiveUSB was created by dd'ing the ISO onto the full disk device (i.e. /dev/sdx, no number) of the flash drive. I have also tried copying the LiveCD files onto a primary GPT partition, but with no luck: I just get the grub shell, no menu, no install option.
    [2012-09-28] Today I tried a flash drive created with Ubuntu's Startup Disk Creator and the alternate 12.04.1 64-bit ISO. I get a grub menu in text mode (which means it did start in EFI mode) with install options / test options. But when I start any of these, I simply get a black screen (no cursor, neither mouse nor text-mode cursor). I tried removing the 'quiet' option and adding nomodeset and acpi=off, but it didn't do any good. So this is the same result as for the LiveCD.
    [2012-10-01] I have tried with a version of the secure remix via usb-creator-gtk. Booting from the USB key has the same symptoms. Booting in EFI mode is impossible (I get a menu, but whatever entry I choose, I get the blank screen problem). Booting in BIOS mode works, and I did the install. Then I used boot-repair to try installing grub-efi and get a system that would boot in EFI mode. But I can't boot this system, because the EFI firmware doesn't seem to detect that sda contains a valid EFI partition. Here is the resulting boot-info: Boot info 1253554
    [2012-10-01] Today, I reinstalled the pre-shipped version of Windows 7, and then installed Ubuntu from a secure-remix ISO dumped onto a USB flash drive via usb-creator-gtk, booted in BIOS mode. When the install ended, I said "continue testing", then I used boot-repair to try to get the bootloader installed. Now, when I boot the tablet, I get the grub menu and it can chainload Windows 7 flawlessly. But when I try to start one of the Ubuntu options I get the same old blank screen. Here is the new boot-info: Boot info 1253927
    [2012-10-01] I tried installing the 3.3 kernel by chrooting from a LiveUSB boot (secure remix again) into the installed system. Same symptoms. I feel the key to this is that the device's EFI firmware (which is EFI v2.0) exposes the graphics hardware in a way that prevents the kernel from initializing it, and thus prevents it from booting (the kernel stops all drive access just after the screen turns a kind of very dark purple). Here is some info on the UEFI firmware as given by rEFInd: EFI revision: 2.00 Platform: x86_64 (64 bit) Firmware: American Megatrends 4.635 Screen Output: Graphics Output (UEFI), 800x600
    [2012-10-08] This weekend I tried loading the kernel with elilo.
    Even though I didn't have any more luck booting the kernel, elilo gives more info while loading it. I think the next step is to try loading a kernel with the EFI stub directly.

    Read the article

  • Access Control Service: Transitioning between Active and Passive Scenarios

    - by Your DisplayName here!
    As I mentioned in my last post, ACS features a number of ways to transition between protocol and token types. One not so widely known transition is between passive sign-ins (browser) and active service consumers. Let's see how this works. We all know the usual WS-Federation handshake via passive redirect. But ACS also allows driving the sign-in process yourself via specially crafted WS-Federation query strings. So you can use the following URL to sign in using LiveID via ACS. ACS will then redirect back to the registered reply URL in your application:
    GET /login.srf?
      wa=wsignin1.0&
      wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
      wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
      wp=MBI_FED_SSL&
      wctx=pr%3dwsfederation%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f
    The wsfederation bit in the wctx parameter indicates that the response to the token request will be transmitted back to the relying party via a POST. So far so good - but how can an active client receive that token now? ACS knows an alternative way to send the token request response. Instead of doing the redirect back to the RP, it emits a page that in turn echoes the token response using JavaScript's window.external.notify. The URL would look like this:
    GET /login.srf?
      wa=wsignin1.0&
      wtrealm=https%3a%2f%2faccesscontrol.windows.net%2f&
      wreply=https%3a%2f%2fleastprivilege.accesscontrol.windows.net%3a443%2fv2%2fwsfederation&
      wp=MBI_FED_SSL&
      wctx=pr%3djavascriptnotify%26rm%3dhttps%253a%252f%252froadie%252facs2rp%252frest%252f
    ACS would then render a page that contains the following script block:
    <script type="text/javascript">
        try{
            window.external.Notify('token_response');
        }
        catch(err){
            alert("Error ACS50021: windows.external.Notify is not registered.");
        }
    </script>
    where token_response is a JSON-encoded string with the following format:
    {
      "appliesTo":"...",
      "context":null,
      "created":123,
      "expires":123,
      "securityToken":"...",
      "tokenType":"..."
    }
    OK - so how does this all come together now? As an active client (Silverlight, WPF, WP7, WinForms etc.) application, you would host a browser control and use the above URL to trigger the right series of redirects. All the browser controls support one way or another of registering a callback for whenever the window.external.notify function is called. This way you get the JSON string from ACS back into the hosting application - and voila, you have the security token. When you select the SWT token format in ACS, you can use that token e.g. for REST services. When you select SAML, you can use the token e.g. for SOAP services. In the next post I will show how to retrieve these URLs from ACS and a practical example using WPF.
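    To make the callback step concrete, here is a minimal sketch assuming the Windows Phone 7 WebBrowser control and JSON.NET. The TokenResponse class and the sign-in URL placeholder are hypothetical (the field names simply mirror the JSON format shown above); the WPF variant promised for the next post would differ in the browser-control details:

    using System;
    using Microsoft.Phone.Controls; // WP7 WebBrowser control with ScriptNotify
    using Newtonsoft.Json;          // JSON.NET

    // Shape of the JSON string ACS hands back via window.external.Notify.
    public class TokenResponse
    {
        [JsonProperty("appliesTo")]     public string AppliesTo { get; set; }
        [JsonProperty("tokenType")]     public string TokenType { get; set; }
        [JsonProperty("securityToken")] public string SecurityToken { get; set; }
        [JsonProperty("expires")]       public long Expires { get; set; }
    }

    public partial class SignInPage : PhoneApplicationPage
    {
        private void Browser_Loaded(object sender, System.Windows.RoutedEventArgs e)
        {
            var browser = (WebBrowser)sender;
            browser.IsScriptEnabled = true;         // required for Notify to reach us
            browser.ScriptNotify += OnScriptNotify; // raised by window.external.Notify
            // Placeholder: the javascriptnotify sign-in URL shown above goes here.
            browser.Navigate(new Uri("https://login.srf-url-from-above"));
        }

        private void OnScriptNotify(object sender, NotifyEventArgs e)
        {
            // e.Value is the JSON token response emitted by the ACS page.
            var response = JsonConvert.DeserializeObject<TokenResponse>(e.Value);
            // response.SecurityToken can now be attached to REST (SWT) or SOAP (SAML) calls.
        }
    }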

    Read the article

  • WiFi stops working after a while in Lenovo ThinkPad W520 (Ubuntu 12.04)

    - by el10780
    After several minutes (I do not know how many) there is no internet connection on my laptop via Wi-Fi. Ubuntu doesn't show any kind of message that my WiFi was disconnected, nor is there a signal drop, but suddenly Firefox stops connecting to web pages. I checked my modem/router, and it seems to be working fine. I also tried to reboot the WiFi device and nothing happens. The only thing that makes it work again is a reboot of the system; if I do not want to reboot, then I am forced to connect to the Internet using an Ethernet cable. Does anybody know what is happening?
    ## Some Hardware info that might be helpful ##
    el10780@ThinkPad-W520:~$ sudo lshw -class network
    *-network description: Ethernet interface product: 82579LM Gigabit Network Connection vendor: Intel Corporation physical id: 19 bus info: pci@0000:00:19.0 logical name: eth0 version: 04 serial: f0:de:f1:f1:be:10 size: 100Mbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.5.1-k duplex=full firmware=0.13-3 ip=192.168.0.10 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:50 memory:f3a00000-f3a1ffff memory:f3a2b000-f3a2bfff ioport:6080(size=32)
    *-network description: Wireless interface product: Centrino Advanced-N + WiMAX 6250 vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 5e serial: 64:80:99:63:14:74 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-26-generic firmware=41.28.5.1 build 33926 ip=192.168.0.6 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn resources: irq:52 memory:f3900000-f3901fff
    *-network description: Ethernet interface physical id: 1 bus info: usb@2:1.3 logical name: wmx0 serial: 00:1d:e1:53:b2:e8 capabilities: ethernet physical configuration: driver=i2400m firmware=i6050-fw-usb-1.5.sbcf link=no
    el10780@ThinkPad-W520:~$ lspci
    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:16.3 Serial controller: Intel Corporation 6 Series/C200 Series Chipset Family KT Controller (rev 04) 00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4) 00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4) 00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b4) 00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 7 (rev b4) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04) 00:1f.0 ISA bridge: Intel Corporation QM67 Express Chipset Family LPC Controller (rev 04) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04) 01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1) 03:00.0 Network controller: Intel Corporation Centrino Advanced-N + WiMAX 6250 (rev 5e) 0d:00.0 System peripheral: Ricoh Co Ltd Device e823 (rev 08) 0d:00.3 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 PCIe IEEE 1394 Controller (rev 04) 0e:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
    el10780@ThinkPad-W520:~$ rfkill list all
    0: hci0: Bluetooth Soft blocked: no Hard blocked: no
    1: tpacpi_bluetooth_sw: Bluetooth Soft blocked: no Hard blocked: no
    2: phy0: Wireless LAN Soft blocked: no Hard blocked: no
    3: i2400m-usb:2-1.3:1.0: WiMAX Soft blocked: yes Hard blocked: no
    The weirdest thing is this screenshot, which I took after running the **Additional Drivers** program. I mean, I have an NVidia Quadro 1000M and my Intel Centrino WiFi card, and this shows that there are no proprietary drivers for my system. http://imageshack.us/photo/my-images/268/screenshotfrom201207062.png/

    Read the article

  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that Hibernate and Suspend aren't usable. Yes, I know that questions about power-save issues are rampant in the Linux support universe, but it seems that every time I find a solution it's for a very specific hardware combination and doesn't apply to me. So anyway, here goes. When I resume from either power-saving mode, I'll get graphics problems anywhere in the range from a few scattered random-colored pixels that won't change, all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, partial black screen, or both) always end with me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine what severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues. I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments.
    Hardware Info
    See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.)
    Screenshots
    These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. Take a look at what happened when I opened my home directory in Nautilus and scrolled it: See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see related question.
    Situations Tested
    Maverick (10.10)
    Suspend
    - Seems to suspend nicely with nothing running
    - Seems to suspend nicely with flash drive plugged in
    - On resume from suspend with no flash drive, Terminal and gedit running: funky graphics on top of log output, then blank screen with pixelated cursor; no response to power button (normally will shut down 60 seconds later)
    Hibernate
    - Seems to hibernate nicely with nothing running
    - Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running
    - Seems not to hibernate when flash drive plugged in
    - Seems not to hibernate when System Monitor is running
    - Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except normal background stuff
    Natty LiveCD (11.04_2010-12-22)
    When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the Unlock screen. It did that four times before I gave up and just tested suspend from the Unlock screen.
    Suspend
    - Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with above-described random pixels and mouse pointer.
No apparent response to input from mouse (clicking randomly). Keyboard and touchpad unrecognized.

    Read the article

  • Failed to unmount partitions

    - by msknapp
    I'm trying to install Ubuntu from a pen drive. I have Windows 7 installed already and want to keep that installation. I have a 3TB drive that has one 2TB partition on it, so the last 1TB is completely unused, which is where I want to install Ubuntu. I started Ubuntu in "Try Ubuntu" mode, opened GParted, deleted the unused partition for the last third of the drive, and then tried to install Ubuntu. During the install, it asked me if I wanted to unmount the drives I already have:
    "The installer has detected that the following disks have mounted partitions: /dev/sda, /dev/sdb. Do you want the installer to try to unmount the partitions on these disks before continuing? If you leave them mounted, you will not be able to create, delete, or resize partitions on these disks, but you may be able to install to existing partitions there. No, Yes"
    I said no, because I don't want to lose my Windows 7 installation, nor any of that data. I wonder: if I had said yes above, would I have lost all the data on those drives? Anyway, I hit no and continued. I chose to install Ubuntu alongside Windows 7 and hit continue. A few minutes passed when this popup appeared:
    "Failed to unmount partitions. The installer needs to commit changes to partition tables, but cannot do so because the partitions on the following mount points could not be unmounted: /media/ubuntu/Three\ Terabyte Drive Terabyte\ DriveDrive. Please close any applications using these mount points. Would you like the installer to try to unmount these partitions again? Go Back, Continue"
    Why is this not working? What am I supposed to do?
    ========== Update: I went ahead and said yes, it can unmount those partitions. It finished installing Ubuntu, but now when I start my machine it just takes me to the grub rescue prompt. It seems the install broke something. What can I do now?
    =============== Results of fdisk -l:
Disk /dev/sdc: 16.0 GB, 16008609792 bytes 255 heads, 63 sectors/track, 1946 cylinders, total 31266816 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 * 32 31266815 15633392 c W95 FAT32 (LBA) Disk /dev/sdd: 999.5 GB, 999501594624 bytes 255 heads, 63 sectors/track, 121515 cylinders, total 1952151552 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002ae3f Device Boot Start End Blocks Id System /dev/sdd1 2048 1952151551 976074752 7 HPFS/NTFS/exFAT

    Read the article

  • Moving data files failing

    - by Miles Hayler
    Trying to migrate data from C: to D: via the SBS console is failing. The wizard starts running but drops out in the first few seconds. I'll post the full logs, but the important lines appear to be as follows: An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task --- ErrorCode:0 I've been googling for days with no luck. I have found that mscorlib is a component of .NET, and I've discovered multiple instances of the file in %windir%, %windir%\winsxs, %windir%\Microsoft.net. Has anyone come across and fixed this one before? --------------------------------------------------------- [1516] 110315.190856.1105: Storage: Initializing...C:\Program Files\Windows Small Business Server\Bin\MoveData.exe [1516] 110315.190856.2875: Storage: Data Store to be moved: Exchange [1516] 110315.190856.5305: TaskScheduler: Exception System.IO.FileNotFoundException: [1516] 110315.190856.5605: Exception: --------------------------------------- An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Timestamp: 03/15/2011 19:08:56 Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) [1516] 110315.190856.5625: Storage: Exception Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: [1516] 110315.190856.5635: Exception: --------------------------------------- An exception of type 'Type: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException, Common, Version=6.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' has occurred. Timestamp: 03/15/2011 19:08:56 Message: Failed to find the task path Stack: at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() --------------------------------------- An exception of type 'Type: System.IO.FileNotFoundException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' has occurred. Timestamp: 03/15/2011 19:08:56 Message: The system cannot find the file specified. (Exception from HRESULT: 0x80070002) Stack: at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) [1516] 110315.190856.5665: Storage: Error Retrieving Server Backup Task Status: ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: GetServerBackupTaskStatus: fail to find the task ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Common.WindowsTaskSchedulerException: Failed to find the task path ---> System.IO.FileNotFoundException: The system cannot find the file specified.
(Exception from HRESULT: 0x80070002) at TaskScheduler.TaskSchedulerClass.GetFolder(String Path) at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Common.WindowsTaskScheduler..ctor(String taskPath, String taskName) at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Storage.Common.ServerBackupUtility.GetServerBackupTaskStatus() at Microsoft.WindowsServerSolutions.Storage.MoveData.Helper.get_ServerBackupTaskState() [1516] 110315.190857.6216: Storage: Backup Task State: Unknown [1516] 110315.190857.9347: Storage: Launching the Move Data Wizard! [1516] 110315.190857.9397: Wizard: Admin:QueryNextPage(null) = Storage.MoveDataWizard.GettingStartedPage [1516] 110315.190857.9417: Wizard: TOC Storage.MoveDataWizard.GettingStartedPage is on ExpectedPath [1516] 110315.190857.9577: Wizard: Storage.MoveDataWizard.GettingStartedPage entered [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.DiagnoseDataStorePage is on ExpectedPath [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190857.9657: Wizard: TOC Storage.MoveDataWizard.NewDataStoreLocationPage is on ExpectedPath [1516] 110315.190857.9657: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null [1516] 110315.190857.9697: Wizard: ---------------------------------- [1516] 110315.190857.9697: Wizard: The pages visted: [1516] 110315.190857.9697: Wizard: Current Page := [TOC Storage.MoveDataWizard.GettingStartedPage] [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190857.9697: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190857.9697: Wizard: Step 1 of 3 [1516] 110315.190907.0406: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.GettingStartedPage) = Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190907.0416: Wizard: Storage.MoveDataWizard.GettingStartedPage exited with the button: Next [1516] 110315.190907.0416: WizardChainEngine Next Clicked: Going to page {0}.: Storage.MoveDataWizard.DiagnoseDataStorePage [1516] 110315.190907.0496: Wizard: Storage.MoveDataWizard.DiagnoseDataStorePage entered [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.DiagnoseDataStorePage) = Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190907.0606: Wizard: Admin:QueryNextPage(Storage.MoveDataWizard.NewDataStoreLocationPage) = null [1516] 110315.190907.0606: Wizard: ---------------------------------- [1516] 110315.190907.0606: Wizard: The pages visted: [1516] 110315.190907.0606: Wizard: [TOC] visited: TOC Storage.MoveDataWizard.GettingStartedPage [1516] 110315.190907.0606: Wizard: Current Page := [TOC Storage.MoveDataWizard.DiagnoseDataStorePage] [1516] 110315.190907.0616: Wizard: [TOC] : TOC Storage.MoveDataWizard.NewDataStoreLocationPage [1516] 110315.190907.0616: Wizard: Step 2 of 3 [19772] 110315.190907.0656: Storage: Starting System Diagnosis [19772] 110315.190907.0656: Storage: Getting Data Store Information [19772] 110315.190907.1086: Storage: Create the list of storage 
and DB directory path
[19772] 110315.190907.1246: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks..ctor
[19772] 110315.190907.1546: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
[19772] 110315.190907.1596: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
[19772] 110315.190907.1606: Messaging: Exchange install path: C:\Program Files\Microsoft\Exchange Server\bin
[19772] 110315.190908.4157: Messaging: E12 Monad runspace created ID: Microsoft.PowerShell
[19772] 110315.190908.4237: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190908.4287: Messaging: Executed management shell command: get-exchangeserver
[19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190910.2369: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
[19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
[19772] 110315.190910.5699: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190910.5719: Messaging: Executed management shell command: get-user -Identity "dmagroup.local\Administrator"
[19772] 110315.190911.0870: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.0880: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.0880: Messaging: Executed management shell command: get-mailbox -Identity "d2ae2bf0-48a7-4ce9-9e72-bb3c765454ac"
[19772] 110315.190911.1300: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.1310: Messaging: User Administrator is mail enabled and can use MessagingManagement to send mail.
[19772] 110315.190911.1310: Messaging: Email address used for user: [email protected]
[19772] 110315.190911.1440: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.1440: Messaging: Executed management shell command: get-group -Identity "Domain Admins"
[19772] 110315.190911.1630: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.1640: Messaging: User Administrator is a member of Domain Admins and can use MessagingManagement to manage Exchange.
[19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.GatherAdminInfo
[19772] 110315.190911.1640: Messaging: MessagingManagement enabled for Exchange management: True
[19772] 110315.190911.1640: Messaging: MessagingManagement enabled for mail submission: True
[19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingTasks.Initialize
[19772] 110315.190911.1640: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Tasks.TaskMoveExchangeData.CreateDataStoreDriveList
[19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
[19772] 110315.190911.1670: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.1670: Messaging: Executed management shell command: get-storagegroup -Server "SERVER01"
[19772] 110315.190911.2990: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
[19772] 110315.190911.3070: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.3070: Messaging: Executed management shell command: get-mailboxdatabase -Server "SERVER01"
[19772] 110315.190911.4440: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.Initialize
[19772] 110315.190911.4520: Messaging: Begin Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.4520: Messaging: Executed management shell command: get-publicfolderdatabase -Server "SERVER01"
[19772] 110315.190911.5240: Messaging: End Microsoft.WindowsServerSolutions.Messaging.Management.MessagingRunspace.StaticExecute
[19772] 110315.190911.5510: Storage: Data Store Drive/s Details:Name=C:\,Size=12675712420
[19772] 110315.190911.5510: Storage: Data Store Size Details: Current Total Size=12675712420 Required Size=12675712420
[19772] 110315.190911.5510: Storage: MoveData Task can move the Data Store=True
[19772] 110315.190911.8401: Storage: An error was encountered when performing system diagnosis : ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
   at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
   --- End of inner exception stack trace ---
   at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
   at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
   at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
   at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
[1516] 110315.190912.0331: Storage: An error occured during the execution: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: Diagnosing the Data Store failed (see the inner exception) ---> ErrorCode:0 BaseException: Microsoft.WindowsServerSolutions.Storage.Common.StorageException: WMI error occurred while accessing drive ---> System.Management.ManagementException: Not found
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
   at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
   --- End of inner exception stack trace ---
   at Microsoft.WindowsServerSolutions.Storage.Common.DriveUtil.IsDriveRemovable(String drive)
   at Microsoft.WindowsServerSolutions.Storage.Common.DataStoreInfo.LoadAvailableDrives()
   at Microsoft.WindowsServerSolutions.Storage.Common.MoveDataUtil.CanMoveData(DataStoreInfo storeInfo, MoveDataError& error)
   at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.DiagnoseDataStore(Object sender, DoWorkEventArgs args)
   at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
   --- End of inner exception stack trace ---
   at Microsoft.WindowsServerSolutions.Storage.MoveData.DiagnoseDataStorePagePresenter.backgroundWorker_RunWorkerCompleted(Object sender, RunWorkerCompletedEventArgs e)
   --- End of inner exception stack trace ---
   at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
   at System.Delegate.DynamicInvokeImpl(Object[] args)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj)
   at System.Threading.ExecutionContext.runTryCode(Object userData)
   at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme)
   at System.Windows.Forms.Control.InvokeMarshaledCallbacks()
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
   at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
   at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
   at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
   at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
   at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardFrameView.Create()
   at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch()
   at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.LaunchMoveDataWizard()
   at Microsoft.WindowsServerSolutions.Storage.MoveData.MainClass.Main(String[] args)
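    The failure in this log is a WMI ManagementException ("Not found") thrown while DriveUtil.IsDriveRemovable enumerates drive information. As a rough diagnostic sketch, assuming the check goes through a standard WMI disk class such as Win32_LogicalDisk (the log does not show the actual query the wizard runs), enumerating the same data by hand in PowerShell may reveal which drive WMI cannot resolve:

# Diagnostic sketch only; Win32_LogicalDisk is an assumption about what the
# wizard queries, since the log does not show the failing WMI query itself.
Get-WmiObject -Class Win32_LogicalDisk | ForEach-Object {
    "{0}  DriveType={1}  Size={2}" -f $_.DeviceID, $_.DriveType, $_.Size
}
# If the enumeration itself throws "Not found", the WMI repository may be
# damaged; 'winmgmt /verifyrepository' checks its consistency.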

    Read the article

  • Stdin to powershell script

    - by Stefan
    I have a service that can invoke an external process to modify a text stream before it is returned to the service. The service writes the text to the external process's stdin and reads the modified result back from the process's stdout; in other words, the external command acts as a text "filter". I would like to use a PowerShell script as that filter. On Windows 2008 R2 I can successfully launch a script from the service with the command "powershell -executionpolicy bypass -noninteractive ./myscript.ps1", and I can make the script return text to the service on stdout using the Write-Host cmdlet. My problem is that I can't find a way to read the text from stdin inside the script: Read-Host doesn't work because it requires an interactive shell. I would like to avoid writing the text to a temp file and reading that file in the script, because the service is multithreaded (it can launch more than one external command at a time) and temp-file management (locking, unique filenames, etc.) is undesirable. Is this possible, or should I use, for example, Perl instead? PowerShell seems compelling because it is preinstalled on all my Windows 2008 machines.
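    One approach that should work here, sketched below: [Console]::In exposes the process's raw stdin reader even under -NonInteractive, so the script can read the whole stream without Read-Host. The ToUpper() call is a placeholder for whatever filtering logic myscript.ps1 actually needs.

# myscript.ps1 -- minimal stdin-to-stdout filter sketch
# [Console]::In is the process's raw stdin TextReader, so it works without
# an interactive shell (unlike Read-Host).
$text = [Console]::In.ReadToEnd()

# Placeholder transformation; substitute the real filter logic here.
$filtered = $text.ToUpper()

# Write straight to stdout for the calling service to read.
[Console]::Out.Write($filtered)

    A quick console test pipes a file through the same command line the service uses: type input.txt | powershell -executionpolicy bypass -noninteractive ./myscript.ps1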

    Read the article

  • Hive Based Registry in Flash

    - by Psychic
    To start with, I'll say I've read the post here and I'm still having trouble. I'm trying to create a CE6 image with a hive-based registry that actually persists through a reboot. I've ticked the hive settings in the catalog items. In common.reg, I've set the location of the hive ([HKEY_LOCAL_MACHINE\init\BootVars] "SystemHive") to "Hard Drive\Registry" (note: the flash shows up as a device called "Hard Drive"). In common.reg, I've also set "Flags"=dword:3 in the same key to get the device manager loaded along with the storage manager, and I've verified that these settings are wrapped in "; HIVE BOOT SECTION". This is where it starts to fall over. It all compiles fine, but when the target system boots I get: a directory called "Hard Disk" where a registry is put; a device called "Hard Disk2" where the permanent flash is; and any changes made to the registry are lost on a reboot. What am I still missing? Why is the registry not being stored on the flash? Strangely, if I create a random file or directory in the registry directory, it is still there after a reboot, so even though this directory isn't on the partition where I tried to put it, it does appear to be persistent. If it is persistent, why don't registry settings survive (e.g., Ethernet adapter IP addresses)? I'm not using any specific profiles, so I'm at a loss as to what the last step is to make this hive registry a permanent store.
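    For reference, a sketch of the common.reg entries described above, plus one extra value worth checking against the CE 6 documentation: the docs point SystemHive at a hive file rather than a bare directory, and a flush-related flag controls when the hive is written back, which is a common reason changes are lost across reboots. The system.hv file name and the RegistryFlags line are assumptions to verify, not taken from the question.

; HIVE BOOT SECTION
[HKEY_LOCAL_MACHINE\init\BootVars]
; Hive file on the flash device; the docs point SystemHive at a file, and
; the system.hv file name here is an assumption to verify.
"SystemHive"="Hard Drive\\Registry\\system.hv"
; Load the storage manager and device manager early in boot (as in the question).
"Flags"=dword:3
; Assumed fix to verify against the CE 6 docs: flush the hive on every
; RegCloseKey so changes survive a reboot instead of being flushed lazily.
"RegistryFlags"=dword:1
; END HIVE BOOT SECTION

    If settings still vanish with these entries in place, calling RegFlushKey on HKEY_LOCAL_MACHINE before rebooting is a quick way to see whether the hive file is writable at all.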

    Read the article

  • WNetAddConnection2 in Windows 7 with Impersonation and no Error Code

    - by Adam Driscoll
    I'm doing some crazy impersonation stuff to get around UAC dialogs in Windows 7 so the user does not have to interact with the UI (I have the admin creds, of course). I have a process running as the Administrator and elevated past UAC. The issue I'm facing is that when I make a call to WNetAddConnection2 within this process, I do not get a new mapped network drive. The function returns ERROR_SUCCESS, but no drive is visible. We have another method of adding network drives using 'subst', but this, again, returns successfully yet does not add a drive. I have tried the default user (which is the Administrator, because of the process's security context) and I have tried specific user credentials. I can map the drive just fine through Explorer. Of course, the same functionality works fine on XP/2003. I haven't gotten around to testing on Vista because of impersonation issues that limit my ability to spin up the process. Are there Windows 7-specific limits on this function? MSDN does not mention any that I can find. Any help would be greatly appreciated!
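    For comparison, a sketch of the call as typically made from PowerShell via P/Invoke; the drive letter, share, and credentials are placeholders. This only shows the shape of the call and does not by itself fix the symptom: one commonly cited explanation on Vista/Windows 7 is that a drive mapped under the elevated token belongs to a different logon session than the non-elevated Explorer, so Explorer never shows it (the EnableLinkedConnections registry value is the usual workaround to test against).

# Sketch only: the share, drive letter, and credentials are placeholders.
Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;

public class Mpr
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    public struct NETRESOURCE
    {
        public int dwScope;
        public int dwType;           // RESOURCETYPE_DISK = 1
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;   // drive letter, e.g. "Z:"
        public string lpRemoteName;  // UNC path to the share
        public string lpComment;
        public string lpProvider;
    }

    [DllImport("mpr.dll", CharSet = CharSet.Unicode)]
    public static extern int WNetAddConnection2(
        ref NETRESOURCE netResource, string password, string userName, int flags);
}
"@

$nr = New-Object 'Mpr+NETRESOURCE'
$nr.dwType       = 1                  # RESOURCETYPE_DISK
$nr.lpLocalName  = 'Z:'               # placeholder drive letter
$nr.lpRemoteName = '\\server\share'   # placeholder UNC path

# Flags = 0 maps a non-persistent drive; CONNECT_UPDATE_PROFILE (1) persists it.
$rc = [Mpr]::WNetAddConnection2([ref]$nr, 'password', 'domain\user', 0)
if ($rc -ne 0) { "WNetAddConnection2 failed with Win32 error $rc" }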

    Read the article

< Previous Page | 274 275 276 277 278 279 280 281 282 283 284 285  | Next Page >