Search Results

Search found 34595 results on 1384 pages for 'vendor id'.


  • Unable to add users to Microsoft Dynamics CRM 4.0 after database restore

    - by Wes Weeks
    Working with a client in our multi-tenant CRM environment who was doing a database migration into CRM. As part of the process, a backup of their Organization_MSCRM database was taken just prior to starting the migration, in case it needed to be restored and run a second time. In this case it did, so I restored the database and let the client know he should be good to go.

    A few hours later I received a call that they were unable to add some new users. The users would appear as available when using the Add Multiple Users wizard, but anyone added would not actually be added to CRM. It also came out that these users had been added to CRM initially AFTER the database backup had been taken.

    I turned on tracing and tried to add the users through both the single-user form and the multiple-user interface, and was unable to do so. The error message in the logs wasn't much help:

        Unexpected error adding user [email protected]: Microsoft.Crm.CrmException: INVALID_WRPC_TOKEN: Validate WRPC Token: WRPCTokenState=Invalid, TOKEN_EXPIRY=4320, IGNORE_TOKEN=False

    Searching on Google or Bing didn't offer any assistance. Apparently it is not a very common problem, or no one has been able to resolve it.

    I did some searching in the MSCRM_CONFIG database and found that there are several user tables there. After getting my head around the structure, I found that there were entries here for users that were not part of the restored database. It seems that new users are added to both Organization_MSCRM and MSCRM_CONFIG, and after the restore these were out of sync. I needed to remove the extra entries in order to fix it. Restoring the MSCRM_CONFIG database was not an option, as other clients could have been adding users at this point, and a restore would risk breaking their instances of CRM.

    Long story short, I was finally able to generate a script to remove the bad entries, and when I tried to add users again I was successful. In case someone else out there finds themselves in a similar situation, here is the script I used to delete the bad entries:

        DECLARE @UsersToDelete TABLE
        (
            UserId uniqueidentifier
        )

        INSERT INTO @UsersToDelete (UserId)
        SELECT UserId
        FROM [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
        WHERE CrmUserId NOT IN (SELECT SystemUserId FROM Organization_MSCRM.dbo.SystemUserBase)
          AND OrganizationId = '00000000-643F-E011-0000-0050568572A1' -- Id from the Organization table for this instance

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserAuthentication]
        WHERE UserId IN (SELECT UserId FROM @UsersToDelete)

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUserOrganizations]
        WHERE UserId IN (SELECT UserId FROM @UsersToDelete)

        DELETE FROM [MSCRM_CONFIG].[dbo].[SystemUser]
        WHERE Id IN (SELECT UserId FROM @UsersToDelete)

    Read the article

  • When to use HTTP status code 404 in an API

    - by Sybiam
    I am working on a project, and after arguing with people at work for more than an hour I decided to see what people on Stack Exchange might say.

    We're writing an API for a system. There is a query that should return a tree of Organizations or a tree of Goals. The tree of Organizations is the organization in which the user is present; in other words, this tree should always exist. Within the organization, a tree of Goals should also always be present (that's where the argument started).

    In the case where the tree doesn't exist, my co-worker decided that it would be right to answer the response with status code 200, and then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare the flames and fury. I suggested raising a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add a special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing was found. It seems totally fair to mark the response as a 404. And then war started, and I got the message that I didn't understand the HTTP status code schema.

    So I'm here asking: what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200." I believe that it's wrong, since the tree should always be present. If we found nothing and we are expecting something, it should be a 404.

    More info (I forgot to add the URLs that are fetched):

        Organizations: /OrgTree/Get
        Goals:         /GoalTree/GetByDate?versionDate=...
                       /GoalTree/GetById?versionId=...

    My mistake, both parameters are required. If any versionDate that can be parsed to a date is provided, it will return the closest revision. If you enter something in the past, it will return the first revision. If you go by Id with an id that doesn't exist, I suspect it's going to return an empty response with 200.

    Extra: I also believe the best answer to the problem is to create default objects when organizations are created; having no tree shouldn't be a valid case and should be seen as undefined behavior. There is no way an account can be used without both trees. For that reason, they should always be present.

    I also got linked this (one similar, but I can't find it): http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png
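    For illustration, here's a minimal sketch of the 404 position, assuming ASP.NET Web API (my own framework choice for the example; GoalTree and the stubbed lookup are hypothetical names, not from the question):

        using System;
        using System.Net;
        using System.Web.Http;

        // Hypothetical domain type, stubbed for illustration only.
        public class GoalTree { public Guid VersionId { get; set; } }

        public class GoalTreeController : ApiController
        {
            // Stand-in for real data access; returns null when no tree exists.
            private GoalTree FindById(Guid versionId) { return null; }

            public GoalTree GetById(Guid versionId)
            {
                GoalTree tree = FindById(versionId);
                if (tree == null)
                {
                    // The 404 position: the tree is an expected resource, so its
                    // absence is "not found" - not a successful empty result.
                    throw new HttpResponseException(HttpStatusCode.NotFound);
                }
                return tree; // 200 with a body only when something was actually found
            }
        }

    The co-worker's position would replace the throw with "return null", which serializes as an empty 200 response and pushes the missing-tree check into every client's success callback.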

    Read the article

  • How to implement an email unsubscribe system for a site with many kinds of emails?

    - by Mike Liu
    I'm working on a website that features many different types of emails. Users have accounts, and when logged in they have access to a settings page that they can use to customize what types of emails they receive. However, I'd like to also give users an easy way to unsubscribe directly in the emails they receive.

    I've looked into List-Unsubscribe headers, as well as creating some type of one-click link that would unsubscribe a user from that type of email without requiring login or further action. The latter would probably require me to break convention and make changes to the database in response to a GET on the link. However, am I incorrect in thinking that either of these would require me to generate and permanently store a unique identifier in my database for every email I ever send, really complicating email delivery? Without that, I'm not sure how I would be able to uniquely identify a user and a type of email in order to change their email preferences, and this identifier would need to be stored forever, as a user could have an email sitting in their inbox for a long time before they decide to act on it.

    Alternatively, I was considering having a no-login page for managing email preferences. In contrast to the above, where I would need one of these identifiers for each email, this would only need one identifier per user, with no generation or other action required on sending an email.

    All of these raise security issues, as they could potentially be used by people to tamper with others' email preferences. This could be mitigated somewhat by ensuring that the identifier is really difficult to guess. For the once-per-user identifier approach, I was considering generating the identifier by passing a user's ID through some type of encryption algorithm; is this a sound approach? For the per-email identifiers, perhaps I could use a user's ID appended to the time. However, even this would not eliminate the problem entirely, as this would really just be security through obscurity; anyone with the URL could tamper, and in the end the main defense would have to be that most people aren't so bored as to tamper with other people's email preferences.

    Are there any other alternatives I've missed, or issues or solutions with these that anyone can provide insight on? What are best practices in this area?
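    One pattern worth considering (my own sketch, not from the question) is to sign the unsubscribe parameters with a server-side secret instead of storing a random identifier per email: the link carries the user id, the email type, and an HMAC over both, and the server recomputes the HMAC to validate. Nothing has to be stored per email, and the link stays valid however long the message sits in an inbox. A minimal C# sketch, assuming the secret lives in configuration:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class UnsubscribeToken
        {
            // In a real system this would come from configuration, not source code.
            private static readonly byte[] Secret =
                Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

            // Builds the token embedded in the unsubscribe link for a user/email-type pair.
            public static string Create(int userId, string emailType)
            {
                string payload = userId + "|" + emailType;
                using (var hmac = new HMACSHA256(Secret))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    return Convert.ToBase64String(mac);
                }
            }

            // Validates the token on an incoming unsubscribe request.
            public static bool IsValid(int userId, string emailType, string token)
            {
                return Create(userId, emailType) == token;
            }
        }

    Unlike an ID run through an encryption algorithm, a keyed hash can't be reversed or forged without the secret, so it isn't merely security through obscurity; the trade-off is that a link can't be revoked short of rotating the secret.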

    Read the article

  • Content Locking network with Microsoft Web Development tools? [closed]

    - by Jose Garcia
    I want my team to develop a content locking network for a client, but we don't know what is needed to develop such a network. We would need to code it from scratch. Please give us some advice.

    Content locking networks:
    http://www.blamads.com/
    adworkmedia.com

    These two networks appear to be built with Microsoft tools.

    FAQ:
    http://support.blamads.com/index.php?pg=kb.book&id=5
    https://www.adworkmedia.com/cpa-network-features.php

    Read the article

  • New Whitepaper: Oracle WebLogic Clustering

    - by ACShorten
    A new whitepaper is available that outlines the concepts and steps for implementing web application server clustering using the Oracle Utilities Application Framework and Oracle WebLogic Server. The whitepaper includes the following:

    - A short discussion of the concepts of clustering
    - How to set up a cluster using Oracle WebLogic's utilities
    - How to configure the Oracle Utilities Application Framework to take advantage of clustering
    - How to deploy the Oracle Utilities Application based products in a clustered environment
    - Common cluster operations

    The whitepaper is available from My Oracle Support at Doc Id: 1334558.1.

    Read the article

  • Imon RM200 remote (device 15c2:ffdc) won't work on Ubuntu 12.04 or 11.10

    - by skerit
    The device is recognized just fine:

        Bus 001 Device 005: ID 15c2:ffdc SoundGraph Inc. iMON PAD Remote Controller

        Found /sys/class/rc/rc0/ (/dev/input/event6) with:
            Driver imon, table rc-imon-mce
            Supported protocols: RC-6
            Enabled protocols: RC-6
            Repeat delay = 500 ms, repeat period = 125 ms

    But any testing results in nothing. I point the remote, I press a few buttons, and nothing happens. Not in irw, not in ir-keytable, nothing. It's driving me insane.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change.

    So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future.

    But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already.

    So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

        /// <summary>
        /// Creates an instance of a type based on a string. Assumes that the type's
        /// </summary>
        /// <param name="typeName">Common name of the type</param>
        /// <param name="args">Any constructor parameters</param>
        /// <returns></returns>
        public static object CreateInstanceFromString(string typeName, params object[] args)
        {
            object instance = null;
            Type type = null;

            try
            {
                type = GetTypeFromName(typeName);
                if (type == null)
                    return null;

                instance = Activator.CreateInstance(type, args);
            }
            catch
            {
                return null;
            }

            return instance;
        }

        /// <summary>
        /// Helper routine that looks up a type name and tries to retrieve the
        /// full type reference in the actively executing assemblies.
        /// </summary>
        /// <param name="typeName"></param>
        /// <returns></returns>
        public static Type GetTypeFromName(string typeName)
        {
            Type type = null;

            // Let default name binding find it
            type = Type.GetType(typeName, false);
            if (type != null)
                return type;

            // look through the assembly list
            var assemblies = AppDomain.CurrentDomain.GetAssemblies();

            // try to find manually
            foreach (Assembly asm in assemblies)
            {
                type = asm.GetType(typeName, false);
                if (type != null)
                    break;
            }
            return type;
        }

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James, the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

        /// <summary>
        /// Dynamically creates an instance of JSON.NET
        /// </summary>
        /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
        /// <returns>Dynamic JsonSerializer instance</returns>
        public static dynamic CreateJsonNet(bool throwExceptions = true)
        {
            if (JsonNet != null)
                return JsonNet;

            lock (SyncLock)
            {
                if (JsonNet != null)
                    return JsonNet;

                // Try to create instance
                dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

                if (json == null)
                {
                    try
                    {
                        var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                        json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                    }
                    catch (Exception ex)
                    {
                        if (throwExceptions)
                            throw;
                        return null;
                    }
                }

                if (json == null)
                    return null;

                json.ReferenceLoopHandling =
                    (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

                // Enums as strings in JSON
                dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
                json.Converters.Add(enumConverter);

                JsonNet = json;
            }

            return JsonNet;
        }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.

    Once the instance is loaded, some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file.

    Here's what the File Serialization looks like:

        /// <summary>
        /// Serializes an object instance to a JSON file.
        /// </summary>
        /// <param name="value">the value to serialize</param>
        /// <param name="fileName">Full path to the file to write out with JSON.</param>
        /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
        /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
        /// <returns>true or false</returns>
        public static bool SerializeToFile(object value, string fileName,
                                           bool throwExceptions = false, bool formatJsonOutput = false)
        {
            dynamic writer = null;
            FileStream fs = null;
            try
            {
                Type type = value.GetType();

                var json = CreateJsonNet(throwExceptions);
                if (json == null)
                    return false;

                fs = new FileStream(fileName, FileMode.Create);
                var sw = new StreamWriter(fs, Encoding.UTF8);

                writer = Activator.CreateInstance(JsonTextWriterType, sw);

                if (formatJsonOutput)
                    writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

                writer.QuoteChar = '"';
                json.Serialize(writer, value);
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return false;
            }
            finally
            {
                if (writer != null)
                    writer.Close();
                if (fs != null)
                    fs.Close();
            }
            return true;
        }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic, it's still fairly short and readable.

    For full circle operation, here's the DeserializeFromFile() version:

        /// <summary>
        /// Deserializes an object from file and returns a reference.
        /// </summary>
        /// <param name="fileName">name of the file to serialize to</param>
        /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
        /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
        /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
        public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
        {
            dynamic json = CreateJsonNet(throwExceptions);
            if (json == null)
                return null;

            object result = null;
            dynamic reader = null;
            FileStream fs = null;
            try
            {
                fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                var sr = new StreamReader(fs, Encoding.UTF8);
                reader = Activator.CreateInstance(JsonTextReaderType, sr);
                result = json.Deserialize(reader, objectType);
                reader.Close();
            }
            catch (Exception ex)
            {
                Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
                if (throwExceptions)
                    throw;
                return null;
            }
            finally
            {
                if (reader != null)
                    reader.Close();
                if (fs != null)
                    fs.Close();
            }
            return result;
        }

    This code is a little more compact, since there are no prettifying options to set.
    Here JsonTextReader is created dynamically, and it receives the output from the Deserialize() operation on the serializer.

    You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.

    These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, which is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this:

        /// <summary>
        /// Reads and Writes configuration settings in .NET config files and
        /// sections. Allows reading and writing to default or external files
        /// and specification of the configuration section that settings are
        /// applied to.
        /// </summary>
        public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
            where TAppConfiguration : AppConfiguration, new()
        {
            /// <summary>
            /// Optional - the Configuration file where configuration settings are
            /// stored in. If not specified uses the default Configuration Manager
            /// and its default store.
            /// </summary>
            public string JsonConfigurationFile
            {
                get { return _JsonConfigurationFile; }
                set { _JsonConfigurationFile = value; }
            }
            private string _JsonConfigurationFile = string.Empty;

            public override bool Read(AppConfiguration config)
            {
                var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                    JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
                if (newConfig == null)
                {
                    if (Write(config))
                        return true;
                    return false;
                }

                DecryptFields(newConfig);
                DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
                return true;
            }

            /// <summary>
            /// Reads configuration settings into a new instance.
            /// </summary>
            /// <typeparam name="TAppConfig"></typeparam>
            /// <returns></returns>
            public override TAppConfig Read<TAppConfig>()
            {
                var result = JsonSerializationUtils.DeserializeFromFile(
                                 JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
                if (result != null)
                    DecryptFields(result);
                return result;
            }

            /// <summary>
            /// Write configuration to the JsonConfigurationFile location
            /// </summary>
            /// <param name="config"></param>
            /// <returns></returns>
            public override bool Write(AppConfiguration config)
            {
                EncryptFields(config);

                bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

                // Have to decrypt again to make sure the properties are readable afterwards
                DecryptFields(config);

                return result;
            }
        }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing 3 methods will do in most cases.

    Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

    Already, there are several other places in some other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
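    As a quick aside, here's what consuming these utilities directly looks like - a minimal sketch of mine, assuming the JsonSerializationUtils class shown above is referenced and JSON.NET is present in the host project (MyAppSettings is a hypothetical POCO for illustration):

        using System;

        public class MyAppSettings
        {
            public string ApplicationName { get; set; }
            public int MaxRetries { get; set; }
        }

        class Demo
        {
            static void Main()
            {
                var settings = new MyAppSettings { ApplicationName = "Demo", MaxRetries = 3 };

                // Writes formatted JSON; returns false instead of throwing on failure
                bool ok = JsonSerializationUtils.SerializeToFile(settings, "settings.json",
                                                                 throwExceptions: false,
                                                                 formatJsonOutput: true);

                // Reads it back; the object result is cast to the target type
                var loaded = JsonSerializationUtils.DeserializeFromFile("settings.json",
                                 typeof(MyAppSettings)) as MyAppSettings;
            }
        }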
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage), this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right... In performing some informal testing, it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code.

    This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling' as I do here, the overhead is small enough to not matter.

    All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    Dynamic loading is not something you need to worry about often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application, and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, adding dependencies and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET  C#

    Read the article

  • 'Installing breakpad exception handler for appid(steam)' while trying to run Steam

    - by Star Diamond
    I installed Steam for Ubuntu, so I tried to launch it, and I get this:

        ~$ steam
        Installing breakpad exception handler for appid(steam)/version(1352224866_client)

        ~$ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.10
        Release:        12.10
        Codename:       quantal

        ~$ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Whistler XT [AMD Radeon HD 6700M Series] (rev ff)

    What is the problem, and how do I fix it?

    Read the article

  • Upcoming BI 11g training for partners

    - by michal.grochowski
    On March 14-15 there will be an OBI 11g Foundation training intended for Oracle partners. More information can be found at:

    http://www.arrowecsservices.pl/www/news.nsf/id/BI_11g_Foundation

    This training is particularly interesting because it does not require a large financial outlay! Apart from that, you can of course also get this knowledge directly from Oracle University; details at:

    http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=4&dc=D63514GC10&p_org_id=10&lang=PL

    Read the article

  • It's called College.

    - by jeffreyabecker
    Today I saw yet another 'GUID vs int as your primary key' article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything, children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key.

    Fallacy 1: "It's easier"

    The claim: an integer data type (tinyint, smallint, int, bigint) is a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp.

    Fallacy 2: "Size Matters"

    For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int has better performance than joining against a GUID. In computer science, the performance of your algorithm is always a function of the number of data points. This still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that performance becomes an issue when you start having billions of rows in the table. At this point you should probably start looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think.

    GUID Advantages:

    - Insert-ability / Mergeability: You can reliably insert GUIDs into tables without key collisions.
    - Database Independence: Saving entities to the database often requires knowing ids. With identity-based ids, the id must be selected back after every insert. GUIDs can be generated application-side, allowing much faster inserts.

    GUID Disadvantages:

    - Generatability: You can calculate the next id for an integer PK pretty easily in your head, but will need a program to generate GUIDs. Solution: "SELECT TOP 100 newid() FROM sysobjects"
    - Fragmentation: Most GUID generation algorithms generate pseudo-random GUIDs. This can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid(), or use GuidComb in NHibernate (a sketch follows below).
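    Since GuidComb is only name-dropped above, here's a sketch of the comb approach, modeled on NHibernate's guid.comb generator (after Jimmy Nilsson's algorithm) - treat it as an illustration rather than the canonical implementation:

        using System;

        public static class GuidComb
        {
            // The last 6 bytes of a random GUID are replaced with a timestamp, so
            // values generated over time sort roughly sequentially in SQL Server,
            // reducing clustered-index fragmentation on insert.
            public static Guid NewComb()
            {
                byte[] guidBytes = Guid.NewGuid().ToByteArray();

                DateTime baseDate = new DateTime(1900, 1, 1);
                DateTime now = DateTime.Now;

                TimeSpan days = new TimeSpan(now.Ticks - baseDate.Ticks);
                TimeSpan msecs = now.TimeOfDay;

                byte[] daysBytes = BitConverter.GetBytes(days.Days);
                // Divide by 3.333333 to match SQL Server datetime's 1/300s precision.
                byte[] msecsBytes = BitConverter.GetBytes((long)(msecs.TotalMilliseconds / 3.333333));

                // Reverse to match SQL Server's byte ordering for uniqueidentifier.
                Array.Reverse(daysBytes);
                Array.Reverse(msecsBytes);

                // Copy the timestamp into the last 6 bytes of the GUID.
                Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, guidBytes.Length - 6, 2);
                Array.Copy(msecsBytes, msecsBytes.Length - 4, guidBytes, guidBytes.Length - 4, 4);

                return new Guid(guidBytes);
            }
        }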

    Read the article

  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting up a default column is a common task for developers. But are we naming those default constraints explicitly?

    In the table creation below, the column Sys_DateTime will be allocated the default value Getdate():

        CREATE TABLE SampleTable
        (ID int identity(1,1),
         Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information from the system catalogs with the following query:

        SELECT sc.name ColumnName,
               dc.name DefaultName,
               dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName,
               dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id
            AND dc.parent_column_id = sc.column_id

    Most of the returned columns are self-explanatory. The last column, is_system_named, identifies whether the default name was given by the system. As you know, in the above case, since we didn't provide any default name, the system will generate a default name for you. But the problem with these names is that they can differ from environment to environment. For example, if I create this table in a different database, the default name could be DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
         Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
        ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

    If we run the previous query again, you can see that the last created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This will generate the below error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint that can differ from environment to environment, so that you cannot drop it by name as before, you can use the below code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
            ON dc.parent_object_id = sc.object_id
            AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME(parent_object_id) = 'SampleTable'
          AND sc.name = 'Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC( @executesql)

        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date

        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]

    Read the article

  • Elastic PaaS with WebLogic and OpenStack, part I

    - by Jernej Kaše
    In my previous blog I described the steps to get OpenStack on Solaris up and running. Now we'll explore how WebLogic and OpenStack can work together to deliver a truly elastic Middleware Platform as a Service.

    Middleware / Platform as a Service goals

    First, let's define what PaaS should be: PaaS offerings facilitate the deployment of applications without the complexity of managing the underlying hardware and software and provisioning hosting capabilities. To break it down:

    - PaaS provides a complete platform for hosting solutions (Java EE, SOA, BPM, ...)
    - Infrastructure provisioning (virtual machine, OS, platform) and management is hidden from the PaaS user [administrator or developer]
    - Additionally, PaaS could / should define target SLAs, and the platform should ensure the SLAs are met automatically.

    PaaS use case

    To make it more tangible, we have an IT administrator who has the requirement to deploy a Java EE enterprise application. The application is used by external users who need to submit reports by the end of each month. As a result, the number of concurrent users will fluctuate, with huge spikes expected around the end of each month. The SLA agreed by management is that no more than 100 requests should be waiting to be processed at any given time. In addition, the IT admin has no more than 3 days to have the platform and the application operational.

    The Challenges

    Some of the challenges the IT administrator is facing are:

    - how are we going to ensure the processing power?
    - how are we going to provision the (virtual) machines and the Java EE platform, and deploy the application?
    - how are we going to monitor the SLA?
    - how are we going to react to the SLA, and increase capacity?

    The Ideal Solution

    Ideally, the whole process should be automated, "set it and forget it", and require no human interaction:

    - The vendor packages the solution as deployable image(s)
    - The images are deployed to the IaaS
    - From there, automated processes take care of the SLA

    Solution Architecture with WebLogic 12c, Dynamic Clusters, OpenStack & Solaris

    - Oracle Solaris provides the OS and virtualisation through Solaris Zones
    - OpenStack is a part of Solaris 11.2 and provides cloud management (console and API)
    - WebLogic 12c with Dynamic Clusters provides the platform
    - Traffic Manager provides load balancing

    On top of that, we are going to implement a small control script - Cloud Manager - which is going to monitor the SLA through the WebLogic Diagnostic Framework. In case there are more than 100 pending requests, the script will (see the sketch at the end of this post):

    - provision a new virtual machine based on an image which is configured for the WebLogic domain
    - add the machine to the WebLogic domain
    - increase the number of servers in the dynamic cluster
    - start the newly provisioned server

    Stay tuned for part II

    The whole solution with a working demo will be presented in one of our Partner WebCasts in June, exact date TBA.

    Jernej Kaše is a Fusion Middleware Specialist working closely with Oracle Partners in the ECEMEA region to grow their business by leveraging Oracle technology.
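    To make the Cloud Manager idea concrete, here's a rough sketch of the control loop - every name in it (QueryPendingRequests, ProvisionMachine, and so on) is a hypothetical placeholder of mine; the real script would call the WebLogic Diagnostic Framework and the OpenStack APIs:

        using System;
        using System.Threading;

        class CloudManager
        {
            const int MaxPendingRequests = 100; // the SLA from the use case above

            static void Main()
            {
                while (true)
                {
                    int pending = QueryPendingRequests(); // would read a WLDF metric
                    if (pending > MaxPendingRequests)
                    {
                        // Scale out: one new machine, one new dynamic-cluster server
                        string machine = ProvisionMachine("weblogic-node-image"); // OpenStack call
                        AddMachineToDomain(machine);
                        IncreaseDynamicClusterSize(1);
                        StartNewestServer();
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(30)); // poll interval
                }
            }

            // Placeholders standing in for WLDF / OpenStack / WebLogic admin calls.
            static int QueryPendingRequests() { return 0; }
            static string ProvisionMachine(string image) { return "new-node-1"; }
            static void AddMachineToDomain(string machine) { }
            static void IncreaseDynamicClusterSize(int delta) { }
            static void StartNewestServer() { }
        }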

    Read the article

  • Hostapd - WLAN as AP

    - by BBK
    I'm trying to start hostapd, but without success. I'm using headless Ubuntu 11.10 oneiric, 3.0.0-16-server x86_64. The WLAN driver is rt2800usb, and my wireless NIC, a TP-Link TL-WN727N, supports AP mode, as shown below:

        us0# ifconfig wlan0
        wlan0  Link encap:Ethernet  HWaddr 00:27:19:be:cd:b6
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        us0# lsusb
        Bus 003 Device 003: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter

        us0# lshw -C network
        *-network:3
            description: Wireless interface
            physical id: 4
            bus info: usb@3:2
            logical name: wlan0
            serial: 00:27:19:be:cd:b6
            capabilities: ethernet physical wireless
            configuration: broadcast=yes driver=rt2800usb driverversion=3.0.0-16-server firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn

        us0# hostapd /etc/hostapd/hostapd.conf
        Configuration file: /etc/hostapd/hostapd.conf
        Could not read interface wlan0 # The int flags: No such device
        nl80211 driver initialization failed.
        ELOOP: remaining socket: sock=4 eloop_data=0xd3e4a0 user_data=0xd3ecc0 handler=0x433880
        ELOOP: remaining socket: sock=6 eloop_data=0xd411f0 user_data=(nil) handler=0x43cc10

        us0# cat /etc/hostapd/hostapd.conf
        ssid=Home
        interface=wlan0 # The interface name of the card
        #driver=rt2800usb
        driver=nl80211
        macaddr_acl=0
        ieee80211n=1
        channel=1
        hw_mode=g
        auth_algs=1
        ignore_broadcast_ssid=0
        wpa=2
        wpa_passphrase=88888888
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP
        rsn_pairwise=CCMP

        us0# iw list
        Wiphy phy0
            Band 1:
                Capabilities: 0x172
                    HT20/HT40
                    Static SM Power Save
                    RX Greenfield
                    RX HT20 SGI
                    RX HT40 SGI
                    RX STBC 1-stream
                    Max AMSDU length: 7935 bytes
                    No DSSS/CCK HT40
                Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
                Minimum RX AMPDU time spacing: 2 usec (0x04)
                HT RX MCS rate indexes supported: 0-7, 32
                TX unequal modulation not supported
                HT TX Max spatial streams: 1
                HT TX MCS rate indexes supported may differ
                Frequencies:
                    * 2412 MHz [1] (20.0 dBm)
                    * 2417 MHz [2] (20.0 dBm)
                    * 2422 MHz [3] (20.0 dBm)
                    * 2427 MHz [4] (20.0 dBm)
                    * 2432 MHz [5] (20.0 dBm)
                    * 2437 MHz [6] (20.0 dBm)
                    * 2442 MHz [7] (20.0 dBm)
                    * 2447 MHz [8] (20.0 dBm)
                    * 2452 MHz [9] (20.0 dBm)
                    * 2457 MHz [10] (20.0 dBm)
                    * 2462 MHz [11] (20.0 dBm)
                    * 2467 MHz [12] (20.0 dBm) (passive scanning, no IBSS)
                    * 2472 MHz [13] (20.0 dBm) (passive scanning, no IBSS)
                    * 2484 MHz [14] (20.0 dBm) (passive scanning, no IBSS)
                Bitrates (non-HT):
                    * 1.0 Mbps
                    * 2.0 Mbps (short preamble supported)
                    * 5.5 Mbps (short preamble supported)
                    * 11.0 Mbps (short preamble supported)
                    * 6.0 Mbps
                    * 9.0 Mbps
                    * 12.0 Mbps
                    * 18.0 Mbps
                    * 24.0 Mbps
                    * 36.0 Mbps
                    * 48.0 Mbps
                    * 54.0 Mbps
            max # scan SSIDs: 4
            Supported interface modes:
                * IBSS
                * managed
                * AP
                * AP/VLAN
                * WDS
                * monitor
                * mesh point
            Supported commands:
                * new_interface
                * set_interface
                * new_key
                * new_beacon
                * new_station
                * new_mpath
                * set_mesh_params
                * set_bss
                * authenticate
                * associate
                * deauthenticate
                * disassociate
                * join_ibss
                * Unknown command (68)
                * Unknown command (55)
                * Unknown command (57)
                * Unknown command (59)
                * Unknown command (67)
                * set_wiphy_netns
                * Unknown command (65)
                * Unknown command (66)
                * connect
                * disconnect

    The question is: why is hostapd not starting?

    Read the article

  • Oracle Fusion Middleware with Oracle Database 12c

    - by Hiro
    The following Oracle Fusion Middleware products are now supported with Oracle Database 12c as the database/repository:

    - Oracle WebLogic Server 11g (10.3.6) and Oracle Fusion Middleware 11g Release 1 (11.1.1.7.0)
    - Oracle Forms and Reports 11g Release 2 (11.1.2.1.0)

    For details of the supported combinations for Oracle WebLogic Server, see My Oracle Support Doc ID: 1564509.1. (A My Oracle Support account is required to view it.)

    Read the article

  • How to use ULS in SharePoint 2010 for Custom Code Exception Logging?

    - by venkatx5
    What is ULS in SharePoint 2010?

    ULS stands for Unified Logging Service, which captures and writes exceptions/logs to a log file (a plain text file with a .log extension). SharePoint logs each and every exception with ULS. SharePoint administrators should know ULS, and it's very useful when anything goes wrong. But when you ask any SharePoint 2007 administrator to check a log file, most of them will kill you - because reading and understanding the log file is not so easy. Imagine opening a plain text file of 20 MB in Notepad and going through it line by line. Microsoft has since developed a tool, "ULS Viewer", to view those log files in an easily readable format. This tool also helps to filter events based on exception priority. You can read this blog post to learn about ULS Viewer in detail.

    Where to get ULS Viewer?

    ULS Viewer is developed by Microsoft and is available to download for free.

    URL: http://code.msdn.microsoft.com/ULSViewer/Release/ProjectReleases.aspx?ReleaseId=3308

    Note: Even though this tool is developed by Microsoft, it's not supported by Microsoft. This means you can't get support for this tool from Microsoft, and you use it at your own risk. By the way, what's the risk in viewing log files?!

    How to use ULS in SharePoint 2010 Custom Code?

    ULS can be extended for use in user solutions to log exceptions. In detail, a developer can use ULS to log his own application errors and exceptions to the SharePoint log files. So now it's all in a single place (that's why it's called "Unified Logging"). In this article I am going to use Waldek's code (Reference Link). That article is the core; I am writing a container for it (basically, how to implement the code in detail). Let's see the steps.

    1. Open Visual Studio 2010 -> File -> New Project -> Visual C# -> Windows -> Class Library -> Name: ULSLogger (make sure you've selected .NET Framework 3.5).
    2. In the Solution Explorer panel, rename Class1.cs to LoggingService.cs.
    3. Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    4. Right-click on the project -> Properties. Select the "Signing" tab -> check "Sign the Assembly".
    5. In the drop-down below, select <New> and enter "ULSLogger"; uncheck the "Protect my key with a Password" option.
    6. Now copy and paste the code below (or just refer to it :-) ).

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint;
        using Microsoft.SharePoint.Administration;
        using System.Runtime.InteropServices;

        namespace ULSLogger
        {
            public class LoggingService : SPDiagnosticsServiceBase
            {
                public static string vsDiagnosticAreaName = "Venkats SharePoint Logging Service";
                public static string CategoryName = "vsProject";
                public static uint uintEventID = 700; // Event ID

                private static LoggingService _Current;
                public static LoggingService Current
                {
                    get
                    {
                        if (_Current == null)
                        {
                            _Current = new LoggingService();
                        }
                        return _Current;
                    }
                }

                private LoggingService()
                    : base("Venkats SharePoint Logging Service", SPFarm.Local)
                {
                }

                protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
                {
                    List<SPDiagnosticsArea> areas = new List<SPDiagnosticsArea>
                    {
                        new SPDiagnosticsArea(vsDiagnosticAreaName, new List<SPDiagnosticsCategory>
                        {
                            new SPDiagnosticsCategory(CategoryName, TraceSeverity.Medium, EventSeverity.Error)
                        })
                    };
                    return areas;
                }

                public static string LogErrorInULS(string errorMessage)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category =
                            LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, TraceSeverity.Unexpected, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }

                public static string LogErrorInULS(string errorMessage, TraceSeverity tsSeverity)
                {
                    string strExecutionResult = "Message Not Logged in ULS. ";
                    try
                    {
                        SPDiagnosticsCategory category =
                            LoggingService.Current.Areas[vsDiagnosticAreaName].Categories[CategoryName];
                        LoggingService.Current.WriteTrace(uintEventID, category, tsSeverity, errorMessage);
                        strExecutionResult = "Message Logged";
                    }
                    catch (Exception ex)
                    {
                        strExecutionResult += ex.Message;
                    }
                    return strExecutionResult;
                }
            }
        }

    Just build the solution, and it's ready to use now. This ULS solution can be used in SharePoint web parts or a console application. Let's see how to use it in a console application. SharePoint Server 2010 must be installed on the same server, or the application must be hosted in a SharePoint Server 2010 environment. The console application must be set to the "x64" platform target.

    1. Create a new console application. (Visual Studio -> File -> New Project -> C# -> Windows -> Console Application)
    2. Right-click on References -> Add Reference -> under the .NET tab select "Microsoft.SharePoint".
    3. Open Program.cs and add "using Microsoft.SharePoint.Administration;".
    4. Right-click on References -> Add Reference -> under the "Browse" tab select the "ULSLogger.dll" which we created first. (Path: ULSLogger\ULSLogger\bin\Debug\)
    5. Right-click on the project -> Properties -> select the "Build" tab -> under the "Platform Target" option select "x64".
    6. Open Program.cs and paste the code below.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.SharePoint.Administration;
        using ULSLogger;

        namespace ULSLoggerClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("ULS Logging Started.");

                    string strResult = LoggingService.LogErrorInULS("My Application is Working Fine.");
                    Console.WriteLine("ULS Logging Info. Result: " + strResult);

                    // Reuse the variable; redeclaring it would not compile
                    strResult = LoggingService.LogErrorInULS("My Application got an Exception.", TraceSeverity.High);
                    Console.WriteLine("ULS Logging Warning Result: " + strResult);

                    Console.WriteLine("ULS Logging Completed.");
                    Console.ReadLine();
                }
            }
        }

    Just build the solution and execute. It'll log the messages in the log file. Make sure you are using a Farm Administrator user ID. You can play with the message and TraceSeverity as required.

    Now open ULS Viewer -> File -> Open From -> ULS -> select the first option to open the default ULS log. This is ULS real time and will show all log entries in a readable table format. Right-click on a row and select "Filter By This Item". Select "Event ID" and enter the value "700" that we used in the application. Click OK, and now you'll see the exceptions/logs logged by our application.

    If you want to see only high-priority messages, click the icons on the toolbar (all except the red cross icon). The tooltips will tell you what the icons are used for.

    Read the article

  • New Information Center - Reviewing Security For FMW 11g

    - by Daniel Mortimer
    Announcing... Information Center: Reviewing Security For Oracle Fusion Middleware 11g [ID 1458051.2] has been published.

    [Screenshot of ID 1458051.2]

    What is an Information Center?

    Information Centers use widgets to aggregate knowledge content - such as support documents, product documentation, and support community threads - pertinent to a given task or intent. Widgets either contain static lists or, better still, are dynamic. A dynamic widget uses query criteria to present a list of support documents relevant to the title / subject matter of the widget. The content of a dynamic widget is refreshed automatically every 24 hours.

    Once you are in an Information Center, you can use the left-hand menu to navigate to the other Task / Intent Information Centers (e.g. "Install and Configure", "Patch", "Troubleshoot", "Upgrade") which are available for the chosen product.

    Are Information Centers easy to find?

    You can go straight to the new "Reviewing Security" Information Center by using the hyperlink given above. There are, however, two other methods which make Information Centers easier to find:

    - Browse Knowledge
    - Refine Your Search

    Browse Knowledge

    The "Browse Knowledge" feature is currently found in the "Knowledge" tab page in My Oracle Support. As illustrated by the screenshots below, you can find Information Centers by choosing a product (e.g. "Oracle Fusion Middleware"), a version, and an action / intent. If an Information Center exists for your selection, the "Advisor Found" button is enabled. Clicking on this button will take you straight to the desired Information Center.

    [Screenshot - Browse Knowledge 1]
    [Screenshot - Browse Knowledge 2]
    [Screenshot - Browse Knowledge 3]

    Refine Your Search

    "Refine Your Search" is a dialogue which is triggered by certain keywords that you may enter into the Global Search field in the top right-hand corner of My Oracle Support. "Refine Your Search" works in a similar manner to "Browse Knowledge": choose your product and version; the appropriate Task / Intent should already be selected for you. Thereafter, click the Go button.

    [Screenshot - Refine Your Search 1]
    [Screenshot - Refine Your Search 2]
    [Screenshot - Refine Your Search 3]

    Read the article

  • Returning null vs Throwing exceptions

    - by Svish
    I'm in a bit of a disagreement with a more experienced developer on this issue, and was wondering what you guys here think about this. The environment is Java, EJB 3, services, etc.

    The code I wrote calls a service to get things and to create things. The problem was that I got null pointer exceptions in places that didn't make sense. For example, when I asked the service to create an object, I got null back. And when I tried to look up an object with an id I knew existed, I still got null back. It was like it was ignoring me. I spent some time trying to figure out what was wrong in my code (since I'm less experienced, I usually assume I have messed up).

    It turns out the reason was security. If the user principal using my service didn't have the right permissions to use the service I called from my service, then that service simply returned null. The services that are here already are usually not documented either, so this is just something you have to know... somehow...

    So here is the thing: I find this rather confusing as a developer interacting with this service. To me it would make much more sense if that service threw an exception which would tell me that, hey, you don't have the proper permissions to get info about this thing or to create this new thing. I would then immediately know why my service wasn't working as expected.

    However, he argued that asking is not wrong. Exceptions should only be thrown when there is an error, and asking for a thing is not an error - even if you don't have permission to "see" the thing you asked for. The things are often looked up in a GUI by users, and for those users not having the right permissions, these things simply "do not exist". So, in short: Asking is not wrong, hence no exception. Get methods return null because to those users those things "don't exist". Create methods return null because nothing was created, since the user wasn't allowed to create anything.

    So, what do you guys think? Is this normal and/or good practice? I prefer throwing and catching exceptions because I find it much easier to know what's going on. So I would, for example, also prefer to throw a NotFoundException if you asked for an id which didn't exist, rather than returning null. Anyway, I'm just curious what others think about this, as I'm not the most experienced developer yet.
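    To make the two contracts concrete, here's a small sketch (written in C# for consistency with the other examples on this page, though the question's environment is Java/EJB 3; all names are illustrative):

        using System;

        public class Thing { }

        public class NotFoundException : Exception
        {
            public NotFoundException(string message) : base(message) { }
        }

        public class ThingService
        {
            // The co-worker's contract: "not visible to you" is a normal answer.
            // Callers must null-check, and a permissions problem looks exactly
            // like a genuinely missing record.
            public Thing FindOrNull(int id, bool callerHasPermission)
            {
                if (!callerHasPermission)
                    return null;
                return Lookup(id); // may also be null if the id does not exist
            }

            // The poster's preferred contract: the failure reason travels with
            // the failure, so the caller knows why nothing came back.
            public Thing FindOrThrow(int id, bool callerHasPermission)
            {
                if (!callerHasPermission)
                    throw new UnauthorizedAccessException("Caller may not read Thing " + id);

                Thing thing = Lookup(id);
                if (thing == null)
                    throw new NotFoundException("No Thing with id " + id);

                return thing;
            }

            private Thing Lookup(int id) { return null; } // stand-in for real data access
        }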

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, "Five Tips to Get Your Organisation Releasing Software Frequently", I looked at how teams can automate processes to speed up release frequency. In this post, I'm looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required - source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.

    This is where SQL Compare's command line comes into its own. I've put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example, the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO

        USE DeploymentTargets
        GO

        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO

        -- Now insert your target server and database details
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')

        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here's the PowerShell script you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {

            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match.
            # NB. Have set SQL Compare to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety

            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations

            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
            # For example use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings. See the commented-out example in the PowerShell:

        #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script. See the commented-out example:

        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
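    To make the drift-check idea concrete, here is a minimal sketch (not from the original article) of how that branch might look inside the loop above. It assumes your version of SQL Compare accepts a schema snapshot as a comparison source via a /snapshot1 switch and signals differences with exit code 79, as described above; $snapshotPath and the drift report names are hypothetical placeholders.

    # Hypothetical pre-deployment drift check; $snapshotPath is a placeholder for your schema snapshot file.
    $driftArgs = @("/snapshot1:$($snapshotPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
    & $SQLComparePath $driftArgs
    if ($LASTEXITCODE -eq 79) {
        # Differences found - the target has drifted, so generate a drift report instead of deploying
        $reportArgs = @("/snapshot1:$($snapshotPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/report:drift_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive")
        & $SQLComparePath $reportArgs
        write-host "Drift detected on $($_.serverName), $($_.databaseName) - deployment skipped"
    }
    elseif ($LASTEXITCODE -eq 0) {
        # Schemas match - safe to apply the pre-generated deployment script, e.g. via sqlcmd
        # & sqlcmd -S $($_.serverName) -d $($_.databaseName) -i "update_$($_.environment+"_"+$_.databaseName).sql"
    }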

    Read the article

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm

    Redwood Shores, Calif. – October 15, 2012

    News Facts
    Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, “Magic Quadrant for Transportation Management Systems (TMS).” (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, “Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment).”

    Built on a modern, flexible, Internet-based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a 26% share of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner’s report, “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016.” (2)

    Supporting Quote
    “Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency,” said Derek Gittoes, vice president, logistics product strategy, Oracle. “We believe our high standing in both Gartner reports is a reflection of Oracle’s commitment to addressing these challenges by delivering the industry’s broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements.”

    Supporting Resources
    Magic Quadrant for Transportation Management Systems
    Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016
    Oracle Transportation Management

    (1) Gartner, Inc., “Magic Quadrant for Transportation Management Systems,” by C. Dwight Klappich, August 23, 2012
    (2) Gartner, Inc., “Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012 – 2016,” by Chad Eschinger and C. Dwight Klappich, September 24, 2012

    About Oracle Applications
    Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases go to www.oracle.com/applications

    About Oracle
    Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ: ORCL), visit www.oracle.com.

    Trademarks
    Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    ###

    Media Contacts
    Karen – [email protected]
    Simon Jones, Blanc & – [email protected]

    Read the article

  • SQL Server 2008 R2 Service Pack 2 CTP is available

    - by AaronBertrand
    You can download the Service Pack 2 CTP from the following URL: http://www.microsoft.com/en-us/download/details.aspx?id=29848 The build # is 10.50.3720. This service pack contains all of the fixes from Service Pack 1 & Cumulative Updates 1 through 5, plus a few other minor fixes (a couple of SSRS bugs and a bug where an ALTER TABLE batch was not cached correctly). It does not include fixes from Service Pack 1 Cumulative Update #6, which I mentioned recently. You should *NOT* install this...(read more)

    Read the article

  • Reading/Writing Promoted Properties from BRE

    - by Sean Feldman
    ESB Toolkit Extensions is an open-source library that gives you an extended BRE/BRI provider to read and write promoted properties of a message from within the Business Rules Engine. I’ve used it to build an automated process for mapping to a canonical schema and then back to the destination schema, based on receiver ID as a promoted property (I will blog on this later). A very useful library!

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
    In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone:

    bleonard@solaris:~$ ppriv -l | wc -l
    82

    You can view the descriptions for each of those privileges as follows:

    bleonard@solaris:~$ ppriv -lv
    contract_event
            Allows a process to request critical events without limitation.
            Allows a process to request reliable delivery of all events on any event queue.
    contract_identity
            Allows a process to set the service FMRI value of a process contract template.
    ...

    Or for just one or more privileges:

    bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write
    file_dac_read
            Allows a process to read a file or directory whose permission bits or ACL do not allow the process read permission.
    file_dac_write
            Allows a process to write a file or directory whose permission bits or ACL do not allow the process write permission. In order to write files owned by uid 0 in the absence of an effective uid of 0 ALL privileges are required.

    However, in a non-global zone, only 43 of the 82 privileges are available by default:

    root@myzone:~# ppriv -l zone | wc -l
    43

    The missing privileges are:

    cpc_cpu dtrace_kernel dtrace_proc dtrace_user file_downgrade_sl file_flag_set file_upgrade_sl graphics_access graphics_map net_mac_implicit proc_clock_highres proc_priocntl proc_zone sys_config sys_devices sys_ipc_config sys_linkdir sys_dl_config sys_net_config sys_res_bind sys_res_config sys_smb sys_suser_compat sys_time sys_trans_label virt_manage win_colormap win_config win_dac_read win_dac_write win_devices win_dga win_downgrade_sl win_fontpath win_mac_read win_mac_write win_selection win_upgrade_sl xvm_control

    However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace:

    root@myzone:~# dtrace -l
    ID PROVIDER MODULE FUNCTION NAME

    The DTrace privileges can be added, however, as follows:

    bleonard@solaris:~$ sudo zonecfg -z myzone
    Password:
    zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
    zonecfg:myzone> verify
    zonecfg:myzone> exit
    bleonard@solaris:~$ sudo zoneadm -z myzone reboot

    Now I can run DTrace from within the zone:

    root@myzone:~# dtrace -l | more
    ID PROVIDER MODULE FUNCTION NAME
    1 dtrace BEGIN
    2 dtrace END
    3 dtrace ERROR
    7115 syscall nosys entry
    7116 syscall nosys return
    ...

    Note that certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone:

    bleonard@solaris:~$ sudo zoneadm -z myzone reboot
    privilege "dtrace_kernel" is not permitted within the zone's privilege set
    zoneadm: zone myzone failed to verify

    Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.
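    One small addition worth making (my suggestion, not from the original post): after the reboot, you can confirm the new privileges are actually visible inside the zone before reaching for dtrace:

    root@myzone:~# ppriv -l zone | grep dtrace   # should now list dtrace_proc and dtrace_user
    root@myzone:~# ppriv $$                      # shows the privilege sets of the current shell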

    Read the article

  • Allocating Entities within an Entity System

    - by miguel.martin
    I'm quite unsure how I should allocate/represent my entities within my entity system. I have various options, but most of them seem to have cons associated with them. In all cases entities are represented by an ID (integer), and possibly have a wrapper class associated with them. This wrapper class has methods to add/remove components to/from the entity.

    Before I mention the options, here is the basic structure of my entity system:

    Entity – an object that describes an object within the game
    Component – used to store data for the entity
    System – contains entities with specific components; used to update entities with specific components
    World – contains entities and systems for the entity system; can create/destroy entities and have systems added/removed from/to it

    Here are the options I have thought of:

    Option 1: Do not store the Entity wrapper classes, and just store the next ID/deleted IDs. In other words, entities will be returned by value, like so:

    Entity entity = world.createEntity();

    This is much like entityx, except I see some flaws in this design.
    Cons:
    There can be duplicate entity wrapper classes (as the copy-ctor has to be implemented, and systems need to contain entities).
    If an Entity is destroyed, the duplicate entity wrapper classes will not have an updated value.

    Option 2: Store the entity wrapper classes within an object pool, i.e. entities will be returned by pointer/reference, like so:

    Entity& e = world.createEntity();

    Cons:
    If there are duplicate entities, then when an entity is destroyed, the same entity object may be re-used to allocate another entity.

    Option 3: Use raw IDs, and forget about the wrapper entity classes. The downfall to this, I think, is the syntax that will be required for it. I'm thinking about doing this as it seems the most simple and easy to implement, but I'm quite unsure about it because of the syntax. i.e. to add a component with this design, it would look like:

    Entity e = world.createEntity();
    world.addComponent<Position>(e, 0, 3);

    As opposed to this:

    Entity e = world.createEntity();
    e.addComponent<Position>(0, 3);

    Cons:
    Syntax
    Duplicate IDs
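    Since syntax is the main con of Option 3, it may help to see it fleshed out. Below is a minimal, self-contained C++ sketch of the raw-ID approach — all names (World, EntityId, the per-type component store) are illustrative, not taken from entityx or any other library:

    #include <cstdint>
    #include <iostream>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using EntityId = std::uint32_t;

    struct Position { float x, y; };

    class World {
    public:
        EntityId createEntity() {
            if (!freeIds_.empty()) {               // recycle IDs of destroyed entities
                EntityId id = freeIds_.back();
                freeIds_.pop_back();
                return id;
            }
            return nextId_++;
        }

        void destroyEntity(EntityId e) {
            freeIds_.push_back(e);
            // A full implementation would also erase e from every component store.
        }

        // All component operations go through the World - the Option 3 syntax.
        template <typename T, typename... Args>
        void addComponent(EntityId e, Args&&... args) {
            components<T>().emplace(e, T{std::forward<Args>(args)...});
        }

        template <typename T>
        T* getComponent(EntityId e) {
            auto& store = components<T>();
            auto it = store.find(e);
            return it != store.end() ? &it->second : nullptr;
        }

    private:
        // One map per component type. The static local keeps the sketch short;
        // a real World would own its stores (e.g. keyed by std::type_index).
        template <typename T>
        std::unordered_map<EntityId, T>& components() {
            static std::unordered_map<EntityId, T> store;
            return store;
        }

        EntityId nextId_ = 0;
        std::vector<EntityId> freeIds_;
    };

    int main() {
        World world;
        EntityId e = world.createEntity();
        world.addComponent<Position>(e, 0.0f, 3.0f); // the syntax in question
        if (Position* p = world.getComponent<Position>(e))
            std::cout << p->x << ", " << p->y << "\n";
    }

    A common compromise between Options 1 and 3 is a thin handle that bundles the ID with a World pointer and forwards e.addComponent<Position>(0, 3) to world.addComponent<Position>(id, 0, 3); because the handle owns no component data of its own, duplicate handles are harmless in a way duplicate stateful wrappers are not.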

    Read the article

  • New Version of JMS Whitepaper

    - by ACShorten
    In the last post, I mentioned that the JMS functionality was extended to include support for JMS Topics, JMS Selectors, etc. The Oracle WebLogic JMS Integration and Oracle Utilities Application Framework whitepaper (Doc Id 1308181.1), from My Oracle Support, has been updated to include implementation details of each of these extensions, as well as new advice on implementing JMS. The updated whitepaper is now available.
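    For readers who haven't used them, here is roughly what JMS Topics and Selectors look like in code. This is a generic javax.jms sketch, not code from the whitepaper, and the JNDI names are placeholders:

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class TopicSelectorExample {
        public static void main(String[] args) throws Exception {
            // JNDI names are placeholders; in WebLogic these would be configured JMS resources.
            InitialContext ctx = new InitialContext();
            TopicConnectionFactory cf = (TopicConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Topic topic = (Topic) ctx.lookup("jms/MyTopic");

            TopicConnection conn = cf.createTopicConnection();
            TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            // A selector filters messages broker-side by header/property values, so this
            // subscriber only receives messages whose MessageType property is 'BILLING'.
            TopicSubscriber subscriber =
                session.createSubscriber(topic, "MessageType = 'BILLING'", false);

            conn.start();
            Message m = subscriber.receive(5000); // wait up to five seconds
            if (m instanceof TextMessage) {
                System.out.println(((TextMessage) m).getText());
            }
            conn.close();
        }
    }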

    Read the article

  • Launch 2010 Technical Readiness Unofficial Q&A.

    - by mbcrump
    I had an email from one of my readers about the 2010 Technical Readiness Series. Please read below:

    Hi Michael, I noticed you blogged a while back that you were going to attend the MS 2010 Launch event. I'm going to the session in Seattle on May 27. Is it worth it to attend? Also, I'm wondering if they give any free software away, like VS2010 Pro? Any decent vendors? Looking forward to hearing back from you.

    I decided this information would probably benefit several readers instead of just responding back to the one. In case you are not aware, MS has a 2010 launch event showing VS2010, Office 2010, SharePoint 2010 and SQL Server 2008 R2. You can sign up for an event here: https://microsoft.crgevents.com/Register2010/Content/Event_Selection.aspx

    I've answered the questions asked below.

    Q: Is it worth going?
    A: It is if you have had little or no exposure to the latest 2010 products. Most people are familiar with VS2010, but have not seen SharePoint 2010, Office 2010 or Windows Phone 7. It is designed to get you up to speed very quickly. If you have watched most of the MIX videos and keep up with .NET in general, you will benefit more by having the ability to ask questions.

    Q: Did you get any free software?
    A: No, only demos, including:
    VS2010/Office 2010 – they give you a link to the URL where you can download the trial version.
    Windows 7 Enterprise – you get a DVD with the trial version loaded.

    Q: Do they give away any cool swag?
    A: Just a Microsoft T-shirt (XL).

    Q: What about the vendor selection?
    A: At the event that I went to, most vendors were pushing SharePoint products. There wasn't a lot of variety in the selection. Most vendors were giving away the typical pens, buttons, stickers and trial software.

    If you have any other questions, feel free to contact me. I will answer and add to this unofficial FAQ.

    Read the article

< Previous Page | 658 659 660 661 662 663 664 665 666 667 668 669  | Next Page >