Search Results

Search found 5528 results on 222 pages for 'offsite storage'.


  • PASS 13 Dispatches: moving to the cloud

    - by Tony Davis
    PASS Summit 13, Day 1 keynote by Quentin Clarke and we're hearing about “redefining mission critical in the cloud”. With a move to the Windows Azure cloud comes the promise of capacity on demand, automatic HA, backups, patching and so on, as well as passing responsibility to MS for managing hardware, upgrades and so on. However, for many databases and applications the best route to the cloud is not necessarily obvious. For most, the path of least resistance is IaaS – SQL Server in an Azure VM. It removes the hardware burden but you still have to manage your databases, and implementing HA for SQL Server is your responsibility. Also, scaling up comes at quite a cost – the biggest VM (8 CPU cores, 56 GB RAM, 16 1TB drives with 500 IOPS each) weighs in at over $4,500 per month. With PaaS, in the form of Windows Azure SQL Database, you get a “3-copies replica set” so HA comes out of the box, and it removes the majority of the administration burden, but you are moving your database into a very different environment. For a start, it's a shared environment, with other customers using the same compute nodes in the cluster, and potentially even sharing the same database (multi-tenancy). Unless you pay for SQL Database Premium edition, the resources available for your workload will depend on how nicely others “play” in the shared environment. You'll potentially need to do a lot of tuning and application rewriting to avoid throttling issues, optimising application-database communication to deal with the increased latency between the two, and so on. You'll need aggressive application caching. You'll also need retry logic, to deal with (expected) node failure and the need to reconnect. In Tuesday's PASS Summit pre-con, the SQLCAT team spent a lot of time covering telemetry techniques (collecting the necessary monitoring data into Azure storage) to perform capacity planning and work out the hotspots and bottlenecks in your cloud applications. Tools like WAD (Windows Azure Diagnostics), performance counters, SQL Database DMVs, and others, will be essential. Of course, to truly exploit the vast horizontal scaling that is available from the existence of thousands of compute nodes, you'll also need to consider how to “shard” your data so Azure can move it between nodes at will. Finding the right path to the Cloud isn't easy, but it's coming. I spoke to people one year ago who saw no real benefit in trying to move their infrastructure and databases to the cloud, but now at their company, it's the conversation that won't go away. Tony.
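
    To make the retry logic concrete, here is a rough C# sketch of the kind of wrapper you would write. The error numbers tested in IsTransient and the back-off policy are illustrative assumptions, not an official list - check the current Azure documentation before relying on them.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            // Opens a connection and runs 'work', retrying when a transient failure occurs.
            public static T Execute<T>(string connectionString, Func<SqlConnection, T> work, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (var conn = new SqlConnection(connectionString))
                        {
                            conn.Open();
                            return work(conn);
                        }
                    }
                    catch (SqlException ex)
                    {
                        if (attempt >= maxAttempts || !IsTransient(ex))
                            throw;
                        // Exponential back-off before reconnecting; node failover is expected behavior.
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                    }
                }
            }

            private static bool IsTransient(SqlException ex)
            {
                // Illustrative subset of throttling/failover error numbers.
                return ex.Number == 40501 || ex.Number == 40197 || ex.Number == 40613;
            }
        }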

    Read the article

  • A brief introduction to BRM and architecture

    - by Yani Miguel
    Oracle Communications Billing and Revenue Management (Oracle BRM) is the telecom industry's leading solution intended for communications service providers. This post encourages you to get to know BRM, starting with the basics. History Portal was a billing and revenue management solution for the communications industry created by Portal Software. In 2006 Oracle acquired Portal Software and the solution was renamed BRM. Today Oracle BRM is the first end-to-end packaged enterprise software suite for the communications industry; however, BRM is just one more product in the catalog of OSS solutions that Oracle offers. BRM can bill and manage all communications services including wireline, wireless, broadband, cable, voice over IP, IPTV, music, and video. BRM Architecture BRM's architecture consists of 4 layers or tiers. These layers hold the data and business logic and expose the interfaces that connect the graphical client tools. Application tier This layer provides GUI client tools enabling communication to other layers through open APIs. Some BRM client applications are: Customer Center Pricing Center Universal Event Loader Web Server BRM Billing Application Collections Center Permissioning Center Furthermore, this layer is where real-time external events are delivered. Business Process Tier Although all layers are equally important, I think this one deserves more attention because it is where BRM functionality is implemented. All functions that give life to BRM live in this layer, coded in the C language as Opcodes (System Processes in the image). Any changes or additional functionality should be made here, so when we try to customize the product we will most of the time be programming in this layer (Business Policies in the image). Business Process Tier features: Implements Portal system functionality. Validates data from the application tier. Modifies Portal behavior through business policies; business policies can be customized. Triggers external systems using event notification. Object Tier This layer is responsible for translating BRM requests into database language and into external system requests. Without it, the business logic (data from the Business Process Tier) could not be understood by the relational database. Data tier The data tier is responsible for the storage of the BRM database and the databases of other external systems. External systems include credit card, tax, and directory servers. Finally, it's important to note that BRM is designed to easily integrate with the following solutions: AIA 2.4, Siebel CRM, E-Business Suite (G/L only), Communications Services Gatekeeper, Oracle BI Publisher. Personally, I think that BRM could improve by migrating its client-server architecture to a fully web-based platform that works with Oracle Middleware like any product in the Fusion Middleware family. Hopefully there are already initiatives in this area.

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer than they appear. The two concepts intersect at various points: BigData and Return on Investment of Marketing Campaigns In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how “Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand”. Regression analysis applied to the big data volumes coming from social media will replace the failed attempts to justify marketing investments in social media in terms of followers and likes. As he continues, “the measurement models applied by marketers on TV campaigns don't work on social”; we need to study the data with fresh eyes, and maybe then we will start understanding and measuring brand engagement. Social CRM and BigData The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and define appropriate strategies for contacting potential customers. Gartner says the market for Social CRM is on pace to surpass $1 billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, Analyst at Forrester Research, “Social customer relationship management is hard” (The Social CRM Arms Race Heats Up). To succeed, brands need three things: investing in new social tools, investing in consultancy, and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData are easy and cheap wins. But what are the customer benefits of such investments? Big Data and Brand Engagement Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because they do not get the same fast multichannel experience from their service providers or preferred goods suppliers that they find on their social media. Yes, I know you have read this before - me too. But it is real. The motto of the Customer Experience philosophy, providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable, finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving efficiency and marketing cost-savings, growing the customer base, and increasing customer conversion and retention. In a word: the direction of future marketing.

    Read the article

  • Partner Webcast – Oracle Coherence Applications on WebLogic 12c Grid - 21st Nov 2013

    - by Thanos Terentes Printzios
    Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud and always-connected devices, so does the need to handle more data in real-time, offload over-burdened shared data services and provide availability guarantees. The latest release of Oracle Coherence 12c comes with great improvements in ease of use, integration and RASP (Reliability, Availability, Scalability, and Performance). In addition it features an innovative approach to building and deploying Coherence applications as an integral part of a typical JEE enterprise application. Coherence GAR archives and Coherence Managed Servers are now first-class citizens of all JEE applications and Oracle WebLogic domains respectively. That enables even easier development, deployment and management of complex multi-tier enterprise applications powered by rich data grid features. Oracle Coherence 12c makes your solution ready for the future of big data and an always-online world. This webcast is focused on demonstrating: How to create a Coherence Application using Oracle Enterprise Pack for Eclipse 12.1.2.1.1 (Kepler release). How to package the application in the form of a GAR archive inside an EAR deployable application. How to deploy the application to multi-tier WebLogic clusters. How to define and configure the WebLogic domain for the tiered clusters hosting both data grid and client JEE applications. Finally we will expose the data in the grid to external systems using REST services and create a simple web interface to the underlying data using Oracle ADF Faces components. Join us on this technology webcast to find out more about how Oracle Cloud Application Frameworks brings together the key industry-leading technologies of Oracle Coherence and WebLogic 12c, delivering next-generation applications. Agenda: Introduction to Oracle Coherence What's new in the 12c release POF annotations Live Events Elastic Data (Flash storage support) Managed Coherence Servers for Oracle WebLogic Coherence Applications (Grid Archive) Live Demonstration Creating and configuring Coherence Servers forming the data tier cluster Creating a simple Coherence Grid Application in Eclipse Adding REST support and creating a simple ADF Faces client application Deploying the grid and client applications to separate tiers in the WebLogic topology HA capabilities of the data tier Summary - Q&A Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour REGISTER NOW For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Stay Connected Oracle Newsletters

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture's strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, “Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems,” by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is “one of the largest in the industry.” Under the relationship, Accenture has incorporated Oracle Engineered Systems—including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine—into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform—spanning server, storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement these systems and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture's Technology growth platform, put it, “We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions.” The pipeline is there—and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide: Platform Readiness Assessments Platform Implementation App Rationalization Database Rationalization Managed Services These are all enablement opportunities you can offer customers under Oracle's partner programs—to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • SQL SERVER – Iridium I/O – SQL Server Deduplication that Shrinks Databases and Improves Performance

    - by Pinal Dave
    Database performance is a common problem for SQL Server DBAs.  It seems like we spend more time on performance than just about anything else.  In many cases, we use scripts or tools that point out performance bottlenecks but we don't have any way to fix them.  For example, what do you do when you need to speed up a query that is already tuned as well as possible?  Or what do you do when you aren't allowed to make changes for a database supporting a purchased application? Iridium I/O for SQL Server was originally built at Confio Software (makers of Ignite) because DBAs kept asking for a way to actually fix performance instead of just pointing out performance problems. The technology is certified by Microsoft and was so promising that it was spun out into a separate company that is now run by the Confio founder/CEO and technology management team. Iridium uses deduplication technology to both shrink databases and boost I/O performance.  It is intriguing to see it work.  It will deduplicate a live database as it is running transactions.  You can watch the database get smaller while user queries are running. Iridium is a simple tool to use. After installing the software, you click an “Analyze” button, which will spend a minute or two on each database and estimate both your storage and performance savings.  Next, you click an “Activate” button to turn on Iridium I/O for your selected databases.  You don't need to reboot the operating system or restart the database during any part of the process. As part of my test, I also wanted to see if there would be an impact on my databases when Iridium was removed.  The ‘revert' process (bringing the files back to their SQL Server native format) was executed by a simple click of a button, and completed while the databases were available for normal processing. I was impressed and enjoyed playing with the software, and encourage all of you to try it out.  Here is the link to the website to download Iridium for free. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a Notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail). Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be a notification per recipient (or several) per platform. Each notification can contain platform-specific payload (for example an MMS can contain a sound or an image). The system needs to be scalable; I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real-time or later. Then the system needs to send the notification to the platform provider. For now, I've ended up with some thoughts, but I don't know how scalable the design will be or if it is a good design. I've thought of the following objects (in a pseudo language): a generic Notification object: class Notification { String $message; Payload $payload; Collection<Recipient> $recipients; } The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it'll take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batch, meaning I need to define one Notification with several Recipients. Each created notification could be stored in a persistent storage like a DB or Redis. Would it be a good idea to aggregate this later to make sure it is scalable? On the second step, I need to process these notifications. But how could I route the notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification? Or something like Notification.setType('MMS')? To allow processing a lot of notifications at the same time, I think a message queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely the language of my choice.
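
    To make the batching question concrete, here is a rough sketch (in C# for illustration, though I would write it in PHP; the queue, the NotificationJob type and LoadRecipientIds() are invented names). Recipients are streamed in provider-sized chunks so the full list is never held in memory, and each chunk becomes one queue message for a worker to process.

        using System.Collections.Generic;

        public static class RecipientBatcher
        {
            // Streams recipients in fixed-size chunks so a 1,000,000-recipient
            // notification never materializes the whole list in memory.
            public static IEnumerable<List<string>> Batch(IEnumerable<string> recipientIds, int batchSize)
            {
                var batch = new List<string>(batchSize);
                foreach (var id in recipientIds)
                {
                    batch.Add(id);
                    if (batch.Count == batchSize)
                    {
                        yield return batch;
                        batch = new List<string>(batchSize);
                    }
                }
                if (batch.Count > 0)
                    yield return batch;
            }
        }

        // Hypothetical usage: one queue message per chunk, popped by workers
        // that call the platform provider with a ready-made batch.
        // foreach (var batch in RecipientBatcher.Batch(LoadRecipientIds(), 500))
        //     queue.Publish(new NotificationJob("MMS", messageId, batch));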

    Read the article

  • Using CTAS & Exchange Partition to Replace IAS for Copying Partitions on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data & indexes from one partition to another partition in a partitioned table. Solution: Create the partition definition. Copy data from one partition to another partition by 'Insert as select (IAS)'. Create a nonpartitioned table by 'Create table as select (CTAS)'. Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments. Rebuild any unusable indexes. Exchange Partition Conversion Mutual conversion between a partition (or subpartition) and a nonpartitioned table Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table Exchange Partition Usage Scenarios High-speed loading of new, incremental data into an existing partitioned table in a DW environment Exchanging old data partitions out of a partitioned table; the data is purged from the partitioned table without actually being deleted and can be archived separately Exchange Partition Syntax ALTER TABLE schema.table EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition] WITH TABLE schema.table [INCLUDING|EXCLUDING] INDEXES [WITH|WITHOUT] VALIDATION UPDATE [INDEXES|GLOBAL INDEXES] INCLUDING | EXCLUDING INDEXES Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition and all the regular indexes and index partitions on the exchanged table to be marked UNUSABLE. If you omit this clause, then the default is EXCLUDING INDEXES. WITH | WITHOUT VALIDATION Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, then the default is WITH VALIDATION. UPDATE INDEXES | GLOBAL INDEXES Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables. Use UPDATE GLOBAL INDEXES instead.) Exchanging Partitions & Subpartitions Notes Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty. When exchanging partitioned index-organized tables: – The source and target table or partition must have their primary key set on the same columns, in the same order. – If key compression is enabled, then it must be enabled for both the source and the target, and with the same prefix length. – Both the source and target must be index organized. – Both the source and target must have overflow segments, or neither can have overflow segments. Also, both the source and target must have mapping tables, or neither can have a mapping table. – Both the source and target must have identical storage attributes for any LOB columns.
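
    To make the steps concrete, here is a rough end-to-end example. All table, partition and index names below are invented for illustration; adapt them to your own schema.

        -- 1. Materialize the source partition as a nonpartitioned staging table (CTAS).
        CREATE TABLE sales_stage NOLOGGING PARALLEL AS
          SELECT * FROM sales PARTITION (p_2013_q1);

        -- 2. Build indexes on the staging table to match the local indexes on the target.
        CREATE INDEX sales_stage_ix ON sales_stage (sale_id) NOLOGGING PARALLEL;

        -- 3. Swap the staging table's segments into the target partition.
        --    WITHOUT VALIDATION is needed here because the copied rows do not map
        --    into the target partition's key range.
        ALTER TABLE sales
          EXCHANGE PARTITION p_2013_q2 WITH TABLE sales_stage
          INCLUDING INDEXES WITHOUT VALIDATION
          UPDATE GLOBAL INDEXES;

        -- 4. Rebuild anything left UNUSABLE.
        ALTER INDEX sales_local_ix REBUILD PARTITION p_2013_q2;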

    Read the article

  • ObjectStorageHelper<T> now available for Windows 8 RTM

    - by jamiet
    In October 2011 I wrote a blog post entitled ObjectStorageHelper<T> – A WinRT utility for Windows 8 where I introduced a little utility class called ObjectStorageHelper<T> that I had been working on while noodling around on the Developer Preview of Windows 8. ObjectStorageHelper<T> makes it easy for anyone building apps for Windows 8 to save data to files. How easy? As easy as this: var myPoco = new Poco() { IntProp = 1, StringProp = "one" }; var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local); await objectStorageHelper.SaveAsync(myPoco); Compare that to the plumbing code that you would have to write otherwise: var Obj = new Poco() { IntProp = 1, StringProp = "one" }; StorageFile file = null; StorageFolder folder = GetFolder(storageType); file = await folder.CreateFileAsync(FileName(Obj), CreationCollisionOption.ReplaceExisting); IRandomAccessStream writeStream = await file.OpenAsync(FileAccessMode.ReadWrite); using (Stream outStream = Task.Run(() => writeStream.AsStreamForWrite()).Result) {     serializer.Serialize(outStream, Obj);     await outStream.FlushAsync(); } and you can see how ObjectStorageHelper<T> can help save a Windows 8 developer quite a few headaches. ObjectStorageHelper<T> simply requires you to pass it an object to be saved, tell it where to save it (Roaming, Local or Temporary), and you’re done. Retrieving an object from storage is equally as simple: var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local); var myPoco = await objectStorageHelper.LoadAsync(); Please check the homepage for the project at http://winrtstoragehelper.codeplex.com/ for (much) more info. A number of people have used and tested ObjectStorageHelper<T> since those early days and one of those folks in particular, David Burela, was good enough to report a couple of bugs: Saving Asynchronously Save fails when class is in another project As a result of David’s bug reports and some more extensive testing on my side I have overhauled the initial code that I wrote last October and am confident that it is now much more robust and ready for primetime (check the commit history if you’re interested). The source code (which, again, you can find on Codeplex at http://winrtstoragehelper.codeplex.com/) includes a suite of unit tests to test all of the basic use cases (if you can think of any more please let me know). If you use this in any of your Windows 8 projects then please let me know. I love getting feedback and I’d also love to know if this is actually being used anywhere. @Jamiet
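
    One possible first-run pattern looks like this (this sketch assumes LoadAsync() returns null when nothing has been saved yet; see the project homepage to confirm the exact contract):

        using System.Threading.Tasks;

        public async Task<Poco> LoadOrCreateAsync()
        {
            var objectStorageHelper = new ObjectStorageHelper<Poco>(StorageType.Local);
            var myPoco = await objectStorageHelper.LoadAsync();
            if (myPoco == null)
            {
                // First run: persist some defaults (the values here are arbitrary).
                myPoco = new Poco() { IntProp = 1, StringProp = "one" };
                await objectStorageHelper.SaveAsync(myPoco);
            }
            return myPoco;
        }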

    Read the article

  • Mounting a Microsoft Azure CloudDrive in a VMRole

    - by SeanBarlow
    Mounting a drive in a VMRole is a little more complicated than in a web or worker role.  The Web and Worker roles offer OnStart and OnStop events, which you can use to mount or unmount your drives. The VMRole does not have these same events, so you have to provide another way for the drives to be mounted or unmounted. The problem I have run into is: what if you have multiple drives and you only want to mount certain ones? How do you let your user choose which drive to mount? I am not going to go into details on what kind of GUI to present to the user. I have done this in a simple WPF application as well as a console application. We are going to need to get the storage account details. One thing to note: when you are mounting cloud drives you cannot use https, you have to use http. We force the use of http by passing false when we create the CloudStorageAccount.   StorageCredentialsAccountAndKey credentials = new StorageCredentialsAccountAndKey("AccountName", "AccountKey"); CloudStorageAccount storageAccount = new CloudStorageAccount(credentials, false);   Next we need to get a reference to the container.   var blobClient = storageAccount.CreateCloudBlobClient(); var container = blobClient.GetContainerReference("ContainerName");   Now we need to get a list of the drives in the container.   var drives = container.ListBlobs();   Now that we have a list of the drives in the container we can let the user choose which drive they want to mount. I am just selecting the first drive in the list for the example and getting the Uri of the drive.   var driveUri = drives.First().Uri;   Now that we have the Uri we need to get the reference to the drive. var drive = new CloudDrive(driveUri, storageAccount.Credentials);   Now all that is left is to mount the drive.   var driveLetter = drive.Mount(0, DriveMountOptions.None);   To unmount the drive all you have to do is call Unmount on the drive. drive.Unmount();   You do need to make sure you unmount the drives when you are done with them. I have run into issues with the drives being locked until the VMRole is rebooted. I have also managed to have a drive be permanently locked and I was forced to delete it and upload it again. I have been unable to reproduce the permanent lock but I am still trying. The CloudDrive class provides a handy method to retrieve all the mounted drives in the Role. foreach (var drive in CloudDrive.GetMountedDrives()) {          var mountedDrive = storageAccount.CreateCloudDrive(drive.Value.PathAndQuery);          mountedDrive.Unmount(); }
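
    One way to guarantee the unmount happens even when the code using the drive throws is a small disposable wrapper. This is just a sketch built on the same API calls used above:

        using System;
        using Microsoft.WindowsAzure;

        public class MountedCloudDrive : IDisposable
        {
            private readonly CloudDrive _drive;
            public string DriveLetter { get; private set; }

            public MountedCloudDrive(Uri driveUri, CloudStorageAccount storageAccount)
            {
                _drive = new CloudDrive(driveUri, storageAccount.Credentials);
                DriveLetter = _drive.Mount(0, DriveMountOptions.None);
            }

            public void Dispose()
            {
                // Runs on normal completion and on exceptions alike.
                _drive.Unmount();
            }
        }

        // Usage:
        // using (var mounted = new MountedCloudDrive(driveUri, storageAccount))
        // {
        //     File.WriteAllText(Path.Combine(mounted.DriveLetter, "log.txt"), "hello");
        // }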

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data, that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in. Hard Link Issues in a Component Library Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to be forced into a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library. Enter Dynamic Loading So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides: no assembly reference means only dynamic access (no compiler type checking or Intellisense), and a requirement for the host application to have a reference to JSON.NET or else get runtime errors. The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work: dynamically create an instance and optionally attempt to load an assembly (if not loaded); load types into dynamic variables; use Reflection for a few tasks like statics/enums. The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection. Dynamic Object Instantiation The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might not have been loaded yet if it hasn't been accessed. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this: /// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
Here's what the factory function looks like in JsonSerializationUtils: /// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; } This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless. Doing the actual JSON Conversion Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks, two each for serialization and deserialization for string and file. Here's what the File Serialization looks like: /// <summary> /// Serializes an object instance to a JSON file.
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded. Using the JsonSerializationUtils Wrapper The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider, which is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this: /// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if(Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to XmlConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile,false,true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } } This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. Still, for some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and weighing down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • FREE SPECIALIZATION EXAMS 2013

    - by agallego
    Get your Oracle Specialist certificate for FREE, 27 and 28 November 2013. You can now take the implementation exams for the Oracle specializations and become a specialist. You can take any of the implementation exams on the following list: Oracle Fusion Customer Relationship Management 11g Sales Certified Implementation Specialist (1Z0-456) Oracle Fusion Financials 11g General Ledger Implementation Specialist (1Z0-508) Oracle Fusion Financials 11g Accounts Payable Implementation Specialist (1Z0-507) Oracle Fusion Financials 11g Accounts Receivable Implementation Specialist (1Z0-506) Oracle Fusion Human Capital Management 11g Human Resources Certified Implementation Specialist (1Z0-584) Oracle Fusion Human Capital Management 11g Talent Management Certified Implementation Specialist (1Z0-585) Oracle ATG Web Commerce 10 Implementation Developer Certified Implementation Specialist (1Z0-510) Oracle Hyperion Planning 11 Essentials (1Z0-533) Oracle Business Intelligence Applications 7.9.6 for ERP Essentials (1Z0-525) Oracle Hyperion Financial Management 11 Essentials (1Z0-532) Oracle Business Intelligence Applications 7.9.6 for CRM Essentials (1Z0-524) Oracle Hyperion Data Relationship Management 11.1.2 Essentials (1Z1-588) Oracle Business Intelligence Foundation Suite 11g Essentials (1Z0-591) Oracle Essbase 11 Essentials (1Z0-531) SPARC T4-Based Server Installation Essentials (1Z0-597) Oracle Solaris 11 Installation and Configuration Essentials (1Z0-580) Sun ZFS Storage Appliance Certified Implementation Specialist Exalogic Elastic Cloud X2-2 Certified Implementation Specialist Oracle Exadata 11g Essentials (1Z0-536) Oracle VM 3 for x86 Essentials (1Z0-590) Oracle Linux Fundamentals (1Z0-402) Oracle Linux System Administration (1Z0-403) Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z0-451) Oracle Unified Business Process Management Suite 11g Certified Implementation Specialist (1Z0-560) Oracle WebLogic Server 12c Certified Implementation Specialist (1Z0-599) Oracle WebCenter Portal 11g Essentials (1Z0-541) Oracle WebCenter Content 11g Essentials (1Z0-542) Oracle Application Development Framework 11g Certified Implementation Specialist Oracle Identity Analytics 11g Certified Implementation Specialist (1Z0-545) Oracle Enterprise Manager 12c Essentials (1Z0-457) You can find information about each exam at the links above. To prepare for an exam, follow the Study Guide on that exam's page. Requirements: be an Oracle Gold, Platinum or Diamond Partner and have an Oracle Pearson VUE user account. When? 27 and 28 November, at 9:00, 12:00 or 16:00. Where? Core Networks, C.E. Parque Norte, Edificio Olmo, Planta 1, Serrano Galvache 56 | 28033, Madrid. To register: create an account at Pearson VUE (www.pearsonvue.com/oracle). Register here. For more information about the specialization program, click here. Don't miss this opportunity - register today. For any questions contact [email protected]. Ana María Gallego, Partner Enablement Manager, Spain and Portugal

    Read the article

  • Top 5 Sites and Activities in San Francisco to Experience During Oracle OpenWorld

    - by kgee
    While Oracle OpenWorld may provide solutions and information on topics like how to simplify your IT, the importance of cloud, and what types of storage may satisfy your enterprise needs, who is going to tell you more about San Francisco? Here are some suggested sites and activities to experience after OpenWorld that aren't too far from the Moscone Center. It is recommended to take a cab for the sake of time, but the roughly seven-by-seven miles that make up San Francisco will make for a quick trek to any of the following destinations: The Golden Gate Bridge An image often associated with San Francisco, this bridge is one of the most impressive in the world. Take a walk across it, or view it from nearby Crissy Field; it is a sight that floors even the most veteran of San Franciscans. The Ferry Building Located at the end of Market Street in the Embarcadero, the Ferry Building once served as a hub of water transport and trade. The building has a bay-front view and an array of food choices and restaurants. It is easily accessible via the Muni, BART, trolley or by cab. It is a must-see in San Francisco, and not too far from the Moscone Center. Ride the Trolley to the Castro For only $2, you can go back in history for a moment on the trolley. Take the F-line from the Embarcadero and ride it all the way to the Castro district. During the ride, you will get an overview of the landscape and cultures that are prevalent in San Francisco, but be wary that some areas may beg for an open mind more than others. Golden Gate Park When you tire of the concrete jungle, the lucky part of being in San Francisco is that you can escape to a natural refuge, this park being one of the favorites. This park is known for its hiking trails, cultural attractions, monuments, lakes and gardens. It is one good reason to bring your sneakers to San Francisco, and is also a great place to picnic. Be aware that it is easy to get lost, so it is advisable to bring a map (just in case) if you go. Haight Ashbury For a complete change of scenery, Haight Ashbury is known as one of the places hippies used to live and the location of "The Summer of Love." It is now a more affluent neighborhood with boutique shops and the occasional drum circle. While it may be perceived as grungy in certain spots, it is one of the most photographed places in San Francisco and an integral part of San Franciscan history.

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post provides some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser). Enterprise Monitoring Tools Oracle Enterprise Manager - OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage. Tools for the Database Oracle Diagnostics Pack Automatic Workload Repository (AWR) - this tool gets statistics from memory about the Time Model (DB Time), Wait Events, Active Session History and high-load SQL queries. Automatic Database Diagnostic Monitor (ADDM) - this self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues. It locates the root cause of an issue, provides recommendations for correcting it and quantifies the expected benefit. Oracle Database Tuning Pack SQL Tuning Advisor - this enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements. SQL Access Advisor - this enables you to optimize the data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload. Tools for the Application Server WebLogic Console - a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators will use the Administration Console for a number of tasks, including: Starting and stopping WebLogic servers or entire clusters. Configuring server parameters, security, database connections and deployed applications. Viewing server status, health and metrics. YourKit for profiling - helps analyze synchronization issues, including: which threads were calling wait(), and for how long; which threads were blocked on an attempt to acquire a monitor held by another thread (synchronized methods/blocks), and for how long. Tools for the Client Fiddler - allows you to inspect traffic logs, debug and set breakpoints. Firebug - allows you to inspect and edit HTML, monitor network activity and debug JavaScript.

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right. Brief Explanation My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest. Long Explanation Specifically I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag). This program must be fast. It must be able to easily pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices: Choice 1: Individual meta-data vs centralized meta-data Individual Meta-Data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized Meta-Data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
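
    To illustrate the tri-state ("tagged / untagged / dirty") idea in code, here is a minimal C# sketch; all names are invented. Storing only the known states means a newly introduced tag is implicitly "dirty" for every image at zero cost, which matches the requirement above.

        using System.Collections.Generic;

        public enum TagState { Dirty, Tagged, Untagged }

        public class ImageRecord
        {
            // Only known states are stored; any tag absent from the dictionary is Dirty.
            private readonly Dictionary<string, bool> _knownTags = new Dictionary<string, bool>();

            public TagState GetState(string tag)
            {
                bool applied;
                if (!_knownTags.TryGetValue(tag, out applied))
                    return TagState.Dirty;
                return applied ? TagState.Tagged : TagState.Untagged;
            }

            // Called when the user tags or untags the image in the Viewer.
            public void SetTag(string tag, bool applied)
            {
                _knownTags[tag] = applied;
            }
        }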

    Read the article

  • Silverlight Cream for March 07, 2011 -- #1055

    - by Dave Campbell
    In this Issue: Max Paulousky, Chris Rouw, David Anson, Jesse Liberty, Shawn Wildermuth, Simon Guindon, and Dhananjay Kumar. Above the Fold: Silverlight: "Faster Databinding in WPF and Silverlight using OptimizedObservableCollection" Simon Guindon WP7: "Phoney Tools Updated (WP7 Open Source Library)" Shawn Wildermuth From SilverlightCream.com: Problems With Sharing Windows Phone 7 Applications Within A Large Group Of Beta Testers Max Paulousky has a post up discussing the issues surrounding beta testing a WP7 app with a large group of testers... and how to pull it all off. WP7 Insights #1: Consuming REST APIs within a WP7 app Chris Rouw is beginning a WP7 series based on his recent experience of getting a client's app into the marketplace. This first in his series is on consuming REST APIs ... lots of good code and explanations. Improving Windows Phone 7 application performance is even easier with these LowProfileImageLoader and DeferredLoadListBox updates David Anson has an update to his LowProfileImageLoader and DeferredLoadListBox after issues brought up by readers... so we all win with the great feedback from alert devs. When Isolated Storage Isn’t Enough Jesse Liberty started looking at Jeremy Likness' Sterling with this post in the WP7 From Scratch series. He starts with downloading it from CodePlex ... great way to get into Sterling if you haven't already. Phoney Tools Updated (WP7 Open Source Library) Shawn Wildermuth has the latest drop of his Phoney Tools up... this is the last Alpha. I've added a tag for it as well. He's fixed some things, added others... check out the post and go grab the code. Faster Databinding in WPF and Silverlight using OptimizedObservableCollection Simon Guindon is a blogger I've not been following, but this post on an OptimizedObservableCollection caught my eye. He added an AddRange() to the ObservableCollection to get a speed enhancement when adding items... and a pretty good speed enhancement it is. Reading files asynchronously using WebClient class in Silverlight Dhananjay Kumar is another prolific blogger that I've not been following, so we'll start with his latest... a step-by-step guide to reading an XML file asynchronously. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation. There is not a “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA – “People and Processes” are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris: - DRBD for MySQL - Oracle Solaris Clustering for MySQL. DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver highly available database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies. DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB. Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL. Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies. Putting it All Together When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle Open World, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.

    Read the article

  • Silverlight Cream for November 17, 2011 -- #1168

    - by Dave Campbell
    In this Issue: Colin Eberhardt, Lazar Nikolov, WindowsPhoneGeek, Jesse Liberty, Peter Kuhn, Derik Whittaker, Chris Koenig, and Jeff Blankenburg(-2-). Above the Fold: Silverlight: "Facebook Graph API and Silverlight Part 2 – Publishing data" Lazar Nikolov WP7: "Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control" Colin Eberhardt Metro/WinRT/W8: "Tip/Trick when working with the Application Bar in WinRT/Metro (C#)" Derik Whittaker Shoutouts: Michael Palermo's latest Desert Mountain Developers is up Michael Washington's latest Visual Studio #LightSwitch Daily is up Pete Brown announced the completion of his book: It’s a wrap! I’ve completed writing Silverlight 5 in Action From SilverlightCream.com: Suppressing Zoom and Scroll interactions in the Windows Phone 7 WebBrowser Control Colin Eberhardt's latest post is all about a helper class he wrote to suppress scrolling and pinch zoom of the WP7 browser control, which you might want to do if the browser is placed inside another control. Facebook Graph API and Silverlight Part 2 – Publishing data In this part 2 of his Facebook and Silverlight series, Lazar Nikolov shows how to post data to your profile or your friends' profiles. Localizing a Windows Phone app Step by Step WindowsPhoneGeek's latest post is on localizing a WP7 app... another great tutorial with plenty of discussion, pictures, and a project to load up and follow. Background Audio Part II: Copying Audio Files To Isolated Storage Continuing his WP7 series, Jesse Liberty has Part 2 of a mini-series on Background Audio up... in this episode he's using local audio, and to do so, it must be in isolated storage. Silverlight: Bugs in the multicast client In a Q/A session, Peter Kuhn was presented with a nasty bug in the multicast client that he has verified exists in not only Silverlight 4 but also the Silverlight 5 Beta, including a link to his entry on Connect. Tip/Trick when working with the Application Bar in WinRT/Metro (C#) Derik Whittaker offers up some good information about the Metro Application Bar and how to keep it where you want it. New! Windows Phone Starter Kit for Podcasts Chris Koenig announced the release of a new starter kit for WP7... a starter kit for podcasts. Check out the links on Chris' site and the other two starter kits that are available. 31 Days of Mango | Day #4: Compass Jeff Blankenburg continues with Day 4 of his Mango series with this post on the Compass and a cool app to demonstrate it. 31 Days of Mango | Day #5: Gyroscope In Day 5, Jeff Blankenburg discusses the gyroscope; of course if you have a phone as old as mine, you won't have a gyroscope, and it's not on the emulator. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    TUESDAY, APRIL 27, 2010 How to open a .VDI Virtual Machine. Sometimes someone shares a virtual machine with us as a .VDI file, and we may wonder what to do with it. Well, the answer is... it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily - just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/ Ok, now with VirtualBox installed, open it and proceed with the following: 1. Open the Virtual Media Manager. 2. Click Actions > Add and select the .VDI file, then click "Ok". 3. Now register the new virtual machine - click New, then click Next. 4. Write down a name for the virtual machine and proceed to select an operating system and version (in this case Linux - Oracle Enterprise Linux or Red Hat). Click Next. 5. Select the base memory amount for the virtual machine (minimum 1280 MB for our case) - click Next. 6. Select the disk 11GR2_OEL5_32GB.vdi that was added in the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next. 7. Click Finish. 8. This step is important: click the Settings button. 9. On the General option, click the Advanced settings. Here you must change the default directory for your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the virtual machine and its snapshots in different paths. 10. Now click System and proceed to assign the correct memory (if you did not before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Define the processors for the virtual machine; if your processor is dual core, choose 2. 11. Select the video memory amount you want to assign to the virtual machine. 12. Associate more storage disks to the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master. 13. Well, you can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow those instructions. 14. Finally, start the virtual machine (click Start).

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in. Hard Link Issues in a Component Library. Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library. Enter Dynamic Loading. So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides: no assembly reference means only dynamic access - no compiler type checking or IntelliSense; and the host application is required to have a reference to JSON.NET or else get runtime errors. The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work: dynamically create an instance and optionally attempt to load an assembly (if not loaded); load types into dynamic variables; and use Reflection for a few tasks like statics/enums. The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection. Dynamic Object Instantiation. The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this: /// <summary> /// Creates an instance of a type based on a string. Assumes that the type's /// </summary> /// <param name="typeName">Common name of the type</param> /// <param name="args">Any constructor parameters</param> /// <returns></returns> public static object CreateInstanceFromString(string typeName, params object[] args) { object instance = null; Type type = null; try { type = GetTypeFromName(typeName); if (type == null) return null; instance = Activator.CreateInstance(type, args); } catch { return null; } return instance; } /// <summary> /// Helper routine that looks up a type name and tries to retrieve the /// full type reference in the actively executing assemblies. /// </summary> /// <param name="typeName"></param> /// <returns></returns> public static Type GetTypeFromName(string typeName) { Type type = null; // Let default name binding find it type = Type.GetType(typeName, false); if (type != null) return type; // look through assembly list var assemblies = AppDomain.CurrentDomain.GetAssemblies(); // try to find manually foreach (Assembly asm in assemblies) { type = asm.GetType(typeName, false); if (type != null) break; } return type; } To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils: /// <summary> /// Dynamically creates an instance of JSON.NET /// </summary> /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param> /// <returns>Dynamic JsonSerializer instance</returns> public static dynamic CreateJsonNet(bool throwExceptions = true) { if (JsonNet != null) return JsonNet; lock (SyncLock) { if (JsonNet != null) return JsonNet; // Try to create instance dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); if (json == null) { try { var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json"); json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer"); } catch (Exception ex) { if (throwExceptions) throw; return null; } } if (json == null) return null; json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore"); // Enums as strings in JSON dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter"); json.Converters.Add(enumConverter); JsonNet = json; } return JsonNet; } This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and, when it fails, tries to load the assembly and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless. Doing the actual JSON Conversion. Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks, two each for serialization and deserialization for string and file. Here's what the File Serialization looks like: /// <summary> /// Serializes an object instance to a JSON file.
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; } You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full-circle operation here's the DeserializeFromFile() version: /// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to deserialize from</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; } This code is a little more compact since there are no prettifying options to set.
Here JsonTextReader is created dynamically, and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded. Using the JsonSerializationUtils Wrapper. The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code to make this work now. The whole provider looks like this: /// <summary> /// Reads and Writes configuration settings in .NET config files and /// sections. Allows reading and writing to default or external files /// and specification of the configuration section that settings are /// applied to. /// </summary> public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration> where TAppConfiguration: AppConfiguration, new() { /// <summary> /// Optional - the Configuration file where configuration settings are /// stored in. If not specified uses the default Configuration Manager /// and its default store. /// </summary> public string JsonConfigurationFile { get { return _JsonConfigurationFile; } set { _JsonConfigurationFile = value; } } private string _JsonConfigurationFile = string.Empty; public override bool Read(AppConfiguration config) { var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration; if (newConfig == null) { if (Write(config)) return true; return false; } DecryptFields(newConfig); DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage"); return true; } /// <summary> /// Return /// </summary> /// <typeparam name="TAppConfig"></typeparam> /// <returns></returns> public override TAppConfig Read<TAppConfig>() { var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig; if (result != null) DecryptFields(result); return result; } /// <summary> /// Write configuration to JsonConfigurationFile location /// </summary> /// <param name="config"></param> /// <returns></returns> public override bool Write(AppConfiguration config) { EncryptFields(config); bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true); // Have to decrypt again to make sure the properties are readable afterwards DecryptFields(config); return result; } } This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in the JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy. Performance? Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change though depending on the size of objects serialized - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough to not matter. All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency. Dynamic Loading - Worth it? Dynamic loading is not something you need to worry about often, but on occasion it makes sense. There's a price to be paid in added code and a performance hit which depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files, add dependencies and load down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET  C#
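    For reference, here is a short usage sketch based only on the SerializeToFile()/DeserializeFromFile() signatures shown above; the Person class and the file path are made-up placeholders, and you will need a using directive for whatever namespace hosts JsonSerializationUtils in your copy of the library.

    using System;

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    class Demo
    {
        static void Main()
        {
            var p = new Person { Name = "Rick", Age = 40 };

            // Returns false (or throws, if requested) when JSON.NET
            // cannot be found or loaded in the host process.
            bool ok = JsonSerializationUtils.SerializeToFile(
                p, @"c:\temp\person.json", throwExceptions: false, formatJsonOutput: true);

            // DeserializeFromFile returns object, so cast to the target type.
            var copy = (Person)JsonSerializationUtils.DeserializeFromFile(
                @"c:\temp\person.json", typeof(Person));

            Console.WriteLine("{0}: {1}, {2}", ok, copy.Name, copy.Age);
        }
    }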

    Read the article

  • Clone an Azure VM using Powershell

    - by jamiet
    In a few months time I will, in association with Technitrain, be running a training course entitled Introduction to SQL Server Data Tools. I am currently working on putting together some hands-on lab material for the course delegates and have decided that, in order to save time in asking people to install software during the course, I am simply going to prepare a virtual machine (VM) containing all the software and lab material for each delegate to use. Given that I am an MSDN subscriber it makes sense to use Windows Azure to host those VMs, as it will be close to, if not completely, free to do so. What I don’t want to do however is separately build a VM for each delegate; I would much rather build one VM and clone it for each delegate. I’ve spent a bit of time figuring out how to do this using Powershell and in this blog post I am sharing a script that will: Prompt for some information (Azure credentials, Azure subscription name, VM name, username & password, etc…) Create a VM on Azure using that information Prompt you to sysprep the VM and image it (this part can’t be done with Powershell so has to be done manually, a link to instructions is provided in the script output) Create three new VMs based on the image Remove those three VMs Simply download the script and execute it within Powershell; assuming you have an Azure account it should take about 20 minutes to execute (spinning up VMs and shutting them down isn’t instantaneous). If you experience any issues please do let me know. There are additional notes below. Hope this is useful! @Jamiet  Notes: Obviously there isn’t a lot of point in creating some new VMs and then instantly deleting them. However, this demo script does provide everything you need should you want to do any of these operations in isolation. The names of the three VMs that get created will be suffixed with 001, 002, 003, but you can edit the script to call them whatever you like. The script doesn’t totally clean up after itself. If you specify a service name & storage account name that don’t already exist then it will create them; however, it won’t remove them when everything is complete. The created image file will also not be deleted. Removing these items can be done by visiting http://manage.windowsazure.com. When creating the image, ensure you use the correct name (the script output tells you what name to use). Here are some screenshots taken from running the script: When the third and final VM gets removed you are asked to confirm via this dialog: Select ‘Yes’

    Read the article

  • SSAS: Utility to check you have the correct data types and sizes in your cube definition

    - by DrJohn
    This blog describes a tool I developed which allows you to compare the data types and data sizes found in the cube’s data source view with the data types/sizes of the corresponding dimensional attributes. Why is this important? Well, when creating named queries in a cube’s data source view, it is often necessary to use the SQL CAST or CONVERT operation to change the data type to something more appropriate for SSAS. This is particularly important when your cube is based on an Oracle data source or uses custom SQL queries rather than views in the relational database. The problem with BIDS is that if you change the underlying SQL query, then the size of the data type in the dimension does not update automatically. This then causes problems during deployment, whereby processing the dimension fails because the data in the relational database is wider than that allowed by the dimensional attribute. In particular, if you use some string manipulation functions provided by SQL Server or Oracle in your queries, you may find that the 10 character string you expect suddenly turns into an 8,000 character monster. For example, the SQL Server function REPLACE returns a column with a width of 8,000 characters. So if you use this function in the named query in your DSV, you will get a column width of 8,000 characters. Although the Oracle REPLACE function is far more intelligent, the generated column size could still be way bigger than the maximum length of the data actually in the field. Now this may not be a problem when prototyping, but in your production cubes you really should clean up this kind of thing, as these massive strings will add to processing times and storage space. Similarly, you do not want to forget to change the size of the dimension attribute if your database columns increase in size. Introducing the CheckCubeDataTypes Utility The CheckCubeDataTypes application extracts all the data types and data sizes for all attributes in the cube and compares them to the data types and data sizes in the cube’s data source view. It then generates an Excel CSV file which contains all this metadata along with a flag indicating if there is a mismatch between the DSV and the dimensional attribute. Note that the app not only checks all the attribute keys but also the name and value columns for each attribute. Another benefit of having the metadata held in a CSV text file format is that you can place the file under source code control. This allows you to compare the metadata of the previous cube release with your new release to highlight problems introduced by new development. You can download the C# source code from here: CheckCubeDataTypes.zip A typical example of the output Excel CSV file is shown below - note that the last column shows a data size mismatch by TRUE appearing in the column.
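    The download above contains the real implementation; purely as a rough sketch of where this attribute metadata lives, the following C# fragment uses AMO (Microsoft.AnalysisServices) to dump the data type and size of each attribute's key and name columns. The server and database names are placeholders, and the DSV comparison and CSV output of the actual utility are omitted.

    using System;
    using Microsoft.AnalysisServices; // AMO client library

    class DumpAttributeDataTypes
    {
        static void Main()
        {
            var srv = new Server();
            srv.Connect("Data Source=localhost");               // placeholder server
            Database db = srv.Databases.GetByName("MySsasDb");  // placeholder database

            foreach (Dimension dim in db.Dimensions)
                foreach (DimensionAttribute attr in dim.Attributes)
                {
                    foreach (DataItem key in attr.KeyColumns)
                        Console.WriteLine("{0}.{1} key: {2}({3})",
                            dim.Name, attr.Name, key.DataType, key.DataSize);

                    // NameColumn may be null when the key doubles as the name
                    if (attr.NameColumn != null)
                        Console.WriteLine("{0}.{1} name: {2}({3})",
                            dim.Name, attr.Name, attr.NameColumn.DataType,
                            attr.NameColumn.DataSize); // -1 usually means "not set"
                }

            srv.Disconnect();
        }
    }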

    Read the article

  • Unrated Easy iOS 6.1.4/6.1.3 Unlock/Jailbreak iPhone 5/4S/4/3GS Untethered System

    - by user171772
    Popular jailbreak tool Unlock-Jailbreak.net – compiled by the iPhone Team – has just been updated with full support for Unlock/Jailbreak iPhone 5/4S/4/3GS iOS 6.1.4 and 6.1.3/6.0.1 Untethered. You may have caught our tutorial, which detailed how one could jailbreak their device tethered using Redsn0w, although since it was a pre-iOS 6.1.1 release, users needed to "point" the tool to the older firmware. Team Unlock-Jailbreak was established a few years ago and combines some of the jailbreak and unlock community’s most talented developers, all known for producing reliable jailbreaks in the past. This team was assembled in order to develop a reliable untethered jailbreak and unlock for iPhone 5/4S/4 on iOS 6.1 for post-A5 devices, including the iPhone 5, the iPad mini and the latest-generation iPad. This has now been achieved with the just-released userland jailbreak tool, known as Unlock-Jailbreak.net. To jailbreak and unlock your iPhone 5/4/4S/3GS on iOS 6.1.4 and 6.1.3, visit the official website http://www.Unlock-Jailbreak.net. http://www.Unlock-Jailbreak.net was formed in mid 2008 and has successfully jailbroken over 250,000 iPhones worldwide. This is unparalleled by any other service in the industry. They have achieved this by combining a very simple solution with a fantastic customer service department that is available 24/7 through many forms of contact, including telephone. Unlock-Jailbreak from Unlock-Jailbreak.net has been downloaded by over 250,000 customers located in over 145 countries. To further assure customers of its product's usability, Unlock-Jailbreak offers a 100% full money back guarantee on all orders. Customers dissatisfied with the company’s product will be given a full refund, no questions asked. One good advantage of the software is that the jailbreaking and unlocking process is completely reversible and there will be no evidence that the iPhone has been jailbroken and unlocked. iOS 6.1/6.1.4 and 6.1.3 comes with many new features and updates for multitasking and storage. By unlocking and jailbreaking the iPhone, you unleash unlimited possibilities to improve this already fantastic experience and reach the iPhone's full potential. Before going through any jailbreak process with Unlock-Jailbreak it is always good housekeeping to perform a full backup of all information on the device. It is unlikely that anything will go wrong during the process, but when undertaking any process that modifies the internals of a file system it is always prudent to err on the side of caution.

    Read the article

  • ZTE USB Modem AC2736, connection not possible in Ubuntu 12.04.1 LTS

    - by Fredo
    It's a long post but it covers nearly all my experiments and the changes I made to my Network Manager. I hope the information is complete, and if there are still questions, more information can be provided. I've a ZTE AC2736 USB Modem (CDMA modem) which worked fine in Ubuntu 10.04/11.10. I recently switched to 12.04.1 (Precise Pangolin). After the switch, the first issue I faced was connecting to the internet using my USB modem (ISP: Reliance, brand: Netconnect). I tried to run the drivers provided by Reliance, but they are old and don't support kernels above 2.6.30. Since the driver code was not downloaded with the ISO image (of 12.04), I couldn't compile the files provided with the driver. lsusb does detect it as a modem, with output similar to 19d2:fff1 ZTE CDMA Technologies Inc. (or similar, as I didn't note it down). If it is detected as USB storage it shows 19d2:fff5 (as per a few online forums; I may be wrong here). I used Network Manager and configured the modem to dial #777 (default) with the ISP-provided username/password combination. It tries to connect to the internet (3-4 times automatically) but fails to get online. Once I was able to connect in the morning hours and the message 'registered on CDMA home network' was flashed. I was able to run an update; the kernel was updated to 3.0.2-pae or something similar (I can find out if required). I surfed the net for about 2 hours before restarting. After the restart, again the modem was not able to get me online. I kept trying many times. I tried changing the settings in NM. One evening after dark I was able to connect to the network with a similar flashed message 'registered on CDMA home network' (I'm not precise here, sorry). I was able to surf the net for nearly 3 hours before I switched off my laptop. I've not been able to get online since that day; it's been 3 days now. I'll try the observed theory of late/early hours sometime soon, as mentioned below. Laptop configuration: Make: Lenovo Model: B480 Processor: Intel B950 RAM: 4G DDR3 HDD: 500 G Broadcom Wireless/Bluetooth/Ethernet LAN OS: FreeDOS / Ubuntu 12.04 LTS (dual boot) Kernel 3.0.2-pae Observation: I'm able to connect to the internet in those hours when the speed is generally high (low usage by other (wireless) network users), like early mornings or late nights. This is strange, as the connection should not be dependent on bandwidth usage. Any help to fix this issue would be appreciated, before I decide to change the OS or ISP.

    Read the article

  • Monitoring Your Servers

    - by Grant Fritchey
    If you are the DBA in a large scale enterprise, you’re probably already monitoring your servers for up-time and performance. But if you work for a medium-sized business, a small shop, or even a one-man operation, chances are pretty good that you’re not doing that sort of monitoring. You know that you’re supposed to be doing it, but other things, more important at-the-moment things, keep getting in the way. After all, which is more important, some monitoring or backup testing? Backup testing, of course. Monitoring is frequently one of those things that you do when you can get around to it. Well, as you can see at the right, I have your round tuit ready to go. What if I told you that you could get monitoring on your servers for up-time, job completion, performance, all the standard stuff? And what if I told you that you wouldn’t need to install and configure another server in your environment to get it done? And what if I told you that you’d be able to set up and customize your alerts so you could know if your server was offline or a drive was full? Almost nothing for you to do, and you’ll have a full-blown monitoring process. Sounds too good to be true, doesn’t it? Well, it’s coming. We’re creating an online, remote monitoring system here at Red Gate. You’ll be able to use our SQL Monitor tool (which you can see here, monitoring SQL Server Central in real time) to keep track of your systems, but without having to set up a server and a database for storing the information collected. Instead, we’re taking advantage of services available through the internet to enable collection and storage of this information remotely, off your systems. All you have to do is install a piece of software that will communicate between our service and your servers and you’ll be off and running. It’s that easy. Before you get too excited, let me break the news that this is the near future I’m talking about. We’re setting up the program and there’s a sign-up you can use to get in on the initial tests.

    Read the article

< Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >