Search Results

Search found 6438 results on 258 pages for 'layer groups'.

Page 200/258 | < Previous Page | 196 197 198 199 200 201 202 203 204 205 206 207  | Next Page >

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy. Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others. Private Cloud computing generally means that the hardware and software to run services used by your organization are run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations, but they are also viable for small and medium-sized businesses since they generally allow automation of services and a reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful. So – how do you get started? The first step is to determine what makes the most sense for your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy and then download and install the Microsoft Private Cloud technologies including Windows Server 2008 R2 Hyper-V and System Center 2012 in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps and more, to get you started with these technologies hands-on. Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you with what you need. Twice a year, through something called "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think. Cloud Computing Resources: Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products. Microsoft Virtual Academy – for free online training to help improve your IT skillset.
    Office 365 Trial/Info page – get more information or try it out for yourself. Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud. Windows Intune Trial/Info – get more information or try it out for yourself. Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online. Additional Resources You May Find Useful: Springboard Series Your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure. TechNet Evaluation Center Try some of our latest Microsoft products for free, like System Center 2012 Pre-Release Products, and evaluate them before you buy. AlignIT Manager Tech Talk Series A monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate in real time or watch the on-demand recording. Tech·Days Online Discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • How to get around the Circular Reference issue with JSON and Entity

    - by DanScan
    I have been experimenting with creating a website that leverages MVC with JSON for my presentation layer and Entity Framework for the data model/database. My issue comes into play when serializing my Model objects into JSON. I am using the code first method to create my database. When doing the code first method, a one-to-many relationship (parent/child) requires the child to have a reference back to the parent. (The example code may have a typo but you get the picture.) class parent { public List<child> Children{get;set;} public int Id{get;set;} } class child { public int ParentId{get;set;} [ForeignKey("ParentId")] public parent MyParent{get;set;} public string name{get;set;} } When returning a "parent" object via a JsonResult a circular reference error is thrown because "child" has a property of class parent. I have tried the ScriptIgnore attribute but I lose the ability to look at the child objects. I will need to display information in a parent child view at some point. I have tried to make base classes for both parent and child that do not have a circular reference. Unfortunately when I attempt to send the baseParent and baseChild these are read by the JSON Parser as their derived classes (I am pretty sure this concept is escaping me). Base.baseParent basep = (Base.baseParent)parent; return Json(basep, JsonRequestBehavior.AllowGet); The one solution I have come up with is to create "View" Models. I create simple versions of the database models that do not include the reference to the parent class. These view models each have a method to return the database version and a constructor that takes the database model as a parameter (viewmodel.name = databasemodel.name). This method seems forced although it works. NOTE: I am posting here because I think this is more discussion worthy. I could leverage a different design pattern to overcome this issue or it could be as simple as using a different attribute on my model. In my searching I have not seen a good method to overcome this problem. My end goal would be to have a nice MVC application that heavily leverages JSON for communicating with the server and displaying data, while maintaining a consistent model across layers (or as close to that as I can come up with).
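
    A minimal sketch of the view-model (DTO) approach described above - the view-model class and property names are hypothetical, and the mapping is written by hand rather than with a mapping library:

        using System.Collections.Generic;
        using System.Linq;

        // Flat shapes with no back-reference to the parent, so the serializer never sees a cycle.
        public class ChildView
        {
            public string Name { get; set; }
            public ChildView(child c) { Name = c.name; }
        }

        public class ParentView
        {
            public int Id { get; set; }
            public List<ChildView> Children { get; set; }
            public ParentView(parent p)
            {
                Id = p.Id;
                Children = p.Children.Select(c => new ChildView(c)).ToList();
            }
        }

        // In a controller action:
        // return Json(new ParentView(someParent), JsonRequestBehavior.AllowGet);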

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion and processes those blocks. Let's say we have data items of type Foo arriving from some source (from the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components. // ********* Interfaces ********** interface IFooSource { // this is the event-stream of objects of type Foo IObservable<Foo> FooArrivals { get; } } interface IBarSource { // this is the event-stream of objects of type Bar IObservable<Bar> BarArrivals { get; } } // ********* Implementations ********* class FooSource : IFooSource { // Here we put logic that receives Foo objects from the network and publishes them to the FooArrivals event stream. } class FooSubsetsToBarConverter : IBarSource { IFooSource fooSource; IObservable<Bar> BarArrivals { get { // Do some fancy Rx operators on fooSource.FooArrivals, like Buffer, Window, Join and others and return IObservable<Bar> } } } // this class will subscribe to the bar source and do processing class BarsProcessor { BarsProcessor(IBarSource barSource); void Subscribe(); } // ******************* Main ************************ class Program { public static void Main(string[] args) { var fooSource = FooSourceFactory.Create(); var barsProcessor = BarsProcessorFactory.Create(fooSource) // this will create FooSubsetToBarConverter and BarsProcessor barsProcessor.Subscribe(); fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } The other suggested a design whose main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed. //********** interfaces ********* interface IPublisher<T> { void Subscribe(ISubscriber<T> subscriber); } interface ISubscriber<T> { Action<T> Callback { get; } } //********** implementations ********* class FooSource : IPublisher<Foo> { public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ } // here we put logic that receives Foo objects from some source (the network?) publishes them to the registered subscribers } class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar> { void Callback(Foo foo) { // here we put logic that aggregates Foo objects and publishes Bars when we have received a subset of Foos that match our criteria // maybe we use Rx here internally. } public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ } } class BarsProcessor : ISubscriber<Bar> { void Callback(Bar bar) { // here we put code that processes Bar objects } } //********** program ********* class Program { public static void Main(string[] args) { var fooSource = fooSourceFactory.Create(); var barsProcessor = barsProcessorFactory.Create(fooSource) // this will create BarsProcessor and perform all the necessary subscriptions fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
Here are some things to consider about the designs: In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback. The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx. If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.
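
    For reference, a sketch of how the converter in the first design might look once filled in with Rx operators - Buffer(10) is just a stand-in for whatever grouping criterion actually applies, and the Bar constructor taking a list of Foos is an assumption:

        using System;
        using System.Reactive.Linq; // Rx.NET operators

        class FooSubsetsToBarConverter : IBarSource
        {
            private readonly IFooSource fooSource;

            public FooSubsetsToBarConverter(IFooSource fooSource)
            {
                this.fooSource = fooSource;
            }

            public IObservable<Bar> BarArrivals
            {
                get
                {
                    // Group every 10 Foo arrivals into one Bar; a real implementation
                    // would use Window/GroupBy/Join to express the actual criterion.
                    return fooSource.FooArrivals
                                    .Buffer(10)
                                    .Select(foos => new Bar(foos));
                }
            }
        }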

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success By Mitchell Palski, Oracle WebCenter Sales Consultant Many organizations use Oracle WebCenter Portal to develop these basic types of portals: Intranet portals used for collaboration, employee self-service, and company communication Extranet portals used by customers and partners for self-service and support Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a Portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a Portal that would vary based on a user’s identity include: Web content such as images, news articles, and on-screen instruction Social tools such as threaded discussions, polls/surveys, and blogs Document management tools to upload, download, and edit files Web applications that present data visualizations and data entry modules These collections of content, tools, and applications make up valuable workspaces. The challenge that a development team may have is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes unused or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides you with the capabilities to not only rapidly develop variations of portals, but also identify which portals are the most effective and should be re-used throughout an enterprise. Capturing Portal Analytics: Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of: Usage tracking metrics Behavior tracking User Profile Correlation The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic Page Traffic Login Metrics Portlet Traffic Portlet Response Time Portlet Instance Traffic Portlet Instance Response Time Search Metrics Document Metrics Wiki Metrics Blog Metrics Discussion Metrics Portal Traffic Portal Response Time By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable.  Your first step as an administrator should be to identify the specific pages and/or components that are used the most frequently. Next, determine the user(s) or user-group(s) that are accessing those high-use elements of a portal. It is also important to determine patterns in high-usage and see if they correlate to a specific schedule. One of the goals of any development team (especially those that are following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides you with the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.
    Re-using Successful Portals: When creating a new Portal in Oracle WebCenter, developers have the option to base that portal on a template that includes: Pre-seeded data such as pages, tools, user roles, and look-and-feel assets Specific sub-sets of page-layouts, tools, and other resources to standardize what is added to a Portal’s pages Any custom components that your team creates during development cycles Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter’s ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences instead of starting from scratch. ***For a complete explanation of how to work with Portal Templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form in which they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.
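
    As a concrete illustration of the "thin business object" option, here is a hedged sketch (the class, field names and dictionary stand-in for the stored JSON/BSON document are all hypothetical): the raw document stays available for the 9-in-10 pass-through case, while the 1-in-10 server-side accesses go through named members, so knowledge of the implicit structure lives in one place:

        using System.Collections.Generic;

        public class PlayerRecord
        {
            private readonly IDictionary<string, object> doc;

            public PlayerRecord(IDictionary<string, object> doc) { this.doc = doc; }

            // The server-side case: the implicit schema is referenced only here,
            // so a rename in the stored document means one change.
            public int Score { get { return (int)doc["score"]; } }

            // The pass-through case: hand the document back untouched for the client.
            public IDictionary<string, object> Raw { get { return doc; } }
        }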

    Read the article

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up. Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own). My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc. etc. As well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform independently. This routing for each method can be set up as and when it's required. Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise). My thoughts: To me, there are a number of issues with using the sole HTTP API approach: It will be very expensive performance-wise. One page request will result in several additional http requests (which although local, are still ones that Apache will need to handle). You'll lose all of the best features PHP has for OO development. From simple inheritance, to employing the likes of ORM which can save you writing a lot of code. For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON encodes the output and feeds that back. That would then need to be JSON decoded, and be presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework. Their thoughts (and my responses): Having one method of doing things keeps things simple. - You'd only do things differently if you were using a different language anyway. It will become robust. - Seeing as the API will run off the library of models, I think my option would be just as robust. What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.

    Read the article

  • Entity Framework 5, separating business logic from model - Repository?

    - by bnice7
    I am working on my first public-facing web application and I’m using MVC 4 for the presentation layer and EF 5 for the DAL. The database structure is locked, and there are moderate differences between how the user inputs data and how the database itself gets populated. I have done a ton of reading on the repository pattern (which I have never used) but most of my research is pushing me away from using it since it supposedly creates an unnecessary level of abstraction for the latest versions of EF since repositories and unit-of-work are already built-in. My initial approach is to simply create a separate set of classes for my business objects in the BLL that can act as an intermediary between my Controllers and the DAL. Here’s an example class: public class MyBuilding { public int Id { get; private set; } public string Name { get; set; } public string Notes { get; set; } private readonly Entities _context = new Entities(); // Is this thread safe? private static readonly int UserId = WebSecurity.GetCurrentUser().UserId; public IEnumerable<MyBuilding> GetList() { IEnumerable<MyBuilding> buildingList = from p in _context.BuildingInfo where p.Building.UserProfile.UserId == UserId select new MyBuilding {Id = p.BuildingId, Name = p.BuildingName, Notes = p.Building.Notes}; return buildingList; } public void Create() { var b = new Building {UserId = UserId, Notes = this.Notes}; _context.Building.Add(b); _context.SaveChanges(); // Set the building ID this.Id = b.BuildingId; // Seed 1-to-1 tables with reference the new building _context.BuildingInfo.Add(new BuildingInfo {Building = b}); _context.GeneralInfo.Add(new GeneralInfo {Building = b}); _context.LocationInfo.Add(new LocationInfo {Building = b}); _context.SaveChanges(); } public static MyBuilding Find(int id) { using (var context = new Entities()) // Is this OK to do in a static method? { var b = context.Building.FirstOrDefault(p => p.BuildingId == id && p.UserId == UserId); if (b == null) throw new Exception("Error: Building not found or user does not have access."); return new MyBuilding {Id = b.BuildingId, Name = b.BuildingInfo.BuildingName, Notes = b.Notes}; } } } My primary concern: Is the way I am instantiating my DbContext as a private property thread-safe, and is it safe to have a static method that instantiates a separate DbContext? Or am I approaching this all wrong? I am not opposed to learning up on the repository pattern if I am taking the total wrong approach here.
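
    One commonly suggested alternative for the threading concern, sketched against the poster's own types (the static GetList is shown as a replacement for the instance version, and the userId parameter standing in for the static WebSecurity call is an assumption): create and dispose a context per operation, and materialize results before the context goes away, so no DbContext instance is ever shared between requests or threads:

        using System.Collections.Generic;
        using System.Linq;

        // Inside the MyBuilding class:
        public static IEnumerable<MyBuilding> GetList(int userId)
        {
            using (var context = new Entities())
            {
                return (from p in context.BuildingInfo
                        where p.Building.UserProfile.UserId == userId
                        select new MyBuilding
                        {
                            Id = p.BuildingId,
                            Name = p.BuildingName,
                            Notes = p.Building.Notes
                        }).ToList(); // run the query now, while the context is still alive
            }
        }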

    Read the article

  • Silverlight and WCF caching

    - by subodhnpushpak
    There are scenarios where a Silverlight client calls a WCF (or REST) service for data. Now, if the data is NOT cached on the WCF layer, the calls can take considerable resources at the server. Keeping that in mind, and the fact that caching is a cross-cutting aspect, it should be as easy as possible to put caching wherever required. The good thing about the solution is that it caches based on the inputs. The input can be a basic type or any complex type. If the input changes, the data is fetched and then cached for further use. If the same input is provided again, the data is fetched from the cache. The cache logic itself is implemented as a PostSharp aspect, and it is as easy as putting an attribute over the service call to switch on caching. Notice how clean the code is: [OperationContract] [CacheOnArgs(typeof(int))] // cache based on the actual value of the argument public string DoWork(int value) { return string.Format("You entered: {0} @ cached time {1}", value, System.DateTime.Now); } The cache is implemented as a PostSharp aspect as below: public override void OnInvocation(MethodInvocationEventArgs eventArgs) { try { object value = new object(); object[] args = eventArgs.GetArgumentArray(); if (args != null && args.Count() > 0) { string key = string.Format("{0}_{1}", eventArgs.Method.Name, XMLUtility<object>.GetDataContractXml(args[0], null)); // Compute the cache key (details omitted). value = GetFromCache(key); if (value == null) { eventArgs.Proceed(); value = XMLUtility<object>.GetDataContractXml(eventArgs.ReturnValue, null); value = eventArgs.ReturnValue; AddToCache(key, value); return; } Log(string.Format("Data returned from Cache {0}", value)); eventArgs.ReturnValue = value; } } catch (Exception ex) { //ApplicationLogger.LogException(ex.Message, Source.UtilityService); } } private object GetFromCache(string inputKey) { if (ServerConfig.CachingEnabled) { return WCFCache.Current[inputKey]; } return null; } private void AddToCache(string inputKey, object outputValue) { if (ServerConfig.CachingEnabled) { if (WCFCache.Current.CachedItemsNumber < ServerConfig.NumberOfCachedItems) { if (ServerConfig.SlidingExpirationTime <= 0 || ServerConfig.SlidingExpirationTime == int.MaxValue) { WCFCache.Current[inputKey] = outputValue; } else { WCFCache.Current.Insert(inputKey, outputValue, new TimeSpan(0, 0, ServerConfig.SlidingExpirationTime), true); // _bw.DoWork += bw_DoWork; //string arg = string.Format("{0}|{1}", inputKey,outputValue); //_bw.RunWorkerAsync(inputKey ); } } } } The cache class can be extended to support Velocity / memcached / NCache. The attribute can be used over REST services as well. Hope the above helps. Here is the code base for the same. Please do provide your inputs / comments.

    Read the article

  • Practices for domain models in Javascript (with frameworks)

    - by AndyBursh
    This is a question I've to-and-fro'd with for a while, and searched for and found nothing on: what're the accepted practices surrounding duplicating domain models in Javascript for a web application, when using a framework like Backbone or Knockout? Given a web application of a non-trivial size with a set of domain models on the server side, should we duplicate these models in the web application (see the example at the bottom)? Or should we use the dynamic nature to load these models from the server? To my mind, the arguments for duplicating the models are in easing validation of fields, ensuring that fields that are expected to be present are in fact present, etc. My approach is to treat the client-side code like an almost separate application, doing trivial things itself and only relying on the server for data and complex operations (which require data the client-side doesn't have). I think treating the client-side code like this is akin to separation between entities from an ORM and the models used with the view in the UI layer: they may have the same fields and relate to the same domain concept, but they're distinct things. On the other hand, it seems to me that duplicating these server-side models is a clear violation of DRY and likely to lead to differing results on the client- and server-side (where one piece gets updated but the other doesn't). To avoid this violation of DRY we can simply use Javascript's dynamism to get the field names and data from the server as and when they're needed. So: are there any accepted guidelines around when (and when not) to repeat yourself in these situations? Or is this a purely subjective thing, based on the project and developer(s)? Example Server-side model class M { int A DateTime B int C int D = (A*C) double SomeComplexCalculation = ServiceLayer.Call(); } Client-side model function M(){ this.A = ko.observable(); this.B = ko.observable(); this.C = ko.observable(); this.D = function() { return A() * C(); } this.SomeComplexCalculation = ko.observable(); return this; } M.GetComplexValue = function(){ this.SomeComplexCalculation(Ajax.CallBackToServer()); }; I realise this question is quite similar to this one, but I think this is more about almost wholly untying the web application from the server, where that question is about doing this only in the case of complex calculation.

    Read the article

  • Data Mining Email with Thunderbird

    - by user554629
    Oracle has many formal, searchable locations: Service Requests, BugIDs, Technical Documents. These contain the results of an investigation for a customer crash situation; they're created after the intense work of resolution is over, and typically contain the "root cause" of the failure ... but not the methods for identifying that cause. Email is still the standby for interacting with quickly formed groups of specialists, focusing on a particular incident. Customer BI, Network and System specialists; Oracle Tech Support, Development, Consultants; OEM Database, OS technical support. It is a chaotic, time-oriented set of configuration, call stacks, changes, and techniques to discover and repair the failure. I needed to organize that information into something cohesive to prepare the blog entry on Teradata. My corporate email client of choice is Thunderbird. My original (flawed) search technique: R-Click on Inbox in the Thunderbird left pane, and choose Search Messages. Subject: [ teradata ] Results: A new window titled "Search Messages"; a single pane of selected messages; column headings: Subject, From, Date, Location; no preview window for messages. There are 673 email entries in the result ( too many ). R-click the icon just above the vertical scroll bar on the right and check [x] Tags. Click on the Tags header to sort by "Important". View the contents of a message by double-clicking; it opens in the Thunderbird Main Window in a new Tab. Not what I was looking for, close the tab and try again. There has to be a better way. ( and there is ) I need to be more productive, eliminating duplicate-chained messages, for example. Even the Tag "Important" that was added during the investigation phase is "not so much" for my current task. In the "Search Messages" window, click [ Save as Search Folder ]. [ teradata ] appears as a new folder in my Inbox. Focus on that folder and the results appear with a list of messages like every other folder in the Inbox. Only the results of the search are shown, and a preview window is now available for each message. Sort, Select message, Cursor Down ... navigates quickly through the messages. But wait, there's more ... Click Find ( Ctrl-F) and enter a search term for the message body, like [ LIBPATH ]. The search is "sticky" ... each message you cycle through will focus ( and highlight) the LIBPATH search term. And still more .... Reset the "Important" Tag on a message: press "1" and the tag is removed; press "4" and a new Tag "ToDo" is applied. After applying all of the tags, sort by Tag for a new message order. Adjust the search criteria ... R-click on the [ teradata ] search folder, and choose Properties to add additional criteria to narrow the search. Some of the information I'm looking for did not contain "teradata" in the subject line. + Body [ contains ] [ Best Practices ] That's it. Much more efficient search. Thank you Thunderbird.

    Read the article

  • IXRepository and test problems

    - by Ridermansb
    Recently had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this: public interface IRepository<T> where T: class, IEntity { IQueryable<T> Query(Expression<Func<T, bool>> expression); // ... Omitted } And a generic implementation of IRepository public class Repository<T> : IRepository<T> where T : class, IEntity { public IQueryable<T> Query(Expression<Func<T, bool>> expression) { return All().Where(expression).AsQueryable(); } } This is a base implementation that can be used by any repository. It contains the basic implementation of my ORM. Some repositories have specific filters, in which case we have IEmployeeRepository with a specific filter: public interface IEmployeeRepository : IRepository<Employee> { IQueryable<Employee> GetInactiveEmployees(); } And the implementation of IEmployeeRepository: public class EmployeeRepository : Repository<Employee>, IEmployeeRepository // TODO: I have a dependency with ORM at this point in Repository<Employee>. How to solve? How to test the GetInactiveEmployees method { public IQueryable<Employee> GetInactiveEmployees() { return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now); } } Questions: Is it right to inherit Repository<Employee>? The goal is to reuse code, since the implementation of IRepository has already been made. If EmployeeRepository inherits only IEmployeeRepository, I have to literally copy and paste the code of Repository<T>. In our example, in EmployeeRepository : Repository<Employee> our Repository lies in our ORM layer. We have a dependency here on our ORM, making it impossible to perform some unit tests. How do I create a unit test to ensure that the filter GetInactiveEmployees returns all Employees in which Status != Active or StartDate < DateTime.Now? I cannot create a Fake/Mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees. The complete code can be found on GitHub.
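
    One way to unit-test the filter without touching the ORM, assuming All() in Repository<T> is (or can be made) a virtual member returning IQueryable<T> - that assumption is the whole trick, since it gives the test a seam to inject in-memory data while GetInactiveEmployees still runs its real Where clause:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class FakeEmployeeRepository : EmployeeRepository
        {
            private readonly List<Employee> data;
            public FakeEmployeeRepository(List<Employee> data) { this.data = data; }

            // Override the data source only; the filtering logic under test is untouched.
            protected override IQueryable<Employee> All() { return data.AsQueryable(); }
        }

        // In a test (framework and assertion syntax are illustrative):
        // var repo = new FakeEmployeeRepository(new List<Employee>
        // {
        //     new Employee { Status = StatusEmployeeEnum.Active,   StartDate = DateTime.Now.AddDays(1) },
        //     new Employee { Status = StatusEmployeeEnum.Inactive, StartDate = DateTime.Now.AddDays(1) }
        // });
        // Assert.AreEqual(1, repo.GetInactiveEmployees().Count());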

    Read the article

  • Active directory over SSL Error 81 = ldap_connect(hLdap, NULL);

    - by Kossel
    I have spent several days trying to get AD over SSL (LDAPS) working. I followed this guide exactly. I have Active Directory Certificate Services installed (standalone Root CA); I can request and install certs. But whenever I test the connection using LDP.exe I get this famous error: ld = ldap_sslinit("localhost", 636, 1); Error 0 = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, 3); Error 81 = ldap_connect(hLdap, NULL); Server error: <empty> Error <0x51>: Fail to connect to localhost. I have been searching; I know there are many things that can cause this error, and I tried everything I could before deciding to post it here. I looked for any errors in the system log, but found nothing :/ (but I could be wrong). Can anyone tell me where else to look? UPDATE: I restarted the AD service and the following error showed in Event Viewer: LDAP over Secure Sockets Layer (SSL) will be unavailable at this time because the server was unable to obtain a certificate. Additional Data Error value: 8009030e No credentials are available in the security package

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8 and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its raid configuration) and the sys admins are trying to restore it; in the meantime, I'm working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Anyone have a clue? The backups are there and the media agent reported a successful write when they ran last night. This is what fails: RMAN run { allocate channel t1 device type sbt_tape parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144, ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001, CVOraRacDBName=BBDB, CVOraRACDBClientName=BBDB)'; restore spfile to pfile '/tmp/bbdb.ora' from autobackup; }2 3 4 allocated channel: t1 channel t1: sid=34 devtype=SBT_TAPE channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76) Starting restore at 09-MAY-10 channel t1: looking for autobackup on day: 20100509 channel t1: autobackup found: c-3941155360-20100509-01 released channel: t1 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of restore command at 05/09/2010 18:01:35 ORA-19870: error reading backup piece c-3941155360-20100509-01 ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms="" ORA-27029: skgfrtrv: sbtrestore returned error ORA-19511: Error received from media manager layer, error text: sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • Inkscape: what are "line" objects?

    - by Peter Mortensen
    What is a "line" object in Inkscape? Drawing lines in Inkscape is by using the tool "Draw Bezier curves and straight lines (Shift+F6)". This creates objects of another type, "path". Using Inkscape: is there a way to convert an object of type "line" into an object of the more general type "path"? I have imported a drawing (mostly lines, rectangles and text) that has been through Adobe Illustrator: originally made in Inkscape, imported into Illustrator, edited, saved from Illustrator as SVG, imported into Inkscape. Sample from the imported SVG file: <path id="path5855" stroke="#000000" d=" M320.198,275.935" /> <line fill="none" stroke="#000000" x1="348.553" y1="45.097" x2="348.553" y2="185.346" id="line3368" /> Update 1: I have inspected the original XML (SVG) file from 2006 and it does not contain any "line" XML tags. Thus it must be a crime of Adobe Illustrator. When a line is selected in this imported SVG file the bottom panel displays: "Line in root. Click selection to toggle scale/rotation handles.". When a line is selected that was drawn in Inkscape the bottom panel displays: "Path (2 nodes) in Layer 1. Click selection to toggle scale/rotation handles." What is the difference between "line" and "path"? Is "line" some kind of read-only/non-editable object? A generic term like "line" is not easy to use in search, but I have now found the definitions for "line" and "path": SVG line: http://www.w3schools.com/svg/svg_line.asp SVG path: http://www.w3schools.com/svg/svg_path.asp Platform: Inkscape v0.46 (2008-03-10), Windows XP 64 bit, 8 GB RAM.

    Read the article

  • Commvault Oracle RMAN Restore to new host

    - by Glenn Stauffer
    We use Commvault Simpana 8 and I have a situation where I have backups of an Oracle database on tape that were taken from Host A. Host A suffered a disk failure (lost its raid configuration) and the sys admins are trying to restore it; in the meantime, I'm working to bring the database back up on another host - Host B. I'm running into problems and am trying to sort out the parameters that need to be passed to the Commvault media agent to get this to work. Unfortunately, I do not have access to Commvault support and the backup person is unavailable. Anyone have a clue? The backups are there and the media agent reported a successful write when they ran last night. This is what fails: run { allocate channel t1 device type sbt_tape parms='SBT_LIBRARY=/usr/local/galaxy/Base/libobk.so,BLKSIZE=262144, ENV=(CvClientName=dbsrv2,CvInstanceName=Instance001, CVOraSID=BBPROD)'; restore spfile to pfile '/tmp/bbdb.ora' from autobackup; } allocated channel: t1 channel t1: sid=34 devtype=SBT_TAPE channel t1: CommVault Systems for Oracle: Version 7.0.0(Build76) Starting restore at 09-MAY-10 channel t1: looking for autobackup on day: 20100509 channel t1: autobackup found: c-3941155360-20100509-01 released channel: t1 RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of restore command at 05/09/2010 18:01:35 ORA-19870: error reading backup piece c-3941155360-20100509-01 ORA-19507: failed to retrieve sequential file, handle="c-3941155360-20100509-01", parms="" ORA-27029: skgfrtrv: sbtrestore returned error ORA-19511: Error received from media manager layer, error text: sbtrestore: Job[0] thread[26316]: InitializeCLRestore() failed.

    Read the article

  • Problems with apache svn server (403 Forbidden)

    - by mrlanrat
    I've recently set up an SVN server on my Apache webserver. I installed USVN http://www.usvn.fr/ to help manage the repositories from a web interface. When I create a repository and try to import code into it from NetBeans I get the following error: org.tigris.subversion.javahl.ClientException: RA layer request failed Server sent unexpected return value (403 Forbidden) in response to PROPFIND request for '/svn/python1' I know I have the username and password correct (and I have tried different users). I have done some research and it seems that it is most likely an Apache SVN error. Below is the config file for this virtualhost. <VirtualHost *:80> ServerName svn.domain.com ServerAlias www.svn.domain.com ServerAlias admin.svn.domain.com DocumentRoot /home/mrlanrat/domains/svn.domain.com/usvn/public ErrorLog /var/log/virtualmin/svn.domain.com_error_log CustomLog /var/log/virtualmin/svn.domain.com_access_log combined DirectoryIndex index.html index.htm index.php index.php4 index.php5 <Directory "/home/mrlanrat/domains/svn.domain.com/usvn"> Options +SymLinksIfOwnerMatch AllowOverride All Order allow,deny Allow from all </Directory> <Location /svn/> ErrorDocument 404 default DAV svn Require valid-user SVNParentPath /home/mrlanrat/domains/svn.domain.com/usvn/files/svn SVNListParentPath on AuthType Basic AuthName "USVN" AuthUserFile /home/mrlanrat/domains/svn.domain.com/usvn/files/htpasswd AuthzSVNAccessFile /home/mrlanrat/domains/svn.domain.com/usvn/files/authz </Location> </VirtualHost> Can anyone point out what I may have done wrong and how to fix it? I have tested with changing file permissions and changing the configuration with no luck. Thanks in advance!

    Read the article

  • Legacy non-dpi-aware application resolution scaling?

    - by Miles Erickson
    Our environment prominently features an outdated but absolutely mission-critical Win32 application that is not dpi-aware. It is optimized for an 800x600 display. Most of our users now have 17"-20" displays with native resolutions ranging from 1280x1024 to 1680x1050. However, they still operate these displays at 800x600 because the text in this legacy application is otherwise too small. Of course, it also means that nothing quite fits on the screen in Office 2007. Most of our workstations still run Windows XP, but some are on Windows 7 and there are more to come. About one-third of our users run the app remotely via MS Terminal Services, and the remainder run it locally. Is anyone aware of any method that could be used to scale this specific application to about 170%, so that it would fill a 1280x1024 screen, without affecting other applications that work best at the display's native resolution? I know how to do this in Mac OS X, but I have never found a way to do it in Windows. Of course, this ideally would be something that we could push out via Group Policy. I suppose we could even create a custom MSI package to re-deploy the legacy application with some sort of display virtualization layer, if such a thing exists.

    Read the article

  • Find a non-case-sensitive text string within a range of cells

    - by Iszi
    I've got a bit of a problem to solve in Excel, and I'm not quite sure how to go about doing it. I've done a few searches online, and haven't really found any formulas that seem to be useful. Here's the situation (simplified just a bit, for the purpose of this question): I have data in columns A-E. I need to match data in the cells in A and B with data in C-E, and return TRUE or FALSE to column F. Return TRUE if: - The string in A is found within any string in C-E. OR - The string in B is found within any string in C-E. Otherwise, return FALSE. The strings must be exact matches for whole or partial strings within the range, but the matching function must be case-insensitive. I've taken a screenshot of an example sheet for reference. I'm fairly sure I'll need to use IF on the outermost layer of the formula, probably followed by OR. Then, for the arguments to OR, I'm expecting there will be some use of IFERROR involved. But what I'm at a loss for is the function I could most efficiently use to handle the text string searches. VLOOKUP is very limited in this regard, I think. It may be workable to do whole-string against whole-string comparisons, but I'm fairly certain it won't return accurate results for partial string matches. FIND and SEARCH appear limited to only single-target searches, and are also case-sensitive. I suppose I could use UPPER or LOWER to force case-insensitivity in the search, but I still need something that can do accurate partial matching and search a specified range of cells. Is there any function, or combination of functions, that could work here? Ideally, I want to do this with a straight Excel formula. I'm not at all familiar with VBScript or similar tools, nor do I have time to learn it for this project.
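
    For what it's worth, one formula that should cover this (sketched for row 2; COUNTIF treats "*" as a wildcard and matches case-insensitively, though it will misbehave if A2 or B2 themselves contain *, ? or ~ characters):

        =OR(COUNTIF(C2:E2,"*"&A2&"*")>0, COUNTIF(C2:E2,"*"&B2&"*")>0)

    An equivalent built from SEARCH (which, unlike FIND, is not case-sensitive) wrapped in ISNUMBER and SUMPRODUCT would avoid the wildcard caveat.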

    Read the article

  • Make Thunderbird use and sync with Gmail Spam facility

    - by Senthil
    I am using Thunderbird 3.1.7. I am using Gmail with it. When I mark a message as spam in Thunderbird, I want it to go to Gmail's spam folder. But it is not happening. The mail is only marked as spam and stays in Gmail's inbox. How can I set up Thunderbird so that when I mark a message as spam in Thunderbird, it goes to the spam folder in both Thunderbird and Gmail? FYI: I have disabled Thunderbird's own adaptive spam controls so that they don't interfere. I am perfectly fine with Gmail's spam facility and don't want Thunderbird to add another layer of spam filtering. Still, Thunderbird doesn't send emails to Gmail's spam folder. Update: I figured out that dragging and dropping the email to the spam folder in Thunderbird does the job. But is it the same as marking the email as spam in Gmail's interface? i.e. Gmail will do stuff when you mark an email as spam so that it can filter out similar messages in the future. Does it happen when I do this drag and drop thing too? Are they the same?

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error: DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99) Server , database Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?'). This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control of this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records? Additional information: The query works fine from tsql. I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.

    Read the article

  • DCOM configuration: accounts with same name but different passwords problem

    - by archimed7592
    Hello, everybody! I'm experiencing troubles with DCOM configuration. Here is the case: I'm using some product which supports client-server interaction through DCOM, but the client won't get any access to the server if the attempt is being made from an account with a name which exists at the server as well, but has a different password. Basically, if we try to access the server from the Administrator account, which is obviously present on the server machine, we will fail if the client's Administrator password doesn't match the server's. After actively collaborating with the product's developer in attempts to localize the issue, he came back with the resolution "can't be fixed" or, if you prefer to call a pikestaff a pikestaff, then it's more likely a "don't know how to fix" resolution :). I believe there is a solution for this problem and I'm asking you, IT professionals, to help me out with this one. I do realize that the problem may be caused by the way the developer interacts with DCOM, and if so it can't be fixed by means of pure system configuration and the question should be asked at SO. But since I've bumped into the same behavior while working with file/printer sharing - Windows tried to simplify everything and used the currently impersonated credentials to access the share - I hope the solution lies at the system configuration layer. P.S. I believe that the actual software product I'm talking about is entirely irrelevant; however, my experience tells me that there will always be somebody who thinks that it is, on the contrary, very relevant. Here it is: SpRecord.

    Read the article

  • How to inspect remote SMTP server's TLS certificate?

    - by Miles Erickson
    We have an Exchange 2007 server running on Windows Server 2008. Our client uses another vendor's mail server. Their security policies require us to use enforced TLS. This was working fine until recently. Now, when Exchange tries to deliver mail to the client's server, it logs the following: A secure connection to domain-secured domain 'ourclient.com' on connector 'Default external mail' could not be established because the validation of the Transport Layer Security (TLS) certificate for ourclient.com failed with status 'UntrustedRoot. Contact the administrator of ourclient.com to resolve the problem, or remove the domain from the domain-secured list. Removing ourclient.com from the TLSSendDomainSecureList causes messages to be delivered successfully using opportunistic TLS, but this is a temporary workaround at best. The client is an extremely large, security-sensitive international corporation. Our IT contact there claims to be unaware of any changes to their TLS certificate. I have asked him repeatedly to please identify the authority that generated the certificate so that I can troubleshoot the validation error, but so far he has been unable to provide an answer. For all I know, our client could have replaced their valid TLS certificate with one from an in-house certificate authority. Does anyone know a way to manually inspect a remote SMTP server's TLS certificate, as one can do for a remote HTTPS server's certificate in a web browser? It could be very helpful to determine who issued the certificate and compare that information against the list of trusted root certificates on our Exchange server.
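
    One way to do this from any machine with the OpenSSL command-line tool is to open an SMTP session with STARTTLS and dump the certificate chain (the hostname below is a placeholder for ourclient.com's actual MX host):

        echo QUIT | openssl s_client -connect mail.ourclient.com:25 -starttls smtp -showcerts

    Piping that same output through "openssl x509 -noout -issuer -subject -dates" summarizes the issuer, subject and validity period, which should reveal whether the certificate now chains to an in-house root rather than a trusted public authority.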

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work: The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. Platform is Linux. Performance is secondary. So the two setups I am considering are: A RAID6 array plus regular LVM and ext4 A RAID6 array plus ZFS (without redundancy). It's this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros and cons that I see for each option: ZFS has snapshots without preallocated space. LVM requires preallocation (this might no longer be true). ZFS has checksumming (very interested in this) and compression (nice bonus). LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import). LVM has encryption (ZFS-on-Linux does not). This is the only major con of this combo I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not? So, to close with a proper question: 1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this? 2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
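
    In case it helps frame the question, a rough sketch of how the md-plus-ZFS layering would look (device names are placeholders, and the ZFS pool deliberately has no redundancy of its own since md provides it):

        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
        zpool create tank /dev/md0      # single-vdev pool on top of the md array
        zfs set compression=on tank     # checksumming is already on by default
        zpool set autoexpand=on tank    # lets the pool pick up capacity after an mdadm --grow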

    Read the article

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the SharedBand routers over all available lines and the SharedBand routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you. It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and SharedBand will still bond them for you. But, I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load balance (though not bond) up to four ADSL lines, so the SharedBand product really only offers an advantage if you're hosting servers, i.e. you can have one IP address to accept incoming connections through all your (working) ADSL lines. But should you really try and host servers using ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking if anyone is using SharedBand, and if so what do you think of it? JR

    Read the article

  • Can one config LDAP to accept auth from ssh-agent instead of from Kerberos?

    - by Alex North-Keys
    [This question is not about getting your LDAP password to authenticate you for SSH logins. We have that working just fine, thank you :-) ] Let's suppose you're on a Linux network (Ubuntu 11.10, slapd 2.4.23), and you need to write a set of utilities that will use ldapmodify, ldapadd, ldapdelete, and so on. You don't have Kerberos, and don't want to deal with its timeouts (most users don't know how to get around this), quirks, etc. This resolves the question to one of where else to get credentials to feed to LDAP, probably through GSSAPI - which technically doesn't require Kerberos despite its dominance there - or something like it. However, nearly everyone seems to have an SSH agent program, complete with its key cache. I'd really like an ssh-add to be sufficient to allow passwordless LDAP command use. Does anyone know of a project working on using the SSH agent as the source of authentication to LDAP? It might be through an ssh-aware GSSAPI layer, or some other trick I haven't thought of. But it would be wonderful for making LDAP effortless. Assuming I haven't just utterly missed a way to use ldapmodify and kin without having to type my LDAP passwords - using -x is NOT acceptable. At my site, the LDAP server only accepts ldaps connections, and requires authentication for modifying operations. Those are requirements, of course. Any ideas would be greatly appreciated. :-)

    Read the article
