Search Results

Search found 1675 results on 67 pages for 'basis vasis'.

  • Implement service layer in MVC

    - by Dan H
We have a defined service layer hosted in WCF. We are now building a website that will need to use the services' functionality. The website is being written in ASP.NET MVC 4 and I'm trying to decide how to reference the WCF service from the MVC app. It's a large, complex website and it will be changing on a weekly basis. My first reaction is to abstract out the service references (about 7 services on this one WCF host) and create a service-reference facade library with which the website interacts. But I don't know exactly how to use the service facade in MVC. I'm starting to think the models will be responsible for it: when the controller gets a model, that model should call the service (if needed) and return what the controller asked for. I'm trying to avoid having the MVC app know details of the service references. So I could have a model factory that creates whatever models the controllers need, and they can use the service facade to accomplish it. Is this a good plan, or am I off track?
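
    For illustration, here is a minimal sketch of the facade shape being described; all the names (IQuoteService, QuoteServiceClient, QuoteModel) are hypothetical stand-ins, not the actual services:

    using System.Web.Mvc;

    // Hypothetical names throughout; the real facade would wrap the 7 services.
    public class QuoteModel
    {
        public int Id { get; set; }
        public decimal Amount { get; set; }
    }

    // The only type the MVC app sees; the WCF plumbing lives in the facade library.
    public interface IQuoteService
    {
        QuoteModel GetQuote(int quoteId);
    }

    public class QuoteServiceFacade : IQuoteService
    {
        public QuoteModel GetQuote(int quoteId)
        {
            // QuoteServiceClient stands in for a generated WCF proxy. ('using' is
            // kept for brevity; production code would guard against Dispose
            // throwing on a faulted channel.)
            using (var client = new QuoteServiceClient())
            {
                var dto = client.GetQuote(quoteId);
                // Map the wire-level DTO to a view-friendly model here, so
                // service contract changes stay contained in this library.
                return new QuoteModel { Id = dto.Id, Amount = dto.Amount };
            }
        }
    }

    // The controller depends only on the abstraction (injected by an IoC container).
    public class QuoteController : Controller
    {
        private readonly IQuoteService _quotes;

        public QuoteController(IQuoteService quotes)
        {
            _quotes = quotes;
        }

        public ActionResult Details(int id)
        {
            return View(_quotes.GetQuote(id));
        }
    }

    With this shape, the controllers (or a model factory, as suggested above) never see a service reference directly, so swapping or versioning a WCF endpoint only touches the facade assembly.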

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
Most Scrum teams lean toward 7-15 people **, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for in planning a large-scale Scrum grouping? ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ people), not smaller teams (7 people), are better. UPDATE, "Re: Scrum doesn't scale": I've made huge amounts of progress personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn: Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project. SOURCE: Succeeding with Agile (I ran across the book thanks to Ladislav Mrnka's answer)

  • Why does a modx-based site start using different domains for some content?

    - by naxa
Situation: I have a modx site on a VPS with multiple domain and subdomain names. The modx site should use what I call the 'primary' domain name's 'primary' subdomain, i.e. www.intendedname.tld. The problem is that as time passes, the site mysteriously starts using another subdomain for links to content like videos, images, and even pages and (internal) links. The other subdomains don't serve this content, of course. If I clear the modx cache, the original state is restored. However, the problem comes back again later. The VPS has a domain registered and multiple A records pointing to the VPS's IP, as subdomains. There is the 'primary' one, which is intended to be used as the public content server; the other ones are like docs. and test., etc. On top of that, I have a dynamic-DNS client from no-ip installed on the machine, with a dynamic domain name bound to it. It gives a completely different domain name. I originally used it for ssh login and to serve a completely different site. An nginx server is put to good use to rewrite the different subdomains to the right places. Edit: The modx templates use <base href="[[++site_url]]" />. Current attempt to fix: The current 'solution' is to also use the rewrite to rewrite everything to the 'primary' domain and subdomain. In the nginx config file for the site, it utilizes (unsurprisingly) the rewrite directive to rewrite the unexpected server_name entries (i.e. the other subdomains) in a server block dedicated to this task. So with this, the main site basically works (sort of), but it renders all the other functions (docs) useless. Before this rewrite was set, the 'solution' was to clear the modx cache on a regular basis. The original modx content is not getting corrupted; only the files in cache are. What can I do to find out what the actual problem is and fix it?

  • C# XNA: Efficient mesh building algorithm for voxel-based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
To put this bluntly: for non-destructible/non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If a generated mesh per chunk is the way to go, what kind of algorithm might I want to use, given that only the outer layer ever needs rendering? I'm using 3D Perlin noise for terrain generation (for overhangs/caves/etc.). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain rendered, but with 20k quad instances, it's still pretty sluggish (30fps on my machine). My goal is for the world to be made of quite small cubes. Where multiple games (e.g. Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 in width/length and 9 high. Along with a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So I have been trying to look into changing my method from instancing to mesh generation on a chunk-by-chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know whether it's the better route for my situation. I'm also starting to doubt my need for 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics
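
    For what it's worth, here is a minimal sketch of the generated-mesh approach, assuming a chunk is just a bool[,,] of solid cells; it emits only exposed top faces, mirroring the culling described above:

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Sketch: build one vertex/index pair per chunk instead of instancing cubes.
    public static class ChunkMesher
    {
        public static void BuildTopFaces(bool[,,] solid,
            List<VertexPositionNormalTexture> verts, List<int> indices)
        {
            int sx = solid.GetLength(0), sy = solid.GetLength(1), sz = solid.GetLength(2);
            for (int x = 0; x < sx; x++)
                for (int y = 0; y < sy; y++)
                    for (int z = 0; z < sz; z++)
                    {
                        if (!solid[x, y, z]) continue;
                        // Cull: emit a top face only when the cell above is empty
                        // (or this cell sits on the chunk's top edge).
                        if (y + 1 < sy && solid[x, y + 1, z]) continue;

                        int b = verts.Count;
                        Vector3 n = Vector3.Up;
                        float top = y + 1;
                        verts.Add(new VertexPositionNormalTexture(new Vector3(x,     top, z),     n, new Vector2(0, 0)));
                        verts.Add(new VertexPositionNormalTexture(new Vector3(x + 1, top, z),     n, new Vector2(1, 0)));
                        verts.Add(new VertexPositionNormalTexture(new Vector3(x + 1, top, z + 1), n, new Vector2(1, 1)));
                        verts.Add(new VertexPositionNormalTexture(new Vector3(x,     top, z + 1), n, new Vector2(0, 1)));

                        // Two triangles per quad; the whole chunk becomes one draw call.
                        indices.Add(b); indices.Add(b + 1); indices.Add(b + 2);
                        indices.Add(b); indices.Add(b + 2); indices.Add(b + 3);
                    }
        }
    }

    The lists would then be copied once into a static VertexBuffer/IndexBuffer per chunk and redrawn every frame, so the per-frame cost is one draw call per chunk rather than thousands of instances; GPUs tend to handle a few large static buffers far better than many small instanced quads. Marching cubes mainly pays off for smooth surfaces; for blocky terrain, face culling plus (optionally) greedy quad merging is usually the better fit.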

  • Organizing Connections with Folders in Oracle SQL Developer

    - by thatjeffsmith
How many Oracle databases do you work with on a regular basis? I'm guessing the answer for most of you lies between 1 and 500. This post is really geared toward those of you who deal with more than just a handful (5) of database connections. Filters are nice when you need to work with a subset of table data, or even a list of tables. So why wouldn't they be just as useful for organizing your connections? Here's my complete list of databases: The folders aren't there by default; you add them as you need them. Now this isn't an overly large connection list. But when I need to fire up an impromptu demo for a customer, it's very nice to be able to drill down into JUST those 'safe' environments. This actually saves me a few seconds every time I need to connect to one of my databases. So while it's a very simple feature, it's one of those things that I recommend EVERYONE take advantage of, as it will save them hours of time over the long haul. Easier to find means I get to work a few seconds faster. This also helps keep me from making mistakes in 'production' environments! How to Add a Connection Folder: Select a connection you want to organize. Mouse-right-click, and choose 'Add to folder.' You can throw it into a new container or an existing one. Lather, rinse, and repeat as necessary. The only trick is remembering to right-click! Special thanks to @dresendi for today's topic! He asked how to do this and I realized I hadn't blogged the topic yet.

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
When a SharePoint Access Control List (ACL) size exceeds 64kb for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine.  The error most often seen is "The parameter is incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here; please note that it is intended).  Exceeding this limit is not unheard of – it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult.  Fortunately, there is a query that you can run on a content database that can help identify the issue:

    SELECT [SiteId],
           MIN([ScopeUrl]) AS URL,
           SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
           COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY SiteId
    ORDER BY AclSizeKB DESC

This query results in a list of ACL sizes and entry counts on a site-by-site basis.  You can also group by URL instead to see a more granular breakdown:

    SELECT [ScopeUrl] AS URL,
           SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
           COUNT(*) AS AclEntries
    FROM [Perms] (NOLOCK)
    GROUP BY ScopeUrl
    ORDER BY AclSizeKB DESC

  • Building Publishing Pages in Code

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages and adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together in one package. What I will discuss, and provide a solution for, is a package that has:

1.  Master Page
2.  Page Layout
3.  Page Web Parts
4.  Site Pages

Almost all of it is done in code, without the development team having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that we skip this during development. In fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voilà, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site it looks like this: It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. What you are seeing is a complete solution to add a master page to a site collection, which contains:

1.  A Master Page module which contains the Master Page and Page Layout
2.  The Footer module to add the Footer Web Part
3.  Miscellaneous modules to add images, jQuery, CSS and a subsite page
4.  Three features and two feature event receivers:
    a.  DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
    b.  DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver, it changes the master page to DJGreenMaster, creates the footer list and checks the files in
    c.  DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

I won't go over the code for this, as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: I have the first two requirements completed, and now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which for simplicity does not use a SharePoint custom list: a Weather Web Part. The other is downloaded from MSDN, a SharePoint custom Calendar Web Part, to which I had to add some functionality using jQuery to make the events color-coded beyond the built-in 10 overlays!
Here is the solution with the added projects. Here is a screen shot of the Weather Web Part deployed, and here is a screen shot of the Site Calendar with jQuery. Okay, now we get to the final item: creating the publishing pages. We need to add a feature receiver to the DJGreenMaster project; I will name it DJSitePages and also add an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. I added a reference to the Microsoft.SharePoint.Publishing.dll contained in the ISAPI folder of the 14 hive. First, we will add some static methods which we will call in our event receiver:

    private static void checkOut(string pagename, PublishingPage p)
    {
        if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
        {
            if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
            {
                p.CheckOut();
            }
            if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
            {
                p.CheckIn("initial");
                p.CheckOut();
            }
        }
    }

    private static void checkin(PublishingPage p, PublishingWeb pw)
    {
        SPFile publishFile = p.ListItem.File;

        if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
        {
            publishFile.CheckIn("CheckedIn");
            publishFile.Publish("published");
        }
        // In case of content approval, approve the file
        if (pw.PagesList.EnableModeration)
        {
            publishFile.Approve("Initial");
        }
        publishFile.Update();
    }

In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        object oParent = properties.Feature.Parent;

        if (properties.Feature.Parent is SPWeb)
        {
            currentWeb = (SPWeb)oParent;
            currentSite = currentWeb.Site;
        }
        else
        {
            currentSite = (SPSite)oParent;
            currentWeb = currentSite.RootWeb;
        }

        // Create the publishing pages
        CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
        //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
    }

Basically, we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.
Let's look at the CreatePublishingPage method:

    private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
    {
        PublishingSite pubSiteCollection = new PublishingSite(site.Site);
        PublishingWeb pubSite = null;
        if (pubSiteCollection != null)
        {
            // Assign an object to the pubSite variable
            if (PublishingWeb.IsPublishingWeb(site))
            {
                pubSite = PublishingWeb.GetPublishingWeb(site);
            }
        }
        // Search for the page layout for creating the new page
        PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
        // If the page layout could not be found in the collection, return,
        // because the page has to be based on an existing page layout
        if (currentPageLayout == null)
        {
            return;
        }

        PublishingPageCollection pages = pubSite.GetPublishingPages();
        foreach (PublishingPage p in pages)
        {
            // The page already exists
            if ((p.Name == pageName)) return;
        }

        PublishingPage newPage = pages.Add(pageName, currentPageLayout);
        newPage.Description = pageName.Replace(".aspx", "");
        // Here you can set some properties like:
        newPage.IncludeInCurrentNavigation = true;
        newPage.IncludeInGlobalNavigation = true;
        newPage.Title = title;

        // Build the page
        switch (pageName)
        {
            case "Home.aspx":
                checkOut("Home.aspx", newPage);
                BuildHomePage(site, newPage);
                break;
            default:
                break;
        }
        // Now we can check the newly created page in to the "Pages" library
        checkin(newPage, pubSite);
    }

The narrative of what is going on here is:

1.  We find out if we are dealing with a publishing web.
2.  We get the page layout.
3.  We create the page in the Pages list.
4.  Based on the page name, we build that page. (Here is where we can add the methods to build multiple pages.)

In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts, we need to add references to the two web part projects in the solution:

    using WeatherWebPart.WeatherWebPart;
    using CSSharePointCustomCalendar.CustomCalendarWebPart;

We can then reference them in the BuildHomePage method.
Let's look at BuildHomePage:

    private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
    {
        // Get the web part manager for the page (do the same for each page,
        // swapping in the web parts for that page)
        SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(web.Url + "/Pages/Home.aspx",
            System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

        WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
            { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };

        // Add the web part to a page layout web part zone
        mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

        CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart()
            { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };

        mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

        pubPage.Update();
    }

Here is what we are doing:

1.  We get a reference to the SharePoint limited web part manager for Home.aspx.
2.  We instantiate a new Weather Web Part and use the manager to add it to the page, in a web part zone identified by ID; thus the need for a page layout whose zone IDs the developer knows.
3.  We instantiate the Calendar Web Part and use the manager to add it to the page.
4.  We call the publishing page's Update method.
5.  Lastly, the CreatePublishingPage method checks in the page just created.

Here is a screen shot of the page right after a deploy! Okay! I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built! So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

  • My New Job

    - by Stuart Brierley
Last year I started a new job with a logistics company in the North of England, where I was responsible for the management, design and development of IT integration strategies, architectures and solutions using BizTalk Server 2009.  This included the design and implementation of the BizTalk Server 2009 infrastructure, the definition of development standards, mentoring a fellow developer in the ways of BizTalk, and migrating a number of existing solutions from Softshare over to BizTalk 2009. Unfortunately I then realised that, following this initial set-up, there didn't actually seem to be that much BizTalk work for me to get stuck into, and reluctantly I have now moved on from this role to a very similar role with the country's largest office supplies company.  Based in Sheffield, we distribute office supplies on a UK-wide basis and computer supplies across Europe. The situation here is slightly different than when I first joined my previous employer.  Whereas that was a greenfield installation with no previous BizTalk solutions in place, my new employer currently has a number of live BizTalk 2000 (!) and BizTalk 2006 solutions in place.  Unfortunately the infrastructure around these is less than ideal, with no clear distinction between development and test environments and no source control whatsoever! We are currently building a proposal for a new BizTalk Server 2010 implementation, where I am hopeful of being able to implement fully independent development, test and pseudo-live environments, alongside an enterprise-level live installation.  We should also be introducing Team Foundation Server to the development process, thereby giving us some much-needed source control capabilities. Following this is likely to be a period of migration for the existing BizTalk solutions, along with the onward development of new projects and initiatives - I'm hoping to be a busy man for the foreseeable future :o)

  • ADF & ADF Mobile Bootcamps in UK, Austria, Spain, Switzerland

    - by JuergenKress
You want to learn more about the innovative features of ADF and ADF Mobile? You want answers on how to build modern & mobile applications? Join our 2-day ADF & ADF Mobile hands-on Bootcamp. UK Spain Switzerland Austria (one-day class) Details Training Audience: Developers, Project Managers, Architects & ADF beginners Trainer: Mireille Duroussaud Language: English Cost: Free of charge; cancellation fee 50€ or no-show fee 2.000€ Note: the Bootcamp is limited to 20 persons, on a first-come, first-served basis. Preparation: Attend the free online training ADF 11g Presales Specialist. Follow-up: Pass the free online assessment ADF 11g Presales Specialist. Highlights of the Workshop: Oracle Fusion Middleware Development Platform Developing Reusable Business Services Developing Rich Web User Interfaces and Portals Developing Mobile Apps for iOS and Android with Oracle ADF Mobile For details and registration please visit our registration page. WebLogic Partner Community For regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: adf,adf training,education,training,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
I have been using EF4 CTP5 with code first and I really like the new code.  One issue I was having, however, is that cascading deletes are on by default.  This may come as a surprise, because when using Entity Framework with anything but code first, this is not the case.  I ran into an exception with some one-to-many relationships I had:

    Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

To get around this, you can use the fluent API and put some code in OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UserProfile>()
            .HasMany(u => u.ProjectAuthorizations)
            .WithRequired(a => a.UserProfile)
            .WillCascadeOnDelete(false);
    }

This works to remove the cascading delete, but I have to use the fluent API, and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs.  However, there is a way to do this without configuring each relationship: in OnModelCreating, you can remove the convention that creates the cascading deletes altogether.

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
    }

Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether.  There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

  • Cloudy and Fair (Wolkig und heiter)

    - by A&C Redaktion
With Solaris 11, Oracle brings the first operating system for the cloud to market. As we announced, the cloud topic will not be letting go of this blog any time soon: on Friday, Oracle released its first Unix operating system designed explicitly for the cloud. It is called Solaris 11, and the association with Stanislaw Lem's science-fiction novel of the same name is entirely fitting: after all, cloud computing is regarded as the technology of the future par excellence. What does Solaris 11 offer, and what benefits can Oracle partners draw from it? "Customers can simplify their operations, increase the capacity of their data centers, and run enterprise applications from Oracle and others in a secure, scalable cloud or in a classic enterprise environment," summarizes John Fowler, Executive Vice President, Systems. Could we be a little more concrete? Gladly: Oracle Solaris 11 is built to run even the most demanding enterprise applications in private, hybrid, and public clouds. As a fully virtualized operating system, it has built-in virtualization functions for operating system, network, and storage resources. Server virtualization provides secure live migration and flexible deployment options, based on Oracle VM for x86 and SPARC systems. Maximum availability is achieved through comprehensive management across the entire infrastructure. Oracle Solaris 11 offers active security "by default" in standard operation, such as role-based root access and auditing functions. The data and storage management foundation for Oracle Solaris 11 is Oracle Solaris ZFS. In addition to guaranteed data integrity, tiered storage allows pools to be set up with flash storage, plus high-speed encryption. A look at the press shows that Solaris 11 is causing quite a stir among experts; Netzwelt, for example, is very positive: "The new IPS package management, like the other new features mentioned, helps ensure that Solaris in version 11 can once again hold its own against the established Linux distributions. The operating system cuts a fine figure not only in server use but also on the desktop, and shines with high stability." Here are the direct links to further reports on Solaris 11 in the trade press: Heise online, ZDNet, Golem.de, Silicon.de, All about SECURITY, IT Director, Pro-Linux.de, Tech Channel, Linux Magazin

  • Sentence Tree vs. Words List

    - by Rohit Jose
I was recently tasked with building a Named Entity Recognizer as part of a project. The objective was to parse a given sentence and come up with all the possible combinations of the entities. One approach that was suggested was to keep a lookup table for all the known connector words, like articles and conjunctions, and remove them from the words list after splitting the sentence on spaces. This would leave just the named entities in the sentence. A lookup is then done for these identified entities in another lookup table that associates them with entity types. For example, if the sentence was: Remember the Titans was a movie directed by Boaz Yakin, the possible outputs would be: {Remember the Titans,Movie} was {a movie,Movie} directed by {Boaz Yakin,director} {Remember the Titans,Movie} was a movie directed by Boaz Yakin {Remember the Titans,Movie} was {a movie,Movie} directed by Boaz Yakin {Remember the Titans,Movie} was a movie directed by {Boaz Yakin,director} Remember the Titans was {a movie,Movie} directed by Boaz Yakin Remember the Titans was {a movie,Movie} directed by {Boaz Yakin,director} Remember the Titans was a movie directed by {Boaz Yakin,director} Remember the {the titans,Movie,Sports Team} was {a movie,Movie} directed by {Boaz Yakin,director} Remember the {the titans,Movie,Sports Team} was a movie directed by Boaz Yakin Remember the {the titans,Movie,Sports Team} was {a movie,Movie} directed by Boaz Yakin Remember the {the titans,Movie,Sports Team} was a movie directed by {Boaz Yakin,director} The entity lookup table here would contain the following data:

Remember the Titans=Movie
a movie=Movie
Boaz Yakin=director
the Titans=Movie
the Titans=Sports Team

An alternative approach that was put forward was to build a crude sentence tree that would contain the connector words from the lookup table as parent nodes, and do a lookup in the entity table for the leaf nodes that might contain the entities. The tree that was built for the sentence above would be: The question I am faced with concerns the benefits of the two approaches: should I go for the tree approach to represent the sentence parsing, since it provides a more semantic structure? Is there a better approach for solving this?
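
    As a thought experiment, the words-list enumeration can be sketched as a recursive scan: at each position either keep the word plain or, if a known phrase starts there, tag it with each of its types and skip past it. A rough C# sketch using the lookup data above (names and structure are illustrative only):

    using System;
    using System.Collections.Generic;

    public static class EntityCombinations
    {
        // Enumerate every annotated variant of words[pos..], given a
        // phrase -> entity-types lookup (phrase keys stored in lower case).
        public static IEnumerable<string> Annotate(string[] words, int pos,
            Dictionary<string, List<string>> lookup, int maxPhraseLen)
        {
            if (pos == words.Length) { yield return ""; yield break; }

            // Option 1: leave the current word plain and move on.
            foreach (var rest in Annotate(words, pos + 1, lookup, maxPhraseLen))
                yield return (words[pos] + " " + rest).TrimEnd();

            // Option 2: a known phrase starts here; emit it tagged with each type.
            for (int len = 1; len <= maxPhraseLen && pos + len <= words.Length; len++)
            {
                string phrase = string.Join(" ", words, pos, len).ToLowerInvariant();
                List<string> types;
                if (!lookup.TryGetValue(phrase, out types)) continue;
                foreach (var type in types)
                    foreach (var rest in Annotate(words, pos + len, lookup, maxPhraseLen))
                        yield return ("{" + phrase + "," + type + "} " + rest).TrimEnd();
            }
        }

        public static void Main()
        {
            var lookup = new Dictionary<string, List<string>>
            {
                { "remember the titans", new List<string> { "Movie" } },
                { "the titans", new List<string> { "Movie", "Sports Team" } },
                { "a movie", new List<string> { "Movie" } },
                { "boaz yakin", new List<string> { "director" } }
            };
            var words = "Remember the Titans was a movie directed by Boaz Yakin".Split(' ');
            foreach (var variant in Annotate(words, 0, lookup, 3))
                Console.WriteLine(variant);
        }
    }

    Note that this enumerates non-overlapping annotations only, which matches the sample output above; whichever representation you choose, the combinatorial blow-up of variants is the real cost driver, and the tree mainly buys a cleaner place to hang the connector-word structure.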

  • We Found 100 Manufacturing Heroes That Focus on Innovation!

    - by Stephen Slade
There's a good piece written by Ann Grackin of ChainLink Research on the Manufacturing Leadership 100 Awards program held recently in Palm Beach, FL, Apr 30-May 3, 2012.  This article (link below) summarizes the Summit, with specific focus on manufacturing innovation.  There were several informative keynotes and sessions from industrial leaders who are leveraging the latest tools and technologies to make better decisions. Ann writes that she was a panelist with Cindy Reese, SVP, Worldwide Operations, Oracle, and Steven Tungate, VP/GM, Supply Chain & Innovation, Toshiba America Business Solutions, about Factories and Supply Networks of the Future. Ann writes, "So what are these manufacturers doing? Significant rationalization of the supply base (Toshiba, especially, has this issue since they have a long history of many acquisitions), streamlining production to increase productivity, and looking for lower-cost countries for manufacturing….  No doubt firms have global customer bases, so they need to be present in these markets. However, a low-cost-country manufacturing source does introduce more risk in the supply chain. And that was discussed. Quality, security, and intellectual property protection were the critical global manufacturing issues also discussed. "Cindy (Reese) told a fascinating story about Oracle's acquisition of Sun and the supply chain that was subsequently created. Here was one of the key points: Although Oracle sells on a global basis, they now do their own factory-installed software. This keeps potential 'factory-installed malware' from getting into the servers at contract manufacturers, and prevents pirated software. In this way, Oracle ensures that they deliver the quality and security people expect." Learn more about the Manufacturing Leadership 100 program from Manufacturing Executive at: http://www.mlsummit.com/ Full Article Link:  http://www.clresearch.com/research/detail.cfm?guid=52327213-3048-79ED-99D4-E433DA64D4F0

  • Free ADF Training Event in the UK

    - by Grant Ronald
At the UKOUG conference back in December, at the Tools Roundtable session, someone told me that they hadn't chosen Oracle ADF as their development environment for their future projects.  When I asked why, the response I got was "no one told us about it".  I was pretty astounded (even gobsmacked!) that the technology that is the foundation of Oracle's future applications strategy wasn't on someone's radar. There and then I promised the audience that if the UKOUG would fill a room, I would deliver a full day of free training on Oracle ADF and JDeveloper. And here it is!  On the 11th May 2011, at the UK Oracle office in Reading, I will deliver a day packed from start to finish with all the best bits of Oracle ADF and JDeveloper.  I'll build an application from start to finish: business services, validation, web services, UI, page flow, maps, graphs, and I'll show you tips and techniques.  The event is primarily focused on those who are new to JDeveloper and Oracle ADF, and is aimed at getting you up to speed as quickly as possible (so others don't make the mistake of not choosing ADF ;o) ). Places are limited and are open on a first-come, first-served basis to UKOUG members, so get registered NOW!

  • How should Code Review be Carried Out?

    - by Graviton
My previous question has to do with how to advance code review among the developers. Here I am interested in how the code review session should be carried out, so that both the reviewer and the reviewed feel comfortable with it. I have done some code review before, but the experience sucked big time. My previous manager would come to us -- on an ad hoc basis -- and tell us to explain our code to him. Since he wasn't very familiar with the code base, I spent a huge amount of time explaining just the most basic structure of my code to him. This took a long time, and by the time we were done, we were both exhausted. Then he would raise issues with my code. Most issues he raised were cosmetic in nature (e.g., don't use region for this code block; change the variable name from xxx to yyy even though the latter makes even less sense; and so on). We did this a few rounds, and the review sessions didn't deliver much benefit for us, so we stopped. What do you have to do in order to make code review a natural, enjoyable, thought-stimulating, bug-fixing and mutual-learning experience?

  • Issue in understanding how to compare the performance of classifiers using ROC

    - by user1214586
I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples for each class to be classified (single feature, data length = 100). The data are different time series (x coordinate vs. time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times, then the probability that a gesture falls under category A is P(A)=0.6; similarly P(B)=0.3 and P(C)=0.1. Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, parameters and method should I compare them if I use ROC or cross-validation? Since the features for my classifier are the probabilistic values for the ROC plot, what should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
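
    One common footing for such a comparison, whatever the classifier, is to score each test sample with a class probability and compute the area under the ROC curve (AUC) per class, one-vs-rest, giving three curves for A, B and C; the input features differ per classifier, but the resulting score/label pairs are directly comparable. Here is a rough sketch of the AUC computation (illustrative only; it assumes at least one positive and one negative sample):

    using System;
    using System.Linq;

    public static class RocAuc
    {
        // scores: a classifier's probabilities for the positive class.
        // labels: true where the sample really belongs to that class.
        public static double Auc(double[] scores, bool[] labels)
        {
            // Sweep the threshold by sorting on descending score and accumulate
            // the trapezoidal area between successive (FPR, TPR) points.
            var order = Enumerable.Range(0, scores.Length)
                                  .OrderByDescending(i => scores[i]);
            double pos = labels.Count(l => l);
            double neg = labels.Length - pos;
            double tp = 0, fp = 0, prevTpr = 0, prevFpr = 0, area = 0;
            foreach (int i in order)
            {
                if (labels[i]) tp++; else fp++;
                double tpr = tp / pos, fpr = fp / neg;
                area += (fpr - prevFpr) * (tpr + prevTpr) / 2;  // trapezoid rule
                prevTpr = tpr; prevFpr = fpr;
            }
            return area;
        }
    }

    Cross-validation then answers a different question (how stable each classifier's AUC or error rate is across folds), so the two are complementary rather than alternatives.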

  • Branching strategy for frequent releases

    - by Technext
We have very frequent releases and we use Git for version control. When I mention frequency, please assume it includes bug fixes and feature releases too. All releases are eventually merged into 'mainline'. When a release is deployed on production and a bug is identified, people start fixing the bug on the same branch from which the latest release was deployed to production. They do not create a new bug-fix branch for it. I feel that's not the right way to go. There are several components, and each component has a different owner and thus a different perspective. Though I have not initiated talks with them, I am sure there will be a lot of resistance. The main issue they might cite would be: "There's a lot of work involved in creating and tracking branches, especially when deployments to production are so frequent. This will consume a lot of dev effort." Do you think that fixing bugs on the same branch from which the release was done is a good idea? If yes, how do you manage it? Using tags? I know that best practices may not always be applicable due to several factors, but I would still like to know what might be a good approach for branching in a scenario where releases/bug-fixes happen almost on a daily basis.

  • How To Change Window Transparency in Windows 7 with a Hotkey

    - by YatriTrivedi
Linux has a lot of eye candy because of Compiz; my favorite part is the window opacity plugin. Using a short AutoHotkey script, you can add that same functionality to Windows 7. I used this AHK script as a basis for changing opacity. It uses a single hotkey to change the active window's transparency by 25% each time until it resets to 0. I wanted functionality similar to Compiz, so I modified the script to use the mouse wheel and shortened the increments to get more variety. Just hold down the Windows key and scroll down to see through the window. This decreases opacity and makes the window more transparent.

  • Reading a large SQL Errorlog

    - by steveh99999
I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins being written to the errorlog, i.e. every time a user or the application connected to the SQL instance, an entry was written to the errorlog. This meant… huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Management Studio, was extremely slow… Luckily, I was able to use xp_readerrorlog to work around this – here are some example queries.

To show errorlog entries from the currently active log, just for today:

    DECLARE @now DATETIME
    DECLARE @midnight DATETIME
    SET @now = GETDATE()
    SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
    EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

To find out how big the current errorlog actually is, and what the earliest and most recent entries in it are:

    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT COUNT(*) AS 'Number of entries in errorlog',
           MIN(logdate) AS 'ErrorLog Starts',
           MAX(logdate) AS 'ErrorLog Ends'
    FROM #temp_errorlog
    DROP TABLE #temp_errorlog

To show just DBCC history information in the current errorlog:

    EXEC xp_readerrorlog 0, 1, 'dbcc'

To show backup entries in the current errorlog:

    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
    DROP TABLE #temp_errorlog

xp_readerrorlog is an undocumented system stored procedure – so there is no official Microsoft link describing the parameters it takes – however, there's a good blog on this here. And, if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain – the default of 6 might not be sufficient for you….

  • Would this be viewed poorly amongst the programming community?

    - by Eric P
So one of my responsibilities at work is to build an internal tool that helps the workers enter all their information. It's an enterprise application that is similar to a Windows Forms database tool. So it's not much different from developing a Word + Excel combo application, but the average person in this workgroup is a 20-40 year old woman or a random chatty male type. Plus I know all of these people are heavily involved with Facebook on a daily basis. How bad would it be if I styled my new interface to be similar to what Facebook does? People could get award points and stuff when they fill out different types of forms, and basically compete against each other like it was a game. When people completed one, it would be posted on their wall and everyone could comment/like stuff just like in Facebook. And it would be like they were doing peer review for fun. The rewards would be outstanding, I would imagine. These people are so into Facebook and Facebook games that productivity would rise due to them trying to compete and earn points and achievements. Would this be taking advantage of the people by 'tricking them into working harder by giving them a game', or would it be viewed as something that would improve happiness at work?

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of Version Control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game-development? What are the issues using VCS, with regard to non-text files (binary files), large projects, etc? What are solutions to these problems, if any? For organization of Answers, let's try on a per-package basis. Update each package/Answer with your results. Also, please list some brief details in your answer, about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCS below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It’s OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice-versa) is a waste of everyone’s time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...

  • 10 Innovations in PeopleSoft 9.2 - #2 Lower TCO with the PeopleSoft Update Manager

    - by John Webb
With the new PeopleSoft Update Manager in PeopleSoft 9.2, you are in control of all changes to your PeopleSoft systems, on your schedule. You can selectively apply patches with reduced time, effort, and cost. Bundles and Maintenance Packs are no longer used. Instead, a tailored custom package is automatically generated based on the parameters you select from the latest PeopleSoft source image. You have access to all updates from Oracle on a cumulative basis and can select and search for specific updates, such as new features, legal and regulatory changes, or a patch related to a specific issue, process, or object. Any prerequisites are automatically identified. The process of generating a change package is enabled through a new wizard with easy-to-follow steps and options. As changes are introduced to your test environment, the PeopleSoft Test Framework provides a closed-loop process to run regression test scripts against your changes. For a quick overview of the PeopleSoft Update Manager, check out the Video Feature Overview here: PeopleSoft Update Manager Video Feature Overview

  • Learn How to Integrate Social Media into Your Customer Service - December 12 Webcast

    - by Tuula Fai
Are you interested in learning more about social media customer service strategies? Then register for CRM Magazine's roundtable webcast, Four Social Media Support Strategies, being held Wednesday, December 12 from 11 AM - 12 PM PT (2 - 3 PM ET). The webcast features Oracle's Charlie Knapp, Director of CRM/CX Applications, Product Marketing, who will speak on best practices for social-enabling your contact center and customer support. Here is a brief overview of the webinar: Today's customers reveal an incredible amount of valuable information through social media on a daily basis. How well is your organization able to listen and respond? Join Parature, Verint Systems, KANA, and Oracle in this free webinar and learn how to: Enable collaboration across the enterprise to provide service and support in social media. Enhance loyalty, drive voice-of-the-customer listening, and reduce costs. Intelligently identify, route, and engage directly with your customers through social media. Integrate social media into contact center workflows to solve customer issues, protect your brand, and improve satisfaction. Register now to join us for this free web event.

  • What functionality of an iPad can I use with Ubuntu?

    - by Andrew Ferrier
What functionality of an iPad can I use with Ubuntu? I'm thinking about buying an iPad, but I only have Ubuntu PCs these days (no Windows, no Mac), and I'm nervous that it may be too reliant on the existence of iTunes. I'm less concerned about getting media onto and off it (I have read that I can do this with libimobiledevice), but will I be able to activate it without iTunes? Does it need to be USB-synced on a regular basis, or can I do almost anything I want through the cloud (i.e. over wifi)? Edit: In answer to jrgifford's question, in theory I'd like to do everything I would want to do with it were I using Windows/Mac. But more specifically, I think I'd want to:

1. "Activate" it, if that's necessary
2. Copy music / video onto it
3. Install apps

How much of this can I do without a Windows/Mac PC? Is it necessary to activate it? I don't think I'm that interested in backing it up (what would I back up?), but then again, maybe I'm confused as to how easy it is to get data on and off it. Can I get a regular SFTP app for the iPad to copy data to my Ubuntu machine over wi-fi, for example?

  • What is a Data Warehouse?

Typically, Data Warehouses are considered to be non-volatile in comparison to traditional databases, due to the fact that data within the warehouse does not change that often. In addition, Data Warehouses typically represent data through the use of multidimensional conceptual views that allow data to be extracted based on the view and the current position within the view.

Common Data Warehouse Traits:

Relatively Non-volatile Data
Supports Data Extraction and Analysis
Optimized for Data Retrieval and Analysis
Multidimensional Views of Data
Flexible Reporting
Multi-User Support
Generic Dimensionality
Transparent
Accessible
Unlimited Dimensions of Data
Unlimited Aggregation Levels of Data

Normally, Data Warehouses are much larger than their traditional database counterparts, due to the fact that they store the basis data along with data derived via multidimensional conceptual views. As companies store larger and larger amounts of data, they will need a way to effectively and accurately extract analysis information that can be used to aid in formulating current and future business decisions. This process can currently be done through data mining within a Data Warehouse. Data Warehouses provide access to data derived through complex analysis, knowledge discovery and decision making. Secondly, they support the demands for high performance in analyzing an organization's existing and current data. Data Warehouses provide support for an organization's data and acquired business knowledge. Within a Data Warehouse, multiple types of operations/subsystems are supported.

Common Data Warehouse Subsystems:

Online Analytical Processing (OLAP)
Decision-Support Systems (DSS)
Online Transaction Processing (OLTP)
