Search Results

Search found 6438 results on 258 pages for 'layer groups'.

Page 46/258 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Mix metrics for April 5, 2010

    - by tim.bonnemann
    Our latest numbers...
    Registered Mix users (weekly growth): 61,374 (+0.6%)
    Active users (percent of total):
      Last 30 days: 4,317 (7.0%)
      Last 60 days: 8,638 (14.1%)
      Last 90 days: 12,481 (20.3%)
    Traffic (30-day):
      Visits: 11,893
      Page views: 65,880
    Twitter:
      Followers: 3,169
      List mentions: 146
    User-generated content (30-day):
      New ideas: 36
      New questions: 57
      New comments: 394
    Groups:
      There are currently 1,402 Mix groups (requires login).

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
    As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:
      - View the menu items that users can access through the Oracle Identity Manager Administration web interface.
      - Assign users to roles.
      - Assign a role to a parent role.
      - Designate status to users so that they can specify defined responses for process tasks.
      - Modify permissions on data objects.
      - Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
      - Designate provisioning policies for a role. These policies determine whether a resource object is to be provisioned to or requested for a member of the role.
      - Assign or remove membership rules to or from the role. These rules determine which users can be assigned to or removed from direct membership in the role.
    In this post, I will share some examples of role management with the Oracle Identity Manager API. For role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface:

      tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

    Assign a user to a role:

      public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
          Map<String, String> filter = new HashMap<String, String>();
          filter.put("Groups.Role Name", roleName);
          tcResultSet role = service.findGroups(filter);
          role.goToRow(0); // position the result set on the first matching role
          String groupKey = role.getStringValue("Groups.Key");
          service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
      }

    Revoke a user from a role:

      public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
          Map<String, String> filter = new HashMap<String, String>();
          filter.put("Groups.Role Name", roleName);
          tcResultSet role = service.findGroups(filter);
          role.goToRow(0); // position the result set on the first matching role
          String groupKey = role.getStringValue("Groups.Key");
          service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
      }

    Get all members of a role:

      public List<User> getRoleMembers(String roleName) throws Exception {
          List<User> userList = new ArrayList<User>();
          Map<String, String> filter = new HashMap<String, String>();
          filter.put("Groups.Role Name", roleName);
          tcResultSet role = service.findGroups(filter);
          role.goToRow(0); // position the result set on the first matching role
          String groupKey = role.getStringValue("Groups.Key");
          tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
          for (int i = 0; i < members.getRowCount(); i++) {
              members.goToRow(i);
              long userKey = members.getLongValue("Users.Key");
              User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
              userList.add(member);
          }
          return userList;
      }

    About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, eager to learn new technologies and develop new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • Mix metrics for June 14, 2010

    - by tim.bonnemann
    We've been busy working on a few improvements to Mix which we plan to roll out over the coming weeks. In the meantime, here are our latest community metrics once again:
    Registered Mix users (weekly growth): 64,769 (+0.9%)
    Active users (percent of total):
      Last 30 days: 4,682 (7.2%)
      Last 60 days: 8,251 (12.7%)
      Last 90 days: 11,936 (18.4%)
    Traffic (30-day):
      Visits: 13,674
      Page views: 77,808
    Twitter:
      Followers: 3,451
      List mentions: 205
    User-generated content (30-day):
      New ideas: 29
      New questions: 38
      New comments: 167
    Groups:
      There are currently 1,440 Mix groups (requires login).

    Read the article

  • Count unique visitors by group of visited places

    - by Mathieu
    I'm facing the problem of counting the unique visitors of groups of places. Here is the situation: I have visitors that can visit places. For example, they can be internet users visiting web pages, or customers going to restaurants. A visitor can visit as many places as he wishes, and a place can be visited by several visitors. A visitor can come to the same place several times. The places belong to groups. A group can obviously contain several places, and places can belong to several groups. Given that, for each visitor, we have a list of visited places, how can I get the number of unique visitors per group of places?
    Example: I have visitors A, B, C and D, and I have places x, y and z. I have these visiting lists:
      A -> [x,x,y,x]
      B -> []
      C -> [z,z]
      D -> [y,x,x,z]
    Getting the number of unique visitors per place is quite easy:
      x -> 2  // A and D visited x
      y -> 2  // A and D visited y
      z -> 2  // C and D visited z
    But if I have these groups:
      G1 -> [x,y,z]
      G2 -> [x,z]
      G3 -> [x,y]
    How can I get this information?
      G1 -> 3  // A, C and D visited x or y or z
      G2 -> 3  // A, C and D visited x or z
      G3 -> 2  // A and D visited x or y
    Additional notes:
      - There are so many places that it is not possible to store information about every possible group.
      - It's not a problem if approximations are made. I don't need 100% precision. Having a fast algorithm that tells me there were 12345 visitors in a group instead of 12543 is better than a slow algorithm telling me the exact number. Let's say there can be ~5% deviation.
    Is there an algorithm or class of algorithms that addresses this type of problem?
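    This is the classic count-distinct-over-unions problem. A minimal sketch of the exact approach (names and types are mine, not from the question): keep one set of visitor IDs per place, and compute a group's count as the size of the union of its places' sets. Because the per-place structure only needs "insert" and "merge", each set can later be swapped for a HyperLogLog sketch, a known class of algorithms for exactly this: sketches merge the same way, use constant memory, and typically err by a few percent, which fits the ~5% tolerance.

      #include <iostream>
      #include <string>
      #include <unordered_map>
      #include <unordered_set>
      #include <vector>

      using Visitor = std::string;
      using Place = std::string;

      // One set of distinct visitors per place; repeat visits collapse for free.
      std::unordered_map<Place, std::unordered_set<Visitor>> perPlace;

      void recordVisit(const Visitor& v, const Place& p) {
          perPlace[p].insert(v);
      }

      // Unique visitors of a group = size of the union of its places' sets.
      // Swapping each set for a HyperLogLog sketch keeps this union cheap and
      // memory-bounded, at the cost of a small, tunable counting error.
      std::size_t uniqueVisitors(const std::vector<Place>& group) {
          std::unordered_set<Visitor> unioned;
          for (const auto& p : group) {
              auto it = perPlace.find(p);
              if (it != perPlace.end())
                  unioned.insert(it->second.begin(), it->second.end());
          }
          return unioned.size();
      }

      int main() {
          for (const char* p : {"x", "x", "y", "x"}) recordVisit("A", p);
          for (const char* p : {"z", "z"})           recordVisit("C", p);
          for (const char* p : {"y", "x", "x", "z"}) recordVisit("D", p);
          std::cout << uniqueVisitors({"x", "y", "z"}) << "\n"; // G1 -> 3
          std::cout << uniqueVisitors({"x", "z"})      << "\n"; // G2 -> 3
          std::cout << uniqueVisitors({"x", "y"})      << "\n"; // G3 -> 2
      }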

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.
    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.
    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.
    With a single, integrated sharing and access control model:
    If you connect from Windows or another SMB/CIFS client:
      - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
      - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
      - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.
    If you connect via NFS (typically from a UNIX client):
      - The system creates a credential containing your UNIX/NIS identity (including groups).
      - Files you create will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.
    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • Is there a recommended approach for using SQL Server as an Authorization store and extending AD properties using .Net? [closed]

    - by Jim
    We are going to be using SQL Server as an authorization store for our .NET Windows services and WCF services, as well as storing additional metadata about users and groups to extend the AD properties. Doing this will make it self-service and not require IT to change anything for our department (for users or groups). What, if any, are the existing recommended strategies or technologies for this?

    Read the article

  • Mix metrics for June 20, 2011

    - by tbonnema
    One of the busiest weeks ever for Mix this past week, thanks to the Suggest-a-Session contest, which just wrapped up at midnight Pacific last night. See for yourself:
    Registered Mix users (weekly growth): 76,378 (+1.7%)
    Active users (percent of total):
      Last 30 days: 4,383 (5.7%)
      Last 60 days: 5,232 (6.9%)
      Last 90 days: 6,240 (8.2%)
    Traffic (30-day):
      Visits: 17,368
      Page views: 148,426
    Twitter:
      Followers: 7,116
      List mentions: 380
    User-generated content (30-day):
      New ideas: 10
      New questions: 11
      New comments: 30
      New group messages: 34
      New direct messages: 1,661
    Groups:
      There are currently 1,603 Mix groups (requires login).

    Read the article

  • What is the difference between the 'sudo' and 'admin' group?

    - by ændrük
    I noticed that two groups are granted similar-looking permissions in /etc/sudoers:

      # Members of the admin group may gain root privileges
      %admin ALL=(ALL) ALL
      # Allow members of group sudo to execute any command
      %sudo ALL=(ALL:ALL) ALL

    My user account with "Administer the system" privileges is in the admin group, and there don't appear to be any users in the sudo group. What are these two groups for?
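    For reference, the functional difference between the two entries is the runas part: (ALL) lets members run commands as any user, while (ALL:ALL) additionally lets members run commands as any group via sudo's -g flag. A small illustration (the target user and group are arbitrary examples):

      sudo -u backup whoami   # allowed by both the %admin and %sudo entries
      sudo -g backup id       # needs a runas group, so only %sudo's (ALL:ALL) allows it

    Historically, admin was Ubuntu's own administrators group while sudo is the one used by upstream Debian; later Ubuntu releases (12.04 onward) standardized on the sudo group.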

    Read the article

  • Deleted Myself from Admin Group - Now Getting Error usermod: cannot lock /etc/passwd; try again later

    - by BubbaJ
    I have a laptop with Ubuntu 11.10 that is shared between myself and two other family members. My user ID was set up as the only "Administrator" on the laptop; the other users were set up as "Standard" users. In my attempt to add myself to the user groups for the other users, I somehow deleted myself from the admin groups. I used the "usermod" command from the terminal, and I must have neglected to include the proper switches or syntax for the update. It looks like I successfully added my user ID to the group associated with my wife's account. When I use the "groups" command, I can see only my ID and my wife's ID in the list; I no longer see the "admin" or "adm" groups, and others that used to be listed. When I go into System Settings > User Accounts, it looks like my ID is now listed as a "Standard" user. I would like to change my account back to "Administrator", but now I can't.
    I did some searches for solutions and found that I would need to boot into Recovery Mode and execute the usermod command from the root session. I was able to successfully boot into Recovery Mode and get to the root session. I was trying to execute the command "usermod -a -G admin user1" to add my ID (user1) back to the admin group. When I execute the command from the root session, I get the error message "usermod: cannot lock /etc/passwd; try again later". I tried preceding the usermod command with "sudo", but it didn't make a difference; same error.
    I then tried adding a new user using adduser, thinking I would create a new user ID and make it part of the admin group. I get the same error using the adduser command. I saw some posts that recommend looking for and deleting files that end in ".lock" in the /etc directory. The only file I found was .pwd.lock, which I haven't touched. I am at a loss as to what to try next. I am relatively inexperienced with Ubuntu and Linux, so a lot of this is new to me. Any help you can provide would be much appreciated.
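    A common cause of "usermod: cannot lock /etc/passwd" in Recovery Mode is that the root filesystem is mounted read-only there, so nothing can lock or write the passwd file. A sketch of the usual remedy, run in the root session (the user name is the asker's example):

      mount -o remount,rw /
      usermod -a -G admin user1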

    Read the article

  • How do you uninstall wine 1.5?

    - by jeff
    I installed Wine through the terminal using these commands:

      sudo add-apt-repository ppa:ubuntu-wine/ppa
      sudo apt-get update
      sudo apt-get install wine1.5

    I know that I've removed at least part of Wine. I removed the .wine folder in my home folder and I managed to remove the repository via this command:

      sudo apt-add-repository --remove ppa:ubuntu-wine/ppa

    Now when I try to install Wine 1.4 via Ubuntu Software Center, it tells me I must remove these items to install wine:

      Microsoft Windows compatibility layer (binary emulator and library)  wine1.5
      Microsoft Windows compatibility layer (64-bit support)               wine1.5-amd64
      Microsoft Windows compatibility layer (32-bit support)               wine1.5-i386:i386

    I've already tried this command:

      sudo apt-get --purge remove wine

    but it said that wine wasn't installed. Some assistance would be appreciated.
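    Since the installed packages are the wine1.5 ones rather than the bare wine package, purging them by the exact names listed in the Software Center message should clear the conflict. A hedged sketch using those names:

      sudo apt-get remove --purge wine1.5 wine1.5-amd64 wine1.5-i386:i386
      sudo apt-get autoremove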

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION
    If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc. are code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to manipulate web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. A lot has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was made against creating my own WCF services to do what I needed.
    The current project I'm working on involves several "integration points": a remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them into my coding practices). I was to deliver this DLL to the application team and they would simply call the methods exposed by it and voila! It would do some task or operation in SharePoint.
    SOLUTION
    My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to. This is a classic SOLID principle. In this case it proved to be quite helpful, because I wrote one data layer completely implementing SharePoint's built-in web services and another implementing my own WCF service that I wrote. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project, I created a class for every web service call. So for example, for Lists.asmx I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service and so on. For Views.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would interact with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint". Projects, in this case, are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:

      class DocumentRetrieval
      {
          private ListsSoapClient _service;
          private bool _impersonation;

          public DocumentRetrieval(bool impersonation, string endpt)
          {
              _service = new ListsSoapClient();
              this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
              _impersonation = impersonation;
              if (_impersonation)
              {
                  _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                  _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                  _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                      System.Security.Principal.TokenImpersonationLevel.Impersonation;
              }
          }

          private void SetEndPoint(string p)
          {
              _service.Endpoint.Address = new EndpointAddress(p);
          }

          /// <summary>
          /// Creates a document library with a specific name and template ID.
          /// </summary>
          /// <param name="listName">New list name</param>
          /// <param name="templateID">Template ID</param>
          /// <returns></returns>
          public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
          {
              XmlDocument sample = new XmlDocument();
              XmlElement viewCol = sample.CreateElement("Empty");
              try
              {
                  _service.Open();
                  viewCol = _service.AddList(listName, "", templateID);
              }
              catch (Exception ex)
              {
                  exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
              }
              finally
              {
                  _service.Close();
              }
              return viewCol;
          }
      }

    There was a lot more in this class (that I am not including), because I was reusing the grunt work and making other operations with Lists.asmx, for example updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project), I wanted to expose an IsProjectExisting method so the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

      public bool DocLibExists(string listName, ref ExceptionContract exContract)
      {
          try
          {
              var allLists = _service.GetListCollection();
              return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value == listName);
          }
          catch (Exception ex)
          {
              exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
          }
          return false;
      }

    This really just gets an XmlElement with all the lists; it was then up to me to sift through the clutter and noise and see if the document library already existed. This took a little bit of getting used to. Instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to check whether the "Title" attribute existed with a value equal to the list name: if so, it returns true; if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that the document library already existed. Now let's examine the code that actually creates the document library. It does what you are really after: it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it: document libraries, image libraries, custom lists, project tasks, etc. all have a unique integer associated with them. Well, that's great, but the client came back to me and gave me some specifics that each "project", or document library, should have. They specified they had three types of projects, each project would have unique views (about 10 views per project), and each project specified unique configurations (auditing, versioning, content types, etc.). So what started as a simple implementation of a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways you can do this too: using the web service calls you could configure views, versioning, even content types, etc. The only catch is, you have to work quite extensively with CAML. I am not fond of CAML. I can do it and work with it; I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved to be quite annoying: the service call would return a generic error message that did not particularly point me to a CAML syntax error, or even a CAML error at all, and I was not sure if it was a security, performance or code-based issue. It was also difficult at times because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it was quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is also now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
    So my plan was to create a document library, configure it accordingly and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, etc. It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I ran into is that one of the specifications for this template was that the "All Documents" view was to have two web parts on it. Well, it turns out that the Template Builder did not include the web parts as part of the list template definition it generated. It backed up the settings, the views and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem was that the service call accepts an int but only has access to the built-in library int definitions; any new ones added or created will not be available. So this made it impossible for me to approach the problem this way.
    I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up and then create lists based on that template, all doable by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.
    After some thinking, I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries both from a template for which an ID was required and from templates saved as STP files. Now, I won't bother posting the code to contact the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template:

      public ServiceResult CreateProject(string name, string templateName, string projectId)
      {
          string siteurl = SPContext.Current.Site.Url;
          Guid webguid = SPContext.Current.Web.ID;
          using (SPSite site = new SPSite(siteurl))
          {
              using (SPWeb rootweb = site.RootWeb)
              {
                  SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                  ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
              } // SPWeb
          } // SPSite
          return _globalResult;
      }

      private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
      {
          var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
          if (temp != null)
          {
              try
              {
                  Guid listGuid = targetsite.Lists.Add(name, "", temp);
                  SPList newList = targetsite.Lists[listGuid];
                  _globalResult = new ServiceResult(true, "Success", "Success");
              }
              catch (Exception ex)
              {
                  _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
              }
          }
      }

      private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
      {
          using (SPSite sitecollection = new SPSite(siteurl))
          {
              using (SPWeb web = sitecollection.AllWebs[webguid])
              {
                  action(web);
              }
          }
      }

    This is some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action method to process the web, which allowed me to properly dispose of the SPWeb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me. This allowed me to do a lot more than just create a document library with a template; it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This has been something I have been doing for a little while now, but I do hope that SharePoint 2010 has more of an answer to this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view; I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts, one of them being a project ID configured in the web part properties.
    The following code enhances and replaces the "Act_CreateProject" method above:

      private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
      {
          var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
          if (temp != null)
          {
              SPLimitedWebPartManager wpmgr = null;
              try
              {
                  Guid listGuid = targetsite.Lists.Add(name, "", temp);
                  SPList newList = targetsite.Lists[listGuid];
                  SPFolder rootFolder = newList.RootFolder;
                  rootFolder.Properties.Add(KEY, projectId);
                  rootFolder.Update();
                  if (rootFolder.ParentWeb != targetsite)
                      rootFolder.ParentWeb.Dispose();
                  if (!templateName.Contains("Natural"))
                  {
                      SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                      SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                      wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                      ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                      alldocfile.Update();
                  }
                  if (newList.ParentWeb != targetsite)
                      newList.ParentWeb.Dispose();
                  _globalResult = new ServiceResult(true, "Success", "Success");
              }
              catch (Exception ex)
              {
                  _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
              }
              finally
              {
                  if (wpmgr != null)
                  {
                      wpmgr.Web.Dispose();
                      wpmgr.Dispose();
                  }
              }
          }
      }

      private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
      {
          var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
          if (wp != null)
          {
              (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
              mgr.SaveChanges(wp);
          }
      }

    This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, the SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by IDs and will not care about names. My "DocLibExists" will now need to be changed, because a web service is not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the web part manager. I have seen several examples online and also read a lot about memory leaks. The above code does not produce memory leaks: the web part manager creates an SPWeb, so just dispose of it like I did.
    CONCLUSION
    This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and if you are doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post; I was on a bus on a 12-hour trip to Iowa while I wrote it, half asleep and half awake. Hopefully it makes enough sense to someone.

    Read the article

  • how to re-use a sprite in cocos2d-x

    - by zinking
    Sometimes it takes time to create the sprite structures in a scene; I might need to set up structures inside a sprite to meet a requirement, so I would like to reuse such structures within the game again and again. I tried removing the child from its parent, detaching it from the parent, and cleaning the parent of the sprite, but when I try to add the sprite to another scene, it just won't pass the assertion that the sprite already has a parent. Did I miss a step?
    To add an example: I have a sprite A that takes quite a few steps to construct, so I use it in scene A, layer A, and then I want to use it in scene A, layer B; scene B, layer A1; etc. Generally speaking, I don't want to reconstruct the sprite again.
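    In cocos2d-x the usual pattern for this is to take ownership of the sprite before detaching it, and to detach without cleanup so its internal structure survives; the assertion fires because addChild requires a parentless node. A minimal sketch, assuming cocos2d-x 2.x naming (sprite and otherLayer stand in for your own nodes):

      // Keep an owning reference so removal doesn't destroy the sprite.
      sprite->retain();
      // Detach without cleanup so the sprite's children and state are preserved.
      sprite->removeFromParentAndCleanup(false);
      // ...later, once the next scene or layer is running...
      otherLayer->addChild(sprite);
      // The new parent retains it; drop the temporary reference.
      sprite->release();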

    Read the article

  • Design pattern for isomorphic trees

    - by Peregring-lk
    I want to create a data structure to work with isomorphic trees. I'm not looking for algorithms or methods to check whether two or more trees are isomorphic to each other, just a way to create various trees with the same structure. Example:

           2              'a'             3.5
          / \             / \             / \
         3   3         'f'   'y'      1.0   3.1
        / \             / \            / \
       4   7         'e'   'f'      2.3   7.7

    The first "layer", or tree, is the "natural tree" (a tree with natural numbers), the second layer is the "character tree" and the third one is the "float tree". The data structure has a method or iterator to traverse the tree and to perform different operations on its values. These operations can change the value of nodes, but never the structure (first I create the structure and then I configure the tree with its different layers). If I add a new node, it is applied to each layer. Which known design pattern fits this description or is related to it?
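    One way to model this (a sketch of the idea rather than a named GoF pattern; the type and function names are mine) is to separate the shared shape from the per-layer values: a node template parameterized on the value type, plus a structural clone that produces an isomorphic tree holding another type. An "add node" operation implemented on a wrapper that owns all the layers can then keep their shapes in lockstep.

      #include <memory>
      #include <vector>

      template <typename T>
      struct Node {
          T value;
          std::vector<std::unique_ptr<Node<T>>> children;
      };

      // Structural clone: produces a tree of U with exactly the same shape,
      // values default-initialized and filled in later by that layer's logic.
      template <typename U, typename T>
      std::unique_ptr<Node<U>> cloneShape(const Node<T>& src) {
          auto dst = std::make_unique<Node<U>>();
          for (const auto& child : src.children)
              dst->children.push_back(cloneShape<U>(*child));
          return dst;
      }

      // Usage: build the natural tree once, then derive isomorphic layers:
      //   auto charLayer  = cloneShape<char>(naturalRoot);
      //   auto floatLayer = cloneShape<double>(naturalRoot);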

    Read the article

  • Collision filtering techniques

    - by Griffin
    I was wondering what efficient techniques are out there for mapping collision filtering between various bodies, sub-bodies, and so forth. I'm familiar with the simple idea of having different layers of 2D bodies, but this is not sufficient for more complex mapping. (Think of having sub-bodies of a body, such as limbs, collide with each other by placing them on the same layer, and then wanting only the legs to collide with the ground while the arms would not.) This can be solved with a multidimensional layer setup, but I would probably end up just creating more and more layers, to the point where the simplicity and efficiency of layer filtering would be gone. Are there any more sophisticated ways to solve even more complex situations than this?
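    The standard step up from flat layers (used by Box2D, among others) is a per-part category/mask bit pair: each body part declares what it is (category) and what it accepts (mask), and two parts collide only if each one's mask accepts the other's category. That handles the legs-hit-ground-but-arms-don't case without multiplying layers. A minimal sketch (the category names are illustrative):

      #include <cstdint>

      // What a body part *is*: one bit per category.
      enum Category : std::uint16_t {
          GROUND = 1 << 0,
          LEG    = 1 << 1,
          ARM    = 1 << 2,
      };

      struct Filter {
          std::uint16_t category; // what I am
          std::uint16_t mask;     // what I collide with
      };

      // Two parts collide only if each one's mask accepts the other's category.
      bool shouldCollide(const Filter& a, const Filter& b) {
          return (a.mask & b.category) != 0 && (b.mask & a.category) != 0;
      }

      // Example: legs collide with the ground and each other; arms with nothing.
      //   Filter leg{LEG, GROUND | LEG};
      //   Filter arm{ARM, 0};
      //   Filter ground{GROUND, LEG};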

    Read the article

  • Craft a Drinkable Density Column

    - by Jason Fitzpatrick
    Earlier this month we shared a clever 9-layer density column demonstration you’d most certainly not want to drink. This smaller demonstration, however, is a delicious column of fruit flavors. The secret sauce? In the previous experiment we shared, the secret was using fluids with naturally varying densities (such as lamp oil and vegetable oil); in this experiment you’ll be relying on varying amounts of sugar in each layer to change the density of the water and keep the layers separate (and edible). You’ll need some Skittles, a few drinking glasses, water, and for best effect, a tall and narrow glass or graduated cylinder. Hit up the link below for the full details on the experiment and tips on how to carefully layer the liquids. Make a Drinkable Rainbow in a Glass [io9]

    Read the article

  • Approach on Software Development Architecture

    - by john ryan
    Hi, I am planning to standardize the way we create our new projects. Currently we are using a 3-tier architecture where we have a class library project that includes our data access layer and business layer. Something like:

      Solution ClassLibrary
        > ClassLibrary project:
          > DAL (folder) > DAL classes
          > BAL (folder) > BAL classes

    This class library DLL is referenced in our presentation layer project, which is the application (web/desktop). Something like:

      Solution WebUniversitySystem
        > Libraries (folder)
          > ClassLibrary.dll
        > WebUniversitySystem (project):
          > references ClassLibrary.dll
          > Pages etc...

    Now what I am planning to do is something like:

      Solution WebUniversitySystem
        > DataAccess (project)
        > BusinessLayer (project)
          > references DataAccess
        > WebUniversitySystem (project):
          > references BusinessLayer
          > Pages etc...

    Is this OK? Or is there a better approach we can follow? Thanks and regards.

    Read the article

  • Internal and external API architecture

    - by Tacomanator
    The company I work for maintains a successful SaaS product that grew "organically" over the years. We are planning to expand the line with a suite of new products that will share data with the existing product. To support this, we are looking to consolidate business logic into a single place: a web service layer. The WS layer will be used by:
      - The web applications
      - A tool to import data
      - A tool to integrate with other client software (not an API per se)
    We also want to create an API that can be used by those of our customers who are capable of using it to create their own integrations. We are struggling with the following question: should the internal API (aka the WS layer) and the external API be one and the same, with security and permission settings to control what can be done by whom, or should they be two separate applications, where the external API just calls the internal API like any other application? So far in our debate it seems that separating them may be more secure, but will add overhead. What have others done in a similar situation?

    Read the article

  • Ripping MP3s in Rhythmbox Ubuntu 12.10 (64 bit)?

    - by James Fellows Yates
    A couple of days ago I installed Ubuntu 12.10 (64-bit). Today I tried ripping a CD in MP3 format. However, whenever I try to rip, it says it is missing an extra multimedia plugin, "Gstreamer extra plug-ins (i386)". I then try to install the :i386 version of the gstreamer-ugly plugins, but then I get the same problem with the ID3 demuxer (or something similar). The terminal output I get for both problems (but with "MPEG-1 Layer 3 (MP3) encoder" replaced by the ID3 demuxer name) is:

      james@clefairy:~$ rhythmbox
      (rhythmbox:24122): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
      Rhythmbox-Message: Missing plugin: gstreamer|0.10|rhythmbox|MPEG-1 Layer 3 (MP3) encoder|encoder-audio/mpeg, mpegversion=(int)1, layer=(int)3
      /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
        import gobject._gobject

    It doesn't help that I have to install/remove the entire gstreamer-ugly collection each time; I can't find that specific file. The CD plays fine; it's the ripping plugin that doesn't seem to work. I didn't have this problem previously on 12.04 (64-bit).

    Read the article

  • Updating query results

    - by Francisco Garcia
    Within a DDD and CQRS context, a query result is displayed as table rows. Whenever new rows are inserted or deleted, their positions must be calculated by comparing the previous query result with the most recent one. This is needed to visualize new or deleted rows with an animation. The model of my view contains an array of the displayed query results, but I need a place to compare its contents against the latest query. Right now I consider my view model part of my application layer, but the comparison of two query result sets seems like something that must be done within the domain layer. Which component should cache a query result, and which one should compare them? Are view models (and their cached contents) supposed to be in the application layer?
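    Wherever the responsibility ends up, the comparison itself is a plain diff of two result sets keyed by row identity, which suggests treating it as presentation logic rather than domain logic. A sketch (the key type and names are illustrative) that yields exactly what a row animation needs:

      #include <string>
      #include <unordered_set>
      #include <vector>

      struct RowDiff {
          std::vector<std::string> inserted; // keys only in the new result
          std::vector<std::string> deleted;  // keys only in the old result
      };

      RowDiff diffResults(const std::vector<std::string>& oldKeys,
                          const std::vector<std::string>& newKeys) {
          std::unordered_set<std::string> oldSet(oldKeys.begin(), oldKeys.end());
          std::unordered_set<std::string> newSet(newKeys.begin(), newKeys.end());
          RowDiff d;
          for (const auto& k : newKeys)
              if (!oldSet.count(k)) d.inserted.push_back(k); // appeared since last query
          for (const auto& k : oldKeys)
              if (!newSet.count(k)) d.deleted.push_back(k);  // vanished since last query
          return d;
      }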

    Read the article

  • Processing a stream. Must layers be violated?

    - by Lord Tydus
    Theoretical situation: one trillion foobars are stored in a text file (no fancy databases). Each foobar must have some business logic executed on it. A set of one trillion will not fit in memory, so the data layer cannot return a big set to the business layer. Instead, they are to be streamed in, one foobar at a time, with business logic executing on one foobar at a time. The stream must be closed when finished. In order for the stream to be closed, the business layer must close it (a data operation detail), thus violating the separation of concerns. Is it possible to incrementally process the data without violating layers?
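    One common resolution is to invert control: the data layer owns the stream's entire lifetime and hands the business layer one item at a time through a callback, so neither layer touches the other's concerns. A minimal sketch (the file name and per-item logic are placeholders):

      #include <fstream>
      #include <functional>
      #include <iostream>
      #include <string>

      // Data layer: owns opening, iterating over, and closing the stream.
      void forEachFoobar(const std::string& path,
                         const std::function<void(const std::string&)>& process) {
          std::ifstream in(path);          // opened here by the data layer...
          std::string line;
          while (std::getline(in, line))
              process(line);               // business logic sees one foobar at a time
      }                                    // ...and closed here, also by the data layer

      // Business layer: pure per-item logic, no knowledge of files or streams.
      int main() {
          forEachFoobar("foobars.txt", [](const std::string& foobar) {
              std::cout << foobar.size() << '\n'; // stand-in for real business logic
          });
      }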

    Read the article

  • Entity framework architecture

    - by user1741807
    I want to build an Entity Framework application in C# WinForms. I'm new to Entity Framework and don't know how to structure the architecture. I want to have the model in a class library, plus a GUI layer and maybe a controller layer. I'm used to that architecture, but I don't know how to handle the objects in layers other than the model. How do I manage objects in the GUI layer when I can't have a reference to the model? I'm used to having some kind of DTO, but what's the best way?

    Read the article

  • directory services group query changing randomly

    - by yamspog
    I am seeing unusual behaviour in my ASP.NET application. I have code that uses Directory Services to find the AD groups for a given, authenticated user. The code goes something like:

      string username = "user";
      string domain = "LDAP://DC=domain,DC=com";
      DirectorySearcher search = new DirectorySearcher(domain);
      search.Filter = "(SAMAccountName=" + username + ")";

    And then I query and get the list of groups for the given user. The problem is that the code was receiving the list of groups as a list of strings; with our latest release of the software, we are starting to receive the list of groups as a byte[]. The system will return string, suddenly return byte[], and then with a reboot it returns string again. Anyone have any ideas? Code sample:

      DirectoryEntry dirEntry = new DirectoryEntry("LDAP://" + ldapSearchBase);
      DirectorySearcher userSearcher = new DirectorySearcher(dirEntry)
      {
          SearchScope = SearchScope.Subtree,
          CacheResults = false,
          Filter = ("(" + txtLdapSearchNameFilter.Text + "=" + userName + ")")
      };
      userResult = userSearcher.FindOne();
      ResultPropertyValueCollection valCol = userResult.Properties["memberOf"];
      foreach (object val in valCol)
      {
          if (val is string)
          {
              distName = val.ToString();
          }
          else
          {
              distName = enc.GetString((Byte[])val);
          }
      }

    Read the article

  • what web based tool, to allow a non-technical user to manage authorized keys files on a Linux (fedora/centos/ubuntu/debian) server

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy that requires individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy for non-technical users to use. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts.
    Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords. (It's possible, but it's not easy; some functionality is in the usermin module, or would require some tedious steps.)
    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
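    For reference, whatever tool ends up on top, the operation it has to automate per user is just an append plus strict permissions. A sketch of the manual steps for a standard OpenSSH setup ("devuser" is a placeholder account name):

      mkdir -p ~devuser/.ssh
      cat id_dsa.pub >> ~devuser/.ssh/authorized_keys
      chmod 700 ~devuser/.ssh
      chmod 600 ~devuser/.ssh/authorized_keys
      chown -R devuser:devuser ~devuser/.ssh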

    Read the article

  • samba joined to AD canot see users when in the security tab on client

    - by Jonathan
    I've got Samba joined via Kerberos and winbindd to our AD network, and user authentication and everything else is working great. However, when I try to add users/groups to file permissions, it tells me they are not found. All the users and groups show up fine with getent, so I'm not sure why they are not showing up. Here is my smb.conf; I would much appreciate any help with this.

      # GLOBAL PARAMETERS
      [global]
          socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264
          workgroup = [hidden]
          realm = [hidden]
          preferred master = no
          server string = xerxes web/file server
          security = ADS
          encrypt passwords = yes
          log level = 3
          log file = /var/log/samba/%m
          max log size = 50
          printcap name = cups
          printing = cups
          winbind enum users = Yes
          winbind enum groups = Yes
          winbind use default domain = Yes
          winbind nested groups = Yes
          winbind separator = +
          winbind refresh tickets = yes
          idmap uid = 1600-20000
          idmap gid = 1600-20000
          template primary group = "Domain Users"
          template shell = /bin/bash
          kerberos method = system keytab
          nt acl support = yes

      [homes]
          comment = Home Directories
          valid users = %S
          read only = No
          browseable = No
          create mask = 0770
          directory mask = 0770
          force create mode = 0660
          force directory mode = 2770
          inherit owner = no

      [test]
          comment = Test
          path = /mnt/test
          writeable = yes
          valid users = %s
          create mask = 0770
          directory mask = 0770
          force create mode = 0660
          force directory mode = 2770
          inherit owner = no

      [printers]
          comment = All Printers
          path = /var/spool/cups
          browseable = no
          printable = yes

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >