Search Results

Search found 20864 results on 835 pages for 'account management'.


  • Which built-in account is used for Anonymous Authentication in IIS 7?

    - by smwikipedia
    We know that Microsoft IIS 7.0 offers a slew of authentication methods, such as Anonymous Authentication, Forms-Based Authentication, Digest Authentication, etc. I read the following in Professional IIS 7, published by Wrox: "When we use Anonymous Authentication, the end-user does not supply credentials, effectively making an anonymous request. IIS 7.0 impersonates a fixed user account when attempting to process the request (for example, to read the file off the hard disk)." So, what is the fixed user account impersonated by IIS? Where can I see it? If I don't know what this account is, how can I assign proper permissions for clients who are authenticated as anonymous users? Thanks.
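
    For reference, on a default install IIS 7.0 ships with the built-in IUSR account as the fixed anonymous identity (replacing IIS 6's IUSR_MACHINENAME), so NTFS read permission for IUSR, or for the IIS_IUSRS group, is what covers anonymous requests. One way to confirm what a given server actually uses is to query the configuration section; a sketch, assuming the default inetsrv path:

        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/security/authentication/anonymousAuthentication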


  • How to manage several Linux workstation like a cluster?

    - by Richard Zak
    How does one go about managing a lab of Linux workstations? I'd like for users to be able to log in, run their GUI apps (LibreOffice, Firefox, Eclipse, etc), and for the computers to be able to be used as compute nodes (OpenMPI). This part I'm fine with. But how can I centrally deploy a new software package or upgrade an installed package? How can I reload the entire OS on a given node, as if these workstations were part of a super computing cluster? Is there a nice program to help with setting up PXE booting and image management, and remotely managing packages? Ideally such a system would work with Ubuntu. If there isn't a nice package, how could this be set up manually?
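
    Until a full provisioning system is in place (FAI and PXE plus preseed files are commonly used for reimaging Ubuntu nodes), central package pushes need nothing more than an SSH loop; a minimal sketch, assuming key-based root SSH and a hypothetical nodes.txt with one hostname per line:

        #!/bin/bash
        # Push a package upgrade to every workstation in parallel.
        while read -r host; do
            ssh "root@$host" 'apt-get update && apt-get -y install libreoffice' &
        done < nodes.txt
        wait   # block until every node has finished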


  • How can I automate my Linux computer to power off (and on preferably) under certain circumstances?

    - by Ashimema
    OK, so a little background: I've been using Windows Home Server as a backup appliance, media server, and share server at home for some time. I decided it was costing me a lot of juice, so very early on I added the "Lights Out" add-on to ensure it was only running as and when needed. I'm now looking to switch to a Linux-based server, and I'm looking for a similar tool or set of tools for advanced power management. Now the question: does anyone have any all-in-one suggestions (i.e. with client parts for both Windows and Linux and a server part for the Linux server), or can anyone simply confirm that I'll need to set up all the individual bits for this myself separately? (A tool similar to "SmartPower" but for Linux would be a great start.)
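
    On the Linux side, scheduled shutdown and wake-up can be scripted with rtcwake from util-linux, and remote power-on is usually handled with Wake-on-LAN; a sketch (the 03:00 wake time and the MAC address are arbitrary examples):

        # Power the server off now and program the RTC alarm to wake it at 03:00
        sudo rtcwake -m off -t "$(date -d 'tomorrow 03:00' +%s)"

        # ...or wake it on demand from another machine via Wake-on-LAN
        wakeonlan 00:11:22:33:44:55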


  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu 9.04 Server Guide (Security -> User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters." Is there a command for displaying the current password policies for a user, the way the chage command displays the password expiration information for a specific user?

        > sudo chage -l SomeUserName
        Last password change                                : May 13, 2010
        Password expires                                    : never
        Password inactive                                   : never
        Account expires                                     : never
        Minimum number of days between password change      : 0
        Maximum number of days between password change      : 99999
        Number of days of warning before password expires   : 7

    This would be preferable to examining the various places that control the policy and interpreting them by hand, since that process could contain errors; a command that reports the composed policy could be used to verify the policy-setting steps.
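
    For what it's worth, chage only reports aging, not complexity; the composed policy mostly lives in /etc/login.defs and the PAM stack, so the closest thing to a dump may be grepping both (a sketch, not a single authoritative command):

        # Aging defaults applied when accounts are created
        grep '^PASS_' /etc/login.defs

        # Length/complexity rules (pam_unix 'min=' or pam_cracklib 'minlen=')
        grep -E 'pam_unix|pam_cracklib' /etc/pam.d/common-password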


  • How to install ported Linux software on a Mac? (MacPorts, Fink, anything better?)

    - by Ben Alpert
    On my Mac OS X machine, how would you recommend I install various software that's been ported from Linux? I don't install such software very frequently, but I've been using MacPorts and it always seems quite slow, presumably because it has to compile the packages on-the-fly. I'd much prefer a package management system that has binary packages, saving me the need to compile things every time I want to download something new. I think Fink has binaries for some of the packages, but I usually see MacPorts recommended as the system to use. Which do you think works better and why? (Or is there another system that I haven't heard of?)


  • Is there a simpler way to log into a local account on a domain workstation?

    - by David
    Occasionally I need to log into a local machine account on a Windows workstation joined to our domain. The syntax for specifying a domain account looks like this:

        DOMAINNAME\myusername

    whereas the syntax for logging into a local account is:

        HOSTNAME\myusername

    The problem is that I often don't know the hostname off the top of my head. It is possible to find it by clicking the "How do I log on to another domain?" link, but this requires me to memorize or write down an often cryptic hostname. Is there another, simpler way to do this?
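
    One shortcut worth knowing: Windows accepts a period as an alias for "this machine", so a local account can be reached without ever typing the hostname:

        .\myusername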


  • Lenovo tools for Windows 7: can't re-enable wireless

    - by pcampbell
    Consider a netbook: a Lenovo S10e with Windows 7 and the S10 Lenovo power management tools, running the factory BIOS. Fn+F5 is the key combo to toggle the wireless radio on/off. Disabling works fine, as expected; the problem is that re-enabling doesn't work, or it is unclear how to do it. Previously tried without success:

    - Fn+F5
    - Fn+Ctrl+F5
    - Fn+Shift+F5
    - Fn+Alt+F5

    Question: how can you re-enable the wireless radio using the Function key on a Lenovo netbook?


  • Easily manage vsftpd virtual users?

    - by Phil
    I have a vsftpd server configured with many virtual users:

    - logins are stored in a Berkeley DB file;
    - one configuration file exists per user to define his permissions (read-only or read-write, home directory, etc.); for that, I use the user_config_dir parameter (set in vsftpd.conf).

    I am wondering if it would be possible to manage these virtual users from a simple GUI (such as a web interface). I have found some tools, but they are limited to generic vsftpd configuration, not virtual user management. Otherwise, PAM-MySQL seems to be a good way to manage users efficiently, but only usernames/passwords and logs can be stored in the database, not permissions. Finally, I've found one forum thread on the subject, but the solution is a bit awkward... Is there any way to easily manage vsftpd virtual users?
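
    Short of a GUI, the moving parts script easily: db_load rebuilds the credentials database, and the per-user file is plain text. A minimal sketch, assuming the Berkeley DB utilities are installed and paths that match vsftpd.conf (the helper script name and layout are hypothetical):

        #!/bin/bash
        # add_vuser.sh <user> <password> - append a virtual user and rebuild the DB
        printf '%s\n%s\n' "$1" "$2" >> /etc/vsftpd/logins.txt
        db_load -T -t hash -f /etc/vsftpd/logins.txt /etc/vsftpd/login.db

        # Per-user permissions file, picked up via user_config_dir
        cat > "/etc/vsftpd/user_conf/$1" <<EOF
        local_root=/srv/ftp/$1
        write_enable=YES
        EOF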


  • Icon on user account desktop before the user has logged in.

    - by JHamill
    Currently working on a Windows 7 deployment project, I have a requirement to place an RDP icon on a specific user's desktop; let's call this user 'Guest'. The image itself will be completely vanilla, and all user accounts will be created using commands in the unattend file. The 'Guest' account will not be a local admin, so it will not be the account used for autologon during the application of the unattend file. As a result, the 'Guest' profile will not have been created yet, so I'm unable to simply place the icon at C:\Users\Guest\Desktop. Is there a way to place an icon on this specific user's desktop prior to logging in with it? I know there are ways around this, i.e. include this account in the base image and log in with it in order to create the profile, but I'd like to keep the base image as vanilla as possible. Any ideas or pointers would be greatly appreciated. Thanks in advance.
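
    One compromise that keeps the image vanilla is to stage the .rdp file somewhere neutral and let a machine-level logon script copy it the first time 'Guest' actually signs in; a sketch (the source path and file name are placeholders):

        rem logon.cmd - assigned via Group Policy or the unattend file
        if /i not "%USERNAME%"=="Guest" goto :eof
        if not exist "%USERPROFILE%\Desktop\remote.rdp" (
            copy "C:\ProgramData\remote.rdp" "%USERPROFILE%\Desktop\"
        )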


  • Manage Dell workstations with OpenManage Essentials (OME)

    - by Jonathan Rioux
    How can I manage Dell workstations with OpenManage Essentials? First, is it even possible? I've read that only Dell servers can be managed with OME. I would like to inventory each Dell workstation in my environment and be able to see its service tag with warranty expiration, etc. Or which product must I use to do this? There are so many Dell management products (OMCI, OMCC, ITA, etc.) that I am completely lost among them.


  • Close lid option "Do nothing" absent

    - by Jaap
    I just unpacked my new Sony Vaio, with Windows 8 Pro installed. Everything is nice, so I tried setting my power management options. The "When I close the lid" option only lists Hibernate, Sleep, and Shut down; the "Do nothing" option is not present. I've seen loads of pages on Google where people ask or explain how to set this option to "Do nothing", but in all my power plans this option is absent... Can I use a tool to fix this, or is there a way to force Windows to show me this option (and have it actually work)?

    UPDATE: powercfg /q <scheme guid> gives me this output:

        Subgroup GUID: 4f971e89-eebd-4455-a8de-9e59040e7347  (Power buttons and lid)
          GUID Alias: SUB_BUTTONS
          Power Setting GUID: 5ca83367-6e45-459f-a27b-476b1d01c936  (Lid close action)
            GUID Alias: LIDACTION
            Possible Setting Index: 000
            Possible Setting Friendly Name: Sleep
            Possible Setting Index: 001
            Possible Setting Friendly Name: Hibernate
            Possible Setting Index: 002
            Possible Setting Friendly Name: Shut down
          Current AC Power Setting Index: 0x00000000
          Current DC Power Setting Index: 0x00000000

    Other sections have different enumerations, where 000 stands for "No action".
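
    Two things may be worth trying from an elevated prompt, with no guarantee on an OEM image: check whether the value is merely hidden from the UI, and force an index directly. Note that on a stock Windows image index 0 is "Do nothing", but the enumeration above maps 0 to Sleep on this Vaio, so the OEM driver may simply not expose the option:

        powercfg -attributes SUB_BUTTONS LIDACTION -ATTRIB_HIDE
        powercfg /setacvalueindex scheme_current sub_buttons lidaction 0
        powercfg /setdcvalueindex scheme_current sub_buttons lidaction 0
        powercfg /setactive scheme_current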


  • Expire a Windows user (license) based on first login instead of a fixed expiration date

    - by smhnaji
    In a project, we have lots of Windows users who have bought licenses for 1 month, 2 months, ... 1 year, and so on.

    CURRENT SITUATION (WHAT I DON'T WANT): when users are created and added to the OS, a fixed expiration date is set.

    WHAT I WANT: each user's expiration date should be calculated automatically from his first login, since the user might not need his account right when he purchases the license. In other words: today, a license purchased on Jan 1 can only be used until Feb 1, whether the user ever logs in or not; he cannot come on Feb 5 and start using his license, because it has already expired. What I want is that when he comes on Feb 5 and logs in for the first time, the license runs until March 5.

    Environment: Windows Server 2012
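
    While waiting for a proper answer, one low-tech approach is to create the account non-expiring and let an elevated logon task stamp the real date on first login; a sketch (the flag-file location is hypothetical, and net user's /expires date format follows the system locale):

        # first-login-expiry.ps1 - must run elevated, e.g. a scheduled task on logon
        $user = $env:USERNAME
        $flag = "C:\licenses\$user.stamped"
        if (-not (Test-Path $flag)) {
            $expires = (Get-Date).AddDays(30).ToShortDateString()
            net user $user /expires:$expires
            New-Item -ItemType File -Path $flag -Force | Out-Null
        }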


  • How to install software packages on a Mac? (MacPorts, Fink, anything better?)

    - by Ben Alpert
    On my Mac OS X machine, how would you recommend I install command line software and other packages? I've been using MacPorts and it always seems quite slow, presumably because it has to compile the packages on-the-fly. I'd much prefer a package management system that has binary packages, saving me the need to compile things every time I want to download something new. I think Fink has binaries for some of the packages, but I usually see MacPorts recommended as the system to use. Which do you think works better and why? (Or is there another system that I haven't heard of?)


  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've been trying a lot of approaches but still haven't found an effective solution. I have gigs of photos on a network drive (an Iomega Home Media Network Drive, plugged into my Wi-Fi router), and I'd like to do two things:

    1. Run a Picasa import process over all the photos on the drive, making Picasa physically organize the files into a year/month folder structure. Ideally, the import target directory should be the same network drive; otherwise I would have to move all the imported files from my local computer back to the drive myself.

    2. Share the Picasa database over the network by uploading it to the network drive, then have me and other members of the family point our Picasa installations to the network database, seeing the photos as well as making changes (tagging faces, creating logical albums, etc.).

    Is there ANY possibility of accomplishing this? Or should I be looking for another photo management app, and in that case, do you know of a suitable one? Thank you!


  • Automatically changing a power profile when a laptop (un)docks?

    - by Dan
    I'm looking for a way to automatically change what happens when I close my laptop's lid depending on whether it's in its docking station or not. In an ideal world, the on-close behavior would be nothing (when docked) and sleep (when undocked), but there only seem to be options for behavior when plugged in and when on battery (when it's plugged in but not docked, I'd like it to sleep when closed). My initial idea was to create a new power profile with this behavior, but I can't find a way to have it switch when docked (or to make the power system take docked status into account at all). Any ideas?


  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    The Exchange 2010 remote administration tools are complaining with the following error:

        [txexmb02.ibm.com] Connecting to remote server failed with the following error
        message: The WS-Management service cannot process the request. The system load
        quota of 1000 requests per 2 seconds has been exceeded. Send future requests at
        a slower rate or raise the system quota. The next request from this user will
        not be approved for at least 558475776 milliseconds. For more information, see
        the about_Remote_Troubleshooting Help topic.
            + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace)
              [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed
        VERBOSE: Connecting to TXEXHC02.ibm.com

    The help document this error refers to says this is a WS-Man error. We're running SCOM 2007 R2, and I suspect it is driving up the query count, but I need to prove it.
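
    Two quick checks from PowerShell while hunting the culprit; a sketch (raising the quota is a workaround rather than a root-cause fix, and needs a WinRM restart to take effect):

        # Enumerate the active WinRM shells on the suspect server
        Get-WSManInstance -ComputerName txexmb02 -ResourceURI shell -Enumerate

        # Inspect, then raise, the per-user operations quota
        Get-Item WSMan:\localhost\Service\MaxConcurrentOperationsPerUser
        Set-Item WSMan:\localhost\Service\MaxConcurrentOperationsPerUser 2000
        Restart-Service WinRM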


  • Managing client passwords

    - by HurkNburkS
    I am just starting up a small website development business, and one of the issues I am having is remembering passwords and account information for clients' hosting, cPanel, and FTP accounts, etc. I was wondering what the most suitable system / industry standard for controlling such information is? Pretty marginal on the close there... I read the FAQ, and I felt this could be a common issue for webmasters. It's definitely not a coding question, so Stack Overflow is out of the question, and it's not a broad question either; it is focused on one particular aspect of being a webmaster.


  • How do I set up a working GUEST USER account in Win XP Pro?

    - by user6501
    I have two user accounts on my Windows XP Pro PC. One I'd like to erase, but I'd also like to set up a GUEST user account. I've already been given 2-step instructions on how to get rid of the extraneous account: a) use an MS tool called delprof.msi; b) manually delete the former user's files in Documents & Settings. But I guess my original question was too complex, kind of like a bill in Congress. So now I am just asking the final part of the question: how do I create a GUEST ACCOUNT and then define what it will authorize/grant access to (i.e. internet browser(s), specific programs, files, etc.)?
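
    For the built-in Guest account specifically, enabling it is a single command from an elevated prompt; what it can reach afterwards is then a matter of NTFS permissions and policy settings rather than any per-account switch:

        net user guest /active:yes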


  • Windows 8.1 with two startup logins: one local and one Microsoft account

    - by lantonis
    On my Windows 8 PC, I had disabled the login password so that it went straight to the Start screen when turned on. After I updated to Windows 8.1, it first takes me to a login prompt that is not set to my Microsoft account but to "Other user"; this fails to log in, and every time I have to click below it, choose to sign in with a Microsoft account, and re-enter my email and password. Under Users in Windows there is only one user, the one I had before; no local account is set up or anything, and the same goes for Accounts in Control Panel. How do I get rid of this double login?
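
    If a leftover autologon entry is the culprit, re-pointing it at the surviving account through the hidden user-accounts dialog is a harmless first check: run the command below, select the Microsoft account, tick "Users must enter a user name and password", apply, then untick it and re-enter the credentials autologon should use.

        netplwiz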


  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    A couple of months back, I wrote an article about how to integrate the Bing Search engine (API 2.0) with an ASP.Net website. You can refer to that article here: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx

    Things change rapidly in the tech world, and Bing has changed too! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don't worry: the API has moved to the Windows Azure Marketplace, where you can sign up and continue using it, and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API available in the Windows Azure Marketplace with your website. You can access the Windows Azure Marketplace here: https://datamarket.azure.com/

    There are lots of applications available for you to subscribe to and use, and Bing is one of them. You can find the new Bing Search API at the link below: https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44

    To get access to the Bing Search API, you first need to register an account with the Windows Azure Marketplace. Sign in to the Marketplace site using your Windows Live account: the sign-in button is at the top right of the page, and clicking it takes you to the Windows Live ID authentication page. Once logged in, you will see the registration page for the Windows Azure Marketplace. You can agree or disagree to the email-address usage by Microsoft; I believe selecting the check box means you will get email about what is happening in the Windows Azure Marketplace. Click on the Continue button once you are done. On the next page you must accept the terms of use; this is not optional. Scroll down the page, select the "I agree" checkbox, and click on the Register button. Now you are a registered member of the Windows Azure Marketplace and can subscribe to data applications.

    In order to use the Bing API in your application, you must obtain your Account Key; the previous version of Bing required an API key, while the current version uses an Account Key instead. Once you are logged in to the Windows Azure Marketplace, go to the "My Account" section from the top menu. From the My Account section you can manage your subscriptions and Account Keys; Account Keys are what your applications use to access subscriptions from the Marketplace. Click on the My Account link, then Account Keys in the left menu, and either add an account key or use the default one available. Creating an account key is a very simple process, and you can remove the account keys you create if necessary.

    The next step is to subscribe to the Bing Search API. At the moment, Bing offers two APIs for search:

    1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44
    2. Bing Search API - Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65

    The difference is that the latter gives you only web results, whereas with the former you can specify the source type, such as image, video, web, news, etc.
    Carefully choose the API based on your application's requirements. In this article I am going to use the Web Results Only API, but the steps are similar for both. Go to the API page at https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65; the subscription options are on the right side, and the free option is at the bottom of the page. Since I am going to use the free option, just click its Sign Up link, select the "I agree" check box, and click the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API - Web Results.

    The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a ".Net C# Class Library" link; click it and you will get a code file named "BingSearchContainer.cs". In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application.

    Create an empty ASP.Net web application. Add the downloaded code file ("BingSearchContainer.cs") to the project: right-click your project in Solution Explorer, choose Add -> Existing Item, browse to the downloaded location, select the "BingSearchContainer.cs" file, and add it to the project. To build the code file you need to add a reference to the following library, which you can find in the .Net tab when you select Add -> Reference:

        System.Data.Services.Client

    Try to build your project now; it should build without any errors.

    Add an ASP.Net page to the project. I have included a text box, a button, and a GridView on the page; the idea is to search for the text entered and display the results in the GridView. The button click event handler for the search button uses the code shown later in this article. Now run your project, enter some text in the text box, and click the Search button; you will see the results coming from Bing, cool. I entered the text "Microsoft" in the textbox, clicked the button, and got a page of matching results.

    Searching Specific Websites: if you want to search a particular website, you pass the site URL with site:<site url name>, and if you have more sites, use a pipe (|). E.g. the following search query will search for the word "design" and return results from Microsoft.com and Adobe.com:

        site:microsoft.com | site:adobe.com design

    See the sample code that searches only Microsoft.com for the text entered:

        var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text,
            null, null, null, null, null, null);

    Paging the results returned by the API: by default, the Bing API returns up to 100 results per query. The default code file that you downloaded from Bing doesn't include any option for this, but you can modify it to support paging. The Bing API supports two parameters: $top (the number of results to return) and $skip (the number of results to skip). So if you want the 3rd page of results with a page size of 10, you pass $top = 10 and $skip = 20. Open BingSearchContainer.cs in the editor; the Web method in it looks as follows:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
            Double? Latitude, Double? Longitude, String WebFileType, String Options)
        {

    I added two more parameters to the method signature:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult,
            Double? Latitude, Double? Longitude, String WebFileType, String Options,
            int resultCount, int pageNo)
        {

    and inside the method, passed them on to the query variable:

        query = query.AddQueryOption("$top", resultCount);
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);
        return query;

    Note that I didn't perform any validation, but you should check conditions such as resultCount and pageNo being greater than or equal to 1; if the parameters are not valid, the Bing Search API will throw an error.

    Now see the following code in the SearchPage.aspx.cs file:

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var bingContainer = new Bing.BingSearchContainer(
                new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // replace this value with your account key
            var accountKey = "your key";
            // the next line configures the bingContainer to use your credentials
            bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
            var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text,
                null, null, null, null, null, null, 3, 2);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }

    This call returns 3 results starting from the second page (i.e. skipping the first 3 results).

    Bing provides complete integration to its offerings. When you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application, and decide which API usage is suitable for you.

    Further reading:

    - Bing API Migration Guide: http://go.microsoft.com/fwlink/?LinkID=248077
    - Bing API FAQ: http://go.microsoft.com/fwlink/?LinkID=252146
    - Bing API Schema Guide: http://go.microsoft.com/fwlink/?LinkID=252151


  • Sharing Bandwidth and Prioritizing Realtime Traffic via HTB, Which Scenario Works Better?

    - by Mecki
    I would like to add some kind of traffic management to our Internet line. After reading a lot of documentation, I think HFSC is too complicated for me (I don't understand all the curves stuff, and I'm afraid I will never get it right), CBQ is not recommended, and basically HTB is the way to go for most people.

    Our internal network has three "segments" and I'd like to share bandwidth more or less equally between them (at least in the beginning). Further, I must prioritize traffic into at least three kinds (realtime traffic, standard traffic, and bulk traffic). The bandwidth sharing is not as important as the fact that realtime traffic should always be treated as premium traffic whenever possible, though of course no other traffic class may starve either. The question is, which makes more sense and guarantees better realtime throughput:

    1. Creating one class per segment, each having the same rate (priority doesn't matter for classes that are not leaves, according to the HTB developer), where each of these classes has three sub-classes (leaves) for the 3 priority levels (with different priorities and different rates).

    2. Having one class per priority level on top, each having a different rate (again, priority won't matter), with each having 3 sub-classes, one per segment, where all 3 leaves in the realtime class have the highest prio, those in the bulk class the lowest prio, and so on.

    I'll try to make this clearer with the following ASCII art:

        Case 1:

        root --+--> Segment A
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment B
               |        +--> High Prio
               |        +--> Normal Prio
               |        +--> Low Prio
               |
               +--> Segment C
                        +--> High Prio
                        +--> Normal Prio
                        +--> Low Prio

        Case 2:

        root --+--> High Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Normal Prio
               |        +--> Segment A
               |        +--> Segment B
               |        +--> Segment C
               |
               +--> Low Prio
                        +--> Segment A
                        +--> Segment B
                        +--> Segment C

    Case 1 seems like the way most people would do it, but unless I misread the HTB implementation details, Case 2 may offer better prioritizing. The HTB manual says that if a class has hit its rate, it may borrow from its parent, and that when borrowing, classes with higher priority get bandwidth offered first. However, it also says that classes having bandwidth available on a lower tree level are always preferred over those on a higher tree level, regardless of priority.

    Let's assume the following situation: Segment C is not sending any traffic, Segment A is sending only realtime traffic, as fast as it can (enough to saturate the link alone), and Segment B is sending only bulk traffic, as fast as it can (again, enough to saturate the full link alone). What will happen?

    Case 1: Segment A-High Prio and Segment B-Low Prio both have packets to send; since A-High Prio has the higher priority, it will always be scheduled first, until it hits its rate. Now it tries to borrow from Segment A, but since Segment A is on a higher level and Segment B-Low Prio has not yet hit its rate, that class is now served first, until it also hits its rate and wants to borrow from Segment B. Once both have hit their rates, both are on the same level again, and now Segment A-High Prio wins again, until it hits the rate of Segment A. Now it tries to borrow from root (which has plenty of bandwidth spare, as Segment C is not using any of its guaranteed share), but again it has to wait for Segment B-Low Prio to also reach the root level. Once that happens, priority is taken into account again, and this time Segment A-High Prio will get all the bandwidth left over from Segment C.

    Case 2: High Prio-Segment A and Low Prio-Segment B both have packets to send; again, High Prio-Segment A wins, as it has the higher priority. Once it hits its rate, it tries to borrow from High Prio, which has bandwidth spare, but being on a higher level, it has to wait for Low Prio-Segment B to also hit its rate. Once both have hit their rates and both have to borrow, High Prio-Segment A wins again, until it hits the rate of the High Prio class. Once that happens, it tries to borrow from root, which again has plenty of bandwidth left (all the bandwidth of Normal Prio is unused at the moment), but it has to wait once more until Low Prio-Segment B hits the rate limit of the Low Prio class and also tries to borrow from root. Finally both classes try to borrow from root, priority is taken into account, and High Prio-Segment A gets all the bandwidth root has left over.

    Both cases seem sub-optimal, as either way realtime traffic sometimes has to wait for bulk traffic, even though there is plenty of bandwidth left it could borrow. However, in Case 2 it seems the realtime traffic has to wait less than in Case 1, since it only has to wait until the bulk traffic rate is hit, which is most likely less than the rate of a whole segment (and in Case 1 that is the rate it has to wait for). Or am I totally wrong here?

    I thought about even simpler setups using a priority qdisc, but priority queues have the big problem that they cause starvation if they are not somehow limited, and starvation is not acceptable. Of course one can put a TBF (Token Bucket Filter) into each priority class to limit the rate and thus avoid starvation, but then a single priority class can no longer saturate the link on its own; even if all other priority classes are empty, the TBF will prevent that from happening. And this is also sub-optimal, since why shouldn't a class get 100% of the line's bandwidth if no other class needs any of it at the moment?

    Any comments or ideas regarding this setup? It seems so hard to do using standard tc qdiscs. As a programmer, it would be an easy task if I could simply write my own scheduler (which I'm not allowed to do).
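
    For concreteness, here is roughly how Case 2 would look in tc, as a minimal sketch for a 10mbit link with placeholder rates; only the High Prio tier is spelled out, and the Normal/Low tiers would follow the same pattern with their own prio values:

        # Root qdisc; unclassified traffic goes to a normal-prio leaf (1:22,
        # assumed to be created the same way as the leaves shown below)
        tc qdisc add dev eth0 root handle 1: htb default 22
        tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit

        # High Prio tier (non-leaf, so its prio is irrelevant per the HTB docs)
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3mbit ceil 10mbit

        # One leaf per segment inside the tier, all sharing the tier's priority
        tc class add dev eth0 parent 1:10 classid 1:11 htb rate 1mbit ceil 10mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:12 htb rate 1mbit ceil 10mbit prio 0
        tc class add dev eth0 parent 1:10 classid 1:13 htb rate 1mbit ceil 10mbit prio 0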


  • Spotlight on Oracle Social Relationship Management. Social Enable Your Enterprise with Oracle SRM.

    - by Pat Ma
    Facebook is now the most popular site on the Internet, and people are tweeting more than they send email. Because there are so many people on social media, companies and brands want to be there too. They want to be able to listen to social chatter, engage with customers on social, create great-looking Facebook pages, and roll out social-collaborative work environments within their organization. This is where Oracle Social Relationship Management (SRM) comes in. Oracle SRM is a product that allows companies to manage their presence with prospects and customers on social channels. Let's talk about two popular use cases for Oracle SRM.

    Easy Publishing - Companies now have an average of 178 social media accounts, with every product, geography, or employee group creating its own social media channel. For example, if you work at an international hotel chain where every single hotel creates its own Facebook page for its location, that chain can have well over 1,000 social media accounts. Managing these channels is a mess: logging in and out of every account, making sure that all accounts are on brand, and preventing rogue posts from destroying the brand. This is where Oracle SRM comes in. With Oracle Social Relationship Management, you can log into one window and post messages to all 1,000+ social channels at once. You can set up approval flows and have each account generate its own content, while requiring that content to be approved before publishing. The benefits are easy social media publishing, brand consistency across all channels, and protection of your brand from inappropriate posts.

    Monitoring and Listening - People are writing and talking about your company right now on social media; 75% of social media users have written a negative post about a brand after a poor customer service experience. Think about all the negative posts you see in your Facebook news feed about delayed flights or being on hold for 45 minutes. There is so much social chatter going on around your brand that it's almost impossible to keep up or comprehend what's going on. That's where Oracle SRM comes in. With Social Relationship Management, a company can monitor and listen to what people are saying about it on social channels, drill down into individual posts, or get a high-level view of trends and mentions. The benefits are comprehending what's being said about your brand and its competitors, understanding customers and their intent, and responding to negative posts before they become a PR crisis.

    Oracle SRM is part of Oracle Cloud. The benefits of cloud deployment for customers are faster deployments, less maintenance, and lower cost of ownership versus on-premise deployments. Oracle SRM also fits into Oracle's vision to social enable your enterprise: with Oracle SRM, social media is not just a marketing channel but also a mechanism for sales, customer support, recruiting, and employee collaboration. For more information about how Oracle SRM can social enable your enterprise, please visit oracle.com/social. For more information about Oracle Cloud, please visit cloud.oracle.com.


  • Are VMWare ESXi 5 patches cumulative?

    - by ewwhite
    It seems basic, but there's confusion about the patching strategy needed to manually update standalone VMware ESXi hosts. The VMware vSphere blog attempts to explain this, but it's still not clear. From the blog: "Say Patch01 includes updates for the following VIBs: esxi-base, driver10, and driver44. And then later Patch02 comes out with updates to esxi-base, driver20, and driver44. Patch02 is cumulative in that the esxi-base and driver44 VIBs will include the updates in Patch01. However, it's important to note that Patch02 will not include the driver10 VIB, as that module was not updated." Many of my ESXi installations are standalone and do not make use of Update Manager. It is possible to update an individual host using the patches made available through the VMware patch download portal, and that part of the process is quite simple. The bigger issue is determining what to actually download and install. In my case, I have a good number of HP-specific ESXi builds that incorporate sensors and management for HP ProLiant hardware. Let's say that those servers start at ESXi build #474610 from 9/2011. Looking at the patch portal screenshot below, there is a patch for ESXi update01, build #623860. There are also patches for builds #653509 and #702118. Coming from the old version of ESXi, what is the proper approach to bring the system fully up to date? Which patches are cumulative, and which need to be applied sequentially? Perhaps the download size is the confusing factor, but is installing the newest build the right approach, or do I need to step back and patch incrementally?
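
    For a standalone host, the command-line route is esxcli over SSH with the host in maintenance mode; "update" (as opposed to "install") only applies VIBs that are newer than what is already on the host, which pairs naturally with simply grabbing the most recent patch bundle (the bundle file name below is an example):

        # Check the current version and build first
        vmware -vl

        # Apply the newest bundle from the patch portal, then reboot
        esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-201212001.zip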


  • How to use Salt Stack with minions all behind NAT (not publicly accessible, default salt ports not open)?

    - by MountainX
    Can Salt Stack minions communicate with the salt master from behind NAT/firewalls, etc., using standard ports that would be open by default in all consumer NAT routers (and without the minions having a public DNS record or static IP)? I'm working my way through my first Salt tutorial, and this is where I'm stuck. I am able to configure iptables on the Ubuntu salt-master, but I have no control over the routers/NAT that the minions will sit behind. So far I tried these settings:

        # /etc/salt/master
        publish_port: 465
        ret_port: 443

        # /etc/salt/minion
        master_port: 465

    That did not work. Background: I have a custom-developed application presently running on about 40 Kubuntu laptops (with more planned). Every few months I have to update the application (often this just amounts to replacing a .jar file, which requires root permissions), run Ubuntu updates, and do a few other minor things. I've been doing it manually, one by one, using TeamViewer to log into each client, and I would like to dramatically improve this process. The two options I'm aware of are:

    - reverse SSH tunnels and bash scripts: I tested this and it works, but I don't get any of the reporting, etc., that I would get with Salt Stack;
    - a management tool like Salt Stack (or similar). But I need a really simple tool, as I can't invest any time in a big learning curve. I looked at Puppet and a bunch of related tools, and the only one I found that looked simple enough for me (so far) was Salt Stack. But I'm stuck now because my minion can't reach the salt-master, as stated above.

    I appreciate suggestions.
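
    For the record, Salt minions only ever make outbound connections to the master, on TCP 4505 (publish) and 4506 (returns), so NAT on the minion side is fine as long as the master itself is reachable on those two ports; a minimal sketch (salt.example.com is a placeholder for a master with both ports open):

        # /etc/salt/master - leave the port defaults alone
        publish_port: 4505
        ret_port: 4506

        # /etc/salt/minion
        master: salt.example.com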


  • How can the little guys effectively learn and use puppet?

    - by drumfire
    Six months ago, in our not-for-profit project, we decided to start migrating our system management to a Puppet-controlled environment, because we are expecting our number of servers to grow substantially between now and a year from now. Since the decision was made, our IT guys have become a bit too annoyed a bit too often. Their biggest objections are:

    - "We're not programmers, we're sysadmins."
    - Modules are available online, but many differ from one another; wheels are being reinvented too often. How do you decide which one fits the bill?
    - Code in our repo is not transparent enough; to find how something works, they have to recurse through manifests and modules they might even have written themselves a while ago.
    - One new daemon requires writing a new module, and conventions have to be similar to those of other modules: a difficult process.
    - "Let's just run it and see how it works."
    - Tons of hardly-known 'extensions' in community modules: 'trocla', 'augeas', 'hiera'... how can our sysadmins keep track?

    I can see why a large organisation would dispatch their sysadmins to Puppet courses to become Puppet masters. But how would smaller players get to learn Puppet to a professional level if they do not go to courses and basically learn it via their browser and editor?

