Search Results

Search found 12215 results on 489 pages for 'identity column'.


  • Songbird 4-pane filter panel library view

    - by wonea
    I love Songbird, but I was wondering whether it's possible to add an additional column to the Songbird layout. I love categorising my music by date as well, and I find right-clicking above the column header to change it a little too messy; I want even more flexibility. So here's my mockup of the 4-pane panel library view; I think I've made Songbird do this by mistake in the past through a bug. Question is, is this possible?

    Read the article

  • How can I get rid of duplicate administrators on 2 MacBook Pros?

    - by BCITY
    I got a new MacBook Pro (insurance payout) and passed my old one (a previous-generation 2008 MacBook Pro) on to a staff member whose MacBook Pro died. I used carbon copy to move my data to the new laptop, but couldn't do the same with her move, as the older MacBook wouldn't allow it because it was on an old OS. I ended up just migrating her software onto my old MBP. Now, however, they both have the same admin identities and her computer is syncing with my MobileMe account. Can I change the administrator identity and save all her applications? I am hoping that I don't have to wipe and reinstall everything to change her computer's identity.

    Read the article

  • How do I run a search query based on recipient in PowerShell for Exchange 2010?

    - by LucidLuniz
    I've tried to run the following commands, but neither works and I'm not sure how I should set up the query. I tried to find a full list of available search strings but couldn't locate one online or using help. I did find the list here (http://technet.microsoft.com/en-us/library/bb232132.aspx#AQS), but it doesn't include the search queries I am looking for (based on the fact that it doesn't even list "Received:", which I know is an option because I use it all the time). Search-Mailbox -Identity -SearchQuery 'Received:' 'To:' -LogLevel Full -DeleteContent Search-Mailbox -Identity -SearchQuery 'Received:' 'Recipient:' -LogLevel Full -DeleteContent Thanks in advance!
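
    For what it's worth, a sketch of the shape these commands usually need to take: -Identity names the mailbox to search, and the whole AQS expression goes into a single quoted -SearchQuery string with AND between properties. The mailbox name, address and date range below are made-up placeholders:

      Search-Mailbox -Identity "jdoe" `
          -SearchQuery 'Received:01/01/2012..01/31/2012 AND To:"someone@example.com"' `
          -LogLevel Full -DeleteContent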

    Read the article

  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    - by disusered
    I am running Ubuntu 10.10 on a remote box. I ssh to it everyday without issues but today out of the blue, I get the following error: ssh_exchange_identification: Connection closed by remote host If I connect with -vv, I get the following: OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /Users/bla/.ssh/config debug1: Applying options for ubuntu-server debug1: Reading configuration data /etc/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ubuntu-server.com [123.123.123.123] port 22. debug1: Connection established. debug2: key_type_from_name: unknown key type '-----BEGIN' debug2: key_type_from_name: unknown key type '-----END' debug1: identity file /Users/bla/.ssh/id_rsa type -1 debug1: identity file /Users/bla/.ssh/id_rsa-cert type -1 ssh_exchange_identification: Connection closed by remote host If I remove the key, I get the exact same output (sans "debug2: key_type_...). I've managed to log in physically and checked my hosts.allow and hosts.deny but they have no entries. I tried removing and reinstalling OpenSSH, checked authorized_keys and ~/.ssh permissions and tried connecting from other computers only to get the same error. I'm at my wits end, any help would be greatly appreciated.
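
    In case it helps the next person who hits this: since physical access is available, a rough server-side checklist (not a definitive diagnosis) is to watch the auth log during a failed attempt, let sshd validate its own config, and rule out connection limits or a stray nologin file:

      sudo tail -n 50 /var/log/auth.log       # any sshd message at the moment the connection is dropped?
      sudo /usr/sbin/sshd -t                  # test mode: prints nothing if the config parses cleanly
      grep -Ei 'maxstartups|allowusers|denyusers' /etc/ssh/sshd_config
      ls -l /etc/nologin                      # if this file exists, non-root logins are refused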

    Read the article

  • How can I get the comment of the current authorized_keys ssh key?

    - by krosenvold
    Edit: What I really need to know is WHICH ssh key from authorized_keys has been used to identify the currently logged-on user. According to "man sshd": a Protocol 2 public key consists of options, keytype, base64-encoded key, comment. I see that when I use ssh-keygen, the comment is usually the local identity of the user. Is there any way to access this value when I'm on the remote computer? (Kind of like the SSH_CLIENT shell variable.) (Assuming I enforce the comment to be a remote identity of some sort, I would like to log this from a shell script! This is on Ubuntu.)
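
    sshd doesn't export the comment itself, but one pattern that gets close (a sketch only, and it requires enabling PermitUserEnvironment in sshd_config, which has security implications) is to tag each key in authorized_keys with an environment option and read that variable from the shell script; the variable name here is made up:

      # /etc/ssh/sshd_config
      PermitUserEnvironment yes

      # ~/.ssh/authorized_keys -- one entry per line
      environment="SSH_KEY_OWNER=krosenvold@laptop" ssh-rsa AAAAB3Nza... krosenvold@laptop

      # in the shell script on the remote machine
      echo "logged in with key: $SSH_KEY_OWNER"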

    Read the article

  • Excel 2007 - combination of IF and VLOOKUP formulas

    - by Neo
    I have a cell that refers to more than one worksheet and displays the result (value) when it finds the product in the two sheets. SheetA has two columns, where column A is the value and column B is the product name; SheetB only has the product name. Below is my formula, but it fails to display the product value; instead it always displays "Not Found", even though the product is present in the sheets. Is there anything wrong with the formula? =IFERROR(VLOOKUP(A35,'SheetA'!A:B,1,FALSE),IFERROR(VLOOKUP(A35,'SheetB'!D19:D115,1,FALSE),"Not Found"))
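
    Two things look suspect in that formula, assuming A35 holds a product name: VLOOKUP only searches the leftmost column of the range it is given, so with the names in SheetA's column B it can never look "left" and return column A, and a column index of 1 simply returns the lookup column itself. A sketch of the SheetA part rewritten with INDEX/MATCH (the SheetB branch would still only return the name, since SheetB has no value column):

      =IFERROR(INDEX('SheetA'!A:A,MATCH(A35,'SheetA'!B:B,0)),IFERROR(VLOOKUP(A35,'SheetB'!D19:D115,1,FALSE),"Not Found"))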

    Read the article

  • How can I get "Calculated Columns" to work in excel?

    - by Shawn Persels
    Using Excel 2010, I want a single formula to apply to all cells in a column. I see documentation for a feature called "Calculated Columns". That is exactly what I want, but when I follow the instructions I only end up creating a formula in a single cell - not the whole column. I don't want to use "Fill" or "Copy" because the number of rows in the sheet changes periodically and maintaining the formulas would be very tedious.

    Read the article

  • How to exempt rows from being hidden/filtered in Excel 2010?

    - by tarheel
    Consider a spreadsheet that starts looking like this: I want to be able to filter for Name 1 on the left column and have it look like this: Yes, I realize that the simple answer is to filter for Name 1 and Header, but I have other people using this spreadsheet that don't seem to get that. So, how can I make it foolproof for them and make it impossible to filter out the rows that have Header in the left column?

    Read the article

  • Output of free -m on a Linux server

    - by cat pants
    I can see from this page: http://www.linuxatemyram.com/ that the correct amount of free RAM is on the "-/+ buffers/cache" line. The extra RAM being used is for disk caching. However, I noticed that the total amount of memory used listed on the "-/+ buffers/cache" line is significantly less than the sum total of the "RES" column of the processes shown in top. And AFAIK, the "RES" column is how much physical memory is being used by a process. How do you explain this discrepancy?
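
    The usual explanation is that RES counts shared pages (shared libraries, tmpfs, SysV shared memory) once in every process that maps them, so summing RES across processes counts the same physical pages many times, whereas free reports each page once. One rough way to see the effect (a sketch; it needs root to read every process's smaps, and assumes a kernel new enough to expose Pss, the "proportional set size"):

      sudo awk '/^Rss:/ {rss+=$2} /^Pss:/ {pss+=$2} END {print "RSS sum:", rss, "kB   PSS sum:", pss, "kB"}' /proc/[0-9]*/smaps

    The gap between the two totals is roughly the amount of sharing that makes a per-process RES sum overstate real usage.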

    Read the article

  • mexTcpBinding in WCF - IMetadataExchange errors

    - by David
    I'm wanting to get a WCF-over-TCP service working. I was having some problems with modifying my own project, so I thought I'd start with the "base" WCF template included in VS2008. Here is the initial WCF App.config, and when I run the service the WCF Test Client works with it fine: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="http://localhost:8731/Design_Time_Addresses/WcfTcpTest/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceMetadata httpGetEnabled="True"/> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> This works perfectly, no issues at all. I figured changing it from HTTP to TCP would be trivial: change the bindings to their TCP equivalents and remove the httpGetEnabled serviceMetadata element: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <system.serviceModel> <services> <service name="WcfTcpTest.Service1" behaviorConfiguration="WcfTcpTest.Service1Behavior"> <host> <baseAddresses> <add baseAddress="net.tcp://localhost:1337/Service1/" /> </baseAddresses> </host> <endpoint address="" binding="netTcpBinding" contract="WcfTcpTest.IService1"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexTcpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="WcfTcpTest.Service1Behavior"> <serviceDebug includeExceptionDetailInFaults="True" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> But when I run this, I get this error in the WCF Service Host: System.InvalidOperationException: The contract name 'IMetadataExchange' could not be found in the list of contracts implemented by the service Service1. Add a ServiceMetadataBehavior to the configuration file or to the ServiceHost directly to enable support for this contract. I get the feeling that you can't send metadata using TCP, but if that's the case, why is there a mexTcpBinding option?
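
    One thing worth checking, going by the error text itself: the mex endpoint only resolves the IMetadataExchange contract when a ServiceMetadataBehavior is registered, and deleting the whole serviceMetadata element removes that behavior along with the HTTP GET option. Keeping the element but turning the GET option off should, in principle, let the mexTcpBinding endpoint serve metadata over TCP (a sketch of just the behavior section):

      <behavior name="WcfTcpTest.Service1Behavior">
        <serviceMetadata httpGetEnabled="False" />
        <serviceDebug includeExceptionDetailInFaults="True" />
      </behavior>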

    Read the article

  • Stumped by "The remote server returned an error: (403) Forbidden" with WCF Service in https

    - by RJ
    I have a WCF Service that I have boiled down to next to nothing because of this error. It is driving me up the wall. Here's what I have now. A very simple WCF service with one method that returns a string with the value, "test". A very simple Web app that uses the service and puts the value of the string into a label. A web server running IIS 6 on Win 2003 with an SSL certificate. Other WCF services on the same server that work. I publish the WCF service to its https location. I run the web app in debug mode in VS and it works perfectly. I publish the web app to its https location on the same server where the WCF service resides, under the same SSL certificate, and I get "The remote server returned an error: (403) Forbidden". I have changed almost every setting in IIS as well as the WCF and Web apps to no avail. I have compared settings in the WCF services that work and everything is the same. Below are the settings in the web.config for the WCF Service and the Web app. It appears the problem has to do with the Web app but I am out of ideas. Any ideas? WCF Service: <system.serviceModel> <bindings> <client /> <services> <service behaviorConfiguration="Ucf.Smtp.Wcf.SmtpServiceBehavior" name="Ucf.Smtp.Wcf.SmtpService"> <host> <baseAddresses> <add baseAddress="https://test.net.ucf.edu/webservices/Smtp/" /> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="Ucf.Smtp.Wcf.ISmtpService" bindingConfiguration="SSLBinding"> <identity> <dns value="localhost"/> </identity> </endpoint> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior name="Ucf.Smtp.Wcf.SmtpServiceBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" httpsHelpPageEnabled="True"/> </behavior> </serviceBehaviors> </behaviors> Web App: <system.serviceModel> <bindings><wsHttpBinding> <binding name="WSHttpBinding_ISmtpService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true" /> </security> </binding> <client> <endpoint address="https://net228.net.ucf.edu/webservices/smtp/SmtpService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_ISmtpService" contract="SmtpService.ISmtpService" name="WSHttpBinding_ISmtpService"> <identity> <dns value="localhost" /> </identity> </client> </system.serviceModel>

    Read the article

  • Connecting WCF from Webpart

    - by Bhaskar
    I am consuming a WCF Service from a webpart in SharePoint 2007. But it's giving me the following error: There was no endpoint listening at http://locathost:2929/BusinessObjectService that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details. --- System.Net.WebException: The remote server returned an error: (404) Not Found. My binding details in the WCF web.config are: <system.serviceModel> <diagnostics performanceCounters="All"> <messageLogging logEntireMessage="true" logMessagesAtServiceLevel="false" maxMessagesToLog="4000" /> </diagnostics> <services> <service behaviorConfiguration="MyService.IBusinessObjectServiceContractBehavior" name="MyService.BusinessObjectService"> <endpoint address="http://localhost:2929/BusinessObjectService.svc" binding="wsHttpBinding" contract="MyService.IBusinessObjectServiceContract"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyService.IBusinessObjectServiceContractBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> My binding details in the SharePoint site web.config are: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IBusinessObjectServiceContract" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://localhost:2929/BusinessObjectService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IBusinessObjectServiceContract" contract="BusinessObjectService.IBusinessObjectServiceContract" name="WSHttpBinding_IBusinessObjectServiceContract"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel> I am able to view the WCF (and its wsdl) in the browser, using the URL given in the endpoint. So, I guess the URL is definitely correct. Please help!

    Read the article

  • WCF and Firewall

    - by Jim Biddison
    I have written a very simple WCF service (hosted in IIS) and web application that talks to it. If they are both in the same domain, it works fine. But when I put them in different domains (on different sides of a firewall), then the web application says: The request for security token could not be satisfied because authentication failed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ServiceModel.FaultException: The request for security token could not be satisfied because authentication failed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. The relevant part of the service web.config is: <system.serviceModel> <services> <service behaviorConfiguration="MigrationHelperBehavior" name="MigrationHelper"> <endpoint address="" binding="wsHttpBinding" contract="IMigrationHelper"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <endpoint binding="httpBinding" contract="IMigrationHelper" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MigrationHelperBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> The web application (client) web.config says: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMigrationHelper" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://mydomain.com/MigrationHelper.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMigrationHelper" contract="MyNewServiceReference.IMigrationHelper" name="WSHttpBinding_IMigrationHelper"> <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel> I believe both of these are just the defaults that VS 2008 created for me. So my question is, how does one go about configuring the service and client, when they are not in the same domain? Thanks, Jim Biddison
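
    A quick way to confirm whether the Windows (Message) security negotiation is what fails across the firewall is to turn security off on both sides and retest; wsHttpBinding defaults to Message security with Windows credentials, which generally cannot be negotiated between machines that do not share a domain or trust. A sketch only, not production advice, since it sends messages in the clear:

      <!-- service side: add a binding configuration and point the endpoint at it -->
      <bindings>
        <wsHttpBinding>
          <binding name="NoSecurity">
            <security mode="None" />
          </binding>
        </wsHttpBinding>
      </bindings>
      <endpoint address="" binding="wsHttpBinding" bindingConfiguration="NoSecurity" contract="IMigrationHelper" />

      <!-- client side: inside the WSHttpBinding_IMigrationHelper binding, replace the security element -->
      <security mode="None" />

    If that works, the longer-term options are certificate-based Message security or Transport security (HTTPS) rather than Windows credentials.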

    Read the article

  • WCF over http settings needed

    - by Crishna
    Hello, we currently have a WCF service that works over https, but we want to change it to work over just http. Could anyone tell me what I need to change to make the WCF service work over http? Below are my config file values. Is there anything else I need to change other than the web.config? Any help greatly appreciated. <bindings> <basicHttpBinding> <binding name="basicHttpBinding_Windows" maxReceivedMessageSize="500000000" maxBufferPoolSize="500000000" messageEncoding="Mtom"> <security mode="TransportWithMessageCredential"> <transport clientCredentialType="Windows" /> </security> <readerQuotas maxDepth="500000000" maxArrayLength="500000000" maxBytesPerRead="500000000" maxNameTableCharCount="500000000" maxStringContentLength="500000000"/> </binding> </basicHttpBinding> </bindings> <behaviors> <endpointBehaviors> <behavior name="myproject_Behavior"> <dataContractSerializer /> <synchronousReceive /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="WebService.WSBehavior"> <serviceMetadata httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> <behavior name="WebService.Forms_WSBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="WebService.WSBehavior" name="IMMSWebService.mywebservice_WS"> <endpoint address="myproject_WS" binding="basicHttpBinding" bindingConfiguration="basicHttpBinding_Windows" bindingName="basicHttpBinding" contract="WebService.ICommand"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" /> <host> <timeouts closeTimeout="00:10:00" openTimeout="00:10:00" /> </host> </service> <service behaviorConfiguration="WebService.Forms_WSBehavior" name="WebService.Forms_WS"> <endpoint address="" binding="wsHttpBinding" contract="WebService.IForms_WS"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>
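
    A sketch of the spots in the config above that usually need touching when moving from https to plain http (element names taken from that config; the IIS site binding and any base addresses have to be switched to http as well, and generated client configs need their addresses and security mode updated): the binding can no longer use TransportWithMessageCredential, metadata has to be published over HTTP GET, and the mex endpoint needs the HTTP variant of the mex binding.

      <!-- in the basicHttpBinding_Windows binding: keep Windows credentials without TLS -->
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Windows" />
      </security>

      <!-- in WebService.WSBehavior: publish metadata over plain HTTP -->
      <serviceMetadata httpGetEnabled="true" />

      <!-- in the first service: swap the HTTPS mex binding for the HTTP one -->
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />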

    Read the article

  • XNA: Camera's Rotation and Translation matrices seem to interfere with each other

    - by Danjen
    I've been following the guide here for how to create a custom 2D camera in XNA. It works great, I've implemented it before, but for some reason, the matrix math is throwing me off. public sealed class Camera2D { public Vector2 Origin { get; set; } public Vector2 Position { get; set; } public float Scale { get; set; } public float Rotation { get; set; } } It might be easier to just show you a picture of my problem: http://i.imgur.com/H1l6LEx.png What I want to do is allow the camera to pivot around any given point. Right now, I have the rotations mapped to my shoulder buttons on a gamepad, and if I press them, it should rotate around the point the camera is currently looking at. Then, I use the left stick to move the camera around. The problem is that after it's been rotated, pressing "up" results in it being used relative to the rotation, creating the image above. I understand that matrices have to be applied in a certain order, and that I have to offset the thing to be rotated around the world origin and move it back, but it just won't work! public Matrix GetTransformationMatrix() { Matrix mRotate = Matrix.Identity * Matrix.CreateTranslation(-Origin.X, -Origin.Y, 0.00f) * // Move origin to world center Matrix.CreateRotationZ(MathHelper.ToRadians(Rotation)) * // Apply rotation Matrix.CreateTranslation(+Origin.X, +Origin.Y, 0.00f); // Undo the move operation Matrix mTranslate = Matrix.Identity * Matrix.CreateTranslation(-Position.X, Position.Y, 0.00f); // Apply the actual translation return mRotate * mTranslate; } So to recap, it seems I can have it rotate around an arbitrary point and lose the ability to have "up" move the camera straight up, or I can rotate it around the world origin and have the camera move properly, but not both.
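
    One ordering that is commonly used for 2D cameras, and which keeps "up" meaning world-up no matter how the view is rotated, is: move the world so the pivot point sits at the origin, then rotate and scale, then push everything to the screen centre. A sketch of GetTransformationMatrix along those lines, assuming Position is the world-space point the camera is looking at (the pivot) and Origin is the screen centre in pixels:

      public Matrix GetTransformationMatrix()
      {
          return Matrix.CreateTranslation(-Position.X, -Position.Y, 0f)   // bring the pivot point to the origin
               * Matrix.CreateRotationZ(MathHelper.ToRadians(Rotation))   // rotate about that point
               * Matrix.CreateScale(Scale, Scale, 1f)                     // zoom
               * Matrix.CreateTranslation(Origin.X, Origin.Y, 0f);        // place the pivot at the screen centre
      }

    Because the Position translation happens before the rotation, moving Position always moves the camera in world coordinates, while the rotation pivots around whatever Position currently points at.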

    Read the article

  • Insufficient Permissions Problems with MSDeploy and TFS Build 2010

    - by jdanforth
    I ran into these problems on a TFS 2010 RC setup where I wanted to deploy a web site as part of the nightly build: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed.(An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'.)  An error occurred when reading the IIS Configuration File 'MACHINE/REDIRECTION'. The identity performing the operation was 'NT AUTHORITY\NETWORK SERVICE'. Filename: \\?\C:\Windows\system32\inetsrv\config\redirection.config Error: Cannot read configuration file due to insufficient permissions  As you can see I’m running the build service as NETWORK SERVICE which is quite usual. The first thing I did then was to give NETWORK SERVICE read access to the whole directory where redirection.config is sitting; C:\Windows\system32\inetsrv\config. That gave me a new error: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets (3481): Web deployment task failed. (Attempted to perform an unauthorized operation.) The reason for this problem was that NETWORK SERVICE didn’t have write permission to the place where I’ve told MSDeploy to put the web site physically on the disk. Once I’d given the NETWORK SERVICE the right permissions, MSDeploy completed as expected! NOTE! I’ve not had this problem with TFS 2010 RTM, so it might be just a RC issue!
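
    For reference, granting that read access from an elevated command prompt can be done with icacls (a sketch of the command for the directory mentioned above; adjust the account if the build service runs under a different identity). The (OI)(CI) flags make the grant inherit to files and subfolders, and R is read-only:

      icacls "C:\Windows\System32\inetsrv\config" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(CI)R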

    Read the article

  • SQL SERVER – Create Primary Key with Specific Name when Creating Table

    - by pinaldave
    It is interesting how sometimes the documentation of simple concepts is not available online. I had received an email from one of the readers asking how to create a Primary Key with a specific name when creating the table itself. He said he knows the method where he creates the table and then applies the primary key with a specific name. The attached code was as follows: CREATE TABLE [dbo].[TestTable]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL) GO ALTER TABLE [dbo].[TestTable] ADD CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC) GO He wanted to know if we can create the Primary Key as part of the CREATE TABLE statement as well, and also give it a name at the same time. Though it would look very normal to all experienced developers, it can still be confusing to many. Here is the quick code that does the same as the above in one single statement: CREATE TABLE [dbo].[TestTable]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
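
    A small variation on the same idea (a sketch using the same table): the named constraint can also be written as a separate, table-level item inside CREATE TABLE, which reads a little more naturally once the key spans more than one column:

      CREATE TABLE [dbo].[TestTable](
          [ID] [int] IDENTITY(1,1) NOT NULL,
          [FirstName] [varchar](100) NULL,
          CONSTRAINT [PK_TestTable] PRIMARY KEY CLUSTERED ([ID] ASC)
      )
      GO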

    Read the article

  • polkit: disable all users except those in group wheel?

    - by John Nash
    Is it possible to do the following using 1 polkit .pkla file? Disable all users except those in the wheel group from using polkit. The users in the wheel group will need to provide the root password when using polkit. /etc/polkit-1/localauthority/50-local.d/wheel-only.pkla [disable all users except the wheel group] Identity=unix-group:wheel Action=* ResultAny=??? ResultInactive=??? ResultActive=??? The following file works but you need to provide all the users in /etc/group: [disable all users except those in the wheel group: root and myuser] Identity=unix-user:daemon;unix-user:bin;unix-user:sys;unix-user:adm;unix-user:tty;unix-user:disk;unix-user:lp;unix-user:mail;unix-user:news;unix-user:uucp;unix-user:man;unix-user:proxy;unix-user:kmem;unix-user:dialout;unix-user:fax;unix-user:voice;unix-user:cdrom;unix-user:floppy;unix-user:tape;unix-user:sudo;unix-user:audio;unix-user:dip;unix-user:www-data;unix-user:backup;unix-user:operator;unix-user:list;unix-user:irc;unix-user:src;unix-user:gnats;unix-user:shadow;unix-user:utmp;unix-user:video;unix-user:sasl;unix-user:plugdev;unix-user:staff;unix-user:games;unix-user:users;unix-user:nogroup;unix-user:libuuid;unix-user:crontab;unix-user:messagebus;unix-user:Debian-exim;unix-user:mlocate;unix-user:avahi;unix-user:netdev;unix-user:bluetooth;unix-user:lpadmin;unix-user:ssl-cert;unix-user:fuse;unix-user:utempter;unix-user:Debian-gdm;unix-user:scanner;unix-user:saned;unix-user:i2c;unix-user:haldaemon;unix-user:powerdev Action=* ResultAny=no ResultInactive=no ResultActive=no
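
    If the local authority really does let a later matching entry override an earlier one within the same file (the pklocalauthority(8) man page describes ordering as significant, but this is worth verifying on the target system), then one sketch of a single .pkla is a blanket deny followed by a wheel override; auth_admin prompts for the configured admin identity, which is root on a default Debian-style setup:

      [deny everyone by default]
      Identity=unix-user:*
      Action=*
      ResultAny=no
      ResultInactive=no
      ResultActive=no

      [wheel members must authenticate as admin]
      Identity=unix-group:wheel
      Action=*
      ResultAny=auth_admin
      ResultInactive=auth_admin
      ResultActive=auth_admin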

    Read the article

  • Standards Corner: OAuth WG Client Registration Problem

    - by Tanu Sood
    Phil Hunt is an active member of multiple industry standards groups and committees (see brief bio at the end of the post) and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. As he is an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards in this monthly column. Author: Phil Hunt This afternoon, the OAuth Working Group will meet at IETF88 in Vancouver to discuss some topics important to the maturation of OAuth. One of them is the OAuth client registration problem. OAuth (RFC6749) was initially developed with a simple deployment model where there is only a monopoly or singleton cloud instance of a web API (e.g. there is one Facebook, one Google, one LinkedIn, and so on). When the API publisher and API deployer are the same monolithic entity, it is easy for developers to contact the provider and register their app to obtain a client_id and credential. But what happens when the API is for an open source project where there may be 1000s of deployed copies of the API (e.g. WordPress)? In these cases, the authors of the API are not the people running the API. In these scenarios, how does the developer obtain a client_id? An example of an "open deployed" API is OpenID Connect. Connect defines an OAuth protected resource API that can provide personal information about an authenticated user -- in effect creating a potentially common API for potential identity providers like Facebook, Google, Microsoft, Salesforce, or Oracle. In Oracle's case, Fusion applications will soon have RESTful APIs that are deployed in many different ways in many different environments. How will developers write apps that can work against an openly deployed API with whom the developer can have no prior relationship? At present, the OAuth Working Group has two proposals to consider. Dynamic Registration: Dynamic Registration was originally developed for OpenID Connect and UMA. It defines a RESTful API in which a prospective client application with no client_id creates a new client registration record with a service provider and is issued a client_id and credential along with a registration token that can be used to update registration over time. As proof of success, the OIDC community has done substantial implementation of this spec and feels committed to its use. Why not approve? Well, the answer is that some of us had some concerns, namely: Recognizing instances of software - dynamic registration treats all clients as unique.
It has no defined way to recognize that multiple copies of the same client are being registered other than assuming that if the registration parameters are similar it might be the same client. Versioning and Policy Approval of open APIs and clients - many service providers have to worry about change management. They expect to have approval cycles that approve versions of server and client software for use in their environment. In some cases approval might be wide open, but in many cases, approval might be down to the specific class of software and version. Registration updates - when does a client actually need to update its registration? Shouldn't it be never? Is there some characteristic of deployed code that would cause it to change? Options lead to complexity - because each client is treated as unique, it becomes unclear how the clients and servers will agree on what credential forms are acceptable and what OAuth features are allowed and disallowed. Yet the reality is, developers will write their application to work in a limited number of ways. They can't implement all the permutations and combinations that potential service providers might choose. Stateful registration - if the primary motivation for registration is to obtain a client_id and credential, why can't this be done in a stateless fashion using assertions? Denial of service - With so much stateful registration and the need for multiple tokens to be issued, will this not lead to a denial of service attack / risk of resource depletion? At the very least, because of the information gathered, it would be difficult for service providers to clean up "failed" registrations and determine active from inactive or false clients. There has yet to be much wide-scale "production" use of dynamic registration other than in small closed communities. Client Association: A second proposal, Client Association, has been put forward by Tony Nadalin of Microsoft and myself. We took a look at existing use patterns to come up with a new proposal. At the Berlin meeting, we considered how WS-STS systems work. More recently, I reviewed how mobile messaging clients work. I looked at how Apple, Google, and Microsoft each handle registration with APNS, GCM, and WNS, and a similar pattern emerges. This pattern is to use an existing credential (mutual TLS auth), or client bearer assertion, and swap for a device-specific bearer assertion. In the client association proposal, the developer's registration with the API publisher is handled by having the developer register with an API publisher (as opposed to the party deploying the API) and obtaining a software "statement". Or, if there is no "publisher" that can sign a statement, the developer may include their own self-asserted software statement. A software statement is a special type of assertion that serves to lock application registration profile information in a signed assertion. The statement is included with the client application and can then be used by the client to swap for an instance-specific client assertion as defined by section 4.2 of the OAuth Assertion draft and profiled in the Client Association draft. The software statement provides a way for a service provider to recognize and configure policy to approve classes of software clients, and simplifies the actual registration to a simple assertion swap. Because the registration is an assertion swap, registration is no longer "stateful" - meaning the service provider does not need to store any information to support the client (unless it wants to).
Has this been implemented yet? Not directly. We've only delivered draft 00 as an alternate way of solving the problem using well-known patterns whose security characteristics and scale characteristics are well understood. Dynamic Take II At roughly the same time that Client Association and Software Statement were published, the authors of Dynamic Registration published a "split" version of the Dynamic Registration (draft-richer-oauth-dyn-reg-core and draft-richer-oauth-dyn-reg-management). While some of the concerns above are addressed, some differences remain. Registration is now a simple POST request. However it defines a new method for issuing client tokens where as Client Association uses RFC6749's existing extension point. The concern here is whether future client access token formats would be addressed properly. Finally, Dyn-reg-core does not yet support software statements. Conclusion The WG has some interesting discussion to bring this back to a single set of specifications. Dynamic Registration has significant implementation, but Client Association could be a much improved way to simplify implementation of the overall OpenID Connect specification and improve adoption. In fairness, the existing editors have already come a long way. Yet there are those with significant investment in the current draft. There are many that have expressed they don't care. They just want a standard. There is lots of pressure on the working group to reach consensus quickly.And that folks is how the sausage is made.Note: John Bradley and Justin Richer recently published draft-bradley-stateless-oauth-client-00 which on first look are getting closer. Some of the details seem less well defined, but the same could be said of client-assoc and software-statement. I hope we can merge these specs this week. Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-family:"Calibri","sans-serif"; mso-ascii- mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi- mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} About the Writer: Phil Hunt joined Oracle as part of the November 2005 acquisition of OctetString Inc. where he headed software development for what is now Oracle Virtual Directory. Since joining Oracle, Phil works as CMTS in the Identity Standards group at Oracle where he developed the Kantara Identity Governance Framework and provided significant input to JSR 351. Phil participates in several standards development organizations such as IETF and OASIS working on federation, authorization (OAuth), and provisioning (SCIM) standards.  Phil blogs at www.independentid.com and a Twitter handle of @independentid.

    Read the article

  • Stage3D Camera problem

    - by Thomas Versteeg
    I am trying to create a 2D Stage3D game where you can move the camera around the level in an RTS style. I thought about using Orthographic Matrix3D functions for this, but when I try to scroll, the whole "stage" also scrolls. This is the Camera code: public function Camera2D(width:int, height:int, zoom:Number = 1) { resize(width, height); _zoom = zoom; } public function resize(width:Number, height:Number):void { _width = width; _height = height; _projectionMatrix = makeMatrix(0, width, 0, height); _recalculate = true; } protected function makeMatrix(left:Number, right:Number, top:Number, bottom:Number, zNear:Number = 0, zFar:Number = 1):Matrix3D { return new Matrix3D(Vector.<Number>([ 2 / (right - left), 0, 0, 0, 0, 2 / (top - bottom), 0, 0, 0, 0, 1 / (zFar - zNear), 0, 0, 0, zNear / (zNear - zFar), 1 ])); } public function get viewMatrix():Matrix3D { if (_recalculate) { _recalculate = false; _viewMatrix.identity(); _viewMatrix.appendTranslation( -_width / 2 - _x, -_height / 2 - y, 0); _viewMatrix.appendScale(_zoom, _zoom, 1); _renderMatrix.identity(); _renderMatrix.append(_viewMatrix); _renderMatrix.append(_projectionMatrix); } return _renderMatrix; } Here are two screenshots to show what I mean: How do I only let the inside of the Stage3D scroll and not the whole stage?

    Read the article

  • Inverted textures

    - by brainydexter
    I'm trying to draw textures aligned with this physics body whose coordinate system's origin is at the center of the screen. (XNA) SpriteBatch has its default origin set to the top-left corner. I got the textures to be positioned correctly, but I noticed my textures are vertically inverted. That is, an arrow texture pointing up, when rendered, points down. I'm not sure where I am going wrong with the math. My approach is to convert everything into physics meter units and draw accordingly. Matrix proj = Matrix.CreateOrthographic(scale * graphics.GraphicsDevice.Viewport.AspectRatio, scale, 0, 1); Matrix view = Matrix.Identity; effect.World = Matrix.Identity; effect.View = view; effect.Projection = proj; effect.TextureEnabled = true; effect.VertexColorEnabled = true; effect.Techniques[0].Passes[0].Apply(); SpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, DepthStencilState.Default, RasterizerState.CullNone, effect); m_Paddles[1].Draw(gameTime); SpriteBatch.End(); where Paddle::Draw looks like: SpriteBatch.Draw(paddleTexture, mBody.Position, null, Color.White, 0f, new Vector2(16f, 16f), // origin of the texture 0.1875f, SpriteEffects.None, // width of box is 3*2 = 6 meters. texture is 32 pixels wide. to make it 6 meters wide in world space: 6/32 = 0.1875f 0); The orthographic projection matrix seems fine to me, but I am obviously doing something wrong somewhere! Can someone please help me figure out what I am doing wrong here? Thanks
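
    One likely cause, given that the positions come out right and only the images are upside down: Matrix.CreateOrthographic produces a Y-up projection, while SpriteBatch builds its quads assuming screen space with Y pointing down, so every sprite ends up sampled upside down. A minimal workaround (a sketch; flipping the projection's Y axis instead is the other common route, but that also mirrors the world coordinates) is to flip each sprite at draw time, changing only one argument of the existing call:

      SpriteBatch.Draw(paddleTexture,
          mBody.Position,
          null,
          Color.White,
          0f,
          new Vector2(16f, 16f),          // origin of the texture
          0.1875f,                        // 6 m wide box / 32 px texture
          SpriteEffects.FlipVertically,   // undo the vertical flip caused by the Y-up projection
          0);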

    Read the article

  • Is there any functional difference between immutable value types and immutable reference types?

    - by Kendall Frey
    Value types are types which do not have an identity. When one variable is modified, other instances are not. Using JavaScript syntax as an example, here is how a value type works. var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is still 42 Reference types are types which do have an identity. When one variable is modified, other instances are as well. Here is how a reference type works. var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is now 0 Note how the example uses mutable objects to show the difference. If the objects were immutable, you couldn't do that, so that kind of testing for value/reference types doesn't work. Is there any functional difference between immutable value types and immutable reference types? Is there any algorithm that can tell the difference between a reference type and a value type if they are immutable? Reflection is cheating. I'm wondering this mostly out of curiosity.
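
    One way to make the question concrete (a sketch in the same JavaScript style as above): once mutation is off the table, the only remaining observable difference is identity comparison. Two structurally equal immutable reference objects can still be told apart, whereas with true value semantics structural equality and identity coincide:

      var foo = Object.freeze({ a: 42 });
      var bar = Object.freeze({ a: 42 });  // structurally identical, built separately
      var baz = foo;

      // reference semantics: identity is observable even though nothing can be mutated
      foo === bar; // false -- two distinct identities
      foo === baz; // true  -- same identity

      // value semantics (e.g. numbers): equal values are indistinguishable
      42 === 42;   // true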

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10 More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010) Peter Scott: Realtime Data Warehouse Loading Rittman-Mead's Peter Scott looks at putting data in to a data warehouse in real time. (tags: oracle datawarehousing businessintelligence) Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast) Management Pack for Identity Management Viewlet A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement) @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it) "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility) @vampbenepe: Integration patterns for social data: the Open Social Data Bus "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article
