Search Results

Search found 11717 results on 469 pages for 'credentials manager'.


  • Featured partner: Avnet To Supply Oracle Enterprise Cloud Management Solutions In Middle East & North Africa Region

    - by Javier Puerta
    "Global IT solutions distribution leader, Avnet Technology Solutions have been approved to distribute Oracle Enterprise Manager 12c, a complete, integrated and business-driven enterprise cloud management solution, in the Middle East & North Africa region. This will help Avnet which serve customers and suppliers in more than 70 countries to accelerate partners’ business growth in the region while providing support and enablement services to help them quickly address local opportunities. Oracle Enterprise Manager 12c creates business value from IT by leveraging the built-in management capabilities of the Oracle stack for traditional and cloud environments. Using this solution, customers have reported 12 times faster achievement of IT-business alignment. According to Senior Director Oracle business MENA, Avnet Technology Solutions, Hani Barakat, “Enterprises in the Middle East and North Africa region can increase their efficiency and responsiveness while reducing costs and complexity for traditional data centers, virtualised, and cloud computing environments with the help of Oracle Enterprise Manager 12c.” See full press release in "Ventures Africa"

    Read the article

  • Tips to Increase PC Performance in Windows 7

    The Windows 7 Task Manager is a solid tool that gives you an overview of what is happening in terms of running processes on your computer. While the Task Manager may appear simple to the naked eye, it can be used in several ways to help identify possible sources of problematic performance. This tutorial offers some tips that you can employ with the Task Manager to help improve your PC's performance.
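
    As an aside (not from the tutorial): the same per-process overview the Task Manager shows can also be read programmatically, which is handy for logging a performance snapshot. A minimal C# sketch, assuming only the System.Diagnostics API:

    using System;
    using System.Diagnostics;

    class ProcessReport
    {
        static void Main()
        {
            // Roughly the "Processes" tab: name, PID, and working-set memory per process.
            foreach (Process p in Process.GetProcesses())
            {
                Console.WriteLine("{0,-30} PID {1,6}  WorkingSet {2,10:N0} KB",
                    p.ProcessName, p.Id, p.WorkingSet64 / 1024);
            }
        }
    }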

    Read the article

  • Consultant in a firm that doesn't understand the tech!

    - by techsjs2012
    I got a job as a consultant at a firm that has three other programmers. My job is to rewrite all the old systems in Java, Spring, etc., but the staff programmers only know Perl and the manager does not know any programming. I am trying to get them to understand that I have 6 projects to rewrite here, but no one has design docs or specs; the staff programmers have never had to write any documents. Plus, I can't get the manager to understand the new Java technology. He keeps asking some of the staff for views on things, but the staff don't know or understand it. Where do I go from here to make the manager understand that the staff programmers, or someone, have to write a design document so I know what to build? Or, if I have to write the documents, how do I get the information?

    Read the article

  • Why are downloads from Canonical Partners repository so slow?

    - by Sabacon
    If I need Sun Java, the Adobe Flash plugin, or anything else that comes from Canonical Partners, the package downloads are painfully slow, even for small packages like the Flash plugin. To speed things up I have to go here: http://archive.canonical.com/ubuntu/pool/partner/ to find what I want, download the packages with a download manager (which is usually about 20 times faster than the package manager), and then place them in my /var/cache/apt/archives folder. I run the package manager afterwards; as long as the right versions of the packages I ask to install are detected in the /var/cache/apt/archives folder, they will be installed immediately. I would like to stop doing this, so I am wondering if anyone else has this problem, what could be the cause, and if there is a fix. I am located in the Western Caribbean region. I think it would be helpful to note that all other packages coming from the repository I have selected with Synaptic download at acceptable speeds.

    Read the article

  • New release for the Visual Studio 2010 and .NET Framework 4 Training Kit

    - by Enrique Lima
    Among the new content in the release is a set of ALM docs and labs. The ALM content referenced above is:

    o Using Code Analysis with Visual Studio 2010 to Improve Code Quality
    o Introduction to Exploratory Testing with Microsoft Test Manager 2010
    o Introduction to Platform Testing with Microsoft Test Manager 2010
    o Introduction to Quality Tracking with Visual Studio 2010
    o Introduction to Test Planning with Microsoft Test Manager 2010

    All ALM labs point to the latest version of the VS 2010 RTM VM.

    You can download the Training Kit from: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23507
    Visit the online content: http://msdn.microsoft.com/en-us/VS2010TrainingCourse
    Download the most recent version of Visual Studio: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=240

    Read the article

  • Synchronized Property Changes (Part 4)

    - by Geertjan
    The next step is to activate the undo/redo functionality... for a Node. Something I've not seen done before: when the Node is renamed via F2, the "Undo/Redo" buttons should start working. Here is the start of the solution, via this item in the mailing list and Timon Veenstra's BeanNode class; note especially the undo/redo-related members:

    public class ShipNode extends BeanNode implements PropertyChangeListener, UndoRedo.Provider {

        private final InstanceContent ic;
        private final ShipSaveCapability saveCookie;
        private UndoRedo.Manager manager;
        private String oldDisplayName;
        private String newDisplayName;
        private Ship ship;

        public ShipNode(Ship bean) throws IntrospectionException {
            this(bean, new InstanceContent());
        }

        private ShipNode(Ship bean, InstanceContent ic) throws IntrospectionException {
            super(bean, Children.LEAF, new ProxyLookup(new AbstractLookup(ic), Lookups.singleton(bean)));
            this.ic = ic;
            setDisplayName(bean.getType());
            setShortDescription(String.valueOf(bean.getYear()));
            saveCookie = new ShipSaveCapability(bean);
            bean.addPropertyChangeListener(WeakListeners.propertyChange(this, bean));
        }

        @Override
        public Action[] getActions(boolean context) {
            List<? extends Action> shipActions = Utilities.actionsForPath("Actions/Ship");
            return shipActions.toArray(new Action[shipActions.size()]);
        }

        protected void fire(boolean modified) {
            if (modified) {
                ic.add(saveCookie);
            } else {
                ic.remove(saveCookie);
            }
        }

        @Override
        public UndoRedo getUndoRedo() {
            manager = Lookup.getDefault().lookup(UndoRedo.Manager.class);
            return manager;
        }

        private class ShipSaveCapability implements SaveCookie {

            private final Ship bean;

            public ShipSaveCapability(Ship bean) {
                this.bean = bean;
            }

            @Override
            public void save() throws IOException {
                StatusDisplayer.getDefault().setStatusText("Saving...");
                fire(false);
            }
        }

        @Override
        public boolean canRename() {
            return true;
        }

        @Override
        public void setName(String newDisplayName) {
            Ship c = getLookup().lookup(Ship.class);
            oldDisplayName = c.getType();
            c.setType(newDisplayName);
            fireNameChange(oldDisplayName, newDisplayName);
            fire(true);
            fireUndoableEvent("type", ship, oldDisplayName, newDisplayName);
        }

        public void fireUndoableEvent(String property, Ship source, Object oldValue, Object newValue) {
            ReUndoableEdit reUndoableEdit = new ReUndoableEdit(property, source, oldValue, newValue);
            UndoableEditEvent undoableEditEvent = new UndoableEditEvent(this, reUndoableEdit);
            manager.undoableEditHappened(undoableEditEvent);
        }

        private class ReUndoableEdit extends AbstractUndoableEdit {

            private Object oldValue;
            private Object newValue;
            private Ship source;
            private String property;

            public ReUndoableEdit(String property, Ship source, Object oldValue, Object newValue) {
                super();
                this.oldValue = oldValue;
                this.newValue = newValue;
                this.source = source;
                this.property = property;
            }

            @Override
            public void undo() throws CannotUndoException {
                setName(oldValue.toString());
            }

            @Override
            public void redo() throws CannotRedoException {
                setName(newValue.toString());
            }
        }

        @Override
        public String getDisplayName() {
            Ship c = getLookup().lookup(Ship.class);
            if (null != c.getType()) {
                return c.getType();
            }
            return super.getDisplayName();
        }

        @Override
        public String getShortDescription() {
            Ship c = getLookup().lookup(Ship.class);
            if (null != String.valueOf(c.getYear())) {
                return String.valueOf(c.getYear());
            }
            return super.getShortDescription();
        }

        @Override
        public void propertyChange(PropertyChangeEvent evt) {
            if (evt.getPropertyName().equals("type")) {
                String oldDisplayName = evt.getOldValue().toString();
                String newDisplayName = evt.getNewValue().toString();
                fireDisplayNameChange(oldDisplayName, newDisplayName);
            } else if (evt.getPropertyName().equals("year")) {
                String oldToolTip = evt.getOldValue().toString();
                String newToolTip = evt.getNewValue().toString();
                fireShortDescriptionChange(oldToolTip, newToolTip);
            }
            fire(true);
        }
    }

    Undo works when a rename is done, but Redo never does, because Undo is constantly reactivated whenever there is a name change. And why must the UndoRedo.Manager be retrieved from the Lookup (it doesn't work otherwise)? I don't get that part of the code either. Help welcome!

    Read the article

  • Nautilus won't browse my USB hard drive unless I double-click it twice

    - by agnul
    On my laptop, running 10.10, whenever I plug in a thumb drive Nautilus adds an icon to the desktop and opens a file manager window with the drive contents. This does not work for my 250Mb external hard drive: the icon is added to the desktop, but no file manager window pops up. Double-clicking the icon just causes some disk activity (on the system drive) and nothing else. Double-clicking the icon a second time, the file manager eventually opens. At first I thought this was related to nautilus-elementary, but after removing it nothing has changed. How do I even start debugging this?

    Read the article

  • Connect to a VPN from a Virtual Machine

    - by kaharas
    I am running an Ubuntu distro inside a VMware virtual machine with a bridged connection on a Windows 7 host. What I am trying to do is have the virtual machine connect to a VPN, but I'm not having much success. At first, the system was using wicd, but I replaced it with network-manager, which is supposed to have OpenVPN support. The problem is, even though I've purged the wicd installation, there are connections shown in the network manager, and I'm still able to access the net. I've also added the OpenVPN data in the network manager tab, but it's shown as never used. PS: when I try to stop and start the NetworkManager service, an error message pops up telling me that there's no such service, but apt-get tells me I've already installed it...

    Read the article

  • Some problems with Compiz and GTK on Ubuntu 14.04

    - by LVHoanh
    I installed Ubuntu 14.04 on my computer: i3/4G/750G/VGA Intel HD3000 & 1GB NVIDIA GT540M with CUDA. I have some problems with Ubuntu 14.04:

    o Does Ubuntu 14.04 support Emerald Theme Manager? It is not available in the Ubuntu Software Center.
    o I installed Compiz and CompizConfig Settings Manager, and they work OK, but the Windows 3D plugin does not work.
    o The Shutdown/Restart function on the top-right panel does not work; is there any solution for this?
    o The mouse theme I installed only works in the current session; the next time I restart the computer it changes back to the default mouse theme.
    o How can I resize the spacing between desktop icons?

    Waiting for everyone to respond. Thank you so much. P/s: My English is not good; I hope you understand. :)

    Read the article

  • Managing shots of the player

    - by Bitbridge
    I'm currently developing a 2D jump'n'run, and the situation is the following: the player has different weapons he can collect and is then able to shoot the weapons' projectiles (laser, rockets, whatever). In my previous game (a space shooter) I just had a manager class for all the weapon shots; it stored them in a container and then updated and drew every single one. When the "shoot" event occurred, the "ProjectileManager" was notified and it added the wanted projectile. The input for player actions is handled in the player class, so the player would have to know the manager to call the manager's function. I also have a CollisionManager that checks for collisions between, for example, enemies and the projectiles, and then notifies these objects. However, I somehow have the feeling that I shouldn't use this approach and that there might be a better way to handle this. I know the question is a bit vague; I'm pretty much just looking for input and ideas to improve my design.
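
    As an illustration of one alternative (not from the original post): instead of the player holding a manager reference, the player can expose a shoot event that the ProjectileManager subscribes to. A minimal C# sketch with hypothetical Projectile and Player types:

    using System;
    using System.Collections.Generic;

    public class Projectile
    {
        public float X, Y, VelX, VelY;
        public bool Alive = true;                 // a CollisionManager can clear this on a hit
        public void Update(float dt) { X += VelX * dt; Y += VelY * dt; }
    }

    public class Player
    {
        public event Action<Projectile> Shot;     // raised from the player's input handling
        public void Shoot()
        {
            if (Shot != null) Shot(new Projectile { VelX = 10f });
        }
    }

    public class ProjectileManager
    {
        private readonly List<Projectile> projectiles = new List<Projectile>();

        public void Listen(Player player) { player.Shot += projectiles.Add; }

        public void Update(float dt)
        {
            foreach (var p in projectiles) p.Update(dt);
            projectiles.RemoveAll(p => !p.Alive); // drop projectiles marked dead
        }
    }

    This keeps the player class free of any direct manager dependency, which addresses the coupling concern in the question.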

    Read the article

  • [Japanese-language post; title garbled in extraction: a BI/DWH performance series, part 2]

    - by Yusuke.Yamamoto
    [The original Japanese text was corrupted to placeholder characters during extraction. Recoverable references: Oracle Database BI/DWH performance, a demo video (04:19), Oracle Enterprise Manager (including SQL analysis and I/O monitoring), the Oracle GRID Center, and related topics listed as Advanced Compression, Oracle Enterprise Manager, and DWH (data warehouse) design and operation.]

    Read the article

  • [Japanese-language post; title garbled in extraction: identity and access management products]

    - by ???02
    [The original Japanese text was corrupted to placeholder characters during extraction. Recoverable references: Oracle Adaptive Access Manager (risk-based access control), Oracle Identity Federation (federated identity; SAML, ID-FF, WS-Federation, Windows CardSpace), Oracle Entitlements Server (fine-grained authorization; OASIS XACML), and Oracle OpenSSO Security Token Service (OASIS WS-Trust token issuance, renewal, and validation). Source: Oracle Direct.]

    Read the article

  • Create a WCF REST Client Proxy Programmatically (in C#)

    - by Tawani
    I am trying to create a REST client proxy programmatically in C# using the code below, but I keep getting a CommunicationException error. Am I missing something?

    public static class WebProxyFactory
    {
        public static T Create<T>(string url) where T : class
        {
            ServicePointManager.Expect100Continue = false;
            WebHttpBinding binding = new WebHttpBinding();
            binding.MaxReceivedMessageSize = 1000000;
            WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
            T proxy = factory.CreateChannel();
            return proxy;
        }

        public static T Create<T>(string url, string userName, string password) where T : class
        {
            ServicePointManager.Expect100Continue = false;
            WebHttpBinding binding = new WebHttpBinding();
            binding.Security.Mode = WebHttpSecurityMode.TransportCredentialOnly;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
            binding.UseDefaultWebProxy = false;
            binding.MaxReceivedMessageSize = 1000000;
            WebChannelFactory<T> factory = new WebChannelFactory<T>(binding, new Uri(url));
            ClientCredentials credentials = factory.Credentials;
            credentials.UserName.UserName = userName;
            credentials.UserName.Password = password;
            T proxy = factory.CreateChannel();
            return proxy;
        }
    }

    So that I can use it as follows:

    IMyRestService proxy = WebProxyFactory.Create<IMyRestService>(url, usr, pwd);
    var result = proxy.GetSomthing(); // Fails right here
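
    (Not part of the question: since CommunicationException frequently wraps a more specific WebException, one hedged diagnostic step is to inspect InnerException and abort the faulted channel. Illustrative only; assumes using System.ServiceModel:)

    IMyRestService proxy = WebProxyFactory.Create<IMyRestService>(url, usr, pwd);
    try
    {
        var result = proxy.GetSomthing();
    }
    catch (CommunicationException ex)
    {
        // The inner exception is often a WebException carrying the HTTP status.
        Console.WriteLine(ex.Message);
        if (ex.InnerException != null)
            Console.WriteLine("Inner: " + ex.InnerException.Message);
        ((ICommunicationObject)proxy).Abort(); // never reuse a faulted channel
    }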

    Read the article

  • Unable to debug WCF service in VS2008 after UserNamePasswordValidator fault

    - by lsb
    Hi! I have a WCF service that I secure with a custom UserNamePasswordValidator and Message security running over wsHttpBinding. The release code works great. Unfortunately, when I try to run in debug mode after having previously used invalid credentials (the current credentials ARE valid!), VS2008 displays an annoying dialog box (more on this below). A simplified version of the Validate method from my validator might look like the following:

    public override void Validate(string userName, string password)
    {
        if (password != "ABC123")
            throw new FaultException("The password is invalid!");
    }

    The client receives a MessageSecurityException with InnerException set to the FaultException I explicitly threw. This is workable, since my client can display the message text of the original FaultException I wanted the user to see. Unfortunately, in all subsequent service calls VS2008 displays an "Unable to automatically debug..." dialog. The only way I can stop this from happening is to exit VS2008, get back in, and connect to my service using correct credentials. I should also add that this occurs even when I create a brand new proxy on each and every call. There's no chance MY channel is faulted when I make a call. It is likely, however, that VS2008 hangs on to the previously faulted channel and tries to use it for debugging purposes. Needless to say, this sucks! The entire reason I'm entering "bad" credentials is to test the bad-credential handling. Anyway, if anyone has any ideas as to how I can get around this bug (?!?) I'd be very, very appreciative....
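
    (A side note rather than a fix for the debugger behavior: guidance for custom validators often suggests throwing SecurityTokenException when credentials fail. A minimal sketch, keeping the demo password check from the question:)

    using System;
    using System.IdentityModel.Selectors;
    using System.IdentityModel.Tokens;

    public class DemoValidator : UserNamePasswordValidator
    {
        public override void Validate(string userName, string password)
        {
            if (userName == null || password == null)
                throw new ArgumentNullException();   // reject missing credentials outright
            if (password != "ABC123")                // demo check only
                throw new SecurityTokenException("Unknown username or incorrect password.");
        }
    }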

    Read the article

  • AJAX redirect dilemma: how to get the redirect URL OR how to set properties for the redirect request

    - by Anthony
    First, I'm working in Google Chrome, if that helps. Here is the behavior: I send an XHR request via jQuery to a remote site (this is a Chrome extension, and I've set all of the cross-site settings...):

    $.ajax({
        type: "POST",
        contentType: "text/xml",
        url: some_url,
        data: some_xml,
        username: user,
        password: pass,
        success: function (data, status, xhr) { alert(data); },
        error: function (xhr, status, error) { alert(xhr.status); }
    });

    The URL that is being hit returns a 302 (this is expected), and Chrome follows the redirect (also expected). The new URL returns a prompt for credentials, which are not being pulled from the original request, so Chrome shows a login dialog. If I put in the original credentials, I get back a response about an invalid request being sent (it's a valid HTTP request -- 200 -- the remote server just doesn't like one of the headers). When viewing the developer window in Chrome, there are two requests sent. The first is to the original URL with all settings set in the ajax request. The second is to the redirect URL, with a method of "GET", nothing from the "POST" field, and no credentials. I am at a loss as to what I can do. I need to either:

    o Get the redirect URL so I can send a second request (xhr.getResponseHeader("Location") does NOT work),
    o Have the new redirect request preserve the settings from the original request, or
    o Get the final URL that the error came from so I can send another request.

    Ideally I don't want the user to have to put in their credentials a second time in this dialog box, but I'll take what I can get if I can just get the final URL.

    Read the article

  • SharePoint custom web service consumption problems - HTTP 401: Unauthorized

    - by alekz
    I have a custom web service deployed into WSS 3. It has two web methods. The first one returns the version of the loaded assembly without any invocation of the SharePoint objects. The second returns some basic info about a library, something like:

    var spLibrary = [find library logic];
    return spLibrary.Name + "@" + spLibrary.Url;

    In the client app I have something like the following:

    var service = new WebService1();
    service.Url = [url];
    service.Credentials = System.Net.CredentialCache.DefaultCredentials;
    service.Method1();
    service.Method2();

    When the client app runs on the machine where SharePoint is deployed, everything works just fine. When the client app runs on a remote machine (but under the same user), the first method still works, but the second one throws System.Net.WebException: HTTP 401: Unauthorized. I have tried to set credentials manually (service.Credentials = new System.Net.NetworkCredential(login, password, domain);) but this doesn't help. I've tried to invoke the built-in SharePoint web services using a similar scenario, and they work just fine (sorry for the earlier mistake -- some methods were not working without the appropriate privileges):

    var service = new GroupsService();
    service.Url = [url];
    service.Credentials = System.Net.CredentialCache.DefaultCredentials;
    service.SomeMethod();
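
    (An illustrative variation rather than a confirmed fix: when DefaultCredentials are not forwarded as expected from a remote machine, explicitly targeting the NTLM scheme through a CredentialCache is sometimes suggested. The login, password, and domain values are placeholders:)

    var service = new WebService1();
    service.Url = url;
    var cache = new System.Net.CredentialCache();
    cache.Add(new Uri(url), "NTLM",
        new System.Net.NetworkCredential("login", "password", "DOMAIN"));
    service.Credentials = cache;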

    Read the article

  • Authenticating to Google Search Appliance using Basic HTTP auth and ASP.NET (VB)

    - by Chainlink
    I've run into a snag which has to do with authentication between the Google Search Appliance and ASP.NET. Normally, when asking for secure pages from the search appliance, the search appliance asks for credentials, then uses these credentials to try to access the secure results. If this attempt is successful, the page shows up in the results list. Since ASP.NET is contacting the search appliance on the client's behalf, it will need to collect credentials and pass them along to the search appliance. I have tried a couple of different documented ways of accomplishing this, but they don't seem to work. Below is the code I have tried:

    ' Bypass SSL since discovery.gov.mb.ca does not have a valid SSL cert (NOT PRODUCTION SAFE)
    ServerCertificateValidationCallback = New System.Net.Security.RemoteCertificateValidationCallback(AddressOf customXertificateValidation)

    googleUrl = "https://removed.com"
    Dim rdr As New XmlTextReader(googleUrl)
    Dim resolver As New XmlUrlResolver()
    Dim myCred As New System.Net.NetworkCredential("USERNAME", "PASSWORD", Nothing)
    Dim credCache As New CredentialCache()
    credCache.Add(New Uri(googleUrl), "Basic", myCred)
    resolver.Credentials = credCache
    rdr.XmlResolver = resolver
    doc = New System.Xml.XPath.XPathDocument(rdr)
    path = doc.CreateNavigator()

    Private Function customXertificateValidation(ByVal sender As Object, ByVal certificate As System.Security.Cryptography.X509Certificates.X509Certificate, ByVal chain As System.Security.Cryptography.X509Certificates.X509Chain, ByVal sslPolicyErrors As Net.Security.SslPolicyErrors) As Boolean
        Return True
    End Function

    Read the article

  • MessageSecurityException: The security header element 'Timestamp' with the '' id must be signed

    - by NiklasN
    I'm asking the same question here that I've already asked on the MSDN forums: http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/70f40a4c-8399-4629-9bfc-146524334daf

    I'm consuming a (most likely Java-based) web service which I have absolutely no access to modify. It won't be modified even if I ask (it's a nation-wide system). I've written the client with WCF. Here's some code:

    CustomBinding binding = new CustomBinding();
    AsymmetricSecurityBindingElement element = SecurityBindingElement.CreateMutualCertificateDuplexBindingElement(MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10);
    element.AllowSerializedSigningTokenOnReply = true;
    element.SetKeyDerivation(false);
    element.IncludeTimestamp = true;
    element.KeyEntropyMode = SecurityKeyEntropyMode.ClientEntropy;
    element.MessageProtectionOrder = System.ServiceModel.Security.MessageProtectionOrder.SignBeforeEncrypt;
    element.LocalClientSettings.IdentityVerifier = new CustomIdentityVerifier();
    element.SecurityHeaderLayout = SecurityHeaderLayout.Lax;
    element.IncludeTimestamp = false;
    binding.Elements.Add(element);
    binding.Elements.Add(new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8));
    binding.Elements.Add(new HttpsTransportBindingElement());

    EndpointAddress address = new EndpointAddress(new Uri("url"));
    ChannelFactory<MyPortTypeChannel> factory = new ChannelFactory<MyPortTypeChannel>(binding, address);
    ClientCredentials credentials = factory.Endpoint.Behaviors.Find<ClientCredentials>();
    credentials.ClientCertificate.Certificate = myClientCert;
    credentials.ServiceCertificate.DefaultCertificate = myServiceCert;
    credentials.ServiceCertificate.Authentication.CertificateValidationMode = X509CertificateValidationMode.None;
    service = factory.CreateChannel();

    After this, every request made to the service fails on the client side (I can confirm my request is accepted by the service and a sane response is being returned). I always get the following exception:

    MessageSecurityException: The security header element 'Timestamp' with the '' id must be signed.

    By looking at the trace I can see that in the response there really is a timestamp element, but in the security section there is only a signature for the body. Can I somehow make WCF ignore the fact that the Timestamp isn't signed?

    Read the article

  • using sfDoctrineGuardPlugin for regular member login?

    - by fayer
    I want to create users for my web application. I'm using Symfony. I wonder if I should do this with sfDoctrineGuardPlugin or with Symfony's provided credential methods:

    // Add one or more credentials
    $user->addCredential('foo');
    $user->addCredentials('foo', 'bar');

    // Check if the user has a credential
    echo $user->hasCredential('foo');                      =>   true

    // Check if the user has both credentials
    echo $user->hasCredential(array('foo', 'bar'));        =>   true

    // Check if the user has one of the credentials
    echo $user->hasCredential(array('foo', 'bar'), false); =>   true

    // Remove a credential
    $user->removeCredential('foo');
    echo $user->hasCredential('foo');                      =>   false

    // Remove all credentials (useful in the logout process)
    $user->clearCredentials();
    echo $user->hasCredential('bar');                      =>   false

    Or is the purpose of sfDoctrineGuardPlugin just securing the admin page, and not the frontend login system? Thanks.

    Read the article

  • SharePoint 2010 / ASP.Net Integration - Looking for advice

    - by jpennal
    I have been Googling a problem that I have with trying to integrate the web application that I am working on with SharePoint 2010. The web application is a wiki-style tool that allows users to log in via forms authentication or WIA against Active Directory and create content for themselves and others. What we would like to do is allow a user to have a page with the content they have created in our web application mixed in with content that they have living on the SharePoint server. For example, they may want to see a list of documents that they have on the SharePoint server mixed in with some of their content. To accomplish this, we would like to take the credentials the user has logged into our web application with (for example MYDOMAIN\jsmith) and be able to query SharePoint for the documents of that same user (MYDOMAIN\jsmith) WITHOUT the user being prompted to re-enter their credentials to access the SharePoint server (we are trying to avoid the double-hop problem). We have come up with some options for how we want to do this, but we are unsure of what the best approach is. For example, we could:

    o Have a global user, shared by all users, to get the information we need from SharePoint. The downside is that we cannot filter SharePoint content to a particular user.
    o Store the user's credentials when they log in, but that would only work for users authenticating via forms auth, and would be a security issue that some users/clients would not like.
    o Write a SharePoint extension using WCF to allow us to access the information we need; however, we'd still have the issue of figuring out how to impersonate the user we want.

    None of these options is ideal, and in our investigation we came across the Claims Authentication/STS option, which seems like it is trying to solve the problem we are having. So my question is, based on what I have written, is Claims/STS the best approach for us? We have not been able to find much direction on how to use this method to call into SharePoint from a web application and pass along the existing credentials. Does anyone have any experience with any of these issues?

    Read the article

  • How would a user stay logged in to a REST-based website?

    - by unforgiven3
    A year or so ago I asked this question: Can you help me understand this? “Common REST Mistakes: Sessions are irrelevant”. My question was essentially this: Okay, I get that HTTP authentication is done automatically on every message - but how? Is the username/password sent with every request? Doesn't that just increase attack surface area? I feel like I'm missing part of the puzzle. The answers I received made perfect sense in the context of a mobile (iPhone, Android, WP7) app - when talking to a REST service, the app would just send user credentials along with each request. That worked great for me. But now, I would like to better understand how one would secure a REST-like website, like StackOverflow itself or something like Reddit. How would things work if it was a user logged in via a web browser instead of logged in via an iPhone app? What happens when a user logs in? Are the credentials saved in the browser somehow? How would the browser know what credentials to send with subsequent REST requests? What if it's a JavaScript call to a webservice? How would the JavaScript call include user credentials? I'll be quite frank: my understanding of security when it comes to websites is pretty limited. I enjoyed working with REST services from an app perspective, but now I want to try and build a website that is based on REST principles, and I'm finding myself to be pretty lost. If there is anything in the above question that is unclear that you'd like me to clarify, please leave a comment and I'll address it.
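
    (To make the per-request idea above concrete: with HTTP Basic authentication there is no server-side login state; the client, whether a browser that has cached the credentials or a script, attaches an Authorization header to every single request. A minimal C# sketch with placeholder URL and credentials:)

    using System;
    using System.Net;
    using System.Text;

    class RestCall
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/items");
            string token = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"));
            request.Headers["Authorization"] = "Basic " + token; // sent on every request

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine(response.StatusCode);
            }
        }
    }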

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.NET Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;
    using System.ServiceModel;
    using System.Web;

    namespace Oslo.Worker
    {
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class AdminService : DataService<OsloEntities>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            }

            [QueryInterceptor("Pairs")]
            public Expression<Func<Pair, bool>> OnQueryPairs()
            {
                // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                if (HttpContext.Current.User.Identity.Name != "ADMIN")
                    throw new Exception("Ooops!");
                return p => true;
            }
        }
    }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

    using System;
    using System.Data.Services;

    namespace Oslo.Worker
    {
        public class AdminHost : DataServiceHost
        {
            public AdminHost(Uri baseAddress)
                : base(typeof(AdminService), new Uri[] { baseAddress })
            {
            }
        }
    }

    And finally, here's the client code:

    using System;
    using System.Data.Services.Client;
    using System.Net;
    using Oslo.Shared;

    namespace Oslo.ClientTest
    {
        public class AdminContext : DataServiceContext
        {
            public AdminContext(Uri serviceRoot, string userName, string password)
                : base(serviceRoot)
            {
                Credentials = new NetworkCredential(userName, password);
            }

            public DataServiceQuery<Order> Orders
            {
                get { return base.CreateQuery<Order>("Orders"); }
            }
        }
    }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
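
    (A hypothetical usage sketch of the AdminContext above; the service URI and credentials are placeholders:)

    var context = new AdminContext(
        new Uri("http://localhost:81/AdminService.svc"), "ADMIN", "secret");
    foreach (var order in context.Orders)
    {
        Console.WriteLine(order); // Order comes from the Oslo.Shared assembly above
    }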

    Read the article

  • How to add logic to Views? Ruby on Rails

    - by Gotjosh
    Right now I'm building a project management app in Rails. Here is some background info: right now I have 2 models, one is User and the other one is Client. Clients and Users have a one-to-one relationship (client has_one, user belongs_to, which means that the foreign key is in the users table). So what I'm trying to do is: once you add a client, you can then add credentials (add a user) for that client. In order to do so, all the clients are being displayed with a link next to each client's name, meaning that you can create credentials for that client. What I can't figure out how to do is: if there are already credentials in the database (meaning that there's a record in the users table with that client id), then don't display that link. Here's what I thought would work:

    <% for client in @client %>
      <h5>
        <h4><%= client.id %></h4>
        <a href="/clients/<%= client.id %>"><%= client.name %></a>
        <% for user in @user %>
          <% if user.client_id = client.id %>
            <a href="/clients/<%= client.id %>/user/new">Credentials</a>
          <% end %>
        <% end %>
      </h5>
    <% end %>

    And here's the controller:

    def index
      @client = Client.find_all_by_admin(0)
      @user = User.find(:all)
    end

    But instead it just puts the link once per record in the users table. Any help?

    Read the article

  • Edit Settings in web.config

    - by Scott Selby
    I didn't know how to title this question. I am making a request to PayPal's Express Payment API. I am using their DLL, which helps make the request and parse the response. The instruction for their code to work is to add your authorization credentials in the web.config file. I have done so. My problem is that I want to be able to set these credentials dynamically (probably getting them from SQL), because we are going to allow different users to enter their own API credentials. Sending the request to PayPal looks like this:

    Dim wrapper As New SetExpressCheckoutReq()
    wrapper.SetExpressCheckoutRequest = request
    Dim service As New PayPalAPIInterfaceServiceService()
    Dim setECResponse As SetExpressCheckoutResponseType = service.SetExpressCheckout(wrapper)

    There's not much room in there to edit the header of the request, because PayPalAPIInterfaceServiceService() is defined in their DLL and applies its own header based on the credentials in the web.config. So, my question is: is there a way to point to another location when it looks in the web.config? Also, is there any way to edit the header of a request that is defined in a DLL without changing the DLL (to stay PCI compliant)? The line in the web.config is here:

    <account apiUsername="****" apiPassword="****" apiSignature="****"/>

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter
    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator
    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration.

    Scenario
    The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations.

    Implementation
    The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable, they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below.

    Implementing the Splitter
    The splitter will take a large brokered message and split it into a sequence of smaller sub messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.

    public class LargeMessageSender
    {
        private static int SubMessageBodySize = 192 * 1024;
        private QueueClient m_QueueClient;

        public LargeMessageSender(QueueClient queueClient)
        {
            m_QueueClient = queueClient;
        }

        public void Send(BrokeredMessage message)
        {
            // Calculate the number of sub messages required.
            long messageBodySize = message.Size;
            int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);
            if (messageBodySize % SubMessageBodySize != 0)
            {
                nrSubMessages++;
            }

            // Create a unique session Id.
            string sessionId = Guid.NewGuid().ToString();
            Console.WriteLine("Message session Id: " + sessionId);
            Console.Write("Sending {0} sub-messages", nrSubMessages);

            Stream bodyStream = message.GetBody<Stream>();
            for (int streamOffest = 0; streamOffest < messageBodySize;
                streamOffest += SubMessageBodySize)
            {
                // Get the stream chunk from the large message
                long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize
                    ? SubMessageBodySize : messageBodySize - streamOffest;
                byte[] subMessageBytes = new byte[arraySize];
                int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);
                MemoryStream subMessageStream = new MemoryStream(subMessageBytes);

                // Create a new message
                BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);
                subMessage.SessionId = sessionId;

                // Send the message
                m_QueueClient.Send(subMessage);
                Console.Write(".");
            }
            Console.WriteLine("Done!");
        }
    }

    The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session; this session Id will be used for correlation in the aggregator. A for loop is then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true.

    Implementing the Aggregator
    The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the aggregation operation.

    public class LargeMessageReceiver
    {
        private QueueClient m_QueueClient;

        public LargeMessageReceiver(QueueClient queueClient)
        {
            m_QueueClient = queueClient;
        }

        public BrokeredMessage Receive()
        {
            // Create a memory stream to store the large message body.
            MemoryStream largeMessageStream = new MemoryStream();

            // Accept a message session from the queue.
            MessageSession session = m_QueueClient.AcceptMessageSession();
            Console.WriteLine("Message session Id: " + session.SessionId);
            Console.Write("Receiving sub messages");

            while (true)
            {
                // Receive a sub message
                BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));

                if (subMessage != null)
                {
                    // Copy the sub message body to the large message stream.
                    Stream subMessageStream = subMessage.GetBody<Stream>();
                    subMessageStream.CopyTo(largeMessageStream);

                    // Mark the message as complete.
                    subMessage.Complete();
                    Console.Write(".");
                }
                else
                {
                    // The last message in the sequence is our completeness criteria.
                    Console.WriteLine("Done!");
                    break;
                }
            }

            // Create an aggregated message from the large message stream.
            BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);
            return largeMessage;
        }
    }

    The LargeMessageReceiver is initialized with a QueueClient that is created by the receiving application. The Receive method creates a memory stream that will be used to aggregate the large message body. The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session is accepted, the sub messages in the session are received, and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

    Testing the Implementation
    The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console applications is fairly basic. The implementation of the main method of the sending application is shown below.

    static void Main(string[] args)
    {
        // Create a token provider with the relevant credentials.
        TokenProvider credentials =
            TokenProvider.CreateSharedSecretTokenProvider
            (AccountDetails.Name, AccountDetails.Key);

        // Create a URI for the service bus.
        Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
            ("sb", AccountDetails.Namespace, string.Empty);

        // Create the MessagingFactory
        MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

        // Use the MessagingFactory to create a queue client
        QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

        // Open the input file.
        FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

        // Create a BrokeredMessage for the file.
        BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

        Console.WriteLine("Sending: " + AccountDetails.TestFile);
        Console.WriteLine("Message body size: " + largeMessage.Size);
        Console.WriteLine();

        // Send the message with a LargeMessageSender
        LargeMessageSender sender = new LargeMessageSender(queueClient);
        sender.Send(largeMessage);

        // Close the messaging factory.
        factory.Close();
    }

    The implementation of the main method of the receiving application is shown below.

    static void Main(string[] args)
    {
        // Create a token provider with the relevant credentials.
        TokenProvider credentials =
            TokenProvider.CreateSharedSecretTokenProvider
            (AccountDetails.Name, AccountDetails.Key);

        // Create a URI for the service bus.
        Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
            ("sb", AccountDetails.Namespace, string.Empty);

        // Create the MessagingFactory
        MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

        // Use the MessagingFactory to create a queue client
        QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

        // Create a LargeMessageReceiver and receive the message.
        LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
        BrokeredMessage largeMessage = receiver.Receive();

        Console.WriteLine("Received message");
        Console.WriteLine("Message body size: " + largeMessage.Size);

        string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
        Console.WriteLine("Saving file: " + testFile);

        // Save the message body as a file.
        Stream largeMessageStream = largeMessage.GetBody<Stream>();
        largeMessageStream.Seek(0, SeekOrigin.Begin);
        FileStream fileOut = new FileStream(testFile, FileMode.Create);
        largeMessageStream.CopyTo(fileOut);
        fileOut.Close();

        Console.WriteLine("Done!");
    }

    In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed, the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

    Improvements to the Implementation
    The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario, there are a number of improvements that could be made to the design.

    Copying Message Header Properties
    When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process.

    Using Asynchronous Methods
    The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

    Handling Exceptions
    In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
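
    (A sketch of the header-copying improvement described above, not part of the original article; it assumes only the public BrokeredMessage.Properties dictionary:)

    // Inside the splitter's for loop, after creating subMessage:
    if (streamOffest == 0) // the first sub message carries the original header properties
    {
        foreach (var property in message.Properties)
        {
            subMessage.Properties.Add(property.Key, property.Value);
        }
    }
    // The aggregator would mirror this, copying Properties from the first
    // received sub message onto the reassembled large message.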

    Read the article
