Search Results

Search found 17646 results on 706 pages for 'security warning'.

Page 76/706 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • Using Active Directory Security Groups as Hierarchical Tags

    - by Nathan Hartley
    Because Active Directory security groups can hold objects regardless of OU; be used for reporting, documentation, inventory, etc.; be referenced by automated processes (Get-QADGroupMember); be used to apply policy; and be used by WSUS, I would like to use security groups as hierarchical tags representing various attributes of a computer or user. I am thinking of (computer-centric) tags something like these: /tag/vendor/vendorName, /tag/system/overallSystemName, /tag/application/vendorsApplicationName, /tag/dependantOn/computerName, /tag/department/departmentName, /tag/updates/Group1. Before fumbling through implementing this, I thought I would seek comments from the community, specifically in these areas: Does this make sense? Would it work? Has anyone else attempted this? Is there a good reference on the matter I should read? How best to implement the hierarchy? For example: Tag_OU\Type_OU\GroupName (limits quantity in OU, uniqueness not guaranteed); Tag_OU\Type_OU\Tag-Type-GroupName (limits quantity in OU, uniqueness guaranteed, verbose); etc. Thanks in advance!

    Read the article

  • setting nproc in /etc/security/limits.conf prevents ssh login

    - by omry
    I am trying to use /etc/security/limits.conf on Linux (Debian) to limit the number of processes per user. For starters, I tried to limit my own user's processes by adding this to /etc/security/limits.conf: omry hard nproc 100 This locked my user out of ssh. I could open new processes (verified with su omry), but could not log into ssh with that user; sshd reported this in its log: fatal: setreuid 1000: Resource temporarily unavailable Also, I am certain my user is not running anything near 100 processes (actually 6). What can be the reason for this?

    Read the article

  • The application attempted to perform an operation not allowed by the security policy

    - by user16521
    I ran this command, from http://support.microsoft.com/kb/320268, on the server that hosts the share of code that my local IIS site points to (via UNC to that share): Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on (obviously I replaced Drive with C, and the actual computername and sharename with the ones I'm sharing out). But when I run the ASP.NET site, I am still getting this runtime exception: Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
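    One thing that may be worth checking: CAS policy is evaluated on the machine that actually executes the code, so the caspol command from the KB article generally has to be run on the web server that hosts the IIS site (for the framework version and bitness used by the application pool), not only on the file server that exposes the share. For example, on the IIS machine, reusing the same arguments:

        C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on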

    Read the article

  • Remove "Security Shield" (win XP)? [closed]

    - by ALTT
    Possible Duplicate: Computer is infected by a virus or a malware, what do I do now? I have a problem with "Security Shield". It pops up every 5 minutes and tells me that there are many viruses on my computer and that I should buy their license. I have installed Sophos and don't want "Security Shield", but I haven't found any way to get rid of it. Any help would be appreciated. Thanks! Sorry if I am repeating someone else's question; if so, please show me the link.

    Read the article

  • trying to copy security groups to a user using dsmod group utility in AD

    - by newbie
    I am trying to create a batch file that asks for a source samid and a destination samid, then uses dsquery and dsget to find out which security groups the source samid is assigned to, and assigns the destination samid to those security groups using dsmod. Everything works except the dsmod group command: it doesn't do anything and the batch file stops. If I literally put "CN=marketing,OU=test group,DC=abc,DC=com" instead of %%g and "CN=test1,OU=test group,DC=abc,DC=com" instead of %dusercn%, it works fine. Can anyone help with this? I have pasted my script here; this last small thing is killing me. echo off echo %date% at %time% set /p susername=enter source user name: set /P dusername=enter destination user name: echo %susername% echo %dusername% set dusercn= %dusercn%=dsquery user -samid %dusername% echo %dusercn% for /f "tokens=*" %%g in ('dsquery user -samid %susername% ^|dsget user -memberof') do (dsmod group %%g -addmbr %dusercn%) echo completed pause

    Read the article

  • Healthcare and Distributed Data Don't Mix

    - by [email protected]
    How many times have you heard the story? Hard disk goes missing, USB thumb drive goes missing, laptop goes missing... Not a week goes by that we don't hear about our data going missing... Healthcare data is a big one, but we hear about credit card data, pricing info, corporate intellectual property... When I have spoken at Security and IT conferences part of my message is "Why do you give your users data to lose in the first place?" I don't suggest they can't have access to it... in fact I work for the company that provides the premier data security and desktop solutions that DO provide access. Access isn't the issue. 'Keeping the data' is the issue. We are all human - we all make mistakes... I fault no one for having their car stolen or for dropping a USB thumb drive (well, except the thieves - I can certainly find some fault there). Where I find fault is in policy (or lack thereof sometimes) that allows users to carry around private, and important, data with them. Mr. Director of IT - It is your fault, not theirs. Ms. CSO - Look in the mirror. It isn't like one can't find a network to access the data from. You are on a network right now. How many wireless ones (wifi, mifi, cellular...) are there around you, right now? Allowing employees to remove data from the confines of (wait for it...) THE DATA CENTER is just plain indefensible when it isn't required. The argument that the laptop had a password and the hard disk was encrypted is ridiculous. An encrypted drive tells thieves that before they sell the stolen unit for $75, they should crack the encryption and ascertain what the REAL value of the laptop is... credit card info, identity info, pricing lists, banking transactions... a veritable treasure trove of info people give away on an 'encrypted disk'. What started this latest rant on lack of data control was an article in Government Health IT that was forwarded to me by Denny Olson, an Oracle Principal Sales Consultant in Minnesota. The full article is here, but the point was that a couple of laptops went missing in a couple of different cases, and... well... no one knows where the data is, and yes - they were loaded with patient info. What were you thinking? Obviously you can't steal data from a Sun Ray appliance... since it has no data, nor any storage to keep the data on, and Secure Global Desktop allows access from Macs, Linux and Windows client devices... but in all cases, there is no keeping the data unless you explicitly allow for it in your policy. Since you can get at the data securely from any network, why would you want to take personal responsibility for it? Both Sun Rays and Secure Global Desktop are widely used in Healthcare... but clearly not widely enough. We need to do a better job of getting the message out - Healthcare (or insert your business type here) and distributed data don't mix. Then add Hot Desking and 'follow me printing' and you have something that Clinicians (and CSOs) love. Thanks for putting up my blood pressure, Denny.

    Read the article

  • How to suppress a compiler warning in C# without using #pragma ?

    - by Dylan Lin
    Hi, I need to suppress a specific compiler warning in C#. Right now I can do it like this: #pragma warning disable 0649 private string _field; #pragma warning restore 0649 Is there a way to do it like the following? [SuppressCompilerWarning("0649")] private string _field; Because I only need to suppress the warning for this field, not for a code block. Note: I want to suppress the compiler warning, not the Code Analysis warning. Thanks!
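    As far as I know, the C# compiler itself has no attribute-driven suppression (attributes such as [SuppressMessage] are only honoured by Code Analysis, not by csc), so the narrowest options are the #pragma pair scoped to the single member, or a compiler/project-wide switch; a small sketch:

        class Example
        {
            // Narrowest compiler-level scope available: the #pragma pair around the one member.
        #pragma warning disable 0649    // CS0649: field is never assigned to and will always have its default value
            private string _field;
        #pragma warning restore 0649

            public string Field
            {
                get { return _field; }
            }
        }

        // Project-wide alternative (outside the source file): compile with /nowarn:0649,
        // or add 0649 to the project's "Suppress warnings" setting in the build options.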

    Read the article

  • .NET HTML Sanitation for rich HTML Input

    - by Rick Strahl
    Recently I was working on updating a legacy application to MVC 4 that included free-form text input. When I set up the new site my initial approach was to not allow any rich HTML input, only simple text formatting that would respect a few simple HTML commands for bold, lists etc. and automatically handle line break processing for new lines and paragraphs. This is typical for what I do with most multi-line text input in my apps and it works very well with very little development effort involved. Then the client sprung another note: Oh by the way we have a bunch of customers (real estate agents) who need to post complete HTML documents. Oh uh! There goes the simple theory. After some discussion and pleading on my part (<snicker>) to try and avoid this type of raw HTML input because of potential XSS issues, the client decided to go ahead and allow raw HTML input anyway. There have been lots of discussions on this subject on StackOverflow (and here and here), but after reading through some of the solutions I didn't really find anything that would come even close to what I needed. Specifically we need to be able to allow just about any HTML markup, with the exception of script code. Remote CSS and images need to be loaded, links need to work and so on. While the 'legit' HTML posted by these agents is basic in nature it does span most of the full gamut of HTML (4). Most of the XSS prevention/sanitizer solutions I found were way too aggressive and rendered the posted output unusable, mostly because they tend to strip any externally loaded content. In short I needed a custom solution. I thought the best solution to this would be to use an HTML parser - in this case the Html Agility Pack - and then to run through all the HTML markup provided and remove any of the blacklisted tags and a number of attributes that are prone to JavaScript injection. There's much discussion on whether to use blacklists vs. whitelists in the discussions mentioned above, but I found that whitelists can make sense in simple scenarios where you might allow manual HTML input, but when you need to allow a larger array of HTML functionality a blacklist is probably easier to manage as the vast majority of elements and attributes could be allowed. Also, whitelisting gets a bit more complex with HTML5 and the proliferation of new HTML tags, and most new tags generally don't affect XSS issues directly. Pure whitelisting based on elements and attributes also doesn't capture many edge cases (see some of the XSS cheat sheets listed below), so even with a whitelist, custom logic is still required to handle many of those edge cases. The Microsoft Web Protection Library (AntiXSS) My first thought was to check out the Microsoft AntiXSS library. Microsoft has an HTML Encoding and Sanitation library in the Microsoft Web Protection Library (formerly AntiXSS Library) on CodePlex, which provides stricter functions for whitelist encoding and sanitation. Initially I thought the Sanitation class and its static members would do the trick for me, but I found that this library is way too restrictive for my needs. Specifically the Sanitation class strips out images and links, which rendered the full HTML from our real estate clients completely useless. I didn't spend much time with it, but apparently I'm not alone in feeling this library is not really useful without some way to configure its operation. 
To give you an example of what didn't work for me with the library here's a small and simple HTML fragment that includes script, img and anchor tags. I would expect the script to be stripped and everything else to be left intact. Here's the original HTML:var value = "<b>Here</b> <script>alert('hello')</script> we go. Visit the " + "<a href='http://west-wind.com'>West Wind</a> site. " + "<img src='http://west-wind.com/images/new.gif' /> " ; and the code to sanitize it with the AntiXSS Sanitize class:@Html.Raw(Microsoft.Security.Application.Sanitizer.GetSafeHtmlFragment(value)) This produced a not so useful sanitized string: Here we go. Visit the <a>West Wind</a> site. While it removed the <script> tag (good) it also removed the href from the link and the image tag altogether (bad). In some situations this might be useful, but for most tasks I doubt this is the desired behavior. While links can contain javascript: references and images can 'broadcast' information to a server, without configuration to tell the library what to restrict this becomes useless to me. I couldn't find any way to customize the white list, nor is there code available in this 'open source' library on CodePlex. Using Html Agility Pack for HTML Parsing The WPL library wasn't going to cut it. After doing a bit of research I decided the best approach for a custom solution would be to use an HTML parser and inspect the HTML fragment/document I'm trying to import. I've used the HTML Agility Pack before for a number of apps where I needed an HTML parser without requiring an instance of a full browser like the Internet Explorer Application object which is inadequate in Web apps. In case you haven't checked out the Html Agility Pack before, it's a powerful HTML parser library that you can use from your .NET code. It provides a simple, parsable HTML DOM model to full HTML documents or HTML fragments that let you walk through each of the elements in your document. If you've used the HTML or XML DOM in a browser before you'll feel right at home with the Agility Pack. Blacklist based HTML Parsing to strip XSS Code For my purposes of HTML sanitation, the process involved is to walk the HTML document one element at a time and then check each element and attribute against a blacklist. There's quite a bit of argument of what's better: A whitelist of allowed items or a blacklist of denied items. While whitelists tend to be more secure, they also require a lot more configuration. In the case of HTML5 a whitelist could be very extensive. For what I need, I only want to ensure that no JavaScript is executed, so a blacklist includes the obvious <script> tag plus any tag that allows loading of external content including <iframe>, <object>, <embed> and <link> etc. <form>  is also excluded to avoid posting content to a different location. I also disallow <head> and <meta> tags in particular for my case, since I'm only allowing posting of HTML fragments. There is also some internal logic to exclude some attributes or attributes that include references to JavaScript or CSS expressions. The default tag blacklist reflects my use case, but is customizable and can be added to. 
Here's my HtmlSanitizer implementation:

using System.Collections.Generic;
using System.IO;
using System.Xml;
using HtmlAgilityPack;

namespace Westwind.Web.Utilities
{
    public class HtmlSanitizer
    {
        public HashSet<string> BlackList = new HashSet<string>()
        {
            { "script" },
            { "iframe" },
            { "form" },
            { "object" },
            { "embed" },
            { "link" },
            { "head" },
            { "meta" }
        };

        /// <summary>
        /// Cleans up an HTML string and removes HTML tags in blacklist
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public static string SanitizeHtml(string html, params string[] blackList)
        {
            var sanitizer = new HtmlSanitizer();
            if (blackList != null && blackList.Length > 0)
            {
                sanitizer.BlackList.Clear();
                foreach (string item in blackList)
                    sanitizer.BlackList.Add(item);
            }
            return sanitizer.Sanitize(html);
        }

        /// <summary>
        /// Cleans up an HTML string by removing elements
        /// on the blacklist and all elements that start
        /// with onXXX .
        /// </summary>
        /// <param name="html"></param>
        /// <returns></returns>
        public string Sanitize(string html)
        {
            var doc = new HtmlDocument();
            doc.LoadHtml(html);
            SanitizeHtmlNode(doc.DocumentNode);

            //return doc.DocumentNode.WriteTo();

            string output = null;

            // Use an XmlTextWriter to create self-closing tags
            using (StringWriter sw = new StringWriter())
            {
                XmlWriter writer = new XmlTextWriter(sw);
                doc.DocumentNode.WriteTo(writer);
                output = sw.ToString();

                // strip off XML doc header
                if (!string.IsNullOrEmpty(output))
                {
                    int at = output.IndexOf("?>");
                    output = output.Substring(at + 2);
                }
                writer.Close();
            }
            doc = null;

            return output;
        }

        private void SanitizeHtmlNode(HtmlNode node)
        {
            if (node.NodeType == HtmlNodeType.Element)
            {
                // check for blacklist items and remove
                if (BlackList.Contains(node.Name))
                {
                    node.Remove();
                    return;
                }

                // remove CSS Expressions and embedded script links
                if (node.Name == "style")
                {
                    if (string.IsNullOrEmpty(node.InnerText))
                    {
                        if (node.InnerHtml.Contains("expression") || node.InnerHtml.Contains("javascript:"))
                            node.ParentNode.RemoveChild(node);
                    }
                }

                // remove script attributes
                if (node.HasAttributes)
                {
                    for (int i = node.Attributes.Count - 1; i >= 0; i--)
                    {
                        HtmlAttribute currentAttribute = node.Attributes[i];

                        var attr = currentAttribute.Name.ToLower();
                        var val = currentAttribute.Value.ToLower();

                        // remove event handlers
                        if (attr.StartsWith("on"))
                            node.Attributes.Remove(currentAttribute);

                        // remove script links
                        else if (
                            //(attr == "href" || attr== "src" || attr == "dynsrc" || attr == "lowsrc") &&
                            val != null && val.Contains("javascript:"))
                            node.Attributes.Remove(currentAttribute);

                        // Remove CSS Expressions
                        else if (attr == "style" && val != null && val.Contains("expression") || val.Contains("javascript:") || val.Contains("vbscript:"))
                            node.Attributes.Remove(currentAttribute);
                    }
                }
            }

            // Look through child nodes recursively
            if (node.HasChildNodes)
            {
                for (int i = node.ChildNodes.Count - 1; i >= 0; i--)
                {
                    SanitizeHtmlNode(node.ChildNodes[i]);
                }
            }
        }
    }
}

Please note: Use this as a starting point only for your own parsing and review the code for your specific use case! If your needs are less lenient than mine were you can make this much stricter by not allowing src and href attributes or CSS links if your HTML doesn't allow it. You can also check links for external URLs and disallow those - lots of options. The code is simple enough to make it easy to extend to fit your use cases more specifically. It's also quite easy to make this code work using a WhiteList approach if you want to go that route. 
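A quick usage sketch for the class above (the input string is made up); note that passing an explicit blacklist to the static helper replaces the default list rather than adding to it:

    using Westwind.Web.Utilities;

    string input = "<div onclick=\"alert('xss')\"><script>alert('xss');</script><b>Hello</b></div>";

    // Default blacklist: script, iframe, form, object, embed, link, head, meta
    string clean = HtmlSanitizer.SanitizeHtml(input);

    // Custom blacklist - replaces the defaults, so list everything you want removed
    string stricter = HtmlSanitizer.SanitizeHtml(input,
        "script", "iframe", "form", "object", "embed", "link", "head", "meta", "style");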
The code above is semi-generic for allowing full featured HTML fragments that only disallow script related content. The Sanitize method walks through each node of the document and then recursively drills into all of its children until the entire document has been traversed. Note that the code here uses an XmlTextWriter to write output - this is done to preserve XHTML style self-closing tags which are otherwise left as non-self-closing tags. The sanitizer code scans for blacklist elements and removes those elements not allowed. Note that the blacklist is configurable either in the instance class as a property or in the static method via the string parameter list. Additionally the code goes through each element's attributes and looks for a host of rules gleaned from some of the XSS cheat sheets listed at the end of the post. Clearly there are a lot more XSS vulnerabilities, but a lot of them apply to ancient browsers (IE6 and versions of Netscape) - many of these glaring holes (like CSS expressions - WTF IE?) have been removed in modern browsers. What a Pain To be honest this is NOT a piece of code that I wanted to write. I think building anything related to XSS is better left to people who have far more knowledge of the topic than I do. Unfortunately, I was unable to find a tool that worked even closely for me, or even provided a working base. For the project I was working on I had no choice and I'm sharing the code here merely as a base line to start with and potentially expand on for specific needs. It's sad that Microsoft Web Protection Library is currently such a train wreck - this is really something that should come from Microsoft as the systems vendor or possibly a third party that provides security tools. Luckily for my application we are dealing with a authenticated and validated users so the user base is fairly well known, and relatively small - this is not a wide open Internet application that's directly public facing. As I mentioned earlier in the post, if I had my way I would simply not allow this type of raw HTML input in the first place, and instead rely on a more controlled HTML input mechanism like MarkDown or even a good HTML Edit control that can provide some limits on what types of input are allowed. Alas in this case I was overridden and we had to go forward and allow *any* raw HTML posted. Sometimes I really feel sad that it's come this far - how many good applications and tools have been thwarted by fear of XSS (or worse) attacks? So many things that could be done *if* we had a more secure browser experience and didn't have to deal with every little script twerp trying to hack into Web pages and obscure browser bugs. 
So much time wasted building secure apps, so much time wasted by others trying to hack apps… We're a funny species - no other species manages to waste as much time, effort and resources as we humans do :-) Resources: Code on GitHub, Html Agility Pack, XSS Cheat Sheet, XSS Prevention Cheat Sheet, Microsoft Web Protection Library (AntiXss). StackOverflow links: http://stackoverflow.com/questions/341872/html-sanitizer-for-net http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://code.google.com/p/subsonicforums/source/browse/trunk/SubSonic.Forums.Data/HtmlScrubber.cs?r=61 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Security, HTML, ASP.NET, JavaScript.

    Read the article

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3. But I keep getting this error: System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000] at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000] at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000] at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process () at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000] at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000] I have installed certificates using: certmgr -ssl -m smtps://smtp.gmail.com:465 with this output: Mono Certificate Manager - version 2.4.2.3 Manage X.509 certificates and CRL from stores. Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed. X.509 Certificate v3 Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority Issued to: C=US, O=Google Inc, CN=Google Internet Authority Valid from: 06/08/2009 20:43:27 Valid until: 06/07/2013 19:43:27 *** WARNING: Certificate signature is INVALID *** Import this certificate into the CA store ?yes X.509 Certificate v3 Issued from: C=US, O=Google Inc, CN=Google Internet Authority Issued to: C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com Valid from: 04/22/2010 20:02:45 Valid until: 04/22/2011 20:12:45 Import this certificate into the AddressBook store ?yes 2 certificates added to the stores. In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.
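    Not a fix for the underlying chain problem, but a common way to confirm that certificate validation (rather than the SMTP settings) is what is failing is to temporarily override the validation callback; if I remember correctly, Mono also ships with empty certificate stores by default and mozroots --import is the usual way to populate them with the Mozilla roots. A diagnostic-only sketch:

        using System.Net;
        using System.Net.Security;

        // WARNING: this disables server certificate validation entirely.
        // Use it only to confirm the diagnosis, never in production.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) => true;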

    Read the article

  • Java RMI (Server: TCP Connection Idle/Client: Unmarshalexception (EOFException))

    - by Perry Dahl Christensen
    I'm trying to implement Sun Tutorials RMI application that calculates Pi. I'm having some serious problems and I cant find the solution eventhough I've been searching the entire web and several javaskilled people. I'm hoping you can put an end to my frustrations. The crazy thing is that I can run the application from the cmd on my desktop computer. Trying the exact same thing with the exact same code in the exact same directories on my laptop produces the following errors. The problem occures when I try to connect the client to the server. I don't believe that the error is due to my policyfile as I can run it on the desktop. It must be elsewhere. Have anyone tried the same and can you give me a hint as to where my problem is, please? POLICYFILE SERVER: grant { permission java.security.AllPermissions; permission java.net.SocketPermission"*", "connect, resolve"; }; POLICYFILE CLIENT: grant { permission java.security.AllPermissions; permission java.net.SocketPermission"*", "connect, resolve"; }; SERVERSIDE ERRORS: Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\STUDENTcd\ C:start rmiregistry C:java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:/c:/jav a/compute.jar -Djava.rmi.server.hostname=localhost -Djava.security.policy=c:/jav a/servertest.policy engine.ComputeEngine ComputeEngine bound Exception in thread "RMI TCP Connection(idle)" java.security.AccessControlExcept ion: access denied (java.net.SocketPermission 127.0.0.1:1440 accept,resolve) at java.security.AccessControlContext.checkPermission(Unknown Source) at java.security.AccessController.checkPermission(Unknown Source) at java.lang.SecurityManager.checkPermission(Unknown Source) at java.lang.SecurityManager.checkAccept(Unknown Source) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.checkAcceptPermi ssion(Unknown Source) at sun.rmi.transport.tcp.TCPTransport.checkAcceptPermission(Unknown Sour ce) at sun.rmi.transport.Transport$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Unknown Source) at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Sou rce) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Sour ce) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source ) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) CLIENTSIDE ERRORS: Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\STUDENTcd\ C:java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:\C:\jav a\files\ -Djava.security.policy=c:/java/clienttest.policy client.ComputePi local host 45 ComputePi exception: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source) at sun.rmi.server.UnicastRef.invoke(Unknown Source) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unkn own Source) at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source) at $Proxy0.executeTask(Unknown Source) at client.ComputePi.main(ComputePi.java:18) Caused by: java.io.EOFException at java.io.DataInputStream.readByte(Unknown Source) ... 6 more C: Thanks in advance Perry

    Read the article

  • Is using GET with a tokenID for security a good idea?

    - by acidzombie24
    I was thinking about this and it appears POST is only a little less vulnerable and somewhat harder to exploit (due to requiring the user to click something). I read about token IDs and double-submitted cookies and I am not sure what the difference is: http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet#Disclosure_of_Token_in_URL http://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet#Double_Submit_Cookies Right now I have the user id (PK in my table) and a session id, so you can't simply change your cookie ID and act like someone else. Now it seems like I should put the session id as a token in each of my forms and check them, because attackers can't guess these tokens. However, I dislike the idea of putting the session id into the page for people to see. But really, is there a problem with that? Short of the user copy/pasting the HTML, are there any attacks that can happen due to the session id being in plain view in the HTML?
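    One common way to avoid exposing the raw session id in the page is to embed a value derived from it instead, for example an HMAC of the session id under a server-side key; the token is still tied to the session but reveals nothing useful on its own. A minimal sketch (key handling is simplified and the names are made up):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class CsrfToken
        {
            // In practice the key would live in configuration, not in code.
            static readonly byte[] ServerKey = Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

            // Emit this into the form instead of the session id itself.
            public static string Create(string sessionId)
            {
                using (var hmac = new HMACSHA256(ServerKey))
                    return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(sessionId)));
            }

            // On POST, recompute from the current session id and compare.
            public static bool IsValid(string submittedToken, string sessionId)
            {
                return submittedToken == Create(sessionId);
            }
        }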

    Read the article

  • Warning: Cannot modify header information - headers already sent

    - by Pankaj Khurana
    Hi, I am using code available on http://www.forosdelweb.com/f18/zip-lib-php-archivo-zip-vacio-431133/ for creating zip file. First file-zip.lib.php <?php /* $Id: zip.lib.php,v 1.1 2004/02/14 15:21:18 anoncvs_tusedb Exp $ */ // vim: expandtab sw=4 ts=4 sts=4: /** * Zip file creation class. * Makes zip files. * * Last Modification and Extension By : * * Hasin Hayder * HomePage : www.hasinme.info * Email : [email protected] * IDE : PHP Designer 2005 * * * Originally Based on : * * http://www.zend.com/codex.php?id=535&single=1 * By Eric Mueller <[email protected]> * * http://www.zend.com/codex.php?id=470&single=1 * by Denis125 <[email protected]> * * a patch from Peter Listiak <[email protected]> for last modified * date and time of the compressed file * * Official ZIP file format: http://www.pkware.com/appnote.txt * * @access public */ class zipfile { /** * Array to store compressed data * * @var array $datasec */ var $datasec = array(); /** * Central directory * * @var array $ctrl_dir */ var $ctrl_dir = array(); /** * End of central directory record * * @var string $eof_ctrl_dir */ var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00"; /** * Last offset position * * @var integer $old_offset */ var $old_offset = 0; /** * Converts an Unix timestamp to a four byte DOS date and time format (date * in high two bytes, time in low two bytes allowing magnitude comparison). * * @param integer the current Unix timestamp * * @return integer the current date in a four byte DOS format * * @access private */ function unix2DosTime($unixtime = 0) { $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime); if ($timearray['year'] < 1980) { $timearray['year'] = 1980; $timearray['mon'] = 1; $timearray['mday'] = 1; $timearray['hours'] = 0; $timearray['minutes'] = 0; $timearray['seconds'] = 0; } // end if return (($timearray['year'] - 1980) << 25) | ($timearray['mon'] << 21) | ($timearray['mday'] << 16) | ($timearray['hours'] << 11) | ($timearray['minutes'] << 5) | ($timearray['seconds'] >> 1); } // end of the 'unix2DosTime()' method /** * Adds "file" to archive * * @param string file contents * @param string name of the file in the archive (may contains the path) * @param integer the current timestamp * * @access public */ function addFile($data, $name, $time = 0) { $name = str_replace('', '/', $name); $dtime = dechex($this->unix2DosTime($time)); $hexdtime = 'x' . $dtime[6] . $dtime[7] . 'x' . $dtime[4] . $dtime[5] . 'x' . $dtime[2] . $dtime[3] . 'x' . $dtime[0] . $dtime[1]; eval('$hexdtime = "' . $hexdtime . 
'";'); $fr = "\x50\x4b\x03\x04"; $fr .= "\x14\x00"; // ver needed to extract $fr .= "\x00\x00"; // gen purpose bit flag $fr .= "\x08\x00"; // compression method $fr .= $hexdtime; // last mod time and date // "local file header" segment $unc_len = strlen($data); $crc = crc32($data); $zdata = gzcompress($data); $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug $c_len = strlen($zdata); $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize $fr .= pack('v', strlen($name)); // length of filename $fr .= pack('v', 0); // extra field length $fr .= $name; // "file data" segment $fr .= $zdata; // "data descriptor" segment (optional but necessary if archive is not // served as file) $fr .= pack('V', $crc); // crc32 $fr .= pack('V', $c_len); // compressed filesize $fr .= pack('V', $unc_len); // uncompressed filesize // add this entry to array $this -> datasec[] = $fr; // now add to central directory record $cdrec = "\x50\x4b\x01\x02"; $cdrec .= "\x00\x00"; // version made by $cdrec .= "\x14\x00"; // version needed to extract $cdrec .= "\x00\x00"; // gen purpose bit flag $cdrec .= "\x08\x00"; // compression method $cdrec .= $hexdtime; // last mod time & date $cdrec .= pack('V', $crc); // crc32 $cdrec .= pack('V', $c_len); // compressed filesize $cdrec .= pack('V', $unc_len); // uncompressed filesize $cdrec .= pack('v', strlen($name) ); // length of filename $cdrec .= pack('v', 0 ); // extra field length $cdrec .= pack('v', 0 ); // file comment length $cdrec .= pack('v', 0 ); // disk number start $cdrec .= pack('v', 0 ); // internal file attributes $cdrec .= pack('V', 32 ); // external file attributes - 'archive' bit set $cdrec .= pack('V', $this -> old_offset ); // relative offset of local header $this -> old_offset += strlen($fr); $cdrec .= $name; // optional extra field, file comment goes here // save to central directory $this -> ctrl_dir[] = $cdrec; } // end of the 'addFile()' method /** * Dumps out file * * @return string the zipped file * * @access public */ function file() { $data = implode('', $this -> datasec); $ctrldir = implode('', $this -> ctrl_dir); return $data . $ctrldir . $this -> eof_ctrl_dir . pack('v', sizeof($this -> ctrl_dir)) . // total # of entries "on this disk" pack('v', sizeof($this -> ctrl_dir)) . // total # of entries overall pack('V', strlen($ctrldir)) . // size of central dir pack('V', strlen($data)) . // offset to start of central dir "\x00\x00"; // .zip file comment length } // end of the 'file()' method /** * A Wrapper of original addFile Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param array An Array of files with relative/absolute path to be added in Zip File * * @access public */ function addFiles($files /*Only Pass Array*/) { foreach($files as $file) { if (is_file($file)) //directory check { $data = implode("",file($file)); $this->addFile($data,$file); } } } /** * A Wrapper of original file Function * * Created By Hasin Hayder at 29th Jan, 1:29 AM * * @param string Output file name * * @access public */ function output($file) { $fp=fopen($file,"w"); fwrite($fp,$this->file()); fclose($fp); } } // end of the 'zipfile' class ?> My second file newzip.php <? 
include("zip.lib.php"); $ziper = new zipfile(); $ziper->addFiles(array("index.htm")); //array of files // the next three lines force an immediate download of the zip file: header("Content-type: application/octet-stream"); header("Content-disposition: attachment; filename=test.zip"); echo $ziper -> file(); ?> I am getting this warning while executing newzip.php Warning: Cannot modify header information - headers already sent by (output started at E:\xampp\htdocs\demo\zip.lib.php:233) in E:\xampp\htdocs\demo\newzip.php on line 6 I am unable to figure out the reason for the same. Please help me on this. Thanks

    Read the article

  • (PHP) Validation, Security and Speed - Does my app have these?

    - by Devner
    Hi all, I am currently working on a building community website in PHP. This contains forms that a user can fill right from registration to lot of other functionality. I am not an Object-oriented guy, so I am using functions most of the time to handle my application. I know I have to learn OOPS, but currently need to develop this website and get it running soon. Anyway, here's a sample of what I let my app. do: Consider a page (register.php) that has a form where a user has 3 fields to fill up, say: First Name, Last Name and Email. Upon submission of this form, I want to validate the form and show the corresponding errors to the users: <form id="form1" name="form1" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <label for="name">Name:</label> <input type="text" name="name" id="name" /><br /> <label for="lname">Last Name:</label> <input type="text" name="lname" id="lname" /><br /> <label for="email">Email:</label> <input type="text" name="email" id="email" /><br /> <input type="submit" name="submit" id="submit" value="Submit" /> </form> This form will POST the info to the same page. So here's the code that will process the POST'ed info: <?php require("functions.php"); if( isset($_POST['submit']) ) { $errors = fn_register(); if( count($errors) ) { //Show error messages } else { //Send welcome mail to the user or do database stuff... } } ?> <?php //functions.php page: function sql_quote( $value ) { if( get_magic_quotes_gpc() ) { $value = stripslashes( $value ); } else { $value = addslashes( $value ); } if( function_exists( "mysql_real_escape_string" ) ) { $value = mysql_real_escape_string( $value ); } return $value; } function clean($str) { $str = strip_tags($str, '<br>,<br />'); $str = trim($str); $str = sql_quote($str); return $str; } foreach ($_POST as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } foreach ($_GET as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } function validate_name( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( isset($fld) && $fld != '' && !preg_match("/^[a-zA-Z\ ]+$/", $fld)) { $str = "$label: Invalid characters used! Only Lowercase, Uppercase alphabets and Spaces are allowed"; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function validate_email( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( !eregi('^[a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.([a-zA-Z]{2,4})$', $fld) ) { $str = "$label: Invalid format. Please check."; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. 
Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function val_rules( $str, $val_type, $rule='required' ){ switch ($val_type) { case 'name': $val = validate_name( $str, 3, 20, $rule, 'First Name'); break; case 'lname': $val = validate_name( $str, 10, 20, $rule, 'Last Name'); break; case 'email': $val = validate_email( $str, 10, 60, $rule, 'Email'); break; } return $val; } function fn_register() { $errors = array(); $val_name = val_rules( $_POST['name'], 'name' ); $val_lname = val_rules( $_POST['lname'], 'lname', 'optional' ); $val_email = val_rules( $_POST['email'], 'email' ); if ( $val_name != '0' ) { $errors['name'] = $val_name; } if ( $val_lname != '0' ) { $errors['lname'] = $val_lname; } if ( $val_email != '0' ) { $errors['email'] = $val_email; } return $errors; } //END of functions.php page ?> OK, now it might look like there's a lot, but lemme break it down target wise: 1. I wanted the foreach ($_POST as &$value) and foreach ($_GET as &$value) loops to loop through the received info from the user submission and strip/remove all malicious input. I am calling a function called clean on the input first to achieve the objective as stated above. This function will process each of the input, whether individual field values or even arrays and allow only tags and remove everything else. The rest of it is obvious. Once this happens, the new/cleaned values will be processed by the fn_register() function and based on the values returned after the validation, we get the corresponding errors or NULL values (as applicable). So here's my questions: 1. This pretty much makes me feel secure as I am forcing the user to correct malicious data and won't process the final data unless the errors are corrected. Am I correct? Does the method that I follow guarantee the speed (as I am using lots of functions and their corresponding calls)? The fields of a form differ and the minimum number of fields I may have at any given point of time in any form may be 3 and can go upto as high as 100 (or even more, I am not sure as the website is still being developed). Will having 100's of fields and their validation in the above way, reduce the speed of application (say upto half a million users are accessing the website at the same time?). What can I do to improve the speed and reduce function calls (if possible)? 3, Can I do something to improve the current ways of validation? I am holding off object oriented approach and using FILTERS in PHP for the later. So please, I request you all to suggest me way to improve/tweak the current ways and suggest me if the script is vulnerable or safe enough to be used in a Live production environment. If not, what I can do to be able to use it live? Thank you all in advance.

    Read the article

  • How can I make a security token automatically expire in a passive STS setup?

    - by Rising Star
    I have a passive STS set up for a new application I'm working on. I've noticed that when a user's session expires, the user is still authenticated. I would have thought that when the session expires, the user would no longer be authenticated. My boss discussed this with me as I am currently charged with setting up the authentication. He says that it would be good if we could make the user's log on expire after a certain period of inactivity similar to how the session expires. I am familiar with how to sign a user out with a few lines of code. How can I make it so that the user is automatically signed out after a specified period of inactivity? Currently, I have some code in the global.asax file that programmatically checks when the last request was and compares it to the current time; it then signs the user out if a certain period of time has expired.
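    For what it's worth, a minimal sketch of the pattern described above, moved into an event handler; the cookie name, the 20-minute window, the redirect target and SignUserOut (standing in for the existing sign-out code) are all made up, and anything stored in a plain cookie can be edited by the client, so a server-side store or the token's own expiry is preferable:

        // Global.asax.cs
        protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
        {
            if (!Request.IsAuthenticated)
                return;

            HttpCookie activity = Request.Cookies["LastActivity"];
            long ticks;
            if (activity != null && long.TryParse(activity.Value, out ticks) &&
                DateTime.UtcNow - new DateTime(ticks, DateTimeKind.Utc) > TimeSpan.FromMinutes(20))
            {
                SignUserOut();                          // hypothetical wrapper around the existing sign-out lines
                Response.Redirect("~/SignedOut.aspx");  // ends the request
                return;
            }

            // Record the time of this request for the next comparison.
            Response.Cookies.Add(new HttpCookie("LastActivity", DateTime.UtcNow.Ticks.ToString()));
        }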

    Read the article

  • Have the default security settings changed in Windows 7 that would affect IPrincipal.IsInRole?

    - by adrianbanks
    We use NTLM auth in our application to determine whether a user can perform certain operations. We use the IPrincipal of their current Windows login (in WinForms applications), calling IsInRole to check for specific group memberships. To check that a user is a local administrator on the machine, we use: AppDomain.CurrentDomain.SetPrincipalPolicy(PrincipalPolicy.WindowsPrincipal); ... bool allowed = Thread.CurrentPrincipal.IsInRole(@"Builtin\Administrators"); This works if the current user is the Administrator user, or is another user that is a member of the Builtin\Administrators group. In our testing on Windows 7, we have found that this no longer works as expected. The Administrator user still works fine, but any other user that is a member of the Builtin\Administrators group returns false for the IsInRole call. What could be causing this difference? I have a gut feeling that a default setting has changed somewhere (possibly in gpedit), but cannot find anything that looks like the culprit.
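    The usual suspect on Windows 7 (worth confirming against your environment) is UAC: members of Builtin\Administrators get a filtered token unless the process is elevated, so the group is simply not present in the token, while the built-in Administrator account is exempt from filtering by default. A small sketch of the equivalent check:

        using System.Security.Principal;

        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        WindowsPrincipal principal = new WindowsPrincipal(identity);

        // Under UAC this returns false for a non-elevated member of Builtin\Administrators
        // and true once the same process runs elevated ("Run as administrator").
        bool isAdmin = principal.IsInRole(WindowsBuiltInRole.Administrator);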

    Read the article

  • What am I missing in this ASP.NET XSS Security Helper class?

    - by smartcaveman
    I need a generic method for preventing XSS attacks in ASP.NET. The approach I came up with is a ValidateRequest method that evaluates the HttpRequest for any potential issues, and if issues are found, redirect the user to the same page, but in a away that is not threatening to the application. (Source code below) While I know this method will prevent most XSS attacks, I am not certain that I am adequately preventing all possible attacks while also minimizing false positives. So, what is the most effective way to adequately prevent all possible attacks, while minimizing false positives? Are there changes I should make to the helper class below, or is there an alternative approach or third party library that offers something more convincing? public static class XssSecurity { public const string PotentialXssAttackExpression = "(http(s)*(%3a|:))|(ftp(s)*(%3a|:))|(javascript)|(alert)|(((\\%3C) <)[^\n]+((\\%3E) >))"; private static readonly Regex PotentialXssAttackRegex = new Regex(PotentialXssAttackExpression, RegexOptions.IgnoreCase); public static bool IsPotentialXssAttack(this HttpRequest request) { if(request != null) { string query = request.QueryString.ToString(); if(!string.IsNullOrEmpty(query) && PotentialXssAttackRegex.IsMatch(query)) return true; if(request.HttpMethod.Equals("post", StringComparison.InvariantCultureIgnoreCase)) { string form = request.Form.ToString(); if (!string.IsNullOrEmpty(form) && PotentialXssAttackRegex.IsMatch(form)) return true; } if(request.Cookies.Count > 0) { foreach(HttpCookie cookie in request.Cookies) { if(PotentialXssAttackRegex.IsMatch(cookie.Value)) { return true; } } } } return false; } public static void ValidateRequest(this HttpContext context, string redirectToPath = null) { if(context == null || !context.Request.IsPotentialXssAttack()) return; // expire all cookies foreach(HttpCookie cookie in context.Request.Cookies) { cookie.Expires = DateTime.Now.Subtract(TimeSpan.FromDays(1)); context.Response.Cookies.Set(cookie); } // redirect to safe path bool redirected = false; if(redirectToPath != null) { try { context.Response.Redirect(redirectToPath,true); redirected = true; } catch { redirected = false; } } if (redirected) return; string safeUrl = context.Request.Url.AbsolutePath.Replace(context.Request.Url.Query, string.Empty); context.Response.Redirect(safeUrl,true); } }
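    A usage sketch for the helper above, screening every request from Global.asax (the redirect path is made up; the static class needs to be in scope for the extension methods to resolve):

        // Global.asax.cs
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // ValidateRequest is the extension method defined on XssSecurity above:
            // it expires the request's cookies and redirects if the request looks like an XSS attempt.
            HttpContext.Current.ValidateRequest("/Home/SuspiciousRequest");
        }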

    Read the article

  • Have I found a security problem in an API or do I just not understand SSL?

    - by jamieb
    I'm working on building a set of Python bindings around an XML-based API provided by a vendor. The vendor requires that all transactions be conducted over SSL. Using a Linux box, I created a key file and a CSR for my application. Using their self-service web portal, I then generate a certificate using that CSR. Both the key file and the certificate are used when making the SSL request to the API. I'm now working on designing exception classes to make error messages more verbose (and, hopefully, more useful to developers using my bindings). Part of my testing has included altering the key file: transpose a couple characters here, replace 4 or 5 with random characters there, etc. To my surprise, altering the key file had no effect! As long as I didn't change the total length of it, the API didn't complain about a bad key file. The only way I was able to throw an error was by swapping in a completely different key from another application. At that point, the API complained about the Common Name not matching. Is this normal behavior or has the vendor not properly implemented SSL?

    Read the article

  • How can I write a "user can only access own profile page" type of security check in Play Framework?

    - by karianneberg
    I have a Play framework application that has a model like this: a Company has one and only one User associated with it. I have URLs like http://www.example.com/companies/1234, http://www.example.com/companies/1234/departments, http://www.example.com/companies/1234/departments/employees and so on. The numbers are the company IDs, not the user IDs. I want normal users (not admins) to be able to access only their own profile pages, not other people's profile pages. So a user associated with the company with id 1234 should not be able to access the URL http://www.example.com/companies/6789. I tried to accomplish this by overriding Secure.check() and comparing the request parameter "id" to the ID of the company associated with the logged-in user. However, this obviously fails if the parameter is called anything other than "id". Does anyone know how this could be accomplished?

    Read the article

  • How do I copy security information when creating a new folder?

    - by dhh
    In my app I'm creating folders for archiving old stuff from a harddisc. When creating a new folder I must copy all NTFS rights (Groups / Users) from the source folder to the newly created destination folder. Here is what I've written so far: FileSecurity fileSecurity = File.GetAccessControl(filenameSource, AccessControlSections.All); FileAttributes fileAttributes = File.GetAttributes(filenameSource); File.SetAccessControl(filenameDest, fileSecurity); File.SetAttributes(filenameDest, fileAttributes); Is this really all I ought to do or am I missing something important?
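    Since both paths are folders, the Directory/DirectorySecurity overloads may be a closer fit than the File ones; below is a sketch (the folder variables are placeholders) of one variant I've seen used, which round-trips the raw security descriptor so the copied DACL is definitely marked as changed before it is applied, and which avoids AccessControlSections.All because the audit (SACL) part needs extra privilege:

        using System.IO;
        using System.Security.AccessControl;

        // Copy the NTFS permission entries (DACL) from the source folder to the new folder.
        DirectorySecurity sourceSecurity = Directory.GetAccessControl(sourceFolder, AccessControlSections.Access);

        var destinationSecurity = new DirectorySecurity();
        destinationSecurity.SetSecurityDescriptorBinaryForm(
            sourceSecurity.GetSecurityDescriptorBinaryForm(),
            AccessControlSections.Access);
        Directory.SetAccessControl(destinationFolder, destinationSecurity);

        // Attributes are copied separately, as in the original snippet.
        File.SetAttributes(destinationFolder, File.GetAttributes(sourceFolder));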

    Read the article

  • How do I get the security details for a long path?

    - by Biff MaGriff
    Hello, I am doing a file server migration and I'm writing a small C# app to help me map the user permissions so we can put them in user groups. I'm currently using Directory.GetAccessControl(path); However, it fails when it gets to a 263-character file path with: Invalid name. Parameter name: name I get the same error when I use DirectoryInfo.GetAccessControl(); Is there a workaround or alternative to this method? Thanks!

    Read the article

  • ASP.NET MVC security: how to check if a controller method is allowed to execute under the current user's permissions

    - by Gart
    Given an ASP.NET MVC Controller class declaration: public class ItemController : Controller { public ActionResult Index() { // ... } public ActionResult Details() { // ... } [Authorize(Roles="Admin, Editor")] public ActionResult Edit() { // ... } [Authorize(Roles="Admin")] public ActionResult Delete() { // ... } } I need to reflect a list of methods in this class which may be invoked with the current user's permissions. Please share some ideas of what could be done in this case.
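    One possible starting point is plain reflection over the action methods, checking any [Authorize] roles against the current IPrincipal; a sketch that ignores controller-level attributes, global filters and custom authorization logic:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Principal;
        using System.Web.Mvc;

        public static class ActionPermissions
        {
            // Returns the names of the controller's action methods the given user may invoke,
            // looking only at [Authorize(Roles=...)] on the methods themselves.
            public static string[] GetAllowedActions(Type controllerType, IPrincipal user)
            {
                return controllerType
                    .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                    .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType))
                    .Where(m =>
                    {
                        var auth = m.GetCustomAttributes(typeof(AuthorizeAttribute), true)
                                    .Cast<AuthorizeAttribute>()
                                    .FirstOrDefault();
                        if (auth == null || string.IsNullOrEmpty(auth.Roles))
                            return true;                     // no role restriction declared on the action
                        return auth.Roles.Split(',').Select(r => r.Trim()).Any(user.IsInRole);
                    })
                    .Select(m => m.Name)
                    .ToArray();
            }
        }

        // e.g. ActionPermissions.GetAllowedActions(typeof(ItemController), User)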

    Read the article

  • How to use Grails Spring Security Plugin to require logging in before access an action?

    - by Hoàng Long
    Hi all, I know that I can use an annotation or request mapping to restrict access to an ACTION by some specific ROLES. But now I have a different circumstance. My scenario is: every user of my site can create posts, and they can make their own posts public, private, or shared only with some other users. I implement post sharing with a database table PERMISSION, which specifies whether a user has the right to view a post or not. The problem is that when a customer accesses a post through a direct link, how can I determine whether he/she has the privilege to view it? There are 3 circumstances: the post is public, so it can be viewed by anyone (including not-logged-in users); the post is private, so only the logged-in owner can view it; the post is shared, meaning only the logged-in users it is shared with and the owner can view it. I want to process it like this: if the requested post is public: OK. If the requested post is private/shared: I want to redirect the customer to the login page; after logging in, the user will be redirected to the page he wants to see. The problem here is that I can redirect the user to the login controller/auth action, but after that I don't know how to redirect back. The link to every post differs by post_id, so I can't use SpringSecurityUtils.securityConfig.successHandler.defaultTargetUrl. Does anyone know a way to do this?

    Read the article

  • Is sending a hashed password over the wire a security hole?

    - by Ubiquitous Che
    I've come across a system that is in use by a company that we are considering partnering with on a medium-sized (for us, not them) project. They have a web service that we will need to integrate with. My current understanding of proper username/password management is that the username may be stored as plaintext in the database. Every user should have a unique pseudo-random salt, which may also be stored in plaintext. The text of their password must be concatenated with the salt and then this combined string may be hashed and stored in the database in an nvarchar field. So long as passwords are submitted to the website (or web service) over plaintext, everything should be just lovely. Feel free to rip into my understanding as summarized above if I'm wrong. Anyway, back to the subject at hand. The WebService run by this potential partner doesn't accept username and password, which I had anticipated. Instead, it accepts two string fields named 'Username' and 'PasswordHash'. The 'PasswordHash' value that I have been given does indeed look like a hash, and not just a value for a mis-named password field. This is raising a red flag for me. I'm not sure why, but I feel uncomfortable sending a hashed password over the wire for some reason. Off the top of my head I can't think of a reason why this would be a bad thing... Technically, the hash is available on the database anyway. But it's making me nervous, and I'm not sure if there's a reason for this or if I'm just being paranoid.
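    As an aside, a minimal sketch of the storage scheme summarized above (per-user random salt combined with the password before hashing); a key-stretching derivation such as PBKDF2 is generally preferred over a single plain hash pass:

        using System.Security.Cryptography;

        static class PasswordStore
        {
            // Produces a per-user salt and hash to store alongside the username.
            public static void CreateCredential(string password, out byte[] salt, out byte[] hash)
            {
                salt = new byte[16];
                new RNGCryptoServiceProvider().GetBytes(salt);            // unique pseudo-random salt per user

                var kdf = new Rfc2898DeriveBytes(password, salt, 10000);  // PBKDF2: salt + password, many iterations
                hash = kdf.GetBytes(32);                                  // both salt and hash can be stored as-is
            }

            public static bool Verify(string password, byte[] salt, byte[] storedHash)
            {
                byte[] hash = new Rfc2898DeriveBytes(password, salt, 10000).GetBytes(32);
                for (int i = 0; i < hash.Length; i++)
                    if (hash[i] != storedHash[i])
                        return false;
                return true;
            }
        }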

    Read the article
