Search Results

Search found 12720 results on 509 pages for 'moss2007 security'.


  • Port 80 not accessible on Amazon EC2

    - by Jasper
    I have started an Amazon EC2 instance (Red Hat Linux) and installed Apache as well. But when I try http://MyPublicHostName I get no response. I have ensured that my Security Group allows access to port 80. I can reach port 22 for sure, as I am logged into the instance via ssh. Within the Amazon EC2 Linux instance, when I do $ wget http://localhost I do get a response, which confirms that Apache and port 80 are indeed running fine. Since Amazon starts instances in a VPC, do I have to do anything there? In fact I cannot even ping the instance, although I can ssh to it! Any advice? EDIT: Note that I had edited the /etc/hosts file earlier to make the 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):
      127.0.0.1   localhost.localdomain localhost
      w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
      w.x.y.z     ip-w-x-y-z.localdomain

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have a problem that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP (for example file_get_contents('https://...')), to fail. It also broke Redmine. After a chmod 644 it works fine, but that doesn't persist across reboots. So my questions: Why is this? I see no security risk because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script, but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm. In that case I totally understand why it's not accessible, but I assume I can "fix" it the same way I fix /dev/urandom.

    Read the article

  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger, but I am experiencing problems with my WordPress site, which is really annoying. If I understand the situation right, the server is blocking me or denying access to my own website. When I visit the site with Google Chrome, it returns "Oops! Google Chrome could not find..." The same thing happens in Firefox: Firefox can't find the server. But when I check whether my site is online and working through http://www.downforeveryoneorjustme.com/ it says that the site is up and working. Another thing: when I access the website through a proxy, both in Chrome and in Firefox, it works. Why is this? I also installed the plugin Better WP Security 5 days ago. Could the plugin have caused it? I don't remember setting any IPs to be blocked. Also, this happens at random times; sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (Duh, I'm a developer and I need to refresh to see changes.) Or is this a problem with my ISP's DNS server? How can I resolve this, and what are the possible fixes? Thanks in advance! -Jomar

    Read the article

  • Sed: regular expression match lines without <!--

    - by sixtyfootersdude
    I have a sed command to comment out XML elements:

      sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml

    It takes:

      <a>
      <!-- Comment -->
      <b>
      bla
      </b>
      </a>

    and produces:

      <!-- Security: <a> -->
      <!-- Security: <!-- Comment --> -->    // NOTE: there are two end comments.
      <!-- Security: <b> -->
      <!-- Security: bla -->
      <!-- Security: </b> -->
      <!-- Security: </a> -->

    Ideally I would like my sed script to not comment things that are already commented, i.e.:

      <!-- Security: <a> -->
      <!-- Comment -->
      <!-- Security: <b> -->
      <!-- Security: bla -->
      <!-- Security: </b> -->
      <!-- Security: </a> -->

    I could do something like this:

      sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml
      sed 's/^[ \t]*<!-- Security: \(<!--.*-->\) -->/\1/' web.xml

    but I think a one-liner is cleaner (?). This is pretty similar: http://stackoverflow.com/questions/436850/matching-a-line-that-doesnt-contain-specific-text-with-regular-expressions
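    For comparison only (this is not the asker's sed script): in a regex engine that supports negative lookahead, "skip lines that are already comments" can be expressed directly in one substitution. A minimal PHP sketch of that idea, with illustrative file names:

      <?php
      // Comment out every non-blank line of web.xml that is not already an
      // XML comment. The negative lookahead (?!<!--) skips lines whose first
      // non-whitespace characters are "<!--".
      $lines = file('web.xml');
      $out = preg_replace(
          '/^([ \t]*)(?!<!--)(\S.*)$/',
          '$1<!-- Security: $2 -->',
          $lines
      );
      file_put_contents('web.xml.commented', implode('', $out));

    In sed itself, which has no lookahead, the usual trick is to guard the substitution with a negated address (prefixing the s command with /<!--/!) so it only runs on lines that contain no comment opener.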

    Read the article

  • Good maintained privacy Add-On/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons that I discovered:
      - HTTPS Everywhere: has only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
      - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes seems to make "untrustworthy" decisions.
      - RequestPolicy: seems dead (last activity 6 months ago, has some annoying bugs, the official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests, blocks by default, lets you add sites to a whitelist, and once that is done it works interaction-free in the background.
      - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (can never be fully sure, need to trust others).
      - Disconnect: like AdBlock Edge, just looks different; has no interaction possibilities (can never be fully sure, need to trust others, cannot interact even if I wanted to).
      - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.
    All these add-ons together basically block everything insecure, but there are a lot of redundancies: NoScript has a mixed-content blocker, but Firefox has had its own for a while now; the cookie blocker in NoScript is redundant with my Firefox cookie settings; NoScript's XSS blocker is redundant with RequestPolicy; and Disconnect and AdBlock Edge are extremely redundant with each other, though not fully. There are also some bugs (especially in RequestPolicy), and RequestPolicy seems to be dead. All in all, this list is great but has these heavy drawbacks. My favourite set would be a "NoScript Light" (only script blocking, without all the additional hick-hack that is redundant with other add-ons) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfil my wishes?

    Read the article

  • A couple of questions about proxy servers, VPNs and how they work

    - by Q8Y
    I have a couple of questions related to security. Correct me if I'm wrong :) If I want to request something (e.g. visit www.google.com), my computer sends the request to my ISP, and my ISP's proxy server takes the request, acts as a middleman by asking for the site (www.google.com), retrieves it, and sends it back to me. I know that it is done like that. So my first question: in this situation my ISP knows everything I requested, and the proxy server is set by default (when I get an internet subscription). If I use another proxy (let's assume it is highly anonymous and my ISP can't detect my IP address through it), would my traffic still go to my ISP and from there be redirected to the new proxy server I chose? Would the ISP know that I am using another proxy, or would the traffic go through another network rather than my ISP? I don't have a clear picture of this. The second question is related to the first. When I use a VPN, I know that the VPN provides tunneling, encryption and many more features that a proxy can't, so my data travels securely and my ISP can't know what I'm doing. But my questions are: Where does the tunnel start? Does it start after my traffic passes through the ISP's network (since they are the ones responsible for forwarding my data and requests)? If so, then not all of my connection is tunneled; there is a part that is not tunneled, since every time I need to do anything I have to go through my ISP. Correct me if I misunderstand this. I know that a VPN can let my computer be virtually in another place and access its resources (e.g. be as if in my office while I'm at home). If I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP, where is my encrypted data saved: at my ISP or at the VPN service provider? And if I use a VPN, does anyone on the internet know what I'm doing or who I am? Even the VPN service provider? Can they know me? I think they should know the person that is asking for this VPN service, am I right?

    Read the article

  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements:
      - Windows MUST NOT be able to kill my Linux partition or my data disk, neither single files (virus infection) nor by overwriting the whole disk.
      - Windows MUST NOT be able to read the data disk (as extra protection against spyware).
      - Linux may or may not have access to the Windows partition.
      - Both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming, and I want the manufacturer's Windows graphics card driver.
    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past, so I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though. Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I will have to add another disk; if I don't, even better. My CPU is an Intel Core i5 with VT-x and VT-d available, though untested. Ideas I've had so far:
      - Deactivate or hide the other disks until reboot, at a low level. Is that possible? Can the boot loader (GRUB) do this for me?
      - A tiny VM layer: load Windows in a VM that provides access to almost all hardware except the disks. Is there any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
      - A hardware switch to cut power to the disks. Commercial products are expensive, and there are lots of warnings against cheap home-built solutions. Preferably all three hard disks on one switch (one push). Mobile racks: won't wear from daily swapping be a problem?

    Read the article

  • Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.

    - by The Geek
    Microsoft's Security Essentials has been our favorite anti-malware application for a while: it's free, unobtrusive, and it doesn't slow your PC down, and now it's even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Just to be clear and direct with you: we absolutely recommend Microsoft Security Essentials as your anti-malware / anti-virus utility over any other option, and how can you argue? It's totally free! New features in 2.0, which make it even more of a must-download: Network Traffic Inspection integrates into the network system and monitors traffic at a low level without slowing down your PC, so it can actually detect threats before they get to your PC. Internet Explorer Integration blocks malicious scripts before IE even starts running them, clearly a big security advantage. The Heuristic Scanning Engine finds malware that hasn't been previously detected by scanning for certain types of attacks, which provides even more protection than virus definitions alone. These new features put MSE on par with other anti-malware applications, especially the heuristic scanning, which had been the only complaint anybody could make against MSE in the past; now it has it.

    Read the article

  • Wildcards in jnlp template file

    - by Andy
    Since the last security changes in Java 7u40, it is required to sign a JNLP file. This can be done either by adding the final JNLP as JNLP-INF/APPLICATION.JNLP, or by providing a template JNLP as JNLP-INF/APPLICATION_TEMPLATE.JNLP in the signed main jar. The first way works well, but we would like to allow passing a previously unknown number of runtime arguments to our application. Therefore, our APPLICATION_TEMPLATE.JNLP looks like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <jnlp codebase="*">
        <information>
          <title>...</title>
          <vendor>...</vendor>
          <description>...</description>
          <offline-allowed />
        </information>
        <security>
          <all-permissions/>
        </security>
        <resources>
          <java version="1.7+" href="http://java.sun.com/products/autodl/j2se" />
          <jar href="launcher/launcher.jar" main="true"/>
          <property name="jnlp...." value="*" />
          <property name="jnlp..." value="*" />
        </resources>
        <application-desc main-class="...">
          *
        </application-desc>
      </jnlp>

    The problem is the * inside the application-desc tag. It is possible to wildcard a fixed number of arguments using multiple argument tags (see below), but then it is not possible to provide more or fewer arguments to the application (Java Web Start will not start with an empty argument tag).

      <application-desc main-class="...">
        <argument>*</argument>
        <argument>*</argument>
        <argument>*</argument>
      </application-desc>

    Can someone confirm this problem and/or suggest a solution for passing a previously undefined number of runtime arguments to the Java application? Thanks a lot!

    Read the article

  • wss4j: Cannot find key for alias: monit

    - by feiroox
    Hi, I'm using Axis 1.4 and wss4j. When I define different signaturePropFiles in client-config.wsdd for WSDoAllSender and WSDoAllReceiver, each pointing to a different keystore with a different certificate, I am able to use different certificates for sending and receiving. But when I use the same signaturePropFile with the same keystore for both, I get this message when I try to send a message:

      org.apache.ws.security.components.crypto.CryptoBase -- Cannot find key for alias: [monit]
      in keystore of type [jks] from provider [SUN version 1.5] with size [2] and aliases: {other, monit}
      - Error during Signature: ; nested exception is:
        org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
        java.lang.Exception: Cannot find key for alias: [monit]
      org.apache.ws.security.WSSecurityException: Error during Signature: ; nested exception is:
        org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
        java.lang.Exception: Cannot find key for alias: [monit]
        at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:60)
        at org.apache.ws.security.handler.WSHandler.doSenderAction(WSHandler.java:202)
        at org.apache.ws.axis.security.WSDoAllSender.invoke(WSDoAllSender.java:168)
        at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
        at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
        at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
        at org.apache.axis.client.AxisClient.invoke(AxisClient.java:127)
        at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
        at org.apache.axis.client.Call.invoke(Call.java:2767)
        at org.apache.axis.client.Call.invoke(Call.java:2443)
        at org.apache.axis.client.Call.invoke(Call.java:2366)
        at org.apache.axis.client.Call.invoke(Call.java:1812)
        at cz.ing.oopf.model.wsclient.ModelWebServiceSoapBindingStub.getStatus(ModelWebServiceSoapBindingStub.java:213)
        at cz.ing.oopf.wsgemonitor.monitor.util.MonitorUtil.checkStatus(MonitorUtil.java:18)
        at cz.ing.oopf.wsgemonitor.monitor.Test02WsMonitor.runTest(Test02WsMonitor.java:23)
        at cz.ing.oopf.wsgemonitor.Main.main(Main.java:75)
      Caused by: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
        java.lang.Exception: Cannot find key for alias: [monit]
        at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:721)
        at org.apache.ws.security.message.WSSecSignature.build(WSSecSignature.java:780)
        at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:57)
        ... 15 more
      Caused by: java.lang.Exception: Cannot find key for alias: [monit]
        at org.apache.ws.security.components.crypto.CryptoBase.getPrivateKey(CryptoBase.java:214)
        at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:713)
        ... 17 more

    How can I have two certificates for wss4j in the same keystore? Why can it not find my certificate there when I have two certificates in one keystore? I have the same password for both certificates as far as PWCallback (CallbackHandler) is concerned. My properties file:

      org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
      org.apache.ws.security.crypto.merlin.keystore.type=jks
      org.apache.ws.security.crypto.merlin.keystore.password=keystore
      org.apache.ws.security.crypto.merlin.keystore.alias=monit
      org.apache.ws.security.crypto.merlin.alias.password=***
      org.apache.ws.security.crypto.merlin.file=key.jks

    My client-config.wsdd:

      <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
        <globalConfiguration>
          <requestFlow>
            <handler name="WSSecurity" type="java:org.apache.ws.axis.security.WSDoAllSender">
              <parameter name="user" value="monit"/>
              <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/>
              <parameter name="action" value="Signature"/>
              <parameter name="signaturePropFile" value="monit.properties"/>
              <parameter name="signatureKeyIdentifier" value="DirectReference" />
              <parameter name="mustUnderstand" value="0"/>
            </handler>
            <handler type="java:org.apache.axis.handlers.JWSHandler">
              <parameter name="scope" value="session"/>
            </handler>
            <handler type="java:org.apache.axis.handlers.JWSHandler">
              <parameter name="scope" value="request"/>
              <parameter name="extension" value=".jwr"/>
            </handler>
          </requestFlow>
          <responseFlow>
            <handler name="DoSecurityReceiver" type="java:org.apache.ws.axis.security.WSDoAllReceiver">
              <parameter name="user" value="other"/>
              <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/>
              <parameter name="action" value="Signature"/>
              <parameter name="signaturePropFile" value="other.properties"/>
              <parameter name="signatureKeyIdentifier" value="DirectReference" />
            </handler>
          </responseFlow>
        </globalConfiguration>
        <transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender">
        </transport>
      </deployment>

    Listing from keytool:

      keytool -keystore monit-key.jks -v -list
      Enter keystore password:
      Keystore type: JKS
      Keystore provider: SUN

      Your keystore contains 2 entries

      Alias name: other
      Creation date: Jul 22, 2009
      Entry type: PrivateKeyEntry
      Certificate chain length: 1
      Certificate[1]:
      ....

      Alias name: monit
      Creation date: Oct 19, 2009
      Entry type: trustedCertEntry

    Read the article

  • Session Id in url and/or cookie? [closed]

    - by Jacco
    Most people advise against rewriting every (internal) URL to include the session id (both GET and POST). The standard argument against it seems to be: if an attacker gets hold of the session id, they can hijack the session, and with the session id in the URL it easily leaks to the attacker (via the referer header, etc.). But what if you put the session id in both an (encrypted) cookie and the URL, and decline the request if the session id in either the cookie or the URL is missing, or if they do not match? Let's pretend the website in question is free of XSS holes, the cookie encryption is strong enough, etc. Then what is the increased risk of rewriting every URL to include the session id?
    UPDATE: @Casper That is a very good point. So up to now there are two reasons: it is bad for search engines / SEO if used in the public part of the website, and it can cause trouble when users post a URL with a session id on a forum, send it through email or bookmark the page; apart from "it increases the security risk", where it is not clear what the increased risk is. Some background info: I have a website that offers a blog-like service to travellers. I cannot be sure cookies work, nor can I require cookies to work. Most computers in internet cafes are old and not (even close to) up to date; the user has no control over them, and the connection can be very unreliable in some more 'off the beaten path' locations. Binding the session to an IP address is not possible; some places use load-balancing proxies with multiple IP addresses (and from China there is The Great Firewall). Upon receiving the first cookie back, I flag cookies as mandatory. However, if cookies were flagged as mandatory but the cookie is not there, I ask for their password once more, knowing their session from the URL. (The cookies also contain a one-time token, but that's not the point of this question.)
    UPDATE 2: The conclusion seems to be that there are no extra *security* issues when you expose your session id through the URL while also keeping a copy of the session id in an encrypted cookie. Do not hesitate to add additional information about any possible security implications.
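    For what it's worth, a minimal sketch of the double check described above, assuming PHP; the cookie name, the GET parameter name and the key handling are illustrative assumptions, not a prescription:

      <?php
      // Honour the request only when the session id in the URL matches the
      // copy carried in an encrypted cookie; otherwise decline it.
      const SID_COOKIE = 'sidcopy';                              // illustrative name
      const SID_KEY    = 'replace-with-a-random-32-byte-key';    // server-side only

      // encrypt_sid() is what would be called when first issuing the cookie.
      function encrypt_sid(string $sid): string {
          $iv = random_bytes(16);
          $ct = openssl_encrypt($sid, 'aes-256-cbc', SID_KEY, OPENSSL_RAW_DATA, $iv);
          return base64_encode($iv . $ct);
      }

      function decrypt_sid(string $cookie): ?string {
          $raw = base64_decode($cookie, true);
          if ($raw === false || strlen($raw) <= 16) {
              return null;
          }
          $sid = openssl_decrypt(substr($raw, 16), 'aes-256-cbc', SID_KEY,
                                 OPENSSL_RAW_DATA, substr($raw, 0, 16));
          return $sid === false ? null : $sid;
      }

      $urlSid    = $_GET['sid'] ?? '';
      $cookieSid = isset($_COOKIE[SID_COOKIE]) ? decrypt_sid($_COOKIE[SID_COOKIE]) : null;

      if ($urlSid === '' || $cookieSid === null || !hash_equals($cookieSid, $urlSid)) {
          http_response_code(403);        // missing or mismatched: decline the request
          exit('Session check failed.');
      }
      session_id($urlSid);                // resume the session named in the URL
      session_start();

    In effect the URL alone is no longer enough to hijack the session, which matches the reasoning in the question; a real key would be stored outside the code and rotated.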

    Read the article

  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts.
    Use case: Essentially, when a user logs in, we want to detect whether the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, and the user may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and to deauthorize any devices if they so choose. We require a solution that is very light touch (meaning it requires no client-side software installation) and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, using local storage, degrading to flash cookies, and then normal cookies).
    At first blush: Track two separate values (local data or cookies): a hash representing a secure sign-on token, as well as a device token. Both values are driven (and recorded) by the web application and dictated to the client. The SSO token depends on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce.
    Problem: With this solution, it's possible for someone to just copy the SSO and device tokens and use them in another request. While the sequence number will help me detect such abuse and thus deauthorize the device, the detection and response can only happen after the valid device and the malicious request have both attempted access, which is ample time for damage to be done. I feel like using an HMAC would be better: track the device and the sequence, create a nonce and timestamp, hash them with a private key, then send the hash plus those values as plain text. The server does the same (in addition to validating the device and sequence) and compares. That seems much easier and much more reliable... assuming we can securely negotiate, exchange, and store private keys.
    Question: So then, how can I securely negotiate a private key for an authorized device, and then securely store that key? Is it at least more feasible if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or is there something I can do to my original draft to mitigate the vulnerability I describe?
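    As an illustration of the HMAC variant sketched in the question (not an answer to the key-negotiation part), here is a rough server-side check in PHP; the field names, the 5-minute freshness window and the way the per-device secret is stored are all assumptions:

      <?php
      // Verify a device-bound HMAC: the client sends its device id, sequence
      // number, nonce and timestamp plus an HMAC over those values, computed
      // with a per-device secret that the server also stores.
      function verify_device_token(array $req, string $deviceSecret, int $lastSeq): bool {
          if (abs(time() - (int)$req['ts']) > 300) {   // reject stale timestamps
              return false;
          }
          if ((int)$req['seq'] <= $lastSeq) {          // sequence must strictly advance
              return false;
          }
          $payload  = implode('|', [$req['device'], $req['seq'], $req['nonce'], $req['ts']]);
          $expected = hash_hmac('sha256', $payload, $deviceSecret);
          // A full implementation would also record nonces to reject reuse.
          return hash_equals($expected, (string)($req['hmac'] ?? ''));
      }

      // Illustrative usage, assuming $row was fetched from an authorized-devices table:
      //   $ok = verify_device_token($_POST, $row['device_secret'], (int)$row['last_seq']);

    Because the secret never travels with the request, copying the visible values does not let an attacker forge the next request, which is the property the question is after.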

    Read the article

  • showSettings callback in Flex?

    - by Jim Robert
    I am pretty new to Flex, so forgive me if this is an obvious question. Is there a way to open Security.showSettings (flash.system.Security) with a callback, or at least to detect whether it is currently open? My Flex application is used for streaming audio and is normally controlled by JavaScript, so I keep it hidden in normal use (by absolutely positioning it off the page). When I need microphone access I have to make the Flash settings dialog visible, which works fine: I move it into view and open the dialog. When the user closes it, I need to move it back off the screen so they don't see an empty Flex app sitting there after they change their settings. Thanks :)

    Read the article

  • How to secure authorization of methods

    - by Kurresmack
    I am building a web site in C# using MVC.Net. How can I ensure that no unauthorized person can access my methods? What I mean is that I want to make sure that only admins can create articles on my page. If I put this logic in the method that actually adds the article to the database, wouldn't I have business logic in my data layer? Is it good practice to have a separate security layer that always sits between the data layer and the business layer? The problem is that if I protect things at a higher level, I will have to have checks in many places, and it is more likely that I miss one place and users can bypass security. Thanks!

    Read the article

  • Custom certificate as proof of transaction

    - by Andy
    I'm developing a site where a user conducts a given transaction and, once it is completed, the user is issued a 'secure certificate'. The certificate serves as proof of the transaction, and the user is able to upload the certificate at a later stage to view the details of the transaction. At the moment I'm using a custom XML document with encrypted fields. It works perfectly, but I would like a standardized approach, such as an X.509 certificate. I'm no encryption expert, but from what I gather, X.509 is more geared towards SSL certificates issued by a CA. Is it possible to create your own valid CRT file? As a test, I created a CRT file with the example provided on Wikipedia. However, when I open the file in Windows I get this warning: "Invalid Public Key Security Object File - This file is invalid as the following: Security Certificate." I'm not having much luck here, so it's time to ask the experts. What direction should I be heading in? Any guidance would be greatly appreciated.
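    One standardized alternative to a custom encrypted XML format is a detached signature over the transaction data: the user keeps the data plus a signature, and the site can later verify that nothing was altered. A minimal PHP/OpenSSL sketch, with illustrative field names; a fresh key pair is generated only to keep the example self-contained, whereas a real site would load a persistent private key:

      <?php
      // Issue a signed "proof of transaction" and verify it later.
      $keyPair = openssl_pkey_new([
          'private_key_bits' => 2048,
          'private_key_type' => OPENSSL_KEYTYPE_RSA,
      ]);
      $publicPem = openssl_pkey_get_details($keyPair)['key'];

      $transaction = json_encode([            // illustrative transaction fields
          'id'     => 'TX-12345',
          'user'   => 'alice',
          'amount' => '99.00',
          'time'   => date('c'),
      ]);

      // Issue: sign the transaction and hand both parts to the user.
      openssl_sign($transaction, $signature, $keyPair, OPENSSL_ALGO_SHA256);
      $receipt = json_encode([
          'transaction' => $transaction,
          'signature'   => base64_encode($signature),
      ]);

      // Later, when the user uploads the receipt: verify it was not altered.
      $data  = json_decode($receipt, true);
      $valid = openssl_verify($data['transaction'],
                              base64_decode($data['signature']),
                              $publicPem, OPENSSL_ALGO_SHA256) === 1;
      var_dump($valid);   // true unless the receipt was tampered with

    This gives tamper-evidence without needing a CA-issued X.509 certificate; if interoperability with standard tooling matters, the same idea exists in standardized form as XML-DSig or CMS/PKCS#7 signatures.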

    Read the article

  • Can an Aspect conditionally render parts of a JSP page?

    - by Scott The Scot
    At present the JSP pages have normal authorize tags to conditionally render links, information, etc. The website is on the intranet, and we're using Spring Security 2.0.4. I've now got a business user who wants to allow all roles to access everything for the first few weeks, then gradually add the security back in as feedback is gathered from the business. Rather than go through every page removing the authorize tags, only to have to put them back in, is it possible to configure these through an aspect, or is there any other way to externalize this into a config file? I've found Spring's MethodSecurityInterceptor and the metadata tags, but these wouldn't give me the externalization. I've been on Google for the last hour and am now pretty sure this can't be done, but would love to find out I haven't been asking the right questions. Advice appreciated.

    Read the article

  • Is it possible to put binary image data into HTML markup and have the image displayed as usual in the browser?

    - by Joern Akkermann
    It's an important security issue, and I'm sure this should be possible. A simple example: you run a community portal. Users are registered and upload their pictures. Your application has security rules governing when a picture is allowed to be displayed; for example, users must be friends with each other in the system in order to view someone else's uploaded pictures. Here comes the problem: it is possible that someone crawls the image directories of your server, and you want to protect your users from such attacks. If it's possible to put the binary data of an image directly into the HTML markup, you can restrict file-system access to your image directories to the user and group your web application runs as, and emit the image data directly in the HTML. The only remaining weakness is then the password of the user that your web app runs as. Is this already possible?
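    Inlining the bytes as a data: URI is one way to do what is described above. A minimal PHP sketch; the path and the access check are illustrative (current_user_may_view is a hypothetical function standing in for whatever friendship check the portal already has):

      <?php
      // Emit an image stored outside the web root as an inline data: URI, so
      // the file itself is never reachable by crawling a public image directory.
      function inline_image(string $path): string {
          $mime = mime_content_type($path);             // requires the fileinfo extension
          $data = base64_encode(file_get_contents($path));
          return sprintf('<img src="data:%s;base64,%s" alt="">', $mime, $data);
      }

      // Illustrative usage, guarded by the application's own access rules:
      //   if (current_user_may_view($ownerId)) {
      //       echo inline_image('/srv/private/pictures/1234.jpg');
      //   }

    The trade-off is that base64 inflates the payload by roughly a third and the image cannot be cached separately; an alternative with the same security property is a small PHP script that performs the access check and then streams the file with readfile(), keeping the directory itself unreadable to the web server's document root.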

    Read the article

  • Best Practice to Implement Secure Remember Me

    - by Yan Cheng CHEOK
    Sometimes I come across a web development framework which doesn't provide an authentication feature like the built-in authentication in ASP.NET. I was wondering what security measures need to be considered when implementing a "Remember Me" login feature by hand. Here is what I usually do:
      1) Store the user name in a cookie. The user name is not encrypted.
      2) Store a secret key in a cookie. The secret key is generated using a one-way function based on the user name. The server verifies the secret key against the user name, to ensure the user name has not been changed.
      3) Use HttpOnly cookies. http://www.codinghorror.com/blog/2008/08/protecting-your-cookies-httponly.html
    Is there anything else I could be missing that could possibly lead to a security hole?
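    A commonly recommended alternative, sketched below in PHP, is to make the cookie value a random token per device rather than anything derived from the user name, and to store only a hash of it server-side. The table layout, cookie name and 30-day lifetime are illustrative assumptions:

      <?php
      // Issue a remember-me cookie made of a random selector plus a random
      // validator. Only a hash of the validator is stored, so a leaked
      // database does not yield usable cookies.
      function issue_remember_me(PDO $db, int $userId): void {
          $selector  = bin2hex(random_bytes(9));
          $validator = bin2hex(random_bytes(32));

          $stmt = $db->prepare(
              'INSERT INTO remember_tokens (selector, validator_hash, user_id, expires)
               VALUES (?, ?, ?, ?)');
          $stmt->execute([$selector, hash('sha256', $validator), $userId,
                          date('Y-m-d H:i:s', time() + 30 * 86400)]);

          setcookie('remember', $selector . ':' . $validator, [   // PHP 7.3+ options array
              'expires'  => time() + 30 * 86400,
              'path'     => '/',
              'secure'   => true,      // only over HTTPS
              'httponly' => true,      // not readable by JavaScript
              'samesite' => 'Lax',
          ]);
      }

      // Illustrative usage after a successful password login:
      //   issue_remember_me($db, $userId);

    Verification then looks up the row by selector and compares hash('sha256', $validator) with hash_equals(); the plain user name never needs to go into the cookie at all, and deleting the row revokes that device.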

    Read the article

  • How to implement a good system for login/out into a webapp

    - by Brandon Wang
    I am one of the developers at PassPad, a secure password generator and username storage system. We're still working on it, but I have a few questions on the best way to implement a secure login/logout system. Right now, the plan is for the login system to save a cookie with the username and a session key, and that is all that serves as authentication; the server verifies that the two match, and a new key is created on login/logout. This is a security-related web app, and while we don't actually store any information that might make the user queasy, because it is security-oriented we need to at least appear secure in a way the user would be happy with. Is there a better way to implement a login/logout system in PHP? Preferably one that won't take too much coding time or server resources. Is there anything else I need to implement, like brute-force protection, etc.? How would I go about that?
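    A sketch of the usual PHP-session approach, rather than a hand-rolled username/key cookie; it assumes session_start() has already run, and credentials_valid() is a hypothetical stand-in for whatever password check the app uses:

      <?php
      // Regenerate the session id whenever the privilege level changes, and
      // destroy both the session data and the session cookie on logout.
      function login(string $user, string $pass): bool {
          if (!credentials_valid($user, $pass)) {   // illustrative credential check
              return false;
          }
          session_regenerate_id(true);              // discard the pre-login id
          $_SESSION['user']      = $user;
          $_SESSION['logged_in'] = true;
          $_SESSION['login_at']  = time();
          return true;
      }

      function logout(): void {
          $_SESSION = [];
          if (ini_get('session.use_cookies')) {
              $p = session_get_cookie_params();
              setcookie(session_name(), '', time() - 42000,
                        $p['path'], $p['domain'], $p['secure'], $p['httponly']);
          }
          session_destroy();
      }

    Letting PHP generate and store the session id server-side avoids inventing a key scheme, and the username no longer needs to live in the cookie; brute-force protection would sit in front of credentials_valid() as a per-account and per-IP attempt counter with a delay or lockout.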

    Read the article

  • What are the weaknesses of this user authentication method?

    - by byronh
    I'm developing my own PHP framework. It seems all the security articles I have read use vastly different methods for user authentication than I do, so I could use some help in finding security holes. Some information that might be useful before I start: I use mod_rewrite for my MVC URLs; passwords are SHA-1 and MD5 hashed with a 24-character salt unique to each user; mysql_real_escape_string and/or variable typecasting is applied to everything going in, and htmlspecialchars to everything coming out. The step-by-step process:
      - At the top of every page: session_start(); session_regenerate_id();
      - If the user logs in via the login form, generate a new random token to put in the user's MySQL row. A hash is generated based on the user's salt (from when they first registered) and the new token. Store the hash and the plaintext username in session variables, and duplicate them in cookies if 'Remember me' is checked.
      - On every page, check for cookies. If cookies are set, copy their values into session variables. Then compare $_SESSION['name'] and $_SESSION['hash'] against the MySQL database. Destroy all cookies and session variables if they don't match, so the user has to log in again.
      - If the login is valid, some of the user's information from the MySQL database is stored in an array for easy access. So far, I've assumed that this array is clean, so when limiting user access I refer to user.rank and deny access if it's below what's required for that page.
    I've tried to test all the common attacks like XSS and CSRF, but maybe I'm just not good enough at hacking my own site! My system seems way too simple for it to actually be secure (the security code is only 100 lines long). What am I missing? I've also spent a lot of time searching for the vulnerabilities of mysql_real_escape_string, but I haven't found any information that is up to date (everything is from several years ago at least and has apparently been fixed). All I know is that the problem had something to do with encoding. If that problem still exists today, how can I avoid it? Any help will be much appreciated.
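    For comparison, the built-in primitives now cover the two riskiest parts described above, password storage and query escaping. A short sketch with illustrative table and column names, assuming the form fields are called name and password:

      <?php
      // Password storage with password_hash() (bcrypt by default, salt handled
      // internally) and parameterised queries instead of mysql_real_escape_string().
      session_start();
      $db = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'dbuser', 'dbpass',
                    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

      $name          = $_POST['name'] ?? '';
      $plainPassword = $_POST['password'] ?? '';

      // Registration:
      $stmt = $db->prepare('INSERT INTO users (name, pass_hash) VALUES (?, ?)');
      $stmt->execute([$name, password_hash($plainPassword, PASSWORD_DEFAULT)]);

      // Login: fetch by name only, then compare with password_verify().
      $stmt = $db->prepare('SELECT id, pass_hash FROM users WHERE name = ?');
      $stmt->execute([$name]);
      $row = $stmt->fetch(PDO::FETCH_ASSOC);

      if ($row && password_verify($plainPassword, $row['pass_hash'])) {
          session_regenerate_id(true);
          $_SESSION['user_id'] = (int)$row['id'];
      }

    The encoding issue the question alludes to involved multibyte character sets confusing mysql_real_escape_string(); declaring the connection charset (as in the DSN above) and using bound parameters sidesteps it entirely.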

    Read the article

  • Using WCF HttpBindings on a LAN

    - by dcw
    We have a WCF-based client/server system that operates over a LAN. We've been getting along OK by using NetTcpBinding, chosen because we couldn't get either HTTP binding to work between hosts (within a single host it works fine, but that is not useful for the production environment). We're now back at the point where we want to explore using either BasicHttpBinding or WsHttpBinding, but we simply can't see the server from the client: even entering the endpoint address in IE fails to reach the service. Is there something simple we've overlooked? We're not specifying any security settings (or anything else, for that matter). Should we be doing so (e.g. explicitly setting the security mode to None)?

    Read the article

  • Dealing with passwords securely

    - by Krt_Malta
    Hi, I have a Java web service and a Java web client making use of this service. One of the functions is to create a new user account. My two concerns are: (1) how do I send the user's password securely from the client, and (2) how do I store the user's password securely on the server? How can I achieve these? I basically know the theory behind security, security algorithms, etc., but can anyone give me some advice on how I should go about coding this? Could anyone point me to some good (and, if possible, not too complicated) examples to follow, since the examples I found on the Internet seemed rather convoluted? Thanks a lot and regards, Krt_Malta

    Read the article

  • Is it possible for a XSS attack to obtain HttpOnly cookies?

    - by Dan Herbert
    Reading this blog post about HttpOnly cookies made me start thinking: is it possible for an HttpOnly cookie to be obtained through any form of XSS? Jeff mentions that it "raises the bar considerably", but makes it sound like it doesn't completely protect against XSS. Aside from the fact that not all browsers support this feature properly, how could a hacker obtain a user's cookies if they are HttpOnly? I can't think of any way to make an HttpOnly cookie send itself to another site or be read by script, so it seems like a safe security feature, but I'm always amazed at how easily some people can work around many security layers. In the environment I work in, we use IE exclusively, so other browsers aren't a concern. I'm looking specifically for other ways this could become an issue that don't rely on browser-specific flaws.
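    For reference, a short PHP sketch of setting the flag on both the session cookie and a custom cookie, so document.cookie never exposes them to injected script; cookie names, values and lifetimes are illustrative:

      <?php
      // Mark cookies as HttpOnly (and Secure) before the session starts.
      ini_set('session.cookie_httponly', '1');
      ini_set('session.cookie_secure', '1');
      session_start();

      setcookie('prefs', 'compact', [          // PHP 7.3+ options array
          'expires'  => time() + 86400,
          'path'     => '/',
          'secure'   => true,
          'httponly' => true,
          'samesite' => 'Strict',
      ]);

    Historically, the known ways around the flag did not read document.cookie at all: cross-site tracing abused the HTTP TRACE method to echo cookies back into script-readable responses, and some older browsers exposed Set-Cookie headers through XMLHttpRequest's response headers, which is why disabling TRACE and keeping browsers current is usually mentioned alongside HttpOnly.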

    Read the article
