Search Results

Search found 16593 results on 664 pages for 'adf security deploy'.


  • Watchguard Firebox "split" fibre optic line into 2 interfaces

    - by fRAiLtY-
    We have a requirement on our Watchguard Firebox XTM505 to split our incoming external interface, in this case a dedicated 100/100 fibre optic leased line. We use the line in our office of approx. 30 machines, but we also re-sell it to an external company who utilise it to provide wireless internet solutions to the public. The current infrastructure is as follows:

        Leased line in -> Juniper SRX210 (managed by the ISP) -> unmanaged Netgear switch -> one cable into our firewall and office network, one cable to our external provider's core router (managed by them).

    We have been informed that the unmanaged switch in this position poses a security risk, and that a good option would be to have our Watchguard firewall perform the split: our office would be separated onto a trusted interface, and the external line would be "passed through" to the provider's managed router. The Watchguard is allegedly capable of doing this, and also of rate-limiting the interfaces, e.g. 20 Mbps for the trusted interface and 80 Mbps for the "pass-through". However, Watchguard technical support don't seem to understand what we're trying to achieve. Can anyone advise whether this is possible on a Watchguard device, and how? Or is there a better way of achieving this, perhaps with a managed switch instead of the unmanaged one? Cheers


  • What to do after a fresh Linux install on a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL5 on it, but I would like to know: if you get a new production server, what are the first 5 things you would do after a fresh Linux install (configuration/security/reliability wise)? EDIT: Added more information regarding the server environment and server roles:
    - The server will be inside my company's intranet/firewall.
    - The server will receive files (GBs) in binary form from another internal server. The application installed on this server is in charge of "translating" all that binary data into human-readable output, and the server will get queried for this information.
    - Only 2-3 (max) users will be logging in.
    - (2) 145GB HDs in RAID1 for the OS and (2) 600GB HDs in RAID1 for data.
    I know I may not get the perfect guideline, but at least something that's better than leaving everything on default.
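
    A minimal first-pass checklist along those lines might look like this (a sketch, assuming RHEL5 with yum configured; run it from the console, since the firewall steps briefly interrupt network access, and the ports and address range are illustrative):

        # 1. Bring the system fully up to date
        yum update -y

        # 2. Create a non-root admin account and forbid root SSH logins
        useradd appadmin && passwd appadmin
        sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
        service sshd restart

        # 3. Host firewall: allow only what is needed, then default-deny
        iptables -F
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -p tcp --dport 3306 -s 10.0.0.0/8 -j ACCEPT   # MySQL, intranet only (example range)
        iptables -P INPUT DROP
        service iptables save

        # 4. Audit and disable services you don't need
        chkconfig --list | grep ':on'

        # 5. Keep clocks sane for logs and troubleshooting
        chkconfig ntpd on && service ntpd start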


  • What is the risk of introducing non-standard image machines into a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64-bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.


  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/m VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)
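
    For reference, the self-service versions of items 1 and 3 on Ubuntu are only a few commands, whatever host you end up on (a sketch, not a substitute for the hand-holding you're after; the 'Apache Full' profile assumes Apache is installed):

        # 1. Simple host firewall with ufw
        sudo ufw default deny incoming
        sudo ufw allow OpenSSH
        sudo ufw allow 'Apache Full'
        sudo ufw enable

        # 3. Automatic security updates
        sudo apt-get install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades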


  • How to deploy a single website to multiple physical servers?

    - by user70122
    I want to deploy my PHP- and Ajax-based dating website to multiple physical servers. Does anyone know a method for this? Is there any way to replicate the files and settings (and also the user input) across all of the physical servers, so that the servers do the following jobs:

    1. Act as a network, speeding up the process and giving the end user a robust service.
    2. Store the data on all of the servers' hard drives.
    3. Each server has an internet connection from a different provider, so that if (a) any of the internet providers goes down, or (b) there is a hardware or power failure, the website still runs without a break and without any data loss.

    Does anyone know about an online course or some comprehensive tutorials on this topic?
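
    As a rough illustration of one common pattern, the code/static-file part of the replication is often just scheduled rsync from a primary server to its mirrors, while user input lives in a replicated database rather than in files (a sketch; hostnames and paths are made up, and it assumes SSH key access between the servers):

        # Push the application code from the primary to each mirror
        for host in web2.example.com web3.example.com; do
            rsync -az --delete /var/www/site/ "deploy@$host:/var/www/site/"
        done

    The multi-provider failover in point 3 is usually handled separately with DNS failover or a load balancer, and MySQL replication covers the shared user data; those are the terms worth searching tutorials for.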


  • UNC shared path not accessible though necessary permissions are set

    - by Vysakh
    I have 2 environments, A and B. A is the original environment, whereas B is a clone of A, identical except for the AD servers. B's AD server has been given a trust relationship with A, so that all the service and user accounts of A can be used in B too. And the trust works fine, perfect!! But I encounter some issues accessing UNC paths (\\server2\shared) with these service accounts. I checked the A environment, and all the permissions set in that environment are set in B too (already set, since it is a clone of A), but the issue is with the B environment only. And FYI, the user is an owner of that folder in both environments. I tried creating a folder inside the share (\\server2\shared) using the command prompt, but it failed with the error "access denied". As a workaround I added that user in the "Security" tab of the folder permissions, and after that it worked fine. But this was not done in the original environment. Is this something related to the trust relationship? Why does the share to the same location for the same user work differently in the 2 environments, though they've been set with the same permissions? FYI, these are Windows 2003 servers. Can someone please help.
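
    When comparing the two environments, it may help to remember that share-level permissions are stored separately from the NTFS ACLs and might not have been carried over by the clone. A quick way to compare both layers from the command line on each file server (a diagnostic sketch using Windows 2003 era tools; the folder path is illustrative):

        rem Share-level permissions (independent of the NTFS ACL)
        net share shared

        rem NTFS ACL on the underlying folder
        cacls D:\shared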


  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\\server\share' that has a documents folder that I want to access using '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my application's link to the file; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html, under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by putting a context into the server.xml file. So I put:

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">

          <!-- SingleSignOn valve, share authentication between web applications
               Documentation at: /docs/config/valve.html -->
          <!--
          <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
          -->

          <!-- Access log processes all example.
               Documentation at: /docs/config/valve.html
               Note: The pattern used is equivalent to using pattern="common" -->
          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                 prefix="localhost_access_log." suffix=".txt"
                 pattern="%h %l %u %t &quot;%r&quot; %s %b" />

          <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    The Context at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?
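
    Two things may be worth checking here (suggestions based on the setup described, not a confirmed diagnosis). First, since only /foo/Documents needs to come off the share, Tomcat 7's aliases attribute lets the webapp keep a local docBase and graft the share in under one sub-path (a sketch):

        <!-- docBase stays local; only /Documents is served from the share -->
        <Context path="/foo" docBase="foo" reloadable="false"
                 aliases="/Documents=//server/share/documents" />

    Second, when Tomcat runs as a Windows service under the default LocalSystem account, it normally has no rights on network shares at all, so either configuration can fail until the service is switched to a domain account that can read \\server\share.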


  • Port 80 not accessible on Amazon EC2

    - by Jasper
    I have started an Amazon EC2 instance (Linux Red Hat) and Apache as well. But when I try http://MyPublicHostName I get no response. I have ensured that my security group allows access to port 80. I can reach port 22 for sure, as I am logged into the instance via SSH. Within the Amazon EC2 Linux instance, when I do:

        $ wget http://localhost

    I do get a response. This confirms Apache and port 80 are indeed running fine. Since Amazon starts instances in a VPC, do I have to do anything there? In fact I cannot even ping the instance, although I can SSH to it! Any advice? EDIT: Note that I had edited the /etc/hosts file earlier to make a 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):

        127.0.0.1   localhost.localdomain localhost
        w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
        w.x.y.z     ip-w-x-y-z.localdomain
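
    A couple of checks that can narrow this down (a diagnostic sketch; assumes a Red Hat style system with the usual tools installed):

        # Is Apache listening on all interfaces, or only on 127.0.0.1?
        sudo netstat -tlnp | grep ':80'

        # Is the instance's own iptables firewall dropping port 80,
        # independently of the EC2 security group?
        sudo iptables -L -n

    Also note that a failed ping proves little on EC2: security groups block ICMP unless it is explicitly allowed, so an instance can be perfectly reachable on open TCP ports while ignoring pings.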


  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This causes SSL engines, at least in PHP, for example file_get_contents('https://...'), to fail. It also broke Redmine. After a chmod 644 it works fine, but that doesn't survive a reboot. So my question: why is this? I see no security risk, because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script or so... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm. In that case I totally understand why it's not accessible, but I assume I can "fix" it the same way as /dev/urandom.
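
    A minimal way to reapply the mode on every boot (a sketch: inside an OpenVZ container there is usually no udev managing device nodes, so a plain rc.local entry is the pragmatic route; note that 0666 is the conventional mode for /dev/urandom on stock installs, since non-root processes are also allowed to write entropy back):

        # /etc/rc.local -- runs once at the end of each multi-user boot
        chmod 0666 /dev/urandom
        exit 0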


  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger. However, I am experiencing problems with my WordPress site, and this is really annoying. If I understood the situation right, the server is blocking me or denying access to my own website. When I visit the site with Google Chrome, it returns: "Oops! Google Chrome could not find..." The same thing happens in Firefox: Firefox can't find the server. But when I check whether my site is online through http://www.downforeveryoneorjustme.com/, it says that the site is up and working. Another thing: when I access the website through a proxy, both in Chrome and in Firefox, it works. Why is this? I also installed the plugin Better WP Security 5 days ago. Could the plugin have caused it? I don't remember setting any IPs to be blocked, though. Also, this happens at random times; sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (Duh, I'm a developer and I need to refresh to see changes.) Or is this a problem with my ISP's DNS server? How can I resolve this, and what are the possible fixes? Thanks in advance! -Jomar
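
    One quick way to separate the DNS theory from the blocking theory is to resolve the name against both the default resolver and a public one (a sketch; substitute your real domain):

        # Default resolver (usually your ISP's)
        nslookup www.yoursite.com

        # Google's public resolver, for comparison
        nslookup www.yoursite.com 8.8.8.8

    If both return the same address while the site still fails only from your IP, a server-side block (such as a security plugin banning your IP for rapid repeated requests) becomes the more likely culprit.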


  • Sed: regular expression to match lines without <!--

    - by sixtyfootersdude
    I have a sed command to comment out XML commands:

        sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml

    Takes:

        <a>
        <!-- Comment -->
        <b>
        bla
        </b>
        </a>

    Produces:

        <!-- Security: <a> -->
        <!-- Security: <!-- Comment --> -->    (NOTE: there are two end comments.)
        <!-- Security: <b> -->
        <!-- Security: bla -->
        <!-- Security: </b> -->
        <!-- Security: </a> -->

    Ideally I would like my sed script to not comment things that are already commented. I.e.:

        <!-- Security: <a> -->
        <!-- Comment -->
        <!-- Security: <b> -->
        <!-- Security: bla -->
        <!-- Security: </b> -->
        <!-- Security: </a> -->

    I could do something like this:

        sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml
        sed 's/^[ \t]*<!-- Security: \(<!--.*-->\) -->/\1/' web.xml

    but I think a one-liner is cleaner (?) This is pretty similar: http://stackoverflow.com/questions/436850/matching-a-line-that-doesnt-contain-specific-text-with-regular-expressions
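
    For what it's worth, sed addresses can be negated, which keeps this a one-liner: prefixing the substitution with /<!--/! applies it only to lines that do not already contain a comment opener (a sketch; it assumes, as in the sample, that existing comments start on their own line):

        sed '/<!--/!s/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml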


  • Easy way to deploy the recovery partition in Windows 7?

    - by Jesse K
    We're using ImageX to deploy Windows 7 Professional. We've gotten the Windows partition to work, but the recovery partition (100-200MB at the front of the drive in a standard install) isn't as simple. Here's a Technet guide I found: http://technet.microsoft.com/en-us/library/dd744280%28WS.10%29.aspx. That looks like it could work, but it would take a lot of time if we need to do that for every single machine we deploy. Is there a faster/automated way?
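
    The manual steps in guides like that one can usually be collapsed into a WinPE batch script, so each machine becomes a single command (a sketch along those lines; partition sizes, drive letters, image paths and indexes are illustrative and should be matched to your own WIMs):

        :: Partition the disk from an answer file, then apply both images
        diskpart /s createpartitions.txt
        imagex /apply D:\images\recovery.wim 1 S:\
        imagex /apply D:\images\win7pro.wim 1 C:\
        :: Write the boot files onto the small active partition
        bcdboot C:\Windows /s S:

    where createpartitions.txt contains something like:

        select disk 0
        clean
        create partition primary size=200
        format quick fs=ntfs label="System"
        assign letter=S
        active
        create partition primary
        format quick fs=ntfs label="Windows"
        assign letter=C
        exit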


  • Well-maintained privacy add-on/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons that I discovered:

    - HTTPS Everywhere: has only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content; but the author sometimes seems to make "untrustworthy" decisions.
    - RequestPolicy: seems dead (last activity 6 months ago, has some annoying bugs, the official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests, blocks by default, and lets you add sites to a whitelist; once this is done it works interaction-less in the background.
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (can never be fully sure, need to trust others).
    - Disconnect: like AdBlock Edge, just looks different; has no interaction possibilities (can never be fully sure, need to trust others, cannot interact even if I wanted to).
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.

    All these add-ons together basically block everything insecure, but there are a lot of redundancies: NoScript has a mixed-content blocker, but Firefox has had its own for a while now. The cookie blocker in NoScript is redundant with my Firefox cookie settings. NoScript also has an XSS blocker, which is redundant with RequestPolicy. Disconnect and AdBlock are extremely redundant with each other, though not fully. And there are some bugs (especially in RequestPolicy). And RequestPolicy seems to be dead. All in all, this list is great, but it has these heavy drawbacks. My favourite set would be "NoScript Light" (only script blocking, without all the additional redundant-to-other-add-ons hick-hack it does) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintaining (whitelist updates, trust checks). Are there add-ons that fulfil my wishes?


  • A couple of questions about proxy servers, VPNs & how they work

    - by Q8Y
    I have a couple of questions that are related to security. Correct me if I'm wrong. :)

    1. If I want to request something (e.g. visit www.google.com), my computer makes the request, which goes to my ISP and then to my ISP's proxy server. The proxy takes the request and acts as a middle man in this situation: it asks for the site (www.google.com), retrieves it, and then sends it back to me. I know that it's done like that. So my question is: in this situation my ISP knows everything, including what I requested, and the proxy server is set by default (when I get an internet subscription). So if I use another proxy (let's assume it is highly anonymous and my ISP can't detect my IP address through it), would my traffic go through my ISP and then from my ISP be redirected to the new proxy server that I chose? Would the ISP know that there is someone using another proxy? Or would the traffic go through another network rather than my ISP's? I don't have a clear view of this.

    2. This question is related to the first one. When I use a VPN, I know that the VPN provides tunneling, encryption, and many more features that a proxy can't. So my data travels securely and my ISP can't know what I'm doing. But my questions are: where does the tunneling start? Does it start after my traffic passes through the ISP's network (since they are the ones responsible for forwarding my data and requests)? If so, then not all of my connection is tunneled this way; there is a part that is not tunneled, since every time I need to do anything I have to go through my ISP. Correct me if I misunderstand this.

    3. I know that a VPN can let my computer be virtually in another place and access its resources (e.g. be as if in my office while I'm at home; this is done via VPN). If I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP, where does my encrypted data get decrypted and handled? At my ISP or at the VPN service provider?

    4. If I use a VPN, does anyone on the internet know what I'm doing or who I am? Even the VPN service provider? Can they know me? I think they should know the person that is asking for this VPN service, am I right?


  • How to prevent dual-booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games, I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements?

    - Windows MUST NOT be able to kill my Linux partition or my data disk: neither single files (virus infection) nor by overwriting the whole disk.
    - Windows MUST NOT be able to read the data disk (extra protection against spyware).
    - Linux may or may not have access to the Windows partition.
    - Both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming, and I want the manufacturer's Windows graphics card driver.

    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia; it has happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though. Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk; if I don't, even better. My CPU is an Intel Core i5 with VT-x and VT-d available, though untested. Ideas I've had so far:

    - Deactivate or hide the other HDs until reboot: is this possible at a low level? Can the boot loader (GRUB) do this for me?
    - A tiny VM layer: load Windows in a VM that provides access to almost all hardware except the HDs. Is there any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
    - A hardware switch to cut power to the disks: commercial products are expensive and there are lots of warnings against cheap home-built solutions. Preferably all three hard disks with one switch (one push). Mobile racks: won't the wear of daily swapping be a problem?


  • Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.

    - by The Geek
    Microsoft’s Security Essentials has been our favorite anti-malware application for a while: it’s free, unobtrusive, and it doesn’t slow your PC down. Now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Just to be clear and direct with you: we absolutely recommend Microsoft Security Essentials as your anti-malware / anti-virus utility over any other option, and how can you argue? It’s totally free! New Features in 2.0: Here are all of the new features in the latest release, which make it even more of a must-download:

    - Network Traffic Inspection integrates into the network system and monitors traffic at a low level without slowing down your PC, so it can detect threats before they even get to your PC.
    - Internet Explorer Integration blocks malicious scripts before IE even starts running them: clearly a big security advantage.
    - Heuristic Scanning Engine finds malware that hasn’t been previously detected by scanning for certain types of attacks. This provides even more protection than virus definitions alone.

    These new features put MSE on par with other anti-malware applications, especially the heuristic scanning, which has been the only complaint that anybody could make against MSE in the past; now it has it.


  • Wildcards in JNLP template file

    - by Andy
    Since the last security changes in Java 7u40, it is required to sign a JNLP file. This can be done either by adding the final JNLP as JNLP-INF/APPLICATION.JNLP, or by providing a template JNLP as JNLP-INF/APPLICATION_TEMPLATE.JNLP in the signed main jar. The first way works well, but we would like to allow a previously unknown number of runtime arguments to be passed to our application. Therefore, our APPLICATION_TEMPLATE.JNLP looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <jnlp codebase="*">
          <information>
            <title>...</title>
            <vendor>...</vendor>
            <description>...</description>
            <offline-allowed />
          </information>
          <security>
            <all-permissions/>
          </security>
          <resources>
            <java version="1.7+" href="http://java.sun.com/products/autodl/j2se" />
            <jar href="launcher/launcher.jar" main="true"/>
            <property name="jnlp...." value="*" />
            <property name="jnlp..." value="*" />
          </resources>
          <application-desc main-class="...">
            *
          </application-desc>
        </jnlp>

    The problem is the * inside the application-desc tag. It is possible to wildcard a fixed number of arguments using multiple argument tags (see the code below), but then it is not possible to provide more or fewer arguments to the application (Java Web Start will not start with an empty argument tag):

        <application-desc main-class="...">
          <argument>*</argument>
          <argument>*</argument>
          <argument>*</argument>
        </application-desc>

    Can someone confirm this problem and/or does anyone have a solution for passing a previously undefined number of runtime arguments to the Java application? Thanks a lot!


  • wss4j: Cannot find key for alias: monit

    - by feiroox
    Hi, I'm using Axis 1.4 and WSS4J. When I define different signaturePropFiles for WSDoAllSender and WSDoAllReceiver in client-config.wsdd, each with its own keystore holding its own certificate, I am able to use different certificates for sending and receiving. But when I use the same signaturePropFile with the same keystore for both, I get this message when I try to send a message:

        org.apache.ws.security.components.crypto.CryptoBase -- Cannot find key for alias: [monit]
        in keystore of type [jks] from provider [SUN version 1.5] with size [2] and aliases: {other, monit}
        - Error during Signature: ; nested exception is:
            org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
            java.lang.Exception: Cannot find key for alias: [monit]
        org.apache.ws.security.WSSecurityException: Error during Signature: ; nested exception is:
            org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
            java.lang.Exception: Cannot find key for alias: [monit]
            at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:60)
            at org.apache.ws.security.handler.WSHandler.doSenderAction(WSHandler.java:202)
            at org.apache.ws.axis.security.WSDoAllSender.invoke(WSDoAllSender.java:168)
            at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
            at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
            at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
            at org.apache.axis.client.AxisClient.invoke(AxisClient.java:127)
            at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
            at org.apache.axis.client.Call.invoke(Call.java:2767)
            at org.apache.axis.client.Call.invoke(Call.java:2443)
            at org.apache.axis.client.Call.invoke(Call.java:2366)
            at org.apache.axis.client.Call.invoke(Call.java:1812)
            at cz.ing.oopf.model.wsclient.ModelWebServiceSoapBindingStub.getStatus(ModelWebServiceSoapBindingStub.java:213)
            at cz.ing.oopf.wsgemonitor.monitor.util.MonitorUtil.checkStatus(MonitorUtil.java:18)
            at cz.ing.oopf.wsgemonitor.monitor.Test02WsMonitor.runTest(Test02WsMonitor.java:23)
            at cz.ing.oopf.wsgemonitor.Main.main(Main.java:75)
        Caused by: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is:
            java.lang.Exception: Cannot find key for alias: [monit]
            at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:721)
            at org.apache.ws.security.message.WSSecSignature.build(WSSecSignature.java:780)
            at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:57)
            ... 15 more
        Caused by: java.lang.Exception: Cannot find key for alias: [monit]
            at org.apache.ws.security.components.crypto.CryptoBase.getPrivateKey(CryptoBase.java:214)
            at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:713)
            ... 17 more

    How can I have two certificates for WSS4J in the same keystore? Why can't it find my certificate there, when I have two certificates in one keystore? I have the same password for both certificates as far as PWCallback (the CallbackHandler) is concerned.

    My properties file:

        org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
        org.apache.ws.security.crypto.merlin.keystore.type=jks
        org.apache.ws.security.crypto.merlin.keystore.password=keystore
        org.apache.ws.security.crypto.merlin.keystore.alias=monit
        org.apache.ws.security.crypto.merlin.alias.password=***
        org.apache.ws.security.crypto.merlin.file=key.jks

    My client-config.wsdd:

        <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
          <globalConfiguration>
            <requestFlow>
              <handler name="WSSecurity" type="java:org.apache.ws.axis.security.WSDoAllSender">
                <parameter name="user" value="monit"/>
                <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/>
                <parameter name="action" value="Signature"/>
                <parameter name="signaturePropFile" value="monit.properties"/>
                <parameter name="signatureKeyIdentifier" value="DirectReference" />
                <parameter name="mustUnderstand" value="0"/>
              </handler>
              <handler type="java:org.apache.axis.handlers.JWSHandler">
                <parameter name="scope" value="session"/>
              </handler>
              <handler type="java:org.apache.axis.handlers.JWSHandler">
                <parameter name="scope" value="request"/>
                <parameter name="extension" value=".jwr"/>
              </handler>
            </requestFlow>
            <responseFlow>
              <handler name="DoSecurityReceiver" type="java:org.apache.ws.axis.security.WSDoAllReceiver">
                <parameter name="user" value="other"/>
                <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/>
                <parameter name="action" value="Signature"/>
                <parameter name="signaturePropFile" value="other.properties"/>
                <parameter name="signatureKeyIdentifier" value="DirectReference" />
              </handler>
            </responseFlow>
          </globalConfiguration>
          <transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender">
          </transport>
        </deployment>

    Listing from keytool:

        keytool -keystore monit-key.jks -v -list
        Enter keystore password:

        Keystore type: JKS
        Keystore provider: SUN

        Your keystore contains 2 entries

        Alias name: other
        Creation date: Jul 22, 2009
        Entry type: PrivateKeyEntry
        Certificate chain length: 1
        Certificate[1]: ....

        Alias name: monit
        Creation date: Oct 19, 2009
        Entry type: trustedCertEntry


  • Session ID in URL and/or cookie? [closed]

    - by Jacco
    Most people advise against rewriting every (internal) URL to include the session ID (both GET and POST). The standard argument against it seems to be:

        If an attacker gets hold of the session ID, they can hijack the session. With the session ID in the URL, it easily leaks to the attacker (via the referer, etc.).

    But what if you put the session ID in both an (encrypted) cookie and the URL, and decline the request if the session ID in either the cookie or the URL is missing, or if they do not match? Let's pretend the website in question is free of XSS holes, the cookie encryption is strong enough, etc. Then what is the increased risk of rewriting every URL to include the session ID?

    UPDATE: @Casper That is a very good point. So up to now there are 2 reasons:

    - bad for search engines / SEO if used in the public part of the website
    - can cause trouble when users post a URL with a session ID on a forum, send it through email, or bookmark the page

    apart from "it increases the security risk", where it is not clear what the increased risk is. Some background info: I have a website that offers a blog-like service to travellers. I cannot be sure cookies work, nor can I require cookies to work. Most computers in internet cafes are old and not (even close to) up to date. The user has no control over them, and the connection can be very unreliable in some more 'off the beaten path' locations. Binding the session to an IP address is not possible; some places use load-balancing proxies with multiple IP addresses (and from China there is the Great Firewall). Upon receiving the first cookie back, I flag cookies as mandatory. However, if cookies were flagged as mandatory but the cookie is not there, I ask for the password once more, knowing the session from the URL. (Also, cookies have a one-time token in them, but that's not the point of this question.)

    UPDATE 2: The conclusion seems to be that there are no extra *security* issues when you expose your session ID through the URL while also keeping a copy of the session ID in an encrypted cookie. Do not hesitate to add additional information about any possible security implications.


  • Authorizing a computer to access a web application

    - by HackedByChinese
    I have a web application, and am tasked with adding secure sign-on to bolster security, akin to what Google has added to Google accounts. Use Case Essentially, when a user logs in, we want to detect if the user has previously authorized this computer. If the computer has not been authorized, the user is sent a one-time password (via email, SMS, or phone call) that they must enter, where the user may choose to remember this computer. In the web application, we will track authorized devices, allowing users to see when/where they logged in from that device last, and deauthorize any devices if they so choose. We require a solution that is very light touch (meaning, requiring no client-side software installation), and works with Safari, Chrome, Firefox, and IE 7+ (unfortunately). We will offer x509 security, which provides adequate security, but we still need a solution for customers that can't or won't use x509. My intention is to store authorization information using cookies (or, potentially, using local storage, degrading to flash cookies, and then normal cookies). At First Blush Track two separate values (local data or cookies): a hash representing a secure sign-on token, as well as a device token. Both values are driven (and recorded) by the web application, and dictated to the client. The SSO token is dependent on the device as well as a sequence number. This effectively allows devices to be deauthorized (all SSO tokens become invalid) and mitigates replay (not effectively, though, which is why I'm asking this question) through the use of a sequence number, and uses a nonce. Problem With this solution, it's possible for someone to just copy the SSO and device tokens and use in another request. While the sequence number will help me detect such an abuse and thus deauthorize the device, the detection and response can only happen after the valid device and malicious request both attempt access, which is ample time for damage to be done. I feel like using HMAC would be better. Track the device, the sequence, create a nonce, timestamp, and hash with a private key, then send the hash plus those values as plain text. Server does the same (in addition to validating the device and sequence) and compares. That seems much easier, and much more reliable.... assuming we can securely negotiate, exchange, and store private keys. Question So then, how can I securely negotiate a private key for authorized device, and then securely store that key? Is it more possible, at least, if I settle for storing the private key using local storage or flash cookies and just say it's "good enough"? Or, is there something I can do to my original draft to mitigate the vulnerability I describe?
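
    A minimal sketch of the HMAC variant described above (C#; the names are illustrative, HMACSHA256 is the framework class from System.Security.Cryptography, and the open problem of negotiating and storing the per-device secret is deliberately left out):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class DeviceMac
        {
            // The client sends deviceId, sequence, nonce and timestamp in the
            // clear plus this MAC; the server recomputes the MAC with its copy
            // of the device secret and compares.
            public static string Compute(byte[] deviceSecret, string deviceId,
                                         long sequence, string nonce, long unixTimestamp)
            {
                string payload = string.Format("{0}|{1}|{2}|{3}",
                                               deviceId, sequence, nonce, unixTimestamp);
                using (HMACSHA256 hmac = new HMACSHA256(deviceSecret))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    return Convert.ToBase64String(mac);
                }
            }
        }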


  • showSettings callback in Flex?

    - by Jim Robert
    I am pretty new to Flex, so forgive me if this is an obvious question. Is there a way to open Security.showSettings (flash.system.Security) with a callback? Or at least to detect whether it is currently open or not? My Flex application is used for streaming audio and is normally controlled by JavaScript, so I keep it hidden during normal use (by absolutely positioning it off the page). When I need microphone access, I make the Flash settings dialog visible: I move the application into view and open the dialog. When the user closes it, I need to move it back off the screen so they don't see an empty Flex app sitting there after they change their settings. Thanks :)


  • Custom certificate as proof of transaction

    - by Andy
    I'm developing a site where a user conducts a given transaction and, once it is completed, the user is issued a 'secure certificate'. The certificate serves as proof of the transaction, and the user is able to upload the certificate at a later stage to view the details of the transaction. At the moment I'm using a custom XML document with encrypted fields. It works perfectly, but I would like a standardized approach, such as an X.509 certificate. I'm no encryption expert, but from what I gather, X.509 is more geared towards SSL certificates issued by a CA. Is it possible to create your own valid CRT file? As a test, I created a CRT file with the example provided on Wikipedia. However, when I open the file in Windows I get this warning: "Invalid Public Key Security Object File - This file is invalid as the following: Security Certificate." I'm not having much luck here, so it's time to ask the experts. What direction should I be heading in? Any guidance would be greatly appreciated.
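
    For what it's worth, generating your own key and self-signed X.509 certificate is straightforward with OpenSSL (a sketch; file names and the validity period are illustrative):

        # Generate an RSA key and a self-signed certificate in one step
        openssl req -x509 -newkey rsa:2048 -keyout transaction.key -out transaction.crt -days 365

        # Inspect the result
        openssl x509 -in transaction.crt -text -noout

    If a hand-assembled CRT is rejected by Windows, the usual suspects are the encoding (PEM vs. DER) and missing or malformed BEGIN/END CERTIFICATE markers, rather than the certificate's contents.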


  • How to secure authorization of methods

    - by Kurresmack
    I am building a web site in C# using ASP.NET MVC. How can I make sure that no unauthorized person can access my methods? What I mean is that I want to make sure that only admins can create articles on my page. If I put this logic in the method that actually adds the article to the database, wouldn't I have business logic in my data layer? Is it good practice to have a separate security layer that always sits between the data layer and the business layer? The problem is that if I protect at a higher level, I will have to have checks in many places, and it is more likely that I miss one place and users can bypass security. Thanks!
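
    For the entry points themselves, ASP.NET MVC's AuthorizeAttribute gives a declarative, hard-to-forget check at the controller level (a sketch; the role name and model type are illustrative):

        using System.Web.Mvc;

        public class ArticlesController : Controller
        {
            // Only members of the Admin role reach this action; everyone
            // else is rejected before any business or data logic runs.
            [Authorize(Roles = "Admin")]
            [HttpPost]
            public ActionResult Create(ArticleModel model)
            {
                // ... validate and persist the article ...
                return RedirectToAction("Index");
            }
        }

    This keeps the authorization rule out of the data layer while still concentrating it in one obvious place per action.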


  • Can an Aspect conditionally render parts of a JSP page?

    - by Scott The Scot
    At present the JSP pages have normal authorize tags to conditionally render links and information, etc. The website is on the intranet, and we're using Spring Security 2.0.4. I've now got a business user who wants to allow all roles to access everything for the first few weeks, then gradually add the security back in as feedback is gathered from the business. Rather than go through every page removing the authorize tags, only to have to put them back in, is it possible to configure these through an aspect, or is there any other way to externalize this into a config file? I've found Spring's MethodSecurityInterceptor and the metadata tags, but these wouldn't give me the externalization. I've been on Google for the last hour and am now pretty sure this can't be done, but would love to find out I haven't been asking the right questions. Advice appreciated.


  • Is it possible to put binary image data into HTML markup and then get the image displayed as usual in any browser?

    - by Joern Akkermann
    It's an important security issue, and I'm sure this should be possible. A simple example: you run a community portal. Users are registered and upload their pictures. Your application enforces security rules governing when a picture is allowed to be displayed. For example, two users must be friends with each other in the system in order to view each other's uploaded pictures. Here comes the problem: it is possible that someone crawls the image directories of your server, and you want to protect your users from such attacks. If it were possible to put the binary data of an image directly into the HTML markup, you could restrict access to your image directories to the user and group your web application runs as, and pass the image data to the browser directly in the HTML. The only remaining weakness would then be the password of the user that your web app runs as. Is this already possible?
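
    This is exactly what data URIs provide: the image bytes are base64-encoded straight into the markup, so the files themselves never need to be web-accessible (a sketch in PHP, since an Apache-hosted web app is described; the path is illustrative):

        <?php
        // Read the protected image from outside the web root and embed it inline;
        // the friendship/access check happens in the application before this point.
        $data = base64_encode(file_get_contents('/srv/private-images/user42.jpg'));
        echo '<img src="data:image/jpeg;base64,' . $data . '" alt="User photo" />';

    The trade-offs: pages grow by roughly a third per embedded image, the images are not cached separately by the browser, and very old browsers (notably IE7 and earlier) do not support data URIs at all.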

