Search Results

Search found 22139 results on 886 pages for 'security testing'.


  • Does Guest WiFi on an Access Point make any sense? [migrated]

    - by Jason
    I have a Belkin WiFi router which offers a secondary Guest Access WiFi network. The idea, of course, is that the guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued cable modem/router device with multiple wired ports, but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast router and the Belkin router, so I would like to disable the routing functions of the Belkin and let the Comcast router handle them. But if I disable the routing functions of the Belkin device, the Guest WiFi network is still available. Is this configuration just as secure as when the Belkin acts as a router? I guess the question comes down to this: do guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet? 1) would mean that Guest WiFi on an access point provides no benefit; 2) would mean that the Guest WiFi functionality can work even if the device is just an access point. Or maybe something else entirely?

    Read the article

  • Multi-authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users so that they are automatically logged in to the web application without the need to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication. User/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication will be used for the rest of the users. From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and it can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, an attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to the keytab file on the local filesystem. The encryption key contained in the keytab file is derived from the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
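    A minimal sketch, not from the original post, of how such an isolated keytab can be sanity-checked with the MIT Kerberos client tools; the keytab path, principal and realm below are made-up placeholders:

        # List the principals, key version numbers and timestamps stored in the keytab
        klist -k -t /etc/httpd/http.keytab

        # Authenticate as the service principal itself; success shows the keytab matches
        # the service account's current password/kvno in AD (this one test does need to
        # reach a KDC, which the production box deliberately cannot)
        kinit -kt /etc/httpd/http.keytab HTTP/webapp.example.com@CORP.EXAMPLE.COM
        klist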

    Read the article

  • The best way to hide data: encryption, connection, hardware

    - by Tico Raaphorst
    Say I have a VPS which I own, and I wanted to make the most secure and stable system that I can. How would I do that? Just to try, I installed Debian 7 with LVM encryption via the installer: you get two partitions, a /boot and an encrypted partition. When booting you are prompted for the password to unlock the encrypted partition, which in turn contains more partitions like /home, /usr and swap space that are mounted automatically. However, I need to fill in the password over a VNC-SSL connection via the control panel website of the VPS hoster, so they could see my disk encryption password if they wanted to, and they have the option of looking at my data, right? (Related: Data encryption on VPS, Is it possible to have a 100% secure virtual private server?) So let's say I have my server sitting well locked next to me, with the following layers covered: BIOS (you have to replace the BIOS), RAID (you have to unlock the RAID config), disk (you have to unlock the disk encryption), and file level (files are stored in encrypted archives, which sit inside some other encrypted file mounted as a partition), all on the same system. It will be slow, but it would be extremely difficult to crack the encryption if you stole the server. Then I only need to make connections like SSH safer with single-use passwords, and block all incoming and outgoing connections except one "exception" for myself, and maybe one more in case I somehow lose my identity for that exception. What other overkill but realistic security options are available? I have heard about SELinux.
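    A minimal sketch of the "encrypted archive mounted as a partition" layer described above, using cryptsetup/LUKS on a plain container file; the file name, size and mount point are arbitrary examples:

        # Create a 512 MB container file and turn it into a LUKS volume
        dd if=/dev/urandom of=/root/vault.img bs=1M count=512
        cryptsetup luksFormat /root/vault.img        # asks for confirmation and a passphrase

        # Open it, put a filesystem inside, and mount it like any other partition
        cryptsetup luksOpen /root/vault.img vault
        mkfs.ext4 /dev/mapper/vault
        mkdir -p /mnt/vault
        mount /dev/mapper/vault /mnt/vault

        # When finished, unmount and close; at rest the data is just an opaque blob
        umount /mnt/vault
        cryptsetup luksClose vault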

    Read the article

  • How to disable Utility Manager (Windows Key + U)

    - by Skizz
    How do I disable the Windows + U hotkey in Windows XP? Alternatively, how do I stop the Utility Manager from being active? The two are related. The Utility Manager is currently providing a potential security hole and I need to remove it[1]. The system I'm developing uses a custom GINA to log in and start a custom shell. This removes most Windows key hotkeys, but Win + U still pops up the manager app. Update: Things I've tried that don't work: NoWinKeys registry setting - this only affects Explorer hotkeys; renaming utilman.exe - the program reappears next login; third-party software - not really an option, these machines are audited by the clients and additional, third-party software would be unlikely to be accepted. Also, the procedure needs to be reasonably straightforward - this has to be done by field service engineers on existing machines (machines currently in Russia, Holland, France, Spain, Ireland and the USA). [1] The hole is via the internet options in the help viewer that the utility app links to.

    Read the article

  • SFTP access without hassle

    - by enobayram
    I'm trying to provide access to a local folder for someone over the internet. After googling around a bit, I've come to the conclusion that SFTP is the safest thing to expose through the firewall to the chaotic and evil world of the Internet. I'm planning to use the openssh-server to this end. Even though I trust that openssh will stop a random attacker, I'm not so sure about the security of my computer once someone is connected through ssh. In particular, even if I don't give that person's user account any privileges whatsoever, he might just be able to "su" to, say, "nobody". And since I was never worried about such things before, I might have given some moderate privileges to nobody at some point (not sudo rights surely!). I would of course value your comments about giving privileges to nobody in the first place, but that's not the point, really. My aim is to give SFTP access to someone in such a sandboxed state that I shouldn't need to worry about such things (at least not more so than I should have done before). Is this really possible? Am I speaking nonsense or worried in vain?
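    A minimal sketch of how OpenSSH itself can provide that sandbox, using a chrooted SFTP-only group; the group name, user name and paths are examples, and the nologin path differs between distributions:

        # Dedicated group and user with no login shell
        groupadd sftponly
        useradd -m -g sftponly -s /usr/sbin/nologin -d /srv/sftp/guest guest
        passwd guest

        # Chroot members of the group and restrict them to the internal SFTP server
        cat >> /etc/ssh/sshd_config <<'EOF'
        Match Group sftponly
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no
        EOF

        # sshd insists the chroot target is root-owned and not group/world-writable,
        # so hand the user a writable subdirectory instead of the chroot itself
        chown root:root /srv/sftp/guest
        chmod 755 /srv/sftp/guest
        mkdir -p /srv/sftp/guest/upload
        chown guest:sftponly /srv/sftp/guest/upload

        service ssh restart    # "service sshd restart" on RHEL-type systems

    With ForceCommand internal-sftp the account never gets a shell, so it cannot su to anything, and the chroot limits what it can even see on the filesystem.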

    Read the article

  • What can an inexperienced admin expect after a server setup that seemingly went fine? [closed]

    - by Miloshio
    An inexperienced person seems to have done everything fine so far. This is the very first time he is the only one in charge of a LAMP server. He has installed the OS, networking, Apache, PHP, MySQL, ProFTPD, MTA & MDA software, configured the VirtualHosts properly (he calls himself the admin, so let's take these as facts), done user management and applied various configuration settings with respect to security recommendations, and... everything is fine for now... For now. If you were directing a horror movie about the server admin described above, what bogeyman would you invent to show up and start pursuing him? Leaving aside hardware disasters, for which one cannot do anything 'from remote', what are the most common causes of significant server (or part-of-server, or server-related) failure when the machine is managed by an inexperienced admin? I have in mind the kind of thing newbie admins very often miss, which later leads to intervention by someone with experience. Might that be some uncontrolled CPU-eating leftover process, a memory-related glitch, a widely used feature that messes up something unexpected, or anything like that? The newbie admin currently only monitors disk space, RAM usage and the number of running processes. He would appreciate any tips regarding what's probably going to happen to his server over time.
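    A minimal sketch of a baseline watchdog going a little beyond eyeballing disk space and RAM; the thresholds and log path are arbitrary examples:

        #!/bin/sh
        # Drop into /etc/cron.hourly/basic-watchdog and make it executable:
        # logs a warning when the usual slow-death failure modes start to appear
        LOG=/var/log/basic-watchdog.log

        DISK=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')
        LOAD=$(awk '{print int($1)}' /proc/loadavg)
        SWAP=$(free -m | awk '/Swap/ {print $3}')

        {
          date
          [ "$DISK" -gt 85 ]  && echo "WARNING: / is ${DISK}% full"
          [ "$LOAD" -gt 4 ]   && echo "WARNING: 1-minute load average is $LOAD"
          [ "$SWAP" -gt 512 ] && echo "WARNING: ${SWAP} MB of swap in use"
          # Top three CPU consumers, to catch runaway leftover processes
          ps -eo pcpu,pid,user,comm --sort=-pcpu | head -4
        } >> "$LOG" 2>&1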

    Read the article

  • Pull network or power? (for containing a rooted server)

    - by Aleksandr Levchuk
    When a server gets rooted (e.g. a situation like this), one of the first things you may decide to do is containment. Some security specialists advise not to enter remediation immediately and to keep the server online until forensics are completed. That advice is usually aimed at APTs; it's different if you have occasional script-kiddie breaches. However, you may decide to remediate (fix things) early, and one of the steps in remediation is containment of the server. Quoting from Robert Moir's answer - "disconnect the victim from its muggers". A server can be contained by pulling the network cable or the power cable. Which method is better, taking into consideration the need for: protecting victims from further damage, executing successful forensics, and (possibly) protecting valuable data on the server? Edit: 5 assumptions. Assuming: you detected early: 24 hours; you want to recover early: 3 days of 1 systems admin on the job (forensics and recovery); the server is not a virtual machine or a container able to take a snapshot capturing the contents of the server's memory; you decide not to attempt prosecution; you suspect that the attacker may be using some form of software (possibly sophisticated) and this software is still running on the server.
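    If containment can wait a few minutes, a minimal sketch of the volatile state worth capturing first (assumption 5 cuts both ways: the attacker's tool may notice you, but this data disappears the moment power is pulled); output paths are examples, and ideally the binaries come from trusted read-only media rather than the possibly trojaned system:

        # Capture volatile data to external storage before pulling anything
        OUT=/mnt/usb/ir-$(hostname)-$(date +%Y%m%d%H%M)
        mkdir -p "$OUT"

        date              > "$OUT/date.txt"
        uptime            > "$OUT/uptime.txt"
        w                 > "$OUT/logged-in.txt"
        last -n 20        > "$OUT/recent-logins.txt"
        ps auxww          > "$OUT/processes.txt"
        netstat -antup    > "$OUT/network.txt"
        lsof -n -P        > "$OUT/open-files.txt"
        ip addr           > "$OUT/ip-addr.txt"
        ip route          > "$OUT/ip-route.txt"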

    Read the article

  • Watchguard Firebox "split" fibre optic line into 2 interfaces

    - by fRAiLtY-
    We have a requirement on our Watchguard Firebox XTM505 to be able to split our incoming external interface, in this case a dedicated fibre optic leased line, 100/100. We use the line in our office of approx. 30 machines; however, we also re-sell it to an external company who utilise it to provide wireless internet solutions to the public. The current infrastructure is as follows: Data in (leased line) - Juniper SRX210 managed by the ISP - 1 cable out into an unmanaged Netgear switch - 1 cable into our firewall and office network, 1 cable to our external provider's core router, managed by them. We have been informed that having the unmanaged switch where it is poses a security risk, and that a good option would be to get our Watchguard firewall to perform the split, by separating our office onto a trusted interface and by "passing through" the external line to their managed router. It is alleged that the Watchguard is capable of doing this and also of rate-limiting the interfaces, i.e. 20mbps for the trusted interface and 80mbps for the "pass-through"; however, Watchguard technical support don't seem to be able to understand what we're trying to achieve. Can anyone advise whether this is possible on a Watchguard device and how, or whether there's a better way of achieving this, perhaps with a managed switch instead of an unmanaged one? Cheers

    Read the article

  • What to do after a fresh Linux install in a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL5 on it, but I would like to know: if you get a new production server, what would be the first 5 things you would do after a fresh Linux install (configuration/security/reliability-wise)? EDIT: Added more information regarding the server environment and server roles:
    - The server will be inside my company's intranet/firewall.
    - The server will receive files (GBs) in binary form from another internal server. The application installed on this server is in charge of "translating" all that binary into human-readable input. The server will get queried for this information.
    - Only 2-3 (max) users will be logging in.
    - (2) 145GB HDs in RAID1 for the OS and (2) 600GB HDs in RAID1 for data.
    I know I may not get the perfect guideline, but at least something that's better than leaving everything on default.
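    A minimal first-boot sketch of the kind of steps answers here usually cover, assuming RHEL 5 defaults; the extra user, service names and open ports are illustrative and need adapting to the real application:

        # 1. Patch everything and create a non-root admin account
        yum -y update
        useradd appadmin && passwd appadmin

        # 2. Lock down SSH: no root logins, no empty passwords
        sed -i -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' \
               -e 's/^#\?PermitEmptyPasswords.*/PermitEmptyPasswords no/' /etc/ssh/sshd_config
        service sshd restart

        # 3. Default-deny firewall: allow only SSH (add the app/MySQL ports as needed)
        iptables -F
        iptables -P INPUT DROP
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        service iptables save

        # 4. Turn off services you don't need and keep clocks/logs trustworthy
        chkconfig --list | grep ':on'
        chkconfig cups off                        # example of an unneeded service, if installed
        chkconfig ntpd on && service ntpd start   # requires the ntp package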

    Read the article

  • Need a helpful/managed VPS to help transition from shared hosting

    - by Xeoncross
    I am looking for a VPS that can help me transition out of a shared hosting environment. My main OS is Ubuntu, although I am still new to the Linux world. I spend most of my day programming PHP applications using a git-over-SSH workflow. I want PHP, SSH, git, MySQL/PostgreSQL and Apache to work well. Someday, after I figure out server management, I'll move on to http://nginx.org/ or something. I don't really understand 1) Linux firewalls, 2) mail servers, or 3) a proper daily package/lib update flow. I need a host that can help with these so I don't get hit with a security hole. (I monitor Apache access logs, so I think I can take it from there.) I want to know if there is a sub-$50/month VPS that can help me learn (or do for me) these three main things I need to run a server. I can't leave my shared hosts (the plural shows my need!) until I am sure my sites will be safe despite my incompetence. To clarify again, I need the most helpful, supportive, walk-me-through, check-up-on-me, be-there-when-I-need-you VPS I can get. Learning isn't a problem when there is someone to turn to. ;)
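    For the three pain points listed (firewall, mail, updates), a minimal Ubuntu-flavoured sketch of what a managed provider would typically set up; treat it as a starting point rather than a hardened configuration:

        # 1. Firewall: deny inbound by default, allow only SSH and web traffic
        apt-get install -y ufw
        ufw default deny incoming
        ufw default allow outgoing
        ufw allow 22/tcp
        ufw allow 80/tcp
        ufw enable

        # 2. Outgoing-only mail so cron jobs and your apps can still reach you
        apt-get install postfix            # pick "Internet Site" or "Local only" when prompted

        # 3. Unattended security updates
        apt-get install -y unattended-upgrades
        dpkg-reconfigure -plow unattended-upgrades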

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • UNC shared path not accessible though necessary permissions are set

    - by Vysakh
    I have 2 environments, A and B. A is the original environment, whereas B is an exact clone of A except for the AD servers. The AD server of B has been given a trust relationship with A, so that all the service and user accounts of A can be used in B too. And the trust works fine, perfect!! But I encounter some issues accessing UNC paths (\\server2\shared) with these service accounts. I checked environment A, and all the permissions set there are set in B too (already in place since it is a clone of A), but the issue occurs in environment B only. FYI, the user is an owner of that folder in both environments. I tried creating a folder inside the share (\\server2\shared) using the command prompt, but it failed with an "access denied" error. As a workaround I added that user in the "Security" tab of the folder permissions, and after that it worked fine, but this was not done in the original environment. Is this something related to the trust relationship? Why does the share to the same location for the same user work differently in the 2 environments, even though they've been set with the same permissions? FYI, these are Windows 2003 servers. Can someone please help?

    Read the article

  • How do I set up Tomcat 7's server.xml to access a network share with a different URL?

    - by jneff
    I have Apache Tomcat 7.0 installed on a Windows 2008 R2 server. Tomcat has access to a share '\\server\share' that has a documents folder which I want to access as '/foo/Documents' in my web application. My application is able to access the documents when I set the file path to '//server/share/documents/doc1.doc'. I don't want the file server's path to be exposed in my link to the file in my application; I want to be able to set the path to '/foo/Documents/doc1.doc'. In http://www3.ntu.edu.sg/home/ehchua/programming/howto/Tomcat_More.html under 'Setting the Context Root Directory and Request URL of a Webapp', item number two says that I can rename the path by adding a context to the server.xml file. So I put

        <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">
            <!-- SingleSignOn valve, share authentication between web applications
                 Documentation at: /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
            <!-- Access log processes all example.
                 Documentation at: /docs/config/valve.html
                 Note: The pattern used is equivalent to using pattern="common" -->
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access_log." suffix=".txt"
                   pattern="%h %l %u %t &quot;%r&quot; %s %b" />
            <Context path="/foo" docBase="//server/share" reloadable="false"></Context>
        </Host>

    The Context at the bottom was added. Then I tried to pull the file using '/foo/Documents/doc1.doc' and it didn't work. What do I need to do to get it to work correctly? Should I be using an alias instead? Are there other security issues that this may cause?

    Read the article

  • Port 80 not accessible on Amazon EC2

    - by Jasper
    I have started an Amazon EC2 instance (Red Hat Linux), and Apache as well. But when I try http://MyPublicHostName I get no response. I have ensured that my security group allows access to port 80. I can reach port 22 for sure, as I am logged into the instance via SSH. Within the Amazon EC2 Linux instance, when I do: $ wget http://localhost I do get a response. This confirms Apache and port 80 are indeed running fine. Since Amazon starts instances in a VPC, do I have to do anything there? In fact I cannot even ping the instance, although I can SSH to it! Any advice? EDIT: Note that I had edited the /etc/hosts file earlier to make the 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):

        127.0.0.1   localhost.localdomain localhost
        w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
        w.x.y.z     ip-w-x-y-z.localdomain
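    A minimal sketch of checks from inside the instance; if they all pass, the block is almost always the security group, a VPC network ACL, or a host firewall rather than Apache (the ping failure alone proves little, since ICMP is blocked unless the security group explicitly allows it):

        # Is Apache listening on all interfaces rather than just 127.0.0.1?
        sudo netstat -tlnp | grep ':80'

        # Is an OS-level firewall dropping inbound port 80?
        sudo iptables -L -n -v

        # Does the server answer locally and on its private address?
        curl -I http://localhost/
        curl -I http://<private-ip>/      # substitute the instance's private IP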

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated: the /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://..., to fail. It also broke Redmine. After a chmod 644 it works fine, but that doesn't persist across reboots. So my question: why is this? I see no security risk because... I mean... wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application; that's why I use OpenVZ. I'm thinking about something like a runlevel script or so... but how do I do it efficiently? Maybe with dpkg or apt? The same goes for /dev/shm. In this case I totally understand why it's not accessible, but I assume I can "fix" it the same way I fix /dev/urandom.
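    A minimal sketch of the runlevel-script idea for an OpenVZ container (where udev rules often don't help); 0644 is what the poster found sufficient, 0666 being the stock Ubuntu mode:

        # Re-apply the permissions at every boot, late in the init sequence
        cat > /etc/rc.local <<'EOF'
        #!/bin/sh -e
        chmod 0644 /dev/urandom
        # chmod 1777 /dev/shm    # only if an application genuinely needs it
        exit 0
        EOF
        chmod +x /etc/rc.local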

    Read the article

  • Can't reach server without proxy (website down from my home)

    - by user2128576
    I have a website hosted on Hostinger; however, I am experiencing problems with my WordPress site. This is really annoying. If I understand the situation right, the server is blocking me or denying access to my own website. When I visit the site with Google Chrome, it returns: "Oops! Google Chrome could not find..." The same thing happens in Firefox! Firefox can't find the server, but when I check whether my site is online through http://www.downforeveryoneorjustme.com/ it says that the site is up and working. Another thing: if I access the website through a proxy, both in Chrome and in Firefox, it works. Why is this? I also installed the plugin Better WP Security 5 days ago. Could the plugin have caused it? I don't remember setting any IPs to be blocked. Also, this happens at random times; sometimes I can access it, sometimes it fails to reach the server. I am currently developing the site live. Was I blocked by the server for frequently refreshing the page? (Duh, I'm a developer and I need to refresh to see changes.) Or is this a problem with my ISP's DNS server? How can I resolve this, and what are the possible fixes? Thanks in advance! -Jomar
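    A minimal sketch for separating a DNS problem from a server-side block; example.com stands for the real domain and 1.2.3.4 for the IP address the hosting control panel reports:

        # Do your ISP's resolver and a public resolver return the same address?
        nslookup example.com
        nslookup example.com 8.8.8.8

        # Can you reach the web server directly by IP, bypassing DNS?
        ping -c 3 1.2.3.4
        curl -I -H "Host: example.com" http://1.2.3.4/

        # Where along the path does the connection die?
        traceroute 1.2.3.4

    If the IP answers while the name doesn't resolve, it's DNS; if neither answers from home but the proxy works, your address is probably being blocked, for example by the security plugin's lockout rules.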

    Read the article

  • Sed: regular expression match lines without <!--

    - by sixtyfootersdude
    I have a sed command to comment out lines in an XML file:

        sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml

    Takes:

        <a>
          <!-- Comment -->
          <b>
            bla
          </b>
        </a>

    Produces:

        <!-- Security: <a> -->
        <!-- Security: <!-- Comment --> -->    // NOTE: there are two end comments.
        <!-- Security: <b> -->
        <!-- Security: bla -->
        <!-- Security: </b> -->
        <!-- Security: </a> -->

    Ideally I would like my sed script to not comment things that are already commented. I.e.:

        <!-- Security: <a> -->
        <!-- Comment -->
        <!-- Security: <b> -->
        <!-- Security: bla -->
        <!-- Security: </b> -->
        <!-- Security: </a> -->

    I could do something like this:

        sed 's/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml
        sed 's/^[ \t]*<!-- Security: \(<!--.*-->\) -->/\1/' web.xml

    but I think a one-liner is cleaner (?) This is pretty similar: http://stackoverflow.com/questions/436850/matching-a-line-that-doesnt-contain-specific-text-with-regular-expressions
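    A one-liner along those lines, as a sketch: sed's address negation (/regexp/!) restricts the substitution to lines that do not already contain a comment marker:

        sed '/<!--/! s/^\([ \t]*\)\(.*[0-9a-zA-Z<].*\)$/\1<!-- Security: \2 -->/' web.xml

    Lines already containing <!-- anywhere are left untouched, which matches the desired output above; note that it also skips lines where a comment merely appears somewhere in the middle.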

    Read the article

  • How to write a test for an accounts controller for forms authentication

    - by Anil Ali
    Trying to figure out how to adequately test my accounts controller. I am having a problem testing the successful logon scenario. Issue 1) Am I missing any other tests? (I am testing the model validation attributes separately.) Issue 2) The Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven() and Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven() tests are not passing. I have narrowed it down to the line where I am calling _membershipService.ValidateUser(). Even though during my mock setup of the service I am stating that I want to return true whenever ValidateUser is called, the method call returns false. Here is what I have gotten so far.

    AccountController.cs

        [HandleError]
        public class AccountController : Controller
        {
            private IMembershipService _membershipService;

            public AccountController() : this(null)
            {
            }

            public AccountController(IMembershipService membershipService)
            {
                _membershipService = membershipService ?? new AccountMembershipService();
            }

            [HttpGet]
            public ActionResult LogOn()
            {
                return View();
            }

            [HttpPost]
            public ActionResult LogOn(LogOnModel model, string returnUrl)
            {
                if (ModelState.IsValid)
                {
                    if (_membershipService.ValidateUser(model.UserName, model.Password))
                    {
                        if (!String.IsNullOrEmpty(returnUrl))
                        {
                            return Redirect(returnUrl);
                        }
                        return RedirectToAction("Index", "Overview");
                    }
                    ModelState.AddModelError("*", "The user name or password provided is incorrect.");
                }
                return View(model);
            }
        }

    AccountServices.cs

        public interface IMembershipService
        {
            bool ValidateUser(string userName, string password);
        }

        public class AccountMembershipService : IMembershipService
        {
            public bool ValidateUser(string userName, string password)
            {
                throw new System.NotImplementedException();
            }
        }

    AccountControllerFacts.cs

        public class AccountControllerFacts
        {
            public static AccountController GetAccountControllerForLogonSuccess()
            {
                var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>();
                var controller = new AccountController(membershipServiceStub);
                membershipServiceStub
                    .Stub(x => x.ValidateUser("someuser", "somepass"))
                    .Return(true);
                return controller;
            }

            public static AccountController GetAccountControllerForLogonFailure()
            {
                var membershipServiceStub = MockRepository.GenerateStub<IMembershipService>();
                var controller = new AccountController(membershipServiceStub);
                membershipServiceStub
                    .Stub(x => x.ValidateUser("someuser", "somepass"))
                    .Return(false);
                return controller;
            }

            public class LogOn
            {
                [Fact]
                public void Get_ReturnsViewResultWithDefaultViewName()
                {
                    // Arrange
                    var controller = GetAccountControllerForLogonSuccess();

                    // Act
                    var result = controller.LogOn();

                    // Assert
                    Assert.IsType<ViewResult>(result);
                    Assert.Empty(((ViewResult)result).ViewName);
                }

                [Fact]
                public void Put_ReturnsOverviewRedirectToRouteResultIfLogonSuccessAndNoReturnUrlGiven()
                {
                    // Arrange
                    var controller = GetAccountControllerForLogonSuccess();
                    var user = new LogOnModel();

                    // Act
                    var result = controller.LogOn(user, null);
                    var redirectresult = (RedirectToRouteResult)result;

                    // Assert
                    Assert.IsType<RedirectToRouteResult>(result);
                    Assert.Equal("Overview", redirectresult.RouteValues["controller"]);
                    Assert.Equal("Index", redirectresult.RouteValues["action"]);
                }

                [Fact]
                public void Put_ReturnsRedirectResultIfLogonSuccessAndReturnUrlGiven()
                {
                    // Arrange
                    var controller = GetAccountControllerForLogonSuccess();
                    var user = new LogOnModel();

                    // Act
                    var result = controller.LogOn(user, "someurl");
                    var redirectResult = (RedirectResult)result;

                    // Assert
                    Assert.IsType<RedirectResult>(result);
                    Assert.Equal("someurl", redirectResult.Url);
                }

                [Fact]
                public void Put_ReturnsViewIfInvalidModelState()
                {
                    // Arrange
                    var controller = GetAccountControllerForLogonFailure();
                    var user = new LogOnModel();
                    controller.ModelState.AddModelError("*", "Invalid model state.");

                    // Act
                    var result = controller.LogOn(user, "someurl");
                    var viewResult = (ViewResult)result;

                    // Assert
                    Assert.IsType<ViewResult>(result);
                    Assert.Empty(viewResult.ViewName);
                    Assert.Same(user, viewResult.ViewData.Model);
                }

                [Fact]
                public void Put_ReturnsViewIfLogonFailed()
                {
                    // Arrange
                    var controller = GetAccountControllerForLogonFailure();
                    var user = new LogOnModel();

                    // Act
                    var result = controller.LogOn(user, "someurl");
                    var viewResult = (ViewResult)result;

                    // Assert
                    Assert.IsType<ViewResult>(result);
                    Assert.Empty(viewResult.ViewName);
                    Assert.Same(user, viewResult.ViewData.Model);
                    Assert.Equal(false, viewResult.ViewData.ModelState.IsValid);
                }
            }
        }

    Read the article

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments:

        MsBuild Version: .NET 4.0
        MsBuild Build File: ./WebProjectFolder/WebProject.csproj
        Command Line Arguments: /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True

    We need automated tests over our code that run when we build on our own machines (post-build event, possibly) but also run when Jenkins does a build for that project. If you run it like this it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process.

    Options: Create two Jenkins jobs, one to run the tests; if the tests pass, another build is triggered which builds and packages the web project, with the post-build event on the test project. Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects that run the NUnit console, then use the command line to copy all the required files from each of the bin and content directories into a package. Or just build the test project in Jenkins instead of the web project; the test project would reference the web project (depending on what you're testing) and build it.

    Problems: There are two jobs and not one, so two things to debug instead of one: one to see if the tests passed and one to build and compile the web project. The tests could pass but the build could fail if it's something that isn't used by what you're testing. Copying files ourselves requires us to know exactly what goes into the build; right now MSBuild does it all for us, and if you have multiple teams working on a project, every time an extra folder is created you have to worry about the possibly brittle command-line statements. This seems like a corruption of our main purpose here; the tests should be a step in this process, not the overriding most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things as a normal build, move all the correct files in the same way, move them all into the same directories, etc.?

    Initial problem: We want to run our tests whenever our main project is built, but adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of that project. I could go on... but that's enough. We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response.

    Read the article

  • Well-maintained privacy add-on/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability. But I can't seem to find a nice combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons that I discovered:
    - HTTPS Everywhere: has only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often
    - NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes seems to make "untrustworthy" decisions
    - RequestPolicy: seems dead (last activity 6 months ago, has some annoying bugs, official support mail address is dead), but the purpose of this is really great: gives you full control over cross-site requests: blocks by default, lets you add sites to a whitelist, and once this is done it works interaction-less in the background
    - AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (can never be fully sure, need to trust others)
    - Disconnect: like AdBlock Edge, just looking different; has no interaction possibilities (can never be fully sure, need to trust others, cannot interact even if I wanted to)
    - Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control
    All these add-ons together basically block everything insecure. But there are a lot of redundancies: NoScript has a mixed-content blocker, but FF has had its own for a while now. Also the cookie blocker from NoScript is redundant to my FF cookie setting. NoScript also has an XSS blocker, which is redundant to RequestPolicy. Disconnect and AdBlock are extremely redundant, but not fully. And there are some bugs (especially in RequestPolicy). And RequestPolicy seems to be dead. All in all, this list is great but has these heavy drawbacks. My favourite set would be "NoScript Light" (only script blocking, without all the additional redundant-to-other-addons hick-hack it does) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (since it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintaining (whitelist updates, trust checks). Are there add-ons that fulfill my wishes?

    Read the article

  • Looking for a new, free firewall (Sunbelt has a huge hole)

    - by Jason
    I've been using Sunbelt Personal Firewall v. 4.5 (previously Kerio). I've discovered that blocking Firefox connections in the configuration doesn't stop EXISTING Firefox connections. (See my post here yesterday: http://superuser.com/questions/132625/sunbelt-firewall-4-5-wont-block-firefox) The "stop all traffic" may work on existing connections, but I'm done testing, as I need to be able to be selective at any time. I was using the free version, so the "web filtering" option quit working after some time (mostly blocking ads and popups), but I didn't use that anyway. I used the last free version of Kerio before finally having to go to Sunbelt, because Kerio had an unfixed bug where you'd eventually get the BSOD and have to reset Kerio's configuration and start over (configure everything again). So I'm looking for a new firewall. I don't like ZoneAlarm at all (no offense to all its users that may be here; personal taste). I need the following (Sunbelt has all these, except *):
    1. Be able to block in/out to localhost (trusted)/internet selectively for each application with a click (so there are 4 click boxes for each application) [*that affects everything immediately, regardless of what's already connected]. When a new application attempts a connection, you get an allow/deny/remember window.
    2. Be able to easily set up filter rules for 'individual application'/'all applications', by protocol, port/address (range), local, remote, in, out. [*Adding a filter rule also doesn't block existing connections in Sunbelt. That needs to work too.]
    3. Have an easy-to-get-to way to "stop all traffic" (like a right-click option on the running icon in the task bar).
    4. Be able to set trusted/internet in/out block/allowed (4 things per item) for each of IGMP, ping, DNS, DHCP, VPN, and broadcasts.
    5. Define localhost as trusted/untrusted, define adapter connections as trusted/untrusted.
    6. Block incoming connections during boot-up and shutdown.
    7. Show existing connections, including local & remote IP/port, protocol, current speed, total bytes transferred, and local ports opened for listening.
    8. An Intrusion Prevention System which blocks (optionally select each one) known intrusions (long list).
    9. Block/allow applications from starting other applications (deny/allow/remember window).
    Wish list: A way of knowing what svchost.exe is doing - who is actually using it/calling it. I allowed it for localhost, and selectively allowed it for internet each time the allow/deny window came up. Thanks for any help/suggestions. (I'm using Windows XP SP3.)

    Read the article

  • A couple of questions about proxy servers, VPNs & how they work

    - by Q8Y
    I have a couple of questions that are related to security. Correct me if I'm wrong :) If I want to request something (e.g. visit www.google.com), my computer sends the request to the ISP, then to my ISP's proxy server, which takes the request, acts as a middle man, asks for the site (www.google.com), retrieves it, and sends it back to me. I know it is done like that. So my question is: in this situation my ISP knows everything I requested, and the proxy server is set by default (when I sign up for an internet subscription). If I use another proxy here (let's assume it is highly anonymous and my ISP can't detect my IP address through it), would I still go through my ISP, which then redirects me to the new proxy server that I provide? Will it know that someone is using another proxy? Or does the traffic go through another network rather than my ISP? I don't see the picture clearly. The next question is related to the first one. When I use a VPN, I know that the VPN provides tunneling, encryption and many more features that a proxy can't. So my data travels securely and my ISP can't know what I'm doing. But my questions are: where does the tunneling start? Does it start after my traffic enters the ISP's network (since they are the ones responsible for forwarding my data and requests)? If so, then not all of my connection is tunneled; there is a part that is not being tunneled, since every time I need to do anything I have to go through my ISP. Correct me if I misunderstand this. I know that a VPN can let my computer be virtually in another place and access its resources (e.g. be as if in my office while I'm at home; this is done via VPN). Suppose I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP. In this case, where is my encrypted data saved? At my ISP or at the VPN service provider? If I use a VPN, does anyone on the internet know what I'm doing or who I am? Even the VPN service provider? Can they identify me? I think they should know the person that is asking for this VPN service, am I right?

    Read the article

  • How to prevent dual booted OSes from damaging each other?

    - by user1252434
    For better compatibility and performance in games I'm thinking about installing Windows in addition to Linux. I have security concerns about this, though. Note: "Windows" in the remaining text includes not only the OS but also any software running on it, regardless of whether it comes included or is additionally installed, and whether it is started intentionally or unintentionally (virus, malware). Is there an easy way to achieve the following requirements:
    - Windows MUST NOT be able to kill my Linux partition or my data disk, neither single files (virus infection) nor by overwriting the whole disk
    - Windows MUST NOT be able to read the data disk (extra protection against spyware)
    - Linux may or may not have access to the Windows partition
    - both Linux and Windows should have full access to the graphics card; this rules out desktop VM solutions for gaming, and I want the manufacturer's Windows graphics card driver
    Regarding Windows being unable to destroy my Linux install: this is not just the usual paranoia, it has happened to me in the past. So I don't accept "no ext4 driver" as an argument. Once bitten, twice shy. And even if destruction targeted at specific (Linux) files is nearly impossible, there should be no way to shred the whole partition. I may accept the risk of malware breaking out of a barrier (e.g. a VM) around the whole Windows box, though. Currently I have a system disk (SSD) and a data disk (HDD), both SATA. I expect I'll have to add another disk; if I don't, even better. My CPU is an Intel Core i5, with VT-x and VT-d available, though untested. Ideas I've had so far:
    - deactivate or hide the other HDs until reboot, at a low level: is that possible? Can the boot loader (GRUB) do this for me?
    - a tiny VM layer: load Windows in a VM that provides access to almost all hardware, except the HDs. Is there any ready-made software solution for this? Preferably free. As I said, the main problem seems to be providing full access to the graphics card.
    - a hardware switch to cut power to the disks: commercial products are expensive and there are lots of warnings against cheap home-built solutions; preferably all three hard disks with one switch (one push)
    - mobile racks: won't the wear from daily swapping be a problem?

    Read the article

  • Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.

    - by The Geek
    Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Just to be clear and direct with you: we absolutely recommend Microsoft Security Essentials as your anti-malware / anti-virus utility over any other option—and how can you argue? It’s totally free! Here are all of the new features in the latest release, which make it even more of a must-download:
    - Network Traffic Inspection integrates into the network system and monitors traffic at a low level without slowing down your PC, so it can actually detect threats before they get to your PC.
    - Internet Explorer Integration blocks malicious scripts before IE even starts running them—clearly a big security advantage.
    - Heuristic Scanning Engine finds malware that hasn’t been previously detected by scanning for certain types of attacks. This provides even more protection than virus definitions alone.
    These new features put MSE on par with other anti-malware applications, especially the heuristic scanning, whose absence has been the only complaint anybody could make against MSE in the past—but now it has it.

    Read the article

  • Wildcards in jnlp template file

    - by Andy
    Since the last security changes in Java 7u40, it is required to sign a JNLP file. This can either be done by adding the final JNLP in JNLP-INF/APPLICATION.JNLP, or by providing a template JNLP in JNLP-INF/APPLICATION_TEMPLATE.JNLP in the signed main jar. The first way works well, but we would like to allow passing a previously unknown number of runtime arguments to our application. Therefore, our APPLICATION_TEMPLATE.JNLP looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <jnlp codebase="*">
          <information>
            <title>...</title>
            <vendor>...</vendor>
            <description>...</description>
            <offline-allowed />
          </information>
          <security>
            <all-permissions/>
          </security>
          <resources>
            <java version="1.7+" href="http://java.sun.com/products/autodl/j2se" />
            <jar href="launcher/launcher.jar" main="true"/>
            <property name="jnlp...." value="*" />
            <property name="jnlp..." value="*" />
          </resources>
          <application-desc main-class="...">
            *
          </application-desc>
        </jnlp>

    The problem is the * inside the application-desc tag. It is possible to wildcard a fixed number of arguments using multiple argument tags (see code below), but then it is not possible to provide more or fewer arguments to the application (Java Web Start will not start with an empty argument tag).

        <application-desc main-class="...">
          <argument>*</argument>
          <argument>*</argument>
          <argument>*</argument>
        </application-desc>

    Can someone confirm this problem and/or suggest a solution for passing a previously undefined number of runtime arguments to the Java application? Thanks a lot!

    Read the article
