Search Results

Search found 6234 results on 250 pages for 'exposed filter'.

  • Securing RDP access to Windows Server 2008 R2: is Network Level Authentication enough?

    - by jamesfm
    I am a dev with little admin expertise, administering a single dedicated web server remotely. A recent independent security audit of our site recommended that "RDP is not exposed to the Internet and that a robust management solution such as a VPN is considered for remote access. When used, RDP should be configured for Server Authentication to ensure that clients cannot be subjected to man-in-the-middle attacks." Having read around a bit, it seems like Network Level Authentication is a Good Thing, so I have enabled the "Allow connections only from Remote Desktop with NLA" option on the server today. Is this action enough to mitigate the risk of a man-in-the-middle attack, or are there other essential steps I should be taking? If a VPN is essential, how do I go about setting one up?
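
    For anyone scripting this check: the NLA state is exposed through WMI, so it can be verified (and enforced) remotely. A minimal PowerShell sketch, assuming the default RDP-Tcp listener and a server named "myserver" (the name is a placeholder):

        # Query the RDP listener; UserAuthenticationRequired = 1 means NLA is enforced
        $ts = Get-WmiObject -Class Win32_TSGeneralSetting `
                -Namespace root\cimv2\TerminalServices `
                -ComputerName "myserver" -Filter "TerminalName='RDP-Tcp'"
        $ts.UserAuthenticationRequired          # 0 = NLA off, 1 = NLA on
        $ts.SetUserAuthenticationRequired(1)    # enforce NLA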

    Read the article

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    In Windows Server 2003, when an Event 529 (logon failure) occurs with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a Windows XP machine, this (and some other details) is omitted. If a bot is attempting a brute-force attack over RDP (some of my XP machines are, and need to be, exposed on a public IP address), I cannot see the originating IP address, so I don't know what to block (with a script I run every few minutes). The DC does not log this detail either when the logon attempt is to the client XP machine and the DC is only asked to authenticate the credentials. Any help getting this detail into the log would be appreciated.
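
    One workaround while the 529 events stay silent: poll the TCP table for live RDP connections and log the remote addresses, then feed that file to the blocking script. A batch sketch, assuming the default RDP port 3389 and a log path of C:\rdp-sources.log (both placeholders):

        :: record who is connected to RDP right now; schedule every few minutes
        echo %date% %time% >> C:\rdp-sources.log
        netstat -n | find ":3389" >> C:\rdp-sources.log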

    Read the article

  • How to run an AFS file server on a specific Ethernet card (in Debian)

    - by listboss
    I have a Linux box running Debian server with a minimal set of packages (so no GUI for network management). The box has two Ethernet cards, one of which (eth0) is connected to a Mac OS X computer using a crossover cable. I can bring up eth0 and assign a static IP (10.10.11.16) to it, which lets me ssh to the box through the crossover cable. This is what I run on the Linux box: ifconfig eth0 10.10.11.16 netmask 255.255.255.0 up I also installed and started an AFS file server on Debian. So far, the file server can only be accessed through eth1, which is exposed to my home LAN and the Internet. My goal is to set up the file server so that it's only visible through eth0. Is this possible, and if yes, how can I do it?
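
    For the AFS side, OpenAFS file servers read /usr/afs/local/NetInfo and /usr/afs/local/NetRestrict to decide which addresses they register with clients. A sketch, assuming OpenAFS and an eth1 address of 192.168.1.16 (the eth1 address is an assumption; substitute your own):

        # advertise only the cross-cable address to AFS clients
        echo "10.10.11.16" > /usr/afs/local/NetInfo
        # explicitly exclude the LAN-facing address
        echo "192.168.1.16" > /usr/afs/local/NetRestrict
        # restart the fileserver instance so it re-reads the files
        bos restart localhost fs -localauth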

    Read the article

  • Active DFS node did not restore after failure

    - by Mark Henderson
    On Tuesday we had a Server 2008 R2 DFS-R node go offline unexpectedly. DFS did the right thing and started routing requests to a different node, which was in a remote site. This is by design, because even though it's slow, at least it's still working. We had the local DFS-R node back online within an hour, and it had synced all its changes 10 minutes after that. Three of the five terminal servers reset themselves to the local DFS node, but the other two stayed pointing at the remote DFS node for three days, until someone finally piped up about how slow requests were. What reasons could there be why some, but not all, of the servers reverted? Is the currently active DFS node for a namespace exposed anywhere in the OS (WMI, or even scripts) so that we can monitor the active nodes?
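
    On the client side, the referral cache answers the "which node am I on" question: dfsutil can dump it per machine. A sketch, assuming dfsutil (from the DFS management tools) is available on each terminal server:

        :: show the referral cache; the target marked ACTIVE is the one in use
        dfsutil /pktinfo
        :: 2008-era syntax for the same view, and a flush to force re-selection
        dfsutil cache referral
        dfsutil cache referral flush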

    Read the article

  • Media is write protected when using diskshadow.exe and the Start-BitsTransfer PowerShell cmdlet

    - by Aaron - Solution Evangelist
    I am trying to use the PowerShell Start-BitsTransfer cmdlet to transfer a file I have exposed using a VSS snapshot (via diskshadow), but unfortunately I am receiving the following error: Start-BitsTransfer : The media is write protected. At line:1 char:49 + Import-CSV c:\hda1\bits.txt | start-bitstransfer <<<< -transfertype upload -Authentication "Basic" -Credential $cred + CategoryInfo : InvalidOperation: (:) [Start-BitsTransfer], Exception + FullyQualifiedErrorId : StartBitsTransferCOMException,Microsoft.BackgroundIntelligentTransfer.Management.NewBitsTransferCommand We really want to use the BITS endpoint we are transferring the files to. Is there any other way to go about this (aside from copying the files elsewhere first, unless we can copy one slice at a time and transfer that)?
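
    A sketch of the slice-at-a-time fallback mentioned above, on the theory that BITS wants write access to the source file during an upload. It assumes bits.txt has Source and Destination columns and that C:\staging sits on a writable disk (both are assumptions):

        Import-Csv C:\hda1\bits.txt | ForEach-Object {
            # stage one file at a time on writable media, then upload it
            $staged = Join-Path C:\staging (Split-Path $_.Source -Leaf)
            Copy-Item $_.Source $staged
            Start-BitsTransfer -Source $staged -Destination $_.Destination `
                -TransferType Upload -Authentication Basic -Credential $cred
            Remove-Item $staged
        }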

    Read the article

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted filesystem for Linux that can be mounted in a write-only mode. By that I mean you should be able to mount it without supplying a password and still be able to write/append files, but you should be able neither to read the files you have written nor to read the files already on the filesystem. Access to the files should only be given when the filesystem is mounted with the password. The purpose of this is to write log files or similar data that is only written, never modified, without having the files themselves be exposed. File permissions don't help here, as I want the data to be inaccessible even when the system is fully compromised. Does such a thing exist on Linux? Or if not, what would be the best alternative for creating encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works but is very cumbersome: you can't easily get access to the filesystem as a whole, and you have to pipe each file through gpg --decrypt manually.
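
    Note that the public-key half of gpg already gives the write-only property: anything on the box can encrypt with the public key, and only the private key (stored elsewhere) can read. A minimal sketch, assuming a recipient key "logsink@example.com" whose private half never touches the machine (the key name and daemon are placeholders):

        # writer side: no secret material on the machine
        some_daemon 2>&1 | gpg --encrypt --recipient logsink@example.com > /var/log/app.log.gpg
        # reader side, on the machine holding the private key
        gpg --decrypt app.log.gpg | less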

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard-drive failures would be nice. Right now the content is spread across different hard drives, some external, some internal. I don't exclude the possibility of buying new/extra drives if necessary. If the files are ever exposed to the web, it won't be to the public, just to a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet, like this one: http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, and it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • Prevent machine in a LAN from receiving a remote shutdown

    - by WebDevHobo
    I'm probably just overreacting, but I recently came across a LAN scanner that showed me a "remote shutdown" option for every computer found on the scanned network. Now, how exactly does this work? If I send such a message, will the shutdown happen no matter what, or does it require the username/password of a user on the other computer? Mostly I'm wondering: can this be done to me, and how do I prevent it? EDIT: What's more, I had the scanner check for shares. Double-clicking the resulting links opens them in Explorer, which basically means my entire C and F drives (the only two HDs I have) are completely exposed to anyone on my LAN. Or can I only open these because it's my own machine?
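
    For reference, the scanner's "remote shutdown" is just the built-in shutdown call, and it only succeeds with administrative credentials on the target; without them it fails with "Access is denied":

        shutdown /s /m \\192.168.1.20 /t 0

    The open C$ and F$ shares are the default administrative shares, visible to you because you administer your own machine. A sketch for disabling them on a workstation (the AutoShareWks value; it takes effect once the Server service restarts):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v AutoShareWks /t REG_DWORD /d 0 /f
        net stop server && net start server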

    Read the article

  • How to write a ProxyPass rule to go from HTTPS to HTTP in IIRF

    - by Keith Nicholas
    I have a server running a web app that serves itself over HTTP. I want to use IIS 6 (on the same server) to provide an HTTPS layer for this web app. From what I can tell, a reverse proxy will let me do this, and IIRF seems like the tool for the job. There are no domain names involved; it's all IP addresses. So I think I want https://<ipnumber>:5001 to send all its requests to the same server on a different port over HTTP (not exposed to the net): http://<ipnumber>:5000 I'm not sure how to go about this with IIRF, though; I'm not entirely sure how to write the rules. I think I need to make a virtual web app on 5001 using HTTPS, then add a rules file?
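
    A sketch of the IIRF side, assuming IIRF 2.x installed as a filter on a new IIS site bound to HTTPS on 5001 (the directive names come from IIRF's ini-style config; verify them against your version's docs):

        # IIRF.ini for the HTTPS site: forward everything to the local HTTP listener
        RewriteLog  c:\temp\iirf
        RewriteLogLevel 1
        ProxyPass ^/(.*)$ http://127.0.0.1:5000/$1

    IIS terminates the SSL on 5001 and IIRF proxies in the clear to 5000 on localhost, which itself stays firewalled from the outside.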

    Read the article

  • Cluster of services and restarting on package upgrade

    - by Marcin Cylke
    I'm using puppet to manage a bunch of servers. Those servers run a simple service, exposed to the world via a load balancer. The service's instances are independent, in that they can run on their own, and are deployed on multiple servers to increase responsiveness. Now, when I push a new package to the repo and puppet notices it appearing there, it just updates the package on all servers at once. This results in a short downtime of the entire service. Is there a way of configuring puppet to restart the services sequentially, or to use some other kind of strategy?
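
    One common dodge, sketched rather than prescribed: stop tracking 'latest', pin the package version in the manifest, and move node groups to the new pin one at a time so only part of the pool restarts at once (the class and node names below are made up):

        class myservice($version) {
          package { 'myservice':
            ensure => $version,          # pinned, not 'latest'
            notify => Service['myservice'],
          }
          service { 'myservice': ensure => running }
        }
        # roll the new version out group by group
        node /^app0[12]$/ { class { 'myservice': version => '1.2-4' } }
        node /^app0[34]$/ { class { 'myservice': version => '1.2-3' } }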

    Read the article

  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

        eth0
         \- br0 - 1.1.1.2
             |- [VM 1's eth0]
             |    |- 1.1.1.3
             |    \- 1.1.1.4
             \- [VM 2's eth0]
                  \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs doing their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
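
    A sketch of the usual split, assuming bridge-netfilter is enabled so bridged frames traverse iptables at all, and assuming the VMs attach via tap devices named vnet0/vnet1 (the tap names are assumptions): traffic addressed to br0's own 1.1.1.2 arrives on INPUT, while bridged traffic for the VMs crosses FORWARD, where the physdev match can tell the ports apart.

        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # host rules: br0's own address
        iptables -A INPUT -i br0 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i br0 -j DROP

        # VM rules: match the bridge port the frame leaves through
        iptables -A FORWARD -m physdev --physdev-out vnet0 -p tcp --dport 80 -j ACCEPT
        iptables -A FORWARD -m physdev --physdev-out vnet0 -j DROP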

    Read the article

  • Recurring network issues at the same time every day

    - by Peter Turner
    Something has been happening on my company's network at 9:30 every day. I'm not the sysadmin, but he's not a ServerFault guy, so I'm not privy to every aspect of the network; I can ask questions if follow-up is needed. The symptoms are the following: sluggish network and download speeds (I don't notice it, but others do), and 3Com phones start ringing with nobody on the other end. We've got a few ports exposed to the public for a web server, a few other ports for communicating with our clients for tech support, and a VPN. We've got a Cisco ASA blocking everything else. We've got a smallish network (fewer than 50 computers/VMs on at any time), an Active Directory server, and a few VM servers, and we host our own mail server too. I'm thinking the problem is internal, but what's a good way to figure out where it's coming from?
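
    One way to catch it in the act: schedule a packet capture that brackets the window, then diff a bad morning against a quiet one. A sketch assuming Wireshark's tshark is installed on a machine that sees the relevant traffic (the interface number and paths are placeholders):

        schtasks /create /tn "morning capture" /sc daily /st 09:25 ^
          /tr "\"C:\Program Files\Wireshark\tshark.exe\" -i 1 -a duration:600 -w C:\captures\0930.pcap"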

    Read the article

  • Get the "source network address" in Event ID 529 audit entries on Windows XP

    - by Make it useful Keep it simple
    In windows server 2003 when an Event 529 (logon failure) occures with a logon type of 10 (remote logon), the source network IP address is recorded in the event log. On a windows XP machine, this (and some other details) are omitted. If a bot is trying a brute force over RDP (some of my XP machines are (and need to be) exposed with a public IP address), i cannot see the originating IP address so i don't know what to block (with a script i run every few minutes). The DC does not log this detail either when the logon attempt is to the client xp machine and the DC is only asked to authenticate the credentials. Any help getting this detail in the log would be appreciated.

    Read the article

  • Laptop white screen on power-up. Still displays via HDMI output

    - by Inno
    My wife's laptop recently started displaying a white screen. It doesn't show the POST or anything else, just a white screen when it's powered on. However, it works normally over HDMI output to our television. I took it apart and fiddled with both ends of the display cable, but either I didn't fiddle correctly or that's just not the problem. I also noticed that the screen won't turn off anymore when the laptop is closed. Is there a name for the mechanism that controls this function, so I can try to locate it? My guess is that the problem lies with the screen itself or the display cable, but I'm curious whether there's anything else I might be overlooking. Also of note: the left hinge is partially broken. The corner of the plastic case broke off, so the hinge is exposed and doesn't stay in place. I've tried holding it in place, wiggling it around, and tapping various parts of the computer, but the white screen remains.

    Read the article

  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since I have taken two things for granted: every file has a file extension (like .txt or .jpg), and that extension is always short (usually 3 letters). I learned early on that the extension is basically just a hint to the OS as to what the content type is. Eventually I was exposed to Mac and Linux, files with no extensions, and so on. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder: how long can an extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?
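
    An easy empirical check on Linux/Mac, sketched below: the extension is not a separate field at all, just part of one filename whose total length is capped by the filesystem (commonly 255 bytes):

        touch test.mycrazylongextensiontypewoohoo   # works on any modern filesystem
        getconf NAME_MAX .                          # typically prints 255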

    Read the article

  • Monitor Windows Terminal Sessions from Linux/Mac

    - by mhd
    I'm writing some scripts to make remote connections to a Windows 2003 server a bit more user-friendly, and in doing this I want to see who's logged in already. In Windows, I could use qwinsta.exe to do this, even for remote servers. So it is exposed somehow, but I couldn't find a matching command line tool for Unix. Lacking such a tool, I could install an ssh server on the machine and call it remotely, parsing the output or write a small service of my own that would expose this via http, if I don't want full-blown ssh access. Do I have to do this, or is there already a tool for querying terminal services remotely?
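
    One existing route from Linux, for what it's worth: winexe (a Samba-derived tool) can run qwinsta on the Windows box and hand back the output for parsing, assuming you have administrative credentials there (the domain, account, and host below are placeholders):

        winexe -U 'DOMAIN/admin%password' //winserver 'qwinsta'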

    Read the article

  • How to copy photos from my iPad to my PC

    - by davidshen84
    Hi, I used iTunes to sync my photos to my iPad, but I have since lost the copies of the photos on my PC, so I want to restore them from the iPad. I cannot find the photos in the storage folder that the iPad exposes, and I am not sure the photo-sync function in iTunes can copy photos from the iPad back to the PC; it seems it can only sync things from the PC to the iPad. I am also not sure whether jailbreaking the iPad would help.

    Read the article

  • RemoteApps and Cached Credentials

    - by user66774
    I'm looking for guidance on an issue we're having. We are hosting an application over Terminal Services through RDWeb on Windows Server 2008. To give users the ability to change their passwords, we've exposed the iisadmpwd virtual directory. When users change their password, they are prompted to log into the broker server again, even if they log off of the RDWeb page and log back in. What we've found is that the credentials seem to be cached in memory after logging in: ending task on TSWBPRXY.EXE and WKSPRT.EXE, closing IE, logging back into the RDWeb page, and then launching the application allows the user to log into the application without supplying additional credentials. I'm wondering if there is a better way to let users change their password from a web interface while allowing them to re-establish their connection from the RDWeb login page rather than through the RDP login prompt that comes up.

    Read the article

  • How can I prevent Apache from exposing a user's password?

    - by Marius Marais
    When using basic authentication with Apache (specifically via LDAP, but htpasswd as well), Apache makes the REMOTE_USER variable available to the PHP / Ruby / Python code underneath -- this is very useful for offloading authentication to the webserver. In our office environment we have lots of internal applications working like this over SSL, all quite secure. BUT: Apache exposes the PHP_AUTH_USER (= REMOTE_USER) and PHP_AUTH_PW variables to any application inside PHP. (PHP_AUTH_PW contains the plaintext password the user entered.) This means it's possible for an app to harvest usernames and passwords. Presumably the same information is available to Python and Ruby (all three are currently in use; PHP is being phased out). So how can I prevent Apache from doing this? One idea is to use Kerberos Negotiate authentication (which does not expose the password and has the benefit of being SSO), but it automatically falls back to Basic for some browsers (Chrome and in some cases Firefox), causing the password to be exposed again.
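
    One partial mitigation while Basic auth remains, sketched under the assumption that you control php.ini: blank the password before any application code runs, via a prepended file the apps cannot skip. (This narrows rather than closes the hole; a hostile app could still read the raw Authorization header through apache_request_headers().)

        ; php.ini
        auto_prepend_file = /etc/php/strip_auth.php

        <?php
        // strip_auth.php: keep the identity, drop the plaintext password
        unset($_SERVER['PHP_AUTH_PW'], $_SERVER['HTTP_AUTHORIZATION']);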

    Read the article

  • Does Google Chrome officially work on Windows 7 64-Bit Yet?

    - by Nick Josevski
    As soon as I jumped onto one of the beta releases of Windows 7, I tried to install Google Chrome. Being on a 64-bit installation, it came up with a 'non-supported OS' error or something similar (I can't remember). Looking around at the time, I saw lots of posts/tips about just appending --in-process-plugins to the Chrome shortcut; after trying this and still not having luck, I found more posts, including some that seemed to be from the Chrome developers, saying this was not wise and exposed a security risk. So does anyone have a well-sourced answer as to what's holding up Windows 7 64-bit support in Chrome, or better yet an "official" answer saying that it is supported on Win7 x64 RTM and works well now?

    Read the article

  • How to access a PHP Web Service from ASP.Net?

    - by Steve Johnson
    I am trying to use a web service in a C# ASP.NET web application. The service is built in PHP and is located on a remote server not under my control, so I can't modify it to add metadata or anything else. When I use the "Add Web Reference" option in Visual Studio 2008 on the following web service:

        https://subreg.forpsi.com/robot2/subreg_command.php?wsdl

    I receive the following error:

        The HTML document does not contain Web service discovery information.

    The web service's functions are exposed and displayed in Visual Studio 2008; however, I could not add a reference to it for use in the ASP.NET application. The "t3Service" description lists these methods:

        __construct ( ), create_contact ( ), get_contact ( ), get_domain_info ( ), get_last_error_code ( ), get_last_error_msg ( ), get_NSSET ( ), get_owner_mail ( ), login ( ), register_domain ( ), register_domain_with_admin_contacts ( ), renew_domain ( ), request_sendmail ( ), send_auth_info ( ), transfer_domain ( )

    I also tried the wsdl.exe method, retrieving the XML, copying it to a .wsdl file, and generating a proxy class. But the wsdl.exe output contains warnings, and the generated proxy class skips the exposed functions, emitting one comment like this per operation:

        // CODEGEN: The operation binding 'create_contact' from namespace 'urn:t3' was ignored. Each message part in an use=encoded message must specify a type.

    (The same "Each message part in an use=encoded message must specify a type" warning repeats for get_contact, get_domain_info, get_last_error_code, get_last_error_msg, get_NSSET, get_owner_mail, send_auth_info, transfer_domain, request_sendmail, login, register_domain, register_domain_with_admin_contacts, and renew_domain.) Any help in this regard will be highly appreciated. Regards
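
    When the WSDL cannot be consumed (rpc/encoded bindings with untyped parts are exactly what wsdl.exe rejects), one fallback is to skip the proxy and post the SOAP envelope by hand. A C# sketch; the envelope body and SOAPAction below are assumptions inferred from the urn:t3 namespace and should be checked against a working request from another client:

        using System;
        using System.IO;
        using System.Net;

        class SubregRawSoap
        {
            static void Main()
            {
                const string envelope =
                    @"<soapenv:Envelope xmlns:soapenv=""http://schemas.xmlsoap.org/soap/envelope/""" +
                    @" xmlns:t3=""urn:t3""><soapenv:Body><t3:get_last_error_msg/></soapenv:Body></soapenv:Envelope>";

                var req = (HttpWebRequest)WebRequest.Create("https://subreg.forpsi.com/robot2/subreg_command.php");
                req.Method = "POST";
                req.ContentType = "text/xml; charset=utf-8";
                req.Headers.Add("SOAPAction", @"""urn:t3#get_last_error_msg""");

                using (var writer = new StreamWriter(req.GetRequestStream()))
                    writer.Write(envelope);

                using (var reader = new StreamReader(req.GetResponse().GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());   // raw XML; parse with XmlDocument
            }
        }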

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters, and each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts. Part 1: an EAR deployed to one cluster that contains 3rd-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces. Part 2: an EAR deployed to the same cluster as the first. This EAR contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients. Part 3: there may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services deployed to the other cluster. Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, the communication should be web services. That said, some of us are wondering about performance and about optimizing communication for speed in the applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local call? Are there other optimization techniques? Should we consider them, or not? What are the costs/benefits? Here is the question from one of our team members, as sent in their email: supposing we develop our EJBs as remote EJBs, where our UI controller code talks to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following: Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command-line parameters to the application server JVM:

        -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
        -Dcom.ibm.CORBA.iiop.noLocalCopies=true

    CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java-object-derived (non-primitive) method parameters can actually be changed by the called enterprise bean. Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations? Thanks
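
    The side effect in that caution is easy to reproduce with plain Java: under call-by-reference, a mutation inside the bean method is visible to the caller, which never happens across a true RMI copy. A standalone sketch (Java 5 syntax to match the application above):

        import java.util.ArrayList;
        import java.util.List;

        public class CallByReferenceDemo {
            // stands in for an EJB method receiving a non-primitive parameter
            static void beanMethod(List<String> names) {
                names.add("added-inside-bean");   // mutates the caller's object
            }

            public static void main(String[] args) {
                List<String> names = new ArrayList<String>();
                beanMethod(names);
                // prints 1 under local call-by-reference semantics;
                // with RMI call-by-value the caller's list would still be empty
                System.out.println(names.size());
            }
        }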

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross-Site Scripting) and SQL Injection, Cross-Site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities in web applications today. CSRF attacks are probably the least well known, but they are relatively easy to exploit and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson.

    The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue

    ASP.NET MVC provides an HtmlHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page, you get a hidden input and a cookie with a random string assigned. Next, on your target action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better

    Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse, can be easily forgotten. Let's see if we can make this easier on the programmer by moving from an "opt-in" model of protection to an "opt-out" model.

    Using AntiForgeryToken by default

    In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need a way to opt out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
        public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute
        {
        }

    Now we are ready to implement the main action filter, which will force anti-forgery validation on all POST actions within any class it is defined on:

        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
        public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (ShouldValidateAntiForgeryTokenManually(filterContext))
                {
                    var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);

                    // Use the authorization of the anti-forgery token,
                    // which can't be inherited from because it is sealed
                    new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext);
                }

                base.OnActionExecuting(filterContext);
            }

            /// <summary>
            /// We should validate the anti-forgery token manually if the following criteria are met:
            /// 1. The http method must be POST
            /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action
            /// 3. There is no [BypassAntiForgeryToken] attribute on the action
            /// </summary>
            private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext)
            {
                var httpMethod = filterContext.HttpContext.Request.HttpMethod;

                // 1. The http method must be POST
                if (httpMethod != "POST") return false;

                // 2. There is not an existing anti-forgery token attribute on the action
                var antiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);

                if (antiForgeryAttributes.Length > 0) return false;

                // 3. There is no [BypassAntiForgeryToken] attribute on the action
                var ignoreAntiForgeryAttributes = filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);

                if (ignoreAntiForgeryAttributes.Length > 0) return false;

                return true;
            }
        }

    The code above is pretty straightforward: first we check to make sure this is a POST request, then we make sure there aren't any overriding *AntiForgeryToken attributes on the action being executed. If we have a candidate, we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now, with this new attribute on your base controller, you can start protecting your site from CSRF vulnerabilities:

        [UseAntiForgeryTokenOnPostByDefault]
        public class ApplicationController : System.Web.Mvc.Controller
        {
        }

        // Then for all of your controllers
        public class HomeController : ApplicationController
        {
        }

    What we accomplished

    If your base controller has the new default anti-forgery token attribute on it, when you don't use <%= Html.AntiForgeryToken() %> in a form (or, of course, when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled!

    In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!

    Read the article

  • The Most Common and Least Used 4-Digit PIN Numbers [Security Analysis Report]

    - by Asian Angel
    How ‘secure’ is your 4-digit PIN number? Is your PIN number a far too common one or is it a bit more unique in comparison to others? The folks over at the Data Genetics blog have put together an interesting analysis report that looks at the most common and least used 4-digit PIN numbers chosen by people. Numerically based (0-9) 4-digit PIN numbers only allow for a total of 10,000 possible combinations, so it stands to reason that some combinations are going to be far more common than others. The question is whether or not your personal PIN number choices are among the commonly used ones or ‘stand out’ as being more unique. Note 1: Data Genetics used data condensed from released, exposed, & discovered password tables and security breaches to generate the analysis report. Note 2: The updates section at the bottom has some interesting tidbits concerning peoples’ use of dates and certain words for PIN number generation. The analysis makes for very interesting reading, so browse on over to get an idea of where you stand with regards to your personal PIN number choices.

    Read the article

  • How to implement a nice scanline effect using libgdx

    - by Alexandre GUIDET
    I am working on an old-school platformer based on libgdx. My first attempt was to make a little texture and fill a rect over the whole screen, but it seems I am messing around with the orthographic cameras (I use two cameras: one for the tilemap and one to project the scanline filter). Sometimes the texture is stuck to the tilemap, and sometimes it is too large and covers the whole screen in black. Is my approach of using two cameras correct? Does someone have a solution for achieving this retro effect in libgdx (see Maldita Castilla)? Thanks
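
    One approach that sidesteps the two-camera fight: draw the world with the scrolling tilemap camera, then draw the filter in a second pass with a dedicated screen-sized camera that never moves, so the overlay can't stick to the map or scale with it. A sketch using libgdx's ShapeRenderer (a repeating 1x2 texture drawn with the same fixed camera works just as well):

        // created once: a camera matching the window, never panned or zoomed
        // screenCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        shapes.setProjectionMatrix(screenCam.combined);
        Gdx.gl.glEnable(GL20.GL_BLEND);
        Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        shapes.begin(ShapeRenderer.ShapeType.Filled);
        shapes.setColor(0f, 0f, 0f, 0.35f);                    // translucent, not solid black
        for (int y = 0; y < Gdx.graphics.getHeight(); y += 2) {
            shapes.rect(0, y, Gdx.graphics.getWidth(), 1);     // darken every other row
        }
        shapes.end();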

    Read the article
