Search Results

Search found 851 results on 35 pages for 'rubin attack'.

Page 20 of 35

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, I think some requests need it, because otherwise tokens can be stolen and, before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement as I'm using the ASP.Net Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated over: the URL (possibly minus the query string), the verb, the request IP (potentially a barrier on some mobile devices, though), and the UTC date and time when the client issues the request. For the last one the client would send that string in a request header, of course, and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from later using a captured request as a template for different requests. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
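    A minimal sketch of the signing scheme described above (the canonical string layout, timestamp format and key handling are assumptions added for illustration, not details from the question): the client signs URL, verb, client IP and a UTC timestamp with HMAC-SHA-256 under its issued key, and the server recomputes the value and checks freshness. As noted above, the body is deliberately left out of the MAC.

        # Hypothetical sketch of the scheme described above: HMAC-SHA-256 over
        # URL + verb + client IP + UTC timestamp. The canonical string layout
        # and timestamp format are illustrative assumptions.
        import hashlib
        import hmac
        from datetime import datetime, timedelta, timezone

        def canonical_string(url, verb, client_ip, utc_timestamp):
            # One unambiguous, newline-separated representation of the signed fields.
            return "\n".join([url, verb.upper(), client_ip, utc_timestamp])

        def sign(secret_key, url, verb, client_ip, utc_timestamp):
            msg = canonical_string(url, verb, client_ip, utc_timestamp).encode("utf-8")
            return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()

        def verify(secret_key, url, verb, client_ip, utc_timestamp, presented_mac,
                   max_skew=timedelta(minutes=5)):
            # Reject stale timestamps first (replay window), then compare MACs
            # in constant time.
            sent_at = datetime.strptime(utc_timestamp, "%Y-%m-%dT%H:%M:%SZ")
            sent_at = sent_at.replace(tzinfo=timezone.utc)
            if abs(datetime.now(timezone.utc) - sent_at) > max_skew:
                return False
            expected = sign(secret_key, url, verb, client_ip, utc_timestamp)
            return hmac.compare_digest(expected, presented_mac)

        # Example: secret_key is the caller's issued private key, as raw bytes.
        mac = sign(b"caller-private-key", "https://api.example.com/users/42",
                   "GET", "203.0.113.7", "2012-06-01T10:30:00Z")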

    Read the article

  • Securing RDP access to Windows Server 2008 R2: is Network Level Authentication enough?

    - by jamesfm
    I am a dev with little admin expertise, administering a single dedicated web server remotely. A recent independent security audit of our site recommended that "RDP is not exposed to the Internet and that a robust management solution such as a VPN is considered for remote access. When used, RDP should be configured for Server Authentication to ensure that clients cannot be subjected to man-in-the-middle attacks." Having read around a bit, it seems like Network Level Authentication is a Good Thing, so I have enabled the "Allow connections only from Remote Desktop with NLA" option on the server today. Is this action enough to mitigate the risk of a man-in-the-middle attack? Or are there other essential steps I should be taking? If a VPN is essential, how do I go about it?

    Read the article

  • Ways for managing the installation and configuration of various software applications and settings in a group of Linux development and server computers

    - by EmpireJones
    What are some ways of managing the installation and configuration of various software applications and settings in a group of Linux development and server computers? Is a set of basic scripts a good means of attack? I was thinking about just having a ton of scripts, such as: setup_dev_env [install|uninstall|reinstall], setup_nfs [...], setup_nfs_share [...], setup_http [...], setup_memcache_node [...]. Is there any better method? It would be nice to be able to "upgrade" an installation script too, for example to change common development settings.
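    For what it's worth, the script-per-component idea can be kept manageable with a single dispatcher along these lines (a rough sketch; the component names and package commands are invented, and a real version would need per-distro handling):

        #!/usr/bin/env python3
        # Rough sketch of a single dispatcher for per-component
        # install/uninstall/reinstall actions, as described above. Component
        # names and the package commands are hypothetical examples.
        import subprocess
        import sys

        COMPONENTS = {
            "dev_env":  {"install":   ["apt-get", "install", "-y", "build-essential"],
                         "uninstall": ["apt-get", "remove", "-y", "build-essential"]},
            "memcache": {"install":   ["apt-get", "install", "-y", "memcached"],
                         "uninstall": ["apt-get", "remove", "-y", "memcached"]},
        }

        def run(component, action):
            if action == "reinstall":
                run(component, "uninstall")
                run(component, "install")
                return
            cmd = COMPONENTS[component][action]
            print("->", " ".join(cmd))
            subprocess.check_call(cmd)

        if __name__ == "__main__":
            # Usage: ./setup.py <component> <install|uninstall|reinstall>
            run(sys.argv[1], sys.argv[2])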

    Read the article

  • Hacking prevention, forensics, auditing and counter measures.

    - by tmow
    Recently (though it is also a recurrent question) we saw three interesting threads about hacking and security: "My server's been hacked EMERGENCY", "Finding how a hacked server was hacked" and "File permissions question". The last one isn't directly related, but it highlights how easy it is to mess up web server administration. As there are several things that can be done before something bad happens, I'd like to have your suggestions in terms of good practices to limit the after-effects of an attack and how to react in the sad case that it does happen. It's not just a matter of securing the server and the code but also of auditing, logging and countermeasures. Do you have a good-practices list, or do you prefer to rely on software, or on experts that continuously analyze your web server(s) (or nothing at all)? If yes, can you share your list and your ideas/opinions?

    Read the article

  • hosts.deny not working

    - by Captain Planet
    Currently I am watching the live auth.log, and someone has been continuously attempting a brute-force attack for 10 hours. It's my local server, so there's no real need to worry, but I want to test. I have installed denyhosts. There is already an entry for that IP address in hosts.deny, but he is still attempting attacks from the same IP and the system is not blocking them. Firstly, I don't know how that IP address got entered into that file; I didn't enter it, so is there another system script which can do that? hosts.deny is:

        sshd: 120.195.108.22
        sshd: 95.130.12.64

    hosts.allow is:

        ALL:ALL
        sshd: ALL

    Is there any iptables setting that can override the hosts.deny file?
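    For context, denyhosts populates hosts.deny automatically; in spirit it does something like the following sketch (the threshold, paths and log pattern here are simplified assumptions), which is the most likely source of the entry you did not add yourself:

        # Simplified sketch of what denyhosts does: count failed sshd logins
        # per IP in auth.log and append repeat offenders to /etc/hosts.deny.
        # The threshold, file paths and log pattern are assumptions.
        import re
        from collections import Counter

        FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

        def offenders(log_path="/var/log/auth.log", threshold=5):
            counts = Counter()
            with open(log_path) as log:
                for line in log:
                    match = FAILED.search(line)
                    if match:
                        counts[match.group(1)] += 1
            return [ip for ip, hits in counts.items() if hits >= threshold]

        def block(ips, deny_path="/etc/hosts.deny"):
            with open(deny_path, "a") as deny:
                for ip in ips:
                    deny.write("sshd: {0}\n".format(ip))

        if __name__ == "__main__":
            block(offenders())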

    Read the article

  • How do companies know they've been hacked?

    - by Chad
    With the news of Google and others getting hacked, I was wondering how companies find out, detect, and/or know they've been hacked in the first place. Sure, they might find a virus/trojan on users' computers, or see a very high access rate to parts of their system that don't usually see much, if any, traffic. But from what I've seen in articles, the attack was pretty 'sophisticated', so I wouldn't imagine the hackers would make their activity that obvious in the first place. Maybe someone can enlighten me on current detection schemes/heuristics. Thanks.
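    As a toy illustration of one common heuristic behind the "very high access rate" observation above (the window length and threshold multiplier are invented for the example), a detector can flag any counter that suddenly departs from its own historical baseline:

        # Toy illustration of one common detection heuristic: flag a resource
        # whose current request rate is far above its historical baseline.
        # Window size and the threshold multiplier are arbitrary here.
        from collections import deque

        class RateAnomalyDetector:
            def __init__(self, history=60, factor=10.0):
                self.samples = deque(maxlen=history)  # requests/minute history
                self.factor = factor

            def observe(self, requests_this_minute):
                baseline = (sum(self.samples) / len(self.samples)) if self.samples else None
                self.samples.append(requests_this_minute)
                if baseline is None or baseline == 0:
                    return False
                return requests_this_minute > self.factor * baseline

        detector = RateAnomalyDetector()
        for minute, rate in enumerate([12, 9, 11, 10, 14, 450]):
            if detector.observe(rate):
                print("minute {0}: suspicious spike ({1} req/min)".format(minute, rate))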

    Read the article

  • Doing a passable 4X game AI

    - by Extrakun
    I am coding a rather "simple" 4X game (if a 4X game can be simple). It's indie in scope, and I am wondering if there's any way to come up with a passable AI without spending months coding it. The game has three major decision-making portions: spending of production points, spending of movement points and spending of tech points (basically there are three different 'currencies'; currency unspent at the end of the turn is not saved).

    - Spend production points: upgrade a planet (increase its tech and production) or build ships (3 types).
    - Move ships from planet to planet (costing movement points): move to attack, or move to fortify.
    - Research tech (a tech can be partially researched, as in Master of Orion).

    The plan for me right now is a brute-force approach. There are basically four broad options for the player:

    - Upgrade planet(s) to increase their production and tech output.
    - Conquer as many planets as possible.
    - Secure as many planets as possible.
    - Get to a certain tech as soon as possible.

    For each decision, I will iterate through the possible options and come up with a score; the AI will then choose the decision with the highest score. Right now I have no idea how to 'mix decisions', that is, when the AI wishes to upgrade and conquer planets at the same time. I suppose I can have another layer of logic which does a brute-force optimization over a combination of those four decisions... at least, that's my plan if I can't think of anything better. Is there any faster way to make a passable AI? I don't need a very good one, to rival Deep Blue or such, just something that has the illusion of intelligence. This is my first time doing an AI on this scale, so I dare not try something too grand. So far I have experience with FSM, DFS, BFS and A*.
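    A minimal sketch of the scoring loop described above (all names, weights and numbers are placeholders): every candidate action reports how much it contributes to each of the four broad goals, the goals carry strategy weights, and the highest-scoring action wins. Because the score is a weighted sum across all goals, "mixed" behaviour falls out without a separate optimization pass.

        # Minimal utility-based AI sketch of the scoring loop described above.
        # All names and numbers are placeholders, not values from the question.

        GOAL_WEIGHTS = {"develop": 1.0, "conquer": 1.5, "secure": 0.8, "research": 0.7}

        # A candidate action is just a dict of per-goal contributions plus a label.
        CANDIDATES = [
            {"name": "upgrade Homeworld", "develop": 6.0, "research": 2.0},
            {"name": "attack planet X",   "conquer": 5.0, "secure": 1.0},
            {"name": "research lasers",   "research": 7.0},
        ]

        def score(action):
            # Weighted sum over all goals; one action can serve several goals
            # at once, which is what lets mixed strategies emerge naturally.
            return sum(weight * action.get(goal, 0.0)
                       for goal, weight in GOAL_WEIGHTS.items())

        def choose(candidates):
            return max(candidates, key=score)

        if __name__ == "__main__":
            best = choose(CANDIDATES)
            print(best["name"], round(score(best), 2))  # attack planet X 8.3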

    Read the article

  • RDP failing due to Audit Failure on the IPSec driver

    - by paulwhit
    I am trying to RDP into a Windows 7 Hyper-V image connected to a corporate network that publishes IPSec policies via Active Directory. I am seeing this error in the log: IPsec dropped an inbound clear text packet that should have been secured. If the remote computer is configured with a Request Outbound IPsec policy, this might be benign and expected. This can also be caused by the remote computer changing its IPsec policy without informing this computer. This could also be a spoofing attack attempt. Remote Network Address: XXX.XXX.XXX.XXX Inbound SA SPI: 0 How do I change my settings on the computer using RDP to something suitable for the domain-joined Hyper-V image?

    Read the article

  • lots of dns requests from China, should I worry?

    - by nn4l
    I have turned on DNS query logs, and when running "tail -f /var/log/syslog" I see that I get hundreds of identical requests from a single IP address:

        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#10856: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#44334: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#15268: query: mydomain.de IN ANY +
        Apr 7 12:36:13 server17 named[26294]: client 121.12.173.191#59597: query: mydomain.de IN ANY +

    The frequency is about 5-10 requests per second, going on for about a minute. After that the same effect repeats from a different IP address. I have now logged about 10000 requests from about 25 IP addresses within just a couple of hours, all of them coming from China according to "whois [ipaddr]". What is going on here? Is my name server under attack? Can I do something about this?
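    To get a quick picture of the scale, the query log can be tallied per client address with something like this sketch (the syslog path and the exact line format are assumptions based on the lines quoted above):

        # Quick tally of BIND query-log lines like those quoted above, grouped
        # by client IP. The log path and line format are assumptions.
        import re
        from collections import Counter

        CLIENT = re.compile(r"client (\d+\.\d+\.\d+\.\d+)#\d+: query: (\S+) IN (\S+)")

        counts = Counter()
        with open("/var/log/syslog") as log:
            for line in log:
                m = CLIENT.search(line)
                if m and m.group(3) == "ANY":
                    counts[m.group(1)] += 1

        for ip, n in counts.most_common(10):
            print("{0:8d}  {1}".format(n, ip))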

    Read the article

  • Fake links cause crawl error in Google Webmaster Tools

    - by Itai
    Google reported Crawl Errors last week on my largest site through Webmaster Tools. Here is the message: "Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages." The Crawl Errors list is now full of hundreds of fake links, causing 16,519 errors so far. Note that my site does not even have a search.html and is not related to any of the terms appearing in those fake URLs. Inspecting the sources for one of those links, I can see this is not simply an isolated source but a concerted effort: each of the links has a few to a dozen sources, all from different, seemingly unrelated sites. It is completely baffling why someone would spend effort doing this. What are they hoping to achieve? Is this an attack? Most importantly: does this have a negative effect on my site? Could it negatively impact my ranking? If so, what can be done about it? The few linking pages I looked at are full of thousands of links to tons of sites, have no contact information, and do not seem like the kind of people who would simply stop if asked nicely! According to Google Webmaster Tools, these errors have appeared in a span of 11 days; no crawl errors were being reported previously.

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server and I added the necessary configuration in vsftpd.conf so that local users can use clients like FileZilla to manage their homes on the server. I found that only users in the sudoers list can access without a problem, though even they can't download files; users that are not sudoers cannot access their homes from a client at all, but they can connect through a web browser using the FTP protocol, and then they can only reach their home directories (as intended). I'm running Fedora 14 on my server and my vsftpd.conf looks like this:

        # Example config file /etc/vsftpd/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        ascii_upload_enable=YES
        ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Welcome to GAMBITA FTP service
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        chroot_local_user=YES
        chroot_list_enable=YES
        # (default follows)
        chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        #
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES

        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        use_localtime=YES

    Does anyone have an idea of what might be happening? Nothing concerning vsftpd is written in any log.

    Read the article

  • The Oracle Platform

    - by Naresh Persaud
    Today’s enterprises typically create identity management infrastructures using ad-hoc, multiple point solutions. Relying on point solutions introduces complexity and a high cost of ownership, leading many organizations to rethink this approach. In a recent worldwide study of 160 companies conducted by Aberdeen Research, there was a discernible shift in this trend, as businesses are now looking to move away from the point-solution approach with multiple vendors and adopt an integrated platform approach. By deploying a comprehensive identity and access management strategy using a single platform, companies are saving as much as 48% in IT costs, while reducing audit deficiencies by nearly 35%. According to Aberdeen's research, choosing an integrated suite or "platform" of solutions for identity management from a single vendor can have many advantages over choosing "point solutions" from multiple vendors. The Oracle Identity Management Platform is uniquely designed to offer several compelling benefits to our customers. Shared services: instead of separate solutions for administration, authentication, authorization, audit and so on, Oracle Identity Management offers a set of shared services that can be consumed by each component in the stack and by developers of new applications. Actionable intelligence: the most compelling benefit of the Oracle platform is "actionable intelligence", which means that if there is a compliance violation, the same platform can fix it, and if a user is logging in from an untrusted device, or we detect an attack, the platform can act proactively on that information. Suite interoperability: with the Oracle platform the components all connect and integrate with each other, so if an organization purchases the platform for provisioning and later wants to manage access, the same platform can offer access management, which leads to cost savings. Extensible and configurable: with point solutions you typically get limited ability to extend the tool to address custom requirements, but with the Oracle platform all of the components share a common way to extend the UI and behavior. Find out more about the Oracle Platform approach in this presentation.

    Read the article

  • PDF Encrypted, Hidden Watermark

    - by Dave Jarvis
    Background: I am using LaTeX to write a book. When a user purchases the book, the PDF will be generated automatically.

    Problem: the PDF should have a watermark that includes the person's name and contact information.

    Question: what software meets the following criteria?

    - Applies encrypted, undetectable watermarks to a PDF
    - Open source
    - Platform independent (Linux, Windows)
    - Fast (marks a 200-page PDF in under 1 second)
    - Batch processing (exclusively command-line driven)
    - Collusion-attack resistant
    - Non-fragile (e.g., a PDF -> EPS -> PDF round trip still contains the watermark)
    - Well documented (shows example usages)

    Ideas & resources: some thoughts and findings so far are natural language processing (NLP) watermarks and applying steganography to a randomly selected image (http://openstego.sourceforge.net/cmdline.html). The problem with NLP is that grammatical errors can be introduced. The problem with steganography is that the images are sourced from an image cache, so recreating that cache with watermarked images would impart a delay when generating the PDF (I could just delete one image from the cache, but that's not an elegant solution). Thank you!
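    Not an answer to the hard part of the question, but as a sketch of where a per-purchase marking step would sit in the generation pipeline (using PyPDF2, with invented metadata key names), the freshly built PDF can at least be copied and tagged per buyer. This is a plain, trivially strippable metadata stamp, not the encrypted, collusion-resistant watermark being asked for:

        # Sketch of only the mechanical per-buyer stamping step. This is NOT
        # the hidden, collusion-resistant watermark the question asks for
        # (metadata is trivial to strip); it just shows where such a step
        # would sit. Metadata key names are arbitrary.
        from PyPDF2 import PdfReader, PdfWriter

        def stamp(src_pdf, dst_pdf, buyer_name, buyer_contact):
            reader = PdfReader(src_pdf)
            writer = PdfWriter()
            for page in reader.pages:
                writer.add_page(page)
            writer.add_metadata({
                "/PersonalizedFor": buyer_name,
                "/PurchaserContact": buyer_contact,
            })
            with open(dst_pdf, "wb") as out:
                writer.write(out)

        stamp("book.pdf", "book-for-buyer.pdf", "Jane Doe", "jane@example.com")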

    Read the article

  • The Bing Sting - an alternative opinion

    - by Charles Young
    I know I'm a bit of an MS fanboy at times, but please, am I missing something here? Microsoft, with permission of users, exploits clickstream data gathered by observing user behaviour. One use for this data is to improve Bing queries. Google equips twenty of its engineers with laptops and installs the widgets required to provide Microsoft with clickstream data. It then gets its engineers to repeatedly (I assume) type in 'synthetic' queries which bring back 'doctored' hits. It asks its engineers to then click these results (think about this!). So, the behaviour of the engineers is observed and the resulting clickstream data goes off to Microsoft. It is processed and 'improves' Bing results accordingly. What exactly did Microsoft do wrong here? Google's so-called 'Bing sting' is clearly a very effective attack from a propaganda perspective, but is poor practice from a company that claims to do no evil. Generating and sending clickstream data deliberately so that you can subsequently claim that your competitor 'copied' that data from you is neither fair nor reasonable, and suggests to me a degree of desperation in the face of real competition. Monopolies are undesirable, whether they are Microsoft monopolies or Google monopolies. Personally, I'm glad Microsoft has technology in place to observe user behaviour (with permission, of course) and improve their search results using such data. I can only assume Google doesn't implement similar capabilities. Sounds to me as if, at least in this respect, Microsoft may offer the better technology.

    Read the article

  • Does Firefox Phishing Protection / Safebrowsing have any privacy implications?

    - by Nowo
    The current Google results on this are outdated; what is the current status? I was on www.mozilla.org/en-US/legal/privacy/firefox.html and saw the "Protection Against Suspected Forgery and Attack Sites Features" section. Can you translate it into plainer language? First the browser downloads a list and compares it locally... OK. Then it gets messy: "If there is a match, Firefox will check with its third party provider to ensure that the website is still on the blacklist. The information sent between Firefox and its third party provider(s) are hashed URLs. In fact, multiple hashed URLs are sent with the real hash so that the third party provider(s) will not know what site you are visiting." If that hash were sent to Mozilla, would they know which site was accessed? "In order to safeguard your privacy, Firefox will not transmit the complete URL of web pages that you visit to anyone other than Mozilla and its service providers." In other words, do Mozilla and its service providers get the complete URLs of sites that are on the blacklist? "While it is possible that a third party service provider may determine the actual URL from the hashed URL sent, Mozilla’s policy is to require [...]" So does privacy depend on policy rather than technology?
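    The hashed-lookup step the policy describes can be pictured roughly like this (a simplification of the real Safe Browsing protocol; the prefix length and decoy strategy are illustrative): the browser hashes the URL, checks a short prefix against its local list copy, and on a match sends only hash prefixes, mixed with unrelated ones, rather than the URL itself.

        # Rough illustration of the hashed lookup described above, a
        # simplification of the real Safe Browsing protocol: the client hashes
        # the URL, compares a short prefix with its local blacklist copy, and
        # only ever transmits hash prefixes (padded with decoys), never the URL.
        import hashlib
        import os

        PREFIX_LEN = 4  # bytes; the real protocol also uses short prefixes

        def url_hash(url):
            return hashlib.sha256(url.encode("utf-8")).digest()

        def local_match(url, local_prefix_list):
            # local_prefix_list is the locally stored set of blacklist prefixes.
            return url_hash(url)[:PREFIX_LEN] in local_prefix_list

        def build_query(url, decoy_count=3):
            # What would go over the wire on a local match: the real prefix
            # mixed with random decoys, so the provider cannot tell which
            # prefix actually triggered the lookup.
            prefixes = [url_hash(url)[:PREFIX_LEN]]
            prefixes += [os.urandom(PREFIX_LEN) for _ in range(decoy_count)]
            return prefixes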

    Read the article

  • How do I make a Windows virtual machine replicate to another datacenter/cloud?

    - by zippy
    We have a Windows 2008 VM running IIS and SQL Server Express (it's an all-in-one web application). We need to have another copy at our secondary datacenter site. What is the best way to do this? It doesn't have to be running all the time, but it has to have an almost up-to-date copy of the current VM. I took a look at VMware Fault Tolerance and, after the heart attack at the price, I started looking for another solution. If need be, I wouldn't mind copying it over to a cloud VM provider, if I can find one that lets me upload my own VMs and start them up without any conversion process.

    Read the article

  • Clean MVC design when there is viewer latency

    - by Tony Suffolk 66
    It isn't clear whether this question has already been answered, so apologies in advance if this is a duplicate. I am implementing a game and trying to design around a clean MVC pattern, so my control plane will implement the rules of the game (but not how the game is displayed), and the view plane implements how the game is displayed and user interaction, i.e. which game items or controls the user has activated. The challenge I have is this: in my game the control plane can move game items more or less instantaneously (the decision about what item to place where, and some of the initial consequences of that placement, are reasonably trivial to calculate), but I want to design the control plane so that the view plane can display these movements either instantaneously or using movement animations. The other complication is that player interaction must be locked out while those game items are moving (similar to chess: you can't attack an opposing piece as it moves past one of your pieces). So do I:

    1. Implement all the logic in the control plane asynchronously, and separate decision making from actions: the control plane decides piece 'A' needs to move to a given place, tells the view plane, but does not commit the move in its data until the view plane informs the control plane that the move/animation is complete. This means a lot of interlock points between the two layers.

    2. Implement all the control plane logic in one place, decisions and movement (keeping track of what moved where), and pass all the movements in one go to the view plane to do with what it will. The control plane is almost fire-and-forget here.

    3. A hybrid of 1 and 2: the control plane implements all the moves in a temporary data store, but maintains a second store which reflects what is actually visible to the viewer, based on calls and feedback from the view plane.

    All three are relatively easy to implement (target language is Python), but having never done a clean MVC pattern with view latency before, I am not sure which design is best.
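    A small sketch of option 1 (class and method names are invented): the control plane records a pending move, notifies the view, keeps the piece locked, and only commits the move to its data when the view reports that the animation has finished; a view with no animation simply completes the callback immediately.

        # Sketch of option 1 above: the control plane defers committing a move
        # until the view reports that its animation is done, and the piece
        # stays locked in the meantime. Names are invented for illustration.

        class ControlPlane:
            def __init__(self, view):
                self.view = view
                self.positions = {}   # item -> committed position
                self.pending = {}     # move_id -> (item, destination)
                self.next_move_id = 0

            def request_move(self, item, destination):
                move_id = self.next_move_id
                self.next_move_id += 1
                self.pending[move_id] = (item, destination)
                # The view may animate over many frames or finish instantly;
                # either way it calls back into on_move_finished(move_id).
                self.view.show_move(move_id, item, destination, done=self.on_move_finished)
                return move_id

            def is_locked(self, item):
                return any(item == moving for moving, _ in self.pending.values())

            def on_move_finished(self, move_id):
                item, destination = self.pending.pop(move_id)
                self.positions[item] = destination   # data changes only now

        class InstantView:
            # A view with no animation: it completes every move immediately.
            def show_move(self, move_id, item, destination, done):
                done(move_id)

        if __name__ == "__main__":
            control = ControlPlane(InstantView())
            control.request_move("ship_a", (3, 4))
            print(control.positions)   # {'ship_a': (3, 4)} once the view reports back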

    Read the article

  • bind: blackhole for invalid recursive queries?

    - by Udo G
    I have a name server that's publicly accessible since it is the authoritative name server for a couple of domains. Currently the server is being flooded with faked ANY-type requests for isc.org, ripe.net and similar names (a known distributed DoS attack). The server runs BIND and has allow-recursion set to my LAN, so these requests are rejected; in such cases the server responds with just the authority and additional sections referring to the root servers. Can I configure BIND so that it completely ignores these requests, without sending a response at all?

    Read the article

  • How can I get a virus by just visiting a website?

    - by Janet Jacobs
    It is common knowledge that you can get a virus just by visiting a website. But how is this possible? Do these viruses attack Windows, Mac and Linux users, or are Mac/Linux users immune? I understand that I obviously can get a virus by downloading and executing a .exe in Windows but how can I get a virus just by accessing a website? Are the viruses programmed in JavaScript? (It would make sense since it is a programming language that runs locally.) If so, what JavaScript functions are the ones commonly used?

    Read the article

  • Concerns about a Dedicated (Windows Server 2008) + DDoS

    - by TheKillerDev
    I have a dedicated server today with these specs: Intel Core i5 750, 2x120GB (SSD + RAID), Windows Server 2008 Web, 200Mbps network, 24GB DDR3. I would like to know the best things I can do to prevent a DDoS attack, since I know this will be a real threat given the importance of the files that will be archived on it. Today I have Apache listening on port 80 and RDC listening on port 3389, but security is being handled only by Windows Firewall. So, any thoughts on what would be good for preventing DDoS attacks?

    Read the article

  • Multiplayer online game engine/pipeline

    - by Slav
    I am implementing an online multiplayer game where the client must be written in AS3 (Flash), to embed the game in a browser, and the server in C++ (the abstract part of which is already written and used with other games). Networking models may differ, but currently I'm leaning toward the game's logic running on both the client and the server even though they're written in different languages; that isn't the main problem. My previous game (a pretty big one, implemented by ~5 programmers over 1.5 years) was mainly "written" in spreadsheets as structured objects with inheritance: a standalone tool was written which generated AS3 and C++ (the languages of the platforms the game was published to) from a specified spreadsheet file (.xls or .ods). That file contained ~50 tables with ~50 rows and ~50 columns each and was mainly written by game designers who do not know any programming languages. But that game was single-player. Having described the problem with the MMO I am currently implementing, I'm looking for some broad pipeline that resolves problems such as:

    - game object descriptions (which starships exist in the game, how much HP they have, how fast they move, what damage they deal...)
    - action descriptions (what players or NPCs can do: attack each other, collect resources, build structures, move, teleport, cast spells); actions are transmitted through the server between clients
    - influences (what happens when a specified action is applied to a specified object, e.g. "Ship A attacked Ship B: field 'HP' of Ship B is reduced by the amount of field 'damage' of Ship A"); influences can be much more complex, e.g. "damage is doubled when the Ship has >=5 allies within a 200-unit range during night", and so on.

    If such logic can be written in some "design document", it becomes easy to:

    - let designers do their job without programmer intervention or any bug-prone programming
    - validate the described logic
    - transfer (transform, convert) it to any programming language in which it will be executed.

    Has anybody worked on something like that? Are there tools/engines/pipelines concerned with this? What is the best way to handle all of these problems at once, or am I picturing my tasks and problems correctly?
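    One way to picture the "design document" part (a sketch only; the field names and values are invented): objects, actions and influences live as plain rows of data that a designer could edit in a spreadsheet, and a small interpreter applies an influence, so the same tables could later be exported to both the AS3 and C++ sides.

        # Sketch of the data-driven idea described above: ships, actions and
        # influences live in plain tables (rows a designer could edit in a
        # spreadsheet), and a small interpreter applies an influence. Field
        # names and values are invented for illustration.

        SHIPS = {
            "fighter": {"hp": 100, "speed": 8, "damage": 15},
            "cruiser": {"hp": 400, "speed": 4, "damage": 40},
        }

        # An influence row: when <action> happens, change <field> of the target
        # by an amount computed from the attacker's stats.
        INFLUENCES = [
            {"action": "attack", "field": "hp",
             "change": lambda attacker: -attacker["damage"]},
        ]

        def apply_action(action, attacker, target):
            for rule in INFLUENCES:
                if rule["action"] == action:
                    target[rule["field"]] += rule["change"](attacker)

        attacker = dict(SHIPS["fighter"])
        target = dict(SHIPS["cruiser"])
        apply_action("attack", attacker, target)
        print(target["hp"])  # 385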

    Read the article

  • How can I debug Cisco Firewall ASA "Dispatch Unit" very high CPU utilisation from ASDM?

    - by Andy
    I have recently had my first firewall installed, so I am very new to this whole situation. I am finding that the Dispatch Unit process is becoming overloaded, and it would appear to be the reason I get serious bouts of lag on my server. The firewall has had little configuration apart from me blocking all the ports in "Access Rules" and allowing only the ones the server needs, and only from where it needs them. I guess what I am after is assistance with locating the issues causing "Dispatch Unit" to take up all the CPU. Regards. --Edit-- With ASDM statistics I found that inbound packets (peak of 70-100k/sec from <1k/sec normal), inbound traffic (peak of 40-50 kbit/sec from <1 kbit/sec normal) and CPU all peak at the same time, so I am pretty sure it is an attack of some sort, but as a beginner with the ASA I am not sure how to resolve it.

    Read the article

  • What would be the best way to correlate logs and events on several hosts?

    - by user220746
    I'm trying to build a log correlation system across multiple hosts. SEC seems interesting, but I don't know if it will cover my needs. How could I correlate system events, logs, network events, etc. on multiple hosts at the same time, in real time? Examples: if 5 failed logins happened on host A in the last minute and firewall B has denied lots of access attempts on different ports of A, then we assume there is a potential attack in progress on A. If the Apache service on host A didn't receive any requests for the last N minutes and the Apache service on host B did, then the load balancing could be faulty.
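    The first example rule could be expressed roughly like this (a toy sketch; the event shapes, thresholds and the one-minute window are assumptions):

        # Toy sketch of the first example rule above: alert if, within a
        # one-minute window, host A has seen at least 5 failed logins AND
        # firewall B has denied connections to A on several distinct ports.
        # The event dictionaries and thresholds are assumptions.
        from datetime import timedelta

        WINDOW = timedelta(minutes=1)

        def correlate(events, now):
            recent = [e for e in events if now - e["time"] <= WINDOW]
            failed = [e for e in recent
                      if e["type"] == "failed_login" and e["host"] == "A"]
            denied_ports = {e["port"] for e in recent
                            if e["type"] == "fw_deny"
                            and e["firewall"] == "B" and e["dst_host"] == "A"}
            if len(failed) >= 5 and len(denied_ports) >= 3:
                return "potential attack in progress on host A"
            return None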

    Read the article

  • Misbehaving Network Printers - options?

    - by Dan Kelly
    We are having some issues with printers on our network. We have 3 floors, with 2 printers per floor (A3 & A4), all connected to the same print server. The issue is that the same printer may not behave the same on two different, seemingly identical desktops. The most common place this shows up is in our bulk print script in AutoCAD: occasionally drawings may print landscape on portrait paper, despite the drawings always being landscape... Does anyone have any suggestions on what we can check or try? The current line of attack is to set up a new print server with the HP Universal Print Driver rather than the device-specific drivers, and to re-add the printers using exactly the same method on all desktops. Does that sound good?

    Read the article

  • DDoS attacks to PBX

    - by user316687
    I'm wondering whether DDoS attacks on PBX or telecommunications systems are really possible. According to these links: http://threatpost.com/en_us/blogs/firm-sees-more-ddos-attacks-aimed-telecom-systems-073112 and http://news.softpedia.com/news/DDOS-Attacks-Against-Telecom-Systems-Cost-as-Little-as-20-16-Per-Day-284875.shtml, they are. There are DDoS attacks on web servers, which mostly hit them with so much concurrent load, or so many connections, that the service becomes unavailable. Many government or non-profit organizations that suffered this kind of attack could ultimately choose to shut down their web server and simply wait for the attack to end. For a DDoS attack on a PBX, I imagine it would result in telephones being busy or ringing all the time, unstoppably. This kind of attack could really damage any kind of organization. Is it possible to do that, or are we only at the beginning of such attacks?

    Read the article
