Search Results

Search found 374 results on 15 pages for 'hacked'.

Page 11/15 | < Previous Page | 7 8 9 10 11 12 13 14 15  | Next Page >

  • Amazon S3 as secure backup without multiple invoices

    - by Tom Viner
    I'm storing copies of database backups on Amazon S3 using the Python Boto library. But I worry that if my web server were hacked, those backups could be deleted using the same credentials I need to do the upload. I know you can grant permissions to another Amazon email address, so I can imagine doing that after an upload and then removing the original user's write access, but in this scenario I end up with two accounts and two sets of invoices to hand to accounting every month. Is there a solution to this that doesn't require a new Amazon account for each web server I run?
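
    One hedged sketch of a single-account route, assuming the account can carry a second, backup-only set of credentials whose policy allows uploads but not deletes (all key, bucket, and path names below are hypothetical):

        # Sketch: upload with write-only credentials so a compromised web
        # server cannot delete existing backups. The user behind these keys
        # is granted s3:PutObject but not s3:DeleteObject.
        from boto.s3.connection import S3Connection
        from boto.s3.key import Key

        conn = S3Connection('WRITE_ONLY_ACCESS_KEY', 'WRITE_ONLY_SECRET_KEY')
        bucket = conn.get_bucket('example-db-backups')
        key = Key(bucket, 'backups/site1/2010-05-01.sql.gz')
        key.set_contents_from_filename('/var/backups/db.sql.gz')

    Both users then bill under the one account, so there is a single invoice.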

    Read the article

  • Can you reuse a MySQL result set in PHP?

    - by MarathonStudios
    I have a result set I pull from a large database: $result = mysql_query($sql); I loop through this recordset once to pull specific bits of data and get averages using while($row = mysql_fetch_array($result)). Later in the page, I want to loop through this same recordset again and output everything - but because I used the recordset earlier, my second loop returns nothing. I finally hacked around this by looping through a second identical recordset ($result2 = mysql_query($sql);), but I hate to make the same SQL call twice. Any way I can loop through the same dataset multiple times?
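
    A minimal sketch of one way around the second query, assuming the classic mysql extension shown above: rewind the result resource's internal row pointer with mysql_data_seek() instead of re-running the SQL.

        <?php
        // Sketch: reuse the same result set by rewinding its row pointer.
        $result = mysql_query($sql);

        while ($row = mysql_fetch_array($result)) {
            // first pass: pull specific bits and compute averages
        }

        mysql_data_seek($result, 0); // move back to the first row

        while ($row = mysql_fetch_array($result)) {
            // second pass: output everything
        }
        ?>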

    Read the article

  • Implementing security with session variables: how is it insecure?

    - by haansi
    I am doing web-based projects in .NET. Currently I implement security using session variables: I keep the current user id and user type in the session and authenticate the user from those variables (say Session["UserId"], Session["UserName"] and Session["UserType"]). Please help me understand how this could be insecure. I've heard that this kind of security can be broken and applications hacked very easily, for example by obtaining a session id and connecting directly with that session id (session hijacking). Please guide me on this.
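
    A hedged first step for ASP.NET is tightening how the session cookie itself is issued, since stealing that cookie is what makes hijacking cheap. A minimal web.config sketch (standard settings; values are illustrative):

        <!-- Sketch: make the session cookie harder to steal or fixate -->
        <system.web>
          <!-- HttpOnly keeps script from reading the cookie; requireSSL assumes HTTPS -->
          <httpCookies httpOnlyCookies="true" requireSSL="true" />
          <!-- cookies only (no session id in the URL), modest timeout -->
          <sessionState cookieless="UseCookies" regenerateExpiredSessionId="true" timeout="20" />
        </system.web>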

    Read the article

  • Where to store information like gender and year of birth?

    - by fayer
    I have users and I need them to specify a gender (male, female) and a year of birth (1930, 1931 ... 1999, 2000). I wonder where I should store these allowed values: in the database, or in a PHP file? If I store them in the database I have to create all the entries manually first, but a good thing is that the user table will have constraints, so the gender field will always be male or female; it cannot be anything else. If I store them in the PHP file (e.g. as HTML) then I can easily add or remove values, but a con is that I don't have the constraints in the database, so another value could be stored as gender by mistake. I could add validation in the PHP backend so that even if someone hacked the HTML, nothing is stored unless it's either male or female. What is best practice here? Thanks
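
    If the values go in the database, a sketch of the constraint side (MySQL syntax; table and column names are hypothetical):

        -- Sketch: the schema itself enforces the allowed genders. Range-check
        -- the year in the PHP backend; MySQL of this era parses but ignores
        -- CHECK constraints, and YEAR only bounds values to 1901-2155.
        CREATE TABLE users (
            id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            gender ENUM('male', 'female') NOT NULL,
            birth_year YEAR NOT NULL
        );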

    Read the article

  • Creating a map in HTML, CSS, SVG?

    - by yeeeev
    Hi, I would like to create a web-based regional map that lets the user click to choose a region, and that shows some visual effect (resizing, etc.) when hovering over one of the regions. I want the map to work on desktops and mobile devices. I'm having doubts about the best technology to use here; I'm mainly considering traditional image maps vs. SVG. Image maps are more widely supported, but any animation that affects only a single area in the map must be hacked in. SVG is a more natural fit, but is not supported by Android (old IEs can work using svgweb). Any advice? Any other option I'm overlooking?
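
    For reference, a minimal sketch of the SVG route (the region shape and the selectRegion handler are hypothetical): each region is its own element, so hover styling comes free from CSS and clicks land on the right region.

        <svg viewBox="0 0 300 200" xmlns="http://www.w3.org/2000/svg">
          <style>
            .region { fill: #cfe0ee; cursor: pointer; }
            .region:hover { fill: #8fb3cc; } /* visual effect on hover */
          </style>
          <!-- one path per clickable region -->
          <path class="region" d="M10,10 L150,10 L150,90 L10,90 Z"
                onclick="selectRegion('north')" />
        </svg>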

    Read the article

  • Is "programmatically" a word? [closed]

    - by Lo'oris
    I can't find it in any of the online dictionaries I know: dict.org, WordReference, Urban Dictionary, Oxford Paravia, Garzanti. To my non-native ears it sounds horrible. Actually, it sounds like a word made up by another non-native speaker who wanted to say something, didn't know how, and just hacked in a word from his own language. The only place I've read it, other than user-created content, is the Android documentation, so this might or might not be related. Do you happen to know where it started to be used, why it spread so much, and what it really means?

    Read the article

  • Using sed to delete a string

    - by wired
    I was hacked and have hundreds of .js files with this line of code that I'm trying to get rid of: ;document.write('<iframe src="http://sitecorporatemanagement.ru/pretzellogmeins.cgi?8" scrolling="auto" frameborder="no" align="center" height="3" width="3"></iframe>'); It is the last line of each file, but I think the files contain Windows line endings, because whenever I do this: sed -i '/sitecorporatemanagement.ru/d' * it deletes the full content of the file. Can you help me get this to work? I just need that full string deleted. Thank you for all the help you can give.
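
    A hedged observation: if sed sees a whole .js file as a single line (minified code, or no interior newlines), the d command on a matching "line" empties the entire file. Substituting away just the injected string avoids that. A sketch, assuming the iframe string is exactly as quoted above:

        # Sketch: remove only the injected string, not the whole "line"
        find . -name '*.js' -exec sed -i \
          "s|;document\.write('<iframe src=\"http://sitecorporatemanagement\.ru[^']*');||g" {} +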

    Read the article

  • What do you do when a client requires Rich Text Editing on their website?

    - by George Stocker
    As we all know by now, XSS attacks are dangerous and really easy to pull off. Various frameworks make it easy to encode HTML, like ASP.NET MVC does: <%= Html.Encode("string"); %> But what happens when your client requires that they be able to upload their content directly from a Microsoft Word document? Here's the scenario: people copy and paste content from Microsoft Word into a WYSIWYG editor (in this case TinyMCE), and that information is then posted to a web page. The website is public, but only members of the organization have access to post information to a page. What is the best way to handle this requirement? Currently no checking is done on what the client posts (since only 'trusted' users can post), but I'm not particularly happy with that and would like to lock it down further in case an account is hacked. The platform in question is ASP.NET MVC. The only conceptual method I'm aware of that meets these requirements is to whitelist HTML tags and let those pass through. Is there another way? If not, is the best way to store it in the database in any form, but only display it properly encoded and stripped of bad tags? NB: This differs from the related question below in that it assumes there is only one way. I'm also asking the following: 1. Is there a better way that doesn't rely on HTML whitelists? 2. Is there a better way that relies on a different view engine? 3. Is there a WYSIWYG editor that can whitelist on the fly? 4. Should I even worry about this, since it will only be 'private posting' (much as a private blog allows HTML from the author, but since only he can post, it's not an issue)? Edit #2: If suggesting a WYSIWYG editor, it must be free (as in speech, or as in beer). Update: All of the suggestions thus far revolve around a specific rich text editor to use. Only suggest an editor if it allows sanitization of HTML tags and accepts content pasted from Microsoft Word. There are three methods that I know of: 1. Don't allow HTML. 2. Allow HTML, but sanitize it. 3. Find a rich text editor that both sanitizes and allows HTML. The previous questions (1-4 above) remain. Related question: Preventing Cross Site Scripting (XSS)
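
    On the whitelist option, a rough sketch of server-side tag whitelisting (this assumes the HtmlAgilityPack library is acceptable; the allowed-tag set is illustrative, not a vetted list):

        // Sketch: strip every element and attribute not on the whitelist.
        using System.Collections.Generic;
        using System.Linq;
        using HtmlAgilityPack;

        public static class HtmlWhitelist
        {
            static readonly HashSet<string> Allowed = new HashSet<string>
                { "p", "b", "i", "em", "strong", "ul", "ol", "li", "br" };

            public static string Sanitize(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                foreach (var node in doc.DocumentNode.Descendants().ToList())
                {
                    if (node.NodeType == HtmlNodeType.Element && !Allowed.Contains(node.Name))
                        node.ParentNode.RemoveChild(node, true); // drop tag, keep its inner text
                    else
                        node.Attributes.ToList().ForEach(a => a.Remove()); // no style/on* attributes
                }
                return doc.DocumentNode.InnerHtml;
            }
        }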

    Read the article

  • Dawn of the Enterprise Social Developer

    - by Mike Stiles
    Social is no longer just for poking friends, posting videos of cats playing pianos, or even just for brand marketing. It has become a key form of communication, internally and externally, across every area of the enterprise. As a Java developer, are you positioning yourself for the integration of social into enterprise business systems that's on the near horizon? Because it's the work you do and the applications you build that will influence what the social-enabled enterprise is going to look like and how it's going to operate. But as a social developer, step one is wrapping your arms around all the things that are possible. Traditionally, the best exploration, brainstorming and innovation come from collaborating with other developers. That's how the big questions can be hashed (or hacked) out. Is Java the best social development environment? If not, what is? What's already being done in terms of application integration? The JavaOne Social Developer Program will offer a series of talks and events on those very issues Tuesday, October 2 at the San Francisco Hilton. If you're interested in embarking on this newest frontier of enterprise social development, you can connect with others who are thinking the same thing and get moving on your first project. Talks will include: Emergence of the Social Enterprise; Extending Social into Enterprise Applications and Business Processes; Intro to Open Graph and Facebook's APIs; Building the Next Wave of Social Commerce Platforms; Social Data and the Enterprise; LinkedIn: A Professional Network Built with Java Technologies and Agile Practice; and a Social Developer Hackathon. In addition to these learning and discussion opportunities, you might consider joining the new Oracle Social Developer Community (OSDC), where the interaction and collaboration can continue indefinitely. It doesn't take a lot of tea-leaf reading to know that the cloud will house the enterprise technology of the future, and social (as well as the rich data it brings) is going to be a major part of it, integrating across every business function now that consumer-facing initiatives have proven its value. The next phase of social development will involve combining enterprise data from multiple sources, new and existing, social and traditional, to tell compelling and usable stories. Social is coming to the enterprise quickly, so as a development leader you should seek to understand not just what's worked on the consumer side, but which aspects of those successes can be applied inside the organization. Get educated, get connected, and consider registering for this forward-looking event now to get started with enterprise social development.

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
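
    For reference, a sketch of the quoted lookup rewritten as a parameterized query (PDO; connection details hypothetical). This closes the injection vector itself, independent of whatever mod_security catches at the perimeter:

        <?php
        // Sketch: the user input travels as a bound parameter, never as SQL text.
        $pdo = new PDO('mysql:host=localhost;dbname=app', $dbUser, $dbPass);
        $stmt = $pdo->prepare('SELECT * FROM profile WHERE profile_id = :id');
        $stmt->execute(array(':id' => $_REQUEST['profile_id']));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);
        ?>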

    Read the article

  • Should I install Ubuntu on USB instead of dual-booting from the HDD?

    - by user2147243
    I had Ubuntu 12.04 installed as a dual-boot OS on top of Vista on my laptop. I hacked the grub settings to default to Vista on startup (instead of the default Ubuntu -- a pain), and all was OK for occasional Ubuntu use for the past 6 months. Then last week I got a strange message about 'lack of disk space' (~50MB free) when installing pxyplot, even though there was still about 6GB of free disk space when I checked later. Then today Ubuntu wouldn't load at all, and checking the HDD partitions in Vista it looked like the 15GB Ubuntu partition was now three smaller partitions! So I got rid of those partitions and expanded the Vista partition to use the reclaimed space. Now the laptop can't restart ('grub rescue' appears and doesn't 'rescue' anything), so I'll have to do a boot recovery using a Vista installation CD. (Not a particularly user-friendly failure mode of the dual-boot installation!) I now have to decide to either a) try installing Ubuntu on the HDD again, though I don't want to stuff up my Vista ever again, as that is my most-used OS, or b) install Ubuntu on a 16GB USB 3.0 stick. Apparently performance from USB won't be as good as from the HDD, and running an OS from a USB stick does lots of r/w, so the stick may fail after a few years! Perhaps installing Ubuntu on a live USB set up to then run in RAM would alleviate the performance and lifespan problems? If I create a live USB for Ubuntu, will the laptop boot off it when I restart with it plugged in? Or will I have to change the laptop's boot-order setting whenever I want to boot Ubuntu instead of Vista (that would be even more painful than the grub default boot order putting Ubuntu ahead of the existing Vista OS)! Update: I recovered my Vista setup using the Iolo System Mechanic Disaster Recovery Tool, and created a bootable USB of Ubuntu 13.10 on an 8GB USB 3.0 pendrive, with 4GB of 'persistence' to allow saving settings, installing some packages, etc. It worked OK for a couple of test boots, but once I changed the time and desktop wallpaper, the next Ubuntu reboot crashed and I couldn't get it to boot successfully again. So I decided to install Ubuntu 12.04 LTS as a dual-boot again, but this time, instead of partitioning the HDD and installing from an ISO DVD, I used the wubi.exe tool to install Ubuntu as a dual-boot. It worked very well, although one oddity was that, despite asking how big to make the partition (20GB), the installed Ubuntu appears to be happily installed somewhere within the Vista NTFS file system (no partition shows up in the Windows disk manager, and in the Ubuntu disk management tool the entire 133GB of HDD is showing, with ~40GB free space). A nice feature of installing the dual-boot using wubi is that the laptop now uses the Windows boot manager on startup, with Vista as the default OS and Ubuntu happily listed second. So far so good.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use, to speed up package-processing tasks like dist-upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I have already found. Question: How do I find out which packages I have never used at all? For example, I always use VLC, so I could remove the totem package (which I might have used some day, yes). Of course package dependencies could force me to have programs installed which I will never use. Notes: Find the packages which consume much space via synaptic: select "Status" in the lower left, select "Installed" in the upper left, and sort the "size" column in the upper right; then you can decide which big packages you really need. Use aptitude autoremove. Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc. Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.) When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies; e.g. when removing "evolution" (as I use Thunderbird), aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common. The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :) Employ deborphan, as I read in this related answer: How do I clean up my harddrive? I should certainly keep essential packages: Keep only essential packages. This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. One answer there suggests a program called unusedpkg, but the link seems down. There is also a program called Kleen (http://code.google.com/p/kleen/), but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example, the g++ package was marked as not used for 203 days, but I had actually used it seconds before, compiling Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the popularity-contest package will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.
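
    One more heuristic worth noting, with the caveat that it is only a sketch and assumes file access times are available (relatime, the Ubuntu default, makes them approximate): map binaries that have not been read in months back to the packages that own them.

        # Sketch: packages owning /usr/bin binaries not accessed in ~180 days
        for f in /usr/bin/*; do
            [ -n "$(find "$f" -atime +180 2>/dev/null)" ] && dpkg -S "$f" 2>/dev/null
        done | cut -d: -f1 | sort -u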

    Read the article

  • Dynamically add Server 2008 NLB Nodes

    - by Nick Jacques
    Hi All, I have a small NLB cluster for Terminal Servers. One of the things we're looking at doing for this particular project (this is for a college class) is dynamically creating Terminal Servers. What we've done is create policies for a certain OU that set the proper TS Farm properties and install the Terminal Server role and NLB feature. Now what we'd like to do is create a script, run on our Domain Controller, that adds hosts to the preexisting NLB cluster. On our Server 2008 R2 Domain Controller, I was thinking of running the following PowerShell script, which I've kind of hacked together. Any thoughts on whether this will work? Is there any way I can trigger this script to run on the DC once all the scripts to install roles are done on the various Terminal Servers? Thanks very much in advance!!

        Import-Module NetworkLoadBalancingClusters

        $TermServs = @()
        $Interface = "Local Area Connection"
        $ou = [ADSI]"LDAP://OU=Term Servs,DC=example,DC=com"

        foreach ($child in $ou.psbase.Children)
        {
            if ($child.ObjectCategory -like '*computer*') { $TermServs += $child.Name }
        }

        foreach ($TS in $TermServs)
        {
            Get-NlbCluster 172.16.0.254 | Add-NlbClusterNode -NewNodeName $TS -NewNodeInterface $Interface
        }

    Read the article

  • Why can an empty MAIL FROM address send out email?

    - by garconcn
    We are using the SmarterMail system. Recently we found that a hacker had compromised some user accounts and sent out lots of spam. We have a firewall to rate-limit senders, but for the following email the firewall couldn't do this because of the empty FROM address. Why is an empty FROM address considered OK? Actually, in our MTA (SurgeMail), we can see the sender in the email header. Any idea? Thanks.

        11:17:06 [xx.xx.xx.xx][15459629] rsp: 220 mail30.server.com
        11:17:06 [xx.xx.xx.xx][15459629] connected at 6/16/2010 11:17:06 AM
        11:17:06 [xx.xx.xx.xx][15459629] cmd: EHLO ulix.geo.auth.gr
        11:17:06 [xx.xx.xx.xx][15459629] rsp: 250-mail30.server.com Hello [xx.xx.xx.xx] 250-SIZE 31457280 250-AUTH LOGIN CRAM-MD5 250 OK
        11:17:06 [xx.xx.xx.xx][15459629] cmd: AUTH LOGIN
        11:17:06 [xx.xx.xx.xx][15459629] rsp: 334 VXNlcm5hbWU6
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 334 UGFzc3dvcmQ6
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 235 Authentication successful
        11:17:07 [xx.xx.xx.xx][15459629] Authenticated as [email protected]
        11:17:07 [xx.xx.xx.xx][15459629] cmd: MAIL FROM:
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK < Sender ok
        11:17:07 [xx.xx.xx.xx][15459629] cmd: RCPT TO:[email protected]
        11:17:07 [xx.xx.xx.xx][15459629] rsp: 250 OK Recipient ok
        11:17:08 [xx.xx.xx.xx][15459629] cmd: DATA

    Read the article

  • Canonical Redirect on Dynamic Mass Virtual Hosts on Apache

    - by Josh
    I have a Web app on Apache that allows users to point their domain at the server. Right now I'm using Apache's dynamic mass virtual hosts with an entry VirtualDocumentRoot /www/hosts/%0/docs So with www.companydomain.com it points to /www/hosts/www.companydomain.com/docs The problem is that when the user goes to companydomain.com it points to /www/hosts/companydomain.com/docs Is there an easy way to have Apache automatically check whether a directory exists for the virtual host and, if not, look for the host name with "www." in front of it? Other subdomains are fine (i.e. abc.domain.com should point to a different directory than def.domain.com); it's only the "www" case that is a mystery to me. I am using dynamic mass virtual hosts so the server does not have to restart after each registration for the application. A different approach is fine, as long as Apache isn't restarted each time. How can I accomplish this? Worst case, if there were a way to redirect to a "default" location on the server when nothing is found, I could always do a check via PHP or something, but I feel like that is a bit hacked together and there has to be a more efficient way. Thanks in advance!
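
    A hedged sketch with mod_rewrite layered in front of the VirtualDocumentRoot lookup (paths mirror the example above): when the bare domain has no docroot on disk, redirect the request to its www. form.

        # Sketch: fall back to www.<host> when the bare host has no docroot
        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteCond /www/hosts/%{HTTP_HOST}/docs !-d
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}$1 [R=301,L]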

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance, running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This all pretty much obviates the possibility of using a recovery USB drive to boot from, unless all it does is blindly reimage the harddrive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains to me that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure. "-f" just means to force repair of a clean (but unmounted) partition. I need to repair a clean or unclean mounted partition. From what I read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, but it flatly refuses to when not run from a terminal. One last-resort option is for me to compile my own hacked fsck. But I'm afraid I'll mess it up in some unexpected way. Thanks! Note: Originally posted here.

    Read the article

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way. I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form -- it's repeated hundreds of times today alone in my log files:

        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php
        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml

    (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that still reference them, hence why googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site. I just want to tell it somehow not to look for these files anymore.
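
    One hedged option, if those URLs will never come back: answer them with 410 Gone via mod_alias, which is a stronger signal than a 404 that crawlers can drop the link (a robots.txt Disallow rule is the gentler alternative).

        # Sketch (vhost config or .htaccess): mark the removed script as gone
        Redirect gone /calendar.php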

    Read the article

  • Vagrant and Puppet security for SSL certificates

    - by Sirex
    I'm pretty new to Vagrant; could someone who knows more about it (and Puppet) explain how Vagrant deals with the SSL certs needed when Vagrant test machines process the same node definition as the real production machines? I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against. If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good... However: if one machine on my network gets rooted, say unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it, and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node-name regex, and this hacked node would then get all the juicy information intended for the sql machine! One solution I can think of is to avoid autosigning and put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, then have a Vagrant ssh job move it into place. The downside is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this. tl;dr: How do other people deal with Vagrant and Puppet SSL certificates for development or testing clones of production machines?
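
    For reference, a sketch of the Vagrantfile wiring described above (Ruby DSL; host names hypothetical):

        # Sketch: a vagrant clone that requests the production node definition
        Vagrant.configure("2") do |config|
          config.vm.hostname = "sql.vagrant.domain.com"
          config.vm.provision :puppet_server do |puppet|
            puppet.puppet_server = "puppet.domain.com"  # the puppet master
            puppet.puppet_node   = "sql.domain.com"     # node definition to request
          end
        end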

    Read the article

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards. Is there a small business router that displays traffic graphs in the router's administration web interface? The router needs to support DHCP and basic firewalling. No other features are required. Further, the reports just need to show overall trends; it is not necessary to show traffic by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home-office routers, but if there's a commercial product that can be purchased, that would be significantly simpler. Also, the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

    Read the article

  • Web hosting for multiple web sites providing system isolation

    - by Justin
    We have a small number of projects where we expect the client will not be maintaining the installed versions of applications we install to power the site (such as Drupal). Given that an important part of security is keeping things updated, we don't want to host these projects on our Plesk-powered dedicated servers that currently host lots of our other clients' websites. Our goal is to find a host where we can deploy isolated instances (be these slices, virtual servers, grid servers, etc.) for each individual (or group of 2-3) web sites as we need them. These instances would be completely separate, so that if one web site were hacked it would not impact any other site. Typical hosting requirements: Linux; Apache; PHP 5; MySQL; supports Drupal; ability to set up a cron task (but we don't need SSH access); daily backups; virtualized/cloud hosting (we want to avoid shared); pricing per site around $25/month; OS patched automatically. Some options we have considered that won't work: MediaTemple (two major data-center-wide security incidents and recent downtime foster doubt about this host's technical ability); Slicehost (this would require us to manage the entire server, which we don't want to do); Rackspace Cloud Sites, formerly Mosso (no backup options). Do you have any recommended hosting options given these requirements?

    Read the article

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who hosts several WordPress 2.9.2 blogs. They are somehow getting defaced via the Elemento_pcx exploit, which drops these files in the root folder of each blog:

        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x 1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx". It shows a white fist on a black background with the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy. I'll change it up a little to show you what the password sort of looks like: wviking10. Do you think it's using an engine to crack the password? If so, how come our server logs aren't flooded with wp-admin requests as it runs down a random password list? The wp-content folder has no changes inside it, but is set to chmod 777 because wp-cache required it. Also, the wp-content/cache folder is chmod 777 too.
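
    Separate from finding the entry vector, one hedged hardening step: wp-cache generally only needs write access under wp-content/cache, so the 777 can be narrowed (user and group names below are assumptions; match them to your web server account):

        # Sketch: sane defaults, then write access only where the cache lives
        find wp-content -type d -exec chmod 755 {} +
        find wp-content -type f -exec chmod 644 {} +
        chown -R userx:www-data wp-content/cache
        chmod -R 775 wp-content/cache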

    Read the article

  • Web-based file search on the LAN?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file names are explicative enough. But date-range filters are a must for me. I just need a quick search like voidtools' Everything can do, but over the network. The files are on a WHS box (lol, "Videos" and "Music" share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried: LanSearch Pro (not good for me, as I need a quick index); Network Search Engine (would be perfect, but offers no date-range filter); Microsoft Search Server 2008 Express (horrible: first, it does NOT index filenames, and my Core2Duo is not powerful enough to run it smoothly); Google Desktop with a proxy on localhost to make it run on the LAN (I don't like the hacked result); the preinstalled Windows Search 4.0 (it totally fails at ranking results by relevance -- uninstalled); Docco... what's that? I am considering trying: IBM OmniFind; DocFetcher (can it work as a client? haven't investigated yet); Strigi (it looks like it can work as a client, right?). Any ideas/suggestions?

    Read the article

  • Securing bash scripts

    - by minnur
    Hi there, does anybody know the best way to secure bash scripts? I have a script which creates database and source code backups and FTPs them to another server, and the login/password for the destination FTP server are in plain text. I need to somehow encrypt or hide them in case the website is hacked. Or should I write a C program that creates the bash file, runs it, and then deletes it? Thanks. Thanks for the answers, and I am sorry I wasn't clear enough. I would like to clarify my question with the following points: We are storing the data in Rackspace Cloud Files. We can't pull, as Cloud Files doesn't allow you to run a script. We can write the script to run on server A and pull FTP and MySQL data from servers B, C, D, etc. And we want to protect the passwords on A from the situation where A is hacked. Can we compile our script file to hide them? Thanks
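
    A common hedged pattern (note that compiling a C wrapper only obscures the strings; strings(1) recovers them): keep the credentials out of the script body in a separate root-owned file, and source it at run time. The file name and values below are hypothetical.

        # /root/.backup_credentials -- readable by root only:
        #   FTP_USER=backupuser
        #   FTP_PASS=secret
        chmod 600 /root/.backup_credentials

        # inside the backup script:
        . /root/.backup_credentials
        echo "uploading as $FTP_USER"   # credentials now available to the script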

    Read the article

  • Continuous outbound connections from QNAP NAS

    - by user192702
    I notice on my firewall that my QNAP NAS is continuously sending UDP sessions out to the Internet. Every second I have 5-7 connections out to addresses like the following:

        2013-11-10 23:17:54 Deny 192.168.60.5 93.215.212.162 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 87.76.0.83 29872/udp 6881 29872
        2013-11-10 23:18:05 Deny 192.168.60.5 5.164.188.224 6881/udp 6881 6881
        2013-11-10 23:18:05 Deny 192.168.60.5 80.61.45.206 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 37.117.204.129 6881/udp 6881 6881
        2013-11-10 23:18:34 Deny 192.168.60.5 71.67.101.30 51413/udp 6881 51413
        2013-11-10 23:18:34 Deny 192.168.60.5 89.28.92.191 8621/udp 6881 8621
        2013-11-10 23:18:34 Deny 192.168.60.5 94.244.157.85 28221/udp 6881 28221
        2013-11-10 23:18:34 Deny 192.168.60.5 213.241.61.240 9089/udp 6881 9089
        2013-11-10 23:18:45 Deny 192.168.60.5 88.163.28.100 52721/udp 6881 52721
        2013-11-10 23:18:45 Deny 192.168.60.5 37.55.190.20 10027/udp 6881 10027
        2013-11-10 23:18:45 Deny 192.168.60.5 62.72.188.146 14306/udp 6881 14306
        2013-11-10 23:19:14 Deny 192.168.60.5 85.53.244.205 51413/udp 6881 51413
        2013-11-10 23:19:14 Deny 192.168.60.5 67.163.18.215 52130/udp 6881 52130
        2013-11-10 23:19:14 Deny 192.168.60.5 86.172.105.140 9089/udp 6881 9089
        2013-11-10 23:19:14 Deny 192.168.60.5 99.28.56.121 52383/udp 6881 52383
        2013-11-10 23:19:14 Deny 192.168.60.5 109.60.184.249 46217/udp 6881 46217
        2013-11-10 23:19:25 Deny 192.168.60.5 121.107.144.174 21135/udp 6881 21135
        2013-11-10 23:19:25 Deny 192.168.60.5 84.39.116.180 48446/udp 6881 48446
        2013-11-10 23:19:25 Deny 192.168.60.5 183.238.254.62 openvpn/udp 6881 1194
        .........

    This is frightening as it seems like it's been hacked to send information out. Has anyone observed this behaviour from their QNAP NAS?

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15  | Next Page >