Search Results

Search found 483 results on 20 pages for 'dangerous'.

Page 10 of 20

  • CA For A Large Intranet

    - by Tim Post
    I'm managing what has become a very large intranet (over 100 different hosts / services) and will be stepping down from my role in the near future. I want to make things easy for the next victim (er, person) who takes my place. All hosts are secured via SSL. This includes various portals, wikis, data entry systems, HR systems and other sensitive things.

    We're using self-signed certificates, which worked OK in the past but are now problematic because:

    - Browsers make it harder for users to understand exactly what is going on when a self-signed certificate is encountered, much less accept one.
    - Putting up a new host means 100 phone calls asking what "Add an exception" means.

    What we were doing was just importing the self-signed certs when we set up a new workstation. This was fine when we only had a dozen to deal with, but now it's just overwhelming. Our I.T. department has classified this as y'all's problem; all we get from them is support for switch and router configurations. Beyond the user having connectivity, everything else is up to the intranet administrators. We have a mix of Ubuntu and Windows workstations.

    We'd like to set up our own self-signed CA root, which can sign certificates for each host that we deploy on the intranet. Client browsers would of course be told to trust our CA. My question is, would this be dangerous, and would we be better off going with intermediate certificates from someone like Verisign? Either way, I still have to import the root for the intermediate CA, so I really don't see what the difference is. Other than charging us money, what would Verisign be doing that we could not, beyond protecting the root CA cert so it can't be used to make forgeries?
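
    A minimal OpenSSL sketch of the internal-CA setup described above (not from the original question; the host name, subject names and validity periods are illustrative assumptions):

        # 1. Create the private root CA key and a self-signed root certificate
        openssl genrsa -out rootCA.key 4096
        openssl req -x509 -new -key rootCA.key -sha256 -days 3650 \
            -subj "/CN=Example Intranet Root CA" -out rootCA.crt

        # 2. Create a key and a signing request for one intranet host
        openssl genrsa -out wiki.intranet.key 2048
        openssl req -new -key wiki.intranet.key \
            -subj "/CN=wiki.intranet.example" -out wiki.intranet.csr

        # 3. Sign the host request with the root CA
        openssl x509 -req -in wiki.intranet.csr -CA rootCA.crt -CAkey rootCA.key \
            -CAcreateserial -days 825 -sha256 -out wiki.intranet.crt

    Clients then import rootCA.crt once instead of accepting one exception per host; the root key itself should live offline or on a tightly controlled machine.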

    Read the article

  • SEO - different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Say a user is visiting from an iPad; I take them directly to the page of iPad wallpapers, the user selects their iPad version, and I take them to the gallery of wallpapers where they can select and download any wallpaper. Every wallpaper is the required resolution, and I have my reasons for doing this.

    Now the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each having its own view page. There is only one record in the DB table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when 5 different pages are generated in the 5 categorized sections of the website, due to the shared DB record the keywords, the titles and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details each page has. The pages also have different paths, like:

    wallpapers.com\ipad-1\cars\Ferrari-dino.html
    wallpapers.com\ipad-2\cars\Ferrari-dino.html
    wallpapers.com\ipad-3\cars\Ferrari-dino.html
    wallpapers.com\ipad-4\cars\Ferrari-dino.html
    wallpapers.com\ipad-5\cars\Ferrari-dino.html

    That is my scenario. How do search engines see it, and how do they rank it? Is it a good, normal or bad SEO practice? If it is bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a powerful story built around a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed with architecture/planning, implementation, and expert services, which, in turn, provide faster adoption and return on investment with Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad hoc analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Shielded ethernet cable and ethernet sockets earthing how to?

    - by ageis23
    Hi, I'm going to install 5 Ethernet sockets in my house using Cat5e shielded cable. I decided to use this because the sockets will be on the second floor and the most practical way up is within a trunk along with some mains wiring. The cable will be terminated at the router and at the Ethernet faceplate. What can I use to earth it, then? The faceplate and router are both plastic, hence no earth wire there. I can't use the earth wire within the mains socket, can I? I figured that would be very, very dangerous. I don't want to be connecting to the earth block on the mains either, since I'm totally incompetent when it comes to mains electricity.

    Read the article

  • Issue with https:// url going to an unknown location

    - by Brandon
    We have a website (ASP.NET/Plesk 9.5.5) that can be accessed just fine through the regular URL (http://example.com). However, when accessing the site through https://example.com, the site displays the invalid security certificate warning, which is fine since we don't have an SSL certificate. If I add an exception, I'm sent to a completely separate site that is apparently hosting a malware script (I'm still on https://example.com, though). Because of this, Google has flagged the site as dangerous. I can't find anything in the Plesk panel that would help fix this, and as far as I can tell those files don't exist on our server. How do I tell where the https:// link is sending me? I'm not that familiar with DNS, but is that what is causing this behavior?
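
    Two standard checks (not part of the original question; the domain is a placeholder) that show which address and certificate are actually answering on port 443:

        dig +short example.com               # does the name resolve to your server's IP?
        echo | openssl s_client -connect example.com:443 -servername example.com \
            | openssl x509 -noout -subject -issuer   # whose certificate is being served

    If the name resolves correctly but the certificate subject is foreign, the next place to look is what is bound to port 443 on the server itself (often a default or compromised vhost) rather than DNS.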

    Read the article

  • Is it safe to use consumer MLC SSDs in a server?

    - by Zypher
    We (and by we I mean Jeff) are looking into the possibility of using consumer MLC SSDs in our backup data center. We want to try to keep costs down and usable space up, so the Intel X25-Es are pretty much out at about $700 each for 64 GB of capacity. What we are thinking of doing is to buy some of the lower-end SSDs that offer more capacity at a lower price point. My boss doesn't think spending about $5k for disks in servers running out of the backup data center is worth the investment. Just how dangerous of an approach is this, and what can be done to mitigate these dangers?
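
    One commonly suggested mitigation, an assumption on my part rather than something from this thread, is to watch each drive's SMART wear and error attributes so a worn drive gets replaced before it fails (the device name is a placeholder, and smartmontools must be installed):

        sudo smartctl -A /dev/sda | grep -iE 'wearout|wear_level|reallocated|reserv'
        # smartd (from the same package) can poll these on a schedule and send alerts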

    Read the article

  • Implementing new required feature after software release

    - by TiagoBrenck
    Fake scenario: there is a piece of software that was released 1 year ago. The software is to map and register all kinds of animals on our planet. When the software was released, the client only needed to know the scientific name of the animal, a flag for whether it is at risk of extinction, and a danger scale (this is a fake software specification; I don't want to discuss it here). There are already 100,000 animal records saved in the DB.

    New feature: one year later, the client wants a new feature. It is really important to him to know the animals' classes, and this is a required field. So he asks me to add a field to input the animal class, and this field is required. Or maybe where the animal was discovered.

    Problem: I already have 100,000 recorded animals without a class or a place of discovery, but I need to add a new column to store this information, and this column can't be null. I don't have a default value for this situation (there isn't a default animal class or place of discovery). I don't want to keep the requirement rule only in my software; my DB must have this requirement too (I like to keep business rules in the DB as well). What are the alternatives to solve this situation? I am in a situation where this new feature cannot be previewed or reviewed for the existing records. The time has already passed and I can't go back in time to get it.
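
    The post doesn't name the database, so purely as an illustration (PostgreSQL-style commands, with table and column names invented here), one common pattern is to add the column as nullable, backfill an explicit sentinel, and only then enforce NOT NULL:

        psql animals_db -c "ALTER TABLE animal ADD COLUMN animal_class text;"
        psql animals_db -c "UPDATE animal SET animal_class = 'UNCLASSIFIED' WHERE animal_class IS NULL;"
        psql animals_db -c "ALTER TABLE animal ALTER COLUMN animal_class SET NOT NULL;"

    The sentinel keeps the rule in the schema while admitting that the historical records genuinely lack the data, instead of inventing a fake default.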

    Read the article

  • Dreaded SQLs

    - by lavanyadeepak
    We used to think that only a SQL statement without a WHERE clause is dangerous, since running that against a server table impacts every row, like waving a magic wand. For that reason we should cultivate the habit of writing the statement as a SELECT first and only then converting the SELECT portion into the UPDATE or DELETE. Within the T-SQL window, I would normally prefer the following first: select * from employee where empid in (4,5) and then, once I am satisfied with the results, I would go ahead with the following change: --select * delete from employee where empid in (4,5)

    Today I discovered another coding horror. This one typically applies to stored procedures and variable nomenclature. It is always desirable to have a naming convention for parameters that is distinct from the column names and internal variables. This helps quicker debugging of stored procedures, besides enhancing readability. Otherwise, in a quick bout of enthusiasm, a statement like if (@CustomerID = @CustomerID) [where the latter was intended to denote the column name but a superfluous @ got prepended] silently compares the parameter with itself instead of with the column, and zeroing in on the problem will be a little tricky. Had there been stronger nomenclature rules, debugging would have been more straightforward and simpler, right?

    Read the article

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous functions with PHP's disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode". But I'm still fighting directory traversal and can't seem to limit it: using a shell script like c99, I can travel from my /home directory to anywhere on the disk. How can I limit it once and for all?
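
    A common complement, suggested here rather than taken from the post, is PHP's open_basedir directive, which confines PHP file operations to the listed directory trees (the paths below are placeholders):

        # php.ini (global):
        #   open_basedir = "/home/username/public_html:/tmp"
        # or per-vhost with Apache + mod_php:
        #   php_admin_value open_basedir "/home/username/public_html:/tmp"
        apachectl graceful    # reload Apache after the change (apache2ctl on Debian/Ubuntu)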

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode.

    - by George
    Hello. I have an Apache 2.17 server running on Fedora 13. I want to be able to create a file in a directory, but I cannot. Whenever I try to open a file with PHP for writing, fopen(,'w'), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/. It says user apache, group apache. So I changed ownership (chown -R apache:apache .*) of my whole /www directory to apache:apache. I also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it actually still gives me the same error, even though I allow public write!
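
    Not mentioned in the question, but on Fedora the usual suspect when permissions look wide open and Apache still cannot write is SELinux; a diagnostic sketch with a placeholder path:

        getenforce                                    # is SELinux enforcing?
        ls -Z /var/www/html                           # show the SELinux context of the directory
        sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/writable
        # make the label permanent so a relabel doesn't undo it:
        sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/writable(/.*)?'
        sudo restorecon -Rv /var/www/html/writable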

    Read the article

  • Choosing a Linux distribution

    - by Luke Puplett
    Dangerous territory with this question, so please try to be impartial and instead focus on what to look for when choosing a Linux distribution. I'm completely new to Linux. I thought it'd never happen, but I need to have a Linux box to play with, and I have a spare fanless Atom PC (32-bit only). I'll be using the machine as a non-commercial hobby server; the trouble is, I don't even know how to compare Linux distributions or why people pick one over another. If anything, I want an easy install from a USB stick. My question is: what do you look for when choosing a (free?) Linux distribution for a server? If you can, please explain what sorts of things actually differ between one and another without saying which you think is better, just the facts. The way I see it, Linux as a server is just an SSH console, and I find it hard to imagine what could be different between one and another.

    Read the article

  • Ubuntu boots in read-only filesystem after upgrade!

    - by akatzbreaker
    I've got a serious problem here: I recently upgraded to the latest version of Ubuntu. Now I boot into my Ubuntu partition and I get a low-graphics error. I boot into recovery mode to see what the problem is. I try to fix any damaged packages and to run fsck, but nothing solves the problem. Then, from the recovery menu, I open a root shell. I try to create a file and realize that the filesystem is read-only. Then I run: mount -o remount,rw / and it works for that terminal session! When I go back to recovery and select resume normal boot, I get the same error. I also tried booting to my root shell again, remounting as read-write and starting GNOME from there. It worked! (But the user is root, which is quite dangerous!) However, I can't do this whole process at every boot! Any solution? (Note that when I try to create a new file in my Ubuntu partition from another OS, I don't get any errors!)
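
    A hedged triage sketch for a root filesystem that keeps coming up read-only (the device name is a placeholder; none of this comes from the original post):

        dmesg | grep -iE 'ext4|error|read-only|remount'   # why the kernel remounted read-only
        cat /etc/fstab                                    # check the options on the / entry
        sudo touch /forcefsck                             # request a full fsck on the next boot
        # or, from the recovery/live environment with the partition unmounted:
        sudo fsck -f /dev/sda1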

    Read the article

  • Ubuntu server is dropping SSH connections, then not allowing me to log back on

    - by wilhil
    I have an ESX box on which I have loaded two Ubuntu Server machines. During setup I chose no additional packages, as I just wanted a lightweight machine for testing. The first thing I did was change the root password via sudo passwd. After ESX got on my nerves through lag, I decided to install OpenSSH via apt-get install openssh-server. It did its business, and I then opened PuTTY and could connect to both machines fine. The first time it connected, it asked me to add the SSH key, as obviously it did not know it. Anyway, the second server is working flawlessly, but the first seems to be giving me trouble. I was in the middle of typing a sentence when it kicked me off for no reason, and when I tried to reconnect, PuTTY warned me that the SSH key had changed and that this is potentially dangerous. I attempted to log in anyway and it did not work, just the standard access-denied message. Using the second machine, I SSHed into the first machine and it worked straight away. I then killed the SSH sessions (and possibly the SSH server), reconnected via PuTTY, and again received the security warning, but this time it allowed me to log on fine. ... I thought "glitch" and nothing more of it, but it just happened again! I really do not understand this and was hoping someone here could help.
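
    Two quick checks worth running (my suggestion, not from the thread): a changed-host-key warning combined with dropped sessions can mean two machines are answering on the same IP address, which is easy to do with freshly installed or cloned VMs. The address below is a placeholder.

        ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub   # on each guest: the fingerprint PuTTY should see
        arp -a | grep 192.168.1.50                     # on the client: does this IP flip between MACs?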

    Read the article

  • Debian/Ubuntu apt or pbuilder without root privileges?

    - by Tem Pora
    I want to use apt or pbuilder to build a package in my user's home directory. The home directory has enough space to hold the package's source, its dependencies and the binary output, but the apt and pbuilder documents say that you have to be root (sudo) to use them. It's frustrating, as the only way I now have at my disposal is to build the package from source, or use the dumba$$ (sorry for the bad language) dpkg, and in both cases figure out every dependency manually, create the directory layout manually and install the built things manually. Now, if I can do all these things manually, why do the tool writers (apt) think that doing so using their tool is somehow more special/dangerous? I don't want to use root privileges JUST to build and test a user-land package. If I am NOT allowed to do anything outside my home dir, then why can't apt- or pbuilder-type commands be allowed to "build" something in my home dir without root privileges? I just want to use their functionality. It seems there is nothing like Gentoo Prefix for Debian.
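
    For what it's worth, the fetch-and-build steps themselves do run unprivileged; a sketch (the package name is a placeholder, and installing missing build dependencies is the one step that still wants root):

        apt-get source somepackage       # unprivileged; needs deb-src lines in sources.list
        cd somepackage-*/
        dpkg-buildpackage -us -uc -b     # builds the .deb in the parent directory, no root needed
        fakeroot debian/rules binary     # fakeroot covers steps that only pretend to need root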

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. OTOH, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value for those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty, well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.)

    Read the article

  • Long 'Wait' Time for three php/CSS files. Is something blocking them?

    - by William Pitcher
    I have been speed-optimizing a WordPress site to little effect. There are three CSS-related PHP files from the WordPress theme that are delaying page loads on the site. One of the three files is basically one line of custom CSS from the custom CSS feature in the theme. You can see what I am talking about with this Pingdom speed test: the yellow is 'Wait'. There are no slow items in the cut-off portion of the image. The full results are here: Pingdom Results Page.

    1. Any thoughts on what might be causing this? I understand that I have blocking CSS or JS files, but I don't see anything that would be causing that long of a wait. When I ran the P3 Plugin Profiler, WordPress and all plugins appeared fine; it is the theme that is taking all the time. GTmetrix recommends avoiding dynamic queries. I assume all the ver=3.61 references are to the version of WordPress (which I am using). I noticed that my WordPress sites using other themes don't make this query (at least not over and over).

    2. Is this typical coding practice?

    3. How much negative impact do these query strings have: a little or a lot?

    I tried searching for similar questions here; please excuse me if I missed something. Sometimes I know just enough to be dangerous.

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-model development process. We have requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just up front, this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation and not test execution, where we avoid the involvement of developers.)

    Read the article

  • A Dozen USB Chargers Analyzed; Or: Beware the Knockoffs

    - by Jason Fitzpatrick
    When it comes to buying a USB charger, one is just as good as another, so you might as well buy the cheapest one, right? This interesting and detailed analysis of name-brand, off-brand, and counterfeit chargers will have you rethinking that stance. Ken Shirriff gathered up a dozen USB chargers, including official Apple chargers and counterfeit Apple chargers, as well as offerings from Monoprice, Belkin, Motorola, and other companies. After putting them all through a battery of tests, he gave them overall rankings based on nine different categories, including power stability, power quality, and efficiency. The takeaway from his research? Quality varied widely between brands, but when sticking with big companies like Apple or HP the chargers were all safe. The counterfeit chargers (like the $2 Apple iPad charger knock-off he tested) proved to be outright dangerous; several actually melted or caught fire in the course of the project. Hit up the link below for his detailed analysis, including power output readings for the dozen chargers. A Dozen USB Chargers in the Lab [via O'Reilly Radar]

    Read the article

  • Many user stories share the same technical tasks: what to do?

    - by d3prok
    A little introduction to my case: as part of a bigger product, my team has been asked to build a small IDE for a DSL. The user of this product will be able to make function calls in the code, and we are also asked to provide some useful function libraries. The team, together with the PO, put on the wall a certain number of user stories regarding the various libraries for the IDE user. When estimating the first of those stories, the team decided that the function call mechanism would be an engaging but not completely obvious task, so the estimate for that user story rose from a simple 3 to a more dangerous 5.

    Coming to the problem: the team then moved on to the user stories regarding the other libraries, actually 10 stories, and added those 2 points of "function call mechanism" work to each of them. This immediately raised the total points for the product by 20 points! Everyone in the team knows that any user story could be picked up by the PO for the next iteration at any time, so we shouldn't isolate that part in one user story, but those 20 points feel so awfully unrealistic!

    I've proposed a solution, but I'm absolutely not satisfied with it: we created a "design story" and put those annoying 2 points on it. However, when we came to implement it and demonstrate it to our customers, we were unable to show them anything really valuable about that story! The problem here is whether we should ignore the principle of having isolated user stories (without any dependency between them). What would you do, or even better, what have you done, in situations like this? (A small footnote: following a suggestion, I've moved this question from Stack Overflow.)

    Read the article

  • Is it a good idea to dynamically position and size controls on a form or statically set them?

    - by CrystalBlue
    I've worked mostly with interface-building tools such as Xcode's Interface Builder and Visual Studio's environment to place forms and position them on screens. But I'm finding that with my latest project, placing controls on the form through a graphical interface is not going to work. This has more to do with the number of custom controls I have to create that I can't see visually beforehand. When I first tackled this, I began to position all of my controls relative to the last ones that I created. Doing this had its own pros and cons. On the one hand, it let me set one number (a margin, for example) and when I changed the margin, the controls all resized correctly relative to one another (such as shortening controls in the center while keeping controls next to the margin the same). But this started to become a spider's web of code that I knew wouldn't go very far before getting dangerous. Change one number and everything resizes, but remove one control and you've created many more errors and size problems for all the other controls. It became more surgery than small changes to controls and layout. Is there a good way, or maybe a preferred way, to determine when I should be using relative or absolute positioning in forms?

    Read the article

  • Do old package versions in CentOS mean that they do not have security fixes?

    - by user1421332
    We asked our admin to update SVN on our CentOS 6.5 server. He did so, and the result was SVN 1.6.11. However, the current version of SVN is 1.8.9. I know the CentOS yum repository is not always up to date, but in that case I am confused: SVN 1.6.x is not officially supported anymore, which means it does not get any security fixes! How can the official CentOS repository provide such an old (and dangerous) version? Is there something we (or our admin) have misunderstood?
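
    One thing worth checking before concluding the package is unpatched: Red Hat and CentOS backport security fixes into the old version numbers, so the fixes show up in the RPM changelog rather than in the upstream version string. A quick sketch:

        rpm -q subversion
        rpm -q --changelog subversion | grep -iE 'cve|security' | head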

    Read the article

  • How to meet Windows 8 upgrade's 20 GB requirement on a 40 GB SSD with a 22 GB Windows 7 install?

    - by deryus
    A PC I have has Windows 7 installed on a 40 GB SSD, and I bought a Windows 8 upgrade for it. The current Windows folder on it, however, is 22 GB; that's after removing hibernation, turning off the pagefile and removing all extra programs/features. So even if I purge every other file and folder, the Windows folder itself takes more than half the disk. The PC also has a 1 TB HDD, but the upgrade installer didn't give me any options about choosing another drive. So, is my only option to reinstall Windows 7 on a larger drive and then proceed with the Windows 8 upgrade? Or is there anything I can remove from the Windows folder that, while it might be dangerous for long-term usage, is fine for the few minutes I need to get Windows 8 installing?

    Read the article

  • Will Ubuntu Live CD move MFT to resize NTFS volumes?

    - by irwazr
    I have a feeling some will consider this a duplicate, but please hear me out. I've been reading tons of questions and threads around this but have never really found an answer for this specifically. I want to shrink my NTFS partition to make room for an Ubuntu install, so I can dual-boot them. However, when shrinking the NTFS volume in Windows Disk Management, it will only go so far, as the MFT is sitting near the end of the volume. I've read plenty of posts about why it does this and how difficult/dangerous it is to move the MFT, etc. I've also read that PerfectDisk can apparently do it during its trial period, but I remain cautious about trying that method. I was wondering, however, whether the disk partitioning utility included in the Ubuntu install wizard handles moving the MFT when dragging the partition boundaries. It all seems too simple: you just tell it the new size you want. Would it tell me if it couldn't resize by the amount I requested because the MFT was an issue, or would it move the MFT for me if it were able to? I'm concerned it might corrupt the MFT and the volume, even though I doubt the install wizard would be so daft. So what exactly is the deal with the partition resizing tool in the Ubuntu install wizard? Will it safely resize my NTFS volume despite the location of my MFT? Thanks in advance.
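
    As a sanity check (not from the original question; device and size are placeholders), ntfsresize, the tool Linux partitioners typically use for NTFS, has a read-only dry-run mode that reports whether a given shrink is feasible before anything is written, and it is usually available in the Ubuntu live session:

        sudo ntfsresize --info /dev/sda2                   # smallest size the filesystem could shrink to
        sudo ntfsresize --no-action --size 60G /dev/sda2   # simulate the shrink, change nothing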

    Read the article

  • Kill all currently running cron jobs

    - by Adelphia
    For some reason my cron job scripts aren't exiting cleanly and they're backing up my server. There are currently a couple hundred processes running for one of my users. I can use the following command to kill all processes by that user, but how can I simplify this to kill only crons? pgrep -U username | while read id ; do kill -6 $id ; done It would be dangerous to run the above command as is, correct? Wouldn't that kill mysql and other important things?
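
    A narrower sketch (user and script names are placeholders): match only the cron-launched jobs, or the children of the cron daemon, instead of every process the user owns.

        pgrep -u username -f 'myscript.sh'    # preview exactly what would be matched
        pkill -u username -f 'myscript.sh'    # then kill just those
        # or kill every child of the cron daemon (crond on RHEL/CentOS, cron on Debian/Ubuntu):
        for pid in $(pgrep -x crond); do pkill -P "$pid"; done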

    Read the article

  • Motherboard dual gfx power question

    - by user33931
    First, I am a software guy; I don't do hardware. So I know that to you hardware geeks this is a dumb question. I just inherited a box with an ASUS P5GZ-MX motherboard. I have attempted to install two nVidia PCI video cards. I put a 750 W power supply in the system to be sure I have enough power. With no extra video cards, the 3.3 V rail shows normal. When I put one card in, the 3.3 V goes to 3.5-3.6 and flashes red (over voltage) about 30% of the time. When I put the second card in, it goes to 3.73 V and stays red all the time. Any ideas why the voltage goes up when I add cards instead of going down? More importantly, is this dangerous to the system?

    Read the article
