Search Results

Search found 17688 results on 708 pages for 'protect computer'.

  • Where to draw the line between front end and back end

    - by Twincascos
    I was recently contracted to develop a Smarty theme for an automated SOHO phone answering service. The team who had built the back end wouldn't allow me access to any of it, nor tell me anything about it: their Smarty setup, Smarty plugins, database interface API, server setup, nothing. Nor could I have access to the server or a beta domain; basically zero cooperation. So I set up a local server with Smarty and built the template based on what I guessed would be their best practice, commented my code like crazy, and wrote all the needed JavaScript, CSS, and template files. Then I sent it all, packaged, to the back-end team and hoped for the best. With half of a project team failing to cooperate or even communicate, I am now concerned that they may reply saying that everything is wrong and refuse to implement the new front end. I'm curious to know if others have encountered this type of situation and what you may have done to protect yourselves.

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user-agents, serve all requests from those user-agents a 400 or similar, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird to have an abusive scraper with a distinctive user-agent.)
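    Roughly what I have in mind, as a sketch (the middleware name and ban list are made up, and WSGI is used purely for illustration):

      # Hypothetical sketch: refuse requests whose User-Agent is on a ban list.
      BANNED = ("GSLFBot",)

      class BlockScrapers:
          def __init__(self, app):
              self.app = app

          def __call__(self, environ, start_response):
              ua = environ.get("HTTP_USER_AGENT", "")
              if any(bot in ua for bot in BANNED):
                  # The post suggests "a 400 or similar"; 403 is another option.
                  start_response("403 Forbidden", [("Content-Type", "text/plain")])
                  return [b"Forbidden"]
              return self.app(environ, start_response)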

    Read the article

  • Are programming languages perfect?

    - by mohabitar
    I'm not sure if I'm being naive, as I'm still a student, but a curious question came to my mind. In another thread here, a user stated that in order to protect against piracy of your software, you must have perfect software. So is it possible to have perfect software? This is an extremely silly hypothetical situation, but if you were to gather the most talented and gifted programmers in the world and have them spend years trying to create 'perfect' software, could they be successful? Could it be that not a single exploitable bug could be created? Or are there flaws in programming languages that can still, no matter how hard you try, cause bugs that allow your program to be hijacked? As you can tell, I know nothing about security, but essentially what I'm asking is: is the reason why software is easily exploitable the fact that imperfect human beings create it, or that imperfect programming languages are being used?

    Read the article

  • Where to download replacement "ExplorerFrame.dll" files for x64 Windows 7 Pro?

    - by Ben Franchuk
    After posting this question, I did some research into what the problem likely was, and found out what I need to fix. Following this is the original post, then my updated question.

    A few months ago I ended up needing to change my computer's SID to fix a problem it had been having. Instead of fixing the problem, though, it screwed up my then-current install of Windows, to the point of me needing to do a fresh install. As I am in possession of an OEM copy of Windows 7 Pro 64-bit, I successfully reinstalled over the dead copy with that (all the files that were on the computer previous to this Windows install were put in a Windows.old folder). Everything installed and worked absolutely fine, except for one thing.

    The problem I am experiencing is that, in some Windows Explorer windows, the Explorer pane doesn't show. Instead, it simply shows a white area where the pane would be. This makes some software unusable, I recently realized; software such as Cubase, which uses just the Explorer pane to select file save locations, cannot save at all as the pane itself is... not operational. Below is a screenshot of this problem as it occurs in Cubase, and again as it shows in uTorrent's save-location selector window. The highlighted area is where the sidebar would NORMALLY be. Pardon my scribbling over some of the things in the window; I would personally rather the internet did not get a glimpse of my files.

    I have yet to find a common reason why the pane works in some applications when they pull up Explorer and not in others. It has never gone away, and the affected software has been affected since I reinstalled my copy of Windows. Initially I was able to live with it, as I can type out save locations in the file-name bar to navigate, but with software like Cubase I do not have that option. Reinstalling Windows again is NOT an option.

    Here's the updated question. After posting this question originally, I did some research on the problem, and it turns out that it is extremely easily fixable by replacing the file "ExplorerFrame.dll", which is located in the System32 and SysWOW64 folders under C:\Windows. As I quite frequently customize my computer, this is a normal thing for me to do, and I know exactly how to safely and properly replace this file. The only problem is that I cannot for the life of me find a download of this file that actually works with my computer. I tried a couple from some different sites, but they all caused Explorer to fail to restart (I was given an error when starting the application from Task Manager) and I was forced to revert to the broken .dll files. Since there are two separate "ExplorerFrame.dll" files, one for 64-bit and the other for 32-bit, I am assuming that I will need to download two separate versions to replace the corrupted ones. Where can I acquire these files? I am currently running Windows 7 Professional 64-bit.
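    For reference, a sketch of the manual swap being described, assuming good64.dll and good32.dll are known-good copies (takeown and icacls are the stock tools for taking ownership of protected system files):

      rem Sketch: replace both copies of ExplorerFrame.dll (64-bit and 32-bit)
      takeown /f C:\Windows\System32\ExplorerFrame.dll
      icacls C:\Windows\System32\ExplorerFrame.dll /grant Administrators:F
      copy /y good64.dll C:\Windows\System32\ExplorerFrame.dll
      takeown /f C:\Windows\SysWOW64\ExplorerFrame.dll
      icacls C:\Windows\SysWOW64\ExplorerFrame.dll /grant Administrators:F
      copy /y good32.dll C:\Windows\SysWOW64\ExplorerFrame.dll
      rem Alternatively, "sfc /scannow" can restore system files from the local component store.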

    Read the article

  • How to Restrict Android App Permissions

    - by Chris Hoffman
    Android forces you to agree to every permission an app wants, assuming you want to use the app. After rooting your device, you can manage permissions on a per-app basis. Restricting permissions allows you to protect your contacts and other private data from apps that demand access you’d rather not allow. Many apps will continue working properly after you revoke the permissions.

    Read the article

  • 6 Ways To Secure Your Dropbox Account

    - by Chris Hoffman
    Dropbox is a hugely popular cloud storage service beloved by many. Unfortunately, it’s had a history of security problems, ranging from compromised accounts to once allowing access to every Dropbox account without requiring a password for several hours. If you’re using Dropbox, there are a variety of ways you can secure your account against unauthorized access and protect your files even if someone does gain access to your account.

    Read the article

  • Is it important to obfuscate C++ application code?

    - by user827992
    In the Java world it seems to sometimes be a problem, but what about C++? Are there different solutions? I was thinking about the fact that someone can replace the C++ library of a specific OS with a different version of the same library, one full of debug symbols, to understand what my code does. Is it a good thing to use standard or popular libraries? This can also happen with a DLL under Windows replaced with the "debug version" of that library. Is it better to prefer static compilation? In commercial applications, I see that the core of the app is compiled statically, and for the most part DLLs (dynamic libraries in general) are used to offer third-party technologies like anti-piracy solutions (I see this in many games), GUI libraries (like Qt), OS libraries, etc. Is static compilation the equivalent of obfuscation in the Java world? In other words, is it the best and most affordable solution to protect your code?
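    For concreteness, the static, stripped build under discussion is roughly this (standard GCC flags; a sketch of symbol reduction, not a protection guarantee):

      # Link statically and strip the symbol table so there is less to inspect
      g++ -O2 -static -fvisibility=hidden -s main.cpp -o app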

    Read the article

  • Why a write-enable ring?

    - by SpashHit
    Here's an "interview question" that while ostensibly about hardware really does inform a software design principal as well. Computers used to (still do I guess, somewhere) use magnetic tape reels to store data. There was a plastic accessory you could attach to a tape reel called a "write-enable ring". If the tape had such a ring, the tape drive allowed writing to the tape... if not, it only allowed read access. Why was the choice to design the system in this way? Why not have a "write protect ring" instead, with the opposite effect?

    Read the article

  • PageRank sharing between domains?!

    - by Senthil
    I own three domains, say example.com, example.in and example.co.in. I have bought the .in and .co.in TLDs only to protect the brand. But I have this question: if I make the other two also point to my hosting, so that regardless of which one the user types they are taken to the same website, will the PageRank be split into three, with each domain having one third the actual PR value? What should I do with the other two domains? Where should I point them, if I don't intend to use them at all (i.e., what should I give in place of ns1.myprovider.com, ns2.myprovider.com, etc.)?

    Read the article

  • How to write a network game?

    - by TomWij
    Based on Why is it so hard to develop an MMO?: Networked game development is not trivial; there are large obstacles to overcome in not only latency, but cheat prevention, state management and load balancing. If you're not experienced with writing a networked game, this is going to be a difficult learning exercise. I know the theory about sockets, servers, clients, protocols, connections and such things. Now I wonder how one can learn to write a network game:
      - How to balance load problems?
      - How to manage the game state?
      - How to keep things synchronized?
      - How to protect the communication and client from reverse engineering?
      - How to work around latency problems?
      - Which things should be computed locally and which things on the server?
      - ...
    Are there any good books, tutorials, sites, interesting articles or other questions regarding this? I'm looking for broad answers, but specific ones that illustrate the difference are fine too.
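    On the question of what runs where, the usual baseline is a server-authoritative loop; a minimal sketch (all names hypothetical) in which clients send only inputs while the server owns and validates the state:

      class Server:
          def __init__(self):
              self.state = {}    # player_id -> (x, y); the server owns all state
              self.pending = []  # (player_id, (dx, dy)) inputs received from clients

          def apply_input(self, pid, move):
              dx, dy = move
              # Validate on the server; never trust client-computed positions.
              if abs(dx) > 1 or abs(dy) > 1:
                  return  # reject impossible moves (basic cheat prevention)
              x, y = self.state.get(pid, (0, 0))
              self.state[pid] = (x + dx, y + dy)

          def tick(self):
              # Run at a fixed rate (e.g. 20 Hz): apply queued inputs, then
              # broadcast a snapshot for clients to interpolate between.
              for pid, move in self.pending:
                  self.apply_input(pid, move)
              self.pending.clear()
              return dict(self.state)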

    Read the article

  • Passwords in the Password/Encryption Keys program

    - by Gaurav_Java
    I noticed that I have passwords in the Password/Encryption Keys program. It appears that anybody who walked up to my computer could go look at all my passwords without needing a master password. Did I do something wrong, or is this the default behavior? And if so, why? Also, if I lock my keyring, does it stay locked until I log out, or do I have to unlock it every time I want to see a password? If so, how do I protect my passwords from others, and why is it done this way?

    Read the article

  • How do you go about checking your open source libraries for keystroke loggers?

    - by asd
    A random person on the internet told me that a technology was secure(1), safe to use and didn't contain keyloggers because it is open source. While I can trivially detect the keystroke logger in this open source application, what can developers(2) do to protect themselves against rogue committers to open source projects? Doing a back-of-the-envelope threat analysis: if I were a rogue developer, I'd fork a branch on git and promote its download, since it would have Twitter support (and a secret keystroke logger). If it was an SVN repo, I'd just create a new project. Even better would be to put the malicious code in the automatic update routines. (1) I won't mention which, because I can only deal with one kind of zealot at a time. (2) Ordinary users are at the mercy of their virus and malware detection software; it's absurd to expect grandma to read the source code of her open source word processor to find the keystroke logger.

    Read the article

  • Should I give preferential treatment to proxy users on my ecommerce site?

    - by Question Overflow
    I am setting up an ecommerce site that caters to a worldwide audience. I would imagine that visitors would come from everywhere and, for whatever reasons, some would be connecting through proxy servers. My site uses a server that is configured to rate-limit connections from the same IP address to protect itself from a DoS attack. So, if a proxy server is heavily used by my visitors, it would appear to be a DoS. This is problematic in the sense that it is hard to tell whether the users are genuinely browsing my site or a DoS is taking place. So my question is: should I give preferential treatment to proxy users on my ecommerce site? If yes, how should this be done? If not, why not?
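    One common middle ground, sketched below with hypothetical names, is to key the rate limiter on the end-client address from X-Forwarded-For when the direct peer is a proxy you recognize, and on the peer address otherwise; the header is spoofable, so it should only be trusted from known proxies:

      import time
      from collections import defaultdict

      KNOWN_PROXIES = {"203.0.113.7"}   # example addresses; maintain your own list
      WINDOW, LIMIT = 60.0, 120         # at most 120 requests per minute per key

      hits = defaultdict(list)

      def rate_limit_key(peer_ip, forwarded_for):
          # Only honour X-Forwarded-For when the direct peer is a trusted proxy;
          # otherwise an attacker could spoof the header to dodge the limiter.
          if peer_ip in KNOWN_PROXIES and forwarded_for:
              return forwarded_for.split(",")[0].strip()  # original client
          return peer_ip

      def allow(peer_ip, forwarded_for=None):
          key = rate_limit_key(peer_ip, forwarded_for)
          now = time.time()
          hits[key] = [t for t in hits[key] if now - t < WINDOW]
          if len(hits[key]) >= LIMIT:
              return False
          hits[key].append(now)
          return True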

    Read the article

  • How does a government development shop transition to developing open source solutions?

    - by Rob Oesch
    Our shop has identified several reasons why releasing our software solutions to the open source community would be a good idea. However, there are several reasons from a business standpoint why converting our shop to open source would be questioned. I need help from anyone out there who has gone through this transition, or is in the process; specifically a government entity. About our shop:
      - We develop and support web and client applications for the local law enforcement community.
      - We are NOT a private company, rather a public sector entity.
    Some questions that tend to come about when we have this discussion are:
      - We're a government agency, so isn't our code already public?
      - How do we protect ourselves from being 'hacked' if someone looks into our code? (There are obvious answers to this question, like making sure you don't hard-code passwords. However, the discussion needs to consider an audience of executives who are very security-conscious.)

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up
    I've been a programmer for quite some time now, but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either:
      1. kill -9 a process (bad)
      2. spontaneously pull the power plug on a running computer or server (worse)
    However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern
    Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt; who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through the Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope
    My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is I've had to repair some MySQL tables, but I don't seem to have lost any data. But, if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and prevent against it.

    My Question(s)
    Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most
      - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system. (It seems instant, but can someone slow it down for me?)
      - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
      - Descriptions of the measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur. (To comfort me.)
      - Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
      - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.
    Thanks so much!
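    To make the flushing point concrete, this is the standard POSIX pattern by which an application bounds that few-seconds window (a generic sketch, not specific to the stack above):

      import os

      def durable_write(path, data):
          # A power pull loses whatever is still sitting in buffers; fsync
          # bounds that window by forcing the data out before returning.
          with open(path, "wb") as f:
              f.write(data)         # lands in the user-space buffer
              f.flush()             # user-space buffer -> kernel page cache
              os.fsync(f.fileno())  # kernel page cache -> the disk itself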

    Read the article

  • chmod 700 and htaccess deny from all enough?

    - by John Jenkins
    I would like to protect a public directory from public view. None of the files will ever be viewed online. I chmoded the directory to 700 and created an htaccess file that has "deny from all" inside it. Is this enough security or can a hacker still gain access to the files? I know some people will say that hackers can get into anything, but I just want to make sure that there isn't anything else I can do to make it harder to hack. Reply: I am asking if chmod 700 and deny from all is enough security alone to prevent hackers from getting my files. Thanks.
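    For reference, the two measures being asked about look like this (the path is illustrative; the .htaccess lines are Apache 2.2 syntax, which Apache 2.4 spells "Require all denied"):

      # Owner-only permissions; note this also locks out the web server
      # itself unless it owns the directory.
      chmod 700 /var/www/example/private

      # .htaccess inside the directory (Apache 2.2 syntax)
      Order deny,allow
      Deny from all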

    Read the article

  • T9 patented while QWERTY is not?

    - by Marco W.
    I've seen that there are lots of custom keyboards for Android, but all are QWERTY keyboards. I couldn't find any keyboard with a T9 layout. Is this because T9 is patented and the QWERTY layout is not? So if I made a T9 keyboard, would I have to pay patent fees? What exactly does the patent protect when you look at T9? Only the layout? Or the prediction engine? The problem is, this way of predicting words is the only one that makes sense for this layout...

    Read the article

  • How does requiring users to periodically change their passwords improve security? [closed]

    - by Bob Kaufman
    I've had the same password for some sites for years with no regrets. Meanwhile, at work, I find myself being forced to change passwords every two to three months. My thinking is that if a password gets compromised, requiring that I change it several weeks out isn't going to protect me or the network very much. Moreover, I find that by being required to change passwords frequently, I degenerate into a predictable password pattern (e.g., BearsFan111, BearsFan222, ...), which results in easier-to-remember and easier-to-guess passwords. Is there a sound argument for requiring that passwords be changed periodically?

    Read the article

  • Why is it better to use external JavaScript or libraries, and is it preferred to use jQuery, meaning more security?

    - by shareef
    I read the article Unobtrusive JavaScript with jQuery and I noticed these points on slide 11:
      - some companies strip JavaScript at the firewall
      - some run the NoScript Firefox extension to protect themselves from common XSS and CSRF attacks
      - many mobile devices ignore JavaScript entirely
      - screen readers do execute JavaScript, but accessibility issues mean you may not want them to
    I did not understand the fourth point. What does it mean? I need your comments and responses on these points. Is not using JavaScript and switching to libraries like jQuery worth it? UPDATE 1: What is the meaning of "Unobtrusive JavaScript with jQuery"? And yes, the article does not say we should use libraries, but that we should have them in external files; that is the reason I asked my question.

    Read the article

  • Private domain purchase with paypal: how to prevent fraud?

    - by whamsicore
    I am finally going to buy a domain I have been looking at. The domain owner wants me to give him my GoDaddy account information and send him the payment via PayPal gift, so that there will be no extra charges. Should this cause suspicion? Does PayPal offer any kind of fraud protection? What is the best way to protect myself from fraud in this situation, without the need for escrow services such as escrow.com? Any advice welcome. Thanks.

    Read the article

  • Is HTML5 more secure to develop for than Silverlight?

    - by King Chan
    I'm learning Silverlight, and I know that if I master it, I can apply the same concepts to WPF, which means I can do either web or desktop development pretty easily. But I've read articles and followed the discussion online, and I understand HTML5 is gaining traction for being cross-platform, and a lot of people seem to be moving to HTML5. From my understanding, any HTML5 application would be built with HTML and JavaScript (or Flash). But is it secure? It seems like anyone can easily use their browser's "view source" option and grab your code. Is this something I should be worried about, or is there a way to protect against it?

    Read the article

  • Where to put git "remote" repo on purely local git setup?

    - by Mittenchops
    I overwrote and lost some important scripts and would like to set up version control to protect my stuff. I've used git before and am familiar with the commands, but I don't understand where I would put my "remote" repository for an install set up on my own machine: the place I push to and pull from. I don't intend to share or access it remotely; I just want a little source control for my files. I followed the instructions here for setting up my staging area: http://stackoverflow.com/questions/4249974/personal-git-repository But where do I put the git "remote" repo in a purely local git setup? How does the workflow work then? In the command above: git remote add origin ssh://myserver.com:/var/repos/my_repo.git — where should I put/name something like this? If I have multiple different projects, would they go in different places? I'm running 11.10.
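    The usual pattern for a purely local setup is a bare repository on the same machine, used as the "remote" (a sketch; the ~/repos path is just an example):

      # Create a bare repo (no working tree) to act as the local "remote"
      git init --bare ~/repos/my_repo.git

      # In the project directory, point origin at the local path
      git remote add origin ~/repos/my_repo.git
      git push -u origin master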

    Read the article

  • Non-mathematical Project Euler (or similar)?

    - by Juha Untinen
    I checked the post (Where can I find programming puzzles and challenges?) where there are a lot of programming challenges and such, but after checking several of them, they all seem to be about algorithms and mathematics. Is there a similar site for purely logic/functionality-based challenges? For example:
      - Retrieve data using a web service
      - Generate output X from a CSV file
      - Protect this code against SQL injection
      - Make this code more secure
      - What is wrong with this code (where the error is in logic, not syntax)
      - Make this loop more efficient
    Does a challenge site like that exist? Especially one that provides hints and/or correct solutions. That would be a very helpful learning site.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of Storagemojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption.

    The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations. Other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the uberblock of ZFS, for example, is pointing to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on them and the device does block-level dedup internally, the device may just deduplicate your redundant metadata into a single block on the non-volatile storage. When this block is corrupted, you essentially have three corrupted copies. Three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know if a block is important or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part, and so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. A metadata block is, to its inner mechanism, nothing different from a normal data block, because there is no way to tell that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the Sandforce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. It's just that for most implementations you have to activate it by command, whereas certain devices do it by default or by design and you don't know about it. However, I'm not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys if they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data, storing multiple copies of the data in the pool to increase redundancy even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing: you just spend your disk quota on the LUNs in the SAN and make your disk admin happy because of the good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks on different disks when there is more than one disk. Yet another reason why you should spend some extra thought when putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.
    However, I have one problem with the article and its specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, this isn't the use case. Most implementations of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter if one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of doing dedup and where you use LUNs for your pool. However, as mentioned before, on those devices it's a user-made decision to do so, and so it's less probable that you are deduplicating your redundancies. Other filesystems lacking a capability similar to hybrid storage pools are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device. In the end, however, Robin is correct: it's yet another reason why protecting your data by creating redundancies, dispersing it across several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
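    For readers who want the ditto-block knob mentioned above, it is the standard ZFS copies property (pool and dataset names illustrative); zdb is the same inspection tool the article refers to:

      # Keep two copies of every block in this dataset, even on a single-disk pool
      zfs set copies=2 tank/important

      # Inspect the uberblock(s), as mentioned above
      zdb -uu tank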

    Read the article
