Search Results

Search found 68155 results on 2727 pages for 'data security'.


  • Best way to Store Passwords, User information/Profile data and Photo/Video albums for a social website

    - by Nick
    I need some help figuring out the best way to store passwords, user information/profile data, and photo/video albums for a social website. For photos/videos, that means protecting the actual files, and possibly even encrypting the IDs in the photo/video URLs so other users cannot guess them. I'm creating a site like MySpace and writing the requirements documents, but I am unsure how to specify the security requirements for the database. Two things: 1) protect the data from outside users, and 2) protect all of it from employees being able to access this info. For #2, the additional question is: if we encrypt the user info and passwords so even the system admins cannot get in, how can we retrieve the user data tomorrow if someone flags a user's account as spam and an admin needs to check it out, or if law enforcement wants info on a user? Thanks.
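
    For the password part specifically, the usual starting point is to store only salted, one-way hashes. A minimal sketch in Python (standard library only; function names and parameters are illustrative, not from the question):

        import hashlib
        import hmac
        import os

        def hash_password(password: str, iterations: int = 200_000) -> str:
            # A unique random salt per user defeats precomputed (rainbow table) attacks.
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
            # Store everything needed to recompute the hash at login time.
            return "%d$%s$%s" % (iterations, salt.hex(), digest.hex())

        def verify_password(password: str, stored: str) -> bool:
            iterations, salt_hex, digest_hex = stored.split("$")
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                            bytes.fromhex(salt_hex), int(iterations))
            # Constant-time comparison avoids leaking information through timing.
            return hmac.compare_digest(candidate.hex(), digest_hex)

    Because the hashes are one-way, the admin-access and law-enforcement part of the question is really about who holds the decryption keys for the rest of the profile data (key escrow), not about the password store.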

    Read the article

  • Are there any frameworks for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view.
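
    Most frameworks of this kind reduce to a publish/subscribe layer on top of the data store. A bare-bones sketch of the idea in Python (class and method names are illustrative, not taken from any particular framework):

        from collections import defaultdict

        class UserDirectory:
            """Server-side data plus a change feed that clients can subscribe to."""

            def __init__(self):
                self._users = {}
                self._subscribers = defaultdict(list)   # topic -> list of callbacks

            def subscribe(self, topic, callback):
                self._subscribers[topic].append(callback)

            def _publish(self, topic, event):
                for callback in self._subscribers[topic]:
                    callback(event)   # in practice: push over a socket or long poll

            def add_user(self, user_id, name):
                self._users[user_id] = {"name": name, "permissions": set()}
                self._publish("users", {"op": "add", "id": user_id, "name": name})

            def set_permission(self, user_id, permission):
                self._users[user_id]["permissions"].add(permission)
                self._publish("users", {"op": "perm", "id": user_id, "permission": permission})

        # Each client's view simply re-renders whenever an event arrives.
        directory = UserDirectory()
        directory.subscribe("users", lambda event: print("refresh view:", event))
        directory.add_user(1, "alice")
        directory.set_permission(1, "admin")

    Real frameworks differ mainly in how the publish step delivers events over the network, e.g. long polling, WebSockets, or a message queue.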

    Read the article

  • Best way to migrate servers without losing any data and with no downtime(?)

    - by ina
    This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've lost MySQL data between the time when the new server goes up (i.e., all files transferred, system up and ready) and when I take the old server down (data is still being written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes. Is there a way for MySQL/root to easily transfer all data that was updated/inserted within a certain time frame?
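
    One low-tech way to answer the last question, sketched here in Python around mysqldump, on the assumption (not stated in the question) that the relevant tables carry an updated_at timestamp column; MySQL replication is the more robust route to a genuinely zero-downtime cutover:

        import subprocess

        def dump_rows_changed_since(host, user, password, database, tables, since):
            """Dump only rows touched after a given point in time, for replay on the new server."""
            cmd = [
                "mysqldump",
                "--host=" + host,
                "--user=" + user,
                "--password=" + password,
                "--single-transaction",               # consistent snapshot for InnoDB tables
                "--no-create-info",                   # data only; the schema is already on the new box
                "--where=updated_at >= '%s'" % since,
                database,
            ] + list(tables)
            with open("delta.sql", "wb") as out:
                subprocess.run(cmd, stdout=out, check=True)

        # Example (hypothetical credentials and tables); replay delta.sql on the new
        # server with the mysql client once DNS has been switched over:
        # dump_rows_changed_since("old-db.example.com", "root", "secret",
        #                         "appdb", ["users", "posts"], "2010-05-01 00:00:00")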

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted. I'd like to ask a question about it on Stack Overflow. To do that, I'd like to write the data.frame out in a form that I can paste into SO so that somebody else can easily run it and get it back into a data.frame object again. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it directly here?

    Read the article

  • Is there a difference between transient properties defined in the data model and in the custom subclass?

    - by mystify
    I was reading that setting the value of a transient property always results in marking the managed object as "dirty". However, what I don't get is this: if I make a subclass of NSManagedObject and use some extra properties which I don't need to be persisted, how does Core Data know about them, and how can it mark the object as dirty when I access them? Again, they're not defined in the data model, so Core Data has no really good hint that they are there. Or does Core Data use some kind of introspection to analyze my custom class and figure out what properties I have in there?

    Read the article

  • C# or Windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the party and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? I.e., managed data types where you get saving, data management, deleting, and searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • Mass data storage with SQL Server

    - by Leo
    We need to manage 10,000 GPS devices. Each GPS device uploads a GPS record every 30 seconds, and these records need to be stored in the database (MS SQL Server 2005). Daily record count per device: 24 * 60 * 2 = 2,880. Daily record count for 10,000 devices: 10,000 * 2,880 = 28,800,000. Each GPS record is approximately 160 bytes, so the amount of data per day is 28,800,000 * 160 = 4.29 GB. We need to hold at least 3 months of GPS data in the database. My questions are: 1) Can SQL Server 2005 support storing such a large amount of data? 2) How should the data tables be planned? (All GPS data in one table? A table per day? A table per GPS device?) The GPS record: GPSID varchar(21), RecvTime datetime, GPSTime datetime, IsValid bit, IsNavi bit, Lng float, Lat float, Alt float, Spd smallint, Head smallint, PulseValue bigint, Oil float, TSW1 bigint, TSW1Mask bigint, TSW2 bigint, TSW2Mask, BSW bigint, StateText varchar(200), PosText varchar(200), UploadType tinyint
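
    As a quick back-of-the-envelope check of the sizing, the question's arithmetic extended to the retention window (Python; the 90-day figure is an assumed reading of "at least 3 months"):

        DEVICES = 10_000
        RECORDS_PER_DEVICE_PER_DAY = 24 * 60 * 2      # one record every 30 seconds
        RECORD_SIZE_BYTES = 160
        RETENTION_DAYS = 90                           # assumed for "at least 3 months"

        rows_per_day = DEVICES * RECORDS_PER_DEVICE_PER_DAY
        bytes_per_day = rows_per_day * RECORD_SIZE_BYTES
        total_rows = rows_per_day * RETENTION_DAYS
        total_bytes = bytes_per_day * RETENTION_DAYS

        print(f"rows/day:  {rows_per_day:,}")                   # 28,800,000
        print(f"GB/day:    {bytes_per_day / 1024**3:.2f}")      # ~4.29
        print(f"rows kept: {total_rows:,}")                     # 2,592,000,000
        print(f"GB kept:   {total_bytes / 1024**3:.0f}")        # ~386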

    Read the article

  • What is the most efficient way to use Core Data?

    - by Eric
    I'm developing an iPad application using Core Data, and was hoping someone could clarify something about Core Data. Right now, I populate my table by making a fetch request for all of my data in viewDidLoad. I'd rather make individual fetch requests in my tableView:cellForRowAtIndexPath:. Can anyone tell me which is more efficient, and why? In other words, is it much less efficient to make lots of small requests as opposed to one big request?

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: User installs app v.1.0, adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v.11.0 and grabs a copy from the App Store. Assuming that the app has 11 .xcdatamodel versions inside, where ***11.xcdatamodel is the current one, what would happen now, since the user's persistent store is ages old? Would the migration happen 10 times, step by step through every migration iteration? Or does the actual migration of data (let's assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v.1.0 to v.11.0?

    Read the article

  • Iterating over a large data set in a long-running Python process - memory issues?

    - by user1094786
    I am working on a long-running Python program (part of it is a Flask API, and the other part a realtime data fetcher). Both of my long-running processes iterate quite often (the API one might even do so hundreds of times a second) over large data sets (second-by-second observations of certain economic series, for example 1-5 MB worth of data or even more). They also interpolate, compare, and do calculations between series, etc. What techniques can I practice, for the sake of keeping my processes alive, when iterating over / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!
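
    One technique that usually matters more than calling gc.collect() by hand is keeping the working set small by streaming observations through generators rather than materialising whole series as lists. A sketch (the file format and field names are made up for illustration):

        import gc
        from collections import deque
        from typing import Iterable, Iterator, Tuple

        def observations(path: str) -> Iterator[Tuple[str, float]]:
            """Yield (timestamp, value) pairs one at a time instead of loading the whole file."""
            with open(path) as f:
                for line in f:
                    ts, value = line.rstrip("\n").split(",")
                    yield ts, float(value)

        def rolling_mean(values: Iterable[float], window: int = 60) -> Iterator[float]:
            """Streaming computation: memory use is bounded by the window, not by the series."""
            buf = deque(maxlen=window)
            for v in values:
                buf.append(v)
                yield sum(buf) / len(buf)

        # Usage sketch (assumes a hypothetical two-column CSV "series.csv"):
        #     for avg in rolling_mean(v for _, v in observations("series.csv")):
        #         ...  # interpolate / compare here; nothing holds the full series in memory
        #
        # gc.collect() can still help after dropping large temporary structures, but it
        # cannot free objects that are still referenced somewhere in the process.
        gc.collect()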

    Read the article

  • SQL SERVER – Difference Between GRANT and WITH GRANT

    - by pinaldave
    This was a very interesting question asked of me recently during my session at TechMela Nepal: what is the difference between GRANT and WITH GRANT when giving permissions to a user? Let us first see the syntax for both.
    GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username;
        GO
    WITH GRANT:
        USE master;
        GRANT VIEW ANY DATABASE TO username WITH GRANT OPTION;
        GO
    The difference between the two options is very simple. With GRANT alone, username cannot grant the same permission to other users. With the WITH GRANT option, username is able to pass the permission it has received on to other users. This is the very basic definition of the subject. I would like to request my readers to come up with a working script to prove this scenario. You can submit your script to me by email (pinal 'at' sqlauthority.com) or in the comment field. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Oracle Database Firewall 5.0 is now available for download

    - by Lajos Sárecz
    On May 20, 2010 we announced the acquisition of Secerno, the company developing a database firewall solution. Since then relatively little has been heard about the product; here in Hungary the only talk about it was given by Stuart Sharp at the autumn ITBN conference, in barely half an hour. Moreover, since the acquisition the product could not even be purchased, because the development work following the merge was not yet finished. Since January 11, however, the Oracle Database Firewall 5.0 installer can be downloaded from the Oracle eDelivery site, within the Oracle Database Product Pack, for the Linux x86 platform. Database Firewall can be regarded as the first line of database defence. It monitors database activity on the network in real time. With its SQL language analyzer it can detect, with remarkable precision, external and internal attacks and transactions executed without authorization or with malicious intent. The sophistication of the SQL analyzer allows filtering with close to 100% accuracy and reliability, which is extremely important: it is not enough to filter out every attacking transaction, it is just as important not to filter out a single transaction belonging to normal business operation, since that could also cause serious business damage. Anyone who registers for and attends our Oracle Security Summit event on January 27 can learn more about the database firewall; according to plan Stuart Sharp will present again, but this time, in a full hour, he can share far more details with Hungarian customers and partners. Before the Database Firewall talk, incidentally, I will give a roughly half-hour overview of Oracle Database security solutions.

    Read the article

  • Booby Traps and Locked-in Kids: An Interview with a Safecracker

    - by Jason Fitzpatrick
    While most of our articles focus on security of the digital sort, this interview with a professional safecracker is an interesting look at the physical side of securing your goods. As part of their Interviews with People Who Have Interesting or Unusual Jobs series over at McSweeney's, they interviewed Ken Doyle, a locksmithing and safecracking veteran with 30 years of industry experience. The interview is both entertaining and an interesting read. One of the more unusual aspects of safecracking he highlights: Q: Do you ever look inside? A: I NEVER look. It's none of my business. Involving yourself in people's private affairs can lead to being subpoenaed in a lawsuit or criminal trial. Besides, I'd prefer not knowing about a client's drug stash, personal porn, or belly button lint collection. When I'm done I gather my tools and walk to the truck to write my invoice. Sometimes I'm out of the room before they open it. I don't want to be nearby if there is a booby trap. Q: Why would there be a booby trap? A: The safe owner intentionally uses trip mechanisms, explosives or tear gas devices to "deter" unauthorized entry into his safe. It's pretty stupid because I have yet to see any signs warning a would-be culprit about the danger.

    Read the article

  • Is knowledge of hacking mechanisms required for an MMO?

    - by Gabe
    Say I was planning, in the future (not now! There is a lot I need to learn first), to participate in a group project that was going to make a massively multiplayer online game (MMO), and my job would be the networking portion. I'm not that familiar with network programming (I've read a very basic book on PHP and MySQL, and I messed around a bit with WAMP). In the course of my studying PHP and MySQL, should I look into hacking? Hacking as in port scanning, router hacking, etc. In MMOs people are always trying to cheat, bots and such, but the worst scenario would be having someone hack the databases. This is just my conception of it, I really don't know. I do, however, understand networking fairly well, like subnetting/ports/IPs (local/global), etc. In your professional opinion (if you understand the topic, enlighten me), should I learn about these things in order to counter the possibility of this happening? Also, out of the things I mentioned (port scanning, router hacking), is there anything else that pertains to hacking that I should look into? I'm not too familiar with the malicious/security aspects of networking. And a note: I'm not some kid trying to learn how to hack. I just want to learn as much as possible before I go to college, and I really need to know if I need to study this or not.

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said: You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app. Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?), and while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Implications of automatically "open" third-party domain aliasing to one of my subdomains

    - by Giovanni
    I have a domain, let's call it www.mydomain.com, where I have a portal with an active community of users. In this portal users cooperate, wiki-style, to build some "kind of software". These software applications can then be run by accessing "public.mydomain.com/softwarename". I then want to let my users run these applications from their own subdomains. I know I can do that by automatically modifying the .htaccess file; this is not a problem. I want to let these users create DNS aliases so they can access one specific subdomain. So if a user "pippo" who owns "www.pippo.com" wants to run the software HelloWorld from his own subdomain, he has to:
    1. Register on my site.
    2. Create his own subdomain on his own site, run.pippo.com.
    3. From his DNS control panel, create a CNAME record "run.pippo.com" pointing to "public.mydomain.com".
    4. Type http://run.pippo.com/HelloWorld in a browser.
    When the software (which physically runs on my server) is called, it first checks that the originating domain is a trusted one; I don't do any other kind of check that restricts software execution. From an SEO perspective, I care about Google indexing of www.mydomain.com but I don't care about indexing of public.mydomain.com. What are the possible security implications of doing this for my site? Is there a better way to do this, or existing software that already does this that I can use?
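
    The "trusted originating domain" check described above usually amounts to comparing the Host header of the incoming request against the aliases users have registered. A minimal sketch of the idea in Python (the question's stack is PHP/.htaccess, so this is only an illustration; the names and domains are made up):

        # Hypothetical registry populated when a user registers an alias on the portal.
        TRUSTED_ALIASES = {
            "run.pippo.com": "pippo",        # alias domain -> portal account
            "apps.example.org": "someuser",
        }

        def resolve_caller(host_header: str):
            """Map the Host header of an incoming request to a registered account.

            Returns the account name, or None if the alias was never registered
            (in which case the request should be refused)."""
            host = host_header.split(":")[0].strip().lower()   # drop any :port suffix
            return TRUSTED_ALIASES.get(host)

        assert resolve_caller("run.pippo.com") == "pippo"
        assert resolve_caller("Run.Pippo.com:80") == "pippo"
        assert resolve_caller("evil.example.com") is None

    Note that the Host header is entirely under the client's control, so a check like this only tells you which registered alias was used, not who is actually calling.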

    Read the article

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, for some requests I think this is needed, because otherwise tokens can be stolen and, before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however this is difficult to implement as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep it simple for callers to access the service. So what I'm thinking is to have a MAC calculated on:
    - the URL, possibly minus the query string
    - the verb
    - the request IP (potentially a barrier on some mobile devices, though)
    - the UTC date and time when the client issues the request
    For the last one I would have the client send that string in a request header, of course, and I can use it to decide whether the request is "fresh" enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party from later using a captured request as a template for different requests. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
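
    A sketch of what the verification side of such a signature could look like, in Python rather than the poster's ASP.NET stack (the canonical-string layout, timestamp format, and freshness window are illustrative choices, not part of the question):

        import datetime
        import hashlib
        import hmac

        def canonical_string(verb, path, client_ip, timestamp):
            # The exact layout is a convention shared by client and server;
            # this ordering and separator are illustrative choices.
            return "\n".join([verb.upper(), path, client_ip, timestamp])

        def sign(secret_key, verb, path, client_ip, timestamp):
            message = canonical_string(verb, path, client_ip, timestamp).encode("utf-8")
            return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

        def verify(secret_key, presented_mac, verb, path, client_ip, timestamp, max_skew=300):
            # Reject anything that is not "fresh" enough (timestamp sent in a request header).
            sent = datetime.datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ")
            age = abs((datetime.datetime.utcnow() - sent).total_seconds())
            if age > max_skew:
                return False
            expected = sign(secret_key, verb, path, client_ip, timestamp)
            return hmac.compare_digest(expected, presented_mac)

        key = b"per-caller secret issued out of band"
        ts = "2012-06-01T12:00:00Z"
        mac = sign(key, "GET", "/users/42", "203.0.113.7", ts)
        # verify(key, mac, "GET", "/users/42", "203.0.113.7", ts) -> False once ts is stale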

    Read the article

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and I don't have "x.php" on my site anymore. As happens with major public vulnerabilities, script kiddies are running tools that hit my site looking for "x.php" across the entire structure of the site - constantly, 24/7. This is wasted bandwidth, traffic, and load that I don't really need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks! EDIT: I should add that, as part of hardening my site, I've started using ZBBlock: "This php security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It then will send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, then they will be served up a permanently recurring 503 OVERLOAD message with a 24 hour timeout." But ZBBlock doesn't do exactly what I want to do; it does help with other spam/script/hack blocking, though.

    Read the article

  • Implicit OAuth2 endpoint vs. cookies

    - by Jamie
    I currently have an app which basically runs two halves of an API - a restful API for the web app, and a synchronisation API for the native clients (all over SSL). The web app is completely javascript based and is quite similar to the native clients anyway - except it currently does not work offline. What I'm hoping to do is merge the fragmented APIs into a single restful API. The web app currently authenticates by issuing a cookie to the client whereas the native clients work using a custom HMAC access token implementation. Obviously a public/private key scenario for a javascript app is a little pointless. I think the best solution would be to create an OAuth2 endpoint on the API (like Instagram, for example http://instagram.com/developer/authentication/) which is used by both the native apps and the web app. My question is, in terms of security how does an implicit OAuth2 flow compare (storing the access token in local storage) to "secure" cookies? Presumably although SSL solves man in the middle attacks, the user could theoretically grab the access token from local storage and copy it to another machine?

    Read the article

  • PHP safe_mode is a pain, looking for advice (Ubuntu 12.04 server, public webserver)

    - by user73279
    Maybe askUbuntu isn't the right forum, or I haven't provided the right search query, but I haven't seen anything in my searching of askUbuntu on PHP safe_mode. I get lots of Windows Safe Mode and Ubuntu Safe Mode results, but not PHP safe_mode. So I keep running into one issue after another regarding PHP safe_mode. (I write a lot of my own PHP code for various site maintenance tools and such.) I know safe_mode is going away in the next version of PHP, but I still see a fair amount of advice recommending that you leave it enabled. I've recently consolidated from 3 servers down to 1, and at least one of those old servers had safe_mode disabled without any issues. (The lack of issues may have simply been a matter of good luck.) None of the previous 3 gave me this much trouble, so I'm guessing some additional php.ini/PHP safe_mode setting was turned on for the new server. I primarily run WordPress for my websites, with a few MediaWiki sites sprinkled in. I am currently running into an issue using WordPress's auto-update feature, as it doesn't seem to be able to use fopen. WordPress is not relaying the actual error message to me, but since I was just able to update the plugins I'm using, this is a safe_mode problem. Long story short, the advice I'd seen to use safe_mode was all at least 2 years old. Do I really need it? If I disable PHP safe_mode, is there a good set of security measures I should implement - e.g. chmod 640 /var/www/..., add this to your .htaccess, etc. - to protect my server/sites? Thanks

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories in our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some of these directories/files aren't secret, we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
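
    A sketch of the "generate it with a script" idea, written in Python/Flask purely for illustration (the original context is PHP; the crawler markers and disallowed paths are placeholders, and user agents can be spoofed, so this only deters casual browsing):

        from flask import Flask, Response, abort, request

        app = Flask(__name__)

        CRAWLER_MARKERS = ("googlebot", "bingbot", "slurp")   # illustrative, not exhaustive

        ROBOTS_BODY = """User-agent: *
        Disallow: /private-reports/
        Disallow: /staging/
        """

        @app.route("/robots.txt")
        def robots():
            ua = request.headers.get("User-Agent", "").lower()
            if not any(marker in ua for marker in CRAWLER_MARKERS):
                abort(404)                     # ordinary browsers see "not found"
            return Response(ROBOTS_BODY, mimetype="text/plain")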

    Read the article

  • Is file permission secure when it is transferred from Ubuntu to Windows?

    - by Gaurav_Java
    I have a 9 GB text file which is encrypted and contains some confidential data. It lives on my system (Ubuntu) and on my external HDD (NTFS). The file gets updated daily and then re-encrypted, but it has to be shared among 2-3 (Windows) people. I set permissions so that no other person can even read this file (chmod 660). The file is too large to upload anywhere, and it gets updated on a daily basis, but it travels between Windows and Ubuntu, and I also keep a copy on my personal computer. Recently it was deleted by some other user on Windows. I just want to know how I can set permissions on that file so that it cannot be deleted from any other operating system. If someone deletes this file, the only copy I have left is a couple of days old, and it is only on my system. I have gone through this question and it says there is nothing; from this question I am not able to understand how I can protect it. Can I do anything to prevent this file from being deleted? How can I secure this file from getting deleted - any suggestions, software, or ideas? Maybe I sound silly or this is a stupid question; please don't close it. Thanks for any suggestion or solution.

    Read the article

  • How do I dissuade users from using the same password with similar systems?

    - by Resorath
    I'm building a web application that connects to other web services (using strictly anonymous binding, so no user passwords are being used). However, the web application maintains its own users itself, and is required to ask for certain details such as e-mail addresses and public linking information for these other web services (for example, a username but not a password). I want to deter or prevent users from reusing, in my application, passwords that they have also used in the applications I'm linking to. For example, if I ask for their e-mail and they provide their Gmail address, I don't want them using their Gmail password for my system. Another example would be reusing a password from a linked system for which they also gave me their username. One idea I had was to simply take the information they gave me, along with the password they are trying to store, and try to log in to these external web applications to test the password - then immediately unbind if I was successful and ask the user to use a different password. However, I suspect there is a host of moral and legal issues there. The reason this is a big deal to me is accountability. My application is simply not funded enough to invest properly in security around user passwords. A salted, hashed password in a public SQL-like database is as secure as it gets. So if passwords and linked usernames or e-mails get out, I don't want my userbase compromised.

    Read the article
