Search Results

Search found 68155 results on 2727 pages for 'data security'.


  • What is the best policy for allowing clients to change email?

    - by Steve Konves
    We are developing a web application with a fairly standard registration process which requires a client/user to verify their email address before they are allowed to use the site. The site also allows users to change their email address after verification (with a re-type email field as well). What are the pros and cons of having the user re-verify their email? Is this even needed? EDIT: Summary of the answers and comments below: over-verification annoys people, so don't use it unless it's critical; use a "re-type email" field to prevent typos; beware of overwriting known-good data with merely potentially-good data; send email to the old address for notification and to the new address for verification; don't assume that the user still has access to the old email; identify the impact of an incorrect email if the account is compromised.
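
    A minimal sketch of the flow the summary converges on, assuming hypothetical stand-ins for the mailer and user store (send_mail, users, pending): the old, verified address stays authoritative until the new one is confirmed.

        import secrets

        pending = {}                                   # token -> (user_id, new_email)
        users = {1: {"email": "old@example.com"}}

        def send_mail(to, body):                       # stand-in for a real mailer
            print(f"mail to {to}: {body}")

        def request_email_change(user_id, new_email):
            token = secrets.token_urlsafe(32)
            pending[token] = (user_id, new_email)
            # Verify via the NEW address; the OLD address only gets a notice.
            send_mail(new_email, f"Confirm your new address: /confirm?token={token}")
            send_mail(users[user_id]["email"],
                      "Your email is being changed. Not you? Contact support.")
            return token

        def confirm_email_change(token):
            user_id, new_email = pending.pop(token)    # KeyError -> invalid/expired token
            users[user_id]["email"] = new_email        # overwrite known-good data only now

        confirm_email_change(request_email_change(1, "new@example.com"))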

    Read the article

  • Oracle Vanquisher: A Data Center Optimization Adventure to Debut at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    Heat. Downtime. Site-wide outages. Legacy hardware. Security holes. These are all threats to your data center. What if you could vanquish them to simplify your IT and accelerate business innovation and growth? Find out how: play Oracle Vanquisher, a new data center optimization video game that will be showcased at Oracle OpenWorld (Hardware DEMOgrounds, Moscone South Hall). Playing Oracle Vanquisher, you'll be armed with a cool Oracle vacuum pack suit and a strategic IT roadmap. You'll thwart threats and optimize your data center to increase your company's stock price and boost your company's position. Of course, optimizing your data center is far more than a great game. For more information, visit the Oracle Optimized Data Center homepage or check out these targeted Oracle OpenWorld keynotes and sessions. Keynotes: Shift Complexity, with Oracle President Mark Hurd (Monday, October 1, 8:00 a.m. - 9:30 a.m., Moscone North, Hall D); Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized, with Oracle Executive Vice President John Fowler (Wednesday, October 3, 8:00 a.m. - 9:45 a.m., Moscone North, Hall D). Sessions: Oracle Linux, Oracle Optimized Solutions, Oracle Solaris, SPARC Servers, Storage, SPARC SuperCluster, Oracle VM Server, Virtualization, Desktop Virtualization.

    Read the article

  • How to restore missing calendar data from Lightning/Thunderbird

    - by dev9
    Today, out of nowhere, all my events and tasks disappeared from Thunderbird. However, I have a full backup of the .thunderbird folder. How can I restore my calendar data? I reverted these files to previous versions: /home/me/.thunderbird/xxx.default/calendar-data/local.sqlite /home/me/.thunderbird/xxx.default/prefs.js but I still cannot see any data in Thunderbird. What else should I do?
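
    If reverting local.sqlite alone does not bring the events back, it is worth confirming the restored database actually contains them before looking elsewhere. A small sketch, assuming the Lightning storage schema uses a cal_events table (check against your own backup; the path is the example from above):

        import sqlite3

        # Example path; substitute your own profile directory.
        db = sqlite3.connect("/home/me/.thunderbird/xxx.default/calendar-data/local.sqlite")
        tables = [r[0] for r in db.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        print(tables)  # look for cal_events / cal_todos
        if "cal_events" in tables:
            print("events:", db.execute("SELECT COUNT(*) FROM cal_events").fetchone()[0])
        db.close()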

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. A lost write occurs when an I/O subsystem acknowledges the completion of a block write even though the write did not actually occur in persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, a faulty host bus adapter, a disk controller failure, or a volume manager error. In the demo, such a data block lost write takes place on the primary database. When a primary-database lost-write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) stops and the standby signals an ORA-752 error to explicitly indicate that a primary lost write has occurred (preventing the corruption from spreading to the standby database). Links: MOS note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"; the demo; MAA Best Practices. Rene Kundersma
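
    As a toy illustration of the detection logic (not Oracle's implementation): redo carries the block version it expects, so when the primary has silently reverted to a stale block, the standby sees incoming redo whose expected version is older than the version it has already applied:

        # Toy model: integer block "versions" stand in for SCNs.
        standby_version = 42      # standby has applied all redo up to version 42

        redo_expected = 41        # primary read a stale block (the lost write)
        redo_new = 43

        if redo_expected < standby_version:
            # Data Guard's Redo Apply (MRP) stops at this point and raises ORA-752,
            # so the corruption never reaches the standby.
            print("lost write detected on primary (ORA-752); stopping Redo Apply")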

    Read the article

  • What to choose: API-based server or socket-based server for a data-driven application?

    - by Imdad
    I am working on a project which has a desktop application for Mac (Cocoa), a native application for iPhone, and another native application for iPad. All the applications do almost the same thing; they are data-driven applications. Every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which (socket) technology would be good as a server-side solution for a data-driven application? I have heard a lot about Node.js. Please give your suggestions.
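
    Node.js is a popular choice, but the win comes from the model rather than the platform: the server pushes changes over long-lived connections instead of every client polling. A minimal push-style sketch using Python's asyncio, purely illustrative (host, port, and the fake change feed are made up):

        import asyncio, itertools

        clients = set()

        async def handle(reader, writer):
            clients.add(writer)
            try:
                await reader.read()                  # hold the connection open until EOF
            finally:
                clients.discard(writer)

        async def broadcast(change: str):
            for w in list(clients):                  # push; clients never have to poll
                w.write((change + "\n").encode())
                await w.drain()

        async def fake_data_changes():
            for n in itertools.count():
                await asyncio.sleep(5)               # stand-in for real change events
                await broadcast(f"record {n} changed")

        async def main():
            server = await asyncio.start_server(handle, "127.0.0.1", 8888)
            async with server:
                await asyncio.gather(server.serve_forever(), fake_data_changes())

        asyncio.run(main())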

    Read the article

  • Oracle Enterprise Data Quality: A Leader in Customer Satisfaction

    - by Mala Narasimharajan
    It’s always good to hear feedback from practitioners – the ones in the trenches who have experienced both the good and the bad sides of enterprise software. Gartner recently released a report which surveyed 260 data quality professionals from around the world and found that most expressed considerable satisfaction with their data quality tool vendors. However, a couple of key findings stand out, including Datanomic (acquired by Oracle) leading the pack in terms of overall customer satisfaction among data quality tools. Read all about it right here: http://bit.ly/Ay45SG

    Read the article

  • Looking for a Lead SQL Developer with a passion for data

    - by simonsabin
    Data is a huge part of what we do, and I need someone with a passion for data to lead our SQL team. If you've got experience with SQL, want to lead a team working in an agile environment with aggressive CI processes, and have a passion for using technology to solve problems, then you are just the person I am looking for. The role is based in London, working for one of the top tech companies in Europe. Contact me through my blog or LinkedIn ( http://uk.linkedin.com/in/simonsabin...(read more)

    Read the article

  • An introduction to Oracle Retail Data Model with Claudio Cavacini

    - by user801960
    In this video, Claudio Cavacini of Oracle Retail explains the Oracle Retail Data Model, a solution that combines pre-built data mining, online analytical processing (OLAP), and dimensional models to deliver industry-specific metrics and insights that improve a retailer's bottom line. Claudio shares how the Oracle Retail Data Model (ORDM) delivers retailer and market insight quickly and efficiently, allowing retailers to take a truly multi-channel approach and, in turn, deliver an effective customer experience. The rapid implementation of ORDM results in predictable costs and timescales, giving retailers a higher return on investment. Please visit our website for further information on Oracle Retail Data Model.

    Read the article

  • Why not expose a primary key

    - by Angelo Neuschitzer
    In my education I have been told that it is a flawed idea to expose actual primary keys (not only DB keys, but all primary accessors) to the user. I always thought it was a security problem (because an attacker could attempt to read data that isn't their own). Since I have to check whether the user is allowed to access the data anyway, is there a different reason behind it? Also, as my users have to access the data anyway, I will need a public key for the outside world somewhere in between. Doesn't that public key have the same problems as the primary key?
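
    One common compromise, sketched here with a hypothetical in-memory store: keep the database primary key internal and expose a random, non-sequential public identifier. Unlike a raw primary key, it leaks no row counts or ordering and cannot be enumerated, though authorization checks remain mandatory either way:

        import secrets

        rows_by_pk = {1: "invoice #1 data"}          # internal primary keys
        pk_by_public_id = {}                         # opaque external identifiers

        def publish(pk):
            public_id = secrets.token_urlsafe(16)    # unguessable, reveals nothing
            pk_by_public_id[public_id] = pk
            return public_id

        def fetch(public_id, user_may_read):
            pk = pk_by_public_id[public_id]          # KeyError -> 404
            if not user_may_read(pk):                # authorization is still required
                raise PermissionError
            return rows_by_pk[pk]

        pid = publish(1)
        print(fetch(pid, user_may_read=lambda pk: True))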

    Read the article

  • Ubuntu took away permissions from my Data partition

    - by RobinJ
    The pangolin has struck again. The bug of the day is Ubuntu taking away my permissions on my Data partition (NTFS). One moment everything worked fine; the next moment I couldn't chmod anything anymore. chown throws no errors or warnings at all, but nothing changes either. chmod keeps saying "Operation not permitted". I've been messing around with /etc/fstab as suggested by other answers on Ask Ubuntu, but none of them seem to have the desired effect. This is my current line: UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=000,gid=46,permissions,users,auto,exec 0 0 For reference, this is the original one (right after the problem started occurring): UUID=25D7D681409A96B7 /media/Data ntfs defaults,umask=007,gid=46 0 0 What do I need to do to be the owner of my own hard drive again? I want to be able to just use chmod and chown (without sudo) without being told that some mysterious alien has taken over my Data partition. I can still read and write, but execution permissions seem to be the problem.

    Read the article

  • Create named criteria in EJB Data control

    - by shantala.sankeshwar
    This article gives detailed steps on creating named criteria in an EJB data control. Note that this feature is available in JDeveloper 11.1.2.0.0. Use case description: suppose we have defined an EJB entity object and we would like to filter it based on some criteria; this filtering can be achieved by creating named criteria in the EJB data control. Implementation steps: let us suppose that we have created a Java EE web application with entities from the Emp table. Create a session bean and generate a data control for it. Edit empFindAll in the DataControls.dcx file. Create a simple named criteria, deptno >= 20, by clicking the '+' icon. Refresh the data controls and create a new .jspx page. Drop EmpCriteria as an ADF Query Panel with Table. Run the page, click the search button, and we will see that the Emp table shows the filtered records.

    Read the article

  • Recover files from NTFS drive with bad sectors

    - by Martin
    A few nights ago I created a backup of my data on an external 500 GB NTFS USB hard drive. I then formatted my computer, reinstalled Ubuntu, and started transferring the data back from the external HDD. Unfortunately some files have become corrupted, and Ubuntu is unable to copy them over. The same issue happens if I log in using Windows 7. Disk Utility detects via SMART that there are "a few bad sectors". Some files are perfectly intact, but other files cannot be accessed (read, copied, ...) although they are displayed within Nautilus and show the correct file size. Is there anything I can do to recover this data? I have thought of using TestDisk, but that utility seems more useful for repairing lost partitions or deleted files. I have also thought of using ddrescue so I could at least have a low-level copy of the disk, but I am not sure what use to make of it in order to recover the data.
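
    For context, ddrescue's core idea is block-wise copying that skips unreadable regions rather than aborting. The sketch below applies the same idea to a single file, zero-filling bad blocks (paths and block size are examples):

        BLOCK = 4096

        def salvage(src_path, dst_path):
            """Copy src to dst block by block, zero-filling blocks that raise I/O errors."""
            bad = 0
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                offset = 0
                while True:
                    src.seek(offset)
                    try:
                        chunk = src.read(BLOCK)
                    except OSError:                  # unreadable sector: pad and move on
                        chunk = b"\x00" * BLOCK
                        bad += 1
                    if not chunk:                    # reads past EOF return b""
                        break
                    dst.write(chunk)
                    offset += len(chunk)
            return bad

        print("bad blocks:", salvage("/media/Data/photo.jpg", "/tmp/photo.jpg"))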

    Read the article

  • Suggest-a-Session for Oracle Develop 2010: Last chance to get your paper submitted.

    - by olaf.heimburger
    While working with Oracle technologies on customer projects, we all come across solutions and ideas that are worth sharing with a wider audience. If you missed the Call for Papers for Oracle OpenWorld and Oracle Develop, you still have a chance to get in. The Oracle Mix community provides a tool called Suggest-a-Session for submitting and voting on the sessions you would like to attend. My suggestions: if you pass by, do not forget to vote for my sessions. These are: Real-World Single Sign-On and ADF Security; The Personal Newsletter Generator: Implement Cool Applications with ADF Faces. Thank you for your support.

    Read the article

  • Warning about SSL certificate: am I under attack?

    - by Bunny Rabbit
    Lately I've been getting a lot of warnings about SSL certificates on my PC. Empathy keeps telling me that Facebook's certificate is self-signed and can't be trusted, and there are also occasional warnings in Google Chrome about security. I remember the last one saying that the page is secure but some of the resources the page is using are not from a secure connection, something like that. Is my PC hacked or under attack? How can I check that, and if so, how can I safeguard myself? PS: One thing that comes to mind is that I might be under an ARP poisoning/spoofing attack.
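
    One quick check, assuming Python is available on the machine: ask the standard library to verify the chain facebook.com actually presents against the system CA store. A self-signed certificate where a CA-signed one is expected is a classic man-in-the-middle symptom:

        import socket, ssl

        ctx = ssl.create_default_context()           # verifies against the system CA store
        with socket.create_connection(("www.facebook.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="www.facebook.com") as tls:
                print(tls.getpeercert()["issuer"])   # who really signed this cert?
        # An ssl.SSLCertVerificationError here on a normal network is a red flag.

    Running the same check from a different network (mobile tethering, for instance) and comparing the issuers helps separate a local ARP/proxy problem from a benign cause.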

    Read the article

  • How to implement a safe password history

    - by Lorenzo
    Passwords shouldn't be stored in plain text for obvious security reasons: you have to store hashes, and you should also generate the hashes carefully to avoid rainbow-table attacks. However, you usually also have a requirement to store the last n passwords and to enforce minimal complexity and a minimal change between the different passwords (to prevent the user from using a sequence like Password_1, Password_2, ..., Password_n). This would be trivial with plain-text passwords, but how can you do it while storing only hashes? In other words: how is it possible to implement a safe password-history mechanism?
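
    A sketch of the usual answer: at change time you hold the candidate's plaintext, so you can re-hash it with each historical record's salt and compare digests, never storing plaintext. Parameters here are illustrative; use your platform's vetted KDF in practice:

        import hashlib, hmac, os

        def make_record(password):
            salt = os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
            return salt, digest

        def in_history(candidate, history):
            # Re-hash the candidate with each old record's salt; no plaintext is kept.
            for salt, digest in history:
                probe = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
                if hmac.compare_digest(probe, digest):
                    return True
            return False

        history = [make_record(p) for p in ("Password_1", "Password_2")]
        assert in_history("Password_2", history)
        assert not in_history("Password_3", history)

    The "minimal change" rule can only be approximated this way (for example, by also hashing a few mechanical variants of the candidate, such as digit bumps, against the history), because the old plaintexts are gone.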

    Read the article

  • Oracle Named a Leader in Gartner MQ for MDM of Product Data Solutions

    - by Mala Narasimharajan
    Gartner recently named Oracle as a leader in the MQ report for MDM of Product Data Solutions, citing the following key points: a strong MDM portfolio covering multiple data domains, industries, and use cases; Oracle PDH can be a good fit for Oracle EBS customers and can form part of a multidomain solution; deep MDM of product data functionality; and evolving support for information stewardship. For more information on the report, visit Oracle's Analyst Relations blog at http://blog.us.oracle.com/dimdmar/. To learn more about Oracle's product information solutions for master data management, click here.

    Read the article

  • Is there a way to change the date format used when InfoPath saves the form data to XML?

    - by Robert
    I have an InfoPath form template that has some Date Picker controls in it, bound to elements in an XML data source. I know I can change the display format of the date by going into the Date Picker Properties and setting the date format. This format is only used for display purposes while the form is being filled out. When the form is saved as an XML file, the date is always stored in the format YYYY-MM-DD. Is there a way to change the date format that gets serialized to XML? I'm using InfoPath 2007.
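
    That storage format follows the XML Schema xs:date type, which mandates YYYY-MM-DD, so InfoPath itself will not change it. If a downstream consumer needs another format, one workaround is post-processing the saved XML; a sketch, with a hypothetical element name and path:

        import xml.etree.ElementTree as ET
        from datetime import datetime

        tree = ET.parse("form.xml")                      # saved InfoPath form (example path)
        for el in tree.iter():
            if el.tag.endswith("orderDate"):             # hypothetical date element
                d = datetime.strptime(el.text, "%Y-%m-%d")
                el.text = d.strftime("%m/%d/%Y")         # target display format
        tree.write("form-reformatted.xml")

    Note that rewriting the stored value will likely stop the form validating as xs:date if it is reopened in InfoPath, so this belongs in an export step, not in the saved form itself.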

    Read the article

  • How do I simplify terrain with tunnels or overhangs?

    - by KKlouzal
    I'm attempting to store vertex data in a quadtree with C++, such that far-away vertices can be combined to simplify the object and speed up rendering. This works well with a reasonably flat mesh, but what about terrain with overhangs or tunnels? How should I represent such a mesh in a quadtree? After the initial generation, each mesh is roughly 130,000 polygons, and about 300 of these meshes are lined up to create the surface of a planetary body. A fully generated planet is upwards of 10,000,000 polygons before applying any culling to the individual meshes, so this second optimization is vital for the project. The rest of my confusion centers on my inexperience with vertex data: How do I properly loop through the vertex data to group vertices into specific quads? How do I conclude from vertex data what a quad's maximum size should be? How many quads should the quadtree include?
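
    On the first question, the usual quadtree approach buckets vertices by their horizontal (x, z) position alone, which is exactly why tunnels and overhangs are awkward: a floor vertex and the tunnel roof above it fall into the same cell. A minimal bucketing sketch (illustrative only, not the poster's engine):

        from collections import defaultdict

        def bucket_vertices(vertices, cell_size):
            """Group (x, y, z) vertices into quadtree leaf cells keyed by (x, z)."""
            cells = defaultdict(list)
            for x, y, z in vertices:
                key = (int(x // cell_size), int(z // cell_size))  # y ignored: 2D partition
                cells[key].append((x, y, z))
            return cells

        # A floor vertex and a tunnel-roof vertex above it collide in one cell:
        print(bucket_vertices([(1.0, 0.0, 1.0), (1.0, 9.0, 1.0)], cell_size=4.0))

    Partitioning in y as well (an octree, or a quadtree per cube face for a planet) is the usual way out once the geometry stops being a pure heightfield.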

    Read the article

  • MODX based site has been compromised, and tagged by Google as malware

    - by JAG2007
    I'm the webmaster (I inherited the site from the developer) for a site called kenbrook.org. The site is currently being flagged as malware-infected by Google, which gives the following details: http://www.google.com/safebrowsing/diagnostic?site=kenbrook.org Sadly, this is the second time it has occurred. I originally posted the issue on Stack Overflow when it happened last year, shortly after I inherited the site. At the time the fix was a simple removal of a few lines of code from a .js file, but I never did discover or resolve the vulnerability. The site is built on MODX, with which neither I nor the original builder have any familiarity. I've tried to check for security updates from MODX, but updating that software has been a real pain as well. So, what's my next step (or steps) to getting this whole issue resolved?
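
    Until the underlying vulnerability is found, one practical first step is sweeping the document root for recently modified files containing the injection patterns typical of this kind of compromise. A rough sketch; the docroot path and patterns are only examples:

        import os, re, time

        SUSPECT = re.compile(rb"eval\(base64_decode|document\.write\(unescape|<iframe")
        recent = time.time() - 7 * 86400             # touched within the last week

        for root, _dirs, files in os.walk("/var/www/kenbrook"):   # example docroot
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) > recent:
                    with open(path, "rb") as f:
                        if SUSPECT.search(f.read()):
                            print("suspicious:", path)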

    Read the article

  • OData Query Option $top Forces Data To Be Sorted By Primary Key

    This post shows a simple WCF Data Service (formerly known as ADO.NET Data Services) project that retrieves data using the Reflection Provider for accessing data. It goes on to show that using $top...

    Read the article

  • What data counters / meters are available?

    - by Santosh
    Actually I have a wireless 3G modem that works well on Windows-based operating systems; its interface software is Windows-centric. It can still connect to the internet on Ubuntu or other Linux-based operating systems, but it won't show the data counter (the interface which shows how much data has been transferred, and at what speed). If I continue to surf the internet on Linux, I won't have any idea how much data has been used, and it could become heavy on my pocket. So I just want software that lets me know how much data has been transferred; if there is a limiter that warns or disconnects me when I reach a predefined number of MBs, even better. Please let me know if there is any software or script or something like that already out there.
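
    The kernel already keeps per-interface byte counters in /proc/net/dev, so a small script can serve as the missing meter. A sketch, assuming the 3G modem shows up as ppp0 (check your interface name):

        def bytes_for(interface="ppp0"):
            """Return (received, transmitted) byte counts from /proc/net/dev."""
            with open("/proc/net/dev") as f:
                for line in f:
                    if line.strip().startswith(interface + ":"):
                        fields = line.split(":")[1].split()
                        return int(fields[0]), int(fields[8])   # rx bytes, tx bytes
            raise ValueError(f"interface {interface} not found")

        rx, tx = bytes_for()
        print(f"down {rx / 1e6:.1f} MB, up {tx / 1e6:.1f} MB since the interface came up")

    The counters reset when the interface goes down, so a real meter would persist totals between sessions; existing tools such as vnstat already do exactly this.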

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually works, and that is why we have the Notes from the Field series, which is extremely popular with developers and DBAs. Let us talk about an interesting problem: how to figure out what has changed in a deleted database. Well, you may think I am just throwing words around, but in reality these kinds of problems make a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on the same subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read Brian's experience in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation: we can't retrieve the information using the Schema Changes History report, but it's still available.

    Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
    You can even determine who dropped the database (loginame is captured). The key is to get to the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Ask the Readers: How Do You Browse Securely Away From Home?

    - by Jason Fitzpatrick
    When you're browsing away from home, be it on your smartphone, tablet, or laptop, how do you keep your browsing sessions secure? This week we're interested in hearing all about your mobile security tips and tricks. When you're out and about you often, out of necessity or convenience, need to connect to open Wi-Fi hotspots and otherwise put your data out there in ways that you don't when you're at home. This week we want to hear about your tips, tricks, and applications for keeping your data secure and private when you're away from your home network. Sound off in the comments with your tips and then check back on Friday for the What You Said roundup.

    Read the article

  • What are the downsides of leaving automation tags in production code?

    - by joshin4colours
    I've been setting up debug tags for automated testing of a GWT-based web application. This involves turning on custom debug ID tags/attributes for elements in the source of the app. It's a non-trivial task, particularly for larger, more complex web applications. Recently there's been some discussion of whether enabling such debug IDs across the board is a good idea. Currently the debug IDs are turned on only on the development and testing servers, not in production. Points have been raised that enabling debug IDs causes performance to take a hit, and that debug IDs in production may lead to security issues. What are the benefits of doing this? Are there any significant risks to turning on debug tags in production code?
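
    The common middle ground is to make tagging a deployment switch so production pays the cost only when explicitly enabled; GWT gates its own ensureDebugId mechanism per module in a similar spirit. A language-neutral sketch of the pattern (environment variable and attribute name invented for illustration):

        import os

        DEBUG_IDS = os.environ.get("ENABLE_DEBUG_IDS") == "1"   # off by default in production

        def widget_attrs(debug_id, **attrs):
            """Attach an automation hook only when the deployment asks for it."""
            if DEBUG_IDS:
                attrs["data-debug-id"] = debug_id    # test hooks exist only in dev/test builds
            return attrs

        print(widget_attrs("login-button", type="submit"))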

    Read the article
