Search Results

Search found 41147 results on 1646 pages for 'database security'.

Page 33/1646 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • WCF Security in a Windows Service

    - by Alphonso
    I have a WCF service which can run as a console app and as a Windows service. I have recently copied the console app up to a W2K3 server with the following security settings:

        <wsHttpBinding>
          <binding name="ServiceBinding_Security" transactionFlow="true">
            <security mode="TransportWithMessageCredential">
              <message clientCredentialType="UserName" />
            </security>
          </binding>
        </wsHttpBinding>

        <serviceCredentials>
          <userNameAuthentication userNamePasswordValidationMode="Custom"
              customUserNamePasswordValidatorType="Common.CustomUserNameValidator, Common" />
        </serviceCredentials>

    Security works fine with no problems. I have exactly the same code running in a Windows service, and I get the following error when I try to call any of the methods from a client:

        System.ServiceModel.Security.MessageSecurityException was unhandled
        Message="An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail."
        Source="mscorlib"
        StackTrace:
          Server stack trace:
          at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.ProcessReply(Message reply, SecurityProtocolCorrelationState correlationState, TimeSpan timeout)
          ...... (lots of stack trace info - not very useful)
        InnerException: System.ServiceModel.FaultException
        Message="An error occurred when verifying security for the message."

    The exception tells me nothing. I'm assuming it has something to do with access to system resources from the Windows service. I've tried running it under the same account as the console app, but no luck. Does anyone have any ideas?
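
    A common cause of "An unsecured or incorrectly secured fault" with message security is clock skew between the client and the service host: WS-Security timestamps are only honoured within a small window (five minutes by default), so checking both machines' clocks is a cheap first step. To see the fault the service actually generates (the client never receives the detail), WCF tracing can be enabled in the service's app.config. A minimal sketch - the log path is an assumption:

        <system.diagnostics>
          <sources>
            <!-- Captures the service-side security fault; open the .svclog in SvcTraceViewer.exe -->
            <source name="System.ServiceModel" switchValue="Warning" propagateActivity="true">
              <listeners>
                <add name="xmlLog" type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="c:\logs\service-trace.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>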

    Read the article

  • "Cannot perform a differential backup for database "myDb", because a current database backup does no

    - by krimerd
    Hi there, I have what seems to be a pretty common problem when trying to take a differential backup. We have SQL Server 2008 Standard (64-bit) and we use LiteSpeed v5.0.2.0 to take our backups. We take full backups once a week and a differential on a daily basis. The problem is, every time I try to take a diff backup I get the following error:

        "VDI open failed due to requested abort. BACKUP DATABASE is terminating abnormally.
        Cannot perform a differential backup for database "myDb", because a current database backup
        does not exist. Perform a full database backup by reissuing BACKUP DATABASE, omitting the
        WITH DIFFERENTIAL option."

    The problem is that I know 100% I have a full backup, because I just double-checked. Only once was I able to take a diff backup, and that was when I took it immediately after a full backup. I have searched around and noticed that this is pretty common (although mostly with SQL 2005), and a solution a lot of people suggest, which I haven't tried yet, is to disable the SQL Server VSS Writer service. The problem with this is: #1, I think I might need this service since I am using third-party backup software; and #2, I am not sure exactly what the service does and don't want to disable it just like that. Have any of you ever experienced this problem, and how did you go about fixing it? Thank you,
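
    One way to see which backup SQL Server considers the current differential base is to inspect the backup history in msdb - a sketch, with the database name assumed:

        -- Most recent backups: type 'D' = full, 'I' = differential.
        -- A full backup taken by another tool (e.g. a VSS snapshot, is_snapshot = 1)
        -- resets the differential base unless it was taken WITH COPY_ONLY.
        SELECT TOP (10)
               backup_finish_date, type, is_copy_only, is_snapshot, database_backup_lsn
        FROM   msdb.dbo.backupset
        WHERE  database_name = N'myDb'
        ORDER  BY backup_finish_date DESC;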

    Read the article

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:

    Step 1: A user signs up and the user record is inserted into the database.
    Step 2: The user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, the user record has not yet been replicated to the slave, and it returns an error that the user does not exist.

    This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
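
    A common pattern for this is read-your-writes: either route queries that must see the user's own fresh data to the master, or make the slave session wait until replication has caught up to the position recorded at signup. A sketch against classic MySQL replication - the binlog file and position values are placeholders:

        -- On the master, immediately after the signup INSERT:
        SHOW MASTER STATUS;   -- note File and Position, e.g. mysql-bin.000123 / 4567

        -- On the slave, before running the login query:
        SELECT MASTER_POS_WAIT('mysql-bin.000123', 4567, 2);
        -- Blocks until the slave has applied events up to that master position
        -- (last argument is a timeout in seconds; returns -1 on timeout).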

    Read the article

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update: I found that doing a manual detach/reattach, using the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)", got around this issue. I'll just be creating a script to detach and reattach, but do the file copies manually. Any info on how to overcome the problems with the wizard would be helpful in the future.

    I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot copy if they have views into other databases that have not yet been copied to the target system. The generated log file says "failed with the following error: "Invalid object name"" in reference to the database in the view. If I first copy just the database referenced in the view, and then in a separate step copy the database containing the view, it is successful. However, some other databases have views into each other, so I can't just adjust the order in which the copies occur. Is there any way to ignore this error and just allow everything to copy?
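
    For reference, the detach/attach route from that MSDN article boils down to a few statements; because attaching does not re-validate view definitions, cross-database views come across regardless of copy order. A minimal sketch - the file paths are assumptions:

        USE master;
        EXEC sp_detach_db @dbname = N'MyDb';
        -- ...copy MyDb.mdf / MyDb_log.ldf to the target server, then run there:
        CREATE DATABASE MyDb
            ON (FILENAME = N'D:\Data\MyDb.mdf'),
               (FILENAME = N'D:\Data\MyDb_log.ldf')
            FOR ATTACH;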

    Read the article

  • I cannot connect to database from Drupal

    - by Patrick
    Hi, I've uploaded my Drupal website (and its database) to my new server. The database info is:

        host: localhost
        user: user
        pass: pass
        database name: database_name

    I've set the following line in the settings.php file:

        $db_url = 'mysqli://user:password@localhost/database_name';

    but what I get is this: "If you are the maintainer of this site, please check your database settings in the settings.php file and ensure that your hosting provider's database server is running. For more help, see the handbook, or contact your hosting provider." I believe the database is running - it is always up and I can access it with phpMyAdmin - so I think the problem is not there. The database and website file uploads were also successful, so I don't know what to do to fix this issue. It is MySQL on an IIS server. Thanks
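
    One quick check that takes Drupal out of the picture is a standalone mysqli test run on the server itself, with the same credentials (a sketch; delete the file afterwards):

        <?php
        // Standalone connectivity test, uploaded next to settings.php.
        $link = mysqli_connect('localhost', 'user', 'password', 'database_name');
        if (!$link) {
            die('Connect failed: ' . mysqli_connect_error());
        }
        echo 'Connected OK';
        // On Windows/IIS, if 'localhost' fails, trying '127.0.0.1' instead rules
        // out a name-resolution / named-pipe quirk.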

    Read the article

  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have configured our application to connect to a backup database in case the production database fails. If this happens, all transactions will happen on that backup database. Once the production DB server is running again, how do we make sure the changes made in the backup database are reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.
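
    If the failover is done with Oracle Data Guard and the broker (an assumption - the question doesn't say how the backup database is kept in sync), the usual 10g pattern is to reinstate the old primary as a standby, let redo from the backup (now primary) catch it up, and then switch roles back, so nothing written during the failover window is lost. A sketch in DGMGRL, with database names assumed:

        DGMGRL> CONNECT sys/password@backup_db
        DGMGRL> REINSTATE DATABASE 'prod_db';
        -- Old primary rejoins as a standby (requires Flashback Database to be enabled);
        -- wait for it to catch up, then swap the roles back with no data loss:
        DGMGRL> SWITCHOVER TO 'prod_db';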

    Read the article

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act: Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that I'll be talking about SQL Server disk IO - well, "disk" might not be relevant any more, because I'll also be talking about SSDs and Fusion-io. Basically I'll be talking about the underpinnings - making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions, just ping me an email: [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 - SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like Fusion-io. He will look at the effect fragmentation has and how to minimise its impact, and at the file structure of a database, covering the benefits that multiple files and file groups bring. We will also touch on database mirroring and the effect it has on throughput, and how to get a feeling for the throughput you should expect.

    19:15 - Break

    19:45 - Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful, get us used to some new terminology, and, best of all, show just how easy it is to start building your first CEP/ESP application.

    Read the article

  • Java SE 7u10: Enhanced Security Features and Support for New Platforms

    - by Tori Wieldt
    On December 11, 2012, Oracle released Java SE 7 Update 10 (Java SE 7u10). This release includes enhanced security features and support for new platforms.

    Enhanced Security Features. The JDK 7u10 release includes the following security enhancements:
    - The ability to disable any Java application from running in the browser. This mode can be set in the Java Control Panel or (on the Microsoft Windows platform only) using a command-line install argument.
    - The ability to select the desired level of security for unsigned applets, Java Web Start applications, and embedded JavaFX applications that run in a browser. Four levels of security are supported. This feature can be set in the Java Control Panel or (on the Microsoft Windows platform only) using a command-line install argument.
    - New dialogs to warn you when the JRE is insecure (either expired or below the security baseline) and needs to be updated.

    For more information, read Henrik Stahl's blog "Oracle JDK 7u10 Released with New Security" and the documentation "Setting the Level of Security for the Java Client".

    New Supported Platforms. Java SE 7 Update 10 (Java SE 7u10) supports Windows 8 Desktop Mode with IE 10, and Mac OS X 10.8. For more information, refer to the Oracle Certified System Configurations page.

    Download and Release Notes. Java SE 7u10 is available on the OTN Download Page. To learn more about the release, please see the Java SE 7u10 Release Notes. For information about the other Java releases last week, read the Java Source blog post "Java SE Updates".

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte-ordering) formats using the Cross-Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace feature?

    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires copying the datafiles from source to destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
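
    As a sketch of the mechanics (the tablespace, directory, path and platform names here are placeholder assumptions), a cross-endian transport looks roughly like this:

        -- On the source: verify the tablespace is self-contained, then freeze it
        EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('APPS_TS', TRUE);
        ALTER TABLESPACE apps_ts READ ONLY;

        -- Export the tablespace metadata
        expdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=apps_ts

        -- Convert endianness with RMAN on the source
        -- (CONVERT DATAFILE is the target-side variant)
        RMAN> CONVERT TABLESPACE apps_ts
              TO PLATFORM 'Linux IA (32-bit)' FORMAT '/stage/%U';

        -- On the target: copy the converted datafiles, then plug them in
        impdp system DIRECTORY=dp_dir DUMPFILE=tts.dmp
              TRANSPORT_DATAFILES='/u01/oradata/apps_ts01.dbf'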

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs work on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database - which all of us who have worked in application/database development have had to deal with in one form or another - is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live - I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it - I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task - how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila!

    But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state - more specifically, 4TB of critical data built up over the last 12 years of running your business - and if your quick hotfix happened to accidentally delete that 4TB of data, then you're "looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable - everything from "Nothing - I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process - any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!" - and everything in between, of course.

    Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing - struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs - different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue - in an attempt to simplify how we understand our customers and our offerings - we built a model. This is an attempt at classifying our customers into some sort of model, or "Customer Maturity Framework" as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful."

    We've taken this quote to heart - we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers, we know that users are far less far down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source-controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database - a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model, we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity - Baseline, Beginner, Intermediate and Advanced. As I say, this is a model - you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 - Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 - Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process

    Level D3 - Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 - Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source-controlling your database, then this is a natural next step. If you are already source-controlling your database, what about the next stage - continuous integration and automated release management?

    To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 simultaneous users. From this web application I've developed two verticalizations used by online users; in the future I expect hundreds of simultaneous users. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all of the existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded.

    My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are data-access performance and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips to implement this task efficiently? Intuitively I would say that two possible strategies are:

    - perform queries against the different sources in real time and aggregate the data on the fly;
    - create a repository that contains the union of the entities of interest and perform queries directly against the repository.
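
    To make the first strategy concrete, here is a sketch of a federated query in T-SQL (the linked-server, database and table names are hypothetical, and SQL Server itself is an assumption, since the app is ASP.NET):

        -- Aggregate on the fly across per-site databases that share a schema.
        SELECT 'SiteA' AS site, OrderId, Total FROM SiteA.dbo.Orders
        UNION ALL
        SELECT 'SiteB', OrderId, Total FROM [LINKSRV_B].SiteB.dbo.Orders
        UNION ALL
        SELECT 'SiteC', OrderId, Total FROM [LINKSRV_C].SiteC.dbo.Orders;

    The trade-off is roughly: federated queries are always fresh but are only as available and as fast as the slowest remote site, while a consolidated repository gives fast, fault-tolerant reads at the cost of staleness and an ETL/replication process to maintain.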

    Read the article

  • Oracle Exadata in the News - Part 1

    - by takashi.hitomi
    A roundup of Oracle Exadata coverage in the Japanese press from July 2009 to May 2010:

    April 15, 2010 - a customer announcement about adopting Exadata V2
    April 13, 2010 - a customer announcement
    April 6, 2010 - a customer announcement
    March 1, 2010 - a customer announcement
    February 2, 2010 - a customer announcement about the Intel-based Sun Oracle Database Machine
    January 26, 2010 - a customer announcement about Oracle Exadata
    July 14, 2009 - a customer announcement about the HP Oracle Database Machine

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers - in particular conflict resolution - assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of:

    1. Always treat the client-side data as higher priority.
    2. The same, but for the server side.
    3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit.

    Though I'm certain the third option will open room for some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to that question says it probably can't be done.
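
    As a sketch of option 3 - field-level last-writer-wins with per-field edit timestamps (a simplification, and exactly where the corruption risk lives, because the older of two concurrent edits is silently dropped and device clocks drift):

        from datetime import datetime, timezone

        def merge_lww(server_rec: dict, client_rec: dict) -> dict:
            """Field-level last-writer-wins merge.

            Each record maps field -> (value, edited_at). The newer edit wins
            per field; ties go to the server copy (an arbitrary but fixed rule,
            so both sides converge to the same result).
            """
            merged = {}
            for field in server_rec.keys() | client_rec.keys():
                s = server_rec.get(field)
                c = client_rec.get(field)
                if s is None:
                    merged[field] = c
                elif c is None or c[1] <= s[1]:
                    merged[field] = s
                else:
                    merged[field] = c
            return merged

        server = {"name": ("Alice",  datetime(2014, 1, 1, tzinfo=timezone.utc))}
        client = {"name": ("Alicia", datetime(2014, 1, 2, tzinfo=timezone.utc))}
        print(merge_lww(server, client))   # the client's later edit wins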

    Read the article

  • Storing revisions of a document

    - by dev.e.loper
    This is a follow-up question to my original question. I'm thinking of going with generating diffs and storing those diffs in the database's 'History' table. I'm using the diff-match-patch library to generate what is called a 'patch'. On every save, I compare the previous and new versions and generate this patch. The patch can be used to regenerate the document at a specific point in time. My dilemma is how to store this data. Should I:

    a. Insert a new database record for every patch?
    b. Store the patches in a JavaScript array and store that array in the History table, so there is only one History record per document, holding an array of all its patches?

    Concerns: with (a), too many db records are generated, which will be slow and CPU-intensive to query; with (b), there is only one record, and if that record is somehow corrupted or deleted, the entire revision history is gone. I'm looking for suggestions and concerns with either approach.
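
    For reference, generating and replaying a patch with the library looks like this - a sketch using the library's Python port (the variable names and the per-save flow are assumptions):

        from diff_match_patch import diff_match_patch

        previous_text = "The quick brown fox"
        new_text = "The quick red fox jumps"

        dmp = diff_match_patch()

        # On save: diff the previous and new versions and serialize the patch.
        patches = dmp.patch_make(previous_text, new_text)
        patch_text = dmp.patch_toText(patches)   # the string stored per revision

        # To rebuild a revision: replay the stored patches, in order, on the base text.
        restored, applied_ok = dmp.patch_apply(dmp.patch_fromText(patch_text), previous_text)
        assert all(applied_ok) and restored == new_text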

    Read the article

  • Gmail: security warning icon

    - by Notetaker
    Hello, I just enabled some Gmail Labs programs in my Gmail account, and then I noticed an orange triangle icon with an exclamation mark in it at the end of the address bar of my Google Chrome browser. Clicking on it brought up a "Security Information" dialog box with the following messages: "mail.google.com: The identity of this website has been verified by Thawte SGC CA. Your connection to mail.google.com is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look or behavior of the page."

    I then logged into two of my other Gmail accounts - one with no Gmail Labs programs enabled, and one with a single program enabled quite some time ago - both with the same result as above (i.e., the orange triangle warning appeared in the address bar). I don't remember seeing the orange triangle before, but I'm not sure whether it has ever appeared or not. I have "Always use https" enabled for my Gmail accounts. My questions are:

    1. Is there a way to identify and remove these insecure "resources"? (Could enabling Gmail Labs programs have brought these on?)
    2. Meanwhile, are my Gmail accounts compromised and unsafe to use? If so, what should I be doing about that now?
    3. After this problem is solved, do I need to reset the passwords on my Gmail accounts, and/or take any other measures to restore their security?

    Many thanks for answering my questions!

    Read the article

  • Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what HDFS is. In this article we will take a quick look at the importance of the relational database in the Big Data world.

    A Big Question? Here are a few questions I have often received since the beginning of the Big Data series:

    - Does the relational database have no place in the story of Big Data?
    - Is the relational database no longer relevant as Big Data evolves?
    - Is the relational database not capable of handling Big Data?
    - Is it true that one no longer has to learn about relational data if Big Data is the final destination?

    Well, every single time I hear that a person wants to learn about Big Data and is no longer interested in learning about the relational database, I find it a bit far-fetched. I am not here to give the ambiguous answer of "it depends". I am personally very clear that anyone aspiring to become a Big Data scientist or Big Data expert should learn about the relational database.

    NoSQL Movement. The NoSQL movement of recent times arose because of two important advantages of NoSQL databases: performance and a flexible schema. In my personal experience I have found both of these advantages when using a NoSQL database. There are instances when I have found the relational database too restrictive, when my data is unstructured or uses datatypes my relational database does not support, and likewise cases where a NoSQL solution performs much better than a relational database. I must say that I am a big fan of NoSQL solutions in recent times, but I have also seen occasions and situations where a relational database is still the perfect fit, even though the database is growing rapidly and has all the symptoms of Big Data.

    Situations Where the Relational Database Outperforms. Ad-hoc reporting is one of the most common scenarios where NoSQL does not have an optimal solution. Reporting queries often need to aggregate on columns which are not indexed and which are chosen only while the report is running; in this kind of scenario NoSQL databases (document stores, distributed key-value stores) often do not perform well. For ad-hoc reporting I have often found it much easier to work with relational databases.

    SQL is the most popular computer language of all time. I have been using it for over 10 years and I feel that I will be using it for a long time to come. There are plenty of tools, connectors and general awareness of the SQL language in the industry. Pretty much every programming language has drivers written for SQL, and most developers learned the language during their school/college years. In many cases, writing a query in SQL is much easier than writing it in a NoSQL-supported language. I believe this is the current situation, but in the future this situation can reverse, if NoSQL query languages become equally popular.

    ACID (Atomicity, Consistency, Isolation, Durability): not all NoSQL solutions offer an ACID-compliant language. There are always situations (for example, banking transactions or eCommerce shopping carts) where, without ACID, operations can be invalid and database integrity can be at risk. Even when the data volume genuinely qualifies as Big Data, there are always operations in the application which absolutely need an ACID-compliant, mature language.

    The Mixed Bag. I have often heard the argument that all the big social media sites have nowadays moved away from the relational database. Actually, this is not entirely true. While researching Big Data and relational databases, I have found that many of the popular social media sites use Big Data solutions along with relational databases. Many use relational databases to deliver results to the end user at run time, and many still use a relational database as their major backbone. Here are a few examples:

    - Facebook uses MySQL to display the timeline. (Reference Link)
    - Twitter uses MySQL. (Reference Link)
    - Tumblr uses sharded MySQL. (Reference Link)
    - Wikipedia uses MySQL for data storage. (Reference Link)

    There are many more prominent organizations running large-scale applications that use a relational database along with various Big Data frameworks to satisfy their various business needs.

    Summary. I believe that the RDBMS is like vanilla ice cream. Everybody loves it and everybody has it. NoSQL and other solutions are like chocolate ice cream or custom ice cream - there is a huge base which loves them and wants them, but not every ice cream maker can make them just right for everyone's taste. No matter how fancy an ice cream store is, there is always plain vanilla ice cream available there. Just the same, there are always cases and situations in the Big Data story where the traditional relational database is part of the whole story. In real-world scenarios there will always be cases that need relational database concepts and their ideology. It is extremely important to accept the relational database as one of the key components of Big Data, instead of treating it as a substandard technology.

    Ray of Hope - NewSQL. In this module we discussed that there are places where we need ACID compliance from our Big Data application, and NoSQL will not support that out of the box. There is a new term coined for applications/tools which support most of the properties of the traditional RDBMS as well as the Big Data infrastructure: NewSQL.

    Tomorrow. In tomorrow's blog post we will discuss NewSQL.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Windows Server 2008 Create Symbolic Link, updated Security Policy still gives privilege error

    - by Matt
    Windows Server 2008, RC2. I am trying to create a symbolic/soft link using the mklink command: mklink /D LinkName TargetDir e.g. c:\temp\>mklink /D foo bar This works fine if I run the command line as Administrator. However, I need it to work for regular users as well, because ultimately I need another program (executing as a user) to be able to do this. So, I updated the Local Security Policy via secpol.msc. Under "Local Policies" "User Rights Management" "Create symbolic links", I added "Users" to the security setting. I rebooted the machine. It still didn't work. So I added "Everyone" to the policy. Rebooted. And STILL it didn't work. What on earth am I doing wrong here? I think my user is even an Administrator on this box, and running plain command line even with this updated policy in place still gives me: You do not have sufficient privilege to perform this operation.
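
    For what it's worth, a likely culprit (an assumption, but a common one on Vista/Server 2008 and later): when UAC is enabled, members of the Administrators group run with a filtered token, and SeCreateSymbolicLinkPrivilege is one of the privileges stripped from that filtered token, so the right only takes effect in an elevated prompt regardless of what the security policy grants. A quick way to check is to compare the output of the following in a normal prompt and in an elevated one:

        C:\> whoami /priv | findstr /i SeCreateSymbolicLinkPrivilege

    If the privilege is listed only in the elevated prompt, token filtering is the cause, and the program creating the links needs to run elevated (or the links need to be created by a non-administrator account that has been granted the right, since filtering does not apply there).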

    Read the article

  • SQL Server Installation Checklist

    - by Jonathan Kehayias
    The other night I was asked on Twitter by Todd McDonald (Twitter) for a build list for SQL Server 2005 and 2008. My initial response was to provide a link to the SQL Server Build List Blog, which documents all of the builds of SQL Server and provides links to the KB articles associated with the builds. However, this wasn't what Todd was after; he actually wanted a reference for an installation checklist for SQL Server. I have a number of these that I use in my job, and they vary...(read more)

    Read the article

  • Transparent Data Encryption Helps Customers Address Regulatory Compliance

    - by Troy Kitch
    Regulations such as the Payment Card Industry Data Security Standards (PCI DSS), U.S. state security breach notification laws, HIPAA HITECH and more, call for the use of data encryption or redaction to protect sensitive personally identifiable information (PII). From the outset, Oracle has delivered the industry's most advanced technology to safeguard data where it lives—in the database. Oracle provides a comprehensive portfolio of security solutions to ensure data privacy, protect against insider threats, and enable regulatory compliance for both Oracle and non-Oracle Databases. Organizations worldwide rely on Oracle Database Security solutions to help address industry and government regulatory compliance. Specifically, Oracle Advanced Security helps organizations like Educational Testing Service, TransUnion Interactive, Orbitz, and the National Marrow Donor Program comply with privacy and regulatory mandates by transparently encrypting sensitive information such as credit cards, social security numbers, and personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides organizations the most cost-effective solution for comprehensive data protection. Watch the video and learn why organizations choose Oracle Advanced Security with transparent data encryption.
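
    As an illustration of the transparent part - a sketch, assuming an Oracle wallet/keystore has already been configured and opened - column-level TDE in Oracle Advanced Security is a one-clause change to the table definition, and applications keep issuing ordinary SQL against the column:

        -- Encrypt the SSN column at rest; queries and DML are unchanged.
        CREATE TABLE customers (
          customer_id  NUMBER PRIMARY KEY,
          name         VARCHAR2(80),
          ssn          VARCHAR2(11) ENCRYPT USING 'AES256'
        );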

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but haven't really found anything suitable. What I am looking for is a system to track security vulnerability remediation status - something like "Bugzilla for IT". What I am looking for is something pretty simple that allows the following:

    - batch entry of new vulnerabilities that need to be remediated
    - per-user assignment
    - AD/LDAP authentication
    - a simple interface to track progress: research, change control status, remediated, etc.
    - historical search ability
    - ability to divide by division
    - ability to store proof of resolution for the security team to access
    - dependency tracking
    - Linux-based is best (that's my group :) )
    - free is good, but cost doesn't matter so much if the system is worth it

    The system doesn't have to have all of these features, but if it did, that would be great. Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, as well as not being easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers and are honestly way overkill for what I am looking for. Server Fault's input is greatly appreciated, as always!

    Read the article

  • Security for university research lab systems

    - by ank
    Being responsible for security in a university computer science department is no fun at all. Let me explain: it is often the case that I get a request to install new hardware or software systems that are so experimental I would not dare put them even in the DMZ. If I can avoid it and force an installation into a restricted inside VLAN, that is fine, but occasionally I get requests that need access to the outside world - and it actually makes sense for such systems to have access to the world for testing purposes. Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose, and the point of the research proposal) - actually hospital patients, not so well versed in technology. So it makes sense to open it to the rest of the world.

    What I am looking for is anyone who has experience dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options or methodologies?

    Read the article

  • Apache server configuration name resolution (virtual host naming + security)

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? - comments welcome) Apache website using the following configuration file:

        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"

            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I am able to access the website by using www.foobar.com; however, when I type foobar.com I get the error 'Server not found' - why is this?

    My second question concerns the security implications of this directive in the configuration above:

        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
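
    On the first question, a 'Server not found' error comes from the browser before Apache is ever reached, so the likely suspect (an assumption, but a common one) is DNS: the ServerAlias handles www.foobar.com, but the bare apex name needs its own A record. A quick check from any shell:

        dig +short foobar.com A
        dig +short www.foobar.com A

    If the first query returns nothing while the second returns the server's address, the missing apex record is the problem, not the Apache configuration.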

    Read the article

  • Fix for php 5.3.9 libxsl security "bug" fix

    - by Question Mark
    Just this morning I updated my Debian server to PHP 5.3.9. The change log (last item in the list) has a fix for this bug, and now, when running any hosted site using XSL transforms, I get:

        Warning: XSLTProcessor::transformToXml(): Can't set libxslt security properties, not doing transformation for security reasons

    I'm not using any <sax:output> tags in my XSLT at all. Does anybody have any information on this? Current chatter about it is thin, so I'm a little lost. Using the suggestion about switching ini settings on and off on either side of -transformToXml() - ini_set("xsl.security_prefs", XSL_SECPREFS_NONE) or $xsl->setSecurityPreferences(XSL_SECPREFS_NONE) - brings me back to the same error. Many thanks.

    Progress: Upgrading libxml and recompiling libxslt against the new version was a good suggestion, though it has not fixed the issue. Compiling the latest PHP 5.3 snapshot does not fix the issue either.

    Solution: I'm unsure what actually solved this - very sorry for anyone else having the same problem. First I upgraded libxml, then applied a few patches, then went into the PHP source for the XSL parser and added some debugging and a few tweaks; after a few compiles, getting the configure args right, the error went away and wasn't reproducible. I would definitely recommend upgrading libxml as Petr suggested below and then grabbing the latest snapshot from php.net.
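
    For anyone landing here later: the identifier names in the attempts above are slightly off, which may matter. The ini setting is xsl.security_prefs, the constants are XSL_SECPREF_* (no S), and the method on XSLTProcessor is setSecurityPrefs(), added in PHP 5.4. A hedged sketch covering both cases:

        <?php
        // Relax the libxslt security preferences before transforming.
        // XSL_SECPREF_NONE (value 0) disables all checks - ideally re-enable
        // only what the stylesheet needs (write-file, create-directory, etc.).
        $proc = new XSLTProcessor();
        $proc->importStylesheet($xslDoc);
        if (method_exists($proc, 'setSecurityPrefs')) {      // PHP >= 5.4
            $proc->setSecurityPrefs(XSL_SECPREF_NONE);
        } else {                                             // patched 5.3.x
            ini_set('xsl.security_prefs', 0);
        }
        $html = $proc->transformToXml($xmlDoc);

    That said, reports (including the progress notes above) suggest the ini flag was not always honoured in 5.3.9 itself, so upgrading libxml/libxslt and moving to a later PHP snapshot, as described above, may be the real fix.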

    Read the article

  • Legalities of freelance security consultant (SQLi) [closed]

    - by Seidr
    Over the years I've gained a large amount of experience in programming (my main occupation) and server administration, and as a result I have a fairly decent grounding in security practices. I'm also pretty good at spotting security flaws in software (including, but not limited to, SQLi), and have built up a list of sites that could definitely use some looking at. My question is: what are the legalities of me contacting these sites saying something along the lines of "I've looked at your site and it appears vulnerable - customer data could be compromised - would you like me to fix it?" Could my finding out that the site is in fact vulnerable be construed as an attack in itself? If the prospective client so wished, could they take me to court over this? When I find a vulnerable site, all I do is confirm and make a note of the vulnerability. I'm not in it for personal gain (getting paid for FIXING it would be nice!), just curiosity. Is this a viable way to go about finding clients for this kind of work, or would you recommend a more 'legitimate' way? Any suggestions/advice would be greatly appreciated :)

    Read the article
