Search Results

Search found 30293 results on 1212 pages for 'database insider'.

Page 130 of 1212

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to stop anyone who obtains the database files from reading the data simply by restoring them to another server: with Transparent Data Encryption in place, doing so also requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains it fully, and guides you through the process of setting it up.
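
    A condensed T-SQL sketch of the setup the article walks through (the database name SalesDB and certificate name TDECert are placeholders; the certificate should also be backed up straight away):

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

        USE SalesDB;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TDECert;
        ALTER DATABASE SalesDB SET ENCRYPTION ON;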

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to the memory-optimised 'Hekaton' technology, but striking the right balance between disk-based and in-memory tables for optimum performance requires judgement and experimentation. What is this technology, and how can you exploit it? Rob Garrison explains.
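
    A minimal sketch of what migrating a hot table looks like in CTP1 (the database, file path and table are made-up examples): add a memory-optimised filegroup, then declare the table MEMORY_OPTIMIZED.

        ALTER DATABASE SalesDB ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDB
            ADD FILE (NAME = 'SalesDB_mod', FILENAME = 'C:\Data\SalesDB_mod')
            TO FILEGROUP SalesDB_mod;

        CREATE TABLE dbo.ShoppingCart (
            CartId      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId      INT       NOT NULL,
            CreatedDate DATETIME2 NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);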

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database, so those changes could be applied to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this command to make that change: EXEC sys.sp_cdc_change_job @job_type='cleanup', @retention=7200. The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
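
    A quick way to check the current setting, assuming the standard CDC metadata (retention is stored in minutes, so 7200 corresponds to 5 days):

        EXEC sys.sp_cdc_help_jobs;    -- run in the CDC-enabled database; lists the capture and cleanup jobs

        SELECT job_type, retention
        FROM   msdb.dbo.cdc_jobs
        WHERE  job_type = N'cleanup';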

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata.

    A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information.

    Key highlights of the benefits that Swiss Re achieved by using Exadata:

    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports
    - Comfortably loads two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities

    Most importantly, this was one of the fastest project deployments in Swiss Re's history: they went from installation to production in just four months! What is truly surprising is that it took only two weeks between power-on and testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.

    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:

    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases, that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice—and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."

    "The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."

    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • Starting a project with growth in mind

    - by marabutt
    I have an idea for a web application and some good people keen to get involved. I will be doing most of the coding at the start, and I have a few years' experience with some quite large projects. I have nearly zero budget. What view should I take with regard to data storage and the database? Should I get the project running quickly and inexpensively, then re-evaluate if it is a success? Does anyone have experience with this, and any advice?

    Read the article

  • What is the best age for a programmer to be hired? [on hold]

    - by Mohamed Ahmed
    I graduated from an information systems institute in 2004 and have worked as an ICDL instructor, but I also know some SQL Server and database design. I'm now 30, and I want to start studying computer programming and get the MCSA SQL Server and MCSE certificates. However, I feel I'm too old to start, and that companies will not accept me for that reason, and also because I don't have any experience in the field yet; I would be starting like a fresh graduate of 21 or 22. Please help me: what is the best age for a programmer to be hired, and will a late start be a big obstacle for me or not?

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values, such as jobid and setid for one type of report, or userid and groupId for another. The type of keys that can be extracted from a document is determined by the namespaces used in the XML doc, and the keys are stored on the basis of that namespace. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. The application fetches those keys from that table for this namespace, looks up their values in the XML doc, and stores them in another table along with a pointer to the XML document (the id of the record storing the complete XML document in a cell).

    Use cases:

    - When the user queries for a key and value, I return the document, or set of documents, having those key/value pairs.
    - When the user queries for a certain key and provides the name of a (pre-stored) XSLT, I fetch the set of documents fulfilling that criterion and convert the XML to HTML with the specified XSLT.
    - When the user asks for a particular fragment of a document, a subset of a particular document can be fetched as well.
    - When the user queries for the top x values of a certain key, I return the set of documents having the top 10 values of that key.

    I am using a DB2 database for its XML support alongside its relational capabilities. That makes it easier for me to run XPath expressions, fetch the values of keys, and aggregate a set of documents fulfilling a criterion, all on the database side.

    Problems: DB2 stores XML docs of up to 2 GB in size, and retrieval is very slow. If something involves many documents, it takes significant time for things to show up in the browser, and the user has to wait.

    Can MongoDB help in this case, as it is document oriented? Can I do XPath-style queries and document transformations on the database side? Or is it OK to use both in such a case?
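
    For reference, a rough sketch of the kind of pureXML lookup being described; the table and column names (reports, doc_id, report_xml) and the key element are assumptions:

        SELECT r.doc_id
        FROM   reports r
        WHERE  XMLEXISTS(
                   'declare namespace m = "myspace1";
                    $d//m:report[m:jobid = "1234"]'
                   PASSING r.report_xml AS "d");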

    Read the article

  • SQL: empty string vs NULL value

    - by Jacek Prucia
    I know this subject is a bit controversial and there are a lot of various articles and opinions floating around the internet. Unfortunately, most of them assume the reader doesn't know what the difference between NULL and an empty string is. So they tell stories about surprising results with joins/aggregates and generally give slightly more advanced SQL lessons. By doing this, they absolutely miss the whole point and are therefore useless to me. So hopefully this question and all the answers will move the subject forward a bit.

    Let's suppose I have a table with personal information (name, birth date, etc.) where one of the columns is an email address of varchar type. We assume that, for some reason, some people might not want to provide an email address. When inserting such data (without an email) into the table, there are two available choices: set the cell to NULL or set it to an empty string (''). Let's assume that I'm aware of all the technical implications of choosing one solution over the other, and that I can create correct SQL queries for either scenario. The problem is that even though the two values differ on the technical level, they are exactly the same on the logical level. After looking at NULL and '' I came to a single conclusion: I don't know the email address of the guy. Also, no matter how hard I tried, I was not able to send an e-mail using either NULL or an empty string, so apparently most SMTP servers out there agree with my logic. So I tend to use NULL where I don't know the value, and consider the empty string a bad thing.

    After some intense discussions with colleagues I came up with two questions:

    1. Am I right in assuming that using an empty string for an unknown value causes the database to "lie" about the facts? To be more precise: using SQL's idea of what is a value and what is not, I might come to the conclusion that we have an e-mail address, just by finding out it is not NULL. But then later on, when trying to send the e-mail, I'll come to the contradictory conclusion: no, we don't have an e-mail address; that @!#$ database must have been lying!

    2. Is there any logical scenario in which an empty string '' could be such a good carrier of important information (besides value and no value) that it would be troublesome or inefficient to store it any other way (such as an additional column)? I've seen many posts claiming that sometimes it's good to use an empty string along with real values and NULLs, but so far I haven't seen a scenario that would be logical (in terms of SQL/DB design).

    P.S. Some people will be tempted to answer that it is just a matter of personal taste. I don't agree. To me it is a design decision with important consequences. So I'd like to see answers where the opinion is backed by logical and/or technical reasons.
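
    A small illustration of the technical difference under discussion, using a hypothetical people(name, email) table:

        INSERT INTO people (name, email) VALUES ('Alice', '');    -- "no address" stored as an empty string
        INSERT INTO people (name, email) VALUES ('Bob', NULL);    -- "no address" stored as NULL

        SELECT COUNT(email) FROM people;              -- 1: COUNT ignores NULL but counts ''
        SELECT name FROM people WHERE email = '';     -- Alice only
        SELECT name FROM people WHERE email IS NULL;  -- Bob only
        SELECT name FROM people WHERE email <> 'x@example.com';   -- Alice only: NULL never passes the comparison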

    Read the article

  • What schema documentation tools exist for PostgreSQL?

    - by Brad Koch
    MySQL has MySQL Workbench for designing and documenting your schema, and generates CREATE and ALTER scripts based on your design. We're looking at migrating to PostgreSQL in the near future, and we do need a practical way of documenting and modifying the schema structure. What similar tools exist for Postgres (that are OS X/Linux compatible)? Alternatively, what equivalent conventions would be followed for designing and documenting the structure of your Postgres database?
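
    One low-tech convention, sketched here with made-up object names: keep the documentation in the schema itself with COMMENT ON, and pull it back out with pg_dump --schema-only or the catalog comment functions.

        COMMENT ON TABLE  orders           IS 'One row per customer order';
        COMMENT ON COLUMN orders.placed_at IS 'When the order was placed (UTC)';

        SELECT obj_description('orders'::regclass, 'pg_class');  -- table comment
        SELECT col_description('orders'::regclass, 2);           -- comment on the table's second column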

    Read the article

  • 24 Hours of PASS next week, pre-con preview style

    - by drsql
    I will be doing my Characteristics of a Great Relational Database, which is a session that I haven’t done since the last PASS. When I was asked about doing this Summit Preview version of 24 Hours of PASS, I decided that I would do this session, largely because it is kind of light and fun, but also because it is either going to be the basis of the end section of my pre-con at the Summit, or it is going to be the section of the pre-con we don’t get to because we are so involved in working out designs that...(read more)

    Read the article

  • Should I use multiple column primary keys or add a new column?

    - by Covar
    My current database design makes use of a multiple column primary key to use existing data (that would be unique anyway) instead of creating an additional column assigning each entry an arbitrary key. I know that this is allowed, but was wondering if this is a practice that I might want to use cautiously and possibly avoid (much like goto in C). So what are some of the disadvantages I might see in this approach or reasons I might want a single column key?
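
    A sketch of the two designs in question, with hypothetical names; any table referencing the first must carry both columns in its foreign key, while the second needs only the surrogate column.

        -- natural composite key: the existing data identifies the row
        CREATE TABLE enrollment (
            student_id INT NOT NULL,
            course_id  INT NOT NULL,
            grade      CHAR(2),
            PRIMARY KEY (student_id, course_id)
        );

        -- surrogate key: an arbitrary single-column key, with the natural key kept unique
        CREATE TABLE enrollment_s (
            enrollment_id INT IDENTITY(1,1) PRIMARY KEY,
            student_id    INT NOT NULL,
            course_id     INT NOT NULL,
            grade         CHAR(2),
            CONSTRAINT uq_enrollment UNIQUE (student_id, course_id)
        );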

    Read the article

  • Should users be deleted after inactivity on a website?

    - by Hovaness Bartamian
    When you run a social website, or any website where users can register, would you eventually delete inactive accounts after a certain time (say, a year of inactivity), or would you rather keep their account records forever? I know websites like Facebook have a large number of inactive, duplicated and fake accounts. So I'm wondering whether, after two years of inactivity, it would be alright to send the account holder a warning email saying the account will be deleted unless they log in. I'm just thinking about clean and efficient database management, and any implications this may have for potential new users.
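
    If the warn-then-delete policy is adopted, a rough T-SQL sketch of the mechanics (table and column names are assumptions):

        -- flag accounts idle for two years so a warning e-mail can be sent
        UPDATE dbo.Accounts
        SET    Status = 'PendingDeletion', WarnedAt = SYSUTCDATETIME()
        WHERE  LastLoginAt < DATEADD(YEAR, -2, SYSUTCDATETIME())
          AND  Status = 'Active';

        -- later, remove the accounts that stayed inactive through the warning period
        DELETE FROM dbo.Accounts
        WHERE  Status = 'PendingDeletion'
          AND  WarnedAt < DATEADD(DAY, -30, SYSUTCDATETIME());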

    Read the article

  • SQL Rally Voting Open

    - by AllenMWhite
    The voting for sessions for SQL Rally has been going on for a couple of weeks now. This week the Enterprise Database Administration & Deployment sessions are up for voting. I didn't go into politics because I don't feel comfortable telling people that they should vote for me, but this is how the sessions are being decided for this conference, so here goes. I've submitted two abstracts, both grouped in the Summit Spotlight section. The first is a new session based on what I learned implementing...(read more)

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.
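
    A minimal sketch of reading the default trace and copying it somewhere permanent, so the data survives the trace files rolling over (the archive table name is an assumption):

        DECLARE @path NVARCHAR(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT t.StartTime, t.LoginName, t.HostName, t.DatabaseName,
               te.name AS EventName, t.ObjectName, t.TextData
        INTO   dbo.DefaultTraceArchive
        FROM   sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN   sys.trace_events AS te ON te.trace_event_id = t.EventClass;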

    Read the article

  • Preventing Problems in SQL Server

    It is never a good idea to let your users be the ones who tell you about database server outages. It is far better to spot potential problems yourself, by being alerted to the most relevant conditions on your servers at the right thresholds. This will take time and patience, but the reward will be an alerting system that allows you to deal more effectively with issues before they involve system downtime.
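
    As one hedged example of such an alert (the operator name and address are placeholders): notify the DBA team whenever a severity 17 "insufficient resources" error is raised.

        EXEC msdb.dbo.sp_add_operator
             @name = N'DBA Team', @email_address = N'dba-team@example.com';

        EXEC msdb.dbo.sp_add_alert
             @name = N'Severity 17 - Insufficient Resources',
             @severity = 17,
             @delay_between_responses = 300;   -- seconds, so one incident does not flood the inbox

        EXEC msdb.dbo.sp_add_notification
             @alert_name = N'Severity 17 - Insufficient Resources',
             @operator_name = N'DBA Team',
             @notification_method = 1;         -- 1 = e-mail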

    Read the article

  • Should I have a separate method for Update(), Insert(), etc., or have a generic Query() that would be able to handle all of these?

    - by Prayos
    I'm currently trying to write a class library for connecting to a database. Looking it over, there are several different types of queries: SELECT FROM, UPDATE, INSERT, etc. My question is, what is the best practice for writing these queries in a C# application? Should I have a separate method for each of them (e.g. Update(), Insert()), or one generic Query() that would be able to handle all of these? Thanks for any and all help!

    Read the article

  • NoSQL vs Ehcache caching advice for speeding up a read-only MySQL database

    - by paddydub
    I'm building a route planner web app using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time the application runs, it performs approximately 1000 reads against the database to calculate a route. I have set up Ehcache, which greatly improves the read times from the database, and I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate any ideas. What I need is quick access to the read-only database.

    Read the article

  • MongoDB vs Ehcache caching advice for speeding up a read-only MySQL database

    - by paddydub
    I'm building a route planner web app using Spring/Hibernate/Tomcat and a MySQL database. The database contains read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time the application runs, it performs approximately 1000 reads against the database to calculate a route. I have set up Ehcache, which greatly improves the read times from the database, and I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB would be better suited. I have no experience with NoSQL, but I would appreciate any ideas. All I need is quick access to the read-only database.

    Read the article

  • Cannot log in to Postgres database despite setting password for user 'postgres'

    - by Serg
    I'm trying to use pgAdmin III to manage my Postgres database. Here are the commands I've run on my machine:

        sudo apt-get install postgresql

    Then I installed the pgAdmin III application:

        sudo apt-get install pgadmin3

    Next I focused on setting my username and password in order to log in:

        sudo -u postgres psql postgres

    Here I set my password:

        \password postgres

    Finally I just created my database:

        sudo -u postgres createdb repairsdatabase

    When I try to log in using pgAdmin III, I get the error:

        An error has occurred: Error connecting to the server: FATAL: Peer authentication failed for user "postgres"
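
    One common cause, offered as an assumption about this setup rather than a certain diagnosis: the default "local" entry in pg_hba.conf uses peer authentication, which ignores the password set with \password. Changing that entry to md5 (or pointing pgAdmin III at 127.0.0.1 so a "host" entry is used) allows password logins:

        # /etc/postgresql/<version>/main/pg_hba.conf -- the exact path depends on the install
        # before:
        #   local   all   postgres   peer
        # after:
        local   all   postgres   md5

        # then reload the server so the change takes effect:
        # sudo service postgresql reload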

    Read the article

  • Export ASPNETDB data to another database

    - by raziiq
    Hi there. I am developing in Visual Web Developer 2008. I have SQLEXPRESS 2005 and SQL Management Studio 2008 installed on my PC. I purchased an MS SQL 2008 database on DiscountASP.net. The host provides only one database, but my project has two: one is the ASPNETDB that contains the roles and users etc. (created using the Website Configuration Wizard), and the other is my own database, named MainDB, containing the data for my website. As the host allows only one database, I exported my ASPNETDB's tables and stored procedures to my MainDB using aspnet_regsql.exe, but the problem is that the stored procedures and tables were exported while the data was not; I mean there are no users in the tables. My question is: how do I export everything from ASPNETDB, including stored procedures, tables and data, to my MainDB?

    Read the article

  • SQL Server 2008 Copy Database Wizard: Fail

    - by Nai
    I am trying to use the SQL Server 2008 Copy Database Wizard to copy a SQL Server 2008 database, using the SQL Management Object method. However, the copy fails with the following error:

        ERROR : errorCode=-1073548784 description=Executing the query "/* '==============================================..." failed with the following error: "Cannot use a CONTAINS or FREETEXT predicate on table or indexed view 'Product' because it is not full-text indexed."

    Any ideas on how I can proceed with this would be super helpful. Kind regards, Nai

    Read the article
