Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

  • Oracle Spatial renamed Oracle Spatial and Graph

    - by Cinzia Mascanzoni
    As of the July 19th, 2012 Global Price List, we have renamed "Oracle Spatial" to "Oracle Spatial and Graph". We have made this change to highlight the existing network and semantic graph capabilities in Oracle Spatial and in recognition of the increasing market demand for graph database capabilities. Oracle Spatial and Graph has the same pricing and features as the current Oracle Spatial. This is a product name change only.

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: NHS Computer System Summary: We're spunking £12 Billion on some health software with barely anything working. I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. * If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on. I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is it outsourcing that's the problem? Is it not getting the software designers to understand the medical business that caused it? What are your experiences with projects that have gone over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project? EDIT *This bit seemed to get a lot of attention. What I mean is that I could probably do this for, say, 30 users, spending a few tens of thousands of pounds. I'm not including stuff I don't know about the medical industry and government, but I think most people who've been around programming are familiar with that kind of database/front-end design. My point is the NHS project looks like a BIG version of this, with bells and whistles, notably security. But surely a budget millions of times larger than mine could provide this?

    Read the article

  • Custom Java Web Development vs Spreadsheet

    - by jacktrades
    I need some arguments for why a small business should prefer a custom web-developed solution using a relational database (e.g. Java Servlet + MySQL) over standard spreadsheet programs like Excel, especially now that Office 365 is available in the cloud. As a Java programmer, I need good arguments to convince clients that this approach is better (if it really is). This is a generic situation, and I understand that each case is different; nevertheless, the answers so far have pinpointed the right issues.

    Read the article

  • Transparent Data Encryption

    Transparent Data Encryption is designed to protect data by encrypting the physical files of the database, rather than the data itself. Its main purpose is to prevent unauthorized access to the data by someone who copies or restores the database files to another server: with Transparent Data Encryption in place, doing so requires the original encryption certificate and master key. It was introduced in the Enterprise edition of SQL Server 2008. John Magnabosco explains it fully, and guides you through the process of setting it up.
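
    For orientation, here is a minimal T-SQL sketch of the kind of setup the article walks through (database, certificate, path and password names are placeholders, not taken from the article):

        -- Server-level master key and certificate live in master.
        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!';
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

        -- Database encryption key and the switch live in the user database.
        USE SalesDB;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TDECert;
        ALTER DATABASE SalesDB SET ENCRYPTION ON;

        -- Back up the certificate and private key; without them the files
        -- cannot be restored or attached on another server.
        USE master;
        BACKUP CERTIFICATE TDECert TO FILE = 'C:\Backups\TDECert.cer'
            WITH PRIVATE KEY (FILE = 'C:\Backups\TDECert.pvk',
                              ENCRYPTION BY PASSWORD = 'AnotherStrongP@ssw0rd!');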

    Read the article

  • Exploring In-memory OLTP Engine (Hekaton) in SQL Server 2014 CTP1

    The continuing drop in the price of memory has made fast in-memory OLTP increasingly viable. SQL Server 2014 allows you to migrate the most-used tables in an existing database to memory-optimised 'Hekaton' technology, but how you balance between disk tables and in-memory tables for optimum performance requires judgement and experiment. What is this technology, and how can you exploit it? Rob Garrison explains.
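
    As a rough taste of the syntax involved (a sketch against the CTP1-era feature, not taken from Rob's article; database, filegroup and table names are invented):

        -- A memory-optimised table needs a MEMORY_OPTIMIZED_DATA filegroup first.
        ALTER DATABASE SalesDB ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDB ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
            TO FILEGROUP imoltp_fg;

        CREATE TABLE dbo.ShoppingCart
        (
            CartId      INT       NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId      INT       NOT NULL,
            CreatedDate DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);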

    Read the article

  • Set and Verify the Retention Value for Change Data Capture

    - by AllenMWhite
    Last summer I set up Change Data Capture for a client to track changes to their application database to apply those changes to their data warehouse. The client had some issues a short while back and felt they needed to increase the retention period from the default 3 days to 5 days. I ran this query to make that change: sp_cdc_change_job @job_type='cleanup', @retention=7200 The value 7200 represents the number of minutes in a period of 5 days. All was well, but they recently asked how they can verify...(read more)
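
    For anyone landing here from search, a sketch of one way to both make and verify that change (the post itself is truncated above; the database name is a placeholder):

        -- Run in the CDC-enabled database: set cleanup retention to 5 days (7200 minutes).
        EXEC sys.sp_cdc_change_job @job_type = N'cleanup', @retention = 7200;

        -- Verify the value the cleanup job is now using.
        SELECT job_type, retention
        FROM msdb.dbo.cdc_jobs
        WHERE database_id = DB_ID(N'YourDatabase');   -- placeholder name

        -- Or simply list all CDC jobs: EXEC sys.sp_cdc_help_jobs;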

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata. A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and non-life insurance information. Key highlights of the benefits that Swiss Re achieved by using Exadata: it reduced the time to feed the data warehouse and generate data marts by 58%; reduced average runtime for standard reports by 24%, while comfortably loading two data warehouse refreshes per day with incremental feeds; and freed up technical experts by significantly reducing time spent on tuning activities. Most importantly, this was one of the fastest project deployments in Swiss Re's history: they went from installation to production in just four months! What is truly surprising is that it took only two weeks from power-on to testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products. These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re: "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases, that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice, and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff." "The high-quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value." Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • What is the best age for a programmer to be hired? [on hold]

    - by Mohamed Ahmed
    I graduated from an information systems institute in 2004 and have worked as an ICDL instructor, and I know some SQL Server and database design. Now I'm 30, and I want to start studying computer programming and get the MCSA SQL Server and MCSE certificates, but I feel I'm too old to start and that companies will not accept me for that reason, and also because I don't have any experience in the field yet; I would be starting like a fresh graduate of 21 or 22. Please help me: what is the best age for a programmer to be hired, and will my age and late start be a big obstacle for me or not?

    Read the article

  • Starting a project with growth in mind

    - by marabutt
    I have an idea for a web application and have some good people keen to get involved. I will be doing most of the code at the start and have a few years' experience with some quite large projects. I have nearly zero budget. What view should I take with regard to data storage and the database? Get the project running quickly and inexpensively, then re-evaluate if it is a success? Does anyone have experience with this, and any advice?

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain some keys and values. For example, jobid and setid might be keys for one type of report, and userid and groupId for another, etc. The type of keys that can be referred to from the document is determined by the namespaces used in the XML doc. These keys are stored on the basis of the namespace used in the XML document. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. The application fetches those keys from that table for this namespace, looks for their values in the XML doc and stores them in another table along with a pointer to the XML document (the id of the record storing the complete XML document in a cell). Use cases: When the user queries for a key and value, I return the document or a set of documents that have those key/value pairs. When the user queries for a certain key and provides a name for an XSLT (pre-stored), I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT. When the user asks for a particular fragment of a doc, it can fetch a subset from a particular document as well. When the user queries for the top x values of a certain key, I return the set of documents that have the top x values of that key. I am using a DB2 database for its support of XML along with relational capabilities. That makes it easier for me to run XPath expressions, fetch values of keys and aggregate a set of documents fulfilling a criteria, all on the database side. Problems: DB2 stores XML documents of up to 2 GB in size. Retrieval is very slow. If something involves many documents, then it takes significant time for things to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the DB side? Or is it OK to use both in such a case?

    Read the article

  • SQL: empty string vs NULL value

    - by Jacek Prucia
    I know this subject is a bit controversial and there are a lot of various articles/opinions floating around the internet. Unfortunately, most of them assume the person doesn't know what the difference between NULL and empty string is. So they tell stories about surprising results with joins/aggregates and generally give slightly more advanced SQL lessons. By doing this, they absolutely miss the whole point and are therefore useless for me. So hopefully this question and all answers will move the subject forward a bit. Let's suppose I have a table with personal information (name, birth, etc.) where one of the columns is an email address of varchar type. We assume that for some reason some people might not want to provide an email address. When inserting such data (without email) into the table, there are two available choices: set the cell to NULL or set it to the empty string (''). Let's assume that I'm aware of all the technical implications of choosing one solution over another and I can create correct SQL queries for either scenario. The problem is that even when both values differ on the technical level, they are exactly the same on the logical level. After looking at NULL and '' I came to a single conclusion: I don't know the email address of the guy. Also, no matter how hard I tried, I was not able to send an e-mail using either NULL or the empty string, so apparently most SMTP servers out there agree with my logic. So I tend to use NULL where I don't know the value, and consider the empty string a bad thing. After some intense discussions with colleagues I came up with two questions: Am I right in assuming that using an empty string for an unknown value is causing a database to "lie" about the facts? To be more precise: using SQL's idea of what is a value and what is not, I might come to the conclusion: we have an e-mail address, just by finding out it is not null. But then later on, when trying to send an e-mail, I'll come to the contradictory conclusion: no, we don't have an e-mail address; that @!#$ database must have been lying! Is there any logical scenario in which an empty string '' could be such a good carrier of important information (besides value and no value), which would be troublesome/inefficient to store in any other way (like an additional column)? I've seen many posts claiming that sometimes it's good to use an empty string along with real values and NULLs, but so far I haven't seen a scenario that would be logical (in terms of SQL/DB design). P.S. Some people will be tempted to answer that it is just a matter of personal taste. I don't agree. To me it is a design decision with important consequences. So I'd like to see answers where the opinion is backed by some logical and/or technical reasons.
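
    To make the technical difference concrete, a small SQL sketch (table and data are invented; note also that Oracle treats an empty VARCHAR2 string as NULL, so the distinction is not even portable):

        CREATE TABLE people
        (
            id    INT PRIMARY KEY,
            email VARCHAR(255) NULL
        );

        INSERT INTO people (id, email) VALUES (1, 'a@example.com');
        INSERT INTO people (id, email) VALUES (2, '');      -- "known to have no address"?
        INSERT INTO people (id, email) VALUES (3, NULL);    -- "address unknown"

        SELECT COUNT(email) FROM people;            -- 2: COUNT ignores NULL but counts ''
        SELECT id FROM people WHERE email = '';     -- row 2 only
        SELECT id FROM people WHERE email IS NULL;  -- row 3 only
        SELECT id FROM people WHERE email <> 'x';   -- rows 1 and 2: NULL never compares equal or unequal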

    Read the article

  • What schema documentation tools exist for PostgreSQL?

    - by Brad Koch
    MySQL has MySQL Workbench for designing and documenting your schema, and generates CREATE and ALTER scripts based on your design. We're looking at migrating to PostgreSQL in the near future, and we do need a practical way of documenting and modifying the schema structure. What similar tools exist for Postgres (that are OS X/Linux compatible)? Alternatively, what equivalent conventions would be followed for designing and documenting the structure of your Postgres database?

    Read the article

  • 24 Hours of PASS next week, pre-con preview style

    - by drsql
    I will be doing my Characteristics of a Great Relational Database, which is a session that I haven't done since last PASS. When I was asked about doing this Summit Preview version of 24 Hours of PASS, I decided that I would do this session, largely because it is kind of light and fun, but also because it is either going to be the basis of the end section of my pre-con at the Summit, or it is going to be the section of the pre-con we don't get to because we are so involved in working out designs that...(read more)

    Read the article

  • Should I use multiple column primary keys or add a new column?

    - by Covar
    My current database design makes use of a multiple column primary key to use existing data (that would be unique anyway) instead of creating an additional column assigning each entry an arbitrary key. I know that this is allowed, but was wondering if this is a practice that I might want to use cautiously and possibly avoid (much like goto in C). So what are some of the disadvantages I might see in this approach or reasons I might want a single column key?
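
    A quick sketch of the two designs being compared (a T-SQL-flavoured illustration; table and column names are invented):

        -- Natural composite key: the data itself identifies the row.
        CREATE TABLE enrollment
        (
            student_id  INT  NOT NULL,
            course_id   INT  NOT NULL,
            enrolled_on DATE NOT NULL,
            PRIMARY KEY (student_id, course_id)
        );

        -- Alternative: an arbitrary surrogate key, with the natural key kept unique.
        CREATE TABLE enrollment_v2
        (
            enrollment_id INT IDENTITY(1,1) PRIMARY KEY,
            student_id    INT  NOT NULL,
            course_id     INT  NOT NULL,
            enrolled_on   DATE NOT NULL,
            UNIQUE (student_id, course_id)
        );

    One practical difference shows up in child tables: with the composite key they must carry and join on both columns, while with the surrogate key foreign keys stay narrow, but you must remember the UNIQUE constraint or the natural key loses its guarantee.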

    Read the article

  • Should users be deleted after inactivity on a website?

    - by Hovaness Bartamian
    When you have a social website, or any website where users can register, would you eventually delete inactive accounts after a certain time (say, a year of inactivity), or would you rather keep their account records forever? I know websites like Facebook have a large number of inactive, duplicated and fake accounts. So I'm wondering if, after two years of inactivity, it would be alright to send the account a warning email of deletion unless they log in. I'm just thinking about clean and efficient database management, and any implications this may have for new potential users.
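
    If such a policy were implemented in SQL, it might look roughly like this (a sketch only; the users table, its columns, and the two-year and 30-day windows are all assumptions):

        -- Warn accounts inactive for more than two years that have not yet been warned.
        UPDATE users
        SET    deletion_warned_at = CURRENT_TIMESTAMP
        WHERE  last_login_at < DATEADD(YEAR, -2, CURRENT_TIMESTAMP)
          AND  deletion_warned_at IS NULL;

        -- Delete (or better, soft-delete/anonymise) accounts still inactive 30 days after the warning.
        DELETE FROM users
        WHERE  deletion_warned_at IS NOT NULL
          AND  deletion_warned_at < DATEADD(DAY, -30, CURRENT_TIMESTAMP)
          AND  last_login_at < DATEADD(YEAR, -2, CURRENT_TIMESTAMP);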

    Read the article

  • SQL Rally Voting Open

    - by AllenMWhite
    The voting for sessions for SQL Rally has been going on for a couple of weeks now. This week the Enterprise Database Administration & Deployment sessions are up for voting. I didn't go into politics because I don't feel comfortable telling people that they should vote for me, but this is how the sessions are being decided for this conference, so here goes. I've submitted two abstracts, both grouped in the Summit Spotlight section. The first is a new session based on what I learned implementing...(read more)

    Read the article

  • Collecting the Information in the Default Trace

    The default trace is still the best way of getting important information to provide a security audit of SQL Server, since it records such information as logins, changes to users and roles, changes in object permissions, error events and changes to both database settings and schemas. The only trouble is that the information is volatile. Feodor shows how to squirrel the information away to provide reports, check for unauthorised changes and provide forensic evidence.
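
    For context, the default trace can be read with plain T-SQL along these lines (a sketch; the archive table name is made up, and the article covers a more complete approach):

        -- Squirrel the default trace away into a table so it survives file rollover.
        DECLARE @path NVARCHAR(260);
        SELECT @path = path FROM sys.traces WHERE is_default = 1;

        SELECT e.name AS event_name, t.DatabaseName, t.ObjectName,
               t.LoginName, t.StartTime
        INTO   dbo.DefaultTraceArchive           -- SELECT INTO creates the table; use INSERT on later runs
        FROM   sys.fn_trace_gettable(@path, DEFAULT) AS t
        JOIN   sys.trace_events AS e ON e.trace_event_id = t.EventClass;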

    Read the article

  • Preventing Problems in SQL Server

    It is never a good idea to let your users be the ones who tell you about database server outages. It is far better to be able to spot potential problems by being alerted for the most relevant conditions on your servers, at the right thresholds. This will take time and patience, but the reward will be an alerting system which allows you to deal more effectively with issues before they involve system downtime.
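
    As a starting point, SQL Server Agent alerts of the kind discussed here can be scripted; a minimal sketch (the operator name is a placeholder and must already exist):

        -- Alert on severity 17 (insufficient resources) and notify an operator by e-mail.
        EXEC msdb.dbo.sp_add_alert
             @name = N'Severity 17 - insufficient resources',
             @severity = 17,
             @delay_between_responses = 300;   -- seconds, to avoid a flood of notifications

        EXEC msdb.dbo.sp_add_notification
             @alert_name = N'Severity 17 - insufficient resources',
             @operator_name = N'DBA Team',     -- placeholder operator
             @notification_method = 1;         -- 1 = e-mail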

    Read the article

  • Should I have a separate method for Update(), Insert(), etc., or have a generic Query() that would be able to handle all of these?

    - by Prayos
    I'm currently trying to write a class library for a connection to a database. Looking over it, there are several different types of queries: Select From, Update, Insert, etc. My question is, what is the best practice for writing these queries in a C# application? Should I have a separate method for each of them (e.g. Update(), Insert()), or have a generic Query() that would be able to handle all of these? Thanks for any and all help!

    Read the article

  • NoSQL vs Ehcache caching advice for speeding up a read-only MySQL database

    - by paddydub
    I'm building a route planner webapp using Spring/Hibernate/Tomcat and a MySQL database. I have a database containing read-only data, such as bus stop coordinates and bus times, which is never updated. I'm trying to make the app run faster: each time the application runs, it performs approximately 1,000 reads against the database to calculate a route. I have set up Ehcache, which greatly improves the read-from-database times. I'm now setting up Terracotta + Ehcache distributed caching to share the cache between multiple Tomcat JVMs, but this seems a bit complicated. I've tried memcached, but it was not performing as fast as Ehcache. I'm wondering if MongoDB or Redis would be better suited. I have no experience with NoSQL, but I would appreciate it if anyone has any ideas. What I need is quick access to the read-only database.

    Read the article
