Search Results

Search found 3710 results on 149 pages for 'all databases'.


  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application so that it remains usable offline. I've done lots of development in server-side environments with databases, and we all know that schema additions and modifications are often necessary. What should happen if my application uses an offline database schema and that schema changes? How do I prevent the application from breaking on the client side, and how do I ensure the database is always up to date on the client end? Does anyone have any solutions?
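    One common client-side pattern (not from the original question; table and column names are illustrative) is to keep a schema-version row in the local database and run migrations only when the stored version lags the version the application code expects. The Web SQL API also exposes changeVersion() for the same purpose. A minimal SQLite-flavored sketch:

        -- Hypothetical names; assumes the client-side database is SQLite-based.
        CREATE TABLE IF NOT EXISTS schema_version (
            id      INTEGER PRIMARY KEY CHECK (id = 1),
            version INTEGER NOT NULL
        );

        -- Seed the single version row the first time the application runs.
        INSERT OR IGNORE INTO schema_version (id, version) VALUES (1, 1);

        -- Migration to version 2: run only when SELECT version FROM schema_version returns 1.
        ALTER TABLE notes ADD COLUMN updated_at TEXT;
        UPDATE schema_version SET version = 2 WHERE id = 1;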

    Read the article

  • How to make a database service in Netbeans 6.5 to connect to SQLite databases?

    - by farzad
    I use the NetBeans IDE (6.5) and I have a SQLite 2.x database. I installed a JDBC SQLite driver from zentus.com and added a new driver in the NetBeans Services panel. I then tried to connect to my database file from Services > Databases using this URL: jdbc:sqlite:/home/farzad/netbeans/myproject/mydb.sqlite but it fails to connect. I get this exception:

        org.netbeans.modules.db.dataview.meta.DBException: Unable to Connect to database : DatabaseConnection[name='jdbc:sqlite://home/farzad/netbeans/myproject/mydb.sqlite [ on session]']
            at org.netbeans.modules.db.dataview.output.SQLExecutionHelper.initialDataLoad(SQLExecutionHelper.java:103)
            at org.netbeans.modules.db.dataview.output.DataView.create(DataView.java:101)
            at org.netbeans.modules.db.dataview.api.DataView.create(DataView.java:71)
            at org.netbeans.modules.db.sql.execute.SQLExecuteHelper.execute(SQLExecuteHelper.java:105)
            at org.netbeans.modules.db.sql.loader.SQLEditorSupport$SQLExecutor.run(SQLEditorSupport.java:480)
            at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
        [catch] at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)

    What should I do? :(

    Read the article

  • How do I list all tables in all databases in SQL Server in a single result set?

    - by msorens
    I am looking for T-SQL code to list all tables in all databases in SQL Server (at least in SS2005 and SS2008; would be nice to also apply to SS2000). The catch, however, is that I would like a single result set. This precludes the otherwise excellent answer from Pinal Dave: sp_msforeachdb 'select "?" AS db, * from [?].sys.tables' The above stored proc generates one result set per database, which is fine if you are in an IDE like SSMS that can display multiple result sets. However, I want a single result set because I want a query that is essentially a "find" tool: if I add a clause like WHERE tablename like '%accounts' then it would tell me where to find my BillAccounts, ClientAccounts, and VendorAccounts tables regardless of which database they reside in.
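    One approach not spelled out in the question (a hedged sketch; the temp-table and column names are illustrative) is to let sp_msforeachdb fill a single temp table, so that the final SELECT returns one result set that can be filtered like a "find" tool:

        -- Collect every database's tables into one temp table, then query it once.
        IF OBJECT_ID('tempdb..#AllTables') IS NOT NULL DROP TABLE #AllTables;
        CREATE TABLE #AllTables (DbName sysname, SchemaName sysname, TableName sysname);

        EXEC sp_msforeachdb '
            INSERT INTO #AllTables (DbName, SchemaName, TableName)
            SELECT ''?'', s.name, t.name
            FROM [?].sys.tables AS t
            JOIN [?].sys.schemas AS s ON s.schema_id = t.schema_id;';

        SELECT DbName, SchemaName, TableName
        FROM #AllTables
        WHERE TableName LIKE '%accounts'
        ORDER BY DbName, SchemaName, TableName;

    Note that sys.tables exists only in SQL Server 2005 and later; a SQL Server 2000 instance would need sysobjects instead.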

    Read the article

  • LINQ to SQL for tables across databases. Or View?

    - by BritishDeveloper
    I have a Message table and a User table. Both are in separate databases. There is a userID in the Message table that is used to join to the User table to find things like userName. How can I create this in LINQ to SQL? I can't seem to do a cross database join. Should I create a View in the database and use that instead? Will that work? What will happen to CRUD against it? E.g. if I delete a message - surely it won't delete the user? I'd imagine it would throw an error. What to do? I can't move the tables into the same database!
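    For what it's worth, the view approach could look like the hedged sketch below (database, schema and column names are made up for illustration). Because the view lives in the same database as Message, LINQ to SQL can map it like a local, read-only table; deleting a Message touches only the Message table, since the view is just a stored query over the remote User table.

        -- Runs in the database that holds the Message table; names are placeholders.
        CREATE VIEW dbo.vw_UserLookup
        AS
        SELECT u.UserID, u.UserName
        FROM UsersDb.dbo.[User] AS u;
        GO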

    Read the article

  • How would I implement separate databases for reading and writing operations?

    - by Matt
    I am interested in implementing an architecture with two databases: one for read operations and the other for writes. I have never implemented something like this; I have always built single-database, highly normalised systems, so I am not quite sure where to begin. This question has a few parts. 1. What would be a good resource for finding out more about this architecture? 2. Is it just a question of replicating between two identical schemas, or would the schemas differ depending on the operations, and would normalisation vary too? 3. How do you ensure that data written to one database is immediately available for reading from the second? Any further help, tips, or resources would be appreciated. Thanks.

    Read the article

  • Can we use union of two sqlite databases with same tables for Core Data?

    - by Tofrizer
    Hi All, I have an iPhone Core Data app with a pre-populated SQLite "baseline" database. Can I add a second, smaller SQLite database with the same tables as my pre-populated "baseline" database but with additional / complementary data, such that Core Data will happily union the data from both databases and, ultimately, present it to me as if it were all a single data source? The idea I had is: 1) the "baseline" database never changes; 2) I can download the smaller "complementary" SQLite database for additional data as and when I need to (I'm assuming downloading a SQLite database is allowed; please comment if otherwise); 3) Core Data is then able to union the data from 1 and 2. I can then reference this unified data through my defined Core Data managed object model. Hope this makes sense. Thanks in advance.

    Read the article

  • Which relational databases exist with a public API for a high level language?

    - by Jens Schauder
    We typically interface with an RDBMS through SQL; i.e. we create a SQL string and send it to the server through JDBC, ODBC or something similar. Are there any RDBMSs that allow direct interfacing with the database engine through some API in Java, C#, C or similar? I would expect an API that allows constructs like this (in some arbitrary pseudocode):

        Iterator iter = engine.getIndex("myIndex").getReferencesForValue("23");
        for (Reference ref : iter) {
            Row row = engine.getTable("mytable").getRow(ref);
        }

    I guess something like this is hidden somewhere in (and available from) open source databases, but I am looking for something that is officially supported as a public API, so that one finds at least a note in the release notes when it changes. In order to make this a question that actually has a 'best' answer: I prefer languages in the order given above, and I will prefer mature APIs over prototypes and research work, although those are welcome as well.

    Read the article

  • New Source Database Added for EBS 12 + 11gR2 Transportable Tablespaces

    - by John Abraham
    The Transportable Tablespaces (TTS) process was originally certified for the migration of E-Business Suite R12 databases going from a source database of 11gR1 or 11gR2 to a target of 11gR2. This certification has now been expanded to include a source database of 10gR2 (10.2.0.5) - this will potentially save time for existing 10gR2 customers, as they can skip a crucial upgrade step prior to performing the platform migration. The migration process requires an updated Controlled patch delivered by the Oracle E-Business Suite Platform Engineering team, i.e. it requires a password obtainable from Oracle Support. We released the patch in this manner to gauge uptake, and to help identify and monitor any customer issues due to the nature of this technology. This patch has been updated to include support for 10gR2 as a source database.

    Does it meet your requirements? Note that for migration across platforms of the same "endian" format, users are advised to use the Transportable Database (TDB) migration process instead for large databases. The endian format of target platforms can be verified by querying the view V$DB_TRANSPORTABLE_PLATFORM using SQL*Plus (connected as sysdba) on the source platform:

        SQL> select platform_name from v$db_transportable_platform;

    If the intended target platform does not appear in the output, it is of a different endian format from the source. Consequently, database migration will need to be performed via Transportable Tablespaces (for large databases) or export/import.

    The use of Transportable Tablespaces can greatly speed up the migration of the data portion of the database. However, it does not affect metadata, which must still be migrated using export/import. We recommend that users initially perform a test migration on their database, using export/import with the 'metrics=y' parameter. This will help identify the relative amounts of data and metadata, and provide a basis for assessing likely gains in timing. In general, the larger the amount of data (compared to metadata), the greater the reduction in downtime that can be expected from using TTS as a migration process. For smaller databases, or for those that have relatively little data compared to metadata, TTS will not be as beneficial for cross-endian migration, and the use of export/import (Data Pump) for the whole database is recommended.

    Where can I find more information?
    Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2 Enterprise Edition (My Oracle Support Document 1311487.1)
    Oracle Database Administrator's Guide 11g Release 2 (11.2)

    Related Articles:
    Database Migration using 11gR2 Transportable Tablespaces Now Certified for EBS 12
    New Source Databases Added for Transportable Tablespaces + EBS 11i
    10gR2 Transportable Tablespaces Certified for EBS 11i
    Migrating E-Business Suite Release 11i Databases Between Platforms
    Migrating E-Business Suite Release 12 Databases Between Platforms

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don't miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog's birthday, and I want to do a fun, informative blog post. Five years ago today I started this blog. Intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don't want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases.

    If you want to be technical, databases as we know them today only date back to the late 1960s and early 1970s, when computers began to keep records and store memories. But the idea of memory storage didn't just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn't decide they suddenly wanted to read novels; they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases.

    If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database.

    Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives when he accidentally set fire to it. As any programmer knows who has forgotten to hit "save" or has experienced a sudden power outage, thousands of hours of work were lost in a single instant.

    Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like!

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Is running multiple databases on login going to make my Mac really slow?

    - by Walrus the Cat
    Some time ago I installed Postgres, and the launch agent that causes it to run when I log in. Just now, I did the same thing for Mongo, and I was just about to do it for Couch. I don't remember if I ever did it for MySQL, but I probably did. Mongo and Couch are just 'when I have time to look into it' sort of things, but I don't want to have to remember to start them when I do. I have a 2.4 GHz processor and 8 GB of RAM. Is this sort of behavior going to significantly impact my computer's performance? Should I be scrambling to uninstall all but the database I'm currently using, or can I install all the things and run them all the time? Thanks

    Read the article

  • How can I automate or script daily downloads for any new anti- virus databases, and then have the program scan my drive?

    - by Macgrimm
    Howdy all Super Users! I humbly ask if any Super User can direct this long-time, gray-haired Apple tech in the right direction on this issue. I believe there are probably many ways to skin this cat, but I am looking for simply the best, most unattended way to get it done. Any help will be greatly appreciated. Also (I know there is much better software out there for the Mac, so please don't go there! The politics of this company dictate which antivirus we have to use), without any further wait: basically I am trying to automate two very important functions of McAfee anti-virus for Mac. First, I want to automate the process of retrieving new virus definition files, and second, I want to automate the process of scanning for viruses. It turns out that in McAfee Anti-Virus for the Mac both are manual functions, left up to the user (per user account) to perform. Depending on all of about 150 Mac users to perform these 2 tasks themselves gets around 65% compliance. My question then is: I can use the command line, such as (open /Applications/McAfee\ Security.app), and it will open up the Security Console. But how can I command McAfee to go out and grab the definition files and scan the computer? I have to admit I am at a crossroads and MacAltimers has set in. I would really appreciate it if any of you "Super ~ Users" could help me out of this MacAltimers loss of what to do. Thanks to all up front, Macgrimm

    Read the article

  • Auto switching databases from a rails app gracefully from the ApplicationController?

    - by Zaqintosh
    I've seen this post a few times, but haven't really found the answer to this specific question. I'd like to run a Rails application that switches databases based on the detected request.host (imagine I have two subdomains pointing to the same Rails app and server IP address: myapp1.domain.com and myapp2.domain.com). I'm trying to have myapp1 use the default "production" database, and myapp2 requests always use the alternative remote database. Here is an example of what I tried to do in ApplicationController that did not work:

        class ApplicationController < ActionController::Base
          helper :all
          before_filter :use_alternate_db

          private

          def use_alternate_db
            if request.host == 'myapp1.domain.com'
              regular_db
            elsif request.host == 'myapp2.domain.com'
              alternate_db
            end
          end

          def regular_db
            ActiveRecord::Base.establish_connection :production
          end

          def alternate_db
            ActiveRecord::Base.establish_connection(
              :adapter => 'mysql',
              :host => '...',
              :username => '...',
              :password => '...',
              :database => 'alternatedb'
            )
          end
        end

    The problem is that when it switches databases using this method, all connections (including valid sessions across the different subdomains) get interrupted. All the examples online have people controlling database connectivity at the model level, but this would involve adding code all over my application. Is there some way to globally switch database connections on a per-request basis in the manner I'm suggesting above WITHOUT having to inject code all over my application? The added complexity here is that I'm using Heroku as a hosting provider, so I have no control at the Apache / Rails application server level. I have looked at solutions like dbcharmer and magicmodels, but none seem to show examples of doing it in the manner that I'm trying to. Thanks for any help!

    Read the article

  • how does one _model_ data from relational databases in clojure ?

    - by sandeep
    I have asked this question on Twitter as well as on the #clojure IRC channel, yet got no responses. There have been several articles about Clojure-for-Ruby-programmers and Clojure-for-Lisp-programmers, but the missing part is Clojure for ActiveRecord programmers. There have been articles about interacting with MongoDB, Redis, etc., but these are key-value stores at the end of the day. However, coming from a Rails background, we are used to thinking about databases in terms of associations and inheritance - has_many, polymorphic, belongs_to, etc. The few articles about Clojure/Compojure + MySQL (ffclassic) delve right into SQL. Of course, it might be that an ORM induces an impedance mismatch, but the fact remains that after thinking like ActiveRecord, it is very difficult to think any other way. I believe that relational DBs lend themselves very well to the object-oriented paradigm because they are, essentially, sets. Something like ActiveRecord is very well suited for modelling this data. For example, a blog, simply put:

        class Post < ActiveRecord::Base
          has_many :comments
        end

        class Comment < ActiveRecord::Base
          belongs_to :post
        end

    How does one model this in Clojure, which is so strictly anti-OO? Perhaps the question would have been better if it referred to all functional programming languages, but I am more interested from a Clojure standpoint (and in Clojure examples).

    Read the article

  • Database Mirroring of SQL server

    - by jbp117
    I have two databases that are mirrored to another server using database mirroring. The mirror server has to be down for some reason for a few days. The production server now shows the principal databases in the (PRINCIPAL / DISCONNECTED) state, and clients can still access those databases. So what happens when they keep adding data to these databases? Will the data get committed, or will it wait until the mirror comes back up?
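    Not part of the original question, but as a hedged sketch of how to check where each mirrored database stands while the partner is down, the mirroring catalog view can be queried on the principal (available in SQL Server 2005 and later):

        -- Show the mirroring role and state for every mirrored database.
        SELECT d.name,
               m.mirroring_role_desc,
               m.mirroring_state_desc
        FROM sys.database_mirroring AS m
        JOIN sys.databases AS d
            ON d.database_id = m.database_id
        WHERE m.mirroring_guid IS NOT NULL;

    Broadly speaking, while the partner is DISCONNECTED the principal keeps accepting and committing transactions, and the unsent log records accumulate in its send queue; the transaction log cannot be truncated until the mirror reconnects (or mirroring is removed), which is the main thing to watch over a few days.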

    Read the article

  • How to migrate the data directory for MSSQL Server?

    - by Ryan
    I have an installation of MSSQL where I would like to move the data directory to another drive so that all the existing databases are located there and all new databases are created there, as well as the backups, logs, etc. I know I can detach/attach the existing databases, but what about the rest of the settings (backup, new databases)? Is this possible without an uninstall/reinstall? Thank you.
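    For the existing databases, the detach / copy / re-attach step could look like the hedged sketch below (database name and paths are placeholders). The default data and log locations for new databases are a separate instance-level setting (Server Properties > Database Settings in Management Studio), and the default backup directory is a per-instance registry setting, so neither is changed by moving files alone.

        USE master;
        GO
        -- Detach, then copy MyDatabase.mdf and MyDatabase_log.ldf to the new drive at the OS level.
        EXEC sp_detach_db @dbname = N'MyDatabase';
        GO
        -- Re-attach the database from its new location.
        CREATE DATABASE MyDatabase
            ON (FILENAME = N'D:\SQLData\MyDatabase.mdf'),
               (FILENAME = N'D:\SQLData\MyDatabase_log.ldf')
            FOR ATTACH;
        GO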

    Read the article

  • Cross-database transactions from one SP

    - by Michael Bray
    I need to update multiple databases with a few simple SQL statements. The databases are configured in SQL Server using 'Linked Servers', and the SQL Server versions are mixed (SQL 2008, SQL 2005, and SQL 2000). I intend to write a stored procedure in one of the databases, but I would like to do so using a transaction to make sure that each database gets updated consistently. Which of the following is the most accurate: Will a single BEGIN/COMMIT TRANSACTION work to guarantee that all statements across all databases are successful? Will I need multiple BEGIN TRANSACTIONs, one for each individual set of commands on a database? Are transactions even supported when updating remote databases? I would need to execute a remote SP with embedded transaction support. Note that I don't care about any kind of cross-database referential integrity; I'm just trying to update multiple databases at the same time from a single stored procedure, if possible. Any other suggestions are welcome as well. Thanks!
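    A hedged sketch of the distributed-transaction route (server, database, and table names are placeholders): T-SQL can wrap local and linked-server updates in one atomic unit with BEGIN DISTRIBUTED TRANSACTION, provided MSDTC is running and configured on every participating server; whether the SQL 2000 box cooperates would need testing.

        BEGIN DISTRIBUTED TRANSACTION;

            UPDATE dbo.LocalTable
            SET Flag = 1
            WHERE ID = 42;

            UPDATE [RemoteServer].[RemoteDb].dbo.RemoteTable
            SET Flag = 1
            WHERE ID = 42;

        COMMIT TRANSACTION;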

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance
    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, only those databases that are in Full Recovery are listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and for none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks. Clicking on these will open a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies
    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup to the network location you define after the backup completes, so it doesn't cause any additional blocking or resource use within the backup process. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get done within the native Microsoft backup engine.

    Offsite Storage
    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard and even into the command-line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product to get this done. You'll be directed to the web site for our hosted storage where you can set up an account.
    Compression
    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can sacrifice some CPU time and get even more compression. You decide. For a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native compression was 53 MB; our file was 33 MB. That's a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance to help you determine which level of compression would be right for you and your system. For this test, if you wanted maximum compression with minimum CPU use you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources. And that compression is still better than the native one by 10%.

    Restore Testing
    Backups are vital, but a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't do a complete test of the backup. The only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide whether you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice) you can just overwrite the original database if it's there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run, and which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of whether the tests pass. All this is automated and scheduled through a SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.
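    For comparison, the native building blocks mentioned above look roughly like the following (a hedged sketch; the path is a placeholder, and WITH COMPRESSION requires an edition that supports it):

        -- Back up with page checksums so corruption is caught at backup time.
        BACKUP DATABASE AdventureWorks2012
            TO DISK = N'D:\Backups\AdventureWorks2012.bak'
            WITH COMPRESSION, CHECKSUM, INIT;

        -- Verify the backup header and its checksums without restoring it.
        RESTORE VERIFYONLY
            FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
            WITH CHECKSUM;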
    Single Point of Management
    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases' and servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard
    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary
    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server natively. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • Exadata support for ACFS (and thus, 10gR2) now available!

    - by Robert Freeman
    Really? Exadata, ACFS and 10gR2? If you work with Exadata you are probably aware that ACFS has not been supported - until now! ACFS is now supported on Exadata if you are running Grid Infrastructure version 12.1.0.2 or later. This new support is described in MOS note 1326938.1. Exadata support for ACFS is also mentioned in MOS note 888828.1, which is the king of all Exadata notes on MOS. The upshot is that you can now run Oracle Database 10gR2 on Exadata using ACFS as the storage for the Oracle Database.

    Don't Overreact and Just Throw Everything on ACFS!
    First, let's be clear that ACFS is not an alternative to running your Exadata databases on ASM. If you are running any production or non-production performance-sensitive Oracle databases on 11.2 or 12.1, then you should be running them on ASM disks that are associated with the storage cells. The use case for ACFS is generally limited to the following: running any Oracle 10gR2 databases on Exadata, and running Oracle 11gR2 development or test databases that require rapid cloning and that do not require the performance benefits of the Exadata storage cells. If you are running Oracle Database 12c and you need snapshot/clone kinds of capabilities, then you should be using Oracle Multitenant and the features present in that option (remember, though, that Multitenant is a licensed option).

    The Fine Print
    There are some requirements that you will need to meet if you are going to run ACFS on Exadata: you have to use Oracle Linux; you must use GI 12.1.0.2 or later; and if you wish to use HCC, then you must apply the fix for bug 19136936 to your system. This bug and its associated patch do not appear on MOS (as of the time that I wrote this), so you will need to open an SR and get Support to provide the patch for you.

    The Best Use Case for ACFS
    Even though Oracle Database 10gR2 is at end of life, it remains in use in a large number of places. This has caused problems when choosing to implement Exadata as a consolidation platform, or when choosing it during a hardware refresh process. Now that ACFS is supported, Exadata has become even more flexible and affords customers greater flexibility when migrating to Exadata and Engineered Systems. While all of the features of Exadata might not be available to a 10.2.0.4 database, certainly the improved processing capabilities of Exadata, with its fast-as-heck InfiniBand network fabric, additional memory, reduced power requirements and a whole host of other features, justify moving these databases to Exadata now. This will also make it easier to upgrade these databases when the time comes!

    Read the article

  • Big Data – Learning Basics of Big Data in 21 Days – Bookmark

    - by Pinal Dave
    Earlier this month I had a great time writing the Basics of Big Data series. This series received a great response and I have received lots of good comments; I am going to follow up this basics series with a further in-depth series in the near future. Here is the consolidated blog post where you can find all 21 days' blog posts together. Bookmark this page for future reference.

    Big Data – Beginning Big Data – Day 1 of 21
    Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21
    Big Data – Evolution of Big Data – Day 3 of 21
    Big Data – Basics of Big Data Architecture – Day 4 of 21
    Big Data – Buzz Words: What is NoSQL – Day 5 of 21
    Big Data – Buzz Words: What is Hadoop – Day 6 of 21
    Big Data – Buzz Words: What is MapReduce – Day 7 of 21
    Big Data – Buzz Words: What is HDFS – Day 8 of 21
    Big Data – Buzz Words: Importance of Relational Database in Big Data World – Day 9 of 21
    Big Data – Buzz Words: What is NewSQL – Day 10 of 21
    Big Data – Role of Cloud Computing in Big Data – Day 11 of 21
    Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21
    Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21
    Big Data – Operational Databases Supporting Big Data – Columnar, Graph and Spatial Database – Day 14 of 21
    Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21
    Big Data – Interacting with Hadoop – What is PIG? – What is PIG Latin? – Day 16 of 21
    Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21
    Big Data – Basics of Big Data Analytics – Day 18 of 21
    Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21
    Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21
    Big Data – Final Wrap and What Next – Day 21 of 21

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Survey: How much data do you work with?

    - by James Luetkehoelter
    Andy isn't the only one who can ask a survey question. This is something I am really curious about, because many of the answers, recommendations or rants in blogs are not universally applicable to every database - small databases must sometimes be treated differently, and uber databases are just a pain (and fun at the same time). So, how would you classify most of the databases you work with? 1) Up to 50 GB; 2) 50-500 GB; 3) 500 GB - 2 TB; 4) DEAR GOD THAT'S TOO MUCH INFORMATION!

    Read the article

  • How can I manage SQL CE databases in SQL Server Management Studio?

    - by Edward Tanguay
    I created a SDF (SQL CE) database with Visual Studio 2008 (Add / New Item / Local Database). Is it possible to edit this database with SQL Server Management Studio? I tried to attach it but it only offered .mdf and attaching a .sdf file results in "failed to retrieve data for this request". If so, is it possible to create SDF files with Management Studio as well? Or are we stuck with the simple interface of the Visual Studio 2008 database manager?

    Read the article

  • How can I manage SQL CE databases in SQL Server Management Studio?

    - by Arul
    Dear all, I have only SQL Server 2005 Express Edition and VS 2005. How do I create my .sdf file, and how do I create tables in that file? I am developing a Smart Device application. Is it possible to access the SQL Server 2000 database without using an .SDF file? Note: on my system I have VS 2005, SQL Server 2000, and SQL Server 2005 Express Edition. I also installed the MS SQL Server 2005 Compact Edition Developer SDK [ENU]. In my SQL Server 2005 Management Studio, there is no SQL Server Compact Edition entry in the Engine Type combo box. What do I need to do to get my application running properly with the database? Thanks, and thanks for the previous one also.

    Read the article

  • Stairway to Database Source Control Level 2: Getting a Database into Source Control

    In this level, we're going to continue the philosophy of learning by example, and get a database into our SVN repository. We will also consider our overall approach to source control for databases, and the manner in which our team will develop these databases concurrently.

    24% of devs don't use database source control – make sure you aren't one of them. Version control is standard for application code, but databases haven't caught up. So what steps can you take to put your SQL databases under version control? Why should you start doing it? Read more to find out…

    Read the article
