Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 14/189 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Columnar Databases

    - by jchang
    Ingres just published a TPC-H benchmark for VectorWise, an analytic database technology employing 1) SIMD processing (Intel SSE 4.2), 2) better memory optimizations to leverage on-chip cache, 3) compression, and 4) column-based storage. Ingres originated as a research project at UC Berkeley (see Wikipedia) in the 1970s, and has since become a commercially supported, open source database system. Apparently, Ingres project people later founded Sybase. So Ingres, in a sense, is the grandfather (or perhap...(read more)

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, mysql, mongodb, facebook, and more). My issue is, I've got a good amount of code written that I want to start testing - as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the mysql execution and returning a static dataset. In other words, I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract too, so hopefully someone has some experience and can guide me in the right direction. Any advice, or reading I can do, is more than welcome. Thanks in advance.

    Read the article

  • Actually utilizing relational databases for entity systems

    - by Marc Müller
    Recently I was researching several entity systems and obviously I came across T=Machine's fantastic articles on the subject. In Part 5 of the series the author uses a relational schema to explain how an entity system is built and works. Since reading this, I have been wondering whether actually using a compact SQL library would be fast enough for real-time usage in video games. Performance seems to be the main issue with a full-blown SQL database for management of all entities and components. However, as mentioned in T=Machine's post, basically all access to data inside the SQL DB is done sequentially by each system over each component. Additionally, using a library like SQLite, one could easily improve performance by storing the entity data exclusively in RAM to increase access speeds. Disregarding possible performance issues, using a SQL database would, in my opinion, allow for a very intuitive implementation of entity systems and bring along certain other benefits, like easy de/serialization of game states and consistency checks like the uniqueness of entity IDs. Edit for clarification: The main question was whether using a SQL database for the actual entity management (not just storing the game state on disk) in a real-time game would still yield a framerate appropriate for a game, or whether someone is aware of projects that demonstrate SQL in a video game.
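
    A minimal sketch of what such a schema could look like, assuming SQLite opened as an in-memory database (':memory:') so all access stays in RAM; the table and column names here are illustrative, not taken from the article:

        -- One row per entity; the INTEGER PRIMARY KEY gives unique IDs for free.
        CREATE TABLE entity (
            entity_id INTEGER PRIMARY KEY
        );

        -- One table per component type, keyed by the owning entity.
        CREATE TABLE position_component (
            entity_id INTEGER PRIMARY KEY REFERENCES entity(entity_id),
            x REAL NOT NULL,
            y REAL NOT NULL
        );

        -- A 'system' pass is then a single sequential statement over one component:
        UPDATE position_component SET x = x + 1.0, y = y + 0.5;

    Serializing the game state then reduces to copying the database file (or using SQLite's backup API), which is one of the benefits mentioned above.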

    Read the article

  • How to get experience in large scale databases?

    - by Justin
    I have written applications that are very small scale, and the code I write works fine for them. But I have often wondered how the server-side code I write would scale up from hundreds of queries per day to millions. Also, when looking at possible jobs/projects, people are often looking for developers with experience in this sort of high-traffic database design, so I would at least like to be able to say: I haven't gotten to work on a project that was this popular, but I have at least tried to simulate it. Are there tools or frameworks that can generate a lot of traffic, or at least simulate what would happen with traffic at different orders of magnitude, so I could get some practice writing optimized code for higher-traffic applications?
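
    Dedicated load tools such as mysqlslap or sysbench are the usual answer, but even plain SQL can produce a crude simulation. A sketch of the idea, assuming a MySQL server and a hypothetical orders table:

        -- Replay a representative query many times with randomized parameters.
        DELIMITER //
        CREATE PROCEDURE simulate_load(IN iterations INT)
        BEGIN
            DECLARE i INT DEFAULT 0;
            WHILE i < iterations DO
                SELECT COUNT(*) FROM orders
                WHERE customer_id = FLOOR(1 + RAND() * 10000);
                SET i = i + 1;
            END WHILE;
        END //
        DELIMITER ;

        CALL simulate_load(100000);  -- run from several connections for concurrency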

    Read the article

  • Auditing DDL Changes in SQL Server databases

    Even where developers aren't using Source Control, it is still possible to automate the process of tracking the changes being made to a database and putting them into Source Control, in order to record what changed and when. You can even get an email alert when it happens. With suitable scripting, this works even if you don't have direct access to the live database. Grant shows how easy this is with SQL Compare.
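
    The article's approach is built around SQL Compare, but for illustration, SQL Server's own DDL triggers can capture the same events. A minimal sketch, with hypothetical table and trigger names:

        -- Audit table to hold one row per DDL event.
        CREATE TABLE dbo.DDLAudit (
            EventTime DATETIME NOT NULL DEFAULT GETDATE(),
            LoginName SYSNAME  NOT NULL,
            EventData XML      NOT NULL
        );
        GO

        -- Fires on any DDL statement in this database and records who did what.
        CREATE TRIGGER trg_AuditDDL
        ON DATABASE
        FOR DDL_DATABASE_LEVEL_EVENTS
        AS
        BEGIN
            INSERT INTO dbo.DDLAudit (LoginName, EventData)
            VALUES (ORIGINAL_LOGIN(), EVENTDATA());
        END;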

    Read the article

  • Cloud consolidation handling multi databases

    - by llaszews
    I have spoken about virtualization and its different types, which include OS zones, application server domains, database schemas, VLANs and other approaches. Another approach is to create a virtually federated database in the cloud, and dbSpaces is a company with technology to do exactly that. dbSpaces is a Virtual Database technology that allows an organisation, through a single Virtual Database, to access multiple data sources (or database spaces) in real time. Additionally, dbSpaces can be configured to access an organisation's data internally using a remote gateway, so that their dbSpace is seamless across the public and private cloud.

    Read the article

  • Designing Databases for Rapid Resilience

    As the volume of data increases, DBAs need to plan more actively for rapid restores in the event of failure. For this, the intelligent use of filegroups is important, particularly when the Enterprise Edition of SQL Server offers the hope of online restores. How, though, should you arrange your data on the different filegroups? What happens if the primary filegroup gets corrupted? Why back up and restore indexes?
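
    A sketch of the underlying idea, with hypothetical database, filegroup, and path names: keep the primary filegroup small and place bulky data on secondary filegroups, so the critical part can be restored first and the database brought online while the rest follows (Enterprise Edition's online piecemeal restore):

        -- Move bulky data onto its own filegroup.
        ALTER DATABASE Sales ADD FILEGROUP Archive;
        ALTER DATABASE Sales ADD FILE
            (NAME = ArchiveData, FILENAME = 'D:\Data\Sales_Archive.ndf')
            TO FILEGROUP Archive;

        -- After a failure: restore the primary filegroup first...
        RESTORE DATABASE Sales FILEGROUP = 'PRIMARY'
            FROM DISK = 'E:\Backup\Sales_Primary.bak'
            WITH PARTIAL, NORECOVERY;
        RESTORE DATABASE Sales WITH RECOVERY;  -- database is now online

        -- ...then restore the archive filegroup while users are already working.
        RESTORE DATABASE Sales FILEGROUP = 'Archive'
            FROM DISK = 'E:\Backup\Sales_Archive.bak'
            WITH RECOVERY;

    Under the full recovery model, log backups would also have to be restored to bring the later filegroups consistent; this sketch omits that step.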

    Read the article

  • The Rise of NoSQL Databases

    The NoSQL concept has been attracting a lot of attention in recent years, primarily due to big-name production implementations.

    Read the article

  • Steps to Apply a Service Pack or Patch to Mirrored SQL Server Databases

    Planning on patching my SQL Servers to the latest service pack, but not sure how to complete this for an environment that is using database mirroring? In this tip, I will outline the environment and then walk through the process of patching mirrored servers.
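
    The usual sequence, sketched in T-SQL with a hypothetical database name, is to patch the mirror first, fail over so the patched server becomes the principal, and then patch the former principal:

        -- 1. Apply the service pack to the mirror server.
        -- 2. On the current principal, fail over to the patched server:
        ALTER DATABASE Sales SET PARTNER FAILOVER;
        -- 3. Apply the service pack to the former principal (now the mirror).
        -- 4. Optionally fail back once both servers are patched:
        -- ALTER DATABASE Sales SET PARTNER FAILOVER;  -- run on the new principal

    This is only an outline of the common approach; the tip itself walks through the details for a specific environment.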

    Read the article

  • SQL Azure - Creating backups and copies of your databases

    As a DBA, you have always followed the practice of backing up your database (or taking a snapshot of it) before making any changes, so that you can revert to your old database state if something goes wrong. Also, to set up a development or test environment, you use a backup of your database and restore it in the respective environment. If you are moving to SQL Azure, what would you do in these cases, given that backup/restore and database snapshots are not supported as of now?
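
    One workaround SQL Azure does offer is the database copy feature, which creates a transactionally consistent copy on the same or another server. A sketch, with hypothetical database names (run against the master database):

        -- Create a consistent copy to use in place of a backup or test restore.
        CREATE DATABASE Sales_Copy AS COPY OF Sales;

        -- The copy runs asynchronously; check its progress like this:
        SELECT * FROM sys.dm_database_copies;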

    Read the article

  • Mimicking Network Databases in SQL

    Unlike the hierarchical database model, which created a tree structure in which to store data, the network model formed a generalized 'graph' structure that describes the relationships between the nodes. Nowadays, the relational model is used to solve the problems for which the network model was created, but the old 'network' solutions are still being implemented by programmers, even when they are less effective.
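
    A small SQL sketch of the contrast, using a hypothetical parts graph: the network model's links become an ordinary edge table, and a recursive common table expression walks the graph (SQL Server spells it plain WITH; SQLite and PostgreSQL use WITH RECURSIVE):

        CREATE TABLE Part (
            part_id   INT PRIMARY KEY,
            part_name VARCHAR(50) NOT NULL
        );

        -- Each row is one 'link' of the network: parent uses child.
        CREATE TABLE Assembly (
            parent_id INT NOT NULL REFERENCES Part(part_id),
            child_id  INT NOT NULL REFERENCES Part(part_id),
            PRIMARY KEY (parent_id, child_id)
        );

        -- Explode everything reachable from part 1.
        WITH RECURSIVE Explosion AS (
            SELECT parent_id, child_id, 1 AS depth
            FROM Assembly
            WHERE parent_id = 1
            UNION ALL
            SELECT a.parent_id, a.child_id, e.depth + 1
            FROM Assembly a
            JOIN Explosion e ON a.parent_id = e.child_id
        )
        SELECT * FROM Explosion;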

    Read the article

  • Repair Firefox SQLite databases

    - by Bobby
    I had some problems with my RAM (several bluescreens, Windows XP) and now my Firefox databases are damaged. Firefox is working, but my history is gone, and it reports several inconsistencies and errors when executing pragma integrity_check on places.sqlite. Now the question: how do I repair SQLite databases?
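
    For reference, the usual salvage pattern (work on a copy of the file, with Firefox closed) is to let integrity_check report the damage and then dump whatever is still readable into a fresh database:

        -- Inside the sqlite3 shell, list the corruption first:
        PRAGMA integrity_check;

        -- Then, from the command line (a sqlite3 shell feature, not SQL itself),
        -- rebuild a clean file from the readable content:
        --   sqlite3 places.sqlite ".dump" | sqlite3 places_new.sqlite

    Anything the dump cannot read is lost; deleting places.sqlite entirely makes Firefox recreate it, at the cost of the whole history.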

    Read the article

  • Partial sync of Mysql databases on two remote computers

    - by Beck
    Does MySQL replication support delayed sync if the remote slave server is off? For example, I want my development server to have a fresh copy of the databases each morning. It should be a partial sync, not a full copy of the databases each morning. And synchronization should be one-way only, not duplex, to avoid deleting something on the master server. What utilities are out there, or is there native functionality in MySQL itself?
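
    Native replication actually fits this: it is asynchronous and one-way, so a slave that was switched off simply catches up from the master's binary log when it reconnects, and restricting it to chosen databases is done with replicate-do-db in the slave's my.cnf. A sketch with placeholder host, credentials, and log coordinates:

        -- On the slave, point it at the master once:
        CHANGE MASTER TO
            MASTER_HOST = 'master.example.com',
            MASTER_USER = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_LOG_FILE = 'mysql-bin.000001',
            MASTER_LOG_POS  = 4;

        START SLAVE;   -- run each morning to catch up
        -- STOP SLAVE; -- run once the sync is done for the day

    Keeping the dev copy read-only (SET GLOBAL read_only = 1) prevents it from drifting away from the master between syncs.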

    Read the article

  • PHP (A few questions) OO, refactoring, eclipse

    - by jax
    I am using PHP in Eclipse. It works OK: I can connect to my remote site, there is colour coding of code elements, and there are some code hints. I realise this may be too long to answer all questions; if you have a good answer for one part, answering just that is fine. Firstly, general coding: I have found that it is easy to lose track of included files and their variables. For example, if there is a database $cursor, it is difficult to remember or even know that it was declared in the included file (this becomes much worse the more files you include). How are people dealing with this? How are people documenting their code - in particular the required GET and POST data? Secondly, OO development: Should I be going full OO in my development? Currently I have a functions library which I can include, and I have separated each "task" into a separate file. It is a bit nasty but it works. If I go OO, how do I structure the directories in PHP? Java uses packages - what about PHP? How should I name my files: should I use all lower case with _ for spaces ("hello_world.php")? Should I name classes with uppercase like Java ("HelloWorld.php")? Is there a different naming convention for classes and regular function files? Thirdly, refactoring: I must say this is a real pain. If I change the name of a variable in one place, I have to go through the whole document, and each file that included this file, and change the name there too. Of course, the result is errors everywhere. How are people dealing with this problem? In Java, if you change a name in one place it changes everywhere. Are there any plugins to improve PHP refactoring? I am using the official PHP version of Eclipse from their website. Thanks.

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage: from physical, virtual and logical hosts to fault detection and failover software to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great... When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer. Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available. In the webinar, the team will cover: * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability. * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one! * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes. You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software). While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!

    Read the article

  • Exam 70-448 - TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance

    - by DigiMortal
    Another exam I passed was 70-448 - TS: Microsoft SQL Server 2008, Business Intelligence Development and Maintenance. This exam covers Business Intelligence (BI) solution development and maintenance on the SQL Server 2008 platform. It was not an easy exam, but if you study then you can do it. To get prepared for 70-448, it is strongly recommended to read the self-paced training kit and also work through all the examples it contains. If you don't have strong experience with the Microsoft BI platform and SQL Server, then this exam is hard to pass if you just go there and hope to pass somehow. The self-paced training kit is interesting reading, and you will certainly learn a lot of new stuff when preparing for the exam. Questions in the exam are divided into topics as follows: SSIS - 32%, SSAS - 38%, SSRS - 30%. Exam 70-448 gives you the Microsoft Certified Technology Specialist certificate.

    Read the article

  • Getting MySQL work with Entity Framework 4.0

    - by DigiMortal
    Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up one experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data into EF. I will also give some suggestions on how to deploy your applications to hosting and cloud environments.

    MySQL stuff. As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop stuff when I'm offline. The other thing you need is the MySQL Connector for the .NET Framework. Currently there is a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET: MySQL Community Server, Connector/NET 6.3.5. If you are not a big fan of phpMyAdmin, then you can try out a free desktop client for MySQL - HeidiSQL. I am using it and I am really happy with this program. NB! If you have just set up MySQL, then also create a database with a couple of tables in it. To use all features of Entity Framework 4.0, I suggest you use InnoDB or another engine that has support for foreign keys.

    Connecting MySQL to Entity Framework 4.0. Now create a simple console project using Visual Studio 2010 and go through the following steps.

    1. Add a new ADO.NET Entity Data Model to your project. Give the model a name that is informative and that you will be able to recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK.

    2. Set up the database connection. Change the data connection and select MySQL Database as the data source. You may also need to set the provider - there is only one choice. Select it if the data provider combo shows an empty value. Click OK and insert the connection information you are asked for. Don't forget to click the Test Connection button to see if your connection data is okay. If everything works, then click OK.

    3. Insert the context name. Now you should see the following dialog. Insert your data model name for the application configuration file and click OK. Click the Next button.

    4. Select tables for the model. Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" - it is damn annoying to get them out of the model later. Also insert an informative and easy-to-remember name for your model. Click the Finish button.

    5. Define your classes. Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically - that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications, my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members. NB! Don't forget to check the properties of your classes (in the Properties window) and modify their set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities).

    6. Let's test! Now let's write a simple testing program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events I attended.
    using(var context = new MySqlEntities())
    {
        var myEvents = from e in context.Events
                       from a in e.Attendees
                       where a.Person.FirstName == "Gunnar" &&
                             a.Person.LastName == "Peipman"
                       select e;

        Console.WriteLine("My events: ");

        foreach(var e in myEvents)
        {
            Console.WriteLine(e.Title);
        }
    }

    Console.ReadKey();

    And when I run it, I get the result shown in the screenshot on the right. I checked in the database and these results are correct. At first run the connector seems to work slowly, but this is only an effect of the first run. Once the connector is loaded into memory by Entity Framework, it works fast from that point on. Now let's see what we have to do to get our program working in hosting and cloud environments where the MySQL connector is not installed.

    Deploying the application to hosting and cloud environments. If your hosting or cloud environment has no MySQL connector installed, you have to provide the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools): MySQL.Data, MySQL.Data.Entity, MySQL.Web. You can also add references to these assemblies and mark the references as local, so these assemblies are copied to the binary folder of your application. If you have references to these assemblies, then you don't have to include them in your project from the bin folder. Also add the following block to your application configuration file.

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
    ...
      <system.data>
        <DbProviderFactories>
          <add
            name="MySQL Data Provider"
            invariant="MySql.Data.MySqlClient"
            description=".Net Framework Data Provider for MySQL"
            type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,
                  Version=6.2.0.0, Culture=neutral,
                  PublicKeyToken=c5687fc88969c44d"
          />
        </DbProviderFactories>
      </system.data>
    ...
    </configuration>

    Conclusion. It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework, we used the InnoDB engine because it supports foreign keys. It was also easy to query our model. To get our project online, we needed some easy modifications to our project and configuration files.

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >