Search Results

Search found 61944 results on 2478 pages for 'text database'.

  • How to deal with transactions when creating a database connection for each query

    - by webnoob
    In line with this post here, I am going to change my website to create a connection per query, to take advantage of .NET's connection pooling. With this in mind, I don't know how I should deal with transactions. At the moment I do something like this (pseudo code):

        GlobalTransaction = GlobalDBConnection.BeginTransaction();
        try
        {
            ExecSQL("insert into table ..");
            ExecSQL("update some_table ..");
            ....
            GlobalTransaction.Commit();
        }
        catch
        {
            GlobalTransaction.Rollback();
            throw;
        }

    ExecSQL would be like this:

        using (SqlCommand Command = GlobalDBConnection.CreateCommand())
        {
            Command.Connection = GlobalDBConnection;
            Command.Transaction = GlobalTransaction;
            Command.CommandText = SQLStr;
            Command.ExecuteNonQuery();
        }

    I'm not quite sure how to adapt this approach when the connection is created within ExecSQL, because I want the transaction to be shared between both the insert and update routines.
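
    One way to keep connection-per-query and still share a transaction across statements in .NET is an ambient transaction via System.Transactions.TransactionScope: every connection opened inside the scope enlists in the same transaction automatically. A minimal sketch, assuming SQL Server and ADO.NET; the method names and SQL strings are illustrative, not the poster's actual code:

        using System.Transactions;
        using System.Data.SqlClient;

        static void ExecSQL(string connectionString, string sql)
        {
            // Each call opens its own (pooled) connection; the connection
            // auto-enlists in the ambient transaction if one is active.
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = conn.CreateCommand())
            {
                conn.Open();
                cmd.CommandText = sql;
                cmd.ExecuteNonQuery();
            }
        }

        static void DoWork(string connectionString)
        {
            using (TransactionScope scope = new TransactionScope())
            {
                ExecSQL(connectionString, "insert into table ..");
                ExecSQL(connectionString, "update some_table ..");
                scope.Complete(); // commit; disposing without Complete() rolls back
            }
        }

    One caveat: if two enlisted connections are open at the same time (or, on SQL Server 2005, opened one after another), the transaction can escalate to a distributed (MSDTC) transaction, so it is worth keeping the connections sequential and the scope short.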

  • Modular Database Structures

    - by John D
    I have been examining the code base we use at work and I am worried about the size the packages have grown to. The actual code is modular; procedures have been broken down into small, functional (and testable) parts. The issue I see is that we have 100 procedures in a single package - almost an entire domain model. I had thought of breaking these packages down - creating sub-domains centred around each procedure's relationships to other objects: group a bunch of procedures that have 80% of their relationships to three tables, etc. The end result would be a lot more packages, but the packages would be smaller, and I feel the entire code base would be more readable - when procedures cross between two domain models it is less of a struggle to figure out which package they belong to. The problem I now have is what the actual benefit of all this would really be. I looked at the general advantages of modularity:

    1. Re-usability
    2. Asynchronous development
    3. Maintainability

    Yet when I consider our latest development, the procedures within the packages are already reusable. At this advanced stage we rarely require asynchronous development - and when it is required we simply ladder the stories across iterations. So I guess my question is whether people know of reasons why you would break down classes, rather than just the methods inside of classes? Right now I do believe there is an issue with these mega-packages forming, but the only benefit I can really pin down for breaking them down is readability - something that experience gained from working with them would solve.

  • Looking for free, specific Ip2Location Database

    - by Andresch Serj
    I am searching for a free db (like an updated XML or CSV file) that relates IP addresses to specific locations. I want more information than just the country. I want some sort of region or city reference, even if that ends up to be a number that makes no sense to me. Doesn't have to be super correct or always up to date either. It is just to distinguish between user groups and not to monitor or spy on them.

  • Continuous Delivery and the Database

    Continuous Delivery is generally understood to be an effective way of tackling the problems of software delivery and deployment, by making build, integration and delivery into a routine. The way that databases fit into the Continuous Delivery story has been less well defined. Phil Factor explains why he's an enthusiast for databases being full participants, and suggests practical ways of making them so.

  • Set modified date = created date or null on record creation?

    - by User
    I've been following the convention of adding created and modified columns to most of my database tables. I have also been leaving the modified column as null on record creation, and only setting a value on an actual modification. The alternative is to set the modified date equal to the created date on record creation. I've been doing it the former way, but I recently ran into one drawback which is seriously making me think of switching. I needed to set a database cache dependency to find out whether any existing data had been changed or new data added. Instead of being able to do the following:

        SELECT MAX(modified) FROM customer

    I have to do this:

        SELECT GREATEST(MAX(created), MAX(modified)) FROM customer

    The downside is that it's a more complicated query, and slower. Another point: file systems, I believe, usually follow the second convention and set the modified date equal to the created date on creation. What are the pros and cons of the two methods? That is, what are the issues to consider?
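
    With the second convention, the usual trick is to stamp both columns from a single timestamp at insert time, so that MAX(modified) alone answers the "has anything changed?" question. A minimal ADO.NET sketch, assuming MySQL (the GREATEST() above suggests it) and a hypothetical customer table; the driver is MySQL Connector/NET:

        using System;
        using MySql.Data.MySqlClient;

        static void InsertCustomer(string connStr, string name)
        {
            using (var conn = new MySqlConnection(connStr))
            using (var cmd = conn.CreateCommand())
            {
                conn.Open();
                // One timestamp for both columns: modified = created on insert.
                cmd.CommandText =
                    "INSERT INTO customer (name, created, modified) " +
                    "VALUES (@name, @ts, @ts)";
                cmd.Parameters.AddWithValue("@name", name);
                cmd.Parameters.AddWithValue("@ts", DateTime.UtcNow);
                cmd.ExecuteNonQuery();
            }
        }

    Updates would then set modified (and only modified) to the current time, and the cache-dependency check stays the simple MAX(modified) query.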

  • Database in the cloud?

    - by Jlouro
    Some of my recent clients are asking for remote connections to the office server, for standalone work, etc., in WinForms applications. Since the web is essentially remote connections to a server for both data and resources, it should be possible to place both of these in the cloud and have the WinForms apps connect to them as web apps do. Has anyone tested this? Does it work like this? Is it fast enough? Is it secure? What is the best cloud host for this type of work?

  • How should I implement Transaction database EJB 3.0

    - by JamesBoyZ
    In the CustomerTransactions entity, I have the following field to record what the customer bought:

        @ManyToMany
        private List<Item> listOfItemsBought;

    When I think more about this field, there's a chance it may not work, because merchants are allowed to change an item's information (e.g. price, discount, etc.). Hence, this field will not be able to record what the customer actually bought when the transaction occurred. At the moment, I can only think of 2 ways to make it work:

    1. Record the transaction details in a String field. I feel this would be messy if I need to extract some information about the transaction later on.
    2. Whenever the merchant changes an item's information, don't update that item's fields directly; instead, create a new item with all the new information and keep the old item untouched. I feel this is better because I can easily extract information about the transaction later on. However, the downside is that my Item table may contain a lot of rows.

    I'd be very grateful if someone could give me advice on how I should tackle this problem. UPDATE: I'd like to add more information about the current design.

        public class Customer implements Serializable {
            @OneToMany
            private List<CustomerTransactions> listOfTransactions;
        }

        public class CustomerTransactions implements Serializable {
            @ManyToMany
            private List<Item> listOfItemsBought;
        }

        public class Merchant implements Serializable {
            @OneToMany
            private List<Item> listOfSellingItems;
        }
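
    A third option, a structured variant of the first, is to snapshot the fields that matter onto the transaction line itself, so the Item row stays freely editable while the history lives with the transaction. A language-neutral sketch of the shape (shown in C# for consistency with the other sketches on this page, although the question itself is EJB/JPA; all names are illustrative):

        using System;
        using System.Collections.Generic;

        // Captures the item's state at the moment of sale, so later edits
        // to the live Item cannot rewrite transaction history.
        class LineItemSnapshot
        {
            public long ItemId;          // reference back to the live Item
            public string NameAtSale;    // copied when the transaction occurs
            public decimal PriceAtSale;  // copied when the transaction occurs
            public int Quantity;
        }

        class CustomerTransaction
        {
            public long Id;
            public DateTime OccurredAt;
            public List<LineItemSnapshot> Lines = new List<LineItemSnapshot>();
        }

    In JPA terms this would be a child entity or an owned collection of embeddables hanging off CustomerTransactions, rather than a @ManyToMany straight onto Item.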

  • SQL and Database: Where to start! [closed]

    - by Nizar
    First of all, I know just HTML and CSS (this is my background in web development and design), and I have found that before I move to a server-side language I need to learn about databases and SQL. My first question: do you think this order of learning is good (I mean, learning SQL after HTML and CSS)? My second, related question: do I have to learn a lot about SQL and databases, or just the basics? And if you know any good beginners' books, please share their titles.

  • Oracle Text: built-in full-text search for Oracle Database

    - by Yuichi Hayashi
    What is Oracle Text? Oracle Text is the full-text search feature built into Oracle Database, and it can be used in every edition:

    - Oracle Database Enterprise Edition (EE)
    - Oracle Database Standard Edition (SE)
    - Oracle Database Standard Edition One
    - Oracle Database Express Edition (XE)

    If the database was created with the Database Configuration Assistant (DBCA), Oracle Text is installed by default. The steps below set up a simple full-text search over Japanese text. (This article was originally in Japanese; the Japanese string literals were lost in the text encoding and appear as '??'.)

    (1) Grant privileges. Connect as the Oracle Text administrator (ctxsys) and grant the CTXAPP role to the application user (here SCOTT):

        SQL> connect ctxsys/<password>
        SQL> grant ctxapp to scott;

    (2) Create a table and load data:

        SQL> connect scott/tiger
        SQL> create table test (
          2    id   number primary key,
          3    text varchar2(80) );
        SQL> insert into test ( id, text ) values ( 1, 'The cat sat on the mat' );
        SQL> insert into test ( id, text ) values ( 2, 'The dog barked like a dog' );
        SQL> insert into test ( id, text ) values ( 3, '??????????' );
        SQL> commit;

    (3) Create a lexer preference (here named test_lexer) that uses the JAPANESE_VGRAM_LEXER:

        SQL> connect scott/tiger
        SQL> execute ctx_ddl.create_preference('test_lexer','JAPANESE_VGRAM_LEXER');

    A lexer determines how Oracle Text splits text into the tokens it indexes. Because Japanese is written without spaces between words, it has its own lexers:

    - JAPANESE_VGRAM_LEXER: indexes overlapping two-character fragments of the text.
    - JAPANESE_LEXER (Oracle Text 9.0.1 and later): splits the text into words by morphological analysis.

    (4) Create the index on the TEST table:

        SQL> create index test_idx on test ( text )
          2    indextype is ctxsys.context
          3    parameters ('lexer test_lexer');

    (5) Query with the CONTAINS operator (the Japanese search term was also lost and appears as '??'):

        SQL> col text for a30
        SQL> select id, text from test
          2   where contains ( text, '??' ) > 0;

                ID TEXT
        ---------- ------------------------------
                 3 ??????????

  • Oracle Enterprise Manager seminar announcement (original Japanese title garbled)

    - by Yusuke.Yamamoto
    Posted 2010/10/19. Most of this Japanese-language announcement was lost to character encoding. What survives indicates a four-part recorded seminar introducing Oracle Enterprise Manager (EM) for Oracle Database: an overview of managing Oracle Database with EM, how to use EM, the management options available for Enterprise Edition versus Standard Edition, and day-to-day database operations with EM. Details: http://oracletech.jp/products/pickup/000028.html

  • possible to make text messaging with php have a constant "telephone number" value?

    - by Rees
    Hello, I have an iPhone 3G and can successfully send text messages using the PHP mail() function. My issue is that for each message I receive, the "telephone number" associated with the incoming text message changes each time. If possible, I would like to somehow make this number constant, so that I can take advantage of the iPhone's ability to aggregate all text messages from the same telephone number - otherwise my iPhone gets cluttered with messages. Is there a way to do this? An example of the numbers I receive would be 1 (410) 000-001, 1 (410) 000-002, 1 (410) 000-003, etc. Can I make this constant somehow?

        $message = stripslashes("new user just joined!");
        mail("[email protected]", "Subject", "$message");

    Please let me know! Thanks...

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.
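
    On the "where is it stored" question: the user-access mode is a persisted database option, and SQL Server 2005 and later expose it through the sys.databases catalog view (the user_access / user_access_desc columns), alongside the other ALTER DATABASE ... SET options. A small sketch (in C#, like the other examples on this page) that reads the recorded mode:

        using System.Data.SqlClient;

        // Reads the persisted user-access mode for a database:
        // MULTI_USER, SINGLE_USER or RESTRICTED_USER (SQL Server 2005+).
        static string GetUserAccessMode(string connStr, string dbName)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT user_access_desc FROM sys.databases WHERE name = @name",
                conn))
            {
                cmd.Parameters.AddWithValue("@name", dbName);
                conn.Open();
                return (string)cmd.ExecuteScalar();
            }
        }

    Whether the mirroring failover itself carried an old SINGLE_USER setting across is a separate question, but checking this view on both partners before and after a failover would at least show where the value is coming from.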

  • 4GB limitation on these embedded/express DBs good enough? what's next if limitation is reached?

    - by edwin.nathaniel
    I'm wondering how long a (theoretical) desktop app would take to consume the full 4GB limit of these express/embedded database products (SQL Server Express, Oracle Express, SQLite3, etc.), provided that big blobs are stored in the filesystem. Also, what would be your strategy when it hits the 4GB?

    - Archive the old DB.
    - Copy 1-3 months of data to the new DB (consider this a cache strategy?).
    - Start using the new DB from this point onward. (How do you access the old data?)

    I understand that the answer might vary depending on how much data you store in each table/column, but please describe it based on your experience (what kind of desktop app, whether it is write- or read-heavy, and how long you guess it would take to reach the limit).

  • jquery prepend to textarea text()

    - by synapz
    I have a text area. I can set its text with:

        $("#mytextarea").text("foo")

    I can prepend to the text area like this:

        $("#mytextarea").prepend("foo")

    But I cannot prepend to the jQuery text() value like this:

        $("#mytextarea").text().prepend("foo")

    The reason I want to do this is that if my user gets me to prepend this text:

        $("#mytextarea").prepend("<script>alert('lol i haxed uuu!')</script>")

    ...the script executes and I lose. Help?

  • "Cannot perform a differential backup for database "myDb", because a current database backup does not exist"

    - by krimerd
    Hi there, I have what seems to be a pretty common problem when trying to take a differential backup. We have SQL Server 2008 Standard (64-bit) and we use LiteSpeed v5.0.2.0 to take our backups. We take full backups once a week and a differential on a daily basis. The problem is, every time I try to take a diff backup I get the following error:

        VDI open failed due to requested abort. BACKUP DATABASE is terminating
        abnormally. Cannot perform a differential backup for database "myDb",
        because a current database backup does not exist. Perform a full database
        backup by reissuing BACKUP DATABASE, omitting the WITH DIFFERENTIAL option.

    The problem is that I know 100% I have a full backup, because I just double-checked. Only once was I able to take a diff backup, and that was when I took it immediately after a full backup. I have searched around and noticed that this is pretty common (although mostly with SQL 2005), and a solution a lot of people suggest, which I haven't tried yet, is to disable the SQL Server VSS Writer service. The problem with this is that (1) I think I might need this service, since I am using third-party backup software, and (2) I am not sure exactly what the service does and don't want to disable it just like that. Have any of you ever experienced this problem, and how did you go about fixing it? Thank you.

  • database replication for new user signup

    - by Jeff Storey
    I have a database that stores the users of my application. When a new user signs up, a record is inserted into the database for that user. I have a replicated version (slave) of this database (using MySQL for now). What I'm concerned about is this scenario:

    Step 1: A user signs up, and the user record is inserted into the database.
    Step 2: The user then tries to log in, and the login process queries the database for the user. However, this query hits the slave database, where the user record has not yet been replicated, and it returns an error that the user does not exist.

    This is a pretty trivial example, but I can see how it can apply to a lot of cases. Is there a strategy for configuring replicated databases to help prevent this situation?
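
    A common application-side mitigation is read-your-writes routing: for a short window after a user writes (or until the slave is known to have caught up), send that user's reads to the master. A minimal sketch; the class, the grace period and the routing policy are all illustrative, not a specific MySQL feature:

        using System;
        using System.Collections.Concurrent;

        // Routes a user's reads to the master for a grace period after a
        // write, so a login immediately after signup cannot miss the row.
        class ReadRouter
        {
            // Assumed to exceed the worst replication lag you tolerate.
            private static readonly TimeSpan Grace = TimeSpan.FromSeconds(5);

            private readonly ConcurrentDictionary<string, DateTime> _lastWrite =
                new ConcurrentDictionary<string, DateTime>();

            public void RecordWrite(string userKey)
            {
                _lastWrite[userKey] = DateTime.UtcNow;
            }

            // Choose which connection string a read for this user should use.
            public string PickConnection(string userKey, string master, string slave)
            {
                DateTime t;
                bool recent = _lastWrite.TryGetValue(userKey, out t)
                              && DateTime.UtcNow - t < Grace;
                return recent ? master : slave;
            }
        }

    Alternatives on the database side include semi-synchronous replication, or simply performing the post-signup login check against the master as part of the signup flow.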

  • Copy Database Wizard fails on creation of view into another not-yet-copied database

    - by user22037
    Update - I found that doing a manual detach/reattach using the MSDN article "How to: Move a Database Using Detach and Attach (Transact-SQL)" got around this issue. I'll just be creating a script to detach and reattach, but doing the file copies manually. Any info on how to overcome the problems with the wizard would still be helpful in the future. I am in the process of moving around 20 databases from our current server to a new one. When performing the copies, however, I have found that some databases cannot be copied if they have views into other databases that have not yet been copied to the target system. The log file generated says "failed with the following error: "Invalid object name"", in reference to the database in the view. If I first copy just the database referenced in the view, and then in a separate step copy the database containing the view, it is successful. However, some other databases have views into each other, so I can't just adjust the order in which the copies occur. Is there any way to ignore this error and just allow everything to copy?

  • I cannot connect to database from Drupal

    - by Patrick
    Hi, I've uploaded my Drupal website (and related database) to my new server. The database info is:

        host: localhost
        user: user
        pass: pass
        databaseName: database_name

    I've set the following line in the settings.php file:

        $db_url = 'mysqli://user:password@localhost/database_name';

    But what I get is this:

        If you are the maintainer of this site, please check your database settings
        in the settings.php file and ensure that your hosting provider's database
        server is running. For more help, see the handbook, or contact your hosting
        provider.

    I guess the database is running - it always runs and I can access it with phpMyAdmin - so I think the problem is not there. The database and website file uploads were also successful, so I don't know what to do to fix this issue. It is MySQL on an IIS server. Thanks.

  • Oracle 10g Failover Database - How to fail back?

    - by rrkwells
    I want to know how the failover database concept works after recovery. We have configured our application to connect to a backup database in case the production database fails. If this happens, all transactions will happen on that backup database. Once the production DB server is running again, how do we make sure the changes made on the backup database are reflected on the production database? We want to make sure that any changes made while failed over are not lost. We are using Oracle 10g.

  • Event on SQL Server 2008 Disk IO and the new Complex Event Processing (StreamInsight) feature in R2

    - by tonyrogerson
    Allan Mitchell and myself are doing a double act: Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this new and exciting technology; on top of that, I'll be talking about SQL Server disk IO - well, "disk" might not be relevant any more, because I'll be talking about SSDs and Fusion-io too - basically the underpinnings: making sure you understand and get it right, how to monitor, etc. If you've any specific problems or questions, just ping me an email at [email protected]. To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

    18:15 SQL Server and Disk IO
    Tony Rogerson, SQL Server MVP (Tony's Blog; Tony on Twitter)
    In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like Fusion-io. He will look at the effect fragmentation has and how to minimise its impact, look at the file structure of a database, and talk about the benefits multiple files and file groups bring. We will also touch on database mirroring and the effect it has on throughput, and how to get a feeling for the throughput you should expect.

    19:15 Break

    19:45 Complex Event Processing (CEP)
    Allan Mitchell, SQL Server MVP (http://sqlis.com/sqlis)
    StreamInsight is Microsoft's first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful, get us used to some new terminology, and, best of all, show just how easy it is to start building your first CEP/ESP application.

  • Not able to compile dbus-ping-pong

    - by Mahipal
    I have downloaded the files from http://cgit.collabora.com/git/user/alban/dbus-ping-pong.git/tree/ and I am trying to compile them using the command:

        gcc `pkg-config --libs --cflags dbus-1 dbus-glib-1-2 glib-2.0` -o dbus-ping-pong dbus-ping-pong.c

    However, I get these linker errors:

        /tmp/ccmJkxXb.o: In function `g_once_init_enter':
        dbus-ping-pong.c:(.text+0x22): undefined reference to `g_once_init_enter_impl'
        /tmp/ccmJkxXb.o: In function `dbus_glib_marshal_echo_srv__BOOLEAN__STRING_POINTER_POINTER':
        dbus-ping-pong.c:(.text+0x52): undefined reference to `g_return_if_fail_warning'
        dbus-ping-pong.c:(.text+0x79): undefined reference to `g_return_if_fail_warning'
        dbus-ping-pong.c:(.text+0x9d): undefined reference to `g_value_peek_pointer'
        dbus-ping-pong.c:(.text+0xac): undefined reference to `g_value_peek_pointer'
        dbus-ping-pong.c:(.text+0x109): undefined reference to `g_value_set_boolean'
        /tmp/ccmJkxXb.o: In function `echo_ping_class_intern_init':
        dbus-ping-pong.c:(.text+0x122): undefined reference to `g_type_class_peek_parent'
        /tmp/ccmJkxXb.o: In function `echo_ping_get_type':
        dbus-ping-pong.c:(.text+0x162): undefined reference to `g_intern_static_string'
        dbus-ping-pong.c:(.text+0x192): undefined reference to `g_type_register_static_simple'
        dbus-ping-pong.c:(.text+0x1a8): undefined reference to `g_once_init_leave'
        /tmp/ccmJkxXb.o: In function `echo_ping_class_init':
        dbus-ping-pong.c:(.text+0x1cd): undefined reference to `g_type_class_add_private'
        dbus-ping-pong.c:(.text+0x1e2): undefined reference to `dbus_g_object_type_install_info'
        /tmp/ccmJkxXb.o: In function `echo_ping_init':
        dbus-ping-pong.c:(.text+0x1fe): undefined reference to `g_type_instance_get_private'
        /tmp/ccmJkxXb.o: In function `echo_ping':
        dbus-ping-pong.c:(.text+0x21d): undefined reference to `g_strdup'
        /tmp/ccmJkxXb.o: In function `client':
        dbus-ping-pong.c:(.text+0x265): undefined reference to `dbus_g_proxy_new_for_name'
        dbus-ping-pong.c:(.text+0x2c3): undefined reference to `dbus_g_proxy_call'
        dbus-ping-pong.c:(.text+0x2d1): undefined reference to `dbus_g_error_quark'
        dbus-ping-pong.c:(.text+0x2f1): undefined reference to `dbus_g_error_get_name'
        dbus-ping-pong.c:(.text+0x305): undefined reference to `g_printerr'
        dbus-ping-pong.c:(.text+0x31d): undefined reference to `g_printerr'
        dbus-ping-pong.c:(.text+0x328): undefined reference to `g_error_free'
        dbus-ping-pong.c:(.text+0x358): undefined reference to `g_print'
        dbus-ping-pong.c:(.text+0x363): undefined reference to `g_free'
        /tmp/ccmJkxXb.o: In function `main':
        dbus-ping-pong.c:(.text+0x38f): undefined reference to `g_type_init'
        dbus-ping-pong.c:(.text+0x3a3): undefined reference to `dbus_g_bus_get'
        dbus-ping-pong.c:(.text+0x3c7): undefined reference to `g_object_new'
        dbus-ping-pong.c:(.text+0x3df): undefined reference to `g_type_check_instance_cast'
        dbus-ping-pong.c:(.text+0x3f9): undefined reference to `dbus_g_connection_register_g_object'
        dbus-ping-pong.c:(.text+0x406): undefined reference to `dbus_g_connection_get_connection'
        dbus-ping-pong.c:(.text+0x426): undefined reference to `dbus_bus_request_name'
        dbus-ping-pong.c:(.text+0x43a): undefined reference to `g_main_loop_new'
        dbus-ping-pong.c:(.text+0x44a): undefined reference to `g_main_loop_run'

    How do I resolve this issue?

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2.  This process is sometimes also referred to as transportable tablespaces (TTS).What is the Cross-Platform Transportable Tablespace Feature?The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs works on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database - which all of us who have worked in application/database development have had to deal with in one form or another - is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live - I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it - I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task - how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila!

    But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state - more specifically, 4Tb of critical data built up over the last 12 years of running your business - and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing - I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process - any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course.

    Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing - struggling to identify patterns in customers' behaviour. This is useful for us, because we want to try and fit the products we have to different needs - different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness programme, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue - in an attempt to simplify how we understand our customers and our offerings - we built a model. This is an attempt at classifying our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful".

    We've taken this quote to heart - we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are much less far along the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database - a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model, we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity - Baseline, Beginner, Intermediate and Advanced. As I say, this is a model - you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 - Baseline
    - Work directly on live databases.
    - Sometimes work directly in production.
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this.
    - Any tests that we might have are run manually.

    Level D2 - Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system.
    - An attempt is made to keep production in sync with development environments.
    - There is some documentation and planning of manual deployments.
    - Some basic automated DB testing is in place.

    Level D3 - Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT.
    - Database environments are managed.
    - The production environment schema is reproducible from the source control system.
    - There are some automated tests.
    - Have looked at using migration scripts for difficult database refactoring cases.

    Level D4 - Advanced
    - Using continuous integration for database changes.
    - Build, testing and deployment of DB changes carried out through a proper database release process.
    - Fully automated tests.
    - The production system is monitored for fast feedback to developers.

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage - continuous integration and automated release management?

    To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

  • Best approach to accessing multiple data source in a web application

    - by ced
    I have a base web application developed with .NET technologies (ASP.NET), used on our LAN by 30 simultaneous users. From this web application I've developed two verticalizations used by online users. In the future I expect hundreds of simultaneous users. Our company has different locations, and each site uses its own database. The web application needs to retrieve information from all existing databases. Currently there are 3 databases, but future expansion to new offices is not excluded. My question then is: what is the best strategy for a web application to retrieve information from different databases (which have the same schema), where the main objectives are fast data access and high fault tolerance? Are there case studies in the literature that I can take as an example? Do you know some good documents to study? Do you have any tips for implementing this task efficiently? Intuitively I would say that two possible strategies are:

    - perform queries against the different sources in real time and aggregate the data on the fly;
    - create a repository that contains the union of the entities of interest and query the repository directly.
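
    For the first strategy (query the sources in real time and aggregate on the fly), here is a minimal fan-out sketch; the connection strings, query and row shape are illustrative, and per-site failure handling is only hinted at:

        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;
        using System.Threading.Tasks;

        // Runs the same query against each site's database in parallel and
        // merges the results. Assumes identical schemas across sites.
        static async Task<List<string>> QueryAllSitesAsync(
            IEnumerable<string> connectionStrings, string sql)
        {
            var perSite = connectionStrings.Select(async cs =>
            {
                // For fault tolerance, a try/catch around this body could
                // skip or retry an unreachable site instead of failing all.
                var rows = new List<string>();
                using (var conn = new SqlConnection(cs))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    await conn.OpenAsync();
                    using (var reader = await cmd.ExecuteReaderAsync())
                    {
                        while (await reader.ReadAsync())
                            rows.Add(reader.GetString(0));
                    }
                }
                return rows;
            });
            var results = await Task.WhenAll(perSite);
            return results.SelectMany(r => r).ToList();
        }

    The second strategy (a consolidated repository) trades some freshness for simpler and faster queries; replication or a scheduled ETL job into the repository is the usual way to keep it populated.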
