Search Results

Search found 39503 results on 1581 pages for 'database tool'.

  • Secure database connection. DAL .net architecture best practice

    - by Andrew Florko
    We have several applications installed in several departments that interact with a database over the intranet. Users tend to use weak passwords or keep their login/password written on slips of paper where everybody can see them. I'm worried about login/password leakage and want to minimize the consequences. Reducing the database server's attack surface by hiding it from direct intranet access would also be a great idea. I'm thinking about an intermediary data-access service with method-based security; it seems more flexible than table-based or connection-based security on the database server itself, and this approach also makes it possible to hide the database server from the public intranet. What kind of .NET technologies and best practices would you suggest? Thank you in advance!

    Read the article

  • Please suggest the best way to design my database.

    - by Raymond Ho
    I have a table named "Pages" and a table named "Categories". Each entry in the "Pages" table is linked to the "Categories" table. The "Categories" table has 5 entries: "Car", "Websites", "Technology", "Mobile Phones", and "Interest". So each time I add an entry to the "Pages" table, I need to map it to the "Categories" table so the pages are arranged properly. Here are my tables: Pages: id [PK], name, url. Categories: id [PK], Categoryname. Pages2Categories: Pages.id, Categories.id. So my question is, is this the most efficient way to create this kind of relationship between tables? It seems very amateurish.
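    A junction table like Pages2Categories is the standard way to model a many-to-many relationship, so the design above is sound. A minimal SQL sketch of that schema (column types are assumptions, MySQL-style syntax) might look like this:

        CREATE TABLE Pages (
            id   INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(255) NOT NULL,
            url  VARCHAR(255) NOT NULL
        );

        CREATE TABLE Categories (
            id           INT AUTO_INCREMENT PRIMARY KEY,
            Categoryname VARCHAR(100) NOT NULL
        );

        -- Junction table: one row per page/category pairing.
        CREATE TABLE Pages2Categories (
            page_id     INT NOT NULL,
            category_id INT NOT NULL,
            PRIMARY KEY (page_id, category_id),
            FOREIGN KEY (page_id)     REFERENCES Pages(id),
            FOREIGN KEY (category_id) REFERENCES Categories(id)
        );

    The composite primary key prevents the same page from being mapped to the same category twice.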

    Read the article

  • Map null column as 0 in a legacy database (JPA)

    - by Indrek
    Using the Play! framework and its JPASupport class, I have run into a problem with a legacy database. I have the following class: @Entity @Table(name="product_catalog") public class ProductCatalog extends JPASupport { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) public Integer product_catalog; @OneToOne @JoinColumn(name="upper_catalog") public ProductCatalog upper_catalog; public String name; } Some product catalogs don't have an upper catalog, and this is represented as 0 in the legacy database. If I supply upper_catalog as NULL, then, as expected, JPA inserts a NULL value into that database column. How can I force the null values to be written as 0, and converted back the other way when reading from the database?
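    If the JPA mapping itself can't be changed, one database-side workaround (a sketch of a different technique than a mapping-level conversion, assuming you are allowed to add objects to the legacy schema) is a read-only view that presents the 0 convention as NULL; the write direction would still need to be handled in the application or with a trigger:

        -- NULLIF returns NULL when upper_catalog = 0, otherwise the value itself.
        CREATE VIEW product_catalog_v AS
        SELECT product_catalog,
               NULLIF(upper_catalog, 0) AS upper_catalog,
               name
        FROM product_catalog;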

    Read the article

  • Is there a database with git-like qualities?

    - by Mat
    I'm looking for a database where multiple users can contribute and commit new data; other users can then pull that data into their own database repository, all in a git-like manner. A transcriptional database, if you like; does such a thing exist? My current thinking is to dump the database to a single file as SQL, but that could well get unwieldy once it is of any size. Another option is to dump the database and use the filesystem, but again it gets unwieldy once of any size.

    Read the article

  • Multiple user database design

    - by dieguitoweb
    I have to develop a basic social network for an academic purpose, but I need some tips on user management. The users are subdivided into 3 groups with different privileges: admins, analysts and standard users. For every user the following information should be stored in the database: name, last name, e-mail, age, password. I'm not quite sure how I should design the database; I'm deciding between these two solutions: 1) one table called 'users' with a 'role' attribute that says what a user can and can't do, with the permissions managed via PHP; 2) every application user is a database user created with 'CREATE ROLE' (it's a Postgres database) and has permissions on some tables granted with the 'GRANT' statement. You should take into account that the project is for a database exam. Thanks
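    For reference, a minimal Postgres sketch of option 2 (role names, table name and password are assumptions) could look like this:

        -- Group roles that carry the privileges.
        CREATE ROLE admin_group;
        CREATE ROLE analyst_group;
        CREATE ROLE standard_group;

        -- A concrete login user placed into one of the groups.
        CREATE ROLE alice LOGIN PASSWORD 'changeme' IN ROLE analyst_group;

        -- Privileges are granted to the group roles, not to individuals.
        GRANT SELECT                 ON users TO standard_group;
        GRANT SELECT, INSERT, UPDATE ON users TO analyst_group;
        GRANT ALL PRIVILEGES         ON users TO admin_group;

    Option 1 keeps all authorization logic in PHP, which is simpler to demonstrate; option 2 lets the database itself enforce the rules, which may be what a database exam is looking for.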

    Read the article

  • How to send data to a MySQL server to update the server database.

    - by haisergeant
    Hi everyone, I am developing a smoking counter and I need to send all the smoking records (the times at which the user smoked) to a database on the server. The server database is a MySQL database. I know that I must send the data to a PHP page/script, and this script will insert the records into the database. I would like to know: is there another way to update the database? I don't have any knowledge of PHP; I have worked with Java, C/C++ and Objective-C. If you know another way to do this task, please let me know. Any help would be appreciated.

    Read the article

  • Can Django flush its database(s) between every unit test?

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key:

        class ChangeLogTest(django.test.TestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                log = LogEntry.objects.get(id=1)
                assert_log_entry_correct(log)
                log = LogEntry.objects.get(id=2)
                assert_log_entry_correct(log)

    This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3. Now test_one fails. This is actually a two-part question: Is it possible to force ./manage.py test to flush the database between each test case? Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
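    For context, the counter reset this test relies on is what SQL-level TRUNCATE provides; a small MySQL-style sketch (the table name backing the LogEntry model is an assumption):

        -- Removes every row AND resets the AUTO_INCREMENT counter,
        -- unlike DELETE FROM, which leaves the counter where it was.
        TRUNCATE TABLE app_logentry;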

    Read the article

  • Designing Relational Survey Questionnaires Database

    - by user1213055
    I'm trying to build a simple SQL database for the following Access database. Currently there are no relationships, and I just have two tables, male and female, with 6 sections in each form. How can I design it in a better way so the end user can connect to the database and analyze it using STATA or SPSS? I'm really confused about whether I should create one table with all fields or break it down into different tables. The database is specific to this study only, so I'm not looking for a generic survey database where users can create surveys and capture them. Any feedback or suggestion is much appreciated.
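    One common normalization for a fixed questionnaire (a sketch only; table names, column names and types are assumptions) is a single respondents table with sex as a column instead of separate male/female tables, plus one row per answer:

        CREATE TABLE respondents (
            respondent_id  INT PRIMARY KEY,
            sex            CHAR(1) NOT NULL,   -- 'M' or 'F' rather than two tables
            interview_date DATE
        );

        CREATE TABLE answers (
            respondent_id INT NOT NULL REFERENCES respondents(respondent_id),
            section       INT NOT NULL,        -- 1..6, matching the form sections
            question_code VARCHAR(20) NOT NULL,
            answer_value  VARCHAR(255),
            PRIMARY KEY (respondent_id, section, question_code)
        );

    A flat, one-row-per-respondent export for STATA or SPSS can then be produced with a pivot/crosstab query, or by the statistics package itself.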

    Read the article

  • Connecting Android app to MySQL database

    - by opuhliyvladyslav
    Can somebody help me with a question related to using a MySQL database? Yesterday I was making an app that gathers some text data and sends it to a database located on a remote server. I was making a POST request and sending a few text fields to one table in the database. So MY QUESTION IS: can I use MySQL directly (not through the POST method), and how would I use it? (I will be glad to see URLs to solutions or examples.) P.S. When I was sending data to the server I got errors in the database fields when sending Russian characters. What type of encoding should I send from my app to the server?
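    The garbled Russian characters are usually a character-set mismatch rather than an app bug; a hedged MySQL-side sketch (the table name is an assumption) is to make both the table and the connection use UTF-8:

        -- Store the columns as UTF-8 (utf8mb4 covers all of Unicode).
        ALTER TABLE messages CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

        -- The server-side script must declare the same charset on its connection.
        SET NAMES utf8mb4;

    The app should then send its POST body as UTF-8 as well, so the same encoding is used end to end.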

    Read the article

  • Can I use an "IN" filter on an Advantage Database Table (TAdsTable)?

    - by Anthony
    I want to apply a filter to an Advantage table using multiple values for an integer field. The equivalent SQL would be: SELECT * FROM TableName WHERE FieldName IN (1, 2, 3). Is it possible to do the same on an AdsTable without having to repeat the field using an "OR"? I want to write something like: Filter := 'FieldName IN (1, 2, 3)' instead of: Filter := 'FieldName = 1 OR FieldName = 2 OR FieldName = 3'

    Read the article

  • Retrieving images when the database is in a remote location...

    - by sasidhar
    Hi everyone, I am developing an application using Java. My application will be accessed by a number of different users simultaneously, and the database resides on a central server. Access to the database from the remote server is handled just by giving the appropriate IP of the server in the Hibernate configuration file. My question is: I have to store a picture for each user of the database, and I have heard that storing the image in the database and retrieving it from the database is not advised and has a negative impact on performance. Is it so? What are the other possible ways I can implement this? What is the best way to do it? Please help.
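    A common compromise (a sketch only; table and column names are assumptions, MySQL-style column types) is to keep the image bytes out of the rows you query all the time, either by storing just a path/URL, or by isolating the bytes in a table of their own:

        -- Option 1: store only a reference next to the user data;
        -- the image itself lives on a file server or object store.
        ALTER TABLE users ADD COLUMN photo_path VARCHAR(255);

        -- Option 2: if the images must live in the database, keep them in a
        -- separate table so ordinary user queries never drag the bytes along.
        CREATE TABLE user_photos (
            user_id INT PRIMARY KEY REFERENCES users(user_id),
            photo   BLOB NOT NULL
        );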

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs which were found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:
        - Anu's very important blog post describes both the problem and solutions to handle it, including the Test Attachment Cleaner tool and the QFE/CU releases that fix some underlying bugs which prevented the tool from being fully effective.
        - Brian Harry's blog post describes the problem too.
        - A forum thread describes the problem with some solution hints.
        - Ravi Shanker's blog post describes best practices on solving this (TBP).
        - Grant Holliday's blog post describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.
    The problem can be divided into the following areas:
        - Publishing of test results from builds
        - Publishing of manual test results and their attachments in particular
        - Publishing of deployment binaries for use during a test run
        - Bugs in SQL Server preventing total cleanup of data
    (All the published data above is published into the TFS database as attachments.) The test results will include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries that happens for automated test runs, including tests run during a build using code coverage (which will include all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:
        1. Find out if you have a database space issue.
        2. Set up your TFS server to minimize potential database issues.
        3. If you have the problem, clean up the database and otherwise keep it clean.
    Analyze the data. Are your database(s) growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries; I often find queries faster to use because I can tweak them the way I want, but be aware that these queries are undocumented and unsupported and may change whenever the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Server Management Studio session against the SQL Server that hosts your TFS databases and use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers gave a result list (shown as a screenshot in the original post) from which it is pretty easy to see on which collection to start the work.
    Find out which tables are possibly too large. Keep a special watch on the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case, since we learnt from Grant's blog that tbl_Content belongs to the version control category, the only big issue left was the tbl_AttachmentContent table.
    Find out which team projects have possibly too large attachments. In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments, because the team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case we got a result (some names removed), out of more than 100 team projects accumulated over quite some years, where it is pretty obvious that the "Byggtjeneste - Projects" team project is the main one to take care of, with the ones on lines 2-4 as the next ones.
    Check which attachment types take up the most space. It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    From the result we got, it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.
    Check which file types, by their extension, take up the most space. Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do - if you have to do anything at all - and some of the information you need in order to set up your TAC settings file, both for a one-time cleanup and for scheduled maintenance later.
    Get your TFS server and environment properly set up. Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this, install the following set of updates and components (the assumption is that your TFS server is at the SP1 level):
        - Install the QFE for KB2608743, which also contains detailed instructions on its use. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are no longer uploaded. Binaries will still be uploaded if code coverage is enabled in the test settings, or if you change UploadDeploymentItem to true in the testsettings file; be aware that the latter might be reset back to false by another user who hasn't installed this QFE. The hotfix should be installed on the build servers (the build agents), the machine hosting the test controller, local development computers (Visual Studio) and local test computers (MTM). It is not required on the TFS server, the test agents or the build controller - it has no effect on these programs.
        - If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to have this CU installed. There is no such CU for SQL Server 2008 pre-R2. Workaround: if you suspect hanging ghost files, they can - with some mental effort - be deduced from the ghost counters using the following SQL query:

            use master
            SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
                   index_type_desc, ghost_record_count, version_ghost_record_count,
                   record_count, avg_record_size_in_bytes
            FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

          The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL Server, like the TFS and SPS services - that is, all applications that connect to the SQL Server - then restart the SQL Server, and finally start all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases due to this.
        - And of course, install and set up the Test Attachment Cleaner command-line power tool. This should be done following some guidelines from Ravi Shanker: "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say, run TAC every night at 3 AM to delete data that is between 730 and 731 days old) - this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed." This rule minimizes the risk of the ghost-hang problem occurring and further makes it easier for the SQL Server ghosting process to work smoothly. "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system." This is the last step in a three-step process of removing SQL Server data: first the rows are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink command.
    Cleaning out the attachments. The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point to a server URI including the team project collection, and also to a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script that runs the TAC multiple times, once for each team project.
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique: it works, but you need to take care when cleaning up. Grant has shown what their settings file looks like in his blog post, removing all attachments older than 180 days as long as there are no active work items connected to them. This kind of setting can be used to clean out all items, both in a one-off clean-up operation and in regular maintenance. There are two scenarios we need to consider:
        1. Cleaning up an existing overgrown database
        2. Maintaining a server to avoid an overgrown database, using a scheduled TAC
    1. Cleaning up a database which has grown too big due to these attachments. This is a "once" job: we do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff, which can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items (Grant's settings file is an example of the latter). A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    Or, if you only want to remove dlls and pdbs above that size, add an Extensions section; without that section, attachments of all extensions will be deleted:

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start your scheduled maintenance, you should clear out all older items.
    2. Scheduled maintenance using the TAC. If you run a schedule every night and remove old items, you also remove them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with keeping attachments that are linked to active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you could possibly care about, and then have another scheduled TAC task that more specifically takes out the attachments you are not likely to use. This task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun >
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database. You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database task and the Rebuild Index task, and just execute it when you need to.
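    For reference, a hedged T-SQL equivalent of that maintenance plan (the database and table names below are just examples taken from this post) could be:

        -- Reclaim the space freed by a large TAC run.
        DBCC SHRINKDATABASE (Tfs_DefaultCollection);

        -- Shrinking fragments the indexes badly, so rebuild them afterwards.
        ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;
        ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD;

    As the post says, reserve this for the one-off cleanup; do not shrink on a routine schedule.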

    Read the article

  • Oracle Database 11gR2 11.2.0.3 Certified with E-Business Suite on HP-UX PA-RISC

    - by John Abraham
    As a follow-up to our original announcement, Oracle Database 11g Release 2 (11.2.0.3) is now certified with Oracle E-Business Suite Release 11i and Release 12 on the following HP-UX platforms:
        - Release 11i (11.5.10.2 + ATG PF.H RUP 6 and higher): HP-UX PA-RISC (64-bit) (11.31)
        - Release 12 (12.0.4 and higher, 12.1.1 and higher): HP-UX PA-RISC (64-bit) (11.31)
    This announcement for Oracle E-Business Suite 11i and R12 includes:
        - Real Application Clusters (RAC)
        - Oracle Database Vault
        - Transparent Data Encryption (Column Encryption)
        - TDE Tablespace Encryption
        - Advanced Security Option (ASO)/Advanced Networking Option (ANO)
        - Export/Import Process for Oracle E-Business Suite Release 11i and Release 12 Database Instances
        - Transportable Database and Transportable Tablespaces Data Migration Processes for Oracle E-Business Suite Release 11i and Release 12
    References:
        - MOS Document 881505.1 - Interoperability Notes - Oracle E-Business Suite Release 11i with Oracle Database 11g Release 2 (11.2.0)
        - MOS Document 1058763.1 - Interoperability Notes - Oracle E-Business Suite Release 12 with Oracle Database 11g Release 2 (11.2.0)
        - MOS Document 1091086.1 - Integrating Oracle E-Business Suite Release 11i with Oracle Database Vault 11gR2
        - MOS Document 1091083.1 - Integrating Oracle E-Business Suite Release 12 with Oracle Database Vault 11gR2
        - MOS Document 216205.1 - Database Initialization Parameters for Oracle E-Business Suite 11i
        - MOS Document 396009.1 - Database Initialization Parameters for Oracle Applications Release 12
        - MOS Document 761570.1 - Database Preparation Guidelines for an Oracle E-Business Suite Release 12.1.1 Upgrade
        - MOS Document 823586.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 11i
        - MOS Document 823587.1 - Using Oracle 11g Release 2 Real Application Clusters with Oracle E-Business Suite Release 12
        - MOS Document 403294.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 11i
        - MOS Document 732764.1 - Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12
        - MOS Document 828223.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 11i
        - MOS Document 828229.1 - Using TDE Tablespace Encryption with Oracle E-Business Suite Release 12
        - MOS Document 391248.1 - Encrypting Oracle E-Business Suite Release 11i Network Traffic using Advanced Security Option and Advanced Networking Option
        - MOS Document 557738.1 - Export/Import Process for Oracle E-Business Suite Release 11i Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
        - MOS Document 741818.1 - Export/Import Process for Oracle E-Business Suite Release 12 Database Instances Using Oracle Database 11g Release 1 or 11g Release 2
        - MOS Document 1366265.1 - Using Transportable Tablespaces to Migrate Oracle Applications 11i Using Oracle Database 11g Release 2
        - MOS Document 1311487.1 - Using Transportable Tablespaces to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 11g Release 2
        - MOS Document 729309.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 11i Using Oracle Database 10g Release 2 or 11g
        - MOS Document 734763.1 - Using Transportable Database to Migrate Oracle E-Business Suite Release 12 Using Oracle Database 10g Release 2 or 11g
    Please also review the platform-specific Oracle Database Installation Guides for operating system and other prerequisites.

    Read the article

  • Best algorithm/practice when creating a search mechanism for your database?

    - by Alex Hope O'Connor
    I have been designing a database where it is very important to provide users with a good search mechanism. So I was wondering what some of the best practices are for using keywords to search over multiple database tables and return the relevant records? Some other things I am curious about: the user's location, if they provide an address; the speed of the algorithm. Additional information: I am using C# and LINQ-to-SQL.
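    Since LINQ-to-SQL implies SQL Server, one common building block is its full-text search; a hedged sketch (table, column and index names are assumptions, and each searchable table would need its own index) that ranks rows by keyword relevance:

        -- One-time setup: a full-text index over the searchable columns.
        CREATE FULLTEXT CATALOG SearchCatalog;
        CREATE FULLTEXT INDEX ON dbo.Products (Name, Description)
            KEY INDEX PK_Products ON SearchCatalog;

        -- FREETEXTTABLE returns a KEY and a relevance RANK per matching row.
        SELECT p.ProductId, p.Name, ft.[RANK]
        FROM FREETEXTTABLE(dbo.Products, (Name, Description), 'red mountain bike') AS ft
        JOIN dbo.Products AS p ON p.ProductId = ft.[KEY]
        ORDER BY ft.[RANK] DESC;

    Results from several tables can then be combined with a UNION of such ranked queries, and the ranked SQL can be exposed to LINQ-to-SQL through a stored procedure or view.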

    Read the article

  • What are the advantages of storing xml in a relational database?

    - by Chris
    I was poking around the AdventureWorks database today and I noticed that a number of tables (HumanResources.JobCandidate and Sales.Individual, for example) have a column which stores xml data. What I would like to know is, what is the advantage of storing basically a database table row's worth of data in another table's column? Doesn't this make it difficult to query that information? Or is the assumption that the data won't need to be queried and just needs to be stored?
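    For what it's worth, the xml type is still queryable; a hedged sketch against a hypothetical table (not the real AdventureWorks schema, whose XML columns are namespace-qualified) shows scalar values being shredded out with XQuery:

        -- Hypothetical table with an xml column.
        CREATE TABLE dbo.CandidateDocs (
            CandidateId INT PRIMARY KEY,
            Resume      XML NOT NULL
        );

        -- Pull scalar values out of the XML with the value() method.
        SELECT CandidateId,
               Resume.value('(/resume/name)[1]',  'nvarchar(100)') AS CandidateName,
               Resume.value('(/resume/years)[1]', 'int')           AS YearsExperience
        FROM dbo.CandidateDocs;

    The usual advantage is flexibility: the XML can hold sparse or semi-structured data whose shape varies from row to row without schema changes, at the cost of slower and more awkward querying than plain columns.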

    Read the article

  • How to fix “Cannot connect to the configuration database.”

    - by ybbest
    The problem: when I browse to a SharePoint site, I get "Server Error in '/' Application: Cannot connect to the configuration database." The analysis: the reason you get this message is that the SharePoint WFE cannot connect to the SQL database, so you need to check whether the SQL Server service is started. Solution: when checking the SQL Server service, I saw that it was not started. After starting the service, it works like a charm.

    Read the article

  • Creating database connections - Do it once or for each query?

    - by webnoob
    At the moment I create a database connection when my web page is first loaded. I then process the page and run any queries against that connection. Is this the best way to do it, or should I be creating a database connection each time I run a query? P.S. It makes more sense to me to create one connection and use it, but I don't know if this can cause any other issues. I am using C# (ASP.NET) with MSSQL.

    Read the article

  • What's the best way to cache a growing database table for html generation?

    - by McLeopold
    I've got a database table which will grow in size by about 5000 rows an hour. For a key that I would be querying by, the query will grow in size by about 1 row every hour. I would like a web page to show the latest rows for a key, 50 at a time (this is configurable). I would like to try to implement memcache to keep database activity low for reads. If I run a query and create a cache result for each page of 50 results, that would work until a new entry is added. At that time, the page of latest results gets a new result and the oldest result drops off. This cascades down the list of cached pages, causing me to update every cached result. It seems like a poor design. I could build the cache pages backwards; then for each page requested I would get the latest 2 pages and truncate to the proper length of 50. I'm not sure if this is good or bad? Ideally, the mechanism I use to insert a new row would also know how to invalidate the proper cache results. Has someone already solved this problem in a widely accepted way? What's the best method of doing this? EDIT: If my understanding of the MySQL query cache is correct, it has table-level granularity for invalidation. Given the fact that I have about 5000 updates before a query on a key should need to be invalidated, it seems that the database query cache would not be used. MS SQL caches execution plans and frequently accessed data pages, so it may do better in this scenario. My query is not against a single table with TOP N; one version has joins to several tables and another has sub-selects. Also, since I want to cache the generated html table, I'm wondering if a cache at the web server level would be appropriate? Is there really no benefit to any type of caching? Is the best advice really to just allow a website query to go through all the layers and hit the database on every request?
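    Whichever caching layer is chosen, the underlying read should already be cheap if it is indexed for this exact access pattern; a hedged MySQL-style sketch (table and column names are assumptions):

        -- A composite index makes "latest 50 for a key" a short range scan.
        CREATE INDEX idx_entries_key_created ON entries (entry_key, created_at);

        SELECT *
        FROM entries
        WHERE entry_key = 42
        ORDER BY created_at DESC
        LIMIT 50;

    If that query is fast enough, caching only the rendered HTML fragment under a single key per entry_key, and deleting that one cache key from the insert path, avoids the cascading invalidation described above.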

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g., none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other source-code changes), it fits in memory, and can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose solutions that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good authority from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and cannot be convinced on the matter.
    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE instead of a flatfile or other storage? If you need...
        - Random reads / transparent search optimization
        - Advanced / varied / customizable searching and sorting capabilities
        - Transactions / rollback
        - Locks, semaphores
        - Concurrency control / shared users
        - Security
        - 1-many / m-m is easier
        - Easy modification
        - Scalability
        - Load balancing
        - Random updates / inserts / deletes
        - Advanced queries
        - Administrative control of design, etc.
        - SQL / learning curve
        - Debugging / logging
        - Centralized / live backup capabilities
        - Cached queries / develop & cache execution plans
        - Interleaved update/read
        - Referential integrity, avoiding redundant/missing/corrupt/out-of-sync data
        - Reporting (from an OLAP or OLTP db) / turnkey generation tools
    [Disadvantages:]
        - Important to get right the first time - professional design - but only because it's meant to last
        - Software & hardware cost
        - Usually over a network, a speed issue (best case vs. best design vs. local - even then a separate process requires marshalling/network layers/inter-process communication)
        - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE: if you only need...
        - Sequential row processing only
        - Limited usage, append only (no reading, no master key/update)
        - Only updating the record you're reading (fixed-length records only)
        - Too big to fit into memory
        - Local disk / read-ahead network connection
        - Portability / small system
        - Email / cut & paste / store as document by a novice - simple format
        - Low design learning curve but high cost later

    WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need...
        - Processing a single db/ff record that was imported
        - Known size of data
        - Static data, if hardcoding the table
        - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
        - Extreme speed needed / high transaction frequency
        - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
        http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
        http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
        http://stackoverflow.com/questions/2356851/database-vs-flat-files
        http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • How to open multiple projects in the Cast Iron integration tool?

    - by MIshal Shah
    Hi, I am learning the Cast Iron tool (http://www.castiron.com/), which is widely used nowadays for integration purposes, but I can only open one project at a time; if I want to open another project I have to close the first one before I am able to open the second. Many times I need to have two projects open at the same time, but I don't know how I can do that. Can anybody give me an urgent solution for opening multiple projects at the same time and switching between them? Thanks, Mishal Shah

    Read the article

  • Downloading Database dump from server

    - by Ctroy
    I have a MySQL database on a server that is around 4 GB in size, and I couldn't get it downloaded to my local machine. I tried creating a dump on the server, but the dump is not getting created, probably because of the large size. Is there any way I could get the dump downloaded to my local machine? I can sync using SQLyog but I know it will take ages. Is there a way I can get a dump created on the server? By the way, my server is a Linux server running PHP/MySQL.

    Read the article

  • SQL Server 2000 msdb database loading/suspect

    - by Blake Parcell
    My SQL Server recently suffered a RAID controller/hard drive crash. After getting my hard drive problem corrected, I soon found that some of my databases were suspect, namely msdb. I am not a DBA by any means, but I am somewhat familiar with the daily SQL activities that happen on my server. So I restored from backup and tried to bring my msdb database online. It is now forever stuck in (Loading\Suspect) and I am unable to script backups for my important databases. I can recreate all of the backup plans etc. if I can somehow get a working msdb. Any help would be greatly appreciated. I am currently using: Microsoft SQL Server 2000, version 8.00.194.
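    A database that stays in (Loading) usually means the restore finished WITH NORECOVERY; a hedged T-SQL sketch of the usual next steps (the backup path is an example, and since msdb is a system database it is worth backing everything up before trying this):

        -- If the restore simply was never recovered, complete the recovery:
        RESTORE DATABASE msdb WITH RECOVERY;

        -- Or restore again from the backup file, recovering this time:
        RESTORE DATABASE msdb
        FROM DISK = 'C:\Backups\msdb.bak'   -- example path
        WITH REPLACE, RECOVERY;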

    Read the article
