Search Results

Search found 30300 results on 1212 pages for 'temporal database'.

  • Wordpress database migration

    - by Sean Cull
    Hi everyone! I've looked around the Wordpress forums about this and didn't find anything, so I thought I might try here. If you have a staging/dev Wordpress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "Wordpress best practices" way to do this, or am I limited to manually migrating tables from one database to the other?
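
    One hedged sketch of the usual approach, assuming shell access to both environments and made-up host and database names: dump the staging database, rewrite the base URL, and load the result into production. Note that Wordpress stores some options as PHP-serialized strings, so a naive text replacement can corrupt them; a serialization-aware search/replace tool is safer.

        import subprocess

        # Hypothetical connection details -- replace with your own.
        STAGING = {"host": "staging.example.com", "user": "wp", "db": "wp_staging"}
        PROD    = {"host": "prod.example.com",    "user": "wp", "db": "wp_prod"}

        # 1. Dump the staging database (passwords supplied via ~/.my.cnf).
        dump = subprocess.run(
            ["mysqldump", "-h", STAGING["host"], "-u", STAGING["user"], STAGING["db"]],
            capture_output=True, text=True, check=True,
        ).stdout

        # 2. Rewrite the base URL. Caution: this plain replace can break
        #    PHP-serialized values whose string lengths change.
        dump = dump.replace("https://staging.example.com", "https://www.example.com")

        # 3. Load the rewritten dump into production.
        subprocess.run(
            ["mysql", "-h", PROD["host"], "-u", PROD["user"], PROD["db"]],
            input=dump, text=True, check=True,
        )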

    Read the article

  • Storing i18n data in a database using XML

    - by TigrouMeow
    Hello, I may have to store some i18n-ed data in my database using XML if I don't fight back. That's not my choice, but it's in the specifications I have to follow. We would have, for example, something like the following in a 'Country' column: <lang='fr'>Etats-Unis</lang> <lang='en'>United States</lang> This would apply to many columns in the database. I don't think it's a good idea at all. I tend to think that a cell in a database should represent a single piece of data (better for look-up), and that the database should have two dimensions maximum, not three or more (one more request would be required per dimension; a dimension here corresponds to the XML attributes). My idea was to have a separate table for all the translations, with columns such as: ID / Language / Translation. However, I should admit that I'm really not sure what the best way is to store data in various languages in a DB... Thanks for your advice :)
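
    For illustration, a minimal sketch of the separate-translations-table idea described above, using SQLite; the table and column names are assumptions:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE country (
                country_id INTEGER PRIMARY KEY,
                iso_code   TEXT NOT NULL             -- language-neutral data stays here
            );
            CREATE TABLE country_translation (
                country_id INTEGER NOT NULL REFERENCES country(country_id),
                lang       TEXT NOT NULL,            -- 'en', 'fr', ...
                name       TEXT NOT NULL,
                PRIMARY KEY (country_id, lang)
            );
        """)
        con.execute("INSERT INTO country VALUES (1, 'US')")
        con.executemany(
            "INSERT INTO country_translation VALUES (?, ?, ?)",
            [(1, 'en', 'United States'), (1, 'fr', 'Etats-Unis')],
        )

        # Look up one language with a plain indexed query instead of parsing XML.
        row = con.execute(
            "SELECT name FROM country_translation WHERE country_id = ? AND lang = ?",
            (1, 'fr'),
        ).fetchone()
        print(row[0])   # Etats-Unis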

    Read the article

  • Using views as a data interface between modules in a database

    - by Stefan
    Hello, I am working on the database layout of a straightforward small database in MySQL. We want to modularize this system in order to have more flexibility for the different implementations we are going to make. Now, the idea is to have one module in the database (simply a group of tables with constraints between them) pass its data to the next module via views. In this way, changes in one module would not affect the others, as we can make sure in the view that the right data is present there at any time, even though the underlying table structure might differ. The structure of the app handling the database would likewise be modularized. Is this something that is sometimes done? On the technical side, as I understand it views can't have primary keys - how would I then address such a view? What other issues should be considered?
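
    A small sketch of the idea, shown with SQLite for brevity (MySQL's view syntax is essentially the same); all names are assumptions. The view itself has no primary key, but it can simply expose the base table's key column, and consumers address rows through that:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            -- Module A's internal table; its layout may change over time.
            CREATE TABLE module_a_orders (
                order_id    INTEGER PRIMARY KEY,
                customer    TEXT NOT NULL,
                total_cents INTEGER NOT NULL
            );

            -- The stable interface module B reads from. If module A's tables
            -- change, only this view definition has to be adjusted.
            CREATE VIEW module_a_orders_v AS
                SELECT order_id, customer, total_cents / 100.0 AS total
                FROM module_a_orders;
        """)
        con.execute("INSERT INTO module_a_orders VALUES (1, 'acme', 1999)")

        # Module B addresses rows via the exposed key column.
        print(con.execute(
            "SELECT total FROM module_a_orders_v WHERE order_id = ?", (1,)
        ).fetchone())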

    Read the article

  • Top techniques to avoid 'data scraping' from a website database

    - by Addsy
    I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the db. Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile them into a new database, essentially pinching all the information. Does anyone have any good tactics for preventing, or even just deterring, this that they could share? Thanks
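
    Since the records must stay publicly viewable, most practical measures are deterrents rather than guarantees: per-IP rate limiting, dropping the bulk "view all" dump, and CAPTCHAs after unusual request volumes. As one illustration, a rough per-IP rate limiter sketched in Python (the same idea ports to PHP with a shared cache such as memcached; the threshold is an assumption):

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 60
        MAX_REQUESTS = 30           # assumed threshold; tune for real traffic

        _hits = defaultdict(deque)  # ip -> timestamps of recent requests

        def allow_request(ip, now=None):
            """Return False once an IP exceeds MAX_REQUESTS per WINDOW_SECONDS."""
            now = time.time() if now is None else now
            hits = _hits[ip]
            while hits and now - hits[0] > WINDOW_SECONDS:
                hits.popleft()
            if len(hits) >= MAX_REQUESTS:
                return False        # serve a CAPTCHA or HTTP 429 instead
            hits.append(now)
            return True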

    Read the article

  • Best Language for the job? Database | C++, .NET, Java

    - by Randy E
    Ok, quick overview. I'm pretty brand new to software design. I have experience reading and editing/customizing PHP software such as CMSes, Wordpress, and some forum solutions. I'm about to begin my degree in Software Design; the school I'm going to will allow us to focus on an area: C++, Java, or .NET. I've played around a little with VB over the past week, mostly just trying to get a slight feel for it, but nothing extensive. I've been through Herbert Schildt's "C++, A Beginner's Guide", but I was mainly reading it, not doing anything with it beyond a couple of basic console apps (and getting frustrated with auto-close :/ ). Now, where I decide to focus my degree will depend on what the best language is for the first piece of software I want to develop on my own. Assume I haven't looked at any of the languages at all, and please help with the following.

    My first piece of software will be a database program. Everything has to do with users inputting and retrieving data, and calling that data to drive another function of the software: automatically calculating billing information based on what was entered in the other portion of the program. I won't go into too many details, as I'm targeting a niche that doesn't have much competition, but the competition that is there is established. I want to offer more features, scalable solutions, and the ability to port it to an online version. Basically, it is a complete case management system with integrated billing for private investigators. I would like the case management side to be able to check the database to see whether certain information has been entered before (such as names/SSNs), and the billing side to pull the hours entered in the case portion for investigative work, multiply them by an already entered fee, and then calculate sales tax.

    I also want to provide potential clients with an easily scalable set of options: a basic option for start-ups that costs the least, with no additional users, run on one machine; a middle option with the ability to create users and place them in two groups (User or Admin), plus a few additional features, run on one machine but accessible after being mapped on a network drive; and a third option that allows placement into four different groups (Investigators, Billing, Managers, Admins) and more features. Then, a couple of years after launch, a fourth, browser-based option allowing the same four groups to log in, as well as clients (to view things concerning their case, with some admin-customizable objects added to the client view), over the internet.

    The only licensing security I would like to employ right off the bat is a serial key generated after ordering online (received in an email after a successful purchase). The program will periodically access a database stored on a server to verify the license. I would also like it to check that it is running the most recent version and update automatically if not.
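
    As a side note, the billing rule described above (hours entered for investigative work, multiplied by a stored hourly fee, plus sales tax) is small enough to sketch independently of the language choice; this illustration uses Python, and the rates are made up:

        from decimal import Decimal, ROUND_HALF_UP

        def invoice_total(hours, hourly_fee, sales_tax_rate):
            """hours * fee, plus sales tax, rounded to cents.

            Uses Decimal so currency does not pick up float rounding error.
            """
            subtotal = Decimal(str(hours)) * Decimal(str(hourly_fee))
            total = subtotal * (Decimal("1") + Decimal(str(sales_tax_rate)))
            return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

        # 12.5 hours of investigative work at $85/hour with 8.25% sales tax.
        print(invoice_total(12.5, 85, "0.0825"))   # 1150.16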

    Read the article

  • Advice Needed To Normalise Database

    - by c11ada
    Hey all, I'm trying to create a database for a feedback application in ASP.NET. I have the following design (grouped by table):

        Table 1: Username (PK)
        Table 2: QuestionNo (PK), QuestionText
        Table 3: FeedbackNo (PK), Username
        Table 4: UserFeedbackNo (PK), FeedbackNo (FK), QuestionNo (FK), Answer, Comment

    A user has a unique username, and a user can have multiple feedbacks. I was wondering if the database design I have here is normalised and suitable for the application.
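
    For illustration only, a sketch of how those four tables might look as SQL (the table names themselves are assumptions, since the question only lists columns; shown with SQLite):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE user (
                username         TEXT PRIMARY KEY
            );
            CREATE TABLE question (
                question_no      INTEGER PRIMARY KEY,
                question_text    TEXT NOT NULL
            );
            CREATE TABLE feedback (
                feedback_no      INTEGER PRIMARY KEY,
                username         TEXT NOT NULL REFERENCES user(username)
            );
            CREATE TABLE feedback_answer (
                user_feedback_no INTEGER PRIMARY KEY,
                feedback_no      INTEGER NOT NULL REFERENCES feedback(feedback_no),
                question_no      INTEGER NOT NULL REFERENCES question(question_no),
                answer           TEXT,
                comment          TEXT
            );
        """)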

    Read the article

  • Cloud Database Service Latency/Performance

    - by Gcoop
    Hi all, I am running a heavy-traffic site and our server is beginning to reach its limits; at the moment the entire LAMP stack is on one box (not ideal). I would like to move the database onto its own box or onto a cloud service, but from my previous experience, moving the database off the same box as the webserver increases read latency quite dramatically, slowing down the site. Is using a cloud service going to overcome this problem? As far as I can tell it's essentially the same situation as moving it onto a separate box in my control. In which case, why is there so much popularity around cloud-based database services at the moment? Are cloud-based database services so quick that read latency is low enough to feel almost like having the database on the same box in the same datacentre?

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linking it to error log databases and a performance counter database to compare usage with errors, etc. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries against the database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • What database should I choose?

    - by MemoryLeak
    I use WinForms to develop a desktop application, and right now I plan to use SQL Server Express, but the problem is that installation then becomes a hassle: I need to install SQL Server first and then install my own application. I tried using Access 2003 as my database instead, so that I only need to copy the mdb file along with my application, but Access's functionality is limited - for instance, the text length is limited to 255. Is there any other database solution which is easy to integrate into my application and easy to install once the application is finished? Many desktop applications have their own database and are easy to install and easy to use - what database do they use?
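
    One common route for a single-user desktop app is an embedded, file-based engine that ships inside the installer - for instance SQL Server Compact or SQLite (via System.Data.SQLite) on .NET. The core idea, illustrated here with Python's built-in sqlite3 module rather than C#, is that the "database server" is just a library opening a local file:

        import sqlite3

        # The whole database is one file next to the application; nothing to install.
        con = sqlite3.connect("app_data.db")
        con.execute("""
            CREATE TABLE IF NOT EXISTS note (
                id   INTEGER PRIMARY KEY,
                body TEXT          -- no 255-length ceiling as with Access text fields
            )
        """)
        con.execute("INSERT INTO note (body) VALUES (?)", ("x" * 10_000,))
        con.commit()
        print(con.execute("SELECT length(body) FROM note").fetchone())  # (10000,)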

    Read the article

  • Designing a database file format

    - by RoliSoft
    I would like to design my own database engine, for educational purposes for the time being. Designing a binary file format is neither hard nor the question - I've done it in the past - but while designing a database file format I have come across a very important question: how to handle the deletion of an item? So far, I've thought of the following options:

        1. Each item has a "deleted" bit which is set to 1 upon deletion.
           Pro: relatively fast. Con: potentially sensitive data will remain in the file.
        2. 0x00 out the whole item upon deletion.
           Pro: potentially sensitive data will be removed from the file. Con: relatively slow.
        3. Recreate the whole database.
           Pro: no empty blocks, which makes the follow-up question void. Con: it's a really good idea to overwrite the whole 4 GB database file because a user corrected a typo. I will sell this method to Twitter ASAP!

    Now let's say you already have a few empty blocks in your database (deleted items). The follow-up question is how to handle the insertion of a new item?

        1. Append the item to the end of the file.
           Pro: fastest possible. Con: the file will get huge because of all the empty blocks that remain, since deleted items aren't actually deleted.
        2. Search for an empty block exactly the size of the item you're inserting.
           Pro: may get rid of some blocks. Con: you may end up scanning the whole file at each insert, only to find it's very unlikely you'll come across a perfectly fitting empty block.
        3. Find the first empty block which is equal to or larger than the item you're inserting.
           Pro: you probably won't end up scanning the whole file, as you will find an empty block somewhere midway; this keeps the file size relatively low. Con: there will still be lots of leftover 0x00 bytes at the end of items which were inserted into empty blocks bigger than themselves.

    Right now, I think the first deletion method and the last insertion method are probably the "best" mix, but they would still have their own small issues. Alternatively, the first insertion method plus scheduled full database recreation (probably not a good idea when working with really large databases; also, each small update in that method clones the whole item to the end of the file, accelerating file growth at a potentially insane rate).

    Unless there is a way of deleting/inserting blocks from/to the middle of the file in a file-system-approved way, what's the best way to do this? More importantly, how do databases currently used in production usually handle this?
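
    A toy sketch of that preferred mix (a deleted flag for removal plus first-fit reuse of free blocks on insert), with a Python list standing in for the file; a real engine would use fixed-width binary record headers and keep a free list or free-space map instead of scanning every record:

        # Each record: [deleted_flag, payload_size, payload].
        records = []

        def delete(index):
            records[index][0] = True              # tombstone; data stays until reuse

        def insert(payload):
            # First fit: reuse the first tombstoned block big enough for the payload.
            for rec in records:
                if rec[0] and rec[1] >= len(payload):
                    rec[0] = False
                    rec[2] = payload + b"\x00" * (rec[1] - len(payload))  # leftover padding
                    return
            records.append([False, len(payload), payload])  # otherwise append at the end

        insert(b"alice")
        insert(b"bob")
        delete(0)
        insert(b"eve")       # lands in alice's old slot, padded to 5 bytes
        print(records)       # [[False, 5, b'eve\x00\x00'], [False, 3, b'bob']]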

    Read the article

  • Compare Database container and class container

    - by Mohit Deshpande
    I am using a SQL Server database in my current project. I was watching the MVC Storefront videos (specifically the one on the repository pattern) and I noticed that Rob (the creator of MVC Storefront) used classes called Category and Product instead of working against the database directly, and I have noticed that when using LINQ to SQL or ADO.NET, such classes are generated. Is there an advantage to using a class over a database to store information? Or is it the other way around? Or are they even comparable?

    Read the article

  • Delete ONE SPECIFIC table of a database - leave the rest intact

    - by Jayomat
    Hi, I have a database where I store two different kinds of data. One table is for favorite routes, the other stores the routes retrieved from a server. I can retrieve the routes etc. just fine. But after retrieving the first route, pressing back or HOME, and then retrieving another route, the routes table is filled with all the old routes plus the new ones. So my question: how do I clear ONLY the routes table and not the whole database, because I don't want to delete the added favorites...?! I found the following method in the Android docs:

        public int delete (String table, String whereClause, String[] whereArgs)

    and I tried to implement it, but I must pass an SQLiteDatabase as an argument. But how? I implemented:

        public void deleteTableRoutes(SQLiteDatabase db) {
            db.delete("routes", null, null);   // a null WHERE clause deletes every row
        }

    But I want to call this method from a different class where I have no reference to the database... so what do I have to pass as an argument? Or how do I get a reference to my database? I built my database upon the code of the NotePadExample from the dev docs. How do I solve this problem? Thanks

    Read the article

  • Automatic database generation / migration with perl

    - by pistacchio
    Hi, in RoR, Django, or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g. removing a class drops the table, adding a property to the class triggers an "ALTER TABLE ... ADD", etc.). Is there any Perl module that does the same? E.g. one that takes a YAML / XML / JSON description of a database as input and modifies / generates the database accordingly? Thanks in advance.
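
    In the Perl world, DBIx::Class (whose schema classes can deploy themselves to a database) is the usual pointer for this pattern, though the details are worth verifying for the specific use case. Purely to illustrate the describe-then-generate idea in another language, here is the analogous pattern in Python with SQLAlchemy:

        from sqlalchemy import create_engine, Column, Integer, String, MetaData, Table

        metadata = MetaData()

        # "Describe" the table in code...
        user = Table(
            "user", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(100), nullable=False),
        )

        # ...and let the library emit the CREATE TABLE statements.
        engine = create_engine("sqlite:///:memory:")
        metadata.create_all(engine)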

    Read the article

  • How do you handle objects that need custom behavior, and need to exist as an entity in the database?

    - by Scott Whitlock
    For a simple example, assume your application sends out notifications to users when various events happen. So in the database I might have the following tables:

        TABLE Event
            EventId    uniqueidentifier
            EventName  varchar

        TABLE User
            UserId     uniqueidentifier
            Name       varchar

        TABLE EventSubscription
            EventUserId
            EventId
            UserId

    The events themselves are generated by the program. So there are hard-coded points in the application where an event instance is generated, and it needs to notify all the subscribed users. So the application itself doesn't edit the Event table, except during initial installation and during an update where a new Event might be created. At some point, when an event is generated, the application needs to look up the Event and get a list of Users. What's the best way to link the event in the source code to the event in the database?

        Option 1: Store the EventName in the program as a fixed constant, and look it up by name.
        Option 2: Store the EventId in the program as a static Guid, and look it up by ID.

    Extra credit: in other similar circumstances I may want to include custom behavior with the event type. That is, I'll want subclasses of my Event entity class with different behaviors, and when I look up an event, I want it to return an instance of my subclass. For instance:

        class Event
        {
            public Guid Id { get; }
            public string EventName { get; }
            public ReadOnlyCollection<EventSubscription> EventSubscriptions { get; }

            public void NotifySubscribers()
            {
                foreach (var eventSubscription in EventSubscriptions)
                {
                    eventSubscription.Notify();
                }
                this.OnSubscribersNotified();
            }

            public virtual void OnSubscribersNotified() { }
        }

        class WakingEvent : Event
        {
            private readonly IWaker waker;

            public WakingEvent(IWaker waker)
            {
                if (waker == null) throw new ArgumentNullException("waker");
                this.waker = waker;
            }

            public override void OnSubscribersNotified()
            {
                this.waker.Wake();
                base.OnSubscribersNotified();
            }
        }

    So, that means I need to map WakingEvent to whatever key I'm using to look it up in the database. Let's say that's the EventId. Where do I store this relationship? Does it go in the event repository class? Should the WakingEvent declare its own ID in a static member or method? ...and then, is this all backwards? If all events have a subclass, then instead of retrieving events by ID, should I be asking my repository for the WakingEvent like this:

        public T GetEvent<T>() where T : Event
        {
            ... // what goes here? ...
        }

    I can't be the first one to tackle this. What's the best practice?

    Read the article

  • Syncing Magento database from development to production

    - by ringerce
    I use git for version control. I have development, staging and production environments. When I finish in development, I push to staging for review by the client. When approved, I push the changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, given that the production database is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base URL with the development URL and updates my development db accordingly. It also leaves the production SQL dump behind to be added to my git repo. I'm not really sure it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first. Now, when it comes time to move to production, I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently; it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script, which I add to source control as well. If anything goes wrong, I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    I need help with logging all activities on a site as well as database changes. Requirements:

        * should be in the database
        * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but it either involves a lot of tables (one per event), so that I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int and 7 text) where everything is logged in one table, with an event type field determining what parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)...

    Example entries (the usual things):

        [username] login failed @datetime
        [username] login successful @datetime
        [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
        [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
        [username] changed profile name from [old name] to [new name] @datetime
        [username] verified name with [credit card type] credit card @datetime
        database table [table name] purged of old entries @datetime via automated process
        etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts?

    Edit: also, is there maybe an authoritative list of such (likely) events somewhere? Thanks. Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to two, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
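
    A third common shape, besides one-table-per-event and the seven-generic-columns table, is a single log table whose event-specific parameters go into one serialized column (JSON, for instance), indexed on initiator, event type and time. A rough sketch of that shape with assumed column names, shown with SQLite; the trade-off is that individual parameters are not directly indexable, so the initiator/type/time indexes have to narrow queries first:

        import json, sqlite3, datetime

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE activity_log (
                log_id     INTEGER PRIMARY KEY,
                initiator  TEXT NOT NULL,         -- user name or session id
                event_type TEXT NOT NULL,         -- 'login_failed', 'password_changed', ...
                params     TEXT NOT NULL,         -- JSON blob of event-specific fields
                created_at TEXT NOT NULL
            );
            CREATE INDEX idx_log_initiator ON activity_log (initiator, created_at);
            CREATE INDEX idx_log_event     ON activity_log (event_type, created_at);
        """)

        def log_event(initiator, event_type, **params):
            con.execute(
                "INSERT INTO activity_log (initiator, event_type, params, created_at) "
                "VALUES (?, ?, ?, ?)",
                (initiator, event_type, json.dumps(params),
                 datetime.datetime.now(datetime.timezone.utc).isoformat()),
            )

        log_event("b0x0rz", "search_result_clicked",
                  search_string="temporal database", result_number=3, result_count=42)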

    Read the article

  • How to distribute a unique database already in production?

    - by JVerstry
    Let's assume a successful Spring web application running on a MySQL or PostgreSQL kind of database. The traffic is becoming so high and the amount of data is becoming so big that a distributed database solution needs to be implemented. It is a scalability issue. Let's assume this application uses Hibernate and the data access layer is cleanly separated into DAO objects. What would be the best strategy to scale this database? Does anyone have hands-on experience to share? Is it possible to minimize the sharding code in the application? Ideally, one should be able to add or remove databases easily. A failback solution is welcome too. I am not looking for "you could go for sharding" or "you could go NoSQL" kinds of answers. I am looking for deeper answers from people with experience.
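
    For illustration, a sketch of the piece that usually needs minimizing: a small routing helper in the data access layer that maps an entity key to the database owning it, so individual DAOs never embed shard logic themselves. The connection URLs and hashing scheme below are assumptions; a consistent-hashing ring or a lookup table reduces reshuffling when databases are added or removed:

        import hashlib

        SHARDS = [
            "jdbc:mysql://db0.example.com/app",
            "jdbc:mysql://db1.example.com/app",
            "jdbc:mysql://db2.example.com/app",
        ]

        def shard_for(entity_id: str) -> str:
            """Map an entity key to the database that owns it."""
            digest = hashlib.sha1(entity_id.encode("utf-8")).hexdigest()
            return SHARDS[int(digest, 16) % len(SHARDS)]

        # The DAO layer asks the router, then opens its session against that URL.
        print(shard_for("customer:42"))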

    Read the article

  • How can I give my client "full access" to their PHP application's MySQL database?

    - by Micah Delane Bolen
    I am building a PHP application for a client and I'm seriously considering WordPress or a simple framework that will allow me to quickly build out features like forums, etc. However, the client is adamant about having "full access" to the database and the ability to "mine the data." Unfortunately, I'm almost certain they will be disappointed when they realize they won't be able to easily glean meaningful insight by looking at serialized fields in wp_usermeta, etc. One thought I had was to replicate a variation on the live database where I flatten out all of those ambiguous and/or serialized fields into something that is then parsable by a mere mortal using a tool as simple as phpMyAdmin. Unfortunately, the client is not going to settle for a simple backend dashboard where I create the custom reports for them even though I know that would be the easiest and most sane approach.

    Read the article

  • Database schemas WAY out of sync - need to get up to date without losing data

    - by Zind
    The problem: we have one application, a portion of which is used by a very small subset of the total users, and that part of the application runs off a separate database as well. In a perfect world the schemas of the two databases would be kept in sync, but such is not the case: some migrations have been run on the smaller database, most haven't, and furthermore there is nothing like a revision number to easily identify which have and which haven't. We would like to solve this quandary for future projects. During a discussion we came up with the following possible plan of action, and I am wondering if anyone knows of a project which has already solved this problem: we would like to create an empty database from the schema of the large, fully-migrated database, and then move all of the data from the smaller, non-migrated database into that empty one. If it makes things easier, it can probably be assumed for the sake of this problem that no migrations have ever removed anything, only added. Otherwise, if there are other known solutions, I'd like to hear them as well.
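
    A sketch of that "build an empty copy of the full schema, then move the smaller database's data into it" plan, shown with SQLite for illustration (other engines have equivalents of ATTACH and information_schema); the file names are assumptions. It copies only the columns the two versions of each table share, which works under the stated assumption that migrations only ever added things:

        import sqlite3

        con = sqlite3.connect("new_empty_with_full_schema.db")   # assumed file names
        con.execute("ATTACH DATABASE 'old_small.db' AS old")

        tables = [r[0] for r in con.execute(
            "SELECT name FROM old.sqlite_master "
            "WHERE type = 'table' AND name NOT LIKE 'sqlite_%'")]

        for table in tables:
            new_cols = {r[1] for r in con.execute(f"PRAGMA main.table_info({table})")}
            old_cols = [r[1] for r in con.execute(f"PRAGMA old.table_info({table})")]
            shared = [c for c in old_cols if c in new_cols]
            if not shared:
                continue            # table does not exist in the full schema
            cols = ", ".join(shared)
            con.execute(f"INSERT INTO main.{table} ({cols}) "
                        f"SELECT {cols} FROM old.{table}")

        con.commit()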

    Read the article

  • Database Change Auditing - Part of or Abstracted from ORM / Application Layer?

    - by BrandonV
    My fellow developers and I are at a crossroads in how to go about continuing our auditing of database changes. Most of our applications log changes via INSERT, UPDATE, and DELETE triggers. A few of our newer applications audit at the ORM layer, specifically using Hibernate Envers. While ORM-layer auditing provides a much cleaner interface and is much more maintainable, it will not capture any manual database changes that are made. ORM-layer auditing also means that our libraries currently require a dependency on our ORM implementation, unless (specifically in our case) JPA provides something equivalent in the near future. Is there a common paradigm that addresses this?
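
    For reference, the trigger-based flavour mentioned above, which also catches manual changes, looks roughly like this sketch (SQLite syntax for brevity; a production version would typically be generated per table, cover INSERT and DELETE as well, and record the acting user):

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE account (
                id    INTEGER PRIMARY KEY,
                email TEXT NOT NULL
            );
            CREATE TABLE account_audit (
                audit_id   INTEGER PRIMARY KEY,
                account_id INTEGER,
                operation  TEXT,                 -- 'INSERT' / 'UPDATE' / 'DELETE'
                old_email  TEXT,
                new_email  TEXT,
                changed_at TEXT DEFAULT CURRENT_TIMESTAMP
            );
            CREATE TRIGGER account_update_audit
            AFTER UPDATE ON account
            BEGIN
                INSERT INTO account_audit (account_id, operation, old_email, new_email)
                VALUES (OLD.id, 'UPDATE', OLD.email, NEW.email);
            END;
        """)
        con.execute("INSERT INTO account (email) VALUES ('a@example.com')")
        con.execute("UPDATE account SET email = 'b@example.com' WHERE id = 1")
        print(con.execute("SELECT operation, old_email, new_email "
                          "FROM account_audit").fetchall())
        # [('UPDATE', 'a@example.com', 'b@example.com')]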

    Read the article
