Search Results

Search found 47408 results on 1897 pages for 'database machine'.


  • Two Tables Serving as One Model in Rails

    - by matsko
    Is it possible in Rails to set up one model that depends on a join of two tables? This would mean that for a model record to be found, updated, or destroyed, there would need to be matching records in both database tables, linked together in a join. The model would expose all the columns of both tables wrapped together, which could then be used for forms and so on. That way, when the model gets created or updated, a single form parameter hash gets applied to the model. Is this possible in Rails 2 or 3?
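    One approach worth noting is to expose the join as a database view and back a single model with it - a sketch only, with hypothetical table and column names; depending on the database, such views may be read-only, so creates and updates can need extra work (e.g. INSTEAD OF triggers where supported):

      CREATE VIEW combined_records AS
      SELECT a.id, a.name, b.detail
      FROM table_a a
      JOIN table_b b ON b.a_id = a.id;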

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    DBaaS requires lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies that have their own data center? It allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefit to the (internal) consumers of the services is much faster provisioning and faster response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self-service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self-service. Most customers we have worked with define a standardized service catalog with a few (2 to 5) different classes of service. For each of these classes there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems no different from a service definition provided by a well-run DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter, and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a "database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in an infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using an infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database, where storage, OS, and database together 'transparently' provide a service to applications? Will there be cross-database access by applications or users? Some key differences are: 1) the services run on a shared platform; 2) the services can be rapidly provisioned (under 15 minutes); 3) the services are dynamic and can be relocated, grown, or shrunk as needed to meet business needs, rapidly and without disruption; 4) the user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How does pluggable database handle this, and the differing patching/downtime needs of the applications the databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching and upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new patched version of the multitenant database in a different ORACLE_HOME, and then unplug a PDB from its CDB and plug it into the newer, patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
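    To make the unplug/plug flow concrete, a minimal sketch (PDB name and XML path are hypothetical; syntax per Oracle Multitenant 12c):

      -- In the old CDB: close and unplug the PDB
      ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;
      ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/tmp/mypdb.xml';
      DROP PLUGGABLE DATABASE mypdb KEEP DATAFILES;

      -- In the new, patched CDB: plug it back in and open it
      CREATE PLUGGABLE DATABASE mypdb USING '/tmp/mypdb.xml' NOCOPY;
      ALTER PLUGGABLE DATABASE mypdb OPEN;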

    Read the article

  • Need theoretical help, how to comprehend an if-else dependency net

    - by macbie
    I am facing the following issue: I'm writing a program that manages some properties, some general and some specific. Each property is a pair of a key and a value. For example: if a general property is given and a specific property with exactly the same key and value already exists, then the general property replaces the specific one in the register. If there are two identical general properties, both remain in the register. And so on; it is like a net of dependencies. In my case I can handle it intuitively and foresee all cases, but only because the system is not too vast. What if it were? I have met such problems a few times in many different programs and languages (e.g. working with C semaphores), and my question is: How do I approach this kind of problem? Is this connected with finite state machines, graph theory, or something similar? How can I be sure that I have considered the whole system and each possible case? Could you recommend some resources (books, sites) to learn from?

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core machine that sits in the corner. I made a mistake recently: I took the USB key that had ESXi on it, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and put everything there - Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production env on the server, so I don't need the HA that ESXi would give me when updating security or the OS, for example, by restarting one VM at a time; I can just shut the whole thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Bulletproof way to DROP and CREATE a database under Continuous Integration.

    - by H. Abraham Chavez
    I am attempting to drop and recreate a database from my CI setup, but I'm finding it difficult to automate, which is to be expected given the complexities of the db being in use. Sometimes the process hangs, errors out with "db is currently in use", or just takes too long. I don't care if the db is in use; I want to kill it and create it again. Does someone have a straight-shot method to do this? Alternatively, does anyone have experience dropping all objects in the db instead of dropping the db itself?

      USE master;
      -- Force the database to single-user, rolling back open
      -- transactions, then drop and recreate it.
      IF EXISTS (SELECT name FROM sys.databases WHERE name = 'mydb')
      BEGIN
          ALTER DATABASE mydb SET SINGLE_USER -- or RESTRICTED_USER
          WITH ROLLBACK IMMEDIATE;
          DROP DATABASE mydb;
      END
      CREATE DATABASE mydb;

    Read the article

  • I've created a database table using Visual Studio for my C# program. Now what?

    - by Kevin
    Hi! I'm very new to C#, so please forgive me if I've overlooked something here. I've created a database using Visual Studio (Add New Item > Service-based Database) called LoadForecast.mdf. I then created a table called ForecastsDB and added some fields. My main question is this: I've created a console application with the intention of writing some data to the newly created database. I've added LoadForecast.mdf as a data source for my program, but is there anything else I should do? I saw an example where the next step was adding a "data diagram", but that was for a visual application, not a console application. Do I still need to diagram the database for my console app? I just want to be able to write new records out to my database table, and I wasn't sure if there were any other things I needed to do for the VS environment to be "aware" of my database. Thanks for any advice!

    Read the article

  • How to deploy a Java Swing application with an embedded JavaDB database?

    - by Jonas
    I have implemented a Java Swing application that uses an embedded JavaDB database. The database needs to be stored somewhere, and the database tables need to be created on the first run. What is the preferred way to handle these procedures? Should I always create the database in the local directory, first check whether the database file exists, and, if it doesn't, let the user create the tables (or at least show a message that the tables will be created)? Or should I let the user choose a path? But then I have to save the path somewhere. Should I save the path with Preferences.systemRoot(), and check whether that variable is set on startup? If the user chooses a path and I save it in the Preferences, can I run into problems with user permissions, or should it be safe wherever the user stores the database? Or how do I handle this? Any other suggestions for this procedure?

    Read the article

  • Working with foreign keys - cannot insert

    - by Industrial
    Hi everyone! I'm doing my first tryouts with foreign keys in a MySQL database and am trying to do an insert, which fails for this reason: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails. Does this mean that foreign keys restrict INSERTs as well as DELETEs and/or UPDATEs on each table that is governed by foreign key relations? Thanks! Updated description:

      Products
      ----------------------------
      id        | type
      ----------------------------
      0         | 0
      1         | 3

      ProductsToCategories
      ----------------------------
      productid | categoryid
      ----------------------------
      0         | 0
      1         | 1

    The products table has the following structure:

      CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku`
              FOREIGN KEY (`id`)
              REFERENCES `alpha`.`productsToSku` (`product`)
              ON DELETE CASCADE
              ON UPDATE CASCADE
      ) ENGINE = InnoDB;

    Read the article

  • How to optimize an SQL query with many thousands of WHERE clauses

    - by bugaboo
    I have a series of queries against a very large database, with hundreds of thousands of ORs in the WHERE clauses. What is the best and easiest way to optimize such SQL queries? I found some articles about creating temporary tables and using joins, but I am unsure. I'm new to serious SQL, and have been cutting and pasting results from one query into the next.

      SELECT doc_id, language, author, title
      FROM doc_text
      WHERE language='fr' OR language='es'

      SELECT doc_id, ref_id
      FROM doc_ref
      WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ...

      SELECT ref_id, location_id
      FROM ref_master
      WHERE ref_id=098765 OR ref_id=987654 OR ref_id=876543 OR ...

      SELECT location_id, location_display_name
      FROM location

      SELECT doc_id, index_code
      FROM doc_index
      WHERE doc_id=1234567 OR doc_id=1234570 OR doc_id=1234572 OR doc_id=1234596 OR ... (x100,000)

    These unoptimized queries can take over 24 hours each. Cheers.
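    A minimal sketch of the temporary-table approach those articles describe (table and column names follow the question; the values shown are just the first few from the example):

      -- Load the wanted IDs into a temporary table once, then JOIN
      -- instead of repeating hundreds of thousands of OR terms.
      CREATE TEMPORARY TABLE wanted_docs (doc_id INT PRIMARY KEY);
      INSERT INTO wanted_docs (doc_id) VALUES (1234567), (1234570), (1234572), (1234596);

      SELECT r.doc_id, r.ref_id
      FROM doc_ref r
      JOIN wanted_docs w ON w.doc_id = r.doc_id;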

    Read the article

  • SQL Server many-to-many design recommendation

    - by Jean-Philippe Brabant
    I have a SQL Server database with two tables: Users and Achievements. My users can have multiple achievements, so it is a many-to-many relation. At school we learned to create an associative table for that sort of relation. That means creating a table with a UserID and an AchievementID. But if I have 500 users and 50 achievements, that could lead to 25,000 rows. As an alternative, I could add a binary field to my Users table. For example, if that field contained 10010, that would mean that this user unlocked the first and the fourth achievements. Is there another way? And which one should I use?
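    For reference, a minimal sketch of the associative (junction) table described above (column names assumed):

      CREATE TABLE UserAchievements (
          UserID        INT NOT NULL REFERENCES Users(UserID),
          AchievementID INT NOT NULL REFERENCES Achievements(AchievementID),
          PRIMARY KEY (UserID, AchievementID) -- one row per unlocked achievement
      );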

    Read the article

  • Choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database, and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set", as pictured below. I just ran a restore, and I found that it wiped out all my data from yesterday and restored the version from the backup of 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I changed some data in the DB, then I ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the db, it seems to restore the data correctly. I think I learned a lesson here; correct me if I'm wrong. My question is: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?
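    A sketch that may help here, assuming the morning backup was appended as a second backup set inside the .bak file (the path and position below are hypothetical - check the HEADERONLY output first):

      -- List every backup set stored in the media set
      RESTORE HEADERONLY FROM DISK = 'C:\backups\mydb.bak';

      -- Restore a specific set by its Position value (FILE = n)
      RESTORE DATABASE mydb
      FROM DISK = 'C:\backups\mydb.bak'
      WITH FILE = 2, REPLACE;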

    Read the article

  • How to represent a Many-To-Many relationship in XML or other simple file format?

    - by CSharperWithJava
    I have a list management application that stores its data in a many-to-many relationship database. I.e., a note can be in any number of lists, and a list can have any number of notes. I can also export this data to an XML file and import it in another instance of my app for sharing lists between users. However, this is based on a legacy system where the list-to-note relationship was one-to-many (ideal for XML). Now a note that is in multiple lists is essentially split into two identical rows in the DB, and all relation between them is lost. Question: How can I represent this many-to-many relationship in a simple, standard file format? (Preferably XML, to maintain backwards compatibility.)
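    One common shape for this (a sketch with hypothetical element names): store each note once with an id, and have lists reference notes by id instead of containing them:

      <data>
        <notes>
          <note id="n1">Buy milk</note>
          <note id="n2">Call Bob</note>
        </notes>
        <lists>
          <list name="Today"><noteref id="n1"/><noteref id="n2"/></list>
          <list name="Errands"><noteref id="n1"/></list>
        </lists>
      </data>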

    Read the article

  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts), but not all accounts are owned. Is it best to represent this using a table structure like so...

      TABLE accounts (
          ID,
          ownerID -> ID,
          name
      )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner? Or would it be stylistically preferable to have two tables, like so?

      TABLE accounts (
          ID,
          name
      )

      TABLE ownedAccounts (
          accountID -> accounts(ID),
          ownerID -> accounts(ID)
      )

    Thanks for the advice.
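    In concrete SQL, the first option is a self-referencing nullable foreign key - a sketch:

      CREATE TABLE accounts (
          id       INTEGER PRIMARY KEY,
          owner_id INTEGER NULL REFERENCES accounts(id), -- NULL = not owned
          name     TEXT NOT NULL
      );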

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

      Table SimulationModel {
          simul_id : integer (primary key),
          input0   : string or numeric,
          input1   : string or numeric,
          ...
      }

      Table SimulationOutput {
          dt       : DateTime (primary key),
          simul_id : integer (primary key),
          output0  : numeric,
          ...
      }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, then what other options do I have to make sure each model is unique?
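    The constraint being asked about would look like this sketch (showing only the first few of the 10+ input columns):

      ALTER TABLE SimulationModel
      ADD CONSTRAINT uq_simulation_inputs
      UNIQUE (input0, input1, input2, input3);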

    Read the article

  • Why do VMs need to be "stack machines" or "register machines" etc.?

    - by Prog
    (This is an extremely newbie-ish question.) I've been studying a little about virtual machines. It turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and performs operations on this stack for all of its opcodes. For example, the source code 2 + 3 will be translated to bytecode similar to:

      push 2
      push 3
      add

    My question is this: JVMs are probably written in C/C++ and such. If so, why doesn't the JVM just execute the following C code: 2 + 3? I mean, why does it need a stack, or in other VMs 'registers', like a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed in? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help.

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DB? MySQL

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly, and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so that if/when one gets too large for one server, I can more easily migrate some user data to different MySQL servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?
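    For what it's worth, MySQL does allow joins across databases on the same server by qualifying table names with the database name - a sketch with hypothetical names:

      SELECT u.name, o.total
      FROM users_db.users AS u
      JOIN orders_db.orders AS o ON o.user_id = u.id;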

    Read the article

  • How should I build a simple database package for my python application?

    - by Carson Myers
    I'm building a database library for my application using sqlite3 as the base. I want to structure it like so:

      db/
          __init__.py
          users.py
          blah.py
          etc.py

    So I would do this in Python:

      import db
      db.users.create('username', 'password')

    I'm suffering analysis paralysis (oh no!) about how to handle the database connection. I don't really want to use classes in these modules; it doesn't really seem appropriate to be able to create a bunch of "users" objects that can all manipulate the same database in the same ways -- so inheriting a connection is a no-go. Should I have one global connection to the database that all the modules use, and then put this in each module:

      # users.py
      from db_stuff import connection

    Or should I create a new connection for each module and keep that alive? Or should I create a new connection for every transaction? How are these database connections supposed to be used? The same goes for cursor objects: do I create a new cursor for each transaction? Or just one for each database connection?

    Read the article

  • How to find foreign-key dependencies pointing to one record in Oracle?

    - by daveslab
    Hi folks, I have a very large Oracle database, with many, many tables and millions of rows. I need to delete one of the rows, but want to make sure that deleting it will not break any other dependent rows that point to it as a foreign key record. Is there a way to get a list of all the other records, or at least the table schemas, that point to this row? I know that I could just try to delete it myself and catch the exception, but I won't be running the script myself and need it to run clean the first time through. I have the tools SQL Developer from Oracle and PL/SQL Developer from AllRoundAutomations at my disposal. Thanks in advance!
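    A starting-point sketch using Oracle's data dictionary (replace MY_TABLE with the parent table's name): it lists the child tables whose foreign keys reference a key on the parent, which tells you where dependent rows could live.

      SELECT c.owner, c.table_name, c.constraint_name
      FROM all_constraints c
      JOIN all_constraints p
        ON c.r_constraint_name = p.constraint_name
       AND c.r_owner = p.owner
      WHERE c.constraint_type = 'R'
        AND p.table_name = 'MY_TABLE';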

    Read the article

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some large volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database, or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out-of-DB: are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database then are there any frameworks/libraries to help with that? (I'm working in python but am interested in technologies in other languages too)

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

      SELECT DATE(timestamp) AS day, COUNT(*)
      FROM actions
      WHERE DATE(timestamp) >= '20100101'
        AND DATE(timestamp) < '20110101'
      GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

      GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
        ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
              Sort Key: (date("timestamp"))
              Sort Method: external merge  Disk: 372496kB
              ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                    Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
      Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

      CREATE INDEX ON actions (DATE(timestamp));

    Which cuts the time by about 50%:

      HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
        ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
              Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
      Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?
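    One common rewrite worth trying (a sketch, not a guaranteed fix for a range this wide): filter on the raw timestamp column instead of wrapping it in DATE(), so a plain index on timestamp can serve the range condition:

      CREATE INDEX ON actions (timestamp);

      SELECT DATE(timestamp) AS day, COUNT(*)
      FROM actions
      WHERE timestamp >= '2010-01-01'
        AND timestamp < '2011-01-01'
      GROUP BY day;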

    Read the article

  • Need a field / flag / status number for multiple use?

    - by Jules
    I want to create a field in my database which will be easy to query. I think if I give a bit of background this will make more sense. My table holds listings shown on my website. I run a program which looks at the listings and decides whether to hide them from being shown on the site. I also hide listings manually for various reasons. I want to store these reasons in a field, so more than one reason could be recorded for hiding. So I need some form of logic to determine which reasons have been used. Can anyone offer me any guidance on what will be future-proof (i.e. allows new reasons to be added) and quick and easy to query?
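    One technique that fits this description is a bit flag field, where each reason is a power of two and reasons are combined with bitwise OR - a sketch with hypothetical table, column, and reason values:

      -- 1 = hidden by program, 2 = hidden manually, 4 = expired, ...
      UPDATE listings SET hide_reasons = hide_reasons | 2 WHERE id = 42;

      -- Find every listing hidden manually, whatever other reasons apply
      SELECT * FROM listings WHERE (hide_reasons & 2) <> 0;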

    Read the article

  • How to design a DB like Facebook where users can update their own status and the status of a page as admin

    - by Harsha M V
    I am designing a database where users can update their status messages and can create pages/groups, like a Facebook fan page, and post statuses as the admin of the page and not as a user.

      user(id, name, ...)
      group(id, name, ...)
      group_admin(group_id, user_id)

    This is my setup. Is this the way to do it? How do I post under the group as an admin? Will I need to check for every user whether he is the admin or not?
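    A minimal sketch of one way to record who a status is posted as (table and column names beyond those in the question are hypothetical):

      -- A post stores its author and, optionally, the group it is posted as.
      CREATE TABLE status_post (
          id       INT PRIMARY KEY,
          user_id  INT NOT NULL,  -- references user(id): who wrote it
          group_id INT NULL,      -- references group(id); NULL = personal status
          body     TEXT NOT NULL
      );

      -- Admin check before allowing a group post: one lookup, not a scan of all users
      SELECT COUNT(*) FROM group_admin WHERE group_id = ? AND user_id = ?;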

    Read the article

  • Three customer addresses in one table or in separate tables?

    - by DR
    In my application I have a Customer class and an Address class. The Customer class has three instances of the Address class: customerAddress, deliveryAddress, invoiceAddress. What's the best way to reflect this structure in a database? The straightforward way would be a customer table and a separate address table. A more denormalized way would be just a customer table with columns for every address (for example, for "street": customer_street, delivery_street, invoice_street). What are your experiences with that? Are there any advantages and disadvantages of these approaches?
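    The normalized variant would look something like this sketch (column names assumed):

      CREATE TABLE address (
          id     INTEGER PRIMARY KEY,
          street TEXT,
          city   TEXT
      );

      CREATE TABLE customer (
          id                  INTEGER PRIMARY KEY,
          customer_address_id INTEGER REFERENCES address(id),
          delivery_address_id INTEGER REFERENCES address(id),
          invoice_address_id  INTEGER REFERENCES address(id)
      );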

    Read the article

  • Hyper-V machine's clock drifts all over, even with NTP

    - by MichaelGG
    Resolved: The problem was Hyper-V on that machine. I removed Hyper-V, installed VMware Server, and ran the same VM. The time sync issues went away (< 100ms difference after a day).

    My setup is like this:

      HYV1 - Hyper-V machine (non-domain) - sync irrelevant
      AD1  - VM AD server on HYV1, sync'd to time.nist.gov. Hyper-V time sync off.
      S1   - Physical machine, sync'd to domain.
      S2   - Physical machine running Hyper-V, sync'd to domain.
      V1   - Linux VM on S2, sync'd to AD1. No Hyper-V integration.

    AD1 and S1 have fine sync -- the stripchart shows less than 100ms difference. S2 drifts like crazy. Here's a bit of the stripchart against AD1:

      18:33:22 d:+00.0010138s o:+05.4101899s
      18:33:24 d:+00.0010138s o:+05.4319765s
      18:33:26 d:+00.0000000s o:+05.4788429s
      18:33:28 d:+00.0000000s o:+05.6089942s
      18:33:30 d:+00.0010138s o:+05.7240269s
      18:33:32 d:+00.0000000s o:+06.0421911s
      18:33:34 d:+00.0081104s o:+06.5613708s
      18:33:37 d:+00.0000000s o:+06.9096594s
      18:33:39 d:+00.0000000s o:+06.8867838s
      18:33:41 d:+00.0010127s o:+06.8936401s

    In 20 seconds, it drifted over a second. If I manually reset it to within 1s, within a few minutes it'll be back to drifting about 2 seconds. Overnight it went from ~2s to ~5s. The Linux VM inside S2 has perfect sync with AD1. Here's the config:

      C:\Users\mgg>w32tm /dumpreg /subkey:Parameters

      Value Name              Value Type      Value Data
      ------------------------------------------------------------
      ServiceDll              REG_EXPAND_SZ   %systemroot%\system32\w32time.dll
      ServiceMain             REG_SZ          SvchostEntry_W32Time
      ServiceDllUnloadOnStop  REG_DWORD       1
      Type                    REG_SZ          NT5DS
      NtpServer               REG_SZ          ad01.mydomain ad02.mydomain

      C:\Users\mgg>w32tm /dumpreg /subkey:Config

      Value Name              Value Type      Value Data
      -----------------------------------------------------------
      FrequencyCorrectRate    REG_DWORD       4
      PollAdjustFactor        REG_DWORD       5
      LargePhaseOffset        REG_DWORD       50000000
      SpikeWatchPeriod        REG_DWORD       900
      LocalClockDispersion    REG_DWORD       9
      HoldPeriod              REG_DWORD       5
      PhaseCorrectRate        REG_DWORD       1
      UpdateInterval          REG_DWORD       30000
      EventLogFlags           REG_DWORD       2
      AnnounceFlags           REG_DWORD       5
      TimeJumpAuditOffset     REG_DWORD       28800
      MinPollInterval         REG_DWORD       2
      MaxPollInterval         REG_DWORD       8
      MaxNegPhaseCorrection   REG_DWORD       -1
      MaxPosPhaseCorrection   REG_DWORD       -1
      MaxAllowedPhaseOffset   REG_DWORD       300

    I looked at the event log, and apart from warnings about sync (after it gets way out of sync), there are no other warnings. How can I go about troubleshooting this? It's the only machine that is having this problem. All the other machines (physical and virtual) are doing fine.

    Edit: To clarify: the VM (AD1) has integration turned off and syncs to time.nist.gov. AD1 is fine. It's the physical machine S2 that can't sync to AD1 and drifts all over. All the other physical servers are able to sync to AD1 just fine.

    Update: So, it appears to be an issue of running the VM. The clock slips slowly with the VM off. Turned on, it immediately starts losing seconds. I set the VM to only use half the resources, and that seems to have slightly mitigated it, for now. Thanks!

    Read the article
