Search Results

Search found 33029 results on 1322 pages for 'database queries'.

Page 72/1322

  • CakePHP Missing Database Table Error

    - by BRADINO
    I am baking a new project management application at work and added a couple of new tables to the database today. When I went into the console to bake the new models, they were not in the list: php /path/cake/console/cake.php bake all -app /path/app/ So I manually typed in the model name and got a "missing database table for model" error. I checked and double-checked, and the database table was named properly. It turns out that some files inside the /app/tmp/cache/ folder were preventing Cake from recognizing that I had added new tables to my database. Once I deleted the cache files, Cake instantly recognized my new database tables and I was baking away! rm -Rf /path/app/tmp/cache/cake*

    Read the article

  • PHP OCI8 and Oracle 11g DRCP Connection Pooling in Pictures

    - by christopher.jones
    Here is a screen shot from a PHP OCI8 connection pooling demo that I like to run. It graphically shows how little database host memory is needed when using DRCP connection pooling with Oracle Database 11g. Migrating to DRCP can be as simple as starting the pool and changing the connection string in your PHP application. The script that generated the data for this graph was a simple "Parts" query application being run under various simulated user loads. I was running the database on a small Oracle Linux server with just 2G of memory. I used PHP OCI8 1.4. Apache is in pre-fork mode, as needed for PHP. Each graph has time on the horizontal axis in arbitrary 'tick' time units. Click the image to see it full sized.

    Pooled connections
    Beginning with the top left graph, at tick time 65 I used Apache's 'ab' tool to start 100 concurrent 'users' running the application. These users connected to the database using DRCP: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl:pooled'); A second hundred DRCP users were added to the system at tick 80 and a final hundred at tick 100. At about tick 110 I stopped the test and restarted Apache. This closed all the connections. The bottom left graph shows the number of statements being executed by the database per second, with some spikes for background database activity and some variability for this small test. Each extra batch of users adds another 'step' of load to the system. The top right Server Process graph shows the database server processes doing the query work for each web user. As user load is added, the DRCP server pool increases (in green). The pool starts at its default size of 4 and quickly ramps up to about (I'm guessing) 35. At tick time 100 the pool increases to my configured maximum of 40 processes. Those 40 processes are doing the query work for all 300 web users. When I stopped the test at tick 110, the pooled processes remained open waiting for more users to connect. If I had left the test quiet for the DRCP 'inactivity_timeout' period (300 seconds by default), the pool would have shrunk back to 4 processes. On the bottom right you can see the amount of memory being consumed by the database. During the initial quiet period about 500M of memory was in use. The absolute number is just an indication of my particular DB configuration. As the number of pooled processes increases, each process needs more memory. The shape of the memory graph echoes the Server Process graph above it. Each of the 300 web users also needs a few kilobytes, but this is almost too small to see on the graph.

    Non-pooled connections
    Compare the DRCP case with using 'dedicated server' processes. At tick 140 I started 100 web users who did not use pooled connections: $c = oci_pconnect('phpdemo', 'welcome', 'myhost/orcl'); This connection string change is the only difference between the two tests. At ticks 155 and 165 I started two more batches of 100 simulated users each. At about tick 195 I stopped the user load but left Apache running. Apache then gradually returned to its quiescent state, killing idle httpd processes and producing the downward slope at the right of the graphs as the persistent database connection in each Apache process was closed. The Executions per Second graph on the bottom left shows the same step increases as the earlier DRCP case. The database is handling this load. But look at the number of server processes in the top right graph. There is now a one-to-one correspondence between Apache/PHP processes and DB server processes. Each PHP process has one DB server process dedicated to it; hence the term 'dedicated server'. The memory required on the database is proportional to all those database server processes started. Almost all my system's memory was consumed. I doubt it would have coped with any more user load.

    Summary
    Oracle Database 11g DRCP connection pooling significantly reduces database host memory requirements, allowing more system memory to be allocated to the SGA and allowing the system to scale to handle thousands of concurrent PHP users. Even for small systems, using DRCP allows more web users to be active. More information about PHP and DRCP can be found in the PHP Scalability and High Availability chapter of The Underground PHP and Oracle Manual.
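    The migration point above is not PHP-specific: DRCP serves any OCI-based client, so the same connection-string-only change applies elsewhere. Below is a minimal sketch in Python, assuming the cx_Oracle driver, a reachable 'myhost/orcl' service with the pool already started, and the same demo 'phpdemo' schema and "Parts" table (all assumptions carried over from the article, not verified here).

        # Sketch: DRCP vs. dedicated connections from Python via cx_Oracle.
        # The ':pooled' suffix on the service name is the only difference.
        import cx_Oracle

        pooled = cx_Oracle.connect("phpdemo", "welcome", "myhost/orcl:pooled")
        dedicated = cx_Oracle.connect("phpdemo", "welcome", "myhost/orcl")

        for conn in (pooled, dedicated):
            cur = conn.cursor()
            cur.execute("SELECT COUNT(*) FROM parts")  # stand-in for the demo's "Parts" query
            print(cur.fetchone())
            conn.close()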

    Read the article

  • Apache cannot find mysql database modules

    - by user809857
    I've created a simple Django project and set up a MySQL database. My simple project just creates an entry in the database. The project works fine when I use the built-in development server provided by Django (runserver). But when I deployed the project on Apache and mod_wsgi (Ubuntu server), Django could not find 'books', which is in this case my table in the database. The MySQL database that I use under runserver and Apache is the same one. I also rebuilt the database using Django's sqlall, validate, and syncdb, but I still get the error. What could be wrong with what I'm doing? Thanks

    Read the article

  • Oracle Database 12c

    - by jgelhaus
    What's your pain? The cost of IT and downtime? The complexity of IT? Poor database application performance? All of the above and more? These are real challenges caused by today's demands on data centers and their IT teams. Oracle Database 12c provides a breakthrough architecture that makes it easy to deploy and manage databases in the cloud. Oracle partners will leverage Database 12c innovation to provide additional options for Oracle customer success and ROI. Download Oracle Database 12c and plug into the cloud! Join us for our July 10th webcast to learn about this database breakthrough.

    Read the article

  • Lower Your Application Infrastructure Costs w/Oracle Database 11g

    - by john.brust
    Oracle Database 11g is designed to support enterprise applications, including Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Siebel. Every Oracle customer can benefit from the performance, reliability, and security that Oracle Database 11g brings to these applications. Plus, Oracle Database 11g helps you drive down your IT infrastructure costs. Join us next Friday for a webcast conversation with database expert Mark Townsend, Vice President of Oracle's Server Technology Division, to learn how you can benefit from running your applications on Oracle Database 11g. At the end of the presentation, we'll open up for live Q&A for approximately 30 minutes. Register now for our Friday, April 23rd, 2010, 9:30am PT / 12:30pm ET live webcast.

    Read the article

  • How should calculations be handled in a document database

    - by Morten
    Ok, so I have a program that basically logs errors into a NoSQL database. Right now there is just a single model for an error, and it's stored as a document in the NoSQL database. Basically I want to summarize across different errors and produce a summary of the "types" of errors that occurred. Traditionally, in a SQL database this normalization would work with groupings, sums, and averages, but in a NoSQL database I assume I need to use mapreduce. My current model seems unfit for the task; how should I change the way I store "models" in order to make statistical analysis easy? Would a NoSQL database even be the right tool for this type of problem? I'm storing things in Google AppEngine's BigTable, so there are some limitations to think of as well.
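    For what it's worth, the group-and-count the question describes maps naturally onto a map/reduce pass. A minimal, datastore-agnostic sketch in Python follows; plain dicts stand in for the stored error documents, and the field names ('type', 'duration_ms') are illustrative assumptions.

        # Summarize error documents by "type" without SQL GROUP BY:
        # map each document to a (key, value) pair, then reduce per key.
        from collections import defaultdict

        errors = [
            {"type": "Timeout", "duration_ms": 3100},
            {"type": "Timeout", "duration_ms": 2800},
            {"type": "NullRef", "duration_ms": 12},
        ]

        # map: emit (type, duration) pairs
        mapped = ((e["type"], e["duration_ms"]) for e in errors)

        # reduce: fold the pairs into per-type counts and totals
        summary = defaultdict(lambda: {"count": 0, "total_ms": 0})
        for key, ms in mapped:
            summary[key]["count"] += 1
            summary[key]["total_ms"] += ms

        for key, agg in summary.items():
            print(key, agg["count"], agg["total_ms"] / agg["count"])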

    Read the article

  • Podcast: Dell Perot Systems Relies on Oracle In-Memory Database Cache

    - by john.brust
    Recently we spoke with Bill Binko, Technology Consultant at Dell Perot Systems, about a high-volume web-based content delivery system they implemented for a client with Oracle In-Memory Database Cache. Their client needed to respond to ~1 billion hits (web requests) per day but hadn't been able to support this load. Oracle In-Memory Database Cache allowed multiple, complicated queries to take place without ever hitting the disk, providing sub-millisecond response times and the ability to manage much higher volumes of data. Old system: a SQL Server database, over 300 servers, difficult to maintain. New system: one Oracle Database 11g instance, multiple Oracle RAC nodes, backed up by Oracle Data Guard, with Oracle In-Memory Database Cache cutting query response times by 10x. Listen to the podcast.

    Read the article

  • 9/18 Live Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g

    - by jgelhaus
    Webcast: Three Compelling Reasons to Upgrade to Oracle Database 11g. Date: Tuesday, September 18, 2012. Time: 10 a.m. PT / 1 p.m. ET. If you or your organization is still working with Oracle Database 10g or an even older version, now is the time to upgrade. Oracle Database 11g offers a wide variety of advantages to enhance your operation. Join us for this live Webcast and learn about what you're missing: the business, operational, and technical benefits. With Oracle Database 11g, you can: upgrade with zero downtime; improve application performance and database security; reduce the amount of storage required; and save time and money. Register today.

    Read the article

  • Building ASP.NET Web Forms to Use a MySQL Database

    MySQL is a popular open source database, which means it can be used for free without obtaining or paying for a license. Some ASP.NET 3.5 hosting packages let you use a MySQL database, since it can be a cheaper hosting alternative compared to using a MS SQL database. However, things can be a bit complicated when querying a MySQL database in an ASP.NET environment....

    Read the article

  • Eliminating Downtime During Database Upgrades: A Customer Case Study

    - by irem.radzik(at)oracle.com
    Planned outages, such as database, OS, and hardware upgrades and migrations, are a fact of life. Even though they are "planned" and many of them are performed during off-business hours, they can still interrupt operations, especially for global operations and online businesses. For this reason many IT organizations postpone these critical infrastructure improvement projects, which in turn delays advancing business operations. This week, on Thursday, January 13th, we will host a free webcast on this topic, featuring Oracle GoldenGate customer Atmos Energy. Atmos Energy implemented Oracle GoldenGate to eliminate downtime during their database upgrade from Oracle Database 8.1.7 to Oracle Database 11.1.0.7. Jos Francis, Lead DBA for Atmos, and Ronald Nedd, Sr. DBA for Atmos, will be presenting their database upgrade project and their solution architecture. Join us at this live webcast and hear from our customer and product management how to eliminate planned outages with Oracle GoldenGate's real-time, heterogeneous data replication capabilities.

    Read the article

  • Webcast Series: Accelerate Business-Critical Database Deployments with Oracle Optimized Solutions

    - by ferhat
    Join us for this two-part Webcast series and learn how to safely consolidate business-critical databases and deliver quantifiable benefits to the business: save up to 75% in operational and acquisition costs; save millions of dollars consolidating legacy infrastructure; leverage best practices from thousands of customer environments; and increase end-user productivity with 75% faster time to operations and 4x faster throughput. The Oracle Optimized Solution for Oracle Database provides extensive guidelines for architecting and deploying complete database solutions that deliver superior performance and availability while minimizing cost and risk. Oracle's world-class engineering teams work together to define these optimal architectures using Oracle's powerful SPARC M-Series and SPARC T-Series servers together with Oracle Solaris and Oracle's SAN, NAS, and flash-based storage to run the industry-leading Oracle Database. Quite simply, the Oracle Optimized Solution for Oracle Database makes it easier for you to deliver and manage business-critical database environments that are fast, secure, and cost-effective. Available on-demand: Part 1, Why Architecture Matters When Deploying Business-Critical Databases; Part 2, How To Consolidate Databases Using Oracle Optimized Solutions. Presented by: Lawrence McIntosh, Principal Enterprise Architect, Oracle Optimized Solutions, and Ken Kutzer, Principal Product Manager, Infrastructure Solutions, Oracle.

    Read the article

  • I want a trivial example of where MongoDB can scale but a relational database will have trouble

    - by Ryan Weir
    I'm just learning to use MongoDB, and when discussing it with other programmers I'd like a quick example of why NoSQL can be a good choice compared to a traditional RDBMS; however, the scenarios I come up with or can find online seem pretty contrived. E.g. a blog with lots of traffic could be represented relationally, but would require some performance tuning and joins across tables (assuming full normalization is being used), whereas MongoDB would allow direct retrieval from one collection to the same effect. But the response I'm getting from other programmers is "why not just keep it relational and then add some trivial caching later?" Does anybody have a less contrived example where MongoDB will really shine and a relational db will fall over much more quickly? The smaller the project/system the better, because it leaves less room for disagreement. Something along the lines of the complexity of the blog example would be really useful. Thanks.
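    For readers weighing the example, this is roughly what the blog case looks like on the document side. A sketch assuming a local mongod and the pymongo driver; the collection and field names are invented for illustration.

        # One denormalized post document: author, tags, and comments embedded,
        # so a single find_one() replaces a three- or four-table join.
        from pymongo import MongoClient

        db = MongoClient("localhost", 27017).blog_demo

        db.posts.insert_one({
            "slug": "hello-world",
            "title": "Hello World",
            "author": {"name": "Ryan", "email": "ryan@example.com"},
            "tags": ["meta", "first"],
            "comments": [{"who": "anon", "text": "nice post"}],
        })

        post = db.posts.find_one({"slug": "hello-world"})
        print(post["title"], len(post["comments"]))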

    Read the article

  • Should custom data elements be stored as XML or database entries?

    - by meteorainer
    There are a ton of questions like this, but they are mostly very generalized, so I'd like to get some views on my specific usage. General: I'm building a new project on my own in Django. Its focus will be on small businesses. I'd like to make it somewhat customizable for my clients so they can add to their customer/invoice/employee/whatever items. My models would reflect boilerplate items that all ModelX might have. For example: first name, last name, email address, ... Then my users would be able to add fields for whatever data they might like. I'm still in the design phase and am building this myself, so I've got some options. Working on: Right now the 'extra items' models have a FK to the generic model (Customer and CustomerDataPoints, for example). All values in the extra data points are stored as char and will be coerced/parsed into their actual format at view building. In this build the user could theoretically add whatever values they want, group them in sets, and generally access them at will from the views relevant to that model. Pros: low storage overhead, very extensible, searchable. Cons: more SQL joins. My other option is to use some type of markup, or key-value pairing, stored directly on the boilerplate models. This could essentially be any low-overhead method, whether XML or literal strings. The view and form generated from the stored data would take control of validation and reorganizing on updates, then just dump the data back in as a char/blob/whatever. Something like: <datapoint type='char' value='something' required='true' /> <datapoint type='date' value='01/01/2001' required='false' /> ... Pros: no joins needed; updates for validation and views are decoupled from the data. Cons: much higher storage overhead, limited capacity to search on the extra content. So my question is: if you didn't live within the constraints imposed by your company, which method would you use? Why? What benefits or pitfalls do you see down the road for me as a small business trying to help other small businesses? Just to clarify, I am not asking about custom UI elements; those I can handle with forms and template snippets. I'm asking primarily about data storage and retrieval of non-standardized data relative to a boilerplate model.
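    Since the project is Django, the first option (the 'extra items' models with a FK) might look like the sketch below. Customer and CustomerDataPoint are the asker's names; every other field is an illustrative assumption.

        from django.db import models

        class Customer(models.Model):
            # boilerplate fields every ModelX gets
            first_name = models.CharField(max_length=100)
            last_name = models.CharField(max_length=100)
            email = models.EmailField()

        class CustomerDataPoint(models.Model):
            # one row per user-defined field; the value is stored as char
            # and coerced/parsed into its real type at view-building time
            customer = models.ForeignKey(Customer, on_delete=models.CASCADE,
                                         related_name="data_points")
            name = models.CharField(max_length=100)       # e.g. "loyalty_tier"
            value = models.CharField(max_length=255)
            value_type = models.CharField(max_length=20)  # "char", "date", ...
            required = models.BooleanField(default=False)

    The extra join the 'Cons' line mentions then shows up in lookups such as Customer.objects.filter(data_points__name='loyalty_tier', data_points__value='gold').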

    Read the article

  • SQL Server architecture - they want to move my database to new instance...Why?

    - by O'MALLEY
    Our current production database environment contains about 10 similarly managed databases. Our agency has just purchased and is installing new blade chassis and wants to move my database to a new instance (leaving the other 9 on another). This decision is being driven by one of our IT staff, not a DBA. I am a project manager, not a DBA, but I know enough not to necessarily have a good feeling about this decision, and I am urging our IT department to make a sound decision based on what is best for the database. Our IT department has stated that it is not good to have all our eggs in one basket, and has also stated that my database contains "regulatory data" so it should be on its own instance. A couple of truths: none of the databases on the current instance are OLTP databases, nor are any of them data warehouses; my database currently has joins/views to a couple of the other databases in the production environment. So my questions are as follows: Am I wrong to disregard a statement about eggs in baskets? (Hello, this is why we have maintenance plans/disaster recovery plans.) I'll mention that other databases also have regulatory data. What types of questions do I need to ask to determine if this is a sound decision? (A DBA friend mentioned that if the service level agreement of said database does not radically differ from the others, then why do they want to do this?) I have done some research on linked servers. What arguments should I bring forth about the fact that I have views set up that rely on data from other DBs currently?

    Read the article

  • Oracle Database 12c

    - by David Allan
    Exciting day today as Oracle Database 12c is released. You can find lots of information on the release on OTN here. With this release comes another milestone on Oracle's Data Integration roadmap: OWB is no longer shipped with the database. You will notice that the OWB documentation is no longer included with the Oracle Database documentation; you can compare and contrast the 11.2 and 12.1 documentation below. OWB 11gR2 is still supported with Oracle Database 12c, but you will need 11.2.0.3 plus at least CP2, which has been certified with Oracle Database 12c. The 11.2.0.4 release will wrap this into one install.

    Read the article

  • If all variables are a subset of the superkey, is the database design 5NF? [migrated]

    - by Lukazoid
    I have a table called LogMessages, which has the following columns: Level, a numeric value which represents Trace, Debug, Info, Warning, Error or Fatal; Time, a UTC time; Message, a foreign key to a Messages table; Source, a foreign key to a Sources table; User, a foreign key to a Users table. From what I can see, all of these columns are part of the superkey; if any single value differs from an existing row, a new row can be created. My question is: does this design comply with fifth normal form? I am unsure, as some groups of data will be repeating; however, I don't believe this violates 5NF? (Correct me if I'm wrong.)

    Read the article

  • What is the best database design for managing historical information? [closed]

    - by Emmad Kareem
    Say you have a Person table with columns such as ID, FirstName, LastName, BirthCountry, etc., and you want to keep track of changes to such a table. For example, the user may want to see previous names of a person or previous addresses, etc. The normalized way is to keep names in a separate table, addresses in a separate table, etc., and the main Person table will contain only the information that you are not interested in monitoring changes for (such information will be updated in place). The problem I see here, aside from the coding hassle due to the extensive number of joins required in a real-life situation, is that I have never seen this type of design in any real application (maybe because most did not provide this feature!). So, is there a better way to design this? Thanks.
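    One widely used middle ground, for comparison, is a single history (audit) table per entity rather than one table per tracked attribute. A sketch with SQLite below; the column names echo the ones in the question, and the trigger-based approach is one common option, not the only one.

        # Keep the current row in person; copy old values into person_history
        # on every update via a trigger, so "previous names" is one query.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE person (
            id            INTEGER PRIMARY KEY,
            first_name    TEXT,
            last_name     TEXT,
            birth_country TEXT
        );
        CREATE TABLE person_history (
            person_id     INTEGER,
            first_name    TEXT,
            last_name     TEXT,
            birth_country TEXT,
            changed_at    TEXT DEFAULT (datetime('now'))
        );
        CREATE TRIGGER person_audit BEFORE UPDATE ON person
        BEGIN
            INSERT INTO person_history (person_id, first_name, last_name, birth_country)
            VALUES (OLD.id, OLD.first_name, OLD.last_name, OLD.birth_country);
        END;
        """)

        db.execute("INSERT INTO person VALUES (1, 'Jane', 'Smith', 'CA')")
        db.execute("UPDATE person SET last_name = 'Jones' WHERE id = 1")
        print(db.execute("SELECT last_name FROM person_history").fetchall())  # [('Smith',)]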

    Read the article

  • WP: Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency

    - by uwes
    Consolidation in the data center is the driving factor in reducing capital and operational expense in IT today. This is particularly relevant as customers invest more in cloud infrastructure and associated service delivery. Database consolidation is a strategic component in this effort. Oracle Database 12c introduces Oracle Multitenant, a new database consolidation model in which multiple Pluggable Databases (PDBs) are consolidated within a Container Database (CDB). While keeping many of the isolation aspects of single databases, it allows PDBs to share the system global area (SGA) and background processes of a common CDB. The white paper recently published on OTN, Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency, analyzes and quantifies savings in compute resources, efficiencies in transaction processing, and consolidation density of Oracle Multitenant compared to consolidated single instance databases (SIDBs) running in a bare-metal environment.

    Read the article

  • How to deploy ASP.NET application with MS SQL server database

    - by Maddy
    I want to deploy my website with a MS SQL Server database. It's my first time and I have never done it before. What I have come to know from my googling is that I must have a domain (.com/.net/.co) and a host (for my web pages, .aspx and .cs; confusion here over whether I can also deploy my database). Now, I am not getting where I have to deploy my database: whether I have to buy a separate SQL Server database, or a host consisting of everything (meaning I can deploy both my ASP.NET application and the database as well).

    Read the article

  • How would one build a relational database on a key-value store, a-la Berkeley DB's SQL interface?

    - by coleifer
    I've been checking out Berkeley DB and was impressed to find that it supports a SQL interface that is "nearly identical" to SQLite: http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbsqlbasics.html#identicalusage I'm very curious, at a high level, how this kind of interface might have been architected. For instance: since values are "transparent", how do you efficiently query and sort by value; how are limits and offsets performed efficiently on large result sets; and how would the keys be structured and serialized for good average-case performance?
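    At a high level, a classic answer to the first two bullets is composite keys plus separate secondary-index key spaces, leaning on the store's sorted iteration. A toy sketch in Python follows, with a sorted key list standing in for the B-tree cursor; this illustrates the general technique, not Berkeley DB's actual on-disk layout.

        import bisect, json

        kv = {}  # ordered key/value store stand-in: key bytes -> value bytes

        # primary rows: "users/<id>" -> row document
        kv[b"users/0001"] = json.dumps({"name": "ada", "age": 36}).encode()
        kv[b"users/0002"] = json.dumps({"name": "bob", "age": 29}).encode()

        # secondary index: zero-padded age keeps lexicographic = numeric order
        kv[b"users.by_age/029/0002"] = b"users/0002"
        kv[b"users.by_age/036/0001"] = b"users/0001"

        def range_scan(prefix):
            """Yield (key, value) pairs with the prefix, in key order (a cursor scan)."""
            keys = sorted(kv)
            i = bisect.bisect_left(keys, prefix)
            while i < len(keys) and keys[i].startswith(prefix):
                yield keys[i], kv[keys[i]]
                i += 1

        # "ORDER BY age LIMIT 1 OFFSET 1": walk the index in key order and skip
        for n, (ikey, pkey) in enumerate(range_scan(b"users.by_age/")):
            if n < 1:   # OFFSET 1
                continue
            print(json.loads(kv[pkey]))  # row fetched via its primary key
            break       # LIMIT 1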

    Read the article

  • why my code still cannot connect with database? [closed]

    - by Wen Teng
    package com.mems.travis;

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.http.NameValuePair;
    import org.apache.http.message.BasicNameValuePair;
    import org.json.JSONObject;

    import android.app.Activity;
    import android.app.AlertDialog;
    import android.content.DialogInterface;
    import android.content.Intent;
    import android.os.AsyncTask;
    import android.os.Bundle;
    import android.util.Log;
    import android.view.View;
    import android.widget.Button;
    import android.widget.EditText;
    import android.widget.RadioButton;

    public class UserRegister extends Activity {

        JSONParser jsonParser = new JSONParser();

        EditText inputName;
        EditText inputUsername;
        EditText inputEmail;
        EditText inputPassword;
        RadioButton button1;
        RadioButton button2;
        Button button3;

        int success = 0;

        // url to create new product
        private static String url_register_user = "http://192.168.1.100/MEMS/add_user.php";

        // JSON Node names
        private static final String TAG_SUCCESS = "success";

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_user_register);

            // Edit Text
            inputName = (EditText) findViewById(R.id.nameTextBox);
            inputUsername = (EditText) findViewById(R.id.usernameTextBox);
            inputEmail = (EditText) findViewById(R.id.emailTextBox);
            inputPassword = (EditText) findViewById(R.id.pwTextBox);

            // Create button
            // RadioButton button1 = (RadioButton) findViewById(R.id.studButton);
            // RadioButton button2 = (RadioButton) findViewById(R.id.shopownerButton);
            Button button3 = (Button) findViewById(R.id.regSubmitButton);

            // button click event
            button3.setOnClickListener(new View.OnClickListener() {
                public void onClick(View view) {
                    String name = inputName.getText().toString();
                    String username = inputUsername.getText().toString();
                    String email = inputEmail.getText().toString();
                    String password = inputPassword.getText().toString();

                    if (name.contentEquals("") || username.contentEquals("")
                            || email.contentEquals("") || password.contentEquals("")) {
                        AlertDialog.Builder builder = new AlertDialog.Builder(UserRegister.this);
                        // 2. Chain together various setter methods to set the dialog characteristics
                        builder.setMessage(R.string.nullAlert)
                               .setTitle(R.string.alertTitle);
                        builder.setPositiveButton(R.string.ok, new DialogInterface.OnClickListener() {
                            public void onClick(DialogInterface dialog, int id) {
                                // User clicked OK button
                            }
                        });
                        // 3. Get the AlertDialog from create()
                        AlertDialog dialog = builder.show();
                    } else {
                        new RegisterNewUser().execute();
                    }
                }
            });
        }

        class RegisterNewUser extends AsyncTask<String, String, String> {

            protected String doInBackground(String... args) {
                String name = inputName.getText().toString();
                String username = inputUsername.getText().toString();
                String email = inputEmail.getText().toString();
                String password = inputPassword.getText().toString();

                // Building Parameters
                List<NameValuePair> params = new ArrayList<NameValuePair>();
                params.add(new BasicNameValuePair("name", name));
                params.add(new BasicNameValuePair("username", username));
                params.add(new BasicNameValuePair("email", email));
                params.add(new BasicNameValuePair("password", password));

                // getting JSON Object
                // Note that create product url accepts POST method
                JSONObject json = jsonParser.makeHttpRequest(url_register_user, "GET", params);

                // check log cat for response
                Log.d("Send Notification", json.toString());

                try {
                    int success = json.getInt(TAG_SUCCESS);

                    if (success == 1) {
                        // successfully created product
                        Intent i = new Intent(getApplicationContext(), StudentLogin.class);
                        startActivity(i);
                        finish();
                    } else {
                        // failed to register
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return null;
            }
        }
    }

    Read the article

  • Is there other ways to do insert/update/delete on a remote oracle database?

    - by gunbuster363
    I asked a question recently concerning the speed of execution of insert/update/delete using the JDBC driver against a remote machine, but the problem cannot be solved easily. I would like to ask: is there any other way to execute the insert/update/delete against Oracle? The current situation is this: the DB is on a separate machine from the Java program used to update the DB. I looked around the internet and found people suggesting using pure SQL or PL/SQL to do the update. Is that possible? And do we need to run the SQL or PL/SQL on the local machine? Because I have no knowledge of PL/SQL, I am not sure whether we can create some kind of script and call it on a remote machine. Let's say the situation is like this: the input data is on machine A, and the original Java program is also on machine A, but Oracle is on machine B. Is there any approach other than JDBC?
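    One point worth checking regardless of language: most of the cost of remote DML is per-statement network round trips, so batching rows per execute often helps more than changing tools. A hedged sketch with Python's cx_Oracle below (JDBC offers the same idea as addBatch/executeBatch); the connection details and table name are placeholders. Moving the whole loop into a stored procedure called once (cursor.callproc) is the PL/SQL variant the question mentions.

        # Cut round trips by binding many rows per execute.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "machineB/orcl")
        cur = conn.cursor()

        rows = [(i, "item %d" % i) for i in range(10000)]

        # one round trip per batch instead of one per row
        cur.executemany("INSERT INTO parts (id, name) VALUES (:1, :2)", rows)
        conn.commit()
        conn.close()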

    Read the article

  • when should a database table be broken into multiple tables with relations?

    - by GSto
    I have an application that needs to store client data, and part of that is some data about their employer as well. Assuming that a client can only have one employer, and that the chance of people having identical employer data is slim to none, which schema would make more sense to use? Schema 1, Client table: id int, name varchar(255), email varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16), employer_name varchar(255), employer_phone varchar(255), employer_address varchar(255), employer_city varchar(255), employer_state char(2), employer_zip varchar(16). Schema 2, Client table: id int, name varchar(255), email varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16); Employer table: id int, name varchar(255), phone varchar(255), address varchar(255), city varchar(255), state char(2), zip varchar(16), patient_id int. Part of me thinks that since these are clearly two different 'objects' in the real world, separating them out into two different tables makes sense. However, since a client will always have an employer, I'm also not seeing any real benefits to separating them out, and it would make querying data about clients more complex. Is there any benefit or reason for creating two tables in a situation like this instead of one?
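    To make the trade-off concrete, Schema 2 plus the one join it costs can be sketched in a few lines (SQLite here, column lists trimmed; the FK is placed on the client row, which fits the one-employer-per-client assumption more directly than the patient_id column in the schema above).

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE employer (
            id   INTEGER PRIMARY KEY,
            name TEXT, phone TEXT, city TEXT
        );
        CREATE TABLE client (
            id          INTEGER PRIMARY KEY,
            name        TEXT, email TEXT, city TEXT,
            employer_id INTEGER REFERENCES employer(id)
        );
        """)
        db.execute("INSERT INTO employer VALUES (1, 'Acme', '555-0100', 'Springfield')")
        db.execute("INSERT INTO client VALUES (1, 'Pat', 'pat@example.com', 'Springfield', 1)")

        # the extra query complexity the question worries about is this one join
        row = db.execute("""
            SELECT c.name, e.name
            FROM client c LEFT JOIN employer e ON e.id = c.employer_id
        """).fetchone()
        print(row)  # ('Pat', 'Acme')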

    Read the article

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients, I often encounter the query text in a trace file I've captured but don't know the source of the query. I've found that many of the poorest-performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. The first thing I did was find a great utility called RSScripter, which opens up a Windows dialog...(read more)
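    The core of the technique is simple enough to sketch: RDL files are XML, and the dataset queries live in CommandText elements, so extracting and grepping those finds the offending report. A Python version for consistency with the other examples here (the article itself uses PowerShell and RSScripter); the path and search string are placeholders.

        # Find which .rdl report files contain a given query fragment.
        import pathlib, re

        NEEDLE = "FROM dbo.Orders"  # fragment captured from the trace file
        report_root = pathlib.Path("./reports")

        pattern = re.compile(r"<CommandText>(.*?)</CommandText>", re.S | re.I)
        for rdl in report_root.rglob("*.rdl"):
            text = rdl.read_text(encoding="utf-8", errors="ignore")
            for query in pattern.findall(text):
                if NEEDLE.lower() in query.lower():
                    print(rdl, "->", query[:80].strip())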

    Read the article
