Search Results

Search found 34093 results on 1364 pages for 'database architecture'.


  • PHP/MySQL Database application development tool

    - by RCH
    I am an amateur PHP coder and have built a couple of dozen projects from scratch, including fairly simple e-commerce systems with user authentication and PayPal integration (all coded by hand from a clean page) and a price comparison engine that takes data from multiple sites. But I am no expert with OO and other such advanced techniques - I just have a fairly decent grasp of the basics of data processing, logic, functions and trying to optimize code as much as possible. I just want to make this clear so you have some idea of where I'm coming from. I have a couple of fairly large new projects on my plate for corporate clients - both require bespoke database-driven applications with complex relationships, many tables and lots of different front-end functions to manipulate that data for the internal staff at these companies. I figured building these systems from scratch would probably be a huge waste of time. Instead, there must be tools out there that will allow me to construct MySQL databases and build the pages with things like pagination, action buttons, table construction etc. - some kind of database abstraction layer or system generator, if you will. What tool do you recommend for such a purpose for someone at my level? Open source would be great, but I don't mind paying for something decent as well. Thanks for any advice.

    Read the article

  • Partitioning Strategies for P6 Reporting Database

    - by Jeffrey McDaniel
    Prior to P6 Reporting Database version 3.2 sp1, range partitioning was used. This was applied only to the history tables. The ranges were defined during installation, and additional ranges needed to be added once your data reached the final defined range. As of P6 Reporting Database version 3.2 sp1, interval partitioning was implemented, applied to the existing History table as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges: interval partitioning automatically creates a partition for the defined interval when data inserted into the table exceeds the existing partitions. The 3.2 sp1 release includes steps on how to update your partitioning. For all versions after 3.2 sp1, interval partitioning is the only partitioning option used, so when upgrading it is important to be aware of these changes. Here is a link with more information on partitioning - the types and the advantages: http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm
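
    To illustrate the difference, here is a minimal sketch of an interval-partitioned table (the table and column names are hypothetical, not taken from the P6 schema). Only the initial range partition is declared; Oracle then creates monthly partitions on its own as inserts arrive:

      CREATE TABLE history_fact (
        fact_id     NUMBER,
        snapshot_dt DATE
      )
      PARTITION BY RANGE (snapshot_dt)
      INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
      (
        -- everything before 2012 lands in the seed partition;
        -- rows beyond it trigger automatic partition creation
        PARTITION p_seed VALUES LESS THAN (DATE '2012-01-01')
      );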

    Read the article

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL I get errors. One of them is "'Filegroup reference and partitioning scheme' is not supported in this version of SQL Server." It’s partly due to my poor memory (since I’ve succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v 11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I click the Advanced button, under Script for Server Version, why don’t I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven’t got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak it for Azure. For example, I take out all the “[dbo].” stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there’s information on what a table needs to look like in SQL Azure, but the tools should know this so I don’t have to mess with it.
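
    For what it's worth, the hand-tweak being described looks roughly like this (the table and column names are made up for illustration). The script SSMS generates for a local database carries a filegroup clause that SQL Azure rejects:

      -- as generated for on-premises SQL Server
      CREATE TABLE [dbo].[Orders] (
          OrderId INT NOT NULL,
          CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
      ) ON [PRIMARY];

      -- after editing for SQL Azure: drop the filegroup reference
      -- (SQL Azure also requires a clustered index on every table)
      CREATE TABLE dbo.Orders (
          OrderId INT NOT NULL,
          CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
      );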

    Read the article

  • Where to find common database abbreviations in Spanish

    - by jmh_gr
    I'm doing a little pro bono work for an organization in Central America. I'm OK at Spanish, and my contacts are perfectly fluent but are not technical people. Even if they don't care what I call some fields in a database, I still want to make as clean a schema as possible, and I'd like to know what some typical abbreviations are for field/variable names in Spanish. I understand abbreviations and naming conventions are entirely personal. I'm not asking for the "correct" or "best" way to abbreviate database object names. I'm just looking for references to lists of typical abbreviations that would be easily recognizable to a technically competent native Spanish speaker. I believe I am a decent googler, but I've had no luck on this one. For example, in my company (where English is the primary language) 'Date' is always shortened to 'DT', 'Code' to 'CD', 'Item' to 'IT', etc. It's easy for the crowds of IT temp workers who revolve through on various projects to figure out that 'DT' stands for 'Date', 'YR' for 'Year', or 'TN' for 'Transaction' without even having to consult the official abbreviations list.

    Read the article

  • Methods for getting static data from obj-c to Parse (database)

    - by Phil
    I'm starting to think out how best to code my latest game on iOS, and I want to use parse.com to store pretty much everything, so I can easily change things. What I'm not sure about is how to get static data into Parse, or at least the best method. I've read about using NSMutableDictionary, pLists, JSON, XML files etc. Let's say, for example, that in AS3 I could create a simple data object like so: static var playerData:Object = {position:{startX:200, startY:200}}; Stick it in a static class, and bingo, I've got my static data to use how I see fit. Basically I'm looking for the best method to do the same thing in Obj-C, but the data is not to be stored directly in the iOS game - it is to be sent to parse.com to be stored in their database there. The game (at least the distribution version) will only load data from the Parse database, so however I'm getting the static data into Parse, I want to be able to remove it from being included in the eventual iOS file. So any ideas on the best methods to sort that? If I had longer on this project, it might be nice to use storyboards and create a simple game editor to send data to Parse... actually, maybe that's a good idea; it's just that I'm new to Obj-C and I'm looking for the most straightforward (read: quickest) way to achieve things. Thanks for any advice.

    Read the article

  • Should I encrypt data in database?

    - by Tio
    I have a client for whom I'm going to build a web application about patient care: managing patients, consults, history, calendars, basically everything around that. The problem is that this is sensitive data - patient history and such. The client insists on encrypting the data at the database level, but I think this is going to deteriorate the performance of the web app (but maybe I shouldn't be worried about this). I've read the laws about data protection on health issues (Portugal), but they aren't very specific about this (I've just questioned the authorities about it and I'm waiting for their response). I've read the following link, but my question is different: should I encrypt the data in the database or not? One problem that I foresee in encrypting the data is that I'm going to need a key. This could be the user's password, but we all know how user passwords are (12345 etc.), and if I generate a key I would have to store it somewhere, which means that the programmer, the DBA and so on could have access to it - any thoughts on this? Even adding a random salt to the user's password isn't going to solve the problem, since I can always access it and therefore decrypt the data.
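
    For scale, here is a minimal sketch of what column-level encryption tends to look like (MySQL syntax is assumed purely for illustration, and patient_history/notes are made-up names; the notes column would need a BLOB/VARBINARY type, since the cipher text is binary). Note the operational cost: an encrypted column can no longer be indexed or searched on its contents, which often hurts more than the CPU overhead:

      -- writing: only cipher text reaches the table; @app_key is supplied
      -- by the application per session and never stored in the database
      INSERT INTO patient_history (patient_id, notes)
      VALUES (42, AES_ENCRYPT('consultation notes ...', @app_key));

      -- reading: rows must be decrypted before their contents can be used
      SELECT patient_id, AES_DECRYPT(notes, @app_key) AS notes
      FROM patient_history
      WHERE patient_id = 42;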

    Read the article

  • Why can't I create a database in an empty ASP MVC 2 project using Project->Add->New Item->SQL Server

    - by Dr Dork
    I'm diving head first into ASP MVC and am playing around with creating and manipulating a database. I did a search and found this tutorial for creating a database, however when I follow it, I get this error when trying to add a new database to my fresh, empty ASP MVC 2 project... A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) The only requirement the tutorial mentioned was SQL Server Express, but when I went to download it, it said it was already installed; I'm assuming it was part of the VS 2010 RC I installed and am running, so I don't know what else I need if I am missing something. This is all new to me, so I'm sure I'm missing something obvious here, and after I'm done posting this question, I plan to do some more research into the topic of databases and how they work with ASP MVC. In the meantime, I was hoping you could help me answer a couple of high-level questions: What am I missing/forgetting to do that is causing this error? Any suggestions for good resources/tutorials that focus on using databases with ASP MVC? I've done a lot of database programming in the past, so I'm familiar with the concepts of relational databases and the SQL language. I wish I could find a good resource for learning how to work with them in an ASP dev environment, as well as a good breakdown of all the related technologies used for working with them (i.e. LINQ to SQL). Thanks so much in advance for all your help! I'm going to start researching these questions right now.

    Read the article

  • What are some good ways to store performance statistics in a database for querying later?

    - by Nathan
    Goal: Store arbitrary performance statistics of stuff that you care about (how many customers are currently logged on, how many widgets are being processed, etc.) in a database so that you can understand how your servers are doing over time.
    Assumptions: A database is already available, and you already know how to gather the information you want and are capable of putting it in the database however you like.
    Some ideal attributes of a solution:
      - Causes no noticeable performance hit on the server being monitored
      - Has a very high precision of measurement
      - Does not store useless or redundant information
      - Is easy to query (lends itself to gathering/displaying useful information)
      - Lends itself to being graphed easily
      - Is accurate
      - Is elegant
    Primary questions:
      1) What is a good design/method/scheme for triggering the storing of statistics?
      2) What is a good database design for how to actually store the data?
    Example answers... that are sort of vague and lame:
      1) I could, once per [fixed time interval], store a row of data with all the performance measurements I care about in each column of one big flat table indexed by timestamp and/or server.
      2) I could have a daemon monitoring performance stuff I care about, and add a row whenever something changes (instead of at fixed time intervals) to a flat table as in #1.
      3) I could trigger either as in #2, but I could store information about each aspect of performance that I'm measuring in separate tables, opening up the possibility of adding tons of rows for often-changing items, and few rows for seldom-changing items.
    Etc. In the end, I will implement something, even if it's some super-braindead approach I make up myself, but I'm betting there are some really smart people out there willing to share their experiences and bright ideas!
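
    As a starting point for question 2, here is a minimal sketch of the narrow "one row per sample" design hinted at in #3 (names and types are illustrative, not prescriptive). It accommodates often-changing and seldom-changing metrics in the same table without redundant columns, and the composite index keeps time-range queries and graphing cheap:

      CREATE TABLE metric_sample (
          server_name VARCHAR(64)      NOT NULL,
          metric_name VARCHAR(64)      NOT NULL,
          sampled_at  TIMESTAMP        NOT NULL,
          value       DOUBLE PRECISION NOT NULL
      );

      -- supports "metric X on server Y between T1 and T2" lookups
      CREATE INDEX ix_metric_sample
          ON metric_sample (server_name, metric_name, sampled_at);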

    Read the article

  • Why is my django bulk database population so slow and frequently failing?

    - by bryn
    I decided I'd like to use django's model system rather than coding raw SQL to interface with my database, but I am having a problem that surely is avoidable. My models.py contains:

      class Student(models.Model):
          student_id = models.IntegerField(unique = True)
          form = models.CharField(max_length = 10)
          preferred = models.CharField(max_length = 70)
          surname = models.CharField(max_length = 70)

    and I'm populating it by looping through a list as follows:

      from models import Student

      for id, frm, pref, sname in large_list_of_data:
          s = Student(student_id = id, form = frm, preferred = pref, surname = sname)
          s.save()

    I don't really want to be saving this to the database each time, but I don't know another way to get django to not forget about it (I'd rather add all the rows and then do a single commit). There are two problems with the code as it stands. It's slow - about 20 students get updated each second. It doesn't even make it through large_list_of_data, instead throwing a DatabaseError saying "unable to open database file". (Possibly because I'm using sqlite3.) My question is: how can I stop these two things from happening? I'm guessing that the root of both problems is that I've got the s.save(), but I don't see a way of easily batching the students up and then saving them in one commit to the database.
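
    A hedged note on why the loop crawls: under Django's default autocommit behaviour, every s.save() above becomes its own transaction, so at the SQL level the run looks like the first pattern below, and sqlite pays a full commit (an fsync) per row. Batching aims for the second pattern - one transaction around the whole load (the table name follows Django's appname_modelname convention and is illustrative):

      -- what the loop effectively issues today: one commit per student
      BEGIN; INSERT INTO myapp_student (student_id, form, preferred, surname)
             VALUES (1, '10A', 'Jo', 'Smith'); COMMIT;
      BEGIN; INSERT INTO myapp_student (student_id, form, preferred, surname)
             VALUES (2, '10B', 'Sam', 'Jones'); COMMIT;

      -- what batching should produce: one commit for the whole list
      BEGIN;
      INSERT INTO myapp_student (student_id, form, preferred, surname) VALUES (1, '10A', 'Jo', 'Smith');
      INSERT INTO myapp_student (student_id, form, preferred, surname) VALUES (2, '10B', 'Sam', 'Jones');
      COMMIT;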

    Read the article

  • How do I create a simple Windows form to access a SQL Server database?

    - by NoCatharsis
    I believe this is a very novice question, and if I'm using the wrong forum to ask, please advise. I have a basic understanding of databasing with MS SQL Server, and programming with C++ and C#. I'm trying to teach myself more by setting up my own database with MS SQL Server Express 2008 R2 and accessing it via Windows forms created in C# Express 2010. At this point, I just want to keep it to free or Express dev tools (not necessarily Microsoft, though). Anyway, I created a database using the instructions provided here, and I set the data types appropriately for each column (no errors in setup at least). Now I'm designing the GUI in C# Express, but I've kind of hit a wall as far as the database connection goes. Is there a simple way to access the database I created locally using C# Express? Can anyone suggest a guide that has all this spelled out already? I am a self-learner, so I look forward to teaching myself how to use these applications, but any pointers to start me off in the right direction would be greatly appreciated.

    Read the article

  • Is it possible that two requests at the same time double this code? (prevent double database entry)

    - by loostro
    1) The controller code (Symfony2 framework):

      $em = $this->getDoctrine()->getEntityManager();

      // get latest toplist
      $last = $em->getRepository('RadioToplistBundle:Toplist')->findOneBy(
          array('number' => 'DESC')
      );

      // get current year and week of the year
      $week = date('W');
      $year = date('Y');

      // if:
      //   [case 1]: $last is null, meaning there are no toplists in the database
      //   [case 2]: $last->getYear() or $last->getWeek() do not match current
      //             year and week number, meaning that there are toplists in the
      //             database, but not for current week
      // then:
      //   create new toplist entity (for current week of current year)
      // else:
      //   do nothing (return)
      if($last && $last->getYear() == $year && $last->getWeek() == $week)
          return;
      else {
          $new = new Toplist();
          $new->setYear($year);
          $new->setWeek($week);
          $em->persist($new);
          $em->flush();
      }

    This code is executed with each request to view the toplist results (frontend) or the list of toplists (backend). Anytime someone wants to access the toplist, we first check whether we should create a new toplist entity (for the new week).
    2) The question is: is it possible that:
      - User A goes to mydomain.com/toplist at 00:00:01 on Monday - the code should generate a new entity;
      - the server slows down and it takes 3 seconds to execute the code, so the new toplist entity is saved to the database at 00:00:04 on Monday;
      - User B goes to mydomain.com/toplist at 00:00:02 on Monday;
      - at 00:00:02 the toplist is not yet saved in the database, thus User B's request triggers the code to create another toplist entity.
    And so, after a few seconds, we have 2 toplist entities for the current week. Is this possible? How should I prevent this?
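
    Yes, that interleaving is possible - both requests can pass the "no toplist for this week" check before either flush commits. The usual guard, sketched here under the assumption that the toplist table has year and week columns matching the entity, is a database-level unique constraint, so the slower of the two concurrent inserts fails with a duplicate-key error that the controller can catch and ignore:

      ALTER TABLE toplist
          ADD CONSTRAINT uniq_toplist_year_week UNIQUE (year, week);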

    Read the article

  • Consolidate Data in Private Clouds, But Consider Security and Regulatory Issues

    - by Troy Kitch
    The January 13 webcast Security and Compliance for Private Cloud Consolidation will provide attendees with an overview of private cloud computing based on Oracle's Maximum Availability Architecture and how security and regulatory compliance affect implementations. Many organizations are taking advantage of Oracle's Maximum Availability Architecture to drive down the cost of IT by deploying private cloud computing environments that can support downtime and utilization spikes without idle redundancy. With two-thirds of sensitive and regulated data residing in organizations' databases, private cloud database consolidation means organizations must be more concerned than ever about protecting their information and addressing new regulatory challenges. Join us for this webcast to learn about the greater risks and increased threats to private cloud data, and how Oracle Database Security Solutions can assist in securely consolidating data and meeting compliance requirements. Register Now.

    Read the article

  • Retrieve Performance Data from SOA Infrastructure Database

    - by fip
    My earlier blog posting shows how to enable, retrieve and interpret BPEL engine performance statistics to aid performance troubleshooting. The strength of the BPEL engine statistics in EM is their breakdown per request. But there are some limitations with the BPEL performance statistics mentioned in that blog posting:
      - The statistics are stored in memory instead of being persisted. To avoid memory overflow, the data are stored in a buffer of limited size. When the statistic entries exceed the limit, old data are flushed out to give way to new statistics, so the buffer can only keep the last X entries of data. The statistics from 5 hours ago may not be there anymore.
      - The BPEL engine performance statistics only include latencies. They do not provide throughputs.
    Fortunately, Oracle SOA Suite runs with the SOA Infrastructure database, and a lot of performance data are naturally persisted there. They are at a coarser grain than the in-memory BPEL statistics, but they have their own strengths, as they are persisted. Here I would like to offer examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time. You can run them immediately after you modify the date range to match your actual system.

    1. Asynchronous/one-way messages incoming rates
    The following query will show the number of messages sent to one-way/async BPEL processes during a given time period, organized by process names and states:

      select composite_name composite, state, count(*) Count
      from dlv_message
      where receive_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

    2. Throughput of BPEL process instances
    The following query shows the number of synchronous and asynchronous process instances created during a given time period. It lists instances of all states, including the unfinished and faulted ones. The results will include all composites across all SOA partitions:

      select state, count(*) Count, composite_name composite, component_name, componenttype
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype
      order by count(*) desc;

    3. Throughput and latencies of BPEL process instances
    This query builds on the previous one, providing more comprehensive information. It gives not only throughput but also the maximum, minimum and average elapsed times of BPEL process instances:

      select composite_name Composite, component_name Process, componenttype, state, count(*) Count,
             trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) MaxTime,
             trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) MinTime,
             trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) AvgTime
      from cube_instance
      where creation_date >= to_timestamp('2012-10-24 21:00:00','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('2012-10-24 21:59:59','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;

    4. Combine all together
    Now let's combine all three queries together and parameterize the start and end timestamps to make the script a bit more robust. The following script will prompt for the start and end time before querying against the database:

      accept startTime prompt 'Enter start time (YYYY-MM-DD HH24:MI:SS)'
      accept endTime prompt 'Enter end time (YYYY-MM-DD HH24:MI:SS)'

      Prompt "==== Rejected Messages ====";
      REM 2012-10-24 21:00:00
      REM 2012-10-24 21:59:59
      select count(*), composite_dn
      from rejected_message
      where created_time >= to_timestamp('&&StartTime','YYYY-MM-DD HH24:MI:SS')
        and created_time <= to_timestamp('&&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_dn;

      Prompt " ";
      Prompt "==== Throughput of one-way/asynchronous messages ====";
      select state, count(*) Count, composite_name composite
      from dlv_message
      where receive_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and receive_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, state
      order by Count;

      Prompt " ";
      Prompt "==== Throughput and latency of BPEL process instances ===="
      select state, count(*) Count,
             trunc(Max(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) MaxTime,
             trunc(Min(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) MinTime,
             trunc(AVG(extract(day from (modify_date-creation_date))*24*60*60 +
                       extract(hour from (modify_date-creation_date))*60*60 +
                       extract(minute from (modify_date-creation_date))*60 +
                       extract(second from (modify_date-creation_date))),4) AvgTime,
             composite_name Composite, component_name Process, componenttype
      from cube_instance
      where creation_date >= to_timestamp('&StartTime','YYYY-MM-DD HH24:MI:SS')
        and creation_date <= to_timestamp('&EndTime','YYYY-MM-DD HH24:MI:SS')
      group by composite_name, component_name, componenttype, state
      order by count(*) desc;

    Read the article

  • How to make this design closer to proper DDD?

    - by Seralize
    I've read about DDD for days now and need help with this sample design. All the rules of DDD make me very confused as to how I'm supposed to build anything at all when domain objects are not allowed to expose methods to the application layer; where else do I orchestrate behaviour? Repositories are not allowed to be injected into entities, and entities themselves must thus work on state. Then an entity needs to know something else from the domain, but other entity objects are not allowed to be injected either? Some of these things make sense to me, but some don't. I've yet to find good examples of how to build a whole feature, as every example is about Orders and Products, repeating the other examples over and over. I learn best by reading examples and have tried to build a feature using the information I've gained about DDD this far. I need your help to point out what I do wrong and how to fix it, most preferably with code, as "I would not recommend doing X and Y" is very hard to understand in a context where everything is just vaguely defined already. If I can't inject an entity into another, it would be easier to see how to do it properly. In my example there are users and moderators. A moderator can ban users, but with a business rule: only 3 per day. I made an attempt at setting up a class diagram to show the relationships (code below):

      interface iUser {
          public function getUserId();
          public function getUsername();
      }

      class User implements iUser {
          protected $_id;
          protected $_username;

          public function __construct(UserId $user_id, Username $username) {
              $this->_id = $user_id;
              $this->_username = $username;
          }

          public function getUserId() { return $this->_id; }
          public function getUsername() { return $this->_username; }
      }

      class Moderator extends User {
          protected $_ban_count;
          protected $_last_ban_date;

          public function __construct(UserBanCount $ban_count, SimpleDate $last_ban_date) {
              $this->_ban_count = $ban_count;
              $this->_last_ban_date = $last_ban_date;
          }

          public function banUser(iUser &$user, iBannedUser &$banned_user) {
              if (!$this->_isAllowedToBan()) {
                  throw new DomainException('You are not allowed to ban more users today.');
              }
              if (date('d.m.Y') != $this->_last_ban_date->getValue()) {
                  $this->_ban_count = 0;
              }
              $this->_ban_count++;
              $date_banned = date('d.m.Y');
              $expiration_date = date('d.m.Y', strtotime('+1 week'));
              $banned_user->add($user->getUserId(), new SimpleDate($date_banned), new SimpleDate($expiration_date));
          }

          protected function _isAllowedToBan() {
              if ($this->_ban_count >= 3 AND date('d.m.Y') == $this->_last_ban_date->getValue()) {
                  return false;
              }
              return true;
          }
      }

      interface iBannedUser {
          public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date);
          public function remove();
      }

      class BannedUser implements iBannedUser {
          protected $_user_id;
          protected $_date_banned;
          protected $_expiration_date;

          public function __construct(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) {
              $this->_user_id = $user_id;
              $this->_date_banned = $date_banned;
              $this->_expiration_date = $expiration_date;
          }

          public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) {
              $this->_user_id = $user_id;
              $this->_date_banned = $date_banned;
              $this->_expiration_date = $expiration_date;
          }

          public function remove() {
              $this->_user_id = '';
              $this->_date_banned = '';
              $this->_expiration_date = '';
          }
      }

      // Gathers objects
      $user_repo = new UserRepository();
      $evil_user = $user_repo->findById(123);
      $moderator_repo = new ModeratorRepository();
      $moderator = $moderator_repo->findById(1337);
      $banned_user_factory = new BannedUserFactory();
      $banned_user = $banned_user_factory->build();

      // Performs ban
      $moderator->banUser($evil_user, $banned_user);

      // Saves objects to database
      $user_repo->store($evil_user);
      $moderator_repo->store($moderator);
      $banned_user_repo = new BannedUserRepository();
      $banned_user_repo->store($banned_user);

    Should the User entity have an 'is_banned' field which can be checked with $user->isBanned()? How should a ban be removed? I have no idea.

    Read the article

  • using remote MS Access database which connects to remote SQL server

    - by Manjot
    Hi, We have a Microsoft Access database + application (on Server A) which connects to a remote SQL Server (Server B) using a System DSN ODBC connection (on Server A) to the SQL database server. The users open this Access database remotely, as it is in a shared location on Server A. They still have to create a local ODBC connection on their computers to connect to Server B. Is there any way that they can access the Access database and not have to create a local ODBC connection? Thanks in advance.

    Read the article

  • Having trouble connecting Magento to an external Windows database server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work There must be something, I am missing I also researched the difference between linux sql databases and windows sql databases but have not come to conclusion, if installing Mysql on windows would make a difference in syntax and coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
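
    One hedged suggestion, since everything else checks out: MySQL authenticates accounts as user@host pairs, so "Access denied" often means the account exists but not for the host the web server connects from. A sketch of how to verify and fix that (the host pattern below is an assumption - substitute the Magento server's actual internal address):

      -- see which user/host combinations actually exist
      SELECT user, host FROM mysql.user WHERE user = 'magentouser';

      -- re-create the account against the web server's address and re-grant
      CREATE USER 'magentouser'@'10.0.0.%' IDENTIFIED BY 'AlnumPassword1';
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, LOCK TABLES
          ON magento.* TO 'magentouser'@'10.0.0.%';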

    Read the article

  • Optimize SAP SQL Server database using DTA

    - by Danilo Brambilla
    Is it safe to optimize a SQL Server 2005 SAP R/3 database using Database Tuning Advisor recommendations? We are experiencing very low performance on a dedicated SAP database because of intense read operations against the db, and the DTA suggests creating about 25 indexes and 100 statistics. I am not an expert on SAP, and I am quite surprised to see that this database has about 56,000 tables and 6,500 views (120 GB of data). Thank you all for your help.
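
    A hedged sanity check before applying anything on an SAP system (where support agreements usually expect schema changes to go through SAP's own tools rather than directly through SQL Server): compare the DTA output against what SQL Server's own missing-index DMVs have accumulated since the last restart. If the two lists agree on the top few indexes, the recommendations are at least consistent with the live read workload:

      SELECT TOP 25
             d.statement AS table_name,
             d.equality_columns, d.inequality_columns, d.included_columns,
             s.user_seeks, s.avg_user_impact
      FROM sys.dm_db_missing_index_group_stats s
      JOIN sys.dm_db_missing_index_groups g  ON s.group_handle = g.index_group_handle
      JOIN sys.dm_db_missing_index_details d ON g.index_handle = d.index_handle
      ORDER BY s.avg_user_impact * s.user_seeks DESC;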

    Read the article

  • Delete data from a SQL Server database on a full partition

    - by aleroot
    I have a SQL Server 2005 database on a dedicated partition. Over time the database grew, and now it has occupied all the space on the partition. The problem is that the only operation I can now do on the database is detach, but I want to remove old data from some tables to save space... How can I remove old data from the database if the SQL Server interface doesn't allow me to run queries on it?
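
    A hedged sketch of one way out (the object names are illustrative): deletes fail on a full partition because the transaction log has no room to grow, so first give the log a second file on a drive that still has free space, then trim old rows in small batches so each transaction stays small, and reclaim the space afterwards:

      -- temporary second log file on another drive
      ALTER DATABASE MyDb
          ADD LOG FILE (NAME = MyDb_log2,
                        FILENAME = 'D:\temp\MyDb_log2.ldf',
                        SIZE = 512MB);

      -- delete in small batches to keep each transaction small
      WHILE 1 = 1
      BEGIN
          DELETE TOP (5000) FROM dbo.OldData WHERE created_on < '20090101';
          IF @@ROWCOUNT = 0 BREAK;
      END;

      -- shrink the data file to release space back to the OS
      DBCC SHRINKFILE (MyDb_data);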

    Read the article

  • Determining Azure SQL Database requirements

    - by Gerald
    I'm looking into moving a SQL Server database project to the cloud using Azure SQL Database. I'm just wondering what metrics I can use from SQL Server to help determine what my needs will be on Azure. The size of the database is around 150GB, so I understand my needs in terms of storage; I'm just not sure which metrics I can use to translate my database usage into the DTU benchmark metrics that the various service tiers of Azure SQL use.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/
    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: "pay-as-you-go" or consumption, and "subscription" or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan, where you commit to a contract and thus pay lower rates. In this post I'll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.
    Developing a good cost model is essential. As a developer or architect, you'll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations have been used to paying for servers, software licenses and other infrastructure as an up-front cost, and power, the people to run the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that's where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs.
    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you've written an application that provides the spot price of foo, and your customer pays for the use of that application. In that case, once you've estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It's a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make.
    In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created "in reverse" - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs? This may sound trivial, but that, along with the insurance for the building, the wiring, the team of system administrators, and every other part of the system is in fact a cost to the business.
    There are three primary methods that I've been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.
    Simple Estimation
    The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you'll use, and how much of each will be used. Microsoft provides two tools to do this. One is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this:
      Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost
    Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.
    Measure and Project
    A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
    Sample and Estimate
    Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis and, using standard statistical and other predictive math such as regression, project out into the future what the costs will be. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample (in this case I recommend the entire population, not a smaller sample) is key.
    References and Tools
    Articles:
      http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
      http://technet.microsoft.com/en-us/magazine/gg213848.aspx
      http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
      http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
      http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
    Other Tools:
      http://cloud-assessment.com/
      http://communities.quest.com/community/cloud_tools

    Read the article

  • Database Replay Capture Internals

    - by Liu Maclean
    Database Replay was introduced in 11g. Its workload capture component records the real production workload so that it can be replayed later against a test system. Some notes on workload capture:
      - Size the capture directory in advance: a busy OLTP system can produce on the order of 1GB of capture files in about 10 minutes.
      - It is recommended to restart the database and open it with STARTUP RESTRICT before capturing, so the capture begins from a clean, well-defined point; starting the capture automatically lifts the restricted mode.
      - A capture start point can also be tied to a specific SCN, and capture filters let you include or exclude parts of the workload.
      - Starting a capture requires SYSDBA or SYSOPER privileges and OS access to the capture directory.
      - Overhead: in TPC-C tests, capture added about 4.5%; each session also uses roughly a 64KB capture buffer.
    The capture files are not written by a dedicated background process. Each session's server process buffers its own LOGON, LOGOFF and SQL activity in its PGA - in the "WCR Capture PG"/"WCR Capture PGA" subheaps - and flushes the buffer to its own workload capture file when the buffer fills, waiting on the event 'WCR: capture file IO write' while it does so. In a RAC configuration each instance writes its own capture files, so the capture directory is normally placed on shared storage before start_capture is called.
    The WCR wait events visible in this release:

      SQL> select * from v$version;

      BANNER
      --------------------------------------------------------------------------------
      Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
      PL/SQL Release 11.2.0.3.0 - Production
      CORE 11.2.0.3.0 Production
      TNS for Linux: Version 11.2.0.3.0 - Production
      NLSRTL Version 11.2.0.3.0 - Production

      SQL> select name from v$event_name where name like '%WCR%';

      NAME
      ----------------------------------------------------------------
      WCR: replay client notify
      WCR: replay clock
      WCR: replay lock order
      WCR: replay paused
      WCR: RAC message context busy
      WCR: capture file IO write
      WCR: Sync context busy
      latch: WCR: sync
      latch: WCR: processes HT

    11g also introduces a set of WCR-related latches:

      SQL> select name,gets from v$latch where name like '%WCR%';

      NAME                                 GETS
      ------------------------------ ----------
      WCR: kecu cas mem                       3
      WCR: kecr File Count                   37
      WCR: MMON Create dir                    1
      WCR: ticker cache                       0
      WCR: sync                             495
      WCR: processes HT                       0
      WCR: MTS VC queue                       0

      7 rows selected.

    Let's trace how Database Replay capture actually behaves.

    1. Start a capture with dbms_workload_capture.start_capture:

      CREATE OR REPLACE DIRECTORY dbcapture AS '/home/oracle/dbcapture';
      execute dbms_workload_capture.start_capture('CAPTURE','DBCAPTURE',default_action=>'INCLUDE');

      SQL> select id,name,status,start_time,end_time,connects,user_calls,dir_path
           from dba_workload_captures
           where id = (select max(id) from dba_workload_captures);

      ID  NAME     STATUS       START_TIM  END_TIME  CONNECTS  USER_CALLS  DIR_PATH
      --- -------- ------------ ---------  --------  --------  ----------  ----------------------
      1   CAPTURE  IN PROGRESS  08-DEC-12            11        167         /home/oracle/dbcapture

    2. Look at the capture files on disk:

      [oracle@mlab2 dbcapture]$ ls -lR
      .:
      total 8
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 cap
      drwxr-xr-x  3 oracle oinstall 4096 Dec  8 07:24 capfiles
      -rw-r--r--  1 oracle oinstall    0 Dec  8 07:24 wcr_cap_00001.start

      ./cap:
      total 4
      -rw-r--r--  1 oracle oinstall 91 Dec  8 07:24 wcr_scapture.wmd

      ./capfiles:
      total 4
      drwxr-xr-x 12 oracle oinstall 4096 Dec  8 07:24 inst1

      ./capfiles/inst1:
      total 40
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 08:31 aa
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ab
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ac
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ad
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ae
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 af
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ag
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ah
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 ai
      drwxr-xr-x  2 oracle oinstall 4096 Dec  8 07:24 aj

      ./capfiles/inst1/aa:
      total 316
      -rw-r--r-- 1 oracle oinstall   1762 Dec  8 07:25 wcr_c6cdah0000001.rec
      -rw-r--r-- 1 oracle oinstall  16478 Dec  8 07:28 wcr_c6cf1h0000002.rec
      -rw-r--r-- 1 oracle oinstall   1772 Dec  8 07:29 wcr_c6cjdh0000004.rec
      -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:29 wcr_c6cnah0000005.rec
      -rw-r--r-- 1 oracle oinstall   1821 Dec  8 07:41 wcr_c6cpfh0000007.rec
      -rw-r--r-- 1 oracle oinstall   1815 Dec  8 07:33 wcr_c6cq6h000000a.rec
      -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:34 wcr_c6cxmh000000h.rec
      -rw-r--r-- 1 oracle oinstall   1427 Dec  8 07:41 wcr_c6cxvh000000j.rec
      -rw-r--r-- 1 oracle oinstall   1425 Dec  8 07:41 wcr_c6czph000000k.rec
      -rw-r--r-- 1 oracle oinstall   2398 Dec  8 07:49 wcr_c6dqfh000000q.rec
      -rw-r--r-- 1 oracle oinstall 259321 Dec  8 08:35 wcr_c6du7h000000r.rec
      -rw-r--r-- 1 oracle oinstall      0 Dec  8 07:55 wcr_c6f6yh000000t.rec
      -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:28 wcr_c6h3qh0000013.rec

      ./capfiles/inst1/ab through ./capfiles/inst1/aj:
      total 0

      [oracle@mlab2 dbcapture]$ cd ./capfiles/inst1/aa
      [oracle@mlab2 aa]$ ls -l |wc -l
      14

    At this point the aa subdirectory holds the per-process .rec files of the capture.

    3. A new LOGON spawns a new server process, and with it a new capture file:

      [oracle@mlab2 ~]$ sqlplus / as sysdba

      SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 8 08:37:40 2012
      Copyright (c) 1982, 2011, Oracle. All rights reserved.

      Connected to:
      Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
      With the Partitioning, Automatic Storage Management, OLAP, Data Mining
      and Real Application Testing options

    A new wcr file appears for this session:

      [oracle@mlab2 aa]$ ls -ltr
      total 316
      -rw-r--r-- 1 oracle oinstall   1762 Dec  8 07:25 wcr_c6cdah0000001.rec
      -rw-r--r-- 1 oracle oinstall  16478 Dec  8 07:28 wcr_c6cf1h0000002.rec
      -rw-r--r-- 1 oracle oinstall   1772 Dec  8 07:29 wcr_c6cjdh0000004.rec
      -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:29 wcr_c6cnah0000005.rec
      -rw-r--r-- 1 oracle oinstall   1815 Dec  8 07:33 wcr_c6cq6h000000a.rec
      -rw-r--r-- 1 oracle oinstall   1535 Dec  8 07:34 wcr_c6cxmh000000h.rec
      -rw-r--r-- 1 oracle oinstall   1425 Dec  8 07:41 wcr_c6czph000000k.rec
      -rw-r--r-- 1 oracle oinstall   1427 Dec  8 07:41 wcr_c6cxvh000000j.rec
      -rw-r--r-- 1 oracle oinstall   1821 Dec  8 07:41 wcr_c6cpfh0000007.rec
      -rw-r--r-- 1 oracle oinstall   2398 Dec  8 07:49 wcr_c6dqfh000000q.rec
      -rw-r--r-- 1 oracle oinstall      0 Dec  8 07:55 wcr_c6f6yh000000t.rec
      -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:28 wcr_c6h3qh0000013.rec
      -rw-r--r-- 1 oracle oinstall 259321 Dec  8 08:35 wcr_c6du7h000000r.rec
      -rw-r--r-- 1 oracle oinstall      0 Dec  8 08:37 wcr_c6hp4h0000018.rec

    The newly created file is wcr_c6hp4h0000018.rec. Find the OS process behind our session:

      SQL> select spid from v$process
           where addr = ( select paddr from v$session
                          where sid=(select distinct sid from v$mystat));

      SPID
      ------------------------
      14293

    The server process with OS pid 14293 holds the new capture file open among its file descriptors:

      [oracle@mlab2 ~]$ ls -l /proc/14293/fd
      total 0
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 0 -> /dev/null
      l-wx------ 1 oracle oinstall 64 Dec 8 08:39 1 -> /dev/null
      lrwx------ 1 oracle oinstall 64 Dec 8 08:39 10 -> /u01/app/oracle/product/11201/db_1/rdbms/audit/CRMV_ora_14293_1.aud
      l-wx------ 1 oracle oinstall 64 Dec 8 08:39 11 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
      l-wx------ 1 oracle oinstall 64 Dec 8 08:39 12 -> pipe:[34585895]
      l-wx------ 1 oracle oinstall 64 Dec 8 08:39 13 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trm
      l-wx------ 1 oracle oinstall 64 Dec 8 08:39 2 -> /dev/null
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 3 -> /dev/null
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 4 -> /dev/null
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 5 -> /u01/app/oracle/product/11201/db_1/rdbms/mesg/oraus.msb
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 6 -> /proc/14293/fd
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 7 -> /dev/zero
      lrwx------ 1 oracle oinstall 64 Dec 8 08:39 8 -> /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec
      lr-x------ 1 oracle oinstall 64 Dec 8 08:39 9 -> pipe:[34585894]

    lsof confirms it:

      [root@mlab2 ~]# lsof|grep wcr_c6hp4h0000018.rec
      oracle 14293 oracle 8u REG 8,1 0 17629644 /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec

    So each server process opens its own WCR .rec file at LOGON. Now run some SQL in the session:

      SQL> select 1 from dual;
               1
      ----------
               1
      SQL> /
               1
      ----------
               1

    Running strings against wcr_c6hp4h0000018.rec right away does not show these statements - they are still buffered in the server process's PGA. Only after the buffer is flushed does the file reveal the LOGON information (file header, NLS settings, connect string) followed by the SQL text:

      [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
      11.2.0.3.0
      *File header info. (Shadow process='14293')
      D0576B5D710A34F4E043B201A8C0ECFE
      SYS;
      NLS_LANGUAGE? AMERICAN>
      NLS_TERRITORY? AMERICA>
      NLS_CURRENCY?
      NLS_ISO_CURRENCY? AMERICA>
      NLS_NUMERIC_CHARACTERS?
      NLS_CALENDAR? GREGORIAN>
      NLS_DATE_FORMAT? DD-MON-RR>
      NLS_DATE_LANGUAGE? AMERICAN>
      NLS_CHARACTERSET? AL32UTF8>
      NLS_SORT? BINARY>
      NLS_TIME_FORMAT? HH.MI.SSXFF AM>
      NLS_TIMESTAMP_FORMAT? DD-MON-RR HH.MI.SSXFF AM>
      NLS_TIME_TZ_FORMAT? HH.MI.SSXFF AM TZR>
      NLS_TIMESTAMP_TZ_FORMAT? DD-MON-RR HH.MI.SSXFF AM TZR>
      NLS_DUAL_CURRENCY?
      NLS_SPECIAL_CHARS?
      NLS_NCHAR_CHARACTERSET? UTF8>
      NLS_COMP? BINARY>
      NLS_LENGTH_SEMANTICS? BYTE>
      NLS_NCHAR_CONV_EXCP? FALSE
      (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/11201/db_1/bin/oracle)(ARGV0=oracleCRMV)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(DETACH=NO))(CONNECT_DATA=(CID=(PROGRAM=sqlplus)(HOST=mlab2.oracle.com)(USER=oracle))))
      ,[email protected] (TNS V1-V3)U
      tselect spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat))
      ` _
      select 1 from dual
      select 1 from dual

    After creating a table and running a PL/SQL loop of 50,000 dynamic statements in the session, another pass of strings shows the new workload in the file (interleaved with binary noise); at server process LOGOFF, whatever remains in the buffer is flushed:

      [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
      9`9_^B
      create table vva(t1 int)
      `:_i
      :`:_iB
      `;_^
      ;`;_^B
      create table vva(t1 int)
      `_i
      >`>_iB
      FusC
      `?_^
      ?`?_^B
      FvWC
      _begin for i in 1..50000 loop execute immediate 'select 1 from dual where 2='||i; end loop; end;
      C`E_ B
      k^2C

    In other words, the server process records its parse and execution activity into the PGA-buffered WCR stream and writes it out in batches rather than immediately.

    4. The wait event. A process state dump of the same server process shows the 'WCR: capture file IO write' waits incurred while flushing:

      SQL> oradebug setmypid
      Statement processed.
      SQL> oradebug dump processstate 10;
      Statement processed.
      SQL> oradebug tracefile_name
      /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc

      3: waited for 'SQL*Net message to client'
         driver id=0x62657100, #bytes=0x1, =0x0
         wait_id=139 seq_num=140 snap_id=1
         wait times: snap=0.000007 sec, exc=0.000007 sec, total=0.000007 sec
         wait times: max=infinite
         wait counts: calls=0 os=0
         occurred after 0.934091 sec of elapsed time
      4: waited for 'latch: shared pool'
         address=0x60106b20, number=0x133, tries=0x0
         wait_id=138 seq_num=139 snap_id=1
         wait times: snap=0.000066 sec, exc=0.000066 sec, total=0.000066 sec
         wait times: max=infinite
         wait counts: calls=0 os=0
         occurred after 1.180690 sec of elapsed time
      5: waited for 'WCR: capture file IO write'
         =0x0, =0x0, =0x0
         wait_id=137 seq_num=138 snap_id=1
         wait times: snap=0.000189 sec, exc=0.000189 sec, total=0.000189 sec
         wait times: max=infinite
         wait counts: calls=0 os=0
         occurred after 3.122783 sec of elapsed time
      6: waited for 'WCR: capture file IO write'
         =0x0, =0x0, =0x0
         wait_id=136 seq_num=137 snap_id=1
         wait times: snap=0.000191 sec, exc=0.000191 sec, total=0.000191 sec
         wait times: max=infinite
         wait counts: calls=0 os=0
         occurred after 3.053132 sec of elapsed time
      7: waited for 'WCR: capture file IO write'

    5. PGA memory usage. A heap dump shows the capture buffers live in the 'WCR Capture PG'/'WCR Capture PGA' subheaps:

      SQL> oradebug dump heapdump 536870917;
      Statement processed.

      $ grep WCR /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
      Chunk 7fb1b606bfc0 sz= 65600 freeable "WCR Capture PG " ds=0x7fb1b6115f90
      Chunk 7fb1b6111e18 sz=  4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
      Chunk 7fb1b6112e98 sz=  4184 freeable "WCR Capture PG " ds=0x7fb1b6115f90
      Chunk 7fb1b6113ef0 sz=  4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
      Chunk 7fb1b6114f70 sz=  4104 recreate "WCR Capture PG " latch=(nil)
      Chunk 7fb1b6115f78 sz=   160 freeable "WCR Capture PGA"
      Chunk 7fb1b6116018 sz=  3248 freeable "WCR Capture PGA"
      Subheap ds=0x7fb1b6115f90 heap name= WCR Capture PG size= 82336
      HEAP DUMP heap name="WCR Capture PG" desc=0x7fb1b6115f90
      FIVE LARGEST SUB HEAPS for heap name="WCR Capture PG" desc=0x7fb1b6115f9

    The 65600-byte chunk (64KB plus header) matches the per-session capture buffer size.

    6. Hidden parameters. Two underscore parameters govern WCR capture:

      SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
        2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
        3  WHERE x.inst_id = USERENV ('Instance')
        4  AND y.inst_id = USERENV ('Instance')
        5  AND x.indx = y.indx
        6  AND x.ksppinm in ('_capture_buffer_size','_wcr_control');

      NAME                 VALUE                DESCRIB
      -------------------- -------------------- ------------------------------------------------------------
      _wcr_control         0                    Oracle internal test WCR parameter used ONLY for testing!
      _capture_buffer_size 65536                To set the size of the PGA I/O recording buffers

    _capture_buffer_size sets the size of the per-session PGA WCR buffer (64KB by default); _wcr_control is an internal parameter reserved for Oracle testing.

    To summarize:
      1. During a workload capture, the recording is done by each session's own server (shadow) process; there is no dedicated capture background process.
      2. Each server process writes its own WCR capture file.
      3. A server process records its LOGON, LOGOFF and SQL activity into WCR buffers.
      4. The writes are not done in immediate mode: records are first buffered in the PGA ('WCR Capture' subheaps) and written out when the buffer fills (or on a timeout).
      5. Because capture piggybacks on the parse and execution calls and buffers its output, the overhead is modest - about 4.5% in TPC-C tests.
      6. The hidden parameter _capture_buffer_size sets the PGA WCR buffer size, 64KB by default.
      7. The WCR capture files are written in a binary format; their contents can be partially inspected with strings.
      8. The 'WCR: capture file IO write' wait event covers the time a server process spends writing its WCR capture file.

    Read the article

  • SQL Database Mirroring and your web application

    - by Khou
    You have two servers when you perform SQL Server database mirroring: you have 1 primary database and 1 mirror database. Do you need to make any changes to the web application to tell it that you're using database mirroring? If not, how does your web application know which database to use when the primary database fails?
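
    For ADO.NET clients the usual answer is a connection-string hint - e.g. "Data Source=ServerA;Failover Partner=ServerB;Initial Catalog=AppDb;Integrated Security=True" (the server and database names here are placeholders) - so the provider knows where to retry when the principal is gone. On the server side, you can check which role each mirrored database currently holds with a query like this sketch:

      SELECT DB_NAME(database_id)       AS database_name,
             mirroring_role_desc,       -- PRINCIPAL or MIRROR
             mirroring_state_desc,      -- e.g. SYNCHRONIZED
             mirroring_partner_instance
      FROM sys.database_mirroring
      WHERE mirroring_guid IS NOT NULL;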

    Read the article

  • Should we use Visual Studio 2010 for all SQL Server Database Development?

    - by Luke
    Our company currently has seven dedicated SQL Server 2008 servers, each running an average of 10 databases. All databases have many stored procedures and UDFs that commonly reference other databases, both on the same server and across linked servers. We currently use SSMS for all database-related administration and development, but we have recently purchased Visual Studio 2010, primarily for ongoing C# WinForms and ASP.NET development. I have used VS2010 to perform schema comparisons when rolling out changes from a development server into production, and I'm finding it great for this task. We would like to consider using VS2010 for all database development going forward, but as far as I understand, we would have to set up ALL databases as projects because of the dependencies on linked servers etc. My question is, do you have any experience using VS2010 for database development in a similar environment? Is it easy to use in tandem with SSMS, or is it a one-way street once VS2010 projects have been set up for all databases? Can you make any recommendations/impart any experience with a similar scenario? Thanks, Luke

    Read the article

  • How to back up a database (MS SQL Server 2008) in C# without using SMO (having problems)?

    - by SzamDev
    Hi, I have this code and it is not working, but I don't know why:

      try
      {
          saveFileDialog1.Filter = "SQL Server database backup files|*.bak";
          saveFileDialog1.Title = "Database Backup";
          if (saveFileDialog1.ShowDialog() == DialogResult.OK)
          {
              SqlCommand bu2 = new SqlCommand();
              SqlConnection s = new SqlConnection("Data Source=M1-PC;Initial Catalog=master;Integrated Security=True;Pooling=False");
              bu2.CommandText = String.Format("BACKUP DATABASE LA TO DISK='{0}'", saveFileDialog1.FileName);
              bu2.Connection = s; // missing in the original snippet: without attaching the
                                  // command to the connection, ExecuteNonQuery throws an
                                  // InvalidOperationException
              s.Open();
              bu2.ExecuteNonQuery();
              s.Close();
              MessageBox.Show("ok");
          }
      }
      catch (Exception ex)
      {
          MessageBox.Show(ex.ToString());
      }

    and I get this error: What is the problem?

    Read the article
