Search Results

Search found 10966 results on 439 pages for 'kevin db'.

  • How do I return two query results from this active record model?

    - by New Kid
    I am having trouble querying two tables and passing both results to the controller: IN A MODEL: function get_all_entries() { $query = $this->db->get('entry'); return $query->result(); $this->db->select('entry_id , count(comment_id) as total_comment'); $this->db->group_by('entry_id'); $comment = $this->db->get('comment'); return $comment->result(); } IN A CONTROLLER: $data['query'] = $this->blog_model->get_all_entries(); $this->load->view('blog/index',$data); How do I return both the $query and $comment results to the controller? The first return statement exits the function before the comment query ever runs, so I think I am doing it wrong.

    Read the article

  • iOS AS3 AIR local storage

    - by Kere Puki
    I am developing an app which requires a SQL DB on the device. I am using File.applicationStorageDirectory and folder.resolvePath to create a new DB. When debugging the app, everything looks like it executes correctly and I am able to successfully create a new table. I haven't gone far with inserting and reading records yet, but I just wanted to ask: when I re-run the app, does the existing DB file get replaced with a new, empty one? If so, do I need to check whether the file exists first? And how can I look at the DB on the device (iOS at this stage)? Thanks

    Read the article

  • Can a SQLite file be copied successfully to the data folder of an unrooted Android device?

    - by student
    I know that in order to access the data folder on the device, it needs to be rooted. However, if I just want to copy the database from my assets folder to the data folder on my device, will the copying process works on an unrooted phone? The following is my Database Helper class. From logcat, I can verify that the methods call to copyDataBase(), createDataBase() and openDataBase() are returned successfully. However, I got this error message android.database.sqlite.SQLiteException: no such table: TABLE_NAME: when my application is executing rawQuery. I'm suspecting the database file is not copied successfully (cannot be too sure as I do not have access to data folder), yet the method call to copyDatabase() are not throwing any exception. What could it be? Thanks. ps: My device is still unrooted, I hope it is not the main cause of the error. public DatabaseHelper(Context context) { super(context, DB_NAME, null, 1); this.myContext = context; } public void createDataBase() throws IOException{ boolean dbExist = checkDataBase(); String s = new Boolean(dbExist).toString(); Log.d("dbExist", s ); if(dbExist){ //do nothing - database already exist Log.d("createdatabase","DB exists so do nothing"); }else{ this.getReadableDatabase(); try { copyDataBase(); Log.d("copydatabase","Successful return frm method call!"); } catch (IOException e) { throw new Error("Error copying database"); } } } private boolean checkDataBase(){ File dbFile = new File(DB_PATH + DB_NAME); return dbFile.exists(); } private void copyDataBase() throws IOException{ //Open your local db as the input stream InputStream myInput = null; myInput = myContext.getAssets().open(DB_NAME); Log.d("copydatabase","InputStream successful!"); // Path to the just created empty db String outFileName = DB_PATH + DB_NAME; //Open the empty db as the output stream OutputStream myOutput = new FileOutputStream(outFileName); //transfer bytes from the inputfile to the outputfile byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer))>0){ myOutput.write(buffer, 0, length); } //Close the streams myOutput.flush(); myOutput.close(); myInput.close(); } public void openDataBase() throws SQLException{ //Open the database String myPath = DB_PATH + DB_NAME; myDataBase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY); } /* @Override public synchronized void close() { if(myDataBase != null) myDataBase.close(); super.close(); }*/ public void close() { // NOTE: openHelper must now be a member of CallDataHelper; // you currently have it as a local in your constructor if (myDataBase != null) { myDataBase.close(); } } @Override public void onCreate(SQLiteDatabase db) { } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { } }
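    One way to settle the "was it copied?" question is to get a copy of the database file off the device or emulator and inspect it directly. A minimal sketch in Python, assuming the file has already been pulled to the local machine (the adb path in the comment is illustrative and depends on the app's package name):

```python
import sqlite3

# Hypothetical local copy of the device database, e.g. pulled from an emulator:
#   adb pull /data/data/<your.package>/databases/<DB_NAME> device_copy.sqlite
DB_FILE = "device_copy.sqlite"

conn = sqlite3.connect(DB_FILE)
try:
    # sqlite_master lists every table in the file; if only android_metadata
    # (or nothing) comes back, the copy from assets never happened.
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    ).fetchall()
    print("Tables found:", [name for (name,) in tables])
finally:
    conn.close()
```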

    Read the article

  • Is it possible to set path of database for delayed job in rails?

    - by WitchOfCloud
    I am developing a mailing system with the delayed_job gem. In the development environment it works well, but after deploying the application to the server it does nothing. This is my database.yml: development: adapter: sqlite3 database: db/development.sqlite3 pool: 5 timeout: 5000 test: adapter: sqlite3 database: db/test.sqlite3 pool: 5 timeout: 5000 production: adapter: sqlite3 database: /var/www/service/shared/db/production.sqlite3 pool: 5 timeout: 5000 I checked the queue (in /var/www/...) and jobs are queued correctly. I also started delayed_job (rake jobs:work). So I think the problem is that delayed_job is reading db/development.sqlite3 instead of the production database. How can I solve this problem?
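    One quick way to narrow this down is to open the production SQLite file directly and see whether jobs are actually queued there while the worker sits idle. A rough check in Python, assuming the default delayed_jobs table name and the path from the database.yml above:

```python
import sqlite3

# Absolute path used by the production environment in database.yml
DB_FILE = "/var/www/service/shared/db/production.sqlite3"

conn = sqlite3.connect(DB_FILE)
try:
    # delayed_job keeps pending work in a delayed_jobs table by default.
    # A growing count here while "rake jobs:work" stays idle suggests the
    # worker was started against a different database (e.g. development).
    count = conn.execute("SELECT COUNT(*) FROM delayed_jobs").fetchone()[0]
    print("Jobs queued in the production DB:", count)
finally:
    conn.close()
```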

    Read the article

  • Path Inclusion/Global variable not working?

    - by Dan LaManna
    Simply put, my config file includes my database class, and the config file has in it: global $db; $db = new database(DB_HOST, DB_NAME, DB_USER, DB_PASS); That file is root/config.php Moving on to root/functions/func.newpage.php doesn't have any includes/requires, and uses $db-classfunction since the file I'm working with: root/newpage.php - requires the config file, as well as func.newpage.php. However I still come up with: Undefined variable db. Anything you guys are seeing I'm not? Thanks! Let me know if more details are needed.

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image. What are my Options? There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include: Create new virtual disk images for each new VM. Clone the existing disk images and point the new VM at the cloned images. Point the new VM at the existing snapshots. #1 is too much like hard work, install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short! #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need. #3 is the most space efficient way of doing things.  It does mean however that I can only run the new “cloned” images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all threee images together.  So this is the approach we will follow. Snapshot, What Snapshot? As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked. Network or NotWork? Because we want the two new machines to communicate with each other when hosted in different physical machines we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.  
Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that. Creating the New VMs Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine go into the network setting section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux. Reconfiguring Linux Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings. Changing the Hostname I renamed both hosts by running the hostname command as root: hostname vboxfmw.oracle.com I also edited the /etc/sysconfig/network file and set the correct hostname in there. HOSTNAME=vboxfmw.oracle.com Changing the Network Settings I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went in to the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment. Updating the Hosts File Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to refer to the new machine name, added a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address), and added a new line for the fixed IP address of the other virtual machine: 10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager 127.0.0.1       localhost.localdomain   localhost ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6 To make sure everything takes effect I restarted the server.
Reconfiguring the Database on the DB Machine Because we changed the hostname the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname: (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521)) After changing the listener.ora I was able to start the listener using: lsnrctl start I also had to reconfigure the EM database control.  I first deconfigured it using the command: emca -deconfig dbcontrol db -repos drop This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command: emca -config dbcontrol db -repos create This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot. Disabling the Database on the FMW Machine I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command: chkconfig --del dbora Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot. Creating a New Domain The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines. Gotchas in Snapshotting VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration. The Sky’s the Limit We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember which machine holds state and what the consequences of taking a snapshot are.

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed to be a good idea to dedicate some time to cluster basic concepts and theory. This post misses a lot of details that would explode the article size; should you have questions, do not hesitate to ask them in the comments.  The goal here is to get some concepts straight. I can't promise to give you overall complete definitions of cluster, cluster agent, quorum, voting, fencing, split brain condition, so the following is more of an explanation. Here we go. -------- Cluster, HA, failover, switchover, scalability -------- An attempted definition of a cluster: A cluster is a set (2+) of server nodes dedicated to keeping application services alive, communicating with each other through the cluster software/framework, testing and probing the health status of server nodes/services, and keeping the application services running on them available by means of quorum-based decisions and switchover/failover techniques. That is, should a node that runs a service unexpectedly lose functionality/connection, the other ones would take over and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.  At this point we have to add that this defines a HA-cluster, a High-Availability cluster, where the clusternodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate clusternodes serving service requests simultaneously. An example for this version could be a webserver that forwards connection requests to many backend servers in a round-robin way. Or a database running in an active-active RAC setup.  -------- Cluster architecture, interconnect, topologies -------- Now, what is a cluster made of? Servers, right. These servers (the clusternodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the clusternodes. These connections are called interconnects. How many clusternodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two clusternodes:  There are several more topologies; clicking the image above will take you to the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these clusternodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to each other in the datacenter. Do you need to avoid a datacenter outage? In that case of course you should place them at least in different fire zones. Or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the clusternodes being several kilometers away from each other. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal.  What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.  -------- Cluster resource types, agents, resources, resource groups -------- So how does the cluster manage my applications?
The cluster needs to start, stop and probe your applications. If you application runs, the cluster needs to check regularly if the application state is healthy, does it respond over the network, does it have all the processes running, etc. This is called probing. If the cluster deems the application is in a faulty state, then it can try to restart it locally or decide to switch (stop on node A, start on node B) the service. Starting, stopping and probing are the three actions that a cluster agent does. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, or the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle Weblogic, Zones, LDoms, NFS, DNS, etc.We also need to clarify the difference between a cluster resource and the cluster resource group.A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resourcegroups, and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following:  On node A Shut down the DB Unconfigure the LogicalHost IP the DB Listener listens on unmount the filesystem   Then, on node B: mount the FS configure the IP  startup the DB -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia -------- How do the clusternodes agree upon their action? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing.Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem, clusternodes think very much all alike. Still, every action needs to be governed upon in a productive system, and has to be agreed upon. Agreeing is easy as long as the clusternodes all behave and talk to eachother over the interconnect. But if the interconnect is gone/down, this all gets tricky and confusing. Clusternodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, clusternodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster they both have only 50% of the votes, that is, they themselves alone are not allowed to run a cluster.  This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons.". They need additional votes to run the cluster. For this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone, (this is what we call a split brain condition) both nodes start to race and try to reserve the quorum device to themselves. 
They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The one that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI reserves it) wins the right to build/run a cluster; the other one - realizing it was late - panics/reboots to ensure the cluster config stays consistent.  Losing the interconnect doesn't only endanger the availability of services, it also endangers cluster configuration consistency. Just imagine node A being down while the cluster configuration changes. Now node B goes down, and node A comes up. It isn't up to date about the cluster configuration's changes so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older cluster configuration repository state, as if it had forgotten about the changes.  Also, to ensure application data consistency, the clusternode that wins the race makes sure that a server that isn't part of, or can't currently join, the cluster cannot access the devices. This procedure is called fencing. This usually happens to storage LUNs via SCSI reservation.  Now, another important question: where do I place the quorum disk?  Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one clusternode, which isn't a problem, but you also lose your quorum, and the south clusternode can't keep the cluster running, lacking the votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at, or to host a third clusternode. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster. Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, upgrade procedures - should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: Cluster Installation.  Oh, as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
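    As a footnote to the voting discussion, the majority rule itself is easy to pin down. A toy illustration of the arithmetic (plain Python, not Solaris Cluster code):

```python
def has_quorum(votes_present: int, votes_total: int) -> bool:
    """A partition may run the cluster only with a strict majority of all votes."""
    return votes_present >= votes_total // 2 + 1

# Two-node cluster, no quorum device: 1 vote per node, 2 votes total.
# A node isolated by an interconnect failure holds 1 of 2 votes.
print(has_quorum(1, 2))  # False - neither node may continue alone

# Same cluster plus a quorum device worth 1 extra vote, 3 votes total.
# The node that wins the race for the quorum device holds 2 of 3 votes.
print(has_quorum(2, 3))  # True  - it keeps the cluster; the loser panics
print(has_quorum(1, 3))  # False - the losing node lacks a majority
```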

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell Blog This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server. Overview The goal of this blog is to demonstrate how we can automate through PowerShell connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We would configure TCP port that we would open (and close) though Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This will demonstrate how to take the advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.  Scenario 1: VMs connected through the same Cloud Service 2 Virtual machines configured in the same cloud service. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on to be able to run PS and other commands directly into them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances for demo purposes only; but not actually required. Step 1 – Provision VMs and Configure Ports   Provision VM1; named DemoVM1 as follows (see examples screenshots below if using the portal):   Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see examples screenshots below if using the portal): After provisioning of the 2 VMs above, here is the default port configurations for example: Step2 – Verify / Confirm the TCP port used by the database Engine By the default, the port will be configured to be 1433 – this can be changed to a different port number if desired.   1. RDP to each of the VMs created below – this will also ensure the VMs complete SysPrep(ing) and complete configuration 2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP - > IP Addresses   3. Confirm the port number used by SQL Server Engine; in this case 1433 4. Update from Windows Authentication to Mixed mode   5.       Restart SQL Server service for the change to take effect 6.       Repeat steps 3., 4., and 5. For the second VM: DemoVM2 Step 3 – Remote Powershell to DemoVM1 Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck) Your will then be prompted to enter the password. Step 4 – Open 1433 port in the Windows firewall netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow Output: netsh advfirewall firewall show rule name=DemoVM1Port Rule Name:                            DemoVM1Port ---------------------------------------------------------------------- Enabled:                              Yes Direction:                            In Profiles:                             Domain,Private,Public Grouping:                             LocalIP:                              Any RemoteIP:                             Any Protocol:                             TCP LocalPort:                            1433 RemotePort:                           Any Edge traversal:                       No Action:                               Allow Ok. 
Step 5 – Now connect from DemoVM2 to DB instance in DemoVM1 Step 6 – Close port 1433 in the Windows firewall netsh advfirewall firewall delete rule name=DemoVM1Port Output: Deleted 1 rule(s). Ok. netsh advfirewall firewall show rule name=DemoVM1Port No rules match the specified criteria. Step 7 – Try to connect from DemoVM2 to DB Instance in DemoVM1 Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1. Scenario 2: VMs provisioned in different Cloud Services 2 Virtual machines configured in different cloud services. Both VMs running different SQL Server instances on them. Both VMs configured with remote PowerShell turned on to be able to run PS and other commands directly into them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premises machine(s). Note: RDP (Remote Desktop Protocol) is kept configured in both VMs by default to be able to remote connect to them and check the connections to SQL instances for demo purposes only; but not actually needed. Step 1 – Provision new VM3 Provision VM3; named DemoVM3 as follows (see example screenshots below if using the portal): After provisioning is complete, here are the default port configurations: Step 2 – Add a public port to VM1 to connect to from VM3’s DB instance Since VM3 and VM1 are not connected in the same cloud service, we will need to specify the full DNS address, which includes the public port, while connecting between the machines. We shall add a public port 57000 in this case that is linked to private port 1433, which will be used later to connect to the DB instance. Step 3 – Remote PowerShell to DemoVM1 Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck) You will then be prompted to enter the password. Step 4 – Open port 1433 in the Windows firewall netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow Output: Ok. netsh advfirewall firewall show rule name=DemoVM1Port Rule Name: DemoVM1Port ---------------------------------------------------------------------- Enabled: Yes Direction: In Profiles: Domain,Private,Public Grouping: LocalIP: Any RemoteIP: Any Protocol: TCP LocalPort: 1433 RemotePort: Any Edge traversal: No Action: Allow Ok. Step 5 – Now connect from DemoVM3 to DB instance in DemoVM1 RDP into VM3, launch SSMS and connect to VM1’s DB instance as follows. You must specify the full server name using the DNS address and public port number configured above. Step 6 – Close port 1433 in the Windows firewall netsh advfirewall firewall delete rule name=DemoVM1Port Output: Deleted 1 rule(s). Ok. netsh advfirewall firewall show rule name=DemoVM1Port No rules match the specified criteria. Step 7 – Try to connect from DemoVM3 to DB Instance in DemoVM1 Because port 1433 has been closed (in step 6) in the Windows Firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
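    Before reaching for SSMS, a bare TCP connect test from the client VM confirms whether the firewall change itself took effect. A small sketch; the host name below is a placeholder for the cloud service address, and the port is 1433 inside the same cloud service or the public port otherwise:

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address; substitute your cloud service DNS name and port.
print(port_open("yourservice.cloudapp.net", 1433))
```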
Conclusion Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL Server management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections. References SQL Server in Windows Azure Virtual Machines   Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server. $ cap deploy:setup "no seed data" triggering start callbacks for `deploy:setup' * 13:42:18 == Currently executing `multistage:ensure' *** Defaulting to `development' * 13:42:18 == Currently executing `development' * 13:42:18 == Currently executing `deploy:setup' triggering before callbacks for `deploy:setup' * 13:42:18 == Currently executing `db:configure_mongoid' * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config" servers: ["dev1.noob.com", "176.9.24.217"] Password: Cap script: # gem install capistrano capistrano-ext capistrano_colors begin; require 'capistrano_colors'; rescue LoadError; end require "bundler/capistrano" # RVM bootstrap # $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require 'rvm/capistrano' set :rvm_ruby_string, 'ruby-1.9.2-p290' set :rvm_type, :user # or :user # Application setup default_run_options[:pty] = true # allow pseudo-terminals ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository) ssh_options[:port] = 22 set :ip, "dev1.noob.com" set :application, "flyingbird" set :repository, "repo-path" set :scm, :git set :branch, fetch(:branch, "master") set :deploy_via, :remote_cache set :rails_env, "production" set :use_sudo, false set :scm_username, "user" set :user, "user1" set(:database_username) { application } set(:production_database) { application + "_production" } set(:staging_database) { application + "_staging" } set(:development_database) { application + "_development" } role :web, ip # Your HTTP server, Apache/etc role :app, ip # This may be the same as your `Web` server role :db, ip, :primary => true # This is where Rails migrations will run # Use multi-staging require "capistrano/ext/multistage" set :stages, ["development", "staging", "production"] set :default_stage, rails_env before "deploy:setup", "db:configure_mongoid" # Uncomment if you use any of these databases after "deploy:update_code", "db:symlink_mongoid" after "deploy:update_code", "uploads:configure_shared" after "uploads:configure_shared", "uploads:symlink" after 'deploy:update_code', 'bundler:symlink_bundled_gems' after 'deploy:update_code', 'bundler:install' after "deploy:update_code", "rvm:trust_rvmrc" # Use this to update crontab if you use 'whenever' gem # after "deploy:symlink", "deploy:update_crontab" if ARGV.include?("seed_data") after "deploy", "db:seed" else p "no seed data" end #Custom tasks to handle resque and redis restart before "deploy", "deploy:stop_workers" after "deploy", "deploy:restart_redis" after "deploy", "deploy:start_workers" after "deploy", "deploy:cleanup" 'Create symlink for public uploads' namespace :uploads do task :symlink do run <<-CMD rm -rf #{release_path}/public/uploads && mkdir -p #{release_path}/public && ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads CMD end task :configure_shared do run "mkdir -p #{shared_path}/public" run "mkdir -p #{shared_path}/public/uploads" end end namespace :rvm do desc 'Trust rvmrc file' task :trust_rvmrc do run "rvm rvmrc trust #{current_release}" end end namespace :db do desc "Create mongoid.yml in shared path" task :configure_mongoid do db_config = <<-EOF 
defaults: &defaults host: localhost production: <<: *defaults database: #{production_database} staging: <<: *defaults database: #{staging_database} EOF run "mkdir -p #{shared_path}/config" put db_config, "#{shared_path}/config/mongoid.yml" end desc "Make symlink for mongoid.yml" task :symlink_mongoid do run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml" end desc "Fill the database with seed data" task :seed do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed" end end namespace :bundler do desc "Symlink bundled gems on each release" task :symlink_bundled_gems, :roles => :app do run "mkdir -p #{shared_path}/bundled_gems" run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle" end desc "Install bundled gems " task :install, :roles => :app do run "cd #{release_path} && bundle install --deployment" end end namespace :deploy do task :start, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Restart the app" task :restart, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Start the workers" task :stop_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers" end desc "Restart Redis server" task :restart_redis do "/etc/init.d/redis-server restart" end desc "Start the workers" task :start_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers" end end
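    Capistrano simply drives SSH underneath, so it can help to reproduce the key-based login outside of cap with the exact user and host from the script above. A small diagnostic sketch using the third-party paramiko library; the host and user are the values from the question, and the command run is arbitrary:

```python
import paramiko  # third-party: pip install paramiko

HOST = "dev1.noob.com"   # value of :ip in the Capfile above
USER = "user1"           # value of :user in the Capfile above

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    # allow_agent / look_for_keys mirror what Capistrano relies on:
    # the local ~/.ssh keys and/or a running ssh-agent.
    client.connect(HOST, port=22, username=USER,
                   allow_agent=True, look_for_keys=True)
    _, stdout, _ = client.exec_command("whoami")
    print("Key-based login OK as:", stdout.read().decode().strip())
except paramiko.AuthenticationException:
    # If this fires, the public key is not in ~user1/.ssh/authorized_keys on
    # the server - the same reason cap falls back to asking for a password.
    print("Key authentication failed for", USER)
finally:
    client.close()
```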

    Read the article

  • Rails app deployment challenge, not finding database table in production.log

    - by Stefan M
    I'm trying to setup PasswordPusher as my first ruby app ever. Building and running the webrick server as instructed in README works fine. It was only when I tried to add Apache ProxyPass and ProxyPassReverse that the page load slowed down to several minutes. So I gave mod_passenger a whirl but now it's unable to find the password table. Here's what I get in log/production.log. Started GET "/" for 10.10.2.13 at Sun Jun 10 08:07:19 +0200 2012 Processing by PasswordsController#new as HTML Completed 500 Internal Server Error in 1ms ActiveRecord::StatementInvalid (Could not find table 'passwords'): app/controllers/passwords_controller.rb:77:in `new' app/controllers/passwords_controller.rb:77:in `new' While in log/private.log I get a lot more output so here's just a snippet but it looks to me like it's working with the database. Edit: This was actually old log output, maybe from db:create. Migrating to AddUserToPassword (20120220172426) (0.3ms) ALTER TABLE "passwords" ADD "user_id" integer (0.0ms) PRAGMA index_list("passwords") (0.2ms) CREATE INDEX "index_passwords_on_user_id" ON "passwords" ("user_id") (0.7ms) INSERT INTO "schema_migrations" ("version") VALUES ('20120220172426') (0.1ms) select sqlite_version(*) (0.1ms) SELECT "schema_migrations"."version" FROM "schema_migrations" (0.0ms) PRAGMA index_list("passwords") (0.0ms) PRAGMA index_info('index_passwords_on_user_id') (4.6ms) PRAGMA index_list("rails_admin_histories") (0.0ms) PRAGMA index_info('index_rails_admin_histories') (0.0ms) PRAGMA index_list("users") (4.8ms) PRAGMA index_info('index_users_on_unlock_token') (0.0ms) PRAGMA index_info('index_users_on_reset_password_token') (0.0ms) PRAGMA index_info('index_users_on_email') (0.0ms) PRAGMA index_list("views") In my vhost I have it set to use RailsEnv private. <VirtualHost *:80> # ProxyPreserveHost on # # ProxyPass / http://10.220.100.209:180/ # ProxyPassReverse / http://10.220.100.209:180/ DocumentRoot /var/www/pwpusher/public <Directory /var/www/pwpusher/public> allow from all Options -MultiViews </Directory> RailsEnv private ServerName pwpush.intranet ErrorLog /var/log/apache2/error.log LogLevel debug CustomLog /var/log/apache2/access.log combined </VirtualHost> My passenger.conf in mods-enabled is default for Debian. <IfModule mod_passenger.c> PassengerRoot /usr PassengerRuby /usr/bin/ruby </IfModule> In the apache error.log I get something more cryptic to me. [Sun Jun 10 06:25:07 2012] [notice] Apache/2.2.16 (Debian) Phusion_Passenger/2.2.11 PHP/5.3.3-7+squeeze9 with Suhosin-Patch mod_ssl/2.2.16 OpenSSL/0.9.8o configured -- resuming normal operations /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION cache: [GET /] miss [Sun Jun 10 08:07:19 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL / /var/www/pwpusher/vendor/bundle/ruby/1.8/bundler/gems/modernizr-rails-09e9e6a92d67/lib/modernizr/rails/version.rb:3: warning: already initialized constant VERSION cache: [GET /] miss [Sun Jun 10 10:17:16 2012] [debug] mod_deflate.c(615): [client 10.10.2.13] Zlib: Compressed 728 to 423 : URL / Maybe that's routine stuff. I can see the rake command create files in the relative app root db/. I have private.sqlite3, production.sqlite3 among others. And here's my config/database.yml. 
base: &base adapter: sqlite3 timeout: 5000 development: database: db/development.sqlite3 <<: *base test: database: db/test.sqlite3 <<: *base private: database: db/private.sqlite3 <<: *base production: database: db/production.sqlite3 <<: *base I've tried setting absolute paths in it but that did not help.
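    Since the app is started with RailsEnv private, a direct look at db/private.sqlite3 shows whether the migrations ever ran for that environment. A quick sketch, run from the app root (paths match the database.yml above):

```python
import os
import sqlite3

def table_exists(db_file: str, table: str) -> bool:
    """Return True if db_file exists and contains the given table."""
    if not os.path.exists(db_file):
        return False
    conn = sqlite3.connect(db_file)
    try:
        row = conn.execute(
            "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?",
            (table,),
        ).fetchone()
        return row is not None
    finally:
        conn.close()

# The production log complains about 'passwords'; compare the environment the
# migrations were run against with the one Passenger actually uses (private).
for env in ("development", "production", "private"):
    print(env, "->", table_exists(f"db/{env}.sqlite3", "passwords"))
```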

    Read the article

  • BIND9 / DNS Zone / Dedicated Server / Unique Reverse DNS

    - by user2832131
    I have a dedicated server in a datacenter with no DNS zone set up. The datacenter panel has only one text field, in which you can set a single reverse DNS entry. According to the datacenter instructions here... [instructions]: http://www.wiki.hetzner.de/index.php/DNS-Reverse-DNS/en#How_can_I_assign_several_names_to_my_IP_address.2C_if_different_domains_are_hosted_on_my_server.3F How_can_I_assign_several_names_to_my_IP_address ...I need to install BIND9 in order to configure other records like CNAME and MX. OK, I've installed BIND9 and created a master zone. Following this example, I put this in the zone file: [example]: http://wiki.hetzner.de/index.php/DNS_Zonendatei/en example $ttl 86400 @ IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. ( 1383411730 14400 1800 604800 86400 ) @ IN NS ns1.first-ns.de. @ IN NS robotns2.second-ns.de. @ IN NS robotns3.second-ns.com. localhost IN A 127.0.0.1 @ IN A 144.86.786.651 www IN A 144.86.786.651 loopback IN CNAME localhost But when I point my domain to ns1.first-ns.de, the domain registrar says "time out". Am I missing something? I created a master zone. Should it be a slave zone? named.conf: include "/etc/bind/named.conf.options"; include "/etc/bind/named.conf.local"; include "/etc/bind/named.conf.default-zones"; named.conf.options: options { directory "/var/cache/bind"; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; named.conf.local: zone "mydomain.com" { type master; file "/var/lib/bind/mydomain.com.hosts"; allow-update {any;}; allow-transfer {any;}; allow-query {any;}; }; named.conf.default-zones: zone "." { type hint; file "/etc/bind/db.root"; }; zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; The problem is that I'm moving my site and can't update to the new NS server because of a 'timeout' message when entering the new datacenter NS. I'm filling in: MASTER: ns1.first-ns.de SLAVE1: robotns2.second-ns.de SLAVE2: robotns3.second-ns.com
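    Before touching the registrar again, it is worth confirming that BIND answers authoritatively on the server's public IP from the outside. A small Python wrapper around dig (assumes dig is installed; the IP below is a placeholder for the server's public address):

```python
import subprocess

SERVER_IP = "203.0.113.10"  # placeholder: the dedicated server's public IP
ZONE = "mydomain.com"

# Query the server directly for the zone's SOA and NS records. A timeout here
# usually means port 53 is blocked or named is not listening on the public
# interface, which would also explain the registrar's "time out" message.
for rtype in ("SOA", "NS"):
    result = subprocess.run(
        ["dig", f"@{SERVER_IP}", ZONE, rtype, "+short", "+time=3"],
        capture_output=True, text=True,
    )
    print(rtype, "->", result.stdout.strip() or "(no answer)")
```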

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore Sorry for my delayed post on day 3 because I had to travel from Blore to Chennai So I couldnt write for the past two days. On day 3 as usual we had lot of simultaneous tracks on various sessions. This day I choose the Your Data, Our Platform Track. It had sessions on the following 5 topics :   Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M SQL Server Utility - Its about more than 1 SQL Server - by Vinod Kumar Jagannathan Data Recovery / Consistency with CheckDB - by Vinod Kumar M Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam This was one of the superb sessions i have attended. He explained all the concepts in detail with a demo. The important thing in this is there is something called Data-Tier application project which is newly introduced in this VS2010 with which we can manage all our data along with our application inside our VS itself. We can create DB,Tables,Procs,Views etc. here itself and once we deploy it creates a compressed file called .dacpac which stores all the changes in Table Schema,Created procs, etc. on to that single file which reduces our (developer's) effort in preparing the deployment scripts and giving it to the DBA. It also has some policy configurations which can be managed easily by checking some rules like in outlook. For Ex : IF the SQL Server Version > 10 then deploy else dont. This rule specifies that even if we try to deploy on SQL Server DB with version less than 10 It will not do it. And if we deploy some .dacpac to SQL server production db with the option upgrade DB with this dacpac once everything completes successfully it will say success else it rollsback to the prior version. Even if it gets deployed successfully and later @ a point of time you wish to revert it back to the prior version, you can go ahead and delete the existing dacpac version so that it reverts to the older version of the db changes. And for the good questions that were asked in the session T-Shirts were given. SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M This one too was the best session. The speaker Vinod explained everything very much clearly. This was really useful session and you dont believe, as per my knowledge, in the total 3 days in the TechEd except the Keynote, for this session seats were full (House FULL)  People were even standing out to attend this session. Such a great one it was. The speaker did a deep dive in to the Query Plan section and showed which actually causes the problem. Its all about the thing that we need to understand about the execution of SQL server Queries. We think in a way and SQL Server never executes in that way. We need to understand that first. He also told about there might be two plans generated for a single query at a point of time because of parallel processors in the system. The Key is here in every query. There is something called Estimated Row Count and Actual Row Count in the query plan. If the estimated row count by SQL server tallies with the actual row count your performance will be awesome. He said some tweaks to achieve the same. After this as usual we had lunch SQL Server Utility - Its about more than 1 SQL Server - by Vinod Kumar Jagannathan This was more of a DBA's session. 
Am really sorry I was totally blank and I was not interested to attend this session and walked out to attend Migrating to the cloud by Harish Ranganathan (My favorite Speaker) but unfortunately that was some other persons session. There the speaker was telling about how to configure the connection strings in such a way that we can connect to the SQL Azure platform from our VS and also showed us how to deploy the same in to Windows Azure. In between there were lot of technical problems like laptop hang, user locked and he was switching between systems, also i came in the half so i wasnt able to listen that fully. In between, Since I got an MCTS certification they gave me T-Shirt with the lines 'Iam Certified. Are you?' and they asked me to wear that. If we wear that we might get spotted and they would give us some goodies  So on the 3rd day I was wearing that T-Shirt. I got spotted by the person Tarun who was coordinating things about the certification, and he was accompanied with a cameraman and they interviewed me about the certification and I was shown live in the Teched and was seen by 60000 live viewers of the TechEd. I was really happy on that. Data Recovery / Consistency with CheckDB - by Vinod Kumar M This was one of the best sessions too in the TechEd. This guy is really amazing. In front of us he crashed a DB and showed how to recover the same in 6 different ways for different no of failures. Showed about Different types of error msgs like : 823,824,825 msdb..suspect_pages DBCC CheckDB (different parameters to it) I am really waiting for his session to get uploaded live in the Teched Website. Here is his contact info If you wish to connect to him : Twitter : @vinodk_sql Website : www.ExtremeExperts.com Blog : http://blogs.sqlxml.org/vinodkumar Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave Pinal Dave is a King in SQL and he is a SQL MVP and he is the owner of SQLAuthority.com He took the session on Spatial Databases from the start. Showed about the different types of Spatial : Geometric and Geographic Geometric : x and y axis its a planar surface Geographic : Spherical surface with 3600  as the maximum which is used to represent the geographic points on the earth and easy to draw maps of different kinds. He had a lot of obstacles during his session like rain coming inside the hall, mic wires got bursted due to rain, Videos off on the display screens. In spite of that he asked the audience to come in the front rows and managed to take a good session without ppts and finally we got the displays on and he was showing demos on the same what he explained orally. That was really a fun filled informative session. He gave some books for the persons who asked good questions and answered well for his questions and I got one too  (It was a book on Data Mining - Wrox Publishers) And finally after all these things there was Keynote session for close of the TechEd. and we all assembled in a big hall where Mr.Ashok Soota, a man of age around 70  co-founder of Mindtree was called to give some lecture on his successes. He was explaining about his past and what all companies he switched and for what reasons and what are all his successes and what are all his failures and the learnings of him from his past failures. and his success and failures on his partnerships with the other concern. And there were some questions for him like What is your suggestion on young entrepreneur? How did you learn from past failures? What is reiterating your success? 
What is your suggestion on partnerships? How to choose partnerships? etc. And they said @ 7.30 Pm there would be a party night, but unfortunately i was not able to attend that because I had to catch my train and before that i had to pack things, so I started @ 7 itself. Thats it about the TechED!!! Stay tuned for further Technology updates.

    Read the article

  • Web Designer and/or Developer

    - by chimps
    We've outsourced our app development, and the devs have created a DB hosted on Amazon EC2. We're in talks with a web designer for the website, but the designer does not do any backend integration, i.e. connecting the website with the DB created by the app developers. Do you recommend getting designs from the designer and hiring a freelancer to do the front-end/back-end integration? I mean, would there be issues/complications? Or should we go with a designer who provides the complete package?

    Read the article

  • Automate RAC Cluster Upgrades using EM12c

    - by HariSrinivasan
    One of the most arduous processes  in DB maintenance is upgrading Databases across major versions, especially for complex RAC Clusters.With the release of Database Plug-in  (12.1.0.5.0), EM12c Rel 3 (12.1.0.3.0)  now supports automated upgrading of RAC Clusters in addition to Standalone Databases. This automation includes: Upgrade of the complete Cluster across the nodes. ( Example: 11.1.0.7 CRS, ASM, RAC DB  ->   11.2.0.4 or 12.1.0.1 GI, RAC DB)  Best practices in tune with your operations, where you can automate upgrade in steps: Step 1: Upgrade the Clusterware to Grid Infrastructure (Allowing you to wait, test and then move to DBs). Step 2: Upgrade RAC DBs either separately or in group (Mass upgrade of RAC DB's in the cluster). Standard pre-requisite checks like Cluster Verification Utility (CVU) and RAC checks Division of Upgrade process into Non-downtime activities (like laying down the new Oracle Homes (OH), running checks) to Downtime Activities (like Upgrading Clusterware to GI, Upgrading RAC) there by lowering the downtime required. Ability to configure Back up and Restore options as a part of this upgrade process. You can choose to : a. Take Backup via this process (either Guaranteed Restore Point (GRP) or RMAN) b. Set the procedure to pause just before the upgrade step to allow you to take a custom backup c. Ignore backup completely, if there are external mechanisms already in place.  High Level Steps: Select the Procedure "Upgrade Database" from Database Provisioning Home page. Choose the Target Type for upgrade and the Destination version Pick and choose the Cluster, it picks up the complete topology since the clusterware/GI isn't upgraded already Select the Gold Image of the destination version for deploying both the GI and RAC OHs Specify new OH patch, credentials, choose the Restore and Backup options, if required provide additional pre and post scripts Set the Break points in the procedure execution to isolate Downtime activities Submit and track the procedure's execution status.  The animation below captures the steps in the wizard.  For step by step process and to understand the support matrix check this documentation link. Explore the functionality!! In the next blog, will talk about automating rolling Upgrades of Databases in Physical Standby Data Guard environment using Transient Logical Standby.

    Read the article

  • SSIS Basics: Using the Merge Join Transformation

    SSIS is able to take sorted data from more than one OLE DB data source and merge it into one table, which can then be sent to an OLE DB destination. This 'Merge Join' transformation works in a similar way to a SQL join, by specifying a 'join key' relationship. This transformation can save a great deal of processing on the destination. Annette Allen, as usual, gives clear guidance on how to do it.
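    The 'join key' idea is the same one any merge join uses: both inputs arrive sorted on the key and are walked in step. A miniature illustration of the idea in plain Python (not SSIS, and it assumes the key is unique on each side):

```python
def merge_join(left, right, key):
    """Inner-join two lists of dicts that are already sorted on `key`."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk == rk:
            out.append({**left[i], **right[j]})  # combine the matching rows
            i, j = i + 1, j + 1
        elif lk < rk:
            i += 1  # advance the side with the smaller key
        else:
            j += 1
    return out

customers = [{"cust_id": 1, "name": "Ann"}, {"cust_id": 3, "name": "Bob"}]
orders    = [{"cust_id": 1, "total": 10},  {"cust_id": 2, "total": 25}]
print(merge_join(customers, orders, "cust_id"))
# -> [{'cust_id': 1, 'name': 'Ann', 'total': 10}]
```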

    Read the article

  • Multiuser's impact on Access Database

    - by SilentRage47
    Can someone explain how the performance of an Access 2003 DB is affected when it's used by a lot (around 30) of users on the same LAN? I'm working on a VB6 project with this Access 2003 DB which performs fine on my local PC, but it's terrible when used across 20-30 users. Is there something I can do to improve performance? How can I work out the cause of this degradation in performance?

    Read the article

  • Software center won't open

    - by jji7skyline
    This is the error I get when I try to open it from terminal using 'software-center'. softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/share/software-center/softwarecenter/db/database.py', 96, 'open')' 2012-11-26 20:40:09,305 - root - WARNING - failed to add sca db Couldn't detect type of database I'm on Ubuntu 10.10. Unity won't work on my computer, so I'm stuck with this version.

    Read the article

  • Oracle Enterprise Manager 12c database management (on-demand seminar)

    - by user788995
    Published 2012/01/23. Covers database management with Oracle Enterprise Manager 12c, including Active Session History and Real Application Testing (RAT). Recording and slides: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/4_DBMgmt_120106_1.wmv http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/4_DBMgmt_120106_1.mp4 http://www.oracle.com/technetwork/jp/ondemand/db-new/databasemgmt-120106-1484788-ja.pdf

    Read the article

  • Oracle Database I/O over Direct NFS with NAS and SSD (on-demand seminar)

    - by user788995
    Published: 2012/01/23. A Japanese-language OTN on-demand seminar from the Oracle GRID Center on combining Unified Storage (NAS) with Oracle Database Direct NFS and SSD to improve database I/O. The agenda covers an introduction followed by Direct NFS topics. Session materials:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/120113_C-11_DirectNFS.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/120113_C-11_DirectNFS.mp4
    http://www.oracle.com/technetwork/jp/ondemand/db-technique/c-11-directnfs-1484611-ja.pdf

    Read the article

  • OTN on-demand seminar: Database tuning tips for DBAs

    - by user788995
    Published: 2012/01/23. A Japanese-language OTN on-demand seminar presenting practical database tuning tips for DBAs, including the use of AWR and Oracle Database 11g features. Session materials:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/111220_D-15_DBTuning.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/111220_D-15_DBTuning.mp4
    http://www.oracle.com/technetwork/jp/ondemand/db-new/d-15-dbtuning-1448436-ja.pdf

    Read the article

  • Oracle WebLogic Server - JDBC data sources | WebLogic Channel

    - by ???02
    A WebLogic Channel article (Japanese) on JDBC data sources in Oracle WebLogic Server. Connection details such as the database host, database name and user ID are held in the server-managed data source rather than in application code. Topics include JDBC data source fundamentals, WebLogic Server JDBC data source configuration, the Oracle JDBC Driver used by WebLogic Server, and an appendix of related JDBC settings.
    http://www.oracle.com/technetwork/jp/ondemand/application-grid/wls11g-jdbcdatasource-201107-otn-sc-439531-ja.pdf
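    As a rough illustration of the pattern the article describes (a minimal sketch, not code from the article; the JNDI name jdbc/ExampleDS and the query are hypothetical), an application obtains pooled connections from the server-managed data source via JNDI:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class DataSourceLookupSketch {
        public static void main(String[] args) throws NamingException, SQLException {
            // Runs inside the server (or with a WebLogic initial context configured);
            // the data source, not the application, holds the host, DB name and credentials.
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/ExampleDS"); // hypothetical JNDI name

            // Connections come from the server-managed pool and return to it on close().
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) { // hypothetical query
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }

    Because the data source owns the connection details, pointing the application at a different database is a server-side configuration change rather than a code change.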

    Read the article

  • OTN on-demand seminar: Oracle Exadata overview

    - by user788995
    Published: 2012/01/23. A Japanese-language OTN on-demand seminar introducing Oracle Exadata and its use for DWH, OLTP and database consolidation workloads, including seven key characteristics of the platform. Session materials:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/111207_D-3_Exadata.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/111207_D-3_Exadata.mp4
    http://www.oracle.com/technetwork/jp/ondemand/db-new/d-3-exadata-1448374-ja.pdf

    Read the article

  • Report from an Oracle database and storage seminar (9 August 2012)

    - by kazun
    A report (Japanese) from a database and storage seminar held at an Oracle office on 9 August 2012, covering storage for Oracle Database and Exadata, including Hybrid Columnar Compression (HCC). Highlights: (1) Sun ZFS Storage Appliances, with cumulative shipped capacity cited as exceeding 100 petabytes as of 9 August; (2) the newly announced StorageTek SL150 modular tape library, scaling from 30 to 300 slots for archive and end-of-life (EOL) data, with capacity figures of up to 450TB cited; (3) Oracle storage integration with Oracle Database and Exadata, spanning Sun ZFS Storage Appliance and Pillar Axiom, database-aware features such as Hybrid Columnar Compression, snapshot/clone and I/O Quality of Service, InfiniBand connectivity to Exadata, and the Sun ZFS Storage 7120 Appliance; (4) Hybrid Columnar Compression itself, which reduces I/O by storing data in a compressed columnar format (see the December 2009 Oracle white paper on Exadata Hybrid Columnar Compression), and which is now supported on Sun ZFS Storage Appliance and Pillar Axiom in addition to Exadata.
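    As background on how HCC is applied (a hedged sketch, not taken from the seminar; the connection details and table names are hypothetical, and the statements only succeed when the database resides on Exadata, ZFS Storage Appliance or Pillar Axiom storage), compression is declared per table or partition:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class HccSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement()) {
                // Warehouse compression: optimized for query performance.
                stmt.execute("CREATE TABLE sales_hcc COMPRESS FOR QUERY HIGH "
                           + "AS SELECT * FROM sales");
                // Archival compression: highest ratio, at the cost of more CPU on access.
                stmt.execute("ALTER TABLE sales_history MOVE COMPRESS FOR ARCHIVE HIGH");
            }
        }
    }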

    Read the article

  • Saving complex aggregates using Repository Pattern

    - by Kevin Lawrence
    We have a complex aggregate (sensitive names obfuscated for confidentiality reasons). The root, R, is composed of collections of Ms, As, Cs, Ss. Ms have collections of other low-level details, etc. R really is an aggregate (no fair suggesting we split it!). We use lazy loading to retrieve the details. No problem there. But we are struggling a little with how to save such a complex aggregate. From the caller's point of view:
    r = repository.find(id);
    r.Ps.add(factory.createP());
    r.Cs[5].updateX(123);
    r.Ms.removeAt(5);
    repository.save(r);
    Our competing solutions are:
    1. Dirty flags. Each entity in the aggregate has a dirty flag. The save() method in the repository walks the tree looking for dirty objects and saves them. Deletes and adds are a little trickier - especially with lazy loading - but doable.
    2. Event listener accumulates changes. The repository subscribes a listener to changes and accumulates events. When save is called, the repository grabs all the change events and writes them to the DB.
    3. Give up on the repository pattern. Implement overloaded save methods to save the parts of the aggregate separately. The original example would become:
    r = repository.find(id);
    r.Ps.add(factory.createP());
    r.Cs[5].updateX(123);
    r.Ms.removeAt(5);
    repository.save(r.Ps);
    repository.save(r.Cs);
    repository.save(r.Ms);
    (or worse)
    Advice please! What should we do?
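    Not part of the original question: a minimal Java sketch, with hypothetical entity and repository names, of how the first option (dirty flags) could look. Each entity tracks its own changed state, and the repository walks the aggregate, persisting only what is dirty and deleting what was removed:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical aggregate: a root holding child entities, each tracking its own dirty state.
    abstract class Entity {
        private boolean dirty;
        private boolean isNew = true;          // not yet persisted
        protected void markDirty() { dirty = true; }
        boolean isDirty() { return dirty || isNew; }
        void markClean() { dirty = false; isNew = false; }
    }

    class ChildC extends Entity {
        private int x;
        void updateX(int value) { this.x = value; markDirty(); }
        int x() { return x; }
    }

    class RootR extends Entity {
        final List<ChildC> cs = new ArrayList<>();
        final List<ChildC> removed = new ArrayList<>();   // track deletes explicitly
        void remove(ChildC c) { if (cs.remove(c)) removed.add(c); }
    }

    // The repository walks the tree and writes only what changed.
    class RootRepository {
        void save(RootR root) {
            if (root.isDirty()) {
                persistRoot(root);                  // INSERT or UPDATE the root row
                root.markClean();
            }
            for (ChildC c : root.cs) {
                if (c.isDirty()) {
                    persistChild(root, c);          // INSERT or UPDATE the child row
                    c.markClean();
                }
            }
            for (ChildC c : root.removed) {
                deleteChild(root, c);               // DELETE rows removed from the collection
            }
            root.removed.clear();
        }

        // Persistence calls are stubbed out; a real implementation would issue SQL here.
        private void persistRoot(RootR r) { System.out.println("saving root"); }
        private void persistChild(RootR r, ChildC c) { System.out.println("saving child x=" + c.x()); }
        private void deleteChild(RootR r, ChildC c) { System.out.println("deleting child"); }
    }

    A real implementation would also need to distinguish inserts from updates (the isNew flag above) and to skip lazily-loaded collections that were never materialized, so the walk itself does not trigger loading.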

    Read the article

< Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >