Search Results

Search found 34699 results on 1388 pages for 'database backup'.


  • WebLogic Server and Oracle Database: reducing TCO (original article in Japanese; text lost to character-encoding corruption) | WebLogic Channel

    - by (byline garbled in extraction)
    [The Japanese body text of this article was garbled in extraction. The recoverable product references indicate that it discusses lowering TCO by running Oracle WebLogic Server together with Oracle Database: connecting WebLogic Server to Oracle Real Application Clusters (RAC) through Active GridLink for RAC, using Oracle Coherence as an in-memory data grid and HTTP session store alongside WebLogic Server, object-relational mapping with Oracle TopLink and the Java Persistence API (JPA), consolidated monitoring of WebLogic Server and Oracle Database through Oracle Enterprise Manager, and the JRockit JVM as an alternative to the Sun JVM for Java EE workloads.]

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies; in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal-based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the web site responds to the load simulation. While the load test is running you will be able to see close to real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples. After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped. Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss the comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart, where you can hover over an item to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a long duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better let’s look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so objects that are supposed to be short-lived live in Gen-0 and the long-living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started; the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of the dead Gen-1 objects are discarded, the rest are moved to Gen-2, and the Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.      Let’s explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, IIS restarts the worker process as a mode of recovery. It is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user’s basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try! 
How can you address this? Well, there are two ways of addressing it. 1. Workaround – A 32-bit (x86) process can only address a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default compile your application as ‘Any CPU’, the application is built such that it will run in x86 (32-bit) mode on an x86 processor and in x64 (64-bit) mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the ‘Any CPU’ selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product. Caching One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences! 
Let’s look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache, a significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache; no more concurrency issues. But let’s say you are processing 10 requests per second; by the time I have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment: you don’t want another developer to come in, think it was written by a fresher, and wonder why the cache key is being checked for null twice! The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let’s say only one entry point for data updates, you could write some logic to say that every time new data is entered the cache object is set to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains. You would reduce hits to the database for searching by 90% or more. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => That is a cache failure! These travel sites or price-compare engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains the performance benefits 100% of the time, but every once in a while annoys a customer because the fare has expired. 
But the trade-off works in favour of these sites, as they are still able to process 30+ page requests per second and cater to the site traffic, at the cost of maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique; what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you. The Road Ahead Thank you for taking the time out and reading this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc.: please leave a comment. Next up is ‘Load Testing in the cloud’, where I’ll be exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I &ndash; Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments are now using SOA clusters.  So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post will explain the settings required to prepare the cluster for installation and configuration. Software Required The following software is required for an 11.1.1.3 SOA/BPM install. Software Version Notes Oracle Database Certified databases are listed here SOA & BPM Suites require a working database installation. Repository Creation Utility (RCU) 11.1.1.3 If upgrading an 11.1.1.2 repository then a separate script is available. Web Tier Utilities 11.1.1.3 Provides Web Server; 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first. Web Tier Utilities 11.1.1.3 Web Server, 11.1.1.3 patch.  You can use the 11.1.1.2 version without problems. Oracle WebLogic Server 11gR1 10.3.3 This is the host platform for the 11.1.1.3 SOA/BPM Suites. SOA Suite 11.1.1.2 SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first. SOA Suite 11.1.1.3 SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed. My installation was performed on Oracle Enterprise Linux 5.4 64-bit. Database I will not cover setting up the database in this series other than to identify the database requirements.  If setting up a SOA cluster then ideally we would also be using a RAC database.  I assume that this is running on separate machines to the SOA cluster.  Section 2.1, “Database”, of the EDG covers the database configuration in detail. Settings The database should have processes set to at least 400 if running SOA/BPM and BAM. alter system set processes=400 scope=spfile Run RCU The Repository Creation Utility creates the necessary database tables for the SOA Suite.  The RCU can be run from any machine that can access the target database.  In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix.  This allows you to have multiple 11g installations in the same database. After running the RCU you need to grant some additional privileges to the soainfra user.  The soainfra user should have privileges on the transaction tables. grant select on sys.dba_pending_transactions to prefix_soainfra grant force any transaction to prefix_soainfra Machines The cluster will be built on the following machines. EDG Name is the name used for this machine in the EDG. Notes are a description of the purpose of the machine. EDG Name Notes LB External load balancer to distribute load across and fail over between web servers. WEBHOST1 Hosts a web server. WEBHOST2 Hosts a web server. SOAHOST1 Hosts SOA components. SOAHOST2 Hosts SOA components. BAMHOST1 Hosts BAM components. BAMHOST2 Hosts BAM components. Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers, and in this case we add SOAHOST3, SOAHOST4, etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine. Software Components The cluster will use the following software components. EDG Name is the name used for this machine in the EDG. 
Type is the type of component, generally a WebLogic component. Notes are a description of the purpose of the component. EDG Name Type Notes AdminServer Admin Server Domain Admin Server WLS_WSM1 Managed Server Web Services Manager Policy Manager Server WLS_WSM2 Managed Server Web Services Manager Policy Manager Server WLS_SOA1 Managed Server SOA/BPM Managed Server WLS_SOA2 Managed Server SOA/BPM Managed Server WLS_BAM1 Managed Server BAM Managed Server running Active Data Cache WLS_BAM2 Managed Server BAM Managed Server without Active Data Cache   Node Manager Will run on all hosts with WLS servers OHS1 Web Server Oracle HTTP Server OHS2 Web Server Oracle HTTP Server LB Load Balancer Load Balancer, not part of SOA Suite The above assumes a 2-node cluster. Network Configuration The SOA cluster requires an extensive amount of network configuration.  I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.16.x.x) to the cluster for use by addresses that only need to be accessible to the Load Balancer or other cluster members.  Section 2.2, "Network", of the EDG covers the network configuration in detail. EDG Name is the hostname used in the EDG. IP Name is the IP address name used in the EDG. Type is the type of IP address: Fixed is fixed to a single machine. Floating is assigned to one of several machines to allow for server migration. Virtual is assigned to a load balancer and used to distribute load across several machines. Host is the host where this IP address is active.  Note for floating IP addresses a range of hosts is given. Bound By identifies which software component will use this IP address. Scope shows where this IP address needs to be resolved. Cluster scope addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section.  These addresses are only used for inter-cluster communication or for access by the load balancer. Internal scope addresses need to be resolvable throughout the internal network (see the notes on DNS below). Notes are comments on why that type of IP is used. EDG Name IP Name Type Host Bound By Scope Notes ADMINVHN VIP1 Floating SOAHOST1-SOAHOSTn AdminServer Cluster Admin server, must be able to migrate between SOA server machines. SOAHOST1 IP1 Fixed SOAHOST1 NodeManager, WLS_WSM1 Cluster WSM Server 1 does not require server migration. SOAHOST2 IP2 Fixed SOAHOST1 NodeManager, WLS_WSM2 Cluster WSM Server 2 does not require server migration SOAHOST1VHN VIP2 Floating SOAHOST1-SOAHOSTn WLS_SOA1 Cluster SOA server 1, must be able to migrate between SOA server machines SOAHOST2VHN VIP3 Floating SOAHOST1-SOAHOSTn WLS_SOA2 Cluster SOA server 2, must be able to migrate between SOA server machines BAMHOST1 IP4 Fixed BAMHOST1 NodeManager Cluster   BAMHOST1VHN VIP4 Floating BAMHOST1-BAMHOSTn WLS_BAM1 Cluster BAM server 1, must be able to migrate between BAM server machines BAMHOST2 IP3 Fixed BAMHOST2 NodeManager, WLS_BAM2 Cluster BAM server 2 does not require server migration WEBHOST1 IP5 Fixed WEBHOST1 OHS1 Cluster   WEBHOST2 IP6 Fixed WEBHOST2 OHS2 Cluster   soa.mycompany.com VIP5 Virtual LB LB Public External access point to SOA cluster. admin.mycompany.com VIP6 Virtual LB LB Internal Internal access to WLS console and EM soainternal.mycompany.com VIP7 Virtual LB LB Internal Internal access point to SOA cluster Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster.  For example, in the event of failure of SOAHOST1, WLS_SOA1 will need to be migrated to another server.  
In this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine.  Once set up the node manager will manage registration and removal of the floating IP addresses with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers.  The hostnames don’t have to be the ones given in the EDG, but they must be distinct in the same way as the ETC names are distinct.  If the type is a fixed IP then if the addresses are the same you can use the same hostname, for example if you collapse the soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address, however SOAHOST1VHN can never be the same as BAMHOST1VHN because these are floating IP addresses. Notes on DNS IP addresses that are of scope “Cluster” just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer.  IP addresses that are of scope “Internal” need to be available on the internal DNS servers, whilst IP addresses of scope “Public” need to be available on external and internal DNS servers. Shared File System At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores.  It is also possible to place the software itself on a shared server.  I strongly recommend that all machines have the same file structure for their SOA installation otherwise you will experience pain!  Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail. The following shorthand is used for locations: ORACLE_BASE is the root of the file system used for software and configuration files. MW_HOME is the location used by the installed SOA/BPM Suite installation.  This is also used by the web server installation.  In my installation it is set to <ORACLE_BASE>/SOA11gPS2. ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components.  This directory is installed under the the MW_HOME but the name is decided by the user at installation, default values are Oracle_SOA1 and Oracle_Web1.  In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle _WEB. ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory.  This is always <MW_HOME>/oracle_common. ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache.  It is recommended to create it under <ORACLE_BASE>/admin.  In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1. WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3. Key file locations are shown below. Directory Notes <ORACLE_BASE>/admin/domain_name/aserver/domain_name Shared location for domain.  Used to allow admin server to manually fail over between machines.  When creating domain_name provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/aserver/applications Shared location for deployed applications.  Needs to be provided when creating the domain. 
In my install this is <ORACLE_BASE>/admin/aserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/domain_name Either unique location for each machine or can be shared between machines to simplify task of packing and unpacking domain.  This acts as the managed server configuration location.  Keeping it separate from Admin server helps to avoid problems with the managed servers messing up the Admin Server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/mserver/applications Either unique location for each machine or can be shared between machines.  Holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications as I only have one domain on the box. <ORACLE_BASE>/admin/domain_name/soa_cluster_name Shared directory to hold the following   dd – deployment descriptors   jms – shared JMS file stores   fadapter – shared file adapter co-ordination files   tlogs – shared transaction log files In my install this is <ORACLE_BASE>/admin/soa_cluster. <ORACLE_BASE>/admin/instance_name Local folder for web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2. I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer. <ORACLE_BASE>/product/fmw This can be a shared or local folder for the SOA/BPM Suite software.  I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2 All the shared files need to be put onto a shared storage media.  I am using NFS, but recommendation for production would be a SAN, with mirrored disks for resilience. Collapsing Environments To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine.  This will require more memory but memory is a lot cheaper than additional machines.  For environments that require higher security then stay with a separate WEBHOST tier as per the EDG.  Similarly for high volume environments then keep a separate set of machines for BAM and/or Web tier as per the EDG. Notes on Dev Environments In a dev environment it is acceptable to use a a single node (non-RAC) database, but be aware that the config of the data sources is different (no need to use multi-data source in WLS).  Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer.  To test a cluster properly we will need at least 2 machines. For my test environment I used Oracle Web Cache as a load balancer.  I ran it on one of the SOA Suite machines and it load balanced across the Web Servers on both machines.  This was easy for me to set up and I could administer it from a web based console.

    Read the article

  • Rsync: windows 7, synology: login error and permission denied error

    - by loonboon
    Good day to all of you. I'm running into strange/stupid errors, and I hope anybody would be so kind as to help me out. I have to admit, I am by no means a guru, so please bear with me :-) Situation: - Synology NAS (runs Linux), and Windows 7 desktop (1 normal/restricted user Lisa and 1 admin user). - Data from the W7 desktop to be rsynced to the Synology: /volume1/home/Lisa/Backup - Rsync command: c:\cygwin\bin\rsync -avz /cygdrive/e/Lisa/ [email protected]:/volume1/homes/Lisa/Backup - I've set up ssh per these two threads: a. http://www.cesareriva.com/archives/102 b. http://www.cesareriva.com/archives/112 Now the horrors begin: - root is allowed to run the rsync successfully; however, it doesn't log in automatically (so I can not use rsync in W7 batch scripts, which is of course required). - Lisa is allowed to log in automatically but can not successfully finish the rsync command because of permission errors: rsync change dir /volume1/homes/Lisa/Backup failed: permission denied. This happens for each and every file and subdir rsync tries to create. However, the main directory (Backup) is created. When I try to copy files from Windows Explorer to the directory 'Backup' using the very same user Lisa, everything goes smoothly. So, obviously, there is a permission problem somewhere; either my rsync command isn't correct, or the folder permissions for homes/Lisa aren't correct (but, then again, Windows 7 copies files to that folder without any problems, so that does make me believe the homes/Lisa permissions don't appear to be the problem). I also tried adding: --chmod=Dugo+x --chmod=ugo+r which I found somewhere on the web, to the rsync command, but this didn't solve anything and gave the exact same errors. Would anybody please please help me on how to fix this? I am utterly frustrated about this, because I have been trying for 1 month to get everything to work and it simply doesn't work. I bought the big Synology to end the horrors of 20 external USB disks once and for all (we have many pictures and home vids of our deceased dogs and want to watch these, the horrors being 'what material is on what disk'). I'll gladly return the favour of somebody helping me out by buying you a nice beer (PayPal), if you could end my misery. I am not extremely skilled on Linux (not at all :-( ) so if you could add an extra word of explanation where possible so I understand what to do, I'd be very grateful. I really hope somebody can help me out, Thank you in advance, Lisa

    Read the article

  • ASP.NET Membership and Roles separation relationship

    - by Saif Khan
    Hi, I have an ASP.NET project where I want to keep the membership (SQL provider) in a separate database, and the Roles/Profiles will be per application. Question: What is the key that relates the Membership database to the Roles/Profile database? Is it the UserID or the UserName? I opened up the tables in separate explorers and noticed the UserID is different in the Membership database from that in the application Roles database.
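    For reference, a quick way to see what the SQL providers actually store is to query the provider tables themselves. The hedged sketch below assumes the standard schema created by aspnet_regsql.exe (aspnet_Users, aspnet_Roles, aspnet_UsersInRoles); the queries and the interpretation are illustrative and not taken from the question.

        -- Within a single provider database, Membership and Roles are linked by UserId:
        SELECT u.UserId, u.UserName, r.RoleName
        FROM dbo.aspnet_Users u
        JOIN dbo.aspnet_UsersInRoles ur ON ur.UserId = u.UserId
        JOIN dbo.aspnet_Roles r ON r.RoleId = ur.RoleId;

        -- When Membership and Roles live in separate databases, each database keeps its own
        -- aspnet_Users row (and therefore generates its own UserId), which would explain the
        -- differing UserIds observed above; the value the two databases have in common is the
        -- UserName (together with the ApplicationName).
        SELECT UserId, UserName FROM dbo.aspnet_Users;  -- run against each database to compare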

    Read the article

  • What is the best method to write user "log files" of an android application to a file in a remote server or a table in a remote database?

    - by Samitha Chathuranga
    I am creating a multi-user Android application and it is connected to a PHP web service on a remote server and to a remote database via that web service. I want to keep track of all the important activities done by the users. For example, if a user logs in to the app, changes his profile details and then logs out, a brief description of what he has done should be recorded (with the time) somewhere, so that the admin of the system can see what the users are doing. So I think it is better to use logcat files and then flush all that data to a unique file on the server or a table in the database when the user logs out or exits from his account. If this is appropriate, how do I do it?
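    If the activity trail ends up in the remote database rather than in files, the storage side can be as simple as a single audit table that the PHP web service inserts into whenever the app reports an event. The sketch below is illustrative only; the table and column names are assumptions, not part of the question.

        -- Hypothetical audit table for user activity (names and types are assumptions)
        CREATE TABLE user_activity_log (
            id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id     INT UNSIGNED    NOT NULL,   -- the user who performed the action
            action      VARCHAR(64)     NOT NULL,   -- e.g. 'login', 'profile_update', 'logout'
            description VARCHAR(255)    NULL,       -- brief human-readable detail
            created_at  TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP
        );

        -- The web service would run something like this for each event reported by the app
        INSERT INTO user_activity_log (user_id, action, description)
        VALUES (42, 'profile_update', 'Changed display name and phone number');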

    Read the article

  • PHP Session Class and $_SESSION Array

    - by Gianluca Bargelli
    Hello, I've implemented this custom PHP Session class for storing sessions in a MySQL database:

    class Session {
        private $_session;
        public $maxTime;
        private $database;

        public function __construct(mysqli $database) {
            $this->database = $database;
            $this->maxTime['access'] = time();
            $this->maxTime['gc'] = get_cfg_var('session.gc_maxlifetime');
            session_set_save_handler(array($this,'_open'),
                                     array($this,'_close'),
                                     array($this,'_read'),
                                     array($this,'_write'),
                                     array($this,'_destroy'),
                                     array($this,'_clean'));
            register_shutdown_function('session_write_close');
            session_start(); //SESSION START
        }

        public function _open() {
            return true;
        }

        public function _close() {
            $this->_clean($this->maxTime['gc']);
        }

        public function _read($id) {
            $getData = $this->database->prepare("SELECT data FROM Sessions AS Session WHERE Session.id = ?");
            $getData->bind_param('s', $id);
            $getData->execute();
            $allData = $getData->fetch();
            $totalData = count($allData);
            $hasData = (bool) $totalData >= 1;
            return $hasData ? $allData['data'] : '';
        }

        public function _write($id, $data) {
            $getData = $this->database->prepare("REPLACE INTO Sessions VALUES (?, ?, ?)");
            $getData->bind_param('sss', $id, $this->maxTime['access'], $data);
            return $getData->execute();
        }

        public function _destroy($id) {
            $getData = $this->database->prepare("DELETE FROM Sessions WHERE id = ?");
            $getData->bind_param('S', $id);
            return $getData->execute();
        }

        public function _clean($max) {
            $old = ($this->maxTime['access'] - $max);
            $getData = $this->database->prepare("DELETE FROM Sessions WHERE access < ?");
            $getData->bind_param('s', $old);
            return $getData->execute();
        }
    }

    It works well, but I don't really know how to properly access the $_SESSION array. For example:

    $db = new DBClass(); // This is a custom database class
    $session = new Session($db->getConnection());
    if (isset($_SESSION['user'])) {
        echo($_SESSION['user']); // THIS IS NEVER EXECUTED!
    } else {
        $_SESSION['user'] = "test";
        Echo("Session created!");
    }

    At every page refresh it seems that $_SESSION['user'] is somehow reset; what methods can I apply to prevent such behaviour?
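    For context, the handler above reads and writes a Sessions table with three columns (id, access, data), as implied by the REPLACE INTO Sessions VALUES (?, ?, ?) statement. A minimal sketch of the table it appears to assume is shown below; the exact column types are guesses for illustration, not taken from the post.

        -- Assumed layout of the Sessions table used by the handler (types are illustrative)
        CREATE TABLE Sessions (
            id     VARCHAR(128) NOT NULL PRIMARY KEY,  -- session_id()
            access INT UNSIGNED NOT NULL,              -- last access time (UNIX timestamp)
            data   TEXT         NOT NULL               -- serialized $_SESSION payload
        );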

    Read the article

  • MSSQL using SSH-tunnel from Visual Studio

    - by pbt
    Hi, I recently contacted a web host regarding support for external database access to a Microsoft SQL Database included in a package they offer. They replied saying that it is only possible with an SSH-tunnel. Is it possible to connect to a MS SQL database in Visual Studio using an SSH-tunnel? It is important for me to be able to access the database from my local machine (for debugging, generating LINQ classes, editing tables, etc). Or, how should I go about working with their database?

    Read the article

  • mysql timeout - c/C++

    - by user1262876
    Guys, I'm facing a problem with this code. The problem is the timeout; by timeout I mean the time it takes the program to tell me whether the server is connected or not. If I use my localhost I get the answer fast, but when I connect outside my localhost it takes 50 seconds to 1.5 minutes to respond and the program freezes until it is done. How can I fix the freezing, or make my own timeout, so that if it is still waiting after 50 seconds it tells me the connection failed and stops? Please use code in your answers, because I would understand it better. Thanks for any help I get. PS: USING MAC

    #include "mysql.h"
    #include <stdio.h>
    #include <stdlib.h>

    // Other Linker Flags: -lmysqlclient -lm -lz
    // just going to input the general details and not the port numbers
    struct connection_details {
        char *server;
        char *user;
        char *password;
        char *database;
    };

    MYSQL* mysql_connection_setup(struct connection_details mysql_details) {
        // first of all create a mysql instance and initialize the variables within
        MYSQL *connection = mysql_init(NULL);

        // connect to the database with the details attached.
        if (!mysql_real_connect(connection, mysql_details.server, mysql_details.user,
                                mysql_details.password, mysql_details.database, 0, NULL, 0)) {
            printf("Connection error : %s\n", mysql_error(connection));
            exit(1);
        }
        return connection;
    }

    MYSQL_RES* mysql_perform_query(MYSQL *connection, char *sql_query) {
        // send the query to the database
        if (mysql_query(connection, sql_query)) {
            printf("MySQL query error : %s\n", mysql_error(connection));
            exit(1);
        }
        return mysql_use_result(connection);
    }

    int main() {
        MYSQL *conn;     // the connection
        MYSQL_RES *res;  // the results
        MYSQL_ROW row;   // the results row (line by line)

        struct connection_details mysqlD;
        mysqlD.server = (char*)"Localhost";  // where the mysql database is
        mysqlD.user = (char*)"root";         // the root user of mysql
        mysqlD.password = (char*)"123456";   // the password of the root user in mysql
        mysqlD.database = (char*)"test";     // the database to pick

        // connect to the mysql database
        conn = mysql_connection_setup(mysqlD);

        // assign the results return to the MYSQL_RES pointer
        res = mysql_perform_query(conn, (char*) "SELECT * FROM me");

        printf("MySQL Tables in mysql database:\n");
        while ((row = mysql_fetch_row(res)) != NULL)
            printf("%s - %s - %s\n", row[0], row[1], row[2]); // <-- Rows

        /* clean up the database result set */
        mysql_free_result(res);
        /* clean up the database link */
        mysql_close(conn);
        return 0;
    }

    Read the article

  • SQL Server using SSH-tunnel from Visual Studio

    - by pbt
    Hi, I recently contacted a web host regarding support for external database access to a Microsoft SQL Server database included in a package they offer. They replied saying that it is only possible with an SSH-tunnel. Is it possible to connect to a SQL Server database in Visual Studio using an SSH-tunnel? It is important for me to be able to access the database from my local machine (for debugging, generating LINQ classes, editing tables, etc). Or, how should I go about working with their database?

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010?  If so, here’s how you can activate and download your free upgrade to Office 2010! Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download.  But if you’ve purchased a new copy of Office 2007 or a new computer that came with Office 2007 since March 5th, 2010, then you’re entitled to an absolutely free upgrade to Office 2010.  You’ll need enter information about your Office 2007 and then download the upgrade, so we’ll step you through the process. Getting Started First, if you’ve recently purchased Office 2007 but haven’t installed it, you’ll need to go ahead and install it before you can get your free Office 2010 upgrade.  Install it as normal.   Once Office 2007 is installed, run any of the Office programs.  You’ll be prompted to activate Office.  Make sure you’re connected to the internet, and then click Next to activate. Get your Free Upgrade to Office 2010 Now you’re ready to download your upgrade to Office 2010.  Head to the Office Tech Guarantee site (link below), and click Upgrade now. You’ll need to enter some information about your Office 2007.  Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms. Now you’re going to need the Product ID number from Office 2007.  To find this, open Word or any other Office 2007 application.  Click the Office Orb, and select Options on the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you’ll see your Product ID.  This should be a number like: 12345-123-1234567-12345   Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID.  Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID. When it is finished, you’ll be taken to an order page that shows the edition of Office 2010 you’re eligible to receive.  The upgrade download is free, but if you’d like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99.  Otherwise, simply click Continue to accept. Do note that the edition of Office 2010 you receive may be different that the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release.  Here’s a chart you can check to see what edition you’ll receive.  Note that you’ll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same. Office 2007 Edition Office 2010 Upgrade You’ll Receive Office 2007 Home and Student Office Home and Student 2010 Office Basic 2007Office Standard 2007 Office Home and Business 2010 Office Small Business 2007Office Professional 2007Office Ultimate 2007 Office Professional 2010 Office Professional 2007 AcademicOffice Ultimate 2007 Academic Office Professional Academic 2010 Sign in with your Windows Live ID, or create a new one if you don’t already have one. Enter your name, select your country, and click Create My Account.  Note that Office will send Office 2010 tips to your email address; if you don’t wish to receive them, you can unsubscribe from the emails later.   Finally, you’re ready to download Office 2010!  
Click the Download Now link to start downloading Office 2010.  Your Product Key will appear directly above the Download link, so you can copy it and then paste it in the installer when your download is finished.  You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link. If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link.  And, of course, even if you didn’t order a backup DVD, you can always burn the installers to a DVD for a backup.   Install Office 2010 Once you’re finished downloading Office 2010, run the installer to get it installed on your computer.  Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office.   The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts.  If you wish to keep some of your Office 2007 applications instead, click Customize and then select to either keep all previous versions or simply keep specific applications. By default, Office 2010 will try to activate online automatically.  If it doesn’t activate during the install, you’ll need to activate it when you first run any of the Office 2010 apps.   Conclusion The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007.  The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time.  Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper. And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office! Link Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

  • How to Add Proprietary Drivers to Ubuntu 10.04

    - by Matthew Guay
    Does the hardware on your Ubuntu system need proprietary drivers to work at peak performance?  Today we take a look at how easy version 10.04 makes it to install them. Ubuntu 10.04 finally automatically recognizes and installs drivers for most hardware today; it even recognized and configured Wi-Fi drivers correctly every time in our tests.  This is in contrast to the past, when it was often difficult to get hardware to work in Linux.  However, most video cards still need proprietary drivers from their manufacturer to get full hardware video acceleration. Even though Ubuntu doesn’t include any non-open source components, it still makes it easy to install proprietary drivers if you wish.  When you first install and boot into Ubuntu, you may see a popup informing you that “restricted” drivers are available. You may see a notification asking you if you’d like to install optional drivers from your graphics card manufacturer when you try to enable advanced desktop effects.  Click Enable to directly install the drivers right there. Or, you can select the tray icon from the first popup, and click Install drivers. Alternately, if the tray icon has disappeared, click System, then Administration, and select Hardware Drivers.   This will open a dialog showing all the proprietary drivers available for your system, which may include drivers for your video card and other hardware depending on your computer.  Select the driver you wish to install, and click Activate. Enter your password, and then Ubuntu will download and install the driver without any more input.  After installation you may be prompted to reboot your system. Now, you should be able to take full advantage of your hardware, including fancy desktop effects with hardware acceleration. If you ever wish to remove these drivers, simply re-open the drivers dialog as above, select the driver, and click Remove.  Once again, a reboot may be required to finish the process. Conclusion Ubuntu has definitely made it easier to use Linux on your desktop computer, no matter what hardware you have.  If your video card or other hardware require proprietary drivers, it makes them available and simple to install.  And, best of all, all of your drivers stay updated with your software updates, so you can be sure you’re always running the latest.

    Read the article

  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don’t you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database’s character encoding, or the encoding MODx/php uses to talk to MySQL isn’t correct. The Website Encoding Your MODx site’s character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website’s visitors will get. The Manager’s Encoding The manager’s encoding can be changed by setting $modx_manager_charset at manager/includes/lang/<language>.inc.php like this (for example): $modx_manager_charset = 'iso-8859-1'; To find out what language you’re using (and thus was file you need to change), check Tools/Configuration/Site/Language (1 line above the character encoding setting). This needs to be the same encoding as your site. You can’t have your manager in utf8 and your site in iso-8859-1. Your Database’s Encoding The charset MODx/php uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database’s charset. Make sure you use the correct corresponding charset, for iso-8859-1 you need to use ‘latin1′. Utf8 is just utf8. Example: $database_connection_charset = 'latin1'; Now, if you check Reports/System info, the ‘Database Charset’ might say something else. This is because the mysql variable ‘character_set_database’ is displayed here, which contains the character set used by the default database and not the one for the current database/connection. However, if you’d change this to display ‘character_set_connection’, it could still say something else because the ’set character set’ statement used by MODx doesn’t change this value either. The ’set names’ statement does, but since it turns out my MODx install works now as expected I’ll just leave it at this before I get a headache. If I saved you a potential headache or you think I’m totally wrong or overlooked something, let me know in the comments. btw: I want to be able to use a real editor with MODx. Somehow.
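    If you want to verify what MySQL is actually using for a given connection and database, the character-set variables the post mentions can be inspected directly. A quick check, assuming you can open a MySQL session against the MODx database:

        -- Show the relevant character-set variables for the current connection/database
        SHOW VARIABLES LIKE 'character_set%';

        -- 'SET NAMES' changes character_set_client, character_set_connection and
        -- character_set_results for this connection; 'SET CHARACTER SET' does not
        -- change character_set_connection in the same way.
        SET NAMES 'utf8';
        SHOW VARIABLES LIKE 'character_set_connection';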

    Read the article

  • When a restore isn’t really complete

    - by John Paul Cook
    This week I discovered that restoring from a full backup doesn’t always restore SQL Server to the same state it was in when the backup was made. There are three settings that, if enabled, are not restored after a database restore. Thanks to Greg Low for pointing out that the list of affected settings is found in the SQL Server 2008 Upgrade Technical Reference Guide from which I quote: · is_broker_enabled · is_honor_broker_priority_on · is_trustworthy_on Detaching and attaching a database will also...(read more)
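    To make the point concrete, here is a hedged T-SQL sketch of how you might check those three settings after a restore and turn them back on if they were enabled before the backup was taken (the database name is a placeholder):

        -- Check the three settings that a RESTORE does not bring back
        SELECT name, is_broker_enabled, is_honor_broker_priority_on, is_trustworthy_on
        FROM sys.databases
        WHERE name = N'MyRestoredDb';

        -- Re-enable them if they were on before the backup was taken
        ALTER DATABASE MyRestoredDb SET ENABLE_BROKER;             -- is_broker_enabled
        ALTER DATABASE MyRestoredDb SET HONOR_BROKER_PRIORITY ON;  -- is_honor_broker_priority_on
        ALTER DATABASE MyRestoredDb SET TRUSTWORTHY ON;            -- is_trustworthy_on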

    Read the article

  • Unstructured Data - The future of Data Administration

    Some have claimed that there is a problem with the way data is currently managed using the relational paradigm, due to the rise of unstructured data in modern business. PCMag.com defines unstructured data as data that does not reside in a fixed location. They further explain that unstructured data refers to data in a free text form that is not bound to any specific structure. With the rise of unstructured data in the form of emails, spreadsheets, images and documents, the critics have a right to argue that the relational paradigm is not as effective as the object-oriented data paradigm in managing this type of data. The relational paradigm relies heavily on structure and relationships in and between items of data. This type of paradigm works best in a relational database management system like Microsoft SQL Server, MySQL, and Oracle, because data is forced to conform to a structure in the form of tables, and relations can be derived from the existence of one or more tables. These critics also claim that database administrators have not kept up with reality because their primary focus in regards to data administration deals with structured data and the relational paradigm. The relational paradigm was developed in the 1970s as a way to improve data management when compared to standard flat files. Little has changed since then, and modern database administrators need to know more than just how to handle structured data. That is why critics claim that today’s data professionals do not have the proper skills to store and maintain data for modern systems when compared to the skills of system designers, programmers, software engineers, and data designers, due to the industry trend toward object-oriented design and development. I think that they are wrong. I do not disagree that the industry is moving toward an object-oriented approach to development, with the potential to use more of an object-oriented approach to data.   However, I think that it is business itself that is limiting database administrators from changing how data is stored, because of the potential costs and impact of altering any part of stored data. Furthermore, database administrators, like all technology workers, are constantly trying to improve their technical skills in order to excel at their jobs, so I think that accusing data professionals is unjust when the root cause of the lack of innovation is controlled by business, and it is business that will suffer for its inability to keep up with technology. One way for database professionals to better prepare for the future of database management is to start working with data in the form of objects, so that they can extract data from those objects and use the information stored within them in relation to the data stored under the relational paradigm. Furthermore, I think the use of pattern matching will increase with the increased use of unstructured data, because objects can be selected, filtered and altered based on the existence of a pattern found within an object.
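    To make the pattern-matching point concrete, here is a toy T-SQL sketch (the table and column names are invented for illustration) of selecting free-text rows based on a pattern, which is the kind of operation the author expects to become more common as unstructured data grows:

        -- Hypothetical table holding unstructured text pulled out of emails and documents
        CREATE TABLE dbo.UnstructuredDocuments (
            DocumentId INT IDENTITY(1,1) PRIMARY KEY,
            SourceType VARCHAR(20) NOT NULL,   -- 'email', 'spreadsheet', 'document', ...
            BodyText   NVARCHAR(MAX) NOT NULL
        );

        -- Select only the documents whose free text matches a pattern of interest
        SELECT DocumentId, SourceType
        FROM dbo.UnstructuredDocuments
        WHERE BodyText LIKE N'%database backup%';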

    Read the article

  • Add Notes to Zoho Notebook in Firefox

    - by Asian Angel
    As you browse the web during the day, you probably find items that catch your interest and that you would like to save. The Zoho Notebook Helper extension for Firefox provides an easy way to add those items to your Zoho account. Using Zoho Notebook Helper Using the extension is easy and straightforward. Highlight the text, images, and links that you want to save, right-click, and select Add to Zoho Notebook. Note: It is recommended that you leave your status bar visible while using the extension. You can choose to add the selection to a new or pre-existing notebook or page. We created a new page for our example. Once your selection has been added to your account, you can see how nicely the formatting is retained. Notice the link at the top of the note: clicking on it will open the original webpage in a new tab. The notebook mini pane can also pop out into a separate window if needed. You can resize the new external window as desired and send it back to your browser when ready. Here you can see an even better view of how well formatting, particularly for images, is retained. A quick look inside our notebook account shows the notes that were just added, plus a second example added using a newly created page. As you build up the number of notebooks and pages, you can easily navigate between them using the drop-down menu in the mini pane’s upper right corner. Two new sets of notes, each with their own page, display nicely in our online account. The ease of use makes this a must-have extension for Zoho fans. Keep in mind that the extension will be temporarily disabled if you have your online account open in a tab. Conclusion Zoho Office doesn’t get much love compared to other online office solutions like Google Docs or the new Microsoft Web Apps. However, if you are a Zoho user, the Zoho Notebook Helper extension makes it very easy to add those notes, links, and images to your online account for later reference. Links Install the Zoho Notebook Helper extension (Zoho Website)

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming – is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no right answer. However, what if we quickly note a few things? Then I would like to request your help to complete this post. We will start with the positive parts of automation in SQL Server. The Good If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We all have used SQL Agent for so many things – backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run in your database server? The Ugly This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automated agent jobs. A client had an agent job that dropped the clean buffers every hour. Another client used Database Mail to send regular emails instead of only the necessary alert-related emails. The best one – a client used the new Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard-to-optimize database. (I ended up dropping all non-clustered indexes on the development server, running the production workload on the development server again, and then configuring optimal indexes.) Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance The one I hate the most is AutoShrink Database. It has given me a hard time in my career quite a few times. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server Automation is necessary, but common sense is a must when creating automation. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
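
    As a minimal sketch of catching one of these ugly automations, auto-shrink, the following assumes nothing about any particular environment (YourDB is a placeholder, not from the original post):

    -- Find databases that have auto-shrink turned on
    SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

    -- Turn it off for an affected database (hypothetical name)
    ALTER DATABASE YourDB SET AUTO_SHRINK OFF;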

    Read the article

  • Schedule Auto Send & Receive in Microsoft Outlook

    - by Mysticgeek
    If you use Outlook as your email client, you might want to schedule how often it checks for new messages. Today we show you how to schedule how often auto send/receive occurs. If you’re busy during the day and need to keep up with your emails, you might want Outlook to check for new messages every few minutes. Here we’ll show how to schedule it in Outlook 2010, 2007, and 2003 for a busy inbox where you want to stay on top of your important emails. Outlook 2010 To schedule Auto Send/Receive in Outlook 2010, click on the File tab then Options. The Outlook Options window opens…click on Advanced, scroll down to Send and receive, and click on the Send/Receive button. In the Send/Receive Groups window, under Setting for group “All Accounts”, check the box Schedule an automatic send/receive every…minutes. It is set to 30 minutes by default and you can change the minutes to whatever you want. If you’re busy and want to keep up with your messages, you can go as low as every one minute. You can also get to the Send/Receive groups by selecting the Send/Receive tab on the Ribbon and then Define Send/Receive Groups. Outlook 2007 To select the send/receive time intervals in Outlook 2007, open Outlook and click on Tools \ Options. Click on the Mail Setup tab, check the box next to Send immediately when connected, then click the Send/Receive button. Now change the schedule to automatically send/receive. You can also access the Send/Receive Groups section by going to Send/Receive > Send/Receive Settings and Define Send/Receive Groups. Outlook 2003 In Outlook 2003 click on Tools \ Options… Click on the Mail Setup tab, check Send immediately when connected, then click the Send/receive button. Then set the amount of time between send/receive attempts. If you live out of Microsoft Outlook and want to keep up with messages, setting the automatic send/receive minutes will keep you up to date.

    Read the article

  • SQL SERVER – Working with FileTables in SQL Server 2012 – Part 1 – Setting Up Environment

    - by pinaldave
    Filestream is a very interesting feature, and FileTable, which builds on Filestream, is equally exciting. Today in this post, we will learn how to set up the FileTable environment in SQL Server. The major advantage of FileTable is that it provides Windows API compatibility for file data stored within a SQL Server database. In simpler words, FileTables remove a barrier so that SQL Server can be used for the storage and management of unstructured data that currently resides as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see this data as files in the file system. This way, you can use SQL Server to access the data using T-SQL enhancements, and Windows can access the files using its applications. So for the first step, you will need to enable the Filestream feature at the instance level and create a Filestream-enabled database in order to use FileTable. -- Enable Filestream EXEC sp_configure filestream_access_level, 2 RECONFIGURE GO -- Create Database CREATE DATABASE FileTableDB ON PRIMARY (Name = FileTableDB, FILENAME = 'D:\FileTable\FTDB.mdf'), FILEGROUP FTFG CONTAINS FILESTREAM (NAME = FileTableFS, FILENAME='D:\FileTable\FS') LOG ON (Name = FileTableDBLog, FILENAME = 'D:\FileTable\FTDBLog.ldf') WITH FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableDB'); GO Now, you can run the following code to check whether the Filestream options are enabled at the database level. -- Check the Filestream Options SELECT DB_NAME(database_id), non_transacted_access, non_transacted_access_desc FROM sys.database_filestream_options; GO You can see the resultset of the above query in the following image. As you can see, the file-level access is set to 2 (filestream enabled). Now let us create the FileTable in the newly created database. -- Create FileTable Table USE FileTableDB GO CREATE TABLE FileTableTb AS FileTable WITH (FileTable_Directory = 'FileTableTb_Dir'); GO Now you can select data using a regular SELECT statement. SELECT * FROM FileTableTb GO It will return all the important columns related to the file, providing details like file size, archive status, file type, etc. You can also see the FileTable in SQL Server Management Studio. Go to Databases >> Newly Created Database (FileTableDB) >> Expand Tables. Here, you will see a new folder that says “FileTables”. When expanded, it shows the name of the newly created FileTableTb. You can right-click on the newly created table and click on “Explore FileTable Directory”. This will open up the folder where the FileTable data is stored. When you click on the option, it opens the following folder on my local machine where the FileTable data is stored: \\127.0.0.1\mssqlserver\FileTableDB\FileTableTb_Dir In tomorrow’s blog post, Part 2, we will go over two methods of inserting data into this FileTable. Reference : Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Filestream
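
    As a small hedged addition (not part of the original post), and assuming the same FileTableDB and FileTableTb objects created above, you can also ask SQL Server for the UNC paths it exposes, which is handy for verifying the share before moving on to Part 2:

    USE FileTableDB;
    GO
    -- Root UNC path of the FileTable directory
    SELECT FileTableRootPath(N'dbo.FileTableTb') AS FileTableRoot;
    GO
    -- Full UNC path of each file stored in the FileTable
    SELECT name, file_stream.GetFileNamespacePath(1) AS FullPath
    FROM dbo.FileTableTb;
    GO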

    Read the article

  • Stairway to SQL Server Indexes: Step 1, Introduction to Indexes

    Indexes are the database objects that enable SQL Server to satisfy each data access request from a client application with the minimum amount of effort, resulting in the maximum performance of individual requests while also reducing the impact of one request upon another. Prerequisites: Familiarity with the following relational database concepts: Table, row, primary key, foreign key
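
    As a quick hedged illustration of the kind of object the series covers (the table and column names here are made up, not from the article), a simple nonclustered index looks like this:

    -- Hypothetical table: an index to support lookups by LastName
    CREATE NONCLUSTERED INDEX IX_Customers_LastName
        ON dbo.Customers (LastName)
        INCLUDE (FirstName, Email);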

    Read the article

  • Hidden Formatting Troubles with STR() (SQL Spackle)

    Fill in another bit of your T-SQL knowledge about STR(). It right-justifies, rounds, and controls the output width of columns. Sounds perfect, but here's why you might not want to use it.
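
    For reference, a brief sketch of what STR() does (the values below are arbitrary examples, not from the article):

    -- STR(float_expression, length, decimal) right-justifies, rounds, and pads to the given width
    SELECT STR(123.456, 8, 2) AS Padded;   -- '  123.46' (width 8, rounded to 2 decimals)
    SELECT STR(123.456) AS Defaults;       -- defaults are length 10, 0 decimals: '       123'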

    Read the article

  • Why does my MySQL remote-connection fail (VLAN)?

    - by Johannes Nielsen
    ubuntu-community! Again I have a problem with my special friend MySQL :D I have got two servers - a database-server and a web-server - which are connected via VLAN. Now I want the web-server to have remote access to the database-server's MySQL. So I created the user user in mysql.user. user's Host is xxx.yyy.zzz.9, which is the internal IP address of the web-server. xxx.yyy.zzz.0 is the network. I also created user with Host %. As long as I use MySQL on the database-server logged in as user, everything works fine. But trying to log in as user from xxx.yyy.zzz.9 using mysql -h xxx.yyy.zzz.8 -u user -p (where xxx.yyy.zzz.8 is the database-server's internal IP), I get ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.yyy.zzz.8' (110) So I tried to activate Bind-Address in the my.cnf file. Well, if I use xxx.yyy.zzz.8, nothing changes. But if I try xxx.yyy.zzz.9 and try to restart MySQL, I get mysql stop/waiting start: Job failed to start I checked the log files and found nothing. The database-server's MySQL doesn't even register that the web-server tries to connect remotely. My idea is that maybe I didn't configure the VLAN properly, even though I asked someone who actually knows such stuff and he told me I did everything right. What I wrote into /etc/network/interfaces is: #The VLAN auto eth1 iface eth1 inet static address xxx.yyy.zzz.8 netmask 255.255.255.0 network xxx.yyy.zzz.0 broadcast xxx.yyy.zzz.255 mtu 1500 ifconfig returns eth1 Link encap:Ethernet HWaddr xxxxxxxxxxxxxx inet addr:xxx.yyy.zzz.8 Bcast:xxx.yyy.zzz.255 Mask:255.255.255.0 inet6 addr: xxxxxxxxxxxxxxx/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:241146 errors:0 dropped:0 overruns:0 frame:0 TX packets:9765 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17825995 (17.8 MB) TX bytes:566602 (566.6 KB) Memory:fb900000-fb920000 for eth1, which is what I configured. (This is for the database-server; the web-server looks similar.) ethtool eth1 returns: Settings for eth1: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 100Mb/s Duplex: Full Port: Twisted Pair PHYAD: 1 Transceiver: internal Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000003 (3) drv probe Link detected: yes (This is for the database-server; the web-server looks similar.) Actually I think everything is right, but it still doesn't work. Is there someone with an idea? EDIT: I commented out Bind-Address in my.cnf after it didn't work.
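
    As a hedged aside (not the asker's confirmed fix): error 2003 with errno 110 is a connection timeout, which usually points at a firewall or at MySQL listening on the wrong interface rather than at missing privileges. A typical setup, assuming the addresses from the question and a placeholder database name, would be:

    -- In my.cnf on the database server (xxx.yyy.zzz.8), bind-address must be the
    -- server's own address (or 0.0.0.0 to listen on all interfaces), never the client's:
    --   bind-address = xxx.yyy.zzz.8
    -- Then grant the remote user access and reload privileges (yourdb is hypothetical):
    GRANT ALL PRIVILEGES ON yourdb.* TO 'user'@'xxx.yyy.zzz.9' IDENTIFIED BY 'secret';
    FLUSH PRIVILEGES;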

    Read the article

  • A Technique for Performing Cross-host Upgrades to FMW 11gR1

    - by reza.shafii
    The main tool used for the upgrade of iAS 10g mid-tier (data not stored in 10g meta-data repository schemas) environments to Fusion Middleware (FMW) 11gR1 is the FMW Upgrade Assistant (UA). This tool performs what we call an out-of-place upgrade, which in a nutshell means the following: the upgrade is performed by pointing the UA to a 10g source topology as well as an 11g destination topology. The destination topology must be created, using the standard FMW 11g installation and configuration process, prior to the execution of the UA. The UA carries over all of the required changes from the source environment to the destination. This approach has a number of advantages rooted in the fact that the source environment - which is presumably working well and serving its needs - is not disturbed during the upgrade process, as the UA only performs read-only operations on it. The UA today can only perform such out-of-place upgrades when the source and destination topologies reside on the same machine. This can sometimes be an issue when the host on which the iAS 10g environment is installed is running at full capacity, and installing new hardware for the purpose of the upgrade (in most cases what would be needed is extra memory) is completely infeasible. In such cases, an upgrade across a different host is still possible by using the following technique: Back up your source environment and restore it on to a target machine. The backup and restore procedures for the iAS 10.1.2 components are described within this section of the release's Administration Guide. As described in the docs, the Oracle Application Server Backup and Recovery Tool provides capabilities for backing up the installation on one machine and restoring it on another, which is exactly what you want to do for the purpose of a cross-host upgrade. Ensure that the restored environment on your target host is fully functional. Go through the upgrade steps on the target machine to perform the out-of-place upgrade using the UA. Although this process does add another big step to the overall upgrade process, it does make it possible to perform a cross-host upgrade to 11gR1 when necessary. The easiest approach would of course be to find a way of ensuring that the required hardware capacity for the upgrade is available on the original 10g host. Using techniques such as scheduling the upgrade at low-traffic times and/or temporarily stopping other processes running on the machine to free up some memory might provide the sufficient memory needed to perform the out-of-place upgrade and save you the need for the backup/restore technique I have described in this post.

    Read the article

  • 2 Birds, 1 Stone: Enabling M2M and Mobility in Healthcare

    - by Eric Jensen
    Jim Connors has created a video showcase of a comprehensive healthcare solution, connecting a mobile application directly to an embedded patient monitoring system. In the demo, Jim illustrates how you can easily build solutions on top of the Java embedded platform, using Oracle products like Berkeley DB and Database Mobile Server. Jim is running Apache Tomcat on an embedded device, using Berkeley DB as the data store. BDB is transparently linked to an Oracle Database backend using Database Mobile Server. Information protection is important in healthcare, so it is worth pointing out that these products offer strong data encryption, for storage as well as transit. In his video, Jim does a great job of demystifying M2M. What's compelling about this demo is that it uses a solution architecture that enterprise developers are already comfortable and familiar with: a Java application server with a database backend. The additional pieces used to embed this solution are Oracle Berkeley DB and Database Mobile Server. It functions transparently, from the perspective of Java application developers. This means that organizations who understand Java apps (basically everyone) can use this technology to develop embedded M2M products. The potential uses for this technology in healthcare alone are immense; any device that measures and records some aspect of the patient could be linked, securely and directly, to the medical records database: breathing, circulation, other vitals, sensory perception, blood tests, x-rays, or CAT scans. The list goes on and on. In this demo case, it's a testament to the power of the Java embedded platform that they are able to easily interface the device, called a Pulse Oximeter, with the web application. If Jim had stopped there, it would've been a cool demo. But he didn't; he actually saved the most awesome part for the end! At 9:52 Jim drops a bombshell: he's also created an Android app, something a doctor would use to view patient health data from his mobile device. The mobile app is seamlessly integrated into the rest of the system, using the device agent from Oracle's Database Mobile Server. In doing so, Jim has really showcased the full power of this solution: the ability to build M2M solutions that integrate seamlessly with mobile applications. In closing, I want to point out that this is not a hypothetical demo using beta or even v1.0 products. Everything in Jim's demo is available today. What's more, every product shown is mature, and already in production at many customer sites, albeit not in the innovative combination Jim has come up with. If your customers are in the market for these types of solutions (and they almost certainly are), I encourage you to download the components and try it out yourself! All the Oracle products showcased in this video are available for evaluation download via Oracle Technology Network.

    Read the article

  • How to recreate spfile on Exadata?

    - by Bandari Huang
    Copy the spfile from the ASM diskgroup to local disk by using the ASMCMD command line tool.  ASMCMD> pwd +DATA_DM01/EDWBASE ASMCMD> ls -l Type Redund Striped Time Sys Name Y CONTROLFILE/ Y DATAFILE/ Y ONLINELOG/ Y PARAMETERFILE/ Y TEMPFILE/ N spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.355.800017117 ASMCMD> cp +DATA_DM01/EDWBASE/spfileedwbase.ora /home/oracle/spfileedwbase.ora.bak Copy the contents of spfileedwbase.ora.bak to initedwbase.ora, excluding any garbled characters. Using the above initedwbase.ora, start one of the RAC instances to the mount phase.   SQL> startup mount pfile=/home/oracle/initedwbase.ora Ensure one of the database instances is mounted before attempting to recreate the spfile.  SQL> select INSTANCE_NAME,HOST_NAME,STATUS from v$instance; INSTANCE_NAME HOST_NAME  STATUS ------------- ---------  ------ edwbase1      dm01db01   MOUNTED Create the new spfile. SQL> create spfile='+DATA_DM01/EDWBASE/spfileedwbase.ora' from pfile='/home/oracle/initedwbase.ora'; ASMCMD will show that a new spfile has been created, as the alias spfileedwbase.ora now points to a new spfile under the PARAMETERFILE directory in ASM. ASMCMD> pwd +DATA_DM01/EDWBASE ASMCMD> ls -l Type Redund Striped Time Sys Name Y CONTROLFILE/ Y DATAFILE/ Y ONLINELOG/ Y PARAMETERFILE/ Y TEMPFILE/ N spfileedwbase.ora => +DATA_DM01/EDWBASE/PARAMETERFILE/spfile.356.800013581  Shut down the instance and restart the database using srvctl with the newly created spfile. SQL> shutdown immediate ORA-01109: database not open Database dismounted. ORACLE instance shut down. SQL> exit [oracle@dm01db01 ~]$ srvctl start database -d edwbase [oracle@dm01db01 ~]$ srvctl status database -d edwbase Instance edwbase1 is running on node dm01db01 Instance edwbase2 is running on node dm01db02 ASMCMD will now show that a number of spfiles exist in the PARAMETERFILE directory for this database. The spfile containing the parameter preventing startups should be removed from ASM. In this case the file spfile.355.800017117 can be removed because spfile.356.800013581 is the current spfile. ASMCMD> pwd +DATA_DM01/EDWBASE ASMCMD> cd PARAMETERFILE ASMCMD> ls -l Type Redund Striped Time Sys Name PARAMETERFILE UNPROT COARSE FEB 19 08:00:00 Y spfile.355.800017117 PARAMETERFILE UNPROT COARSE FEB 19 08:00:00 Y spfile.356.800013581 ASMCMD> rm spfile.355.800017117 ASMCMD> ls spfile.356.800013581 Reference: Recreating the Spfile for RAC Instances Where the Spfile is Stored in ASM [ID 554120.1]

    Read the article

< Previous Page | 449 450 451 452 453 454 455 456 457 458 459 460  | Next Page >