Search Results

Search found 31694 results on 1268 pages for 'database administration'.


  • ASP.NET 4 Hosting :: How to set up Forms Authentication for your ASP.NET web site

    - by mbridge
    Please follow these steps:

    1. Log in to your Control Panel.
    2. From the menu, select Databases → SQL Server 2008.
    3. Click the Create User button.
    4. Enter a user name and password and click Save. In this demonstration, the user name is dotnetuser and the password is dotnetuserpass.
    5. Click the Create Database button.
    6. Enter a database name, grant access to the user you created above, and click Save. In this demonstration the database is called DotNetAuthentication.
    7. Locate and run the ASP.NET SQL Server Setup Wizard. This file is located in your .NET Framework directory and is named aspnet_regsql.exe (example: C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regsql.exe).
    8. Click Next and choose Configure SQL Server for application services.
    9. Click Next and enter the server name and database login credentials. The server name is the web site pointer address to where your application will be published, and the login credentials are the SQL Server user name and password created earlier.
    10. Click Next twice and the wizard will take a moment to complete the scripting actions that populate the new database with all the objects necessary to configure the ASP.NET provider. Once complete, click Finish to close the wizard.
    11. Finally, modify the web.config file in your ASP.NET web application to use the database you created.

    Read the article

  • From The OU Classrooms...

    - by rajeshr
    No excuses for not doing this systematically, and I'm trying my best to break this bad habit of bulk uploads of class photographs and do it regularly instead. But for the time being, please forgive my laziness and let me introduce, all at once, the fun-loving yet talented folks whom I met in the OU classrooms during the last three months or so, through the picture essay that follows. It's unfortunate that I don't get to do this for my Live Virtual Classes, for obvious reasons, but let me take a moment to thank them all as well for choosing OU programs on various products. Thanks again to each one for memorable moments in the OU classrooms: Pillar Axiom MaxRep session at Bangkok. For detailed information on the OU course on Pillar Axiom MaxRep, access this page. Pillar Axiom SAN Administration session at Bangkok. Know more about the product here. Details on the Pillar Axiom training program from Oracle University can be found here. Oracle Solaris ZFS Administration & Oracle Solaris Containers session at Hyderabad. Read more about ZFS here. Gain information on Solaris Containers by going here. Oracle University courses on Solaris 10 and its features can be viewed at this page. Oracle Solaris Cluster program at Hyderabad. Here's the OU landing page for the training programs on Oracle Solaris Cluster. Oracle Solaris 11 Administration session at Bangalore. If you are interested in getting trained on Solaris 11, get more details at this webpage. Sun Identity Manager Deployment Fundamentals session at Bangalore. The product is now known as Oracle Waveset IDM. Click here to get a detailed description of this fabulous hands-on training program. With Don Kawahigashi at Taipei for Pillar Axiom Storage training.

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. To prevent users from entering incorrect data, we don't want free-text input; instead, users should choose from predefined values as much as possible. We believe there are 2 ways of providing those values: use an API from an external service provider, or create your own local database.

    APIs. Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data.
    - no maintenance related to updates of the data, as this is taken care of by the API provider.
    - easier/faster to get started (no need to create a local database, just implement the API).
    Cons:
    - degradation of performance when the external API has availability issues.
    - outages due to changes to the external API (until your code is updated to reflect those changes).
    - lock-in with the external provider.

    Local database. Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance.
    Cons:
    - more work to get started (you need to create the database and the code to interact with it).
    - risk of inaccurate/incomplete data, either initially or over time.
    - more maintenance work to keep the database up to date.

    Assuming the depth of information requested from users is as follows:
    - country: interested in the value; also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
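    If you go the local-database route, here is a minimal sketch of the narrowing idea (Python with the standard-library sqlite3 module; the schema, file name, and the assumption that you seed it from the GeoNames dump are all illustrative, not a recommendation):

        import sqlite3

        # Hypothetical three-level schema for the "local database" option:
        # country narrows regions, region narrows cities.
        conn = sqlite3.connect("geo.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS country (
            code TEXT PRIMARY KEY,   -- ISO 3166 alpha-2, e.g. 'GB'
            name TEXT NOT NULL
        );
        CREATE TABLE IF NOT EXISTS region (
            id INTEGER PRIMARY KEY,
            country_code TEXT NOT NULL REFERENCES country(code),
            name TEXT NOT NULL       -- state / county / province
        );
        CREATE TABLE IF NOT EXISTS city (
            id INTEGER PRIMARY KEY,
            region_id INTEGER NOT NULL REFERENCES region(id),
            name TEXT NOT NULL
        );
        """)

        def cities_for(conn, country_code, region_name):
            """Narrow the city drop-down from the user's country and region picks."""
            return conn.execute(
                """SELECT city.name FROM city
                   JOIN region ON region.id = city.region_id
                   WHERE region.country_code = ? AND region.name = ?
                   ORDER BY city.name""",
                (country_code, region_name)).fetchall()

    Periodically re-importing the GeoNames dump into tables like these is exactly what the "more maintenance work" con refers to.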

    Read the article

  • Software Center not Opening at all Error

    - by Newbie
    When I open Software Center from the menu, it says "cannot open software database. Please reinstall the software-center package." When I run software-center from a terminal, this error comes up:

        2014-05-28 09:11:20,584 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None'
        2014-05-28 09:11:20,593 - softwarecenter.ui.gtk3.app - ERROR - xapian open failed
        Traceback (most recent call last):
          File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 302, in __init__
            if self.db.schema_version() != DB_SCHEMA_VERSION:
          File "/usr/share/software-center/softwarecenter/db/database.py", line 289, in schema_version
            return self.xapiandb.get_metadata("db-schema-version")
          File "/usr/share/software-center/softwarecenter/db/database.py", line 177, in xapiandb
            self._db_per_thread[thread_name] = self._get_new_xapiandb()
          File "/usr/share/software-center/softwarecenter/db/database.py", line 190, in _get_new_xapiandb
            xapiandb = xapian.Database(self._db_pathname)
          File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3667, in __init__
            _xapian.Database_swiginit(self, _xapian.new_Database(*args))
        DatabaseCorruptError: /var/cache/software-center/xapian/iamchert: Chert version file should be 28 bytes, actually 0

    Now, when I run the command sudo apt-get remove software-center, I get:

        dpkg: error: corrupt info database format file '/var/lib/dpkg/info/format'
        E: Sub-process /usr/bin/dpkg returned an error code (2)

    I had Ubuntu before but it kind of got corrupted. Now I have freshly reinstalled it, and even from the start Software Center is not opening and this error comes. I hope you have a solution. Thanks.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have one database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what are the security requirements? From the answers to these types of questions, you have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging too. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products in a database, and within that database have multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
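    A minimal sketch of that spreadsheet-style scoring, with made-up choices, requirements and scores purely for illustration (Python):

        # Choices in rows, requirements in columns; each cell scored by how
        # well the choice meets the requirement. Everything here is invented
        # to show the mechanics, not a verdict on the 15,000-customer case.
        requirements = ["security isolation", "admin overhead", "cross-customer queries"]

        scores = {
            "one database, 15,000 schemas": [3, 1, 3],
            "one database, shared tables":  [1, 3, 3],
            "one database per customer":    [3, 1, 1],
        }

        for choice, row in scores.items():
            print(f"{choice}: {sum(row)}")

        # Highest total wins -- and, as the post notes, a mix (one database
        # per region, one schema per customer inside it) often scores best.
        best = max(scores, key=lambda c: sum(scores[c]))
        print("pick:", best)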

    Read the article

  • It's Raining Solaris Training

    - by rajeshr
    That the popularity of Solaris 11 is only growing is clear from how regular the training sessions around this product have been. It's exciting to be going around sharing knowledge on Solaris, and more so to explore the nitty-gritty of many new and evolving features. Trust me, it's only getting better! In the process, just like in the past, I stumbled on several individuals, each teaching me a lesson or two. I'm grateful. And if I've managed to get over the laziness and come back to the web logs with a collection of class photos from the last couple of months, it's because of a sense of gratitude to all of 'em in the pictures below. Solaris 11 Network Administration pilot teach in Bangalore. Sun Identity Manager (now known as Oracle Waveset IDM) Deployment Fundamentals training. I'm missing from the snap because these delegates sat well over 10,000 km away from where I taught this class, but they were kind enough to help me associate a face with the voice by sending me a group photograph. If you want to attend one such OU program while cutting down on the travel, try OU's Live Virtual Class (a.k.a. LVC). Transition to Solaris 11 in Mumbai. Solaris 11 Advanced Administration session in Bangalore. Transition to Solaris 11 in Mumbai. Attending Gary Riseborough's Exadata training at Singapore. Solaris 11 Advanced Administration session in Bangalore. If only the participants of each LVC session belonged to the same location, there would've been three additional group photographs occupying this space! Thank you everyone for many, many memorable moments.

    Read the article

  • Oracle Identity Manager ADF Customization

    - by Arda Eralp
    This blog entry includes an example of customizing the Oracle Identity Manager (OIM) Self Service screen. Before customization, every user that logs in to OIM Self Service can see the "Administration" tab in the left menu. In this example we create a "Manager" role, so that only users who have the Manager role can see the "Administration" tab.

    Step 1: Create the "Manager" role.
    Step 2: Create a sandbox.
    Step 3: Customize the ADF view:
    - Select "Customize" on the top menu.
    - Select "Source" instead of "Design" at the top.
    - Select the "Administration" tab with the blue rectangle and edit the component.
    - Edit "visible" with the expression builder: #{oimcontext.currentUser.roles['Manager'] != null}
    - Apply.
    Step 4: Apply to all and publish the sandbox.

    Notes: the following objects can be used in expressions.
    - #{oimcontext.currentUser['ATTRIBUTE_NAME']}
    - #{oimcontext.currentUser['UDF_NAME']}
    - #{oimcontext.currentUser.roles}
    - #{oimcontext.currentUser.roles['SYSTEM ADMINISTRATORS'] != null} (Boolean)
    - #{oimcontext.currentUser.adminRoles['OrclOIMSystemAdministrator'] != null} (Boolean)

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services. As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is driven by an expanding privacy and regulatory environment, coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. The newly released Oracle whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c outlines a powerful story about a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle’s Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, spanning architecture/planning, implementation, and expert services; these, in turn, provide faster adoption of, and return on investment in, Oracle solutions. On June 10th at 10:00 AM PST, Larry Ellison will present an exclusive webcast entitled “The Future of Database Begins Soon”. In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to run true real-time, ad-hoc analytic queries on your organization’s business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper entitled Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • Database in a Box

    - by A&C Redaktion
    The Oracle Database Appliance: a reliable, easy-to-use and affordable database system. At last a database system is coming to market that is tailored to the needs of smaller companies: the Oracle Database Appliance (ODA). After all, not everyone who has large volumes of data to manage can reach straight for Exadata and co. The compact "database in a box" combines software, server and storage capacity and offers various networking options. It contains two clustered SunFire servers running Oracle Linux, with Oracle Database 11g Release 2 pre-installed. One of the big ODA advantages: the database grows with the needs of the company. The performance of the cluster can be adjusted by successively enabling between two and 24 cores via "pay-as-you-grow" software licensing. It also offers high availability for custom and packaged OLTP as well as general-purpose databases, even in large numbers. Oracle Real Application Clusters and Oracle Automatic Storage Management protect against server and storage system failures, respectively. Proactive system monitoring, one-click software deployment, integrated patches across the entire stack and an automatic call-home on hardware failures save costs and resources in maintenance. Via the Oracle PartnerNetwork, customers have access to a large number of cross-industry and industry-specific applications that benefit from the improved availability of the Oracle Database Appliance. The trade press has also been looking at the new Oracle Database Appliance: among others, Computerwoche and heise online report in detail, while Admin-Magazin offers a short but apt overview. A similarly clear, somewhat more detailed description can be found on the website of DOAG e.V. In the webcast on the Oracle Database Appliance, Judson Althoff discusses, among other things, its significance for the partner business.

    Read the article

  • Rules Manager and Expression Filter getting removed

    - by Mike Dietrich
    I doubt that many people are using the Oracle features Rules Manager and Expression Filter, as people usually handle these things (such as ensuring that a zip code or a car number plate has a certain format) within the application code and not inside the database. Oracle Beehive, for instance, uses them just on the side. Anyway, I just learned today that the Rules Manager and Expression Filter components will be removed once our next database release, most likely called Oracle Database 12c, gets released. So before upgrading to Oracle Database 12c you can remove the EXF and RUL components (check for them with: SELECT COMP_ID FROM DBA_REGISTRY WHERE COMP_ID IN ('EXF','RUL');). You'd simply do that by executing the following script before the upgrade: SQL> @?/rdbms/admin/catnoexf.sql. This will clean up the Rules Manager and Expression Filter components inside the database. You could run ?/rdbms/admin/catnorul.sql first, but I believe catnoexf.sql cleans up everything already. You'll find all this information, plus guidelines for migration of existing content, in MOS Note 1233535.1 - Obsolescence Notice: Rules Manager and Expression Filter Features of Oracle Database. -M.
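    A quick pre-upgrade check along those lines, as a sketch in Python with the cx_Oracle driver (the connection credentials are placeholders):

        import cx_Oracle  # assumes the cx_Oracle driver is installed

        # Are the Expression Filter (EXF) or Rules Manager (RUL) components
        # present in this database? Connect details below are made up.
        conn = cx_Oracle.connect("system", "password", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute(
            "SELECT comp_id, status FROM dba_registry "
            "WHERE comp_id IN ('EXF', 'RUL')")
        rows = cur.fetchall()
        if rows:
            print("Components to remove before upgrading:", rows)
            print("Run @?/rdbms/admin/catnoexf.sql as SYSDBA, then re-check.")
        else:
            print("Neither EXF nor RUL is installed - nothing to clean up.")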

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
    - Data Modeling
    - Query (replaces the old MySQL Query Browser)
    - Administration (replaces the old MySQL Administrator)

    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance, especially in Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

    Read the article

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop.

    Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop to Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS.

    Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.
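    As a rough sketch of what that access pattern looks like from a client, assume an external table has already been published over HDFS files with Oracle Direct Connector for HDFS; the connection string, table and column names below are made up for illustration (Python with the cx_Oracle driver):

        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
        cur = conn.cursor()

        # Ad-hoc SQL straight over the HDFS-resident data, via the
        # (hypothetical) external table sales_hdfs_ext:
        cur.execute("""
            SELECT product_id, SUM(amount)
            FROM   sales_hdfs_ext
            GROUP  BY product_id
        """)
        for product_id, total in cur:
            print(product_id, total)

        # Or use SQL to pull the HDFS data into a regular database table:
        cur.execute("INSERT INTO sales SELECT * FROM sales_hdfs_ext")
        conn.commit()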

    Read the article

  • Most-Viewed Technical Articles for December 2010: Database Upgrades and SSD (OTN Japan)

    - by Yusuke.Yamamoto
    The ranking of the most-viewed technical articles for December 2010 has been published. The five-part SQL tuning series remained popular, with Parts 4 and 5 newly added, and material on database upgrades and SSDs drew strong interest. Highlights include: Oracle GoldenGate for database upgrades and migrations; Oracle Real Application Clusters 11g Release 2 on Oracle VM with Database Smart Flash Cache (an SSD evaluation); installation guides for Oracle Database 11g Release 2 on Windows, for Oracle Database 11gR2 RAC with ASM on Microsoft Windows x86-64, and for Oracle Database 11gR2 Oracle Grid Infrastructure; backup and recovery with RMAN; Oracle Database troubleshooting material; the SQL tuning series (Parts 1 & 2 and the new instalments); and Oracle Enterprise Manager 11g Release 1 Grid Control (11.1.0.1.0) for Linux x86-64.

    Read the article

  • Tight Integration with Oracle Database Cuts Development, Operations, and Management Costs: The TCO Advantage of WebLogic Server | WebLogic Channel | Oracle

    - by ???02
    In enterprise systems, the Java application server is expected to work hand in hand with the database, and WebLogic Server was designed with integration with Oracle Database in mind; that integration is a major lever for reducing total cost of ownership (TCO) across development, operations, and management. The cornerstone is Active GridLink for RAC, which connects WebLogic Server to Oracle Real Application Clusters (RAC): connection requests are balanced across RAC nodes at runtime, and node failures or planned maintenance are handled transparently to the application, so developers do not have to write RAC-specific plumbing themselves. (Oracle WebLogic Server 10.3 also supports Oracle RAC through the Oracle WebLogic Server JDBC data source framework.) Around the database sit further integration points: the Oracle Coherence in-memory data grid can act as a NoSQL-style key/value store (access without SQL parsing overhead) in front of the database for data that needs very fast access; the Oracle TopLink object-relational mapper implements the Java Persistence API (JPA) and can combine with Coherence so that JPA applications scale reads against Oracle Database; and WebLogic Server can keep HTTP session data in Coherence, which is particularly useful for large e-commerce and Web sites where session state such as shopping baskets must survive heavy traffic.

    Operations and management are consolidated as well: Oracle Enterprise Manager (EM) can monitor and administer WebLogic Server and Oracle Database together from a single console, accessed by URL, which keeps day-to-day administration and troubleshooting costs down and so lowers the TCO of the whole platform. On the JVM side, the JRockit JVM supplied with WebLogic Server offers strong diagnostics and performance, while applications written for the Sun JVM continue to run on WebLogic Server unchanged. Taken together, for Java EE systems where the database is the other half of the workload, the combination of WebLogic Server, Oracle RAC, Coherence, TopLink, and Enterprise Manager reduces development effort, improves availability and performance, and cuts the TCO of the development, operations, and management infrastructure as a whole.

    Read the article

  • Club Platinum 2010 Held! (Report: Database Technical Sessions)

    - by Urakawa
    Do you know the Platinum Club? The Platinum Club is a community for ORACLE MASTER Platinum certification holders, and its event "Club Platinum" was held on May 19! This time the focus was on Oracle Database technology, with three technical sessions on Oracle Database and Exadata delivered by Oracle engineers, plus an update session on the ORACLE MASTER certifications. The first session walked through Oracle Database 11g R2 features such as the ASM Cluster File System (ACFS) and RAC One Node, complete with live demos using USB storage devices; seeing ACFS in action was a highlight. The second session, drawing on material from engineers at US Oracle Corporation, covered Exadata Smart Flash Cache and Data Loading, comparing Smart Flash Cache with the database Flash Cache and sharing data-loading tips; for further reading, the presenters pointed to these Oracle engineering sites: Inside the Oracle Optimizer and Kevin Closson's Oracle Blog. The third session dug into Exadata Smart Scan and Exadata Hybrid Columnar Compression from an IT infrastructure perspective, and demonstrated Oracle Database 11g Real-Time SQL Monitoring with the SQL Monitor active report. The session material is available as a PDF: Oracle Database 11g R2 feature overview (for Platinum Club members). The ORACLE MASTER update session covered upcoming changes, including Oracle Exadata-related training, the ORACLE MASTER Expert track for Oracle Database 11g, and DBA certification for Oracle E-Business Suite; details are also being posted on the blog. The day closed with the "Platinum of the Year" award. Thank you to everyone who attended!

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET profiler and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    1. Monitor a running load test - a condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    2. After the load test run is completed - the test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.

    3. Analyse the load test results of a previously run load test - we'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over a point to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a longer duration of time, I managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but that might just escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever the garbage collector runs, it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, a garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived should live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects under 85 KB are first allocated in Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, while Gen-0's surviving objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, while garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache, and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, the IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but this clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor's website or, if the customer is really patient, to give it another try!
    How can you address this? Well, there are two ways of addressing it.

    1. Workaround – An x86 process only allows a maximum of 4 GB of address space, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and the IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application in 'compile as Any CPU' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64-compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compiled your application in x86 mode, by changing the 'compile as Any CPU' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, the IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, performance profiling is a great starting point; it definitely gets you started in the right direction. Let's have a look at how. Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

    Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and can capture IntelliTrace data, you can drill in and narrow down to the line of code that is the root cause of the problem. Well, now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data in the web server and serve it from there when the user asks for it. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock. As long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache: no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have taken the lock to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in and fetch the results again. The application will still be faster, because the next set of 10 users, and so on, would get data from the cache. BUT if we added another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment, or another developer will come in and think "why is this rookie checking the cache key for null twice?!"

    The downside of caching is that you are storing the data outside of the database, and the cached data could be wrong, because updates applied to the database put the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, say a single entry point for data updates, you could write some logic to set the cache object to null every time new data is entered. But this approach stops working as soon as you have several ways of feeding data into the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query results for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will see immense performance gains; you would cut roughly 90% of the search hits to the database. Ever wondered why, when you go to ebookers.com or expedia.com or yatra.com to book a flight, you click on the Book button because the fare seems too exciting, and you get an error message telling you that the fare is not valid any more? Yes, exactly: that is a cache failure! These travel sites and price-comparison engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains enormous performance benefits, but every once in a while it annoys a customer because the fare has expired.
    But the trade-off works in favour of these sites, as they are still able to process 30+ page requests per second, which means they cater to the site traffic while maybe losing one customer every once in a while to a competitor who is also using a similar caching technique; what are the odds that the user will not come back to their site sooner or later?

    Recap and Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time out and reading this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc.: please leave a comment. Next up, 'Load Testing in the Cloud': I'll be working on exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!
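    The double-null-check-plus-micro-caching pattern described above, as a minimal language-agnostic sketch (in Python here; the article's own code is ASP.NET, and the cache key, TTL value and fetch function are illustrative):

        import threading, time

        _cache = {}            # key -> (expires_at, value)
        _lock = threading.Lock()
        MICRO_CACHE_TTL = 60   # seconds; tune per application, as discussed above

        def get_search_results(key, fetch_from_db):
            entry = _cache.get(key)
            if entry and entry[0] > time.time():    # fresh hit, no lock taken
                return entry[1]
            with _lock:
                # Second check after taking the lock: the requests that queued
                # up behind the first caller find the cache already populated
                # and skip the database entirely.
                entry = _cache.get(key)
                if entry and entry[0] > time.time():
                    return entry[1]
                value = fetch_from_db()             # the expensive database call
                _cache[key] = (time.time() + MICRO_CACHE_TTL, value)
                return value

    The expiry timestamp stored next to each value is what makes this micro-caching rather than plain caching: stale fares (or search results) age out on their own instead of needing explicit invalidation from every update path.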

    Read the article

  • Building a SOA/BPM/BAM Cluster Part I &ndash; Preparing the Environment

    - by antony.reynolds
    An increasing number of customers are using SOA Suite in a cluster configuration; I might hazard to say that the majority of production deployments are now using SOA clusters. So I thought it may be useful to detail the steps in building an 11g cluster and explain a little about why things are done the way they are. In this series of posts I will explain how to build a SOA/BPM cluster using the Enterprise Deployment Guide. This post will explain the settings required to prepare the cluster for installation and configuration.

    Software Required

    The following software is required for an 11.1.1.3 SOA/BPM install (software, version, notes):

    - Oracle Database (certified databases are listed here). SOA & BPM Suites require a working database installation.
    - Repository Creation Utility (RCU), 11.1.1.3. If upgrading an 11.1.1.2 repository then a separate script is available.
    - Web Tier Utilities, 11.1.1.2. Provides the Web Server; 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    - Web Tier Utilities, 11.1.1.3. Web Server 11.1.1.3 patch. You can use the 11.1.1.2 version without problems.
    - Oracle WebLogic Server 11gR1, 10.3.3. This is the host platform for the 11.1.1.3 SOA/BPM Suites.
    - SOA Suite, 11.1.1.2. SOA Suite 11.1.1.3 is an upgrade to 11.1.1.2, so 11.1.1.2 must be installed first.
    - SOA Suite, 11.1.1.3. SOA Suite 11.1.1.3 patch; requires 11.1.1.2 to have been installed.

    My installation was performed on Oracle Enterprise Linux 5.4 64-bit.

    Database

    I will not cover setting up the database in this series other than to identify the database requirements. If setting up a SOA cluster then ideally we would also be using a RAC database. I assume that this is running on separate machines to the SOA cluster. Section 2.1, "Database", of the EDG covers the database configuration in detail.

    Settings: the database should have processes set to at least 400 if running SOA/BPM and BAM.

        alter system set processes=400 scope=spfile

    Run RCU

    The Repository Creation Utility creates the necessary database tables for the SOA Suite. The RCU can be run from any machine that can access the target database. In 11g the RCU creates a number of pre-defined users and schemas with a user-defined prefix. This allows you to have multiple 11g installations in the same database. After running the RCU you need to grant some additional privileges to the soainfra user, which should have privileges on the transaction tables:

        grant select on sys.dba_pending_transactions to prefix_soainfra
        grant force any transaction to prefix_soainfra

    Machines

    The cluster will be built on the following machines ("EDG Name" is the name used for the machine in the EDG):

    - LB: external load balancer to distribute load across, and fail over between, web servers.
    - WEBHOST1, WEBHOST2: each hosts a web server.
    - SOAHOST1, SOAHOST2: each hosts SOA components.
    - BAMHOST1, BAMHOST2: each hosts BAM components.

    Note that it is possible to collapse the BAM servers so that they run on the same machines as the SOA servers. In this case BAMHOST1 and SOAHOST1 would be the same, as would BAMHOST2 and SOAHOST2. The cluster may include more than 2 servers, and in this case we add SOAHOST3, SOAHOST4, etc. as needed. My cluster has WEBHOST1, SOAHOST1 and BAMHOST1 all running on a single machine.

    Software Components

    The cluster will use the following software components (EDG name, type, purpose):

    - AdminServer (Admin Server): domain Admin Server.
    - WLS_WSM1, WLS_WSM2 (Managed Servers): Web Services Manager Policy Manager servers.
    - WLS_SOA1, WLS_SOA2 (Managed Servers): SOA/BPM managed servers.
    - WLS_BAM1 (Managed Server): BAM managed server running the Active Data Cache.
    - WLS_BAM2 (Managed Server): BAM managed server without the Active Data Cache.
    - Node Manager: runs on all hosts with WLS servers.
    - OHS1, OHS2 (Web Servers): Oracle HTTP Servers.
    - LB (Load Balancer): load balancer, not part of SOA Suite.

    The above assumes a 2-node cluster.

    Network Configuration

    The SOA cluster requires an extensive amount of network configuration. I would recommend assigning a private sub-net (internal IP addresses such as 10.x.x.x, 192.168.x.x or 172.16.x.x) to the cluster for use by addresses that only need to be accessible to the load balancer or other cluster members. Section 2.2, "Network", of the EDG covers the network configuration in detail.

    IP address types: Fixed addresses are fixed to a single machine; Floating addresses are assigned to one of several machines to allow for server migration; Virtual addresses are assigned to a load balancer and used to distribute load across several machines. Scope: "Cluster" addresses only have to be resolvable by machines in the cluster, i.e. the machines listed in the previous section (they are only used for inter-cluster communication or for access by the load balancer); "Internal" addresses must be resolvable on internal DNS; "Public" addresses must be resolvable externally as well.

    - ADMINVHN (VIP1, floating across SOAHOST1-SOAHOSTn, bound by AdminServer, Cluster scope): Admin Server, must be able to migrate between SOA server machines.
    - SOAHOST1 (IP1, fixed, bound by NodeManager and WLS_WSM1, Cluster scope): WSM server 1 does not require server migration.
    - SOAHOST2 (IP2, fixed, bound by NodeManager and WLS_WSM2, Cluster scope): WSM server 2 does not require server migration.
    - SOAHOST1VHN (VIP2, floating across SOAHOST1-SOAHOSTn, bound by WLS_SOA1, Cluster scope): SOA server 1, must be able to migrate between SOA server machines.
    - SOAHOST2VHN (VIP3, floating across SOAHOST1-SOAHOSTn, bound by WLS_SOA2, Cluster scope): SOA server 2, must be able to migrate between SOA server machines.
    - BAMHOST1 (IP4, fixed, bound by NodeManager, Cluster scope).
    - BAMHOST1VHN (VIP4, floating across BAMHOST1-BAMHOSTn, bound by WLS_BAM1, Cluster scope): BAM server 1, must be able to migrate between BAM server machines.
    - BAMHOST2 (IP3, fixed, bound by NodeManager and WLS_BAM2, Cluster scope): BAM server 2 does not require server migration.
    - WEBHOST1 (IP5, fixed, bound by OHS1, Cluster scope).
    - WEBHOST2 (IP6, fixed, bound by OHS2, Cluster scope).
    - soa.mycompany.com (VIP5, virtual, on the LB, Public scope): external access point to the SOA cluster.
    - admin.mycompany.com (VIP6, virtual, on the LB, Internal scope): internal access to the WLS console and EM.
    - soainternal.mycompany.com (VIP7, virtual, on the LB, Internal scope): internal access point to the SOA cluster.

    Floating IP addresses are IP addresses that may be re-assigned between machines in the cluster. For example, in the event of failure of SOAHOST1, WLS_SOA1 will need to be migrated to another server.
    In this case VIP2 (SOAHOST1VHN) will need to be activated on the new target machine. Once set up, the node manager will manage registration and removal of the floating IP addresses, with the exception of the AdminServer floating IP address. Note that if the BAMHOSTs and SOAHOSTs are the same machine then you can obviously share the hostname and fixed IP addresses, but you still need separate floating IP addresses for the different managed servers. The hostnames don't have to be the ones given in the EDG, but they must be distinct in the same way as the EDG names are distinct. If the type is a fixed IP and the addresses are the same, you can use the same hostname; for example, if you collapse soahost1, bamhost1 and webhost1 onto a single machine then you could refer to them all as HOST1 and give them the same IP address. However SOAHOST1VHN can never be the same as BAMHOST1VHN, because these are floating IP addresses.

    Notes on DNS

    IP addresses that are of scope "Cluster" just need to be in the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows) of all the machines in the cluster and the load balancer. IP addresses that are of scope "Internal" need to be available on the internal DNS servers, whilst IP addresses of scope "Public" need to be available on external and internal DNS servers.

    Shared File System

    At a minimum the cluster needs shared storage for the domain configuration, XA transaction logs and JMS file stores. It is also possible to place the software itself on a shared server. I strongly recommend that all machines have the same file structure for their SOA installation, otherwise you will experience pain! Section 2.3, "Shared Storage and Recommended Directory Structure", of the EDG covers the shared storage recommendations in detail.

    The following shorthand is used for locations:

    - ORACLE_BASE is the root of the file system used for software and configuration files.
    - MW_HOME is the location used by the installed SOA/BPM Suite installation. This is also used by the web server installation. In my installation it is set to <ORACLE_BASE>/SOA11gPS2.
    - ORACLE_HOME is the location of the Oracle SOA components or the Oracle Web components. This directory is installed under the MW_HOME, but the name is decided by the user at installation; default values are Oracle_SOA1 and Oracle_Web1. In my installation they are set to <MW_HOME>/Oracle_SOA and <MW_HOME>/Oracle_WEB.
    - ORACLE_COMMON_HOME is the location of the common components and is located under the MW_HOME directory. This is always <MW_HOME>/oracle_common.
    - ORACLE_INSTANCE is used by the Oracle HTTP Server and/or Oracle Web Cache. It is recommended to create it under <ORACLE_BASE>/admin. In my installation they are set to <ORACLE_BASE>/admin/Web1, <ORACLE_BASE>/admin/Web2 and <ORACLE_BASE>/admin/WC1.
    - WL_HOME is the WebLogic server home and is always found at <MW_HOME>/wlserver_10.3.

    Key file locations are shown below.

    - <ORACLE_BASE>/admin/domain_name/aserver/domain_name: shared location for the domain, used to allow the admin server to manually fail over between machines. When creating domain_name, provide the aserver directory as the location for the domain. In my install this is <ORACLE_BASE>/admin/aserver/soa_domain, as I only have one domain on the box.
    - <ORACLE_BASE>/admin/domain_name/aserver/applications: shared location for deployed applications. Needs to be provided when creating the domain. In my install this is <ORACLE_BASE>/admin/aserver/applications, as I only have one domain on the box.
    - <ORACLE_BASE>/admin/domain_name/mserver/domain_name: either a unique location for each machine, or shared between machines to simplify the task of packing and unpacking the domain. This acts as the managed server configuration location. Keeping it separate from the Admin Server helps to avoid problems with the managed servers messing up the Admin Server. In my install this is <ORACLE_BASE>/admin/mserver/soa_domain, as I only have one domain on the box.
    - <ORACLE_BASE>/admin/domain_name/mserver/applications: either a unique location for each machine, or shared between machines. Holds deployed applications. In my install this is <ORACLE_BASE>/admin/mserver/applications, as I only have one domain on the box.
    - <ORACLE_BASE>/admin/domain_name/soa_cluster_name: shared directory to hold the following: dd (deployment descriptors), jms (shared JMS file stores), fadapter (shared file adapter co-ordination files), and tlogs (shared transaction log files). In my install this is <ORACLE_BASE>/admin/soa_cluster.
    - <ORACLE_BASE>/admin/instance_name: local folder for a web server (OHS) instance. In my install this is <ORACLE_BASE>/admin/web1 and <ORACLE_BASE>/admin/web2. I also have <ORACLE_BASE>/admin/wc1 for the Web Cache I use as a load balancer.
    - <ORACLE_BASE>/product/fmw: this can be a shared or local folder for the SOA/BPM Suite software. I used a shared location so I only ran the installer once. In my install this is <ORACLE_BASE>/SOA11gPS2.

    All the shared files need to be put onto shared storage media. I am using NFS, but the recommendation for production would be a SAN, with mirrored disks for resilience.

    Collapsing Environments

    To reduce the hardware requirements it is possible to collapse the BAMHOST, SOAHOST and WEBHOST machines onto a single physical machine. This will require more memory, but memory is a lot cheaper than additional machines. For environments that require higher security, stay with a separate WEBHOST tier as per the EDG. Similarly, for high-volume environments keep a separate set of machines for BAM and/or the Web tier as per the EDG.

    Notes on Dev Environments

    In a dev environment it is acceptable to use a single-node (non-RAC) database, but be aware that the configuration of the data sources is different (no need to use multi data sources in WLS). Typically in a dev environment we will collapse the BAMHOST, SOAHOST and WEBHOST onto a single machine and use a software load balancer. To test a cluster properly we will need at least 2 machines. For my test environment I used Oracle Web Cache as a load balancer. I ran it on one of the SOA Suite machines and it load-balanced across the Web Servers on both machines. This was easy for me to set up, and I could administer it from a web-based console.

    Read the article

  • ASP.NET Membership and Roles separation relationship

    - by Saif Khan
    Hi, I have an ASP.NET project where I want to keep the membership (SQL provider) in a separate database, while the Roles/Profiles will be per application. Question: what is the KEY that relates the Membership database to the Roles/Profile database? Is it the UserID or the UserName? I opened up the tables in separate explorer windows and noticed that the UserID in the Membership database is different from the one in the application Roles database.

    Read the article

  • What is the best method to write user "log files" of an Android application to a file on a remote server or a table in a remote database?

    - by Samitha Chathuranga
    I am creating a multi-user Android application which is connected to a PHP web service on a remote server, and to a remote database via that web service. I want to keep track of all the important activities done by the users. For example, if a user logs in to the app, changes his profile details and then logs out, a brief description of what he has done should be recorded somewhere (with the time), so the admin of the system can see what the users are doing. I think it is better to use logcat files and then flush all that data to a unique file on the server, or a table in the database, when the user logs out of or exits his account. If this is an appropriate approach, how should I do it?

    Read the article

  • PHP Session Class and $_SESSION Array

    - by Gianluca Bargelli
    Hello, I've implemented this custom PHP session class for storing sessions in a MySQL database:

    class Session {
        private $_session;
        public $maxTime;
        private $database;

        public function __construct(mysqli $database) {
            $this->database = $database;
            $this->maxTime['access'] = time();
            $this->maxTime['gc'] = get_cfg_var('session.gc_maxlifetime');
            session_set_save_handler(array($this, '_open'),
                                     array($this, '_close'),
                                     array($this, '_read'),
                                     array($this, '_write'),
                                     array($this, '_destroy'),
                                     array($this, '_clean'));
            // write the session before this object (and its DB connection)
            // is torn down at the end of the request
            register_shutdown_function('session_write_close');
            session_start(); // SESSION START
        }

        public function _open($savePath, $sessionName) {
            return true;
        }

        public function _close() {
            $this->_clean($this->maxTime['gc']);
            return true;
        }

        public function _read($id) {
            $getData = $this->database->prepare("SELECT data FROM Sessions AS Session WHERE Session.id = ?");
            $getData->bind_param('s', $id);
            $getData->execute();
            // mysqli_stmt::fetch() only returns true/false, so the result
            // column has to be bound to a variable first
            $getData->bind_result($data);
            $hasData = ($getData->fetch() === true);
            // the read handler must always return a string
            return $hasData ? $data : '';
        }

        public function _write($id, $data) {
            $getData = $this->database->prepare("REPLACE INTO Sessions VALUES (?, ?, ?)");
            $getData->bind_param('sis', $id, $this->maxTime['access'], $data);
            return $getData->execute();
        }

        public function _destroy($id) {
            $getData = $this->database->prepare("DELETE FROM Sessions WHERE id = ?");
            $getData->bind_param('s', $id); // 's', not 'S' - type characters are lowercase
            return $getData->execute();
        }

        public function _clean($max) {
            $old = $this->maxTime['access'] - $max;
            $getData = $this->database->prepare("DELETE FROM Sessions WHERE access < ?");
            $getData->bind_param('i', $old);
            return $getData->execute();
        }
    }

    It works well, but I don't really know how to properly access the $_SESSION array. For example:

    $db = new DBClass(); // this is a custom database class
    $session = new Session($db->getConnection());
    if (isset($_SESSION['user'])) {
        echo $_SESSION['user']; // THIS IS NEVER EXECUTED!
    } else {
        $_SESSION['user'] = "test";
        echo "Session created!";
    }

    At every page refresh it seems that $_SESSION['user'] is somehow "reset". What can I do to prevent this behaviour?

    Read the article

  • MSSQL using SSH-tunnel from Visual Studio

    - by pbt
    Hi, I recently contacted a web host regarding support for external access to a Microsoft SQL Server database included in a package they offer. They replied saying that it is only possible with an SSH tunnel. Is it possible to connect to a SQL Server database in Visual Studio using an SSH tunnel? It is important for me to be able to access the database from my local machine (for debugging, generating LINQ classes, editing tables, etc.). Alternatively, how should I go about working with their database?
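    In case it helps, the usual shape of such a setup is sketched below. It assumes the host actually grants SSH shell access; the host name, user name, local port and connection-string values are placeholders, not details from the question:

      # forward local port 14333 to port 1433 (SQL Server) on the remote box
      ssh -L 14333:localhost:1433 user@host.example.com

      # with the tunnel running, point the connection string at the local end:
      # Data Source=localhost,14333;Initial Catalog=MyDb;User ID=dbuser;Password=...

    Visual Studio then talks to localhost as if the database were local, and the SSH client relays the traffic to the remote server.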

    Read the article

  • MySQL timeout - C/C++

    - by user1262876
    Guys, I'm facing a problem with this code. The problem is the timeout, by which I mean the time it takes the program to tell me whether the server is reachable or not. If I use my localhost I get the answer fast, but when I connect to a server outside my localhost it takes 50 seconds to 1.5 minutes to respond, and the program freezes until it is done. How can I fix the freezing, or set my own timeout, so that if it is still waiting after 50 seconds it tells me the connection failed and stops? Please use code in your answers, because I understand it better that way. Thanks for any help. PS: I am on a Mac.

    #include "mysql.h"
    #include <stdio.h>
    #include <stdlib.h>

    // Other Linker Flags: -lmysqlclient -lm -lz

    // just going to input the general details and not the port numbers
    struct connection_details
    {
        char *server;
        char *user;
        char *password;
        char *database;
    };

    MYSQL* mysql_connection_setup(struct connection_details mysql_details)
    {
        // first of all create a mysql instance and initialize the variables within
        MYSQL *connection = mysql_init(NULL);

        // connect to the database with the details attached
        if (!mysql_real_connect(connection, mysql_details.server, mysql_details.user,
                                mysql_details.password, mysql_details.database, 0, NULL, 0)) {
            printf("Connection error : %s\n", mysql_error(connection));
            exit(1);
        }
        return connection;
    }

    MYSQL_RES* mysql_perform_query(MYSQL *connection, char *sql_query)
    {
        // send the query to the database
        if (mysql_query(connection, sql_query)) {
            printf("MySQL query error : %s\n", mysql_error(connection));
            exit(1);
        }
        return mysql_use_result(connection);
    }

    int main()
    {
        MYSQL *conn;     // the connection
        MYSQL_RES *res;  // the results
        MYSQL_ROW row;   // the results row (line by line)

        struct connection_details mysqlD;
        mysqlD.server = (char *)"localhost";  // where the mysql database is
        mysqlD.user = (char *)"root";         // the root user of mysql
        mysqlD.password = (char *)"123456";   // the password of the root user in mysql
        mysqlD.database = (char *)"test";     // the database to pick

        // connect to the mysql database
        conn = mysql_connection_setup(mysqlD);

        // assign the results return to the MYSQL_RES pointer
        res = mysql_perform_query(conn, (char *)"SELECT * FROM me");

        printf("MySQL Tables in mysql database:\n");
        while ((row = mysql_fetch_row(res)) != NULL)
            printf("%s - %s - %s\n", row[0], row[1], row[2]); // <-- rows (one format specifier per column)

        /* clean up the database result set */
        mysql_free_result(res);
        /* clean up the database link */
        mysql_close(conn);

        return 0;
    }
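    A possible way to bound the wait, sketched as a drop-in variant of the mysql_connection_setup function above, is to set the client library's timeout options on the handle before connecting. The values are in seconds, and 10 is an arbitrary choice for the example; the read/write timeout options require a reasonably recent client library:

      MYSQL* mysql_connection_setup(struct connection_details mysql_details)
      {
          MYSQL *connection = mysql_init(NULL);

          // give up on the TCP connect after 10 seconds instead of blocking
          unsigned int timeout = 10;
          mysql_options(connection, MYSQL_OPT_CONNECT_TIMEOUT, &timeout);
          // also bound how long reads/writes on an established connection may block
          mysql_options(connection, MYSQL_OPT_READ_TIMEOUT, &timeout);
          mysql_options(connection, MYSQL_OPT_WRITE_TIMEOUT, &timeout);

          if (!mysql_real_connect(connection, mysql_details.server, mysql_details.user,
                                  mysql_details.password, mysql_details.database, 0, NULL, 0)) {
              printf("Connection error : %s\n", mysql_error(connection));
              exit(1);
          }
          return connection;
      }

    Even with a timeout, the connect call still blocks the calling thread, so a GUI application would typically also move the connection attempt onto a worker thread.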

    Read the article

  • MODx character encoding

    - by Piet
    Ahhh character encodings. Don't you just love them? Having character issues in MODx? Then probably the MODx manager character encoding, the character encoding of the site itself, your database's character encoding, or the encoding MODx/PHP uses to talk to MySQL isn't correct.

    The Website Encoding

    Your MODx site's character encoding can be configured in the manager under Tools/Configuration/Site/Character encoding. This is the encoding your website's visitors will get.

    The Manager's Encoding

    The manager's encoding can be changed by setting $modx_manager_charset in manager/includes/lang/<language>.inc.php like this (for example):

      $modx_manager_charset = 'iso-8859-1';

    To find out which language you're using (and thus which file you need to change), check Tools/Configuration/Site/Language (1 line above the character encoding setting). This needs to be the same encoding as your site; you can't have your manager in utf8 and your site in iso-8859-1.

    Your Database's Encoding

    The charset MODx/PHP uses to talk to your database can be set by changing $database_connection_charset in manager/includes/config.inc.php. This needs to be the same as your database's charset. Make sure you use the correct corresponding charset: for iso-8859-1 you need to use 'latin1'; utf8 is just utf8. Example:

      $database_connection_charset = 'latin1';

    Now, if you check Reports/System info, the 'Database Charset' might say something else. This is because the MySQL variable 'character_set_database' is displayed here, which contains the character set used by the default database and not the one for the current database/connection. However, if you changed this to display 'character_set_connection', it could still say something else, because the 'set character set' statement used by MODx doesn't change this value either. The 'set names' statement does, but since it turns out my MODx install now works as expected, I'll just leave it at this before I get a headache. If I saved you a potential headache, or you think I'm totally wrong or overlooked something, let me know in the comments.

    btw: I want to be able to use a real editor with MODx. Somehow.
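    When debugging this sort of mismatch, it can also help to look at the connection from the MySQL side. The following is a generic diagnostic sketch run from a MySQL client, not something MODx does itself:

      -- list every charset-related variable for the current connection
      SHOW VARIABLES LIKE 'character_set%';

      -- 'SET NAMES' updates character_set_client, character_set_connection
      -- and character_set_results in one statement
      SET NAMES 'latin1';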

    Read the article
