Search Results

Search found 31694 results on 1268 pages for 'database administration'.


  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I've always worked to make ClearTrace perform well. That's probably because I spend so much time watching it work. I'm often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files.

    One of my clients wanted to run a full trace for a week and then analyze the results. At the end of that week we had 847 200MB trace files for a total of nearly 170GB.

    I regularly use 200MB trace files when I monitor production systems. I usually get around 300,000 statements in a file that size if it's mostly stored procedures. So those 847 trace files contained roughly 250 million statements. (That's 730 bytes per statement if you're keeping track. Newer trace files have some compression in them but I'm not exactly sure what they're doing.) On a system running 1,000 statements per second I get a new file every five minutes or so.

    It took 27 hours to process these files on an older development box. That works out to 1.77MB/second, which means ClearTrace processed about 2,654 statements per second.

    You can query the data while you're loading it, but I've found it works better to use a second instance of ClearTrace to do this. I'm not sure why yet, but I think there's still some dependency between the two processes.

    ClearTrace is almost always CPU bound. It's really just a huge, ugly collection of regular expressions. It only writes a summary to its database at the end of each trace file, so that usually isn't a bottleneck. At the end of this process the executable was using roughly 435MB of RAM. Certainly more than when it started, but I think that's acceptable.

    The database where all this is stored started out at 100MB. After processing 170GB of trace files the database had grown to 203MB. The space savings are due to the "datawarehouse-ish" design and only storing a summary of each trace file.

    You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012. Happy Tuning!
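    A quick back-of-the-envelope sketch of the throughput arithmetic above, for anyone who wants to reproduce it (the post's own figures come out slightly higher because of GB-vs-GiB rounding and the ~250-million-statement estimate):

        # Sanity check of the ClearTrace throughput figures quoted above.
        files = 847
        file_size_mb = 200
        stmts_per_file = 300_000                   # typical for a 200 MB trace file

        total_mb = files * file_size_mb            # ~169,400 MB, i.e. ~170 GB
        total_stmts = files * stmts_per_file       # ~254 million statements
        elapsed_s = 27 * 3600                      # 27 hours of processing

        print(f"{total_mb / 1024:.0f} GiB in {elapsed_s:,} s")
        print(f"{total_mb / elapsed_s:.2f} MB/s")              # ~1.74 MB/s
        print(f"{total_stmts / elapsed_s:,.0f} statements/s")  # ~2,614/s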

    Read the article

  • Need Sql Server Hosting 50GB or More

    - by Leo
    Hi, I am looking for a hosting solution (dedicated or shared) that will allow me to host a SQL Server database (not SQL Express, but the Web edition). The size of my database might grow to 50GB or more. The web application will perform more reads than writes. I also need daily backups and RAID 1 storage. Is there a reliable and economical hosting company that provides this? Additional question: if there is an easy way to host MS SQL on Amazon's EC2 service, that would be preferable.

    Read the article

  • Secure Web Apps from SQL Injection in ASP.Net

    In the first part of this two-part series you learned how SQL injection works in ASP.NET 3.5 with a MS SQL database. You were also shown, through a real web application that was not secured against SQL injection, how an attacker can use these attacks to delete sensitive information from your website, such as database tables. In this part you will learn how to start securing your web applications so they are not vulnerable to these kinds of exploits. A complete, corrected version of the insecure web application is provided at the end of the tutorial.
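    The core of the defense the tutorial builds toward is replacing string-concatenated SQL with parameterized queries. The article works in ASP.NET 3.5 against MS SQL; purely as a language-neutral illustration of the same idea, here is a minimal sketch in Python with sqlite3 (the table and data are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

        user_input = "alice' OR '1'='1"   # a classic injection attempt

        # VULNERABLE: concatenation lets the input rewrite the query itself.
        # conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

        # SAFE: a parameterized query treats the input as data, never as SQL.
        rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
        print(rows.fetchall())   # [] -- the injection string matches no user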

    Read the article

  • TFS Backup Plan Wizard Tool

    - by Enrique Lima
    With the release of the September 2010 TFS 2010 Power Tools came an addition to the Team Foundation Server Administration Console: the Team Foundation Backups tree item. The tool is used to create backup plans; to work with it you run through a wizard, just as you would when configuring TFS or any of its extensions. The areas covered by the tool include backup to a network backup path and retention configuration; under Advanced Options, the extension to be used for the full and transactional backups; and the capability to include external databases, meaning the reporting and SharePoint databases, as part of the plan. There are further options as well: you can define a task scheduler account, set alerts for notifications on execution of the plans, and configure the schedule for plan execution. All in all a very good tool and a great way to safeguard the investment you've made.

    Read the article

  • Retrieve data from an ASP.Net application using Ado.Net 2.0 disconnected model

    - by nikolaosk
    This is the second post in a series of posts on ADO.NET 2.0. Have a look at the first post if you like. In this post I am going to investigate the "disconnected" model. When I say "disconnected" I mean DataSets. DataSets are in-memory representations of tables in a particular database. A DataSet contains a Tables collection, each table contains a Rows collection, and each row contains a Columns collection. So initially you connect to the database, get the data to...(read more)
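    To picture the hierarchy the post describes, here is a toy model of the same in-memory shape. This is not ADO.NET code, just a hypothetical Python sketch of a DataSet holding tables, rows, and columns:

        # Toy model of the disconnected hierarchy: a DataSet holds tables,
        # each table holds column names and rows keyed by column.
        class DataTable:
            def __init__(self, name, columns):
                self.name = name
                self.columns = columns          # the Columns collection
                self.rows = []                  # the Rows collection

            def add_row(self, **values):
                self.rows.append({c: values.get(c) for c in self.columns})

        class DataSet:
            def __init__(self):
                self.tables = {}                # the Tables collection

            def add_table(self, table):
                self.tables[table.name] = table

        ds = DataSet()
        customers = DataTable("Customers", ["Id", "Name"])
        customers.add_row(Id=1, Name="Alfreds Futterkiste")
        ds.add_table(customers)
        print(ds.tables["Customers"].rows)      # works entirely in memory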

    Read the article

  • SPARC T5-4 Engineering Simulation Solution

    - by Mike Mulkey-Oracle
    A recent Oracle internal performance evaluation for computer-based product design demonstrated that Oracle's SPARC T5-4 server running MSC's SimManager simulation software with Oracle Database 12c consolidates the work of multiple x86 servers while delivering better overall performance. Engineering simulation solutions have taken center stage in helping companies design and develop innovative products while reducing physical prototyping costs and exploring a larger design space, resulting in more design possibilities. For this solution, a single SPARC T5-4 server running Oracle Solaris 11 was deployed to consolidate the MSC SimManager server, the Oracle Database 12c server, and the web application server onto a single platform. An automotive design workload was deployed to demonstrate how the SPARC T5-4 server can consolidate the work of multiple x86 servers and deliver better overall performance while reducing complexity and achieving optimal product designs. A joint Oracle/MSC Software solution brief describes this in more detail: A Simplified Solution for Product Lifecycle Management: MSC SimManager on a SPARC T5-4 Server

    Read the article

  • September issue of the Enterprise Manager Indepth Newsletter

    - by Javier Puerta
    The September issue of the Enterprise Manager Indepth Newsletter is now available here. Featured articles include:

    Oracle OpenWorld Preview: Don't-Miss Sessions, Hands-on Labs, and More
    Because of the rapid and widespread adoption of Oracle Enterprise Manager 12c since its launch at Oracle OpenWorld 2011, conference organizers are expecting Oracle Enterprise Manager sessions to attract record crowds at Oracle OpenWorld 2012. Read More

    Oracle Cloud Builder Summit: Zero to Enterprise Cloud in Two Hours
    In August, Oracle launched the worldwide Oracle Cloud Builder Summit series, an event where attendees learn firsthand how to plan, deploy, and manage an enterprise private cloud using Oracle Enterprise Manager 12c, all in a few hours. Read More

    WEBCASTS
    Reduce Database Testing Efforts While Maximizing ROI
    Watch this on-demand Webcast demonstrating how to manage database and system changes with confidence using Oracle Real Application Testing. Viewers will be among the first to hear results from Forrester Consulting's commissioned, multicustomer study, “Total Economic Impact of Oracle Real Application Testing.”

    Read the article

  • Saving multiple attachments in phpmyadmin [closed]

    - by Madiha
    I am sending multiple attachments with an email message, and I am saving the email message and email address in the database (via phpMyAdmin). Now I want to save the data for the multiple attachments, i.e., the contents, extension, size, and name of each file. How can I do it? I am currently getting the size of a file with the following JavaScript: var size = this.files[0].size; I am new to PHP, so any easy tutorials and help you can point to would be appreciated. Also, I want all the attachments (maximum 5) to be saved in one cell (i.e., FileContents) in the database, and likewise the extensions and sizes of all attachments together in one cell (i.e., Extensions). Can anybody help?
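    One common (if denormalized) way to pack several attachments' metadata into a single cell, as the question asks, is to serialize it, for example as JSON. A hedged sketch with made-up file names and sizes; a separate attachments table would usually be the cleaner design:

        import json
        import os

        # Hypothetical attachment list; in PHP this would come from $_FILES.
        attachments = [("report.pdf", 52_341), ("photo.jpg", 183_220)]

        records = []
        for name, size in attachments:
            _, ext = os.path.splitext(name)
            records.append({"name": name,
                            "extension": ext.lstrip("."),
                            "size": size})

        # One JSON string -> one database cell holding all attachments' metadata.
        cell_value = json.dumps(records)
        print(cell_value)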

    Read the article

  • Hands-on GlassFish FREE Course covering Deployment, Class Loading, Clustering, etc.

    - by arungupta
    René van Wijk, an Oracle ACE Director and a prolific blogger at middlewaremagic.com, has shared the contents of a FREE hands-on course on GlassFish. The course provides an introduction to GlassFish internals, JVM tuning, deployment, class loading, security, resource configuration, and clustering. The self-paced hands-on instructions guide you through installing, configuring, deploying, tuning, and other aspects of application development and deployment on GlassFish. The complete course material is available here. This course can also be taken as a paid instructor-led course, where attendees get their own VM and have plenty of time for Q&A and discussions. Register for this paid course. Oracle Education also offers a similar paid course on Oracle GlassFish Server 3.1: Administration and Deployment.

    Read the article

  • Using an external hard drive as a server and be able to connect via wifi [on hold]

    - by user289228
    OK, so I have an old external HDD (Seagate 1TB) and a Windows computer just in case, but at the moment I'm trying to set the drive up as a server for my home. If everything goes right, I want to move it to being a database hub for the business I work for, so that I can have more than one register at a time on the same basic database. The thing is, I'm new to the whole server part, and I'm not well versed with Ubuntu either. What I'm getting at is: if I can get it set up, can I connect it on Ubuntu and run a Linux-based point-of-sale program, with multiple machines linked to the single server? I may be able to do it via the router; I think it's a Belkin N600 with a USB port, but at this moment I don't know for sure since I'm not home. I just need to know whether this is possible, and if so, a guide would be appreciated.

    Read the article

  • Webmatrix fails to connect PHP website to MySQL

    - by Roni
    I downloaded the latest versions of WebMatrix and MySQL. I downloaded a PHP-MySQL connector: http://dev.mysql.com/downloads/connector/php-mysqlnd/ In the "Databases" workspace I pressed the "New Connection" button and chose "MySQL Connection". In the dialog box I filled in all the connection details, and it looks like the database was added. But then when I double-click on the database, I get a short error message saying it cannot connect. I tried everything and searched the web. I'm sure it's a very simple question, so I'll be grateful to whoever can help. I think the best solution for me would be if someone could just give me links to downloads of WebMatrix, MySQL, and the connector, plus instructions on how to install them and then how to connect. That would be the safest way to help me.
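    When a GUI tool reports only that it cannot connect, it often helps to test the same credentials outside the tool. A minimal sketch using the mysql-connector-python package (host, user, and password are placeholders):

        import mysql.connector       # pip install mysql-connector-python

        try:
            conn = mysql.connector.connect(
                host="127.0.0.1", port=3306,
                user="root", password="your-password",   # placeholder credentials
                database="your_database",
            )
            print("Connected, server version:", conn.get_server_info())
            conn.close()
        except mysql.connector.Error as err:
            # The error code narrows the cause: bad credentials, wrong host/port,
            # or a server that isn't running at all.
            print("Connection failed:", err)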

    Read the article

  • How to fix "Could not open lock file" because "Permission denied"?

    - by user66498
    Whenever I try to install any software or use the update manager, I get an error stating:

    Package operation failed
    The installation or removal of a software package failed

    When I run apt-get from a terminal I get this result:

        conan51xd@conan51xd-Lenovo-B470:~$ sudo apt-get -f install
        [sudo] password for conan51xd:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        conan51xd@conan51xd-Lenovo-B470:~$ apt-get update
        E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
        E: Unable to lock directory /var/lib/apt/lists/
        E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied)
        E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?

    Read the article

  • Socialengine installation error

    - by akopacsi
    I'm trying to install SocialEngine (clean install, empty database, legal license key), but I ran into this error message at Step 3 of the installation:

    Step 3: Setup MySQL Database
    Mysqli statement execute error : Prepared statement needs to be re-prepared

    I found a troubleshooting article, "Bug in MySQLi Extension Causes Apache 500 Error", at http://www.socialengine.net/blog/article?id=161&article=Bug-in-MySQLi-Extension-Causes-Apache-500-Error. I uploaded the fixed file and tried to install again, but it still doesn't work. It terminates at Step 3 again with the same error message. I would be very grateful if you could help me. Thanks.

    Read the article

  • Oracle Exadata X3 Launch Webcast

    - by Cinzia Mascanzoni
    Available on-demand, this webcast covers everything your partners need to know about Oracle’s next-generation database machine. They will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Partners won’t want to miss this webcast. Invite them to watch today! View and share the replay.

    Read the article

  • Mass Delete Using Gridview with Checkboxes

    This sample shows how to use a GridView to delete multiple records all at once: mark them with a checkbox, then click one button, external to the GridView, to delete them all. As usual, we're using the Northwind database here. If you want to try this on your own Northwind database, heed this word of caution: BACK IT UP FIRST. You will be making multiple deletes, so in order to return the database to its original state you must make a backup and then restore it afterwards. Remember, too, that you will need to substitute your own Web.Config connection string entry for "YourNorthwindString" in the sample code.
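    On the database side, a mass delete like this usually comes down to one DELETE statement built from the checked keys. The article's UI wiring is ASP.NET; as a language-neutral sketch of the query shape, in Python with sqlite3 and a hypothetical Orders table:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Orders (OrderID INTEGER PRIMARY KEY)")
        conn.executemany("INSERT INTO Orders VALUES (?)",
                         [(i,) for i in range(1, 6)])

        checked_ids = [2, 3, 5]   # the rows the user ticked in the grid

        # One DELETE with a placeholder per checked key -- never string
        # concatenation, which would reopen the door to SQL injection.
        placeholders = ",".join("?" * len(checked_ids))
        conn.execute(f"DELETE FROM Orders WHERE OrderID IN ({placeholders})",
                     checked_ids)

        print(conn.execute("SELECT OrderID FROM Orders").fetchall())  # [(1,), (4,)]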

    Read the article

  • Into the Pole Position with Oracle Databases!

    - by Alliances & Channels Redaktion
    Imagine you had the choice between a pretty but ancient compact car and a stylish touring car with state-of-the-art technology. Both have their appeal, no question, but on the race track, where performance is all that counts, nostalgia is out of place. It is no different with databases. So if you value performance, security, and optimal use of hardware and IT resources, you should opt for database tuning. This video gets the key advantages of Oracle databases across briefly and crisply, which also makes it well suited for use with customers. Oracle Database Tuning from Worm Marketing Consulting GmbH on Vimeo.

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox.

    I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soapbox. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool 'goes away?' Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or nine other mechanisms. Sometimes change brings an opportunity to improve the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just replicating your process in another tool exactly as it was done in the old tool doesn't always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a proper server process with SQL*Loader or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities.

    Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you had in mind starting out, but I had no idea I'd be a Product Manager when I started college either.

    What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • Remote Diagnostic Agent (RDA) version 4.30

    - by inowodwo
    posted by Maurice Bauhahn

    Remote Diagnostic Agent (RDA) version 4.30 was released on December 11th. A free download can be accessed via Knowledge Management article 314422.1 and installed in any Enterprise Performance Management 11.1.2.x environment. EPM-specific instructions are available in Knowledge Management article 1304885.1. This RDA version incorporates two new modules (EAS = Essbase Administration Services; HWA = Hyperion Web Analysis) and improvements in modules and profiles relating to twelve other Hyperion applications (EPM, EPMA, ESS, FCM, HFM, HFR, HIR, HPL, HPSV, HSS, PR, and HSV). To follow best practice, run the related RDA profiles [for example: "perl rda.pl -vnSCRPp Hyperion1112_EAS"] and attach the output zip file [by default in \rda\output\] to your service requests. The comprehensive set of details provided in such output files should help technicians avoid delays in handling service requests (by avoiding the ping-pong communication that results from repeated requests for additional values).

    Read the article

  • Configuring trace file size and number in WebCenter Content 11g

    - by Kyle Hatlestad
    Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g. This is built-in tracing in the content server that provides a great level of detail on what's happening under the hood. You can access the settings, as well as a view of the tracing, by going to Administration -> System Audit Information. From here you can select the tracing sections to include. Some of my personal favorites are searchquery, systemdatabase, userstorage, and indexer. Usually I'm trying to find out some information regarding a search, a database query, or user information. Besides debugging, it's also very helpful for performance tuning.

    Read the article

  • Redirect from domain to other one

    - by Michal
    I am dealing with the following problem. I am a customer of a domain reseller which has fair prices and fine administration, and I have all my domains registered with it. Recently I created a new web page using a free web service (one of those sites where you can create a simple web page from a template after a few clicks). This new web page has a default address of the form "pagename.provider.cz", but I want to use my own domain, "pagename.cz". And that is the problem, because the provider will only assign a domain name to my presentation if I register the domain with him. That wouldn't be a problem, but he is three times more expensive than my favorite registrar. So I am thinking about registering "pagename.cz" with my favorite registrar and then making a 301 PHP redirect from it to "pagename.provider.cz". Will this negatively affect my domain ranking? Are there any catches I should care about?
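    The mechanics of the redirect itself are simple: the old address answers every request with a 301 status and a Location header pointing at the new page, and search engines generally carry ranking signals across a 301. The asker mentions PHP; purely as a language-neutral sketch, the same response using only Python's standard library:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        NEW_HOME = "http://pagename.provider.cz"   # where the domain should forward

        class Redirect(BaseHTTPRequestHandler):
            def do_GET(self):
                # 301 = moved permanently; the browser (and crawler) follows
                # the Location header to the new address.
                self.send_response(301)
                self.send_header("Location", NEW_HOME + self.path)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8080), Redirect).serve_forever()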

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. You can easily prove that the rootblock pointer in the ZFS uberblock, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it dedups at the block level internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies. Three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS: it's an integrated part of the filesystem, so important blocks don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block. To its inner mechanisms a metadata block is no different from a normal data block, because there is no way to tell it that this block is important and that its redundant copies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. The point is that on most implementations you have to activate dedup explicitly by command, whereas certain devices do it by default or by design, and you don't know about it. Given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device is doing dedup internally, it may remove your redundancy before it hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason to spend some extra thought before putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs with ZFS are hybrid storage pools, where rotating rust is used as the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid-storage-pool implementations is the already mentioned rotating rust. With ZFS this is more of a concern when you use a storage array that is capable of dedup and put your pool on its LUNs; but as mentioned before, on those devices enabling dedup is a user-made decision, so it's less probable that you are deduplicating your redundancies without knowing it. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by the problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by creating redundancy, dispersing it across several disks (by mirroring or parity RAID), is really important. No dedup mechanism inside a single device can dedup away your redundancy when you write it to a totally different and independent device.
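    The failure mode described above is easy to see in miniature: a block store that keys writes by content checksum will happily collapse three identical metadata copies into one physical block. A toy sketch (hypothetical, not ZFS code):

        import hashlib

        class DedupingBlockDevice:
            """Toy block store that deduplicates by content checksum."""
            def __init__(self):
                self.blocks = {}                 # checksum -> single stored copy

            def write(self, data: bytes) -> str:
                key = hashlib.sha256(data).hexdigest()
                self.blocks[key] = data          # identical content stored once
                return key

        dev = DedupingBlockDevice()
        rootblock = b"uberblock: points to the pool's root"

        # The filesystem writes three redundant copies for safety...
        refs = [dev.write(rootblock) for _ in range(3)]

        # ...but the device kept exactly one physical block behind all three.
        print(len(refs), "logical copies ->", len(dev.blocks), "physical block")
        # Corrupt that one block and all three "copies" are gone at once.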

    Read the article

  • Quickly Investigating What's in the Tables of SQL Server Databases

    From SQL Server Management Studio it's hard to look through the first few rows of a whole lot of tables in a database. This is odd, since doing so is a great way to get quickly familiar with a database. Phil tidied up a SQL routine he uses to investigate databases quickly in a browser. He explains how to use it, how it works, and how to use it from PowerShell.
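    Phil's routine is T-SQL against SQL Server's catalog views. The general idea, walking the table list and selecting the first few rows of each, looks roughly like this sketch against a SQLite file (shown only as an illustration of the pattern):

        import sqlite3

        conn = sqlite3.connect("sample.db")   # any existing SQLite database file

        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]

        for t in tables:
            print(f"--- {t} ---")
            cur = conn.execute(f'SELECT * FROM "{t}" LIMIT 5')  # first rows only
            print([d[0] for d in cur.description])              # column headers
            for row in cur.fetchall():
                print(row)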

    Read the article
