Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

  • SQL Southwest, Thursday 18th Oct - User Group Meetup and Virtual Meetup

    October's meeting on Thursday 18th will be a virtual meeting, which means anyone in the world can attend if they have access to a PC with an internet connection. We are pleased to announce that Grant Fritchey will be giving us two sessions.

    Read the article

  • T-SQL User-Defined Functions: Ten Questions You Were Too Shy To Ask

    SQL Server user-defined functions are good to use in most circumstances, but there are just a few questions about them that rarely get asked on the forums. It's a shame, because the answers to them tend to clear up some ingrained misconceptions about functions that can lead to problems, particularly with locking and performance.
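
    One common performance misconception behind those questions is that a scalar user-defined function behaves like any other expression. In most SQL Server versions it is invoked once per row and can inhibit parallelism, whereas an inline table-valued function is expanded into the calling query's plan. The sketch below is illustrative only and not from the article; the dbo.Orders table and the function names are assumptions.

        -- Scalar UDF: typically evaluated row by row by the engine.
        CREATE FUNCTION dbo.fn_LineTotal (@Price money, @Qty int)
        RETURNS money
        AS
        BEGIN
            RETURN @Price * @Qty;
        END;
        GO

        -- Inline table-valued function: expanded into the calling plan, much like a view.
        CREATE FUNCTION dbo.itvf_LineTotal (@Price money, @Qty int)
        RETURNS TABLE
        AS
        RETURN (SELECT @Price * @Qty AS Total);
        GO

        -- The scalar call runs per row; the inline version is applied set-based.
        SELECT o.OrderID, dbo.fn_LineTotal(o.Price, o.Qty) AS Total
        FROM dbo.Orders AS o;

        SELECT o.OrderID, t.Total
        FROM dbo.Orders AS o
        CROSS APPLY dbo.itvf_LineTotal(o.Price, o.Qty) AS t;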

    Read the article

  • Entity and pattern validation vs DB constraint

    - by Joerg
    When it comes to performance, what is the better way to validate user input? Think of a phone number: you only want digits in the database, but it could begin with a 0, so you will use varchar. Is it better to check it via the entity model, like this:

        @Size(min = 10, max = 12)
        @Digits(fraction = 0, integer = 12)
        @Column(name = "phone_number")
        private String phoneNumber;

    Or is it better to use a CHECK constraint on the database side (and no checking in the entity model) for the same feature?
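
    For comparison, the database-side option mentioned above would look roughly like the sketch below. This is only an illustration: the table name "customer" and the exact syntax (shown here in T-SQL) are assumptions, since the question does not name the target database.

        -- Hypothetical sketch of the CHECK-constraint approach (T-SQL syntax assumed;
        -- the table name "customer" is not from the question).
        ALTER TABLE customer
            ADD CONSTRAINT ck_phone_number_digits
            CHECK (LEN(phone_number) BETWEEN 10 AND 12
                   AND phone_number NOT LIKE '%[^0-9]%');  -- digits only, leading 0 preserved

    Either style enforces the rule: the annotations catch bad input before a database round trip, while the constraint guarantees integrity for every writer, so the two are often combined rather than chosen between purely on performance grounds.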

    Read the article

  • Oracle Solaris 11.1

    - by user12616590
    Oracle Solaris 11.1 was announced at Oracle OpenWorld recently. This release added 300 new performance and feature enhancements. My favorite new features: Solaris Zones on Shared Storage; support for 32 TB (!) of RAM; improved Oracle RAC lock latency; the ability to dynamically resize the Oracle DB SGA; and industry-first support for FedFS. You can learn more from the press release or by attending the Solaris 11.1 webcast on November 7.

    Read the article

  • Is There A Need For End-To-End ExtJS to Microsoft Server (MVC-C#, LOB) 4 Day Class? (Poll Enclosed)

    Over the past couple of years, the focus of the web development I've been doing has involved building highly flexible, highly scalable and straightforward web sites to implement and maintain Line of...

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have, as well as to see if I've missed anything. So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
                [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:

    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO code (e.g. US), and there is already a country table with an integer ID; it has fewer than 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5000.
    - Impressions range from 0 to 5,000,000.

    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible; nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables.

    Design goals: this table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: there are, I suggest to you, some glaringly obvious optimizations that can be made to this table, and I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
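
    As one hedged illustration of where such a refactor might go (this is not the author's follow-up answer), the stated ranges suggest shrinking the fixed-width columns, switching Period to a date type, and replacing the nvarchar country code with the integer key of the existing country table. A sketch under those assumptions:

        -- Sketch only: column types chosen from the stated ranges above, not from the follow-up post.
        CREATE TABLE [dbo].[lq_ActivityLog_v2](
            [PlacementID]     [int]      NOT NULL,  -- must support at least 50,000
            [CreativeID]      [int]      NOT NULL,
            [PublisherID]     [int]      NOT NULL,
            [CountryID]       [smallint] NOT NULL,  -- references the existing country table (< 300 rows)
            [RequestedZoneID] [int]      NOT NULL,
            [AboveFold]       [smallint] NOT NULL,  -- only -1, 0, 1 are used
            [Period]          [date]     NOT NULL,  -- date only, no time component
            [Clicks]          [int]      NOT NULL,
            [Impressions]     [int]      NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog_v2] PRIMARY KEY CLUSTERED (
                [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
                [RequestedZoneID] ASC, [AboveFold] ASC, [CountryID] ASC
            )
        ) ON [PRIMARY];

    The unused bigint IDENTITY column is dropped here on the assumption that nothing references it, which is exactly the kind of change the post asks readers to justify or challenge.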

    Read the article

  • Log of data transfer and copied from Ubuntu

    - by Gaurav_Java
    Yesterday my friend asked me for some files, and I told him to take them from my system. I didn't see what extra files or data he took from my system. I was wondering whether there is any application or method that shows what data was copied to which USB device (showing the device name if available, otherwise the device ID), and what data was copied onto the Ubuntu machine. Something like a history of USB and system data transfers. I think this feature exists in KDE. It would be really useful in many ways: it would provide a real-time monitoring utility for USB mass storage device activity on any machine.

    Read the article

  • Discover 25 Years of SPARC Innovation

    - by Cinzia Mascanzoni
    Over the last 25 years, SPARC technology has led the field in enterprise IT innovation, providing world-record performance to data centers across the globe. Discover how the history of SPARC has formed the IT landscape of today, and how upcoming improvements to this industry-leading technology will continue to shape the future. Register now to hear the story of SPARC from the people who shaped the past, present, and future of this remarkable technology.

    Read the article

  • Improving the Extended Financial Close and Reporting Process

    Coming out of the recession, many organizations need to build or re-build trust with key stakeholders by delivering more timely and accurate financial and operating results. In this podcast, hear about new capabilities Oracle is delivering through its Enterprise Performance Management products to help organizations coordinate and improve the extended financial close and reporting process, from closing the sub-ledgers to regulatory filings.

    Read the article

  • Sustainability Reporting, Planning, and Management

    Sustainability Reporting, also referred to as the Triple Bottom Line, is the reporting of environmental, social and economic metrics to external and internal stakeholders. Tune into this conversation with John O'Rourke, Senior Director, Product Marketing for Oracle Enterprise Performance Management Solutions, to learn what is driving the need for this reporting, how companies are responding, and the solutions that Oracle offers to help alleviate the complexity, provide an audit trail and a repeatable reporting process.

    Read the article

  • Oracle Magazine, March/April 2008

    Oracle Magazine March/April features articles on IT modernization, Marvel Entertainment, SQL performance analyzer, Oracle SQL Developer, upgrade certification to Oracle Database 11g, Oracle Database 11g features, declarative data filters, Oracle Application Express, PL/SQL best practices, and much more.

    Read the article

  • What would you take into account when you were asked to compare software? [closed]

    - by mstaessen
    For my master's thesis, I have been asked to make a comparative study of frameworks for cross-platform mobile development. I want to eliminate the chance of having missed something in my comparison. This is why I want to ask what YOU would value (most) when comparing such frameworks (like, for instance, PhoneGap and Appcelerator Titanium). Performance, capabilities and licensing are kind of obvious, but can you think of others?

    Read the article

  • How Fast is Your Website? Increase Website Speed Instantly

    The most popular websites on the web today load quickly; visitors don’t like to wait. Slow websites hurt brands, frustrate visitors, and cost more in terms of visitor productivity and data-transfer. Increasing the performance of your website is now not only easier, we've made it automatic.

    Read the article

  • Using an Economical Engine Optimization Ranking Search For Link Popularity

    Have you checked your popularity lately by using an economical engine optimization ranking search? If not, you had better check it now, but only if you care about your website's performance. More and more website owners are familiar with link popularity checkers to assess their site's relevance and search status. You don't want to miss link popularity services if you want to stay ahead of the game.

    Read the article

  • Facebook surpasses $2 billion in revenue in the third quarter; mobile accounts for almost half of revenue for the period

    Facebook surpasses $2 billion in revenue in the third quarter; mobile accounts for almost half of revenue for the period. Facebook has just reported revenue of $2.02 billion for the third quarter of 2013, a 60% increase, with a net profit of $425 million. A strong showing in terms of profitability compared with the loss the social network posted for the same period last year, which amounted to...

    Read the article

  • What would one call this architecture?

    - by Chris
    I have developed a distributed test automation system which consists of two different entities. One entity is responsible for triggering test runs and monitoring/displaying their progress. The other is responsible for carrying out tests on its host. Both of these entities retrieve data from a central DB. Now, my first thought is that this is clearly a client-server architecture. After all, you have exactly one organizing entity and many entities that communicate with it. However, while the supposed clients do communicate with the server via RPC, they are not actually requesting services or information; rather, they are simply reporting back test progress. In fact, once a test run has been triggered they can complete their tasks without a connection to the server. The request for a service is actually made by the supposed server, which triggers the clients to carry out tests. So would this still be considered a client-server architecture, or is this something different?

    Read the article

  • Planning for Disaster

    There is a certain paradox in being advised to expect the unexpected, but the DBA must plan and prepare in advance to protect their organisation's data assets in the event of an unexpected crisis, and return them to normal operating conditions. Minimising downtime in such circumstances should be the aim of every effective DBA. To plan for recovery, it pays to have the mindset of a pessimist.

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 includes performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability; some installations configured heap sizes in excess of 1 GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and still be backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this, and even to go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.

    For the jobs cache:

    - hudson.jobs.cache.evict_in_seconds (default=60): Seconds since last access (which could be a servlet request or a background cron thread) after which a job is purged from the cache. Set this to 0 to never purge based on time.
    - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.
    - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever you access the Hudson home page you might consider increasing this; first verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs cache setting, applied to builds.
    - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses more of them, you might consider bumping this up.
    - hudson.job.builds.cache.max_entries (default=10240): The default maximum is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing this upper limit. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage: the 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

         num     #instances         #bytes  class name
         523:            28            896  hudson.model.RunMap$LazyRunValue$Key
        1200:             3             96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they refer to. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time, on an idle system, these should be evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access to a job or a build by a cron thread resets its eviction timer.

    Read the article

  • SSRS Reports as a Data Source in Excel 2013

    DBAs are expected to know how to administer the technologies that are available with and peripheral to SQL Server. To properly administer them, it certainly helps to understand the technology from the point of view of the user. By using an existing SSRS report as a data feed for Excel, Rodney Landrum explains how these users can now take advantage of development efforts in new ways.

    Read the article

  • Steltix (NL) is live on Oracle Sales Cloud

    - by Richard Lefebvre
    Steltix (NL) uses Oracle Sales Cloud (Oracle Fusion CRM in the Oracle Cloud) to improve the business performance of its customers and to reduce costs and minimize risks. If you read Dutch, I encourage you to read the press release here!

    Read the article
