Search Results

Search found 3775 results on 151 pages for 'higher education'.


  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the most common solution to the CXPACKET wait type. Today I am going to talk about a few other suggestions which can help reduce the CXPACKET wait. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree; I covered them in detail in yesterday's blog post. Today we are going to discuss a few other ways CXPACKET waits can be reduced.

    Potential Reasons:
    - If data is heavily skewed, there is a chance the query optimizer estimates the amount of data incorrectly, leading it to assign too few threads to the query. This can easily create an uneven workload across threads and may produce CXPACKET waits.
    - While retrieving data, one of the threads may face an IO, memory, or CPU bottleneck and have to wait for those resources to execute its task, which can create CXPACKET waits as well.
    - The data being retrieved sits on IO subsystems of different speeds. (This is not common and hardly ever happens, but it is possible.)
    - Higher fragmentation in some areas of the table can mean less data per page, which may also lead to CXPACKET waits.
    As I said, the reasons mentioned here are not the major causes of CXPACKET waits, but any of these scenarios can create the wait.

    Best Practices to Reduce CXPACKET waits:
    - Refer to the earlier article regarding MAXDOP and Cost Threshold.
    - De-fragmenting indexes can help, as more data can be obtained per page (assuming a fill factor close to 100).
    - If data is spread over multiple files placed on multiple physical drives of similar speed, the CXPACKET wait may be reduced.
    - Keep statistics updated, as this gives the query optimizer better estimates when assigning threads and dividing the data among the available threads. Updating statistics can significantly improve the optimizer's ability to render a proper execution plan, which can affect the parallelism process in a positive way.

    Bad Practice: In a recent consultancy project, when I was called in I noticed that an 'experienced' DBA had seen high CXPACKET waits and, to reduce them, had increased the number of worker threads. In reality, increasing the worker threads led to many other issues: with more threads, more memory was used, creating memory pressure, and with more threads the CPU scheduler faced more 'context switching', degrading performance further. When I explained all this to the 'experienced' DBA, he suggested that we should now reduce the number of threads. Not really! Too few threads may create heavy stalling for parallel queries. I suggest NOT touching the worker thread setting when dealing with CXPACKET waits.

    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats here is generic and varies from system to system. You are recommended to test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
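    As an illustration of the advice above, here is a minimal T-SQL sketch (my example, not from the original post) of how you might check how prominent CXPACKET is and apply the statistics and fragmentation suggestions; the table and index names are hypothetical placeholders:

        -- How large a share of total accumulated waits is CXPACKET?
        SELECT wait_type,
               wait_time_ms,
               100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total_waits
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- Keep estimates accurate so work is divided evenly across threads
        UPDATE STATISTICS dbo.SalesOrders WITH FULLSCAN;   -- hypothetical table name

        -- Rebuild a fragmented index so each page holds more rows
        ALTER INDEX IX_SalesOrders_OrderDate ON dbo.SalesOrders
            REBUILD WITH (FILLFACTOR = 100);               -- hypothetical index name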

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman. Oracle’s SPARC T4 systems are architected to deliver enhanced value to customers via the inclusion of many integrated features. One of the best examples of this approach is the on-chip cryptographic support, which delivers wire-speed encryption without any impact on application performance.

    The Evolution of SPARC Encryption: SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor. Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption frameworks. While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions that eliminate much of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads.

    The Superiority of the SPARC T4 Approach to Crypto: As companies engage in more and more e-commerce, the need to provide greater security for these transactions is more critical than ever before. Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach. 1. Performance degradation – cryptography is highly compute-intensive, so there is a significant cost on architectures without embedded crypto functionality. This performance penalty impacts the entire system, slowing down web servers (SSL), for example, and potentially bogging down other business applications. The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers without impacting the overall SLAs of their IT environment. 2. Added cost – one way to avoid performance degradation is to add cryptographic accelerator cards or external offload engines to other systems. While these solutions provide a brute-force mechanism to avoid slower system performance, they usually come at an added cost. Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the necessary performance without purchasing add-on cards or other hardware. 3. Higher complexity – adding cryptographic cards, or leveraging load balancers to perform encryption tasks, adds complexity from a management standpoint. With SPARC T4, the encryption keys and the framework built into Solaris 10 and 11 mean that administrators generally don’t need to spend extra cycles determining how to perform cryptographic functions. In fact, many of the instructions are built in and require no user intervention. For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly via a new built-in OpenSSL 1.0 engine, called the "t4 engine." For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson’s blog.

    Conclusion: In summary, SPARC T4 systems offer customers much more value than just increased performance. The integration of key virtualization technologies, embedded encryption, and a true enterprise operating system, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing. SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that save IT organizations operating costs while delivering higher levels of performance and meeting compliance objectives. For more on the SPARC T4 family of products, go here.

    Read the article

  • Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    - by Irem Radzik
    The big day for the Oracle Data Integration team has finally arrived! It is my honor to introduce you to Oracle Data Integration 12c. Today we announced the general availability of the 12c release for Oracle’s key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the new leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications. The Oracle Data Integration 12c release addresses data-driven organizations’ critical and evolving data integration requirements under 3 key themes: Future-Ready Solutions, Extreme Performance, and Fast Time-to-Value. There are many new features that support these key differentiators for Oracle Data Integrator 12c and for Oracle GoldenGate 12c. In this first 12c blog post, I will highlight only a few. Future-Ready Solutions to Support Current and Emerging Initiatives: Oracle Data Integration offers robust and reliable solutions for key technology trends including cloud computing, big data analytics, real-time business intelligence, and continuous data availability. Via tight integration with Oracle’s database, middleware, and application offerings, Oracle Data Integration will continue to support new features and capabilities right away as these products evolve. Extreme Performance: Both GoldenGate and Data Integrator are known for their high performance, and the new release widens the gap against the competition even further. Oracle GoldenGate 12c’s Integrated Delivery feature enables higher throughput via a special application programming interface into Oracle Database. As mentioned in the press release, customers already report up to 5X higher performance compared to earlier versions of GoldenGate. Oracle Data Integrator 12c introduces parallelism that significantly increases its performance as well. Fast Time-to-Value via Higher IT Productivity and Simplified Solutions: Oracle Data Integrator 12c’s new flow-based declarative UI brings superior developer productivity, ease of use, and ultimately fast time to market for end users. The ability to seamlessly reuse mapping logic also speeds development. Oracle GoldenGate 12c’s Integrated Delivery feature automatically and optimally tunes the process, saving time while improving performance. This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. On November 12th we will reveal much more about the new release in our video webcast "Introducing 12c for Oracle Data Integration". Our customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release. Please join us at this free event to learn more from our executives about the 12c release, hear our customers’ perspectives on the new features, and ask your questions to our experts in the live Q&A. Also, please continue to follow our blogs, tweets, and Facebook updates as we unveil more about the features of the latest release.

    Read the article

  • Professional Development – Difference Between Bio, CV and Resume

    - by Pinal Dave
    Applying for work can be very stressful – you want to put your best foot forward, and it can be very hard to sell yourself to a potential employer while highlighting your best characteristics and answering questions. On top of that, some jobs require different application materials – a biography (or bio), a curriculum vitae (or CV), or a resume. These things seem so interchangeable, so what is the difference? Let’s start with the one most of us have heard of – the resume. A resume is a summary of your job and education history. If you have ever applied for a job, you will have used a resume. The ability to write a good resume that highlights your best characteristics and emphasizes your qualifications for a specific job is a skill that will take you a long way in the world. Unfortunately, for such an essential skill, it is one that many people struggle with.

    Resume: So let’s discuss what makes a great resume. First, make sure that your name and contact information are at the top, in large print (a slightly larger font than the rest of the text – size 14 or 16 if the rest is size 12, for example). You need to make sure that when you catch the recruiter’s attention, they know how to get hold of you. As for qualifications, be quick and to the point. Make your job title and the company the headline, and include your skills, accomplishments, and qualifications as bullet points. Use good action verbs, like “finished,” “arranged,” “solved,” and “completed.” Include hard numbers – don’t just say you “changed the filing system,” say that you “revolutionized the storage of over 250 files in less than five days.” Doesn’t that sentence sound much more powerful?

    Curriculum Vitae (CV): Now let’s talk about curriculum vitae, or “CVs”. A CV is like an expanded resume. The same rules still apply: put your name front and center, keep your contact info up to date, and summarize your skills with bullet points. However, CVs are often required in more technical fields – like science, engineering, and computer science. This means that you need to really highlight your education and technical skills.

    Difference between Resume and CV: Resumes are expected to be one or two pages long – CVs can be as many pages as necessary. If you are one of those people lucky enough to feel limited by the size constraint of resumes, a CV is for you! On a CV you can expand on your projects, highlight really exciting accomplishments, and include more educational experience – including GPA and test scores from the GRE or MCAT (as applicable). You can also include awards, associations, teaching and research experience, and certifications. A CV is a place to really expand on all your experience and how great you will be in this particular position.

    Biography (Bio): Chances are, you already know what a bio is, and you have even read a few of them. Think about the one or two paragraphs every author includes on the back flap of a book. Think about the sentences under a blogger’s photo on every “About Me” page. That is a bio. It is a way to quickly highlight your life experiences. It is essentially the way you would introduce yourself at a party. Where a bio is required for a job, though, chances are they won’t want to know where you were born or how many pets you have. A bio is a way to summarize your entire job history in a quick-to-read format – and sometimes during a job hunt, being able to get to the point and grab the recruiter’s interest is the best way to get your foot in the door. Think of a bio as your entire resume condensed into prose. Most bios have a standard format. In paragraph one, talk about your most recent position and your accomplishments there, specifically how they relate to the job you are applying for. If you have teaching or research experience, training experience, certifications, or management experience, talk about them in paragraph two. Paragraphs three and four are for highlighting publications, education, certifications, associations, etc. To wrap up your bio, provide your contact info and availability (dates and times).

    Where to use what? For most positions you will know exactly what kind of application to use, because the job announcement will state what materials are needed – resume, CV, bio, cover letter, skill set, etc. If there is any confusion, choose whatever the industry standard is (a CV for technical fields, a resume for everything else), or choose whichever of your documents is the strongest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, Professional Development, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Day 1 - Finding Like Minds

    - by dapostolov
    So, is being a Game Developer any different from being an IT Developer? I picture a poorly lit environment where I get to purchase my own desk lamp; I'm thinking one of those huge lava lamps that pump out so much heat you could fry an egg on it. To my right: a "great wall" of empty coke cans dwarfs me. Eating my last slice of pizza, I look across my desk to see a fellow developer with a smug look on his face; he's just coded his latest module for the game and it looks like he's in nirvana. My duty, of course, is to remind him to keep focused on the job at hand. So, picking up my trusty elastic and an aerodynamically crafted paper bullet, I begin a 10-minute war of welts and laughter, which is promptly interrupted by our Project Manager demanding more details from our morning Scrum meeting. After providing about 5 minutes of geek speak and several words of comfort to make his eyes glaze over... it hits me, the idea for the module... beckoning my developer friend over, we quickly shoo the Project Manager away and begin our brainstorming frenzy... now, where'd I put that full can of coke? OK. OK. This probably isn't the most ideal game developer environment, but it definitely sounds fun to me... and from what I gather is nothing like most game development companies. But I'm not doing this blog series to "go pro"; like I stated in my first post, I want to make a 2D game from an idea my best friend and I drummed up long, long ago. I'm in this for the passion AND I want to see how easy it is for us .NET developers to create a game. So where do I start? Where can I find like-minded individuals? What technologies are there? What do I need to make a video game? The questions are endless... AND... since I already have an idea... let's start with... technology (yes, I'm a geek, live with it...).

    Technology
    OK. Predominantly, games are still made in C++ or even C. I'm not sure how much assembly code is floating around lately; however, that is not my concern. I do know C/C++ from my past, enough to get by, but I'm mainly interested in a recent, not-so-new technology called XNA. What is XNA? XNA allows us .NET developers to make 2D/3D games for Windows, Xbox*, and Windows Mobile 7*. (* = for a nominal fee *cough*) The following link is your one-stop shop for XNA game development: http://creators.xna.com/en-US/education/gettingstarted The above site hosts information such as:
    - getting started
    - a sample/instructional shooter game in 2D/3D with code (if I'm taking too long for you in this blog series)
    - downloads
    - starter kits... http://creators.xna.com/en-US/education/starterkits/
    And of course... forums. You can also subscribe to and pay for their premium membership, which gets you some pretty awesome tutorials, resources, downloads, and premium community support. Some general Wiki information about XNA: http://en.wikipedia.org/wiki/XNA_%28Microsoft%29

    Community Support
    OK. Let's move on to industry and community support. Apart from XNA, there are some really cool sites out there, I just haven't found all of them yet. However, I found a really cool game development website called Gamasutra. You can click on the following link to get there: http://www.gamasutra.com/ The site is 100% dedicated to "The Art & Business of Making Games". Armed with blogs, Twitter, jobs/resumes and, most importantly, industry news, one could subscribe to the feed and get lost in the wealth of information it provides. On a side note: I remember Gamasutra being around when my best friend and I first wanted to make a video game... meaning, they've been around for a while now. I think the most beneficial aspect of this site is understanding the industry you want to get into. Otherwise, it's just a cool site for keeping up to date with the industry in general. Another community support option is LinkedIn. Amongst the land of extremely bloated achievements and responsibilities lie 3 groups (that I have found) that deal with game development:
    http://www.linkedin.com/groups?gid=59205 - Game Developers
    http://www.linkedin.com/groups?gid=824817 - DirectX Game Developer Network
    http://www.linkedin.com/groups?gid=756587 - DirectX Developers
    The Game Developers group on LinkedIn is by far the most active of the three and could possibly provide a wealth of support.

    What I've done thus far:
    - I lightly researched the XNA technology
    - I looked around for some community sites to assist me
    - I downloaded XNA Game Studio 3.1 and installed it on my PC, into my IDE
    - I even tried both tutorials! http://creators.xna.com/en-US/education/gettingstarted/bgintro/chapter1

    Best Regards, D.

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app, and I'm wondering what 'web safe area' I should optimize the app layout and design for. By web safe area I mean the actual area available to display the website in the browser (which is influenced by monitor resolution as well as the space taken up by the browser and OS). I did some investigation and thinking on my own, but wanted to share this to see what the general opinion is. Here is what I found:

    Optimal display resolution:
    - w3schools web stats seem to be the most referenced source (however, they state that these are results from their own site and are biased towards tech-savvy users)
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    - StatCounter Global Stats Display Resolution (stats based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected across the StatCounter network of more than 3 million websites)
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from the browsers of site visitors to their on-demand network of live-stats customers; the data is compiled from approximately 160 million visitors per month)

    Display resolution summary: There is a bit of variation between the above sources, but in general, as of Jan 2011, it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these, with 15-20% of the total web depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and higher still if we filter on North American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, seeing as the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash based, and Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).

    Browser + OS constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space consuming) and the OS (assuming WinXP-Win7): Win7 has the biggest taskbar footprint, at a height of 40px (XP's and Vista's is 30px). The default IE8 view uses up 25px at the bottom of the screen for the status bar, and a further 120px at the top of the screen for the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar). Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline. This means we are left with 583px of vertical space and 1276px of horizontal. In other words, a web safe area of 1276 x 583. Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app.

    Any help on this would be greatly appreciated! Thanks. EDIT: Another caveat to my line of thinking above is that different browsers actually take up different numbers of pixels depending on the OS they're running on. For example, under WinXP, IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7) because the file menu shows up by default on XP, while in Win7 the file menu is hidden by default. So it looks like on WinXP + IE8 the web safe area would be a mere 1276 x 572 (768px - 142 - 30 - 24 = 572).

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization.  The potential consolidation density is a function of the extent to which the infrastructure is shared.  The three models provide increasing degrees of sharing: Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside. Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits. Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications.  (Note that a single deployment can combine Database and Schema consolidations.) Customer value: lower costs, better system utilization In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage. Customer value: higher productivity The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime. Capabilities / Characteristics In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied. Activities / Tasks Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. 
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (old default -> new default):

    back_log: 50 -> 50 + (max_connections / 5), capped at 900
    binlog_checksum: off -> CRC32 (new variable in 5.6)
    binlog_row_event_max_size: 1k -> 8k
    flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
    host_cache_size: 128 -> 128, + 1 for each of the first 500 max_connections, + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
    innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 means 64 megabytes)
    innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
    innodb_concurrency_tickets: 500 -> 5000
    innodb_file_per_table: off -> on
    innodb_log_file_size: 5M -> 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
    innodb_old_blocks_time: 0 -> 1000 (1 second)
    innodb_open_files: 300 -> 300; if innodb_file_per_table is ON, the higher of table_open_cache or 300
    innodb_purge_batch_size: 20 -> 300
    innodb_purge_threads: 0 -> 1
    innodb_stats_on_metadata: on -> off
    join_buffer_size: 128k -> 256k
    max_allowed_packet: 1M -> 4M
    max_connect_errors: 10 -> 100
    open_files_limit: 0 -> 5000 (see note 1)
    query_cache_size: 0 -> 1M
    query_cache_type: on/1 -> off/0
    sort_buffer_size: 2M -> 256k
    sql_mode: none -> NO_ENGINE_SUBSTITUTION (see later post about the default my.cnf for STRICT_TRANS_TABLES)
    sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
    sync_relay_log: 0 -> 10000
    sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
    table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
    table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
    thread_cache_size: 0 -> 8 + max_connections/100, capped at 100

    Note 1: In 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. Now uses the higher of that and (5000 or what you specify). We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change. As part of this work I reviewed every old server setting plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team and the many excellent blog entries and comments from others over the years, including from many MySQL Gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan. With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The Gurus don't need this but for many newcomers the defaults will be very useful. Possibly the most unusual change is the way we vary the setting for innodb_buffer_pool_instances for 32-bit Windows.
This is because we've found that DLLs with specified load addresses often fragment the limited four gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed. If you change the value of innodb_log_file_size in my.cnf you will see a message like this in the error log file at the next restart, instead of the old error message: [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153 One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers through shared hosting or end-user systems and on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems, and these change that to larger shared hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
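As a quick sanity check (my example, not from the post), you can compare these derived defaults against a running 5.6 server from the SQL prompt:

        -- Inspect a few of the settings discussed above on a live server
        SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';
        SHOW GLOBAL VARIABLES LIKE 'max_connections';
        SHOW GLOBAL VARIABLES LIKE 'back_log';
        -- With the 5.6 default max_connections of 151, the derived default is
        -- back_log = 50 + (151 / 5) = 80, well under the 900 cap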

    Read the article

  • My Oracle Support Frequently Asked Questions

    - by user763198
    - How do I find certification information? Open the Help menu, choose Table of Contents, and go to the Certifications section, or see: My Oracle Support Help - Certifications (Doc ID 870956.5).
    - If your certification question is not answered in My Oracle Support Help - Certifications (Doc ID 870956.5), contact Global Customer Support.
    - How do I contact Oracle Support? See: http://www.oracle.com/support/contact.html
    - How do I purchase Oracle products online? See: https://shop.oracle.com/pls/ostore/f?p=dstore:home:0:::::&tz=8:00 — if the product you need is not listed there, use the “Contact Us” link.
    - How do I export search results? 1. Select the rows you need, then right-click and choose Copy Selected Row, or use Copy Selected Rows on the Actions menu. 2. Paste into a spreadsheet or include it in an email. For whole tables, you can use Printable View and print or save from the browser. Tables that offer an Export option can be exported to a CSV file. Note: Printable View and Export are only available on tables that support them.
    - Where do I find Oracle training? Oracle Education: http://education.oracle.com — for questions about courses (schedules, fees, and so on), contact Oracle Education directly.
    - Where do I find license codes/keys? http://www.oracle.com/us/products/applications/crmondemand is not the place; see http://www.oracle.com/us/support/licensecodes/index.html, which lists license codes for current product releases. For anything not covered there, contact Oracle with your Customer Support Identifier (CSI).
    - How do I download software from the Software Delivery Cloud? See Doc ID 1070969.1, or go to http://edelivery.oracle.com/ (Oracle Software Delivery Cloud), click the “Sign In/Register” button, accept the Terms & Restrictions and click Continue, select the product and platform and click Go, then click Continue and download.
    - I can't sign in to Oracle.com. 1. Go to www.oracle.com. 2. Click “Help”. 3. On the help page, click “Oracle.com login FAQ”. 4. Follow the instructions there. 5. If that doesn't resolve it, contact Oracle.com support.
    - Where do I get CRM On Demand support? (i) CRM On Demand Customer Care: http://www.oracle.com/us/products/applications/crmondemand/support/customer-care-support-337680.html (ii) To contact CRM On Demand Customer Care directly: https://ebusiness.siebel.com/odcustomercare/contact/contact_cc.asp
    - I didn't receive the Oracle Software Delivery Cloud verification email. To re-verify your account: 1. Sign in at www.oracle.com with your Single Sign-On (SSO) account. 2. Click "Account". 3. Under "Please verify account", confirm that your email ID is correct. 4. After verifying, try the download again.
    - How do I find an Oracle Database patchset in My Oracle Support? 1. Open the patch search page. 2. Product or Product Family: Oracle database. 3. Release: e.g. Oracle 10.2.0.4. 4. Platform: e.g. Microsoft x64 (64-bit). 5. Type: Patchset/Minipack. 6. Click Go.
    - How do I request physical media (CD/DVD) or FTP delivery? Sign in to My Oracle Support (https://support.oracle.com), open the “Contact Us” page, and submit a request stating whether you need CD/DVD or FTP delivery.

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a webtest which makes a simple call to a web service, looking like this: MyWebService webService = new MyWebService(); webService.Timeout = 180000; webService.myMethod(); I am not using ThinkTimes, and the Run Duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this: Tests Total: 4500; Network Interface\Bytes sent (agent machine): 35,500. Then I ran the same test, but this time simulating 2 users, and I got something like this: Tests Total: 2225; Network Interface\Bytes sent (agent machine): 30,500. So when I increased the number of users, the tests/sec was half of what it was with only 1 user, and the bytes sent by the agent were also lower. I find that strange, because it doesn't seem I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5GB of RAM free, and my network utilization is around 0.5% of its capacity. To troubleshoot this I ran a test using a Step Pattern, with the simulated users going from 20 to 800. The requests/sec is practically constant through the whole test, so it is clear there is something in my test or my environment which is preventing the number of requests from getting higher. It would be expected behavior if the response time were getting higher, because that would tell me the requests weren't being processed properly, but the strange thing is the response time is practically constant the whole time, and actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.
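    To put those counters in perspective (my arithmetic from the numbers quoted above): 4500 tests over the 5-minute (300-second) run is 15 tests/sec with one user, while 2225 tests is roughly 7.4 tests/sec combined across two users — so total throughput genuinely halved rather than merely failing to scale.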

    Read the article

  • Calculating a consecutive streak in data

    - by Jura25
    I’m trying to calculate the maximum winning and losing streak in a dataset (i.e. the highest number of consecutive positive or negative values). I’ve found a somewhat related question here on StackOverflow, and even though it gave me some good suggestions, the angle of that question is different, and I’m not (yet) experienced enough to translate and apply that information to this problem. So I was hoping you could help me out; even a suggestion would be great. My data set looks like this:

    > subRes
       Instrument TradeResult.Currency.
    1         JPM                    -3
    2         JPM                   264
    3         JPM                   284
    4         JPM                    69
    5         JPM                   283
    6         JPM                  -219
    7         JPM                   -91
    8         JPM                   165
    9         JPM                   -35
    10        JPM                  -294
    11        KFT                    -8
    12        KFT                   -48
    13        KFT                   125
    14        KFT                  -150
    15        KFT                  -206
    16        KFT                   107
    17        KFT                   107
    18        KFT                    56
    19        KFT                   -26
    20        KFT                   189

    > split(subRes[,2],subRes[,1])
    $JPM
     [1]   -3  264  284   69  283 -219  -91  165  -35 -294
    $KFT
     [1]   -8  -48  125 -150 -206  107  107   56  -26  189

    In this case, the maximum (winning) streak for JPM is four (namely the 264, 284, 69 and 283 consecutive positive results) and for KFT this value is three (107, 107, 56). My goal is to create a function which gives the maximum winning streak per instrument (i.e. JPM: 4, KFT: 3). To achieve that, R needs to compare the current result with the previous result, and if both are positive, there is a streak of at least 2 consecutive positive results. Then R needs to look at the next value, and if this is also positive, add 1 to the already-found value of 2. If that value isn’t positive, R needs to move on to the next value, while remembering 2 as the intermediate maximum. I’ve tried cumsum and cummax in combination with conditional summing (like cumsum(c(TRUE, diff(subRes[,2]) > 0))), which didn’t work out. Also rle in combination with lapply (like lapply(rle(subRes$TradeResult.Currency.), function(x) diff(x) > 0)) didn’t work. How can I make this work?
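    For intuition, here is the logic worked through by hand on the data above (my illustration, not from the question): taking the signs of the JPM results gives -, +, +, +, +, -, -, +, -, -. Run-length encoding that sequence yields runs of lengths 1, 4, 2, 1, 2 with signs -, +, -, +, -, and the longest positive run has length 4 — exactly the winning streak wanted. For KFT the signs are -, -, +, -, -, +, +, +, -, +, giving positive runs of lengths 1, 3, 1, so the answer is 3. Any implementation that groups consecutive equal signs and takes the maximum positive group length (for example via rle(sign(x))) follows this scheme.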

    Read the article

  • Creating a user account on a mac you don't have admin access to [migrated]

    - by mouse
    I am trying to create a user account on a school computer so I can run processes (like compiling large libraries) with admin permission settings. That way I can walk away and let other people use the computer, and come back after class to retrieve the binaries. Usually some smart person decides to shut the machine down, but if I had higher permissions they wouldn't be able to terminate my processes. Right now I use the guest account, which everyone has access to. If you think this is in some way unethical or a bad idea, please criticize. tl;dr: I tried using the dscl series of commands in single-user mode as root, as recommended by this site. It returns this error: Cannot open remote host, error: DSOpenDirServiceErr How can I create a local user on this machine to compile my code with higher permissions?

    Read the article

  • How does one calculate voltages for overclocking?

    - by TardisGuy
    So, all I know is that voltage and clock have something to do with each other: too low a voltage and the system is unstable; too high a voltage and there is too much heat. A higher voltage at a lower clock may produce less heat than the same voltage at a higher clock. The reason I'm asking is that if I can learn how power vs. speed works, then I might be able to project some kind of thermal curve to find out where my perfect overclock might be (without 50 burn-ins). But, as I'm sure is apparent, I have no idea what I'm talking about. If anyone can help me learn more about this, throw me a page, a macro, what have you — I will bow before your awesomeness and... mail you a phantom hand-written thank-you letter. Clarification (rev 1) — what I'm trying to learn: how much power a CPU uses as a function of (core voltage) vs. (clock speed). It would answer the question: would a 1.4v core @ 4.0GHz use as much power as a 1.4v core @ 3.0GHz?
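    For background (the standard first-order CMOS model, not specific to any one chip): dynamic power is roughly P ≈ C × V² × f, where C is the chip's switched capacitance, V the core voltage, and f the clock. So at a fixed 1.4 V, a 4.0 GHz clock draws roughly 4.0/3.0 ≈ 1.33 times the power of 3.0 GHz — at equal voltage, power (and heat) scales about linearly with clock. Voltage, on the other hand, enters squared: raising the core from 1.2 V to 1.4 V at a fixed clock costs about (1.4/1.2)² ≈ 1.36 times the power, which is why overclocks that need extra voltage heat up so much faster than clock-only overclocks.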

    Read the article

  • Sensei mouse sensitivity

    - by Marcelo
    I've recently acquired a SteelSeries Sensei. Despite it being a great mouse, I'm having some trouble finding settings that I can get used to... The mouse engine allows me to set a CPI from 0 to 5700, and even higher via what it calls "DCPI" (Double CPI), from 5701 to 11400. In the Windows Control Panel, there's a "Pointer Speed" slider and an "Enhance Pointer Precision" checkbox (wording may differ, as I use a non-English version). The majority of games allow me to set an in-game "Mouse Sensitivity", and some games let me use "Raw mouse input". I'm already familiar with the basics of CPI/DPI — "higher CPI means less hand movement" — but what are the differences between all those options? Is there a "better" or "worse" setting?

    Read the article

  • What is the easiest and cleanest way to create a chrooted SFTP on Centos 5.4?

    - by benjisail
    Hi, I would like to set up SFTP with chroot (or equivalent) login on my CentOS 5.4 server in a clean way. By clean I mean using only the yum command if possible, with something easy to maintain and easy to extend (for example, an easy way to add an extra SFTP user). The problem with CentOS 5.4 is that OpenSSH is at version 4.3 in the repository, so it is not possible to use the built-in chroot capabilities of OpenSSH 4.8+. Installing rssh requires manually creating a chrooted directory, which doesn't seem easy to maintain to me. MySecureShell is another solution, but it requires a higher version of OpenSSL than the one in the repository. I know that I could manually install a higher version of OpenSSH, but I would lose all the advantages of yum, and it could become tricky to maintain if I want to do some updates in the future... Do you have an easy and clean way to set up chrooted SFTP logins on a CentOS 5.4 server? Thanks!

    Read the article

  • Video memory buswidth vs video memory Bandwidth

    - by Mixxiphoid
    My current video card (9600GT) is dying and I'm searching for a new one. Between acquiring my current card and now, I have gained a lot more knowledge about hardware, and I want to use that to pick my new card. So I decided not to just buy some popular card blindly, but to search for a card able to handle my requirements. I searched the specs on the NVIDIA site for the GT640 and was confused by the memory section, and some questions arose. My current card's memory bus width is 256-bit and it has 1GB of memory. I checked Google about the importance of bus width, and the links basically all said the same thing: 'the higher the number, the more traffic can potentially be transferred simultaneously'. This was already clear to me, yet there are currently a lot of new cards considered better than my current one that have a lower bus width. To go into more detail, I copied the memory info from the NVIDIA site:

    GT640 memory specs              DDR3 version    GDDR5 version
    Memory Clock                    1.8 Gbps        5.0 Gbps
    Standard Memory Config          2048 MB         1024 MB
    Memory Interface                DDR3            GDDR5
    Memory Interface Width          128-bit         64-bit
    Memory Bandwidth (GB/sec)       28.5            40.0

    What puzzled me is that memory bandwidth seems to be the most important part, yet the lower bus width has the higher 'performance'. Is this because the GDDR5 memory interface allows a higher memory clock speed (5 Gbps)? If I am to buy a new video card, should I check the bus width? Memory clock? Bandwidth? Amount of memory? My current card has 1GB of memory, so I was searching for a 2GB card, but now I'm not so sure any more whether that is really 'better'. My main question: to me it seems that memory performance is made up of the combination of bus width and frequency. Is this true? If yes, why are there so many sites telling me I need to get a card with a high bus width? If no, then what IS important for memory performance on a video card? NOTE: The memory bandwidth is (almost) never displayed on vendor sites. How can I determine which card is better without knowing the bandwidth?
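    The arithmetic behind the quoted numbers bears this out (my calculation from the specs above): bandwidth = (bus width in bytes) × (effective data rate). The DDR3 version moves 128 bits = 16 bytes per transfer at 1.8 GT/s, i.e. 16 × 1.8 ≈ 28.8 GB/s (quoted as 28.5), while the GDDR5 version moves 64 bits = 8 bytes at 5.0 GT/s, i.e. 8 × 5.0 = 40 GB/s. So bus width and memory clock only matter through their product, and GDDR5's higher data rate more than makes up for the narrower bus.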

    Read the article

  • Under what circumstances can/will a 10BASE-T device slow down a 100BASE-T or faster network?

    - by Fred Hamilton
    Suppose I have a 10BASE-T device, but everything else on my network is 100BASE-T or faster. When I attach the 10Mbit device, one of two things could happen: It could be passed through as 10BASE-T, making whatever it's connected to slow down to 10BASE-T, or It could be converted to a higher speed. I'm looking for all the plausible examples where #1 could happen; where attaching this 10Mbit device will slow down other traffic. I sorta think it can't happen - that as soon as it comes into contact with other traffic it has to get retimed to be inserted into the higher rate traffic (so no slowdown aside from the extra bits from the connection), and that when it's not contacting other traffic, who cares if it's only 10Mbit? Basically I'd like a better understanding of any impact of inserting a slow device into a fast ethernet network.

    Read the article

  • How can I judge the suitability of modern processors for systems with specific CPU requirements?

    - by Iszi
    Inspired by this question: How do I calculate clock speed in multi-core processors? The answers in the above question do a fair job of explaining why a lower-speed multi-core processor won't necessarily perform at the same level as a higher-speed single-core processor. Example: 4*2=8, but a quad-core 2 GHz processor isn't necessarily as fast as a single-core 8 GHz processor. However, I'm having a hard time putting the information in those answers to practical use in my mind. Particularly, I want to know how it should be used to judge whether a given CPU is appropriate for an application with specific requirements. Example scenarios: An application has a minimum CPU requirement of 2.4 GHz dual-core. Another application has a minimum CPU requirement of 1.8 GHz single-core. For either of the above scenarios: Would a higher-speed processor with fewer cores, or a lower-speed processor with more cores, be equally sufficient? If so, how can we determine the appropriate processor speeds required for a given number of cores?
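    A worked illustration of the trade-off (mine, using the first scenario above): against a "2.4 GHz dual-core" requirement, a 1.2 GHz quad-core offers the same aggregate (4 × 1.2 = 2 × 2.4 = 4.8 GHz-cores), but each individual thread runs at only half the required speed, so any workload that can't spread across four threads will fall short; a 3.0 GHz dual-core, by contrast, meets the requirement on both counts. As a rough rule, treat the stated per-core clock as a floor for single-thread speed and the core count as a separate floor for parallelism — extra cores generally cannot substitute for missing per-core speed unless the application is known to scale across them.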

    Read the article

  • Is it safe to buy a replacement laptop battery that has slightly different voltage than the original?

    - by Hugoagogo
    I'm looking for a new battery for my Acer Aspire 4741g laptop. I am trying to get a higher capacity battery. Many sites (like this one and this one) list batteries that are interchangeable with the battery that came with the laptop (AS10D41). These batteries have higher capacity, but also marginally lower voltage (11.1V vs. 11.8V). Is this a problem? Does it mean that the battery is actually incompatible? The replacement battery I am looking at is AS10G3E, and is listed in many places as being compatible with the AS10D41, but nobody makes any mention of the differing voltages. I am trying to find out what kind of problems a lower voltage battery can cause. I know P=IV; with reduced voltage, will the machine draw more current, possibly damaging components? I'm just speculating, but I'm worried about the chance that using a battery at a lower voltage will damage my laptop.
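    Running the asker's own P=IV logic with the numbers quoted (my arithmetic): to deliver the same power, a pack at 11.1 V must supply about 11.8/11.1 ≈ 1.06 times the current of the 11.8 V pack — a difference of roughly 6%.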

    Read the article

  • Signed and unsigned, and how bit extension works

    - by hatorade
    unsigned short s; s = 0xffff; int i = s; How does the extension work here? Two higher-order bytes are added, but I'm confused whether 1's or 0's are extended there. This is probably platform dependent, so let's focus on what Unix does. Would the two higher-order bytes of the int be filled with 1's or 0's, and why? Basically, does the computer know that s is unsigned, and correctly assign 0's to the higher-order bits of the int? So i is now 0x0000ffff? Or, since ints are signed by default on Unix, does it take the sign bit from s (a 1) and copy that to the higher-order bytes?
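    For reference (standard C conversion rules, not part of the original question): the conversion is defined in terms of value, not bit patterns. s holds 65535, and a 32-bit int can represent 65535, so i simply becomes 65535 — that is, 0x0000ffff — which the compiler achieves by zero-extending. The extension is chosen by the source type: an unsigned source is zero-extended, while a signed source is sign-extended (e.g. short t = -1; int j = t; would give j == -1, bit pattern 0xffffffff on a two's-complement machine).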

    Read the article

  • Mac OS X 10.5/6, authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?

    Read the article

  • Increasing Screen Resolution to more than the limit and scaling to the original

    - by Akshat Mittal
    I have a laptop with a screen resolution of 1366x768 (the most common). I want to increase it to 1600x900 (or higher) at the same aspect ratio, scaling the higher resolution to fit my current screen. I found xrandr, with the command xrandr --output LVDS1 --scale 1.4x1.4. This worked, but resulted in another problem: it does the scaling, but the cursor is still confined to the native screen resolution and I am not able to move it further; I found that this bug is already filed here. Also, this command works only on Linux, and I want to do this on both Linux and Windows (including Windows 8). I am looking for similar software that is bug-free (at least without a major bug like this) and that supports Windows as well (or two separate programs for Windows and Linux). Any help is appreciated, and thanks in advance.

    Read the article

  • SQL connection is too slow

    - by user66905
    We have a business web application in ASP.NET + SQL Server 2008. In the beginning, SQL Server and IIS were on the same machine. Now we have bought another machine, so the current configuration is an IIS machine plus a SQL Server machine, connected by a 1Gb LAN connection. With this configuration our web application is slower than before. Maximum bandwidth used is 1-2% of the network, about 15Mbps. When we open additional threads to the same SQL Server from the same IIS machine, network use goes higher, so the problem is not SQL Server itself. How can we get more bandwidth out of this SQL connection? Specs: .NET 3.5; SQL Server 2008 Standard; file transfer can use 100% of the LAN; SQL connection over the TCP/IP protocol; SQL logins; connection pooling tested both enabled and disabled.

    Read the article

  • Can an UPS be too powerful?

    - by Andy
    Our old network admin bought a top-of-the-range UPS a few months ago but never got around to setting it up, and he is no longer with the company. Now the old UPS has broken down and needs to be replaced, but an external company that did an audit said that the new UPS won't work. We are no hardware specialists, but the main difference in the specs is a higher maximum output current, from 5A up to 8.8A. Isn't the UPS supposed to give the server only the output it requires anyway? This 'independent' audit does sell its own hardware, including UPSes, so I'm not sure how much bias they have. Is there a reason why we can't replace the old broken UPS with the new, more powerful one? Is there a way we can check whether the UPS works with our server? OK, I wrote down the numbers again; the volts and amps below are what's printed on the back (where you connect the plugs), which seem different from the front labels. Old one: APC SmartUPS 1500, 220-240V -- 6.8A. New one: Dell UPS 1920W, 250V -- 10A.
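    For reference (my arithmetic from the numbers given): the old unit's rating of roughly 230 V × 6.8 A ≈ 1560 VA lines up with its "SmartUPS 1500" name, while the new unit is rated 1920 W and 250 V × 10 A ≈ 2500 VA — so on paper the new UPS is rated to deliver more, not less.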

    Read the article
