Search Results

Search found 6694 results on 268 pages for 'wait states'.


  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage on my application server, characterized by the CPUs suddenly going from close to 90% busy to almost completely idle for a few seconds. Here is an example of a drop out, shown by a snippet of vmstat data taken while the application server is under a heavy workload:

        # vmstat 1
         kthr      memory            page            disk          faults      cpu
         r  b w   swap      free      re mf pi po fr de sr s3 s4 s5 s6   in     sy     cs   us sy id
         1  0 0 130160176 116381952   0 16  0  0  0  0  0  0  0  0  0 207377 117715 203884 70 21  9
        12  0 0 130160160 116381936   0 25  0  0  0  0  0  0  0  0  0 200413 117162 197250 70 20  9
        11  0 0 130160176 116381920   0 16  0  0  0  0  0  0  1  0  0 203150 119365 200249 72 21  7
         8  0 0 130160176 116377808   0 19  0  0  0  0  0  0  0  0  0 169826  96144 165194 56 17 27
         0  0 0 130160176 116377800   0 16  0  0  0  0  0  0  0  0  1  10245   9376   9164  2  1 97
         0  0 0 130160176 116377792   0 16  0  0  0  0  0  0  0  0  2  15742  12401  14784  4  1 95
         0  0 0 130160176 116377776   2 16  0  0  0  0  0  0  1  0  0  19972  17703  19612  6  2 92
        14  0 0 130160176 116377696   0 16  0  0  0  0  0  0  0  0  0 202794 116793 199807 71 21  8
         9  0 0 130160160 116373584   0 30  0  0  0  0  0  0 18  0  0 203123 117857 198825 69 20 11

    This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including:

        - Unexpected JMeter behavior
        - Network contention
        - Java garbage collection
        - Application server thread pool problems
        - Connection pool problems
        - Database transaction processing
        - Database I/O contention

    Graphing the CPU %idle led to a breakthrough: several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again, looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" column (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 2053.6    0.0  8234.3   0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0
            0.0 2162.2    0.0  8652.8   0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0
            0.0 1102.5    0.0 10012.8   0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0
            0.0   74.0    0.0  7920.6   0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0
            0.0  568.7    0.0  6674.0   0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0
            0.0 1358.0    0.0  5456.0   0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0
            0.0 1314.3    0.0  5285.2   0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0

    Here is a little more information about my database configuration:

        - The database and application server were running on two different SPARC servers.
        - Storage for the database was on a storage array connected via 8 gigabit Fibre Channel.
        - Data storage and log files were on different physical disk drives.
        - Reliable low-latency I/O is provided by battery-backed NVRAM.
        - Highly available: two Fibre Channel links accessed via MPxIO, two mirrored cache controllers, and the log file physical disks mirrored in the storage device.
        - Database log files on a ZFS filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming.

    Why would I be getting service time spikes in my high-end storage?
    First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did. At first, I guessed that the disk service time spikes might be related to flushing the write-through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device:

        # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0

    The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs. Here is the zpool configuration:

        # zpool status ZFS-db-41
          pool: ZFS-db-41
         state: ONLINE
         scan: none requested
        config:
                NAME                   STATE
                ZFS-db-41              ONLINE
                  c3t60080E5...F4F6d0  ONLINE
                logs
                  c3t60080E5...F8A7d0  ONLINE

    Now, the I/O spikes look like this:

                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1053.5    0.0  4234.1   0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1131.8    0.0  4555.3   0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1167.6    0.0  4682.2   0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0
            0.0  162.2    0.0 19153.9   0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1247.2    0.0  4992.6   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0
            0.0   41.0    0.0    70.0   0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1241.3    0.0  4989.3   0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0
                            extended device statistics
            r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
            0.0 1193.2    0.0  4772.9   0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0

    We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches.

    Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide.

    References:

        - The ZFS Intent Log
        - Solaris ZFS, Synchronous Writes and the ZIL Explained
        - ZFS Evil Tuning Guide: Cache Flushes
        - ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • What's a good way to organize samplers for HLSL?

    - by Rei Miyasaka
    According to MSDN, I can have 4096 samplers per context. That's a lot, considering there's only a handful of common sampler states. That tempts me to initialize an array containing a whole bunch of common sampler states, assign them to every device context I use, and then in the pixel shaders refer to them by index using : register(s[n]), where n is the index in the array. If I want more samplers for whatever reason, I can just add them on after the last slot. Does this work? If not, when should I set the samplers? Should it be done by the mesh renderer? The texture renderer? Or alongside PSSetShader?

    Edit: That trick I wrote above doesn't work (at least not yet), as the compiler gives me this error message when I try to use the same register twice:

        error X4500: overlapping register semantics not yet implemented 's0'

    So how do people usually organize samplers, then?
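    One common arrangement (not the only one) is to create the handful of shared sampler states once and bind them to fixed, well-known slots on every device context, so shaders simply declare "SamplerState LinearWrap : register(s0);" and never change. The sketch below assumes SharpDX-style Direct3D 11 bindings; the class and method names are SharpDX's as I recall them and may differ slightly between versions, so treat it as illustrative rather than definitive.

        // Minimal sketch: build a few shared samplers once, then bind them to
        // fixed slots (s0..s2) on whatever DeviceContext is about to draw.
        using SharpDX.Direct3D11;

        public static class CommonSamplers
        {
            public static SamplerState LinearWrap;
            public static SamplerState LinearClamp;
            public static SamplerState PointWrap;

            public static void Initialize(Device device)
            {
                LinearWrap = new SamplerState(device, new SamplerStateDescription
                {
                    Filter = Filter.MinMagMipLinear,
                    AddressU = TextureAddressMode.Wrap,
                    AddressV = TextureAddressMode.Wrap,
                    AddressW = TextureAddressMode.Wrap,
                    ComparisonFunction = Comparison.Never,
                    MaximumLod = float.MaxValue
                });
                // LinearClamp and PointWrap are created the same way with
                // different Filter / TextureAddressMode values.
            }

            // Bind the shared samplers to well-known slots before drawing.
            public static void Bind(DeviceContext context)
            {
                context.PixelShader.SetSampler(0, LinearWrap);
                context.PixelShader.SetSampler(1, LinearClamp);
                context.PixelShader.SetSampler(2, PointWrap);
            }
        }

    With that layout, whichever renderer issues the draw call only has to pick slots by convention; it never creates sampler state per draw.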

    Read the article

  • Verizon Wireless Supports its Mission-Critical Employee Portal with MySQL

    - by Bertrand Matthelié
    Verizon Wireless, the #1 mobile carrier in the United States, operates the nation's largest 3G and 4G LTE network, with the most subscribers (109 million) and the highest revenue ($70.2 billion in 2011). Verizon Wireless built the first wide-area wireless broadband network and delivered the first wireless consumer 3G multimedia service in the US, and offers global voice and data services in more than 200 destinations around the world. To support 4.2 million daily wireless transactions and 493,000 call and email transactions produced by 94.2 million retail customers, Verizon Wireless employs over 78,000 employees with area headquarters across the United States.

    The Business Challenge

    Seeing the stupendous rise of social media, video streaming, live broadcasting and similar services, which redefined the scope of technology, Verizon Wireless, as a technology-savvy company, wanted to provide a platform to its employees where they could network socially, view and host microsites, stream live videos, blog and provide the latest news. The IT team at Verizon Wireless had abundant experience with various technology platforms to support the huge number of applications in the company. However, open-source products weren't yet widely used in the organization, and the team had the ambition to adopt such technologies and see if the architecture could meet Verizon Wireless' rigid requirements. After evaluating a few solutions, the IT team decided to use the LAMP stack for Vzweb, its mission-critical, 24x7 employee portal, with Drupal as the front end and MySQL on Linux as the backend, and MySQL for a few other internal websites as well.

    The MySQL Solution

    Verizon Wireless started to support its employee portal, Vzweb, its online streaming website, Vztube, and its internal wiki pages, Vzwiki, with MySQL 5.1 in 2010. Vzweb is the main internal communication channel for Verizon Wireless, while Vztube regularly hosts important company-wide webcasts for executive-level announcements, so both channels have to be live and accessible all the time for its 78,000 employees across the United States. However, during the initial deployment of the MySQL-based intranet, the application experienced performance issues. High connection spikes occurred, causing slow user response times, and the IT team applied workarounds to continue the service. A number of key performance indicators (KPIs) for the infrastructure were identified and the operational framework redesigned to support a more robust website and conform to the 99.985% uptime SLA (Service-Level Agreement).
    The MySQL DBA team made a series of upgrades in MySQL:

        - Step 1: Moved from the MyISAM to the InnoDB storage engine in 2010.
        - Step 2: Upgraded to the latest MySQL 5.1.54 release in 2010.
        - Step 3: Upgraded from MySQL 5.1 to the latest GA release, MySQL 5.5, in 2011, and leveraged the MySQL Thread Pool, part of MySQL Enterprise Edition, to scale better.

    After making those changes, the team saw a much better response time during high-concurrency use cases, and achieved an amazing performance improvement of 1400%! In January 2011, Verizon CEO Ivan Seidenberg announced the iPhone launch during the opening keynote at the Consumer Electronics Show (CES) in Las Vegas, and that presentation was streamed live to its 78,000 employees. The event was broadcast flawlessly with MySQL as the database. Later in 2011, Hurricane Irene struck the East Coast of the United States and caused major loss of life and financial damage. During the hurricane, the team directed more traffic to its West Coast data center to avoid potential infrastructure damage on the East Coast. The transition was executed smoothly, and even though the geographical distance became longer for East Coast users, there was no impact on the performance of Vzweb and Vztube, and the SLA goal was achieved.

    "MySQL is the key component of Verizon Wireless' mission-critical employee portal application," said Shivinder Singh, senior DBA at Verizon Wireless. "We achieved a 1400% performance improvement by moving from the MyISAM storage engine to InnoDB, upgrading to the latest GA release MySQL 5.5, and using the MySQL Thread Pool to support high concurrent user connections. MySQL has become part of our IT infrastructure, on which potentially more future applications will be built." To learn more about MySQL Enterprise Edition, get our Product Guide.

    Read the article

  • MDX using EXISTING, AGGREGATE, CROSSJOIN and WHERE

    - by James Rogers
    Using the EXISTING function to decode AGGREGATE members and nested sub-query filters is a well-published approach. Mosha wrote a good blog on it here and a more recent one here. The use of EXISTING in these scenarios is very useful and sometimes the only option when dealing with multi-select filters. However, there are some limitations I have run across when using the EXISTING function against an AGGREGATE member:

        - The AGGREGATE member must be assigned to the Dimension.Hierarchy being detected by the EXISTING function in the calculated measure.
        - The AGGREGATE member cannot contain a crossjoin from any other dimension or hierarchy, or EXISTING will not be able to detect the members in the AGGREGATE member.

    Take the following query (from Adventure Works DW 2008):

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({
            [Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    Here we are attempting to count the existing fiscal weeks in the slicer. This is useful to get a per-week average for another member. Many applications generate queries in this manner (such as Oracle OBIEE). This query returns the correct result of (4) weeks.

    Now let's put a twist in it. What if the querying application submits the query in the following manner:

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Customer].[Customer Geography].[CM] as 'AGGREGATE({
            [Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Customer].[Customer Geography].[CM]

    Here we are attempting to count the existing fiscal weeks in the slicer. However, the AGGREGATE member is built on a different dimension (in name) than the one EXISTING is trying to detect. In this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.

    Now another twist: the AGGREGATE member will be named appropriately and contain the hierarchy we are trying to detect with EXISTING, but it will be cross-joined with another hierarchy:

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({
            [Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]} *
            {[Customer].[Customer Geography].[Country].&[Australia],
             [Customer].[Customer Geography].[Country].&[United States]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    Once again, we are attempting to count the existing fiscal weeks in the slicer. Again, in this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.
    However, in 2008 R2 this query returns the correct result of 4, and additionally, the following will return the count of existing countries as well (2):

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
          member [Country Count] as 'count(existing([Customer].[Customer Geography].[Country].members))'
          member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({
            [Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]} *
            {[Customer].[Customer Geography].[Country].&[Australia],
             [Customer].[Customer Geography].[Country].&[United States]})'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          [Date].[Fiscal Weeks].[CM]

    2008 R2 seems to work as long as the AGGREGATE member is on at least one of the hierarchies attempting to be detected (i.e. [Date].[Fiscal Weeks] or [Customer].[Customer Geography]). If not, it seems that the engine cannot find a "point of entry" into the aggregate member and ignores it for calculated members.

    One way around this would be to put the sets from the AGGREGATE member explicitly in the WHERE clause (slicer). I realize this is only supported in SSAS 2005 and 2008. However, after talking with Chris Webb (his blog is here and I highly recommend following his efforts and musings), it is a far more efficient way to filter/slice a query:

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]},
           {[Customer].[Customer Geography].[Country].&[Australia],
            [Customer].[Customer Geography].[Country].&[United States]})

    This query returns the correct result of (4) weeks. Additionally, we can count the cross-join members of the two hierarchies in the slicer:

        With
          member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members) *
                                        existing([Customer].[Customer Geography].[Country].members))'
        select
          {[Week Count]} on columns
        from
          [Adventure Works]
        where
          ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],
            [Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]},
           {[Customer].[Customer Geography].[Country].&[Australia],
            [Customer].[Customer Geography].[Country].&[United States]})

    We get the correct number of (8) here.

    Read the article

  • Silverlight Cream for March 13, 2011 -- #1059

    - by Dave Campbell
    In this Issue: András Velvárt, WindowsPhoneGeek (2), Jesse Liberty (2), Victor Gaudioso, Kunal Chowdhury, Jeremy Likness, Michael Crump, and Dhananjay Kumar.

    Above the Fold:
        Silverlight: "Application Library Caching in Silverlight 4" - Kunal Chowdhury
        WP7: "Handling WP7 orientation changes via Visual States" - András Velvárt

    Shoutouts: Joe McBride gave a MEF Head User Group presentation and has posted How to Become a MEF Head - Slides & Code.

    From SilverlightCream.com:

        - Handling WP7 orientation changes via Visual States: András Velvárt has an Expression Blend/WP7 post up discussing WP7 orientation changes and handling them via Visual States... see an example from his SurfCube app, and a behavior to handle the control... with source.
        - WP7 PerformanceProgressBar in depth: WindowsPhoneGeek has a post up discussing the WP7 PerformanceProgressBar from the Windows Phone Toolkit. This is an update on the Toolkit based on the Feb 2011 release. Great explanation of the PerformanceProgressBar, external links, and sample code.
        - Getting data out of WP7 WMAppManifest is easy with Coding4Fun PhoneHelper: Next, WindowsPhoneGeek has a post up about the PhoneHelper in the Coding4Fun Toolkit, and using it to get data out of the WMAppManifest easily. Good discussion, links, and code as always.
        - Silverlight Unit Test For Phone: In Jesse Liberty's "Windows Phone From Scratch" number 41, he's discussing unit testing for WP7... he gives some good external links and some good examples.
        - Yet Another Podcast #27 - Paul Betts: Jesse Liberty's next post is his "Yet Another Podcast" number 27, an interview with Paul Betts, the creator of ReactiveUI... check out the podcast and also the good links listed.
        - New Silverlight Video Tutorial: How to use the Fluid Move Behavior: Victor Gaudioso has a new video tutorial up on using the Fluid Move Behavior... making a selected item animate from a ListBox to a master/details Grid.
        - Application Library Caching in Silverlight 4: Kunal Chowdhury takes a break from SilverlightZone long enough to write a post about Application Library Caching... for example, on-demand loading of a 3rd-party XAP.
        - Jounce Part 13: Navigation Parameters: Jeremy Likness has the 13th post up in his series on understanding his Jounce MVVM framework. This episode surrounds a new release and what it contains, the primary focus being navigation parameters... that is, you can raise a navigation event with a payload.
        - Profiling Silverlight Applications after installing Visual Studio 2010 Service Pack 1: Michael Crump digs into the performance wizard for Silverlight that we get with VS2010 SP1. He shows how to get and read a profile... a great intro to a new tool.
        - Binding XML File to Data Grid in Silverlight: Dhananjay Kumar demonstrates reading an XML file using LINQ to XML and binding the result to a Silverlight DataGrid.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • EU Digital Agenda scores 85/100

    - by trond-arne.undheim
    If the Digital Agenda was a bottle of wine and I were wine critic Robert Parker, I would say the Digital Agenda has "a great bouquet, many good elements, with an astringent, dry and puckering mouth feel that will not please everyone, but still displaying some finesse. A somewhat controlled effort with no surprises and a few noticeable flaws in the delivery. Noticeably shorter aftertaste than advertised by the producers. Score: 85/100. Enjoy now."

    The EU Digital Agenda states that "standards are vital for interoperability" and has a whole chapter on interoperability and standards. With this strong emphasis, there is hope the EU's outdated standardization system finally is headed for reform. It has been 23 years since the legal framework of standardisation was completed by Council Decision 87/95/EEC in the Information and Communications Technology (ICT) sector. Standardization is market driven. For several decades the IT industry has been developing standards and specifications in global open standards development organisations (fora/consortia), many of which have transparency procedures and practices far superior to the European Standards Organizations. The Digital Agenda rightly states: "reflecting the rise and growing importance of ICT standards developed by certain global fora and consortia". Some fora/consortia, of course, are distorted, influenced by single vendors, have a poor track record, and need constant vigilance, but they are the minority. Therefore, the recognition needs to be accompanied by eligibility criteria focused on openness.

    Will the EU reform its ICT standardization by the end of 2010? Possibly, and only if DG Enterprise takes on board that Information and Communications Technologies (ICTs) have driven half of the productivity growth in Europe over the past 15 years, a prominent fact in the EU's excellent Digital Competitiveness report 2010, published on Monday 17 May. It is OK to single out the ICT sector. It simply is the most important sector right now, as it fuels growth in all other sectors. Let's not wait for the entire standardization package, which may take another few years. Europe does not have time.

    The Digital Agenda is an umbrella strategy with deliveries from a host of actors across the Commission. For instance, the EU promises to issue "guidance on transparent ex-ante disclosure rules for essential intellectual property rights and licensing terms and conditions in the context of standard setting" by 2011, in the Horizontal Guidelines now out for public consultation by DG COMP and to some extent in DG ENTR's standardization policy reform. This is important. The EU will issue procurement guidance as interoperability frameworks are put into practice. This is a joint responsibility of several DGs, and is likely to suffer coordination problems, controversy and delays. We have seen plenty of the latter already, and I have commented on the Commission's own interoperability elsewhere, with mixed luck. :(

    Yesterday, I watched the cartoonesque Korean western film The Good, the Bad and the Weird. In the movie (and I mean in the movie only), a bandit, a thief, and a bounty hunter, all excellent at whatever they do, fight for a treasure map. Whether that is a good analogy for the situation within the Commission, others are better judges of than I. However, as a movie fanatic, I still await the final shoot-out, and, as in the film, the only certainty is that "life is about chasing and being chased".
    The missed opportunity (in this case, not following up the push from Member States to better define open standards based interoperability) is a casualty of the chaos that ensued in the European Wild West (and I mean that in the most endearing sense, and my excuses beforehand to actors who possibly justifiably cannot bear being compared to fictional movie characters). Instead of exposing the ongoing fight, the EU opted for the legalistic use of the term "standards" throughout the document. This is a term that, to the EU, excludes most standards used by the IT industry worldwide. So, while it, for a moment, meant "weapon down", it will not lead to lasting peace.

    The Digital Agenda calls for the Member States to "Implement commitments on interoperability and standards in the Malmö and Granada Declarations by 2013". This is a far cry from the actual Ministerial Declarations, which called upon the Commission to help them with this implementation by recognizing and further defining open standards based interoperability. Unless there is more forthcoming from the Commission, the market's judgement will be: you simply fall short.

    Generally, I think the EU focus now should be "from policy to practice", and the Digital Agenda does indeed stop short of tackling some highly practical issues. There is a need for progress beyond the Digital Agenda. Here are some suggestions that would help Europe re-take global leadership on openness, public sector reform, and economic growth:

        - A strong European software strategy centred around open standards based interoperability by 2011.
        - An ambitious new eCommission strategy for 2011-15 focused on migration to open standards by 2015.
        - Aligning the IT portfolio across the Commission into one Digital Agenda DG by 2012.
        - Focusing all best practice exchange in eGovernment on one social networking site, epractice.eu (full disclosure: I had a role in getting that site up and running).
        - Prioritizing public sector needs in global standardization over European standardization by 2014.

    Read the article

  • Is it true that the Google Spider gives the most relevance of a search result to the first 68 characters of the <title>?

    - by leeand00
    I am reading documentation about my CMS and it states that an HTML page's <title> tag is really important for SEO. It states that the Google Spider gives the most relevance to the first 68 characters of a site title (68 characters being the number of characters that Google will display in its search engine result pages). Can anyone verify this is still true? I read in The Information Diet that content farms were getting too good at gaming Google's algorithm for collecting and posting SERPs, and so Google had to change the search algorithm.

    Read the article

  • NHibernate 3.0 and FluentNHibernate, how to get up and running…

    - by DesigningCode
    First up: it's actually really easy. I'm not very religious about my DB tech, I don't really care, I just want something that works. So I'm happy to consider all options if they provide an advantage, and recently I was considering jumping from NHibernate to EF 4.0. However, before ditching NHibernate and jumping to EF 4.0, I thought I should try the head version of NHibernate's trunk and the head version of FluentNHibernate.

    I currently have a "Repository / Unit of Work" framework built up around these two techs. All up, it makes my life pretty simple for dealing with databases. The problem is the current release of NHibernate + the Linq provider wasn't too hot for our purposes, especially trying to plug it into older VB.NET code. The Linq provider spat the dummy with VB.NET lambdas, mainly because in C#

        Query().Where(l => l.Name.Contains("x") || l.Name.Contains("y")).ToList();

    is not the same as the VB.NET

        Query().Where(Function(l) l.Name.Contains("x") Or l.Name.Contains("y")).ToList

    VB.NET seems to spit out... well... something different :-) so anyways...

    Compiling your own version of NHibernate and FluentNHibernate: it's actually pretty easy! First you'll need to install TortoiseSVN, NAnt and Git if you don't already have them.

    NHibernate
    First step, get the subversion trunk https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/ into a directory somewhere, eg \thirdparty\nhibernate. Then use NAnt to build it. (If you open the .sln it will show errors because AssemblyInfo.cs doesn't exist.) To build it, there is a .txt document with sample command line build instructions; I simply used:

        NAnt -D:project.config=release clean build >output-release-build.log

    *wait* *wait* *wait* and ta da, you will have a bin directory with all the release dlls.

    FluentNHibernate
    This was pretty simple. There are instructions here: http://wiki.fluentnhibernate.org/Getting_started#Installation. Basically, with git, create a directory and issue the command

        git clone git://github.com/jagregory/fluent-nhibernate.git

    and wait, and soon enough you have the source. Now, from the bin directory that NHibernate spat out, take everything and dump it into the subdirectory "fluent-nhibernate\tools\NHibernate". Now, to build, you can use rake... which is a Ruby build system; however you can also just open the solution and build, which is what I did. I had a few problems with the references, which I simply re-added using the new ones. Once built, I just took all the NHibernate dlls and the Fluent ones, replaced my existing NHibernate / Fluent assemblies, and killed off the old Linq project.

    All I had to change was the places that used .Linq<T>, replacing them with .Query<T> (which was easy as I had wrapped it already to isolate my code from such changes), and hey presto, everything worked. Even the VB.NET linq calls. I need to do some more testing as I've only done basic smoke tests, but it's all looking pretty good, so for now, I will stick to NHibernate!
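    For reference, the "wrapped it already" remark above is what makes this kind of provider change painless: all LINQ access goes through one method, so renaming Linq<T> to Query<T> touches a single file. A minimal sketch of such a wrapper is below; the repository class is a made-up illustration, and only ISession.Query<T>() (the NHibernate 3 LINQ entry point from NHibernate.Linq) is real NHibernate API.

        using System.Linq;
        using NHibernate;
        using NHibernate.Linq;   // provides the Query<T>() extension method in NH 3.x

        // Hypothetical repository wrapper: callers never touch Linq<T>/Query<T>
        // directly, so swapping the LINQ provider is a one-line change here.
        public class Repository<T> where T : class
        {
            private readonly ISession _session;

            public Repository(ISession session)
            {
                _session = session;
            }

            // Single choke point for LINQ queries against the session.
            public IQueryable<T> Query()
            {
                return _session.Query<T>();   // was _session.Linq<T>() with the old Linq-to-NHibernate contrib
            }
        }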

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories:

        - Constraining the amount of time that the Console stalls waiting for communication
        - Reducing and streamlining the amount of data required for an update

    A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's Configuration: General page (Advanced section) for a domain. The default value for this setting is 0, which means wait indefinitely, and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency.

    In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers, and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console.

    To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is:

        - For each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server
        - Find a WorkManager
        - For each server, create a Work instance to obtain runtime mbean attributes for that server, and schedule the Work instance in the WorkManager
        - Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout
        - For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data
        - Display the collected data in the table

    In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.)
    We decided supporting that edge case did not warrant the performance impact for all, and instead only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment.

    Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:

        - Using Work Management to obtain health concurrently
        - Applying the operation timeout configuration to constrain how long we will wait
        - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
        - Eliminating health collection if this portlet is minimized

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
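    To make the fan-out-with-timeout approach described above concrete, here is a rough analog in C#. This is not the WebLogic Work Management API (that is Java, with WorkManager/Work/waitForAll); it is only a sketch of the same shape: query each server concurrently, wait no longer than the operation timeout, and render whatever completed, flagging the rest as incomplete.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading.Tasks;

        // Sketch of "collect runtime data with a bounded wait": servers that do
        // not answer within the timeout are shown with configuration data only.
        public static class StatusCollector
        {
            public static Dictionary<string, string> Collect(
                IEnumerable<string> servers,
                Func<string, string> fetchRuntimeInfo,     // slow, remote call
                TimeSpan operationTimeout)
            {
                var tasks = servers.ToDictionary(
                    server => server,
                    server => Task.Run(() => fetchRuntimeInfo(server)));

                try
                {
                    // Wait for all tasks, but never longer than the operation timeout.
                    Task.WaitAll(tasks.Values.Cast<Task>().ToArray(), operationTimeout);
                }
                catch (AggregateException)
                {
                    // A failed server simply shows up below as "not available".
                }

                var results = new Dictionary<string, string>();
                foreach (var pair in tasks)
                {
                    results[pair.Key] = pair.Value.Status == TaskStatus.RanToCompletion
                        ? pair.Value.Result
                        : "(runtime data not available - configuration only)";
                }
                return results;
            }
        }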

    Read the article

  • Oracle Linux Training Across Five Continents

    - by Antoinette O'Sullivan
    The Oracle Linux System Administration course, a top-selling course, provides you with a broad selection of the key competencies you need to be a great Linux system administrator. And you can now take this course from your desk or in classrooms across all five continents.

    You can take this 5-day instructor-led course through the following delivery methods:

        - Training-on-Demand: Start training within 24 hours of registering. You follow the lecture material at your own pace via streaming video and book time on a lab environment to suit your schedule.
        - Live-Virtual Event: Follow a live event from your own desk, no travel required. You can choose from a selection of events on the schedule to suit different time zones.
        - In-Class Event: Travel to an education center to take this course. Below is a selection of the in-class events already on the schedule.

    AFRICA
        Location                        Date                Delivery Language
        Nairobi, Kenya                  13 October 2014     English
        Johannesburg, South Africa      24 November 2014    English

    AMERICA
        Location                        Date                Delivery Language
        Mississauga, Canada             27 October 2014     English
        Chicago, IL, United States      13 October 2014     English
        Roseville, MN, United States    13 October 2014     English

    ASIA
        Location                        Date                Delivery Language
        Jakarta, Indonesia              20 October 2014     English
        Petaling Jaya, Malaysia         25 August 2014      English
        Kuala Lumpur, Malaysia          8 December 2014     English
        Istanbul, Turkey                10 November 2014    Turkish
        Dubai, United Arab Emirates     4 January 2015      English

    AUSTRALIA
        Location                        Date                Delivery Language
        Canberra, Australia             20 October 2014     English
        Melbourne, Australia            20 October 2014     English

    EUROPE
        Location                        Date                Delivery Language
        Paris, France                   6 October 2014      French
        Milan, Italy                    20 October 2014     Italian
        Rome, Italy                     8 September 2014    Italian
        Bucharest, Romania              27 October 2014     Romanian
        Madrid, Spain                   1 September 2014    Spanish

    The Oracle Linux System Administration course is the recommended training course to prepare you for the Oracle Linux 5 & 6 System Administrator OCA certification exam. Those who have acquired the skills provided in the Oracle Linux System Administration course can advance their learning by taking the Oracle Linux Advanced Administration course. You can take this 5-day instructor-led course as a live-virtual event or an in-class event. Below is a selection of the in-class events on the schedule:

        Location                        Date                Delivery Language
        Jakarta, Indonesia              27 October 2014     English
        Kuala Lumpur, Malaysia          6 October 2014      English
        Bangkok, Thailand               20 October 2014     English
        Belmont, CA, United States      15 September 2014   English

    For information on the Oracle Linux curriculum, go to http://oracle.com/education/linux.

    Read the article

  • LSP vs OCP / Liskov Substitution VS Open Close

    - by Kolyunya
    I am trying to understand the SOLID principles of OOP and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). The open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar which is derived from Foo, and the program will work the very same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program, LSP is true only when OCP is true, and OCP is true only if LSP is true. That means that they are equal. Correct me if I'm wrong. I really want to understand these ideas. Great thanks for an answer.
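    A worked counterexample helps when weighing whether the two principles coincide. The classic Square/Rectangle pair below is extension without modification (nothing in Rectangle is touched), yet substitution still surprises callers, which is exactly the property LSP is concerned with. This is just the standard textbook illustration, sketched in C#, not an answer from the original thread.

        // Square extends Rectangle without modifying it, yet a Square
        // cannot always stand in for a Rectangle.
        public class Rectangle
        {
            public virtual int Width  { get; set; }
            public virtual int Height { get; set; }
            public int Area { get { return Width * Height; } }
        }

        public class Square : Rectangle
        {
            // Keeping both sides equal is perfectly sensible for a square...
            public override int Width
            {
                get { return base.Width; }
                set { base.Width = value; base.Height = value; }
            }
            public override int Height
            {
                get { return base.Height; }
                set { base.Width = value; base.Height = value; }
            }
        }

        public static class LspDemo
        {
            // Written against Rectangle; silently broken by Square.
            public static int StretchTo10x5(Rectangle r)
            {
                r.Width = 10;
                r.Height = 5;
                return r.Area;   // a caller expects 50, but gets 25 for a Square
            }
        }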

    Read the article

  • How long should my HTML Page Title Really be?

    - by RandomBen
    How long should the text within my <title></title> tags really be? I know Google cuts it off at some point, but when? When I used IIS7's SEO Toolkit 1.0, I got an error stating my title should be under 65 characters. I have a book by Bruce Clay that states I should use 62-70 characters and roughly 9 +/- 3 words. I have also used SenSEO's Firefox add-on, and it states I should use a maximum of 65 characters or roughly 15 words. What is the max, really? I have 2 sources saying 65 and 1 saying 72, but Bruce Clay is generally held in high regard.
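    If the only goal is to keep a title inside whatever character budget you settle on (65 is just the figure quoted above, not an official limit), a small helper that trims at a word boundary is enough. A purely illustrative C# sketch:

        // Trim a page title to a character budget without cutting a word in half.
        public static class TitleHelper
        {
            public static string Truncate(string title, int maxChars = 65)
            {
                if (string.IsNullOrEmpty(title) || title.Length <= maxChars)
                    return title;

                int cut = title.LastIndexOf(' ', maxChars);   // last space within budget
                if (cut <= 0)
                    cut = maxChars;                           // no space found; hard cut

                return title.Substring(0, cut).TrimEnd();
            }
        }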

    Read the article

  • GameState management hierarchical FSM vs stack based FSM

    - by user8363
    I'm reading a bit on finite state machines to handle game states (or screens). I would like to build a rather decent FSM that can handle multiple screens. For example, while the game is running I want to be able to pop up an in-game menu, and when that happens the main screen must stop updating (the game is paused) but must still be visible in the background. However, when I open an inventory pop-up, the main screen must remain visible and continue updating, etc. I'm a bit confused about the difference in implementation and functionality between hierarchical FSMs and FSMs that handle a stack of states instead. Are they basically the same? Or are there important differences?
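    For context, the stack-based variant usually looks something like the sketch below (an illustrative C# sketch, not from any particular engine): each screen on the stack declares whether the screens beneath it keep updating and keep being drawn, which covers both the pause-menu and the inventory-pop-up cases described above.

        using System.Collections.Generic;

        // Minimal stack-of-screens manager.
        public interface IScreen
        {
            bool BlocksUpdate { get; }   // true for a pause menu, false for an inventory pop-up
            bool BlocksDraw   { get; }   // usually false for translucent pop-ups
            void Update(float dt);
            void Draw();
        }

        public class ScreenStack
        {
            private readonly List<IScreen> _screens = new List<IScreen>();

            public void Push(IScreen s) { _screens.Add(s); }
            public void Pop()           { _screens.RemoveAt(_screens.Count - 1); }

            public void Update(float dt)
            {
                // Walk from the top down; stop once a screen blocks those below it.
                for (int i = _screens.Count - 1; i >= 0; i--)
                {
                    _screens[i].Update(dt);
                    if (_screens[i].BlocksUpdate) break;
                }
            }

            public void Draw()
            {
                // Find the lowest screen that still needs drawing, then draw upward.
                int first = _screens.Count - 1;
                while (first > 0 && !_screens[first].BlocksDraw) first--;
                for (int i = first; i < _screens.Count; i++)
                    _screens[i].Draw();
            }
        }

    A hierarchical FSM, by contrast, nests states inside states (the in-game menu is a child of the "playing" state); the stack version tends to be simpler when screens are layered visually as described here.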

    Read the article

  • Get to Know a Candidate (2 of 25): Merlin Miller - American Third Position Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    Meet Merlin Miller of the American Third Position Party

    In addition to being the American Third Position Party nominee, Miller is an independent film maker. He is a graduate of West Point and has commanded two units in the United States Army. After military service he worked as an industrial engineering manager for Michelin Tire. He learned about film making by earning an MFA from the University of Southern California. Mr. Miller is on the ballot in: CO, NJ, and TN.

    In the 2000s, Miller began taking an increasingly paleoconservative political stance. He claimed that Hollywood "surreptitiously seeks to destroy our European-American heritage and our Christian-based traditional values, and replace them with values that debase these traditional values and elevate minorities as paragons of virtue and wisdom....Today's motion pictures, in concert with other forms of mass media entertainment, are the greatest enemies to the well-being of our progeny and the future of our country." Miller states that he "doesn't like" interracial marriage; however, he does not support outlawing interracial marriage, either. Miller has denied being anti-Semitic, instead claiming that he merely opposes "favoritism" granted to Jews in the film industry. He also opposes illegal immigration and what he refers to as "wide open borders" in the United States.

    The American Third Position Party (A3P) is a third-positionist American political party which promotes white supremacy. It was officially launched in January 2010 (although in November 2009 it filed papers to get on a ballot in California), partially to channel the right-wing populist resentment engendered by the financial crisis of 2007-2010 and the policies of the Obama administration, and it defines its principal mission as representing the political interests of white Americans. The party takes a strong stand against immigration and globalization, and strongly supports an anti-interventionist foreign policy. Although the party does not support labor unions, it does strongly support the labor rights of the American working class on a platform of placing American workers first over illegal immigrant workers and banning the overseas corporate relocation of American industry and technology.

    Third Position or Third Alternative refers to a revolutionary nationalist political position that emphasizes its opposition to both communism and capitalism. Advocates of Third Position politics typically present themselves as "beyond left and right", instead claiming to syncretize radical ideas from both ends of the political spectrum.

    Learn more about Merlin Miller and the American Third Position Party on Wikipedia.

    Read the article

  • Creating practically solvable 15 puzzle inputs

    - by Ashwin
    I am now developing a 15-puzzle game. I know the method to detect unsolvable puzzles. But unlike the 8-puzzle, the 15-puzzle takes quite a long time to solve for some input states, while other sets of input states can be solved within 5 seconds. Now the problem is that I cannot give the user (the player) a problem for which the solution takes more than 10 seconds (if he/she chooses to see the solution). So what I want is that when I initially shuffle the puzzle, I only present those puzzles which can be solved within 10 seconds. There must be some way to determine the hardness of the puzzle. I tried searching the net but could not find it. Does anyone know a way of determining the hardness of a puzzle? NOTE: I am using the A* algorithm to find the solution, on a computer with 3GB RAM and a 2.27GHz processor.
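    One cheap proxy that is often used for this (a heuristic screen, not an exact measure of A* running time) is the summed Manhattan distance of the tiles from their goal positions: boards with a low total tend to solve quickly, so you can keep re-shuffling until the score falls under an empirically chosen threshold. An illustrative C# sketch, with the threshold value purely made up and needing tuning against your own solver:

        // Rough difficulty filter for a 15-puzzle shuffle.
        // board has 16 cells in row-major order; 0 = blank, 1..15 = tiles.
        public static class PuzzleDifficulty
        {
            public static int ManhattanDistance(int[] board)
            {
                int total = 0;
                for (int i = 0; i < 16; i++)
                {
                    int tile = board[i];
                    if (tile == 0) continue;
                    int goal = tile - 1;                        // goal index of this tile
                    total += System.Math.Abs(i / 4 - goal / 4)  // row distance
                           + System.Math.Abs(i % 4 - goal % 4); // column distance
                }
                return total;
            }

            // Example gate: accept a shuffled board only if it looks "easy enough".
            // The threshold (30 here) is arbitrary and must be calibrated by timing
            // your own A* solver on sample boards.
            public static bool LooksPracticallySolvable(int[] board, int maxDistance = 30)
            {
                return ManhattanDistance(board) <= maxDistance;
            }
        }

    Another common trick with the same flavor is to generate puzzles by applying a bounded number of random legal moves backwards from the solved state, since the optimal solution length can never exceed the number of scrambling moves.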

    Read the article

  • What makes for a good JIRA workflow with a software development team?

    - by Hari Seldon
    I am migrating my team from a snarl of poorly managed Excel documents, individual checklists, and personal emails for managing our application issues and development tasks to a new JIRA project. My team and I are new to JIRA (and issue tracking software in general). My team is skeptical of the transition at best, so I am also trying not to scare them off by introducing something overly complex at the start. I understand one of JIRA's strengths to be the customized workflows that can be created for a project. I've looked over the JIRA documentation and a number of tutorials, and am comfortable with the how of creating workflows, but I need some contextual what to go along with it. What makes a particular workflow work well? What does a poorly designed workflow look like? What are the benefits/drawbacks of a strict workflow with very specific states and transitions, compared to a looser workflow with fewer, more broadly defined states and transitions?

    Read the article

  • Saving an interface instance into a Bundle

    - by user22241
    All, I have an interface that allows me to switch between different scenes in my Android game. When the home key is pressed, I am saving all of my states (score, sprite positions etc.) into a Bundle. When re-launching, I am restoring all my states and all is OK. However, I can't figure out how to save my 'Scene', so when I return, it always starts at the default screen, which is the 'Main Menu'. How would I go about saving my 'Scene' (into a Bundle)?

    Code

        import android.view.MotionEvent;

        public interface Scene{
            void render();
            void updateLogic();
            boolean onTouchEvent(MotionEvent event);
        }

    I assume the interface is the relevant piece of code, which is why I've posted that snippet. I set my scene like so ('options' is an object of my Options class, which extends MainMenu (another custom class) which, in turn, implements the interface 'Scene'):

        SceneManager.getInstance().setCurrentScene(options); //Current scene is optionscreen

    Read the article

  • New Science and Technology Centers

    NSF supports integrative partnerships that require large-scale, long-term funding to produce research and education of the highest quality.

    Read the article

  • Serious Forum Design Issue

    - by John
    I want to launch a school forum for a country. I'm in a dilemma as to how to design this forum. The country will have many states, each state will have cities, each city will have many schools, and each school will have many classes (1st - 12th). It looks like this: States - Cities - Schools - Classes. I want to make each class a forum. Now the biggest problems are two-fold:

        - If I were to list even 1000 schools, it would create a huge number of forums, and I don't think forum software (phpBB, MyBB) is designed to support that many.
        - Secondly, creating and maintaining huge forums is also very difficult. Ideally I'd like to duplicate an existing forum hierarchy into a new one.

    I've done a lot of searching and I've found that this type of forum is infeasible and such designs don't work, let alone the fact that nobody uses such a forum. Keeping this in mind, how should I go about designing this forum?

    Read the article

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that never happened to me before. I have OpenGL code that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and old cards are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, dumping all the OpenGL default states, and comparing them with the ones I initially get using the new card.)
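    One low-tech way to rule out "different defaults" (or state leaking between draws) is to stop relying on defaults at all and explicitly set every piece of state the shaders depend on at the start of each frame. A sketch of that idea, assuming OpenTK-style C# bindings (enum names vary a little between OpenTK versions, so treat the identifiers as approximate):

        using OpenTK.Graphics.OpenGL;

        // Explicitly establish the state the renderer relies on, rather than
        // trusting driver defaults that may differ between cards/drivers.
        public static class RenderState
        {
            public static void Apply()
            {
                GL.Enable(EnableCap.Blend);
                GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
                GL.BlendEquation(BlendEquationMode.FuncAdd);

                GL.Enable(EnableCap.DepthTest);
                GL.DepthFunc(DepthFunction.Lequal);

                GL.PixelStore(PixelStoreParameter.UnpackAlignment, 1); // texture uploads
            }
        }

    Comparing the output of glGetIntegerv/glGetFloatv for the states you care about on both cards, as suggested at the end of the question, is the complementary diagnostic.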

    Read the article

  • A sample Memento pattern: Is it correct?

    - by TheSilverBullet
    Following this query on the memento pattern, I have tried to put my understanding to the test. The memento pattern stands for three things:

        - Saving the state of the "memento" object for its successful retrieval
        - Carefully saving each valid "state" of the memento
        - Encapsulating the saved states from the change inducer so that each state remains unaltered

    Have I achieved these three with my design?

    Problem

    This is a zero-player game where the program is initialized with a particular set-up of chess pawns - the knight and queen. The program then needs to keep adding sets of pawns (knights and queens) so that each pawn is "safe" for the next one move of every other pawn. The condition is that either both pawns should be placed, or none of them should be placed. The chessboard with the largest number of non-conflicting knights and queens should be returned.

    Implementation

    I have 4 classes for this:

    protected ChessBoard (the Memento)
        private int [][] ChessBoard;
        public void ChessBoard();
        protected void SetChessBoard();
        protected void GetChessBoard(int);

    public Pawn (not related to the memento; it holds info about the pawns)
        public enum PawnType: int
        {
            Empty = 0,
            Queen = 1,
            Knight = 2,
        }
        //This returns a value that shows whether the pawn can be placed safely
        public bool IsSafeToAddPawn(PawnType);

    public CareTaker (this corresponds to the caretaker of the memento)
        This holds a double-dimensional integer array that keeps track of all states. The reason for having a 2D array is to keep track of how many states are stored and which state is currently active. An example:
            0 -2
            1 -1
            2  0    - This is the current state (second index 0)
            3  1    - This state has been saved, but has been undone
        private int [][]State;
        private ChessBoard [] MChessBoard;
        //This gets the chessboard at the position requested and assigns it to the originator
        public ChessBoard GetChessBoard(int);
        //This overwrites the chessboard at the given position
        public void SetChessBoard(ChessBoard, int);

    public PlayGame (this is the originator)
        private bool status;
        private ChessBoard oChessBoard;
        //This sets the state of the chessboard at the position specified
        public SetChessBoard(ChessBoard, int);
        //This gets the state of the chessboard at the position specified
        public ChessBoard GetChessBoard(int);
        //This function tries to place both pawns and returns the status of this attempt
        public bool PlacePawns(Pawn);
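    For comparison against the three responsibilities listed at the top, here is the generic shape of the pattern in compact C#. This is a reference sketch of the roles (the originator creates and restores mementos, the caretaker stores them without inspecting them, and the memento's state is immutable from outside), not a rewrite of the design above.

        using System.Collections.Generic;

        // Memento: immutable snapshot, opaque to everyone but the originator.
        public sealed class BoardMemento
        {
            private readonly int[][] _cells;
            internal BoardMemento(int[][] cells) { _cells = cells; }
            internal int[][] Cells { get { return _cells; } }
        }

        // Originator: the only class that reads or writes memento contents.
        public class Board
        {
            private int[][] _cells;

            public Board(int size)
            {
                _cells = new int[size][];
                for (int r = 0; r < size; r++) _cells[r] = new int[size];
            }

            public BoardMemento Save()
            {
                var copy = new int[_cells.Length][];
                for (int r = 0; r < _cells.Length; r++) copy[r] = (int[])_cells[r].Clone();
                return new BoardMemento(copy);
            }

            public void Restore(BoardMemento m)
            {
                for (int r = 0; r < _cells.Length; r++) _cells[r] = (int[])m.Cells[r].Clone();
            }
        }

        // Caretaker: stores mementos, never looks inside them.
        public class History
        {
            private readonly Stack<BoardMemento> _undo = new Stack<BoardMemento>();
            public void Push(BoardMemento m) { _undo.Push(m); }
            public BoardMemento Pop() { return _undo.Pop(); }
        }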

    Read the article
