Search Results

Search found 6431 results on 258 pages for 'cache invalidation'.


  • Invalid Opcode 0000

    - by Mr47
    At random times (usually when watching a movie in XBMC), the computer locks up. I can still sometimes SSH in and get the 'dmesg' output before that locks up too. A hard reboot is usually required to get things going again. I have cut out the date/time/server columns for easier reading; please do ask if these seem relevant omissions.

    System: Ubuntu 11.04 (2.6.38-8-server) x64, X11 with IceWm (and XBMC), Core 2 Duo E8400 @ 3.00GHz, 8 GB RAM, Asus P5Q Premium motherboard. Primary hard drive: OCZ Vertex 2 60 GB (SSD). Other hard drives: various 750GB, 1TB, 1.5TB & 2TB (WD & Samsung). Any important information I am not supplying is purely a sign of my incompetence in these matters, so please do ask and excuse me for my inabilities.

    invalid opcode: 0000 [#1] SMP
    last sysfs file: /sys/devices/system/cpu/cpu1/cache/index2/shared_cpu_map
    CPU 0
    Modules linked in: parport_pc ppdev vesafb snd_hda_codec_analog tuner_simple tuner_types wm8775 tda9887 tda8290 tea5767 tuner cx25840 ir_lirc_codec lirc_dev snd_hda_intel snd_hda_codec snd_hwdep snd_pcm ir_sony_decoder snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq rc_rc6_mce ivtv ir_jvc_decoder cx2341x i2c_algo_bit v4l2_common mceusb videodev ir_rc6_decoder ir_rc5_decoder snd_timer ir_nec_decoder nvidia(P) btusb bluetooth rc_core v4l2_compat_ioctl32 tveeprom snd_seq_device pata_marvell psmouse shpchp serio_raw snd asus_atk0110 soundcore snd_page_alloc lp parport firewire_ohci firewire_core crc_itu_t r8169 sky2 ahci libahci
    Pid: 4597, comm: xbmc.bin Tainted: P 2.6.38-8-server #42-Ubuntu System manufacturer P5Q Premium/P5Q Premium
    RIP: 0010:[<ffffffff8119bc4a>] [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510
    RSP: 0018:ffff88021f5a59d8 EFLAGS: 00210246
    RAX: 0000000000000020 RBX: ffff88021f5a5ac8 RCX: 0000000000000000
    RDX: 0000000000000001 RSI: 0000000015e36fe0 RDI: 0000000000000000
    RBP: ffff88021f5a5a98 R08: ffff88021f5a5ac8 R09: ffff88021f5a5b38
    R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
    R13: 000000000000b148 R14: 0000000000000001 R15: ffff8802067034b8
    FS: 00007f3f34eb1700(0000) GS:ffff8800cfc00000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 00007feb8d515000 CR3: 000000021d744000 CR4: 00000000000406f0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Process xbmc.bin (pid: 4597, threadinfo ffff88021f5a4000, task ffff88021f63db80)
    Stack: ffff88021f5a5a28 ffffffff8116019d ffff88021f5a5b40 0000002000000003 ffff88021f5a5b38 ffff880206703370 ffffea0006d800f8 0000000000000000 ffffea0006d800f8 0000000c811270b5 ffff88021f5a5a68 ffffffff8110bdba
    Call Trace:
    [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130
    [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160
    [<ffffffff8119c232>] mpage_readpages+0x102/0x150
    [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20
    [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20
    [<ffffffff81149475>] ? alloc_pages_current+0xa5/0x110
    [<ffffffff8120157d>] ext4_readpages+0x1d/0x20
    [<ffffffff81116a9b>] __do_page_cache_readahead+0x14b/0x220
    [<ffffffff81116ed1>] ra_submit+0x21/0x30
    [<ffffffff81116ff5>] ondemand_readahead+0x115/0x230
    [<ffffffff811171a0>] page_cache_async_readahead+0x90/0xc0
    [<ffffffff8110b184>] ? file_read_actor+0xd4/0x170
    [<ffffffff812de72e>] ? radix_tree_lookup_slot+0xe/0x10
    [<ffffffff8110c521>] do_generic_file_read.clone.23+0x271/0x450
    [<ffffffff8110d1ba>] generic_file_aio_read+0x1ca/0x240
    [<ffffffff8100a82e>] ? __switch_to+0x20e/0x2f0
    [<ffffffff81164c82>] do_sync_read+0xd2/0x110
    [<ffffffff8108b61c>] ? hrtimer_try_to_cancel+0x4c/0xe0
    [<ffffffff81279083>] ? security_file_permission+0x93/0xb0
    [<ffffffff81164fa1>] ? rw_verify_area+0x61/0xf0
    [<ffffffff81165463>] vfs_read+0xc3/0x180
    [<ffffffff81165571>] sys_read+0x51/0x90
    [<ffffffff8100bfc2>] system_call_fastpath+0x16/0x1b
    Code: ff ff 48 c7 85 78 ff ff ff 00 00 00 00 49 d3 ee b9 0c 00 00 00 2b 4d 8c 48 8b b2 c8 00 00 00 ba 01 00 00 00 41 0f af c6 49 d3 e5 <0f> 36 4d 8c 4c 01 e8 d3 e2 4c 8d 44 16 ff 48 8b 53 20 49 d3 f8
    RIP [<ffffffff8119bc4a>] do_mpage_readpage+0x9a/0x510 RSP <ffff88021f5a59d8>
    ---[ end trace ac6cd2f4692205a3 ]---

    Please note that the error is ALWAYS occurring at do_mpage_readpage+0x9a/0x510, with the same numbers after it. I've tried to come up with the possible meaning of these, but couldn't get any further. I've also noticed that the top block of the call trace is always the following, with the exact same numbers:

    [<ffffffff8116019d>] ? mem_cgroup_cache_charge+0xed/0x130
    [<ffffffff8110bdba>] ? add_to_page_cache_locked+0xea/0x160
    [<ffffffff8119c232>] mpage_readpages+0x102/0x150
    [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20
    [<ffffffff812063e0>] ? ext4_get_block+0x0/0x20

    Could this indicate a hard drive issue, a RAM issue or something else entirely?

    Read the article

  • Oracle’s Web Experience Management

    - by Christie Flanagan
    Today’s guest post on Oracle’s Web Experience Management comes from a member of our WebCenter Evangelist team, Noël Jaffré, a Principal Technologist based in France.

    Oracle’s Web Experience Management (WEM) solution enables organizations to optimize the online channel for driving marketing and customer experience management success. It empowers business users to manage the web presence and create rich and engaging online experiences for customers and prospects. Oracle's WEM platform provides a framework to simplify the integration of Oracle, third-party and custom-built applications. This framework essentially allows the creation and integration of applications using one single business interface called the WEM interface. It includes the following:

    - Single sign-on access control for all integrated applications, using the Central Authentication Service (CAS) component.
    - A single centralized administration window for user, role, and native application management, including site management, community server management, gadget server management, and management for partner-integrated technologies.
    - A Representational State Transfer (REST) API for accessing WebCenter Sites data. REST services are supported on both Oracle WebCenter Sites and Oracle WebCenter Sites Satellite Server to leverage the Satellite Server cache. All REST requests are cached for web-consuming applications, as well as for the high-performance delivery of native applications on the mobile channel.

    Oracle WebCenter Sites’ Web Experience Management environment enables organizations to deliver a compelling online experience to customers by simplifying the deployment and management of sophisticated and engaging websites. The WebCenter Sites platform automates the entire process of managing web content, including:

    - Authoring: Business users can easily contribute and manage web content in real time, with intuitive interfaces and drag-and-drop content authoring and layout capabilities designed for the non-technical user.
    - Contextual Content Targeting: Marketers are empowered to create and manage targeted campaigns with relevant recommendations and promotions based on the context of the visitor's session, such as his or her navigation history, user profile, language, location or other information shared during the visit.
    - Content Publishing and Deployment: It offers advanced multi-site management capabilities for departmental or regional sites, as well as strong multi-lingual and multi-locale content management. The remote Satellite Server caching infrastructure provides high-performance, distributed caching, tuned to deliver high-volume, targeted and multi-lingual sites.
    - Analytics and Optimization: Business users and marketers can measure the effectiveness of their online content and campaigns at a granular level. Editors and marketers can immediately determine whether a given article or promotion is relevant to a particular customer segment.
    - User-generated Content: Marketers can enable blogs, comments, ratings and reviews on the website. All comments and reviews posted to the website can be moderated from the administrator interface, either manually or automatically using filters, whitelists, blacklists or community-based moderation.
    - Personalized Gadget Dashboards: Site managers can deploy gadgets (small applications using web content) individually or as part of dashboards containing multiple gadgets. These gadget dashboards enable site visitors to create their own “MyPage” on a given site, where they can select and customize the gadgets that the site administrator has made available. Any gadget that conforms to the iGoogle/OpenSocial standard can be made available to site visitors, or gadgets can be created within the WEM interface.

    Oracle's WEM platform also provides a unique environment for delivering a rich, multichannel online experience to site visitors through its advanced management modules for mobile. With Oracle’s WEM solution, it’s easy to control branding and deliver a consistent message while repurposing web content for publication to mobile devices, kiosks and much more. This distinctive approach provides:

    - HTML5 Delivery: HTML5 delivery includes native support for adaptive design that responds to the user’s screen resolution and orientation. The approach is less driven by the particular hardware and more driven by the user’s interactions with the device. In other words, it takes both screen interactions (cursor or touch) and screen sizes and orientation into consideration.
    - A Unique Native Mobile Extension Environment for Contributors: From the WEM interface, a contributor can directly manage the mobile channel using the tooling already in place for driving the traditional web presence. This includes the mobile presentation, as well as mobile InSite editing, drag-and-drop page layout, and in-context recommendations and personalization.
    - Optimized REST APIs for High-Performance Content Delivery on Native Mobile Device Applications: The WebCenter Sites REST API uses the underlying HTTP methods (GET, POST, PUT, DELETE) to interact with resources. Resources support two types of input and output formats -- XML and JSON. REST calls are customizable to optimize the interactions between the content repositories and the client applications. Caching is essential to decrease network load and improve the overall reliability and usability of the applications; REST results are cached through the highly efficient Oracle WebCenter Sites caching architecture.
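
    To make the REST surface concrete, here is a sketch of a read request against the API described above. The host, port, site name, credentials and the /cs/REST resource layout are illustrative assumptions, not taken from the post; check the WebCenter Sites REST documentation for the exact paths and authentication scheme.

        # Hypothetical example: fetch metadata for a site as JSON.
        # Host, port, site name and credentials are placeholders.
        curl -u fwadmin:password \
             -H "Accept: application/json" \
             "http://sites.example.com:8080/cs/REST/sites/FirstSiteII"

    The same resource could be requested with "Accept: application/xml" to get the XML representation mentioned above.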

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspx

    Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change with the app. At this point, I will find some good way to get community feedback. I want to break all this down screen by screen.

    We'll start with the screen I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 for last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements.

    Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache these results in my database and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to limit database growth.

    The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a foursquare model: you are checked out after 3 hours or when you check in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later.

    Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this, I came up with the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I had come up with it about two months earlier...

    When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time. Most of this became apparent when I started implementing the achievements. The achievements based on count and specific person went fairly easily. The problem came with tying them to locations and events. This took some true SQL kung fu. It also exposed the rookie mistake of putting CST, not UTC, in the database.

    Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements; I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection. Some achievements required GPS levels of location accuracy in areas with only network triangulation. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to get to the higher levels.

    Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.
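
    The CST-versus-UTC lesson is worth a concrete illustration: store timestamps in UTC and convert only at the edges. A minimal T-SQL sketch under that rule (the table and column names are invented for the example, not from the actual Krewe API):

        -- Hypothetical schema: persist event times in UTC only.
        CREATE TABLE Events (
            EventId  INT IDENTITY PRIMARY KEY,
            Title    NVARCHAR(100) NOT NULL,
            StartUtc DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()  -- never GETDATE()
        );

        -- Convert for display at query time (AT TIME ZONE needs SQL Server 2016+):
        SELECT EventId, Title,
               StartUtc AT TIME ZONE 'UTC' AT TIME ZONE 'Central Standard Time' AS StartCentral
        FROM Events;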

    Read the article

  • BIND9 server not responding to external queries

    - by Twitchy
    I have set up a BIND server on my dedicated box which I want to host a nameserver for my domain on. When I use dig @202.169.196.59 nzserver.co.nz locally on the server I get the following response...

    ; <<>> DiG 9.8.1-P1 <<>> @202.169.196.59 nzserver.co.nz
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43773
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

    ;; QUESTION SECTION:
    ;nzserver.co.nz.            IN  A

    ;; ANSWER SECTION:
    nzserver.co.nz.     3600    IN  A   202.169.196.59

    ;; AUTHORITY SECTION:
    nzserver.co.nz.     3600    IN  NS  ns2.nzserver.co.nz.
    nzserver.co.nz.     3600    IN  NS  ns1.nzserver.co.nz.

    ;; ADDITIONAL SECTION:
    ns1.nzserver.co.nz. 3600    IN  A   202.169.196.59
    ns2.nzserver.co.nz. 3600    IN  A   202.169.196.59

    ;; Query time: 0 msec
    ;; SERVER: 202.169.196.59#53(202.169.196.59)
    ;; WHEN: Sat Oct 27 15:40:45 2012
    ;; MSG SIZE rcvd: 116

    Which is good, and is the output I want. But when simply using dig nzserver.co.nz I get...

    ; <<>> DiG 9.8.1-P1 <<>> nzserver.co.nz
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16970
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;nzserver.co.nz.            IN  A

    ;; Query time: 308 msec
    ;; SERVER: 202.169.192.61#53(202.169.192.61)
    ;; WHEN: Sat Oct 27 17:09:12 2012
    ;; MSG SIZE rcvd: 32

    And if I use dig @202.169.196.59 nzserver.co.nz on another Linux machine I get...

    ; <<>> DiG 9.7.3 <<>> @202.169.196.59 nzserver.co.nz
    ; (1 server found)
    ;; global options: +cmd
    ;; connection timed out; no servers could be reached

    Am I doing something wrong here? Port 53 is definitely open.

    /etc/bind/named.conf.options

    options {
        directory "/var/cache/bind";
        forwarders {
            202.169.192.61;
            202.169.206.10;
        };
        listen-on {
            202.169.196.59;
        };
    };

    /etc/bind/named.conf.local

    zone "nzserver.co.nz" {
        type master;
        file "/etc/bind/nzserver.co.nz.zone";
    };

    /etc/bind/nzserver.co.nz.zone

    ; BIND db file for nzserver.co.nz
    $ORIGIN nzserver.co.nz.
    @       IN  SOA ns1.nzserver.co.nz. mr.steven.french.gmail.com. (
                2012102606
                28800
                7200
                864000
                3600 )
            NS  ns1.nzserver.co.nz.
            NS  ns2.nzserver.co.nz.
            MX  10 mail.nzserver.co.nz.
    @       IN  A   202.169.196.59
    *       IN  A   202.169.196.59
    ns1     IN  A   202.169.196.59
    ns2     IN  A   202.169.196.59
    www     IN  A   202.169.196.59
    mail    IN  A   202.169.196.59
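
    One configuration detail worth ruling out (an educated guess, not a confirmed diagnosis): with listen-on restricted to a single address and no allow-query statement, it is easy for either the interface binding or a firewall to silently drop outside queries. A sketch of an options block for a public authoritative server:

        // /etc/bind/named.conf.options -- sketch only, adjust before use.
        options {
            directory "/var/cache/bind";

            // Answer on loopback and the public interface:
            listen-on { 127.0.0.1; 202.169.196.59; };

            // Authoritative data may be queried by anyone; don't recurse for strangers:
            allow-query { any; };
            recursion no;
        };

    Testing from outside with dig +tcp @202.169.196.59 nzserver.co.nz can also help separate a UDP-only firewall problem from a BIND configuration problem.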

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive, in other words "Do this" not "Don't do that". Sometimes it's clearer to focus on what *not* to do. Popular development processes often start with screen mockups or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start.

    Start with the Data
    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or international?), the security requirements, whether it needs ACID compliance, how it will be searched, indexed and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements.

    Next Is Data Management
    With the data defined, we can move on to how to store it. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or whether we can choose another method based on the requirements for the data. And breaking another pattern: it's OK to store data more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data.

    Move to Data Transport
    How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing.

    Finally, Data Processing
    Most RDBMS engines, NoSQL engines, and certainly Big Data engines not only store data but can process and manipulate it as well. It's doubtful that you'll calculate that phone number, right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used.

    Sure, We Need a Front-End at Some Point
    Not all data is entered by human hands; in fact, most data isn't. We don't really need a Graphical User Interface (GUI); we need some way for a GUI to get data into and out of the systems listed earlier. But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? It's usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out. Instead, focus on the data, its transport and processing. Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best.

    References
    - Microsoft Architecture Journal: http://msdn.microsoft.com/en-us/architecture/bb410935.aspx
    - Patterns and Practices: http://msdn.microsoft.com/en-us/library/ff921345.aspx
    - Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/
    - Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • View Docs and PDFs Directly in Google Chrome

    - by Matthew Guay
    Would you like to view documents, presentations, and PDFs directly in Google Chrome? Here's a handy extension that makes Google Docs your default online viewer, so you don't have to download files first.

    Getting Started
    By default, when you come across a PDF or other common document file online in Google Chrome, you'll have to download the file and open it in a separate application. It'd be much easier to simply view online documents directly in Chrome. To do this, head over to the Docs PDF/PowerPoint Viewer page on the Chrome Extensions site (link below), and click Install to add it to your browser. Click Install again to confirm that you want to install this extension. Extensions don't run by default in Incognito mode, so if you'd like to always view documents directly in Chrome, open the Extensions page and check "Allow this extension to run in incognito".

    Now, when you click a link to a document online, such as a .docx file from Word, it will open in the Google Docs viewer. These documents usually render at their original full quality. You can zoom in and out to see exactly what you want, or search within the document. Or, if it doesn't look correct, you can click the Download link in the top left to save the original document to your computer and open it in Office. Even complex PDFs render very nicely. Do note that Docs will keep downloading the document as you're reading it, so if you jump to the middle of a document it may look blurry at first but will quickly clear up. You can even view famous presentations online without opening them in PowerPoint. Note that this will only display the slides themselves, but if you're looking for information you likely don't need the slideshow effects anyway.

    Adobe Reader Conflicts
    If you already have Adobe Acrobat or Adobe Reader installed on your computer, PDF files may open with the Adobe plugin. If you'd prefer to read your PDFs with the Docs PDF Viewer, then you need to disable the Adobe plugin. Enter the following in your Address Bar to open your Chrome Plugins page: chrome://plugins/ and then click Disable underneath the Adobe Acrobat plugin. Now your PDFs will always open with the Docs viewer instead.

    Performance
    Who hasn't been frustrated by clicking a link to a PDF file, only to have the browser pause for several minutes while Adobe Reader struggles to download and display the file? Google Chrome's default behavior of simply downloading the files and letting you open them is hardly more helpful. This extension takes away both of these problems, since it renders the documents on Google's servers. Most documents opened fairly quickly in our tests, and we were able to read large PDFs only seconds after clicking their link. Also, the Google Docs viewer rendered the documents much better than the HTML version in Google's cache. Google Docs did seem to have problems with some files, and we saw error messages on several documents we tried to open. If you encounter this, click the Download link in the top left corner to download the file and view it from your desktop instead.

    Conclusion
    Google Docs has improved over the years, and now it offers fairly good rendering even of more complex documents. This extension can make your browsing easier, and help documents and PDFs feel more like part of the Internet. And, since the documents are rendered on Google's servers, it's often faster to preview large files than to download them to your computer.
    Link: Download the Docs PDF/PowerPoint Viewer extension from Google

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database, it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here:

    New Self-Healing Replication Clusters
    The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.

    New Replication Administration and Failover Utilities
    As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or automatic failover to the most up-to-date slave (that is the default, but it is user-configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering-related blogs, are discussed in detail in the DevZone article noted above.

    Better Query Optimization and Throughput
    The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements:

    - Subquery optimizations: subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work.
    - CURRENT_TIMESTAMP as default for DATETIME columns: for simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default.
    - Optimizations for range-based queries: the Optimizer now uses ready statistics vs. index-based scans for queries with multiple range values.
    - Optimizations for queries using filesort and ORDER BY: the optimization criteria/decision on execution method is now made at the optimization vs. parsing stage.
    - EXPLAIN output in JSON format, for hierarchical readability and Enterprise tool consumption.

    You can learn the details about these new features, as well as all of the Optimizer-based improvements in MySQL 5.6, by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here.

    Also New in MySQL Labs
    As has become our tradition when announcing DMRs, we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs. Today is no exception, as we are also releasing the following to Labs for you to download, try, and let us know your thoughts on where we need to improve:

    InnoDB Online Operations
    MySQL 5.6 now provides online ADD INDEX, FK DROP and online column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog.

    InnoDB Data Access via Memcached API ("NotOnlySQL") - improved refresh of an earlier feature release
    Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.

    Improved Transactional Performance and Scale
    The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 Labs release. InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing:

    - 2x throughput improvement for read-only activity
    - 6x throughput improvement for SELECT range
    - Read/write benchmarks are in progress

    More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements, and about related fixes to mysys mutex and hash sort, by checking out the InnoDB team blog.

    MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access Labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!
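
    For readers who want to see the new JSON EXPLAIN output in action, the syntax is a one-word addition; the table and query below are invented for illustration:

        -- New in MySQL 5.6: hierarchical, machine-readable query plans.
        EXPLAIN FORMAT=JSON
        SELECT o.order_id, o.total
        FROM orders AS o
        WHERE o.customer_id IN
              (SELECT customer_id FROM customers WHERE region = 'EMEA');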

    Read the article

  • So you want a French Site?

    - by juanlarios
    I thought I would write a quick write-up of how to create a French site in SharePoint 2007. I'm not talking about a Variation, but just a plain French site from the ground up. There were some gotchas that I felt were worth blogging about.

    First: Go to the Microsoft TechNet article and follow the install instructions. Make sure that when you get to the download page you select "French" in the drop-down, and that you download and install the right language pack. I noticed that if you did not click the "change" button, even though I had selected the French language pack, it reverted back to the English language pack.

    Second: You will notice a couple of things. When you go to Central Admin you can now pick between a French or English site. You will get this for any other language packs you install, and they will all be listed in the drop-down. You will also notice that you now have French headings and French listings of sites. You see "Publishing" as a heading because I have a custom site definition that I deployed as a French site.

    Third: As you start navigating around and trying to create document libraries or sites, you will start getting errors like the following: "Cannot make a cache safe URL for "SelectorControls.js", file not found. Please verify that the file exists under the layouts directory. Troubleshoot issues with Windows SharePoint Services." Once you resolve the issue with this js file, you will find that there are other js files missing. The only problem is that if you are not fluent in French, or whatever language you are trying to deploy, you'll have a tough time understanding the error messages, as they will all be in the new language.

    So let's just talk about what happened when you installed the language pack. In the 12 hive, under 12/Template, you will now see a 1033 folder and a 1036 folder. The 1036 folder is the one created by the language pack. What the above error is saying is that, now that SharePoint is looking at the 1036 folder, some files are missing. The nice thing is that these files are included in the 1033 folder (the English language pack). Simply copy the controls from the one folder to the other. There will be more than one conflict, so you will have to move several controls over. I can't remember how many; simply add them as error messages come up. I had to add some navigation controls and some content selectors.

    That's all you need to install the French language pack and create site collections that are entirely in another language. Do not mistake this for Variations, where you can have multiple language sites. For those of you doing a little bit extra with this, let me share what I was doing and what I needed to get it working. I had a custom site definition which was obviously not showing up in my selection of French sites. I was under the impression that all English sites would show up in French, and that the sites were simply routed to a new resource file for French content. That is the case, but there is a little extra that needs to be done if you have a custom site definition deployed:

    First: Under 12/Template/1033/XML there is a listing of site definition files that are deployed to the English side of things. If you navigate to 12/Template/1036/XML and open one of the site definitions, you will see that they are similar and reference the existing site definitions installed on the server, except that they have some French added to the descriptions and names. Simply copy the XML file of your custom template to the 1036 folder (a sketch of the copy appears at the end of this post) to have it show up as a selection when you pick French as the drop-down entry while creating a site collection. You can go ahead and change the description and name to suit the language it's under.

    Second: As part of my site definition, I packaged up several list templates that were saved as STP files. When you navigate to the list template listing, the templates are for English sites, not French, so I could not create document libraries based on them. What now? Well, here comes KWizCom to the rescue! They have put out an "STP language converter" where you can take a site template or list template and convert it to any target language you are after. It's a free download; use it and you're good to go. One thing I will mention: when I converted the English documents I went ahead and converted them to French-Canadian. And it didn't work! I finally figured out that the French version the French site was expecting was "French-France". I don't know why that is; it's just what needs to be done to get it working. When I did that, I was able to use the list templates that I created in the English site for the French site.

    Hope it helps, good luck!
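
    For the site-definition step above, the operation is just a file copy between locale folders; a sketch from a command prompt on the SharePoint server ("WEBTEMPCustom.XML" is a hypothetical name, so use your own definition's file):

        rem Sketch: expose a custom site definition to the French (1036) locale.
        cd "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE"
        copy "1033\XML\WEBTEMPCustom.XML" "1036\XML\WEBTEMPCustom.XML"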

    Read the article

  • IBM Keynote: (hardware,software)–>{IBM.java.patterns}

    - by Janice J. Heiss
    On Sunday evening, September 30, 2012, Jason McGee, IBM Distinguished Engineer and Chief Architect for Cloud Computing, along with John Duimovich, IBM Distinguished Engineer and Java CTO, gave an information- and idea-rich keynote that left Java developers with much to ponder. Their focus was on the challenges of making Java more efficient and productive given the hardware and software environments of 2012.

    “One idea that is very interesting is the idea of multi-tenancy,” said McGee, “and how we can move up the spectrum. In traditional systems, we ran applications on dedicated middleware, operating systems and hardware. A lot of customers still run that way. Now people introduce hardware virtualization and share the hardware. That is good, but there is a lot more we can do. We can share middleware and the application itself.” McGee challenged developers to better enable the Java language to function in these higher-density models. He spoke about the need to describe patterns that help us grasp the full environment that an application needs, whether it's a web or full enterprise application. Developers need to understand the resources that an application interacts with in a way that is simple and straightforward. The task is then to automate that deployment so that the complexity of infrastructure can be bypassed and developers can live in a simpler world where the cloud automatically configures the needed environment.

    McGee argued that the key, something IBM has been working on, is to use a simpler pattern that allows a cloud-based architecture to embrace the entire infrastructure required for an application and make it highly available, scalable and able to recover from failure. The cloud-based architecture would automate the complexity of setting up and managing the infrastructure. IBM has been trying to realize this vision for customers so they can describe their Java application environment simply and allow the cloud to automate the deployment and management of applications. “The point,” explained McGee, “is to package the executable used to describe applications, to drop it into a shared system and let that system provide some intelligence about how to deploy and manage those applications.”

    John Duimovich on Improvements in Java
    McGee then brought onstage IBM's Distinguished Engineer and CTO for Java, John Duimovich, who showed the audience ways to deploy Java applications more efficiently. Duimovich explained: “When you run lots of copies of Java in the cloud or any hypervisor-virtualized system, there is a lot of duplication of code and jar files. IBM has a facility called ‘shared classes’ where we put shared code and read-only artifacts in a cache that is sharable across hypervisors.” By putting JIT-compiled code in ahead of time, he explained, the application server will use 20% less memory and operate 30% faster. He described another example of how the JVM allows for the maximum amount of sharing, managing tenants, file sockets and memory use through throttling and control. Duimovich touched on the “thin is in” model and IBM's Liberty Profile, a lightweight runtime for the cloud, which allows for greater efficiency in interacting with the cloud.

    Duimovich also discussed the confusion Java developers experience when, for example, the hypervisor tells them that they have 8, then 4, then 16 cores. “Because hypervisors are virtualized, they can change based on resource needs across the hypervisor layer. You may have 10 instances of an operating system and you may need to reallocate memory,” explained Duimovich. He showed how to resize LPARs, reallocate CPUs and migrate applications as needed, and explained how application servers can resize thread pools and better use resources based on information from the hypervisors.

    Java Challenges in Hardware and Software
    McGee ended the keynote with a summary of upcoming hardware and software challenges for the Java platform. He noted that one reason developers love Java is that it allows them to ignore differences in hardware. He stated that the most important things happening in hardware were in network and storage: developments such as the speed of SSD, the exploitation of high-speed, low-latency networking, and recent advances such as storage-class memory and non-volatile main memory. “So we are challenged to maintain the benefits of Java and the abstraction it provides from hardware while still exploiting the new innovations in hardware,” said McGee.

    McGee discussed transactional messaging applications, where developers transactionally persist messages to storage, traditionally done by backing messages to spinning disks, an approach that is now mostly outdated. “Now,” he pointed out, “we would use SSD and store it in Flash and get 70,000 messages a second. If we stored it using a PCI Express-based flash memory device, it is still Flash but put on a PCI Express bus on a card closer to the CPU. This way I get 300,000 messages a second and a 25% improvement in latency.” McGee's central point was that hardware has a huge impact on the performance and scalability of applications. New technologies are enabling developers to build classes of Java applications previously unheard of. “We need to be able to balance these things in Java; we need to maintain the abstraction but also be able to exploit the evolution of hardware technology,” said McGee.

    According to McGee, IBM's current focus is on systems wherein hardware and software are shipped together in what are called Expert Integrated Systems: systems that are pre-optimized and pre-integrated. McGee closed IBM's engaging and thought-provoking keynote by pointing out that the use of Java in complex applications is increasingly being augmented by a host of other languages with strong communities around them, such as JavaScript, JRuby, Scala and Python. Java developers now must understand the strengths and weaknesses of such newcomers, as applications increasingly involve a complex interconnection of languages.
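
    On IBM's J9 JVM, the "shared classes" cache Duimovich described is enabled from the command line; a minimal sketch, in which the cache name, size and jar names are arbitrary examples:

        # Two JVMs sharing one class cache on IBM J9:
        java -Xshareclasses:name=appcache -Xscmx64m -jar server1.jar
        java -Xshareclasses:name=appcache -Xscmx64m -jar server2.jar

        # Inspect what the cache holds:
        java -Xshareclasses:name=appcache,printStats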

    Read the article

  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me that I have been meaning to write a post on WCC trace sections for a while.

    - searchcache - Tells you if your query was found in the WCC search cache.
    - searchquery - Shows the processing of the query as it is converted from what the user submitted to the end query that will be sent to the database. Shows the conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use.
    - services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you what filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this:

      services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered

      When a WCC add-on or custom component uses a filter you will see trace output like this:

      services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinData
      services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData

      As you can see from this sample output, it is possible to have multiple code points using the same filter.
    - systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had the problem database call, but since it prints the database call after it is executed, we were actually looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock.
    - socketrequests (verbose) - Dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk on the response coming back from WCC; for debugging, disable the gzip of the WCC response. Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call:

      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected]
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding=
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen
      socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end.
    - userstorage, jps - Provide trace details for user authentication and authorization, including information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server.

    The WCC developers decide when to use the verbose option for their trace output, so sometimes you need to try verbose to see what different information you get. One thing I would always have liked to see is the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled, which can quickly fill up your trace files with a lot of information if you have the socketrequests trace section turned on.
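
    Trace sections are normally toggled at runtime from the System Audit Information page, but they can also be pinned across restarts in the server's config file. Something like the following in intradoc.cfg should work; I am writing these entry names from memory, so treat them as assumptions and verify them against the documentation for your release:

        # intradoc.cfg -- persist trace settings across restarts.
        # Entry names from memory; verify for your WCC version before relying on them.
        TraceSectionsList=searchquery,services,systemdatabase
        TraceIsVerbose=1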

    Read the article

  • Certify August Updates

    - by Sadia2
    We have added some release and platform certifications to MOS Certify.

    - Applications: Oracle Demantra Demand Management 7.3.1.5, Oracle Demantra Predictive Trade Planning 7.3.1.5, Oracle Demantra Sales and Operations Planning 7.3.1.5
    - Database: Oracle Database Client 12.1.0.1.0, 11.2.0.4.0, Oracle Clusterware 11.2.0.4.0, Oracle Database 11.2.0.4.0, Oracle Real Application Clusters 11.2.0.4.0
    - E-Business Suite: Oracle E-Business Suite 12.1.3, Oracle E-Business Suite 12.1.2, Oracle E-Business Suite 12.1.1, Oracle E-Business Suite 12.0.6, Oracle E-Business Suite 11.5.10.2
    - Edge Applications: Oracle Transportation Management 6.3.2
    - Enterprise Manager: Oracle Application Management Pack for Oracle E-Business Suite 12.1.0.1.0
    - Fusion Middleware: Discoverer Administrator 11.1.1.6.0, Discoverer Desktop 11.1.1.6.0, Forms Builder 11.1.1.6.0, Oracle Application Development Framework 11.1.1.6.0, Oracle Application Development Runtime 11.1.1.6.0, Oracle Business Intelligence Publisher 11.1.1.6.0, Oracle Directory Services Manager 11.1.1.6.0, Oracle Forms 11.1.1.6.0, Oracle GoldenGate 11.1.1.1.0, 11.1.1.1.2, 11.1.1.1.1, Oracle GoldenGate Application Adapters 11.1.1.1.1, Oracle Identity and Access Management 11.1.2.0.0, 11.1.2.1.0, Oracle Identity Federation 11.1.1.6.0, Oracle Real-Time Decision Load Generator 11.1.1.7.0, Oracle Real-Time Decision Studio 11.1.1.7.0, Oracle Real-Time Decisions 11.1.1.6.0, Oracle Reports 11.1.1.6.0, Oracle Segmentation Server 11.1.1.6.0, Oracle Virtual Directory 11.1.1.6.0, Oracle Web Cache 11.1.1.6.0, Oracle WebCenter Content Imaging 11.1.1.8.0, Oracle WebCenter Content Inbound Refinery Server 11.1.1.8.0, Oracle WebCenter Content Records 11.1.1.8.0, Oracle WebCenter Content Rights 11.1.1.8.0, Oracle WebCenter Content UI 11.1.1.8.0, Oracle WebCenter Enterprise Capture 11.1.1.8.0, Oracle WebCenter Portal 11.1.1.8.0, Oracle WebCenter Sites 11.1.1.8.0, Oracle WebCenter Sites: CIP for EMC Documentum 11.1.1.8.0, Oracle WebCenter Sites: CIP for File Systems and MS SharePoint 11.1.1.8.0, Oracle WebCenter Sites: Community-Gadgets 11.1.1.8.0, Oracle WebCenter Sites: Explorer 11.1.1.8.0, Oracle WebCenter Sites: Developer Tools 11.1.1.8.0, Oracle WebCenter Universal Content Management 11.1.1.8.0, Reports Builder 11.1.1.6.0
    - FSGBU Insurance Group: Oracle Health Insurance Claims 2.13.3.0.0, 2.13.2.0.0, 2.13.1.0.0
    - JD Edwards EnterpriseOne: JD Edwards EnterpriseOne Tools 9.1.3.0, 9.1.2.0, 9.1.0.0
    - JD Edwards World: JD Edwards World Service Enablement A93SE, A931SE
    - PeopleSoft: PeopleSoft PeopleTools 8.52
    - Siebel Enterprise: Siebel Application Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel CRM Desktop Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Database Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel HI Web Client 8.2.2.2.0, 8.1.1.9.0, Siebel Gateway Server 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Outlook Add-in Client 8.2.2.2.0, Siebel Remote Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Tools Client 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0, Siebel Web Server Extension 8.2.2.4.0, 8.2.2.3.0, 8.2.2.2.0, 8.1.1.11.0, 8.1.1.10.0, 8.1.1.9.0
{mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;}

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 5

    - by MarkPearl
    Learning Outcomes
    - Describe the operation of a memory cell
    - Explain the difference between DRAM and SRAM
    - Discuss the different types of ROM
    - Explain the concepts of a hard failure and a soft error respectively
    - Describe SDRAM organization

    Semiconductor Main Memory
    The two traditional forms of RAM used in computers are DRAM and SRAM.

    DRAM (Dynamic RAM)
    Dynamic RAM is made with cells that store data as charge on capacitors. The presence or absence of charge in a capacitor is interpreted as a binary 1 or 0. Because capacitors have a natural tendency to discharge, dynamic RAM requires periodic charge refreshing to maintain data storage. The term dynamic refers to the tendency of the stored charge to leak away, even with power continuously applied. Although the DRAM cell is used to store a single bit (0 or 1), it is essentially an analogue device: the capacitor can store any charge value within a range, and a threshold value determines whether the charge is interpreted as a 1 or 0.

    SRAM (Static RAM)
    SRAM is a digital device that uses the same logic elements used in the processor. In SRAM, binary values are stored using traditional flip-flop logic configurations. SRAM will hold its data as long as power is supplied to it. Unlike DRAM, no refresh is required to retain data.

    SRAM vs. DRAM
    DRAM is simpler and smaller than SRAM, so it is denser and less expensive. The cost of the refreshing circuitry for DRAM needs to be considered, but if the machine requires a large amount of memory, DRAM turns out to be cheaper than SRAM. SRAMs are somewhat faster than DRAM, so SRAM is generally used for cache memory and DRAM for main memory.

    Types of ROM
    Read Only Memory (ROM) contains a permanent pattern of data that cannot be changed. ROM is non-volatile, meaning no power source is required to maintain the bit values in memory. While it is possible to read a ROM, it is not possible to write new data into it. An important application of ROM is microprogramming; other applications include library subroutines for frequently wanted functions, system programs, and function tables. A ROM is created like any other integrated circuit chip, with the data actually wired into the chip as part of the fabrication process. To reduce fabrication costs, we have PROMs. PROMs are:
    - Written only once
    - Non-volatile
    - Written after fabrication

    Another variation of ROM is the read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations, but for which non-volatile storage is required. There are three common forms of read-mostly memory, namely EPROM, EEPROM and flash memory.

    Error Correction
    Semiconductor memory is subject to errors, which can be classed into two categories:
    - Hard failure - a permanent physical defect such that the memory cell or cells cannot reliably store data.
    - Soft error - a random error that alters the contents of one or more memory cells without damaging the memory (common causes include power supply issues, etc.).

    Most modern main memory systems include logic for both detecting and correcting errors. Error detection works as follows:
    - When data is to be read into memory, a calculation is performed on the data to produce a code
    - Both the code and the data are stored
    - When the previously stored word is read out, the code is used to detect and possibly correct errors

    The error checking provides one of 3 possible results:
    - No errors are detected; the fetched data bits are sent out
    - An error is detected, and it is possible to correct it; the data bits plus error correction bits are fed into a corrector, which produces a corrected set of bits to be sent out
    - An error is detected, but it is not possible to correct it; this condition is reported

    Hamming Code
    See the wiki for a detailed explanation. We will probably need to know how to do a Hamming code - refer to the textbook (pp. 188-189); a small worked example in C appears at the end of this post.

    Advanced DRAM Organization
    One of the most critical system bottlenecks when using high-performance processors is the interface to main memory; this interface is the most important pathway in the entire computer system. The basic building block of main memory remains the DRAM chip. In recent years a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market, including SDRAM (Synchronous DRAM), DDR-SDRAM and RDRAM.

    SDRAM (Synchronous DRAM)
    - SDRAM exchanges data with the processor synchronized to an external clock signal, running at the full speed of the processor/memory bus without imposing wait states.
    - SDRAM employs a burst mode to eliminate the address setup time and row and column line precharge time after the first access. In burst mode, a series of data bits can be clocked out rapidly after the first bit has been accessed.
    - SDRAM has a multiple-bank internal architecture that improves opportunities for on-chip parallelism.
    - SDRAM performs best when it is transferring large blocks of data serially.
    - There is now an enhanced version of SDRAM known as double data rate SDRAM (DDR-SDRAM) that overcomes the once-per-cycle limitation of SDRAM.
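
    As a study aid for the Hamming code outcome above, here is a small C sketch of the classic Hamming(7,4) encoder. This is my own illustration, not from the textbook; the textbook's worked examples (pp. 188-189) remain the reference.

        #include <stdio.h>
        #include <stdint.h>

        /* Hamming(7,4): encode 4 data bits into 7 bits with 3 parity bits.
           Codeword positions 1..7; positions 1, 2 and 4 hold parity. */
        static uint8_t hamming74_encode(uint8_t data) {
            uint8_t d1 = (data >> 3) & 1, d2 = (data >> 2) & 1,
                    d3 = (data >> 1) & 1, d4 = data & 1;
            uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 3, 5, 7 */
            uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 3, 6, 7 */
            uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 5, 6, 7 */
            /* codeword layout: p1 p2 d1 p3 d2 d3 d4 (positions 1..7) */
            return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) |
                   (d2 << 2) | (d3 << 1) | d4;
        }

        int main(void) {
            /* data 1011 -> codeword 0110011 (0x33) */
            printf("codeword: 0x%02X\n", hamming74_encode(0xB));
            return 0;
        }

    A single flipped bit in the stored codeword can then be located by recomputing the three parity checks; the pattern of failed checks gives the bit position directly.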

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well i.e.: we can have a page in the buffer pool in compressed only form or in a state where we have both the compressed page and uncompressed version but we'll never have a page in uncompressed only form. On-disk we'll always only have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync i.e.: changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression and it is used to log modifications to the compressed data thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such case, we first try to reorganize the page and attempt to recompress and if that fails as well then we split the page into two and recompress both pages. Now lets talk about the three major improvements that we made in MySQL 5.6.Logging of Compressed Page Images:InnoDB used to log entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page level operation in InnoDB we have to be sure that all recompress attempts must succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion and when we generate much more than normal redo we fill up the space much more quickly and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool.Starting with MySQL 5.6 a new global configuration parameter innodb_log_compressed_pages. The default value is true which is same as the current behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib then you should set this parameter to false. 
This is a dynamic parameter.

Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and the allowed values are 1 to 9. Again, the parameter is dynamic, i.e., you can change it on the fly.

Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU: we go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of the Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should pack the 16K uncompressed version of the page less densely; i.e., we let some space in the 16K page go unused in the hope that recompression won't end in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well: we keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0-100, dynamic; the percentage of compression operations that must fail before we start padding. The value 0 has the special meaning of disabling padding. innodb_compression_pad_pct_max: default 50, range 0-75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
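To make this concrete, here is a sketch of how these knobs might be exercised on a running 5.6 server; the table definition and the values chosen are illustrative only, not recommendations:

-- Compressed tables require the Barracuda file format and file-per-table tablespaces
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = ON;

-- A table with a fixed 8K compressed page size (hypothetical schema)
CREATE TABLE t_compressed (
    id INT PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- The three 5.6 additions discussed above; all are dynamic globals
SET GLOBAL innodb_log_compressed_pages = OFF;  -- only if the zlib version is fixed across recovery
SET GLOBAL innodb_compression_level = 9;       -- spend more CPU for denser compression
SET GLOBAL innodb_compression_failure_threshold_pct = 5;
SET GLOBAL innodb_compression_pad_pct_max = 50;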

    Read the article

  • Polite busy-waiting with WRPAUSE on SPARC

    - by Dave
    Unbounded busy-waiting is a poor idea for user-space code, so we typically use spin-then-block strategies when, say, waiting for a lock to be released or some other event. If we're going to spin, even briefly, then we'd prefer to do so in a manner that minimizes performance degradation for other sibling logical processors ("strands") that share compute resources. We want to spin politely and refrain from impeding the progress and performance of other threads — ostensibly doing useful work and making progress — that run on the same core. On a SPARC T4, for instance, 8 strands will share a core, and that core has its own L1 cache and 2 pipelines. On x86 we have the PAUSE instruction, which, naively, can be thought of as a hardware "yield" operator which temporarily surrenders compute resources to threads on sibling strands. Of course this helps avoid intra-core performance interference. On the SPARC T2 our preferred busy-waiting idiom was "RD %CCR,%G0", which is a high-latency no-op. The T4 provides a dedicated and extremely useful WRPAUSE instruction. The processor architecture manuals are the authoritative source, but briefly, WRPAUSE writes a cycle count into the PAUSE register, which is ASR27. Barring interrupts, the processor then delays for the requested period. There's no need for the operating system to save the PAUSE register over context switches, as it always resets to 0 on traps. Digressing briefly, if you use unbounded spinning then ultimately the kernel will preempt and deschedule your thread if there are other ready threads that are starving. But by using a spin-then-block strategy we can allow other ready threads to run without resorting to involuntary time-slicing, which operates on a long-ish time scale. Generally, that makes your application more responsive. In addition, by blocking voluntarily we give the operating system far more latitude regarding power management. Finally, I should note that while we have OS-level facilities like sched_yield() at our disposal, yielding almost never does what you'd want or naively expect. Returning to WRPAUSE, it's natural to ask how well it works. To help answer that question I wrote a very simple C/pthreads benchmark that launches 8 concurrent threads and binds those threads to processors 0..7. The processors are numbered geographically on the T4, so those threads will all be running on just one core. Unlike the SPARC T2, where logical CPUs 0,1,2 and 3 were assigned to the first pipeline and CPUs 4,5,6 and 7 to the second, there's no fixed mapping between CPUs and pipelines in the T4, and in some circumstances, when the other 7 logical processors are idling quietly, it's possible for the remaining logical processor to leverage both pipelines. Some number T of the threads will iterate in a tight loop advancing a simple Marsaglia xor-shift pseudo-random number generator. T is a command-line argument. The main thread loops, reporting the aggregate number of PRNG steps performed collectively by those T threads in the last 10-second measurement interval. The other threads (there are 8-T of these) run in a loop, busy-waiting concurrently with the T threads. We vary T between 1 and 8 threads, and report on various busy-waiting idioms. The values in the table are the aggregate number of PRNG steps completed by the set of T threads. The unit is millions of iterations per 10 seconds. For the "PRNG step" busy-waiting mode, the busy-waiting threads execute exactly the same code as the T worker threads.
We can easily compute the average rate of progress for individual worker threads by dividing the aggregate score by the number of worker threads T. I should note that the PRNG steps are extremely cycle-heavy and access almost no memory, so arguably this microbenchmark is not as representative of "normal" code as it could be. And for the purposes of comparison I included a row in the table that reflects a waiting policy where the waiting threads call poll(NULL,0,1000) and block in the kernel. Obviously this isn't busy-waiting, but the data is interesting for reference.

Aggregate progress of the T worker threads, by the wait mechanism used by the other 8-T threads (millions of PRNG steps per 10 seconds):

Wait mechanism for 8-T threads   T=1    T=2    T=3    T=4    T=5    T=6    T=7    T=8
Park thread in poll()            3265   3347   3348   3348   3348   3348   3348   3348
no-op                            415    831    1243   1648   2060   2497   2930   3349
RD %ccr,%g0 "pause"              1426   2429   2692   2862   3013   3162   3255   3349
PRNG step                        412    829    1246   1670   2092   2510   2930   3348
WRPause(8000)                    3244   3361   3331   3348   3349   3348   3348   3348
WRPause(4000)                    3215   3308   3315   3322   3347   3348   3347   3348
WRPause(1000)                    3085   3199   3224   3251   3310   3348   3348   3348
WRPause(500)                     2917   3070   3150   3222   3270   3309   3348   3348
WRPause(250)                     2694   2864   2949   3077   3205   3388   3348   3348
WRPause(100)                     2155   2469   2622   2790   2911   3214   3330   3348
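For concreteness, here is a rough sketch of the kind of spin-then-block waiting loop discussed above, built around WRPAUSE. This is my own illustrative code, not the benchmark source; it assumes a SPARC compiler that accepts this inline-assembly form, and the spin bound and pause length are arbitrary tunables:

#include <pthread.h>

/* Write a cycle count into ASR27, the PAUSE register; illustrative only.
   On non-SPARC builds this degrades to a plain compiler barrier. */
static inline void wrpause(unsigned long cycles)
{
#if defined(__sparc)
    __asm__ __volatile__("wr %0, %%g0, %%asr27" : : "r" (cycles) : "memory");
#else
    __asm__ __volatile__("" ::: "memory");
#endif
}

/* Spin politely for a bounded number of attempts, then block in the kernel. */
static void spin_then_block(volatile int *event, pthread_mutex_t *m, pthread_cond_t *cv)
{
    int i;
    for (i = 0; i < 1000; i++) {    /* spin bound: a tunable, chosen arbitrarily here */
        if (*event)
            return;                 /* the event arrived while we were spinning */
        wrpause(500);               /* politely surrender core resources for a while */
    }
    pthread_mutex_lock(m);          /* give up and block; frees the strand entirely */
    while (!*event)
        pthread_cond_wait(cv, m);
    pthread_mutex_unlock(m);
}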

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

Optimized for Oracle Database 12c - GoldenGate 12c is custom-tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention, and streamlined the whole process so that a single GoldenGate capture process is used at the container level rather than at each individual PDB. Having the capture process at the container level reduces resource usage and the number of processes. To view a conceptual architecture diagram click here.

Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction with, or knowledge of, other delivery processes. This setup was complicated to configure and time-consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.

Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.

Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth.
GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.

Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded for MS SQL Server, DB2, and Teradata.

Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed-data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".
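To give a flavor of how Integrated Delivery collapses the old multi-process setup, here is a rough sketch of a single integrated Replicat. The names, trail paths and parameter values are illustrative assumptions on my part; consult the GoldenGate 12c reference documentation for the authoritative syntax:

-- In GGSCI: one integrated Replicat instead of several manually split ones
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ********
GGSCI> ADD REPLICAT irep1, INTEGRATED, EXTTRAIL ./dirdat/rt

-- irep1.prm: the apply engine distributes load and auto-tunes parallelism
REPLICAT irep1
USERID ggadmin, PASSWORD ********
DBOPTIONS INTEGRATEDPARAMS (PARALLELISM 4, MAX_SGA_SIZE 200)
MAP app.*, TARGET app.*;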

    Read the article

  • Swap partition not recognized (The disk drive with UUID=... is not ready yet or not present)

    - by ladaghini
    I think I had an encrypted swap partition, because I chose to encrypt my home directory during the installation. I believe that's what the line with /dev/mapper/cryptswap1 ... in my /etc/fstab is all about. I did something to bork my swap, because on the next boot I got a message (paraphrased): The disk drive for /dev/mapper/cryptswap1 is not ready yet or not present. Wait to continue. Press S to skip or M to manually recover. (As a side note, pressing S or M seemed to do nothing different from just waiting.) Here's what I've tried: This tutorial on how to fix the swap partition not mounting. However, in the above, the mkswap command fails because the device is busy. So I booted from a live USB, ran GParted to reformat the swap partition (which showed up as an unknown fs type), and chrooted into the broken system to try that tutorial again. I also adjusted /etc/initramfs-tools/conf.d/resume and /etc/fstab to reflect the new UUID generated from formatting the partition as swap. That still didn't work; instead of /dev/mapper/cryptswap1 not being present, "The disk drive with UUID=[swap partition's UUID] is not ready yet or not present..." So I decided to start afresh, as though I had never created a swap partition in the first place. From the live USB, I deleted the swap partition altogether (which, again, showed up in GParted as an unknown fs type), removed the swap and cryptswap entries in /etc/fstab, and removed /etc/initramfs-tools/conf.d/resume and /etc/crypttab. At this point the main system shouldn't be considered broken, because there is no swap partition and no instructions to mount one, right? I didn't get any errors during startup. I followed the same instructions to create and encrypt the swap partition, starting with creating a partition for the swap, though I think fdisk said a reboot was necessary to see the changes. I was confident the third process above would work, but the problem still persists. Some relevant info (/dev/sda8 is the swap partition):

/etc/fstab file:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda6 during installation
UUID=4c11e82c-5fe9-49d5-92d9-cdaa6865c991 / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda5 during installation
UUID=4031413e-e89f-49a9-b54c-e887286bb15e /boot ext4 defaults 0 2
# /home was on /dev/sda7 during installation
UUID=d5bbfc6f-482a-464e-9f26-fd213230ae84 /home ext4 defaults 0 2
# swap was on /dev/sda8 during installation
UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 none swap sw 0 0
/dev/mapper/cryptswap1 none swap sw 0 0

output of sudo mount:
/dev/sda6 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda5 on /boot type ext4 (rw)
/dev/sda7 on /home type ext4 (rw)
/home/undisclosed/.Private on /home/undisclosed type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=cbae1771abd34009,ecryptfs_fnek_sig=7cefe2f59aab8e58)
gvfs-fuse-daemon on /home/undisclosed/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=undisclosed)

output of sudo blkid (note that /dev/sda8 is missing):
/dev/sda1: LABEL="SYSTEM" UUID="960490E80490CC9D" TYPE="ntfs"
/dev/sda2: UUID="D4043140043126C0" TYPE="ntfs"
/dev/sda3: LABEL="Shared" UUID="80F613E1F613D5EE" TYPE="ntfs"
/dev/sda5: UUID="4031413e-e89f-49a9-b54c-e887286bb15e" TYPE="ext4"
/dev/sda6: UUID="4c11e82c-5fe9-49d5-92d9-cdaa6865c991" TYPE="ext4"
/dev/sda7: UUID="d5bbfc6f-482a-464e-9f26-fd213230ae84" TYPE="ext4"
/dev/mapper/cryptswap1: UUID="41fa147a-3e2c-4e61-b29b-3f240fffbba0" TYPE="swap"

output of sudo fdisk -l:
Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdec3fed2
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 409599 203776 7 HPFS/NTFS/exFAT
/dev/sda2 409600 210135039 104862720 7 HPFS/NTFS/exFAT
/dev/sda3 210135040 415422463 102643712 7 HPFS/NTFS/exFAT
/dev/sda4 415424510 625141759 104858625 5 Extended
/dev/sda5 415424512 415922175 248832 83 Linux
/dev/sda6 415924224 515921919 49998848 83 Linux
/dev/sda7 515923968 621389823 52732928 83 Linux
/dev/sda8 621391872 625141759 1874944 82 Linux swap / Solaris
Disk /dev/mapper/cryptswap1: 1919 MB, 1919942656 bytes
255 heads, 63 sectors/track, 233 cylinders, total 3749888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaf5321b5

/etc/initramfs-tools/conf.d/resume file:
RESUME=UUID=5da2c720-8787-4332-9317-7d96cf1e9b80

/etc/crypttab file:
cryptswap1 /dev/sda8 /dev/urandom swap,cipher=aes-cbc-essiv:sha256

output of sudo swapon -as:
Filename Type Size Used Priority
/dev/mapper/cryptswap1 partition 1874940 0 -1

output of sudo free -m:
             total       used       free     shared    buffers     cached
Mem:          1476       1296        179          0         35        671
-/+ buffers/cache:        590        886
Swap:         1830          0       1830

So, how can this be fixed?

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sam Drake
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database providing very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC.

The latest ROracle enhancements include: support for the Oracle TimesTen In-Memory Database; support for date-time values using R's POSIXct/POSIXlt data types; RAW, BLOB and BFILE data type support; the option to specify the number of rows per fetch operation; the option to prefetch LOB data; break support using Ctrl-C; and statement caching support.

TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: analytic functions (AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE); analytic clauses (OVER PARTITION BY and OVER ORDER BY); multidimensional grouping operators, i.e. grouping clauses (GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS) and grouping functions (GROUP, GROUPING_ID, GROUP_ID); the WITH clause, which allows repeated references to a named subquery block; aggregate expressions over DISTINCT expressions; general expressions that return a character string in the source or a pattern within the LIKE predicate; and the ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause). Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

> install.packages("ROracle")
> library(ROracle)
Loading required package: DBI
> drv <- dbDriver("Oracle")

Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

> conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct")

You have the option to report the server type - Oracle or TimesTen?

> print (paste ("Server type =", dbGetInfo (conn)$serverType))
[1] "Server type = TimesTen IMDB"

To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

> dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
[1] TRUE

Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

> dbListTables (conn)
[1] "IRIS"
> dbListFields (conn, "IRIS")
[1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

> iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
> iris.summary
     SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
1     setosa    5.006000   3.428000       1.462   0.246000
2 versicolor    5.936000   2.770000       4.260   1.326000
3  virginica    6.588000   2.974000       5.552   2.026000
4       <NA>    5.843333   3.057333       3.758   1.199333

Finally, disconnect from the TimesTen database.

> dbCommit (conn)
[1] TRUE
> dbDisconnect (conn)
[1] TRUE

We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. An application was crashing once a week, and we really had no clue what the issue was. Since it did not happen very often, it was left as-is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of the very few fans of it). After a quick analysis I found that the main thread had already exited and the thread with the crash was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would potentially not have shown me what the reason was, so I decided to do what I call constructive debugging. I created a simple console application project and tried to simulate the exact circumstances of the crash from the information I had via the memory dump and source code reading. The thread that was crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread, which was just waiting for work to do. My first version of the repro looked like this:

static void Main(string[] args)
{
    Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
    t.Start();
    Console.WriteLine("Interrupt Thread");
    t.Interrupt();
}

static void ThreadFunc()
{
    while (true)
    {
        object value = Dequeue(); // block until unblocked or awoken via ThreadInterruptedException
    }
}

static object WaitObject = new object();

static object Dequeue()
{
    object lret = "got value";
    try
    {
        lock (WaitObject)
        {
        }
    }
    catch (ThreadInterruptedException)
    {
        Console.WriteLine("Got ThreadInterruptException");
        lret = null;
    }
    return lret;
}

I start a background thread, call Thread.Interrupt on it, and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work on it. This first version did not crash. So I needed to dig deeper. From the memory dump I knew that the finalizer thread was just running some critical finalizers, which were closing file handles. OK, let's add some long-running finalizers to the sample:

class FinalizableObject : CriticalFinalizerObject
{
    ~FinalizableObject()
    {
        Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s.");
        Thread.Sleep(5000);
    }
}

class Program
{
    static void Main(string[] args)
    {
        FinalizableObject fin = new FinalizableObject();
        Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
        t.Start();
        Console.WriteLine("Interrupt Thread");
        t.Interrupt();
        GC.KeepAlive(fin); // prevent finalizing it too early
        // After leaving Main the other thread is woken up via Thread.Abort
        // while we are finalizing. This causes a stack overflow in the CLR
        // ThreadAbortException handling at this time.
    }
}

With this changed Main method and a blocking critical finalizer, I got my crash just like the real application. The funny thing is that this is actually a CLR bug. When the main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers are called, the critical finalizers are executed to, e.g., free OS handles (usually). Remember that I called Thread.Interrupt as one of the last methods in Main.
The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows the exception whenever an exception handling clause is left. It seems that the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException, the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect here, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect that you have not looked deeply enough into your application to find such sporadic errors.
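For what it's worth, the safer pattern here is to wake the scavenger cooperatively instead of injecting an exception into it. A minimal sketch of that idea (my own illustrative code, not the Caching Application Block's):

using System;
using System.Threading;

class CooperativeScavenger
{
    // AutoResetEvent stays signaled until a waiter consumes it, so wakeups are not lost.
    private readonly AutoResetEvent _wake = new AutoResetEvent(false);
    private volatile bool _shutdown;

    public void Run()
    {
        while (true)
        {
            _wake.WaitOne();       // block until work arrives or shutdown is requested
            if (_shutdown) break;  // Dispose asked us to leave; unwind normally
            // ... scavenge expired cache entries here ...
        }
    }

    public void SignalWork() { _wake.Set(); }

    public void Shutdown()         // called from Dispose instead of Thread.Interrupt
    {
        _shutdown = true;
        _wake.Set();
    }
}

No exception is ever injected, so the thread cannot be woken at a point - such as CLR shutdown - where the runtime does not expect it.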

    Read the article

  • java.util.zip.ZipException: Error opening file When Deploying an Application to WebLogic Server

    - by lmestre
    In recent weeks we had a hard time trying to solve a deployment issue.

* WebLogic Server 10.3.6
* Target: WLS Cluster

<21-10-2013 05:29:40 PM CLST> <Error> <Console> <BEA-240003> <Console encountered the following error
weblogic.management.DeploymentException:
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:69)
        at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
        at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
        at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
        at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
        at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
        at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
        at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
        at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
        at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
        at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
        at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
        at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:545)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:256)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:221)
Caused by: java.util.zip.ZipException: Error opening file - C:\Oracle\Middleware\user_projects\domains\MyDomain\servers\MyServer\stage\myapp\myapp.war Message - error in opening zip file
        at weblogic.servlet.utils.WarUtils.existsInWar(WarUtils.java:87)
        at weblogic.servlet.utils.WarUtils.isWebServices(WarUtils.java:76)
        at weblogic.servlet.internal.WarDeploymentFactory.findOrCreateComponentMBeans(WarDeploymentFactory.java:61)

So the first idea you have with that error is that the war file is corrupted or has incorrect privileges. We tried:
1. Unzipping the war file; the file was perfect.
2. Checking the size; it was the same size as in other environments.
3. Checking the ownership of the file; same as in other environments.
4. Checking the permissions of the file; same as other applications.

Then we accepted that the file was fine, so we tried enabling some deployment debugs, but got no clues. We also tried:
1. Deleting all contents of the <MyDomain>/servers/<MyServer>/tmp and <MyDomain>/servers/<MyServer>/cache folders; the issue persisted.
2. Renaming the application; the deployment was successful.
3. Targeting the Admin Server; deployment was also working.
4. Using 'Copy this application onto every target for me'; that didn't help either.

Finally, my friend 'Test Case' solved the issue again. I saw this name in the config.xml:

<jdbc-system-resource>
    <name>myapp</name>
    <target></target>
    <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>

So, it turned out that the customer had created a DataSource with the same name as the application, 'myapp' in the above example. By deleting the DataSource and creating an identical DataSource with a different name, the issue was solved. At this point, do you know why 'java.util.zip.ZipException: Error opening file' was occurring? Because all names in WebLogic Server need to be unique.

References: http://docs.oracle.com/cd/E23943_01/web.1111/e13709/setup.htm
"Assigning Names to WebLogic Server Resources: Make sure that each configurable resource in your WebLogic Server environment has a unique name. Each domain, server, machine, cluster, JDBC data source, virtual host, or other resource must have a unique name."

Enjoy!
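As a footnote, the fix boils down to giving the DataSource a name that no longer collides with the deployment name; something along these lines, where the new name is my own invention:

<jdbc-system-resource>
    <name>myapp-ds</name>
    <target></target>
    <descriptor-file-name>jdbc/myapp-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>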

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is: what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
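To be concrete, my read-test workaround boiled down to something like this; the paths and parameter values here are illustrative stand-ins, not my exact test configuration:

REM Use a copy of a real (already compressed) database file as the test target
copy D:\Data\MyCompressedDb.mdf C:\SQLIOTest\testfile.dat

REM 120 seconds of random 8KB reads, 8 outstanding I/Os, with latency stats
sqlio -kR -s120 -frandom -b8 -o8 -LS C:\SQLIOTest\testfile.dat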
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit: measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012; a .vsp (Visual Studio Profiler) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected, but when I tried to open the vsp file in Visual Studio 2010, the VS2010 IDE crashed. As a responsible citizen I raised bug #762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note - in case you didn't already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug.

1. Behaviour and Steps to Reproduce the Issue

Description: I have generated a vsp file by using the Visual Studio 2012 standalone profiler. When I try to open the vsp file in Visual Studio 2010, the IDE crashes. I understand that a vsp generated by using VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see.

Steps to Reproduce the Issue:
1. Pick up the standalone profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is x64, I have installed the x64 version of the standalone profiler.
2. I have configured the system path by setting the 'Path' environment variable to where the profiler is installed. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools
3. Created a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234
4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp'
5. This generates the following message on the cmd:

Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0
Copyright (C) Microsoft Corporation. All rights reserved.
Configuring and attaching to ASP.NET process. Please wait.
Setting up profiling environment.
Starting monitor.
Launching ASP.NET service.
Attaching Monitor to process.
Launching Internet Explorer.
The profiler is attached to ASP.net. Please run your application scenario now.
Press Enter to stop data collection...

6. I perform certain actions and then come back to the cmd and hit Enter to stop the profiling. Once I do this, the following message is written to the cmd:

Press Enter to stop data collection...
Profiling now shut down.
Report file "C:\Temp\SampleEISK.vsp" was generated.
Running VsPerfReport, packing symbols into the .VSP.
Shutting down profiling and restarting ASP.NET. Please wait.
Restarting w3wp.exe.

7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012.
8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom!

What I would expect to happen: I expect to receive a message "VS 2010 does not support the vsp file generated by VS 2012".

What actually happened: The VS 2010 IDE crashed.

2. Resolution

This is a valid bug! However, there isn't much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team:

"Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. Given that a fix would not unblock the scenario of opening a 2012-created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hotfixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team"

Though it would be great to improve the behaviour, this is not a defect that would stop you from progressing in any way. It's important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.

    Read the article

  • Cannot see the variable in my own jQuery plugin's function.

    - by qinHaiXiang
    I am writing my own jQuery plugin, and I ran into something strange that has me confused. I am using the jQuery UI datepicker with my plugin. ;(function($){ var newMW = 1, mwZIndex = 0; // IgtoMW contructor Igtomw = function(elem , options){ var activePanel, lastPanel, daysWithRecords, sliding; // used to check the animation below is executed to the end. // used to access the plugin's default configuration this.opts = $.extend({}, $.fn.igtomw.defaults, options); // intial the model window this.intialMW(); }; $.extend(Igtomw.prototype, { // intial model window intialMW : function(){ this.sliding = false; //this.daysWithRecords = []; this.igtoMW = $('<div />',{'id':'igto'+newMW,'class':'igtoMW',}) .css({'z-index':mwZIndex}) // make it in front of all exist model window; .appendTo('body') .draggable({ containment: 'parent' , handle: '.dragHandle' , distance: 5 }); //var igtoWrapper = igtoMW.append($('<div />',{'class':'igtoWrapper'})); this.igtoWrapper = $('<div />',{'class':'igtoWrapper'}).appendTo(this.igtoMW); this.igtoOpacityBody = $('<div />',{'class':'igtoOpacityBody'}).appendTo(this.igtoMW); //var igtoHeaderInfo = igtoWrapper.append($('<div />',{'class':'igtoHeaderInfo dragHandle'})); this.igtoHeaderInfo = $('<div />',{'class':'igtoHeaderInfo dragHandle'}) .appendTo(this.igtoWrapper); this.igtoQuickNavigation = $('<div />',{'class':'igtoQuickNavigation'}) .css({'color':'#fff'}) .appendTo(this.igtoWrapper); this.igtoContentSlider = $('<div />',{'class':'igtoContentSlider'}) .appendTo(this.igtoWrapper); this.igtoQuickMenu = $('<div />',{'class':'igtoQuickMenu'}) .appendTo(this.igtoWrapper); this.igtoFooter = $('<div />',{'class':'igtoFooter dragHandle'}) .appendTo(this.igtoWrapper); // append to igtoHeaderInfo this.headTitle = this.igtoHeaderInfo.append($('<div />',{'class':'headTitle'})); // append to igtoQuickNavigation this.igQuickNav = $('<div />', {'class':'igQuickNav'}) .html('??') .appendTo(this.igtoQuickNavigation); // append to igtoContentSlider this.igInnerPanelTopMenu = $('<div />',{'class':'igInnerPanelTopMenu'}) .appendTo(this.igtoContentSlider); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonPreWrapper"><a href="" class="igInnerPanelButton Pre" action="" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png);"></a></div>'); this.igInnerPanelTopMenu.append('<div class="igInnerPanelSearch"><input type="text" name="igInnerSearch" /><a href="" class="igInnerSearch">??</a></div>' ); this.igInnerPanelTopMenu.append('<div class="igInnerPanelButtonNextWrapper"><a href="" class="igInnerPanelButton Next" action="sm" style="background-image:url(images/igto/igInnerPanelTopMenu.bt.bg.png); background-position:-272px"></a></div>' ); this.igInnerPanelBottomMenu = $('<div />',{'class':'igInnerPanelBottomMenu'}) .appendTo(this.igtoContentSlider); this.icWrapper = $('<div />',{'class':'icWrapper','id':'igto'+newMW+'Panel'}) .appendTo(this.igtoContentSlider); this.icWrapperCotentPre = $('<div class="slider pre"></div>').appendTo(this.icWrapper); this.icWrapperCotentShow = $('<div class="slider firstShow "></div>').appendTo(this.icWrapper); this.icWrapperCotentnext = $('<div class="slider next"></div>').appendTo(this.icWrapper); this.initialPanel(); this.initialQuickMenus(); console.log(this.leftPad(9)); newMW++; mwZIndex++; this.igtoMW.bind('mousedown',function(){ var $this = $(this); //alert($this.css('z-index') + ' '+mwZIndex); if( parseInt($this.css('z-index')) === (mwZIndex-1) ) return; $this.css({'z-index':mwZIndex}); mwZIndex++; //alert(mwZIndex); }); },
initialPanel : function(){ this.defaultPanelNum = this.opts.initialPanel; this.activePanel = this.defaultPanelNum; this.lastPanel = this.defaultPanelNum; this.defaultPanel = this.loadPanelContents(this.defaultPanelNum); $(this.defaultPanel).appendTo(this.icWrapperCotentShow); }, initialQuickMenus : function(){ // store the current element var obj = this; var defaultQM = this.opts.initialQuickMenu; var strMenu = ''; var marginFirstEle = '8'; $.each(defaultQM,function(key,value){ //alert(key+':'+value); if(marginFirstEle === '8'){ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 8px;" >'+value+'</a>'; marginFirstEle = '4'; } else{ strMenu += '<a href="" class="btPanel" panel="'+key+'" style="margin-left: 4px;" >'+value+'</a>'; } }); // append to igtoQuickMenu this.igtoQMenu = $(strMenu).appendTo(this.igtoQuickMenu); this.igtoQMenu.bind('click',function(event){ event.preventDefault(); var element = $(this); if(element.is('.active')){ return; } else{ $(obj.igtoQMenu).removeClass('active'); element.addClass('active'); } var d = new Date(); var year = d.getFullYear(); var month = obj.leftPad( d.getMonth() ); var inst = null; if( obj.sliding === false){ console.log(obj.lastPanel); var currentPanelNum = parseInt(element.attr('panel')); obj.checkAvailability(); obj.getDays(year,month,inst,currentPanelNum); obj.slidePanel(currentPanelNum); obj.activePanel = currentPanelNum; console.log(obj.activePanel); obj.lastPanel = obj.activePanel; obj.icWrapper.find('input').val(obj.activePanel); } }); }, initialLoginPanel : function(){ var obj = this; this.igPanelLogin = $('<div />',{'class':"igPanelLogin"}); this.igEnterName = $('<div />',{'class':"igEnterName"}).appendTo(this.igPanelLogin); this.igInput = $('<input type="text" name="name" value="???" 
/>').appendTo(this.igEnterName);
        this.igtoLoginBtWrap = $('<div />',{'class':"igButtons"}).appendTo(this.igPanelLogin);
        this.igtoLoginBt = $('<a href="" class="igtoLoginBt" action="OK" >??</a>\
            <a href="" class="igtoLoginBt" action="CANCEL" >??</a>\
            <a href="" class="igtoLoginBt" action="ADD" >????</a>').appendTo(this.igtoLoginBtWrap);
        this.igtoLoginBt.bind('click', function(event){
            event.preventDefault();
            var elem = $(this);
            var action = elem.attr('action');
            var userName = obj.igInput.val();
            obj.loadRootMenu();
        });
        return this.igPanelLogin;
    },

    initialWatchHistory : function(){
        var obj = this; // for third-party plugin use
        if(this.sliding === false){
            this.watchHistory = $('<div />',{'class':'igInnerPanelSlider'})
                .append($('<div />',{'class':'igInnerPanel_pre'}).addClass('igInnerPanel'))
                .append($('<div />',{'class':'igInnerPanel'}).datepicker({
                    dateFormat: 'yy-mm-dd',
                    defaultDate: '2010-12-01',
                    showWeek: true,
                    firstDay: 1,
                    //beforeShow: setDateStatistics(),
                    onChangeMonthYear: function(year, month, inst){
                        var panelNum = 1;
                        month = obj.leftPad(month);
                        obj.getDays(year, month, inst, panelNum);
                    },
                    beforeShowDay: obj.checkAvailability,
                    onSelect: function(dateText, inst){
                        obj.checkAvailability();
                    }
                }).append($('<div />',{'class':'extraMenu'})))
                .append($('<div />',{'class':'igInnerPanel_next'}).addClass('igInnerPanel'));
            return this.watchHistory;
        }
    },

    loadPanelContents : function(panelNum){
        switch(panelNum){
            case 1:
                alert('inside loadPanelContents');
                return this.initialWatchHistory();
            case 2:
                return this.initialWatchHistory();
            case 3:
                return this.initialWatchHistory();
            case 4:
                return this.initialWatchHistory();
            case 5:
                return this.initialLoginPanel();
        }
    },

    loadRootMenu : function(){
        var obj = this;
        var mainMenuPanel = $('<div />',{'class':'igRootMenu'});
        var currentMWId = this.igtoMW.attr('id');
        this.activePanel = 0;
        $('#'+currentMWId+'Panel .pre')
            .queue(function(next){
                $(this)
                    .html(mainMenuPanel)
                    .addClass('panelShow')
                    .removeClass('pre')
                    .attr('panelNum', 0);
                next();
            })
            .queue(function(next){
                $('<div style="width:0;" class="slider pre"></div>')
                    .prependTo('#'+currentMWId+'Panel')
                    .animate({width:348}, function(){
                        $('#'+currentMWId+'Panel .slider:last').remove();
                        $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>');
                        $('.btMenu').remove(); // remove bottom quick menu
                        obj.sliding = false;
                        $(this).removeAttr('style');
                    });
                $('.igtoQuickMenu .active').removeClass('active');
                next();
            });
    },

    slidePanel : function(currentPanelNum){
        var currentMWId = this.igtoMW.attr('id');
        var obj = this;
        //alert(obj.loadPanelContents(currentPanelNum));
        if(this.activePanel > currentPanelNum){
            $('#'+currentMWId+'Panel .pre')
                .queue(function(next){
                    alert('inside slidePanel');
                    //var initialDate = getPanelDateStatus(panelNum);
                    //console.log('initial day in bigger panel '+initialDate);
                    $(this)
                        .html(obj.loadPanelContents(currentPanelNum))
                        .addClass('panelShow')
                        .removeClass('pre')
                        .attr('panelNum', currentPanelNum);
                    $('#'+currentMWId+'Panel .next').remove();
                    next();
                })
                .queue(function(next){
                    $('<div style="width:0;" class="slider pre"></div>')
                        .prependTo('#'+currentMWId+'Panel')
                        .animate({width:348}, function(){
                            //$('#igto1Panel .slider:last').find(setPanel(currentPanelNum)).datepicker('destroy');
                            $('#'+currentMWId+'Panel .slider:last').empty().removeClass('panelShow').addClass('next').removeAttr('panelNum');
                            $('#'+currentMWId+'Panel .slider:last').replaceWith('<div class="slider next"></div>');
                            obj.sliding = false;
                            console.log('in use inside animation: '+obj.sliding);
                            $(this).removeAttr('style');
                        });
                    next();
                });
        }
        else { // current panel number smaller than the next one
            $('#'+currentMWId+'Panel .next')
                .queue(function(next){
                    $(this)
                        .html(obj.loadPanelContents(currentPanelNum))
                        .addClass('panelShow')
                        .removeClass('next')
                        .attr('panelNum', currentPanelNum);
                    $('<div class="slider next">empty</div>').appendTo('#'+currentMWId+'Panel');
                    next();
                })
                .queue(function(next){
                    $('#'+currentMWId+'Panel .pre').animate({width:0}, function(){
                        $(this).remove();
                        //$('#igto1Panel .slider:first').find(setPanel(currentPanelNum)).datepicker('destroy');
                        $('#'+currentMWId+'Panel .slider:first').empty().removeClass('panelShow').addClass('pre').removeAttr('panelNum').removeAttr('style');
                        $('#'+currentMWId+'Panel .slider:first').replaceWith('<div class="slider pre"></div>');
                        obj.sliding = false;
                        console.log('in use inside animation: '+obj.sliding);
                    });
                    next();
                });
        }
    },

    getDays : function(year, month, inst, panelNum){
        var obj = this;
        // depends on the MySQL query condition
        var table_of_record = 'moviewh';              //getTable(panelNum);
        var date_of_record = 'watching_date';         //getTableDateCol(panelNum);
        var date_to_find = year+'-'+month;
        var node_of_xml_date_list = 'whDateRecords';  //getXMLDateNode(panelNum);
        var user_id = '1';                            //getLoginUserId();
        //var daysWithRecords = [];
        // empty the array before assigning
        this.daysWithRecords.length = 0;
        $.ajax({
            type: "GET",
            url: "include/get.date.list.process.php",
            data: { table_of_record: table_of_record, date_of_record: date_of_record, date_to_find: date_to_find, user_id: user_id, node_of_xml_date_list: node_of_xml_date_list },
            dataType: "json",
            cache: false,  // force the browser not to cache the xml file
            async: false,  // using this option to prevent the datepicker refreshing ??NO
            success: function(data){
                // had no date records
                if(data === null) return;
                obj.daysWithRecords = data;
            }
        });
        //setPanelDateStatus(year, month, panelNum);
        console.log('call from getDays() ' + this.daysWithRecords);
    },

    checkAvailability : function(availableDays){
        var checkdate = $.datepicker.formatDate('yy-mm-dd', availableDays);
        //console.log(checkdate);
        // for(var i = 0; i < this.daysWithRecords.length; i++) {
        //     if(this.daysWithRecords[i] == checkdate){
        //         return [true, "available"];
        //     }
        // }
        //console.log('inside check availability ' + this.daysWithRecords);
        console.log(typeof this.daysWithRecords);
        for(var i in this.daysWithRecords){
            //if(this.daysWithRecords[i] == checkdate){
                console.log(typeof this.daysWithRecords[i]);
                //return [true, "available"];
            //}
        }
        return [true, "available"];
        //return [false, ""];
    },

    leftPad : function(num){
        return (num < 10) ? '0' + num : num;
    }
});

$.fn.igtomw = function(options){
    // Merge options passed in with global defaults
    var opt = $.extend({}, $.fn.igtomw.defaults, options);
    return this.each(function(){
        new Igtomw(this, opt);
    });
};

$.fn.igtomw.defaults = {
    // 0:mainMenu 1:watchHistory 2:requestHistory 3:userManager
    // 4:shoppingCart 5:loginPanel
    initialPanel : 5, // default panel is LoginPanel
    initialQuickMenu : {'1':'whatchHIstory','2':'????','3':'????','4':'????'} // default quick menu
};

})(jQuery);

Usage:

$('.openMW').click(function(event){
    event.preventDefault();
    $('<div class="">').igtomw();
});

HTML code:

<div id="taskBarAndStartMenu">
    <div class="taskBarAndStartMenuM">
        <a href="" class="openMW" >??IGTO</a>
    </div>
    <div class="taskBarAndStartMenuO"></div>
</div>

In my workflow, when I click the "whatchHistory" button the plugin loads a panel with a jQuery UI datepicker applied, in which certain days are marked as available or not. I use getDays() to fetch the list of available days and store the data in daysWithRecords, and finally the datepicker's beforeShowDay option calls checkAvailability() to mark the days. The variable daysWithRecords is declared inside Igtomw = function(elem, options) and is initialized inside getDays(). I use initialWatchHistory() to initialize and render the jQuery UI datepicker on the page. My problem is that checkAvailability() cannot see the variable daysWithRecords: Firebug tells me that daysWithRecords is undefined. This is the first plugin I have written, so... thank you very much for any help!
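
A plausible culprit, offered as a guess rather than a confirmed diagnosis: jQuery UI invokes beforeShowDay as a plain callback with `this` bound to the datepicker's DOM element, not to the plugin instance, so handing it obj.checkAvailability directly means this.daysWithRecords inside the method resolves against the element and comes back undefined. A minimal sketch of the usual workaround, reusing the obj closure variable that initialWatchHistory() already defines:

    // Instead of handing the bare method to the datepicker ...
    //   beforeShowDay: obj.checkAvailability,
    // ... wrap it in a closure so checkAvailability always runs with the
    // plugin instance as `this` (obj is captured from initialWatchHistory):
    beforeShowDay: function(date){
        return obj.checkAvailability(date);
    },
    onSelect: function(dateText, inst){
        obj.checkAvailability(dateText);
    }

With a wrapper like this in place, this.daysWithRecords inside checkAvailability would refer to the array that getDays() fills, instead of to a nonexistent property of the DOM element.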

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
I have googled for the answer all day and still couldn't find one. I have a virtual subdomain, www.static.example.com, which is a mirror of www.example.com; there is just one root folder for both the subdomain and the domain. I want to redirect crawlers to a different robots.txt file, robots_static.txt, whenever they see .static in the URL, and in that file I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I can make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

    RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
    RewriteRule ^robots\.txt /robots_static.txt [NC,L]

but when I check in Webmaster Tools, it still reports robots.txt as my robots file instead of robots_static.txt, so everything is crawled and indexed twice. What did I do wrong? Thanks.

EDIT: This is my .htaccess file:

    ##
    # @package    Joomla
    # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
    # @license    GNU General Public License version 2 or later; see LICENSE.txt
    ##

    ##
    # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
    #
    # The line just below this section: 'Options +FollowSymLinks' may cause problems
    # with some server configurations. It is required for use of mod_rewrite, but may already
    # be set by your server administrator in a way that disallows changing it in
    # your .htaccess file. If using it causes your server to error out, comment it out (add # to
    # beginning of line), reload your site in your browser and test your sef url's. If they work,
    # it has been set by your server administrator and you do not need it set here.
    ##

    ## Can be commented out if causes errors, see notes above.
    Options +FollowSymLinks

    ## Mod_rewrite in use.
    RewriteEngine On

    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^www\.
    RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

    RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
    RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
    RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    ## Begin - Rewrite rules to block out some common exploits.
    # If you experience problems on your site block out the operations listed below
    # This attempts to block the most common type of exploit `attempts` to Joomla!
    #
    # Block out any script trying to base64_encode data within the URL.
    RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
    # Block out any script that includes a <script> tag in URL.
    RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
    # Block out any script trying to set a PHP GLOBALS variable via URL.
    RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
    # Block out any script trying to modify a _REQUEST variable via URL.
    RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
    # Return 403 Forbidden header and show the content of the root homepage
    RewriteRule .* index.php [F]
    #
    ## End - Rewrite rules to block out some common exploits.

    ## Begin - Custom redirects
    #
    # If you need to redirect some pages, or set a canonical non-www to
    # www redirect (or vice versa), place that code here. Ensure those
    # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
    #
    ## End - Custom redirects

    ##
    # Uncomment following line if your webserver's URL
    # is not directly related to physical file paths.
    # Update Your Joomla! Directory (just / for root).
    ##
    # RewriteBase /

    RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
    RewriteCond %{THE_REQUEST} !/system/.*
    RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
    RewriteCond %{THE_REQUEST} ^GET

    ## Begin - Joomla! core SEF Section.
    #
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
    #
    # If the requested path and file is not /index.php and the request
    # has not already been internally rewritten to the index.php script
    RewriteCond %{REQUEST_URI} !^/index\.php
    # and the request is for something within the component folder,
    # or for the site root, or for an extensionless URL, or the
    # requested URL ends with one of the listed extensions
    RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
    # and the requested path and file doesn't directly match a physical file
    RewriteCond %{REQUEST_FILENAME} !-f
    # and the requested path and file doesn't directly match a physical folder
    RewriteCond %{REQUEST_FILENAME} !-d
    # internally rewrite the request to the index.php script
    RewriteRule .* index.php [L]
    #
    ## End - Joomla! core SEF Section.

    <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
    Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
    Header set Cache-Control "public"
    </FilesMatch>

    <ifModule mod_headers.c>
    Header set Connection keep-alive
    </ifModule>

    ########## Begin - Remove Etags
    # FileETag none
    #
    ########## End - Remove Etags
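
For comparison, here is a minimal sketch of the host-specific mapping this appears to be aiming for, with the dots in the host escaped so they cannot match arbitrary characters and the (likely redundant) THE_REQUEST test dropped; treat it as an assumption about the intended setup rather than a verified fix:

    RewriteEngine On

    # On the static mirror host only, serve the stricter robots file
    # for any request for /robots.txt.
    RewriteCond %{HTTP_HOST} ^www\.static\. [NC]
    RewriteRule ^robots\.txt$ /robots_static.txt [L]

Two things worth checking either way: rule order matters, so this block should sit before any rule that could claim the request first, and Googlebot caches robots.txt (often for up to a day), so Webmaster Tools can lag behind a rule that is already working.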

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
(This is a reference for a lunch & learn I'm giving at my company.) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There's no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project named OrchestrationPortDemo. When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item, then add a BizTalk Orchestration named DemoOrchestration. Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes. Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we'll be passing a standard XML document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says "Drop a shape from the toolbox here"). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message; this indicates we'll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to False, when you try to build the application you'll receive the error "You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port." Now you'll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information:

    Name: ReceivePort
    Select the port type to be used for this port: Create a new Port Type
    Port Type Name: ReceivePortType
    Port direction of communication: I'll always be receiving <…>
    Port binding: Specify later

By choosing "Specify later" you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows:

    Name: SendPort
    Select the port type to be used for this port: Create a new Port Type
    Port Type Name: SendPortType
    Port direction of communication: I'll always be sending <…>
    Port binding: Specify later

Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple of final steps before building and deploying the application.
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!
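
Incidentally, the key file the Signing tab generates can also be created up front from a Visual Studio command prompt with the strong-name tool; a one-line sketch (sn.exe ships with the Windows SDK, and the file name here is just this demo's choice):

    sn -k DemoKey.snk

Either way, the strong name is what allows the assembly to be installed into the GAC, which BizTalk requires for deployed orchestrations.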

    Read the article
