Search Results

Search found 23316 results on 933 pages for 'content generation'.

Page 688/933 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • Can mouseenter be made to not fire in IE on DOMready?

    - by mrclay
    jQuery emulates IE's mouseenter event on non-IE browsers. In IE, however, mouseenter is triggered when the page loads (maybe due to jQuery's use of doScroll in the $.ready implementation), even if the mouse is not moved at all. This doesn't happen in other browsers and definitely doesn't follow Microsoft's own spec, which says (emphasis mine): "The event fires only if the mouse pointer is outside the boundaries of the object and the user moves the mouse pointer inside the boundaries of the object. If the mouse pointer is currently inside the boundaries of the object, for the event to fire, the user must move the mouse pointer outside the boundaries of the object and then back inside the boundaries of the object." This only becomes a usability issue if hover (or the hoverIntent plugin) is applied to a navigational item to display a drop-down or "mega-menu": in IE, mouseenter will fire immediately after $.ready, obscuring the content with the menu. I've put together a demonstration of both the mouseenter inconsistency and the usability issue it creates.
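    One possible workaround, sketched below (untested; showMenu/hideMenu are placeholder names, not from the original post): defer binding the hover behavior until the first real mousemove, so the spurious mouseenter that IE fires around $.ready has nothing to trigger.

        $(document).ready(function () {
            // Bind hover only once the user actually moves the mouse;
            // the bogus IE mouseenter fired on DOM ready is then ignored.
            $(document).one("mousemove", function () {
                $("#nav li").hover(showMenu, hideMenu);
            });
        });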

    Read the article

  • Read line comments in INI file

    - by Dave Jarvis
    The parse_ini_file function removes comments in configuration files. What would you do to keep the comments that are associated with the next line? For example:

        [email]
        ; Verify that the email's domain has a mail exchange (MX) record.
        validate_domain = true

    I am thinking of using X(HT)ML and XSLT to transform the content into an INI file (so that the documentation and options can be single-sourced). For example:

        <h1>email</h1>
        <p>Verify that the email's domain has a mail exchange (MX) record.</p>
        <dl>
          <dt>validate_domain</dt>
          <dd>true</dd>
        </dl>

    Any other ideas?
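    One alternative is a small hand-rolled parser that keeps each comment attached to the key that follows it. A minimal sketch (the function name and return shape are my own, and section handling is simplified):

        <?php
        // Returns array(key => array('value' => ..., 'comment' => ...)).
        function parse_ini_with_comments($file) {
            $result = array();
            $comment = '';
            foreach (file($file) as $line) {
                $line = trim($line);
                if ($line === '' || $line[0] === '[') {
                    $comment = '';                        // reset on blank lines and section headers
                } elseif ($line[0] === ';') {
                    $comment .= ltrim(substr($line, 1));  // accumulate comment for the next key
                } elseif (strpos($line, '=') !== false) {
                    list($key, $value) = array_map('trim', explode('=', $line, 2));
                    $result[$key] = array('value' => $value, 'comment' => $comment);
                    $comment = '';
                }
            }
            return $result;
        }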

    Read the article

  • Use trackball for scrolling in Android ListView

    - by gartenkralleb
    Hi, I have a ListView whose content should not be selectable. I set the ListView to choiceMode="none" and set a transparent listSelector. In touch mode it acts as desired. The problem is the trackball: if I try to use it for scrolling, the scroll only starts after the 10th push "down". This is caused by the invisible listSelector, which moves down the list until the 10th list item is "invisibly" selected. I have tried to override dispatchTrackballEvent and do the scrolling with scrollBy, but I have not finished this approach yet because I wanted to look at alternative suggestions first. Is there a simple solution to my problem?
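    A rough sketch of the event-interception approach (an untested assumption on my part; smoothScrollBy requires API level 8, and the scaling factor is arbitrary):

        // Intercept trackball movement in the ListActivity and translate it
        // into a scroll instead of a selection change.
        @Override
        public boolean onTrackballEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_MOVE) {
                // getY() is the relative trackball delta; scale it into pixels
                getListView().smoothScrollBy((int) (event.getY() * 100), 100);
                return true; // consume the event so no row gets selected
            }
            return super.onTrackballEvent(event);
        }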

    Read the article

  • Dynamic adding of user controls not showing control on page

    - by Phil
    I am trying to insert a user control dynamically into my default.aspx page via the following method in the page_init:

        Dim control As UserControl = LoadControl("~\Modules\Content.ascx")
        Controls.Add(control)

    When I run the page there is no sign of the usercontrol. Am I using the correct code to insert the usercontrol? Is there an alternative method of insertion available? Does the fact that the usercontrol has a page_load make a difference? Do I need to register the control in my aspx page at design time? Thanks in advance for any assistance you can offer.
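    One commonly suggested variation, sketched below (untested; the PlaceHolder name is an assumption, not from the original post), is to add the control to a named container inside the form rather than to the page's own Controls collection:

        ' Assumes <asp:PlaceHolder ID="ContentPlaceHolder" runat="server" />
        ' exists somewhere inside the page's <form runat="server">.
        Protected Sub Page_Init(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Init
            Dim control As UserControl = CType(LoadControl("~/Modules/Content.ascx"), UserControl)
            ContentPlaceHolder.Controls.Add(control)
        End Sub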

    Read the article

  • IIS7 Rewrite changing default button

    - by Skoder
    Hey, I've run into a strange problem regarding default buttons in master pages and the IIS7 rewrite module. All my content pages have default buttons set in the code-behind (on PreRender), or they are in panels on the aspx page. This works fine on my local machine and on the production server. However, when I enable IIS7 URL Rewrite, the default button is always the one in the master page.

        protected void LoginButton_PreRender(object sender, EventArgs e)
        {
            Button btnDefault = sender as Button;
            this.Page.Form.DefaultButton = btnDefault.UniqueID;
        }

    That's how I set my default button in the code-behind. I'm not sure what the rewrite module could be doing. Thanks for any help.

    Read the article

  • DOS batch command to read some info from text file

    - by Ray
    Hello All, I am trying to read some info from a text file using the Windows command line and save it to a variable, just like "set info=1234". Below is the content of the txt file. Actually, I just need the revision number; its location is always the same: line 5, columns 11 to 15. In the sample it's 1234, and I am wondering whether there is a way to save it to a variable in the DOS command line. Thanks a lot! svninfo.txt:

        Path: .
        URL: https://www.abc.com
        Repository Root: https://www.abc.com/svn
        Repository UUID: 12345678-8b61-fa43-97dc-123456789
        Revision: 1234
        Node Kind: directory
        Schedule: normal
        Last Changed Author: abc
        Last Changed Rev: 1234
        Last Changed Date: 2010-04-01 18:19:54 -0700 (Thu, 01 Apr 2010)
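    A sketch of one way to do this (my own suggestion): rather than relying on fixed columns, grab the token after "Revision:" with findstr and a for /f loop.

        @echo off
        rem Match the line beginning with "Revision:" and take the second token.
        for /f "tokens=2 delims=: " %%A in ('findstr /b /c:"Revision:" svninfo.txt') do set info=%%A
        echo %info%

    (Use %A instead of %%A if running interactively rather than from a batch file.)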

    Read the article

  • The Future of Project Management is Social

    - by Natalia Rachelson
    A guest post by Kazim Isfahani, Director, Product Marketing, Oracle.

    Rapid Ascent. Breakneck Speed. Lightning Fast. Perhaps even overwhelming. No matter which set of adjectives we use to describe it, social media's rise into the enterprise mainstream has been unprecedented. Indeed, the big 4 social media powerhouses (Facebook, Google+, LinkedIn, and Twitter) have nearly 2 billion users between them. You may be asking (as you should, really): "That's all well and good for the consumer, but for me at my company, what's your point? Beyond the fact that I can check and post updates, that is." Good question, kind sir.

    Impact of Social and Collaboration on Project Management

    I'll dovetail this discussion to the project management realm, since that's what I'm writing about. Speed is a big challenge for project-driven organizations. Anything that can help speed up project delivery - be it a new product introduction effort or a geographical expansion project - is a good thing. So where does this whole social thing fit, particularly since there is already a host of tools to help with traditional project execution? The fact is companies have seen improvements in their productivity by deploying departmental collaboration and other social-oriented solutions. McKinsey's survey on social tools shows we have reached critical scale: 72% of respondents report that their companies use at least one, and over 40% say they are using social networks and blogs. We don't hear as much about the impact of social media technologies at the project and project manager level, but that does not mean there is none. Consider the new hire. The type of individual entering the workforce and executing on projects is a generation of worker expecting visually appealing, easy-to-use and easy-to-understand technology meshing hand-in-hand with business processes. Consider the project manager. The social era has enhanced the role that the project manager must play. Today's project manager must be a supreme communicator, an influencer, a sympathizer, a negotiator, and still manage to keep all stakeholders in the loop on project progress. Social tools play a significant role in this effort. Now consider the impact on the project team. The way that a project team functions has changed, with newer, social-oriented technologies making the process of information dissemination and team communications much more fluid. It's clear that a shift is occurring where "social" is intersecting with project management.

    The Rise of Social Project Management

    We refer to the melding of project management and social networking as Social Project Management. Social Project Management is based upon the philosophy that the project team is one part of an integrated whole, and that valuable and unique abilities exist within the larger organization. For this reason, Social Project Management systems should be integrated into the collaborative platform(s) of an organization, allowing communication to proceed outside the project boundaries. What makes social project management "social" is an implicit awareness through which distributed teams build connected links in ways that were previously restricted to teams that were co-located. Just as critical, Social Project Management embraces the vision of seamless online collaboration within a project team, but also provides for (and enhances) the use of rigorous project management techniques. Social Project Management acknowledges that projects (particularly large projects) are a social activity - people doing work with people, for other people, with commitments to yet other people. The more people (larger projects), the more interpersonal the interactions, and the more "social" affects the project.

    The Epitome of Social - Fusion Project Portfolio Management

    If I take this one level further to discuss Fusion Project Portfolio Management, the notion of Social Project Management is on full display. With Fusion Project Portfolio Management, project team members have a single place for interaction on projects and access to any other resources working within the Fusion ERP applications. This gives team members the opportunity to be better informed, participate more, and provide better information. The application's visual appeal and highly graphical nature make it easy to navigate information, and the project activity stream adds to the intuitive user experience. The goal of productivity is pervasive throughout Fusion Project Portfolio Management. Field research conducted with Oracle customers and partners showed that users needed a way to stay in the context of their core transactions and yet easily access social networking tools. This is manifested in the application so that when users execute a business process, they not only have the transactional application at their fingertips, but also things like e-mail, SMS, text, instant messaging, and chat - a number of different ways to interact with people and/or groups of people, both internal and external to the project and enterprise. But in the end, connecting people is relatively easy. The larger issue is finding a way to serve up relevant, system-generated, actionable information, in real time, which will allow for more streamlined execution of key business processes. Fusion Project Portfolio Management's design concept enables users to create project communities, establish discussion threads, and manage event calendars, as well as deliver project-based workspaces to organize communications within the context of a project - all within a secure business environment. We'd love to hear from you and get your thoughts and ideas about how Social Project Management is impacting your organization. To learn more about Oracle Fusion Project Portfolio Management, please visit this link.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates, as well as the tipping points for the various block sizes, are significantly different. This will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and dialog below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the various source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
    - Source code for upload test application
    - Source code for random file generator
    - OData feeds of raw data from non-optimized transfer tests: Experiment Metadata, Experiment Datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB Uploads), Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata, Experiment Datasets, Raw Data (256KB, 512KB, 1MB, 2MB, and 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
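    For readers who want a feel for the blocked/parallelized approach described above, here is a minimal sketch (my own illustration, not the author's linked code; it assumes the 1.x StorageClient library and, unlike the real harness, reads the whole file into memory and skips the MD5 check):

        // Split a file into 1MB blocks, upload them in parallel, then commit.
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        container.CreateIfNotExist();
        var blob = container.GetBlockBlobReference("bigfile.bin");

        const int BlockSize = 1024 * 1024;
        byte[] data = File.ReadAllBytes(sourcePath);
        int blockCount = (data.Length + BlockSize - 1) / BlockSize;
        var blockIds = new string[blockCount];

        Parallel.For(0, blockCount, i =>
        {
            blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i)); // IDs must be same-length base64
            int offset = i * BlockSize;
            using (var ms = new MemoryStream(data, offset, Math.Min(BlockSize, data.Length - offset)))
            {
                blob.PutBlock(blockIds[i], ms, null); // null = no MD5 validation in this sketch
            }
        });
        blob.PutBlockList(blockIds); // assemble/commit the blob from its blocks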

    Read the article

  • bidirectional habtm linking

    - by Alexey Poimtsev
    Hi, all. I have an application with two groups of models: content-based (news, questions) and "something"-based (devices, applications, etc.). I need to link all models between the groups. For example, a question may belong to three different things: one application and two devices. The same goes for news. From the other side, I need to see all news articles and questions related to some application or device. Any idea how to develop this in Rails? I have only one idea: mixins that add content_id and thing_id methods to the models, plus a join table.
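    One possible approach, sketched below (my own illustration, not from the post): a doubly polymorphic join model, so any content record can be linked to any "thing" record.

        # A sketch; model and association names are assumptions. The links
        # table would need content_id/content_type and thing_id/thing_type.
        class Link < ActiveRecord::Base
          belongs_to :content, :polymorphic => true   # News or Question
          belongs_to :thing,   :polymorphic => true   # Device or Application
        end

        class Question < ActiveRecord::Base
          has_many :links, :as => :content
        end

        class Device < ActiveRecord::Base
          has_many :links, :as => :thing
          # :source_type is needed because the source side is polymorphic
          has_many :questions, :through => :links,
                   :source => :content, :source_type => 'Question'
        end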

    Read the article

  • Rails: translations for a table's column

    - by Andrew
    In a Rails application I have two models: Food and Drink. Both food and drink have a name, which has to be stored in two languages. How do I best implement translations for these tables? The first solution I came up with was to replace the name column with name_en and name_ru. Another solution is to encode the name as a YAML hash like { :en => 'eng', :ru => 'rus' } and store the YAML as the name. What would you recommend, assuming the content is not static? Maybe there's a good article?
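    A sketch of the serialized-hash option (my own illustration; the helper name is an assumption):

        class Food < ActiveRecord::Base
          serialize :name, Hash   # stores the hash as YAML in the name column

          def localized_name(locale = I18n.locale)
            (name || {})[locale.to_sym]
          end
        end

        # food.name = { :en => 'eng', :ru => 'rus' }
        # food.localized_name(:ru)   # => 'rus'

    Note that serialized columns cannot be searched or sorted with SQL, which is the usual argument for the name_en/name_ru approach (or a separate translations table) when the content is queried.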

    Read the article

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second. Each of these entries belongs to one of 2M files. For each file, we want to keep the last 10 entries, so we can look them up later. The way our client application works:

    - it gets a stream of all the data
    - it fetches the right file (GET)
    - it adds the new content
    - it saves the file back (PUT)

    We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?

    Read the article

  • IOError: [Errno 13] Permission denied when trying to read a file in Google App Engine

    - by mahesh
    I want to read an XML file and parse it. For that I used a SAX parser, which requires a file as input. I stored my XML file in an entity called XMLDocs with the following properties:

        XMLDocs
          name    : string property
          content : blob property (contains my XML file)

    The reason I had to store the file like this is that I have not yet provided my billing details to Google. Now when I try to open this file in my app I get a permission denied error. Please help me; what do I have to do? You can see the error by running my app at www.parsepython.appspot.com
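    App Engine restricts filesystem access, but SAX can parse straight from memory, so there is no need to open a file at all. A sketch (the query and handler names are assumptions, not from the post):

        import xml.sax

        doc = XMLDocs.gql("WHERE name = :1", "mydoc").get()  # fetch the entity
        handler = MyContentHandler()                         # your existing SAX handler
        xml.sax.parseString(doc.content, handler)            # parse the blob in memory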

    Read the article

  • Why does this window object not have the eval function?

    - by Motti
    I ran into an interesting (?) problem in the YUI rich edit demo on IE. When looking at the window object for the content-editable frame used as the browser, I see that the eval function is undefined (by running the following):

        javascript:alert(document.getElementById("editor_editor").contentWindow.eval)

    This only happens in IE (I checked IE6 and IE8); it doesn't happen in Firefox or Chrome. All the other window functions and properties seem to be in order. Now, I realise that eval is not really defined on the window but on the global object, but my understanding was that in browsers the window is the global object (also, eval does appear on all other windows, so why not on this one?). Does anyone know if this is a known bug/limitation in IE and how I can get to eval in the context of the global object of this frame? (I need the side effects to be available to anything running from within this frame.)
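    A possible workaround sketch (my own suggestion, untested against the YUI demo): IE exposes window.execScript, which evaluates code in a given window's global scope but returns no value; defining a helper through it yields an eval-like entry point whose side effects stay in the frame.

        var frameWin = document.getElementById("editor_editor").contentWindow;
        if (!frameWin.eval && frameWin.execScript) {
            // execScript runs in frameWin's global context but returns nothing,
            // so install a global helper we can call afterwards.
            frameWin.execScript("window.__evalHelper = function (s) { return eval(s); };");
        }
        var evalInFrame = frameWin.eval || frameWin.__evalHelper;
        evalInFrame("var x = 1 + 1;"); // x is defined in the frame's global scope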

    Read the article

  • WPF: Bind/Apply Filter on Boolean Property

    - by Julian Lettner
    I want to apply a filter to a ListBox according to the IsChecked property of a CheckBox. At the moment I have something like this.

    XAML:

        <CheckBox Name="_filterCheckBox" Content="Filter list" Checked="ApplyFilterHandler"/>
        <ListBox ItemsSource="{Binding SomeItems}" />

    Code-behind:

        public ObservableCollection<string> SomeItems { get; private set; }

        private void ApplyFilterHandler(object sender, RoutedEventArgs e)
        {
            if (_filterCheckBox.IsChecked.Value)
                CollectionViewSource.GetDefaultView(SomeItems).Filter += MyFilter;
            else
                CollectionViewSource.GetDefaultView(SomeItems).Filter -= MyFilter;
        }

        private bool MyFilter(object obj)
        {
            return ...
        }

    It works, but this solution feels like the old-fashioned way (Windows Forms). Question: Is it possible to achieve this with bindings / in XAML? Thanks for your time.
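    One binding-friendly sketch (my own illustration; the property name is an assumption, and the class is assumed to implement INotifyPropertyChanged): expose a bool property, apply the filter in its setter, and two-way-bind the CheckBox to it.

        public bool FilterEnabled
        {
            get { return _filterEnabled; }
            set
            {
                _filterEnabled = value;
                var view = CollectionViewSource.GetDefaultView(SomeItems);
                view.Filter = value ? new Predicate<object>(MyFilter) : null; // null removes the filter
                OnPropertyChanged("FilterEnabled");
            }
        }
        private bool _filterEnabled;

    with the XAML reduced to:

        <CheckBox Content="Filter list" IsChecked="{Binding FilterEnabled, Mode=TwoWay}" />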

    Read the article

  • Unable to create more than one gadget in GateIn portal

    - by Genadii Ganebnyi
    Repost from: JBoss GateIn Discussion. Please refer to that post for screenshots. Here is what happens when I try to create two gadgets:

    1. I press "Create a new gadget".
    2. I leave the content as is and press "Save". As a result, the gadget gets the name "CHANGME" and there is no way to change it.
    3. I press "Create a new gadget" again and change the title to "hello2 ....".
    4. I press "Save". The new gadget replaces the previous gadget in the left menu and gets the same name "CHANGME", again with no way to change it.

    Is this a bug, or am I doing something wrong?

    Read the article

  • Conditional Validation using JQuery Validation Plugin

    - by Steve Kemp
    I have a simple HTML form to which I've added validation using the jQuery Validation plugin. I have it working for single fields that require a value. I now need to extend that so that if a user answers Yes to a question, they must enter something in the Details field; otherwise the Details field can be left blank. I'm using radio buttons to display the Yes/No. Here's my complete HTML form; I'm not sure where to go from here:

        <script type="text/javascript" charset="utf-8">
          $.metadata.setType("attr", "validate");
          $(document).ready(function() {
            $("#editRecord").validate();
          });
        </script>
        <style type="text/css">
          .block { display: block; }
          form.cmxform label.error { display: none; }
        </style>
        </head>
        <body>
        <div id="header">
          <h1>Questions</h1>
        </div>
        <div id="content">
          <h1>Questions Page 1</h1>
        </div>
        <div id="content">
          <h1></h1>
          <form class="cmxform" method="post" action="editrecord.php" id="editRecord">
            <input type="hidden" name="-action" value="edit">
            <h1>Questions</h1>
            <table width="46%" class="record">
              <tr>
                <td width="21%" valign="top" class="field_name_left"><p>Question 1</p></td>
                <td width="15%" valign="top" class="field_data">
                  <label for="Yes">
                    <input type="radio" name="Question1" value="Yes" validate="required:true" /> Yes
                  </label>
                  <label for="No">
                    <input type="radio" name="Question1" value="No" /> No
                  </label>
                  <label for="Question1" class="error">You must answer this question to proceed</label>
                </td>
                <td width="64%" valign="top" class="field_data"><strong>Details:</strong>
                  <textarea id="Details1" class="where" name="Details1" cols="25" rows="2"></textarea>
                </td>
              </tr>
              <tr>
                <td valign="top" class="field_name_left">Question 2</td>
                <td valign="top" class="field_data">
                  <label for="Yes">
                    <input type="radio" name="Question2" value="Yes" validate="required:true" /> Yes
                  </label>
                  <label for="No">
                    <input type="radio" name="Question2" value="No" /> No
                  </label>
                  <label for="Question2" class="error">You must answer this question to proceed</label>
                </td>
                <td valign="top" class="field_data"><strong>Details:</strong>
                  <textarea id="Details2" class="where" name="Details2" cols="25" rows="2"></textarea>
                </td>
              </tr>
              <tr class="submit_btn">
                <td colspan="3">
                  <input type="submit" name="-edit" value="Finish">
                  <input type="reset" name="reset" value="Reset">
                </td>
              </tr>
            </table>
          </form>
        </div>
        </body>
        </html>
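    A sketch of one way to express the conditional rule (untested; it relies on the plugin accepting a dependency callback for required, keyed by each field's name attribute):

        $(document).ready(function() {
          $("#editRecord").validate({
            rules: {
              Details1: {
                // required only when the matching radio group is set to "Yes"
                required: function(element) {
                  return $("input[name='Question1']:checked").val() === "Yes";
                }
              },
              Details2: {
                required: function(element) {
                  return $("input[name='Question2']:checked").val() === "Yes";
                }
              }
            }
          });
        });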

    Read the article

  • PHP website optimization

    - by ana
    I have a high-traffic website and I need to make sure my site is fast enough to display my pages to everyone rapidly. I searched Google for many articles about speed and optimization, and here's what I found - two ways to cache the page:

    1. Caching the page in memory: this is very fast, but if I need to change the content of my page I have to remove it from the cache and then re-save the file on the disk.
    2. Saving it to the disk: this is very easy to maintain, but every time the page is accessed I have to read from the disk.

    Which method should I go with? Thanks
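    For reference, a minimal disk-cache sketch (my own illustration; the path and TTL are placeholders):

        <?php
        $cacheFile = '/tmp/cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
        $ttl = 300; // seconds before the cached copy is considered stale

        if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
            readfile($cacheFile);   // serve the cached copy and stop
            exit;
        }

        ob_start();                 // generate the page as usual below this line
        // ... page generation ...
        $html = ob_get_contents();
        ob_end_flush();             // send the page to the client
        file_put_contents($cacheFile, $html);   // store it for the next request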

    Read the article

  • Creating a QuerySet based on a ManyToManyField in Django

    - by River Tam
    So I've got two classes, Picture and Tag, that are as follows:

        class Tag(models.Model):
            pics = models.ManyToManyField('Picture', blank=True)
            name = models.CharField(max_length=30)
            # stuff omitted

        class Picture(models.Model):
            name = models.CharField(max_length=100)
            pub_date = models.DateTimeField('date published')
            tags = models.ManyToManyField('Tag', blank=True)
            content = models.ImageField(upload_to='instaton')
            # stuff omitted

    What I'd like to do is get a queryset (for a ListView) that, given a tag name, contains the most recent X number of Pictures tagged as such. I've looked up very similar problems, but none of the responses make any sense to me at all. How would I go about creating this queryset?
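    A sketch of the lookup (X and the tag source are placeholders; traversing the many-to-many by the related model's field is spelled tags__name):

        # In the ListView subclass; assumes the tag name arrives via the URLconf.
        def get_queryset(self):
            return (Picture.objects
                    .filter(tags__name=self.kwargs['tag'])
                    .order_by('-pub_date')[:10])   # 10 stands in for X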

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master.
    Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest set k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:

    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - SQL thread both reads and executes the events

    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
       - coordinator reads the event and hands it to the worker, who executes

    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
       - coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix - Detailed Results

    Read the article

  • PHP echo doesn't print newlines

    - by Skun
    Hey! I started a new project in PHP/MySQL. The aim of this project is to manage articles for a magazine of which I am the editor. For the content of the articles, I decided I would use the "TEXT" column type of MySQL. Now when I retrieve this column and print it with echo, the newlines are not there; it's all on the same line.

        $resset = mysql_query("select txt from articles where id = 1");
        $row = mysql_fetch_assoc($resset);
        $txt = $row['txt'];
        echo $txt; // doesn't print it as it was typed in the database (multiline)

    Find below the text as it looks in the database and as it looks when it is echoed. Within the database, it's stored with newlines. Has anybody else encountered this problem? Please help me, as my project depends on this :|
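    The usual explanation: the newlines are still in the data, but HTML collapses whitespace when rendering, so the browser shows one line. A sketch of the standard fix:

        // Convert newlines to <br /> for HTML output (escaping first is wise):
        echo nl2br(htmlspecialchars($txt));
        // Alternatively, wrap the output in <pre>...</pre> to preserve whitespace.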

    Read the article

  • Read / write program in Java using JFileChooser

    - by Casper Marcussen
    I didn't want to bother people here, but since I am under time pressure, I am desperate for help. My question is how to link the file chosen from a JFileChooser to a File, and how to convert it to a string so it can be displayed and edited in a JTextArea. I have the GUI set up using Swing, but the link between the ActionListener and the JFileChooser is not complete. Any help would be much appreciated. Code: http://pastebin.com/p3fb17Wi EDIT: I found this program, which does pretty much what I wanted, but it does not allow me to save the actual file: http://www.java-forums.org/new-java/8856-how-get-content-text-file-write-jtextarea.html
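    A sketch of the open/save wiring (my own illustration; frame and textArea are assumed references to the existing GUI components): JTextComponent's read() and write() handle the string conversion for you.

        // Inside the ActionListener for the "Open" action:
        JFileChooser chooser = new JFileChooser();
        if (chooser.showOpenDialog(frame) == JFileChooser.APPROVE_OPTION) {
            try (FileReader in = new FileReader(chooser.getSelectedFile())) {
                textArea.read(in, null);   // load the file's text into the text area
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }

        // Inside the ActionListener for the "Save" action:
        if (chooser.showSaveDialog(frame) == JFileChooser.APPROVE_OPTION) {
            try (FileWriter out = new FileWriter(chooser.getSelectedFile())) {
                textArea.write(out);       // write the edited text back to disk
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }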

    Read the article

  • How to hide jQuery sub-menus (ddsmoothmenu)?

    - by Tim
    I'm new to jQuery and I must admit that I've understood nothing yet; the syntax appears to me as an unknown language, although I thought I had some experience with JavaScript. Nevertheless, I managed to implement this menu in my ASP.NET master page's header. I even got it to work so that the content page is loaded with Ajax, with help from here. But finally I'm failing at making the menu disappear when the new page has been loaded asynchronously. I don't know how to hide this accursed jQuery menu. Following is the part of the js file where the events are registered for showing/hiding:

        buildmenu: function($, setting) {
            var smoothmenu = ddsmoothmenu
            var $mainmenu = $("#" + setting.mainmenuid + ">ul") //reference main menu UL
            $mainmenu.parent().get(0).className = setting.classname || "ddsmoothmenu"
            var $headers = $mainmenu.find("ul").parent()
            $headers.hover(
                function(e) { $(this).children('a:eq(0)').addClass('selected') },
                function(e) { $(this).children('a:eq(0)').removeClass('selected') }
            )
            $headers.each(function(i) { //loop through each LI header
                var $curobj = $(this).css({zIndex: 100 - i}) //reference current LI header
                var $subul = $(this).find('ul:eq(0)').css({display: 'block'})
                $subul.data('timers', {})
                this._dimensions = {w: this.offsetWidth, h: this.offsetHeight, subulw: $subul.outerWidth(), subulh: $subul.outerHeight()}
                this.istopheader = $curobj.parents("ul").length == 1 ? true : false //is top level header?
                $subul.css({top: this.istopheader && setting.orientation != 'v' ? this._dimensions.h + "px" : 0})
                $curobj.children("a:eq(0)").css(this.istopheader ? {paddingRight: smoothmenu.arrowimages.down[2]} : {}).append( //add arrow images
                    '<img src="' + (this.istopheader && setting.orientation != 'v' ? smoothmenu.arrowimages.down[1] : smoothmenu.arrowimages.right[1])
                    + '" class="' + (this.istopheader && setting.orientation != 'v' ? smoothmenu.arrowimages.down[0] : smoothmenu.arrowimages.right[0])
                    + '" style="border:0;" />'
                )
                if (smoothmenu.shadow.enable) {
                    //store this shadow's offsets
                    this._shadowoffset = {
                        x: (this.istopheader ? $subul.offset().left + smoothmenu.shadow.offsetx : this._dimensions.w),
                        y: (this.istopheader ? $subul.offset().top + smoothmenu.shadow.offsety : $curobj.position().top)
                    }
                    if (this.istopheader)
                        $parentshadow = $(document.body)
                    else {
                        var $parentLi = $curobj.parents("li:eq(0)")
                        $parentshadow = $parentLi.get(0).$shadow
                    }
                    //insert shadow DIV and set it to parent node for the next shadow div
                    this.$shadow = $('<div class="ddshadow' + (this.istopheader ? ' toplevelshadow' : '') + '"></div>')
                        .prependTo($parentshadow)
                        .css({left: this._shadowoffset.x + 'px', top: this._shadowoffset.y + 'px'})
                }
                $curobj.hover(
                    function(e) {
                        var $targetul = $subul //reference UL to reveal
                        var header = $curobj.get(0) //reference header LI as DOM object
                        clearTimeout($targetul.data('timers').hidetimer)
                        $targetul.data('timers').showtimer = setTimeout(function() {
                            header._offsets = {left: $curobj.offset().left, top: $curobj.offset().top}
                            var menuleft = header.istopheader && setting.orientation != 'v' ? 0 : header._dimensions.w
                            //calculate this sub menu's offsets from its parent
                            menuleft = (header._offsets.left + menuleft + header._dimensions.subulw > $(window).width())
                                ? (header.istopheader && setting.orientation != 'v' ? -header._dimensions.subulw + header._dimensions.w : -header._dimensions.w)
                                : menuleft
                            if ($targetul.queue().length <= 1) { //if 1 or less queued animations
                                $targetul.css({left: menuleft + "px", width: header._dimensions.subulw + 'px'}).animate({height: 'show', opacity: 'show'}, ddsmoothmenu.transition.overtime)
                                if (smoothmenu.shadow.enable) {
                                    var shadowleft = header.istopheader ? $targetul.offset().left + ddsmoothmenu.shadow.offsetx : menuleft
                                    var shadowtop = header.istopheader ? $targetul.offset().top + smoothmenu.shadow.offsety : header._shadowoffset.y
                                    if (!header.istopheader && ddsmoothmenu.detectwebkit) { //in WebKit browsers, restore shadow's opacity to full
                                        header.$shadow.css({opacity: 1})
                                    }
                                    header.$shadow.css({overflow: '', width: header._dimensions.subulw + 'px', left: shadowleft + 'px', top: shadowtop + 'px'}).animate({height: header._dimensions.subulh + 'px'}, ddsmoothmenu.transition.overtime)
                                }
                            }
                        }, ddsmoothmenu.showhidedelay.showdelay)
                    },
                    function(e) {
                        var $targetul = $subul
                        var header = $curobj.get(0)
                        clearTimeout($targetul.data('timers').showtimer)
                        $targetul.data('timers').hidetimer = setTimeout(function() {
                            $targetul.animate({height: 'hide', opacity: 'hide'}, ddsmoothmenu.transition.outtime)
                            if (smoothmenu.shadow.enable) {
                                if (ddsmoothmenu.detectwebkit) { //in WebKit browsers, set first child shadow's opacity to 0, as "overflow:hidden" doesn't work in them
                                    header.$shadow.children('div:eq(0)').css({opacity: 0})
                                }
                                header.$shadow.css({overflow: 'hidden'}).animate({height: 0}, ddsmoothmenu.transition.outtime)
                            }
                        }, ddsmoothmenu.showhidedelay.hidedelay)
                    }
                ) //end hover
            }) //end $headers.each()
            $mainmenu.find("ul").css({display: 'none', visibility: 'visible'})
        }

    One link of my menu that I want to hide when the content is redirected to another page (I need a "closeMenu" function):

        <li><a href="DeliveryControl.aspx" onclick="AjaxContent.getContent(this.href);closeMenu();return false;">Delivery Control</a></li>

    In short: I want to fade out the submenus the same way they do automatically on blur, so that only the header menu stays visible, but I don't know how. Thanks, Tim

    EDIT: Thanks to Starx's private lesson in jQuery for beginners, I solved it: I forgot the # in $("#smoothmenu1"). After that it was not difficult to find and call the hover function on the menu's headers to let them fade out smoothly:

        $("#smoothmenu1").find("ul").hover();

    Regards, Tim

    Read the article

  • Proper way to set up a UISegmentedControl on a UINavigationController UINavigationBar all inside a UITab

    - by Kaspa
    The title pretty much describes it all. The problem is the handling of the UISegmentedControl callbacks (button presses). If the content type of all of the nested views were the same (i.e. some UITableViewControllers), then I could just switch data sources and reload the tables. However, this is not the case: I have three very different views in there that allow further drill-down / interaction based on the navigation controllers. So the way I have this set up at the moment is that there is a "container" class that I put all of the UINavigationControllers in. They all share one and the same UISegmentedControl, and I redirect the callbacks to the container view controller. This does not feel too good at all. Additionally, there is a problem when the user taps on the tab bar icon: the navigation controller pops to root, which is ... the empty container view. Here's a picture of what I want to achieve:

    Read the article

  • Transform/redraw view after pinch zoom on x-axis

    - by Jonathan
    My setup:

        UIScrollView (scrollView)
          - UIView (contentView)
            - UIView (subView)

    I have managed to make it so I can zoom the contentView only on the x-axis. The problem is that the subView contains a graph. When the contentView is zoomed and transformed, the graph gets blurry and distorted, since the scaling only affects the x-axis. What I need help with is to somehow redraw the contentView after I'm done with the zooming, without distorting the graph. Is it possible to transform the contentView so that the graph stays sharp even if you only zoom on the x-axis, or to redraw the content in the zoomed/stretched view somehow? I have tried the solution in this thread (link) but haven't succeeded in getting it running.

    Read the article

  • Android: having two ListViews in two ListActivities didn't work

    - by Yang
    I guess my previous question wasn't clear enough (http://stackoverflow.com/questions/2549585/android-failed-to-setcontentview-when-switching-to-listactivity), so I'll explain as follows. In my app I have two ListActivities which use two different ListViews:

        public class Activity1 extends ListActivity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.listview1);
            }
        }

        public class Activity2 extends ListActivity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.listview2);
            }
        }

    As required by Android, a ListView must have an ID which is exactly "@android:id/list". If I give the ListView in both listview1 and listview2 the same ID, then they will end up using the same format of ListView, which is not what I want. But if I set one of the IDs to something like "@+id/listview2", Android gives me the error:

        java.lang.RuntimeException: Your content must have a ListView whose id attribute is 'android.R.id.list'

    How do I handle this dilemma?
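    For what it's worth, the ID only needs to be unique within a single layout file, so both layouts can declare @android:id/list and still be styled differently through their other attributes. A sketch (the attribute values are assumptions for illustration):

        <!-- res/layout/listview1.xml -->
        <ListView xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:divider="#FF0000"
            android:dividerHeight="2dip" />

        <!-- res/layout/listview2.xml: same required ID, different styling -->
        <ListView xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:divider="#0000FF"
            android:dividerHeight="4dip" />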

    Read the article
