Search Results

Search found 15535 results on 622 pages for 'mat keep'.


  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn about the design and implementation of all the new MySQL Cluster 7.3 features, and how to get started with them, from the comfort and convenience of your own device, including:
    - Foreign Key constraints in MySQL Cluster
    - Node.js NoSQL API
    - Auto-installation of high performance, distributed clusters
    We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience.
    Q. What Foreign Key actions are supported?
    A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE, RESTRICT, NO ACTION and SET NULL.
    Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes?
    A. They are implemented in the data nodes, and therefore can be enforced for both the SQL and NoSQL APIs.
    Q. Are they compatible with the InnoDB Foreign Key implementation?
    A. Yes, with the following exceptions:
    - InnoDB doesn't support "No Action" constraints; MySQL Cluster does.
    - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter.
    - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
    - You cannot change primary keys through the NDB API, which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you'd want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table, and the subsequent implicit insert into the parent wouldn't reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.
    Q. Does adding or dropping Foreign Keys cause downtime due to a schema change?
    A. Nope, this is an online operation. MySQL Cluster supports a number of online schema changes, e.g. adding and dropping indexes, adding columns, etc.
    Q. Where can I see an example of Node.js with MySQL Cluster?
    A. Check out the tutorial and download the code from GitHub.
    Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2?
    A. Yes to both!
    Q. Can I get a demo?
    A. Check out the tutorial. You can download the code from http://labs.mysql.com/ (go to the Select Build drop-down box).
    Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication?
    A. If you're splitting your cluster between sites, then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication, where the latency of your WAN doesn't impact the latency of your reads/writes.
    Q. Where can one learn more about the PayPal project with MySQL Cluster?
    A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation.
    So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!
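    To make the Foreign Key behaviour above concrete, here is a minimal sketch using the mysql-connector-python driver; the connection parameters and table names are illustrative, not taken from the webinar:

        import mysql.connector

        # Hypothetical connection details - point these at one of your SQL nodes
        cnx = mysql.connector.connect(host="127.0.0.1", user="app",
                                      password="secret", database="test")
        cur = cnx.cursor()

        # FKs are enforced in the data nodes, so they apply to SQL and NoSQL access alike
        cur.execute("CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=ndbcluster")
        cur.execute("""CREATE TABLE child (
                         id INT PRIMARY KEY,
                         parent_id INT,
                         FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE
                       ) ENGINE=ndbcluster""")

        cur.execute("INSERT INTO parent VALUES (1)")
        cur.execute("INSERT INTO child VALUES (10, 1)")        # OK - parent row exists

        try:
            cur.execute("INSERT INTO child VALUES (11, 999)")  # no such parent
        except mysql.connector.Error as err:
            print("FK violation rejected as expected:", err)

        cur.execute("DELETE FROM parent WHERE id = 1")         # cascades to child
        cnx.commit()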

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
    The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just a few weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday 25th October at 09.00 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available, to view on-demand at your convenience. In the webinar, we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including:
    - Foreign Key Constraints: Yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here.
    - Node.js NoSQL API: Allowing web, mobile and cloud services to query and receive result sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here.
    - Auto-Installer: This new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here.
    So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open.

    Read the article

  • Redehost Transforms Cloud & Hosting Services with MySQL Enterprise Edition

    - by Mat Keep
    Redehost is one of Brazil's largest cloud computing and web hosting providers, with more than 60,000 customers and 52,000 web sites running on its infrastructure. As the company grew, Redehost needed to automate operations, such as system monitoring, making the operations team more proactive in solving problems. Redehost also sought to improve server uptime, robustness, and availability, especially during backup windows, when performance would often dip. To address the needs of the business, Redehost migrated from the community edition of MySQL to MySQL Enterprise Edition, which has delivered a host of benefits:
    - Proactive database management and monitoring using MySQL Enterprise Monitor, enabling Redehost to fulfil customer SLAs. Using the Query Analyzer, Redehost was able to more rapidly identify slow queries, improving customer support.
    - Quadrupled backup speed with MySQL Enterprise Backup, leading to faster data recovery and improved system availability.
    - Reduced DBA overhead by 50%, due to the improved support capabilities offered by MySQL Enterprise Edition.
    - Enabled infrastructure consolidation, avoiding unnecessary energy costs and premature hardware acquisition.
    You can learn more from the full Redehost Case Study. Also, take a look at the recently updated MySQL in the Cloud whitepaper for the latest developments that are making it even simpler and more efficient to develop and deploy new services with MySQL in the cloud.

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
    The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.
    ClusterJ and ClusterJPA
    ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
    - Persistent classes
    - Relationships
    - Joins in queries
    - Lazy loading
    - Table and index creation from the object model
    By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified, with faster development cycles resulting in accelerated time to market for new services.
    MySQL Cluster offers multiple NoSQL APIs alongside Java:
    - Memcached for a persistent, high performance, write-scalable Key/Value store
    - HTTP/REST via an Apache module
    - C++ via the NDB API for the lowest absolute latency
    Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns - from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization.
    Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster's distributed, shared-nothing architecture with auto-sharding and real-time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight.
    OK - hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.
    Q & A
    Q. Why would I use Connector/J vs. ClusterJ?
    A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better, as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.
    Q. Can I mix different APIs (i.e. ClusterJ, Connector/J) in our application for different query types?
    A. Yes. You can mix and match all of the API types: SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and the new data is instantly visible to all of the others.
    Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
    A. SessionFactory has a connection to the mgmd (management node), but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.
    Q. Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
    A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.
    Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
    A. ClusterJ will send identical PK lookups to the same data node.
    Q. How is distributed query processing handled by MySQL Cluster?
    A. If the data is split between data nodes, then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.
    Q. Can I use Foreign Keys with MySQL Cluster?
    A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.
    Summary
    The NoSQL Java APIs are packaged with MySQL Cluster, available for download here, so feel free to take them for a spin today!
    Key Resources
    - MySQL Cluster on-line demo
    - MySQL ClusterJ and JPA on-demand webinar
    - MySQL ClusterJ and JPA documentation
    - MySQL ClusterJ and JPA whitepaper and tutorial
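    As a rough illustration of the mix-and-match answer above, here is a sketch in Python (kept driver-neutral rather than Java, purely for brevity) that updates a row through SQL and immediately reads it through the memcached API. It assumes the memcached server bundled with MySQL Cluster is running with a container, configured via the ndbmemcache configuration database, that maps this key prefix onto the same NDB table the SQL statement touches; hosts, ports, credentials and names are all illustrative:

        import mysql.connector
        import memcache  # python-memcached client

        sql = mysql.connector.connect(host="127.0.0.1", user="app",
                                      password="secret", database="test")
        cur = sql.cursor()

        # Update through the SQL API...
        cur.execute("UPDATE towns SET population = population + 1 WHERE town = %s",
                    ("maidenhead",))
        sql.commit()

        # ...and the new value is instantly visible through the memcached API,
        # because both paths operate on the same rows in the data nodes.
        mc = memcache.Client(["127.0.0.1:11211"])
        print(mc.get("town:maidenhead"))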

    Read the article

  • How to keep track of a private messaging system using MongoDB?

    - by luckytaxi
    Take Facebook's private messaging system, where you have to keep track of sender and receiver along with the message content. If I were using MySQL I would have multiple tables, but with MongoDB I'll try to avoid all that. I'm trying to come up with a "good" schema that can scale and is easy to maintain. If I were using MySQL, I would have separate tables to reference the user and the message. See below...
    profiles table: user_id, first_name, last_name
    message table: message_id, message_body, time_stamp
    user_message_ref table: user_id (FK), message_id (FK), is_sender (boolean)
    With the schema listed above, I can query for any messages that "Bob" may have, regardless of whether he's the recipient or sender. Now, how do I turn that into a schema that works with MongoDB? I'm thinking I'll have a separate collection to hold the messages. Problem is, how can I differentiate between the sender and the recipient? If Bob logs in, what do I query against? Depending on whether Bob initiated the email, I don't want to have to query against both "sender" and "receiver" just to see if the message belongs to the user.
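    For reference, one common document shape (a sketch only - the field names are illustrative, not from the original question) stores both parties on each message, so a single query with an indexed $or covers Bob's messages in either direction, using pymongo:

        from pymongo import MongoClient

        db = MongoClient()["chat"]

        # One document per message; sender and recipient live on the message itself
        db.messages.insert_one({
            "sender_id": "bob",
            "recipient_id": "alice",
            "body": "hello",
            "sent_at": "2010-05-01T12:00:00Z",
        })

        # All of Bob's messages, sent or received, in one query
        bobs_messages = db.messages.find(
            {"$or": [{"sender_id": "bob"}, {"recipient_id": "bob"}]}
        )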

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    "Big Data" offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world's most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision. The Guide details each of these stages and the technologies supporting them:
    - Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed - for example, healthcare or financial services records can be anonymized - before transfer to Hadoop.
    - Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS.
    - Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform.
    - Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop, where they inform real-time operational processes or provide source data for BI analytics tools.
    So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors' activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and to deliver highly targeted offers. Of course, it is not just on the web that big data can make a difference. Every business activity can benefit, with other common use cases including:
    - Sentiment analysis
    - Marketing campaign analysis
    - Customer churn modeling
    - Fraud detection
    - Research and development
    - Risk modeling
    - And more
    As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.
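    As a concrete footnote to the Organize and Decide stages above, typical Sqoop invocations look something like the following sketch; the connection string, credentials, table and directory names are all illustrative:

        # MySQL -> Hadoop: import a table into HDFS
        sqoop import \
          --connect jdbc:mysql://mysql-host/weblogs \
          --username etl --password-file /user/etl/.pw \
          --table page_views \
          --target-dir /data/page_views

        # Hadoop -> MySQL: export analysis results back to a MySQL table
        sqoop export \
          --connect jdbc:mysql://mysql-host/weblogs \
          --username etl --password-file /user/etl/.pw \
          --table churn_scores \
          --export-dir /results/churn_scores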

    Read the article

  • New Options for MySQL High Availability

    - by Mat Keep
    Data is the currency of today's web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization - minutes of downtime will result in significant loss of revenue and reputation. There is no "one size fits all" approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And technology is only one element in delivering HA - "People and Processes" are just as critical as the technology itself. For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA. At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:
    - DRBD for MySQL
    - Oracle Solaris Clustering for MySQL
    DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node, and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies.
    Figure: DRBD Stack - Providing Synchronous Replication for the MySQL Database with InnoDB
    Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL.
    Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL, offering a selection of configuration options in the various Oracle Solaris Cluster topologies.
    Putting it All Together
    When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL. To make the decision process simpler, we have also published a new MySQL HA Solutions Guide. Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity. This subject was recently presented at Oracle Open World, and the slides are available here. Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs. Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering, which will help you scope out your scaling and HA requirements.

    Read the article

  • Guide to MySQL & NoSQL, Webinar Q&A

    - by Mat Keep
    Yesterday we ran a webinar discussing the demands of next generation web services, and how blending the best of relational and NoSQL technologies enables developers and architects to deliver the agility, performance and availability needed to be successful. Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into areas like auto-sharding and cross-shard JOINs, replication, performance, client libraries, etc. So I thought it would be useful to post those below, for the benefit of those unable to attend the webinar. Before getting to the Q&A, there are a couple of other resources that may be useful to those looking at NoSQL capabilities within MySQL:
    - On-demand webinar (coming soon!)
    - Slides used during the webinar
    - Guide to MySQL and NoSQL whitepaper
    - MySQL Cluster demo, including NoSQL interfaces, auto-sharding, high availability, etc.
    So here is the Q&A from the event.
    Q. Where does MySQL Cluster fit in to the CAP theorem?
    A. MySQL Cluster is flexible. A single Cluster will prefer consistency over availability in the presence of network partitions. A pair of Clusters can be configured to prefer availability over consistency. A full explanation can be found on the MySQL Cluster & CAP Theorem blog post.
    Q. Can you configure the number of replicas? (The slide used a replication factor of 1.)
    A. Yes. A cluster is configured by an .ini file. The option NoOfReplicas sets the number of originals and replicas: 1 = no data redundancy, 2 = one copy, etc. Usually there's no benefit in setting it >2.
    Q. Interestingly, most (if not all) of the NoSQL databases recommend having 3 copies of data (the replication factor).
    A. Yes, with configurable quorum-based reads and writes. MySQL Cluster does not need a quorum of replicas online to provide service. Systems that require a quorum need >2 replicas to be able to tolerate a single failure. Additionally, many NoSQL systems take liberal inspiration from the original GFS paper, which described a 3-replica configuration. MySQL Cluster avoids the need for a quorum by using a lightweight arbitrator. You can configure more than 2 replicas, but this is a tradeoff between incrementally improved availability and linearly increased cost.
    Q. Can you have cross node group JOINs? Wouldn't that run the risk of flooding the network?
    A. MySQL Cluster 7.2 supports cross nodegroup joins. A full cross-join can require a large amount of data transfer, which may bottleneck on network bandwidth. However, for more selective joins, typically seen with OLTP and light analytic applications, cross node-group joins give a great performance boost and network bandwidth saving over having the MySQL Server perform the join.
    Q. Are the details of the benchmark available anywhere? According to my calculations it results in approx. 350k ops/sec per processor, which is the largest number I've seen lately.
    A. The details are linked from Mikael Ronstrom's blog. The benchmark uses a benchmarking tool we call flexAsynch, which runs parallel asynchronous transactions. It involved 100 byte reads, of 25 columns each. Regarding the per-processor ops/s, MySQL Cluster is particularly efficient in terms of throughput/node. It uses lock-free minimal copy message passing internally, and maximizes ID cache reuse. Note also that these are in-memory tables; there is no need to read anything from disk.
    Q. Is access control (like table) planned to be supported for NoSQL access mode?
    A. Currently we have not seen much need for full SQL-like access control (which has always been overkill for web apps and telco apps). So we have no plans, though especially with memcached it is certainly possible to turn on connection-level access control. But specifically table level controls are not planned.
    Q. How is the performance of the memcached API with MySQL against memcached+MySQL, or any other object cache like Ehcache with MySQL?
    A. With the memcache API we generally see a memcached response in less than 1 ms, and a small cluster with one memcached server can handle tens of thousands of operations per second.
    Q. Can .NET access the memcached API?
    A. Yes, just use a .NET memcache client such as the enyim or BeIT memcache libraries.
    Q. Is the row level locking applicable when you update a column through the memcached API?
    A. An update that comes through memcached uses a row lock and then releases it immediately. Memcached operations like "INCREMENT" are actually pushed down to the data nodes. In most cases the locks are not even held long enough for a network round trip.
    Q. Has anyone published an example using something like PHP? I am assuming that you just use the PHP memcached extension to hook into the memcached API. Is that correct?
    A. Not that I'm aware of, but absolutely you can use it with PHP or any of the other drivers.
    Q. For beginners, we need more examples.
    A. Take a look here for a fully worked example.
    Q. Can I access MySQL using Cobol (Open Cobol) or C, and if so, where can I find the coding libraries etc?
    A. There is a Cobol implementation that works well with MySQL, but I do not think it is Open Cobol. Also, there is a MySQL C client library that is a standard part of every MySQL distribution.
    Q. Is there a place to go to find help when testing and/or implementing the NoSQL access?
    A. If using Cluster then you can use the [email protected] alias or post on the MySQL Cluster forum.
    Q. Are there any white papers on this?
    A. Yes - there is more detail in the MySQL Guide to NoSQL whitepaper.
    If you have further questions, please don't hesitate to use the comments below!
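    Following up on the PHP question above: any standard memcached client will work, since the Cluster speaks the regular memcached protocol. Here is a minimal sketch using the python-memcached client; the server address and keys are illustrative, and with the NDB memcached plugin the values are persisted in the data nodes rather than in a cache:

        import memcache

        # Connect to the memcached server bundled with MySQL Cluster
        mc = memcache.Client(["127.0.0.1:11211"])

        # Standard memcached verbs; the data lands in NDB tables,
        # so it is durable and visible to SQL clients too
        mc.set("user:42:name", "Bob")
        print(mc.get("user:42:name"))

        mc.set("page_hits", "0")
        mc.incr("page_hits")   # INCREMENT is pushed down to the data nodes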

    Read the article

  • A python random function acts differently when assigned to a list or called directly...

    - by Dror Hilman
    I have a Python function that randomizes a dictionary representing a position-specific scoring matrix. For example:

        mat = { 'A' : [ 0.53, 0.66, 0.67, 0.05, 0.01, 0.86, 0.03, 0.97, 0.33, 0.41, 0.26 ],
                'C' : [ 0.14, 0.04, 0.13, 0.92, 0.99, 0.04, 0.94, 0.00, 0.07, 0.23, 0.35 ],
                'T' : [ 0.25, 0.07, 0.01, 0.01, 0.00, 0.04, 0.00, 0.03, 0.06, 0.12, 0.14 ],
                'G' : [ 0.08, 0.23, 0.20, 0.02, 0.00, 0.06, 0.04, 0.00, 0.54, 0.24, 0.25 ] }

    The scrambling function:

        def scramble_matrix(matrix, iterations):
            mat_len = len(matrix["A"])
            pos1 = pos2 = 0
            for count in range(iterations):
                pos1, pos2 = random.sample(range(mat_len), 2)
                # shuffle the matrix:
                for nuc in matrix.keys():
                    matrix[nuc][pos1], matrix[nuc][pos2] = matrix[nuc][pos2], matrix[nuc][pos1]
            return matrix

        def print_matrix(matrix):
            for nuc in matrix.keys():
                print nuc+"[",
                for count in matrix[nuc]:
                    print "%.2f"%count,
                print "]"

    Now to the problem... When I try to scramble a matrix directly, it works fine:

        print_matrix(mat)
        print ""
        print_matrix(scramble_matrix(mat,10))

    gives:

        A[ 0.53 0.66 0.67 0.05 0.01 0.86 0.03 0.97 0.33 0.41 0.26 ]
        C[ 0.14 0.04 0.13 0.92 0.99 0.04 0.94 0.00 0.07 0.23 0.35 ]
        T[ 0.25 0.07 0.01 0.01 0.00 0.04 0.00 0.03 0.06 0.12 0.14 ]
        G[ 0.08 0.23 0.20 0.02 0.00 0.06 0.04 0.00 0.54 0.24 0.25 ]

        A[ 0.41 0.97 0.03 0.86 0.53 0.66 0.33 0.05 0.67 0.26 0.01 ]
        C[ 0.23 0.00 0.94 0.04 0.14 0.04 0.07 0.92 0.13 0.35 0.99 ]
        T[ 0.12 0.03 0.00 0.04 0.25 0.07 0.06 0.01 0.01 0.14 0.00 ]
        G[ 0.24 0.00 0.04 0.06 0.08 0.23 0.54 0.02 0.20 0.25 0.00 ]

    but when I try to assign this scrambling to a list, it does not work!!!

        print_matrix(mat)
        s=[]
        for x in range(3):
            s.append(scramble_matrix(mat,10))
        for matrix in s:
            print ""
            print_matrix(matrix)

    result:

        A[ 0.53 0.66 0.67 0.05 0.01 0.86 0.03 0.97 0.33 0.41 0.26 ]
        C[ 0.14 0.04 0.13 0.92 0.99 0.04 0.94 0.00 0.07 0.23 0.35 ]
        T[ 0.25 0.07 0.01 0.01 0.00 0.04 0.00 0.03 0.06 0.12 0.14 ]
        G[ 0.08 0.23 0.20 0.02 0.00 0.06 0.04 0.00 0.54 0.24 0.25 ]

        A[ 0.01 0.66 0.97 0.67 0.03 0.05 0.33 0.53 0.26 0.41 0.86 ]
        C[ 0.99 0.04 0.00 0.13 0.94 0.92 0.07 0.14 0.35 0.23 0.04 ]
        T[ 0.00 0.07 0.03 0.01 0.00 0.01 0.06 0.25 0.14 0.12 0.04 ]
        G[ 0.00 0.23 0.00 0.20 0.04 0.02 0.54 0.08 0.25 0.24 0.06 ]

    ...and that same scrambled matrix is printed two more times, identically. What is the problem? Why does the scrambling not work after the first time, and why is the whole list filled with the same matrix?!

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as a part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    - Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    - Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    - Global Transaction Identifiers, coupled with MySQL utilities for automatic failover / switchover and slave promotion
    - Crash Safe Slaves and Binlog
    - Optimized Row Based Replication
    - Replication Event Checksums
    - Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article.
    Back to the benchmark - details are as follows.
    Environment
    The test environment consisted of two Linux servers: one running the replication master, and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB
    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.
    Test Methodology: The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started, and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).
    Test Reset: The test-reset cycle was implemented as follows:
    - the slave was stopped
    - the slave data directory was replaced with the previous backup
    - the slave was restarted, with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.
    MySQL Configuration
    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    - 0 worker threads: current (i.e. single threaded) sequential mode; 1 x IO thread and 1 x SQL thread; the SQL thread both reads and executes the events
    - 1 worker thread: sequential mode; 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread; the coordinator reads the event and hands it to the worker, who executes it
    - 2+ worker threads: parallel execution; 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads; the coordinator reads events and hands them to the workers, who execute them
    Results
    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).
    Figure 1: 5x Higher Performance with Multi-Threaded Slaves
    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.
    Figure 2: Detailed Results
    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    - Running with 1 worker compared to zero workers just introduces overhead, without the benefit of parallel execution.
    - As expected, having more workers than schemas adds no visible benefit.
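    Before moving on: for readers who want to script a similar measurement, here is a rough sketch of the timing step from the Test Methodology above, in Python with the mysql-connector-python driver. The connection details, binlog file name and position are placeholders you would capture during your own setup:

        import time
        import mysql.connector

        slave = mysql.connector.connect(host="slave-host", user="root", password="secret")
        cur = slave.cursor()

        workers = 10                   # vary this between test runs
        binlog_file, binlog_pos = "master-bin.000002", 120  # captured after the load phase

        cur.execute("SET GLOBAL slave_parallel_workers = %d" % workers)
        cur.execute("START SLAVE IO_THREAD")
        # ... wait here until the relay log contains all 100k updates ...

        start = time.time()
        cur.execute("START SLAVE SQL_THREAD")
        # Blocks until the slave has applied events up to the given master position
        cur.execute("SELECT MASTER_POS_WAIT(%s, %s)", (binlog_file, binlog_pos))
        cur.fetchall()
        duration = time.time() - start

        print("QPS: %.0f" % (100000 / duration))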
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.
    Conclusion
    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.
    Appendix - Detailed Results

    Read the article

  • WPF: How can I KEEP the same ItemTemplate instance once its created ??

    - by Samir Sabri
    Hello, here is the scenario: I have a ListView, with ItemsSource = ProjectModel.Instance.PagesModelsCollection, where PagesModelsCollection is an ObservableCollection. In the ListView XAML part:

        <ListView.ItemTemplate>
            <DataTemplate x:Name="PagesViewDataTemplate">
                <DataTemplate.Resources>
                    <Style x:Key="PageHostStyle" TargetType="{x:Type p:KPage}">
                    </Style>
                </DataTemplate.Resources>
                <StackPanel x:Name="MarginStack" Margin="50,50,50,50" >
                    <p:KPage x:Name="PageHost" >
                    </p:KPage>
                </StackPanel>
            </DataTemplate>
        </ListView.ItemTemplate>

    The problem is that the ItemTemplate is re-created each time we refresh the items. So, if we have 100 items in the ListView, another 100 new ItemTemplate instances will be created when we refresh the items! As a result, if we add UIElements to one of the ItemTemplate instances, those added UIElements will be lost, because the old ItemTemplate is replaced with a new one! How can I KEEP the ItemTemplate instance once it's created?

    Read the article

  • Go - Using a container/heap to implement a priority queue

    - by Seth Hoenig
    In the big picture, I'm trying to implement Dijkstra's algorithm using a priority queue. According to members of golang-nuts, the idiomatic way to do this in Go is to use the heap interface with a custom underlying data structure. So I have created Node.go and PQueue.go like so:

        //Node.go
        package pqueue

        type Node struct {
            row    int
            col    int
            myVal  int
            sumVal int
        }

        func (n *Node) Init(r, c, mv, sv int) {
            n.row = r
            n.col = c
            n.myVal = mv
            n.sumVal = sv
        }

        func (n *Node) Equals(o *Node) bool {
            return n.row == o.row && n.col == o.col
        }

    And PQueue.go:

        // PQueue.go
        package pqueue

        import "container/vector"
        import "container/heap"

        type PQueue struct {
            data vector.Vector
            size int
        }

        func (pq *PQueue) Init() { heap.Init(pq) }

        func (pq *PQueue) IsEmpty() bool { return pq.size == 0 }

        func (pq *PQueue) Push(i interface{}) {
            heap.Push(pq, i)
            pq.size++
        }

        func (pq *PQueue) Pop() interface{} {
            pq.size--
            return heap.Pop(pq)
        }

        func (pq *PQueue) Len() int { return pq.size }

        func (pq *PQueue) Less(i, j int) bool {
            I := pq.data.At(i).(Node)
            J := pq.data.At(j).(Node)
            return (I.sumVal + I.myVal) < (J.sumVal + J.myVal)
        }

        func (pq *PQueue) Swap(i, j int) {
            temp := pq.data.At(i).(Node)
            pq.data.Set(i, pq.data.At(j).(Node))
            pq.data.Set(j, temp)
        }

    And main.go (the action is in SolveMatrix):

        // Euler 81
        package main

        import "fmt"
        import "io/ioutil"
        import "strings"
        import "strconv"
        import "./pqueue"

        const MATSIZE = 5
        const MATNAME = "matrix_small.txt"

        func main() {
            var matrix [MATSIZE][MATSIZE]int
            contents, err := ioutil.ReadFile(MATNAME)
            if err != nil {
                panic("FILE IO ERROR!")
            }
            inFileStr := string(contents)
            byrows := strings.Split(inFileStr, "\n", -1)
            for row := 0; row < MATSIZE; row++ {
                byrows[row] = (byrows[row])[0 : len(byrows[row])-1]
                bycols := strings.Split(byrows[row], ",", -1)
                for col := 0; col < MATSIZE; col++ {
                    matrix[row][col], _ = strconv.Atoi(bycols[col])
                }
            }
            PrintMatrix(matrix)
            sum, len := SolveMatrix(matrix)
            fmt.Printf("len: %d, sum: %d\n", len, sum)
        }

        func PrintMatrix(mat [MATSIZE][MATSIZE]int) {
            for r := 0; r < MATSIZE; r++ {
                for c := 0; c < MATSIZE; c++ {
                    fmt.Printf("%d ", mat[r][c])
                }
                fmt.Print("\n")
            }
        }

        func SolveMatrix(mat [MATSIZE][MATSIZE]int) (int, int) {
            var PQ pqueue.PQueue
            var firstNode pqueue.Node
            var endNode pqueue.Node
            msm1 := MATSIZE - 1

            firstNode.Init(0, 0, mat[0][0], 0)
            endNode.Init(msm1, msm1, mat[msm1][msm1], 0)

            if PQ.IsEmpty() { // make compiler stfu about unused variable
                fmt.Print("empty")
            }

            PQ.Push(firstNode) // problem
            return 0, 0
        }

    The problem is, upon compiling I get the error message:

        [~/Code/Euler/81] $ make
        6g -o pqueue.6 Node.go PQueue.go
        6g main.go
        main.go:58: implicit assignment of unexported field 'row' of pqueue.Node in function argument
        make: *** [all] Error 1

    And commenting out the line PQ.Push(firstNode) does satisfy the compiler. But I don't understand why I'm getting the error message in the first place. Push doesn't modify the argument in any way.

    Read the article

  • Using Interop.Word, is there a way to do a replace (using Find.Execute) and keep the original text's

    - by AJ
    I'm attempting to write find/replace code for Word documents using Word Automation through Interop.Word (11.0). My documents all have various fields (that don't show up in Document.Fields) that are surrounded with brackets, e.g., <DATE> needs to be replaced with DateTime.Now.Format("MM/dd/yyyy"). The find/replace works fine. However, some of the text being replaced is right-justified, and upon replacement, the text wraps to the next line. Is there any way that I can keep the justification when I perform the replace? Code is below:

        using Word = Microsoft.Office.Interop.Word;

        Word.Application wordApp = null;
        try
        {
            wordApp = new Word.Application {Visible = false};
            //.... open the document ....
            object unitsStory = Word.WdUnits.wdStory;
            object moveType = Word.WdMovementType.wdMove;
            wordApp.Selection.HomeKey(ref unitsStory, ref moveType);
            wordApp.Selection.Find.ClearFormatting();
            wordApp.Selection.Find.Replacement.ClearFormatting(); //tried removing this, no luck
            object replaceTextWith = DateTime.Now.ToString("MM/dd/yyyy");
            object textToReplace = "<DATE>";
            object replaceAll = Word.WdReplace.wdReplaceAll;
            object typeMissing = System.Reflection.Missing.Value;
            wordApp.Selection.Find.Execute(ref textToReplace, ref typeMissing, ref typeMissing,
                ref typeMissing, ref typeMissing, ref typeMissing, ref typeMissing, ref typeMissing,
                ref typeMissing, ref typeMissing, ref replaceTextWith, ref replaceAll,
                ref typeMissing, ref typeMissing, ref typeMissing, ref typeMissing);
            // ... save, quit, etc....
        }
        finally
        {
            //clean up wordApp
        }

    TIA.

    Read the article

  • How do you keep application logic separate from UI when UI components have built-in functionality?

    - by Al C
    I know it's important to keep user interface code separated from domain code--the application is easier to understand, maintain, change, and (sometimes) isolate bugs. But here's my mental block ... Delphi comes with components with methods that do what I want, e.g., a RichText Memo component lets me work with rich text. Other components, like TMS's string grid not only do what I want, but I paid extra for the functionality. These features put the R in RAD. It seems illogical to write my own classes to do things somebody else has already done for me. It's reinventing the wheel [ever tried working directly with rich text? :-) ] But if I use the functionality built into components like these, then I will end up with lots of intermingled UI and domain code--I'll have a form with most of my code built into its event handlers. How do you deal with this issue? ... Or, if I want to continue using the code others have already written for me, how would you suggest I deal with the issue?

    Read the article

  • For business people to manage, keep binary images in MySQL or just the urls?

    - by Michael Mao
    Hello everyone: I am working on a task to enable image uploading and auto-scaling (from full-sized to thumbnail) with jQuery & PHP. I can naturally come up with two approaches: first, store both images as binary objects directly in MySQL; second, store only URLs to the images and keep the images somewhere on the server. The images are for everyone to view, so there are no security restrictions, as far as I know. Personally I don't have any preference; however, at the end of the day, it is the business people that are going to manage the images as part of the system (CRUD). So I am wondering which approach seems to be a bit better for them? Of course I am building an easy-to-use, visual web interface for the staff to control the process, but I am not sure if that is enough. Lessons have taught me that if I don't think about the future and seek the most flexible approach, then I will probably screw myself sooner or later. PS. The following link is what I've found so far, which is pretty cool, no Flash involved :) Andrew Valum's ajax image upload jQuery plugin

    Read the article

  • cURL + HTTP_POST, keep getting 500 error. Has no idea?

    - by mysqllearner
    Okay, I want to make an HTTP POST using cURL to an SSL site. I already imported the certificate to my server. This is my code:

        $url = "https://www.xxx.xxx";
        $post = ""; # all data that is going to be sent
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0');
        $exe = curl_exec($ch);
        $getInfo = curl_getinfo($ch);
        if ($exe === false) {
            $output = "Error in sending";
            if (curl_error($ch)) {
                $output .= "\n" . curl_error($ch);
            }
        } else if ($geInfo['http_code'] != 777) {
            $output = "No data returned. Error: " . $geInfo['http_code'];
            if (curl_error($ch)) {
                $output .= "\n" . curl_error($ch);
            }
        }
        curl_close($c);
        echo $output;

    It keeps returning "500". Based on w3schools, 500 means Internal Server Error. Is my server having a problem? How do I solve/troubleshoot this?

    Read the article

  • in Rails, with check_box_tag, how do I keep the checkboxes checked after submitting query?

    - by Sebastien Paquet
    Ok, I know this is for the SaaS course and people have been asking questions related to that as well, but I've spent a lot of time trying and reading and I'm stuck. First of all, when you have a model called Movie, is it better to use Ratings as a model and associate them, or just keep Ratings in an array floating in space(!)? Second, here's what I have now in my controller:

        def index
          @movies = Movie.where(params[:ratings].present? ? {:rating => (params[:ratings].keys)} : {}).order(params[:sort])
          @sort = params[:sort]
          @ratings = Ratings.all
        end

    Now, I decided to create a Ratings model since I thought it would be better. Here's my view:

        = form_tag movies_path, :method => :get do
          Include:
          - @ratings.each do |rating|
            = rating.rating
            = check_box_tag "ratings[#{rating.rating}]"
          = submit_tag "Refresh"

    I tried everything related to using a ternary conditional inside the check_box_tag, ending with ".include?(rating) ? true : """. I tried everything that's supposed to work, but it doesn't. I don't want the exact answer, I just need guidance. Thanks in advance!

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.
    Configuring the Data Nodes
    The configuration of data node threads can be managed in two ways via the config.ini file:
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to.
    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 - 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets of dense multi-core processors.
    24 Threads and Beyond!
    An example of how to make best use of a 24 CPU/thread server box is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication
    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node.
    On a 32 CPU/Thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient
    On a 40 CPU/Thread server, increase ldm threads to 16, tc threads to 8, and increment the send and receive threads to 4.
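    Pausing on the 24 CPU/thread example above: expressed in config.ini, it might look roughly like the sketch below. Treat this as an illustration only - the CPU numbers are arbitrary, and the exact ThreadConfig syntax and thread-type names should be checked against the MySQL Cluster documentation for your version:

        [ndbd default]
        # Option 1: let the data node distribute thread types itself
        MaxNoOfExecutionThreads=24

        # Option 2 (instead of the above): explicit per-type counts and CPU binding
        # ThreadConfig=ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},recv={count=3,cpubind=12,13,14},send={count=3,cpubind=15,16,17},rep={count=1,cpubind=18},main={cpubind=19},io={cpubind=19}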
    On a 48 CPU/Thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoiding the use of IO threads and the main thread on the same CPU
    - Adding 1 more receive thread
    Summary
    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.

    Read the article

  • Why doesn't my UIViewController class keep track of an NSArray instance variable.

    - by TaoStoner
    Hey, I am new to Objective-C 2.0 and Xcode, so forgive me if I am missing something elementary here. Anyways, I am trying to make my own UIViewController subclass called GameView to display a new view. To work the game I need to keep track of an NSArray that I want to load from a plist file. I have made a method 'loadDefault' which I want to load the correct NSArray into an instance variable. However, it appears that after the method executes, the instance variable loses track of the array. It's easier if I just show you the code...

        @interface GameView : UIViewController {
            IBOutlet UIView *view;
            IBOutlet UILabel *label;
            NSArray *currentGame;
        }
        -(IBOutlet)next;
        -(void)loadDefault;
        ...

        @implementation GameView

        - (IBOutlet)next {
            int numElements = [currentGame count];
            int r = rand() % numElements;
            NSString *myString = [currentGame objectAtIndex:(NSUInteger)r];
            [label setText: myString];
        }

        - (void)loadDefault {
            NSDictionary *games;
            NSString *path = [[NSBundle mainBundle] bundlePath];
            NSString *finalPath = [path stringByAppendingPathComponent:@"Games.plist"];
            games = [NSDictionary dictionaryWithContentsOfFile:finalPath];
            currentGame = [games objectForKey:@"Default"];
        }

    When loadDefault gets called, everything runs perfectly, but when I try to use the currentGame NSArray later in the call to next, currentGame appears to be nil. I am also aware of the memory management issues with this code. Any help would be appreciated with this problem.

    Read the article

  • Why do people keep parsing HTML using regex? [closed]

    - by polygenelubricants
    As much as I love regular expressions, it's obvious to me that they're not the best tool for parsing HTML, especially given the numerous good HTML parsers out there. And yet there are numerous questions on stackoverflow that attempt to parse HTML using regex. And people always point out in the comments what a bad idea that is. And the accepted answer often has a disclaimer about how this isn't really the ideal way of doing things. But based on the constant flow of questions, it still seems that people keep parsing HTML using regex, despite the perceived difficulty in reading and maintaining it (and that's putting correctness aside for now). So my question is: why? Is it because it's easy to learn? Is it because it's faster? Is it because it's the industry standard? Is it because there are already so many reusable regexes to build from? Is it because 100% correctness is never really the objective? (90% good enough?) etc... I'd also like to hear from the downvoters why they did so. Is it because:
    - There's absolutely nothing wrong with using regex to parse HTML, and asking "Why?" is just dumb?
    - The premise of the question is flawed because the people who are using regex to parse HTML are such a small minority?

    Read the article

  • Is it good to keep &nbsp; inside blank <td>?

    - by jitendra
    If this is the structure:

        <table cellspacing="0" cellpadding="0">
          <tr>
            <td>I don't need anything here - so should I always keep &nbsp; here?</td>
            <td>item</td>
          </tr>
          <tr>
            <td>model</td>
            <td>one</td>
          </tr>
          <tr>
            <td>model</td>
            <td>two</td>
          </tr>
          <tr>
            <td>model</td>
            <td>three</td>
          </tr>
        </table>

    How will a screen reader read a blank <td>? Is it semantically correct?

    Read the article

  • C# .NET Why does my inherited listview keep drawing in LargeIcon View ?? Because Microsoft is Evil!!

    - by Bugz R us
    I have an inherited ListView which, as standard, has to be in Tile mode. When using this control, DrawItem gives e.Bounds which are clearly the bounds of LargeIcon view?? When debugging to check the view it is actually set to, it says it's in Tile view?? Yet e.DrawText draws in LargeIcon view??
    Edit: This seems only to happen when the control is placed upon another UserControl.
    Edit 2: It gets stranger... When I add buttons next to the list to change the view at runtime, "Tile" is the same as "LargeIcon", and "List" view is the same as "SmallIcon"??? I've also completely removed the owner draw...
    Edit 3: MSDN documentation on Tile view: "Each item appears as a full-sized icon with the item label and subitem information to the right of it. The subitem information that appears is specified by the application. This view is available only on Windows XP and the Windows Server 2003 family. On earlier operating systems, this value is ignored and the ListView control displays in the LargeIcon view." Well, I am on XP, ya damn liars?!? Apparently if the control is within a UserControl, this value is ignored too... pff, I'm getting enough of this Microsoft crap... you just keep on hitting bugs... another day down the drain...

        public class InheritedListView : ListView
        {
            //Hiding members ... mwuahahahahaha
            //yeah i was still laughing then
            [BrowsableAttribute(false)]
            public new View View
            {
                get { return base.View; }
            }

            public InheritedListView()
            {
                base.View = View.Tile;
                this.OwnerDraw = true;
                base.DrawItem += new DrawListViewItemEventHandler(DualLineGrid_DrawItem);
            }

            void DualLineGrid_DrawItem(object sender, DrawListViewItemEventArgs e)
            {
                View v = this.View;
                // **When debugging, v is Tile; however e.DrawText() draws in LargeIcon mode,
                // and e.Bounds also reflects LargeIcon mode????**
            }
        }

    Read the article

  • Please help, I keep getting a "no match for call to" error

    - by Timothy Poseley
        #include <iostream>
        #include <string>
        using namespace std;

        // Turns a digit between 1 and 9 into its English name
        // Turn a number into its English name
        string int_name(int n)
        {
            string digit_name;
            {
                if (n == 1) return "one";
                else if (n == 2) return "two";
                else if (n == 3) return "three";
                else if (n == 4) return "four";
                else if (n == 5) return "five";
                else if (n == 6) return "six";
                else if (n == 7) return "seven";
                else if (n == 8) return "eight";
                else if (n == 9) return "nine";
                return "";
            }

            string teen_name;
            {
                if (n == 10) return "ten";
                else if (n == 11) return "eleven";
                else if (n == 12) return "twelve";
                else if (n == 13) return "thirteen";
                else if (n == 14) return "fourteen";
                else if (n == 15) return "fifteen";
                else if (n == 16) return "sixteen";
                else if (n == 17) return "seventeen";
                else if (n == 18) return "eighteen";
                else if (n == 19) return "nineteen";
                return "";
            }

            string tens_name;
            {
                if (n == 2) return "twenty";
                else if (n == 3) return "thirty";
                else if (n == 4) return "forty";
                else if (n == 5) return "fifty";
                else if (n == 6) return "sixty";
                else if (n == 7) return "seventy";
                else if (n == 8) return "eighty";
                else if (n == 9) return "ninety";
                return "";
            }

            int c = n;  // the part that still needs to be converted
            string r;   // the return value

            if (c >= 1000) { r = int_name(c / 1000) + " thousand"; c = c % 1000; }
            if (c >= 100)  { r = r + " " + digit_name(c / 100) + " hundred"; c = c % 100; }
            if (c >= 20)   { r = r + " " + tens_name(c / 10); c = c % 10; }
            if (c >= 10)   { r = r + " " + teen_name(c); c = 0; }
            if (c > 0)     r = r + " " + digit_name(c);
            return r;
        }

        int main()
        {
            int n;
            cout << endl << endl;
            cout << "Please enter a positive integer: ";
            cin >> n;
            cout << endl;
            cout << int_name(n);
            cout << endl << endl;
            return 0;
        }

    I keep getting this error:

        intname2.cpp: In function ‘std::string int_name(int)’:
        intname2.cpp:74: error: no match for call to ‘(std::string) (int)’
        intname2.cpp:80: error: no match for call to ‘(std::string) (int)’
        intname2.cpp:86: error: no match for call to ‘(std::string) (int&)’
        intname2.cpp:91: error: no match for call to ‘(std::string) (int&)’
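
    For context: digit_name, teen_name, and tens_name are declared as std::string variables, so an expression like digit_name(c / 100) tries to call a string object as a function; that is exactly what "no match for call to ‘(std::string) (int)’" means. A minimal sketch of the usual fix, turning each lookup into a real function (the table-based bodies are one possible rewrite, not the only one):

        #include <string>

        // A call such as digit_name(c / 100) now resolves to a function,
        // not to an attempt to invoke a std::string object.
        std::string digit_name(int n)
        {
            static const char* names[] = { "", "one", "two", "three", "four",
                                           "five", "six", "seven", "eight", "nine" };
            return (n >= 1 && n <= 9) ? names[n] : "";
        }

        std::string teen_name(int n)
        {
            static const char* names[] = { "ten", "eleven", "twelve", "thirteen",
                                           "fourteen", "fifteen", "sixteen",
                                           "seventeen", "eighteen", "nineteen" };
            return (n >= 10 && n <= 19) ? names[n - 10] : "";
        }

        std::string tens_name(int n)
        {
            static const char* names[] = { "", "", "twenty", "thirty", "forty", "fifty",
                                           "sixty", "seventy", "eighty", "ninety" };
            return (n >= 2 && n <= 9) ? names[n] : "";
        }

    With these defined at file scope, int_name can keep its existing arithmetic unchanged.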

    Read the article

  • How can I write JavaScript cookies to keep my form data persistent across page reloads?

    - by Johhny Thero
    Hello, I am trying to learn how to write cookies so that the data behind my CookieButton1 button survives refreshes and page reloads. How can I do this in JavaScript? I have supplied my source code; any advice, links, or tutorials would be very helpful. If you navigate to http://iqlusion.net/test.html and click on Empty1, it will start to ask you questions; when finished, it stores everything into CookieButton1. But when I refresh my browser, the data resets and goes away. Thanks!

        <html>
        <head>
        <title>no_cookies</title>
        </head>
        <script type="text/javascript">
        var Can1Set = "false";

        function Can1() {
          if (Can1Set == "false") {
            Can1Title = prompt("What do you want to name this new canned response?", "");
            Can1State = prompt("Enter a ticket state (open or closed)", "closed");
            Can1Response = prompt("Enter the canned response:", "");
            Can1Points = prompt("What point percentage do you want to assign? (0-10)", "2.5");

            // Set the "Empty 1" button text to the new name the user specified
            document.CookieTest.CookieButton1.value = Can1Title;

            // Set the cookie here, and then set the Can1Set variable to true
            document.CookieTest.CookieButton1 = "CookieButton1";
            Can1Set = true;
          } else {
            document.TestForm.TestStateDropDownBox.value = Can1State;
            document.TestForm.TestPointsDropDownBox.value = Can1Points;
            document.TestForm.TestTextArea.value = Can1Response;
            // document.TestForm.submit();
          }
        }
        </script>

        <form name=TestForm>
          State:
          <select name=TestStateDropDownBox>
            <option value=new selected>New</option>
            <option value=open>Open</option>
            <option value=closed>Closed</option>
          </select>
          Points:
          <select name=TestPointsDropDownBox>
            <option value=1>1</option> <option value=1.5>1.5</option>
            <option value=2>2</option> <option value=2.5>2.5</option>
            <option value=3>3</option> <option value=3.5>3.5</option>
            <option value=4>4</option> <option value=4.5>4.5</option>
            <option value=5>5</option> <option value=5.5>5.5</option>
            <option value=6>6</option> <option value=6.5>6.5</option>
            <option value=7>7</option> <option value=7.5>7.5</option>
            <option value=8>8</option> <option value=8.5>8.5</option>
            <option value=9>9</option> <option value=9.5>9.5</option>
            <option value=10>10</option>
          </select>
          <p>
          Ticket information:<br>
          <textarea name=TestTextArea cols=50 rows=7></textarea>
        </form>

        <form name=CookieTest>
          <input type=button name=CookieButton1 value="Empty 1" onClick="javascript:Can1()">
        </form>
        </html>
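
    For context: the snippet above never writes a cookie; the line document.CookieTest.CookieButton1 = "CookieButton1" merely overwrites the script's reference to the button. A minimal sketch of persisting one value with the standard document.cookie API (the cookie name canResponse1 is an illustrative assumption):

        // Store a value for the given number of days.
        function setCookie(name, value, days) {
          var expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000).toUTCString();
          document.cookie = name + "=" + encodeURIComponent(value) +
                            "; expires=" + expires + "; path=/";
        }

        // Read a value back, or null if it was never set.
        function getCookie(name) {
          var match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
          return match ? decodeURIComponent(match[1]) : null;
        }

        // Usage: persist the button label, then restore it after a reload.
        setCookie("canResponse1", "My canned reply", 30);
        var saved = getCookie("canResponse1");
        if (saved) { document.CookieTest.CookieButton1.value = saved; }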

    Read the article
