Search Results

Search found 1314 results on 53 pages for 'prepare'.


  • eSTEP Newsletter November 2012

    - by uwes
    Dear Partners,
    We would like to inform you that the November '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Celebrates 25 Years of SPARC Innovation; IDC White Paper Finds Growing Customer Comfort with Oracle Solaris Operating System; Oracle Buys Instantis; Pillar Axiom OpenWorld Highlights; Announcement: Oracle Solaris 11.1 Availability (data sheet, new features, FAQs, corporate pages, internal blog, download links, Oracle shop); Announcing StorageTek VSM 6; Announcement: Oracle Solaris Cluster 4.1 Availability (new features, FAQs, cluster corp page, download site, shop for media); Announcement: Oracle Database Appliance 2.4 patch update becomes available.

    Technical Section: Oracle white papers on SPARC SuperCluster; Understanding Parallel Execution; With LTFS, Tape is Gaining Storage Ground, with an additional link to How to Create Oracle Solaris 11 Zones with Oracle Enterprise Manager Ops Center; Provisioning Capabilities of Oracle Enterprise Manager Ops Center 12c; Maximizing your SPARC T4 Oracle Solaris Application Performance, with the following articles: SPARC T4 Servers Set World Record on Siebel CRM 8.1.1.4 Benchmark, SPARC T4-Based Highly Scalable Solutions Posts New World Record on SPECjEnterprise2010 Benchmark, SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; Oracle Sun ZFS Storage Appliance Reference Architecture for VMware vSphere 4; Why 4K? - George Wilson's ZFS Day Talk; Pillar Axiom 600, with connected subjects: Oracle Introduces Pillar Axiom Release 5 Storage System Software, Driving Down the High Cost of Storage, Thin Provisioning with Pillar Axiom 600, Pillar Axiom 600 - System Overview and Architecture; Migrate to Oracle's SPARC Systems; Top 5 Reasons to Migrate to Oracle's SPARC Systems.

    Learning & Events: Recently delivered Techcasts: Learning Paths; Oracle Database 11g: Database Administration (New) - Learning Path; Webcast: Drill Down on Disaster Recovery; What Are Oracle Users Doing to Improve Availability and Disaster Recovery; SAP NetWeaver and Oracle Exadata Database Machine.

    References: ARTstor Selects Oracle's Sun ZFS Storage 7420 Appliances to Support Rapidly Growing Digital Image Library; Scottish Widows Cuts Sales Administration 20%, Reduces Time to Prepare Reports by 75%, and Achieves Return on Investment in First Year; Oracle's CRM Cloud Service Powers Innovation: Applications on Demand, Technology on Demand.

    How to: How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; Using svcbundle to Create SMF Manifests and Profiles in Oracle Solaris 11; How to Prepare a Sun ZFS Storage Appliance to Serve as a Storage Device with Oracle Enterprise Manager Ops Center 12c; Command Summary: Basic Operations with the Image Packaging System in Oracle Solaris 11; How to Update to Oracle Solaris 11.1 Using the Image Packaging System; How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11; Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; Ease the Chaos with Automated Patching: Oracle Enterprise Manager Cloud Control 12c; Book excerpt: Oracle Exalogic Elastic Cloud Handbook.

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then working out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

    Imports System
    Imports Microsoft.SqlServer.Dts.Runtime

    Public Class ScriptMain
        Public Sub Main()
            Dim fireAgain As Boolean

            ' Get the connection string; we need to know the name of the connection
            Dim connectionName As String = "My OLE-DB Connection"
            Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

            ' Format the message and log it via an information event
            Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                connectionName, connectionString)
            Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

            Dts.TaskResult = Dts.Results.Success
        End Sub
    End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

    Imports System
    Imports Microsoft.SqlServer.Dts.Runtime

    Public Class ScriptMain
        Public Sub Main()
            Dim fireAgain As Boolean

            ' Loop through all connections in the package
            For Each connection As ConnectionManager In Dts.Connections
                ' Get the connection string and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connection.Name, connection.ConnectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
            Next

            Dts.TaskResult = Dts.Results.Success
        End Sub
    End Class

    By using the Information event, the output is readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to control the logging by choosing which events to log in the normal way. Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only; secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry you could just turn on that did this for you, but alas, connection managers do not even seem to support custom events. It did, however, remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference:

    Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

    It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being generated? You need to turn it on; by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • Conference networking for the socially awkward

    - by Melanie Townsend
    Do you approach a room full of strangers with excitement at all the new people you’re going to chat to over coffee and a muffin as you swap tales of how you convinced your manager to give you the day “off”? Or, do you find rooms full of strangers intimidating and begin by scouting out a place you can stand quietly and not be in someone’s way until the next session begins? If you’re on the train to extrovert city, that’s great, well done, move along. If, on the other hand, a room full of strangers who all seem to inexplicably know each other already is more challenge than opportunity, then making those connections with other professionals can be more difficult. So, here’s some advice, some gleaned from other things I’ve read online when trying to overcome my own discomfort in large groups (hopefully minus the infuriating condescension), others are just things I’ve found helpful over the years. Start small Smaller groups are less intimidating, and, now that you’ve taken the plunge to show up, it’s harder to remain inconspicuous. I find it’s easier to speak to new people once the option NOT to has been taken away. You’re there now, smile through the awkward and you’ll be forever grateful when the three people you’ve met and gotten to know here are also at that gigantic conference later on (ideally, introducing you to other people). Smile, or at the very least, stop scowling You probably don’t even know you’re doing it. If your resting face doesn’t come across as manically happy, tinge that with some social anxiety and you become one great ball of unapproachable. Normally, I wouldn’t suggest this as a problem that needs fixing, I have personally honed this face to use while travelling alone all the time. However, if you are indeed hoping to meet some useful people and get the most out of this conference, you may need to remind yourself to smile. Prepare some ice breakers This is going to sound stupid, like “no one does this right?” stupid, but, just, trust me a minute. It’s okay to prepare. You don’t need to write word-for-word questions to ask people and practice them in a mirror – that would be strange. I’m suggesting to just have an arsenal of questions to ask people if you get stuck, what session has been your favorite, which ones are you most looking forward to, have you heard X presenter speak before, what did you think of them? Even just thinking about these things in advance can help, and, as a bonus, while the other person is answering it gives you a moment to tamp down that panic, I mean breathe, I mean get to know them. You’re not alone (in the least creepy way possible) See that person in the corner clutching their phone with a mild deer in the headlights look?  That is potentially your new conference buddy. Starting with something along the lines of: I don’t know about you, the sessions here are great but I find the crowds a little tough to deal with. Mind if I park here for a second? is a decent opener. Just walking around and looking at exhibitors (if applicable) is fine, but it’s a little too easy to wander about and not actually speak to anyone if that’s all you’re doing. If joining a group of people talking is too much to start with, one-on-one can be easier. Have goals Are there people in particular you wanted to speak to? Did you have a personal goal of speaking to at least “x” new people? Are you trying to get a contact in a specific company because you want to work with them on something? 
    Does the business have vague goals as well that you may or may not be judged on later? Making specific goals you can accomplish lets you know whether you've actually succeeded in your "networking pursuits" or what you need to work on more for next time. Everyone's got their own coping technique. Some people are able to remind themselves that "humans are fundamentally social creatures" and somehow that helps them; others drink, which is not really something I recommend for professional conferences, but to each their own; and some focus on the fact that networking can play a big role in their career path. Just do what works for you, and if there are any tricks you've found helpful over the years, please share 'em.

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens

    - by Thomas
    Hi, I'm working with PHP PDO and I have the following problem:

    Warning: PDOStatement::execute(): SQLSTATE[HY093]: Invalid parameter number: number of bound variables does not match number of tokens in /var/www/site/classes/enterprise.php on line 63

    Here is my code:

    public function getCompaniesByCity(City $city, $options = null) {
        $database = Connection::getConnection();

        if (empty($options)) {
            $statement = $database->prepare("SELECT * FROM empresas WHERE empresas.cidades_codigo = ?");
            $statement->bindValue(1, $city->getId());
        } else {
            $sql = "SELECT * FROM empresas INNER JOIN prods_empresas ON prods_empresas.empresas_codigo = empresas.codigo WHERE ";
            foreach ($options as $option) {
                $sql .= 'prods_empresas.produtos_codigo = ? OR ';
            }
            $sql = substr($sql, 0, -4);
            $sql .= ' AND empresas.cidades_codigo = ?';

            $statement = $database->prepare($sql);
            echo $sql;

            foreach ($options as $i => $option) {
                $statement->bindValue($i + 1, $option->getId());
            }
            $statement->bindValue(count($options), $city->getId());
        }

        $statement->execute();
        $objects = $statement->fetchAll(PDO::FETCH_OBJ);

        $companies = array();
        if (!empty($objects)) {
            foreach ($objects as $object) {
                $data = array(
                    'id' => $object->codigo,
                    'name' => $object->nome,
                    'link' => $object->link,
                    'email' => $object->email,
                    'details' => $object->detalhes,
                    'logo' => $object->logo
                );
                $enterprise = new Enterprise($data);
                array_push($companies, $enterprise);
            }
            return $companies;
        }
    }

    Thank you very much!
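    A hedged reading from editing (not part of the original question): the loop already binds positions 1 through count($options), so the final bindValue() call reuses position count($options) instead of taking the next slot, leaving the last ? token unbound - which matches the HY093 warning. A minimal sketch of the corrected binding, assuming the rest of the method stays the same:

    // Bind one value per produtos_codigo placeholder...
    foreach ($options as $i => $option) {
        $statement->bindValue($i + 1, $option->getId());
    }
    // ...then bind the city to the placeholder AFTER them (position n+1)
    $statement->bindValue(count($options) + 1, $city->getId());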

    Read the article

  • Problems with MediaRecorder class setting audio source - setAudioSource() - unsupported parameter

    - by arakn0
    Hello everybody, I'm new to Android development and I have the following question/problem. I'm playing around with the MediaRecorder class to record just audio from the microphone. I'm following the steps indicated on the official site: http://developer.android.com/reference/android/media/MediaRecorder.html So I have a method that initializes and configures the MediaRecorder object in order to start recording. Here you have the code:

    this.mr = new MediaRecorder();
    this.mr.setAudioSource(MediaRecorder.AudioSource.MIC);
    this.mr.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    this.mr.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    this.mr.setOutputFile(this.path + this.fileName);
    try {
        this.mr.prepare();
    } catch (IllegalStateException e) {
        Log.d("Syso", e.toString());
        e.printStackTrace();
    } catch (IOException e) {
        Log.d("Syso", e.toString());
        e.printStackTrace();
    }

    When I execute this code in the simulator, thanks to logcat, I can see that the method setAudioSource(MediaRecorder.AudioSource.MIC) gives the following error (with the tag audio_input) when it is called:

    ERROR/audio_input(34): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
    ERROR/audio_input(34): VerifyAndSetParameter failed

    And then, when the method prepare() is called, I get another error:

    ERROR/PVOMXEncNode(34): PVMFOMXEncNode-Audio_AMRNB::DoPrepare(): Got Component OMX.PV.amrencnb handle

    If I start to record by calling the method start()... I get lots of messages saying:

    AudioFlinger(34): RecordThread: buffer overflow

    Then, after stop and release, I can see that a file has been created, but it doesn't seem to have been recorded properly. Anyway, if I try this on a real device I can record with no problems, but I CAN'T play what I just recorded. I guess that the key is in these errors that I've mentioned before. How can I fix them? Any suggestion or help? Thanks in advance!!

    Read the article

  • Shouldn't prepared statements be much faster?

    - by silversky
    $s = explode(" ", microtime());
    $s = $s[0] + $s[1];

    $con = mysqli_connect('localhost', 'test', 'pass', 'db') or die('Err');

    for ($i = 0; $i < 1000; $i++) {
        $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
        $stmt->execute();
        $stmt->bind_result($M, $m);
        $stmt->free_result();

        $rand = mt_rand($m, $M) . '<br/>';

        $res = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
        $res->bind_param("s", $rand);
        $res->execute();
        $res->free_result();
    }

    $e = explode(" ", microtime());
    $e = $e[0] + $e[1];
    echo number_format($e - $s, 4, '.', '');

    // and:

    $link = mysql_connect("localhost", "test", "pass") or die();
    mysql_select_db("db") or die("Unable to select database" . mysql_error());

    for ($i = 0; $i < 1000; $i++) {
        $range_result = mysql_query("SELECT MAX(`id`) AS max_id, MIN(`id`) AS min_id FROM tb");
        $range_row = mysql_fetch_object($range_result);
        $random = mt_rand($range_row->min_id, $range_row->max_id);
        $result = mysql_query("SELECT * FROM tb WHERE id >= $random LIMIT 0,1");
    }

    Definitely prepared statements are much safer, but everywhere it also says that they are much faster. BUT in my test of the above code I have:

    - 2.45 sec for prepared statements
    - 5.05 sec for the second example

    What do you think I'm doing wrong? Should I use the second solution, or should I try to optimize the prepared statements?
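    A hedged aside from editing (not in the original post): what usually dominates such a micro-benchmark is re-preparing both statements on every iteration. Preparing once outside the loop and only re-executing is the pattern prepared statements are designed for; a minimal sketch, assuming the same table and connection (note mysqli's bind_param binds by reference, so $rand can be rebound once and just updated):

    $stmt = $con->prepare("SELECT MAX(id) AS max_id, MIN(id) AS min_id FROM tb");
    $res  = $con->prepare("SELECT * FROM tb WHERE id >= ? LIMIT 0,1");
    $res->bind_param("i", $rand); // bound by reference, once

    for ($i = 0; $i < 1000; $i++) {
        // Re-execute the already-prepared statement; no parse cost per iteration
        $stmt->execute();
        $stmt->bind_result($M, $m);
        $stmt->fetch();          // actually populate $M and $m
        $stmt->free_result();

        $rand = mt_rand($m, $M); // the reference picks up the new value
        $res->execute();
        $res->free_result();
    }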

    Read the article

  • Modify code to change timestamp timezone in sitemap

    - by Aahan Krish
    Below is the code from a plugin I use for sitemaps. I would like to know if there's a way to "enforce" a different timezone on all date variables and functions in it. If not, how do I modify the code to change the timestamp to reflect a different timezone? Say, for example, the America/New_York timezone. Please look for "date" to quickly get to the appropriate code blocks. Code on Pastebin.

    <?php
    /**
     * @package XML_Sitemaps
     */
    class WPSEO_Sitemaps {
    .....

        /**
         * Build the root sitemap -- example.com/sitemap_index.xml -- which lists sub-sitemaps
         * for other content types.
         *
         * @todo lastmod for sitemaps?
         */
        function build_root_map() {
            global $wpdb;
            $options = get_wpseo_options();

            $this->sitemap = '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
            $base = $GLOBALS['wp_rewrite']->using_index_permalinks() ? 'index.php/' : '';

            // reference post type specific sitemaps
            foreach ( get_post_types( array( 'public' => true ) ) as $post_type ) {
                if ( $post_type == 'attachment' )
                    continue;
                if ( isset( $options['post_types-' . $post_type . '-not_in_sitemap'] ) && $options['post_types-' . $post_type . '-not_in_sitemap'] )
                    continue;

                $count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(ID) FROM $wpdb->posts WHERE post_type = %s AND post_status = 'publish' LIMIT 1", $post_type ) );
                // don't include post types with no posts
                if ( !$count )
                    continue;

                $n = ( $count > 1000 ) ? (int) ceil( $count / 1000 ) : 1;
                for ( $i = 0; $i < $n; $i++ ) {
                    $count = ( $n > 1 ) ? $i + 1 : '';
                    if ( empty( $count ) || $count == $n ) {
                        $date = $this->get_last_modified( $post_type );
                    } else {
                        $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt ASC LIMIT 1 OFFSET %d", $post_type, $i * 1000 + 999 ) );
                        $date = date( 'c', strtotime( $date ) );
                    }
                    $this->sitemap .= '<sitemap>' . "\n";
                    $this->sitemap .= '<loc>' . home_url( $base . $post_type . '-sitemap' . $count . '.xml' ) . '</loc>' . "\n";
                    $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n";
                    $this->sitemap .= '</sitemap>' . "\n";
                }
            }

            // reference taxonomy specific sitemaps
            foreach ( get_taxonomies( array( 'public' => true ) ) as $tax ) {
                if ( in_array( $tax, array( 'link_category', 'nav_menu', 'post_format' ) ) )
                    continue;
                if ( isset( $options['taxonomies-' . $tax . '-not_in_sitemap'] ) && $options['taxonomies-' . $tax . '-not_in_sitemap'] )
                    continue;
                // don't include taxonomies with no terms
                if ( !$wpdb->get_var( $wpdb->prepare( "SELECT term_id FROM $wpdb->term_taxonomy WHERE taxonomy = %s AND count != 0 LIMIT 1", $tax ) ) )
                    continue;

                // Retrieve the post_types that are registered to this taxonomy and then retrieve last modified date for all of those combined.
                $taxobj = get_taxonomy( $tax );
                $date = $this->get_last_modified( $taxobj->object_type );

                $this->sitemap .= '<sitemap>' . "\n";
                $this->sitemap .= '<loc>' . home_url( $base . $tax . '-sitemap.xml' ) . '</loc>' . "\n";
                $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n";
                $this->sitemap .= '</sitemap>' . "\n";
            }

            // allow other plugins to add their sitemaps to the index
            $this->sitemap .= apply_filters( 'wpseo_sitemap_index', '' );
            $this->sitemap .= '</sitemapindex>';
        }

        /**
         * Build a sub-sitemap for a specific post type -- example.com/post_type-sitemap.xml
         *
         * @param string $post_type Registered post type's slug
         */
        function build_post_type_map( $post_type ) {
            $options = get_wpseo_options();
            ............

            // We grab post_date, post_name, post_author and post_status too so we can throw these objects into get_permalink, which saves a get_post call for each permalink.
            while ( $total > $offset ) {
                $join_filter = apply_filters( 'wpseo_posts_join', '', $post_type );
                $where_filter = apply_filters( 'wpseo_posts_where', '', $post_type );

                // Optimized query per this thread: http://wordpress.org/support/topic/plugin-wordpress-seo-by-yoast-performance-suggestion
                // Also see http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/
                $posts = $wpdb->get_results( "SELECT l.ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM ( SELECT ID FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset ) o JOIN $wpdb->posts l ON l.ID = o.ID ORDER BY l.ID" );
                /* $posts = $wpdb->get_results("SELECT ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset"); */

                $offset = $offset + $steps;

                foreach ( $posts as $p ) {
                    $p->post_type = $post_type;
                    $p->post_status = 'publish';
                    $p->filter = 'sample';

                    if ( wpseo_get_value( 'meta-robots-noindex', $p->ID ) && wpseo_get_value( 'sitemap-include', $p->ID ) != 'always' )
                        continue;
                    if ( wpseo_get_value( 'sitemap-include', $p->ID ) == 'never' )
                        continue;
                    if ( wpseo_get_value( 'redirect', $p->ID ) && strlen( wpseo_get_value( 'redirect', $p->ID ) ) > 0 )
                        continue;

                    $url = array();
                    $url['mod'] = ( isset( $p->post_modified_gmt ) && $p->post_modified_gmt != '0000-00-00 00:00:00' ) ? $p->post_modified_gmt : $p->post_date_gmt;
                    $url['chf'] = 'weekly';
                    $url['loc'] = get_permalink( $p );
                    .............
        }

        /**
         * Build a sub-sitemap for a specific taxonomy -- example.com/tax-sitemap.xml
         *
         * @param string $taxonomy Registered taxonomy's slug
         */
        function build_tax_map( $taxonomy ) {
            $options = get_wpseo_options();
            ..........

            // Grab last modified date
            $sql = "SELECT MAX(p.post_date) AS lastmod FROM $wpdb->posts AS p INNER JOIN $wpdb->term_relationships AS term_rel ON term_rel.object_id = p.ID INNER JOIN $wpdb->term_taxonomy AS term_tax ON term_tax.term_taxonomy_id = term_rel.term_taxonomy_id AND term_tax.taxonomy = '$c->taxonomy' AND term_tax.term_id = $c->term_id WHERE p.post_status = 'publish' AND p.post_password = ''";
            $url['mod'] = $wpdb->get_var( $sql );
            $url['chf'] = 'weekly';
            $output .= $this->sitemap_url( $url );
            }
        }

        /**
         * Build the <url> tag for a given URL.
         *
         * @param array $url Array of parts that make up this entry
         * @return string
         */
        function sitemap_url( $url ) {
            if ( isset( $url['mod'] ) )
                $date = mysql2date( "Y-m-d\TH:i:s+00:00", $url['mod'] );
            else
                $date = date( 'c' );

            $output = "\t<url>\n";
            $output .= "\t\t<loc>" . $url['loc'] . "</loc>\n";
            $output .= "\t\t<lastmod>" . $date . "</lastmod>\n";
            $output .= "\t\t<changefreq>" . $url['chf'] . "</changefreq>\n";
            $output .= "\t\t<priority>" . str_replace( ',', '.', $url['pri'] ) . "</priority>\n";
            if ( isset( $url['images'] ) && count( $url['images'] ) > 0 ) {
                foreach ( $url['images'] as $img ) {
                    $output .= "\t\t<image:image>\n";
                    $output .= "\t\t\t<image:loc>" . esc_html( $img['src'] ) . "</image:loc>\n";
                    if ( isset( $img['title'] ) )
                        $output .= "\t\t\t<image:title>" . _wp_specialchars( html_entity_decode( $img['title'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:title>\n";
                    if ( isset( $img['alt'] ) )
                        $output .= "\t\t\t<image:caption>" . _wp_specialchars( html_entity_decode( $img['alt'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:caption>\n";
                    $output .= "\t\t</image:image>\n";
                }
            }
            $output .= "\t</url>\n";
            return $output;
        }

        /**
         * Get the modification date for the last modified post in the post type:
         *
         * @param array $post_types Post types to get the last modification date for
         * @return string
         */
        function get_last_modified( $post_types ) {
            global $wpdb;
            if ( !is_array( $post_types ) )
                $post_types = array( $post_types );

            $result = 0;
            foreach ( $post_types as $post_type ) {
                $key = 'lastpostmodified:gmt:' . $post_type;
                $date = wp_cache_get( $key, 'timeinfo' );
                if ( !$date ) {
                    $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt DESC LIMIT 1", $post_type ) );
                    if ( $date )
                        wp_cache_set( $key, $date, 'timeinfo' );
                }
                if ( strtotime( $date ) > $result )
                    $result = strtotime( $date );
            }

            // Transform to W3C Date format.
            $result = date( 'c', $result );
            return $result;
        }
    }

    global $wpseo_sitemaps;
    $wpseo_sitemaps = new WPSEO_Sitemaps();
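    A hedged suggestion from editing (not part of the original question): the stored timestamps are GMT, so one low-touch approach is to convert them with DateTime/DateTimeZone wherever the plugin currently formats a date with date(). A minimal sketch, assuming you are free to edit the plugin; the helper name is hypothetical:

    // Hypothetical helper: render a MySQL GMT timestamp in New York time
    function wpseo_local_date( $mysql_gmt_date ) {
        $dt = new DateTime( $mysql_gmt_date, new DateTimeZone( 'UTC' ) );
        $dt->setTimezone( new DateTimeZone( 'America/New_York' ) );
        return $dt->format( 'c' ); // W3C format, e.g. 2012-11-01T09:30:00-04:00
    }

    // Inside sitemap_url(), the lastmod line would then become something like:
    // $date = isset( $url['mod'] ) ? wpseo_local_date( $url['mod'] ) : wpseo_local_date( gmdate( 'Y-m-d H:i:s' ) );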

    Read the article

  • Using maven-release-plugin to tag and commit to non-origin

    - by Ali G
    When I do a release of my project, I want to share the source with a wider group of people than I normally do during development. The code is shared via a Git repository. To do this, I have used the following:

    remote public repository - released code is pushed here, every week or so (http://example.com/public)
    remote private repository - non-release code is pushed here, more than daily (http://example.com/private)

    In my local git repository, I have the following remotes defined:

    origin http://example.com/private
    public http://example.com/public

    I am currently trying to configure the maven-release-plugin to manage versioning of the builds, and to manage tagging and pushing of code to the public repository. In my pom.xml, I have listed the <scm/> as follows:

    <scm><connection>scm:git:http://example.com/public</connection></scm>

    (Removing this line will cause mvn release:prepare to fail.) However, when calling

    mvn release:clean release:prepare release:perform

    Maven calls git push origin tagname rather than pushing to the URL specified in the POM. So the questions are:

    Best practice: Should I just be tagging and committing in my private repo (origin), and pushing to public manually?
    Can I make Maven push to the repository that I choose, rather than defaulting to origin? I felt this was implied by the requirement of the <connection/> element in <scm/>.
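    A hedged possibility worth noting here (an editorial sketch, not from the original post): the release plugin has a pushChanges flag, so you can stop it pushing to origin at all and push the tag wherever you like yourself. Assuming a reasonably recent plugin version:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <configuration>
        <!-- Tag and commit locally only; do not push to origin automatically -->
        <pushChanges>false</pushChanges>
      </configuration>
    </plugin>

    followed by a manual git push public tagname once the release looks good.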

    Read the article

  • Error when calling Mysql Stored procedure

    - by devuser
    This is my stored procedure to search through all databases, tables and columns. This procedure got created without any error.

    DELIMITER $$

    DROP PROCEDURE IF EXISTS mydb.get_table$$

    CREATE DEFINER=`root`@`%` PROCEDURE get_table(in_search varchar(50))
    READS SQL DATA
    BEGIN
        DECLARE trunc_cmd VARCHAR(50);
        DECLARE search_string VARCHAR(250);
        DECLARE db,tbl,clmn CHAR(50);
        DECLARE done INT DEFAULT 0;
        DECLARE COUNTER INT;
        DECLARE table_cur CURSOR FOR
            SELECT concat('SELECT COUNT(*) INTO @CNT_VALUE FROM ', table_schema, '.', table_name, ' WHERE ', column_name, ' REGEXP ''', in_search, ''''), table_schema, table_name, column_name
            FROM information_schema.COLUMNS
            WHERE TABLE_SCHEMA NOT IN ('mydb','information_schema');
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done=1;

        # Truncating table to refill the data for a new search.
        PREPARE trunc_cmd FROM 'TRUNCATE TABLE temp_details';
        EXECUTE trunc_cmd;

        OPEN table_cur;
        table_loop:LOOP
            FETCH table_cur INTO search_string,db,tbl,clmn;

            # Executing the search
            SET @search_string = search_string;
            SELECT search_string;
            PREPARE search_string FROM @search_string;
            EXECUTE search_string;

            SET COUNTER = @CNT_VALUE;
            SELECT COUNTER;
            IF COUNTER > 0 THEN
                # Inserting required results from search to table
                INSERT INTO temp_details VALUES(db,tbl,clmn);
            END IF;

            IF done=1 THEN
                LEAVE table_loop;
            END IF;
        END LOOP;
        CLOSE table_cur;

        # Finally show results
        SELECT * FROM temp_details;
    END$$

    DELIMITER ;

    But when calling this procedure the following error occurs:

    call get_table('aaa')

    Error Code : 1064
    You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'delete REGEXP 'aaa'' at line 1 (0 ms taken)
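    A hedged observation from editing (not in the original post): the error text quotes 'delete REGEXP ...', which suggests the cursor hit a column literally named delete - a reserved word - and the generated SQL breaks because identifiers are interpolated unquoted. A minimal sketch of the fix, assuming the same cursor query, is to backtick-quote every identifier when building the statement:

    DECLARE table_cur CURSOR FOR
        SELECT CONCAT('SELECT COUNT(*) INTO @CNT_VALUE FROM `', table_schema, '`.`', table_name,
                      '` WHERE `', column_name, '` REGEXP ''', in_search, ''''),
               table_schema, table_name, column_name
        FROM information_schema.COLUMNS
        WHERE TABLE_SCHEMA NOT IN ('mydb','information_schema');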

    Read the article

  • Problems with MediaPlayer, raw resources, stop and start

    - by arakn0
    Hello everybody, I'm new to Android development and I have the following question/problem. I'm playing around with the MediaPlayer class to play some sounds/music. I am playing raw resources (res/raw) and it looks kind of easy. To play a raw resource, the MediaPlayer has to be initialized like this:

    MediaPlayer mp = MediaPlayer.create(appContext, R.raw.song);
    mp.start();

    Up to here there is no problem. The sound is played, and everything works fine. My problem appears when I want to add more options to my application, specifically when I add the "Stop" button/option. Basically, what I want to do is... when I press "Stop", the music stops. And when I press "Start", the song/sound starts over. (pretty basic!) To stop the media player, you only have to call stop(). But to play the sound again, the media player has to be reset and prepared:

    mp.reset();
    mp.setDataSource(params);
    mp.prepare();

    The problem is that the method setDataSource() only accepts as params a file path, Content Provider URI, streaming media URL path, or File Descriptor. So, since this method doesn't accept a resource identifier, I don't know how to set the data source in order to call prepare(). In addition, I don't understand why you can't use a resource identifier to set the data source, but you can use a resource identifier when initializing the MediaPlayer. I guess that I'm missing something. I wonder if I am mixing concepts, and the method stop() doesn't have to be called in the "Stop" button. Any help? Thanks in advance!!!
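    For what it's worth, a hedged sketch of one common way around this (an editorial addition, not from the original post): setDataSource() does accept a FileDescriptor, and a raw resource can provide one via AssetFileDescriptor, so the same player can be reset and re-prepared without recreating it (exception handling elided for brevity):

    // Re-arm an existing MediaPlayer from res/raw after stop()
    AssetFileDescriptor afd = appContext.getResources().openRawResourceFd(R.raw.song);
    mp.reset();
    mp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
    afd.close();
    mp.prepare();
    mp.start();

    Note that openRawResourceFd() only works for resources stored uncompressed in the APK (audio formats like ogg/mp3 are typically left uncompressed by the packager), so this assumes the sound file qualifies.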

    Read the article

  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and JMS in an application server (Glassfish). The scenario is dead simple:

    1. an EJB inserts a row in the database and sends a message.
    2. when the message is delivered with an MDB, the row is read and updated.

    The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable if we consider the 2-phase commit protocol:

    1. prepare JMS
    2. prepare database
    3. commit JMS
    4. (tiny little gap where the message can be delivered before the insert has been committed)
    5. commit database

    I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box". My questions are then:

    How could it work out of the box?
    My scenario sounds fairly simple; why aren't there more people with similar troubles?
    Am I doing something wrong? Is there a way to solve this issue correctly?

    Here are a bit more details about my understanding of the problem: This timing issue exists only if the participants are treated in this order. If the 2PC treats the participants in the reverse order (database first, then message broker) that should be fine. The problem was randomly happening but completely reproducible. I found no way to control the order of the participants in distributed transactions in the JTA, JCA and JPA specifications, nor in the Glassfish documentation. We could assume they will be enlisted in the distributed transaction according to the order in which they are used, but with an ORM such as JPA, it's difficult to know when the data are flushed and when the database connection is really used. Any idea?

    Read the article

  • Why can't I get a TRUE return in this prepared statement?

    - by Cortopasta
    I can't seem to get this to do anything but return false. My best guess is that the prepared statement isn't executing, but I have no idea why.

    private function check_credentials($plain_username, $md5_password) {
        global $dbcon;

        $ac = new ac();
        $ac->dbconnect();

        $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
        $userid->bindParam(':username', $plain_username);
        $userid->bindParam(':password', $md5_password);
        $userid->execute();

        $id = $userid->fetch();
        return $id;
    }

    EDIT: I've even tried hard-coding the username and password into the function itself to try and isolate the problem, like this:

    private function check_credentials($plain_username, $md5_password) {
        global $dbcon;

        $plain_username = "jim";
        $md5_username = "waffles";

        $ac = new ac();
        $ac->dbconnect();

        $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
        $userid->bindParam(':username', $plain_username);
        $userid->bindParam(':password', $md5_password);
        $userid->execute();
        print_r($dbcon->errorInfo());

        $id = $userid->fetch();
        return $id;
    }

    Still nothing :-/
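    Two hedged debugging notes from editing (not in the original question): the hard-coded test assigns $md5_username, so :password is still bound to the untouched $md5_password argument; and errorInfo() is being read from the connection, while statement-level failures land on the statement. A minimal sketch that surfaces both, assuming the same $dbcon PDO handle:

    $dbcon->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); // fail loudly instead of silently returning false

    $userid = $dbcon->prepare('SELECT id FROM users WHERE username = :username AND password = :password LIMIT 1');
    $userid->bindParam(':username', $plain_username);
    $userid->bindParam(':password', $md5_password);

    if (!$userid->execute()) {
        print_r($userid->errorInfo()); // statement-level error, not $dbcon->errorInfo()
    }
    $id = $userid->fetch();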

    Read the article

  • Preparing a MySQL INSERT/UPDATE statement with DEFAULT values

    - by Raveren
    Quoting the MySQL INSERT manual - the same goes for UPDATE:

    Use the keyword DEFAULT to set a column explicitly to its default value. This makes it easier to write INSERT statements that assign values to all but a few columns, because it enables you to avoid writing an incomplete VALUES list that does not include a value for each column in the table. Otherwise, you would have to write out the list of column names corresponding to each value in the VALUES list.

    So in short, if I write

    INSERT INTO table1 (column1,column2) values ('value1',DEFAULT);

    a new row with column2 set to its default value - whatever it may be - is inserted. However, if I prepare and execute a statement in PHP:

    $statement = $pdoObject->prepare("INSERT INTO table1 (column1,column2) values (?,?)");
    $statement->execute(array('value1','DEFAULT'));

    the new row will contain 'DEFAULT' as its text value - if the column is able to store text values. Now, I have written an abstraction layer to PDO (I needed it) and, to get around this issue, am considering introducing a

    const DEFAULT_VALUE = "randomstring";

    so I could execute statements like this:

    $statement->execute(array('value1', mysql::DEFAULT_VALUE));

    And then, in the method that does the binding, I'd go through the values that are sent to be bound and, if some are equal to self::DEFAULT_VALUE, act accordingly. I'm pretty sure there's a better way to do this. Has someone else encountered similar situations?
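    A hedged sketch of the usual workaround (an editorial addition; the helper name is hypothetical): since DEFAULT is a keyword rather than a value, it cannot travel through a placeholder, so the binding layer has to splice it into the SQL text and drop it from the bound array:

    // Hypothetical helper: rewrite placeholders whose value is the DEFAULT marker
    function build_insert(PDO $pdo, $table, array $cols, array $values, $marker) {
        $params = array();
        $slots  = array();
        foreach ($values as $v) {
            if ($v === $marker) {
                $slots[] = 'DEFAULT';   // goes into the SQL itself
            } else {
                $slots[] = '?';
                $params[] = $v;         // stays a bound parameter
            }
        }
        $sql = sprintf('INSERT INTO %s (%s) VALUES (%s)', $table, implode(',', $cols), implode(',', $slots));
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }

    // Usage sketch:
    // build_insert($pdo, 'table1', array('column1','column2'), array('value1', mysql::DEFAULT_VALUE), mysql::DEFAULT_VALUE);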

    Read the article

  • Jquery Internet Explorer 8 compatibility issue, does not load data unless history is deleted...

    - by Scarface
    Hey guys, I have a weird problem. I have an update system that refreshes data on a time interval. It works well in all browsers except Internet Explorer 8. The problem is that once it loads the data, it does not matter if the data updates further; it will not update the data visually until the internet history is deleted. I am not using any cookies server-side... Anyone ever encounter something like this? Here is my javascript, thanks for any assistance in advance.

    function prepare(response) {
        var d = new Date();
        count++;
        d.setTime(response.time * 1000);
        var mytime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
        var string = '<li class="shoutbox-list" id="list-' + count + '">'
            + '<span class="shoutbox-list-nick"><a href="statistics.php?user=' + response.user + '">' + response.user + '</a></span>'
            + ' <span class="date">' + mytime + '</span><br>'
            + '<span class="msg">' + response.message + '</span>'
            + '</li>';
        return string;
    }

    function refresh() {
        $.getJSON(files + "shoutbox.php?action=view&time=" + lastTime + "&topic_id=" + topic_id, function(json) {
            if (json.length) {
                for (i = 0; i < json.length; i++) {
                    $('#daddy-shoutbox-list').prepend(prepare(json[i]));
                    $('#list-' + count).fadeIn(1500);
                }
                var j = i - 1;
                lastTime = json[j].time;
            }
            //alert(lastTime);
        });
        timeoutID = setTimeout(refresh, 3000);
    }

    $(document).ready(function() {
        var options = {
            dataType: 'json',
            beforeSubmit: validate,
            success: function(response, status) {
                if (response.error == 'success') {
                    success(response, status);
                } else {
                    $.prompt(response.error);
                }
            }
        };
        $('#daddy-shoutbox-form').ajaxForm(options);
        timeoutID = setTimeout(refresh, 100);
    });
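    A hedged note from editing (not part of the original question): this looks like the classic IE GET-caching symptom - IE8 keeps serving the $.getJSON response from its cache until the history/cache is cleared. A minimal sketch of the usual fix, assuming the same refresh() function:

    // Option 1: disable caching for all jQuery AJAX requests
    $.ajaxSetup({ cache: false });

    // Option 2: bust the cache manually on this one URL
    $.getJSON(files + "shoutbox.php?action=view&time=" + lastTime
        + "&topic_id=" + topic_id + "&_=" + new Date().getTime(), function(json) { /* ... */ });

    jQuery's cache: false does the same thing internally, appending a timestamped _ parameter so each request URL is unique.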

    Read the article

  • application packaging

    - by user285825
    Application packaging (software deliverables) is a vague concept to me. I worked on web applications before, so all I know is how to prepare the deliverables, i.e. a WAR, and then put it into the servlet container, with the rest being taken care of. But when considering other Java or C++-based applications (.exe or .class), how do I prepare the package? What should be considered in the preparation of the package and the deployment of the application? As I understand it, there may be entries to be made in the Windows registry, or libraries to be put in the /usr/bin directory, and so on. During the development phase, execution and testing are done in a more informal manner, like writing some shell scripts and so on, but the deployment scenario is a more formal thing. As I said, I only have an idea of deploying web applications; I am not very knowledgeable in the other areas. Are there any kinds of conventions, like there are conventions for documentation (javadoc or doxygen/man pages)? Application packaging is not the sort of thing there is much discussion of in books or online materials. This is very disappointing.

    Read the article

  • Stored procedure strange error when called through php

    - by ravi
    I have been coding a registration page (login system) in PHP and MySQL for a website. I'm using two stored procedures for this. The first stored procedure checks whether the email address already exists in the database. The second one inserts the user-supplied data into the MySQL database. The user has EXECUTE permission on both procedures. When I execute them individually from a PHP script they work fine. But when I use them together in a script, the second stored procedure (insert) is not working.

    Stored procedure 1:

    DELIMITER $$
    CREATE PROCEDURE reg_check_email(email VARCHAR(80))
    BEGIN
        SET @email = email;
        SET @sql = 'SELECT email FROM user_account WHERE user_account.email=?';
        PREPARE stmt FROM @sql;
        EXECUTE stmt USING @email;
    END$$
    DELIMITER ;

    Stored procedure 2:

    DELIMITER $$
    CREATE PROCEDURE reg_insert_into_db(fname VARCHAR(40), lname VARCHAR(40), email VARCHAR(80), pass VARBINARY(32), licenseno VARCHAR(80), mobileno VARCHAR(10))
    BEGIN
        SET @fname = fname, @lname = lname, @email = email, @pass = pass, @licenseno = licenseno, @mobileno = mobileno;
        SET @sql = 'INSERT INTO user_account(email,pass,last_name,license_no,phone_no) VALUES(?,?,?,?,?)';
        PREPARE stmt FROM @sql;
        EXECUTE stmt USING @email,@pass,@lname,@licenseno,@mobileno;
    END$$
    DELIMITER ;

    When I test these from a sample PHP script, the insert is not working, but the first stored procedure (reg_check_email) is working. If I comment out the first one (reg_check_email), the second stored procedure (reg_insert_into_db) works fine.

    <?php
    require("/wamp/mysql.inc.php");

    $r = mysqli_query($dbc, "CALL reg_check_email('[email protected]')");
    $rows = mysqli_num_rows($r);
    if ($rows == 0) {
        $r = mysqli_query($dbc, "CALL reg_insert_into_db('a','b','[email protected]','c','d','e')");
    }
    ?>

    I'm unable to figure out the mistake. Thanks in advance, ravi.
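    A hedged guess from editing (not in the original question): a CALL that returns a result set also returns a trailing status result, and mysqli will refuse to run the next query until every pending result is consumed, so the second CALL can fail silently. A minimal sketch of draining the results between the two calls, assuming the same $dbc connection:

    $r = mysqli_query($dbc, "CALL reg_check_email('[email protected]')");
    $rows = mysqli_num_rows($r);
    mysqli_free_result($r);

    // Drain any remaining result sets left over from the CALL
    while (mysqli_more_results($dbc) && mysqli_next_result($dbc)) {
        if ($extra = mysqli_store_result($dbc)) {
            mysqli_free_result($extra);
        }
    }

    if ($rows == 0) {
        $r = mysqli_query($dbc, "CALL reg_insert_into_db('a','b','[email protected]','c','d','e')");
    }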

    Read the article

  • How does PHP PDO work internally ?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and insert the rows into a CSV file; the table has around 90000 rows of data. My question is, if I use PDOStatement->fetch as I am using it here:

    // First, prepare the statement, using placeholders
    $query = "SELECT * FROM tableName";
    $stmt = $this->connection->prepare($query);

    // Execute the statement
    $stmt->execute();

    var_dump($stmt->fetch(PDO::FETCH_ASSOC));

    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        echo "Hi";
        // Export every row to a file
        fputcsv($data, $row);
    }

    will the result of every fetch be kept in memory? Meaning, when I do the second fetch, would memory hold the data of the first fetch as well as the data of the second fetch? And so, if I have 90000 rows of data and am fetching one row at a time, would memory keep accumulating each new fetch result without releasing the previous ones, so that by the last fetch memory would already hold 89999 rows of data? Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?
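    A hedged aside from editing (not part of the original question): with the MySQL driver the answer usually hinges on buffering rather than on fetch() itself - by default the whole result set is buffered client-side at execute() time, and fetch() just walks it. For a 90000-row export, an unbuffered query streams rows one at a time; a minimal sketch, assuming a pdo_mysql connection:

    // Stream rows instead of buffering the whole result set in memory
    $this->connection->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false);

    $stmt = $this->connection->prepare("SELECT * FROM tableName");
    $stmt->execute();

    while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        fputcsv($data, $row); // each row can be written and discarded immediately
    }

    fetchAll(), by contrast, materializes every row into one PHP array, so its peak memory is roughly the full result set regardless of buffering.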

    Read the article

  • Reverse ordered list for jquery submitted comments

    - by g-man
    Hey guys, I have one more question lol. I am using a script that allows users to submit comments through jQuery AJAX; however, when they are submitted, the submitted comments appear at the bottom of the other comments, which are sorted in descending order (newest on top) when the page first loads (due to the MySQL query). Is there a way to make a new comment appear on top, through some sort of sorting javascript function?

    function prepare(response) {
        var d = new Date();
        count++;
        d.setTime(response.time * 1000);
        var mytime = d.getHours() + ':' + d.getMinutes() + ':' + d.getSeconds();
        var string = '<li class="shoutbox-list" id="list-' + count + '">'
            + '<span class="date">' + mytime + '</span>'
            + '<span class="shoutbox-list-nick"><a href="statistics.php?user=' + response.user + '">' + response.user + '</a>:</span>'
            + '<span class="msg">' + response.message + '</span>'
            + '</li>';
        return string;
    }

    function success(response, status) {
        if (status == 'success') {
            lastTime = response.time;
            $('#daddy-shoutbox-list').append(prepare(response));
            $('input[name=message]').attr('value', '').focus();
            $('#list-' + count).fadeIn('slow');
            timeoutID = setTimeout(refresh, 3000);
        }
    }

    <div id="daddy-shoutbox">
        <ol id="daddy-shoutbox-list"></ol>
    </div>
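    A hedged pointer from editing (not in the original question): since the list is newest-on-top, the submit handler just needs to insert at the top of the <ol> instead of the bottom - jQuery's prepend() does exactly that:

    // In success(): insert the new comment before the existing ones
    $('#daddy-shoutbox-list').prepend(prepare(response));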

    Read the article

  • Sending multi-part email from Google App Engine using Spring's JavaMailSender fails

    - by hleinone
    It works without the multi-part (modified from the example in the Spring documentation):

    final MimeMessagePreparator preparator = new MimeMessagePreparator() {
        public void prepare(final MimeMessage mimeMessage) throws Exception {
            final MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
            message.setTo(toAddress);
            message.setFrom(fromAddress);
            message.setSubject(subject);
            final String htmlText = FreeMarkerTemplateUtils.processTemplateIntoString(
                configuration.getTemplate(htmlTemplate), model);
            message.setText(htmlText, true);
        }
    };
    mailSender.send(preparator);

    But once I change it to:

    final MimeMessagePreparator preparator = new MimeMessagePreparator() {
        public void prepare(final MimeMessage mimeMessage) throws Exception {
            final MimeMessageHelper message = new MimeMessageHelper(mimeMessage, true);
            ...
            message.setText(plainText, htmlText);
        }
    };
    mailSender.send(preparator);

    I get:

    Failed message 1: javax.mail.MessagingException: Converting attachment data failed
        at com.google.appengine.api.mail.stdimpl.GMTransport.sendMessage(GMTransport.java:231)
        at org.springframework.mail.javamail.JavaMailSenderImpl.doSend(JavaMailSenderImpl.java:402)
        ...

    This is especially difficult since GMTransport is a proprietary Google class with no sources available, which makes it harder to debug. Anyone have any ideas what to try next? My bean config, for helping you to help me:

    <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl"
        p:username="${mail.username}" p:password="${mail.password}" p:protocol="gm" />

    Read the article

  • Inserting null fields with dbi:Pg

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe-delimited text file. Sometimes a field is null (as expected). However, Perl makes this field into an empty string and the Postgres insert statement fails. Here's a snippet of code:

    use DBI;

    # Connect to the database.
    $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'mydb', 'mydb',
        { AutoCommit => 1, RaiseError => 1, PrintError => 1 });

    # Prepare an insert.
    $sth = $dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?");

    while (<>) {
        # Remove the whitespace
        chomp;

        # Parse the fields.
        @field = split(/\|/, $_);
        print "$_\n";

        # Do the insert.
        $sth->execute($field[0], $field[1]);
    }

    And if the input is:

    a|1
    b|
    c|3

    EDIT: Use this input instead.

    a|1|x
    b||x
    c|3|x

    It will fail at b|.

    DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: ""

    I just want it to insert a null on field1 instead. Any ideas?

    EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
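    A hedged sketch of the usual fix (an editorial addition, not from the original post): DBI passes undef as SQL NULL, so mapping empty fields to undef before execute() is enough:

    # Convert empty strings to undef so DBD::Pg sends NULL instead of ''
    my @vals = map { defined($_) && $_ ne '' ? $_ : undef } @field[0, 1];
    $sth->execute(@vals);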

    Read the article

  • Binding a date string parameter in an MS Access PDO query

    - by harryg
    I've made a PDO database class which I use to run queries on an MS Access database. When querying using a date condition, as is common in SQL, dates are passed as a string. Access usually expects the date to be surrounded in hashes, however. E.g.:

    SELECT transactions.amount FROM transactions WHERE transactions.date = #2013-05-25#;

    If I were to run this query using PDO, I might do the following:

    //instantiate pdo connection etc... resulting in a $db object
    $stmt = $db->prepare('SELECT transactions.amount FROM transactions WHERE transactions.date = #:mydate#;'); //prepare the query
    $stmt->bindValue('mydate', '2013-05-25', PDO::PARAM_STR); //bind the date as a string
    $stmt->execute(); //run it
    $result = $stmt->fetch(); //get the results

    As far as my understanding goes, the statement that results from the above would look like this, as binding a string results in it being surrounded by quotes:

    SELECT transactions.amount FROM transactions WHERE transactions.date = #'2013-05-25'#;

    This causes an error and prevents the statement from running. What's the best way to bind a date string in PDO without causing this error? I'm currently resorting to sprintf-ing the string, which I'm sure is bad practice.

    Edit: if I pass the hash-surrounded date then I still get the error as below:

    Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[22018]: Invalid character value for cast specification: -3030 [Microsoft][ODBC Microsoft Access Driver] Data type mismatch in criteria expression. (SQLExecute[-3030] at ext\pdo_odbc\odbc_stmt.c:254)' in C:\xampp\htdocs\ips\php\classes.php:49 Stack trace: #0 C:\xampp\htdocs\ips\php\classes.php(49): PDOStatement->execute() #1 C:\xampp\htdocs\ips\php\classes.php(52): database->execute() #2 C:\xampp\htdocs\ips\try2.php(12): database->resultset() #3 {main} thrown in C:\xampp\htdocs\ips\php\classes.php on line 49
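    A hedged suggestion from editing (not in the original question): hash delimiters are only needed for date literals written into the SQL text; a bound parameter is sent as a typed value, so the placeholder should carry no hashes at all. If the ODBC driver still complains about the string, wrapping the parameter in Access's CDate() makes the conversion explicit. A minimal sketch, assuming the same table:

    // No hashes around the placeholder; let CDate() coerce the bound string
    $stmt = $db->prepare('SELECT transactions.amount FROM transactions WHERE transactions.date = CDate(?)');
    $stmt->bindValue(1, '2013-05-25', PDO::PARAM_STR);
    $stmt->execute();
    $result = $stmt->fetch();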

    Read the article
