Search Results

Search found 22656 results on 907 pages for 'free amazon plugin'.

Page 47/907 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Free SVN repo server without requiring a project

    - by Nathan Campos
    I want to know if there are any free Subversion hosting services where you don't need a 'project' to host your C++ files. I don't have an actual project; I just want to store my C++ code in an SVN repository. I am looking for something like OpenSVN, except that OpenSVN requires you to have a project. Can you recommend a Subversion hosting service where you upload your files to an account rather than to a project? Something like: http://www.test-svn.com/~nathanpc/

    Read the article

  • Why does my Amazon EC2 instance in the Asia Pacific region have a US IP address?

    - by Turner
    I recently started a free trial of the Amazon EC2 service and created a free-tier micro instance (the AMI is Windows Server 2008) in the Asia Pacific (Tokyo) region. When it came up, the public DNS it provided was ec2-54-238-181-35.ap-northeast-1.compute.amazonaws.com. The corresponding IP is 54.238.181.35, which I think is in the U.S. I tried allocating some more Elastic IPs, but all of them also seem to have a U.S. origin. Can anyone explain this?
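    For what it's worth, registry and geolocation databases often place AWS addresses at Amazon's U.S. registration rather than the region where they are actually routed. A minimal Python 3 sketch that checks an address against Amazon's published ip-ranges.json feed (the feed and its field names are real; using it to answer this question is the sketch's own assumption):

        import ipaddress
        import json
        import urllib.request

        # The question's address; the feed lists every prefix AWS announces,
        # tagged with the region it is actually routed in.
        IP = "54.238.181.35"
        FEED = "https://ip-ranges.amazonaws.com/ip-ranges.json"

        with urllib.request.urlopen(FEED) as resp:
            prefixes = json.load(resp)["prefixes"]

        addr = ipaddress.ip_address(IP)
        for p in prefixes:
            if addr in ipaddress.ip_network(p["ip_prefix"]):
                # Expected here, per the instance's own DNS name:
                # ap-northeast-1 (Tokyo), despite what whois reports.
                print(p["region"], p["service"])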

    Read the article

  • Cascading S3 Sink Tap not being deleted with SinkMode.REPLACE

    - by Eric Charles
    We are running Cascading with a Sink Tap configured to store in Amazon S3 and were facing occasional FileAlreadyExistsException errors (see [1]). This happened only from time to time (about 1 run in 100) and was not reproducible. Digging into the Cascading code, we discovered that Hfs.deleteResource() is called (among others) by BaseFlow.deleteSinksIfNotUpdate(). By the way, we were quite intrigued by the silent NPE (with the comment "hack to get around npe thrown when fs reaches root directory"). From there, we extended the Hfs tap with our own Tap to add more action in the deleteResource() method (see [2]), with a retry mechanism calling getFileSystem(conf).delete directly. The retry mechanism seemed to bring improvement, but we are still sometimes facing failures (see the example in [3]): HDFS returns isDeleted=true, but when we ask immediately afterwards whether the folder exists, we receive exists=true, which should not happen. The logs also randomly show isDeleted as true or false when the flow succeeds, which suggests the returned value is irrelevant or not to be trusted. Can anybody share their own S3 experience with such behavior ("the folder should be deleted, but it is not")? We suspect an S3 issue, but could it also be in Cascading or HDFS? We run on Hadoop Cloudera-cdh3u5 and Cascading 2.0.1-wip-dev.

    [1] The exception:

        org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory s3n://... already exists
          at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:132)
          at com.twitter.elephantbird.mapred.output.DeprecatedOutputFormatWrapper.checkOutputSpecs(DeprecatedOutputFormatWrapper.java:75)
          at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:923)
          at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:882)
          at java.security.AccessController.doPrivileged(Native Method)
          at javax.security.auth.Subject.doAs(Subject.java:396)
          at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
          at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:882)
          at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:856)
          at cascading.flow.hadoop.planner.HadoopFlowStepJob.internalNonBlockingStart(HadoopFlowStepJob.java:104)
          at cascading.flow.planner.FlowStepJob.blockOnJob(FlowStepJob.java:174)
          at cascading.flow.planner.FlowStepJob.start(FlowStepJob.java:137)
          at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:122)
          at cascading.flow.planner.FlowStepJob.call(FlowStepJob.java:42)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:619)

    [2] The extended deleteResource() with the retry mechanism:

        @Override
        public boolean deleteResource(JobConf conf) throws IOException {
            LOGGER.info("Deleting resource {}", getIdentifier());
            boolean isDeleted = super.deleteResource(conf);
            LOGGER.info("Hfs Sink Tap isDeleted is {} for {}", isDeleted, getIdentifier());
            Path path = new Path(getIdentifier());
            int retryCount = 0;
            int cumulativeSleepTime = 0;
            int sleepTime = 1000;
            while (getFileSystem(conf).exists(path)) {
                LOGGER.info("Resource {} still exists, it should not... - I will continue to wait patiently...",
                    getIdentifier());
                try {
                    LOGGER.info("Now I will sleep " + sleepTime / 1000
                        + " seconds while trying to delete {} - attempt: {}", getIdentifier(), retryCount + 1);
                    Thread.sleep(sleepTime);
                    cumulativeSleepTime += sleepTime;
                    sleepTime *= 2;
                } catch (InterruptedException e) {
                    e.printStackTrace();
                    LOGGER.error("Interrupted while sleeping trying to delete {} with message {}...",
                        getIdentifier(), e.getMessage());
                    throw new RuntimeException(e);
                }
                if (retryCount == 0) {
                    getFileSystem(conf).delete(getPath(), true);
                }
                retryCount++;
                if (cumulativeSleepTime > MAXIMUM_TIME_TO_WAIT_TO_DELETE_MS) {
                    break;
                }
            }
            if (getFileSystem(conf).exists(path)) {
                LOGGER.error("We didn't succeed to delete the resource {}. Throwing now a runtime exception.",
                    getIdentifier());
                throw new RuntimeException(
                    "Although we waited to delete the resource for " + getIdentifier() + ' ' + retryCount
                    + " iterations, it still exists - This must be an issue in the underlying storage system.");
            }
            return isDeleted;
        }

    [3] Example log output from a failure:

        INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] at least one sink is marked for delete
        INFO [pool-2-thread-15] (BaseFlow.java:1287) - [...] sink oldest modified date: Wed Dec 31 23:59:59 UTC 1969
        INFO [pool-2-thread-15] (HiveSinkTap.java:148) - Now I will sleep 1 seconds while trying to delete s3n://... - attempt: 1
        INFO [pool-2-thread-15] (HiveSinkTap.java:130) - Deleting resource s3n://...
        INFO [pool-2-thread-15] (HiveSinkTap.java:133) - Hfs Sink Tap isDeleted is true for s3n://...
        ERROR [pool-2-thread-15] (HiveSinkTap.java:175) - We didn't succeed to delete the resource s3n://... Throwing now a runtime exception.
        WARN [pool-2-thread-15] (Cascade.java:706) - [...] flow failed: ...
        java.lang.RuntimeException: Although we waited to delete the resource for s3n://... 0 iterations, it still exists - This must be an issue in the underlying storage system.
          at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:179)
          at com.qubit.hive.tap.HiveSinkTap.deleteResource(HiveSinkTap.java:40)
          at cascading.flow.BaseFlow.deleteSinksIfNotUpdate(BaseFlow.java:971)
          at cascading.flow.BaseFlow.prepare(BaseFlow.java:733)
          at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:761)
          at cascading.cascade.Cascade$CascadeJob.call(Cascade.java:710)
          at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
          at java.util.concurrent.FutureTask.run(FutureTask.java:138)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Replacing Dropbox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OS X?

    - by Matt Rogish
    Right now we're using Dropbox to share various data files among approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest Dropbox plan at $10/mo seems too expensive. We'd also like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So I am contemplating a home-grown replacement for Dropbox. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as Dropbox, we're all comfortable with that. There are plenty of docs out there with bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use duplicity, which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, and the protocol looks like plain old HTTP: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use GPG to encrypt files. This seems like it could work, but I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store; as we'd be using Amazon S3 as the "repository", they fail #3. Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my three requirements; after that I can stitch together the rest. Any thoughts? THANKS!
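    For illustration, a minimal sketch of the outbound half of such a pipeline (encrypt with GPG, upload over HTTPS), assuming Python 3 with boto3 (which postdates this question) and placeholder bucket/key/recipient names; bi-directional sync would still have to be layered on top:

        import subprocess
        import boto3  # assumption: modern AWS SDK; uses HTTPS endpoints by default

        def push(path, bucket, key, recipient):
            """Encrypt a file to the shared team GPG key, then upload it to S3."""
            encrypted = path + ".gpg"
            # Requirement #2: client-side encryption; distributing the key
            # across the OS X clients is still left to you, as noted above.
            subprocess.run(["gpg", "--yes", "--encrypt", "--recipient", recipient,
                            "--output", encrypted, path], check=True)
            # Requirement #1: boto3 talks to S3 over SSL by default.
            boto3.client("s3").upload_file(encrypted, bucket, key)

        # Hypothetical names for illustration:
        push("notes.txt", "team-bucket", "shared/notes.txt.gpg", "team@example.com")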

    Read the article

  • What is the correct approach for an application that requires Amazon S3 uploads and SimpleDB data management?

    - by Luis Oscar
    I am developing an application for iOS, and that is going smoothly; the problem is that I am very new to server-side things. I am totally confused about how to correctly use Amazon Web Services for this purpose. What I want to do is very simple: I want my application to query a servlet hosted on EC2 in order to retrieve pictures and data, based on some criteria, from S3 and SimpleDB respectively. The application should also be able to upload pictures into an S3 bucket and register the information in SimpleDB. My main concerns are security and costs. So far I was using the Amazon Token Vending Machine, but I haven't been successful in customizing it, and while researching I discovered that in the long run it is very expensive. The ultimate goal is to run a "social" picture service for my iOS application: registering new users, authenticating those users, and controlling which permissions they have to which pictures in the bucket - all without having to worry about third parties accessing my users' private pictures. Sorry for this question, but I am really clueless about how to handle this... I have tried reading many articles, but all this server stuff looks very scary.
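    As a hedged sketch of the mechanism the Token Vending Machine wraps (handing the app short-lived, scoped credentials via STS), here is roughly what the server side could look like with boto3; the policy, names and library choice are illustrative assumptions, not part of the original setup:

        import json
        import boto3  # assumption; any AWS SDK exposes the same STS call

        # Scope the temporary credentials to one user's prefix in the bucket
        # (bucket and prefix are hypothetical).
        policy = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-photo-bucket/users/alice/*",
            }],
        }

        creds = boto3.client("sts").get_federation_token(
            Name="alice", Policy=json.dumps(policy), DurationSeconds=3600
        )["Credentials"]

        # Return AccessKeyId, SecretAccessKey and SessionToken to the iOS app;
        # they expire after an hour, so a leaked token has limited value.
        print(creds["AccessKeyId"], creds["Expiration"])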

    Read the article

  • Co-Authors Plus WordPress plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus plugin for WordPress has a very annoying bug. The custom function coauthors_wp_list_authors lists authors the same way the WordPress function wp_list_authors does, but it does not include authors who don't have a post of their own - if they only have entries in which they are listed as a co-author but not as the author, they will not be included in the list. That of course misses a very important point. I've looked at the faulty SQL statement, but unfortunately my knowledge of advanced SQL (especially JOINs) and of the WordPress database structure is too limited, and I remain clueless. There is a topic in the WP support forum, but the information there is very outdated and the fix is no longer applicable. I couldn't find any other, more current solutions on the internet. I'd be glad if someone here could help fix the SQL statement so that it also lists co-authors who don't have posts where they're the sole author, and displays the correct post count for all authors. Here's the entire function for reference, with a comment marking the SQL statement:

        function coauthors_wp_list_authors($args = '') {
            global $wpdb, $coauthors_plus;
            $defaults = array(
                'optioncount' => false, 'exclude_admin' => true,
                'show_fullname' => false, 'hide_empty' => true,
                'feed' => '', 'feed_image' => '', 'feed_type' => '',
                'echo' => true, 'style' => 'list', 'html' => true
            );
            $r = wp_parse_args( $args, $defaults );
            extract($r, EXTR_SKIP);
            $return = '';

            $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " .
                ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name");

            $author_count = array();

            # this is the SQL statement which doesn't work correctly:
            $query  = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count";
            $query .= " FROM $wpdb->posts";
            $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)";
            $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)";
            $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)";
            $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)";
            $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql( 'post' );
            $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'";
            $query .= " GROUP BY post_author";

            foreach ((array) $wpdb->get_results($query) as $row) {
                $author_count[$row->post_author] = $row->count;
            }

            foreach ( (array) $authors as $author ) {
                $link = '';
                $author = get_userdata( $author->ID );
                $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0;
                $name = $author->display_name;

                if ( $show_fullname && ($author->first_name != '' && $author->last_name != '') )
                    $name = "$author->first_name $author->last_name";

                if ( !$html ) {
                    if ( $posts == 0 ) {
                        if ( ! $hide_empty )
                            $return .= $name . ', ';
                    } else
                        $return .= $name . ', ';
                    continue;
                }

                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= '<li>';

                if ( $posts == 0 ) {
                    if ( ! $hide_empty )
                        $link = $name;
                } else {
                    $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) .
                        '" title="' . esc_attr( sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name) ) .
                        '">' . $name . '</a>';

                    if ( (! empty($feed_image)) || (! empty($feed)) ) {
                        $link .= ' ';
                        if (empty($feed_image))
                            $link .= '(';
                        $link .= '<a href="' . get_author_feed_link($author->ID) . '"';
                        if ( !empty($feed) ) {
                            $title = ' title="' . esc_attr($feed) . '"';
                            $alt = ' alt="' . esc_attr($feed) . '"';
                            $name = $feed;
                            $link .= $title;
                        }
                        $link .= '>';
                        if ( !empty($feed_image) )
                            $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />';
                        else
                            $link .= $name;
                        $link .= '</a>';
                        if ( empty($feed_image) )
                            $link .= ')';
                    }

                    if ( $optioncount )
                        $link .= ' ('. $posts . ')';
                }

                if ( !($posts == 0 && $hide_empty) && 'list' == $style )
                    $return .= $link . '</li>';
                else if ( ! $hide_empty )
                    $return .= $link . ', ';
            }

            $return = trim($return, ', ');

            if ( ! $echo )
                return $return;
            echo $return;
        }

    Read the article

  • Lock-free multiple readers, single writer

    - by dummzeuch
    I have an in-memory data structure that is read by multiple threads and written by only one thread. Currently I am using a critical section to make this access thread-safe. Unfortunately this has the effect of blocking readers even when only another reader is accessing it. There are two options to remedy this:

    1. use TMultiReadExclusiveWriteSynchronizer
    2. do away with any blocking by using a lock-free approach

    For option 2 I have the following so far (any code that doesn't matter has been left out):

        type
          TDataManager = class
          private
            FAccessCount: integer;
            FData: TDataClass;
          public
            procedure Read(out _Some: integer; out _Data: double);
            procedure Write(_Some: integer; _Data: double);
          end;

        procedure TDataManager.Read(out _Some: integer; out _Data: double);
        var
          Data: TDataClass;
        begin
          InterlockedIncrement(FAccessCount);
          try
            // make sure we get both values from the same TDataClass instance
            Data := FData;
            // read the actual data
            _Some := Data.Some;
            _Data := Data.Data;
          finally
            InterlockedDecrement(FAccessCount);
          end;
        end;

        procedure TDataManager.Write(_Some: integer; _Data: double);
        var
          NewData: TDataClass;
          OldData: TDataClass;
          ReaderCount: integer;
        begin
          NewData := TDataClass.Create(_Some, _Data);
          InterlockedIncrement(FAccessCount);
          OldData := TDataClass(InterlockedExchange(integer(FData), integer(NewData)));
          // now FData points to the new instance but there might still be
          // readers that got the old one before we exchanged it.
          ReaderCount := InterlockedDecrement(FAccessCount);
          if ReaderCount = 0 then
            // no active readers, so we can safely free the old instance
            FreeAndNil(OldData)
          else begin
            /// here is the problem
          end;
        end;

    Unfortunately there is the small problem of getting rid of the OldData instance after it has been replaced. If no other thread is currently inside the Read method (ReaderCount = 0), it can safely be disposed of and that's it. But what can I do if that's not the case? I could just store it until the next call and dispose of it there, but Windows scheduling could in theory let a reader thread sleep while it is still within the Read method and still holds a reference to OldData. If you see any other problem with the above code, please tell me about it. This is to be run on computers with multiple cores, and the above methods are to be called very frequently. In case it matters: I am using Delphi 2007 with the built-in memory manager. I am aware that the memory manager probably enforces some lock anyway when creating a new class instance, but I want to ignore that for the moment. Edit: It may not have been clear from the above: for the full lifetime of the TDataManager object there is only one thread that writes to the data, not several that might compete for write access. So this is a special case of MREW.
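    For contrast, a sketch of the same single-writer snapshot-swap pattern in a garbage-collected language (Python, relying on CPython's atomic reference assignment); it shows why reclamation is the only hard part of the Delphi version - here the GC keeps the old snapshot alive for any reader still holding it:

        class DataManager:
            def __init__(self, some: int, data: float):
                self._snapshot = (some, data)  # immutable snapshot, never mutated

            def read(self):
                # One reference read gives a consistent pair, like Data := FData.
                some, data = self._snapshot
                return some, data

            def write(self, some: int, data: float):
                # Single writer: publish a fresh snapshot. The old tuple is
                # reclaimed only after the last reader drops it, which is
                # exactly the bookkeeping the Delphi code does by hand.
                self._snapshot = (some, data)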

    Read the article

  • free() on stack memory

    - by vidicon
    I'm supporting some C code on Solaris, and I've seen something weird, or at least I think it is:

        char new_login[64];
        ...
        strcpy(new_login, (char *)login);
        ...
        free(new_login);

    My understanding is that since the variable is a local array, the memory comes from the stack and does not need to be freed; moreover, since no malloc/calloc/realloc was used, the behaviour is undefined. This is a real-time system, so I think it is a waste of cycles. Am I missing something obvious?

    Read the article

  • free RSS feed caching

    - by cherouvim
    Hello. I've got an application which serves an RSS feed of headlines, and I need to provide this feed to other consumers. I don't want to serve the RSS directly from my server, due to limited server resources, so I need to proxy (cache) it through some service which will handle the load. Assuming the RSS feed URL of my application is http://example.com/rss, I initially provided my consumers with the URL http://ajax.googleapis.com/ajax/services/feed/load?v=1.0&q=http%3A%2F%2Fexample.com%2Frss which solved my server load problem but introduced a liveness problem: the headlines are minutes to hours behind the actual feed (I haven't measured exactly how much). I've also tried distributing through FeedBurner, so the URL became something like http://feeds.feedburner.com/example123?format=xml but the liveness problem still exists. Is there a public and free solution to this problem? Anything below 5 minutes of liveness delay would be totally acceptable. Thanks
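    A minimal self-hosted alternative sketch, assuming Python 3 with the requests library and a cron entry: fetch the feed every few minutes and publish it as a static file from a host that can take the load (the paths and the 5-minute interval are illustrative):

        import requests  # assumption: third-party HTTP library

        FEED = "http://example.com/rss"          # the question's placeholder URL
        OUT = "/var/www/static/rss.xml"          # hypothetical web-served path

        # crontab entry (every 5 minutes, matching the acceptable delay):
        # */5 * * * * /usr/bin/python3 /opt/fetch_rss.py
        resp = requests.get(FEED, timeout=10)
        resp.raise_for_status()
        with open(OUT, "wb") as f:
            f.write(resp.content)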

    Read the article

  • Free US sales-tax lookup (per zip etc.)?

    - by Shimmy
    I am creating a pricing program. I need to calculate amounts according to the current tax rates in various places in the US. I want an 'Update taxes' button in the administrative settings of the application, so that when the user clicks it, the application downloads the current tax rates from somewhere. So I actually want to have a function decimal GetTax(string zip). Does anyone know of a free downloadable XML file, an RSS-accessible source, or even a website that I can crawl to get this information?
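    Whatever the data source turns out to be, the lookup itself can stay trivial. A hedged Python sketch of the desired decimal GetTax(string zip) shape, assuming the 'Update taxes' button refreshes a local ZIP-to-rate table (all names and the CSV format are hypothetical):

        import csv
        from decimal import Decimal

        # Hypothetical local cache: a CSV of "zip,rate" rows refreshed by the
        # 'Update taxes' button from whatever source ends up being used.
        _rates = {}

        def load_rates(path: str) -> None:
            with open(path, newline="") as f:
                for zip_code, rate in csv.reader(f):
                    _rates[zip_code] = Decimal(rate)  # Decimal avoids float rounding

        def get_tax(zip_code: str) -> Decimal:
            # Mirrors the desired `decimal GetTax(string zip)` signature.
            return _rates[zip_code]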

    Read the article

  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use - SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version; after a one-minute configuration your daily backups are safe in the cloud. In summary, SQLBackupAndFTP:

    • Creates SQL Server database and file backups on schedule
    • Compresses and encrypts the backups
    • Sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3
    • Sends email notifications of a job's success or failure

    SQLBackupAndFTP comes in Free and Paid versions (starting from $29) - see the version comparison. The free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases, which will be sufficient for many small customers. What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 - Main form above):

    1. Connect to your SQL Server and select the databases to be backed up
    2. Click "Add backup destination" to configure where backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3)
    3. Enter your email to receive email confirmations
    4. Set the time to start daily full backups (or go to Settings if you need Differential or Transaction Log backups on a flexible schedule)
    5. Press the "Run Now" button to test

    You can get to this form by clicking the "Settings" button in the "Schedule" section. Select what types of backups you want and how often to run them, and you will see the scheduled backups in the "Estimated backup plan" list. A detailed tutorial is available on the developer's website. The SQLBackupAndFTP setup also gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too) - a basic tool for restoring Full backups. However basic, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on startup, selects the right database, and is smart enough to restart the server to drop open connections if necessary - very handy for developers who need to restore databases often. You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still far easier, and it integrates compression, encryption, FTP, cloud storage and email, which makes it superior to maintenance tasks in every respect. On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health, and it only works reliably on local SQL Server instances. In other words, it has to be installed on the SQL Server itself; for remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client but on the server. The tool is compatible with almost all known SQL Server versions: it works with SQL Server 2008 (all editions) and many previous versions. It is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup. I strongly recommend this tool to all DBAs. They should absolutely try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience using this tool; I am eager to receive your feedback regarding this article.
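    For a sense of what such a tool automates, here is a hedged Python sketch of the basic loop - back up on the server, compress, upload to S3 - assuming pyodbc and boto3 and placeholder names; note that BACKUP DATABASE writes the .bak to the server's own disk, which is exactly the limitation mentioned above:

        import zipfile
        import boto3   # assumption: AWS SDK for the upload step
        import pyodbc  # assumption: ODBC driver for SQL Server

        # BACKUP DATABASE cannot run inside a transaction, hence autocommit.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes;",
            autocommit=True,
        )
        # The .bak lands on the *server's* disk - the inherent limitation above.
        conn.execute("BACKUP DATABASE [MyDb] TO DISK = N'C:\\Backups\\MyDb.bak'")

        with zipfile.ZipFile("C:\\Backups\\MyDb.zip", "w", zipfile.ZIP_DEFLATED) as z:
            z.write("C:\\Backups\\MyDb.bak", arcname="MyDb.bak")

        boto3.client("s3").upload_file("C:\\Backups\\MyDb.zip", "my-backup-bucket", "MyDb.zip")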
    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, SQLServer, T SQL, Technology

    Read the article

  • Google+ Platform Office Hours for April 11, 2012: Recent Activity jQuery Plugin

    Here is the edited video from last week's Google+ platform office hours. Discuss this video on Google+: goo.gl

    This week we spent the first half of the show live-coding a jQuery plugin that fetches recent public activity from a Google+ profile or page for inclusion on your website. Get the source code: goo.gl

    1:15 - A demo of the implemented plugin
    2:04 - The design of the plugin
    2:57 - The coding begins! Use the Google+ badge config tool to discover your userId (goo.gl) or follow these instructions for pages: goo.gl

    Q&A:
    17:10 - Is there any kind of beta group that I can join for Google+? Sign up for the publisher preview group: goo.gl
    19:03 - When will the API be available?
    20:11 - When will there be more moderation tools for hangouts?
    21:36 - How do I get Hangouts On Air?
    22:18 - An update on last week's report of Google Analytics social actions not agreeing with the +1 button count
    25:53 - How do I join the hangout for these office hours?
    26:44 - Using the activities search API, is there any way to see only new activities?
    28:18 - An announcement about our future office hours schedule

    From: GoogleDevelopers | Views: 3387 | 47 ratings | Time: 29:29 | More in Science & Technology

    Read the article

  • Presentation Plugin for NetBeans IDE 7.2

    - by Geertjan
    I got some excellent help from Mark Stephens, who is from IDR Solutions, which produces JPedal. Using the LGPL version of JPedal, and code provided by Mark, it's now possible to right-click the node that appears in the Presentation Window... after which, using a file browser (to locate a file on disk) or a URL (a very simple check is done: the URL must start with "http" and end with "pdf"), you can now open PDF files as images (thanks to PDF-to-image conversion done by JPedal) in NetBeans IDE, typically (I imagine) for presentation purposes. Note that you should consider the plugin to be in "alpha" state. But, despite that, I've had good results. Try it and use the URL below as a control test (since it works fine for me), which produces the result shown above: http://edu.netbeans.org/contrib/slides/netbeans-platform/presentation-4-actions.pdf However, for some PDFs the plugin doesn't work, and I don't know why yet (I'm trying to figure it out with Mark); it results in this stack trace:

        java.lang.ArrayIndexOutOfBoundsException: 8
          at org.jpedal.objects.acroforms.formData.SwingData.completeField(Unknown Source)
          at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createField(Unknown Source)
          at org.jpedal.objects.acroforms.rendering.DefaultAcroRenderer.createDisplayComponentsForPage(Unknown Source)
          at org.jpedal.PDFtoImageConvertor.convert(Unknown Source)
          at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)
          at org.jpedal.PdfDecoder.getPageAsImage(Unknown Source)

    Here's the location of the plugin; install it into NetBeans IDE 7.2. Feedback is very welcome: http://plugins.netbeans.org/plugin/44525

    Read the article

  • Installing Chrome Java Plugin

    - by kyleskool
    I've been trying to install the Java plugin for Chrome for a couple of hours now, and I figured it was time to ask people with more experience. I can't seem to get it working. My current Java version is the 64-bit OpenJDK 1.6.0_24. I tried installing the IcedTea plugin, to no avail. I have Ubuntu 12.04 64-bit installed at the moment. When I test whether Java is enabled in Chrome, any website with a Java applet fails to load (when I disable the plugin, the pages load, but not the applet). I followed the instructions from http://technonstop.com/install-java-plugin-ubuntu-linux which said to create this script and run it:

        JAVA_HOME=/usr/lib/jvm/jdk1.7.0
        MOZILLA_HOME=~/.mozilla
        mkdir $MOZILLA_HOME/plugins
        ln -s $JAVA_HOME/jre/lib/i386/libnpjp2.so $MOZILLA_HOME/plugins

    Note: You may need to change the value of JAVA_HOME so that it correctly points to your installation of the JDK. 64-bit users will need to change the final line to:

        ln -s $JAVA_HOME/jre/lib/amd64/libnpjp2.so $MOZILLA_HOME/plugins

    but this did not work either. I just tested it in Firefox and it's working there; still nothing in Chrome.

    Read the article

  • Maven2 - problem with pluginManagement and parent-child relationship

    - by Newtopian
    From the Maven documentation:

    pluginManagement: is an element that is seen along side plugins. Plugin Management contains plugin elements in much the same way, except that rather than configuring plugin information for this particular project build, it is intended to configure project builds that inherit from this one. However, this only configures plugins that are actually referenced within the plugins element in the children. The children have every right to override pluginManagement definitions.

    Now: if I have this in my parent POM

        <build>
          <pluginManagement>
            <plugins>
              <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <version>2.0</version>
                <executions>
                  Some stuff for the children
                </executions>
              </plugin>
            </plugins>
          </pluginManagement>
        </build>

    and I run mvn help:effective-pom on the parent project, I get what I want: the plugins part directly under build (the one doing the work) remains empty. Now if I do the following

        <build>
          <pluginManagement>
            ... (as above) ...
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-compiler-plugin</artifactId>
              <version>2.0.2</version>
              <inherited>true</inherited>
              <configuration>
                <source>1.6</source>
                <target>1.6</target>
              </configuration>
            </plugin>
          </plugins>
        </build>

    and run mvn help:effective-pom, I again get just what I want: plugins contains just what is declared, and the pluginManagement section is ignored. BUT, changing to the following

        <build>
          <pluginManagement>
            ... (as above) ...
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-dependency-plugin</artifactId>
              <version>2.0</version>
              <inherited>false</inherited>
              <!-- this particular config is NOT for kids... for parent only -->
              <executions>
                some stuff for adults only
              </executions>
            </plugin>
          </plugins>
        </build>

    and running mvn help:effective-pom, the stuff from the pluginManagement section is added on top of what is declared already, as such:

        <build>
          <pluginManagement>
            ...
          </pluginManagement>
          <plugins>
            <plugin>
              <artifactId>maven-dependency-plugin</artifactId>
              <version>2.0</version>
              <inherited>false</inherited>
              <!-- this particular config is NOT for kids... for parent only -->
              <executions>
                Some stuff for the children
              </executions>
              <executions>
                some stuff for adults only
              </executions>
            </plugin>
          </plugins>
        </build>

    Is there a way to exclude the part for children from the parent POM's section? In effect, what I want is for pluginManagement to behave exactly as the documentation states: I want it to apply to children only, but not to the project in which it is declared. As a corollary, is there a way I can override the parts from pluginManagement by declaring the plugin in the normal build section of a project? Whatever I try, I get that the section is added to executions, but I cannot override one that already exists.

    EDIT: I never did find an acceptable solution for this, and as such the issue remains open. The closest solution was offered below and is currently the accepted answer to this question until something better comes up. Right now there are three ways to achieve the desired result (modulating plugin behaviour depending on where in the inheritance hierarchy the current POM is):

    1. Using profiles. This works, but you must beware that profiles are not inherited, which is somewhat counter-intuitive. They are (if activated) applied to the POM where declared, and this generated POM is then propagated down. As such, the only way to activate the profile for a child POM is specifically on the command line (at least, I did not find another way). Property, file and other means of activation fail to activate the profile because the trigger is not in the POM where the profile is declared.

    2. (This is what I ended up doing.) Declare the plugin as not inherited in the parent and re-declare (copy-paste) the tidbit in every child where it is wanted. Not ideal, but it is simple and it works.

    3. Split the aggregation nature and parent nature of the parent POM. Then, since the part that only applies to the parent is in a different project, it is now possible to use pluginManagement as first intended. However, this means that a new artificial project must be created that does not contribute to the end product but only serves the build system - a clear case of conceptual bleed. Also, this only applies to my specific case and is hard to generalize, so I abandoned efforts to make it work in favour of the not-pretty but more contained cut-and-paste patch described in 2.

    If anyone coming across this question has a better solution, either because of my lack of knowledge of Maven or because the tool evolved to allow this, please post the solution here for future reference. Thank you all for your help :-)

    Read the article

  • In WPF: Children.Remove or Children.Clear doesn't free objects

    - by Bart Roozendaal
    I create some UIElements from code-behind and was expecting garbage collection to clean them up. However, the objects are not freed at the time I expected: I was expecting them to be freed at RemoveAt(0), but they are only freed at the end of the program. How can I make the objects be freed when they are removed from the Children collection of the Canvas?

        <Window x:Class="WpfApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300"
                MouseDown="Window_MouseDown">
          <Grid>
            <Canvas x:Name="main" />
          </Grid>
        </Window>

    The code-behind is:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
            }

            private void Window_MouseDown(object sender, MouseButtonEventArgs e)
            {
                if (main.Children.Count == 0)
                    main.Children.Add(new MyControl() { Background = Brushes.Yellow, Width = 100, Height = 50 });
                else
                    main.Children.RemoveAt(0);
            }
        }

        public class MyControl : UserControl
        {
            ~MyControl()
            {
                Debug.WriteLine("Goodbye");
            }
        }

    Read the article

  • How do I monitor and maintain my Grails application in a live/production environment?

    - by fabien7474
    This is the first time I have ever launched a live website (built with the Grails web framework, on the Amazon EC2 platform via Cloud Foundry), and I quickly realized that I am not ready to monitor and maintain my application correctly in production mode (fortunately the website is accessible to a very limited number of users). The issues I have faced so far:

    1. I cannot change my views; I need to redeploy my application.
    2. I have no monitoring. I don't know who is connected, or when they sign in / sign out...
    3. Redeploying my application (uploading the WAR + deploying) takes at least 30 minutes.
    4. I don't know how to restart my Tomcat server without a redeploy through Cloud Foundry!

    So, my question is very simple: what tools (including Grails plugins) and methods can you recommend to take me out of my current blindness?

    Read the article

  • PHP - "Fat-Free Framework" Find Methods and Showing Results in Template

    - by user1672808
    I've just started trying the Fat-Free Framework. I'm building a site using a MySQL DB with 265 fields and 5000+ rows in the DB; I can load() a specific record easily, no problems. When using find(), afind(), and even select(), the template shows blank lines, or lines with "filler" text, with the correct number of rows for the query results, but no text/data from the DB itself. The same problem occurs whether using objects or simply arrays from the result (afind() and find()). I've copied/pasted the code verbatim from the examples and the documentation, with only the DB-specific items changed. Still no luck.

    The PHP code (a function from a class):

        static function home() {
            $featured = new Axon('boats');
            $F3::set('boatlist', $featured->afind('D_CustomerID=173'));
            F3::set('content', TEMPLATE_DIR . '/home.html');
            echo Template::serve(TEMPLATE_DIR . '/layout.html');
        }

    The template home.html:

        <div class="span8">
          <h3> Featured Boats </h3>
          <F3:repeat group="{{@boatlist}}" value="{{@boat}}">
            <div style="margin-left: 2em" class="thumbnails">
              <p>
                <a href="boat/{{@boat['D_BoatNum']}}">{{trim(@boat['D_Description'])}}</a>
                by {{@boat['D_CustomerID']}}
              </p>
              <p> {{@boat['D_Price']}} </p>
            </div>
          </F3:repeat>
        </div>

    The number of rows this produces coincides with the correct number of rows in the DB; however, the actual data from each field does not show. Any ideas?

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >