Search Results

Search found 22656 results on 907 pages for 'free amazon plugin'.

Page 15/907 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Amazon S3 not sending Content-Type header

    - by Luke____
    I have an application that downloads content from various sources. It relies on the "Content-Type" header being set on images. Most web servers do this correctly, but it appears the Amazon S3 server is not setting the Content-Type. I assume Amazon's servers are configured correctly, so what could be the problem? Were these images not uploaded correctly? Or should I not be relying on Content-Type being set? Example. Thanks
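
    S3 serves back whatever Content-Type was stored with the object at upload time, so the usual culprit is an uploader that omitted it. A minimal sketch of an upload that sets the header explicitly, using the AWS SDK for Java (the bucket and key names are made-up placeholders):

        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import com.amazonaws.services.s3.model.ObjectMetadata;
        import com.amazonaws.services.s3.model.PutObjectRequest;
        import java.io.File;

        public class UploadWithContentType {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                File image = new File("photo.jpg");

                // S3 stores the Content-Type verbatim as object metadata;
                // omit it and downloads may arrive as binary/octet-stream.
                ObjectMetadata meta = new ObjectMetadata();
                meta.setContentType("image/jpeg");
                meta.setContentLength(image.length());

                // "my-bucket" and the key are hypothetical placeholders.
                s3.putObject(new PutObjectRequest("my-bucket", "images/photo.jpg", image)
                        .withMetadata(meta));
            }
        }

    Objects that were already uploaded without a Content-Type can have their metadata rewritten with a copy-in-place to the same key.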

    Read the article

  • Amazon EC2, still can't ping or "http" it

    - by DarkFire21
    I am new to Amazon cloud technologies. I've set up an Amazon Linux instance, created my keys, and assigned an Elastic IP. I also opened all TCP, UDP, and ICMP ports (OK, it's very dangerous, but I am using it for test purposes). I've also installed the Apache server and enabled it. But I still can't ping or access my instance via its IP. Any ideas? EDIT: Please see a screenshot of the security group settings. All ports are open... Check this out
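
    If the security group really is wide open, the next suspect is a firewall on the instance itself (iptables). For reference, ingress rules can also be inspected and added programmatically; a minimal sketch with the AWS SDK for Java, where the group name and CIDR are assumptions:

        import com.amazonaws.services.ec2.AmazonEC2;
        import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
        import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
        import com.amazonaws.services.ec2.model.IpPermission;
        import com.amazonaws.services.ec2.model.IpRange;

        public class OpenHttpAndPing {
            public static void main(String[] args) {
                AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

                // HTTP on TCP 80 from anywhere.
                IpPermission http = new IpPermission()
                        .withIpProtocol("tcp").withFromPort(80).withToPort(80)
                        .withIpv4Ranges(new IpRange().withCidrIp("0.0.0.0/0"));

                // Ping requires its own ICMP rule; -1/-1 covers all ICMP types.
                IpPermission icmp = new IpPermission()
                        .withIpProtocol("icmp").withFromPort(-1).withToPort(-1)
                        .withIpv4Ranges(new IpRange().withCidrIp("0.0.0.0/0"));

                // "default" is a placeholder; use the group attached to the instance.
                ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest()
                        .withGroupName("default")
                        .withIpPermissions(http, icmp));
            }
        }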

    Read the article

  • Eclipse Java Code Formatter in NetBeans Plugin Manager

    - by Geertjan
    Great news for Eclipse refugees everywhere. Benno Markiewicz forked the Eclipse formatter plugin that I blogged about some time ago (here and here)... and he fixed many bugs, while also adding new features. It's a handy plugin when you're (a) switching from Eclipse to NetBeans and want to continue using your old formatting rules and (b) working in a polyglot IDE team, i.e., the formatting rules defined in Eclipse can now be imported into NetBeans IDE and everyone will happily be able to conform to the same set of formatting standards. And now you can get it directly from Tools | Plugins in NetBeans IDE 7.4. News from Benno on the plugin, received from him today:

    - The plugin is verified by the NetBeans community and available in the Plugin Manager in NetBeans IDE 7.4 (as shown above) and also at the NetBeans Plugin Portal, where you can read quite some info about the plugin: http://plugins.netbeans.org/plugin/50877/eclipse-code-formatter-for-java
    - The issue with the empty undo buffer was solved with the help of junichi11: https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues/18
    - The issue with the lost breakpoints remains unsolved and there was no further feedback; that is the main reason why the save action isn't activated by default.
    - See also the open known issues at https://github.com/markiewb/eclipsecodeformatter_for_netbeans/issues?state=open

    Features are as follows:

    - Global configuration and project-specific configuration.
    - On-save action, which is disabled by default.
    - Show the used formatter as a notification, which is enabled by default.

    Finally, Benno testifies to the usefulness, stability, and reliability of the plugin: "I use the Eclipse formatter provided by this plugin every day at work. Before I commit, I format the sources. It works and that's it. I am pleased with it." The Eclipse formatter is defined globally in Tools | Options, and per-project configuration is available too, i.e., use the Project Properties dialog of any project to override the global settings. Interested to hear from anyone who tries the plugin and has any feedback of any kind!

    Read the article

  • Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:aspectj-maven-plugin:1.0

    - by alexm
    I switched from q4e on Helios to the m2e plugin on Indigo, and my Maven 2 project no longer works. I had a ROO-generated Spring MVC project. This is what I get:

        Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:aspectj-maven-plugin:1.0:test-compile (execution: default, phase: process-test-sources)
        Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:aspectj-maven-plugin:1.0:compile (execution: default, phase: process-sources)

    Any insight is greatly appreciated. Thank you.
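
    For reference, m2e raises this error for any plugin execution it has no lifecycle mapping for. One widely used workaround (not the only one) is to map the aspectj goals explicitly under <build><pluginManagement> in the pom, roughly along these lines:

        <pluginManagement>
          <plugins>
            <!-- Only configures m2e inside Eclipse; ignored by command-line Maven. -->
            <plugin>
              <groupId>org.eclipse.m2e</groupId>
              <artifactId>lifecycle-mapping</artifactId>
              <version>1.0.0</version>
              <configuration>
                <lifecycleMappingMetadata>
                  <pluginExecutions>
                    <pluginExecution>
                      <pluginExecutionFilter>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>aspectj-maven-plugin</artifactId>
                        <versionRange>[1.0,)</versionRange>
                        <goals>
                          <goal>compile</goal>
                          <goal>test-compile</goal>
                        </goals>
                      </pluginExecutionFilter>
                      <action>
                        <execute/>  <!-- or <ignore/> to silence the marker -->
                      </action>
                    </pluginExecution>
                  </pluginExecutions>
                </lifecycleMappingMetadata>
              </configuration>
            </plugin>
          </plugins>
        </pluginManagement>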

    Read the article

  • .NET Reflector is no longer free - how does everyone feel about this? [closed]

    - by Schnapple
    The upcoming version of .NET Reflector, coming in March, will no longer have a free version. .NET Reflector started out as a free utility written by programmer Lutz Roeder and quickly became fairly indispensable to a lot of programmers. After about four years he sold it to RedGate Software, who have maintained a free version ever since and introduced a "Pro" version about a year ago, which adds capabilities and starts at $99/seat. The new version will be $35 for the non-Pro version, there will no longer be a free version, and the existing free versions will keep working only until the end of May. On the one hand, it's annoying that the existing free versions will die, and obviously I'd prefer there be a free version going forward. On the other hand, I respect where RedGate is coming from, and the cost of a license isn't prohibitively expensive. Plus, it may encourage more frequent updates. EDIT: I originally said it was $35 for everyone, but according to this FAQ there's still going to be a Pro version.

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters: -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m. We use Nagios to monitor things like pending updates, free disk space, and memory. Everything else is behaving as expected, but our free memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly stable on those nodes, but they would not be receiving as much traffic. Just to add: when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                         total       used       free     shared    buffers     cached
        Mem:              1657       1380        277          0        158        773
        -/+ buffers/cache:            447       1209
        Swap:              895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
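
    A reading of that output, for what it's worth: the -/+ buffers/cache line is the one to watch. Of the 1380 MB shown as used, 158 MB is buffers and 773 MB is page cache, leaving 1380 - 158 - 773 = 449 MB actually held by applications (free reports 447 MB; the difference is per-field rounding to whole megabytes). Linux deliberately fills otherwise-idle RAM with disk cache and reclaims it on demand, so steadily shrinking "free" memory is expected behavior rather than proof of a leak.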

    Read the article

  • WebSphere Plugin Keystore Unreadable by IHS - GSK_ERROR_BAD_KEYFILE_PASSWORD

    - by Seer
    Running WAS 6.1.xx in a network deployment. The IBM-provided plugin keystore's plugin-key.kdb password expires on April 26th, along with the personal cert inside it. So no problem, right? Create a new cert, set a new password on the kdb, restash the password, and off we go! Well, no! On restart of IBM HTTP Server we see:

        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: logSSLError: str_security (gsk error 408): GSK_ERROR_BAD_KEYFILE_PASSWORD
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: initializeSecurity: Failed to initialize GSK environment
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_transport: transportInitializeSecurity: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: HTTPS Transport is skipped
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: logSSLError: str_security (gsk error 408): GSK_ERROR_BAD_KEYFILE_PASSWORD
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: lib_security: initializeSecurity: Failed to initialize GSK environment
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_transport: transportInitializeSecurity: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: Failed to initialize security
        [Tue Apr 24 14:11:22 2012] 00b00004 00000001 - ERROR: ws_server: serverAddTransport: HTTPS Transport is skipped

    Here is the thing: I can open the keystore using the new password with ikeyman and keytool. Using a "slightly dodgy" script, I can reverse the stash file and see that the new password is indeed set. Then, even if I restore the old keystore files (plugin-key.kdb, plugin-key.crl, plugin-key.sth, plugin-key.rdb), they no longer work either! So it must be permissions, right? Well, the permissions are the same as before; if I switch to the apache user I can browse right through to the files and read them. I have even chown'ed them to apache:apache and/or chmod 777, and it is still the same error! Does anyone have a clue what is going on here? It's pretty urgent, as our site will be without HTTPS in a couple of days if this isn't resolved - that's bad for a retail web site :)

    Read the article

  • IPSec Tunnel to Amazon EC2 - Netkey, NAT, and routing problem

    - by Ernest Mueller
    Hey all, I'm working on getting an IPSec VPN working between Amazon EC2 and my on-premise network. The goal is to be able to safely administer stuff, upload/download data, etc. over that tunnel. I have gotten the tunnel up in Openswan between a Fedora 12 instance with an Elastic IP and a Cisco router that's also NATted. I think the IPSec part is OK, but I'm having trouble figuring out how to route traffic that way; there's no "ipsec0" virtual interface, because on Amazon you have to use NETKEY and not KLIPS for the VPN. I hear iptables may be required, and I'm an iptables noob. On the left (Amazon), I have a 10. network. Box 1 is privately 10.254.110.A, publicly 184.73.168.B. The NETKEY tunnel is up. Box 2 is publicly 130.164.26.C, privately 130.164.0.D. And my .conf is:

        conn ni
            type=                tunnel
            authby=              secret
            left=                10.254.110.A
            leftid=              184.73.168.B
            leftnexthop=         %defaultroute
            leftsubnet=          10.254.0.0/32
            right=               130.164.26.C
            rightid=             130.164.0.D
            rightnexthop=        %defaultroute
            rightsubnet=         130.164.0.0/18
            keyexchange=         ike
            pfs=                 no
            auto=                start
            keyingtries=         3
            disablearrivalcheck= no
            ikelifetime=         240m
            auth=                esp
            compress=            no
            keylife=             60m
            forceencaps=         yes
            esp=                 3des-md5

    I added a route to box 1 (130.164.0.0/18 via 10.254.110.A dev eth0), but that doesn't do it, for predictable reasons; when I traceroute, the traffic is still going "around" and not through the VPN. Routing table:

        10.254.110.0/23 dev eth0  proto kernel  scope link  src 10.254.110.A
        130.164.0.0/18 via 10.254.110.178 dev eth0  src 10.254.110.A
        169.254.0.0/16 dev eth0  scope link  metric 1002

    Anyone know how to do the routing with a NETKEY IPSec tunnel where both sides are NATted? Thanks...

    Read the article

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of: HTML, CSS, JavaScript, PHP, MySQL, and potentially Ajax or some other web component. For the project we can use a local server using WampServer and basically build the site entirely on our laptops. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computations if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, some third-party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have a limited understanding of the IT issues. I have zero Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • WordPress plugin installation error

    - by Steve
    I'm trying to upload secure-wordpress.1.0.6, and I receive the following error:

        Warning: touch() [function.touch]: open_basedir restriction in effect.
        File(/abs_path/wordpress/tmp/secure-wordpress.tmp) is not within the
        allowed path(s): (/abs_path/:/abs_path/:/usr/local/lib/php:/tmp/php_upload)
        in /abs_path/public/www/wordpress/wp-admin/includes/file.php on line 199
        Download failed. Could not create Temporary file.

    The /wp-content folder and all its subfolders have 777 permissions. I've added the following two lines to wp-config.php:

        putenv('TMPDIR='.ini_get('upload_tmp_dir') );
        define('WP_TEMP_DIR', ABSPATH .'wp-content/uploads/');

    What else should I try? I am using WordPress 3.0.4 in a PHP 4.4.9 environment.

    Read the article

  • Using AWS or Azure, what to do about emails?

    - by Paul
    I'm coming from a background of paying a hosting company X amount per month for a server. This server comes with IIS, WebsitePanel, and SmarterMail all bundled together. When I create a new domain using WebsitePanel, it automatically creates my email account. All I then need to do is configure my DNS to point to the server. I've decided that it is more cost-efficient to move to AWS / Azure. Has anyone come from a similar background and moved to a cloud system? I'd be interested to know what you did regarding email. So far, these are the suggestions I've seen:

    - Use Google Apps for each domain
    - Use something like Elastic Email to send out emails
    - Launch a new instance and host an email server on that

    The first option seems like quite a lot of manual configuration, and the second works well for outgoing email, but what about receiving? Option 3 would make it less cost-effective. What is your experience?

    Read the article

  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    I currently have a VPS server and I pay around $75 per month, for which I get:

    - 40 GB HD
    - 2 GB RAM
    - 100 GB bandwidth
    - 6-core CPU (but I don't use it much)

    I have only one live website running, and traffic is at most 100 user visits per day. I mostly do my testing stuff and run some of my internal sites for playing with coding, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too much, because then I can learn some more stuff. I am thinking of getting the 3-year Heavy Utilization Reserved Instance, because my server will be running day and night. I tried their online calculator with a Medium Instance, Heavy Reserved for 3 years: for EC2 it comes to $31 per month (effective price), and even with EBS and S3 I think it's at most another $40 for all the other stuff. So I will be at no loss compared to what I am getting at present. Am I correct, or have I missed something? Now, on my current VPS I have Apache for PHP sites and mod_wsgi for Python sites. I am not sure if I will be able to do all that stuff on Amazon EC2. Can I host both Python and PHP sites on one Amazon EC2 instance using named virtual hosts and Nginx?
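
    On the last question: yes, a single Apache instance can serve both kinds of site side by side using name-based virtual hosts (Nginx can do the equivalent). A minimal sketch, assuming Apache 2.2 with mod_php and mod_wsgi already installed; the domain names and paths are made up:

        # Apache 2.2-style name-based virtual hosting: both names share one IP.
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName php-site.example.com
            # mod_php serves the PHP site from its document root
            DocumentRoot /var/www/php-site
        </VirtualHost>

        <VirtualHost *:80>
            ServerName python-site.example.com
            # mod_wsgi hands requests for this host to the Python application
            WSGIScriptAlias / /var/www/python-site/app.wsgi
        </VirtualHost>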

    Read the article

  • How can I deploy my .NET app to Amazon EC2?

    - by Khash
    I have a .NET Windows service and a .NET Web Application that I would like to deploy to my Amazon EC2 Windows 2008 instances. At this point, all I need to do is to copy the zipped files across to the EC2 box and remote desktop to the EC2 instance and finish the deployment. In order to do this, I have tried LogMeIn Hamachi2 to create a P2P VPN and use RoboCopy to copy the files, however it seems Hamachi doesn't work on Windows EC2. What is your solution for deploying your .NET apps to Windows EC2 instances? I want to avoid running an FTP server on the box just to get my files up on the server and don't have a VPN server (like OpenVPN) running to run a cloud based VPN solution. Perhaps I can find a simple way of using Amazon S3 as a strategy? Any ideas? Suggestions?
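
    The S3 idea can be quite simple in practice: upload the zipped build to a bucket, generate a time-limited pre-signed URL, and fetch it from the instance with any HTTP client, with no FTP server or VPN involved. A sketch with the AWS SDK for Java (bucket and key are hypothetical; the AWS SDK for .NET exposes the same capability via GetPreSignedURL):

        import com.amazonaws.HttpMethod;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;
        import java.net.URL;
        import java.util.Date;

        public class PresignDeployArtifact {
            public static void main(String[] args) {
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

                // Link is valid for one hour; no AWS credentials are needed on
                // the EC2 box, just an HTTPS download of the deployment zip.
                Date expiry = new Date(System.currentTimeMillis() + 3600_000L);
                URL url = s3.generatePresignedUrl("deploy-bucket", "app/release.zip",
                        expiry, HttpMethod.GET);
                System.out.println(url);
            }
        }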

    Read the article

  • ItemSearch totally different from Amazon.com search - What am I doing wrong?

    - by RadiantHex
    I'm using ItemSearch in order to get a list of books ordered by 'salesrank', aka 'bestselling'. The problem is that the books that come back for any browse node are totally different from the Amazon list. Test case: Author: "J R R Tolkien", SearchIndex: "Books", Sort: "salesrank".

    Using the API:

    - J.R.R. Tolkien Boxed Set (The Hobbit and The Lord of the Rings)
    - The Hobbit: 70th Anniversary Edition
    - The Lord of the Rings: 50th Anniversary, One Vol. Edition
    - The Legend of Sigurd and Gudrun
    - The Silmarillion
    - The Lord of the Rings
    - The Children of Hurin
    - The Fellowship of the Ring: Being the First Part of The Lord of the Rings
    - The Return of the King: Being the Third Part of The Lord of the Rings

    Using Amazon.com advanced search:

    - The Lord of the Rings (Trilogy)
    - The Hobbit
    - J.R.R. Tolkien Boxed Set (The Hobbit and The Lord of the Rings)
    - The Legend of Sigurd and Gudrun
    - The Lord of the Rings: 50th Anniversary, One Vol. Edition
    - The Silmarillion
    - The Fellowship of the Ring
    - The Children of Hurin
    - Lord of the Rings, The Return of the King, The (Vol 3)

    Help would be very much appreciated.
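
    For reference, the underlying Product Advertising API request has roughly this shape (AWSAccessKeyId, AssociateTag, Timestamp, and Signature omitted; all values shown are illustrative):

        http://webservices.amazon.com/onca/xml
            ?Service=AWSECommerceService
            &Operation=ItemSearch
            &SearchIndex=Books
            &Author=J.%20R.%20R.%20Tolkien
            &Sort=salesrank
            &ResponseGroup=Small

    As for the mismatch, one possible explanation (Amazon does not document the retail site's ordering) is that Amazon.com search blends relevance and popularity signals, while ItemSearch with Sort=salesrank orders strictly by each edition's sales rank within the search index, so the two lists need not agree.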

    Read the article

  • [WordPress 3.1.3] Screen Options tab is disabled when a plugin is activated

    - by RNorbe
    I'm pretty new to WordPress. I was assigned to create a custom plugin for one of our projects here. The plugin worked as expected, and there is no problem activating/deactivating it. But when I was exploring the admin panel, I noticed that the Screen Options tab was disabled. I read on a blog somewhere that you can deactivate plugins one by one to check which plugin caused this. I did just that and found out that the custom plugin I created was the cause. My question is: is there a way to check what caused this? Some log file I can look into? There is no error message or warning when I activate the plugin, and it gives the required output. This is my first plugin; any advice would be helpful. Btw, this plugin displays a comment (most recent shown first) in a widget, with prev/next navigation to go through the rest of the comments. Thanks, RNorbe
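
    On the "some log file" question: WordPress can log PHP warnings and notices raised by plugins to wp-content/debug.log once debugging is enabled in wp-config.php. A minimal sketch using the standard WordPress debug constants:

        define('WP_DEBUG', true);          // surface PHP errors and notices
        define('WP_DEBUG_LOG', true);      // append them to wp-content/debug.log
        define('WP_DEBUG_DISPLAY', false); // keep them off rendered pages

    Since the Screen Options tab is driven by admin-side JavaScript, the browser's error console is also worth checking; a plugin that enqueues a broken script can disable that toggle without producing any PHP error at all.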

    Read the article

  • Amazon Upgrades FreeTime; More Content for the Kid-Friendly Walled Garden

    - by Jason Fitzpatrick
    Earlier this year Amazon introduced FreeTime, a walled-garden area intended to provide a kids-only app gallery on the Kindle Fire. It was up to parents to populate the content, but now, with the recent update, Amazon brings together unlimited books, movies, games, and apps. Intended for children ages 3-8, the upgraded service eschews the you-pick-it-all approach and goes with a hand-curated collection of games, educational apps, books, and more. In addition to the pile of hand-curated content, FreeTime also has built-in time limits and individual profiles for different children. Every Kindle Fire, Kindle Fire HD, and Kindle Fire HD 8.9″ user can try out the service for thirty days without charge. After the thirty-day trial, the subscription price is $4.99 per month ($2.99 for Prime members). Hit up the link below to check out the full description of the service. Amazon FreeTime [Amazon]

    Read the article

  • Shared to Dedicated or Amazon CloudFront to improve performances and keep secured?

    - by user978548
    I have a WordPress site which currently takes about 1.8 to 2.5 seconds for the home page to completely load in my country. The page weight is about 700 KB (static content included). In order to improve performance, I'm considering two solutions: switching to a dedicated host, or using Amazon S3 / CloudFront to serve static content. My current shared hosting has servers in a neighboring country, but not exactly in mine; both Amazon and the dedicated host do as well, so that's already an advantage. So considering all that, I still have three questions remaining:

    - Currently having low traffic (100 unique visitors/day, but growing), will there be a huge difference between my shared hosting and a dedicated server?
    - Knowing that I already use a cookie-less domain to deliver static content (but via a redirection to the same server), would using Amazon S3 make a real difference?
    - Talking about the cons of dedicated vs. Amazon S3: if I choose for the dedicated server something like Ubuntu Server, do daily package updates, and keep only port 80 open, would that be sufficient in terms of security (in comparison with my current shared hosting, which manages everything for me)?

    Read the article

  • Does AWS resolve same-datacenter hostnames to 10.* addresses for different customer accounts?

    - by Scott Ritchie
    If I bring up two Amazon EC2 instances and run nslookup on one for the other's hostname, Amazon will return a 10.* address. This is routable within Amazon and works just fine. But does this work between different accounts? If I use one of my nodes to nslookup a hostname belonging to another customer (but still in the same datacenter), will it resolve to a 10.* address, or will it give the standard public IP?
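
    This is easy to test empirically: resolve the other instance's public DNS name once from inside EC2 and once from outside, and compare. A minimal sketch (the hostname below is a made-up example of the ec2-*.compute-1.amazonaws.com form):

        import java.net.InetAddress;

        public class ResolveEc2Hostname {
            public static void main(String[] args) throws Exception {
                // EC2's resolvers use split-horizon DNS: run from inside the
                // same region, this prints the instance's private 10.* address;
                // the same lookup from outside returns its public IP.
                InetAddress addr = InetAddress.getByName(
                        "ec2-203-0-113-25.compute-1.amazonaws.com");
                System.out.println(addr.getHostAddress());
            }
        }

    The split-horizon behavior applies to any instance's public DNS name in the region, regardless of which account owns it, so a lookup from your node should return the 10.* address either way.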

    Read the article

  • What happens when I delete the first of several AWS EBS snapshots?

    - by Jepper
    On http://aws.amazon.com/ebs/pricing/ it says: "EBS Snapshots [...] For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. For each incremental snapshot, only the changed part of your Amazon EBS volume is saved." I intend to snapshot some of my instances daily and keep snapshots for 7 days after which the snapshots are destroyed. What happens when, eventually, I destroy the first snapshot? Will the subsequent snapshots be worthless given the first is no longer available?
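
    For what it's worth, the EBS documentation also states that deleting a snapshot removes only the data not needed by any other snapshot, so the remaining dailies stay fully restorable even after the oldest one goes. A minimal sketch of the cleanup call with the AWS SDK for Java (the snapshot ID is a placeholder):

        import com.amazonaws.services.ec2.AmazonEC2;
        import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
        import com.amazonaws.services.ec2.model.DeleteSnapshotRequest;

        public class DeleteOldestSnapshot {
            public static void main(String[] args) {
                AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

                // Safe even for the first snapshot in the chain: EBS retains any
                // blocks that newer snapshots still reference, so each remaining
                // snapshot can still restore a complete volume.
                ec2.deleteSnapshot(new DeleteSnapshotRequest()
                        .withSnapshotId("snap-00000000"));
            }
        }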

    Read the article

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the on-line AWS Management Console, though I whitened out most of the details. Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service that you see above, with the mouse, into the class. The IDE then automatically created all the other files you see below, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

        public class AmazonEC2Demo {

            public static void main(String[] args) {
                String instanceId1 = "i-something";
                RestResponse result;
                try {
                    result = AmazonEC2Service.describeInstances(instanceId1);
                    System.out.println(result.getDataAsString());
                } catch (IOException ex) {
                    Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    From the above, you'll receive a chunk of XML with data about the running instance: its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.

    Read the article

  • Parent Objects

    - by Ali Bahrami
    Support for Parent Objects was added in Solaris 11 Update 1. The following material is adapted from the PSARC arc case, and the Solaris Linker and Libraries Manual.

    A "plugin" is a shared object, usually loaded via dlopen(), that is used by a program in order to allow the end user to add functionality to the program. Examples of plugins include those used by web browsers (flash, acrobat, etc), as well as mdb and elfedit modules. The object that loads the plugin at runtime is called the "parent object". Unlike most object dependencies, the parent is not identified by name, but by its status as the object doing the load.

    Historically, building a good plugin has been more complicated than it should be:

    - A parent and its plugin usually share a 2-way dependency: the plugin provides one or more routines for the parent to call, and the parent supplies support routines for use by the plugin for things like memory allocation and error reporting.
    - It is a best practice to build all objects, including plugins, with the -z defs option, in order to ensure that the object specifies all of its dependencies, and is self contained. However:
      - The parent is usually an executable, which cannot be linked to via the usual library mechanisms provided by the link editor.
      - Even if the parent is a shared object, which could be a normal library dependency to the plugin, it may be desirable to build plugins that can be used by more than one parent, in which case embedding a dependency NEEDED entry for one of the parents is undesirable.

    The usual way to build a high quality plugin with -z defs uses a special mapfile provided by the parent. This mapfile defines the parent routines, specifying the PARENT attribute (see example below). This works, but is inconvenient and error prone. The symbol table in the parent already describes what it makes available to plugins; ideally the plugin would obtain that information directly rather than from a separate mapfile.

    The new -z parent option to ld allows a plugin to link to the parent and access the parent symbol table. This differs from a typical dependency:

    - No NEEDED record is created. The relationship is recorded as a logical connection to the parent, rather than as an explicit object name.
    - However, it operates in the same manner as any other dependency in terms of making symbols available to the plugin.

    When the -z parent option is used, the link-editor records the basename of the parent object in the dynamic section, using the new tag DT_SUNW_PARENT. This is an informational tag, which is not used by the runtime linker to locate the parent, but which is available for diagnostic purposes.

    The ld(1) manpage documentation for the -z parent option is:

        -z parent=object
            Specifies a "parent object", which can be an executable or shared
            object, against which to link the output object. This option is
            typically used when creating "plugin" shared objects intended to
            be loaded by an executable at runtime via the dlopen() function.
            The symbol table from the parent object is used to satisfy
            references from the plugin object. The use of the -z parent option
            makes symbols from the object calling dlopen() available to the
            plugin.

    Example

    For this example, we use a main program and a plugin. The parent provides a function named parent_callback() for the plugin to call. The plugin provides a function named plugin_func() to the parent:

        % cat main.c
        #include <stdio.h>
        #include <dlfcn.h>
        #include <link.h>

        void
        parent_callback(void)
        {
                printf("plugin_func() has called parent_callback()\n");
        }

        int
        main(int argc, char **argv)
        {
                typedef void plugin_func_t(void);
                void *hdl;
                plugin_func_t *plugin_func;

                if (argc != 2) {
                        fprintf(stderr, "usage: main plugin\n");
                        return (1);
                }

                if ((hdl = dlopen(argv[1], RTLD_LAZY)) == NULL) {
                        fprintf(stderr, "unable to load plugin: %s\n", dlerror());
                        return (1);
                }

                plugin_func = (plugin_func_t *) dlsym(hdl, "plugin_func");
                if (plugin_func == NULL) {
                        fprintf(stderr, "unable to find plugin_func: %s\n", dlerror());
                        return (1);
                }

                (*plugin_func)();
                return (0);
        }

        % cat plugin.c
        #include <stdio.h>

        extern void parent_callback(void);

        void
        plugin_func(void)
        {
                printf("parent has called plugin_func() from plugin.so\n");
                parent_callback();
        }

    Building this in the traditional manner, without -z defs:

        % cc -o main main.c
        % cc -G -o plugin.so plugin.c
        % ./main ./plugin.so
        parent has called plugin_func() from plugin.so
        plugin_func() has called parent_callback()

    As noted above, when building any shared object, the -z defs option is recommended, in order to ensure that the object is self contained and specifies all of its dependencies. However, the use of -z defs prevents the plugin object from linking, due to the unsatisfied symbol from the parent object:

        % cc -zdefs -G -o plugin.so plugin.c
        Undefined                       first referenced
         symbol                             in file
        parent_callback                     plugin.o
        ld: fatal: symbol referencing errors. No output written to plugin.so

    A mapfile can be used to specify to ld that the parent_callback symbol is supplied by the parent object:

        % cat plugin.mapfile
        $mapfile_version 2
        SYMBOL_SCOPE {
            global:
                parent_callback     { FLAGS = PARENT };
        };
        % cc -zdefs -Mplugin.mapfile -G -o plugin.so plugin.c

    However, the -z parent option to ld is the most direct solution to this problem, allowing the plugin to actually link against the parent object and obtain the available symbols from it. An added benefit of using -z parent instead of a mapfile is that the name of the parent object is recorded in the dynamic section of the plugin, and can be displayed by the file utility:

        % cc -zdefs -zparent=main -G -o plugin.so plugin.c
        % elfdump -d plugin.so | grep PARENT
               [0]  SUNW_PARENT       0xcc              main
        % file plugin.so
        plugin.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent main, dynamically linked, not stripped
        % ./main ./plugin.so
        parent has called plugin_func() from plugin.so
        plugin_func() has called parent_callback()

    We can also observe this in elfedit plugins on Solaris systems running Solaris 11 Update 1 or newer:

        % file /usr/lib/elfedit/dyn.so
        /usr/lib/elfedit/dyn.so: ELF 32-bit LSB dynamic lib 80386 Version 1, parent elfedit, dynamically linked, not stripped, no debugging information available

    Related Other Work

    The GNU ld has an option named --just-symbols that can be used in a similar manner:

        --just-symbols=filename
            Read symbol names and their addresses from filename, but do not
            relocate it or include it in the output. This allows your output
            file to refer symbolically to absolute locations of memory defined
            in other programs. You may use this option more than once.

    -z parent is a higher level operation aimed specifically at simplifying the construction of high quality plugins. Although it employs the same operation, it differs from --just-symbols in 2 significant ways:

    - There can only be one parent.
    - The parent is recorded in the created object, and can be displayed by 'file', or other similar tools.

    Read the article

  • Geek Deals: Discounted Monitors, Cheap Peripherals, and Free Apps

    - by Jason Fitzpatrick
    Looking to save some cash while stocking up on computers, peripherals, apps, and other goodies? Hit up our deal list for discounts on all manner of geeky gear. We've combed the net and grabbed some fresh-off-the-press deals for you to take advantage of. Unlike traditional brick-and-mortar sales, internet deals are fast and furious, so don't be surprised if by the time you get to a particularly hot deal the stock is gone or the uses-per-coupon rate has been exceeded.

    Read the article

  • How to measure the conversion rate of your Amazon affiliate program?

    - by user359650
    I plan on selling products through the Amazon affiliate program. What I know I can track is:

    - what products people view on my website (default Google Analytics pageview behaviour)
    - what affiliate links people click on my website (with GA _trackEvent)

    What I am missing is: what products people end up buying after clicking on the affiliate links. Does the Amazon affiliate program offer any mechanism for linking a purchase with some data from your website? I noticed that I was able to add custom parameters and values to my affiliate links and the link checker was still happy with them; if Amazon reported which link initiated an order, I would be able to cross-reference the orders using the custom parameters...

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >