Search Results

Search found 27954 results on 1119 pages for 'static library'.

Page 32/1119 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Static block and instance block execution order in Java

    - by Rollerball
    Having read the question "In what order are the different parts of a class initialized when a class is loaded in the JVM?" and the related JLS sections, I would like to understand in more detail why, given a superclass Animal and a subclass Dog such as the following:

        class Animal {
            static { System.out.println("This is Animal's static block speaking"); }
            { System.out.println("This is Animal's instance block speaking"); }
        }

        class Dog extends Animal {
            static { System.out.println("This is Dog's static block speaking"); }
            { System.out.println("This is Dog's instance block speaking"); }

            public static void main(String[] args) {
                Dog dog = new Dog();
            }
        }

    Before a class can be instantiated, its direct superclass needs to be initialized (so all of its static variables and static blocks are executed). So the question is: after the static variables and static blocks of the superclass are initialized, why does control go down to the subclass for its static initialization rather than first finishing off the superclass's instance members as well? The control flow goes: superclass (Animal) static variables and static blocks; subclass (Dog) static variables and static blocks; superclass (Animal) instance variables and instance blocks; subclass (Dog) instance variables and instance blocks. Why is it done this way rather than: superclass static members, superclass instance members, subclass static members, subclass instance members?
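    For reference, a quick sanity check of the order described above (assuming the two classes are saved together and run with java Dog): the program prints

        This is Animal's static block speaking
        This is Dog's static block speaking
        This is Animal's instance block speaking
        This is Dog's instance block speaking

    The static blocks run once, as part of class initialization (initializing Dog forces Animal to be initialized first, before main can even run), while the instance blocks run later, as part of each constructor invocation; because the two phases are triggered by different events, they cannot be interleaved per class in the way the alternative ordering suggests.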

    Read the article

  • Adventures on Enterprise Library 5.0: Who moved my cheese (namespace)

    - by Junior Mayhé
    Jesus, Krishna, Buddha! I've migrated to EntLib 5.0, but classes like ISymmetricCryptoProvider are not recognized anymore. Oddly enough, the Data, Logging and other blocks compile fine. Here's the problematic class:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;   // --> it's not working anymore
        using Microsoft.Practices.EnterpriseLibrary.Security.Cryptography;  // --> it's not working anymore

        namespace MyClassLibrary.Security.EnterpriseLibrary
        {
            public sealed class Crypto
            {
                public static ISymmetricCryptoProvider MyProvider
                {
                    get
                    {
                        // IConfigurationSource is not recognized either, nor is SystemConfigurationSource
                        IConfigurationSource cs = new SystemConfigurationSource();
                        SymmetricCryptoProviderFactory scpf = new SymmetricCryptoProviderFactory(cs);
                        ISymmetricCryptoProvider p = scpf.CreateDefault();
                        return p;
                    }
                }
            }
        }

    The project references look fine too. I really don't know why this particular project is causing so much trouble in VS2010! Older references were deleted, the project was cleaned and rebuilt, but I can't make it compile :-( The references are: Microsoft.Practices.EnterpriseLibrary.Common, Microsoft.Practices.EnterpriseLibrary.Logging, Microsoft.Practices.EnterpriseLibrary.Logging.Database, Microsoft.Practices.EnterpriseLibrary.Security, Microsoft.Practices.EnterpriseLibrary.Security.Cryptography. Why can some namespaces be found while others can't?

    Read the article

  • What is a good use case for static import of methods?

    - by Miserable Variable
    I just got a review comment that my static import of a method was not a good idea. The static import was of a method from a DA class, which has mostly static methods. So in the middle of the business logic I had a DA call that appeared to belong to the current class:

        import static some.package.DA.*;

        class BusinessObject {
            void someMethod() {
                ....
                save(this);
            }
        }

    The reviewer was not keen that I change the code, and I didn't, but I do kind of agree with him. One reason given against static-importing was that it is confusing where the method is defined; it isn't in the current class or in any superclass, so it took some time to identify its definition (the web-based review system does not have clickable links like an IDE :-). I don't really think this matters; static imports are still quite new and soon we will all get used to locating them. But the other reason, the one I agree with, is that an unqualified method call seems to belong to the current object and should not jump contexts. If it really did belong, it would make sense to extend that superclass. So, when does it make sense to static-import methods? When have you done it? Did/do you like the way the unqualified calls look? EDIT: The popular opinion seems to be that you should static-import methods only if nobody is going to confuse them with methods of the current class, for example methods from java.lang.Math and java.awt.Color. But if abs and getAlpha are not ambiguous, I don't see why readEmployee is. As with a lot of programming choices, I think this too is a matter of personal preference. Thanks for your responses, everyone; I am closing the question.
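    A small self-contained sketch of the case the EDIT treats as uncontroversial - statically importing java.lang.Math methods that no reader will mistake for members of the current class (the class name here is just illustrative):

        import static java.lang.Math.abs;
        import static java.lang.Math.max;

        class StaticImportDemo {
            public static void main(String[] args) {
                // Unqualified calls read fine here because no reader will mistake
                // abs or max for methods defined on StaticImportDemo itself.
                System.out.println(abs(-4));          // prints 4
                System.out.println(max(abs(-4), 3));  // prints 4
            }
        }

    The contrast with save(this) above is that a statically imported persistence call can read as if the current class defined it, which is exactly the context-jumping the reviewer objected to.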

    Read the article

  • Snow Leopard & Ruby on Rails - SQLite3 issue

    - by spin-docta
    I just upgraded to snow leopard. Before, I had everything running fine, but now when I start the server from the terminal I get: => Booting WEBrick => Rails 2.3.3 application starting on http://0.0.0.0:3000 => Call with -d to detach => Ctrl-C to shutdown server [2009-08-28 23:18:19] INFO WEBrick 1.3.1 [2009-08-28 23:18:19] INFO ruby 1.8.7 (2008-08-11) [universal-darwin10.0] [2009-08-28 23:18:19] INFO WEBrick::HTTPServer#start: pid=845 port=3000 Then when I got to generated page, it seems like it isn't working with sqlite3. How do I fix? Here's what the server prints out when I go to a scripted view page: /!\ FAILSAFE /!\ Fri Aug 28 23:18:34 -0400 2009 Status: 500 Internal Server Error uninitialized constant SQLite3::Driver::Native::Driver::API /Library/Ruby/Gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:105:in `const_missing' /Library/Ruby/Gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/driver/native/driver.rb:76:in `open' /Library/Ruby/Gems/1.8/gems/sqlite3-ruby-1.2.5/lib/sqlite3/database.rb:76:in `initialize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/query_cache.rb:9:in `cache' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/query_cache.rb:28:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/head.rb:9:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/methodoverride.rb:24:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in 
`call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `synchronize' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/lock.rb:11:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call' /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/rails/rack/static.rb:31:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:46:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:40:in `each' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/urlmap.rb:40:in `call' /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/rails/rack/log_tailer.rb:17:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/content_length.rb:13:in `call' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler/webrick.rb:46:in `service' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/httpserver.rb:104:in `service' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/httpserver.rb:65:in `run' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:162:in `start' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:95:in `start' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:92:in `each' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:92:in `start' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:23:in `start' /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/webrick/server.rb:82:in `start' /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler/webrick.rb:13:in `run' /Library/Ruby/Gems/1.8/gems/rails-2.3.3/lib/commands/server.rb:111 /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' script/server:3

    Read the article

  • How can I easily maintain a cross-file JavaScript library development environment?

    - by John
    I have been developing a new JavaScript application which is rapidly growing in size. My entire JavaScript Application has been encapsulated inside a single function, in a single file, in a way like this: (function(){ var uniqueApplication = window.uniqueApplication = function(opts){ if (opts.featureOne) { this.featureOne = new featureOne(opts.featureOne); } if (opts.featureTwo) { this.featureTwo = new featureTwo(opts.featureTwo); } if (opts.featureThree) { this.featureThree = new featureThree(opts.featureThree); } }; var featureOne = function(options) { this.options = options; }; featureOne.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureTwo = function(options) { this.options = options; }; featureTwo.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; var featureThree = function(options) { this.options = options; }; featureThree.prototype.myFeatureBehavior = function() { //Lots of Behaviors }; })(); In the same file after the anonymous function and execution I do something like this: (function(){ var instanceOfApplication = new uniqueApplication({ featureOne:"dataSource", featureTwo:"drawingCanvas", featureThree:3540 }); })(); Before uploading this software online I pass my JavaScript file, and all it's dependencies, into Google Closure Compiler, using just the default Compression, and then I have one nice JavaScript file ready to go online for production. This technique has worked marvelously for me - as it has created only one global footprint in the DOM and has given me a very flexible framework to grow each additional feature of the application. However - I am reaching the point where I'd really rather not keep this entire application inside one JavaScript file. I'd like to move from having one large uniqueApplication.js file during development to having a separate file for each feature in the application, featureOne.js - featureTwo.js - featureThree.js Once I have completed offline development testing, I would then like to use something, perhaps Google Closure Compiler, to combine all of these files together - however I want these files to all be compiled inside of that scope, as they are when I have them inside one file - and I would like for them to remain in the same scope during offline testing too. I see that Google Closure Compiler supports an argument for passing in modules but I haven't really been able to find a whole lot of information on doing something like this. Anybody have any idea how this could be accomplished - or any suggestions on a development practice for writing a single JavaScript Library across multiple files that still only leaves one footprint on the DOM?

    Read the article

  • Java multipart POST library

    - by tom
    Is there a multipart POST library out there that achieves the same effect as doing a POST from an HTML form? For example, uploading a file programmatically in Java versus uploading the file using an HTML form. On the server side, it just blindly expects the request from the client side to be a multipart POST request and parses out the data as appropriate. Has anyone tried this? Specifically, I am trying to see if I can simulate the following with Java: The user creates a blob by submitting an HTML form that includes one or more file input fields. Your app sets blobstoreService.createUploadUrl() as the destination (action) of this form, passing the function a URL path of a handler in your app. When the user submits the form, the user's browser uploads the specified files directly to the Blobstore. The Blobstore rewrites the user's request and stores the uploaded file data, replacing the uploaded file data with one or more corresponding blob keys, then passes the rewritten request to the handler at the URL path you provided to blobstoreService.createUploadUrl(). This handler can do additional processing based on the blob key. Finally, the handler must return a headers-only redirect response (301, 302, or 303), typically a browser redirect to another page indicating the status of the blob upload. Set blobstoreService.createUploadUrl as the form action, passing the application path to load when the POST of the form is completed:

        <body>
          <form action="<%= blobstoreService.createUploadUrl("/upload") %>" method="post" enctype="multipart/form-data">
            <input type="file" name="myFile">
            <input type="submit" value="Submit">
          </form>
        </body>

    Note that this is how the upload form would look if it were created as a JSP. The form must include a file upload field, and the form's enctype must be set to multipart/form-data. When the user submits the form, the POST is handled by the Blobstore API, which creates the blob. The API also creates an info record for the blob and stores the record in the datastore, and passes the rewritten request to your app on the given path as a blob key.
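    Apache HttpClient's multipart/mime module is the usual library answer. As a rough illustration of what the browser form does, a minimal hand-rolled multipart/form-data POST using only the JDK might look like the sketch below; the upload URL, the field name "myFile" and the file path are assumptions taken from the form above, and it has not been tested against the Blobstore:

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class MultipartPostSketch {
            // Hand-rolled multipart/form-data POST, mimicking the HTML form submission.
            public static void upload(String uploadUrl, String filePath) throws IOException {
                String boundary = "----JavaMultipartBoundary" + System.currentTimeMillis();
                HttpURLConnection conn = (HttpURLConnection) new URL(uploadUrl).openConnection();
                conn.setDoOutput(true);
                conn.setRequestMethod("POST");
                conn.setInstanceFollowRedirects(false); // the Blobstore handler answers with a redirect
                conn.setRequestProperty("Content-Type", "multipart/form-data; boundary=" + boundary);

                try (OutputStream out = conn.getOutputStream()) {
                    String fileName = Paths.get(filePath).getFileName().toString();
                    // Part header for the "myFile" form field, then the raw file bytes, then the closing boundary.
                    out.write(("--" + boundary + "\r\n"
                            + "Content-Disposition: form-data; name=\"myFile\"; filename=\"" + fileName + "\"\r\n"
                            + "Content-Type: application/octet-stream\r\n\r\n").getBytes(StandardCharsets.UTF_8));
                    out.write(Files.readAllBytes(Paths.get(filePath)));
                    out.write(("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));
                }
                System.out.println("Server responded: " + conn.getResponseCode());
            }
        }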

    Read the article

  • Google I/O 2010 - Opening up Closure Library

    Tech Talks - Nathan Naze. Closure Library is the open-source JavaScript library behind some of Google's big web apps like Gmail and Google Docs. This session will tour the broad library, its object-oriented design, and its namespaced organization. We'll explain how it works and how to integrate it in your setup, both for development and optimized for a live application using Closure Compiler. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 01:00:38.

    Read the article

  • The tape library that grows with you

    - by A&C Redaktion
    With the StorageTek SL150 Modular Tape Library, Oracle has developed an archiving solution that can grow together with the business. The goals were ambitious: the new tape library was meant to be not only extremely scalable but also inexpensive, since it is intended as an entry-level library for smaller, growing and mid-sized companies. For the launch of the tape library, Oracle presents some impressive facts and figures: - 75% cheaper to purchase than comparable products - space-saving thanks to 40% higher density - highest security standards - expandable from 30 to up to 300 slots, and thus 900 terabytes - easy operation thanks to an intuitive user interface based on Oracle Fusion Middleware and Oracle Linux - installation takes only 30 minutes - supports many different system environments. Partners have the opportunity to offer their own support services for this new member of the Oracle product family. Details on the resell and support requirements can be found here (OPN login required): SL150 product overview, Partner Support Option with StorageTek SL150 Modular Tape Library, FAQ - Partner Support Option with StorageTek SL150 Modular Tape Library. The English-language press release for the launch also offers extensive information and details, from dimensions to power consumption, which you can find in the StorageTek SL150 Data Sheet. Naturally, we don't want to withhold the first reactions to the StorageTek SL150 from the German-language trade press: Speicherguide, IT SecCity, IT Administrator, DOAG.

    Read the article

  • Licensing approach for .NET library that might be used desktop / web-service / cloud environment

    - by Bobrovsky
    I am looking for advice on how to architect licensing for a .NET library. I am not asking for tool/service recommendations or anything like that. My library can be used in a regular desktop application or in an ASP.NET solution, and now Azure services come into play. Currently, for desktop applications the library checks whether the application and company names from the version info are the same as the names the key was generated for. In other cases the library compares hardware IDs. Now there are problems: an Azure-enabled web application can run on different hardware each time (AFAIK); sometimes the hardware ID for the same hardware changes unexpectedly; and checking the hardware ID or version info might not be allowed in some circumstances (shared hosting, for example). So I am thinking about what approach I can take to architect a licensing scheme that: is friendly to customers (I am not trying to fight piracy, but I do want to warn the customer if he uses the library on more servers than he paid for); can be used when there is no internet connection; and can be used on shared hosting. What would you recommend?

    Read the article

  • Depending on another open source library: copy/paste code or include

    - by user5794
    I'm working on a large class and started implementing new features that need graphics. I started writing the graphics functions myself, but I know that open source libraries exist that can provide me with this functionality without me having to write it myself. The problem is that I prefer the class to be self-sufficient and not dependent on any other library. If I don't write it myself, I would have to ask the user to make sure a graphics library is already installed (less user-friendly). If I write it myself, I do a lot more work than I have to. I could also copy/paste some of the relevant code into my own class, but I'm not sure about the disadvantages of doing this (it's an open source library whose license matches mine, so I'm not concerned with legality, just whether there are disadvantages programming-wise). So what should I do: copy/paste code from the external library, write the code myself so it's truly self-sufficient, or ask the user to download and install another library?

    Read the article

  • Why is my eth0 getting a dynamic IP when it is configured to be static?

    - by sdek
    For some reason our office Linux box is being assigned an IP address via DHCP and I don't know why. What is confusing to me is that when I check system-config-network it shows that my eth0 is set up with a static IP address. And /etc/sysconfig/network-scripts/ifcfg-eth0 also shows it is set up with a static IP, yet it is getting a different IP address than the one specified in ifcfg-eth0. Let me know if you have any suggestions or ideas on where I can look next. Here are a few details that might help you figure out what an idiot I am :) Fedora 11; the router in front of this box is running DHCP, starting at 10.42.1.100; this box is configured to be 10.42.1.50 (at least I think it is!), subnet 255.255.255.0 (which is the same as the router's LAN subnet); instead of having the static IP, this box is getting assigned 10.42.1.100. Here are the ifcfg-eth0 details:

        DEVICE=eth0
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=Ethernet
        USERCTL=no
        NM_CONTROLLED=no
        NETMASK=255.255.255.0
        IPADDR=10.42.1.50
        GATEWAY=10.42.1.1

    Read the article

  • How do I mitigate DDoS attacks on static servers?

    - by acidzombie24
    Here is a slightly different take on DDoS attacks. Rather than a server with dynamic content being attacked, I was curious how to deal with attacks on servers with STATIC CONTENT. This means CPU tends not to be an issue; it's either bandwidth or connection problems. How would I mitigate a DDoS attack knowing nothing about the attacker (for example country, IP address or anything else)? I was wondering whether shortening the timeout and increasing the number of connections is an acceptable solution, or maybe that is completely useless? I would also limit the number of connections per IP address. Would the above help or be pointless? Keeping in mind everything is static, checking for multiple requests of the same page (HTML, CSS, JS, etc.) could be a sign of an attack. What are some measures I can take on a static content server?

    Read the article

  • Two routers, one off-site, same ISP-assigned static IP. A recipe for conflict?

    - by boost
    This is the situation I've inherited: There are two routers, one off-site. Both are connected to the ISP. The ISP assigns both of them the same static IP (or so it seems). Presumably, the network problems we're having are related to the fact that you can't have two instances of the same IP. So we rang up the folks off-site and told them to turn off their router. Now everything's working okay here. How do I get around this? Get another static IP? Figure out how to get the router to ask for a dynamic IP (as we're not using the static IP for anything)?

    Read the article

  • Eclipse PDT - Failed to load JavaHL Library.

    - by alexus
    All of a sudden, when I work in Eclipse PDT I get this error message; I'm not sure where it came from or how to get rid of it:

        Failed to load JavaHL Library.
        These are the errors that were encountered:
        no libsvnjavahl-1 in java.library.path
        no svnjavahl-1 in java.library.path
        no svnjavahl in java.library.path
        java.library.path = .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java

    Any ideas?

    Read the article

  • Cloning a WebCenter Portal Managed Server

    - by Maiko Rocha
    I had to run some tests on a WebCenter Portal application deployed in a cluster. I've got a development VM with WebCenter PS4 (this also works on PS5) and I was trying to figure out how could I easily add a new managed server to my single-node domain, and make it a cluster. Creating the machine and cluster are a piece of cake, you can do it pretty quick through WLS Console. Now, you'd guess that using the clone option on WLS Console would do the magic of cloning an existing instance, right? Well, it does, but all you get is an "empty" managed server: with no target libraries.  It was a good surprise to find that WebCenter provides a way of cloning an existing WebCenter Portal managed server through a simple WLST command: cloneWebCenterManagedServer  This is a screenshot of my starting point. I want to clone WC_CustomPortal managed server: These are the steps to clone my WC_CustomPortal managed server: 1. In the command line, invoke WLST. It should be on <ORACLE_HOME_for_component>/common/bin/wlst.sh. In my case, it is ./product/Middleware/WebCenterPortal/common/bin/wlst.sh 2. Connect to the Admin Server:  connect ('<wls_admin_username>','<password>','t3://<server>:<port>') 3. Execute the following command: wls:/webcenter/serverConfig> cloneWebCenterManagedServer(baseManagedServer='WC_CustomPortal', newManagedServer='WC_CustomPortal2', newManagedServerPort=8893, verbose=1) I've turned on verbose output on purpose so I could see what the script was doing while executing. This is the output:  [...] Creating the Managed Server "WC_CustomPortal2" MBean type Server with name WC_CustomPortal2 has been created successfully. Targeting the library "oracle.bi.adf.model.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.view.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.webcenter.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.wsm.seedpolicies#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jsp.next#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.dconfig-infra#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "orai18n-adf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.dconfigbeans#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.pwdgen#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jrf.system.filter" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.businesseditor#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.management#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jsf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jstl#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "UIX#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-rcf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-uix#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the 
library "oracle.adf.desktopintegration.model#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.desktopintegration#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.jbips#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.skin#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.core#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.sdp.client#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.workflow.wc#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.worklist.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.ucm.ridc.app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-app-lib-base#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-core-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jaxrs-framework-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jersey-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-util-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-services-client-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.view#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.forum.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.jive.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.spaces.fwk#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.activitygraph.lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the datasource "mds-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "WebCenter-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "Activities-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the application "wsil-wls" to the Managed Server "WC_CustomPortal2" Targeting the application "DMS Application#11.1.1.1.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_webapp1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_application1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JRF Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JPS Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "ODL-Startup" 
to the Managed Server "WC_CustomPortal2" Targeting the startup class "Audit Loader Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "AWT Application Context Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JMX Framework Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "Web Services Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JOC-Startup" to the Managed Server "WC_CustomPortal2" Targeting the startup class "DMS-Startup" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "JOC-Shutdown" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "DMSShutdown" to the Managed Server "WC_CustomPortal2" Validating changes ... Validated the changes successfully [...] And this is the newly created WC_CustomPortal2 managed server showing up on Weblogic console:  Here is the full reference to WebCenter Portal Custom WLST Commands. Special thanks to Todd Vender for pointing this one out! :-)

    Read the article

  • Publish Static Content to WebLogic

    - by James Taylor
    Most people know WebLogic has a built-in web server. Typically this is not an issue, as you deploy Java applications and WebLogic publishes them to the web. But what if you just want to display a simple static HTML page? In WebLogic you can develop a simple web application to display static HTML content. In this example I used WLS 10.3.3. I want to display 2 files: an HTML file and an XSD for reference.

        1. Create a directory of your choice; this is what I will call the document root: mkdir /u01/oracle/doc_root
        2. Copy the static files to this directory.
        3. In the document root directory created in step 1, create the directory WEB-INF: mkdir WEB-INF
        4. In the WEB-INF directory create a file called web.xml with the following content:

           <?xml version="1.0" encoding="UTF-8"?>
           <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
           <web-app>
           </web-app>

        5. Log in to the WebLogic console to deploy the application.
        6. Click on Deployments.
        7. Click on Lock & Edit.
        8. Click Install and set the path to the directory created in step 1.
        9. Leave the default "Install this deployment as an application" and click Next.
        10. Select a Managed Server to deploy to and click Next.
        11. Accept the defaults and click Finish.
        12. The deployment completes successfully; now click Activate Changes.

    You should now see the application started under Deployments, and you can access your static content via the following URL: http://localhost:7001/doc_root/helloworld.html

    Read the article

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that. First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and being search engine friendly, but I don't believe that these are an issue for this particular app.) Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it'd be easy to have my web framework generate different development versus release HTML that includes multiple verbose versus concatenated minified files. But given that I'm only using static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.) In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?

    Read the article

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day, if it was possible in Sharepoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list and we needed to create the same top level folders in another library. I knew that it was possible to open a Sharepoint document library in explorer using a UNC style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :) dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"} I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders and then do a foreach (using the % alias) and call "mkdir".

    Read the article

  • Add static ARP entries when network is brought up

    - by jozzas
    I have some pretty dumb IP devices on a subnet with my Ubuntu server, and the server receives streaming data from each device. I have run into a problem in that when an ARP request is issued to a device while it is streaming data to the server, the request is ignored, the cache entry times out and the server stops receiving the stream. So, to prevent the server from sending ARP requests to these devices altogether, I would like to add a static ARP entry for each, something like: arp -i eth2 -s ip.of.the.device mac:of:the:device But these "static" ARP entries are lost if networking is disabled/enabled or if the server is rebooted. Where is the best place to automatically add these entries, preferably somewhere that will re-add them every time the interface eth2 is brought up? I really don't want to have to write a script that monitors the output of arp and re-adds the cache entries if they're missing. Edit to add what my final script was: I created the file /etc/network/if-up.d/add-my-static-arp with the contents:

        #!/bin/sh
        arp -i eth0 -s 192.168.0.4 00:50:cc:44:55:55
        arp -i eth0 -s 192.168.0.5 00:50:cc:44:55:56
        arp -i eth0 -s 192.168.0.6 00:50:cc:44:55:57

    And then obviously add the permission to allow it to be executed: chmod +x /etc/network/if-up.d/add-my-static-arp With this in place, the ARP entries are automatically added or re-added every time any network interface is brought up.

    Read the article

  • Is it possible to load a shared library into shared memory?

    - by quimm2003
    I have a server and a client written in C. I am trying to load a shared library in the server and then pass library function pointers to the client. This way I can change the library without having to recompile the client. Because every process has its own separate memory space, I wonder whether it is possible to load a shared library into shared memory, pass the function pointers, map the shared memory in the client, and then have the client execute the code of the library loaded by the server.

    Read the article

  • get_called_class hack not working with eval'd code

    - by Ekampp
    Hi there. I am using a get_called_class hack to allow late static binding in PHP 5.2 (found here). I have the following in my code:

        # db_record.php
        $ac = "ForumThread";
        $objects = $ac::find("all");

    This will not work in PHP 5.2 for some reason, so I have done this instead:

        # db_record.php
        $ac = "ForumThread";
        eval("\$objects = {$ac}::find('all');");

    This, on the other hand, will not work with the get_called_class function: I get an error that the file() function can't read the eval'd section of code. So how do I solve this problem? Best regards.

    Read the article

  • Why does this code fail to compile?

    - by aparna
    Given:

        11. public static void test(String str) {
        12.     int check = 4;
        13.     if (check = str.length()) {
        14.         System.out.print(str.charAt(check -= 1) + ", ");
        15.     } else {
        16.         System.out.print(str.charAt(0) + ", ");
        17.     }
        18. }

    and the invocation:

        21. test("four");
        22. test("tee");
        23. test("to");

    What is the result?
    A. r, t, t,
    B. r, e, o,
    C. Compilation fails.
    D. An exception is thrown at runtime.

    Answer: C. Can you explain why the compilation fails, please?
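    For reference, line 13 fails because check = str.length() is an assignment whose result has type int, and the condition of an if statement in Java must be a boolean. A sketch of a version that compiles, assuming the comparison operator was intended:

        public static void test(String str) {
            int check = 4;
            if (check == str.length()) {   // '==' compares; '=' assigns and yields an int, which 'if' rejects
                System.out.print(str.charAt(check -= 1) + ", ");
            } else {
                System.out.print(str.charAt(0) + ", ");
            }
        }
        // With '==', the three calls would print: r, t, t,  (matching option A)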

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you need to create multiple custom functoids you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library? First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID that is over 6000. When creating a Custom Functoid Library I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. In this way you can easily see which custom functoid IDs are assigned to which custom functoid and which ID is next in the sequence of availability:

        namespace MyCompany.BizTalk.Functoids.TestFunctoids
        {
            class _FunctoidIDs
            {
                public const int TestFunctoid = 6001;
            }
        }

    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file. From:

        int functoidID;
        // This has to be a number greater than 6000
        functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
        this.ID = functoidID;

    To:

        this.ID = _FunctoidIDs.TestFunctoid;

    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate. Once it is renamed you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match this. Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file: min and max connections, functoid category, input and output connection types, and the parameters and functionality of the Execute function. To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file: Name, Description, Tooltip, Exception and Icon. You can change the string values by double clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this you are then ready to import it into the functoid resource file. In Visual Studio change the resource view to images, right click the icon and choose import from file. You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then resetting the toolbox in Visual Studio.

    Read the article

  • Problem statically linking to libsmbclient.a

    - by Rex
    I've tried to develop a library using libsmbclient-dev to access network share folders. I want to statically link libsmbclient.a into my project, but I get a lot of errors when I try: it seems that libsmbclient.a doesn't include some objects it needs, while everything goes fine with libsmbclient.so. Because my project needs to support various Linux versions like CentOS, Ubuntu and so on, we would rather not install libsmbclient-dev everywhere. Can anybody tell me whether I can statically link libsmbclient.a in an easy way? Or what kind of installation strategy should I take to cause less trouble for users? If anyone has used this library in a project before, can you tell me how you dealt with this case? Thank you very much!

    Read the article
