Search Results

Search found 3984 results on 160 pages for 'modules'.

Page 19/160 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this:

        <?php
        // data collection stuff
        // (...) approx 80 lines of code
        // end data collection
        $var1 = 'some data';
        $var2 = 'more data';
        $var3 = 'other data';
        ?>

    Each module has exactly the same variables; only the data collection is different. I was wondering if it's a reasonable idea to store the module data in MySQL like this:

        [database]
        |_ modules
           |_ name
           |_ function (the raw PHP data from above)
           |_ description
           |_ author
           |_ update-url
           |_ version
           |_ enabled

    ...and then include the PHP data from the database and execute it? Something like a tab-navigation system at the top of the page with one tab per module name; inside each tab, the page content would work by parsing the database-stored code of that module from the function column. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web apps work, including some of my own, but I had never thought about it this deeply. Are there any drawbacks or security risks to this?

    Read the article

  • Pass Arguments to Included Module in Ruby?

    - by viatropos
    I'm hoping to implement something like all of the great plugins out there for Ruby, so that you can do this:

        acts_as_commentable
        has_attached_file :avatar

    But I have one constraint: that helper method can only include a module; it can't define any variables or methods. The reason for this is that I want the options hash to define something like type, and that could be converted into one of, say, 20 different 'workhorse' modules, all of which I could sum up in a line like this:

        def dynamic_method(options = {})
          include ("My::Helpers::#{options[:type].to_s.camelize}").constantize(options)
        end

    Then those 'workhorses' would handle the options, doing things like:

        has_many "#{options[:something]}"

    Here's what the structure looks like, and I'm wondering if you know the missing piece in the puzzle:

        # 1 - The workhorse, encapsulating all dynamic variables
        module My::Module
          def self.included(base)
            base.extend ClassMethods
            base.class_eval do
              include InstanceMethods
            end
          end

          module InstanceMethods
            self.instance_eval %Q?
              def #{options[:my_method]}
                "world!"
              end
            ?
          end

          module ClassMethods
          end
        end

        # 2 - all this does is define that helper method
        module HelperModule
          def self.included(base)
            base.extend(ClassMethods)
          end

          module ClassMethods
            def dynamic_method(options = {})
              # don't know how to get options through!
              include My::Module(options)
            end
          end
        end

        # 3 - send it to active_record
        ActiveRecord::Base.send(:include, HelperModule)

        # 4 - what it looks like
        class TestClass < ActiveRecord::Base
          dynamic_method :my_method => "hello"
        end

        puts TestClass.new.hello #=> "world!"

    That %Q? syntax I'm not totally sure how to use, but I'm basically just wanting to somehow pass the options hash from that helper method into the workhorse module. Is that possible? That way, the workhorse module could define all sorts of functionality, but I could name the variables whatever I wanted at runtime.

    Read the article

  • Zend Framework - How do I make a hierarchy without it being a module?

    - by Josh
    Here is my specific issue. I want to make an api level, and then under that you can select which method you will use. For example:

        test.com/api/rest
        test.com/api/xmlprc

    Currently I have api mapping to a module directory. I then set up a route to make it a REST route. test.com/api is a REST route, but I would rather have it be test.com/api/rest. This would allow me to add others later. In Bootstrap.php:

        $front  = Zend_Controller_Front::getInstance();
        $router = $front->getRouter();
        $route  = new Zend_Controller_Router_Route(
            'api/:module/:controller/:id/*',
            array('module' => 'default')
        );
        $router->addRoute('api', $route);
        $restRoute = new Zend_Rest_Route($front, array(), array('rest'));
        $router->addRoute('rest', $restRoute);
        return $router;

    In application.ini:

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    I know it will involve routes, but sometimes I find the Zend Framework documentation a little hard to follow. When I go to test.com/rest/controller/ it works how it should, but if I go to test.com/api/rest/ it tells me that my module is api and my controller is rest.

    Read the article

  • Developing Modular Flex Applications

    - by ukdavo
    Hi there I'd like to be able to understand how to develop a Flex application such that I could provide implementation classes at runtime. In the Java world I'd specify interfaces in an JAR (e.g. myapp-api.jar), the implementation in a separate JAR (e.g. myapp-impl.jar) and package these along with other resources in the application WAR (e.g. myapp.war). Within the code of the application I would instantiate the implementation classes dynamically. Is this approach possible in Flex? I'm aware that I can instantiate classes dynamically so that's a good start. I'm a bit confused by modules, RSLs and SWCs though. I was hoping to create a SWF application that had references to an interfaces SWC and an implementation SWC. The idea is that if I need to tweak the application for a specific customer then I could create a new implementation SWC and not have to modify the SWF or interface SWC. Any ideas?

    Read the article

  • How to install VMware tools for Ubuntu 11.04 hosted on VMware ESXi?

    - by Dmitri Toubelis
    I'm running VMware ESX 4.1 and I have a development VM that I recently upgraded from Ubuntu 10.04 to 11.04. Then I tried to re-install VMware Tools, and some of the modules gave me an error and would not compile. As a result I'm now having problems with backing up this virtual machine, and I suspect VMware Tools is the reason. I installed the latest patches for the VMware host, which included an update to VMware Tools (v8.3.7 build-381511), but I'm still getting the same error. The error I'm getting is like this:

        ...
        /tmp/vmware-root/modules/vmhgfs-only/super.c:73:4: error: unknown field ‘clear_inode’ specified in initializer
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/super.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'

    and also this:

        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: error: unknown field ‘ioctl’ specified in initializer
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:91:4: warning: initialization from incompatible pointer type
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c: In function ‘vmci_init’:
        /tmp/vmware-root/modules/vmci-only/vmci_drv.c:151:4: error: implicit declaration of function ‘init_MUTEX’
        make[2]: *** [/tmp/vmware-root/modules/vmci-only/vmci_drv.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmci-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** [vmci.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmci-only'

    Any ideas?

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET-served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post.

    Native and Managed Modules

    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there are a host of native-mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules: unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7, the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications: running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules

    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    The interesting thing here is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether the registered managed modules always run. Realistically, though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You would probably expect that the non-ASP.NET requests immediately no longer get funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and set runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events

    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global HttpApplication object (typically in global.asax) and the Application_XXXX events that you can hook up there. While the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every HTTP request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What's all that mean?

    Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it.
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully at whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0"
               path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this handler is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests

    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
    As always with module design, make sure you check in your code for the conditions that make the module applicable, and if a filter fails, exit immediately - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET.

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.  The real complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple different development teams, multiple business teams, and a highly complex business model (and processes to go along with it).  While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications.  Why?  It's all about time and money.  If you are unable to manage your complex applications in an efficient manner, the cost of managing it will increase dramatically as will the time to get things done (time to market).  On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are. This article is intended to discuss a number of key areas to think about when designing complex applications on ATG.  Some of this can get fairly technical, so it may help to get some background first.  You can get enough of the required background information from this post.  After reading that, come back here and follow along. Application Design Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries.  To get started, let's assume that we are talking about an application like that.  One that has these properties: Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies The organization is fairly loosely-coupled - single brand, but different businesses across different countries There is some common functionality across all sites in all countries There is some common functionality across different sites within the same country Sites within a single country may have some unique functionality - relative to other sites in the same country Complex product catalog (mostly in terms of bundles, eligibility, and compatibility) At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work... Code / configuration - assemble into modules When it comes to defining your modules for a complex application, there are a number of goals: Divide functionality between the modules in a way that maps to your business Group common functionality 'further down in the stack of modules' Provide a good balance between shared resources and autonomy for countries / sites Now I'll describe a high level approach to how you could accomplish those goals...  Let's start from the bottom and work our way up.  At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff.  You want to make sure that you are leveraging all the modules that make sense in order to get the most value from ATG as possible - and less stuff you'll have to write yourself.  
On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module described as follows: Sits directly on top of ATG modules Used by all applications across all countries and sites - this is the foundation for everyone Contains everything that is common across all countries / all sites Once established and settled, will change less frequently than other 'higher' modules Encapsulates as many enterprise-wide integrations as possible Will provide means of code sharing therefore less development / testing - faster time to market Contains a 'reference' web application (described below) The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense).  We'll define those modules as follows: Sits on top of the corporate foundation module Contains what is unique to all sites in a given country Responsible for managing any resource bundles for this country (to handle multiple languages) Overrides / replaces corporate integration points with any country-specific ones Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site as follows: Sits on top of the country it resides in module Contains what is unique for a given site within a given country Will mostly contain configuration, but could also define some unique functionality as well Contains one or more web applications The graphic below should help to indicate how these modules fit together: Web applications As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules.  Web applications are also contained within ATG modules, however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging.  One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site.  Here's a description of the 'reference' web application: Contains minimal / sample reference styling as this will mostly be addressed at the site level web app Focus on functionality - ensure that core functionality is revealed via this web application Each individual site can use this as a starting point There may be multiple types of web apps (i.e. B2C, B2B, etc) There are some techniques to share web application assets - i.e. multiple web applications, defined in the web.xml, and it's worth investigating, but is out of scope here. Reference infrastructure In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites.  It's more likely that different countries (or regions) could have their own solution for infrastructure.  In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment.  Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as it's baseline cost and the incremental cost to scale up with volume.  Having some consistency in terms of infrastructure will save time and money as new countries / sites come online.  Here are some properties of the reference infrastructure: Standardized approach to setup of hardware Type and number of servers Defines application server, operating system, database, etc... 
- including vendor and specific versions Consistent naming conventions Provides a consistent base of terminology and understanding across environments Defines which ATG services run on which servers Production Staging BCC / Preview Each site can change as required to meet scale requirements Governance / organization It should be no surprise that the complex application we're talking about is backed by an equally complex organization.  One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization.  Here are some ideas and goals to work towards: Establish a committee to make enterprise-wide decisions that affect all sites Representation should be evenly distributed Should have a clear communication procedure Focus on high level business goals Evaluation of feature / function gaps and how that relates to ATG release schedule / roadmap Determine when to upgrade & ensure value will be realized Determine how to manage various levels of modules Who is responsible for maintaining corporate / country / site layers Determine a procedure for controlling what goes in the corporate foundation module Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions Create a innovation team Quickly develop new features, perform proof of concepts All teams can benefit from their findings Summary At this point, it should be clear why the topics above (design, governance, organization, etc) are critical to being able to efficiently manage a complex application.  To summarize, it's all about competitive advantage...  You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers.  You can reduce cost by reducing development time, time allocated to testing (don't have to test the corporate foundation module over and over again - do it once), and optimizing operations.  With an efficient design, you can improve your time to market and your business will be more flexible  and agile.  Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity) and this will be rewarded - you're now a leader. In addition to the above, you'll realize soft benefits as well.  Your staff will be operating in a culture based on sharing.  You'll want to reward efforts to improve and enhance the foundation as this will benefit everyone.  This culture will inspire innovation, which can only lend itself to your competitive advantage.

    Read the article

  • RTL8168B/8111B Lan card is not detected in Redhat..Error is make ***/lib/modules/2.6.18-53.e15/build

    - by Deepak Narwal
    Hello friends. In my computer the LAN card model is a Realtek RTL8168B/8111B PCI-E GIGABIT ETHERNET NIC (NDIS 6.20). My system dual boots Windows 7 and Red Hat 5.1. Red Hat is not picking up this model of LAN card automatically. I tried downloading the driver from the Realtek site for this particular model and found some .tar packages for my kernel, and when I tried to install them:

        check old drivers & unload it
        build the module and install
        make */lib/modules/2.6.18-53.e15/build: no such file or directory  stop
        make[1]: *[modules] error 2
        make : [modules] error 2

    I downloaded the tar files from the site and unpacked them according to their instructions. I tried to run the autorun.sh script as mentioned in the readme file, but after doing this it shows the above error. Now I'm not sure what to do.

    Read the article

  • Silent and scripted install of CPAN and Perl modules?

    - by Mikael Grönfelt
    I need to install CPAN and some Perl modules automatically in a Scientific Linux (RHEL) installation script. Unfortunately the specific modules I want (at least one of them) cannot be found as RPMs, as far as I've seen. So I need to install CPAN, configure it automatically (or with a config file) and then install the wanted modules (including dependencies) automatically as well. This doesn't seem like a very unusual requirement, but I haven't seen any really good documentation on it. The problem is that whenever CPAN is launched for the first time, an interactive configuration runs. Can this be skipped somehow? And how do I launch module installations directly from the command line?

    Read the article

  • Apache eats up too much RAM per child

    - by mrc4r7m4n
    Hello to everyone. I've got fallowing problem: Apache eat to many ram per child. The fallowing comments shows: cat /etc/redhat-release -- Fedora release 8 (Werewolf) free -m: total used free shared buffers cached Mem: 3566 3136 429 0 339 1907 -/+ buffers/cache: 889 2676 Swap: 4322 0 4322 I know that you will say that there is nothing to worry about because swap is not use, but i think it's not use for now. 3.httpd -v: Server version: Apache/2.2.14 (Unix) 4.httpd -l: Compiled in modules: core.c mod_authn_file.c mod_authn_default.c mod_authz_host.c mod_authz_groupfile.c mod_authz_user.c mod_authz_default.c mod_auth_basic.c mod_include.c mod_filter.c mod_log_config.c mod_env.c mod_setenvif.c mod_version.c mod_ssl.c prefork.c http_core.c mod_mime.c mod_status.c mod_autoindex.c mod_asis.c mod_cgi.c mod_negotiation.c mod_dir.c mod_actions.c mod_userdir.c mod_alias.c mod_rewrite.c mod_so.c 5.List of loaded dynamic modules: LoadModule authz_host_module modules/mod_authz_host.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule cgi_module modules/mod_cgi.so 6.My prefrok directive <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 25 ServerLimit 80 MaxClients 80 MaxRequestsPerChild 4000 </IfModule> KeepAliveTimeout 6 MaxKeepAliveRequests 100 KeepAlive On 7.top -u apache: ctrl+ M top - 09:19:42 up 2 days, 19 min, 2 users, load average: 0.85, 0.87, 0.80 Tasks: 113 total, 1 running, 112 sleeping, 0 stopped, 0 zombie Cpu(s): 7.3%us, 15.7%sy, 0.0%ni, 75.7%id, 0.0%wa, 0.7%hi, 0.7%si, 0.0%st Mem: 3652120k total, 3149964k used, 502156k free, 348048k buffers Swap: 4425896k total, 0k used, 4425896k free, 1944952k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 16956 apache 20 0 700m 135m 100m S 0.0 3.8 2:16.78 httpd 16953 apache 20 0 565m 130m 96m S 0.0 3.7 1:57.26 httpd 16957 apache 20 0 587m 129m 102m S 0.0 3.6 1:47.41 httpd 16955 apache 20 0 567m 126m 93m S 0.0 3.6 1:43.60 httpd 17494 apache 20 0 626m 125m 96m S 0.0 3.5 1:58.77 httpd 17515 apache 20 0 540m 120m 88m S 0.0 3.4 1:45.57 httpd 17516 apache 20 0 573m 120m 88m S 0.0 3.4 1:50.51 httpd 16954 apache 20 0 551m 120m 88m S 0.0 3.4 1:52.47 httpd 17493 apache 20 0 586m 120m 94m S 0.0 3.4 1:51.02 httpd 17279 apache 20 0 568m 117m 87m S 16.0 3.3 1:51.87 httpd 17302 apache 20 0 560m 116m 90m S 0.3 3.3 1:59.06 httpd 17495 apache 20 0 551m 116m 89m S 0.0 3.3 1:47.51 httpd 17277 apache 20 0 476m 114m 81m S 0.0 3.2 1:37.14 httpd 30097 apache 20 0 536m 113m 83m S 0.0 3.2 1:47.38 httpd 30112 apache 20 0 530m 112m 81m S 0.0 3.2 1:40.15 httpd 17513 apache 20 0 516m 112m 85m S 0.0 3.1 1:43.92 httpd 16958 apache 20 0 554m 111m 82m S 0.0 3.1 1:44.18 httpd 1617 apache 20 0 487m 111m 85m S 0.0 3.1 1:31.67 httpd 16952 apache 20 0 461m 107m 75m S 0.0 3.0 1:13.71 httpd 16951 apache 20 0 462m 103m 76m S 0.0 2.9 1:28.05 httpd 17278 apache 20 0 497m 103m 76m S 0.0 2.9 1:31.25 httpd 17403 apache 20 0 537m 102m 79m S 0.0 2.9 1:52.24 httpd 25081 apache 20 0 412m 101m 70m S 0.0 2.8 1:01.74 httpd I guess thats all information needed to help me solve 
this problem. I think the virtual memory is too big, and the same goes for the resident memory. The RAM consumption keeps increasing all the time. Maybe it's a memory leak, because I see that so many static modules are compiled in. Could someone help me with this issue? Thank you in advance.

    Read the article

  • Modules already committed, client doesn't pay, what should I do?

    - by John
    So the story is simple: an early-stage EU portal hired me to do some extra modules. I got all the source code for local testing, did my job, and committed the new code. Now I am out of the project, but the client still hasn't paid me and isn't even thinking about it. It has been a couple of months and no contract was signed, so I can't take any legal action. What should I do with all the source code? Sell it? Run an exact copy of that portal? Make the whole portal publicly available?

    Read the article

  • python metaprogramming

    - by valya
    I'm trying to achieve a task which turns out to be a bit complicated, since I'm not very good at Python metaprogramming. I want to have a module locations with a function get_location(name), which returns a class defined in a folder locations/, in the file with the name passed to the function. The name of the class is something like NameLocation. So, my folder structure:

        program.py
        locations/
            __init__.py
            first.py
            second.py

    program.py will be something with:

        from locations import get_location
        location = get_location('first')

    and the location is a class defined in first.py, something like this:

        from locations import Location  # base class for all locations, defined in __init__ (?)

        class FirstLocation(Location):
            pass

    etc. Okay, I've tried a lot of import and getattribute statements, but now I'm bored and surrender. How do I achieve such behaviour?
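    A minimal sketch of one way to achieve this, assuming the layout above (the Location base class and the Name + "Location" naming rule are taken from the question; importlib.import_module is available from Python 2.7 on):

        # locations/__init__.py
        import importlib


        class Location(object):
            """Base class for all locations."""
            pass


        def get_location(name):
            # Import locations.<name>, e.g. locations.first for get_location('first')
            module = importlib.import_module('locations.' + name)
            # Map 'first' -> 'FirstLocation' and pull the class out of the module
            return getattr(module, name.capitalize() + 'Location')

    With that, program.py works as written in the question: get_location('first') imports locations/first.py on demand and returns its FirstLocation class.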

    Read the article

  • How to find unnecessary dependencies in a Maven multi-project?

    - by hstoerr
    If you are developing a large, evolving multi-module Maven project, it seems inevitable that there are some dependencies given in the POMs that are unnecessary, since they are transitively included by other dependencies. For example, this happens if you have a module A that originally includes C. Later you refactor and have A depend on a module B, which in turn depends on C. If you are not careful enough you'll wind up with both B and C in A's dependency list. But of course you do not need to put C into A's POM, since it is included transitively anyway. Is there a tool to find such unnecessary dependencies? (These dependencies do not actually hurt, but they might obscure your actual module structure, and having less stuff in the POM is usually better. :-)

    Read the article

  • Python: Define Classes in Packages

    - by rfkrocktk
    I'm learning Python and I have been playing around with packages. I wanted to know the best way to define classes in packages. It seems that the only way to define classes in a package is to define them in the __init__.py of that package. Coming from Java, I'd kind of like to define individual files for my classes. Is this a recommended practice? I'd like to have my directory look somewhat like this:

        recursor/
            __init__.py
            RecursionException.py
            RecursionResult.py
            Recursor.py

    So I could refer to my classes as "recursor.Recursor", "recursor.RecursionException", and "recursor.RecursionResult". Is this "do-able" or recommended in Python?
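    One common way to get this one-class-per-file layout while keeping the short recursor.Recursor style of reference is to re-export the classes from the package's __init__.py. A sketch based on the layout above (file and class names are the question's; note that PEP 8 would normally suggest lowercase module file names):

        # recursor/__init__.py
        # Re-export the classes so callers can write recursor.Recursor, etc.
        from recursor.Recursor import Recursor
        from recursor.RecursionException import RecursionException
        from recursor.RecursionResult import RecursionResult

    Each of the three other files then just defines the class of the same name, and client code can do import recursor and use recursor.Recursor() directly.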

    Read the article

  • How do I extend a python module? (python-twitter)

    - by user319045
    What are the best practices for extending a python module -- in this case I want to extend python-twitter by adding new methods to the base API class. I've looked at tweepy, and I like that as well, I just find python-twitter easier to understand and extend with the functionality I want. I have the methods written already, I'm just trying to figure out the best way to add them into the module, without changing the core.
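    For reference, a common way to add methods without touching python-twitter's core is to subclass its Api class and put the new methods there; a rough sketch (ExtendedApi and my_new_call are hypothetical placeholder names, not part of the library):

        import twitter  # the python-twitter package exposes its client as twitter.Api


        class ExtendedApi(twitter.Api):
            """python-twitter's Api plus the extra methods written for this project."""

            def my_new_call(self):
                # Placeholder: paste the already-written methods here.  They can use
                # the inherited auth/connection state through `self`, exactly as
                # methods defined inside twitter.Api itself do.
                raise NotImplementedError

    Code that previously constructed twitter.Api then constructs ExtendedApi instead; nothing in the installed package has to change.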

    Read the article

  • Why is this the output of this python program?

    - by Andrew Moffat
    Someone from #python suggested that it's searching for the module "herpaderp" and finding all the ones listed as it's searching. If this is the case, why doesn't it list every module on my system before raising ImportError? Can someone shed some light on what's happening here?

        import sys

        class TempLoader(object):
            def __init__(self, path_entry):
                if path_entry == 'test':
                    return
                raise ImportError

            def find_module(self, fullname, path=None):
                print fullname, path
                return None

        sys.path.insert(0, 'test')
        sys.path_hooks.append(TempLoader)

        import herpaderp

    output:

        16:00:55 $> python wtf.py
        herpaderp None
        apport None
        subprocess None
        traceback None
        pickle None
        struct None
        re None
        sre_compile None
        sre_parse None
        sre_constants None
        org None
        tempfile None
        random None
        __future__ None
        urllib None
        string None
        socket None
        _ssl None
        urlparse None
        collections None
        keyword None
        ssl None
        textwrap None
        base64 None
        fnmatch None
        glob None
        atexit None
        xml None
        _xmlplus None
        copy None
        org None
        pyexpat None
        problem_report None
        gzip None
        email None
        quopri None
        uu None
        unittest None
        ConfigParser None
        shutil None
        apt None
        apt_pkg None
        gettext None
        locale None
        functools None
        httplib None
        mimetools None
        rfc822 None
        urllib2 None
        hashlib None
        _hashlib None
        bisect None
        Traceback (most recent call last):
          File "wtf.py", line 14, in <module>
            import herpaderp
        ImportError: No module named herpaderp
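    For context on the mechanics (a general note on the import protocol, not specific to this script): an entry in sys.path_hooks is called once for each sys.path entry it accepts, and the finder it returns is then asked find_module() for every import that is not already satisfied from sys.modules; returning None simply means "not found here, keep looking". A small Python 2 sketch of that protocol, mirroring the code above with a hypothetical NoisyFinder:

        import sys


        class NoisyFinder(object):
            """Path-entry finder attached to the fake 'test' sys.path entry."""

            def __init__(self, path_entry):
                # Accept only the 'test' entry; raising ImportError tells the
                # import machinery to try the next hook for other entries.
                if path_entry != 'test':
                    raise ImportError

            def find_module(self, fullname, path=None):
                # Called for every fresh import that reaches the 'test' entry.
                # Returning None defers to the rest of sys.path.
                print 'asked for', fullname
                return None


        sys.path.insert(0, 'test')
        sys.path_hooks.append(NoisyFinder)

        import json  # prints "asked for json" if json has not been imported yet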

    Read the article

  • this.loaderInfo is null in Flex

    - by Viliam Husár
    I have a problem with a Flex module. I want to access URL variables via this.loaderInfo.url. I call a function in the creationComplete handler of the module, and sometimes it works and sometimes it doesn't ("Can't access... null"). Any suggestions?

    Read the article

  • First Call to a Controller, Constant is defined, Second call, "uninitialized constant Oauth"?

    - by viatropos
    I am trying to get the OAuth gem to work with Rails 3 and I'm running into this weird problem... (independent of the gem, I think I've run into this once before). I have a controller called "OauthTestController", and a model called "ConsumerToken". The model looks like this:

        require 'oauth/models/consumers/token'

        class ConsumerToken < ActiveRecord::Base
          include Oauth::Models::Consumers::Token
        end

    When I go to "/oauth_test/twitter", it loads the Oauth::Models::Consumers::Token module and I'm able to connect to Twitter no problem. But the second time I try it (just refresh the /oauth_test/twitter url), it gives me this error:

        NameError (uninitialized constant Oauth):
          app/models/consumer_token.rb:4
          app/models/twitter_token.rb:2
          app/controllers/oauth_test_controller.rb:66:in `load_consumer'

    Why is that? It has something to do with load paths or being in development mode maybe?

    Read the article

  • Drupal Module Development: How to Communicate between form_submit and page handler functions

    - by Aaron
    I am writing a module and I need to retrieve values set in a form_submit function from a page handler function. The reason is that I am rendering results of a form submit on the same page as the page handler. I have this working, but I am using global variables, which I don't like. I'd like to be able to use the $form_state['storage'] for this, but can't since I don't have access to the $form_state variable from the page handler. Any suggestions?

    Read the article

  • Lamp with mod_fastcgi

    - by Jonathan
    Hi! I am building a CGI application, and I would like it to behave like an application that stays running and parses each connection. With this, I can have all session variables saved in memory instead of saving them to a file (or any other place) and loading them again on a new connection. I am using LAMP inside a Linux VMware image, but I can't seem to find how to install the module to make this work and what to change in httpd.conf. I tried to compile the module, but I couldn't, because my Apache isn't a regular installation - it's a pre-built LAMP one - and it seems that the mod needs the Apache directory in order to be compiled. I saw some coding examples out there, so I guess it is not that hard once it is running OK with Apache. Can you help me with this please? Thanks, Joe

    Read the article

  • module compiled with swig not found by python

    - by openbas
    Hello, I have a problem with SWIG and Python. I have a C++ class that compiles correctly, but the Python script says it can't find the module. I compile with:

        swig -c++ -python codes/codes.i
        g++ -c -Wall -O4 -fPIC -pedantic codes/*.cc
        g++ -I/usr/include/python2.6 -shared codes/codes_wrap.cxx *.o -o _codes.so

    This gives me a _codes.so file, as I would expect, but then I have this Python file:

        import sys
        import codes
        (rest of the code omitted)

    It gives me:

        Traceback (most recent call last):
          File "script.py", line 3, in <module>
            import codes
        ImportError: No module named codes

    According to http://www.swig.org/Doc1.3/Introduction.html#Introduction_nn8 this is all I should have to do... The files are in the same directory, so the path should not be a problem?
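    One detail worth checking here (a general SWIG behaviour, not specific to this build): in -python mode SWIG generates a pure-Python wrapper codes.py alongside codes_wrap.cxx (written next to codes.i unless -outdir says otherwise), and import codes actually loads that .py file, which in turn imports the compiled _codes. Both pieces have to be importable; a tiny Python 2 check, assuming the file names above:

        # check_codes.py -- run from the directory that holds script.py
        import sys

        print sys.path[0]   # the directory searched first when running a script

        import _codes       # the extension built above; fails if _codes.so is not importable
        import codes        # the generated codes.py; fails if it was left in codes/
        print codes.__file__
        print _codes.__file__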

    Read the article

  • Drupal 7: File field causes error with Dependable Dropdowns

    - by LoneWolfPR
    I'm building a form in a module using the Form API. I've had a couple of dependent dropdowns that have been working just fine. The code is as follows:

        $types = db_query('SELECT * FROM {touchpoints_metric_types}')->fetchAllKeyed(0, 1);
        $types = array('0' => '- Select -') + $types;
        $selectedType = isset($form_state['values']['metrictype']) ? $form_state['values']['metrictype'] : 0;

        $methods = _get_methods($selectedType);
        $selectedMethod = isset($form_state['values']['measurementmethod']) ? $form_state['values']['measurementmethod'] : 0;

        $form['metrictype'] = array(
          '#type' => 'select',
          '#title' => t('Metric Type'),
          '#options' => $types,
          '#default_value' => $selectedType,
          '#ajax' => array(
            'event' => 'change',
            'wrapper' => 'method-wrapper',
            'callback' => 'touchpoints_method_callback'
          )
        );

        $form['measurementmethod'] = array(
          '#type' => 'select',
          '#title' => t('Measurement Method'),
          '#prefix' => '<div id="method-wrapper">',
          '#suffix' => '</div>',
          '#options' => $methods,
          '#default_value' => $selectedMethod,
        );

    Here are the _get_methods and touchpoints_method_callback functions:

        function _get_methods($selected) {
          if ($selected) {
            $methods = db_query("SELECT * FROM {touchpoints_m_methods} WHERE mt_id=$selected")->fetchAllKeyed(0, 2);
          }
          else {
            $methods = array();
          }
          $methods = array('0' => "- Select -") + $methods;
          return $methods;
        }

        function touchpoints_method_callback($form, &$form_state) {
          return $form['measurementmethod'];
        }

    This all worked fine until I added a file field to the form. Here is the code I used for that:

        $form['metricfile'] = array(
          '#type' => 'file',
          '#title' => 'Attach a File',
        );

    Now that the file field is added, if I change the first dropdown it hangs with the 'Please wait' message next to it, without ever loading the contents of the second dropdown. I also get the following error in my JavaScript console:

        Uncaught TypeError: Object function (a,b){return new p.fn.init(a,b,c)} has no method 'handleError'

    What am I doing wrong here?

    Read the article

  • iOS LocationManager is not updating location (Titanium Appcelerator module)

    - by vale4674
    I've made an Appcelerator Titanium module for fetching the device's rotation and location. The source can be found on GitHub. The problem is that it fetches only one cached location, while the device motion data is OK and keeps refreshing. I don't use a delegate; I pull that data in my Titanium JavaScript code. If I set "City Run" in Simulator - Debug - Location, nothing happens; the same cached location keeps being returned. Pulling the location itself is OK, because I tried a native app which does this:

        textView.text = [NSString stringWithFormat:@"%f %f\n%@",
                         locationManager.location.coordinate.longitude,
                         locationManager.location.coordinate.latitude,
                         textView.text];

    and it works in the simulator and on the device. But the same code, as you can see on GitHub, is not working as a Titanium module. Any ideas?

    EDIT: I am looking at the GeolocationModule source and I see nothing special there. As I said, the code in my module should work, since it works in the native app. The "only" problem is that it is not updating the location and always returns that cached location.

    Read the article

  • How can I use a variable as a module name in Perl?

    - by mjn12
    I know it is possible to use a variable as a variable name for package variables in Perl. I would like to use the contents of a variable as a module name. For instance:

        package Foo;
        our @names = ("blah1", "blah2");
        1;

    In another file I want to be able to set the contents of a scalar to "foo" and then access the names array in Foo through that scalar:

        my $packageName = "Foo";

    Essentially I want to do something along the lines of:

        @{$packageName}::names;  # This obviously doesn't work.

    I know I can use

        my $names = eval '$' . $packageName . "::names";

    but only if Foo::names is a scalar. Is there another way to do this without the eval statement?

    Read the article
