Search Results

Search found 2952 results on 119 pages for 'dependencies'.

Page 34/119 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • IoC - Dynamic Composition of object instances

    - by Joshua Starner
    Is there a way, using IoC, MEF [Import]s, or another DI solution, to compose dependencies on the fly at object creation time instead of at composition time? Here's my current thought. If you have an instance of an object that raises events, but you are not creating the object once and keeping it in memory, you have to register the event handlers every time the object is created. As far as I can tell, most IoC containers require you to register all of the classes used in composition and call Compose() to hook up all the dependencies. Doing this may be horrible design (I'm dealing with a legacy system here) due to the overhead of object creation, dependency injection, etc., but I was wondering if it is possible using one of the emerging IoC technologies. Maybe I have some terminology mixed up, but my goal is to avoid writing a framework to "hook up all the events" on an instance of an object, and instead use something like MEF to [Export] handlers (dependencies) that adhere to a very specific interface and [ImportMany] them into an object instance, so my exports get called if the assemblies are there when the application starts. So maybe all of the objects could still be composed when the application starts, but I want the system to find and call all of them as each object is created and destroyed.
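    A minimal MEF sketch of the idea described above: satisfying [ImportMany] on a freshly created instance rather than on a long-lived composition graph. The type names (IOrderEventHandler, AuditHandler, Order) are purely illustrative; only the MEF attributes and container calls are standard.

      using System;
      using System.Collections.Generic;
      using System.ComponentModel.Composition;
      using System.ComponentModel.Composition.Hosting;

      public interface IOrderEventHandler { void Handle(Order order); }

      [Export(typeof(IOrderEventHandler))]
      public class AuditHandler : IOrderEventHandler
      {
          public void Handle(Order order) { /* react to the event */ }
      }

      public class Order
      {
          [ImportMany]
          public IEnumerable<IOrderEventHandler> Handlers { get; set; }
      }

      public static class Composition
      {
          // At creation time, fill the new instance's imports from whatever assemblies are present.
          public static Order CreateOrder()
          {
              var catalog = new DirectoryCatalog(AppDomain.CurrentDomain.BaseDirectory);
              using (var container = new CompositionContainer(catalog))
              {
                  var order = new Order();
                  container.ComposeParts(order);   // Handlers now holds every exported handler found
                  return order;
              }
          }
      }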

    Read the article

  • Maintaining a Python web application: heavier vs lighter framework?

    - by Tiberiu Ana
    Five+ years from now, you are hired to support and extend a data-centric web application written in Python that hasn't been kept up to date. Would you prefer it was written in the then-current version of Django/Pylons, using the available standard components, or kept minimal with something like CherryPy/web.py and a few library dependencies?

    Heavy framework
    Advantages: standard approach to application design and structure, as encouraged by the framework; less application code to worry about.
    Disadvantages: requires learning the framework to understand how things work; broken things in an old version of the framework are difficult to fix; upgrading to a new version is potentially difficult due to changing APIs; finding relevant documentation/help is potentially difficult due to changing APIs.

    Light framework
    Advantages: most application code is directly "visible"; only needed features are implemented; the architecture should be simpler to understand; less need to upgrade external dependencies; easier to upgrade external dependencies.
    Disadvantages: some reinventing the wheel; non-standard design and structure (with the associated unique issues and bugs).

    I will update the list with any helpful answers.

    Read the article

  • Ruby on Rails: Starting Mongrel server

    - by spin-docta
    I know Mongrel is the default server for "script/server", but when I run that command I get WEBrick (I had it working before with Mongrel). Now when I tell it to use Mongrel ("script/server mongrel") the server fails to start up in the terminal. I get this:

      $ script/server mongrel
      ^C/Library/Ruby/Gems/1.8/gems/mongrel-1.1.5/lib/mongrel/gems.rb:11:in `require': Interrupt
          from /Library/Ruby/Gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb:17
          from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
          from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:36:in `require'
          from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:156:in `require'
          from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:521:in `new_constants_in'
          from /Library/Ruby/Gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:156:in `require'
          from /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1
          from /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get'
          from /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
          from /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each'
          from /Library/Ruby/Gems/1.8/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
          from /Users/devinross14/.gem/ruby/1.8/gems/rails-2.3.3/lib/commands/server.rb:45
          from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
          from script/server:3

    I just upgraded to Snow Leopard, if that helps...

    Read the article

  • Error while running RSpec test cases.

    - by alokswain
    Following is the error I receive while running test cases written using RSpec. The strange thing is that the tests were running fine until early yesterday. Can someone guide me towards a solution? I am new to RSpec, I'm using rspec and rspec-rails as plugins in my app, and I have no clue what went wrong.

      F:/Spritle/programs/ruby 1.86/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `const_missing': uninitialized constant Test::Unit::TestResult::TestResultFailureSupport (NameError)
          from F:/Spritle/programs/ruby 1.86/lib/ruby/gems/1.8/gems/test-unit-2.0.1/lib/test/unit/testresult.rb:28
          from F:/Spritle/programs/ruby 1.86/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from F:/Spritle/programs/ruby 1.86/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from F:/Spritle/programs/ruby 1.86/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
          from F:/Spritle/projects/Evaluation/vendor/plugins/rspec/lib/spec/interop/test.rb:34
          from F:/Spritle/programs/ruby 1.86/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from F:/Spritle/programs/ruby 1.86/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from F:/Spritle/programs/ruby 1.86/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
           ... 15 levels...
          from F:/Spritle/projects/Evaluation/vendor/plugins/rspec/lib/spec/runner/example_group_runner.rb:14:in `load_files'
          from F:/Spritle/projects/Evaluation/vendor/plugins/rspec/lib/spec/runner/options.rb:133:in `run_examples'
          from F:/Spritle/projects/Evaluation/vendor/plugins/rspec/lib/spec/runner/command_line.rb:9:in `run'
          from F:/Spritle/projects/Evaluation/vendor/plugins/rspec/bin/spec:5
      rake aborted!
      Command "F:/Spritle/programs/ruby 1.86/bin/ruby.exe" -I"lib" "F:/Spritle/projects/Evaluation/vendor/plugins/rspec/bin/spec" "spec/controllers/articles_controller_spec.rb" --options "F:/Spritle/projects/Evaluation/spec/spec.opts" failed

    Read the article

  • Controlling the order of PicoContainer startup

    - by Trejkaz
    I have been tasked with doing some refactoring work on how we start up applications. Basically we have a bunch of console apps which were depending on the GUI application startup code, causing bogus dependencies which have knock-on effects on which libraries we need to ship and which dependencies other modules need to declare. So I have written a simple startup framework where I basically just throw a bunch of Runnable objects into a list and then run them in order - and it works. But I was thinking - we already have PicoContainer in our project, so all these things that need to be run on startup could potentially be thrown into a PicoContainer, and if they implement Startable they will start... But in some cases we want to specify the ordering between them. For example, I don't want any other component writing to the log before we write a header into the log indicating that the application is starting up. I know I can introduce ordering by introducing injection dependencies, but this feels like a hack in this case - I would need to add the log header writer as a dependency for every other component which might write to the log, which isn't great at all. Nonetheless it seems like it would be nice to control the order of PicoContainer startup, so is there perhaps some other way? Alternatively I could just keep it simple and stick to my list of Runnable objects. It does, after all, work.
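    For reference, a minimal sketch of the plain "list of Runnables" sequencer the question describes: the ordering is explicit, and the log-header writer simply goes first. Class names other than Runnable (StartupSequence, LogHeaderWriter, ConfigLoader) are illustrative only.

      import java.util.ArrayList;
      import java.util.List;

      public class StartupSequence {
          private final List<Runnable> tasks = new ArrayList<Runnable>();

          public StartupSequence add(Runnable task) {
              tasks.add(task);
              return this;
          }

          public void run() {
              for (Runnable task : tasks) {
                  task.run();   // strictly in registration order
              }
          }
      }

      // Usage (LogHeaderWriter and ConfigLoader are hypothetical startup tasks):
      //   new StartupSequence()
      //       .add(new LogHeaderWriter())   // writes the log header before anything else runs
      //       .add(new ConfigLoader())
      //       .run();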

    Read the article

  • My framework will utilise other frameworks, but I'd like this to be transparent to the end-user

    - by d11wtq
    I'm building a framework which aims to provide a new development environment for the user, but I need to use things like RegexKit and almost certainly some other established frameworks in order to do this. Any functionality exposed from such frameworks would be abstracted through classes and methods in my own framework for maintenance reasons (allowing me to change my mind about which dependencies I want). In an ideal world I just want to ship a single .framework. However, I'm aware that, unlike with standard bundles and applications, it is not possible to embed a framework inside the framework bundle. Do I have any option other than to tell the end user that they must also install RegexKit and any other dependencies? I have a feeling this lessens the appeal of the easy-to-use embedded framework I'd envisaged building. Right now I feel like I have some limited options:

    1. Force the user to install the dependencies.
    2. Write my own classes that provide the same functionality -- ugh!
    3. If at all possible, try to statically link the third-party frameworks (is this possible??)

    My end product is ideally a single .framework bundle that uses @rpath and so can be installed in the system or simply bundled with the app that uses it.

    Read the article

  • Eclipse: one classpath for compiling, another for launching

    - by DragonFax
    Example: for logging, my code uses log4j, but other jars my code depends upon use slf4j instead, so both jars must be in the build path. Unfortunately, it's now possible for my code to directly use (depend on) slf4j, either via content assist or another developer's changes. I would like any use of slf4j to show up as an error, but my application (and tests) will still need it in the classpath when running.

    Explanation: I'd like to find out if this is possible in Eclipse. This scenario happens often for me. I'll have a large project that uses a lot of third-party libraries, and of course those third-party jars have their own dependencies as well. So I have to include all dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run (from within Eclipse). But I don't want my code to use all of those jars, just the few direct dependencies I've decided upon myself. So if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error -- ideally as "class not found", but any error would do. I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.

    Read the article

  • Replace .sln with MSBuild and wrap contained projects into targets

    - by Filburt
    I'd like to create an MSBuild project that reflects the project dependencies in a solution and wraps the VS projects inside reusable targets. The problem I'd like to solve by doing this is to svn-export, build and deploy a specific assembly (and its dependencies) in a BizTalk application. My question is: how can I make the targets for svn-exporting, building and deploying reusable, and also reuse the wrapped projects when they are built for different dependencies? I know it would be simpler to just build the solution and deploy only the assemblies needed, but I'd like to reuse the targets as much as possible.

    The parts -- the project I'd like to deploy:

      <Project DefaultTargets="Deploy" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <ExportRoot Condition="'$(Export)'==''">Export</ExportRoot>
        </PropertyGroup>
        <Target Name="Clean_Export">
          <RemoveDir Directories="$(ExportRoot)\My.Project.Dir" />
        </Target>
        <Target Name="Export_MyProject">
          <Exec Command="svn export svn://xxx/trunk/Biztalk2009/MyProject.btproj --force" WorkingDirectory="$(ExportRoot)" />
        </Target>
        <Target Name="Build_MyProject" DependsOnTargets="Export_MyProject">
          <MSBuild Projects="$(ExportRoot)\My.Project.Dir\MyProject.btproj" Targets="Build" Properties="Configuration=Release"></MSBuild>
        </Target>
        <Target Name="Deploy_MyProject" DependsOnTargets="Build_MyProject">
          <Exec Command="BTSTask AddResource -ApplicationName:CORE -Source:MyProject.dll" />
        </Target>
      </Project>

    The projects it depends upon look almost exactly like this (other .btproj and .csproj).
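    One possible shape for the reuse the question asks about, sketched only: move the generic export/build/deploy targets into a shared .targets file driven by properties, then invoke it once per assembly. The property names (SvnUrl, ProjectDir, ProjectFile, BizTalkApp, AssemblyFile) are illustrative, not an established convention.

      <!-- Deploy.targets: generic, parameterised targets (sketch) -->
      <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <Target Name="Export">
          <Exec Command="svn export $(SvnUrl) --force" WorkingDirectory="$(ExportRoot)" />
        </Target>
        <Target Name="Build" DependsOnTargets="Export">
          <MSBuild Projects="$(ExportRoot)\$(ProjectDir)\$(ProjectFile)" Targets="Build" Properties="Configuration=Release" />
        </Target>
        <Target Name="Deploy" DependsOnTargets="Build">
          <Exec Command="BTSTask AddResource -ApplicationName:$(BizTalkApp) -Source:$(AssemblyFile)" />
        </Target>
      </Project>

      <!-- Caller: placed inside a target of the wrapper project, one invocation per assembly -->
      <MSBuild Projects="Deploy.targets" Targets="Deploy"
               Properties="ExportRoot=Export;SvnUrl=svn://xxx/trunk/Biztalk2009/MyProject.btproj;ProjectDir=My.Project.Dir;ProjectFile=MyProject.btproj;BizTalkApp=CORE;AssemblyFile=MyProject.dll" />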

    Read the article

  • Compiling a GStreamer plugin in Windows

    - by utnapistim
    Hello all,
    My question: what is the correct way to compile a GStreamer plugin in Windows so that it will be accepted by GStreamer (actually Songbird on top of GStreamer)?

    My setup: I have downloaded the Songbird sources following the steps described here, and I have a trunk/dependencies/windows-i686-msvc8 directory within my svn sources with all the GStreamer binaries. I have created an empty GStreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the GStreamer binaries in the Songbird dependencies folder. The compilation was done with VS2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the GStreamer Plugin Writer's Guide. The DLL was linked with libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib, gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib and nspr4.lib, ignoring all default libraries. I have compiled the files as both .c and .cpp with the same results.

    Testing: I have installed the Songbird binaries corresponding to the correct svn version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my GStreamer plugin. Songbird will not load the plugin. I have also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.

    Read the article

  • Recommendations for handling development and deployment of PHP web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to develop a project that depends on code from other projects, and to deploy the resulting project to the server. I am planning to put my code in svn, and have the shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with PHP projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code. Ideally I really want to be able to develop simultaneously on one project and the projects it depends on.

    Possible way: check out a project's dependency in a sub folder as a working copy of the trunk. Problems I foresee: when you want to deploy a project, you might want to freeze its dependencies, right? The dependency code should not end up as a duplicate in the project's repository, I think. (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks, see my comment. I am still looking for suggestions that do not require the use of junction points -- they are a sort of unsupported hack in Windows XP which may break some programs.)

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly tied to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.
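    On the "freeze its dependencies" point, one option (a sketch; the paths and URL are placeholders) is to keep using svn:externals but pin each external to a specific revision, so a deployed tag never drifts:

      # Pin the shared library to revision 1234 (svn:externals definition on the project root):
      svn propset svn:externals 'shared-lib -r1234 http://svn.example.com/shared/trunk' .
      svn commit -m "Freeze shared-lib at r1234 for this release"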

    Read the article

  • action_mailer_tls and tlsmail gems not working on Heroku after 'git push heroku'

    - by user163352
    I'm using Heroku as my host. It was working fine; then I installed action_mailer_tls and tlsmail, committed, and pushed to Heroku. After that I get an error on myapp.heroku.com:

      /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- smtp_tls (MissingSourceFile)
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:158:in `require'
          from /disk1/home/slugs/154378_e47562d_b59c/mnt/config/initializers/smtp_gmail.rb:3
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:147:in `load_without_new_constant_marking'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.3/lib/active_support/dependencies.rb:147:in `load'
          from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/initializer.rb:622:in `load_application_initializers'
          from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/initializer.rb:621:in `each'
          from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/initializer.rb:621:in `load_application_initializers'
           ... 19 levels...
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
          from /home/heroku_rack/heroku.ru:1:in `new'
          from /home/heroku_rack/heroku.ru:1

    Do I need to push the gems? I tried "git add .gems", but that also gives a fatal error. Any suggestion would be greatly appreciated.

    Read the article

  • Maven: Repository no longer exists! Now what?

    - by pek
    I have just begun my journey in Maven2 and found the dependency repository logic a little weird... As far as I understand, I need to point Maven to a repository from which it can fetch the various POMs found in my dependencies. In other words, instead of downloading all the dependencies into my lib folder, as I did in the Ant era, I now have to look into various Maven repositories and, hopefully, find what I need. OK, thanks to MVNBrowser things get a little easier. But! What if the Maven repository no longer exists? For example, I use Slick in my project. Among its other dependencies, Slick uses JNLP (for some reason). The artifact for jnlp is:

      <dependency>
        <groupId>javax.jnlp</groupId>
        <artifactId>jnlp</artifactId>
        <version>1.2</version>
      </dependency>

    According to MVNBrowser, javax.jnlp can only be found in one repository, Freehep. Which is no longer available. So now what?
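    One common way out (a sketch; the Slick coordinates shown are illustrative, not verified) is to exclude the unavailable transitive artifact from the dependency that drags it in, and only re-add the jar by hand -- for example via mvn install:install-file into a local or internal repository -- if its classes are actually needed at runtime:

      <dependency>
        <groupId>slick</groupId>              <!-- illustrative groupId/artifactId -->
        <artifactId>slick</artifactId>
        <version>274</version>                <!-- illustrative version -->
        <exclusions>
          <exclusion>
            <groupId>javax.jnlp</groupId>
            <artifactId>jnlp</artifactId>
          </exclusion>
        </exclusions>
      </dependency>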

    Read the article

  • Is there any way to provide a custom factory for entity creation in EF4?

    - by ILICH
    There are a lot of posts about how cool POCO objects are and how Entity Framework 4 supports them. I decided to try it out with a domain-driven-development-oriented architecture and finished with domain entities that have dependencies on services. So far so good. Imagine my Products are POCO objects. When I query for objects like this:

      NorthwindContext db = new NorthwindContext();
      var products = db.Products.ToList();

    EF creates the product instances for me. Now I want to inject dependencies into my POCO objects (products). The only way I can see is to add a method to NorthwindContext along the lines of the pseudo-code below:

      public List<Product> GetProducts()
      {
          var products = database.Products.ToList();
          container.BuildUp(products); // inject dependencies
          return products;
      }

    But what if I want to make my repository more flexible, like this:

      public ObjectSet<Product> GetProducts() { ... }

    So I really need a factory to make it lazier and LINQ-friendly. Please help!
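    A sketch of one EF4 hook that fits this (assuming NorthwindContext is the usual partial class deriving from ObjectContext, and 'container' is the same DI container as in the pseudo-code above): the ObjectMaterialized event fires for every entity EF creates, including entities streamed out of LINQ queries, so injection stays lazy and query-friendly without a special repository method.

      using System;
      using System.Data.Objects;

      public partial class NorthwindContext   // the EF-generated ObjectContext is a partial class
      {
          // Call once after construction; 'inject' wraps whatever the container's BuildUp does.
          public void EnableDependencyInjection(Action<object> inject)
          {
              ObjectMaterialized += (sender, e) => inject(e.Entity);
          }
      }

      // Usage, reusing the container from the pseudo-code above:
      //   var db = new NorthwindContext();
      //   db.EnableDependencyInjection(entity => container.BuildUp(entity));
      //   var products = db.Products.ToList();   // each Product is built up as it is materialized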

    Read the article

  • PowerShell: finding services using a cmdlet DLL

    - by bartonm
    I need to upgrade a DLL assembly, written in C#, in our installation. Before I replace the DLL file, I want to check whether the file is locked and, if so, display a message. How do I implement this in PowerShell? I was thinking of iterating through Get-Process and checking module dependencies.

    Solved: I iterated through the list looking for a file path match.

      function IsCaradigmPowershellDLLFree()
      {
          # The list of DLLs to check for locks by running processes.
          $DllsToCheckForLocks = "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.dll",
                                 "$env:ProgramFiles\Caradigm Platform\System 3.0\Platform\PowerShell\Caradigm.Platform.Powershell.InternalPlatformSetup.dll";

          # Assume true, then check all process dependencies.
          $result = $true;

          # Iterate through each process and check module dependencies.
          foreach ($p in Get-Process)
          {
              # Iterate through each dll used in a given process.
              foreach ($m in Get-Process -Name $p.ProcessName -Module -ErrorAction SilentlyContinue)
              {
                  # Check if the dll dependency matches any DLL in the list.
                  foreach ($dll in $DllsToCheckForLocks)
                  {
                      # Compare the fully-qualified file paths;
                      # if there's a match then a lock exists.
                      if ( ($m.FileName.CompareTo($dll) -eq 0) )
                      {
                          $pName = $p.ProcessName.ToString()
                          Write-Error "$dll is locked by $pName. This dll must have zero locks prior to upgrade. Stop this service to release the lock on $m."
                          $result = $false;
                      }
                  }
              }
          }
          return $result;
      }

    Read the article

  • Strategies for dealing with Circular references caused by JPA relationships?

    - by ams
    I am trying to partition my application into modules by feature, packaged into separate jars such as feature-a.jar, feature-b.jar, etc. An individual feature jar such as feature-a.jar should contain all the code for feature A, including JPA entities, business logic, REST APIs, unit tests, integration tests, and so on. The problem I am running into is that bi-directional relationships between JPA entities cause circular references between the jar files. For example, the Customer entity should be in customer.jar and Order should be in order.jar, but Customer references Order and Order references Customer, making it hard to split them into separate jars / Eclipse projects.

    Options I see for dealing with the circular dependencies in JPA entities:
    Option 1: put all the entities into one jar / one project.
    Option 2: don't map certain bi-directional relationships, to avoid circular dependencies across projects.

    Questions:
    1. What rules / principles have you used to decide when to do bi-directional mapping vs. not?
    2. Have you been able to break JPA entities into their own projects / jars by feature? If so, how did you avoid the circular dependency issues?
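    For concreteness, a minimal sketch (plain JPA annotations; field and table names are illustrative) of the bi-directional Customer/Order mapping described above -- it is the two mutual references that force each jar to see the other's classes:

      // in customer.jar
      import java.util.List;
      import javax.persistence.*;

      @Entity
      public class Customer {
          @Id private Long id;

          @OneToMany(mappedBy = "customer")
          private List<Order> orders;      // customer.jar now depends on order.jar
      }

      // in order.jar
      import javax.persistence.*;

      @Entity
      @Table(name = "orders")              // "order" is a reserved word in SQL
      public class Order {
          @Id private Long id;

          @ManyToOne
          private Customer customer;       // order.jar now depends on customer.jar
      }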

    Read the article

  • GNU/Linux development n00b needs help porting a C++ application from Windows to GNU/Linux

    - by AndrejaKo
    Hi! These questions may not be perfectly suited for this site, so I apologize in advance for asking them here. I'm trying to port a computer game from Windows to GNU/Linux. It uses Ogre3D, CEGUI, ogreogg and ogrenewt. As far as I know all dependencies work on GNU/Linux, and in the game itself there is no OS-specific code. Here are the questions:

    1. Is there any easy way to port a Visual Studio 2008 project to a GNU/Linux tool-chain?
    2. How do I manage dependencies? In Visual Studio, I'd just add them in property sheets or default directories. I assume on GNU/Linux autoconf and make take care of that, but in which way? Do I have to add each .cpp and .hpp manually, or is there some way to automate things (see the sketch after this list)? How do I solve the problem of dependencies being in different locations on different systems?
    3. I'd like to use Eclipse as my IDE under GNU/Linux.

    I know that the best answer to most of my questions is RTFM, but I'm not sure what exactly I need and where to start looking.
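    On the dependency-management question, a common pattern on GNU/Linux is to let pkg-config hand the compiler the include and linker flags, and to glob the sources instead of listing each file by hand. A rough sketch (the pkg-config module names OGRE and CEGUI are assumptions; check what is installed with `pkg-config --list-all`):

      # one-shot build, globbing sources and pulling flags from pkg-config
      g++ -Wall -o game src/*.cpp \
          $(pkg-config --cflags OGRE CEGUI) \
          $(pkg-config --libs   OGRE CEGUI)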

    Read the article

  • "no such file to load" for several gems unpacked in a Rails 2.3.8 app

    - by vincentp
    Hi, I unpacked several gems into the /vendor/gems folder, and I get the same error message for 5 of these gems when I try to start my Rails application. Taking the date-performance one as an example:

      no such file to load -- date_performance.so
      /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
      /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
      /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
      /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
      /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
      /path_to_my_app/vendor/gems/date-performance-0.4.8/lib/date/performance.rb:34
      ...

    Here is line 34:

      require 'date_performance.so'

    I'm including the gem using the following code:

      config.gem "date-performance", :lib => "date/performance"

    The '.so' file is under /path_to_my_app/vendor/gems/date-performance-0.4.8/lib/. Any idea why things were working while the gems were not unpacked? Do you have any idea about this behavior? I'm using:

      Rails 2.3.8
      REE 1.8.7
      gem 1.3.6
      Mac OS X

    Thanks!
    Vincent

    Read the article

  • OOP + MVC advice on Member Controller

    - by dan727
    Hi, I am trying to follow good practices as much as possible while learning to use OOP in an MVC structure, so I'm turning to you guys for a bit of advice on something which is bothering me a little here. I am writing a site where I will have a number of different forms for members to fill in (mainly data about themselves), so I've decided to set up a Member controller where all of the forms relating to the member are represented as individual methods. This includes login/logout methods, as well as methods for editing profile data etc. In addition to these methods, I also have a method to generate the member's control panel widget, which is a constant on every page of the site while the member is logged in.

    The only thing is, all of the other methods in this controller have the same dependencies and form templates, so it would be great to generate all this in the constructor; but as the control_panel method does not have the same dependencies, I cannot use the constructor for this purpose, and instead I have to redeclare the dependencies and the same template snippets in each method. This obviously isn't ideal and doesn't follow the DRY principle, but I'm wondering what I should do with the control_panel method, as it is related to the member and that's why I put it in that controller in the first place. Am I just over-complicating things here, and does it make sense to just move the control_panel method into a simple helper class?

    Here are the basic methods of the controller:

      class Member_Controller extends Website_Controller
      {
          public function __construct()
          {
              parent::__construct();

              if (request::is_ajax()) {
                  $this->auto_render = FALSE; // disable auto render
              }
          }

          public static function control_panel()
          {
              // load control panel view
              $panel = new View('user/control_panel');
              return $panel;
          }

          public function login() { }
          public function register() { }
          public function profile() { }
          public function household() { }
          public function edit_profile() { }
          public function logout() { }
      }

    Read the article

  • Rendering ASP.NET Script References into the Html Header

    - by Rick Strahl
    One thing that I've come to appreciate in ASP.NET control development that uses JavaScript is the ability to have more control over script and script-include placement than ASP.NET provides natively. Specifically, in ASP.NET you can use either the ClientScriptManager or ScriptManager to embed scripts and script references into pages via code. This works reasonably well, but the script references that get generated are rendered into the HTML body and there's very little operational control over placement of scripts. If you have multiple controls, or several instances of the same control, that need to place the same scripts onto the page, it's not difficult to end up with scripts that render in the wrong order and stop working correctly. This is especially critical if you load script libraries with dependencies, either via resources or when rendering references to CDN resources.

    Natively, ASP.NET provides a host of methods that help with embedding scripts into the page via either Page.ClientScript or the ASP.NET ScriptManager control (both with slightly different syntax):

    RegisterClientScriptBlock - Renders a script block at the top of the HTML body and should be used for embedding callable functions/classes.
    RegisterStartupScript - Renders a script block just prior to the </form> tag and should be used for embedding code that should execute when the page is first loaded. Not recommended - use jQuery.ready() or equivalent load-time routines.
    RegisterClientScriptInclude - Embeds a reference to a script from a URL into the page.
    RegisterClientScriptResource - Embeds a reference to a script from a resource file, generating a long resource file string.

    All 4 of these methods render their <script> tags into the HTML body. The script blocks give you a little bit of control by having a 'top' and 'bottom' of the document location, which gives you some flexibility over script placement and precedence. Script includes and resource URLs unfortunately do not even get that much control - references are simply rendered into the page in the order of declaration. The ASP.NET ScriptManager control facilitates this task a little bit with the ability to specify scripts in code and to programmatically check what scripts have already been registered, but it doesn't provide any more control over the script rendering process itself. Further, the ScriptManager is a bear to deal with generically because generic code always has to check whether it is actually present. Some time ago I posted a ClientScriptProxy class that helps with managing the latter process of sending script references either to ClientScript or to ScriptManager if it's available. Since I last posted about this there have been a number of improvements in this API, one of which is the ability to control placement of scripts and script includes in the page, which I think is rather important and a missing feature in the ASP.NET native functionality.
    Handling ScriptRenderModes

    One of the big enhancements that I've come to rely on is the ability of the various script rendering functions described above to support rendering in multiple locations:

      /// <summary>
      /// Determines how scripts are included into the page
      /// </summary>
      public enum ScriptRenderModes
      {
          /// <summary>
          /// Inherits the setting from the control or from the ClientScript.DefaultScriptRenderMode
          /// </summary>
          Inherit,
          /// <summary>
          /// Renders the script include at the location of the control
          /// </summary>
          Inline,
          /// <summary>
          /// Renders the script include into the bottom of the header of the page
          /// </summary>
          Header,
          /// <summary>
          /// Renders the script include into the top of the header of the page
          /// </summary>
          HeaderTop,
          /// <summary>
          /// Uses ClientScript or ScriptManager to embed the script include to
          /// provide standard ASP.NET style rendering in the HTML body.
          /// </summary>
          Script,
          /// <summary>
          /// Renders script at the bottom of the page before the last Page.Controls
          /// literal control. Note this may result in unexpected behavior
          /// if /body and /html are not the last thing in the markup page.
          /// </summary>
          BottomOfPage
      }

    This enum is then applied to the various Register functions to allow more control over where scripts actually show up. Why is this useful? I often render scripts out of control resources, and these scripts often include things like a JavaScript library (jQuery) and a few plug-ins. The order in which these are loaded is critical, so that jQuery.js always loads before any plug-in, for example. Typically I end up with a general script layout like this:

    Core libraries - HeaderTop
    Plug-ins - Header
    Script blocks - Header or Script, depending on other dependencies

    There's also an option to render scripts and CSS at the very bottom of the page before the last Page control, which can be useful for speeding up page load when lots of scripts are loaded. The API syntax of the ClientScriptProxy methods is closely compatible with ScriptManager's, using static methods and control references to gain access to the page and embed scripts.
For example, to render some script into the current page in the header: // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Same again - shouldn't be rendered because it's the same id ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "function helloWorld() { alert('hello'); }", true, ScriptRenderModes.Header); // Create a second script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function2", "function helloWorld2() { alert('hello2'); }", true, ScriptRenderModes.Header); // This just calls ClientScript and renders into bottom of document ClientScriptProxy.Current.RegisterStartupScript(this,typeof(ControlResources), "call_hello", "helloWorld();helloWorld2();", true); which generates: <html xmlns="http://www.w3.org/1999/xhtml" > <head><title> </title> <script type="text/javascript"> function helloWorld() { alert('hello'); } </script> <script type="text/javascript"> function helloWorld2() { alert('hello2'); } </script> </head> <body> … <script type="text/javascript"> //<![CDATA[ helloWorld();helloWorld2();//]]> </script> </form> </body> </html> Note that the scripts are generated into the header rather than the body except for the last script block which is the call to RegisterStartupScript. In general I wouldn’t recommend using RegisterStartupScript – ever. It’s a much better practice to use a script base load event to handle ‘startup’ code that should fire when the page first loads. So instead of the code above I’d actually recommend doing: ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "call_hello", "$().ready( function() { alert('hello2'); });", true, ScriptRenderModes.Header); assuming you’re using jQuery on the page. For script includes from a Url the following demonstrates how to embed scripts into the header. 
This example injects a jQuery and jQuery.UI script reference from the Google CDN then checks each with a script block to ensure that it has loaded and if not loads it from a server local location: // load jquery from CDN ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js", ScriptRenderModes.HeaderTop); // check if jquery loaded - if it didn't we're not online string scriptCheck = @"if (typeof jQuery != 'object') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));"; string jQueryUrl = ClientScriptProxy.Current.GetWebResourceUrl(this, typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jquery_register", string.Format(scriptCheck,jQueryUrl),true, ScriptRenderModes.HeaderTop); // Load jquery-ui from cdn ClientScriptProxy.Current.RegisterClientScriptInclude(this, typeof(ControlResources), "http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js", ScriptRenderModes.Header); // check if we need to load from local string jQueryUiUrl = ResolveUrl("~/scripts/jquery-ui-custom.min.js"); ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "jqueryui_register", string.Format(scriptCheck, jQueryUiUrl), true, ScriptRenderModes.Header); // Create script block in header ClientScriptProxy.Current.RegisterClientScriptBlock(this, typeof(ControlResources), "hello_function", "$().ready( function() { alert('hello'); });", true, ScriptRenderModes.Header); which in turn generates this HTML: <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=DIykvYhJ_oXCr-TA_dr35i4AayJoV1mgnQAQGPaZsoPM2LCdvoD3cIsRRitHKlKJfV5K_jQvylK7tsqO3lQIFw2&t=633979863959332352' type='text/javascript'%3E%3C/script%3E")); </script> <title> </title> <script src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof jQuery != 'object') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/scripts/jquery-ui-custom.min.js' type='text/javascript'%3E%3C/script%3E")); </script> <script type="text/javascript"> $().ready(function() { alert('hello'); }); </script> </head> <body> …</body> </html> As you can see there’s a bit more control in this process as you can inject both script includes and script blocks into the document at the top or bottom of the header, plus if necessary at the usual body locations. This is quite useful especially if you create custom server controls that interoperate with script and have certain dependencies. The above is a good example of a useful switchable routine where you can switch where scripts load from by default – the above pulls from Google CDN but a configuration switch may automatically switch to pull from the local development copies if your doing development for example. How does it work? As mentioned the ClientScriptProxy object mimicks many of the ScriptManager script related methods and so provides close API compatibility with it although it contains many additional overloads that enhance functionality. 
It does however work against ScriptManager if it’s available on the page, or Page.ClientScript if it’s not so it provides a single unified frontend to script access. There are however many overloads of the original SM methods like the above to provide additional functionality. The implementation of script header rendering is pretty straight forward – as long as a server header (ie. it has to have runat=”server” set) is available. Otherwise these routines fall back to using the default document level insertions of ScriptManager/ClientScript. Given that there is a server header it’s relatively easy to generate the script tags and code and append them to the header either at the top or bottom. I suspect Microsoft didn’t provide header rendering functionality precisely because a runat=”server” header is not required by ASP.NET so behavior would be slightly unpredictable. That’s not really a problem for a custom implementation however. Here’s the RegisterClientScriptBlock implementation that takes a ScriptRenderModes parameter to allow header rendering: /// <summary> /// Renders client script block with the option of rendering the script block in /// the Html header /// /// For this to work Header must be defined as runat="server" /// </summary> /// <param name="control">any control that instance typically page</param> /// <param name="type">Type that identifies this rendering</param> /// <param name="key">unique script block id</param> /// <param name="script">The script code to render</param> /// <param name="addScriptTags">Ignored for header rendering used for all other insertions</param> /// <param name="renderMode">Where the block is rendered</param> public void RegisterClientScriptBlock(Control control, Type type, string key, string script, bool addScriptTags, ScriptRenderModes renderMode) { if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; if (control.Page.Header == null || renderMode != ScriptRenderModes.HeaderTop && renderMode != ScriptRenderModes.Header && renderMode != ScriptRenderModes.BottomOfPage) { RegisterClientScriptBlock(control, type, key, script, addScriptTags); return; } // No dupes - ref script include only once const string identifier = "scriptblock_"; if (HttpContext.Current.Items.Contains(identifier + key)) return; HttpContext.Current.Items.Add(identifier + key, string.Empty); StringBuilder sb = new StringBuilder(); // Embed in header sb.AppendLine("\r\n<script type=\"text/javascript\">"); sb.AppendLine(script); sb.AppendLine("</script>"); int? index = HttpContext.Current.Items["__ScriptResourceIndex"] as int?; if (index == null) index = 0; if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if(renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1,new LiteralControl(sb.ToString())); HttpContext.Current.Items["__ScriptResourceIndex"] = index; } Note that the routine has to keep track of items inserted by id so that if the same item is added again with the same key it won’t generate two script entries. Additionally the code has to keep track of how many insertions have been made at the top of the document so that entries are added in the proper order. 
The RegisterScriptInclude method is similar but there’s some additional logic in here to deal with script file references and ClientScriptProxy’s (optional) custom resource handler that provides script compression /// <summary> /// Registers a client script reference into the page with the option to specify /// the script location in the page /// </summary> /// <param name="control">Any control instance - typically page</param> /// <param name="type">Type that acts as qualifier (uniqueness)</param> /// <param name="url">the Url to the script resource</param> /// <param name="ScriptRenderModes">Determines where the script is rendered</param> public void RegisterClientScriptInclude(Control control, Type type, string url, ScriptRenderModes renderMode) { const string STR_ScriptResourceIndex = "__ScriptResourceIndex"; if (string.IsNullOrEmpty(url)) return; if (renderMode == ScriptRenderModes.Inherit) renderMode = DefaultScriptRenderMode; // Extract just the script filename string fileId = null; // Check resource IDs and try to match to mapped file resources // Used to allow scripts not to be loaded more than once whether // embedded manually (script tag) or via resources with ClientScriptProxy if (url.Contains(".axd?r=")) { string res = HttpUtility.UrlDecode( StringUtils.ExtractString(url, "?r=", "&", false, true) ); foreach (ScriptResourceAlias item in ScriptResourceAliases) { if (item.Resource == res) { fileId = item.Alias + ".js"; break; } } if (fileId == null) fileId = url.ToLower(); } else fileId = Path.GetFileName(url).ToLower(); // No dupes - ref script include only once const string identifier = "script_"; if (HttpContext.Current.Items.Contains( identifier + fileId ) ) return; HttpContext.Current.Items.Add(identifier + fileId, string.Empty); // just use script manager or ClientScriptManager if (control.Page.Header == null || renderMode == ScriptRenderModes.Script || renderMode == ScriptRenderModes.Inline) { RegisterClientScriptInclude(control, type,url, url); return; } // Retrieve script index in header int? index = HttpContext.Current.Items[STR_ScriptResourceIndex] as int?; if (index == null) index = 0; StringBuilder sb = new StringBuilder(256); url = WebUtils.ResolveUrl(url); // Embed in header sb.AppendLine("\r\n<script src=\"" + url + "\" type=\"text/javascript\"></script>"); if (renderMode == ScriptRenderModes.HeaderTop) { control.Page.Header.Controls.AddAt(index.Value, new LiteralControl(sb.ToString())); index++; } else if (renderMode == ScriptRenderModes.Header) control.Page.Header.Controls.Add(new LiteralControl(sb.ToString())); else if (renderMode == ScriptRenderModes.BottomOfPage) control.Page.Controls.AddAt(control.Page.Controls.Count-1, new LiteralControl(sb.ToString())); HttpContext.Current.Items[STR_ScriptResourceIndex] = index; } There’s a little more code here that deals with cleaning up the passed in Url and also some custom handling of script resources that run through the ScriptCompressionModule – any script resources loaded in this fashion are automatically cached based on the resource id. Raw urls extract just the filename from the URL and cache based on that. All of this to avoid doubling up of scripts if called multiple times by multiple instances of the same control for example or several controls that all load the same resources/includes. 
Finally RegisterClientScriptResource utilizes the previous method to wrap the WebResourceUrl as well as some custom functionality for the resource compression module: /// <summary> /// Returns a WebResource or ScriptResource URL for script resources that are to be /// embedded as script includes. /// </summary> /// <param name="control">Any control</param> /// <param name="type">A type in assembly where resources are located</param> /// <param name="resourceName">Name of the resource to load</param> /// <param name="renderMode">Determines where in the document the link is rendered</param> public void RegisterClientScriptResource(Control control, Type type, string resourceName, ScriptRenderModes renderMode) { string resourceUrl = GetClientScriptResourceUrl(control, type, resourceName); RegisterClientScriptInclude(control, type, resourceUrl, renderMode); } /// <summary> /// Works like GetWebResourceUrl but can be used with javascript resources /// to allow using of resource compression (if the module is loaded). /// </summary> /// <param name="control"></param> /// <param name="type"></param> /// <param name="resourceName"></param> /// <returns></returns> public string GetClientScriptResourceUrl(Control control, Type type, string resourceName) { #if IncludeScriptCompressionModuleSupport // If wwScriptCompression Module through Web.config is loaded use it to compress // script resources by using wcSC.axd Url the module intercepts if (ScriptCompressionModule.ScriptCompressionModuleActive) { string url = "~/wwSC.axd?r=" + HttpUtility.UrlEncode(resourceName); if (type.Assembly != GetType().Assembly) url += "&t=" + HttpUtility.UrlEncode(type.FullName); return WebUtils.ResolveUrl(url); } #endif return control.Page.ClientScript.GetWebResourceUrl(type, resourceName); } This code merely retrieves the resource URL and then simply calls back to RegisterClientScriptInclude with the URL to be embedded which means there’s nothing specific to deal with other than the custom compression module logic which is nice and easy. What else is there in ClientScriptProxy? ClientscriptProxy also provides a few other useful services beyond what I’ve already covered here: Transparent ScriptManager and ClientScript calls ClientScriptProxy includes a host of routines that help figure out whether a script manager is available or not and all functions in this class call the appropriate object – ScriptManager or ClientScript – that is available in the current page to ensure that scripts get embedded into pages properly. This is especially useful for control development where controls have no control over the scripting environment in place on the page. RegisterCssLink and RegisterCssResource Much like the script embedding functions these two methods allow embedding of CSS links. CSS links are appended to the header or to a form declared with runat=”server”. LoadControlScript Is a high level resource loading routine that can be used to easily switch between different script linking modes. It supports loading from a WebResource, a url or not loading anything at all. This is very useful if you build controls that deal with specification of resource urls/ids in a standard way. Check out the full Code You can check out the full code to the ClientScriptProxyClass here: ClientScriptProxy.cs ClientScriptProxy Documentation (class reference) Note that the ClientScriptProxy has a few dependencies in the West Wind Web Toolkit of which it is part of. 
    ControlResources holds a few standard constants and script resource links, and the ScriptCompressionModule is referenced in a few of the script inclusion methods. There's also a useful ScriptContainer companion control to the ClientScriptProxy that allows scripts to be placed in the page's markup, including the ability to specify the script location and script minification options. You can find all the dependencies in the West Wind Web Toolkit repository: West Wind Web Toolkit Repository | West Wind Web Toolkit Home Page

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, JavaScript

    Read the article

  • AIX 5.3 Package Dependency

    - by user60899
    I want to install gettext, but I cannot because my AIX says that gettext depends on glib, and when I try to install glib it says that I cannot because glib in turn depends on gettext. Please let me know how I can get past this situation.

      root [rover]% rpm -i gettext-0.17-1.aix5.1.ppc.rpm
      error: failed dependencies:
          libglib-2.0.a(libglib-2.0.so.0) is needed by gettext-0.17-1
          libxlsmp.a(smprt.o) is needed by gettext-0.17-1

      root [rover]% rpm -i glib2-2.22.5-2.aix5.1.ppc.rpm
      error: failed dependencies:
          gettext is needed by glib2-2.22.5-2

    Regards,
    Anurag
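    A sketch of the usual way around a dependency cycle like this: give rpm both packages in a single transaction, so it can resolve each one against the other (the libxlsmp requirement shown above is separate and still has to be satisfied on the system):

      rpm -i gettext-0.17-1.aix5.1.ppc.rpm glib2-2.22.5-2.aix5.1.ppc.rpm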

    Read the article

  • How does software installation in Linux work?

    - by Saif Bechan
    I am new to Linux and trying to set up a server. For this project I need some additional software, but how to install the software I'm interested in is not always obvious. For example, this was my experience with a PHP package called htscanner:

    - There is no installation guide on the website.
    - The website says that the version I used did not have any dependencies ("Release 0.8.1: No dependencies registered."), but this is incorrect.
    - There is no configuration guide on the website.
    - After I managed to get it installed it still didn't work.
    - I decided to just uninstall it and find a better solution. Uninstalling isn't straightforward; the best answer I got is to manually look for the files and delete them.

    What are the basic concepts of installing and uninstalling software on Linux? How is this supposed to work?
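    As a rough sketch of the usual flow (assuming a Debian/Ubuntu-style system, and that htscanner is installed as a PECL extension; package names vary by distribution): the distribution's package manager installs, tracks and removes software as packages, and language-specific tools such as pecl do the same for their own ecosystems.

      apt-cache search htscanner              # does the distribution ship it as a package?
      sudo apt-get install php5-dev php-pear  # build tools for PHP extensions (names are distro-specific)
      sudo pecl install htscanner             # PECL downloads, builds and registers the extension
      sudo pecl uninstall htscanner           # and removes it again

      sudo apt-get remove somepackage         # remove a distro package, keeping its config files
      sudo apt-get purge somepackage          # remove it including configuration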

    Read the article

  • Install RT Failed: DateTime >= 0.44 ...MISSING

    - by javano
    I am trying to install RT-4.0.5 (Request Tracker), but I keep getting the following output:

      $ make fixdeps
      <output cut>
      SOME DEPENDENCIES WERE MISSING.
      CORE missing dependencies:
          DateTime >= 0.44 ...MISSING
      make: *** [fixdeps] Error 1

    The full output is here (it's quite long): http://pastebin.com/raw.php?i=Tn7GrkYw

      $ lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description:    Ubuntu 8.04.4 LTS
      Release:        8.04
      Codename:       hardy

      $ perl --version
      This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux

      $ cpan --version
      /usr/local/bin/cpan version 1.57 calling Getopt::Std::getopts (version 1.06 [paranoid]),
      running under Perl version 5.14.2.
      [Now continuing due to backward compatibility and excessive paranoia.
       See ``perldoc Getopt::Std'' about $Getopt::Std::STANDARD_HELP_VERSION.]
      Nothing to install!

    I can't see why this is a problem:

      $ cpan DateTime
      Going to read '/root/.cpan/Metadata'
        Database was generated on Thu, 08 Mar 2012 16:11:26 GMT
      DateTime is up to date (0.72).

    Read the article

  • MariaDB doesn't upgrade, 2 versions are installed

    - by zahorak
    I have a server running on Debian wheezy with MariaDB and OwnCloud. A few days ago I wanted to update the packages because of the OwnCloud updates, but something went wrong. Usually in this case I'd probably try to remove and reinstall the problematic packages, but on a server used by different people that doesn't seem like a valid solution anymore. Here is my console output:

      user@server:~$ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these.
      The following packages have unmet dependencies:
       libmariadbclient18 : Depends: libmysqlclient18 (= 10.0.4+maria-1~wheezy) but 10.0.5+maria-1~wheezy is installed
       libmysqlclient18 : Depends: libmariadbclient18 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-client-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-client-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-server : Depends: mariadb-server-10.0 (= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
       mariadb-server-core-10.0 : Depends: libmariadbclient18 (>= 10.0.5+maria-1~wheezy) but 10.0.4+maria-1~wheezy is installed
      E: Unmet dependencies. Try using -f.

      user@server:~$ sudo apt-get upgrade -f
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following packages will be upgraded:
        libmariadbclient18 mariadb-server-10.0 owncloud
      3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      7 not fully installed or removed.
      Need to get 0 B/37.2 MB of archives.
      After this operation, 3,565 kB of additional disk space will be used.
      Do you want to continue [Y/n]? Y
      Preconfiguring packages ...
      (Reading database ... 35901 files and directories currently installed.)
      Preparing to replace libmariadbclient18 10.0.4+maria-1~wheezy (using .../libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb) ...
      Unpacking replacement libmariadbclient18 ...
      dpkg: error processing /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb (--unpack):
       trying to overwrite '/usr/lib/mysql/plugin/dialog.so', which is also in package mariadb-server-10.0 10.0.4+maria-1~wheezy
      Errors were encountered while processing:
       /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I tried removing the 10.0.4 version of libmariadbclient18, but I wasn't really successful in doing that. So this is my last hope: do you have any ideas how exactly I could fix this issue? Thanks very much.
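    For this particular failure (dpkg refusing to overwrite a file that has moved from mariadb-server-10.0 into libmariadbclient18), one commonly used escape hatch is sketched below; the force option should be used deliberately and only for this one package:

      # the .deb is already in apt's cache, as the output above shows
      sudo dpkg -i --force-overwrite /var/cache/apt/archives/libmariadbclient18_10.0.5+maria-1~wheezy_amd64.deb
      sudo apt-get -f install    # let apt finish configuring the remaining packages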

    Read the article

  • install git on RHEL3

    - by Dan Littlejohn
    I'm having a problem installing Git on Red Hat Enterprise Linux 3. When I try to install the RPM it gives a circular dependency problem:

      [root@tflaus001 tmp]# rpm -i git-1.5.2.1-1.el3.rf.i386.rpm
      warning: git-1.5.2.1-1.el3.rf.i386.rpm: V3 DSA signature: NOKEY, key ID 6b8d79e6
      error: Failed dependencies:
          perl(Git) is needed by git-1.5.2.1-1.el3.rf

      [root@tflaus001 tmp]# rpm -i perl-Git-1.5.2.1-1.el3.rf.i386.rpm
      warning: perl-Git-1.5.2.1-1.el3.rf.i386.rpm: V3 DSA signature: NOKEY, key ID 6b8d79e6
      error: Failed dependencies:
          git = 1.5.2.1-1.el3.rf is needed by perl-Git-1.5.2.1-1.el3.rf
          perl(Error) is needed by perl-Git-1.5.2.1-1.el3.rf

    Can anyone give me an idea of how to fix this, or what I need to add to yum.conf to fix this?
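    A sketch of one way to break the git <-> perl-Git cycle: hand rpm all the interdependent RPMs in a single transaction (assuming the perl-Error package from the same repository has also been downloaded, since the output shows perl(Error) is required too; that filename is illustrative):

      rpm -ivh git-1.5.2.1-1.el3.rf.i386.rpm \
               perl-Git-1.5.2.1-1.el3.rf.i386.rpm \
               perl-Error-*.rpm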

    Read the article

  • Given a Debian source package, how do I install the build-deps?

    - by Rory McCann
    I have a Debian (well, technically Ubuntu) source package -- i.e. the .dsc, the .tar.gz, etc. -- and I want to build it. dpkg-buildpackage fails, since I don't have all the build dependencies. Normally I'd use apt-get build-dep, but this package isn't in apt. Is there a 'clean', 'proper' way to install all the build dependencies, given a source package? I know I could just open the debian/control file, but I'm curious whether there's a 'proper' way.
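    A sketch of one clean approach (assuming the devscripts and equivs packages are available; the package filename is illustrative): mk-build-deps reads debian/control and generates a throwaway metapackage depending on everything in Build-Depends, so apt installs -- and can later remove -- the build dependencies as a unit.

      dpkg-source -x something_1.0-1.dsc     # unpack the source package
      cd something-1.0
      sudo mk-build-deps --install debian/control
      dpkg-buildpackage -us -uc -b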

    Read the article
