Search Results

Search found 22884 results on 916 pages for 'team build'.

Page 303/916

  • phing FtpDeploy "connection to host failed"

    - by Jorre
    I'm getting the following error when trying to deploy a ZIP file to a remote FTP server. I tried connecting to the server using an FTP client (FileZilla) and all goes well. Also, when connecting to a public FTP like ftp.belnet.be, connections work fine. I'm trying to send the file to a VSFTPD server behind a router using port forwarding. Again, this works fine from any location using FileZilla; phing is not connecting though...

        BUILD FAILED
        /deployment/build.xml:60:12: Could not connect to FTP server x.x.x.x on port 21: Connection to host failed

        Total time: 2 minutes 30.09 seconds

    Read the article

  • CMS for custom application

    - by RH
    We are building a custom application using LAMP, with the P being PHP. We also need a CMS to manage various aspects of the site. The two options for the CMS are:

      1. Build a complete custom CMS from scratch.
      2. Extend an existing open source CMS to fit our needs. This way we can use some of the features out of the box and build the others ourselves.

    I would like to get your feedback on the following:

      - What are your experiences with option 2?
      - Which CMS would you recommend that we can further customize and extend for our use?
      - What are the best ways to integrate a custom application with an external CMS?

    Read the article

  • Installing bitarray in Python 2.6 on Windows

    - by John Fouhy
    I would like to install bitarray on Windows running Python 2.6. I have MinGW32 installed, and I have C:\Python26\Lib\distutils\distutils.cfg set to:

        [build]
        compiler = mingw32

    If I type, in a cmd.exe window:

        C:\Documents and Settings\john\My Documents\bitarray-0.3.5>python setup.py install

    I get (normal Python messages skipped):

        C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Python26\PC -c bitarray/_bitarray.c -o build\temp.win32-2.6\Release\bitarray\_bitarray.o
        bitarray/_bitarray.c:2197: error: initializer element is not constant
        bitarray/_bitarray.c:2197: error: (near initialization for `BitarrayIter_Type.tp_getattro')
        bitarray/_bitarray.c:2206: error: initializer element is not constant
        bitarray/_bitarray.c:2206: error: (near initialization for `BitarrayIter_Type.tp_iter')
        bitarray/_bitarray.c:2232: error: initializer element is not constant
        bitarray/_bitarray.c:2232: error: (near initialization for `Bitarraytype.tp_getattro')
        bitarray/_bitarray.c:2253: error: initializer element is not constant
        bitarray/_bitarray.c:2253: error: (near initialization for `Bitarraytype.tp_alloc')
        bitarray/_bitarray.c:2255: error: initializer element is not constant
        bitarray/_bitarray.c:2255: error: (near initialization for `Bitarraytype.tp_free')
        error: command 'gcc' failed with exit status 1

    Can anyone help?
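    For context, the diagnostics above are gcc reporting that a file-scope (static) initializer is not a compile-time constant. A minimal C illustration of that error class follows; it is not code from _bitarray.c, just a sketch of what triggers the message:

        /* illustration only: not from _bitarray.c */
        #include <stdio.h>

        int get_value(void) { return 42; }

        /* In C, a file-scope initializer must be a compile-time constant;
           uncommenting the next line reproduces
           "error: initializer element is not constant": */
        /* int bad = get_value(); */

        int good = 42;   /* a constant initializer compiles fine */

        int main(void) {
            printf("%d\n", good);
            return 0;
        }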

    Read the article

  • How to get bash to insert ' in the output

    - by ~danieljamesthomas
    Hi everybody, I'm rather new to bash, and somehow just haven't found out what I'm doing wrong here (this is a small bash script calling my generator):

        if [ -n $folder ]; then
            $zorbalocation -q $generator -f -e files=\"$lFiles\" -e folder=\"lFolder\"
        else
            $zorbalocation -q $generator -f -e files=\"$lFiles\" -e folder=\".\"
        fi

    Now, obviously I want bash to execute these commands, depending on the content of folder. But, for some reason, bash insists on putting apostrophes (') around files=... and folder=... So it tries to execute

        ../../../zorba/build/bin/zorba -q generator.xq -f -e 'files="test.xqlib"' -e 'folder="."'

    instead of

        ../../../zorba/build/bin/zorba -q generator.xq -f -e files="test.xqlib" -e folder="."

    Does anybody know why bash insists on inserting the apostrophes there? A nice day to everyone, Danny

    Read the article

  • Error after installing scala plugin of netbeans

    - by ghedas
    I installed the Scala plugin in my NetBeans and followed the instructions on this page: http://wiki.netbeans.org/Scala68v1#Scala_Plugins_for_NetBeans_6.8_v1.x_.28RC2.29 Although everything completed correctly step by step, when I make an empty project (Hello world!), the project has an error! The empty project is:

        package scalaapplication1

        object Main {
          /**
           * @param args the command line arguments
           */
          def main(args: Array[String]): Unit = {
            println("Hello, world!")
          }
        }

    and the console error report is:

        ...\NetBeansProjects\ScalaApplication2\nbproject\build-impl.xml:403: The following error occurred while executing this line:
        ...\NetBeansProjects\ScalaApplication2\nbproject\build-impl.xml:236: scalac doesn't support the "fork" attribute

    Are there any suggestions about this?

    Read the article

  • How do I tell MSBuild to execute command-line calls in ASCII, not Unicode?

    - by Ben L
    I'm attempting to target VC7.1 (Visual Studio 2003 SP1) from Visual Studio 2010. I'm very close to getting it to work, but when I build, I get this error:

        1------ Build started: Project: AnExample, Configuration: Release Win32 ------
        1  Microsoft (R) 32-bit C/C++ Standard Compiler Version 13.10.6030 for 80x86
        1  Copyright (C) Microsoft Corporation 1984-2002. All rights reserved.
        1
        1  cl ÿ_/
        1
        1  cl : Command line warning D4024 : unrecognized source file type 'ÿ_/', object file assumed
        1  Microsoft (R) Incremental Linker Version 7.10.6030
        1  Copyright (C) Microsoft Corporation.  All rights reserved.
        1
        1  /out:.exe
        1  ¦/
        1  LINK : fatal error LNK1181: cannot open input file ' ¦/.obj'

    I know this is unsupported, but I thought I'd give it a go. Does anyone know how to force the output from MSBuild to be ASCII, or whether this is even the problem? There were some errors like this years ago related to the DDK, according to some other forums. Thanks.
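    Purely to illustrate the encoding suspicion raised in the question (an assumption, not something confirmed in the post): a UTF-16LE byte-order mark is the byte pair 0xFF 0xFE, which renders as "ÿþ" when a tool reads it as 8-bit ANSI text, close to the stray "ÿ" characters in the log above. A small C sketch showing the same bytes both ways:

        /* sketch: how a UTF-16LE BOM (0xFF 0xFE) looks when treated as ANSI bytes */
        #include <stdio.h>

        int main(void) {
            const unsigned char bom[] = { 0xFF, 0xFE };       /* UTF-16LE byte-order mark */
            printf("as hex bytes: %02X %02X\n", bom[0], bom[1]);
            printf("as ANSI text: %c%c\n", bom[0], bom[1]);   /* typically shows as y-umlaut, thorn */
            return 0;
        }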

    Read the article

  • How to avoid manual editing of manifest file

    - by Atara
    My application uses an isolated ActiveX control (outer) that depends on another ActiveX control (inner); both use registration-free COM. The generated manifest file contains only the information for the outer ActiveX, probably because Visual Studio cannot know that the outer ActiveX is using the inner one. When I re-build my project, I always need to manually add the information for the inner ActiveX to the manifest file; otherwise the application only shows the outer control, without the inner one. Is there a way to inform Visual Studio (2008) that I do not want it to re-generate the manifest file for each build? Will I have such an option if I upgrade to VS2010? Thanks, Atara

    Read the article

  • asp.net free webcontrol to display crosstab or pivot reports with column and row grouping, subtotals

    - by dev-cu
    Hello, I want to develop some crosstab (also known as pivot) reports in ASP.NET, with the x-axis and y-axis being dynamic and allowing grouping by row and column. For example: products on the y-axis and dates on the x-axis, with the body holding the number of sales of a given product on a given date. If the dates on the x-axis are years, I want subtotals for each month for a product (row) and subtotals of sales of all products for a date (column). I know there are products available to build reports, but I am using MySQL, so Reporting Services is not an option. It's not necessary for the client to build additional reports; I think the simplest solution is having a control to display such information rather than using Crystal Reports (which is not free) or something more complex. I want to know if there is a free control available to reach my goal. Does anybody know of such a control, or have a different idea? Thanks in advance.

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it's no longer headline news that governments have carried out large-scale data-mining programmes aimed at terrorism detection and at identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification of this action will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and, from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections.

    Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC "Big Data" challenges to be faced and potential lessons to be learned from these high-profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls, etc.? Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professionals' day-to-day challenges.

    So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data:

      - Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data.
      - Machine-Generated/Sensor Data – includes Call Detail Records ("CDR"), weblogs and trading systems data.
      - Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook.

    The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it's often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester, there are four key characteristics that define big data:

      - Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. It includes live data from packaged and custom applications – for example, app servers, web servers, databases, networks, virtual machines, telecom equipment, and much more.
      - Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management, as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed.
      - Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. Without question, all GRC professionals work in a dynamic environment, and as new services, new products or new business lines are added, or new marketing campaigns executed, new data types are needed to capture the resultant information.
      - Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action.

    For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints. Now, on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross-referenced against what is being said about the organization, or a similar peer organization, on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures, and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value.

    Big Challenges - Big Opportunities

    As pointed out by recent Forrester research, high-performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data. "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever-increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out-of-date or inadequate GRC systems all contributing to a higher cost of compliance and/or a higher risk profile than desired, a big data investment in GRC clearly falls into this category.

    However, to make the most of big data, organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able to integrate them with the pre-existing company data to be analyzed. GRC big data clearly gives the organization access to, and management over, a huge amount of often very sensitive information that, although it can help create a more risk-intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands for better information security and data protection, the sheer amount of information organizations deal with means that the need to quickly access, classify, protect and manage that information can quickly become a key issue from a legal, as well as a technical or operational, standpoint.
    However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected.

    The Right GRC & Big Data Partnership Becomes Key

    The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick-start, or even overseeing a big data project. To make a big data GRC initiative work and deliver the desired value, partnerships with companies who have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly, solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that GRC teams face on a daily basis. This has led to the creation of the GRC silos that are causing so many headaches today. The solutions that stand out, and should be explored, are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it.

    A Blueprint and Roadmap Service for Big Data

    Big data adoption is first and foremost a business decision. As such, it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward.

    Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:

      - Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
      - Conduct a big data readiness and Information Architecture maturity assessment
      - Develop a future-state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
      - Provide initial guidance on big data candidate selection for migrations or implementation
      - Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
      - Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
      - Conduct an executive workshop with recommendations and next steps

    There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions. Big data is here to stay and risk management certainly is not going anywhere, and ultimately the financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it.

    Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • How do I give MacPorts privileges?

    - by cojadate
    I tried to install the PostgreSQL server development libraries using MacPorts and got the following:

        Warning: MacPorts running without privileges. You may be unable to complete certain actions (e.g. install).
        ---> Computing dependencies for postgresql-server-devel
        ---> Dependencies to be installed: postgresql-devel
        ---> Building postgresql-devel
        Error: Target org.macports.build returned: shell command failed
        Error: The following dependencies failed to build: postgresql-devel
        Error: Status 1 encountered during processing.
        To report a bug, see <http://guide.macports.org/#project.tickets>

    So I guess that means I need to run MacPorts with privileges and try again. Unfortunately I've no idea how to give MacPorts privileges. I'm running OS X 10.6.3.

    Read the article

  • Image Viewer application, image processing with Display Data

    - by Harsha
    Hello all, I am working on an Image Viewer application and planning to build it in WPF. My images are usually larger than 3000x3500. After searching for a week, I got sample code from MSDN, but it is written in ATL COM. So I am planning to build the image viewer as follows: after reading the image, I will scale it down to my viewer size (the viewer is around 1000x1000). Let's call this image data the Display Data. Once this data is displayed, I will work only with it. For all image processing operations I will use the Display Data, and when the user chooses to save the image, I will apply all the operations to the original image data. My question is: is it OK to use the Display Data for showing and for the initial image processing operations?

    Read the article

  • hackage package dependencies and future-proof libraries

    - by yairchu
    In the dependencies section of a cabal file:

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0

    should I be doing something like

        Build-Depends: base >= 3 && < 5, transformers >= 0.2.0 && < 0.3.0

    (putting upper limits on the versions of packages I depend on) or not? I'll use a real example: my "List" package on Hackage (List monad transformer and class).

      - If I don't put the limit, my package could break by a change in "transformers".
      - If I do put the limit, a user that uses "transformers" but is using a newer version of it will not be able to use lift and liftIO with ListT, because it's only an instance of these classes of transformers-0.2.x.

    I guess that applications should always put upper limits so that they never break, so this question is only about libraries: shall I use the upper version limit on dependencies or not?

    Read the article

  • 3d user experience with HTML5 and Javascript

    - by chako
    I have to build a 3D user experience with HTML5 and whatever JS library provides the required functionality. The 3D scene consists of a cylindrical pipe and a surface. It has 360-degree rotation and can zoom in and out. As the user selects a parameter, the value of that parameter at various depths of the pipe in the surface should be displayed. I've searched for HTML5 3D and JS libraries and found that three.js could help with this. Also found this useful presentation on an HTML 3D engine: http://projects.mariusgundersen.net/OnGameStart/#1 . But as this is my first time with HTML5 3D modeling, how should I get started? What parameters should be considered? Which tools and libraries would best fit such requirements? I would like to create a 3D model using HTML5 and a JS 3D engine as shown in the 1st image.

    Read the article

  • MinGW - cross compile tool - latest version?

    - by Petike
    At the MinGW download page you can download the "Cross-Hosted MinGW Build Tool", which is a shell script to build the MinGW cross-compiler so that you can compile your programs on Linux for a Windows target. I downloaded that script, ran it and answered the interactive questions it asked me. I had to download some files, one of which is named "gcc-core". The latest version of the gcc-core source code I found on that page was "gcc-core-3.4.5-20060117-2-src.tar.gz", i.e. version 3.4.5. But on Ubuntu Linux I can download the precompiled "mingw32" package, which is version 4.2.1. How is it possible that the Ubuntu package version of MinGW is newer than the one from the MinGW homepage? So which is the latest version of the MinGW cross-compile tool?

    Read the article

  • QWT plugin for QT 4.5

    - by Extrakun
    Hi, I have gotten the latest QWT 5.1.2 for Qt 4.5 and managed to get it to compile. I am now trying to get the plugin to work in Qt Designer (with VS integration). I have placed the plugin files into the plugin/designer folder, but when attempting to load, I hit this error:

        Cannot load library qwt_desginer_plugin5.dll: The specified module cannot be found.

    I've done some searching on this issue; one page suggests moving the plugin to the VS integration folder, which does not exist for Program Files\Nokia\Vs4Addin. My Qt Designer is a debug-and-release build. (That is, if I use a debug build of the plugin it would complain that it is expecting a release build.)

    Read the article

  • mod_wsgi | linux installation error

    - by MMRUser
    I'm getting the following error when I try to install mod_wsgi:

        ./configure
        checking for apxs2... no
        checking for apxs... /usr/sbin/apxs
        checking Apache version... 2.2.3
        configure: creating ./config.status
        config.status: creating Makefile

        make
        /usr/sbin/apxs -c -I/usr/local/include/python2.6 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.6/config -lpython2.6 -lpthread -ldl -lutil -lm
        /apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.6 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo
        sh: /apr-1/build/libtool: No such file or directory
        apxs:Error: Command failed with rc=8323072 .
        make: *** [mod_wsgi.la] Error 1

    libtool is installed on my system. Versions: mod_wsgi 3.2, Apache 2.2, Python 2.6.

    Read the article

  • fcgio.cpp: In destructor 'virtual fcgi_streambuf::~fcgi_streambuf()':

    - by skyeagle
    I am attempting to build FastCGI on a Linux Ubuntu 10.x machine. I run the following commands:

        ./configure
        make

    and I get the following error:

        fcgio.cpp: In destructor 'virtual fcgi_streambuf::~fcgi_streambuf()':
        fcgio.cpp:50: error: 'EOF' was not declared in this scope
        fcgio.cpp: In member function 'virtual int fcgi_streambuf::overflow(int)':
        fcgio.cpp:70: error: 'EOF' was not declared in this scope
        fcgio.cpp:75: error: 'EOF' was not declared in this scope
        fcgio.cpp: In member function 'virtual int fcgi_streambuf::sync()':
        fcgio.cpp:86: error: 'EOF' was not declared in this scope
        fcgio.cpp:87: error: 'EOF' was not declared in this scope
        fcgio.cpp: In member function 'virtual int fcgi_streambuf::underflow()':
        fcgio.cpp:107: error: 'EOF' was not declared in this scope
        make[2]: *** [fcgio.lo] Error 1
        make[2]: Leaving directory `/somepath/fcgi-2.4.0/libfcgi'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/somepath/fcgi-2.4.0'
        make: *** [all] Error 2

    I notice that others have had the same problem and have asked this question in various fora, but I have not yet seen an answer. Has anyone ever managed to build FastCGI on Linux? How do I fix this problem?
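    As a side note on what the diagnostic itself means (a general C illustration, not the fcgi sources): EOF is a macro supplied by <stdio.h> in C (or <cstdio> in C++), and referencing it without that header in scope produces exactly this kind of "'EOF' was not declared" error. A minimal sketch:

        /* illustration only: EOF comes from <stdio.h>; remove the include and the
           compiler reports that EOF is undeclared, as in the build log above */
        #include <stdio.h>

        int main(void) {
            int c;
            while ((c = getchar()) != EOF) {   /* EOF expands to a negative int constant */
                putchar(c);
            }
            return 0;
        }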

    Read the article

  • Win32 call order

    - by DD
    Hi all, I have two windows that I send scripted input to. The procedure goes like this:

        BringWindowToTop( window1 );
        i = Build input structures( window1 );
        SendInput(i);

        BringWindowToTop( window2 );
        i = Build input structures( window2 );
        SendInput(i);

    I was having trouble with inputs not being sent at the correct time. I put delays after each call and saw that input from the first SendInput() was processed after window2 was brought to top. Same thing at the end of the loop as well. Are SendInput calls buffered? If so, how can I ensure serial execution of this code? Thanks
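    For illustration, a minimal C sketch of one way to pace the two calls so each batch is consumed while its window is still on top. The window titles, the sample 'A' keystroke, and the Sleep-based pacing are assumptions for the example, not taken from the post:

        /* sketch only: pace scripted input so each batch is processed while its
           target window is still in the foreground */
        #include <windows.h>

        static void send_input_to_window(HWND hwnd, INPUT *inputs, UINT count)
        {
            BringWindowToTop(hwnd);
            Sleep(100);                              /* give the window time to come to top */
            SendInput(count, inputs, sizeof(INPUT)); /* queue the batch for this window */
            Sleep(100);                              /* crude pacing before raising the next window */
        }

        int main(void)
        {
            /* placeholder window titles, not from the original post */
            HWND window1 = FindWindowA(NULL, "First target window");
            HWND window2 = FindWindowA(NULL, "Second target window");

            /* sample input: press and release the 'A' key */
            INPUT keys[2] = {0};
            keys[0].type = INPUT_KEYBOARD;
            keys[0].ki.wVk = 'A';
            keys[1].type = INPUT_KEYBOARD;
            keys[1].ki.wVk = 'A';
            keys[1].ki.dwFlags = KEYEVENTF_KEYUP;

            if (window1) send_input_to_window(window1, keys, 2);
            if (window2) send_input_to_window(window2, keys, 2);
            return 0;
        }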

    Read the article

  • ASP.NET MVC - Bizarre problem - suddenly lost all LINQ to SQL data context objects

    - by MikeD
    I was making an edit to a long-existing project. Specifically, I added some fields to a table and had to delete the table from the LINQ to SQL designer and re-add it. I also had to do the same for a view. I made some other code changes and went to build. Now my project won't build because it can't resolve any of the data context objects (all tables and views) in my code. I don't know what I did or how this happened. I have many tables and views in the project's L2S data context, so I don't want to redo it from scratch. Please, any suggestions on how to resolve this problem are greatly appreciated. Desperate! The error messages I am getting are the familiar:

        The type or namespace name 'equipment' could not be found (are you missing a using directive or an assembly reference?)

    Read the article

  • Delete or comment out non-working JUnit tests?

    - by Chris Knight
    I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the non-maintained JUnit tests. Do I:

      1) Comment them out, as they appear to have reasonable, if unmaintained, business logic in them, in the hope that someone eventually uncomments them and fixes them
      2) Delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter for evermore
      3) Track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well maintained and unit tested code base

    Read the article

  • Building a rails form to filter an index page?

    - by Schroedinger
    G'day guys, I'm having an issue with filtering the presentation of several thousand trade items I have in my system. As per the specs of the system we're building, we have to have a form that allows people to put in a start date and then an interval in minutes, to filter the presentation of the items. I've built my helper functions to return all of the trades within that interval, but I can't for the life of me properly build the form that returns a dateTime value and an integer value at the top of the index page. Any ideas? Would I have to build a separate model object to assign values to, or is there a simpler way?

    Read the article

  • executing a script from maven inside a multi module project

    - by Roman
    Hi everyone. I have a multi-module project. At the beginning of each build I would like to run a bat file, so I did the following:

        <profile>
          <id>deploy-db</id>
          <build>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.1.1</version>
              </plugin>
            </plugins>
            <pluginManagement>
              <plugins>
                <plugin>
                  <groupId>org.codehaus.mojo</groupId>
                  <artifactId>exec-maven-plugin</artifactId>
                  <version>1.1.1</version>
                  <executions>
                    <execution>
                      <phase>validate</phase>
                      <goals>
                        <goal>exec</goal>
                      </goals>
                      <inherited>false</inherited>
                    </execution>
                  </executions>
                  <configuration>
                    <executable>../database/schemas/import_databases.bat</executable>
                  </configuration>
                </plugin>
              </plugins>
            </pluginManagement>
          </build>
        </profile>

    When I run mvn verify -Pdeploy-db from the root, the script is executed over and over again, once in each of my modules. I want it to be executed only once, in the root module. What am I missing? Thanks

    Read the article
