Search Results

Search found 7776 results on 312 pages for 'configure in'.

  • Nginx as reverse proxy: how to properly configure gateway timeout?

    - by user1281376
    We have configured Nginx as a reverse proxy in front of an Apache server farm, but I'm running into trouble with the gateway timeouts. Our goal, in human-readable form, is: "Deliver a request within one second, but if it really takes longer, deliver anyway", which for me translates into "Try the first Apache server in the upstream for at most 500ms. If we get a timeout / an error, try the next one and so on until we finally succeed." Now our relevant configuration is this:

        location @proxy {
            proxy_pass http://apache$request_uri;
            proxy_connect_timeout 1s;
            proxy_read_timeout 2s;
        }

        [...]

        upstream apache {
            server 127.0.0.1:8001 max_fails=1 fail_timeout=10s;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
            server 10.1.x.x:8001 max_fails=1 fail_timeout=10s backup;
        }

    The problem here is that nginx seems to misunderstand this as "Try to get a response from the whole upstream cluster within one second and deliver a 50X error if we don't, without any limit on how long to try any upstream server", which is obviously not what we had in mind. Is there any way to get nginx to do what we want?
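
    A hedged sketch of one direction to try, not from the original thread: give each attempt its own budget and let proxy_next_upstream walk the server list; on nginx 1.7.5 or later, proxy_next_upstream_timeout caps the total time spent retrying. The 500ms/1s values mirror the stated goal. Note that proxy_read_timeout applies per attempt, between successive reads, so a backend that is slow but still streaming is treated differently from one that has gone silent.

        location @proxy {
            proxy_pass http://apache$request_uri;

            # Per-attempt budget: connect and first read within ~500ms.
            proxy_connect_timeout 500ms;
            proxy_read_timeout 500ms;

            # On error or timeout, move on to the next upstream server.
            proxy_next_upstream error timeout http_502 http_504;

            # nginx >= 1.7.5 only: total budget across all retries.
            proxy_next_upstream_timeout 1s;
        }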

  • How to configure permissions for a win2008 task running as Network Service to stop/start a service on a different system?

    - by weiji
    Well... title says it all, actually. We've got a scheduled task set up on a Windows Server 2003 box running as Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box. However, sc reports:

        [SC] OpenService FAILED 5: Access is denied.

    Running the same batch file via Windows Explorer has no issues, and my user account is part of the Administrators group, so I believe this is why there are no issues when I try it manually. Is this a permissions thing I enable for Network Service on the first server? Or do I enable permissions for Network Service somehow on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking to enable Network Service to access the service via the scheduled task.
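
    A hedged pointer, not from the original post: when a task runs as Network Service, it authenticates to other machines as the first computer's domain account (DOMAIN\MACHINE$), so the grant usually has to land on the target server's service ACL for that computer account. A sketch using sc's SDDL support follows; the service name and the trailing SID are placeholders to resolve for your own domain.

        :: Show the service's current security descriptor on the target box.
        sc \\targetserver sdshow TheService

        :: sdset REPLACES the whole descriptor, so paste back everything
        :: sdshow printed and append one ACE. RP = start, WP = stop.
        :: S-1-5-21-xxx-1103 is a placeholder for DOMAIN\TASKSERVER$ (the
        :: computer account of the box running the scheduled task).
        sc \\targetserver sdset TheService "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)(A;;RPWPCCLCSWLO;;;S-1-5-21-xxx-1103)"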

  • Configure Web app for external access (IIS7), allowing only certain users via AD group. All users need internal access

    - by White Island
    We have a Web app running in IIS7 (Server 2008 R2). I now need to allow external access with an SSL certificate, so certain users (e.g. the owner of the company) can use it remotely without VPN. They want to roll out the external access only to those specific users at first (thinking: a Windows credential prompt), BUT everyone will still need access internally (HTTP), without the prompt. I have the SSL cert installed on the server and public DNS configured.

    I've been trying to figure out how to work the authentication/authorization. I was thinking I need to disable Anonymous authentication and set Windows authentication, and I keep coming back to URL Authorization in my research for the group setting. However, when I tried URL authorization (removed "allow all", added an allow rule for the special group), it broke the site internally (403.2 Forbidden, I believe it was). I thought maybe setting up a second site in IIS pointing to the same program would work, but the exact same thing happened (and again with a new app pool, just for kicks).

    So I guess my question is: how would you do this? Allow external access, limited to users in a specific AD group, while still allowing internal access without a credentials prompt? How do I separate the external HTTPS and internal HTTP authorization requirements? Will I need to just copy the entire contents of the app in Windows Explorer to a new folder and create my external site from that? Is Windows authentication the correct option for this?

    I did come across this, which refers to creating a custom module. While it sounds like a solution, it's not one I'm familiar with, and I just wondered if there is a simpler way to get it to work: http://forums.iis.net/p/1182792/2000775.aspx Thanks!
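
    A hedged sketch of one way to split the two requirements, not from the original post: the 403.2 on the internal site suggests the URL Authorization rules were saved into the application's own web.config, which both sites share. Keeping the internal site on Anonymous authentication and scoping the rules to the external (HTTPS, Windows authentication) site only avoids that. The group name below is a placeholder.

        <!-- Applied to the EXTERNAL site only (e.g. via its site-level
             configuration), not to the shared application web.config: -->
        <configuration>
          <system.webServer>
            <security>
              <authorization>
                <!-- drop the inherited allow-everyone rule -->
                <remove users="*" roles="" verbs="" />
                <!-- allow only the AD group -->
                <add accessType="Allow" roles="MYDOMAIN\ExternalAppUsers" />
              </authorization>
            </security>
          </system.webServer>
        </configuration>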

  • How do I configure Apache to accept a client SSL certificate (if present) or authenticate using LDAP (if the cert is absent)?

    - by jmwood051
    I have an Apache server that serves up Mercurial repositories, and it currently authenticates using LDAP credentials. I want to permit a single user (to start with) to use an SSL client certificate, with all remaining users still able to use the LDAP credentials authentication method. I have looked through Stack Overflow and done wider (Google) searches, but cannot find information or guidance on how to set this up.
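
    A minimal sketch of the usual pattern, not from the original post, assuming Apache 2.4 with mod_ssl and mod_authnz_ldap (paths, the location, and the LDAP URL are placeholders): make the client certificate optional, then authorize with RequireAny so a verified certificate short-circuits the Basic/LDAP prompt.

        SSLVerifyClient optional
        SSLVerifyDepth  2
        SSLCACertificateFile /etc/apache2/ssl/client-ca.pem

        <Location /hg>
            AuthType Basic
            AuthName "Mercurial repositories"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
            <RequireAny>
                # A verified client certificate is enough on its own...
                Require expr %{SSL_CLIENT_VERIFY} == 'SUCCESS'
                # ...otherwise fall back to LDAP credentials.
                Require valid-user
            </RequireAny>
        </Location>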

  • [SOLVED] Another version of this product is already installed. Installation of this version cannot continue. To configure or remove the existing version of this product, use Add/Remove Programs on the Control Panel

    - by kazim sardar mehdi
    Another version of this product is already installed. Installation of this version cannot continue. To configure or remove the existing version of this product, use Add/Remove Programs on the Control Panel.

    I hit the above error while trying to install a new version of Windows services packed into one setup.msi. To resolve it I searched Google, read a lot, and then found the article "MSIEXEC - The power user's install". Steps to solve the error:

    1. Execute the following command from a command prompt, where program_name.msi is the new version and /lv writes a verbose log:

        msiexec /i program_name.msi /lv logfile.log

    2. Open logfile.log in an editor.

    3. Find the GUID in it. I found it in a line like:

        Product Code from property table before transforms: '{GUID}'

    4. The article suggests searching the registry for the uninstall command (you need to search twice to get there); try that if you like. The command it gave didn't work for me, so I kept digging until I found "Windows 7 and Windows Installer Error 'Another installation is in progress'", which mentions the use of msizap.exe.

    5. By combining the commands from both articles, I was able to uninstall the service successfully. Execute the following from a Visual Studio command prompt if you already have one installed, or get msizap.exe from the Microsoft website (http://msdn.microsoft.com/en-us/library/aa370523%28VS.85%29.aspx):

        msizap.exe TWP {GUID}

    It did the trick and removed the installed service successfully.

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)
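
    A hedged recovery sketch, not from the original post: "postinst called with unknown argument `triggered'" means those packages' maintainer scripts are rejecting trigger processing, usually a symptom of a broken or mixed package state. Finishing the half-configured packages and reinstalling the offenders (names taken from the error list above) is the low-risk first step:

        # Finish anything left half-configured, then repair dependencies.
        sudo dpkg --configure -a
        sudo apt-get -f install

        # If the same postinst errors persist, reinstall the offending
        # packages so their maintainer scripts are replaced with current
        # (trigger-aware) versions:
        sudo apt-get install --reinstall mtools mscompress network-manager \
            network-manager-gnome network-manager-pptp network-manager-pptp-gnome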

  • Can't install anything, getting error "Sub-process /usr/bin/dpkg returned an error code (1)"

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". Please help me; I am stuck with my Ubuntu 11.10 and can neither install nor update. The rest of the error:

        unknown argument `triggered'
        dpkg: error processing mtools (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager-pptp-gnome ...
        No apport report written because MaxReports is reached already
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-pptp ...
        postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-pptp (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for network-manager-gnome ...
        /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager-gnome (--configure):
         subprocess installed post-installation script returned error exit status 1
        Processing triggers for network-manager ...
        No apport report written because MaxReports is reached already
        /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered'
        dpkg: error processing network-manager (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Processing triggers for mscompress ...
        postinst called with unknown argument `triggered'
        dpkg: error processing mscompress (--configure):
         subprocess installed post-installation script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks
         mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp
         network-manager-gnome network-manager mscompress
        E: Sub-process /usr/bin/dpkg returned an error code (1)

  • I get this error after upgrade. Please help

    - by user203404
        dpkg: dependency problems prevent configuration of initramfs-tools:
         initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.2.1~); however:
          Version of initramfs-tools-bin on system is 0.103ubuntu0.2.
         klibc-utils (2.0.1-1ubuntu2) breaks initramfs-tools (<< 0.103) and is installed.
          Version of initramfs-tools to be configured is 0.99ubuntu13.2.
        dpkg: error processing initramfs-tools (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of plymouth:
         plymouth depends on initramfs-tools; however:
          Package initramfs-tools is not configured yet.
        dpkg: error processing plymouth (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of mountall:
         mountall depends on plymouth; however:
          Package plymouth is not configured yet.
        dpkg: error processing mountall (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: dependency problems prevent configuration of initscripts:
         initscripts depends on mountall (>= 2.28); however:
          Package mountall is not configured yet.
        dpkg: error processing initscripts (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of upstart:
         upstart depends on initscripts; however:
          Package initscripts is not configured yet.
         upstart depends on mountall; however:
          Package mountall is not configured yet.
        dpkg: error processing upstart (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of passwd:
         passwd depends on upstart-job; however:
          Package upstart-job is not installed.
          Package upstart which provides upstart-job is not configured yet.
        dpkg: error processing passwd (--configure):
         dependency problems - leaving unconfigured
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         initramfs-tools plymouth mountall initscripts upstart passwd
        E: Sub-process /usr/bin/dpkg returned an error code (1)
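
    A hedged reading, not part of the original post: initramfs-tools is stuck at 0.99 while initramfs-tools-bin and klibc-utils are already at 0.103-era versions, so the interrupted upgrade left the tool split across releases; everything else (plymouth, mountall, initscripts, upstart, passwd) is just queued behind it. A recovery sketch:

        # Let apt repair the partial upgrade first.
        sudo apt-get update
        sudo apt-get -f install

        # If initramfs-tools is still pinned at the old version, pull it
        # (and its -bin companion) up to the matching release explicitly:
        sudo apt-get install initramfs-tools initramfs-tools-bin

        # Then finish whatever is still half-configured:
        sudo dpkg --configure -a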

  • How to configure replication? - This database is not enabled for publication.

    - by truthseeker
    Hi, I'm trying to configure replication on SQL Server 2005. I can do it using the wizard, but when I try to run the scripts generated by this wizard, this error message appears:

        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159
        This database is not enabled for publication.
        Msg 18757, Level 16, State 1, Procedure sp_MSrepl_addpublication_snapshot, Line 66
        Unable to execute procedure. The database is not published. Execute the procedure
        in a database that is published for replication.
        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 168
        This database is not enabled for publication.
        Msg 14294, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 25
        Supply either @job_id or @job_name to identify the job.

    It's a bit strange, because when I run this query on a database where I clicked and then removed publication, everything goes well. The problem occurs when I use my query on a new database. What's more, I'm using the sp_replicationdboption stored procedure; when I try to run it, it says:

        The replication option 'publish' of database 'ReplicationTest00' has already been set to true.

    Please help me resolve this issue.
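
    A hedged sketch of the usual ordering, not from the original post: sp_replicationdboption must run before the wizard script, and the generated sp_addpublication/sp_addarticle calls must execute in the context of the database being published; one common cause of the Msg 14013 above is running them from another database (such as master). The database name follows the question; the publication name is a placeholder.

        -- 1. Enable the database for publication.
        USE master;
        EXEC sp_replicationdboption
            @dbname  = N'ReplicationTest00',
            @optname = N'publish',
            @value   = N'true';
        GO

        -- 2. Switch context to the published database BEFORE running
        --    the wizard-generated script.
        USE ReplicationTest00;
        GO
        -- (wizard-generated sp_addpublication / sp_addarticle calls here)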

  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
    Hi, I am using a traditional C# web service client generated in VS2008 / .NET 3.5, inheriting from SoapHttpClientProtocol. It connects to a remote web service written in Java. All configuration is done in code during client initialization, and can be seen below:

        ServicePointManager.Expect100Continue = false;
        ServicePointManager.DefaultConnectionLimit = 10;
        var client = new APIService
        {
            EnableDecompression = true,
            Url = _url + "?guid=" + Guid.NewGuid(),
            Credentials = new NetworkCredential(user, password, null),
            PreAuthenticate = true,
            Timeout = 5000 // 5 sec
        };

    It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time:

        C# client     ~ 550ms
        Java client   ~ 340ms
        Network ping  ~ 300ms

    After analyzing the TCP traffic for a session, I discovered the following. The C# client sent TCP packets in this sequence:

    1. Client sends HTTP headers in one packet.
    2. Client waits for TCP ACK from server.
    3. Client sends HTTP body in one packet.
    4. Client waits for TCP ACK from server.

    The Java client sent TCP packets in this sequence:

    1. Client sends HTTP headers in one packet.
    2. Client sends HTTP body in one packet.
    3. Client receives ACK for first packet.
    4. Client receives ACK for second packet.

    Is there any way to configure the C# web service client to send the header/body in parallel, as the Java client appears to? Any help or pointers much appreciated.
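
    A hedged suggestion, not confirmed by the original thread: the send / wait-for-ACK / send pattern is characteristic of Nagle's algorithm interacting with delayed ACKs, and System.Net exposes a switch for it. A sketch (apply before the first request so the ServicePoint picks it up):

        // Disable Nagle so the body is written without waiting for the
        // ACK of the header packet; Expect100Continue stays off, as in
        // the original initialization code.
        ServicePointManager.UseNagleAlgorithm = false;
        ServicePointManager.Expect100Continue = false;

        // Or scoped to just this service's ServicePoint:
        var sp = ServicePointManager.FindServicePoint(new Uri(_url));
        sp.UseNagleAlgorithm = false;
        sp.Expect100Continue = false;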

  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL Server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden."

    The database user has full permission, the deployed CLR assembly and SP are even marked "unsafe", I tried signing the assembly, etc., so none of that is causing the problem. When I execute the very same C# code from a simple console application instead of as a SP, it all works fine.

    So I started to suspect a network-related problem and had a packet sniffer running while executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP, while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed, and that explains the "403 Forbidden" exception.

    So my question boils down to this: how can I configure the SP / MS SQL Server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app (again, the C# code is the same, so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
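
    A hedged sketch, not from the original post: .NET picks up a machine-level default proxy unless told otherwise, and the SQLCLR host has no interactive profile to inherit the cleared Internet Explorer settings from. Forcing an empty proxy on the request (or process-wide) makes the call go straight to the service IP. The client class name is a placeholder for the generated web service proxy.

        // Process-wide: never use a default proxy for outgoing calls.
        WebRequest.DefaultWebProxy = null;

        // Or scoped to the generated web service client:
        var svc = new RemoteApiService();                    // placeholder name
        svc.Proxy = GlobalProxySelection.GetEmptyWebProxy(); // classic idiom
        // svc.Proxy = null; also works on .NET 2.0+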

  • ASP.NET configure data source is not returning anything?

    - by Greg McNulty
    I'm selecting table data of the current user:

        SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel]
        FROM [UserData]
        WHERE ([UserProfileID] = @UserProfileID)

    I have set a control to the unique user ID and it is getting the correct value. HTML:

        <asp:Label ID="userID" runat="server" Text="Labeluser"></asp:Label>

    C#:

        userID.Text = Membership.GetUser().ProviderUserKey.ToString();

    I then use it in the WHERE clause via the Configure Data Source window: unique ID = control, then ControlID = userID (it fills in .Text for me). I compile and run, but nothing shows up where the table should be. Any suggestions? Here is the code it has created:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1">
            <Columns>
                <asp:BoundField DataField="ConfidenceLevel" HeaderText="ConfidenceLevel" SortExpression="ConfidenceLevel" />
                <asp:BoundField DataField="LoveLevel" HeaderText="LoveLevel" SortExpression="LoveLevel" />
                <asp:BoundField DataField="HappinessLevel" HeaderText="HappinessLevel" SortExpression="HappinessLevel" />
            </Columns>
        </asp:GridView>
        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:ConnectionStringToDB %>"
            SelectCommand="SELECT [ConfidenceLevel], [LoveLevel], [HappinessLevel] FROM [UserData] WHERE ([UserProfileID] = @UserProfileID)">
            <SelectParameters>
                <asp:ControlParameter ControlID="userID" Name="UserProfileID" PropertyName="Text" Type="Object" />
            </SelectParameters>
        </asp:SqlDataSource>
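
    Two hedged things to check, not from the original thread: the ControlParameter is declared Type="Object" while a label's text is a string, so if UserProfileID is a uniqueidentifier column (typical for ProviderUserKey) the comparison can silently match no rows; and the label must be populated before the data source selects. A sketch of the lifecycle side:

        // Populate the label early in the page lifecycle, before the
        // SqlDataSource evaluates its ControlParameter during binding.
        protected void Page_Init(object sender, EventArgs e)
        {
            userID.Text = Membership.GetUser().ProviderUserKey.ToString();
        }

    In the markup, dropping Type="Object" from the ControlParameter (or setting it to String) so the provider converts the GUID text itself is also worth trying.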

  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out if it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations.

    We are currently on SVN, and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. NOTE: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems to me that I have to create 60 separate central repositories on the server, one for each of these projects.

    QUESTION #1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the "C:...\MyRepository.hg\hgrc" file (Windows install). It also seems as if I have to run 60 servers (hg serve), I would assume on different ports.

    QUESTION #2: If the answer to question 1 is yes (there should be a single central repository for each project), then how have people managed many multiple repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but would appreciate any comments from someone who has done this (or whether it is even possible).
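
    A hedged sketch of the standard answer for QUESTION #1's hosting worry, not from the original post: Mercurial's "webdir" mode serves any number of repositories from one process on one port, so 60 repositories does not mean 60 daemons. Paths are placeholders.

        # hgweb.config: publish every repository under one root.
        [collections]
        /srv/hg/ = /srv/hg/

        # One daemon, one port, all repositories:
        #   hg serve --webdir-conf hgweb.config --port 8000
        # (production setups usually mount hgwebdir.cgi / hgweb.wsgi
        #  inside Apache or another web server instead)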

  • How do I configure encodings (UTF-8) for code executed by Quartz scheduled Jobs in Spring framework

    - by Martin
    I wonder how to configure Quartz scheduled job threads to use the proper encoding. Code that otherwise executes fine within Spring-injected webapps (Java) gets encoding issues when run in threads scheduled by Quartz. Is there anyone who can help me out? All source is compiled using Maven 2 with source and file encodings configured as UTF-8. In the Quartz threads, any string containing characters outside ISO 8859-1 has encoding errors.

    Example config:

        <bean name="jobDetail" class="org.springframework.scheduling.quartz.JobDetailBean">
            <property name="jobClass" value="example.ExampleJob" />
        </bean>

        <bean id="jobTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
            <property name="jobDetail" ref="jobDetail" />
            <property name="startDelay" value="1000" />
            <property name="repeatCount" value="0" />
            <property name="repeatInterval" value="1" />
        </bean>

        <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
            <property name="triggers">
                <list>
                    <ref bean="jobTrigger"/>
                </list>
            </property>
        </bean>

    Example implementation:

        public class ExampleJob extends QuartzJobBean {

            private Log log = LogFactory.getLog(ExampleJob.class);

            protected void executeInternal(JobExecutionContext ctx) throws JobExecutionException {
                log.info("ÅÄÖ");
                log.info(Charset.defaultCharset());
            }
        }

    Example output:

        2010-05-20 17:04:38,285 1342 INFO [QuartzScheduler_Worker-9] ExampleJob - vÖvÑvñ
        2010-05-20 17:04:38,286 1343 INFO [QuartzScheduler_Worker-9] ExampleJob - UTF-8

    The same lines of code executed within Spring-injected beans referenced by servlets in the web container produce properly encoded output. What is it that makes Quartz threads encoding-dependent?
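
    A hedged observation, not from the original post: the job itself reports UTF-8 as the default charset, so one suspicion is not the thread but the logging path the worker threads write through; log4j 1.x appenders fall back to the platform encoding unless told otherwise. A sketch, assuming commons-logging is backed by log4j here (appender name and file are placeholders):

        # log4j.properties: pin the appender's output encoding.
        log4j.appender.FILE=org.apache.log4j.FileAppender
        log4j.appender.FILE.File=app.log
        log4j.appender.FILE.Encoding=UTF-8
        log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
        log4j.appender.FILE.layout.ConversionPattern=%d %-5p [%t] %c{1} - %m%n

    Starting the JVM that hosts the scheduler with -Dfile.encoding=UTF-8 is the other knob worth ruling out.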

  • C++11 support in GNU automake

    - by sorush-r
    I'm trying to port the build system of my project to GNU Autotools. The code needs to be compiled with the -std=c++11 or -std=c++0x flag. I want my configure script to check whether the compiler supports C++11 or not. I tried adding

        AX_CHECK_COMPILE_FLAG([-std=c++0x], [CXXFLAGS="$CXXFLAGS -std=c++0x"])

    to the configure.ac file, but configure fails with this error:

        ...
        ./configure: line 2732: syntax error near unexpected token `-std=c++0x,'
        ./configure: line 2732: `AX_CHECK_COMPILE_FLAG(-std=c++0x, CXXFLAGS="$CXXFLAGS -std=c++0x")'
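
    A hedged diagnosis, not from the original post: the macro name surviving verbatim into the generated configure means autoconf never expanded AX_CHECK_COMPILE_FLAG, i.e. the Autoconf Archive macro is not on aclocal's search path. A sketch of the usual fix (the copy source path varies by distro; the macro can also be downloaded from the GNU Autoconf Archive):

        # Vendor the macro into the project and point the tools at it.
        mkdir -p m4
        cp /usr/share/aclocal/ax_check_compile_flag.m4 m4/   # path is an assumption

        # configure.ac:
        #   AC_CONFIG_MACRO_DIR([m4])
        # Makefile.am:
        #   ACLOCAL_AMFLAGS = -I m4

        autoreconf -fi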

  • New Oracle VM RAC template with support for Oracle VM 3 built in

    - by wcoekaer
    The RAC team did it again (thanks Saar!): another awesome set of Oracle VM templates published and uploaded to My Oracle Support. You can find the main page here. What's special about the latest version of DeployCluster is that it integrates tightly with Oracle VM 3 Manager. It basically is an Oracle VM frontend that helps start VMs and pass arguments down automatically; there is absolutely no need to log into the Oracle VM servers or the guests. Once it completes, you have an entire Oracle RAC database setup ready to go. Here's a short summary of the steps:

    1. Set up an Oracle VM 3 server pool.
    2. Download the Oracle VM RAC template from oracle.com.
    3. Import the template into Oracle VM using Oracle VM Manager (repository - import).
    4. Create a public and private network in Oracle VM Manager in the network tab.
    5. Configure the template with the right public and private virtual networks.
    6. Create a set of shared disks (physical or virtual) to assign to the VMs you want to create (for ASM; at least 5).
    7. Clone a set of VMs from the template (as many RAC nodes as you plan to configure). With Oracle VM 3.1 you can clone with a number, so one clone command for, say, 8 VMs is easy.
    8. Assign the shared devices/disks to the cloned VMs.
    9. Create a netconfig.ini file on your manager node, or on a client where you plan to run DeployCluster. This little text file just contains the IP addresses, hostnames etc. for your cluster. It is a very simple, small text file.
    10. Run deploycluster.py with the VM names as arguments.

    Done. At this point, the tool will connect to Oracle VM Manager, start the VMs and configure each one:

    - Configure the OS (Oracle Linux)
    - Configure the disks with ASM
    - Configure the clusterware (CRS)
    - Configure ASM
    - Create database instances on each node

    Now you are ready to log in and use your x-node database cluster. No need to download various products from various websites, click on trial licenses for the OS, or go to a virtual machine store with sample and test versions only. This is production-ready and supported. Software. Complete.

    Example netconfig.ini:

        # Node specific information
        NODE1=racnode1
        NODE1VIP=racnode1-vip
        NODE1PRIV=racnode1-priv
        NODE1IP=192.168.1.2
        NODE1VIPIP=192.168.1.22
        NODE1PRIVIP=10.0.0.22

        NODE2=racnode2
        NODE2VIP=racnode2-vip
        NODE2PRIV=racnode2-priv
        NODE2IP=192.168.1.3
        NODE2VIPIP=192.168.1.23
        NODE2PRIVIP=10.0.0.23

        # Common data
        PUBADAP=eth0
        PUBMASK=255.255.255.0
        PUBGW=192.168.1.1
        PRIVADAP=eth1
        PRIVMASK=255.255.255.0
        RACCLUSTERNAME=raccluster
        DOMAINNAME=mydomain.com
        DNSIP=

        # Device used to transfer network information to second node
        # in interview mode
        NETCONFIG_DEV=/dev/xvdc

        # 11gR2 specific data
        SCANNAME=racnode12-scan
        SCANIP=192.168.1.50
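
    A hedged usage sketch for step 10 (the exact flags are documented in the DeployCluster readme on My Oracle Support; verify with deploycluster.py --help, and treat the VM names as placeholders):

        # Run from the node holding netconfig.ini; the tool contacts
        # Oracle VM Manager, starts the named VMs and drives the whole
        # RAC build without logging into servers or guests.
        ./deploycluster.py -u admin -M racnode1,racnode2 -N netconfig.ini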

  • How can I configure a Factory with the possible providers?

    - by Jonathas Costa
    I have three assemblies: "Framework.DataAccess", "Framework.DataAccess.NHibernateProvider" and "Company.DataAccess". Inside the assembly "Framework.DataAccess" I have my factory (with the wrong implementation of discovery):

        public class DaoFactory
        {
            private static readonly object locker = new object();
            private static IWindsorContainer _daoContainer;

            protected static IWindsorContainer DaoContainer
            {
                get
                {
                    if (_daoContainer == null)
                    {
                        lock (locker)
                        {
                            if (_daoContainer != null)
                                return _daoContainer;

                            _daoContainer = new WindsorContainer(new XmlInterpreter());

                            // THIS IS WRONG! THIS ASSEMBLY CANNOT KNOW ABOUT SPECIALIZATIONS!
                            _daoContainer.Register(
                                AllTypes.FromAssemblyNamed("Company.DataAccess")
                                    .BasedOn(typeof(IReadDao<>)).WithService.FromInterface(),
                                AllTypes.FromAssemblyNamed("Framework.DataAccess.NHibernateProvider")
                                    .BasedOn(typeof(IReadDao<>)).WithService.Base());
                        }
                    }
                    return _daoContainer;
                }
            }

            public static T Create<T>() where T : IDao
            {
                return DaoContainer.Resolve<T>();
            }
        }

    This assembly also defines the base interface for data access, IReadDao:

        public interface IReadDao<T>
        {
            IEnumerable<T> GetAll();
        }

    I want to keep this assembly generic and free of references; this is my base data access assembly. Then I have the NHibernate provider's assembly, which implements the above IReadDao using NHibernate's approach. This assembly references the "Framework.DataAccess" assembly.

        public class NHibernateDao<T> : IReadDao<T>
        {
            public NHibernateDao() { }

            public virtual IEnumerable<T> GetAll()
            {
                throw new NotImplementedException();
            }
        }

    At last, I have the "Company.DataAccess" assembly, which can override the default implementation of the NHibernate provider and references both previously seen assemblies.

        public interface IProductDao : IReadDao<Product>
        {
            Product GetByName(string name);
        }

        public class ProductDao : NHibernateDao<Product>, IProductDao
        {
            public override IEnumerable<Product> GetAll()
            {
                throw new NotImplementedException("new one!");
            }

            public Product GetByName(string name)
            {
                throw new NotImplementedException();
            }
        }

    I want to be able to write...

        IReadDao<Product> dao = DaoFactory.Create<IReadDao<Product>>();

    ...and then get the ProductDao implementation. But I can't hold any reference to specific assemblies inside my base data access assembly! My initial idea was to read that from an XML config file. So my question is: how can I externally configure this factory to use a specific provider as my default implementation and my client implementation?
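
    A hedged sketch of the XML route, not from the original post: since the factory already boots Windsor with XmlInterpreter, the provider registrations can live in the application's config file instead of code. Assembly and type names follow the question; the component ids and the domain assembly holding Product are assumptions.

        <!-- app.config / windsor.xml consumed by XmlInterpreter -->
        <castle>
          <components>
            <!-- default provider for the open generic contract -->
            <component id="readDao.default"
                       service="Framework.DataAccess.IReadDao`1, Framework.DataAccess"
                       type="Framework.DataAccess.NHibernateProvider.NHibernateDao`1, Framework.DataAccess.NHibernateProvider" />
            <!-- client override for Product (closed generic wins over open) -->
            <component id="product.dao"
                       service="Framework.DataAccess.IReadDao`1[[Company.Domain.Product, Company.Domain]], Framework.DataAccess"
                       type="Company.DataAccess.ProductDao, Company.DataAccess" />
          </components>
        </castle>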

  • Error installing pkgconfig via MacPorts

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to mount some directories in my OS X file system (like you can do with mount --bind in Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to install just pkgconfig. Here's the debug output from sudo port install pkgconfig:

        $ sudo port -d install pkgconfig
        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port pkgconfig.
        DEBUG: Requested variant i386 is not provided by port pkgconfig.
        DEBUG: Requested variant macosx is not provided by port pkgconfig.
        --->  Computing dependencies for pkgconfig
        DEBUG: Executing org.macports.main (pkgconfig)
        DEBUG: Skipping completed org.macports.fetch (pkgconfig)
        DEBUG: Skipping completed org.macports.checksum (pkgconfig)
        DEBUG: Skipping completed org.macports.extract (pkgconfig)
        DEBUG: Skipping completed org.macports.patch (pkgconfig)
        --->  Configuring pkgconfig
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (pkgconfig)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking build system type... i386-apple-darwin10.3.0
        checking host system type... i386-apple-darwin10.3.0
        checking for style of include used by make... none
        checking for gcc... /usr/bin/gcc-4.2
        checking for C compiler default output file name...
        configure: error: C compiler cannot create executables
        See `config.log' for more details.
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
            while executing
        "$procedure $targetname"
        Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here?

        configure: error: C compiler cannot create executables
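
    A hedged debugging sketch, not from the original post: the real reason is recorded in config.log inside the build directory MacPorts reported, and a broken or partially installed gcc-4.2 from the Xcode install is the usual suspect on a Leopard-to-Snow-Leopard machine.

        # Read the failure details autoconf left behind:
        less /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23/config.log

        # Sanity-check the compiler MacPorts is invoking:
        /usr/bin/gcc-4.2 -v
        echo 'int main(void){return 0;}' > /tmp/t.c && /usr/bin/gcc-4.2 /tmp/t.c -o /tmp/t && echo OK

        # If gcc-4.2 is missing or broken, reinstall Xcode (including the
        # "UNIX Development" package), then rebuild cleanly:
        sudo port clean pkgconfig
        sudo port install pkgconfig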

  • How to configure hibernate-tools with maven to generate hibernate.cfg.xml, *.hbm.xml, POJOs and DAOs

    - by mmm
    Hi, can anyone tell me how to force Maven to prefix the mapping .hbm.xml files in the automatically generated hibernate.cfg.xml with their package path?

    My general idea is: I'd like to use hibernate-tools via Maven to generate the persistence layer for my application. So I need the hibernate.cfg.xml, then all my_table_names.hbm.xml, and at the end the POJOs generated. Yet the hbm2java goal won't work when I put the *.hbm.xml files into the src/main/resources/package/path/ folder, because hbm2cfgxml specifies the mapping files only by table name, i.e.:

        <mapping resource="MyTableName.hbm.xml" />

    So the big question is: how can I configure hbm2cfgxml so that hibernate.cfg.xml looks like this instead?

        <mapping resource="package/path/MyTableName.hbm.xml" />

    My pom.xml looks like this at the moment:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>hibernate3-maven-plugin</artifactId>
            <version>2.2</version>
            <executions>
                <execution>
                    <id>hbm2cfgxml</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>hbm2cfgxml</goal>
                    </goals>
                    <inherited>false</inherited>
                    <configuration>
                        <components>
                            <component>
                                <name>hbm2cfgxml</name>
                                <implementation>jdbcconfiguration</implementation>
                                <outputDirectory>src/main/resources/</outputDirectory>
                            </component>
                        </components>
                        <componentProperties>
                            <packagename>package.path</packagename>
                            <configurationfile>src/main/resources/hibernate.cfg.xml</configurationfile>
                        </componentProperties>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    And then the second question: is there a way to tell Maven to copy resources to the target folder before executing hbm2java? At the moment I'm using mvn clean resources:resources generate-sources for that, but there must be a better way. Thanks for any help.

    Update: @Pascal: Thank you for your help. The path to mappings works fine now; I don't know what was wrong before, though. Maybe there was some issue with writing to hibernate.cfg.xml while reading the database config from it (though the file did get updated). I deleted the hibernate.cfg.xml file, replaced it with database.properties, and ran the goals hbm2cfgxml and hbm2hbmxml. I also no longer use outputDirectory or configurationfile in those goals. As a result, hibernate.cfg.xml and all *.hbm.xml files are generated into my target/hibernate3/generated-mappings/ folder, which is the default value. Then I updated the hbm2java goal with the following:

        <componentProperties>
            <packagename>package.name</packagename>
            <configurationfile>target/hibernate3/generated-mappings/hibernate.cfg.xml</configurationfile>
        </componentProperties>

    But then I get the following:

        [INFO] --- hibernate3-maven-plugin:2.2:hbm2java (hbm2java) @ project.persistence ---
        [INFO] using configuration task.
        [INFO] Configuration XML file loaded: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
        12:15:17,484 INFO  org.hibernate.cfg.Configuration - configuring from url: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
        12:15:19,046 INFO  org.hibernate.cfg.Configuration - Reading mappings from resource : package.name/Messages.hbm.xml
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java (hbm2java) on project project.persistence: Execution hbm2java of goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java failed: resource: package/name/Messages.hbm.xml not found

    How do I deal with that? Of course I could add:

        <outputDirectory>src/main/resources/package/name</outputDirectory>

    to the hbm2hbmxml goal, but I think this is not the best approach, or is it? Is there a way to keep all the generated code and resources away from the src/ folder? I assume the goal of this approach is not to generate any sources into my src/main/java or /resources folder, but to keep the generated code in the target folder. As I generally agree with this point of view, I'd like to continue with it, eventually executing hbm2dao and packaging the project to be used as a generated persistence layer component from the business layer. Is this also what you meant?
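
    A hedged sketch for the "keep everything out of src/" goal, not from the original thread: leave the generated sources under target/ and register that directory as an extra source root, so they compile and package without ever touching src/main. The plugin version and the exact output path (the hbm2java default under target/hibernate3/) are assumptions to verify against your build.

        <!-- pom.xml: add the generated directory as a source root -->
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <version>1.7</version>
            <executions>
                <execution>
                    <id>add-generated-sources</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>add-source</goal>
                    </goals>
                    <configuration>
                        <sources>
                            <source>target/hibernate3/generated-sources</source>
                        </sources>
                    </configuration>
                </execution>
            </executions>
        </plugin>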

  • apt-get install was interrupted

    - by user3475299
    I am new to Ubuntu. I got the following lines after an interrupted apt-get install:

        Running depmod.
        update-initramfs: deferring update (hook will be called later)
        Examining /etc/kernel/postinst.d.
        run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic
        run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic
        update-initramfs: Generating /boot/initrd.img-3.13.0-29-generic
        run-parts: executing /etc/kernel/postinst.d/pm-utils 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic
        run-parts: executing /etc/kernel/postinst.d/update-notifier 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic
        run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.13.0-29-generic /boot/vmlinuz-3.13.0-29-generic
        /usr/sbin/grub-mkconfig: 14: /etc/default/grub: nouveau.modeset=0: not found
        run-parts: /etc/kernel/postinst.d/zz-update-grub exited with return code 127
        Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.13.0-29-generic.postinst line 1025.
        No apport report written because the error message indicates its a followup error from a previous failure.
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        No apport report written because MaxReports is reached already
        dpkg: error processing package linux-image-3.13.0-29-generic (--configure):
         subprocess installed post-installation script returned error exit status 2
        dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-29-generic:
         linux-image-extra-3.13.0-29-generic depends on linux-image-3.13.0-29-generic; however:
          Package linux-image-3.13.0-29-generic is not configured yet.
        dpkg: error processing package linux-image-extra-3.13.0-29-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-image-generic:
         linux-image-generic depends on linux-image-3.13.0-29-generic; however:
          Package linux-image-3.13.0-29-generic is not configured yet.
         linux-image-generic depends on linux-image-extra-3.13.0-29-generic; however:
          Package linux-image-extra-3.13.0-29-generic is not configured yet.
        dpkg: error processing package linux-image-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-generic:
         linux-generic depends on linux-image-generic (= 3.13.0.29.35); however:
          Package linux-image-generic is not configured yet.
        dpkg: error processing package linux-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-image-extra-3.13.0-27-generic:
         linux-image-extra-3.13.0-27-generic depends on linux-image-3.13.0-27-generic; however:
          Package linux-image-3.13.0-27-generic is not configured yet.
        dpkg: error processing package linux-image-extra-3.13.0-27-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-29-generic:
         linux-signed-image-3.13.0-29-generic depends on linux-image-3.13.0-29-generic (= 3.13.0-29.53); however:
          Package linux-image-3.13.0-29-generic is not configured yet.
         linux-signed-image-3.13.0-29-generic depends on linux-image-extra-3.13.0-29-generic (= 3.13.0-29.53); however:
          Package linux-image-extra-3.13.0-29-generic is not configured yet.
        dpkg: error processing package linux-signed-image-3.13.0-29-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-signed-image-generic:
         linux-signed-image-generic depends on linux-signed-image-3.13.0-29-generic; however:
          Package linux-signed-image-3.13.0-29-generic is not configured yet.
        dpkg: error processing package linux-signed-image-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-signed-generic:
         linux-signed-generic depends on linux-signed-image-generic (= 3.13.0.29.35); however:
          Package linux-signed-image-generic is not configured yet.
        dpkg: error processing package linux-signed-generic (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of linux-signed-image-3.13.0-27-generic:
         linux-signed-image-3.13.0-27-generic depends on linux-image-3.13.0-27-generic (= 3.13.0-27.50); however:
          Package linux-image-3.13.0-27-generic is not configured yet.
         linux-signed-image-3.13.0-27-generic depends on linux-image-extra-3.13.0-27-generic (= 3.13.0-27.50); however:
          Package linux-image-extra-3.13.0-27-generic is not configured yet.
        dpkg: error processing package linux-signed-image-3.13.0-27-generic (--configure):
         dependency problems - leaving unconfigured
        Setting up libxkbcommon-x11-0:amd64 (0.4.1-0ubuntu1) ...
        Setting up libqt5gui5:amd64 (5.2.1+dfsg-1ubuntu14.2) ...
        Processing triggers for libc-bin (2.19-0ubuntu6) ...
        Errors were encountered while processing:
         linux-image-3.13.0-27-generic
         linux-image-3.13.0-29-generic
         linux-image-extra-3.13.0-29-generic
         linux-image-generic
         linux-generic
         linux-image-extra-3.13.0-27-generic
         linux-signed-image-3.13.0-29-generic
         linux-signed-image-generic
         linux-signed-generic
         linux-signed-image-3.13.0-27-generic
        E: Sub-process /usr/bin/dpkg returned an error code (1)
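
    A hedged reading of the root cause, not from the original post: the first real failure is grub-mkconfig choking on line 14 of /etc/default/grub ("nouveau.modeset=0: not found" means the shell tried to execute that token), which suggests the line lost its quoting, e.g. GRUB_CMDLINE_LINUX_DEFAULT=quiet splash nouveau.modeset=0 instead of a quoted value. Everything after that is fallout. A recovery sketch:

        # Restore the quoting on line 14, e.g.:
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"
        sudo nano /etc/default/grub

        # Verify grub regenerates cleanly, then finish the installs:
        sudo update-grub
        sudo dpkg --configure -a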

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the above command, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or, if Apache is located elsewhere on my VPS, where would it be? I'd also like to mention that I ran a command to install Apache before this entire deal:

        yum install httpd

    so I assumed that was all I needed, but apparently not (I am very new at all this server administration stuff, so please be gentle).

    EDIT: This is the tutorial that I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
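
    A hedged pointer, not from the original thread: apxs ships in Apache's development package, which yum install httpd alone doesn't pull in; on CentOS the binary lands at /usr/sbin/apxs. A sketch:

        # Install the Apache development tooling (provides apxs).
        yum install httpd-devel

        # Then point mod_wsgi's configure at it, keeping the custom Python:
        ./configure --with-apxs=/usr/sbin/apxs \
                    --with-python=/opt/python2.5/bin/python
        make
        make install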

  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I've helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I've decided to write this blog post and pull together some useful resources in one place (there are two whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User 'NT Authority\Anonymous' ("double-hop") error.

    By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. The client can access reports which have the appropriate permissions by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will default to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when:

    - Clients/servers that are involved in the authentication process cannot use Kerberos.
    - The client does not provide the information necessary to use Kerberos.

    An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller, impersonate the client when passing the request to the database server, and then restrict the request based on the user's permissions. Each time a server is required to pass the request to another server, the same process must be used.

    Kerberos authentication is supported in both native and SharePoint integrated mode, but I'll focus on native mode for the purpose of this post (I'll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures due to double-hop issues. These double-hop errors occur when a user's Windows domain credentials can't be passed to another server to complete the user's request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs because NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2; however, Computer 2 can't use the same credentials to access Computer 3 (so we get the Anonymous login error).

    To access Computer 3 it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established.

    In the illustration above, the tiers represent the following:

    - Client tier (computer 1): The client computer from which an application makes a request.
    - Middle tier (computer 2): The Web server or farm where the client's request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we're only concentrating on native deployments just now).
    - Back-end tier (computer 3): The Database/Analysis Services server/cluster where the requested data is stored.

    In order to enable Kerberos authentication for Reporting Services it's necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation, and configure the authentication types for Reporting Services.

    Service Principal Names (SPNs) are unique identifiers for services and identify the account's type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:

        -- SQL Server Service
        SETSPN -S mssqlsvc/servername:1433 Domain\SQL

    For named instances, or if the default instance is running under a different port, the specific port number should be used.

        -- Reporting Services Service
        SETSPN -S http/servername Domain\SSRS
        SETSPN -S http/servername.domain.com Domain\SSRS

    The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:

        SETSPN -S http/www.reports.com Domain\SSRS

        -- Analysis Services Service
        SETSPN -S msolapsvc.3/servername Domain\SSAS

    Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:

    - Client: (1) The requesting application must support the Kerberos authentication protocol. (2) The user account making the request must be configured on the domain controller; confirm that the following option is not selected: "Account is sensitive and cannot be delegated."
    - Servers: (1) The service accounts must be trusted for delegation on the domain controller. (2) The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.
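
    A hedged verification step, not in the original post: after registering, it's worth listing what actually got written, since duplicate SPNs are a common cause of silent fallback to NTLM. From a command prompt on a domain-joined machine:

        :: List the SPNs now registered on each service account.
        SETSPN -L Domain\SSRS
        SETSPN -L Domain\SQL
        SETSPN -L Domain\SSAS

        :: Search the domain for duplicate registrations of the HTTP SPN
        :: (the -Q switch is available on Windows Server 2008 and later).
        SETSPN -Q http/servername
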
    In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the "Account is sensitive and cannot be delegated" option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports).

    Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:

        <Authentication>
            <AuthenticationTypes>
                <RSWindowsNegotiate/>
            </AuthenticationTypes>
            <EnableAuthPersistence>true</EnableAuthPersistence>
        </Authentication>

    This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that's left to do is test that Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end).

    Resources:

    - Manage Kerberos Authentication Issues in a Reporting Services Environment
      http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
    - Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products
      http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
    - How to: Configure Windows Authentication in Reporting Services
      http://msdn.microsoft.com/en-us/library/cc281253.aspx
    - RSReportServer Configuration File
      http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
    - Planning for Browser Support
      http://msdn.microsoft.com/en-us/library/ms156511.aspx

  • MacPorts: port install jpeg fails

    - by Philipp Keller
    History: installed MacPorts on Leopard, upgraded to Snow Leopard, uninstalled all ports, reinstalled Xcode. Then:

        $ sudo port uninstall jpeg
        $ sudo port install jpeg
        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/graphics/jpeg
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port jpeg.
        DEBUG: Requested variant i386 is not provided by port jpeg.
        DEBUG: Requested variant macosx is not provided by port jpeg.
        --->  Computing dependencies for jpeg
        DEBUG: Executing org.macports.main (jpeg)
        DEBUG: Skipping completed org.macports.fetch (jpeg)
        DEBUG: Skipping completed org.macports.checksum (jpeg)
        DEBUG: Skipping completed org.macports.extract (jpeg)
        DEBUG: Skipping completed org.macports.patch (jpeg)
        --->  Configuring jpeg
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (jpeg)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-arch x86_64' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local'
        sh: line 0: cd: /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a: No such file or directory
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_graphics_jpeg/work/jpeg-8a" && ./configure --prefix=/opt/local " returned error 1
            while executing
        "$procedure $targetname"
        Warning: the following items did not execute (for jpeg): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.
        To report a bug, see http://guide.macports.org/#project.tickets
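
    A hedged reading, not from the original post: configure fails before it even starts because the extracted work directory (.../work/jpeg-8a) is missing, while MacPorts skipped the fetch/extract phases it believed were already complete, i.e. stale build state left over from before the OS upgrade. Cleaning forces a fresh extract:

        sudo port clean --all jpeg
        sudo port install jpeg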

  • PHP pdo_dblib: cannot find / unable to load FreeTDS

    - by MaxPowers
    Self-hosted box, RHEL 6, PHP 5.3.3, PDO installed, FreeTDS installed, pdo_dblib: so far no luck installing. My goal is to use PDO with Sybase. I am attempting to install pdo_dblib from the matching version of the PHP source code. I have tried a variety of methods and searched quite a bit for help on this topic, but have yet to be successful.

    Method 1. Install FreeTDS:

        $ ./configure
        $ make
        $ su root
        Password:
        $ make install

    This is successful. Install pdo_dblib inside the ext/pdo_dblib folder:

        $ phpize
        $ ./configure
        $ make
        $ make test

    Error output (the same warning is printed twice more):

        PHP Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        Warning: PHP Startup: Unable to load dynamic library '/home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so' - /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib/modules/pdo_dblib.so: undefined symbol: php_pdo_register_driver in Unknown on line 0

    That doesn't look good... I researched this and found an interesting hack for it here. But changing pdo.ini to pdo_0.ini was not the solution, as I still got the same errors on make test.

        $ su
        $ make install

    Output:

        Installing shared extensions: /usr/lib64/php/modules/

    That seems strange... and no, it doesn't actually install (not showing up in phpinfo after an Apache restart).

    Method 2. Install FreeTDS following the instructions exactly; I add the prefix:

        $ ./configure --prefix=/usr/local/freetds
        $ make
        $ su root
        Password:
        $ make install

    This is successful. Install pdo_dblib inside the ext/pdo_dblib folder:

        $ phpize
        $ ./configure --with-sybase=/usr/local/freetds

    This produces the following error at the bottom of the output:

        checking for PDO_DBLIB support via FreeTDS... yes, shared
        configure: error: Cannot find FreeTDS in known installation directories

    Method 3. FreeTDS ./configure variations (including or omitting the --prefix) did not change the result, so I'll skip that part. Install the pdo_dblib PECL extension following the method specified here:

        pecl download pdo_dblib
        tar -xzvf PDO_DBLIB-1.0.tgz

    Removed the line:

        <dep type="ext" rel="ge" version="1.0">pdo</dep>

    Saved the package.xml file and moved it into the PDO_DBLIB directory:

        mv package.xml ./PDO_DBLIB-1.0

    Navigated to the PDO_DBLIB directory, then installed the package from the directory:

        cd ./PDO_DBLIB-1.0
        pecl install package.xml

    But this command gives me the following error output, same as Method 2:

        checking for PDO_DBLIB support via FreeTDS... yes, shared
        configure: error: Cannot find FreeTDS in known installation directories
        ERROR: `/home/sybase/Install_items/pecl_pdo_dblib/PDO_DBLIB-1.0/configure' failed
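
    A hedged sketch, not from the original post: the PDO driver's configure switch is --with-pdo-dblib (taking the FreeTDS prefix as its argument); --with-sybase belongs to the old non-PDO sybase extension, which would explain why configure never finds the Method 2 prefix. Paths follow the question.

        cd /home/sybase/Install_items/php_533_src/php-5.3.3/ext/pdo_dblib
        phpize
        ./configure --with-pdo-dblib=/usr/local/freetds
        make
        make install

        # Enable the extension (php.ini or a conf.d snippet):
        echo 'extension=pdo_dblib.so' > /etc/php.d/pdo_dblib.ini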
