Search Results

Search found 19615 results on 785 pages for 'apache config'.


  • Hylafax / Capi4hylafax: faxgetty does not recognize number of lines

    - by Wrikken
    We've got a T.30 card with 30 working lines on it, but for some reason, if I add more than 30 faxes to the queue at any time (and we're busy enough at peak times that this happens a lot), faxgetty sends faxes to non-existent lines and they appear in the error queue with a 'busy' signal on the line, which results in a lot of failed faxes because the counter of max 3 tries increases rapidly. This is using faxgetty (USE_FAXGETTY="y" in /etc/default/hylafax). I've inherited this thing, so I'm not entirely sure how faxgetty is supposed to know the number of lines.

    However, if I alter the script to use faxmodem (USE_FAXGETTY="n" in /etc/default/hylafax and manually enabling 30 modems), this behavior goes away: new faxes 'wait' for a line to be available before trying to send, so each try/fail is a valid one on a working line, greatly decreasing the number of failed faxes. However, when researching this, almost everyone describes faxgetty as the preferred, more robust method, and on top of that, for some unexplained reason all FIFOs disappeared after several errorless hours with faxmodem, forcing a HylaFAX restart using faxgetty until we figure out why the faxmodem solution failed (which is another question, and somewhat out of scope here).

    Environment:
    Debian 2.6.26-2-amd64
    capi4hylafax 1:01.03.00.99.svn.300-12
    hylafax-client 2:4.4.4-10.1
    hylafax-server 2:4.4.4-10.1

    Config:

      -- hfaxd.conf --
      LogFacility: daemon
      ServerTracing: 0x1ff

      -- hyla.conf --
      Host: localhost
      Verbose: No
      VRes: 196
      TimeZone: local
      DialRules: "/etc/hylafax/dialrules.europe"

      -- /etc/hylafax/config --
      InternationalPrefix: 00
      LongDistancePrefix: 0
      AreaCode: 99999
      CountryCode: 31
      DialStringRules: "etc/dialrules.europe"
      ModemGroup: any:faxCAPI
      SendFaxCmd: "/usr/bin/wrapc2faxsend"

      -- /etc/hylafax/config.faxCAPI --
      SpoolDir: /var/spool/hylafax
      FaxRcvdCmd: /var/spool/hylafax/bin/faxrcvd
      PollRcvdCmd: /var/spool/hylafax/bin/pollrcvd
      FaxReceiveUser: uucp
      FaxReceiveGroup: dialout
      LogFile: /var/spool/hylafax/log/capi4hylafax   # no, checking this log did not yield anything interesting
      LogTraceLevel: 4
      LogFileMode: 0600
      ModemGroup: any:faxCAPI
      # repeats for faxCAPI2 through faxCAPI30, with of course another devicename/local ident:
      {
        HylafaxDeviceName: faxCAPI
        RecvFileMode: 0600
        FAXNumber: ****redacted****
        LocalIdentifier: ****some-ident-per-device***
        MaxConcurrentRecvs: 0
        OutgoingController: 1
        OutgoingMSN:
        SuppressMSN: 0
        NumberPrefix:
        NumberPlusReplacer: "00"
        UseISDNFaxService: 0
        RingingDuration: 0
        {
          Controller: 1
          AcceptSpeech: 0
          UseDDI: 0
          DDIOffset:
          DDILength: 0
          IncomingDDIs:
          IncomingMSNs:
          AcceptGlobalCall: 1
        }
      }

    So in short: how does faxgetty determine the number of lines available? (The man page isn't terribly revealing, and I can't find an appropriate setting in hylafax-config.) And how can I get a capi4hylafax/hylafax setup which correctly queues more faxes than there are lines available, without immediately incrementing the fail count? We will not be receiving any faxes on this machine, by the way. As I said, I've inherited this thing, so if there are important configuration options I'm not including, please let me know.
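    A minimal sketch of the faxmodem-based registration described above, assuming the CAPI pseudo-modems are named faxCAPI1 through faxCAPI30 (the device names are an assumption, not taken from the post). With something like this in the startup path, faxq knows exactly how many lines exist and jobs queue for a free one instead of being dispatched to a non-existent line:

      #!/bin/sh
      # register each CAPI line with the HylaFAX scheduler (faxq)
      for n in $(seq 1 30); do
          faxmodem "faxCAPI$n"
      done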

    Read the article

  • Validating GPG key signature authenticity

    - by Dor
    I'm trying to validate the integrity of my httpd-2.2.17.tar.gz image. I followed the steps written on the following pages: http://httpd.apache.org/download.cgi#verify http://httpd.apache.org/dev/verification.html#Validating But I got:

      WARNING: This key is not certified with a trusted signature!
      gpg: There is no indication that the signature belongs to the owner.

    What do I need to do in order to verify the authenticity of the key?
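    A sketch of the usual verification flow, assuming the Apache KEYS file and the detached .asc signature have been downloaded next to the tarball; the warning above only goes away once the signing key's fingerprint has been checked out of band and the key has been marked as trusted locally:

      # import the Apache httpd release keys and verify the signature
      gpg --import KEYS
      gpg --verify httpd-2.2.17.tar.gz.asc httpd-2.2.17.tar.gz

      # compare the fingerprint against an out-of-band source, then
      # locally sign the key so gpg stops printing the trust warning
      gpg --fingerprint <key-id>
      gpg --lsign-key <key-id>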

    Read the article

  • Solr gives CorruptIndexException codec header mismatch

    - by Alaa Alomari
    I have Solr 3.5 (Jetty). Suddenly it started giving:

      java.lang.RuntimeException: org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=448 vs expected header=1071082519
          at org..[SomeText]...
      Caused by: org.apache.lucene.index.CorruptIndexException: codec header mismatch: actual header=448 vs expected header=1071082519
          at org..[SomeText]...

    Has anybody faced the same problem before? Any idea of how to fix it? Thanks
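    A sketch of inspecting the index with Lucene's CheckIndex tool; the jar name and index path are assumptions for a stock Solr 3.5 layout, and -fix removes unreadable segments (losing their documents), so back the index directory up first:

      # read-only inspection of the index
      java -cp lucene-core-3.5.0.jar org.apache.lucene.index.CheckIndex \
          /path/to/solr/data/index

      # only after taking a backup: drop the corrupt segments
      java -cp lucene-core-3.5.0.jar org.apache.lucene.index.CheckIndex \
          /path/to/solr/data/index -fix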

    Read the article

  • Showing Directory Root When Launching Rails App Using Apache2 and Passenger

    - by LightBe Corp
    I have done the following in an attempt to host a Rails 3.2.3 application using Apache 2.2.21 and Passenger 3.0.13:

    1. Installed the Passenger gem
    2. rvmsudo passenger-install-apache2-module
    3. Added the website info to /etc/apache2/extra/httpd-vhosts.conf
    4. Added a line to /etc/hosts (not sure if this was needed or not; it is not mentioned in the Passenger documentation)
    5. Uncommented the line in /etc/apache2/httpd.conf to Include /etc/apache2/extra/httpd-vhosts.conf
    6. Restarted Apache

    When I try to pull up my website the following displays:

      Index of /
      Name  Last modified  Size  Description
      Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10 with Suhosin-Patch Phusion_Passenger/3.0.13 Server at lightbesandbox2.com Port 443

    Here is the /etc/hosts entry for the website:

      127.0.0.1 www.lightbesandbox2.com

    Here is my /etc/apache2/extra/httpd-vhosts.conf entry for the website:

      NameVirtualHost *:80
      <VirtualHost *:80>
        ServerName www.lightbesandbox2.com
        ServerAlias lightbesandbox2.com
        PassengerAppRoot /Users/server1/Sites/iktusnetlive_RoR/
        DocumentRoot /Users/server1/Sites/iktusnetlive_RoR/public
        <Directory /Users/server1/Sites/iktusnetlive_RoR/public>
          AllowOverride all
          Options -MultiViews
        </Directory>
      </VirtualHost>

    When I do rvmsudo passenger-status I get the following output:

      ----------- General information -----------
      max      = 6
      count    = 1
      active   = 0
      inactive = 1
      Waiting on global queue: 0

      ----------- Application groups -----------
      /Users/server1/Sites/iktusnetlive_RoR/:
        App root: /Users/server1/Sites/iktusnetlive_RoR/
        * PID: 8140   Sessions: 0   Processed: 2   Uptime: 20m 51s

    None of my assets are in the public folder in my Rails app. I have written an application using the template presented in Michael Hartl's Ruby on Rails Tutorial. The home page is in /app/views/static_pages/home.html.erb. I decided to copy an index.html file into the public folder to see if it would display. It displayed as I had hoped. Is there a way to get Passenger to find my assets without me having to rewrite my application? Any help would be appreciated.

    Update 6/23/2012 10:00 am CDT GMT-6: I corrected the problems with my file and have successfully executed the rake assets:precompile command. I still get the index page as before. I have made no other changes. I did a passenger-status command and it is still loaded. Restarting Apache did nothing.
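    One thing worth double-checking in this situation: the three directives that passenger-install-apache2-module prints at the end have to be present in the Apache configuration (httpd.conf or an included file), otherwise Apache serves DocumentRoot as a plain directory listing exactly as shown above. A sketch with placeholder paths, since the real ones depend on the RVM/gem layout:

      LoadModule passenger_module /path/to/passenger-3.0.13/ext/apache2/mod_passenger.so
      PassengerRoot /path/to/passenger-3.0.13
      PassengerRuby /path/to/ruby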

    Read the article

  • HowTo redirect HTTP to HTTPS on the same httpd?

    - by mosg
    Hi. Here is what I have got:

    - CentOS 5.4 (32-bit) installed
    - Apache httpd (Server version: Apache/2.2.11 (Unix))
    - mod_rewrite already present

    Question: how do I redirect a simple http://site.com to https://site.com without using VirtualHost definitions?

    PS: I tried to find this in earlier answers on SF, but didn't find a nice solution. Thanks.
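    A minimal mod_rewrite sketch for this, usable in the main server config (or an .htaccess file) without touching any VirtualHost block; it assumes HTTPS is already answering on the default port:

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]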

    Read the article

  • What PHP, Xdebug and Eclipse configurations work on Windows 7 64 bit?

    - by thaddeusmt
    I have been mucking around for days, trying to find the right combination that lets me debug with breakpoints and variable viewing, in Eclipse, without crashing Apache. PHP 5.3? PHP 5.2? Eclipse Helios? Eclipse Galileo? One or the other with certain versions of Xdebug or PHP? Or do I really need to use NetBeans or something else? Is my 64-bit OS the problem? Do I need specific 64-bit versions of PHP, Eclipse or Xdebug to work on Windows 7 64? Any special Xdebug config options and tricks that I need in php.ini? Like turning off xdebug.profiler_enable, or not using quotes around my zend_extension path to the Xdebug DLL? A vhosts issue? Scrap the whole thing and go back to Win XP or Ubuntu?

    Here's what I've already been reading:

    - http://stackoverflow.com/questions/4509245/so-eclipse-and-xdebug-walk-into-a-bar-and-then-my-apache-server-dies/4602473
    - http://stackoverflow.com/questions/206788/why-does-xdebug-crash-apache-on-every-xampp-install-ive-tried
    - http://bugs.xdebug.org/view.php?id=459
    - https://bugs.eclipse.org/bugs/show_bug.cgi?id=312951#c8
    - http://stackoverflow.com/questions/2799936/xdebug-for-php-5-2-on-windows-7-64bit

    ...and so on: SO, the Xdebug bug tracker, Eclipse Bugzilla, etc.

    Basically, what would be great is if folks could post their working (i.e. debugging with breakpoints and local variable viewing in Eclipse) Win7 64-bit configurations, including:

    - PHP version (5.3.1, 5.2.11, etc.)
    - Xdebug DLL (2.1.0-5.3-vc6, etc.)
    - Xdebug php.ini config (zend_extension = "C:\xampp\php\ext\php_xdebug.dll", etc.)
    - Apache version (2.2.14, etc.)
    - Eclipse version
    - Anything else important? The "secret ingredient"?

    Thanks! I miss my debugger since I got a new laptop with Win 7! Sadly it looks like some of the drivers (switchable graphics, multi-touch pad, etc.) on my lappy don't work right with Ubuntu yet, so I feel a bit trapped on Win :( I know I will figure something out eventually, but I've been at this trial-and-error game a while and am seeking some guidance.

    (Originally posted on StackOverflow, but moved to SuperUser: http://stackoverflow.com/questions/4628215/what-php-xdebug-and-eclipse-configurations-work-on-windows-7-64-bit)
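    For reference, a sketch of the php.ini settings that Eclipse PDT's remote debugger usually expects with Xdebug 2.x; the DLL path is an assumption based on the XAMPP layout mentioned in the question, and the exact Xdebug build (vc6/vc9, thread-safe or not, 32/64-bit) has to match the PHP binary in use:

      [xdebug]
      zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
      xdebug.remote_enable = 1
      xdebug.remote_handler = dbgp
      xdebug.remote_host = localhost
      xdebug.remote_port = 9000
      ; keep the profiler off while chasing breakpoint/crash problems
      xdebug.profiler_enable = 0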

    Read the article

  • Lighttpd won't restart due to a fastcgi.server error

    - by Mint
    Here is what comes up when I try to restart lighttpd:

      user:/# /etc/init.d/lighttpd restart
      Stopping web server: lighttpd.
      Starting web server: lighttpd
      Duplicate config variable in conditional 0 global: fastcgi.server
      2010-02-13 22:08:30: (configfile.c.907) source: /etc/lighttpd/lighttpd.conf line: 179 pos: 1 parser failed somehow near here: (EOL)
      failed!

    Lighttpd restarts fine when I comment out the fastcgi config lines.
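    "Duplicate config variable" generally means fastcgi.server is assigned twice, for example once in an included file such as conf-enabled/10-fastcgi.conf and again around line 179 of lighttpd.conf. A sketch of the usual fix, appending to the existing array in the second place instead of reassigning it (socket and binary paths are placeholders):

      # second definition: append rather than redefine
      fastcgi.server += ( ".php" =>
        ((
          "socket"   => "/var/run/lighttpd/php.socket",
          "bin-path" => "/usr/bin/php-cgi"
        ))
      )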

    Read the article

  • (Ubuntu - LAMP) phpMyAdmin install error 404 Not Found

    - by meyosef
    Hi, I installed Apache, MySQL, PHP and then phpMyAdmin on Ubuntu desktop 10.04. I tested Apache at localhost in a browser and it works. I tested MySQL from the console and it works. I tested phpMyAdmin at http://localhost/phpmyadmin/ and get a 404 Not Found error. I have seen the suggested solution for this problem - copy /phpmyadmin/ to /www/ - but I don't want to put it in /www/ next to my website files (maybe I'm wrong or missing something; please tell me if that's the case). How can this problem be solved in the best way? Thanks
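    On Ubuntu the packaged phpMyAdmin is normally wired into Apache through its own config file rather than by copying anything into the docroot. A sketch of that approach (paths are the Ubuntu 10.04 package defaults and may differ for a manual install):

      # have Apache include phpMyAdmin's bundled configuration, then reload
      sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
      sudo /etc/init.d/apache2 reload

      # that included file essentially provides:
      #   Alias /phpmyadmin /usr/share/phpmyadmin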

    Read the article

  • Changing the default data directory to one on an external NAS, or adding an external share

    - by Hagbart Celine
    So I have been searching the web for days, looking for a solution to my problem. Now my only hope is you guys. I have installed ownCloud successfully on my Windows Server 2008 R2. It all runs smoothly and I can connect without problems. So the first checks are OK.

    Now I wanted to change the default data directory from my server to a shared folder on my NAS (Synology DS1813+, DSM 5.0-4493 Update 3). I tried the following:

    Changing the directory in config.php. I changed the path in the config file from "C:\inetpub\wwwroot\myfolder\data" to "\\NASIP\cloud". By doing this the ownCloud server only shows (translated from German): "Data directory (\\192.168.2.4\Cloud\data) is invalid. Please make sure the data directory contains a file named '.ocdata' in its root." I also tried copying the files that were created in the local data storage to the share on the NAS. No luck.

    Next I tried it by mapping a network drive and using that in config.php. But still no luck; I get the same message about the missing .ocdata file.

    Then I tried the "External Storage" app that comes with ownCloud. I thought that at least I could add the share as external storage. But this also does not work. I tried UNC and the mapped drive name (Z:), but nothing helped.

    So now I'm turning to you. Does anyone have experience with this kind of setup? Or can you even tell me how to make it work? (Default or external storage, I don't care anymore.) Using NAS (Synology DS1813+, DSM 5.0-4493 Update 3), ownCloud 7, Windows Server 2008 R2, IIS7.

    I got an answer on another forum - "the second option is how it should be done":

    1. Put OC in maintenance mode
    2. Mount (map, in the Windows world) your NAS directly to your OS
    3. Copy the local data directory to the NAS mount
    4. Ensure the permissions are set up to give the web user access to the NAS mount
    5. Update OC config.php with the new data path
    6. Disable OC maintenance mode

    And this seems like the right way. "Ensure the permissions are set up to give the web user access to the NAS mount" - I guess this is where I am not sure. What user exactly on my server is making the requests to the NAS? If the user is for example "IUSR", can I just create an account on my Synology NAS and give him full access to my share? (But what is IUSR's password?) I have full root SSH access to my NAS, so if you can tell me what chmod or chown I need to use on my cloud folder...
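    A sketch of step 5 using the UNC path from the error message above, assuming the identity IIS runs the site under has been granted read/write access to that share (which account that actually is being part of the open question):

      # config/config.php (fragment) - point ownCloud at the share (UNC form)
      'datadirectory' => '\\\\192.168.2.4\\Cloud\\data',

      # from cmd.exe, create the marker file ownCloud checks for
      type NUL > \\192.168.2.4\Cloud\data\.ocdata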

    Read the article

  • Remove (or add) Entry in Indicator-Applet (Ubuntu/GNOME)

    - by Tim Lytle
    I can't seem to find a guide or reference on how to configure the 'indicator-applet' (aka MessagingMenu) that came about in the 9.04 release of Ubuntu. It's that little mail icon that lists messaging apps. I can find docs about what it should do, people complaining about how it works, references that the API changed in 9.10, but not much on how to change the configuration. The MessagingMenu design spec page says that the config file should be at $HOME/.config/indicators/messages/applications/, but there's nothing there on my install (9.10).

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous.

    Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
    - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
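    A sketch of the module removal mentioned in the first workaround, for the web.config shared by the anonymous sites under the integrated pipeline (the admin site, once moved to its own site and app pool, would keep the module):

      <system.webServer>
        <modules>
          <!-- keep requests for these sites out of WindowsAuthenticationModule entirely -->
          <remove name="WindowsAuthentication" />
        </modules>
      </system.webServer>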

    Read the article

  • lacp, cisco 3550, 3560, help with configuration

    - by Flamewires
    Hey all, this is a repost from a question I asked on the Cisco forums but never got a useful reply.

    I'm trying to convert the FreeBSD servers at work to dual-gig lagg links from regular gigabit links. Our production servers are on a 3560. I have a small test environment on a 3550. I have achieved fail-over, but am having trouble achieving the speed increase. All servers are running gig Intel (em) cards. The config for the servers is:

    BSDServer:

      #!/bin/sh
      # bring up both interfaces
      ifconfig em0 up media 1000baseTX mediaopt full-duplex
      ifconfig em1 up media 1000baseTX mediaopt full-duplex
      # create the lagg interface
      ifconfig lagg0 create
      # set lagg0's protocol to lacp, add both cards to the interface,
      # and assign it em1's ip/netmask
      ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0

    The switches are configured as follows:

      # clear out old junk
      no int Po1
      default int range GigabitEthernet 0/15 - 16
      # config ports
      interface range GigabitEthernet 0/15 - 16
        description lagg-test
        switchport
        duplex full
        speed 1000
        switchport access vlan 192
        spanning-tree portfast
        channel-group 1 mode active
        channel-protocol lacp
        **** switchport trunk encapsulation dot1q ****
        no shutdown
        exit
      interface Port-channel 1
        description lagginterface
        switchport access vlan 192
        exit
      port-channel load-balance src-mac
      end

    (Obviously change 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch has 100Mbit ports.)

    With this config on the 3550, I get failover and 92Mbit/sec speed on both links, simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this is only with the "switchport trunk encapsulation dot1q" line. First, I do not understand why I need this; I thought it was only for connecting switches. Is there some other setting which this turns on that is actually responsible for the speed increase?

    Second, this config does not work on the 3560. I get failover, but not the speed increase. Speeds drop from gig/sec to 500Mbit/sec when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing.

    In my test I am using iperf. I have the server (lagg box) set up as the server (iperf -s), and the client computers are clients (iperf -c server-ip-address), so the source MAC (and IP) are different for both connections.

    Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.
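    A few IOS show commands that help confirm whether both physical links actually joined the LACP bundle and which hash the switch is using (a sketch, not part of the original post):

      show etherchannel 1 summary
      show etherchannel load-balance
      show lacp neighbor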

    Read the article

  • PHP fastcgi handler doesn't work

    - by user1260968
    I have a CentOS server (Server version: Apache/2.2.15 (Unix), Server built: Feb 13 2012 22:31:42) with mod_fastcgi.x86_64 (2.4.6-2.el6.rf) and PHP 5.3.3. Some sites do not work in fastcgi mode. In the Apache error.log:

      [Mon Sep 03 19:20:37 2012] [warn] [client 80.*.*.*] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
      [Mon Sep 03 19:20:37 2012] [error] [client 80.*.*.*] Premature end of script headers: index.php

    Can anybody tell me how to solve this?
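    Note that the log lines above come from mod_fcgid (not mod_fastcgi), and "Premature end of script headers" there is frequently a timeout or request-size limit rather than a PHP error. A sketch of the mod_fcgid directives commonly raised in this situation; the values are illustrative only:

      # e.g. in /etc/httpd/conf.d/fcgid.conf
      FcgidIOTimeout      120
      FcgidBusyTimeout    300
      FcgidMaxRequestLen  1073741824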

    Read the article

  • How to configure SNI so as to have the benefits of SNI

    - by cd
    Hi, how can I configure SNI to get its benefits? I am using OpenSSL 1.0.0 beta5 and Apache 2.2.14. Can anyone tell me the complete procedure? I am configuring the virtual hosts in the ssl.conf file and have different certs for each site hosted on Apache. Need help, it's urgent.
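    A sketch of what SNI-based name virtual hosting looks like on Apache 2.2.12+ built against an SNI-capable OpenSSL: several <VirtualHost *:443> blocks, each with its own certificate, distinguished only by ServerName. Hostnames and certificate paths below are placeholders:

      NameVirtualHost *:443

      <VirtualHost *:443>
        ServerName site-one.example.com
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/site-one.crt
        SSLCertificateKeyFile /etc/pki/tls/private/site-one.key
      </VirtualHost>

      <VirtualHost *:443>
        ServerName site-two.example.com
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/site-two.crt
        SSLCertificateKeyFile /etc/pki/tls/private/site-two.key
      </VirtualHost>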

    Read the article

  • Create .gitconfig for chrooted users

    - by Vincent LITUR
    I have several chrooted users on my server, and I want to install git for specific users. I get stuck at the command:

      git config --global user.name "user_name"

    I run this command connected as the user, and I get this error:

      error: could not lock config file /home/username/.gitconfig: Permission denied

    I tried to create the file from root, and then put chmod 755 and chown username on .gitconfig, but I still get the error. Is there a way to do this?

    Edit: this question http://stackoverflow.com/questions/17908386/unable-to-create-gitconfig-file-for-user answers mine.
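    A quick sanity-check sketch for this kind of failure: inside the chroot, the home directory itself (not just the .gitconfig file) must be writable by the user, and $HOME may resolve to a different path than the one seen from outside the chroot:

      # run these as the chrooted user
      echo "$HOME"              # where git will actually look for .gitconfig
      ls -ld "$HOME"            # the directory must be owned and writable by the user
      touch "$HOME/.gitconfig"  # reproduces the locking problem without git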

    Read the article

  • JNDI Datasource definition in Tomcat 6.0

    - by romaintaz
    I want to define a DataSource to an Oracle database on my Tomcat 6.0. So, in conf/server.xml (yes, I know that this DataSource will be available for all the webapps in Tomcat, but it's not a problem here), I've set this Resource:

      <GlobalNamingResources>
        <Resource name="hibernate/HibernateDS"
                  auth="Container"
                  type="javax.sql.DataSource"
                  url="jdbc:oracle:thin:@myserver:1542:foo"
                  username="foo"
                  password="bar"
                  driverClassName="oracle.jdbc.OracleDriver"
                  maxActive="50"
                  maxIdle="10"
                  validationQuery="select 1 from dual"/>

    Then, in the web.xml of my application, I set a resource-ref element:

      <resource-ref>
        <description>Hibernate Datasource</description>
        <res-ref-name>hibernate/HibernateDS</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
      </resource-ref>

    Finally, as Hibernate is used to manage the database connection, I have a webapps/mywebapp/WEB-INF/classes/hibernate.cfg.xml that creates a session-factory using the JNDI DataSource:

      <hibernate-configuration>
        <session-factory>
          <property name="connection.datasource">java:comp/env/hibernate/HibernateDS</property>
          ...

    However, when I start my Tomcat server, I get an error that says the connection could not be obtained:

      INFO [net.sf.hibernate.util.NamingHelper] JNDI InitialContext properties:{}
      INFO [net.sf.hibernate.connection.DatasourceConnectionProvider] Using datasource: java:comp/env/hibernate/HibernateDS
      INFO [net.sf.hibernate.transaction.TransactionFactoryFactory] Transaction strategy: net.sf.hibernate.transaction.JDBCTransactionFactory
      INFO [net.sf.hibernate.transaction.TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of process level read-write cache is not recommended)
      WARN [net.sf.hibernate.cfg.SettingsFactory] Could not obtain connection metadata
      org.apache.tomcat.dbcp.dbcp.SQLNestedException: Cannot create JDBC driver of class '' for connect URL 'null'
          at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1150)
          at org.apache.tomcat.dbcp.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
          at net.sf.hibernate.connection.DatasourceConnectionProvider.getConnection(DatasourceConnectionProvider.java:59)
          at net.sf.hibernate.cfg.SettingsFactory.buildSettings(SettingsFactory.java:84)
          at net.sf.hibernate.cfg.Configuration.buildSettings(Configuration.java:1172)
          ...
      Caused by: java.lang.NullPointerException
          at sun.jdbc.odbc.JdbcOdbcDriver.getProtocol(JdbcOdbcDriver.java:507)
          at sun.jdbc.odbc.JdbcOdbcDriver.knownURL(JdbcOdbcDriver.java:476)
          at sun.jdbc.odbc.JdbcOdbcDriver.acceptsURL(JdbcOdbcDriver.java:307)
          at java.sql.DriverManager.getDriver(DriverManager.java:253)
          at org.apache.tomcat.dbcp.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1143)
          ... 11 more

    Do you have any idea why Hibernate is not able to construct the session-factory? What is wrong in my configuration?
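    One piece missing from the configuration shown above: a Resource declared under GlobalNamingResources is not visible in a webapp's java:comp/env context by itself; Tomcat needs a ResourceLink to expose it. A sketch for the webapp's META-INF/context.xml (or the corresponding Context element in the Host configuration):

      <Context>
        <!-- expose the global DataSource to this webapp as java:comp/env/hibernate/HibernateDS -->
        <ResourceLink name="hibernate/HibernateDS"
                      global="hibernate/HibernateDS"
                      type="javax.sql.DataSource"/>
      </Context>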

    Read the article

  • Why not use a WAMP stack?

    - by matang
    I recently had a talk with some experienced people and they suggested to me not to use a WAMP stack, and instead install apache, mysql and php separately. I don't understand why they have suggested this, though, so can anyone tell me? Is there a particular disadvantage of WAMP, or a particular advantage to installing all of them separately? Since a WAMP stack itself is composed of apache, mysql and php, then what's the difference between using the WAMP stack and installing them all separately?

    Read the article

  • GlassFish docroot internationalization

    - by Mr.J4mes
    At the moment, I am using the Apache web server to redirect all HTTP requests to port 8080 to be served by the GlassFish app server. Just like Apache, GlassFish has a docroot folder to store static pages. I've tried googling for a while but I could not figure out whether there's a way to set up internationalization for GlassFish's docroot. I'd be very grateful if you could give me a hint or a link to some tutorial regarding this matter.

    Read the article

  • Windows 2003 registry corrupt - endless reboot

    - by Jack
    Windows 2003 will run the loading screen, then it stops with "STOP: c0000218 registry file failure or corrupt. The registry cannot load the hive \systemroot\system32\config\security", then it starts a countdown about dumping the physical memory to disk and reboots itself again. I found "Error starting Windows SBS 2003 - STOP: c0000218", but the config there is in a different directory than mine. Are the Recovery Console steps the same to try here?
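    For reference, a sketch of the usual Recovery Console repair for a corrupt hive, adapted to the security hive named in the stop message; the copy in \windows\repair reflects the state at install (or the last repair backup), so back the current hive up first and plan to restore a more recent copy from backups afterwards:

      md c:\windows\tmp
      copy c:\windows\system32\config\security c:\windows\tmp\security.bak
      delete c:\windows\system32\config\security
      copy c:\windows\repair\security c:\windows\system32\config\security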

    Read the article

  • Post too large exception in Sun ONE application server

    - by Business Caliber
    Hi, I am getting the following exception after posting a large amount of data:

      [#|2010-03-01T23:36:49.764-0600|WARNING|sun-appserver-ee8.1_02|javax.enterprise.system.stream.err|_ThreadID=31;|java.lang.IllegalStateException: Post too large
          at org.apache.coyote.tomcat5.CoyoteRequest.parseRequestParameters(CoyoteRequest.java:2607)
          at org.apache.coyote.tomcat5.CoyoteRequest.getParameter(CoyoteRequest.java:1139)
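    The limit being hit here is the Coyote connector's maxPostSize check (2 MB by default in that code base). On a plain Tomcat connector it is raised with the maxPostSize attribute, sketched below; where the equivalent setting lives in Sun Java System Application Server 8.1's domain.xml is an assumption to verify against that server's documentation rather than a confirmed location:

      <!-- Tomcat-style connector attribute; value in bytes (here 10 MB), a value <= 0 disables the limit -->
      <Connector port="8080" protocol="HTTP/1.1" maxPostSize="10485760"/>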

    Read the article

  • SQL 2008 Report Manager not working

    - by Fatherjack
    I have a SQL 2008 Developer Edition instance with SSRS, and Report Manager is only available from the local machine. If I try to access it from any other machine I get challenged for my domain username and password three times and then the screen stays blank. I have made changes to some config files (originals copied out) in order to get a third-party application to run, but that is now uninstalled and the config files are all back to vanilla (originals copied back in). I feel it's something to do with authentication but am stuck... any suggestions welcomed. Jonathan
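    One thing commonly checked when remote access prompts repeatedly and then goes blank is the authentication type Report Manager negotiates; a sketch of the relevant section of RSReportServer.config, forcing NTLM instead of a failing Kerberos negotiation (treat this as something to compare against the existing file, not a drop-in replacement):

      <Authentication>
        <AuthenticationTypes>
          <RSWindowsNTLM/>
        </AuthenticationTypes>
      </Authentication>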

    Read the article

  • What benefit do I get from using a 64-bit server?

    - by blockhead
    I bought a small 256MB slice from Slicehost and installed Ubuntu 10.04 64-bit and WordPress on it. Performance was dismal, as Apache was eating up all my memory. Once I did some taming of Apache and switched to fCGI, things ran fine. Next I rebuilt it as a 32-bit server, and performance was much better. What benefit would I get from a 64-bit server? Is it all about the memory?

    Read the article

  • What should I do in order to build curl without error?

    - by hugemeow
    It failed when I ran ./buildconf. The error information is as follows:

      [mirror@home curl]$ ls
      acinclude.m4   CMakeLists.txt     GIT-INFO        MacOSX-Framework     packages       TODO-RELEASE
      Android.mk     configure.ac       include         Makefile.am          perl           vc6curl.dsw
      buildconf      COPYING            install-sh      Makefile.dist        README         winbuild
      buildconf.bat  CTestConfig.cmake  lib             Makefile.msvc.names  RELEASE-NOTES
      CHANGES        curl-config.in     libcurl.pc.in   maketgz              sample.emacs
      CHANGES.0      curl-style.el      log2changes.pl  missing              src
      CMake          docs               m4              mkinstalldirs        tests
      [mirror@home curl]$ ./config
      [mirror@home curl]$ ./buildconf
      buildconf: autoconf version 2.63 (ok)
      buildconf: autom4te version 2.59 (ERROR: does not match autoconf version)
      [mirror@home curl]$ echo $?
      1
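    The error itself points at the cause: autoconf reports 2.63 while autom4te (which ships with autoconf) reports 2.59, so two different autoconf installations are being mixed on the PATH. A sketch of tracking that down and aligning them; the package command assumes an RPM-based system, as the shell prompt suggests:

      which autoconf autom4te          # do they come from the same prefix?
      autoconf --version | head -n 1
      autom4te --version | head -n 1

      # if e.g. /usr/local/bin holds a stale copy, remove or reorder it so both
      # tools come from the same install; on an RPM-based box, reinstalling helps:
      sudo yum reinstall autoconf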

    Read the article
