Search Results

Search found 3500 results on 140 pages for 'factory defaults'.

Page 92/140

  • Wifi Drops Connections with WPA2-PSK

    - by graf_ignotiev
    I run a small computer lab of 10 machines with identical hardware and software (Dell Latitudes with Windows 7 x64 Enterprise), with a ZyWALL 2WG as the router/firewall. Nine of the computers connect to the router over Wi-Fi using WPA2-PSK encryption, while the last one is connected by Ethernet cable. Any computer on the Wi-Fi side occasionally drops off the network (it cannot be pinged and it cannot ping the gateway). This only happens over Wi-Fi, and only when the encryption is WPA2-PSK or WPA-PSK. I tried another router of a different make and model and had no problems. Thinking it could be a software error, I reset the router to factory defaults and installed the newest firmware (V4.04(AQI.8) | 04/09/2010), but the problem persists. The 802.1X log gives the error "User logout because of user disassociation" with the note "WPA2-PSK:00242c582ece:logout", where 00242c582ece is the MAC address of the device. At this point I'm out of things to try and leads to follow. It looks like this user had the same or a similar problem, but none of the proposed solutions work for me.

    Read the article

  • Problem while Installing mysql cluster

    - by Champion
    I downloaded the latest 64-bit MySQL Cluster release (mysql-cluster-gpl-7.1.9-linux-x86_64-glibc23) from the MySQL site and ran the following command to install the MySQL server:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db --no-defaults --basedir=/home/mysql_cluster/mysqlc --datadir=/home/mysql_cluster/my_cluster/mysqld_data

    It fails with:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
        Neither host 'db' nor 'localhost' could be looked up with /home/mysql_cluster/mysqlc/bin/resolveip
        Please configure the 'hostname' command to return a correct hostname.
        If you want to solve this at a later stage, restart this script with the --force option

    My host is pingable and the hostname command returns the correct hostname. I also tried the --force option, but it still fails:

        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 245: /home/mysql_cluster/mysqlc/bin/my_print_defaults: No such file or directory
        Installing MySQL system tables...
        /home/mysql_cluster/mysqlc/scripts/mysql_install_db: line 393: /home/mysql_cluster/mysqlc/bin/mysqld: No such file or directory
        Installation of system tables failed! Examine the logs in /home/mysql_cluster/my_cluster/mysqld_data for more information.

    The script says these files don't exist, but they are present (verified with ls). We are using a Xen VM running 64-bit Debian Lenny. How can this issue be resolved?
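    A quick first check for the "No such file or directory" errors above, sketched with only standard tools and the paths from the question: on a 64-bit Debian Lenny guest this message can appear even though ls shows the file, if the binary's loader or shared libraries are missing or the architecture does not match.

        # Sketch: verify the binaries exist, match the OS architecture, and have their libraries
        ls -l /home/mysql_cluster/mysqlc/bin/my_print_defaults /home/mysql_cluster/mysqlc/bin/mysqld
        file /home/mysql_cluster/mysqlc/bin/my_print_defaults   # should report a 64-bit ELF executable
        ldd /home/mysql_cluster/mysqlc/bin/my_print_defaults    # "not found" entries mean missing shared libraries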

    Read the article

  • How to configure Farsun USB barcode scanner to not auto-trigger

    - by David Grayson
    At my company we have several USB barcode scanners. I'm not sure exactly what model they are, but I think they are the FG9800 from Farsun, because that's what they look like on the exterior. They came with a programming manual that is very similar to this document from the Farsun website. When I scan the "Output Firmware Version" barcode, my scanner types the following into the computer: Farsun V2.00 2011-01-01. Is it possible to configure these scanners so they only read barcodes in response to the trigger button being pressed? I don't want them to read barcodes automatically, and I want this setting to be remembered while the scanner is turned off. Since this scanner only has a USB port, the only way I know of to configure it is to scan barcodes from the manual (or make your own). I have tried scanning the configuration barcodes for Single Scan (013300), Single Scan No Trigger (013301), and Laser/CCD Timeout - 5 Seconds (0134005) from this document. Sometimes (but not often) this puts the scanner into the right mode, where it only scans when the button is pressed. Unfortunately, the scanner seems to always leave this mode when it is power cycled. I have also scanned the "Reset Configuration To Defaults" barcode (0B) many times. We have three different scanners like this and I have not been able to successfully configure any of them. If the things I want are not possible with these Farsun-based scanners, is there some other scanner we can use?

    Read the article

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error: "Unable to open http://blah... Cannot download the information you requested." The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite Chrome being my browser. Things I've checked already: the URLs are good and work fine when pasted manually into Chrome, IE, or Firefox; IE is not set to Work Offline (already found that suggestion on Google); Program Access and Defaults has Chrome selected; and nothing in the URL requires URL encoding, so it's no goofy issue with encoding. I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

    Read the article

  • nginx does not use variables set in /etc/environment on system reboot, but does when restarted from shell

    - by Dave Nolan
    I have a Rails app running on nginx/Passenger. It restarts happily in a shell using sudo /etc/init.d/nginx stop|start|restart, but Passenger throws an error when the system is rebooted: "Missing the Rails #{version} gem". GEM_HOME and GEM_PATH are both set in /etc/environment, so surely they would be available to all processes during reboot?

    /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games"
        GEM_HOME=/var/lib/gems/1.8
        GEM_PATH=/var/lib/gems/1.8

    /etc/init.d/nginx:

        #! /bin/sh
        ### BEGIN INIT INFO
        # Provides:          nginx
        # Required-Start:    $all
        # Required-Stop:     $all
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the nginx web server
        # Description:       starts nginx using start-stop-daemon
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/opt/nginx/sbin/nginx
        NAME=nginx
        DESC=nginx

        test -x $DAEMON || exit 0

        # Include nginx defaults if available
        if [ -f /etc/default/nginx ] ; then
            . /etc/default/nginx
        fi

        set -e

        case "$1" in
          start)
                echo -n "Starting $DESC: "
                start-stop-daemon --start --quiet --pidfile /var/log/nginx/$NAME.pid \
                    --exec $DAEMON -- $DAEMON_OPTS
                echo "$NAME."
                ;;
          stop)
                echo -n "Stopping $DESC: "
                start-stop-daemon --stop --quiet --pidfile /var/log/nginx/$NAME.pid \
                    --exec $DAEMON
                echo "$NAME."
                ;;
          restart|force-reload)
                echo -n "Restarting $DESC: "
                start-stop-daemon --stop --quiet --pidfile \
                    /var/log/nginx/$NAME.pid --exec $DAEMON
                sleep 1
                start-stop-daemon --start --quiet --pidfile \
                    /var/log/nginx/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
                echo "$NAME."
                ;;
          reload)
                echo -n "Reloading $DESC configuration: "
                start-stop-daemon --stop --signal HUP --quiet --pidfile /var/log/nginx/$NAME.pid \
                    --exec $DAEMON
                echo "$NAME."
                ;;
          *)
                N=/etc/init.d/$NAME
                echo "Usage: $N {start|stop|restart|force-reload}" >&2
                exit 1
                ;;
        esac

        exit 0

    Version:

        $ /opt/nginx/sbin/nginx -v
        nginx version: nginx/0.7.67

    This is Ubuntu Lucid.
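    One thing worth noting about the script above: /etc/environment is read by PAM for login sessions, not by init scripts, so GEM_HOME and GEM_PATH are not necessarily set when the boot sequence starts nginx. Since the script already sources /etc/default/nginx when it exists, a minimal sketch (assuming Passenger picks the gem paths up from the spawning environment) is to export the variables there:

        # /etc/default/nginx -- sourced by the init script above if present
        # Sketch: make the gem paths visible to nginx/Passenger at boot time
        export GEM_HOME=/var/lib/gems/1.8
        export GEM_PATH=/var/lib/gems/1.8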

    Read the article

  • How to get Multipath IO with Dell MD3600i into an active/active setup?

    - by Disco
    I'm desperately trying to improve the performance of my SAN connection. Here's what I have:

        [root@xnode1 dell]# multipath -ll
        mpath1 (36d4ae520009bd7cc0000030e4fe8230b) dm-2 DELL,MD36xxi
        [size=5.5T][features=3 queue_if_no_path pg_init_retries 50][hwhandler=1 rdac][rw]
        \_ round-robin 0 [prio=200][active]
         \_ 18:0:0:0 sdb 8:16  [active][ready]
         \_ 19:0:0:0 sdd 8:48  [active][ghost]
         \_ 20:0:0:0 sdf 8:80  [active][ghost]
         \_ 21:0:0:0 sdh 8:112 [active][ready]

    And multipath.conf:

        defaults {
            udev_dir                /dev
            polling_interval        5
            prio_callout            none
            rr_min_io               100
            max_fds                 8192
            user_friendly_names     yes
            path_grouping_policy    multibus
            default_features        "1 fail_if_no_path"
        }
        blacklist {
            device {
                vendor  "*"
                product "Universal Xport"
            }
        }
        devices {
            device {
                vendor            "DELL"
                product           "MD36xxi"
                path_checker      rdac
                path_selector     "round-robin 0"
                hardware_handler  "1 rdac"
                failback          immediate
                features          "2 pg_init_retries 50"
                no_path_retry     30
                rr_min_io         100
                prio_callout      "/sbin/mpath_prio_rdac /dev/%n"
            }
        }

    And the sessions:

        [root@xnode1 dell]# iscsiadm -m session
        tcp: [13] 10.0.51.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [14] 10.0.50.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [15] 10.0.51.221:3260,2 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c
        tcp: [16] 10.0.50.220:3260,1 iqn.1984-05.com.dell:powervault.md3600i.6d4ae520009bd7cc000000004fd7507c

    I'm getting very poor read performance from:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=1000

    The SAN is configured as follows:

        CTRL0,PORT0 : 10.0.50.220
        CTRL0,PORT1 : 10.0.50.221
        CTRL1,PORT0 : 10.0.51.220
        CTRL1,PORT1 : 10.0.51.221

    And on the host (dual 10GbE Ethernet card, Intel DA2):

        IF0 : 10.0.50.1
        IF1 : 10.0.51.1

    Everything is connected to a 10GbE switch dedicated to SAN traffic. My question: why are some paths shown as 'ghost' rather than 'ready', as I would expect in an active/active configuration?
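    A note on the output above, offered as a hedge rather than a definitive answer: on RDAC-based arrays such as the MD36xxi, each LUN is owned by one controller, and the paths through the non-owning controller normally show up as 'ghost'. The multibus default in the config puts all four paths into one round-robin group, which sends half the I/O to the non-owning controller. A sketch of the usual alternative is to group paths by RDAC priority, so only the owning controller's paths carry I/O and the others are kept for failover:

        # Sketch for the MD36xxi device section, using directives already present in the config above
        device {
            vendor                "DELL"
            product               "MD36xxi"
            hardware_handler      "1 rdac"
            path_checker          rdac
            prio_callout          "/sbin/mpath_prio_rdac /dev/%n"
            path_grouping_policy  group_by_prio    # one path group per controller instead of multibus
            failback              immediate
            path_selector         "round-robin 0"
        }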

    Read the article

  • ZFDataGrid table width

    - by emeraldjava
    I'm using ZFDatagrid to display a table within a Zend app. How do I fix the width of the table? I can't find any setting for it in grid.ini.

        public function displaytemptableAction()
        {
            $config = new Zend_Config_Ini(APPLICATION_PATH.'/grids/grid.ini', 'production');
            $db = Zend_Registry::get("db");
            $grid = Bvb_Grid::factory('Table', $config, $id = '');
            $grid->setSource(new Bvb_Grid_Source_Zend_Table(new Model_DbTable_TmpTeamRaceResult()));

            // CRUD configuration
            $form = new Bvb_Grid_Form();
            $form->setAdd(false)->setEdit(true)->setDelete(true);
            $grid->setForm($form);

            $grid->setPagination(0);
            $grid->setExport(array('xml', 'pdf'));
            $this->view->grid = $grid->deploy();
        }

    Read the article

  • DRBD on a disk with an existing file system that takes up all the space

    - by Karolis T.
    I'm currently trying to simulate the environment via Xen. I have installed two Debian systems with this filesystem layout:

        cltest1:/etc# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda2            6.0G  417M  5.2G   8% /
        tmpfs                 257M     0  257M   0% /lib/init/rw
        udev                   10M   16K   10M   1% /dev
        tmpfs                 257M  4.0K  257M   1% /dev/shm

    Host cltest2 is identical. Here's my drbd.conf:

        global {
            minor-count 1;
        }
        resource mysql {
            protocol C;
            syncer {
                rate 10M; # 10 Megabytes
            }
            on cltest1 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.186:7789;
                meta-disk internal;
            }
            on cltest2 {
                device    /dev/drbd0;
                disk      /dev/xvda2;
                address   192.168.1.187:7789;
                meta-disk internal;
            }
        }

    I have not created a filesystem on drbd0. Starting DRBD via the init.d script errors out with:

        Starting DRBD resources: [ d(mysql) /dev/drbd0: Failure: (114) Lower device is already claimed. This usually means it is mounted.
        [mysql] cmd /sbin/drbdsetup /dev/drbd0 disk /dev/xvda2 /dev/xvda2 internal --set-defaults --create-device failed - continuing!

    Running drbdadm create-md mysql gives:

        cltest1:/etc# drbdadm create-md mysql
        md_offset 6442446848
        al_offset 6442414080
        bm_offset 6442217472

        Found ext3 filesystem which uses 6291456 kB
        current configuration leaves usable 6291228 kB

        Device size would be truncated, which would corrupt data and result in
        'access beyond end of device' errors.
        You need to either
          * use external meta data (recommended)
          * shrink that filesystem first
          * zero out the device (destroy the filesystem)
        Operation refused.

        Command 'drbdmeta /dev/drbd0 v08 /dev/xvda2 internal create-md' terminated with exit code 40
        drbdadm aborting

    As I understand it, all of my problems come from having no unallocated space on xvda2. What are my options besides shrinking the filesystem or connecting a separate physical disk? Can't the metadata be stored in a file on the local filesystem?
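    On the metadata question: DRBD expects its metadata on a block device rather than in a regular file, but that device does not have to be the replicated disk itself ("external" metadata). A sketch, assuming a small extra volume /dev/xvdb1 is attached to each Xen guest for this purpose (the device name is hypothetical):

        # Per-host section of drbd.conf with external metadata on a small dedicated device
        on cltest1 {
            device    /dev/drbd0;
            disk      /dev/xvda2;
            address   192.168.1.186:7789;
            meta-disk /dev/xvdb1[0];   # metadata slot 0 on the external device
        }

    Note that this only addresses the create-md refusal; the "Lower device is already claimed" failure remains as long as /dev/xvda2 is mounted as the root filesystem.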

    Read the article

  • Mounting Replicated Gluster Multi-AZ Storage

    - by Roman Newaza
    I have replicated Gluster storage which is used by auto-scaling servers. Both the auto-scaling group and the storage are spread across two availability zones. Gluster:

        Number of Bricks: 4 x 2 = 8
        Transport-type: tcp
        Bricks:
        Brick1: gluster01:/storage/1a  # Zone A
        Brick2: gluster02:/storage/1b  # Zone B
        Brick3: gluster03:/storage/2a  # Zone A
        Brick4: gluster04:/storage/2b  # Zone B
        Brick5: gluster01:/storage/3a  # Zone A
        Brick6: gluster02:/storage/3b  # Zone B
        Brick7: gluster03:/storage/4a  # Zone A
        Brick8: gluster04:/storage/4b  # Zone B

    I use round-robin DNS for the Gluster entry point, so the DNS name resolves to all of the storage server addresses, which are returned in a different order each time:

        # host storage.domain.com
        storage.domain.com has address xx.xx.xx.x1
        storage.domain.com has address xx.xx.xx.x2
        storage.domain.com has address xx.xx.xx.x3
        storage.domain.com has address xx.xx.xx.x4

    The storage is mounted with the native Gluster client:

        # grep storage /etc/fstab
        storage.domain.com:/storage /storage glusterfs defaults,log-level=WARNING,log-file=/var/log/gluster.log 0 0

    I have heard that Gluster might be mounted using the first server's IP, after which it fetches its configuration from the rest of the servers. Personally, I have never tested a single-server mount setup and I don't know how Gluster handles this. On EC2, traffic within a single availability zone is free and traffic between zones is not. When a client in zone A writes to storage and the IP of a storage server in zone B is returned, the data transfer costs me twice as much: Client (Zone A) - Storage Server (Zone B) - Replication to Storage Server (Zone A). Question: would it be better to mount a storage server in the same zone, so that data transfer charges apply only to replication (A - A - B)?
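    For comparison, a sketch of the zone-pinned variant being asked about, assuming gluster01 sits in the client's zone. Worth keeping in mind that with the native client the server named in fstab is typically only used to fetch the volume file; after that the client talks to the replica bricks directly, so pinning the mount changes where the volume file comes from rather than eliminating cross-zone replica traffic.

        # Sketch: pin the mount to a same-zone server (gluster01 assumed to be in the client's zone)
        gluster01:/storage  /storage  glusterfs  defaults,log-level=WARNING,log-file=/var/log/gluster.log  0 0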

    Read the article

  • How can I erase the traces of Folder Redirection from the Default Domain Policy

    - by bruor
    I've taken over from an IT outsourcer and have hit a snag now that we're starting a migration to Windows 7. Someone decided to set up folder redirection in the Default Domain Policy. I've since configured redirection in another policy at an OU level, but no matter what I do, the Windows 7 systems pick up only the Default Domain Policy folder redirection settings. I keep getting entries in the event log showing that the previously redirected folders "need to be redirected" with a status of 0x80000004, which from what I can tell just means they are being redirected locally. Is there a way I can wipe that section of the GPO clean so it's no longer there? I'm hesitant to try resetting the Default Domain Policy to complete defaults.

    UPDATE 6-26: I found that the following condition was causing the grief here. I had already implemented the new policies for clients, and for some reason XP was working great while Windows 7 refused to process them. The DDP was enforced. Because of this, and because the folder redirection policies were set to redirect back to the local profile upon removal, clients were forced to pick up its "redirect to local" settings. To recreate the issue:

        - Create a new test OU and policy.
        - Create some folder redirection settings, set to redirect back to the local profile upon removal.
        - Remove the settings from that GPO.
        - Refresh your view of the GPO and check the settings: you'll notice "not configured" entries for folder redirection.
        - Enforce this GPO.
        - Create another sub-OU.
        - Create a GPO linked to this sub-OU and configure some folder redirection settings.
        - Watch as the enforced GPO's "not configured" settings override the policy you just defined.

    As a workaround I've had to relink the DDP to all OUs that have "block inheritance" enabled, and disable the "enforced" option on the DDP. I'd love to re-enable enforcement of the DDP, but until I can erase the traces of the folder redirection settings from it, I think I'm stuck.

    Read the article

  • tomcat 5.5 startup script on Ubuntu server

    - by Registered User
    Can anyone share their Tomcat startup script? I am looking for a Tomcat startup script for an Ubuntu machine. My Ubuntu is 10.04 Server, and the Tomcat is 5.5.30, installed in /opt/apache-tomcat-5.5.31. I tried this script:

        #!/bin/bash
        #
        # tomcat
        #
        # chkconfig:
        # description: Start up the Tomcat servlet engine.

        # Source function library.
        . /lib/lsb/init-functions
        RETVAL=$?

        CATALINA_HOME="/opt/apache-tomcat-5.5.31"

        case "$1" in
          start)
                if [ -f $CATALINA_HOME/bin/startup.sh ]; then
                    echo $"Starting Tomcat"
                    /opt/apache-tomcat-5.5.31/bin/startup.sh
                fi
                ;;
          stop)
                if [ -f $CATALINA_HOME/bin/shutdown.sh ]; then
                    echo $"Stopping Tomcat"
                    /opt/apache-tomcat-5.5.31/bin/shutdown.sh
                fi
                ;;
          *)
                echo $"Usage: $0 {start|stop}"
                exit 1
                ;;
        esac

        exit $RETVAL

    but it does not work after a reboot. The same script works if I run /etc/init.d/tomcat start or /etc/init.d/tomcat stop by hand. I have run update-rc.d tomcat defaults, as it is an Ubuntu server, but on reboot none of this works.
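    One common reason a script like this works interactively but not at boot is the environment: startup.sh needs JAVA_HOME (or JRE_HOME) and a sane PATH, which a login shell provides but init does not. A sketch, assuming an OpenJDK 6 install under /usr/lib/jvm (the JVM path is an assumption; point it at whatever is actually installed), is to export it near the top of the script, and to add an LSB header so update-rc.d can order the script correctly:

        ### BEGIN INIT INFO
        # Provides:          tomcat
        # Required-Start:    $remote_fs $network
        # Required-Stop:     $remote_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Tomcat servlet engine
        ### END INIT INFO

        # Hypothetical JVM location -- adjust to the JDK actually installed on the server
        export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
        export CATALINA_HOME=/opt/apache-tomcat-5.5.31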

    Read the article

  • Google Gears - Database - VACUUM

    - by Sirber
    With this code:

        var db = google.gears.factory.create('beta.database');
        db.open('cominar');
        db.execute('CREATE TABLE IF NOT EXISTS Ajax (AJAX_ID INTEGER PRIMARY KEY AUTOINCREMENT , MODULE TEXT, FUNCTION TEXT, CONTENT_JSON TEXT);');
        db.execute('VACUUM;'); // cleans up the DB

    I'm trying to clean (VACUUM) the database at each initialisation, but I get this error:

        Uncaught Error: Database operation failed. ERROR: authorization denied DETAILS: not authorized

    The database was created by me (the same page). Thank you!

    Read the article

  • Bug in MvcFutures LinkBuilder.BuildUrlFromExpression

    - by rboarman
    I hope this is the right place to post this. The following code fails in the latest build of the MVC Futures library. This works and correctly calls my controller:

        Html.ActionLink<CreatePlayerController>(x => x.PlayerDetails(), "Click")

    This fails:

        LinkBuilder.BuildUrlFromExpression<CreatePlayerController>(null, Html.RouteCollection, x => x.PlayerDetails())

    The error is:

        System.InvalidOperationException: The IControllerFactory 'SF.Client.Website.StructureMapControllerFactory' did not return a controller for the name 'CreatePlayer'.

    My StructureMapControllerFactory.GetControllerInstance method is never called. Do I need to register my controller factory in some other way? It seems like the MvcFutures code is ignoring it.

        ControllerBuilder.Current.SetControllerFactory(new StructureMapControllerFactory());

    Thank you, Rick

    Read the article

  • HA Proxy won't load balance my web requests. What have I done wrong?

    - by Josh Smeaton
    I've finally got HAProxy set up and running the way I think I want, however it is not load balancing the web requests it receives: all requests are currently being forwarded to the first server in the cluster. I'm going to paste my configuration below; if anyone can see where I may have gone wrong, I'd appreciate it. This is my first stab at configuring web servers in a *nix environment. First up, HAProxy runs on the same host as the first server in the Apache cluster. We are moving these servers to virtual machines later on, and they will have different virtual hosts, but I wanted to get this running now. Both web servers are receiving their health checks and reporting back correctly, and the haproxy?stats page correctly reports servers that are up and down (I tested this by altering the name of the file that is checked). I haven't put any load onto these servers yet; I've just opened the URLs in several tabs (private browsing) and had several co-workers hit the URL too. All of the traffic goes to WEB1. Am I balancing incorrectly?

        global
            maxconn 10000
            nbproc 8
            pidfile /var/run/haproxy.pid
            log 127.0.0.1 local0 debug
            daemon

        defaults
            log global
            mode http
            retries 3
            option redispatch
            maxconn 5000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen WEBHAEXT :80,:8443
            mode http
            cookie sessionbalance insert indirect nocache
            balance roundrobin
            option httpclose
            option forwardfor except 127.0.0.1
            option httpchk HEAD health_check.txt
            stats enable
            stats auth rah:rah
            server WEB1 10.90.2.131:81 cookie WEB_1 check
            server WEB2 10.90.2.130:80 cookie WEB_2 check
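    One thing the config above does by design: cookie sessionbalance insert indirect pins every browser that already holds the persistence cookie to the server that issued it, so re-testing from the same machines (or from tabs sharing a session) keeps landing on WEB1 even though balance roundrobin applies to cookie-less requests. A quick way to separate persistence from broken balancing, sketched with curl against a placeholder host name, is to send repeated requests that never return the cookie and watch which backend cookie is set:

        # Sketch: cookie-less requests should alternate between WEB_1 and WEB_2 if
        # round-robin is working; replace your-haproxy-host with the real address.
        for i in 1 2 3 4; do
            curl -s -D - -o /dev/null http://your-haproxy-host/ | grep -i '^Set-Cookie: sessionbalance'
        done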

    Read the article

  • Fedora, ssh and sudo

    - by Ricky Robinson
    I have to run a script remotely on several Fedora machines through ssh. Since the script requires root privileges, I do (just a simple example):

        $ ssh me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    The remote machines are configured so that sudo never asks for a password. For the above error, the most common fix is to allocate a pseudo-terminal with the -t option in ssh:

        $ ssh -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Let's try to force the allocation with -t -t:

        $ ssh -t -t me@remote_host "sudo touch test_sudo"
        sudo: no tty present and no askpass program specified

    Nope, it doesn't work. In /etc/sudoers, of course, I have this line:

        #Defaults    requiretty

    ... but I can't manually change it on tens of remote machines. Am I missing something here? Is there an easy fix?

    EDIT: Here is the sudoers file of a host where ssh me@host "sudo stat ." works, and here is the sudoers file of a host where it doesn't work.

    EDIT 2: Running tty on a host where it works:

        $ ssh me@host_ok tty
        not a tty
        $ ssh -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.
        $ ssh -t -t me@host_ok tty
        /dev/pts/12
        Connection to host_ok closed.

    Now on a host where it doesn't work:

        $ ssh me@host_ko tty
        not a tty
        $ ssh -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
        $ ssh -t -t me@host_ko tty
        not a tty
        Connection to host_ko closed.
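    A small diagnostic sketch for the difference shown in EDIT 2, offered only as one possible explanation: if the failing host's authorized_keys entry carries a "no-pty" option, sshd will refuse the pseudo-terminal even with -t -t, which in turn trips sudo's requiretty check. The host names below are the placeholders from the question.

        # Compare the two hosts: look for a no-pty option on the key, and see whether a tty is granted
        ssh me@host_ok 'grep -o no-pty ~/.ssh/authorized_keys; tty'
        ssh -t -t me@host_ko 'grep -o no-pty ~/.ssh/authorized_keys; tty'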

    Read the article

  • XCOPY-deploying Microsoft Sync Framework in a no admin rights scenario (i.e. ClickOnce install)

    - by Mike Bouck
    I'm currently designing a smart client app (WPF) which needs to operate in an "occasionally disconnected" mode. For the offline scenario, I'm looking at using the Disconnected Service Agent Application Block (from the Smart Client Software Factory) and the Microsoft Sync Framework. I should mention that I want my smart client app to be XCOPY-deployable, auto-updating, and installable without administrative privileges -- basically a ClickOnce-deployed app. From what I can tell, this rules out the Microsoft Sync Framework, because its implementation includes COM components that need to be registered on the client, which requires admin rights. Is it possible to XCOPY deploy and run MSF from a ClickOnce app? Any other ideas for data synchronization?

    Read the article

  • Hibernate exception

    - by Mark
    Hi all, I'm new to Hibernate. I followed the NetBeans tutorial on creating a Hibernate-enabled application. After successfully creating a database in MySQL Workbench, I reverse-engineered the POJOs etc. and then tried to run a simple query (from Course) and got the following:

        org.hibernate.MappingException: An association from the table coursemodule refers to an unmapped class: DAL.Module
            at org.hibernate.cfg.Configuration.secondPassCompileForeignKeys(Configuration.java:1252)
            at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1170)
            at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:324)
            at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1286)
            at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:859)

    Here's the generated class for Course:

        package DAL;
        // Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA

        import java.util.HashSet;
        import java.util.Set;

        /**
         * Course generated by hbm2java
         */
        public class Course implements java.io.Serializable {

            private int id;
            private String name;
            private Set<Module> modules = new HashSet<Module>(0);

            public Course() {
            }

            public Course(int id, String name) {
                this.id = id;
                this.name = name;
            }

            public Course(int id, String name, Set<Module> modules) {
                this.id = id;
                this.name = name;
                this.modules = modules;
            }

            public int getId() {
                return this.id;
            }

            public void setId(int id) {
                this.id = id;
            }

            public String getName() {
                return this.name;
            }

            public void setName(String name) {
                this.name = name;
            }

            public Set<Module> getModules() {
                return this.modules;
            }

            public void setModules(Set<Module> modules) {
                this.modules = modules;
            }
        }

    and its mapping file, course.hbm.xml:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA -->
        <hibernate-mapping>
            <class name="DAL.Course" table="course" catalog="walkthrough">
                <id name="id" type="int">
                    <column name="id" />
                    <generator class="assigned" />
                </id>
                <property name="name" type="string">
                    <column name="name" not-null="true" />
                </property>
                <set name="modules" inverse="false" table="coursemodule">
                    <key>
                        <column name="courseId" not-null="true" unique="true" />
                    </key>
                    <many-to-many entity-name="DAL.Module">
                        <column name="moduleId" not-null="true" unique="true" />
                    </many-to-many>
                </set>
            </class>
        </hibernate-mapping>

    hibernate.reveng.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-reverse-engineering PUBLIC "-//Hibernate/Hibernate Reverse Engineering DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-reverse-engineering-3.0.dtd">
        <hibernate-reverse-engineering>
            <schema-selection match-catalog="Walkthrough"/>
            <table-filter match-name="walkthrough"/>
            <table-filter match-name="course"/>
            <table-filter match-name="module"/>
            <table-filter match-name="studentmodule"/>
            <table-filter match-name="attendee"/>
            <table-filter match-name="student"/>
            <table-filter match-name="coursemodule"/>
            <table-filter match-name="session"/>
            <table-filter match-name="test"/>
        </hibernate-reverse-engineering>

    hibernate.cfg.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
        <hibernate-configuration>
            <session-factory>
                <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
                <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
                <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/Walkthrough</property>
                <property name="hibernate.connection.username">root</property>
                <property name="hibernate.connection.password">password</property>
                <property name="hibernate.show_sql">true</property>
                <property name="hibernate.current_session_context_class">thread</property>
                <mapping resource="DAL/Student.hbm.xml"/>
                <mapping resource="DAL/Walkthrough.hbm.xml"/>
                <mapping resource="DAL/Test.hbm.xml"/>
                <mapping resource="DAL/Module.hbm.xml"/>
                <mapping resource="DAL/Session.hbm.xml"/>
                <mapping resource="DAL/Course.hbm.xml"/>
            </session-factory>
        </hibernate-configuration>

    Any ideas on why I'm getting this exception? P.S. test is just a table with an id in it and is not related to anything; running "from Test" works.

    Read the article

  • Port forwarding to put my web server on the Internet

    - by Chadworthington
    I went to http://canyouseeme.org/ to check my external IP address. Regardless of what port I enter, it tells me the port is blocked. I have a Linksys router with basically the default settings, except that I have WEP encryption set up and I have forwarded a few ports, including 80 and 69. I forwarded them to the 192.x.x.103 IP address of the PC which is running IIS. That PC runs Symantec Endpoint Protection, which I right-clicked in the tray to disable. These steps used to make my PC visible so I could host my own web site in IIS on port 80, or some other port like 69. Yet the open-port tool cannot see my IP when it checks either port, and when I navigate to http://my external ip/ I get "page can't be displayed". At first I was thinking that maybe Comcast is blocking port 80, but 69 doesn't work either. I do not see any other blocking set up in my router and, as I mentioned, I went with the defaults except where discussed. This is a corporate PC and Symantec Endpoint Protection is new to it (this previously worked on the same PC with Symantec Protection Agent), but I thought that disabling it from the tray would effectively neutralize it. I do not have the rights to kill the program itself. Any suggestions on what else to try to make my PC externally visible?
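    Before blaming the router or the ISP, it can help to confirm that IIS is actually reachable from inside the LAN with the endpoint protection in its current state; a sketch using standard Windows tools (the LAN address is the one from the question, and the telnet client may need to be enabled in Windows Features):

        :: Run on the IIS machine: confirm something is listening on port 80
        netstat -ano | findstr :80

        :: Run from another PC on the LAN: if this fails, the block is on the PC
        :: (local firewall / endpoint protection), not on the router or the ISP
        telnet 192.x.x.103 80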

    Read the article

  • WCF web service stops working after upgrade to .NET Framework 3.5 SP1

    - by Victor
    I have a WCF web service on one of my testing servers. Everything worked fine until I upgraded from .NET Framework 3.5 to 3.5 SP1; the WCF web service stopped working and returns the error:

        Failed to invoke the service. The service may be offline or inaccessible. Refer to the stack trace for details.
        The remote server returned an unexpected response: (502) Proxy Error ( The specified network name is no longer available. ).

        Server stack trace:
            at System.ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException)
            at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
            at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
            at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
            at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
            at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
            at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
            at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

    Does anyone know what is going on here?
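    The "(502) Proxy Error" in the trace means the reply the client saw came from an HTTP proxy sitting between it and the service, not from WCF itself. As a test only, a sketch of a client-side config change that bypasses the default proxy; if the 502 disappears, the problem lies between the proxy and the service rather than in the SP1 upgrade:

        <!-- Sketch: client app.config/web.config, for diagnosis only -->
        <configuration>
          <system.net>
            <defaultProxy enabled="false" />
          </system.net>
        </configuration>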

    Read the article

  • "java.lang.ClassNotFoundException: javax.ejb.EJBObject" when running JAR

    - by Bernhard V
    Hi, I'm getting a java.lang.ClassNotFoundException: javax.ejb.EJBObject error when I run my application as a JAR file. When running it in Eclipse everything works fine: the application properly reaches the main class and the main method. But when it tries to load the application context, it cannot resolve a reference to an EJB bean, and I get the following error:

        Error creating bean with name 'bc' defined in class path resource [blabla.xml]: Initialization of bean failed;
        nested exception is java.lang.NoClassDefFoundError: javax/ejb/EJBObject
            at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:275)
            ...
        Caused by: java.lang.ClassNotFoundException: javax.ejb.EJBObject

    I've included all runtime-scoped dependencies with Maven in the JAR file. Do you have any idea what causes this error?
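    The error means the EJB API classes are simply not on the classpath of the packaged JAR (Eclipse was presumably supplying them from a server or project library). One way to pull them in, sketched with a commonly used spec artifact -- the coordinates are an assumption, and any EJB 3.0 API jar present at runtime would do just as well:

        <!-- Hypothetical coordinates for an EJB 3.0 API jar; adjust to the artifact you actually use -->
        <dependency>
            <groupId>org.apache.geronimo.specs</groupId>
            <artifactId>geronimo-ejb_3.0_spec</artifactId>
            <version>1.0.1</version>
        </dependency>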

    Read the article

  • Ubuntu 12.04 glusterfs volume failed to mount at boot time

    - by user183394
    I have just set up 7 KVM guests, all running Ubuntu 12.04 LTS 64-bit minimal server, to test glusterfs 3.2.5 from the official Ubuntu repo. Two of them form a mirrored pair (i.e. replica 2), and five of them are clients. I am still new to this file system and would like to gain some hands-on experience. The setup was mostly uneventful, until I put the following into each glusterfs client's /etc/fstab:

        192.168.122.120:/testvol /var/local/testvol glusterfs defaults,_netdev 0 0

    where 192.168.122.120 is the IP address of the first glusterfs server. If I run either a manual mountall or a mount.glusterfs 192.168.122.120:/testvol /var/local/testvol on the CLI, mount shows that the volume is successfully imported. But once a client is rebooted, the volume is not mounted when it comes back up! I searched the Internet and found this article, but since I am not running both client and server on the same node, IMHO it's not strictly applicable. So, as a kludgy workaround, I put sleep 3 && mount.glusterfs 192.168.122.120:/testvol /var/local/testvol into each client node's /etc/rc.local. That seems to get the volume mounted on each node, as far as I can tell, but it's quite ugly, and I would appreciate a hint as to how to resolve this glusterfs-not-mounting-at-boot issue correctly. Note that I used the IP address of the first glusterfs server although /etc/hosts on all nodes has been populated with the hostnames; I figured that using the IP address is more robust. --Zack
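    A slightly cleaner variant of the rc.local kludge on 12.04, offered only as a sketch (the job name is made up): an Upstart task that retries the existing fstab entry once the network interface is actually up, so the ordering is handled by init rather than by a sleep:

        # /etc/init/mount-glusterfs.conf  (hypothetical job name)
        description "Mount the GlusterFS volume once the network is up"
        start on (local-filesystems and net-device-up IFACE!=lo)
        task
        exec mount /var/local/testvol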

    Read the article

  • abstract class NumberFormat - very confused about getInstance()

    - by Alex
    Hi, I'm new to Java and I have a beginner question: NumberFormat is an abstract class, so I assume I can't make an instance of it. But there is a public static (factory?) method getInstance() that allows me to do:

        NumberFormat nf = NumberFormat.getInstance();

    I'm quite confused. I'd be glad if someone could give me hints on:

        1) If there is a public method to get an instance of this abstract class, why don't we also have a constructor?
        2) This is an abstract class; how can we have this static method giving us an instance of the class?
        3) Why choose such a design? If I assume it's possible to have an instance of an abstract class (???), I don't get why this class should be abstract at all.

    Thank you.
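    A sketch that makes the point concrete: getInstance() does not instantiate NumberFormat itself; it returns one of its concrete subclasses (typically DecimalFormat), which is exactly what the factory-method pattern is for -- callers program against the abstract type while the library chooses the implementation.

        import java.text.NumberFormat;

        public class FactoryDemo {
            public static void main(String[] args) {
                NumberFormat nf = NumberFormat.getInstance();
                // Prints the concrete subclass actually returned, e.g. java.text.DecimalFormat
                System.out.println(nf.getClass().getName());
                System.out.println(nf.format(1234567.891));
            }
        }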

    Read the article

  • Calling generic method in spring.net application context

    - by Bert Vandamme
    Hi, I'm trying to invoke this generic method in Spring.NET, but I'm having trouble getting the configuration right. The method:

        public void AddRepository<TEntity>(IRepository<TEntity> repository) where TEntity : IEntity
        {
            Repositories.Add(repository.GetType().Name, repository);
        }

    The config:

        <object type="Spring.Objects.Factory.Config.MethodInvokingFactoryObject, Spring.Core">
            <property name="TargetObject">
                <ref local="RepositoryFactory" />
            </property>
            <property name="TargetMethod" value="AddRepository"/>
            <property name="Arguments">
                <list>
                    <ref object="BinaryAssetFileRepository"/>
                </list>
            </property>
        </object>

    Is it possible to address generic methods in this way? Thanks, Bert

    Read the article

  • Wireless router blocking some sites while using ethernet is fine

    - by Micke
    I'm using Windows 7 and my router is a wireless Apple Airport Express that is approximately two years old. Suddenly I can't access some sites (for example www.sthlm.friskissvettis.se or www.vegetarian-shoes.co.uk, some streamed TV shows on svtplay.se, and a number of other random sites) when connecting to the Internet through this router. It worked well until recently, and I'm fairly sure the problem appeared when my ISP upgraded the connection from 10/10 Mbit to 100/10 Mbit. Most other sites, like Facebook and Google, work fine. When using my network cable to connect to the Internet, everything works and I can access these sites. The firmware is current and I've tried resetting the router to factory defaults. I've tried different browsers, and I can't ping the "blocked" sites either. tracert www.sthlm.friskissvettis.se starts with 10.0.0.1 and continues through a number of hops until it times out; the last working hop before the timeout was sth-tcy-ipcore01-ge-0-2-0.neq.dgcsystems.net [83.241.252.13], if it matters. tracert www.vegetarian-shoes.co.uk also eventually times out. With the network cable plugged in, I still get a timeout on tracert www.sthlm.friskissvettis.se even though I can access the site in Chrome (weird), while www.vegetarian-shoes.co.uk does not time out on tracert and I can access the site as usual. I've tried changing DNS servers to the OpenDNS servers instead, to no avail. I've also tried pinging these two sites with a lower MTU packet size (with this method: http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-or-windows-7/), but I still can't reach them with ping. I don't know what to do anymore. Any suggestions?
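    Since the symptoms (some sites fine, others hanging, starting right after the WAN speed change) often point at an MTU/fragmentation issue somewhere behind the Airport Express, one quick check from a Windows 7 client is to find the largest payload that gets through unfragmented, sketched against a site that does respond:

        :: 1472 = 1500 minus 28 bytes of IP/ICMP headers; step the size down until the ping succeeds
        ping -f -l 1472 www.google.com
        ping -f -l 1400 www.google.com
        :: If only sizes well below 1472 work over Wi-Fi while 1472 works over Ethernet,
        :: lowering the MTU on the router/WAN side is the usual next step.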

    Read the article

  • HAProxy - HTTP and SSL pass-through config

    - by Bill
    I've currently got an HaProxy LB solution in place and everything is working fine however we are having an issue with a very few clients who cannot get to our site via HTTPS (SSL) they can browse our site in Http but as soon as they click on an absolute HTTPS link they are taken to our home page instead. Wondering if anyone can look at our config below and see if there's something awry. I believe we are on HaProxy 1.2.17 global log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 6144 #debug #quiet user haproxy group haproxy defaults log global mode http option httplog option dontlognull retries 3 redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 stats auth # admin password stats uri /monitor listen webfarm # bind :80,:443 bind :443 mode tcp balance source #cookie SERVERID insert indirect #option httpclose #option forwardfor #option httpchk HEAD /check.cfm HTTP/1.0 server webA 111.10.10.1 #server webB 111.10.10.2 server webB 111.10.10.3 server webC 111.10.10.4 listen webfarmhttp :80 mode http balance source # option httpclose option forwardfor # option httpchk HEAD /check.cfm HTTP/1.0 option httpchk /check.cfm server webA 111.10.10.1 #server webB 111.10.10.2 server webB 111.10.10.3 server webC 111.10.10.4 listen monitor :8443 mode http balance roundrobin #cookie SERVERID insert indirect option httpclose option forwardfor #option httpchk HEAD /check.txt HTTP/1.0 #option httpchk HEAD /check.cfm HTTP/1.0 server webA 111.10.10.1 server webB 111.10.10.2

    Read the article
