Search Results

Search found 9223 results on 369 pages for 'cloud office'.

  • Networking through OpenStack installed on a VM

    - by Mandar Katdare
    I am trying to set up a test installation of OpenStack on an Ubuntu 12.04 VM running on an ESXi server. So far I have been able to launch the VMs on the ESXi host, but I am unable to assign IP addresses to them. As the VM with the OpenStack installation has a single public IP, I wish to assign IPs to the VMs created through OpenStack so that they can interact with the public network directly, without a separate private network. So I feel that bridging would not be the correct option here, but I am unable to find the right documentation for such an install. My ifconfig looks as follows:

        eth0      Link encap:Ethernet  HWaddr 00:0c:29:6f:8a:d7
                  inet addr:192.168.4.167  Bcast:192.168.4.255  Mask:255.255.255.0
                  inet6 addr: fe80::20c:29ff:fe6f:8ad7/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:391640 errors:33 dropped:98 overruns:0 frame:0
                  TX packets:545044 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:40303931 (40.3 MB)  TX bytes:763127348 (763.1 MB)
                  Interrupt:18 Base address:0x2000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:146127 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:146127 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:799815763 (799.8 MB)  TX bytes:799815763 (799.8 MB)

        virbr0    Link encap:Ethernet  HWaddr 8a:80:33:32:63:a0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth0 is the adapter that I intend to use for all communication. My nova.conf looks as follows:

        --dhcpbridge_flagfile=/etc/nova/nova.conf
        --dhcpbridge=/usr/bin/nova-dhcpbridge
        --logdir=/var/log/nova
        --state_path=/var/lib/nova
        --lock_path=/var/lock/nova
        --allow_admin_api=true
        --use_deprecated_auth=false
        --auth_strategy=keystone
        --scheduler_driver=nova.scheduler.simple.SimpleScheduler
        --s3_host=192.168.4.167
        --ec2_host=192.168.4.167
        --rabbit_host=192.168.4.167
        --cc_host=192.168.4.167
        --nova_url=http://192.168.4.167:8774/v1.1/
        --routing_source_ip=192.168.4.167
        --glance_api_servers=192.168.4.167:9292
        --image_service=nova.image.glance.GlanceImageService
        --iscsi_ip_prefix=192.168.4
        --sql_connection=mysql://novadbadmin:[email protected]/nova
        --ec2_url=http://192.168.4.167:8773/services/Cloud
        --keystone_ec2_url=http://192.168.4.167:5000/v2.0/ec2tokens
        --api_paste_config=/etc/nova/api-paste.ini
        --libvirt_type=kvm
        --libvirt_use_virtio_for_bridges=true
        --start_guests_on_host_boot=true
        --resume_guests_state_on_host_boot=true
        --vnc_enabled=true
        --vncproxy_url=http://192.168.4.167:6080
        --vnc_console_proxy_url=http://192.168.4.167:6080
        # network specific settings
        --network_manager=nova.network.manager.FlatDHCPManager
        --public_interface=eth0
        --vmwareapi_host_ip=192.168.4.254
        --vmwareapi_host_username=****
        --vmwareapi_host_password=****
        --vmwareapi_wsdl_loc=http://127.0.0.1:8080/wsdl/vim25/vimService.wsdl
        --fixed_range=192.168.4.190/24
        --floating_range=192.168.4.190/24
        --network_size=32
        --flat_network_dhcp_start=192.168.4.190
        --flat_injected=False
        --force_dhcp_release
        --iscsi_helper=tgtadm
        --connection_type=vmwareapi
        --root_helper=sudo nova-rootwrap
        --verbose
        --libvirt_use_virtio_for_bridges
        --ec2_private_dns_show
        --novnc_enabled=true
        --novncproxy_base_url=http://192.168.4.167:6080/vnc_auto.html
        --vncserver_proxyclient_address=192.168.4.167
        --vncserver_listen=192.168.4.167

    192.168.4.167 is my VM with the OpenStack installation and 192.168.4.254 is the ESXi server on which the VM runs. Can anyone advise me on how to proceed? Thanks, Mandar
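
    For reference, a flat network is the usual way to put guests straight onto the public subnet, and FlatDHCPManager still creates a bridge on top of eth0, so a bridge as such is not the thing to avoid here. Below is a minimal sketch of just the network flags, assuming the Essex-era flag syntax used above; the bridge name br100 and both ranges are illustrative (not taken from a working install) and must be carved out of 192.168.4.0/24 so they do not collide with existing hosts:

        # sketch only: flat DHCP with guests bridged onto the public NIC
        --network_manager=nova.network.manager.FlatDHCPManager
        --flat_interface=eth0
        --flat_network_bridge=br100
        --public_interface=br100
        # a proper sub-range rather than a /24 starting at .190
        --fixed_range=192.168.4.128/26
        --network_size=64

    The original --fixed_range=192.168.4.190/24 is also worth double-checking: with host bits set, it is ambiguous what address pool nova-network will build from it.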

  • csync2 ERROR: Connection to remote host failed

    - by Emil Salama
    I was unable to find any articles to answer this question, so my best bet was to post it here. Scenario: we have 2x application servers in production hosting a PHP website, and I would like some folders to be synchronized between the two. The same thing was set up for the development environment with no issues. I've followed all the instructions from http://www.cloudedify.com/synchronising-files-in-cloud-with-csync2/ but still get the same result. The firewall has been disabled on both boxes for troubleshooting purposes.

    Config files:

    csync2.cfg:

        nossl * *;
        group production
        {
            host server1;
            host server2;
            key /etc/csync-production-group.key;
            include /etc/httpd/sites-available;
            include /xxxxxx/public_html/files
            include /xxxxxxx/magento/media/catalog/product
            include /xxxxxxx/magento/media/brands
            exclude *.log;
            exclude /xxxx/public_html/file/cache;
            exclude /xxxxx/public_html/magento/var/cache;
            exclude /xxxx/public_html/logs;
            exclude /xxxxx/public_html/magento/var/log;
            backup-directory /data/sync-conflicts/;
            backup-generations 2;
            auto younger;
        }

    /etc/xinetd.d/csync2:

        service csync2
        {
            disable         = no
            flags           = REUSE
            socket_type     = stream
            wait            = no
            user            = root
            group           = root
            server          = /usr/sbin/csync2
            server_args     = -i -D /data/sync-db/
            port            = 30865
            type            = UNLISTED
            log_type        = FILE /data/logs/csync2/csync2-xinetd.log
            log_on_failure  += USERID
        }

    I've made sure that the daemon is listening on both servers on port 30865 and that the keys match on both servers. I ran a tcpdump on each server; output as follows:

        12:20:31.366771 IP server1.49919 > server2.csync2: Flags [S], seq 445156159, win 14600, options [mss 1460,sackOK,TS val 794864936 ecr 0,nop,wscale 7], length 0
        12:20:31.366810 IP server2.csync2 > server1.49919: Flags [S.], seq 450593575, ack 445156160, win 14480, options [mss 1460,sackOK,TS val 794798911 ecr 794864936,nop,wscale 7], length 0
        12:20:31.367101 IP server1.49919 > server2.csync2: Flags [.], ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 0
        12:20:31.367138 IP server1.49919 > server2.csync2: Flags [P.], seq 1:9, ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 8
        12:20:31.367147 IP server2.csync2 > server1.49919: Flags [.], ack 9, win 114, options [nop,nop,TS val 794798912 ecr 794864937], length 0
        12:20:31.368625 IP server2.csync2 > server1.49919: Flags [R.], seq 1, ack 9, win 114, options [nop,nop,TS val 794798913 ecr 794864937], length 0

    Is there anything else I'm missing or should be doing?
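
    Two quick checks are worth running from server1 before digging further; this is a sketch only, assuming csync2 is on the PATH and the peer names resolve as in the config. The tcpdump above shows the connection being reset immediately after an 8-byte hello, which is the classic signature of the two sides disagreeing about SSL or the group key, so confirm the nossl line and the key file are byte-identical on both hosts:

        telnet server2 30865   # the daemon should hold the session open; an instant
                               # reset suggests an SSL/key mismatch, not a network problem
        csync2 -T              # test mode: list what csync2 believes is out of sync
        csync2 -xvvv           # one verbose sync run, with errors printed as they happen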

  • JBoss AS 5: starts but can't connect (Windows, remote)

    - by Nuwan
    Hello, I installed JBoss 5.0.0.GA and it works fine on localhost, but I want to access it from a remote machine, so I bound my IP address to the server and started it. This is the command I used:

        run.bat -b 10.17.62.63

    The server then starts. This is the console log from startup:

        ===============================================================================

          JBoss Bootstrap Environment

          JBOSS_HOME: C:\jboss-5.0.0.GA

          JAVA: C:\Program Files\Java\jdk1.6.0_34\bin\java

          JAVA_OPTS: -Dprogram.name=run.bat -server -Xms128m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000

          CLASSPATH: C:\jboss-5.0.0.GA\bin\run.jar

        ===============================================================================

        run.bat: unused non-option argument: ûb
        run.bat: unused non-option argument: 0.0.0.0
        13:43:38,179 INFO  [ServerImpl] Starting JBoss (Microcontainer)...
        13:43:38,179 INFO  [ServerImpl] Release ID: JBoss [Morpheus] 5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812041714)
        13:43:38,179 INFO  [ServerImpl] Bootstrap URL: null
        13:43:38,179 INFO  [ServerImpl] Home Dir: C:\jboss-5.0.0.GA
        13:43:38,179 INFO  [ServerImpl] Home URL: file:/C:/jboss-5.0.0.GA/
        13:43:38,195 INFO  [ServerImpl] Library URL: file:/C:/jboss-5.0.0.GA/lib/
        13:43:38,195 INFO  [ServerImpl] Patch URL: null
        13:43:38,195 INFO  [ServerImpl] Common Base URL: file:/C:/jboss-5.0.0.GA/common/
        13:43:38,195 INFO  [ServerImpl] Common Library URL: file:/C:/jboss-5.0.0.GA/common/lib/
        13:43:38,195 INFO  [ServerImpl] Server Name: default
        13:43:38,195 INFO  [ServerImpl] Server Base Dir: C:\jboss-5.0.0.GA\server
        13:43:38,195 INFO  [ServerImpl] Server Base URL: file:/C:/jboss-5.0.0.GA/server/
        13:43:38,210 INFO  [ServerImpl] Server Config URL: file:/C:/jboss-5.0.0.GA/server/default/conf/
        13:43:38,210 INFO  [ServerImpl] Server Home Dir: C:\jboss-5.0.0.GA\server\default
        13:43:38,210 INFO  [ServerImpl] Server Home URL: file:/C:/jboss-5.0.0.GA/server/default/
        13:43:38,210 INFO  [ServerImpl] Server Data Dir: C:\jboss-5.0.0.GA\server\default\data
        13:43:38,210 INFO  [ServerImpl] Server Library URL: file:/C:/jboss-5.0.0.GA/server/default/lib/
        13:43:38,210 INFO  [ServerImpl] Server Log Dir: C:\jboss-5.0.0.GA\server\default\log
        13:43:38,210 INFO  [ServerImpl] Server Native Dir: C:\jboss-5.0.0.GA\server\default\tmp\native
        13:43:38,210 INFO  [ServerImpl] Server Temp Dir: C:\jboss-5.0.0.GA\server\default\tmp
        13:43:38,210 INFO  [ServerImpl] Server Temp Deploy Dir: C:\jboss-5.0.0.GA\server\default\tmp\deploy
        13:43:39,710 INFO  [ServerImpl] Starting Microcontainer, bootstrapURL=file:/C:/jboss-5.0.0.GA/server/default/conf/bootstrap.xml
        13:43:40,851 INFO  [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.IterableTimedVFSCache]
        13:43:40,866 INFO  [VFSCacheFactory] Using VFSCache [IterableTimedVFSCache{lifetime=1800, resolution=60}]
        13:43:41,616 INFO  [CopyMechanism] VFS temp dir: C:\jboss-5.0.0.GA\server\default\tmp
        13:43:41,648 INFO  [ZipEntryContext] VFS force nested jars copy-mode is enabled.
        13:43:44,288 INFO  [ServerInfo] Java version: 1.6.0_34,Sun Microsystems Inc.
        13:43:44,288 INFO  [ServerInfo] Java VM: Java HotSpot(TM) Server VM 20.9-b04,Sun Microsystems Inc.
        13:43:44,288 INFO  [ServerInfo] OS-System: Windows XP 5.1,x86
        13:43:44,569 INFO  [JMXKernel] Legacy JMX core initialized
        13:43:50,148 INFO  [ProfileServiceImpl] Loading profile: default from: org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e72f0c(root=C:\jboss-5.0.0.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
        13:43:50,148 INFO  [ProfileImpl] Using repository:org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@e72f0c(root=C:\jboss-5.0.0.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
        13:43:50,148 INFO  [ProfileServiceImpl] Loaded profile: ProfileImpl@8b3bb3{key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]}
        13:43:54,804 INFO  [WebService] Using RMI server codebase: http://127.0.0.1:8083/
        13:44:12,147 INFO  [CXFServerConfig] JBoss Web Services - Stack CXF Runtime Server
        13:44:12,147 INFO  [CXFServerConfig] 3.1.2.GA
        13:44:29,788 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
        13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
        13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
        13:44:29,819 INFO  [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@29776073{vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/myE-ejb.jar}
        13:44:37,116 INFO  [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:1090/jmxconnector
        13:44:38,022 INFO  [MailService] Mail Service bound to java:/Mail
        13:44:43,162 WARN  [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this.
        13:44:43,209 WARN  [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
        13:44:43,600 INFO  [TransactionManagerService] JBossTS Transaction Service (JTA version) - JBoss Inc.
        13:44:43,600 INFO  [TransactionManagerService] Setting up property manager MBean and JMX layer
        13:44:44,366 INFO  [TransactionManagerService] Initializing recovery manager
        13:44:44,678 INFO  [TransactionManagerService] Recovery manager configured
        13:44:44,678 INFO  [TransactionManagerService] Binding TransactionManager JNDI Reference
        13:44:44,787 INFO  [TransactionManagerService] Starting transaction recovery manager
        13:44:46,428 INFO  [Http11Protocol] Initializing Coyote HTTP/1.1 on http-127.0.0.1-8080
        13:44:46,459 INFO  [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-127.0.0.1-8009
        13:44:46,459 INFO  [StandardService] Starting service jboss.web
        13:44:46,475 INFO  [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.1.GA
        13:44:46,616 INFO  [Catalina] Server startup in 350 ms
        13:44:46,709 INFO  [TomcatDeployment] deploy, ctxPath=/web-console, vfsUrl=management/console-mgr.sar/web-console.war
        13:44:48,553 INFO  [TomcatDeployment] deploy, ctxPath=/juddi, vfsUrl=juddi-service.sar/juddi.war
        13:44:48,678 INFO  [RegistryServlet] Loading jUDDI configuration.
        13:44:48,694 INFO  [RegistryServlet] Resources loaded from: /WEB-INF/juddi.properties
        13:44:48,709 INFO  [RegistryServlet] Initializing jUDDI components.
        13:44:48,991 INFO  [TomcatDeployment] deploy, ctxPath=/invoker, vfsUrl=http-invoker.sar/invoker.war
        13:44:49,162 INFO  [TomcatDeployment] deploy, ctxPath=/jbossws, vfsUrl=jbossws.sar/jbossws-management.war
        13:44:49,475 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
        13:44:49,569 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
        13:44:49,741 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml
        13:44:49,819 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml
        13:44:49,912 INFO  [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.0.0.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
        13:44:50,069 INFO  [SimpleThreadPool] Job execution threads will use class loader of thread: main
        13:44:50,115 INFO  [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
        13:44:50,131 INFO  [RAMJobStore] RAMJobStore initialized.
        13:44:50,131 INFO  [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
        13:44:50,131 INFO  [StdSchedulerFactory] Quartz scheduler version: 1.5.2
        13:44:50,131 INFO  [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
        13:44:51,194 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
        13:44:51,819 WARN  [QuartzTimerServiceFactory] sql failed: CREATE TABLE QRTZ_JOB_DETAILS(JOB_NAME VARCHAR(80) NOT NULL, JOB_GROUP VARCHAR(80) NOT NULL, DESCRIPTION VARCHAR(120) NULL, JOB_CLASS_NAME VARCHAR(128) NOT NULL, IS_DURABLE VARCHAR(1) NOT NULL, IS_VOLATILE VARCHAR(1) NOT NULL, IS_STATEFUL VARCHAR(1) NOT NULL, REQUESTS_RECOVERY VARCHAR(1) NOT NULL, JOB_DATA BINARY NULL, PRIMARY KEY (JOB_NAME,JOB_GROUP))
        13:44:51,912 INFO  [SimpleThreadPool] Job execution threads will use class loader of thread: main
        13:44:51,928 INFO  [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
        13:44:51,928 INFO  [JobStoreCMT] Using db table-based data access locking (synchronization).
        13:44:51,944 INFO  [JobStoreCMT] Removed 0 Volatile Trigger(s).
        13:44:51,944 INFO  [JobStoreCMT] Removed 0 Volatile Job(s).
        13:44:51,944 INFO  [JobStoreCMT] JobStoreCMT initialized.
        13:44:51,944 INFO  [StdSchedulerFactory] Quartz scheduler 'JBossEJB3QuartzScheduler' initialized from an externally provided properties instance.
        13:44:51,959 INFO  [StdSchedulerFactory] Quartz scheduler version: 1.5.2
        13:44:51,959 INFO  [JobStoreCMT] Freed 0 triggers from 'acquired' / 'blocked' state.
        13:44:51,975 INFO  [JobStoreCMT] Recovering 0 jobs that were in-progress at the time of the last shut-down.
        13:44:51,975 INFO  [JobStoreCMT] Recovery complete.
        13:44:51,975 INFO  [JobStoreCMT] Removed 0 'complete' triggers.
        13:44:51,975 INFO  [JobStoreCMT] Removed 0 stale fired job entries.
        13:44:51,990 INFO  [QuartzScheduler] Scheduler JBossEJB3QuartzScheduler_$_NON_CLUSTERED started.
        13:44:52,381 INFO  [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started
        13:44:52,569 INFO  [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000
        13:44:52,584 INFO  [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
        13:44:52,709 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
        13:44:52,709 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@1a8ac5e started
        13:44:52,725 WARN  [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover
        13:44:52,725 WARN  [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support load balancing
        13:44:52,740 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
        13:44:52,740 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@1d43178 started
        13:44:52,740 INFO  [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
        13:44:52,756 INFO  [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@52728a started
        13:44:53,084 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
        13:44:53,225 INFO  [TomcatDeployment] deploy, ctxPath=/, vfsUrl=ROOT.war
        13:44:53,553 INFO  [TomcatDeployment] deploy, ctxPath=/jmx-console, vfsUrl=jmx-console.war
        13:44:53,975 INFO  [TomcatDeployment] deploy, ctxPath=/TestService, vfsUrl=TestServiceEAR.ear/TestService.war
        13:44:55,662 INFO  [JBossASKernel] Created KernelDeployment for: myE-ejb.jar
        13:44:55,709 INFO  [JBossASKernel] installing bean: jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3
        13:44:55,725 INFO  [JBossASKernel]   with dependencies:
        13:44:55,725 INFO  [JBossASKernel]   and demands:
        13:44:55,725 INFO  [JBossASKernel]     jboss.ejb:service=EJBTimerService
        13:44:55,725 INFO  [JBossASKernel]   and supplies:
        13:44:55,725 INFO  [JBossASKernel]     jndi:RPSService/remote
        13:44:55,725 INFO  [JBossASKernel] Added bean(jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3) to KernelDeployment of: myE-ejb.jar
        13:44:56,772 INFO  [SessionSpecContainer] Starting jboss.j2ee:jar=myE-ejb.jar,name=RPSService,service=EJB3
        13:44:56,803 INFO  [EJBContainer] STARTED EJB: com.monz.rpz.RPSService ejbName: RPSService
        13:44:56,819 INFO  [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
        13:44:57,381 INFO  [DefaultEndpointRegistry] register: jboss.ws:context=myE-ejb,endpoint=RPSService
        13:44:57,428 INFO  [DescriptorDeploymentAspect] Add Service id=RPSService address=http://127.0.0.1:8080/myE-ejb/RPSService implementor=com.monz.rpz.RPSService invoker=org.jboss.wsf.stack.cxf.InvokerEJB3 mtomEnabled=false
        13:44:57,459 INFO  [DescriptorDeploymentAspect] JBossWS-CXF configuration generated: file:/C:/jboss-5.0.0.GA/server/default/tmp/jbossws/jbossws-cxf1864137209199110130.xml
        13:44:57,569 INFO  [TomcatDeployment] deploy, ctxPath=/myE-ejb, vfsUrl=myE-ejb.jar
        13:44:57,709 WARN  [config] Unable to process deployment descriptor for context '/myE-ejb'
        13:44:59,334 INFO  [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
        13:44:59,397 INFO  [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
        13:44:59,459 INFO  [ServerImpl] JBoss (Microcontainer) [5.0.0.GA (build: SVNTag=JBoss_5_0_0_GA date=200812041714)] Started in 1m:21s:233ms

    But I still can't connect to it when I type my IP address in my browser. Thanks.
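
    Note the two lines near the top of the log, "run.bat: unused non-option argument: ûb" followed by "0.0.0.0", and that Coyote ends up listening on http-127.0.0.1-8080. That suggests the -b switch reached the script as a mangled character (for example a non-ASCII dash picked up by copy/paste), so the bind address was treated as a stray argument and JBoss fell back to loopback, which is exactly why remote browsers can't connect. A sketch of what to try, retyping the dash by hand:

        rem bind every interface (or pass the machine's own IP instead of 0.0.0.0)
        run.bat -b 0.0.0.0
        rem after startup the listener should show 0.0.0.0:8080 LISTENING
        netstat -an | find "8080"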

  • Business Owners - What Remote Desktop Solution Do You Use To Service Your Clients' PCs?

    - by Sootah
    Howdy fellow computer geeks, I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them; only rarely will we require the customer to bring us the computer themselves.

    In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways to service our clients remotely whenever possible. What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair.

    Ideally I would like a web-based solution, because walking customers through installing, connecting, and configuring software over the phone would be unacceptable given the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (The option to just email them a link that does all this for them would be what we'd be aiming for.)

    Along with ease of use, the product must not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must.

    The only product I know of right now that does all of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see is that it costs $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options I may not have come across.

    So: are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost? Thanks in advance for your help! -Sootah

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "cloud computing is the way of the future" and encouraging us to switch from running our own email on Exchange to using Gmail and Google Apps for everything. Additionally, one of our departments has been pushing from inside to make this transition within their own department, if not throughout the entire organization.

    I can definitely see some benefits, such as:

    - Archive space: we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    - OS agnostic: Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools and support. Google offers this.
    - Better archiving: the potential for e-discovery, which doesn't exist in a practical way with our current setup.
    - Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange.
    - Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely project such an image.
    - Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts.

    However, there are also some serious drawbacks:

    - We would essentially be outsourcing one of our mission-critical systems to a third party.
    - The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?)
    - Our data would not remain under our roof, or even in our country (Canada). This obviously has pluses on the disaster-recovery side, but I think there are potential negatives on the legal side.
    - I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (I'm not sure how much access we would have to basic mail logs, for instance.)

    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite, Gmail to Exchange)? Can you add to the points I have already outlined?

  • Ubuntu Server 12.04 auto-installation freezes at "Kickseed running" if ks.cfg has %post scripts

    - by john206
    I'm trying to make a custom Ubuntu Server ISO file. The kickstart file (ks.cfg) runs smoothly when there is no %post section, and Ubuntu installs correctly with the ks configuration. With %post present, the installer finishes installing the base system, apt and grub, echoes "Kickseed running..." and freezes at 0%. I thought maybe apt-get update doesn't work in a ks file, so I tried installing other packages such as apache2, but no luck. I have created a dozen ISO images and installed them in VirtualBox, have been googling for 3 days and checked the Ubuntu forums, but haven't figured out the issue. I appreciate your help.

    This is how I made the ISO image; my ks.cfg and txt.cfg files live in the isolinux directory:

        root@ubuntu:/home/work# mount -o loop ubuntu-12.04-amd64.iso original-iso/
        rsync -a original-iso/ custom-iso/
        cp ks.cfg custom-iso/isolinux/
        cp txt.cfg custom-iso/isolinux/
        chmod -R 777 custom-iso/
        # Creating ISO image
        mkisofs -D -r -V "$IMAGE_NAME" -cache-inodes -J -l \
            -b isolinux/isolinux.bin -c isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table \
            -o ~/ubuntu-12.04-alternate-custom-amd64.iso custom-iso/

    ks.cfg:

        # Generated by Kickstart Configurator
        # platform=AMD64 or Intel EM64T
        # System language
        lang en_US
        # Language modules to install
        langsupport en_US
        # System keyboard
        keyboard us
        # System mouse
        mouse
        # System timezone
        timezone America/Los_Angeles
        # Root password
        rootpw --iscrypted somethingsomething
        # Initial user
        user ubuntu --fullname "ubuntu" --iscrypted --password somethingsomething.
        # Reboot after installation
        reboot
        # Use text mode install
        text
        # Install OS instead of upgrade
        install
        # Use CDROM installation media
        cdrom
        # System bootloader configuration
        bootloader --location=mbr
        # Clear the Master Boot Record
        zerombr yes
        # Partition clearing information
        clearpart --all --initlabel
        # Disk partitioning information
        part /boot --size 128 --fstype=ext3 --asprimary
        part / --size 512 --fstype=ext3 --asprimary
        part swap --size 512
        part /tmp --size 512 --fstype=ext3
        part /var --size 512 --fstype=ext3
        part /usr --size 4096 --fstype=ext3
        part /home --size 2048 --fstype=ext3
        # System authorization information
        auth --useshadow --enablemd5
        # Network information
        network --bootproto=dhcp --device=eth0
        # Firewall configuration
        firewall --disabled --http --ftp --ssh
        # X Window System configuration information
        xconfig --depth=32 --resolution=1024x768 --defaultdesktop=GNOME

        %post
        apt-get update
        mkdir /home/user

    txt.cfg:

        default autoinstall
        label autoinstall
          menu label ^Install Custom Ubuntu Server
          kernel /install/vmlinuz
          append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz quiet ks=cdrom:/isolinux/ks.cfg --
        label install
          menu label ^Install Ubuntu Server
          kernel /install/vmlinuz
          append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz quiet --
        label cloud
          menu label ^Multiple server install with MAAS
          kernel /install/vmlinuz
          append modules=maas-enlist-udeb vga=788 initrd=/install/initrd.gz quiet --
        label check
          menu label ^Check disc for defects
          kernel /install/vmlinuz
          append MENU=/bin/cdrom-checker-menu vga=788 initrd=/install/initrd.gz quiet --
        label memtest
          menu label Test ^memory
          kernel /install/mt86plus
        label hd
          menu label ^Boot from first hard disk
          localboot 0x80
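
    One thing worth trying: kickseed implements only a subset of kickstart, and a %post that blocks on the network, on apt prompts, or on console output can hang at exactly this step. A sketch of a more defensive %post, with everything run non-interactively and logged to the target system so there is something to inspect after a freeze (paths are illustrative):

        %post
        # run non-interactively and capture all output; a %post that prompts or
        # waits on stdout can hang the "Kickseed running..." step
        (
          export DEBIAN_FRONTEND=noninteractive
          apt-get update
          mkdir -p /home/user
        ) > /root/post-install.log 2>&1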

  • What Remote Desktop Solution Do You Use To Service Your Clients' PCs? [closed]

    - by Sootah
    Possible Duplicate: What’s the best Remote Desktop Application?

    I am the owner of a local computer repair business that primarily services its clients on-site. On the occasions that we do service machines in the office, we generally have one of our techs pick the computer up while they are out and about and bring it back with them; only rarely will we require the customer to bring us the computer themselves.

    In order to reduce costs, be much more efficient, and potentially expand our market far beyond what would be feasible with travel required, I am looking at ways to service our clients remotely whenever possible. What we need is a solid remote desktop application that is incredibly easy for our customers to connect to, and robust enough that we don't need the client babysitting the computer during the entire repair.

    Ideally I would like a web-based solution, because walking customers through installing, connecting, and configuring software over the phone would be unacceptable given the level of service they are used to. Effectively we'd want them to be able to just go to a URL, enter a PIN or something, and then they are connected and ready to rumble. (The option to just email them a link that does all this for them would be what we'd be aiming for.)

    Along with ease of use, the product must not require any further intervention on the part of the client after we have connected. Nobody is going to be happy if we have to call them every 15 minutes so they can reconnect to us every time we reboot, so auto-reconnect is an absolute must.

    The only product I know of right now that does all of this is LogMeIn Rescue. It allows unattended access, the applet is lightweight and installs quickly, and the customer can either enter a PIN on the site or just click a link emailed to them in order to connect. The only real downside I see is that it costs $120.00/month per technician. While we'd ultimately save far more than that per month in fuel costs alone, I'd like to explore any other options I may not have come across.

    Are there any equally good products out there? If so, what are they, why do you recommend them, how have you been utilizing them yourself, and what do they cost?

  • What is a good layout for a somewhat advanced home network and storage solution?

    - by Shaun
    My home network/storage needs are changing and I am searching for opinions and starting points on a good network/storage layout that can serve my needs for a few years into the future. I think I have a decent starting point in equipment, but I am also willing to invest fairly heavily in a solution that can last me a while. I am a bit of a tech nerd and I have a moderate tolerance for setup work. I would prefer maintenance to be somewhat low once the system is set up, but I am willing to accept some trade-offs.

    Existing equipment:

    - Router: Netgear WNDR3700 (gigabit)
    - Router: D-Link GamerLounge DGL-4300 (gigabit)
    - Switch: 16-port Trendnet green switch (gigabit)
    - Switch: 5-port Trendnet green (gigabit)
    - Computer: i7-950 office computer (gigabit Ethernet)
    - Computer: Q6600 quad-core media center, hooked up to the TV, records shows (gigabit Ethernet)
    - Computer: Acer 1810T ultraportable laptop (gigabit and N Ethernet)
    - NAS: Intel SS4200-E (gigabit)
    - External hard drive: 2TB WD Green drive (eSATA)
    - All kinds of miscellaneous network-connected devices: TV, Blu-ray, Verizon network extender, HDHomeRun TV tuners, etc.

    Requirements:

    - Robust backup for a growing collection of huge family picture files and personal files, around 1.5TB (including offsite backup)
    - A central location for all users' files, while also keeping them secure from each other
    - Storage for terabytes of movie backups and recorded TV, and access to them from all computers (maybe around 4TB eventually)
    - The possibility to host files to friends and family easily

    Nice to have:

    - Backup of the terabytes of movie backups

    Intriguing possibilities:

    - The capability to have users' Windows desktops and files look the same from all network computers

    I am not sure whether the new Windows Home Server 2011 fits into this well, whether I need a domain server, how best to organize my backups, or how to most effectively use RAID. Currently I simply back up all computers to a RAID 1 on the NAS box, which I figured could prevent the situation where I reach for a backup and find the disk corrupt.

    One possibility I am considering now is simply using my media-center PC with a huge RAID of hard drives on which all files are stored. Pseudo-backup of all files would exist because of the RAID, and important files would also be backed up off-site by carrying hard drives to work. But what if corruption seeps into the files and the corrupted data is then backed up? Does RAID protect against this? I really want to take next to zero risk with the irreplaceable files; I can handle some degree of risk with the movies and other files.

    I'm looking for critiques of this idea as well as other possibilities. To summarize, my goal is high functionality, media capability, and robust backup of irreplaceable files.

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets:

        10.0.42.0/24 - public subnet
        10.0.83.0/24 - private subnet

    To bridge these two, I have a Funtoo instance with a pair of NICs:

        eth0    10.0.42.10
        eth1    10.0.83.10

    which has the following routing table:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.83.0       *               255.255.255.0   U     0      0        0 eth1
        10.0.83.0       *               255.255.255.0   U     203    0        0 eth1
        10.0.42.0       *               255.255.255.0   U     202    0        0 eth0
        loopback        *               255.0.0.0       U     0      0        0 lo
        default         10.0.42.1       0.0.0.0         UG    0      0        0 eth0
        default         10.0.42.1       0.0.0.0         UG    202    0        0 eth0

    An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there are no rules that would get in the way (eventually this will be managed by Shorewall, but basic connectivity should come first).

    Subnet details from the VPC interface:

        CIDR: 10.0.83.0/24
        Destination     Target
        10.0.0.0/16     local
        0.0.0.0/0       [ID of eth1 on NAT box]
        Network ACL: Default
        Inbound:
        Rule #  Port (Service)  Protocol  Source       Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY
        Outbound:
        Rule #  Port (Service)  Protocol  Destination  Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY

        CIDR: 10.0.83.0/24
        VPC:
        Destination     Target
        10.0.0.0/16     local
        0.0.0.0/0       [Internet Gateway ID]
        Network ACL: Default (replace)
        Inbound:
        Rule #  Port (Service)  Protocol  Source       Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY
        Outbound:
        Rule #  Port (Service)  Protocol  Destination  Allow/Deny
        100     ALL             ALL       0.0.0.0/0    ALLOW
        *       ALL             ALL       0.0.0.0/0    DENY

    I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious or doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help.

    EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discovered I can see the ports and connect to them; pings are just being blocked.
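
    Given the edit, with TCP connects succeeding while pings are dropped, the usual culprit in a VPC is the instances' security groups rather than the network ACLs shown above, since security groups allow no inbound ICMP by default. A sketch with the modern AWS CLI (which postdates this post); sg-xxxxxxxx is a placeholder for the group attached to the private-subnet instances:

        # allow all ICMP types from anywhere in the VPC's address space
        aws ec2 authorize-security-group-ingress \
            --group-id sg-xxxxxxxx \
            --protocol icmp --port -1 \
            --cidr 10.0.0.0/16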

  • Intermittent internet access on a flat network - Router is connected

    - by Naveed
    I'm looking for some help with network settings. I've just started a new job (non-IT!) and we have problems with our office network. I'm the most IT-literate person in the organisation (15 permanent employees) and so have been dealing with IT issues. Our main bit of software is web-based, so we need constant web access, but it sometimes goes down for between 20 minutes and 3 hours despite everything seemingly working fine. It's a flat network with wireless APs, a BT Business Broadband 8Mbit connection, and that's about it. We have no servers and no standard settings, and staff are encouraged to bring in their own laptops and connect! The network basically exists to provide internet access, and that's it. We also have students accessing the wireless (and I know there's a whole list of access and content issues etc., but right now we just need internet access stabilised).

    This is what we have:

    Building 1
    - Cisco SLM-224P 24-port PoE 10/100 switch with 2 gigabit ports
    - 3 x ZyXEL NWA-3160 wireless APs
    - Samsung OfficeServ 7100 phone server, which borrows the building's wiring

    Building 2
    - Netgear GS605-UK 5-port 10/100/1000 switch
    - 1 x ZyXEL NWA-3160 wireless AP
    - 1 x BT Business Hub (2wire BT2700hgv), which is the DHCP server

    We have 2 link cables between the buildings. One connects the two switches on a gigabit port. The second (oddly) connects the switch in Building 2 to the OfficeServ server in Building 1.

    When the internet goes down I can still access the router through a wireless connection. I can also ping websites and get a response; Firefox just says "Cannot connect" etc. The system then heals itself when it feels like it.

    (Sorry if this is asking too much, but) these are my immediate questions:

    1. Why would browser-based internet go down? I don't know enough about protocols etc., but I can try to standardise settings.
    2. The WAPs have a DNS server setting and I don't know whether it should be "None" or "From DHCP".
    3. What should be the DHCP server? The router or the Cisco switch? Or something else?!
    4. Would there be any problem in connecting the second link from switch to switch? Is that good practice?
    5. Is it worth swapping the Netgear GS605 for either a Cisco SG200-08 or a Netgear GS108T-200?
    6. Is it worth upgrading the router to, for instance, a Cisco RV042G dual-gigabit router, which would also act as a switch? Or is it better to have a separate router and switch in Building 2?

  • How to troubleshoot performance issues of PHP, MySQL and generic I/O

    - by jbx
    I have a WordPress-based website running on shared hosting. Its response time is quite decent (around 2 s to retrieve the HTML page and 5 s to load all the resources). I was planning to move it to a dedicated virtual server (Ubuntu 12.04 LTS), which should theoretically improve things and make them more consistent given that it is not shared. However, I observed severe performance degradation, with the page taking 10 seconds to be generated.

    I ruled out network issues by editing /etc/hosts on the server and mapping the domain to 127.0.0.1, then using the Apache load tester ab to fetch the HTML, so JS, CSS and images are all excluded. It still took 10 seconds.

    I have Zpanel installed on the server, which also uses MySQL, and its pages come up quite fast (1.5 s), as does phpMyAdmin. Performing some queries on the WordPress database directly through phpMyAdmin returns them quickly too, with query times in the 10 to 30 millisecond region. Memory is also sufficient, with only 800 MB used of the 1 GB physical memory available, so it doesn't seem to be a swap issue either. I also installed APC to try to improve PHP performance, but it had no effect.

    What else should I look for? What could be causing this degradation in performance? Could it be some kind of I/O issue, since I am running on a cloud-based virtual server? I wish to raise the issue with my provider, but without actual data from some diagnosis I am afraid he will just blame my application.

    UPDATE with sar output (every second) during an HTTP request:

        02:31:29     CPU     %user   %nice   %system   %iowait   %steal   %idle
        02:31:30     all      0.00    0.00      0.00      0.00     0.00  100.00
        02:31:31     all      2.22    0.00      2.22      0.00     0.00   95.56
        02:31:32     all     41.67    0.00      6.25      0.00     2.08   50.00
        02:31:33     all     86.36    0.00     13.64      0.00     0.00    0.00
        02:31:34     all     75.00    0.00     25.00      0.00     0.00    0.00
        02:31:35     all     93.18    0.00      6.82      0.00     0.00    0.00
        02:31:36     all     90.70    0.00      9.30      0.00     0.00    0.00
        02:31:37     all     71.05    0.00      0.00      0.00     0.00   28.95
        02:31:38     all     14.89    0.00     10.64      0.00     2.13   72.34
        02:31:39     all      2.56    0.00      0.00      0.00     0.00   97.44
        02:31:40     all      0.00    0.00      0.00      0.00     0.00  100.00
        02:31:41     all      0.00    0.00      0.00      0.00     0.00  100.00

    My suspicion that this is an I/O-related issue also comes from the fact that a caching plugin I use to reduce the number of database queries, by precompiling PHP pages, actually makes things worse instead of better. It seems that file access is making things worse.
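
    For what it's worth, the sar sample above shows the slow seconds going almost entirely to %user with %iowait at zero, which looks more CPU-bound than disk-bound; still, raw disk numbers are easy to collect and make a much stronger case with a provider. A rough sketch (the test path is a scratch placeholder; iostat comes from the sysstat package):

        # sequential write/read throughput with the page cache bypassed
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=512 oflag=direct
        dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct
        rm -f /tmp/ddtest

        # per-device utilisation and await, sampled while requesting the page
        iostat -dx 1 10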

  • How to have Windows Server DNS use hosts file to resolve specific host names

    - by user41079
    Hello, everyone. I'm facing a small problem with the Windows Server 2003 DNS service. In my corporation, I run a Microsoft DNS server (172.16.0.12) to do name resolution for the company intranet (domain names end in dev.nls, resolving to IPs in 172.16..), and it is also configured as a forwarder, sending other domain names (e.g. *.google.com, *.sf.net) out to real Internet DNS servers. This internal DNS server never serves users from the outside world.

    We also run a mail server (serving incoming mail for the real Internet domain @nlscan.com) inside the company firewall, which can be reached in either of two ways:

    - by connecting to 172.16.0.10 from within the intranet;
    - by connecting to mail.nlscan.com (resolving to 202.101.116.9) from the Internet.

    Note that 172.16.0.10 and 202.101.116.9 are not the same physical machine. The 202 address is a firewall machine that forwards ports 25 and 110 to the intranet address 172.16.0.10.

    Now my question: if users inside the corporate LAN resolve mail.nlscan.com, they get 202.101.116.9. That is correct and workable, but not good, because the mail traffic goes out to the firewall machine and then bounces back to 172.16.0.10. I would like our internal DNS server to intercept the name mail.nlscan.com and resolve it to 172.16.0.10. I had hoped to write an entry in the hosts file on 172.16.0.12 to do this, but how can the Microsoft DNS server be made to recognize a hosts file?

    Maybe you will suggest having intranet users simply use 172.16.0.10 to reach the mail server, but that is inconvenient: suppose an employee works on his laptop, daytime in the office and nighttime at home. When he is at home, he cannot use 172.16.0.10.

    Creating a zone for nlscan.com on our internal DNS server is not feasible, because the name server for the nlscan.com domain is at our ISP, and it is responsible for resolving the other host names and sub-domains under nlscan.com. Thank you in advance.
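
    The Microsoft DNS service never consults the hosts file, but there is a standard trick that does exactly this without taking over all of nlscan.com: create a zone for the single name mail.nlscan.com and give it one A record, so every other name under nlscan.com still goes out to the ISP's servers. A sketch using dnscmd from the Windows Server 2003 Support Tools (the same zone and record can equally be created in the DNS management console):

        rem an authoritative "pin-point" zone for just this one host name
        dnscmd 172.16.0.12 /ZoneAdd mail.nlscan.com /Primary /file mail.nlscan.com.dns
        rem the @ (same-as-zone) record is mail.nlscan.com itself
        dnscmd 172.16.0.12 /RecordAdd mail.nlscan.com @ A 172.16.0.10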

  • Slow Memcached: Average 10ms memcached `get`

    - by Chris W.
    We're using New Relic to measure our Python/Django application's performance, and it reports that across our system "Memcached" takes an average of 12 ms to respond to commands. Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcache gets take up to 30 ms; I can't find a single Memcache get that returns in less than 10 ms.

    More details on the system architecture: we currently have four application servers, each of which has a memcached member, and all four memcached members participate in a memcache cluster. We run on a cloud hosting provider and all traffic runs across the "internal" network (via "internal" IPs). When I ping from one application server to another, responses come back in ~0.5 ms.

    Isn't 10 ms a slow response time for Memcached? As far as I understand, if you think "memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

    Here's the output of the memcache-top command:

        memcache-top v0.7       (default port: 11211, color: on, refresh: 3 seconds)

        INSTANCE                USAGE   HIT %   CONN    TIME    EVICT/s GETS/s  SETS/s  READ/s  WRITE/s
        cache1:11211            37.1%   62.7%   10      5.3ms   0.0     73      9       3958    84.6K
        cache2:11211            42.4%   60.8%   11      4.4ms   0.0     46      12      3848    62.2K
        cache3:11211            37.5%   66.5%   12      4.2ms   0.0     75      17      6056    170.4K

        AVERAGE:                39.0%   63.3%   11      4.6ms   0.0     64      13      4620    105.7K

        TOTAL:          0.1GB/  0.4GB           33      13.9ms  0.0     193     38      13.5K   317.2K
        (ctrl-c to quit.)

    Here is the output of the top command on one machine (roughly the same on all cluster machines; as you can see there is very low CPU utilization, because these machines only run memcache):

        top - 21:48:56 up 1 day,  4:56,  1 user,  load average: 0.01, 0.06, 0.05
        Tasks:  70 total,   1 running,  69 sleeping,   0 stopped,   0 zombie
        Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
        Mem:    501392k total,   424940k used,    76452k free,    66416k buffers
        Swap:   499996k total,    13064k used,   486932k free,   181168k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
         6519 nobody    20   0  384m  74m  880 S  1.0 15.3  18:22.97 memcached
            3 root      20   0     0    0    0 S  0.3  0.0   0:38.03 ksoftirqd/0
            1 root      20   0 24332 1552  776 S  0.0  0.3   0:00.56 init
            2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kworker/0:0
            5 root      20   0     0    0    0 S  0.0  0.0   0:00.02 kworker/u:0
            6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
            7 root      RT   0     0    0    0 S  0.0  0.0   0:00.62 watchdog/0
            8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
            9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
        ...output truncated...
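
    A useful sanity check here is to time one raw round trip from an application server to a single cache node, cutting New Relic and the Python client out of the loop entirely; if the raw round trip is near the ~0.5 ms ping, the extra 10 ms is being spent in the client or in instrumentation, not on the wire. A rough sketch, assuming netcat is installed and cache1 resolves as in the memcache-top output:

        # one TCP connect plus one "stats" round trip to a single node
        time printf 'stats\r\nquit\r\n' | nc cache1 11211 > /dev/null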

  • Critique My Backup and Storage Plan

    - by MetaHyperBolic
    My current storage (RAID-1 off a hardware RAID card) and backup (a spare drive) solutions for my home network are inadequate. I have too much data scattered across various one-off drives; it is time to evolve. Backups seem simple enough, at least: lots of big drives. However, I am bewildered by the number of choices for small home storage. The Drobo S looks appealing; so does the ReadyNAS. I am not looking for bunches of shiny features, I'm mostly interested in reliability. I am not interested in building yet another PC to act as a file server, or in doing something in the cloud, or whatever. I'm stupid, so I am keeping it simple.

    Requirements for the main volume:

    - Starting working space of roughly 2TB, with options for growth up to 5TB
    - RAID or something RAID-like with at least one parity drive
    - eSATA II for speed during backups
    - The ability to shut down gracefully when alerted of low power by a UPS

    Optional but desirable:

    - Will take 2TB drives now, with options for the larger 3TB drives coming in 2010-2011
    - RAID-6 or something similar, with two parity drives
    - Hot spare

    An Ethernet connection is not required, as the volume will be shared via the same machine that runs my home print server.

    Backups:

    - Performed via ROBOCOPY in mirror mode to an external hard drive over an eSATA II connection.
    - Start by rotating between two external 2TB hard drives, eventually going up to six external 2TB drives.
    - Start with a weekly backup, moving to a bi-weekly backup as more drives are added.
    - Move to 3TB drives as the size of my main volume increases.
    - Backup drives will be stored at an off-site location.

    Hard drives: I plan on buying all of the same model, but different batches from different vendors. I found a burn-in utility with which I can pound away on the drives for a couple of weeks before adding them to the backup pool or the main volume.

    I estimate that I am looking at roughly $1,500 to start, once I begin throwing in two 2TB drives for backup and four for storage. So, are there any obvious flaws in my plan? What have I overlooked? Any suggestions for a storage device for my main volume that fits my requirements? Or do I just keep it simple, 2 drives in RAID-1, then perform due diligence with my backups, accepting that I will have to buy a whole new unit when my data grows past 2TB?
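
    For the mirror step described above, the ROBOCOPY run might look like the sketch below; the drive letters and log path are placeholders. /MIR makes the target an exact mirror, deleting from the backup whatever was deleted on the source, which is exactly why rotating several external drives matters; /FFT loosens timestamp comparison in case a target drive is ever FAT-formatted:

        robocopy D:\Data F:\Backup /MIR /R:2 /W:5 /FFT /NP /LOG:F:\logs\backup.log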

  • I/O intensive MySql server on Amazon AWS

    - by rhossi
    We recently moved from a traditional data center to cloud computing on AWS. We are developing a product in partnership with another company, and we need to create a database server for the product we'll release. I have been using Amazon Web Services for the past 3 years, but this is the first time I have received a spec with such a specific hardware configuration. I know there are trade-offs and that real hardware will always be faster than virtual machines; knowing that beforehand, what would you recommend?

    1. Amazon EC2?
    2. Amazon RDS?
    3. Something else?
    4. Forget it baby, stick to the real hardware

    Here are the hardware requirements. This server will be focused on I/O and MySQL for the statistics, with memory size and disk space for the image hosting.

    Server 1:

    - I/O: the main load on this server will be I/O processing. FusionIO cards have proven themselves extremely efficient and are currently the best you can have in this domain: Fusion ioDrive2 MLC 365GB (http://www.fusionio.com/load/-media-/1m66wu/docsLibrary/FIO_ioDrive2_Datasheet.pdf)
    - CPU: MySQL will use fewer CPU cores than Apache but will use them very hard; the E7 family has 30M of L3 cache, which provides a performance boost. 1x Intel E7-2870 will be OK.
    - Storage: SAS will be good enough in terms of performance, especially considering the space required: RAID 10 of 4 x SAS 10k or 15k for a total available space of 512 GB.
    - Memory: 64 GB minimum is required on this server considering the size of the statistics database. Warning: the statistics database will grow quickly; if possible consider starting with 128 GB directly, it will help.

    Server 2: the spec is identical to Server 1.

    Thanks in advance. Best,
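
    Before choosing between EC2 and RDS, it helps to quantify how far AWS storage falls short of the FusionIO baseline the spec assumes, since that gap decides whether the spec is achievable in the cloud at all. A rough fio sketch (fio assumed installed; the path, size and read/write mix are illustrative):

        # 70/30 random read/write at 16k with direct I/O, 60 s steady state
        fio --name=randrw --filename=/data/fiotest --size=2G --bs=16k \
            --rw=randrw --rwmixread=70 --ioengine=libaio --iodepth=32 \
            --direct=1 --runtime=60 --time_based --group_reporting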

  • What the hell was THAT?!?

    - by Massimo
    My system is Windows XP SP3, updated with the latest patches. The PC is connected to a Cisco 877 ADSL router, which does NAT from the internal network to its single static public IP address. There are no forwarded ports, and the router's management console can only be accessed from the inside.

    I was doing two things: working on a remote office machine via VPN and browsing some web pages on the Cisco web site. The remote network is absolutely safe (it's a lab network: four virtual servers, no publicly accessible services and no users at all; also, none of what I'm going to describe ever happened there). The Cisco web site... well, I suppose that's quite safe, too.

    Suddenly, something happened. Strange popups appeared everywhere; programs claiming to be "antimalware", "antispyware" and so on began auto-installing; fake Windows Update and Security Center icons popped up in the system tray; svchost.exe began crashing repeatedly. Then, finally, after some minutes of this: BSOD. And, upon rebooting, BSOD again. Even in safe mode.

    OK, that was obviously some virus/trojan/whatever. I had to install a new copy of Windows on another partition to clean things up. I found strange executables, services and DLLs almost everywhere. Among other things, user32.dll and ndis.sys had been replaced. A fake piece of software called "Antimalware Doctor" had been installed. There were services with completely random names or even GUIDs (!), and also ones called "IpSect" and "Darkness". There were executable files without an .exe extension. There were even two boot-class drivers, which I'm quite sure are what finally caused the system to crash. A true massacre.

    OK, now the questions: What the hell was that?!? It was something more than a simple virus! And how did it manage to attack my computer, given that I am behind a firewall and was not doing anything even potentially harmful on the web at the time?

  • Some apps very slow to load on Windows 7, copy & paste very slow.

    - by Mike
    Hello, I searched here and couldn't find an issue similar to mine, but apologies if I missed it. I've searched the web and no one else seems to be having the same issue either. I'm running Windows 7 Ultimate 64-bit on a pretty high-spec machine (well, apart from the graphics):

    - Asus M4A79T Deluxe
    - AMD Phenom II 965 Black Edition (quad core, 3.4GHz)
    - 8GB Crucial Ballistix DDR 1333MHz RAM
    - 80GB Intex X25 SSD for the OS
    - 500GB mechanical drive for data
    - ATI Radeon HD 4600 series PCI-e
    - Be Quiet! 850W PSU

    I think those are all the relevant stats; if you need anything else, let me know. I've updated the chipset, graphics and various other drivers, all to no avail; the problem remains. I have also unplugged and replugged every internal connection and cleaned the RAM edge connectors.

    The problems:

    - VLC and CDBurnerXP both take ages to load; I'm talking 30 seconds and 1 minute respectively, which is really not right.
    - Copy and paste from an OpenOffice spreadsheet into Firefox, for example, is really, really slow; I'll have pressed Ctrl+V 5 or 6 times before it actually happens. If I copy and then wait 5 to 10 seconds or so, it pastes first time, so it's definitely some sort of time lag.
    - Command & Conquer: Generals - Zero Hour: when playing, it runs perfectly for about 10 or 20 seconds, then pauses for 3 or 4, then runs for another 10 or 20 seconds and pauses again, and so on. I had Task Manager open on my second monitor while playing once, and I noticed the game was using about 25% of the CPU, pretty stable. But when the pause came, another task didn't shoot up to 100% like others on the web have reported (a similar but not identical issue, often involving svchost.exe); instead the game dropped to 2 or 3% usage, then went back up to 25% when it resumed playing properly. Very odd! But it gets even odder...

    I had a BSOD and reboot last week, and when the machine came back up the problem had completely gone: I could play C&C to my heart's content, both of the other apps loaded instantly, and copy and paste worked instantly too. Then I did an AVG update earlier this week which required a reboot; I rebooted, and the problem's back. I don't think it's AVG-related, though; I think it was just coincidence that that was the app requiring a reboot, and any reboot would have brought the issue back. A number of reboots later, it hasn't gone away again.

    If anyone could suggest the likely cause of and solution to these issues, I'd be most grateful; it's driving me nuts! Thanks, Mike....

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our server admin set up two virtual Windows 2008 machines with IIS installed and put them under NLB. I came in and changed the application pool the website runs under to our domain account, which has proper access to the database and to the file share hosting our .NET web application, Sitefinity, and changed it to .NET 4 Integrated. NLB and everything ran fine on both servers.

    He brought up the third server for our cluster on Tuesday and I performed the same actions. The only difference was that I was given admin rights on the third server so I could set it up remotely instead of going to his office. He has full control over the share and the NTFS perms on \\hostname\Sitefinity, and I believe I only had read access. I pointed the web site at the same \\hostname\Sitefinity\sitename share the others use, and the authentication/authorization test settings passed. But when I hit the site from http://localhost (as I had done successfully on the first two servers before trying the cluster's IP address), I received HTTP Error 401.3 - Unauthorized.

    I've verified many times that the application pool is running under the same service account. I tried hitting a simple test.htm; it works fine on the first two servers, but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory, re-pointed the website, and that ran perfectly. I turned on Failed Request Tracing, and it acts as if requests still run as the local IUSR account instead of my domain account. Here is an excerpt of the File Cache Access Start and the error from the trace:

        FileName                    \\hostname\sitefinity\sitename\test.htm
        UserName                    IUSR
        DomainName                  NT AUTHORITY
        ----------
        Successful                  false
        FileFromCache               false
        FileAddedToCache            false
        FileDirmoned                true
        LastModCheckErrorIgnored    true
        ErrorCode                   2147942405
        LastModifiedTime
        ErrorCode                   Access is denied. (0x80070005)
        ----------
        ModuleName                  IIS Web Core
        Notification                2
        HttpStatus                  401
        HttpReason                  Unauthorized
        HttpSubStatus               3
        ErrorCode                   2147942405
        ConfigExceptionInfo
        Notification                AUTHENTICATE_REQUEST
        ErrorCode                   Access is denied. (0x80070005)
        ----------

    My personal AD account was then granted read/write perms on the share, so I created a new application pool and moved the site under it in case there was an issue with the original pool, but no success. I created another pool under my own account and it still failed. It just seems like the server is not accessing the files under the account my application pools run under, although that's the only way I've done things before. I set the Physical Path Credentials in Advanced Settings on the site to the service account, and it threw a 500 error of some sort, so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm forcing impersonation of the IUSR account or something.
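
    The trace showing UserName IUSR is the tell: with anonymous authentication, IIS 7 accesses content as the configured anonymous user (IUSR by default), not as the application pool identity, and IUSR has no rights on a UNC share. A sketch of the two usual fixes, with placeholder names; setting the anonymous user to blank makes IIS use the pool identity instead:

        rem run anonymous authentication as the application pool identity
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
          -section:system.webServer/security/authentication/anonymousAuthentication ^
          /userName:"" /commit:apphost

        rem ...and/or grant the service account rights on the content
        icacls \\hostname\Sitefinity /grant "DOMAIN\serviceaccount:(OI)(CI)RX" /T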

    Read the article

  • Disable Acer eRecovery system

    - by Joel Coehoorn
    The meat of this question is that I'm looking for a way either to require a password before the recovery partition can be used, or to "break" the recovery partition (specifically, Acer eRecovery) in a way that I can later "unbreak" only by booting normally into Windows first.

    Here are the full details. I have a set of new Acer Veriton n260g machines in a computer lab. A lot of work went into setting up this lab to work well - for example, Office 2007 and other programs needed by the students were installed, all Windows updates were applied, and a default desktop was set up. All in all, it's several hours' work to fully set up one machine. Unfortunately, I don't currently have the ability to easily image these machines, and even if I did, I would want to avoid the downtime while an image is restored. Therefore, I've taken steps to lock them down - namely DeepFreeze and a BIOS password to prevent booting from anywhere but the frozen hard drive.

    DeepFreeze is an amazing product - as long as you boot from the frozen hard drive, there is no way to make permanent changes to that drive; anything you do is wiped when the machine restarts. It lets me give students the leeway to do what they want on lab computers without worrying about them breaking something.

    The problem is that even with the BIOS locked and set to boot only from the hard drive, these Acers still offer a simple way to choose a different boot source: shut the machine down and hold a paper clip in a little hole at the top while you turn it on again. This puts it into "Acer eRecovery" mode. That by itself is no big deal - you can still power-cycle with no impact. But if you then click through the menu to reset the machine (we're now past the point of curiosity and on to intent), it will wipe the hard drive and restore it to its original state. Of course, a few students have already figured this out and reset a couple of machines. That's unfortunate, but inevitable.

    I don't want to destroy the ability to do this entirely (which I could, by repartitioning the drives to remove the recovery partition), but I would like a way to require a password first, or to "break" the recovery system in a way that I can "unbreak" only if I first un-freeze the hard drive in DeepFreeze. Any ideas?

    Read the article

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go into the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called the "Hamachi bridge", I believe) and then rebooting the server. This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a hub-and-spoke configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (users could not log on with domain accounts at the remote site unless those accounts were already cached on that machine). I only tried a "gateway" network because LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a "Gateway" network type with the server acting as the Gateway Node. You would configure the gateway settings on the server in the Hamachi client on that machine to push whatever IPs/DNS settings you prefer, and at that point AD would be able (all things being equal) to communicate to the client node when it attaches. We do not have any Active Directory configuration info as that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is happening here.
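
    An aside on the hub-and-spoke DNS problem: domain logons depend on the client resolving the domain's SRV records, so one quick check from the remote machine is to query the server's Hamachi address directly. A rough sketch (the domain name and Hamachi IP below are placeholders for yours):

        rem Ask the DC's Hamachi address for the domain-controller SRV record.
        rem If this fails, point the remote client's DNS at the DC's Hamachi IP
        rem before expecting domain logons to work over the tunnel.
        nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.local <dc-hamachi-ip>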

    Read the article

  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has the Mailbox role and the CAS role and sits in the office; CoLoExch01 has just the CAS role, is internet-facing, and sits in a colo. Three AD domain controllers are in the default site. Users could go to https://webmail.whatever.com/owa, get proxied to OfficeExch01, and everything was great.

    Well, we recently set up a separate AD site and put a domain controller and the CoLoExch01 server in the new site. I also made that remote DC a Global Catalog. Now, users get the following error:

        Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored.

    I also see event 41 errors in the logs:

        The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet.

    Looking this up, I see a lot of talk about the ExternalURL and InternalURL settings. However, everything worked great until we created the new AD site. I have also made sure the internal CAS server's /owa virtual directory is set to use Integrated Windows authentication. Is there something I need to do to let Exchange see that I've made these AD changes?
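
    For reference, the two settings the event log message is asking about can be inspected and changed from the Exchange Management Shell. A rough sketch - the server name and URL below are placeholders for your internal CAS:

        # Inspect the current auth and URL settings on every OWA virtual directory
        Get-OwaVirtualDirectory | Format-List Server, Name, InternalUrl, WindowsAuthentication

        # On the CAS in the mailbox's AD site, enable Integrated Windows auth and set InternalUrl
        Set-OwaVirtualDirectory "OFFICEEXCH01\owa (Default Web Site)" `
            -WindowsAuthentication $true `
            -InternalUrl "https://officeexch01.internal.example.com/owa"

    An iisreset on the CAS is typically needed before authentication changes take effect.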

    Read the article

  • Installing the Newest KeePass for MacOSX from Source

    - by firebush
    I've transitioned our passwords to KeePass. LastPass looks cool, but I prefer a system where we control the database locally rather than keeping it in the cloud. I have a Windows and a Linux system, and both are able to access our KeePass database easily. On my Linux system (Ubuntu), I simply installed KeePass via Synaptic and it just worked.

    So everything was working great until my wife tried to set things up on her MacBook to access the database. Huge problems. It was so easy on Linux that I didn't expect there to be issues. In case it's helpful, she's running a fresh install of Mac OS X 10.5.8, Leopard.

    We went to the KeePass download site, http://keepass.info/download.html, clicked on the link titled "KeePass 2.x for Mac OS X", and retrieved Mono 2.10.5 and KeePass 2.18 from there (the packages posted at the time of this writing). Mono installed without problems (at least, none that we saw). She opened the KeePass disk image and dragged it to the Applications side, unpacking it there. Following the KeePass installation instructions, she opened a terminal, changed to the directory in /Applications containing KeePass.exe, and ran it through Mono:

        mono KeePass.exe

    No application opens at all - we see a blip for it, but then it immediately goes away, which tells us it is crashing. Also disconcerting: people are throwing fits about copy-and-paste not working for KeePass 2.18 on Mac OS X. From what I've read, 2.19 addresses the copy/paste issue, and I'm hoping that version will solve all our issues.

    So here's my question: how can I try out 2.19 on her system? It doesn't seem to be packaged the way 2.18 is, but we're not scared of building it. I see that the source for 2.19 is available at the bottom of the download page. Can I just download that to her machine and run something to build it? I'm familiar with automake but not with building .NET source, so please answer gently if this is stupidly easy. :^)

    By the way, tomorrow's my wife's birthday, and this is getting her down. If you know how to navigate these issues, it would be a nice birthday gift for her. Thanks in advance!

    Update: I'll post this since it might be helpful to someone else. I got KeePass 2.18 to run by updating Mono to 2.10.9 (rather than the 2.10.5 given by the site above). With the later version of Mono, it runs without crashing. And yet I do see the copy-and-paste issue that others see: I can open a database on her machine, but the wrong data gets copied. So again, can someone help me install KeePass 2.19? Thanks!
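
    Not an authoritative build recipe, but .NET source like this is normally built with Mono's xbuild rather than automake. A rough sketch in Terminal, assuming the 2.19 source zip unpacks to a solution file named KeePass.sln (the archive name and output path here are guesses and may differ):

        unzip KeePass-2.19-Source.zip -d keepass-2.19
        cd keepass-2.19
        xbuild KeePass.sln /p:Configuration=Release   # xbuild ships with Mono 2.x
        # The built executable should land under a bin/Release (or Build) directory
        mono KeePass/bin/Release/KeePass.exe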

    Read the article

  • Macintosh computers cannot connect to router unless we re-start the modem and router

    - by dwwilson66
    We have a small office network with DSL and a Netgear WNR-2000 wireless router acting as the DHCP server. There are nine devices connected to the router, wirelessly and wired. Whenever a Mac tries to connect, it is unsuccessful until we restart the router. Each device that can connect to the network is listed in a table that assigns a fixed IP address to its MAC address, and I am running WPA-PSK security.

    I can view the router status and see that the Mac's MAC address is visible to the router, but with a 169.* (self-assigned) IP address, even though I'm mapping its MAC address to an IP address within my subnet. All non-Mac devices attach to the network properly and can access it even after a Mac has failed to connect; the network includes Windows devices, Roku boxes, printers, and internet-ready TVs. To me, this points to a DHCP issue in how the Mac communicates with my network.

    One interesting thing to note: if a Mac connects and is prevented from sleeping, it will stay connected indefinitely, and reissuing the security certificate from the router works fine. I'm not sure whether that's supposed to sever and re-establish a connection with the updated credentials or not, but I do stay connected. If the Mac sleeps and is awakened while the security certificate is still valid, it connects fine. If the certificate expires while the Mac is asleep, we need to restart the router. Restarting the router ALWAYS assigns the proper IP addresses to the Mac equipment. I have heard anecdotally that Macs don't play well with 802.11n; I have not tested any other wireless protocols.

    There are a couple of issues here. First, I found "Mac laptop crashing wireless router" here on the site, but it's not really applicable, since my router isn't crashing - though it does give some clues about Macs accessing the network. I did change my encryption from WEP to WPA-PSK, but after about a week we're still experiencing the issue, and I'm not sure there's anything else useful in that question.

    Second, I'm considering getting an 802.11g router and hooking it up to the wireless-N router: the g router would handle all the Mac traffic and would be set up as a Mac-only subnet, while everything else remains as is. However, I'm not sure whether this is doable on a technology level - do I need a bridge, or is there some way to do this with regular consumer gear?
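
    One data point worth collecting before blaming the router: when a Mac is stuck on a 169.* self-assigned address, you can force it to renegotiate DHCP from Terminal instead of rebooting the router. A small sketch, assuming the wireless interface is en1 (typical on MacBooks of that era):

        # Show what the Mac currently thinks its DHCP lease is
        ipconfig getpacket en1

        # Force the interface to restart DHCP negotiation
        sudo ipconfig set en1 DHCP

    If the renewal succeeds without touching the router, the problem is on the Mac's side of the lease renewal; if it still lands on 169.*, the router is ignoring the request, which points back at the WNR-2000.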

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the OST file (which was nearly 2GB), turning Cached Exchange Mode on and off, recreating the account in the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until, after fiddling with it for a while, I got this absurd error message dialog at startup, and Outlook exits after I click OK:

        Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance.

    (I'm not sure I can even trust that message. It's so long, it feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer.

    I ran Process Explorer against Outlook and it showed five or so ESTABLISHED connections to our Exchange server, plus two UDP listening ports and two CLOSE_WAIT connections to localhost. When I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server - more than filled Process Explorer's dialog box, so I'm guessing at least 20, probably more.

    The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name we use for email, and now Outlook will sometimes (always?) start by complaining about the SSL certificate for autodiscover.example.com. There is a web server there, but it doesn't host any sort of email autodiscovery. It doesn't make any difference whether I click OK or Cancel (or whatever the buttons are). I added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and they went away after.) I don't think this is related, since the same problem should exist for everyone, but it might be.

    This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution rather than assume it's a random glitch.
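
    For anyone wanting to reproduce the connection count without Process Explorer, the same snapshot can be taken with the tools built into XP. A quick sketch from a command prompt (the PID below is a placeholder for whatever tasklist reports for outlook.exe):

        rem Find Outlook's process ID
        tasklist /fi "imagename eq outlook.exe"

        rem List every TCP connection owned by that PID (replace 1234 with the real PID)
        netstat -ano | findstr " 1234"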

    Read the article

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and have run into a very strange issue where our web site becomes unavailable. GoDaddy support keeps saying the issue is in our web server settings, but looking at our test results, I doubt it.

    TEST ENVIRONMENT
    A Virtual Data Center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter with IIS 7. Server One has IP address 10.1.0.4 and Server Two has 10.1.0.3; both servers are on a private network not visible from outside. A port forward with IP address 50.62.13.174 is assigned to Server One.

    TEST DESCRIPTION
    JMeter is used as the client app to simulate 30 concurrent users sending 100 SOAP requests each, with an interval of 1 second between requests. The HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE
    Run from a computer in our office. Almost immediately after JMeter starts the test, the link above becomes unavailable in a browser. After the test completes, the link remains unavailable in a browser for about 5 more minutes; Remote Desktop keeps working, so we can still connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO
    Run from Server Two (part of our virtual data center). The test works very well, with no visible delays in processing, and the link stays available in a browser the whole time.

    TEST THREE
    Run from Server One itself using localhost. The result is the same as in TEST TWO - no issues.

    TEST FOUR
    We repeated TEST ONE from other computers located in different countries, all with the same result as TEST ONE.

    CONCLUSION
    Since the test works from Server Two but not from outside our virtual data center, we suspect the network or its capacity. It behaves as though our requests from outside get stuck somewhere before reaching our virtual data center. Has anybody had similar issues in the past? Is there a chance something is wrong with our server settings?
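
    To rule JMeter itself out, a similar load shape can be approximated with a few PowerShell jobs from any test machine. A rough sketch - it sends plain GETs rather than real SOAP POSTs, which should still be enough to trigger any connection-level throttling in front of the server:

        $url = 'http://50.62.13.174/v2/webservices.asmx'
        # 30 parallel jobs, each issuing 100 requests one second apart
        $jobs = 1..30 | ForEach-Object {
            Start-Job -ScriptBlock {
                param($u)
                $wc = New-Object System.Net.WebClient
                1..100 | ForEach-Object {
                    try { [void]$wc.DownloadString($u) } catch { $_.Exception.Message }
                    Start-Sleep -Seconds 1
                }
            } -ArgumentList $url
        }
        $jobs | Wait-Job | Receive-Job   # failures show up as exception messages

    If the outside-only failures persist with this too, a connection or SYN-flood limit on the port-forwarding device (rather than IIS) would be worth raising with GoDaddy.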

    Read the article
