Search Results

Search found 9362 results on 375 pages for 'direct admin'.

Page 283/375 | < Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made two little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are OK on our dev machines (messages are consumed properly). Then came the benchmarks. We first noticed that sometimes, on our machines, when we were sending a lot of messages, the consumer would hang. It was still there, but the messages were accumulating in the queue.

    The bench platform: a cluster of 2 RabbitMQ machines (4 cores/3.2 GHz, 4 GB RAM) load-balanced by a VIP; one to six consumers running on the RabbitMQ machines, saving the messages to a MySQL DB (same type of machine for the DB); 12 producers running on 12 AS machines (Tomcat), driven by JMeter running on another machine. The load is about 600 to 700 HTTP requests per second on the servlets, which produce the same load of RabbitMQ messages.

    We noticed that sometimes consumers hang (well, they are not blocked, but they don't consume messages anymore). We can see this because each consumer saves around 100 msg/sec to the database, so when one stops consuming, the overall rate of messages saved per second drops by the same ratio (if, say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time the producers are fine and still produce at the JMeter rate (around 600 msg/sec). The messages stay in the queues and are taken by the consumers that are still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange, and all consumers are listening to one persistent queue bound to that exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? Thank you for your answers.
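
    A small diagnostic sketch that can help in cases like this (assuming rabbitmqctl is available on the broker nodes; run it while the bench is going). If messages pile up as unacknowledged rather than ready, the "hung" consumers are still attached and the problem is more likely an ack/prefetch issue than a dropped connection:

        # Are the consumers still attached, and are messages stuck unacked?
        rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers

        # Per-channel view: unacked counts and QoS (prefetch) settings
        rabbitmqctl list_channels connection consumer_count messages_unacknowledged prefetch_count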

    Read the article

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro using a direct Ethernet cable. Also, an external hard drive (laptop drive) is connected to the Raspberry Pi via USB. However, backups are pretty slow. Activity Monitor claims that the network is transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi itself reports (top) that its CPU is only half-used, split roughly equally between afpd, usb-storage and jbd2/sda1-8, so the Raspberry Pi's processing power does not seem to be the problem here. To me this looks like some kind of bottleneck that maxes out at 5 Mb/s, potentially keeping my backups from running at their full speed. To the best of my knowledge, this might be the AFP daemon, the USB bus or the external hard drive. So my question is: how can I identify the true culprit, and what can I do about it?
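
    One way to separate the network path from the USB/disk path is to measure each in isolation (a sketch; iperf3 and the mount point /mnt/backup are assumptions, adjust to your setup):

        # On the Pi: raw network throughput, no disks involved
        iperf3 -s
        # On the MacBook (iperf3 installed via Homebrew or similar):
        iperf3 -c <address-of-the-pi>

        # On the Pi: raw write speed of the USB drive, no network involved
        dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=512 conv=fsync

    Whichever of the two numbers sits near the rate you see in Activity Monitor is the likely bottleneck.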

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at Snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.) By "groups of DNN pages", I mean pages that form a hierarchy (not necessarily a hierarchy headed by a page at the same level as the Home page). I know that I can copy web pages, one by one, using the admin login via the web-based DNN interface. But I'd prefer a script or wizard of some sort (that runs scripts behind the scenes) that allows me to 1) specify a web page that I want to copy (along with the hierarchy of pages under it), 2) specify the names and titles of the new top-level pages, 3) specify whether the contained modules of the top-level page that I want to copy are to be: ( ) New ( ) Copy ( ) Reference (as in the web-based interface), and 4) repeat 3) for each of the source pages in the hierarchy that I want to copy. You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website - instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, its own General Information page (which acts as its home page), and underneath that, in its hierarchy, a Contact Info page and an Events page. So, starting from:

        Home Page
          General Information Page
            Contact Info
            Events

    I would end up with:

        Home Page
          General Information Page
            Contact Info
            Events
          General Information Page Kiwanis - Bloomfield
            Contact Info
            Events
          General Information Page Kiwanis - Dayton
            Contact Info
            Events

    If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web-based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • how to design pound -> varnish -> jboss for ha + loadbalancing

    - by andreash
    Hello, I'm planning a new infrastructure for our web application. We have two JBoss AS 5 servers running in a cluster; session state will be replicated via JBoss Cache. In front of that there should be some cache to speed up delivery of static elements. However, most of the traffic to our app will be via HTTPS. So far I had been thinking of two Varnish caches in front of the JBoss ASs, each configured for round-robin load balancing to the two JBoss ASs. Since Varnish doesn't handle HTTPS, there would need to be two Pound proxies in front of the Varnishes, dealing with the HTTPS. The two Pounds would be made highly available with Heartbeat/LinuxHA. The traffic to www.example.com would then go through our firewall, from there to the virtual IP of the Pounds, from there to the Varnishes, and from there to the JBoss ASs. Question 1: Does this make sense? Or is it overly complicated, and can the same goal be reached with simpler methods? Question 2: If my layout is fine, how do I configure the Pound -> Varnish step? Should I a) make the Varnish service highly available through Heartbeat/LinuxHA as well and direct traffic from Pound to the virtual IP of the Varnishes, or should I rather b) configure two independent Varnishes and use load balancing in Pound to address the different Varnishes? Thanks a lot for your insight! Andreas.
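
    For option b), a minimal Pound listener sketch that terminates HTTPS and balances across two independent Varnish instances (the addresses, port and certificate path are illustrative assumptions):

        # /etc/pound/pound.cfg - HTTPS ends here, plain HTTP goes on to Varnish
        ListenHTTPS
            Address 0.0.0.0
            Port    443
            Cert    "/etc/pound/www.example.com.pem"
            Service
                BackEnd
                    Address 10.0.0.11   # Varnish #1
                    Port    6081
                End
                BackEnd
                    Address 10.0.0.12   # Varnish #2
                    Port    6081
                End
            End
        End

    Pound periodically checks its backends and takes dead ones out of rotation, so with option b) you get load balancing and failover between the Varnishes without a second Heartbeat/LinuxHA layer.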

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/Coldfusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal but it's worked thus far. We're moving up to two distinct servers and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is to allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I: Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or, Set up a VPN server on the database server as the only public-facing server connection so that there aren't any routing issues to deal with? I know this is Network 101 stuff but I thought I'd ask before just blundering through it since it could affect the company a bit. Thanks very much!

    Read the article

  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed VirtualMin. The LAMP stack and the website files have been set up properly and I can access the website by its IP address.

    Problem: now it's time to point my domain mydomain.com to the new server. After reading many sites describing how to set up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where this is really simple to set up.

    What I've attempted: I tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com? When specifying the DNS servers for mydomain.com, I've set the first one to ns1.apadment.com. On the manager/admin page of my web host provider, I am given the option to create a secondary slave DNS, which I assigned to the IP address of my server - though I am not sure how the slave DNS will copy the info from my web server? I have assigned this secondary DNS, ns.hostprovider.com, as the second DNS server for mydomain.com. I also tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf; I edited the .conf file to point DocumentRoot back to where it is supposed to be, /var/www/mydomain, instead of /user/mydomain.com.

    I believe the next step is to set up the zone. Virtualmin has already created a Master Zone with 8 different addresses (www.mydomain.com, ftp.mydomain.com, ...). Under Nameservers there are already 2 records: one is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
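
    Once the registrar side is done, a quick way to check both the delegation and the zone itself from any machine (a sketch; substitute your server's public IP for 1.2.3.4):

        # Which nameservers does the parent zone delegate mydomain.com to?
        dig +short NS mydomain.com

        # Does the BIND on your server answer authoritatively for the zone?
        dig @1.2.3.4 mydomain.com A +norecurse

        # Is a zone transfer allowed? (best run from a host the zone permits, e.g. the secondary)
        dig @1.2.3.4 mydomain.com AXFR

    If the first query does not return ns1.mydomain.com (and the provider's secondary), the registrar/glue step still needs fixing; if the transfer is refused from the secondary, the slave will never get a copy of the zone.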

    Read the article

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, and there is always something more important on my todo list). My server does not contain any secret data and no lives depend on it - basically what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist when using some arcane settings. I'm in search of a "happy medium": for example a list of known important problems in the default installation of CentOS 5.5, and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem that I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!
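
    A low-effort baseline that fits this goal is automatic updates plus a quick look at what is actually exposed (a sketch; yum-cron and the commands below are as found on stock CentOS 5, but check package availability on your box):

        # Apply updates automatically every night
        yum install yum-cron
        chkconfig yum-cron on
        service yum-cron start

        # What is actually listening on the network?
        netstat -tulpn

        # What starts at boot? Turn off anything you don't need.
        chkconfig --list | grep ':on'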

    Read the article

  • Can't mount FAT32 drive under Ubuntu Linux

    - by Josh
    I have a 320GB USB drive with a single large FAT32 partition. The volume mounts perfectly fine on my Mac OS X 10.5.8 machine and Disk Utility on the Mac reports no issues with the volume. I can read/write all data on the drive. However when I connect the drive to my Ubuntu 9.10 Karmic system, the partition does not mount. dmesg|tail says:

        [ 2752.334822] scsi3 : SCSI emulation for USB Mass Storage devices
        [ 2752.335040] usb-storage: device found at 3
        [ 2752.335044] usb-storage: waiting for device to settle before scanning
        [ 2757.330301] usb-storage: device scan complete
        [ 2757.331005] scsi 3:0:0:0: Direct-Access WD 3200AAK External 1.65 PQ: 0 ANSI: 0
        [ 2757.331772] sd 3:0:0:0: Attached scsi generic sg2 type 0
        [ 2757.355647] sd 3:0:0:0: [sdb] 625142448 512-byte logical blocks: (320 GB/298 GiB)
        [ 2757.360737] sd 3:0:0:0: [sdb] Write Protect is off
        [ 2757.360749] sd 3:0:0:0: [sdb] Mode Sense: 00 00 00 00
        [ 2757.360755] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367618] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2757.367631] sdb: sdb1
        [ 2762.797622] sd 3:0:0:0: [sdb] Assuming drive cache: write through
        [ 2762.797636] sd 3:0:0:0: [sdb] Attached SCSI disk
        [ 2822.866228] FAT: bogus number of reserved sectors
        [ 2822.866237] VFS: Can't find a valid FAT filesystem on dev sdb1.

    When I run fsck.vfat -a /dev/sdb1 I get:

        root@cartman:~# fsck.vfat -a /dev/sdb1
        dosfsck 3.0.3, 18 May 2009, FAT32, LFN
        Logical sector size is zero.

    Googling "vfat Logical sector size is zero" produced no consensus as to the solution. I would prefer not to have to completely reformat the disk if possible because it contains about 280GB of data I would rather not have to find a temporary home for. Any suggestions?
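
    Before considering a reformat, it may be worth checking whether the partition table and the FAT boot sector still agree - the "logical sector size is zero" message suggests the boot sector itself is damaged (a sketch; both commands are read-only):

        # Where does sdb1 start and how large is it, in sectors?
        fdisk -lu /dev/sdb

        # Dump the first sector of the partition; bytes 0x0B-0x0C hold the
        # "bytes per logical sector" field that dosfsck is complaining about
        dd if=/dev/sdb1 bs=512 count=1 | hexdump -C | head -20

    FAT32 also keeps a backup boot sector (usually at sector 6 of the partition), so if only the primary copy is damaged, a tool such as testdisk can often restore it without touching the data.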

    Read the article

  • Total newb having SSH and remote MySQL access problems

    - by kscott
    I don't often work with linux or need to SSH into remote MySQL databases, so pardon my ignorance. For months I had been using the HeidiSQL client application to remotely access a MySQL database. Today two things happened: the DB moved to a new server and I updated HeidiSQL. Now I cannot log in to the MySQL server; when attempting, I get this message from Heidi:

        SQL Error (2003) in statement #0: Can't connect to MySQL server on 'localhost' (10061)

    If I use Putty, I can connect to the server and get MySQL access through the command line, including fetching data from the DB. I assume this means my credentials and address are correct, but I do not understand why putting those same details into HeidiSQL's SSH tunnel info won't work. I also downloaded MySQL Workbench and attempted to set up a connection through that client, and got this message:

        Cannot Connect to Database Server
        Your connection attempt failed for user 'myusername' from your host to server at localhost:3306:
        Lost connection to MySQL server at 'reading initial communication packet', system error: 0
        Please:
        1 Check that mysql is running on server localhost
        2 Check that mysql is running on port 3306 (note: 3306 is the default, but this can be changed)
        3 Check the myusername has rights to connect to localhost from your address (mysql rights define what clients can connect to the server and from which machines)
        4 Make sure you are both providing a password if needed and using the correct password for localhost connecting from the host address you're connecting from

    From Googling around I see that it could be related to the MySQL bind-address, but I am a third-party sub-contractor with no access to the MySQL settings of this box, and the system admin is assuring me that I'm an idiot and need to figure it out on my end. This is completely possible, but I don't know what else to try.

    Edit 1 - The client settings I am using. In Heidi and MySQL Workbench I am using the following:

        SSH host + port: theHostnameOfTheRemoteServer.com:22  {this is the same host I can Putty to}
        SSH Username:    mySSHusername        {the same user name I use for my Putty connection}
        SSH Password:    mySSHpassword        {the same password for the Putty connection}
        Local port:      3307
        MySQL host:      theHostnameOfTheRemoteServer.com
        MySQL User:      mySQLusername        {which I can connect with once in with Putty}
        MySQL Password:  mySQLpassword        {which works once in with Putty}
        Port:            3306
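
    One way to test the same path outside of Heidi/Workbench is a manual SSH tunnel (a sketch, using the same placeholder names). Note that with a tunnel the "MySQL host" is resolved on the remote server, so it should normally be 127.0.0.1 rather than the public hostname - if the manual tunnel works but the GUI clients don't, that field is a likely culprit:

        # Forward local port 3307 to MySQL as seen from the remote server itself
        ssh -N -L 3307:127.0.0.1:3306 mySSHusername@theHostnameOfTheRemoteServer.com

        # In a second terminal: connect through the tunnel
        mysql -h 127.0.0.1 -P 3307 -u mySQLusername -p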

    Read the article

  • Why can't this user connect to domain share?

    - by Saariko
    As part of reorganizing credentials in the domain, I have created several users that will be used solely for services (backup, LDAP, etc.). The idea is that a system that needs specific access uses a dedicated service user that gives it exactly what it needs. However, I am having trouble getting the required permissions set up correctly. For this example, I have a NAS (ReadyNAS 1100 by Netgear) that runs its own backup jobs. The job reads from a domain share, \\domain\qa, and copies all data to another location. When using domain\administrator everything works. When I enter the domain\srv.backup user, I get an error connecting to the folder. srv.backup is a member of 'Domain Admins', which is a member of 'Administrators'. I thought there might be propagation issues, but even when srv.backup was a direct member of 'Administrators' the error still occurred. I have 2 DCs (W2K8 R2 replicas) - I thought that could also cause a problem, but AFAICT it's not the issue. The sharing permissions are open to Everyone; the NTFS security on the folder and the test dialog from the NAS dashboard were shown in screenshots (not reproduced here). I double-checked that srv.backup is part of the 'Domain Admins' group, and also tried with a simple 1-9 password. What else do I need to check? Thanks.

    Read the article

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin, I am a Windows user who has led a life of sinful installation wizards and drag and drop. I'm attempting to install Tomcat on CentOS 5 hosted on a MediaTemple dedicated virtual server. I basically followed this guide:

        Installed jpackage and configured the yum.repo.d jpackage file to set enabled=1
        Used yum to install java (yum install java)
        Downloaded the binary distribution of Tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz"
        Set JAVA_HOME to point at the JDK location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/"

    I gunzip/untar the Tomcat files and run ./startup.sh to start the Tomcat server. That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a "could not contact host" error when I try to browse to it (or when I try 'curl localhost:8080' from SSH). After I type ./startup.sh, here is the console output:

        [root@myserver bin]# ./startup.sh
        Using CATALINA_BASE:   /root/apache-tomcat-6.0.14
        Using CATALINA_HOME:   /root/apache-tomcat-6.0.14
        Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp
        Using JRE_HOME:        /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/
        [root@myserver bin]#

    Is there a step I have missed here? Edit: I've now discovered by looking at the log that the following error is occurring:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
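
    That last error usually means the JVM's default maximum heap is larger than the memory the virtual server will actually grant it, which is common on VPS containers. A sketch of capping the heap before starting Tomcat (the sizes are illustrative; Tomcat 6's catalina.sh picks up bin/setenv.sh automatically if it exists):

        # /root/apache-tomcat-6.0.14/bin/setenv.sh
        export CATALINA_OPTS="-Xms64m -Xmx256m"

    Then run ./startup.sh again and check logs/catalina.out to see whether the VM now starts.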

    Read the article

  • Win7 Command Prompt drives not available

    - by jmerrill
    I have the opposite problem compared to the author of this question: "Hard drive access denied from Windows Explorer (but works from command prompt as Admin)". I can see all the drive letters for a particular server in Windows Explorer, and can navigate through them exactly as would be expected. The drive letters are displayed in Explorer in parens to the right of the path info - finalpathportion (\\server\otherpathportions) (driveletter:) - e.g. jmerrill (\\server\users) (H:). But the drive letters are not usable in a "Run as Administrator" command prompt. They have worked in the past, but I have since rebooted. I thought that perhaps I had to start a new command prompt after visiting them in Explorer, but that did not help. "net use" in the command prompt shows

        Unavailable    H:    \\server\users\jmerrill    Microsoft Windows Network

    with similar info for the other drives. For each drive I can do

        net use h: /d
        net use h: \\server\users\jmerrill

    to get the letters to be available in the command prompt. It is perhaps obvious that I don't think it should be necessary to do that. Does anyone have any ideas?
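
    If the drives are mapped in the normal (non-elevated) session, this behaviour is expected: UAC gives the elevated prompt a separate logon token, and drive mappings belong to a token. A commonly used workaround is the EnableLinkedConnections registry value (a sketch - it changes how UAC links the two tokens, so apply with the usual care; a logoff or reboot is needed afterwards):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1 /f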

    Read the article

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive id questions.

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say, if I was fiddling with lots of recovery programs, transfers, and so on), or is it absolutely a read-only, direct correlation to the physical status of a drive? Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (I'm also curious whether SMART's bad sector count is zeroed afterward, or whether it includes factory-marked sectors.) The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking them as bad sectors? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives. Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true? (Apologies if this is more appropriate for SuperUser.)
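
    For the first and last points, smartmontools and hdparm give a read-only view that is easy to compare before and after recovery runs (a sketch; /dev/sdb is a placeholder for the drive in question):

        # Full SMART attribute dump - the raw values of Reallocated_Sector_Ct and
        # Current_Pending_Sector are usually more telling than the overall verdict
        smartctl -a /dev/sdb

        # Run a drive self-test and read the result afterwards
        smartctl -t short /dev/sdb
        smartctl -l selftest /dev/sdb

        # Identity data as reported by the drive itself
        hdparm -I /dev/sdb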

    Read the article

  • SharePoint Business Connectivity Services (BCS) Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by g18c
    I am running SharePoint 2010 with SQL 2012 and I am trying to get Business Connectivity Services (BCS) running, but I am facing a double-hop authentication issue. Every time I try to connect to the external BCS list created in SharePoint Designer, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". In the event viewer on the SQL server I see a login failure for an anonymous user from the SP server's IP address. Background information below.

    I have enabled Kerberos under SharePoint Central Admin. I have the following AD domain accounts:

        SP_Farm     - main website pool
        SP_Services - for SharePoint services (including BCS)
        SQL_Engine  - SQL database engine

    I then created the following with SetSPN:

        SetSPN -S http/intranet mydomain\SP_Farm
        SetSPN -S http/intranet.mydomain.local mydomain\SP_Farm
        SetSPN -S SPSvc/SPS mydomain\SP_Farm
        SetSPN -S MSSQLSvc/SQL1 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1:1433 mydomain\SQL_DatabaseEngine
        SetSPN -S MSSQLSvc/SQL1.mydomain.local:1433 mydomain\SQL_DatabaseEngine

    I then delegated the AD accounts for any authentication protocol to the following:

        SP_Farm    - SP_Farm (http service type, intranet)
        SP_Farm    - SQL_DatabaseEngine (MSSQLSvc, sql1)
        SP_Service - SP_Service (SPSvc)
        SP_Service - SQL_DatabaseEngine (MSSQLSvc, sql1)

    I have also checked that the WFE is being logged on to with Kerberos: the WFE server event log shows event ID 4624 with Kerberos authentication, so this is OK. The SQL server is also showing connections from the WFE authenticated as Kerberos, per the following query:

        Select s.session_id, s.login_name, s.host_name, c.auth_scheme
        from sys.dm_exec_connections c
        inner join sys.dm_exec_sessions s on c.session_id = s.session_id

    Despite the above, credentials are not passed from the client through the SharePoint server to the SQL server; only the anonymous account is used. I get the following error on the WFE server for 'BusinessData', ID 8080:

        Could not open connection using 'data source=sql1.mydomain.local;initial catalog=MSCRM;integrated security=SSPI;pooling=true;persist security info=false' in App Domain '/LM/W3SVC/1848937658/ROOT-1-129922939694071446'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    If I set a username and password with the Secure Store Service and set the external list to use those impersonated credentials, the list works. Any ideas what I have missed and what can be tried next?
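
    Duplicate or missing SPNs silently break Kerberos delegation, so it can be worth re-checking them from a domain controller (a sketch; setspn -X requires the Server 2008-era setspn):

        setspn -L mydomain\SP_Farm
        setspn -L mydomain\SP_Service
        setspn -L mydomain\SQL_DatabaseEngine

        rem Search the directory for duplicate SPN registrations
        setspn -X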

    Read the article

  • Trouble installing SSL Certificate on Apache

    - by jahufar
    We have a dedicated server with GoDaddy running Plesk, and it requires SSL. I've generated the certificate files and created a vhost_ssl.conf (since I can't edit the default Plesk Apache configuration httpd.include; vhost_ssl.conf gets Included into httpd.include) that tells Apache where to find the certificate files:

        SSLCertificateFile /usr/local/psa/var/certificates/domain.com.crt
        SSLCertificateKeyFile /usr/local/psa/var/certificates/domain.com.key
        SSLCertificateChainFile /usr/local/psa/var/certificates/sub.class1.server.ca.pem

    When I stop/start Apache, it refuses to start up. The error_log does not have anything in it either (which is strange). Then I opened up httpd.include and found this bit:

        <VirtualHost 208.xxx.xxx.xxx:443>
            ServerName domain.com:443
            ServerAlias www.domain.com
            UseCanonicalName Off
            SSLEngine on
            SSLVerifyClient none
            SSLCertificateFile /usr/local/psa/var/certificates/certagC9054
            Include /var/www/vhosts/domain.com/conf/vhost_ssl.conf

    When I commented out SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 (which is Plesk's SSL certificate) and restarted Apache, it worked perfectly fine. It seems that Apache does not like multiple SSLCertificateFile directives within the same VirtualHost? As anyone who has worked with Plesk knows, I can't just remove the SSLCertificateFile directive in httpd.include, as Plesk will overwrite my changes when someone uses it - which is why my settings are in vhost_ssl.conf. So I'm stuck, and this is beyond my meager admin skills. I would appreciate someone who knows what (s)he's doing telling me what's going on. Thanks in advance.
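
    Independent of the Plesk include order, it is cheap to confirm that the new certificate, key and chain actually belong together, and to let Apache check the merged configuration before a restart (a sketch using the paths from above):

        # The two digests must match (certificate and key are a pair)
        openssl x509 -noout -modulus -in /usr/local/psa/var/certificates/domain.com.crt | openssl md5
        openssl rsa  -noout -modulus -in /usr/local/psa/var/certificates/domain.com.key | openssl md5

        # Sanity-check the intermediate: is it readable PEM, and does its subject
        # match the issuer of your certificate?
        openssl x509 -noout -subject -in /usr/local/psa/var/certificates/sub.class1.server.ca.pem
        openssl x509 -noout -issuer  -in /usr/local/psa/var/certificates/domain.com.crt

        # Syntax check of the whole Apache configuration
        apachectl configtest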

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. there is no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm, with CA (Central Admin), the WFE (web front end) and the Reporting Services add-in installed on one, and SQL 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated the installation a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a Report Library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • Windows Server 2008 R2 RAS VPN: access server on internal interface ip

    - by Mathias
    Hey, short question: I'm usually a Linux admin but need to set up a Win2k8 R2 server for a student project. The server is running as a VM on a root server and has a public internet IP assigned. Additionally, I need a VPN server to access some services running on the server. I managed to set up a working VPN gateway via the Routing and RAS service, which assigns clients an IP in the private subnet 192.168.88.0/24, with the interface "Internal" listening on 192.168.88.1. Additionally, I set up the external interface as a NAT interface. So I can connect to the VPN server, get an IP assigned, the server additionally does NAT, and I can access the internet over the VPN connection. The only thing I additionally need is to be able to access the server itself over that internal IP (e.g. client 192.168.88.2, server 192.168.88.1), as I want to access some services which I don't want to expose to the internet and would rather restrict to connected VPN clients. Does anybody have a hint as to which configuration I'm missing here to be able to access the server over the VPN connection? EDIT: VPN clients get assigned the IP from the private subnet with subnet mask 255.255.255.255; I guess that might be the reason I can't access the server on the private IP address although it's in the same network range. Any ideas how to change this? I defined a static address pool in the Routing and RAS service, but I can't change the netmask there. EDIT2: I can't access the server from the client, but I can fully access the client from the server (ping, HTTP). I guess it has to do with the firewall configuration. Thanks in advance, Mathias

    Read the article

  • BIND9 DNS Problems - Not resolving

    - by clone1018
    I host a BIND9 DNS server for my VirtualMin users, and it only resolves for about 75% of the people. It has been WELL over 1 week now. Here is a sample zone:

        $ttl 38400
        @                        IN  SOA  axxim.net. root.axxim.net. (
                                          1274031391
                                          10800
                                          3600
                                          604800
                                          38400 )
        @                        IN  NS   axxim.net.
        day7tech.com.            IN  A    96.226.216.37
        www.day7tech.com.        IN  A    96.226.216.37
        ftp.day7tech.com.        IN  A    96.226.216.37
        m.day7tech.com.          IN  A    96.226.216.37
        localhost.day7tech.com.  IN  A    127.0.0.1
        webmail.day7tech.com.    IN  A    96.226.216.37
        admin.day7tech.com.      IN  A    96.226.216.37
        mail.day7tech.com.       IN  A    96.226.216.37
        day7tech.com.            IN  MX   5  mail.day7tech.com.
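
    A couple of checks that can narrow down whether the problem is in the zone itself or in the delegation (a sketch; the zone file path follows Virtualmin's usual /var/named layout and may differ on your system):

        # Does BIND accept the config and the zone?
        named-checkconf /etc/named.conf
        named-checkzone day7tech.com /var/named/day7tech.com.hosts

        # Ask your server directly, then ask a public resolver
        dig @localhost day7tech.com A
        dig @8.8.8.8 day7tech.com A

    If the direct query answers but the public resolver only does so intermittently, the issue is more likely the NS/delegation setup at the registrar than the zone contents.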

    Read the article

  • Authenticated User Impersonation in Classic ASP under IIS7

    - by user52663
    I've recently moved one of our servers from Server 2003 and IIS6 to Server 2008 R2 and IIS7 (technically IIS7.5 I suppose). In doing so I am transitioning a small account management tool written in classic ASP and have run into a problem with user impersonation. Extensive searching hasn't been much help so far. Under IIS6, the site was configured to impersonate the logged-in user. Thus, if a domain admin logged in, he was able to run commands to create user directories, adjust permissions, etc. Using Procmon you can see the processes executing as that user. This worked fine. However, with the same code under IIS7, I am unable to get this behavior. I have enabled Basic Authentication, disabled Anonymous Auth, enabled impersonation and have changed the app pool to classic instead of integrated pipelining. Everything seems to be configured correctly, however, all the processes launched by the classic ASP site continue to run as the default AppPool identity and not the logged-in user. If it matters, programs are being launched with code such as:

        set Wsh = Server.CreateObject("WScript.Shell")
        Wsh.Run("cmd.exe /C mkdir D:\users\foo")

    Monitoring via Procmon shows cmd.exe being run as either "Classic .NET AppPool" or "DefaultAppPool" depending on the pipeline mode. Any suggestions on how to get the classic ASP site to impersonate and execute as the authenticated user would be great. Thanks!

    Read the article

  • X11 performance problem after upgrading from Centos3 to Centos5 with an ATI Rage XL

    - by Marcelo Santos
    After upgrading a computer from CentOS 3 to CentOS 5, an application that does a lot of scrolling took a very high performance hit. top tells me that X is using a lot of CPU and that was not happening before. The machine has an ATI Rage XL with 8MB and X is using the ati driver as there is no proprietary ATI driver for this board on linux. The xorg.conf:

        Section "Device"
            Identifier "Videocard0"
            Driver "ati"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Videocard0"
            DefaultDepth 24
            SubSection "Display"
                Viewport 0 0
                Depth 24
                Modes "1024x768" "800x600" "640x480"
            EndSubSection
        EndSection

        Section "DRI"
            Group 0
            Mode 0666
        EndSection

    A similar machine that still has CentOS 3 installed is able to start DRI on the X server while this one is not. This is the Xorg.0.log for the CentOS 5 machine:

        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        drmOpenDevice: node name is /dev/dri/card0
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: open result is -1, (No such device or address)
        drmOpenDevice: Open failed
        [drm] failed to load kernel module "mach64"
        (II) ATI(0): [drm] drmOpen failed
        (EE) ATI(0): [dri] DRIScreenInit Failed
        (II) ATI(0): Largest offscreen areas (with overlaps):
        (II) ATI(0): 1024 x 1279 rectangle at 0,768
        (II) ATI(0): 768 x 1280 rectangle at 0,768
        (II) ATI(0): Using XFree86 Acceleration Architecture (XAA)
            Screen to screen bit blits
            Solid filled rectangles
            8x8 mono pattern filled rectangles
            Indirect CPU to Screen color expansion
            Solid Lines
            Offscreen Pixmaps
            Setting up tile and stipple cache:
            32 128x128 slots
            10 256x256 slots
        (==) ATI(0): Backing store disabled
        (==) ATI(0): Silken mouse enabled
        (II) ATI(0): Direct rendering disabled
        (==) RandR enabled

    I also tried using EXA instead of XAA and setting:

        Option "AccelMethod" "XAA"
        Option "XAANoOffscreenPixmaps" "true"

    uname -a:

        Linux sir5.erg.inpe.br 2.6.18-128.7.1.el5 #1 SMP Mon Aug 24 08:20:55 EDT 2009 i686 i686 i386 GNU/Linux

    rpm -qa | grep xorg-x11-server:

        xorg-x11-server-utils-7.1-4.fc6
        xorg-x11-server-sdk-1.1.1-48.52.el5
        xorg-x11-server-Xvfb-1.1.1-48.52.el5
        xorg-x11-server-Xnest-1.1.1-48.52.el5
        xorg-x11-server-Xorg-1.1.1-48.52.el5

    The drmOpenDevice error continues when using the suggested Option "AIGLX" "true".

    Read the article

  • How can adding a server to a domain cause Remote Desktop to stop working?

    - by Adrian Grigore
    I have two dedicated Windows 2008 R2 servers which I am using for web hosting. Server A is a domain controller; Server B should simply be added to the domain controlled by Server A. So I RDP'd into Server B and changed the system settings so that Server B is part of that domain. I entered my domain admin credentials, was welcomed to the domain and asked to reboot the server. So far everything seemed to work smoothly. After rebooting, I could not open an RDP connection to Server B anymore: "Remote Desktop can't connect to the remote computer for one of these reasons: 1) Remote access to the server is not enabled 2) The remote computer is turned off 3) The remote computer is not available on the network. Make sure the remote computer is turned on and connected to the network, and that remote access is enabled." I restored an older backup of Server B and switched off the firewall before adding the server to my domain, but the problem recurred just the same. What could be the reason for this? The domain is brand new and I did not change any of the default settings. Could this be some kind of domain-wide default policy that shuts down RDP on any domain clients? Or perhaps it has to do with the fact that Server B is virtual? Thanks for your help, Adrian
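
    If you can still reach Server B through the host's console/KVM, two quick checks from an elevated prompt there will show whether a policy or the now-active domain firewall profile switched Remote Desktop off (a sketch):

        rem Is RDP itself disabled? (1 = connections denied, 0 = allowed)
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections

        rem Re-enable the built-in Remote Desktop firewall rules for the active profile
        netsh advfirewall firewall set rule group="remote desktop" new enable=Yes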

    Read the article

  • Thin client - cloud machine - to run via iPad, iPhone, most Androids etc

    - by Carl Lindberg
    I'm tired of having a MacBook laptop that breaks down, and of files that I constantly need to sync via Dropbox etc. between machines with different OS installations. It sucks. I want a thin client setup where I can log in from any machine - my iPhone, a PC desktop, my iPad etc. - to one running machine. I would like to replace a modern, powerful desktop iMac with a thin client running via my iPad. I will connect the iPad to a keyboard/mouse too, so you get the idea. But I also want to be able to use some Android phones (I guess most Android phones today have good enough performance/resolution etc. to run a thin client). Of course it has to handle sound input and output. Printing can be solved by PDF/emailing etc., so no direct communication with printer or USB ports is necessary. Is there such a service today? It should cost somewhere under about $40/month. I will run things like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games etc. And on the thin client I'd also run virtual machines: VMs of Ubuntu and Windows.

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS/Apps&Data/Bulk data), all of which should be mirrored. Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume). Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1 (for example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption). Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring -- not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
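
    For what it's worth, Windows 7's software mirror is normally created after installation (in Disk Management or diskpart) rather than from the installer itself. A diskpart sketch, assuming the OS went onto disk 0 and disk 1 is the empty second drive:

        diskpart
        DISKPART> select disk 0
        DISKPART> convert dynamic
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select volume C
        DISKPART> add disk=1

    The add disk=1 step has to be repeated for each of the three volumes (OS, Apps&Data, Bulk) once they exist, since mirroring is per volume rather than per disk.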

    Read the article

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients We can geographically distribute the EC2 nodes (US East/West, and Ireland) but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way with that to direct based on routing... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

    Read the article
