Search Results

Search found 6834 results on 274 pages for 'dojo require'.


  • Installing ImageMagick on Mac OSX 10.6

    - by Russell C.
     I just got a new Mac and am trying to set up a local Perl development environment. I'm using MAMP but also need the ImageMagick Perl module installed in order to do some of the photo processing our scripts require. I tried installing ImageMagick manually but ran into some issues, and after reading online a lot of people reported having issues going this route. The general consensus was to install it using MacPorts instead, so I went ahead and installed MacPorts. Unfortunately, MacPorts can't seem to install it successfully either. Here is the command I'm using to try to install ImageMagick:

         sudo port install p5-perlmagick

     And here are all the errors reported during the install:

         ---> Computing dependencies for p5-perlmagick
         ---> Building p5-perlmagick
         Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-perlmagick/work/PerlMagick-6.32" && /usr/bin/make -j2 all " returned error 2
         Command output:
         Magick.xs:10918: error: 'struct Methods' has no member named 'exception'
         Magick.xs:10918: error: request for member 'severity' in something not a structure or union
         Magick.xs:10918: error: 'ErrorException' undeclared (first use in this function)
         Magick.xs:10919: error: 'struct Methods' has no member named 'exception'
         Magick.xs:10920: warning: implicit declaration of function 'GetImageException'
         Magick.xs:10922: error: 'struct PackageInfo' has no member named 'image_info'
         Magick.xs:10922: error: 'struct Methods' has no member named 'adjoin'
         Magick.xs:10929: error: request for member 'severity' in something not a structure or union
         Magick.xs:10929: error: 'UndefinedException' undeclared (first use in this function)
         Magick.xs:10929: error: request for member 'severity' in something not a structure or union
         Magick.xs:10929: error: request for member 'reason' in something not a structure or union
         Magick.xs:10929: error: request for member 'severity' in something not a structure or union
         Magick.xs:10929: error: request for member 'reason' in something not a structure or union
         Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
         Magick.xs:10929: error: request for member 'description' in something not a structure or union
         Magick.xs:10929: error: request for member 'description' in something not a structure or union
         Magick.xs:10929: error: request for member 'severity' in something not a structure or union
         Magick.xs:10929: error: request for member 'description' in something not a structure or union
         Magick.xs:10929: warning: pointer/integer type mismatch in conditional expression
         Magick.xs:10929: error: request for member 'description' in something not a structure or union
         Magick.xs:10929: warning: passing argument 2 of 'Perl_sv_catpv' from incompatible pointer type
         Magick.xs:10929: warning: unused variable 'message'
         Magick.xs:10856: warning: unused variable 'filename'
         Magick.c:10784: warning: unused variable 'ref'
         Magick.c:10777: warning: unused variable 'ix'
         Magick.xs: In function 'boot_Image__Magick':
         Magick.xs:2122: warning: implicit declaration of function 'InitializeMagick'
         Magick.xs:2123: warning: implicit declaration of function 'SetWarningHandler'
         Magick.xs:2124: warning: implicit declaration of function 'SetErrorHandler'
         make: *** [Magick.o] Error 1
         Error: Status 1 encountered during processing.
         Before reporting a bug, first run the command again with the -d flag to get complete output.

     I have no idea what the problem might be or how to go about successfully installing ImageMagick. I'd appreciate any help and advice from anyone out there who has done this successfully. Thanks in advance!
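
     A minimal recovery sequence that is often suggested for failed MacPorts builds - a sketch only, not a verified fix for this particular error (the port name p5-perlmagick is taken from the output above; everything else is generic):

         # refresh the ports tree and clear the failed build before retrying
         sudo port selfupdate
         sudo port clean p5-perlmagick
         sudo port upgrade outdated
         # retry with debug output so the full failure is captured
         sudo port -d install p5-perlmagick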

    Read the article

  • DB2 Integrity Checks and Exception Tables

    - by imthefirestartr
     I am working on planning a migration of a DB2 8.1 database from a horrible IBM encoding to UTF-8 to support further languages, etc. I am encountering an issue that I am stuck on. A few notes on this migration: we are using db2move to export and load the data and db2look to get the details of the database (tablespaces, tables, keys, etc.). We found the loading process worked nicely with db2move import; however, the data takes 7 hours to load, and this was unacceptable downtime for when we actually complete the conversion on the main database. We are now using db2move load, which is much faster as it seems to simply throw the data in without integrity checks. Which leads to my current issue. After completing the db2move load process, several tables are in a check-pending state and require integrity checks. Integrity checks are done via the following:

         set integrity for <table> immediate checked

     This works for most tables; however, some tables give an error:

         DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned: SQL3603N Check data processing through the SET INTEGRITY statement has found integrity violation involving a constraint with name "blah.SQL120124110232400". SQLSTATE=23514

     The internets tell me that the solution to this issue is to create an exception table based on the actual table and tell the SET INTEGRITY command to send any exceptions to that table (as below):

         db2 create table blah_EXCEPTION like blah
         db2 SET INTEGRITY FOR blah IMMEDIATE CHECKED FOR EXCEPTION IN blah USE blah_EXCEPTION

     NOW, here is the specific issue I am having! The above forces all the rows with issues into the specified exception table. Well, that's just super, but I cannot lose data in this conversion; it's simply unacceptable. The internets and IBM have only a vague description of sending the violations to the exception tables and then "dealing with the data" that ends up in them. Unfortunately, I am not clear what this means, and I was hoping that some wise individual could help me out and let me know how I can retrieve this data from those tables and place it back in the original/proper table rather than leave it in the exception tables. Let me know if you have any questions. Thanks!
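
     A sketch of the usual "deal with the data" workflow, reusing the blah/blah_EXCEPTION names from above (the column names and the repair applied to the offending values are assumptions - they depend entirely on which constraint was violated):

         # inspect what was diverted and why; the exception table has the original columns
         # plus a timestamp and a message column describing each violation
         db2 "SELECT * FROM blah_EXCEPTION"

         # repair the offending values in place (purely illustrative condition and value)
         db2 "UPDATE blah_EXCEPTION SET COL1 = 'valid-value' WHERE COL1 IS NULL"

         # copy the repaired rows back, naming only the original table's columns
         db2 "INSERT INTO blah (COL1, COL2) SELECT COL1, COL2 FROM blah_EXCEPTION"

         # once the rows are back and SET INTEGRITY passes, the exception table can be cleared
         db2 "DELETE FROM blah_EXCEPTION"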

    Read the article

  • Persuading openldap to work with SSL on Ubuntu with cn=config

    - by Roger
     I simply cannot get this (a TLS connection to OpenLDAP) to work and would appreciate some assistance. I have a working OpenLDAP server on Ubuntu 10.04 LTS. It is configured to use cn=config, and most of the info I can find for TLS seems to use the older slapd.conf file :-( I've been largely following the instructions here https://help.ubuntu.com/10.04/serverguide/C/openldap-server.html plus stuff I've read here and elsewhere - which of course could be part of the problem, as I don't totally understand all of this yet! I have created an ssl.ldif file as follows:

         dn:cn=config
         add: olcTLSCipherSuite
         olcTLSCipherSuite: TLSV1+RSA:!NULL
         add: olcTLSCRLCheck
         olcTLSCRLCheck: none
         add: olcTLSVerifyClient
         olcTLSVerifyClient: never
         add: olcTLSCACertificateFile
         olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
         add: olcTLSCertificateFile
         olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
         add: olcTLSCertificateKeyFile
         olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

     and I import it using the following command line:

         ldapmodify -x -D cn=admin,dc=mydomain,dc=com -W -f ssl.ldif

     I have edited /etc/default/slapd so that it has the following services line:

         SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

     And every time I make a change, I restart slapd with /etc/init.d/slapd restart. The following command line to test the non-TLS connection works fine:

         ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
             -b dc=mydomain,dc=com -H "ldap://mydomain.com" "cn=roger*"

     But when I switch to ldaps using this command line:

         ldapsearch -d 9 -D cn=admin,dc=mydomain,dc=com -w mypassword \
             -b dc=mydomain,dc=com -H "ldaps://mydomain.com" "cn=roger*"

     this is what I get:

         ldap_url_parse_ext(ldaps://mydomain.com)
         ldap_create
         ldap_url_parse_ext(ldaps://mydomain.com:636/??base)
         ldap_sasl_bind
         ldap_send_initial_request
         ldap_new_connection 1 1 0
         ldap_int_open_connection
         ldap_connect_to_host: TCP mydomain.com:636
         ldap_new_socket: 3
         ldap_prepare_socket: 3
         ldap_connect_to_host: Trying 127.0.0.1:636
         ldap_pvt_connect: fd: 3 tm: -1 async: 0
         TLS: can't connect: A TLS packet with unexpected length was received..
         ldap_err2string
         ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)

     Now if I check netstat -al I can see:

         tcp        0      0 *:www      *:*     LISTEN
         tcp        0      0 *:ssh      *:*     LISTEN
         tcp        0      0 *:https    *:*     LISTEN
         tcp        0      0 *:ldaps    *:*     LISTEN
         tcp        0      0 *:ldap     *:*     LISTEN

     I'm not sure if this is significant as well ... I suspect it is:

         openssl s_client -connect mydomain.com:636 -showcerts
         CONNECTED(00000003)
         916:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188:

     I think I've made all my certificates etc. correctly, and here are the results of some checks. If I do this:

         certtool -e --infile /etc/ssl/certs/ldap_cacert.pem

     I get "Chain verification output: Verified." Running:

         certtool -e --infile /etc/ssl/certs/mydomain.com_slapd_cert.pem

     gives "certtool: the last certificate is not self signed", but it otherwise seems OK? Where have I gone wrong? Surely getting OpenLDAP to run securely on Ubuntu should be easy and not require a degree in rocket science! Any ideas?
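
     One thing that stands out in the ssl.ldif above: an LDIF that modifies an existing cn=config entry normally needs a "changetype: modify" line and a "-" separator between each add operation, otherwise ldapmodify may not apply all of the attributes. A sketch of the same file in that form (attribute values copied from above, trimmed to the three certificate settings):

         dn: cn=config
         changetype: modify
         add: olcTLSCACertificateFile
         olcTLSCACertificateFile: /etc/ssl/certs/ldap_cacert.pem
         -
         add: olcTLSCertificateFile
         olcTLSCertificateFile: /etc/ssl/certs/my.domain.com_slapd_cert.pem
         -
         add: olcTLSCertificateKeyFile
         olcTLSCertificateKeyFile: /etc/ssl/private/my.domain.com_slapd_key.pem

     On a stock Ubuntu slapd it can also be applied over the local socket as system root rather than as the directory admin, which sidesteps ACL issues on cn=config:

         sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f ssl.ldif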

    Read the article

  • Lync server 2010 Active Directory Preparation with a Windows Server 2003 DC

    - by juFo
     I'm trying to install Lync Server 2010, but I've been stuck for a while now on the "Active Directory Preparation" part of the installation. The "Prepare Schema" step fails with the following error:

         Step 1: Prepare Schema
         Run once per deployment. Extends the schema for Lync Server.
         Not Available: Failure occurred while attempting to check the schema state. Please ensure Active Directory is reachable.

     screenshot: https://skydrive.live.com/#cid=CB15F1A932B364BE&id=CB15F1A932B364BE%211742

     The situation: one server with Windows Server 2003 (x86), which is the only domain controller (DC), and one server with Windows Server 2008 R2 (x64) where Lync should be installed. First I found that the DFL/FFL were not correct: on the DC (Server 2003) I changed the Domain Functional Level to Windows Server 2003 and also the Forest Functional Level to Windows Server 2003. If I check these settings on the Server 2008 machine with Active Directory Domains and Trusts, I see indeed that the DFL and FFL are set to Windows Server 2003. (Windows Server 2003 is the minimum required for Lync Server 2010.) I tried the Lync AD Preparation again but still got the same message: https://skydrive.live.com/#cid=CB15F1A932B364BE&id=CB15F1A932B364BE%211742

     I'm logged in on the Server 2008 and Server 2003 machines with the domain administrator account. If I open "Active Directory Users and Computers", go to the Users container and view the properties of the Administrator user, it is a member of: Domain Admins, Domain Users, Enterprise Admins, Schema Admins and Group Policy Creator Owners. The firewall on the Server 2008 machine is turned off; still not working. So now my question is: what should I do to make the Lync setup (Active Directory Preparation) work? (I would appreciate clear step-by-step suggestions to check.) Thanks in advance.

     Update 1: I have now extended AD successfully on the 2003 DC, using this link: http://blogs.pointbridge.com/Blogs/sloan_jason/Pages/Post.aspx?_ID=2 but when I check the Active Directory Preparation again in the Lync install, it still gives me the same error as in the screenshot I've provided.

     Update 2: I found out that there is a log in "C:\Users\\AppData\Local\Temp\ with this:

         Get-CSDomainState Get Domain State Error: An error occurred: "Microsoft.Rtc.Management.ADConnect.NoSuitableServerFoundException" "No suitable domain controller was found in domain "OurDomain.LOCAL". Errors:\r\n"OurDCserver.OurDomain.LOCAL5.2 (3790)5.2 (3790) Service Pack1OurDCserver.OurDomain.LOCAL5.2 (3790)5.2 (3790)Service Pack 1""

     I thought Lync could be installed with a Windows Server 2003 DC (according to the documentation on TechNet), and that it doesn't require a specific service pack. :s

    Read the article

  • High Jitter in NTP and poll value never goes above 128

    - by Aseem
     I have a lot of servers syncing to the same four NTP servers. Not every server is in the same LAN; some are 3 hops away from the NTP servers and some are 6 hops away. On a couple of servers I see that the poll value never touches the 1024 mark and the jitter value is in double digits. Could it be due to the system hardware? These are physical Windows servers and need the time to be as accurate as possible. Please advise what I should do. Here are some of the stats that I collected manually from the bad box (which is 6 hops away from the NTP servers):

         C:\Program Files (x86)\NTP>ntpq -p -n
              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         +*.*.*.25        *.*.*.233       2 u   12  128  377    1.210  -12.579  14.913
         +*.*.*.26        *.*.*.233       2 u   96  128  377    1.067   -2.235   9.885
         **.*.*.27        *.*.*.233       2 u   24  128  377    1.038   -7.569  11.178
         +*.*.*.28        *.*.*.233       2 u   49  128  377    1.288  -11.058  14.544

              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         +*.*.*.25        *.*.*.233       2 u  124  128  377    0.614   -6.212   5.329
         +*.*.*.26        *.*.*.233       2 u   93  128  377    0.910   -9.431   3.111
         +*.*.*.27        *.*.*.233       2 u    1  128  377    0.824   -7.428   3.129
         **.*.*.28        *.*.*.233       2 u   84  128  377    1.503   -8.230   3.511

              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         **.*.*.25        *.*.*.233       2 u  117  128  377    1.235   -4.084  11.405
         +*.*.*.26        *.*.*.233       2 u   96  128  377    1.335  -11.813  13.130
         +*.*.*.27        *.*.*.233       2 u  130  128  377    1.549  -14.036  16.381
         -*.*.*.28        *.*.*.233       2 u   79  128  377    1.258   13.395  22.203

              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         **.*.*.25        *.*.*.233       2 u   88  128  377    1.235   -4.084  14.068
         +*.*.*.26        *.*.*.233       2 u   63  128  377    1.335  -11.813  17.086
         +*.*.*.27        *.*.*.233       2 u  103  128  377    1.549  -14.036  20.691
         -*.*.*.28        *.*.*.233       2 u   47  128  377    1.258   13.395  20.231

              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         +*.*.*.25        *.*.*.233       2 u   47   64  377    0.652  -15.805  14.077
         **.*.*.26        *.*.*.233       2 u   11   64  377    1.013  -14.423  11.375
         -*.*.*.27        *.*.*.233       2 u   63   64  377    0.765   -2.030   7.680
         +*.*.*.28        *.*.*.233       2 u    4   64  377    1.191  -17.980  14.393

              remote           refid      st t when poll reach   delay   offset  jitter
         ==============================================================================
         -*.*.*.25        *.*.*.233       2 u    3  128  377    1.576   18.665  21.999
         +*.*.*.26        *.*.*.233       2 u   73  128  377    0.637   -5.012  14.405
         **.*.*.27        *.*.*.233       2 u  127  128  377    0.272   -8.237  14.438
         +*.*.*.28        *.*.*.233       2 u  123  128  377    1.190  -14.383  18.875

         C:\Program Files (x86)\NTP>ntpdc -c loopinfo
         offset: -0.016430 s   frequency: 7.106 ppm   poll adjust: 18   watchdog timer: 133 s
         offset: -0.016430 s   frequency: 7.106 ppm   poll adjust: 18   watchdog timer: 341 s
         offset: -0.000149 s   frequency: 6.645 ppm   poll adjust: 0    watchdog timer: 383 s
         offset: 0.015735 s    frequency: 6.725 ppm   poll adjust: 7    watchdog timer: 577 s
         offset: -0.010331 s   frequency: 6.748 ppm   poll adjust: 21   watchdog timer: 567 s
         offset: -0.009427 s   frequency: 6.687 ppm   poll adjust: 28   watchdog timer: 301 s
         offset: -0.007361 s   frequency: 6.612 ppm   poll adjust: 30   watchdog timer: 155 s
         offset: -0.008106 s   frequency: 4.358 ppm   poll adjust: 30   watchdog timer: 291 s

     ntp.conf:

         # NTP configuration file
         # Use drift file
         driftfile "C:\Program Files (x86)\NTP\ntp.drift"
         # Logs statistics loopstats peerstats clockstats
         statsdir "C:\Program Files (x86)\NTP\logs\"   # directory for statistics files
         filegen peerstats file peerstats type day enable
         filegen loopstats file loopstats type day enable
         filegen clockstats file clockstats type day enable
         logfile "C:\Program Files (x86)\NTP\logs\syslog.txt"
         # Use specific NTP servers
         server *.*.*.25 minpoll 4 maxpoll 7 iburst
         server *.*.*.26 minpoll 4 maxpoll 7 iburst
         server *.*.*.27 minpoll 4 maxpoll 7 iburst
         server *.*.*.28 minpoll 4 maxpoll 7 iburst
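
     One note on the config above: maxpoll 7 caps the poll interval at 2^7 = 128 seconds, so poll can never reach 1024 here no matter how stable the clock is (ntpd drops to 64 s on its own when it considers the clock unstable, which is what the fifth snapshot shows). A sketch of relaxed server lines (the exact values are an assumption to illustrate the knob, not a tuning recommendation for this network):

         # in ntp.conf: let the poll interval back off further once the clock is stable
         server *.*.*.25 minpoll 6 maxpoll 10 iburst
         server *.*.*.26 minpoll 6 maxpoll 10 iburst
         server *.*.*.27 minpoll 6 maxpoll 10 iburst
         server *.*.*.28 minpoll 6 maxpoll 10 iburst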

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
     New Ubuntu user needs help! Version 9.10 does not communicate with my laptop's network hardware. Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dial-up modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0). I desperately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss? My processor is a Mobile AMD Sempron 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I am simply overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!! Steve, wanna-be Ubuntu fanatic. "If you're not living on the edge, you're taking up too much space."
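
     For anyone answering, the usual first step is to identify the wireless chipset and confirm the radio is not blocked; a sketch using standard Ubuntu command-line tools (nothing here is specific to this particular laptop):

         # identify the wireless adapter and which kernel driver, if any, is bound to it
         lspci -nnk | grep -iA3 network
         # check whether the radio is disabled by a hardware switch or a software block
         rfkill list
         # see whether a wireless interface was created at all
         iwconfig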

    Read the article

  • Is the master database backup crucial for restoring MS SQL server in the event where you have to res

    - by Imagineer
     I have been advised by Commvault partner support to turn off the backup of the master database, as the backup failed due to the log file being locked. The following is the advice given:

         "The message is caused by Commvault's inability to backup the master database's transaction log. If this is happening intermittently its possible that something is locking the transaction log, preventing SQL iData agent from accessing the log. Typically the master database is just a template and is not used by any applications (applications that do require the use of an SQL database create their own) so there should be no harm in preventing it from being backed up. You can do this by nominating NOT to back it up in the primary copy for the SQL data agent."

     The following is the error that I get:

         sqlxx SQL Server/ SQLxx N/A/ System DBs 19856* (CWE) Transaction Log N/A 01/08/2010 19:00:16 (01/08/2010 19:00:18) 01/08/2010 19:03:15 (01/08/2010 19:03:14) 1.44 MB 0:01:11 0.071 2 0 1 ITD014L2

         Failure Reason:
         • ERROR CODE [30:325]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.]

         Job Options: Create new index, Start new media, Backup all subclients, Truncation Log, Follow mount points, Backup files protected by system file protection, Stop DHCP service when backing up system state data, Stop WINS service when backing up system state data

         Associated Events:
         • 79714 [backupxx/JobManager] [01/08/2010 19:03:15]: Backup job [19856] completed. Client [sqlxx], Agent Type [SQL Server], Subclient [System DBs], Backup Level [Transaction Log], Objects [2], Failed [1], Duration [00:02:59], Total Size [1.44 MB], Media or Mount Path Used [ITD014L2].
         • 79712 [sqlxx/SQLiDA] [01/08/2010 19:01:53]: Error encountered during backup. Error: [ERROR: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.]
         • 79711 [sqlxx/SQLiDA] [01/08/2010 19:01:51]: Query Result [[Microsoft][ODBC SQL Server Driver][SQL Server]Cannot back up the log of the master database. Use BACKUP DATABASE instead. [Microsoft][ODBC SQL Server Driver][SQL Server]BACKUP LOG is terminating abnormally.].
         • 79707 [backupxx/JobManager] [01/08/2010 19:00:15]: New backup request received for Client [sqlxx], iDataAgent [SQL Server], Instance [SQLxx], Subclient [System DBs], Backup Level [Transaction Log].

         Files failed to back up:
         • Backup Database [master] Failed

     Please advise, thank you.
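
     For context, the SQL Server error in the log is saying that master only supports full database backups, never transaction-log backups, which is why a "Transaction Log" job against it always fails. A sketch of taking that full backup directly (the instance name and target path are assumptions):

         REM full backup of master via sqlcmd, run against the sqlxx instance
         sqlcmd -S sqlxx -E -Q "BACKUP DATABASE master TO DISK = N'D:\Backups\master_full.bak' WITH INIT"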

    Read the article

  • Mysql server fails to start

    - by Nicolas Thery
     I've been googling for two hours and I need your assistance. I'm on a Debian virtual machine that I cloned; the only change is the new IP address it has. MySQL doesn't start any more:

         Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

     There is no process called mysql, and all the MySQL log files in /var/log are empty. Here is my my.cnf file:

         [client]
         port            = 3306
         socket          = /var/run/mysqld/mysqld.sock

         [mysqld_safe]
         socket          = /var/run/mysqld/mysqld.sock
         nice            = 0

         [mysqld]
         user            = mysql
         pid-file        = /var/run/mysqld/mysqld.pid
         socket          = /var/run/mysqld/mysqld.sock
         port            = 3306
         basedir         = /usr
         datadir         = /var/lib/mysql
         tmpdir          = /tmp
         language        = /usr/share/mysql/english
         skip-external-locking
         bind-address    = 127.0.0.1
         key_buffer      = 16M
         max_allowed_packet = 16M
         thread_stack    = 192K
         thread_cache_size = 8
         myisam-recover  = BACKUP
         query_cache_limit = 1M
         query_cache_size  = 16M
         general_log_file  = /var/log/mysql/mysql.log
         log_bin           = /var/log/mysql/mysql-bin.log
         expire_logs_days  = 10
         max_binlog_size   = 100M

         [mysqldump]
         quick
         quote-names
         max_allowed_packet = 16M

         [mysql]

         [isamchk]
         key_buffer      = 16M

         [mysqld_safe]
         syslog

     Here is the result of ifconfig:

         eth0  Link encap:Ethernet HWaddr 00:0c:29:12:98:9a
               inet adr:192.168.1.138 Bcast:192.168.1.255 Masque:255.255.255.0
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:754 errors:0 dropped:0 overruns:0 frame:0
               TX packets:106 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 lg file transmission:1000
               RX bytes:101177 (98.8 KiB) TX bytes:17719 (17.3 KiB)

         lo    Link encap:Boucle locale
               inet adr:127.0.0.1 Masque:255.0.0.0
               adr inet6: ::1/128 Scope:Hôte
               UP LOOPBACK RUNNING MTU:16436 Metric:1
               RX packets:8 errors:0 dropped:0 overruns:0 frame:0
               TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 lg file transmission:0
               RX bytes:560 (560.0 B) TX bytes:560 (560.0 B)

     As requested, here is the result of sudo -u mysql mysqld:

         root@debian:/home/nicolas/Bureau# sudo -u mysql mysqld
         121004 14:26:57 [Note] Plugin 'FEDERATED' is disabled.
         mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
         121004 14:26:57 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
         121004 14:26:57 InnoDB: Initializing buffer pool, size = 8.0M
         121004 14:26:57 InnoDB: Completed initialization of buffer pool
         121004 14:26:57 InnoDB: Started; log sequence number 0 70822697
         121004 14:26:57 [Note] Recovering after a crash using /var/log/mysql/mysql-bin
         121004 14:26:57 [Note] Starting crash recovery...
         121004 14:26:57 [Note] Crash recovery finished.
         121004 14:26:57 [ERROR] mysqld: Can't find file: './mysql/host.frm' (errno: 13)
         121004 14:26:57 [ERROR] Fatal error: Can't open and lock privilege tables: Can't find file: './mysql/host.frm' (errno: 13)
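
     A note for anyone reading: errno 13 is EACCES ("permission denied"), so mysqld can see ./mysql/host.frm but is not allowed to open it - a classic symptom of wrong ownership on the data directory after a clone or copy. A sketch of the usual check and fix (paths come from the my.cnf above; mysql:mysql is the Debian package's default owner):

         # check who owns the data directory and the system tables
         ls -ld /var/lib/mysql /var/lib/mysql/mysql
         # restore the ownership the Debian packages expect, then retry
         sudo chown -R mysql:mysql /var/lib/mysql
         sudo /etc/init.d/mysql restart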

    Read the article

  • WSS and CAG , _layout pages break

    - by Mike
    Alright, I've searched everywhere and I cannot find the answer, due to the rarity of our setup. WSS 3.0/IIS 6.0/WinServer 2003 We have a sharepoint site that is in good shape, almost. Its TCP and SSL port are uncommon and need to be rerouted to work properly. This is where the Citrix Access Gateway (CAG) comes in play. It will redirect any request from URL (something.something.com) to the correct SSL port on the correct server. My AAM is configured to Default something.something.com and nothing else, since the CAG will provide the port. We use FBA, and require SSL. This works perfectly for everything that is safe or that is anything that an end user can see, but if I try to add a webpart, it errors out. Whereas if I add it internally, or bypass the CAG the webpart adds fine. The same goes for most of the _layouts pages, like _layouts/new.aspx. If I add a Link List/Doc library on the something.something.com, it errors out (Page cannot be displayed) and the page won't display, but if I try it with an internal address it will work fine. I found that if I am trying to add something or doing anything administrative, the site will navigate to the pages that I need to go to fine, but when i actually ADD something the URL will change from something.something.com to something.something.com:SSLport, thus erroring out the site. The URL with the SSL port shows on the Site URL when navigating to Site Settings. However, if I bypass the CAG, using the internal address the _layouts page works like a charm and i can add anything. All the CAG does is reroute a DNS request to the provided server and port. I've tried reextending the application, no luck same thing. I've tried changing the AAM to hide the port and the CAG rejects it. I've tried to recreate a new webapp/site collection with the same rules on the CAG, same thing occurs. Correct me if I'm wrong, and please provide me with some feedback and answers. Any suggestions would be very appreciated. Is it the CAG or the Alternate Access Mappings (AAM)?

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi, We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this? There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem. If we can't find the cause, we're thinking of a few workarounds we could try: Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule. Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server... (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck. What do you think? Thanks for your help, Richard

    Read the article

  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and can be done. I currently have the following "equipment": a JVC VCR -quite old-, which has built in surround sound (aka it has several speaker outputs, which I believe is 5.1 and are connected to several speakers that are in every corner of the room), a computer with SPDIF optical output and a new flat screen TV (with built in HDMI). I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video always with all the speakers (5.1) and with the maximum sound quality. Currently, the computer plays sound only through the front speaker (I connect one output to the on board pc audio input) and the quality is really bad. As a side note, the computer video runs with S-video (old school), and the picture quality as you would imagine, is really bad with the new big LCD screen. My main goals are: to upgrade the picture with a new video card which would support HDMI (my tv has HDMI). to buy a SPDIF optical cable, connect one end to the VCR SPDIF input and the other end to the PC output This is theoretically what I've researched so far, and I came out with several questions: in this case, with the SPDIF cable connected, and all the configurations done in windows allowing the 5.1, will I get every content I play "converted" or played through all of my speakers? (I read this forum post). I already know that in order for this setup to play from all the speakers, the content/audio source has to be 5.1. but my question is, if there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said conversion there) I already know that HDMI cables carry digital sound. Is there a way I can only use said HDMI cord to the tv, and get sound through the VCR? (I'm not too sure about this, I would have to disable the TVs speakers and use the VCR surround as default, but I have no clue wether this can be done or not). Update: The ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers, no matter what content I play? (do I require a newer sound card, like a creative soundblaster with said technology?) Thanks!

    Read the article

  • Looking for a fiber optic "switch" or "router" for home use

    - by Shrout1
    The gist of my question: What is a "fiber optic" switch called? I.E. a layer 2 ethernet switch that uses fiber TX and RX connections and sends layer 2 network traffic between the fiber strands that are connected. Can someone purchase a dedicated fiber switch that does not have copper ethernet ports? What is the current average price of a device like this? Not necessarily looking for product endorsements, just information Might not make sense to go this route if it is too cost prohibitive What type of fiber connector is used for terminating a fiber strand into a jack on the wall? Can fiber be "patched" using two jacks and a "patch" cable? Is signal loss a concern with the longest runs at 100-200ft, a patch cable and media converters? The full story: My parents had unterminated fiber optic cable and terminated Cat5e run throughout their home when it was built in 2004. 10 years later the Cat5e isn't providing the throughput that my father needs to accomplish multiple streams of HD and fast system backups throughout the house. He can't reach gigabit speeds across the distance of the Cat5e runs. We are both interested in terminating the fiber connections and using them as high speed "backbones" to copper switches in each room of the house. It would be easy to attain gigabit speeds (or better, eventually) using the fiber. I have searched and searched for a "fiber optic switch" or "fiber optic router" and cannot find the correct term to describe this piece of hardware. We can use fiber media converters at the end points of each connection, however it would be nice to have a "patch panel" set up in the network closet in the basement that has fiber connections on it and switches the ethernet streams between the connections/systems in the house. Each fiber media converter costs between $50-$100 a piece... After 10 or so terminated connections it might make sense to find a piece of hardware that does not require media converters. That would depend upon the cost of this hardware Somewhat unrelated, if we are able to route between these fiber strands successfully, what is the physical connector type used in a jack on the wall? Just like RJ45 has a wall outlet (depicted below): What is the fiber optic equivalent of this? In the interim could we "patch" a couple fiber strands together in the network closet? Would signal loss be of concern with a run length of 100-200 feet, a patch cable and two media converters? If that would work then it could be used until the funds are available for more.

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
     What is going on here?! I am baffled.

         serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
         10000+0 records in
         10000+0 records out
         41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s

         serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
         10000+0 records in
         10000+0 records out
         41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

     Of note: my RAID 50 is 3 sets of 8 disks - this might not be the best config for speed. OS: Ubuntu 12.04.1 x64. Hardware RAID: RocketRAID 2782, a 24-port controller. Hard drive type: Seagate Barracuda ES.2 1TB. Drivers: v1.1 open-source Linux drivers. So, 24 x 1TB drives, partitioned using parted; the filesystem is ext4. The I/O scheduler WAS noop, but I have changed it to deadline with no apparent performance benefit or cost.

         serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
         GPT fdisk (gdisk) version 0.8.1
         Partition table scan:
           MBR: protective
           BSD: not present
           APM: not present
           GPT: present
         Found valid GPT with protective MBR; using GPT.
         Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
         Logical sector size: 512 bytes
         Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
         Partition table holds up to 128 entries
         First usable sector is 34, last usable sector is 41020686302
         Partitions will be aligned on 2048-sector boundaries
         Total free space is 5062589 sectors (2.4 GiB)
         Number  Start (sector)  End (sector)   Size      Code  Name
         1       2048            41015625727    19.1 TiB  0700  primary

     To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors. I can't seem to get much, if any, higher than 361 MB a second - is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? If you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expecting more from RAID 50 and 24 drives together...

     EDIT (hdparm results):

         serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb
         /dev/sdb:
          Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
          Timing buffered disk reads:  884 MB in  3.00 seconds = 294.32 MB/sec

     EDIT 2 (config details): Also, I am using a RAID block size of 256K. I was told a larger block size is better for larger files (in my case, large video files).

     EDIT 3: (Bonnie++ results. Would love some guidance with this!)
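
     A couple of cheap read-side experiments that may help narrow this down - a sketch only (the device name /dev/sdb and the test file path are taken from the output above; the read-ahead value is an arbitrary starting point, not a recommendation):

         # check and raise the block-device read-ahead (the value is in 512-byte sectors)
         sudo blockdev --getra /dev/sdb
         sudo blockdev --setra 16384 /dev/sdb
         # re-run the read test bypassing the page cache to measure the array itself
         sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000 iflag=direct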

    Read the article

  • how to notify a program of another program? dll? directory? path?

    - by Brady Trainor
    I am trying to experiment with GNUS email in Emacs, in Windows (EDIT: x64 bit). I've got it to work in Ubuntu, but struggling with it in Windows. From http://www.gnu.org/software/emacs/manual/html_mono/emacs-gnutls.html#Help-For-Users I read in second paragraph: This is a little bit trickier on the W32 (Windows) platform, but if you have the GnuTLS DLLs (available from http://sourceforge.net/projects/ezwinports/files/ thanks to Eli Zaretskii) in the same directory as Emacs, you should be OK. I have downloaded and unzipped the gnutls-3.0.9-w32-bin package, but am not sure what to do with it. I have tried putting it in Program Files (x86), which is "the same directory as Emacs". I have tried putting it in the emacs-24.3 folder. I consider merging all the folders in between the two, but am hesitant as that seems a difficult troubleshoot attempt compared to my knowledge on these matters. I think Emacs needs to somehow see the gnutls binaries and/or dlls. My knowledge is limited on this. I've also struggled to understand PATHs for sometime now, and am not sure if that approach is relevant here. FYI, the emacs directory contains folders labeled bin, etc, info, leim, lisp and site-lisp. The gnutls directory contains folder labeled bin, include, lib and share. Hmm, now I'm finding lots of links on adding paths. Still, I'm skeptical that I would only add gnutls.exe path, as it seems the dlls are needed. Some additional data for Ramhound's first comment I have been attempting the (require 'gnutls) route. This seems to be the most relevant parts in the log: Opening connection to imap.gmail.com via tls... gnutls.c: [1] (Emacs) GnuTLS library not found Opening TLS connection to `imap.gmail.com'... Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com'...failed Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com --protocols ssl3'...failed Opening TLS connection with `openssl s_client -connect imap.gmail.com:993 -no_ssl2 -ign_eof'...failed Opening TLS connection to `imap.gmail.com'...failed I am not sure what "in stallion" means. Emacs seems to have installed itself in program files (x86), so I assume it is 32 bit. I can try and figure out how to double check, but did not realize I would get such fast response time, and am headed out right now. I will try merging the files later tonight?

    Read the article

  • Tying down a cloud by virtualizing everything and then locking VMs to real hardware as necessary

    - by tudor
    I'm looking for a cloud software solution that: Can run on both server and desktop machines; Virtualizes hardware and has the option of exposing each real machine to the cloud; Allows a VM to be "locked" to a set of real hardware capabilities and stay there until moved (e.g. a user's "real" desktop); Allows a VM to link to some types of devices elsewhere (e.g. USB/serial via ethernet); and Is geography-aware to control movement of VMs between real networks. I'm aware that this may be the holy grail of virtualization, and I've searched alot. Some solutions appear to meet some criteria but not others. Most cloud implementations appear to ignore real hardware, for example. I realise that this may be solved by using three different implementations in combination: A standard cloud server farm. A bare-metal network backup utility with PXEBoot. VNC and/or VDI. (VNC obviously would require the real hardware to be running.) This combination, however, has some serious drawbacks that I'd like to solve by treating it as one system. My explanation follows... I have a network of real servers and desktops in multiple locations. I've virtualized servers before using Virtualbox and that's worked quite well. I've even connected USB devices to VMs on servers. I would like to virtualize the desktops in all my offices to facilitate movement of desktops, remote access (e.g. VDI) and bare-metal backups. However, I know that there are problems with this. For example, some desktops have specific hardware (e.g. 3D graphics cards, USB devices, etc) that limit their mobility. Geographic constraints also limit movement in that VMs can be moved easily within offices, but transferring between offices is not always preferable. What I would like to find is a system that can virtualize everything from bare-metal easily by maintaining an abstraction layer on each client and server machine that exposes the hardware available and runs as a cloud. Then certain VMs would be "locked" to specific hardware (so that, e.g. the VM runs only on their own desktop.) This would be required for situations where speed is important (e.g. 3D graphics pass-through). In addition, abstracted low-speed devices (e.g. USB) could be piped from real hardware to a VM in the cloud. This is important since if a VM is taken down, another VM can connect to the real hardware for minimum downtime.

    Read the article

  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
     I get the error:

         [anadi@bangda ~]# tail -f /var/log/nginx/error.log
         [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]:
         *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner
         (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>):
             from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
             from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
             from config.ru:13
             from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
             from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
             from config.ru:1:in `new'
             from config.ru:1

     when I start the nginx server with the Passenger module configured and the puppet master set up to run through Rack. Here is the config.ru:

         [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
         # a config.ru, for use with every rack-compatible webserver.
         # SSL needs to be handled outside this, though.

         # if puppet is not in your RUBYLIB:
         #$:.unshift('/usr/share/puppet/lib')

         $0 = "master"

         # if you want debugging:
         # ARGV << "--debug"

         ARGV << "--rack"
         require 'puppet/application/master'
         # we're usually running inside a Rack::Builder.new {} block,
         # therefore we need to call run *here*.
         run Puppet::Application[:master].run

     and the nginx configuration for the puppet master is as follows:

         [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
         server {
             listen 8140 ssl;
             server_name bangda.mycompany.com;

             passenger_enabled on;
             passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
             passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;

             access_log /var/log/nginx/puppet/master.access.log;
             error_log /var/log/nginx/puppet/master.error.log;

             root /etc/puppet/rack/public;

             ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
             ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
             ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
             ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
             ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
             ssl_prefer_server_ciphers on;
             ssl_verify_client optional;
             ssl_verify_depth 1;
             ssl_session_cache shared:SSL:128m;
             ssl_session_timeout 5m;
         }

     However, when I run puppet through the usual puppetmasterd daemon it works perfectly with no errors. Somehow the nginx + Passenger + Rack setup fails to initialize, while the same works when running the native puppetmasterd daemon. Is there any configuration that I am missing?
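
     The LoadError suggests the Ruby that Passenger spawns (the /usr/local/lib/ruby/... interpreter in the backtrace) cannot see Puppet's libraries, while puppetmasterd presumably runs under a Ruby that can. A quick sketch to confirm, assuming the matching binary is /usr/local/bin/ruby and that Puppet's code lives in the stock /usr/share/puppet/lib path already mentioned in the commented-out line of config.ru:

         # can the Passenger ruby load the class on its own?
         /usr/local/bin/ruby -e "require 'rubygems'; require 'puppet/application/master'" && echo FOUND
         # if not, point it at puppet's lib directory and try again
         /usr/local/bin/ruby -I /usr/share/puppet/lib -e "require 'puppet/application/master'" && echo FOUND
         # if the second form works, uncommenting the $:.unshift line in config.ru
         # (with that same path) should let the Rack application spawn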

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems, no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup which apparently simply starts failing the the backup media fills.) It copied all file attributes including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files, if the mirror was interrupted. This would be the advantage of incremental backup techniques that require old full backup + all intermediate incremental backups to restore. But I don't see this as presenting much of a problem, you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program (e.g. It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently)

    Read the article

  • Puppet Directory and File ownership ignored

    - by Phil Sturgeon
    Puppet seems to be lying to me, which is not very nice. I am trying to set some files and directories included in /vagrant/src to be 666 and 777, and set the ownership group to the correct Apache user (using the PuppetLabs Apache module). Output from Puppet says yes. [default] Running provisioner: Vagrant::Provisioners::Puppet... [default] Running Puppet with /tmp/vagrant-puppet/manifests/default.pp... stdin: is not a tty No LSB modules are available. warning: require is a metaparam; this value will inherit to all contained resources warning: notify is a metaparam; this value will inherit to all contained resources notice: /Stage[main]//File[/vagrant/src/addons/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/addons/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//Package[curl]/ensure: ensure changed 'purged' to 'present' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/config/config.php]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/system/cms/cache/]/mode: mode changed '0755' to '0777' notice: /Stage[main]//File[/vagrant/src/uploads/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/uploads/]/mode: mode changed '0755' to '0777' notice: /Stage[main]/Apache/Service[httpd]/ensure: ensure changed 'stopped' to 'running' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/owner: owner changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/group: group changed 'vagrant' to 'www-data' notice: /Stage[main]//File[/vagrant/src/assets/cache/]/mode: mode changed '0755' to '0777' notice: Finished catalog run in 2.29 seconds Output from ls -lah says no: $ ls -lah /vagrant/src/ total 36K drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 00:11 . drwxr-xr-x 1 vagrant vagrant 340 2012-07-03 08:08 .. drwxr-xr-x 1 vagrant vagrant 136 2012-07-03 00:11 addons drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 assets drwxr-xr-x 1 vagrant vagrant 510 2012-07-03 07:45 .git -rw-r--r-- 1 vagrant vagrant 1.3K 2012-07-03 00:11 .gitignore -rwxr-xr-x 1 vagrant vagrant 1.4K 2012-07-03 00:11 .htaccess -rwxr-xr-x 1 vagrant vagrant 8.8K 2012-07-03 00:11 index.php drwxr-xr-x 1 vagrant vagrant 442 2012-07-03 00:11 installer -rwxr-xr-x 1 vagrant vagrant 2.8K 2012-07-03 00:11 LICENSE -rw-r--r-- 1 vagrant vagrant 1.1K 2012-07-03 00:11 phpdoc.dist.xml -rw-r--r-- 1 vagrant vagrant 3.3K 2012-07-03 00:11 README.md drwxr-xr-x 1 vagrant vagrant 204 2012-07-03 00:11 system -rw-r--r-- 1 vagrant vagrant 42 2012-07-03 00:11 .travis.yml drwxr-xr-x 1 vagrant vagrant 102 2012-07-03 00:11 uploads Whats up with that? My entire config can be found here.
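
     One detail that may explain the discrepancy: if /vagrant is the default VirtualBox shared folder, the owner, group and mode seen inside the guest are fixed by the vboxsf mount options, so chown/chmod - and therefore Puppet's file resource - report success but have no lasting effect. A quick check (a sketch, nothing project-specific):

         # if this prints "vboxsf", the ownership and permissions shown by ls come from
         # the mount options, not from anything chown/chmod (or Puppet) did in the guest
         mount | grep ' /vagrant '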

    Read the article

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
     I've set up .ssh/authorized_keys and am able to log in with the new "user" using the pub/private key ... I have also added "user" to the sudoers list ... the problem I have now is that when I try to execute a sudo command, something simple like:

         $ sudo cd /root

     it will prompt me for my password, which I enter, but it doesn't work (I am using the private key password I set). Also, I've disabled the user's password using:

         $ passwd -l user

     What am I missing? Somewhere my initial remarks are being misunderstood ... I am trying to harden my system ... the ultimate goal is to use pub/private keys to do logins versus simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account. But before I do that I need sudo to work for a second user (the user which I will be logging into the system with all the time). For this second user I want to prevent regular password logins and force only pub/private key logins; if I don't lock the user via passwd -l user, then if I don't use a key, I can still get into the server with a regular password. But more importantly, I need to get sudo to work with a pub/private key setup for a user whose password has been disabled.

     Edit: OK, I think I've got it (the solution):

     1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no. This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this).

     2) I've adjusted the sudoers list (visudo) and added:

         root  ALL=(ALL) ALL
         dimas ALL=(ALL) NOPASSWD: ALL

     3) root is the only user account that will have a password. I am testing with two user accounts, "dimas" and "sherry", which do not have a password set (passwords are blank, passwd -d user).

     The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry, however "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user.

     NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.

    Read the article

  • Do glue records in non-circular dns-lookups speed up domain resolution or not?

    - by Joe Hopfgartner
     Doing a lookup for my domain on http://www.intodns.com/ I noticed these two messages. In the Parent section:

         DNS Parent sent Glue
         The parent nameserver g.gtld-servers.net is not sending out GLUE for every nameservers listed, meaning he is sending out your nameservers host names without sending the A records of those nameservers. It's ok but you have to know that this will require an extra A lookup that can delay a little the connections to your site. This happens a lot if you have nameservers on different TLD (domain.com for example with nameserver ns.domain.org.)

     and in the NS section:

         Glue for NS records
         INFO: GLUE was not sent when I asked your nameservers for your NS records. This is ok but you should know that in this case an extra A record lookup is required in order to get the IPs of your NS records. The nameservers without glue are:
         109.230.225.96
         84.201.40.52
         You can fix this for example by adding A records to your nameservers for the zones listed above.

     I do perfectly understand that the primary objective of glue records is to resolve circular dependencies. The classic use case: my domain is example.com and I want to have the nameserver ns1.example.com. This will never work, because I cannot know the IP of ns1.example.com if I don't fetch example.com, and in order to do that I need to fetch it from ns1.example.com. To resolve this deadlock I add a glue record for ns1.example.com containing the IP address of the nameserver, so this can work out. So this problem does not occur if the nameservers are in a different TLD than the domain I want to look up. But to fetch the zone information from the nameservers I still need to know their IP addresses, right? And in order to know that, I need to fetch the zone the nameservers are in from their respective nameservers, right? (Or rather, my ISP needs to do that in the background.) So, an extra lookup that takes time? If I now have glue records, I know the IP address right away without the need to look it up - so this should speed up the resolution of my domain, shouldn't it?

     However, my DNS zone provider (tecserver.at) replied that this would make no sense, because:

         "we are not running ns1.ourdomain.com an ns1.ourdomain.com as authorative NS for ourdomain.com. This would be the only sense for glue records. Tecserver has a glue record because the NS for tecserver.at are ns1.tecserver.at and ns2.tecserver.at. Therefore a glue record is needed for resolution."
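
     A way to see exactly what the parent is (or isn't) handing out - a sketch using the .com gTLD server named in the report (replace yourdomain.com with the real name):

         # ask the parent (gTLD) server directly; glue, if present, appears in the ADDITIONAL section
         dig @g.gtld-servers.net yourdomain.com NS +norecurse
         # compare with what one of your own nameservers returns for the same question
         dig @109.230.225.96 yourdomain.com NS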

    Read the article

  • Why is my Linux box dropping network connection? [closed]

    - by Robo
     I have a Debian server in the form of a Raspberry Pi running Raspbian. It has a USB Wi-Fi adapter. Sometimes it does not respond when I SSH to it and requires a reboot. I found something in syslog that may indicate what the problem is; can someone help with what this means?

         Dec 16 15:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
         Dec 16 16:17:01 raspberrypi /USR/SBIN/CRON[2109]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
         Dec 16 16:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
         Dec 16 17:17:01 raspberrypi /USR/SBIN/CRON[2127]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
         Dec 16 17:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
         Dec 16 18:17:01 raspberrypi /USR/SBIN/CRON[2142]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
         Dec 16 18:34:17 raspberrypi wpa_supplicant[1501]: wlan0: WPA: Group rekeying completed with 00:21:29:6c:5c:3d [GTK=CCMP]
         Dec 16 19:17:01 raspberrypi /USR/SBIN/CRON[2161]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
         Dec 16 19:31:29 raspberrypi kernel: [16615.391509] ieee80211 phy0: wlan0: No probe response from AP 00:21:29:6c:5c:3d after 500ms, disconnecting.
         Dec 16 19:31:29 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-DISCONNECTED bssid=00:21:29:6c:5c:3d reason=4
         Dec 16 19:31:29 raspberrypi kernel: [16615.416189] cfg80211: Calling CRDA to update world regulatory domain
         Dec 16 19:31:30 raspberrypi ifplugd(wlan0)[1444]: Link beat lost.
         Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Executing '/etc/ifplugd/ifplugd.action wlan0 down'.
         Dec 16 19:31:40 raspberrypi wpa_supplicant[1501]: wlan0: CTRL-EVENT-TERMINATING - signal 15 received
         Dec 16 19:31:40 raspberrypi ifplugd(wlan0)[1444]: Program executed successfully.
         Dec 16 19:31:42 raspberrypi ntpd[1928]: Deleting interface #2 wlan0, 192.168.1.10#123, interface stats: received=321, sent=327, dropped=0, active_time=16596 secs
         Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.6.116.123 interface 192.168.1.10 -> (none)
         Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.99.128.34 interface 192.168.1.10 -> (none)
         Dec 16 19:31:42 raspberrypi ntpd[1928]: 203.118.148.40 interface 192.168.1.10 -> (none)
         Dec 16 19:31:42 raspberrypi ntpd[1928]: 202.89.49.65 interface 192.168.1.10 -> (none)
         Dec 16 19:31:42 raspberrypi ntpd[1928]: peers refreshed
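
     For reference, the kernel line ("No probe response from AP ... disconnecting") is the actual drop; everything after it is just cleanup by ifplugd, wpa_supplicant and ntpd. One common thing to rule out on USB Wi-Fi adapters is power saving - a sketch (wlan0 comes from the log above; the rest is a generic check, not a confirmed cause):

         # see whether power management is currently enabled for the interface
         iwconfig wlan0
         # turn it off for this session and watch whether the drops stop
         sudo iwconfig wlan0 power off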

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replica data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same.

    We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place.

    In light of the fact that we can't simply ghost, I'm trying to find a reasonable way to audit the new data center and check whether it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work.

    Here are some of the constraints I'm working under:
    - The data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total)
    - I absolutely cannot ghost or snap any other type of image of these machines ... at least not in their final configuration
    - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
    - I would rather not install any additional tools on these machines ... I'd much rather run it from a standalone machine or off a DVD
    - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
    - For the COTS that stores the network information, I don't know all of the places it stores the network information ... so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?
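    Not part of the original question: one low-footprint approach for the Linux boxes, run from a standalone machine over SSH so nothing is installed on the targets, is to pull a package list and checksums of /etc from each host and diff the results between the two data centers. A rough sketch (hostnames are placeholders; the Windows machines would need an equivalent, e.g. exported installed-program lists):

        #!/bin/sh
        # Collect a comparable configuration snapshot from each Linux host.
        # Usage: ./snapshot.sh host1 host2 ...   (expects key-based SSH access)
        for HOST in "$@"; do
            ssh "root@$HOST" '
                (dpkg -l 2>/dev/null || rpm -qa) | sort
                echo "----- /etc checksums -----"
                find /etc -type f -exec md5sum {} + 2>/dev/null | sort -k2
            ' > "snapshot-$HOST.txt"
        done
        # Then compare a host against its counterpart in the other data center:
        #   diff snapshot-oldhost.txt snapshot-newhost.txt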

    Read the article

  • Recommendations for efficient offsite remote backup solution of VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to as many as 20). Currently I am running a two-node Proxmox cluster (which is a Debian base using KVM for virtualization, with a custom web front end to administer it). I have two nearly identical boxes with AMD Phenom II X4s and ASUS motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share the 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference of two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible.

    I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or something else?
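    Not part of the original question, but for reference, the incremental-transfer idea described above looks roughly like this with ZFS send/receive. Pool, dataset, and host names are placeholders, and it assumes the VM images already live on a ZFS dataset (rather than on LVM) and that the earlier snapshot has already been replicated to the target:

        # Take hourly snapshots of the dataset holding the VM images.
        zfs snapshot tank/vmimages@0600
        # ...an hour later...
        zfs snapshot tank/vmimages@0700

        # Send only the blocks that changed between the two snapshots,
        # compress the stream, and apply it on the offsite box.
        zfs send -i tank/vmimages@0600 tank/vmimages@0700 | bzip2 | \
            ssh backup.example.com "bzcat | zfs receive tank/vmimages-replica"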

    Read the article

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before, and got most of it fixed. If I switch off the https redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, I can enter my AD credentials, and bingo. Switching the https redirect back on, if I go to http://mydomain.com I am automatically redirected to https, which is what I want, and the 'CentOS test page' pops up. Perfect. The problem occurs when I want to go to one of my test repos via https. Here is my httpd.conf file, with confidential information suitably hosed...

    ===
    NameVirtualHost *:80
    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName svn.mycompany.com
        ErrorLog logs/subversion-error_log
        CustomLog logs/subversion-access_log common
        Redirect permanent / https://svn.mycompany.com
    </VirtualHost>

    <VirtualHost svn.mycompany.com:443>
        SSLEngine On
        SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt
        SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key
        SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt
        ServerName svn.mycompany.com
        ServerAdmin [email protected]
        ErrorLog logs/subversion-error_log
        CustomLog logs/subversion-access_log common
        <Location /svn>
            DAV svn
            SVNParentPath /usr/local/subversion
            SVNListParentPath off
            AuthName "Subversion Repositories"
            # NT Logon Details
            Require valid-user
            AuthBasicProvider file ldap
            AuthType Basic
            AuthzLDAPAuthoritative off
            AuthUserFile /etc/httpd/conf/svnpasswd
            AuthName "Subversion Server II"
            AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)"
            AuthLDAPBindDN "DOMAIN\subversion"
            AuthLDAPBindPassword XXXXXXX
            AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile
        </Location>
    </VirtualHost>
    ===

    Now, in ssl_error_log, I get

    ===
    ==> /etc/httpd/logs/ssl_error_log <==
    [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn
    ===

    This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive like so ..

        Alias /svn /usr/local/subversion

    .. but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved ... Thanks.

    Edit: apachectl -S output:

        [root@svn conf]# apachectl -S
        VirtualHost configuration:
        127.0.0.1:443    svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020)
        wildcard NameVirtualHosts and _default_ servers:
        _default_:443    svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74)
        *:80             is a NameVirtualHost
                         default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
                         port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
        Syntax OK
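    Not part of the original post: reading the apachectl -S output, the request on port 443 may be landing in the default SSL vhost from ssl.conf (whose DocumentRoot is /var/www/html) rather than in the custom svn vhost, which would explain the "File does not exist: /var/www/html/svn" error. A few commands to check that theory (stock CentOS paths assumed):

        # Show how Apache resolved the vhosts; note which entry owns port 443.
        apachectl -S

        # Both ssl.conf and httpd.conf define a vhost for 443; compare them.
        grep -n 'NameVirtualHost\|<VirtualHost\|DocumentRoot' \
            /etc/httpd/conf.d/ssl.conf /etc/httpd/conf/httpd.conf

        # Exercise the HTTPS path directly and then check the SSL error log.
        curl -kI https://svn.mycompany.com/svn/test0
        tail /etc/httpd/logs/ssl_error_log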

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:
    - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
    - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
    - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
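    Not part of the original post, but the watchdog idea in the last workaround can be sketched with IIS's own appcmd; the pool name and the 60-second threshold below are placeholders:

        REM List requests that have been executing for more than 60 seconds;
        REM if this keeps returning rows for the legacy pool, it is likely wedged.
        %windir%\system32\inetsrv\appcmd list requests /elapsed:60000

        REM Recycle only the affected application pool (name is an assumption).
        %windir%\system32\inetsrv\appcmd recycle apppool /apppool.name:"LegacyAppPool"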

    Read the article
