Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • How does IPv6 subnetting work and how does it differ from IPv4 subnetting?

    - by Michael Hampton
    This is a Canonical Question about IPv6 subnetting. Related: How does IPv4 Subnetting Work? I know a lot about IPv4 subnetting, and as I prepare to (deploy|work on) an IPv6 network I need to know how much of this knowledge is transferable and what I still need to learn. IPv6 seems at first glance to be much more complex than IPv4. So I would like to know:

    - IPv6 is 128 bits, so why is /64 the smallest recommended subnet for hosts?
    - Related to this: Why is it recommended to use /127 for point-to-point links between routers, and why was it recommended against in the past? Should I change existing router links to use /127?
    - Why would virtual machines be provisioned with subnets smaller than /64?
    - Are there other situations in which I would use a subnet smaller than /64?
    - Can I map directly from IPv4 subnets to IPv6 subnets?
    - My interfaces have several IPv6 addresses. Must the subnet be the same for all of them?
    - Why do I sometimes see a % rather than a / in an IPv6 address, and what does it mean? (The sketch below illustrates the difference.)
    - Am I wasting too many subnets? Aren't we just going to run out again?
    - In what other major ways is IPv6 subnetting different from IPv4 subnetting?
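
    On the % vs. / question, a quick illustration (a hedged sketch: the address uses the 2001:db8::/32 documentation prefix and eth0 is just an example interface). A / introduces a prefix length, while a % introduces a zone (scope) ID, which link-local addresses need because fe80::/10 looks the same on every interface:

        # / = prefix length: route a /64 subnet out of one interface
        ip -6 route add 2001:db8:abcd:12::/64 dev eth0

        # % = zone ID: a link-local address must name which interface to use
        ping6 fe80::1%eth0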

    Read the article

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then sit through the 6 or 7 consecutive reboots Adobe expects you to be OK with, I decided to integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order Adobe requires, extracted my CD to an Administrative Install Point, and ran the patches against it:

        msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to cab it), but it is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

        cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    per the guide I was using. While this did produce the cab file I wanted, the resulting MSI fails to install with an error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.

    Read the article

  • vagrant and puppet security for ssl certificates

    - by Sirex
    I'm pretty new to Vagrant. Would someone who knows more about it (and Puppet) be able to explain how Vagrant deals with the SSL certs needed when making Vagrant test machines that process the same node definition as the real production machines? I run Puppet in master/client mode, and I wish to spin up a Vagrant version of my Puppet production nodes, primarily to test new Puppet code against.

    If my production machine is, say, sql.domain.com, I spin up a Vagrant machine of, say, sql.vagrant.domain.com. In the Vagrantfile I then use the puppet_server provisioner and give it a puppet.puppet_node entry of "sql.domain.com" so it gets the same Puppet node definition. On the Puppet server I use a regex of something like /*.sql.domain.com/ on that node entry so that both the Vagrant machine and the real one get that node entry on the Puppet server. Finally, I enable auto-signing for *.vagrant.domain.com in Puppet's autosign.conf, so the Vagrant machine gets signed. So far, so good...

    However: if one machine on my network gets rooted, say unimportant.domain.com, what's to stop the attacker changing the hostname on that machine to sql.vagrant.domain.com, deleting the old Puppet SSL cert off of it and then re-running Puppet with a given node name of sql.domain.com? The new SSL cert would be autosigned by Puppet, match the node name regex, and then this hacked node would get all the juicy information intended for the SQL machine!

    One solution I can think of is to avoid autosigning, put the known Puppet SSL cert for the real production machine into the Vagrant shared directory, and then have a Vagrant SSH job move it into place. The downside of this is that I end up with all my SSL certs for each production machine sitting in one git repo (my Vagrant repo) and thereby on each developer's machine, which may or may not be an issue, but it doesn't sound like the right way of doing this.

    tl;dr: How do other people deal with Vagrant and Puppet SSL certificates for development or testing clones of production machines?
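
    Not an answer, but to make the stated risk concrete, here is roughly what the attack would look like from a rooted box (a hedged sketch: the hostnames come from the question, while the server name and Puppet's default ssldir are assumptions):

        # On the compromised machine, impersonate a vagrant clone of the SQL node
        hostname sql.vagrant.domain.com
        # Throw away the old identity so a fresh CSR is generated
        rm -rf /var/lib/puppet/ssl
        # Request a cert that matches both the autosign glob and the node regex
        puppet agent --test --certname sql.vagrant.domain.com --server puppet.domain.com

    Any workable answer has to break one of those three steps.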

    Read the article

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing a redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;
            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site, just for testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere in the internal redirects. Any help would be much appreciated.
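
    For whatever it's worth, here is how I'd try to narrow it down (hedged suggestions; the URL is a placeholder): validate the config, request a known-missing page, and watch the debug error log, which should record each internal redirect in the cycle. An explicit final fallback such as try_files $uri $uri/ =404; would also rule out the try_files/error_page interaction as the source of the loop.

        # Validate configuration syntax first
        nginx -t
        # Request a page that does not exist and watch the internal redirects
        curl -sv http://somedomain.com/does-not-exist
        tail -n 50 /var/log/nginx/somedomain.com-error.nginx.log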

    Read the article

  • Re-configure Office 2007 installation unattended: Advertised components --> Local

    - by abstrask
    On our Citrix farm, I just found out that some sub-components are "Installed on 1st Use" (advertised), which does not play well on terminal servers. Not only that, but you also get a rather non-descriptive error message when a document tries to use a component that is "Installed on 1st Use" (described in "Plan to deploy Office 2010 in a Remote Desktop Services environment"):

        Microsoft Office cannot run this add-in. An error occurred and this feature
        is no longer functioning correctly. Please contact your system administrator.

    I have ~50 Citrix servers where I need to change the installation state of all advertised components to Local, so I created an XML file like this:

        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <Logging Type="standard" Path="C:\InstallLogs" Template="MS Office 2007 Install on 1st Use(*).log" />
          <Option Id="AccessWizards" State="Local" />
          <Option Id="DeveloperWizards" State="Local" />
          <Setting Id="Reboot" Value="NEVER" />
        </Configuration>

    I run it with a command like this (using the appropriate paths):

        "[..]\setup.exe" /config ProPlus /config "[..]\Install1stUse-to-Forced.xml"

    According to the log file, the syntax appears to be accepted and the config file parsed:

        Parsing command line.
        Config XML file specified: [..]\Install1stUse-to-Forced.xml
        Modify requested for product: PROPLUS
        Parsing config.xml at: [..]\Install1stUse-to-Forced.xml
        Preferred product specified in config.xml to be: PROPLUS

    But the "Final Option Tree" still reads:

        Final Option Tree:
        AlwaysInstalled:local
        Gimme_OnDemandData:local
        ProductFiles:local
        VSCommonPIAHidden:local
        dummy_MSCOMCTL_PIA:local
        dummy_Office_PIA:local
        ACCESSFiles:local
        ...
        AccessWizards:advertised
        DeveloperWizards:advertised
        ...

    and the components remain advertised. Just to see if the installation state is overridden in another XML file, I ran:

        findstr /l /s /i "AccessWizards" *.xml

    against both my installation source and "%ProgramFiles%\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller", but just found DefaultState to be "Local". What am I doing wrong? Thanks!
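
    One thing that may be worth checking (an assumption on my part, not a confirmed fix): the Office 2007 Config.xml schema uses an OptionState element, not Option, to change feature installation states. If setup silently skips elements it does not recognize, that would explain why the log parses the file happily while the Final Option Tree never changes. A minimal sketch:

        <?xml version="1.0" encoding="utf-8"?>
        <Configuration Product="ProPlus">
          <Display Level="none" CompletionNotice="no" SuppressModal="yes" AcceptEula="yes" />
          <!-- OptionState instead of Option; Children="force" also converts subfeatures -->
          <OptionState Id="AccessWizards" State="local" Children="force" />
          <OptionState Id="DeveloperWizards" State="local" Children="force" />
        </Configuration>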

    Read the article

  • Kerberos & localhost

    - by Alex Leach
    I've got a Kerberos v5 server set up on a Linux machine, and it's working very well when connecting to other hosts (using Samba, LDAP or SSH) for which there are principals in my Kerberos database. Can I use Kerberos to authenticate against localhost, though? And if I can, are there reasons why I shouldn't? I haven't made a Kerberos principal for localhost. I don't think I should; instead I think the principal should resolve to the machine's full hostname. Is that possible? I'd ideally like a way to configure this on just one server (whether Kerberos, DNS, or SSH), but if each machine needs some custom configuration, that'd work too. e.g.

        $ ssh -v localhost
        ...
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server host/[email protected] not found in Kerberos database
        ...

    EDIT: So I had a bad /etc/hosts file. If I remember correctly, the original version I got with Ubuntu had two 127.0.x.x addresses, something like:

        127.0.0.1    localhost
        127.0.1.1    hostname

    For no good reason, I'd changed mine a long time ago to:

        127.0.0.1    localhost
        127.0.0.1    hostname.example.com hostname

    This seemed to work fine with everything until I tried out SSH with Kerberos (a recent endeavour). Somehow this configuration led to sshd resolving the machine's Kerberos principal to "host/localhost@\n", which I suppose makes sense if it uses /etc/hosts for forward and reverse DNS lookups in preference to external DNS. So I commented out the latter line, and sshd magically started authenticating with gssapi-with-mic. Awesome. (Then I investigated localhost and asked the question.)
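
    For anyone debugging the same thing, the whole round trip can be checked from the shell (a hedged sketch; it assumes you already hold a TGT from kinit):

        # What does 127.0.0.1 reverse-resolve to? This drives the host/ principal
        getent hosts 127.0.0.1
        hostname -f
        # Ask the KDC for a service ticket for the principal sshd should expect
        kvno host/$(hostname -f)

    If kvno succeeds but ssh localhost still fails, the name sshd derives from /etc/hosts and the name the KDC knows about disagree.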

    Read the article

  • Single Sign On for intranet with Apache and Linux MIT Kerberos

    - by Beerdude26
    Greetings, I am looking for a way to do a single sign-on to an intranet in the following manner:

    1) A Linux user logs on via a graphical frontend (for example, GNOME).
    2) He automatically requests a TGT for his username from the MIT Kerberos KDC.
    3) Via some way or another, the Apache server (which we'll assume is on the same server as the KDC) is informed that this user has logged in.
    4) When the user accesses the intranet, he is automatically granted access to his web applications.

    I don't think I've seen this kind of functionality while searching the net. I know the following possibilities exist:

    - Using an authentication module such as mod_auth_kerb, a user is presented with a login prompt to enter his username and password, which are then authenticated against the MIT Kerberos server. (I would like this to be automatic.)
    - IIS supports integrated Windows logon via ASP.Net when the user is part of an Active Directory. (I'm looking for the Linux / Apache equivalent.)

    Any suggestions, criticism and ideas are highly appreciated. This is for a school project to show a proof-of-concept, so every handy piece of information is more than welcome. :)
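
    For the "automatic" requirement, note that mod_auth_kerb can also do ticket-based Negotiate (SPNEGO) authentication instead of prompting for a password, as long as the browser is configured to forward credentials. A minimal sketch (the realm, keytab path and location are placeholders, not a tested config):

        <Location /intranet>
            AuthType Kerberos
            AuthName "Intranet"
            KrbAuthRealms EXAMPLE.COM
            KrbServiceName HTTP
            Krb5Keytab /etc/apache2/http.keytab
            KrbMethodNegotiate On
            KrbMethodK5Passwd Off
            Require valid-user
        </Location>

    With the TGT from the GNOME login, the browser can then obtain a service ticket for HTTP/hostname on its own, and no prompt appears.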

    Read the article

  • LDAP Authentication woes

    - by Marcelo de Moraes Serpa
    Hello list, I have a local OpenLDAP server with a couple of users. I'm using it for development purposes. Here's the ldif:

        # Top level - the organization
        dn: dc=site, dc=com
        dc: site
        description: My Organization
        objectClass: dcObject
        objectClass: organization
        o: Organization

        # Top level - manager
        dn: cn=Manager, dc=site, dc=com
        objectClass: organizationalRole
        cn: Manager

        # Second level - organizational units
        dn: ou=people, dc=site, dc=com
        ou: people
        description: All people in the organization
        objectClass: organizationalunit

        dn: ou=groups, dc=site, dc=com
        ou: groups
        description: All groups in the organization
        objectClass: organizationalunit

        # Third level - people
        dn: uid=celoserpa, ou=people, dc=site, dc=com
        objectclass: pilotPerson
        objectclass: uidObject
        uid: celoserpa
        cn: Marcelo de Moraes Serpa
        sn: de Moraes Serpa
        userPassword: secret_12345
        mail: [email protected]

    So far, so good. I can bind with "cn=Manager,dc=site,dc=com" and the 12345678 password (the local server password, set up in slapd.conf). However, I would like to bind with any user under the people OU. In this case, I'd like to bind with:

        dn: uid=celoserpa, ou=people, dc=site, dc=com
        userPassword: secret_12345

    But I'm getting a "(49) - Invalid Credentials" error every time. I have tried through CLI tools (such as ldapadd, ldapwhoami, etc.) and also ruby/ldap. The bind with these credentials fails with an invalid credentials error. I thought it could be an ACL issue; however, the ACLs in slapd.conf seem to be right:

        access to attrs=userPassword
            by self write
            by dn.sub="ou=people,dc=site,dc=com" read
            by anonymous auth

        access to *
            by * read

    I was suspecting that maybe OpenLDAP doesn't compare against userPassword? Or maybe there is some ACL configuration I am missing that is somehow affecting read access to userPassword for the specific DN. I'm really lost here, any suggestion appreciated! Cheers, Marcelo.
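
    A quick way to take the CLI tools and Ruby out of the equation is a bare simple bind with exactly the DN and secret from the ldif (hedged: localhost and the simple-bind options are assumptions about the setup):

        ldapwhoami -x -H ldap://localhost \
            -D "uid=celoserpa,ou=people,dc=site,dc=com" -w secret_12345

    If that also returns err=49, the problem is on the server side (ACLs, or what is actually stored in the entry) rather than in any client library.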

    Read the article

  • Windows clients unable to access Samba share on AD joined Linux box every 7 days

    - by Hassle2
    The problem: Every 7 days, 2 Windows servers are unable to access an SMB/CIFS share. It starts working again after a handful of hours.

    The environment:

    - OpenFiler Linux box joined to a 2003 AD domain
    - A foreground app on a Win2003 server accesses the SMB/CIFS share with Windows credentials
    - Another process on Win2008 accesses the share via SQL Server with Windows credentials
    - The Samba version on the Linux box is 3.4.5; security is set to ADS
    - wbinfo and getent return the expected users and groups
    - It does not look to be a double-hop issue, as it's always the same 2 accounts, regardless of the calling user
    - There is a DNS entry in both the forward and reverse lookup zones for the Linux box
    - The Linux box's computer object in Active Directory shows that it was modified around/at the same time that the two clients started failing to access the share
    - Accessing the share via IP works when access by name does not
    - Rebooting the Windows server takes care of it (it's production, so it has only been restarted once)
    - Restarting smbd, winbind and nmbd had no effect

    Error in the Samba log for the client in question:

        smbd/sesssetup.c:342(reply_spnego_kerberos)
        Failed to verify incoming ticket with error NT_STATUS_LOGON_FAILURE!

    The question: Does this look like the machine account password is changing (hence the AD object showing the updated modified date), or are the two Windows clients unable to request a new ticket that works against this Linux box?
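
    The 7-day period is suggestive: by default Samba rotates its machine account password weekly (the smb.conf "machine password timeout" parameter defaults to 604800 seconds), which would also touch the AD computer object's modified date. A hedged way to check the trust secret while the failure is occurring:

        # Verify the machine account trust secret via winbind
        wbinfo -t
        # Re-validate the domain join Samba answers Kerberos requests with
        net ads testjoin

    If wbinfo -t fails during the outage window, the rotation (or clients presenting tickets issued against the old password) is the likely culprit.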

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case, since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great.

    Now, I also want to defend against file corruption, so RAID-Z1 looked like a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array.

    That brings me to my primary question: Will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
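
    On the primary question, the short version as I understand it (hedged, not tested on this exact stack): ZFS checksums let it detect corruption no matter what sits underneath, but it can only repair what it has redundancy for. On a single md device that means detection always, plus optional self-healing of data blocks if you keep extra copies:

        # Pool on top of the md RAID5 device (device name is an example)
        zpool create tank /dev/md0
        # Keep two copies of each data block so checksum errors can be healed
        zfs set copies=2 tank
        # Periodically verify every block against its checksum
        zpool scrub tank
        zpool status -v tank

    Note that copies=2 halves usable space for the datasets it applies to, which may defeat the point of adding disks one at a time.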

    Read the article

  • Prevent Roaming profiles from syncing certain elements

    - by user29919
    Hello everyone, I'm somewhat new to the Server 2008 front, and I'm afraid I've hit my first snag: I've set up roaming profiles, and they appear to be working too well. Is there a way to limit, ideally on a folder/object basis, what gets synced with a roaming profile? What I'm trying to do is:

    1) Stop my roaming profile from syncing desktop layout - I run a dual-screen desktop and a laptop, and it's really annoying to have to reposition everything after logging onto the laptop, because it forces everything onto one screen.
    2) Stop it from syncing registry variables - specifically, I want Visual Studio to load different settings files on each computer. Currently, the variable that contains that path is getting synced whenever I log in, so I get the settings from whatever box I last logged out from.
    3) Stop syncing the Start menu - this one's not as big, but I'm noticing 'program not found' icons even for programs that are installed. They work when I click them - they just look ugly.

    I'm running Windows SBS 2008 x64 with two Win7 clients (x86 Pro and x64 Ultimate). Is there a simple way to do that? Or am I trying to work too much against what roaming profiles are designed for? I could, of course, set up different profiles for the desktop and laptop, but that seems to defeat the point of roaming profiles entirely... Thanks in advance! Any help will be much appreciated =)
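
    On point 1 (and partially 3), there is a policy that trims folders out of the roaming profile - though note it works on folders only, so registry values like the Visual Studio path in point 2 will still roam inside NTUSER.DAT. A hedged sketch of what the setting writes (the folder list is just an example):

        ; GPO: User Configuration > Administrative Templates > System >
        ;      User Profiles > "Exclude directories in roaming profile"
        [HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "ExcludeProfileDirs"="Desktop;AppData\Local"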

    Read the article

  • SQL 2008 disk layout on a budget - this is for database mirroring

    - by user22215
    Guys, I'm rolling out a SQL database server that will be used to back SharePoint 2007, and right now I need some advice on my disk layout. I have two Dell servers that are configured a little differently in terms of storage. The principal server will be using a combination of local storage and SAN storage. I have to work with what I have; the organization is currently all allocated on SAN storage, and it was like pulling teeth to even get what I have to work with now. My disk setup on the principal is as follows:

    - RAID 1 for OS
    - RAID 10 for logs
    - RAID 10 fibre on SAN for high-IO databases
    - RAID 10 SATA on SAN for content databases

    My question in regards to the principal server is: where should I place the tempdb? I thought about placing it on the fibre RAID 10, which will be hosting my high-IO SharePoint SSP databases; my only other choice is to move it to the RAID 1 OS partition, which I'm sure you guys will be against. Now let's talk about the mirror server: it is not connected to the SAN; it is all local - six 15k SAS drives. Now my question is the same: do I put tempdb on the OS partition, or do I leave the OS partition alone and use a single RAID 10 for everything? Any help you can provide is much appreciated.

    Read the article

  • Portable USB drives hidden partition - New request

    - by ZXC
    This question was asked by Francesco on Jul 29 '11 at 17:14, and the replies were not satisfactory because they did not address an important problem, namely: why would anyone want to make certain data accessible to a program but not to the users? For example: if I want to do a safe distribution of original music for demonstration purposes, I have several requisites:

    1) The music should be heard using a simple procedure, like selecting the name of each song on a playlist of a media player.
    2) The portable media, usually a portable USB drive, must completely hide the files that contain the audio data and make them inaccessible to anything but the media player, which must be in the first partition, the one that is visible.
    3) Considering that it's impossible to really hide files in a non-hidden partition, a second, hidden partition should be created on the USB drive, and the audio data will be stored there.
    4) The trick is to read the audio data files stored in the hidden partition with a media player stored in the visible partition; the media player should also be a complete standalone program, independent of any library of the operating system except the OS audio system.
    5) The hidden partition should have a copy-protection scheme that prevents copying the data or creating working ISO images of it.

    I know this description may not be technically accurate, but it has a complete logic from the needs of a music producer facing the problem of piracy. The philosophy that surrounds the concept is to transform a virtual object, like a digital string of audio, into a solid object, like the analog vinyl discs.

    Read the article

  • AuthBasicProvider: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, this does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed;
        URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand?

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There's a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client (hopefully Mac OS X soon...). I want to set up encryption for home dirs on the NAS so that:

    - It does not interfere with the boot process (since the NAS is tucked away in a cupboard),
    - the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
    - it is easy to use by 'normal' people (so it must not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally umounting the partition after being done - I can't explain that to my mom, or in fact to anyone),
    - it does not store the encryption key on the NAS itself,
    - it encrypts file metadata and content (i.e. it is safe against the 'RIAA attack', where an intruder should not be able to identify which songs are in your MP3 collection).

    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the plain, and hence I can't configure PAM to use the incoming password to mount the encrypted partition. So... is there anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?
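
    One direction that gets close (a hedged sketch, not a turnkey answer): per-user eCryptfs home directories on the NAS keep the keys wrapped by the user's login password rather than stored on the box, and the mounted view is an ordinary directory Samba can export:

        # On the NAS (Ubuntu), one-time conversion of a user's home
        sudo apt-get install ecryptfs-utils
        sudo ecryptfs-migrate-home -u alice

    The catch is the one already identified above: pam_ecryptfs can only unwrap the passphrase for logins that present a plaintext password (console, SSH with password auth), while SMB's challenge-response never hands the server the password, so the share is only usable while a real login session holds the mount.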

    Read the article

  • (Mac Intel) HP PS driver prints in B&W from Adobe Reader after installing Canon PS driver

    - by JohnB
    I have a unique problem that leaves me at a loss as to where to start troubleshooting. We have three Macs we use for graphics, two of which are PowerPC and one of which is Intel. They are set up to print to an HP 5500dn, but occasionally this printer gets tied up with a massive print job, so I installed the PS driver (iR-PSv1.81MacOSX) for the Canon C3200 printer/copier on each of the machines. Both of the PowerPC Macs installed without issue, but the Intel Mac exhibits strange behavior:

    - I've confirmed that while the Canon driver is installed (whether or not the Canon is set up for printing in the print settings), the HP 5500dn will print in color from Safari, but only prints in black and white from Adobe Reader.
    - The Canon printer itself has not exhibited any strange behavior.
    - As soon as the Canon driver is uninstalled, the HP 5500dn prints in color from Adobe Reader again.

    We run a network of Windows PCs, and the 'Mac room' mostly takes care of itself, so we don't have any experienced Mac administrators onsite. The Canon is capable of AppleTalk, but the PS driver seemed easier to work with (and AppleTalk is currently disabled on the Canon). I'm not against using the AppleTalk-compatible drivers, but I would rather use the PS driver if at all possible - I don't want to open the proverbial can of worms. If someone has any clues or suggestions that would help troubleshoot this problem, I would be grateful. I've already done some googling, but due to the obscure nature of this problem, I haven't been very successful. I don't like to create multiple threads on multiple sites, but I'm posting here due to Chopper3's suggestion on my post on ServerFault (http://serverfault.com/questions/135349/mac-intel-hp-ps-driver-prints-in-bw-from-adobe-reader-after-installing-cannon)
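
    If it helps while troubleshooting, CUPS on OS X can show whether the HP queue's defaults changed when the Canon PPD arrived (a hedged sketch; the queue name is a guess - lpstat lists the real ones):

        # List queues, then dump the HP queue's options; look at ColorModel
        lpstat -p
        lpoptions -p HP_Color_LaserJet_5500 -l | grep -i color

    A default of Gray appearing only on the Intel Mac would at least localize the problem to the driver/PPD side rather than Adobe Reader.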

    Read the article

  • How can I use a DNS server to create simple HA (high availability) for my website?

    - by marc22
    Welcome. How can I use a DNS server to create simple HA (high availability) for a website? For example, if my web server at 192.168.0.120:80 is offline (for better understanding I use internal IPs; in reality these will be different hosting companies), traffic should go to 192.168.0.130:80.

    You are right, "high availability" is the wrong word; of course I was thinking about failover. Using a few IPs in A records is good for simple load balancing, but not in my case: I want to notify the user about the failure (for example, display a page saying "Oops, something is wrong with our server, we are working on it") instead of a "can't establish connection" error. I was thinking about setting up something like this: two DNS servers, one installed on the www server itself; both with a low TTL on my domain; and two NS records, the first pointing to the DNS server on my Apache server, the second to the other DNS server. If a user tries to connect, he will get the IP of the www server from the first DNS server. If that DNS server is offline (probably the www server is also down), the resolver will try the second NS record, which points to the other DNS server, and that DNS server will point to a "backup" page. That's what I would like to do. If you have another idea, please share. A reverse proxy is not an option, because the IP of the server can change, and I may use another country for the backup.
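
    For the record, the low-TTL half of the idea looks like this in a zone file (a hedged sketch; the records and addresses are from the example above). The TTL only bounds how long resolvers cache the answer; something still has to detect the failure and repoint the record, which is why hosted failover services run health checks for you:

        $TTL 60
        www    IN    A    192.168.0.120
        ; after failure is detected, republish as:
        ; www  IN    A    192.168.0.130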

    Read the article

  • Difference in performance: local machine VS amazon medium instance

    - by user644745
    I see a drastic difference in the performance matrix when I run apache benchmark (ab) on my local machine vs. production hosted on an Amazon medium instance. The same concurrency level (5) and the same total number of requests (111) were run against both. Amazon has more memory than my local machine, but there are 2 CPUs in my local machine vs. 1 CPU in m1.medium. My internet speed is very low at the moment; I am getting a transfer rate of 25.29 KB/s. How can I improve the performance? I also do not know how to interpret Connect, Processing, Waiting and Total in the ab output. Here is localhost:

        Server Hostname:        localhost
        Server Port:            9999

        Document Path:          /
        Document Length:        7631 bytes

        Concurrency Level:      5
        Time taken for tests:   1.424 seconds
        Complete requests:      111
        Failed requests:        102
           (Connect: 0, Receive: 0, Length: 102, Exceptions: 0)
        Write errors:           0
        Total transferred:      860808 bytes
        HTML transferred:       847155 bytes
        Requests per second:    77.95 [#/sec] (mean)
        Time per request:       64.148 [ms] (mean)
        Time per request:       12.830 [ms] (mean, across all concurrent requests)
        Transfer rate:          590.30 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.5      0       1
        Processing:    14   63  99.9     43     562
        Waiting:       14   60  96.7     39     560
        Total:         14   63  99.9     43     563

    And this is production:

        Document Path:          /
        Document Length:        7783 bytes

        Concurrency Level:      5
        Time taken for tests:   33.883 seconds
        Complete requests:      111
        Failed requests:        0
        Write errors:           0
        Total transferred:      877566 bytes
        HTML transferred:       863913 bytes
        Requests per second:    3.28 [#/sec] (mean)
        Time per request:       1526.258 [ms] (mean)
        Time per request:       305.252 [ms] (mean, across all concurrent requests)
        Transfer rate:          25.29 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      290  297  14.0    293     413
        Processing:   897 1178  63.4   1176    1391
        Waiting:      296  606 135.6    588    1171
        Total:       1191 1475  66.0   1471    1684
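
    One aside on reading the local numbers (this is standard ab behavior): "Failed requests: 102 ... (Length: 102)" does not mean 102 requests errored - it means the response body length differed from the first response, which is normal for dynamic pages. A quick check (URL from the test above):

        # If the byte counts differ between runs, the Length "failures" are benign
        for i in 1 2 3; do curl -s http://localhost:9999/ | wc -c; done

    As for the Connection Times table: Connect is the time to open the TCP connection, Processing is the time from sending the request to receiving the last byte, Waiting is the time to the first byte of the response, and Total is Connect plus Processing.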

    Read the article

  • Can one really fry a monitor by setting the wrong HorizSync and VertRefresh?

    - by rumtscho
    I've encountered this problem on several different systems with several different monitors: a monitor functions perfectly under Windows. I install a Linux, and the max resolution is stuck at some impossibly low value, mostly 640x480; changing it in Xorg.conf doesn't work. The X.org log file then shows that the driver cannot determine the correct refresh rate for the monitor, so it ignores everything in Xorg.conf and just loads some default minimalistic mode.

    Googling the problem leads to an easy solution: set HorizSync and VertRefresh in Xorg.conf, and everything works. The problem seems to be a common one, and I've seen dozens of results recommending this solution. Each of them contains the warning that you should use the value ranges provided with the monitor, because if you don't, and your video card sends a signal with the wrong refresh rate, this can damage your monitor. Of course, you don't have a user manual for your monitor any more. If you are lucky enough to find one in the attic or on the net, it doesn't contain any information about the supported refresh rates. So you just type in the values suggested in the solution description, which vary wildly depending on your source, and cross your fingers. You restart, and...

    ...you've set the wrong values. So the monitor shows a short message like "input signal out of range", you do a hard restart, repair your Xorg.conf in recovery mode, and everything is fine, including your monitor.

    So does this warning reflect a real possibility, or is it just a geeky urban legend? Or is it something which used to happen in the past, before manufacturers started protecting monitors against it? Is it technically possible with every monitor technology, or is it maybe something which can only happen to a CRT? If you think that it's true, why? Have you ever witnessed a monitor die from a wrong refresh config, or have you read of it in a reputable source?

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two tests:

    http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html
    http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html

    The two cons against the Drobo for me: loudness and price. What disadvantages does the unRAID stuff have against the Drobo FS? Does it also have that ease of use, like swapping drives on the go, simply extending capacity by plugging in new drives, notifying me of drive errors, disk-failure protection, dynamic space for "partitions", better/worse effective capacity, etc.? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails? Let's say the CPU overheats. Since I have a complete PC which is going to be replaced, I only have to pay for the software to use unRAID. I am going to use my NAS for:

    - music library (how well does it integrate with iTunes?)
    - picture library
    - movie library
    - development (I need to be able to use Time Machine)

    I am going to use this NAS with a MacBook Pro. My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?

    Read the article

  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs; however, I'm looking for advice on how to best set this all up. Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server (which I'd rather avoid, since it's a lot of work and a lot of money). That leaves the option of creating user accounts for these 15 Mac users on the Mac server only. The problem that this creates, though, is how do I share files between Mac users and Windows users, now that they are using different systems for network authentication? Any advice (short of upgrading to SBS Standard) is highly appreciated. Thanks, Harry. P.S. We don't run Exchange or anything else on our server... it's mainly used for file sharing and enforcing security via group policies.

    Read the article

  • Fix for php 5.3.9 libxsl security "bug" fix

    - by Question Mark
    Just this morning I updated my Debian server to PHP 5.3.9. The changelog (last item in the list) has a fix for this bug, and now when running any hosted site that uses XSL transforms I get:

        Warning: XSLTProcessor::transformToXml(): Can't set libxslt security properties,
        not doing transformation for security reasons

    I'm not using any <sax:output> tags in my XSLT at all. Does anybody have any information on this? Current chatter about it is thin, so I'm a little lost. Following the suggestion about switching ini settings on and off on either side of transformToXml(), neither

        ini_set("xsl.security_prefs", XSL_SECPREFS_NONE)

    nor

        $xsl->setSecurityPreferences(XSL_SECPREFS_NONE)

    helps; both bring me back to the same error. Many thanks.

    Progress: Upgrading libxml and recompiling libxslt against the new version was a good suggestion, though it has not fixed the issue. Compiling the latest PHP 5.3 snapshot does not fix the issue either.

    Solution: I'm unsure what actually solved this; I'm very sorry for anyone else having the same problem. First I upgraded libxml, then applied a few patches, then went into the PHP source for the XSL parser and added some debugging and a few tweaks. After a few compiles, once I got the configure args right, the error went away and wasn't reproducible. I would definitely recommend upgrading libxml as Petr suggested below and then grabbing the latest snapshot from php.net.

    Read the article

  • Application to automate Windows software installation in a test lab

    - by Marc
    I have several test environments (Hyper-V) which contain a variety of Windows servers. Each machine periodically needs rolling back to a given snapshot and then re-installing with the latest version of our software to test. The software installs are quite complex MSIs with a fair few option screens. I know that the installs can be driven from the command line, passing in parameters to override the wizard options.

    At the simplest level, I suppose I could just write a batch file to kick off each install with the required parameters; however, the values that are passed in do need to change from time to time (and environment to environment), so a tool with a config file and a simple GUI seems like a better idea. I think what makes it slightly more painful is the multiple environments. For example, one environment might contain 4 servers and need a config file with all the server names, service endpoints, etc. Another environment might be a 1-box install with all names and endpoints set to localhost.

    So, ideally I want to be able to store different setup configurations and use them to run all the required installers with the relevant settings against the relevant machines. Before I go off to write the thing, does anyone know of an existing, simple, free tool that will let me achieve this?
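
    Before reaching for a full tool, a thin wrapper may already cover most of this, since MSI public properties can be fed per environment from a settings script (a hedged sketch - the file names and property names are invented for illustration):

        :: settings-lab1.bat defines the per-environment values, e.g.
        ::   set APPSERVER=lab1-app01
        ::   set ENDPOINT=http://lab1-app01/service
        call settings-lab1.bat
        msiexec /i ourapp.msi /qn APPSERVER=%APPSERVER% ENDPOINT=%ENDPOINT% /l*v install-%APPSERVER%.log

    One settings file per environment, checked into source control, gets you the "config file" half; the GUI half is the part that likely needs an actual tool.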

    Read the article

  • DPM 2007 clashing with existing SQL backup job

    - by Paul D'Ambra
    I've recently installed a DPM2007 server on Server 2003 and have set up a protection group against a server 2003 server running SQL 2005 SP3. The SQL server in question has a full backup (as a sql agent job) once a day and transaction log backups hourly. These are zipped up and FTP'd to a server offsite by a scheduled task. Since adding the DPM job I'm receiving many error messages: DPM tried to do a SQL log backup, either as part of a backup job or a recovery to latest point in time job. The SQL log backup job has detected a discontinuity in the SQL log chain for database SERVER_NAME\DB_Name since the last backup. All incremental backup jobs will fail until an express full backup runs. My google-fu suggests that I need to change the full backup my sqlagent job is running to a copy_only job. But I think this means that I can't use that backup with the transaction_logs to restore the database if the building (including the DPM server) burns down. I'm sure I'm missing something obvious and thought I'd see what the hivemind suggests. It is an option to set-up a co-located DPM server elsewhere and have DPM stream the backup but that's obviously more expensive than the current set up. Many thanks in advance

    Read the article
