Search Results

Search found 32953 results on 1319 pages for 'access secret'.

Page 508/1319 | < Previous Page | 504 505 506 507 508 509 510 511 512 513 514 515  | Next Page >

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection over IPv4 or IPv6 whether configured via DHCP or statically. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The Ethernet controller is reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1, so I believe that may be intended. Does anyone have any experience with this that could assist?
      ifconfig eth2
        eth2 Link encap:Ethernet HWaddr a0:36:9f:14:5f:ea
          inet addr:192.168.101.1 Bcast:192.168.101.255 Mask:255.255.255.0
          UP BROADCAST MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
      ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes
      lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
        Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
        Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
        [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:
      NameVirtualHost 10.10.0.205
      <VirtualHost 10.10.0.205>
        ServerName *.test
        VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
        CustomLog /var/log/apache2/access.log vhost_combined
      </VirtualHost>
      <VirtualHost 10.10.0.205>
        ServerName *.dev
        VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        CustomLog /var/log/apache2/access.log vhost_combined
      </VirtualHost>
    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD? apachectl -S outputs (trimmed):
      10.10.0.205:* is a NameVirtualHost
        default server *.test
        port * namevhost *.test
        port * namevhost *.dev
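    One possible direction (a sketch, not the poster's confirmed solution): Apache matches wildcards in ServerAlias rather than ServerName, so giving each vhost a literal ServerName and putting the wildcard in a ServerAlias may make the selection behave as expected. The hostname patterns are the poster's; the exact layout below is an assumption.
      <VirtualHost 10.10.0.205>
        # Literal name for the vhost; the wildcard goes in ServerAlias
        ServerName test
        ServerAlias *.test
        VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
      </VirtualHost>
      <VirtualHost 10.10.0.205>
        ServerName dev
        ServerAlias *.dev
        VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
      </VirtualHost>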

    Read the article

  • certificate working on IP but not on URL

    - by Stephan
    I asked this question on Stack Overflow, and I was advised to repost it here. I have a problem accessing my site (over https) with IEMobile 9 (WP 7.5). It reports a problem with the certificate, as if it weren't valid. Everything works on every other browser or platform I tested (Android (several phones and a Galaxy Tab with the stock browser, Firefox, Opera, Dolphin), iOS (iPhone and iPad with Safari and Chrome), an old Nokia with Symbian, Windows 7, Linux and Mac). To try to solve this I saved the certificate (.cer) on the server and accessed it from the phone browser. It always complained, except when I accessed it through the server IP (192.168.xx.xx). At that point it (said it) installed the certificate correctly. If I then try to access index.html still using the IP, everything works fine and it doesn't complain about the certificate. If, though, I try to access the index using the actual URL (blah.myblah.com), it complains again about the certificate, as if it weren't installed! It isn't a DNS problem, because that's up and serving the right IP, and the phone is correctly set up to use it. The certificate is signed by GeoTrust/RapidSSL for *.myblah.com.
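    A diagnostic sketch (an assumption about a likely cause, not the poster's confirmed fix): mobile browsers are often stricter about incomplete certificate chains than desktop ones, so it is worth checking from another machine whether the server actually sends the RapidSSL intermediate certificate. The hostname below is the placeholder from the question.
      openssl s_client -connect blah.myblah.com:443 -showcerts
      # If only the leaf certificate appears in the chain, append the issuing
      # intermediate (CA bundle) to the certificate file the web server serves.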

    Read the article

  • Sending emails with Thunderbird + Postfix + Zarafa does not work

    - by Sven Jung
    I installed Zarafa on my vserver and use Postfix as the MTA. The web access works fine: I can receive and send emails, and receiving mail with Thunderbird (IMAP over SSL/TLS) also works. The problem is sending email with Thunderbird. I set up an account in Thunderbird with an IMAP SSL/TLS connection, which works fine, and a STARTTLS SMTP connection on port 25 for the outgoing mail server. If I try to send an email with Thunderbird I get an error: 5.7.1 Relay access denied. This is my mail.log:
      Sep 7 16:10:07 postfix/smtpd[6153]: connect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]
      Sep 7 16:10:08 postfix/smtpd[6153]: NOQUEUE: reject: RCPT from p4FE06C0A.dip.t-dialin.net[79.224.110.10]: 554 5.7.1 <[email protected]>: Relay access denie$
      Sep 7 16:10:10 postfix/smtpd[6153]: disconnect from p4FE06C0A.dip.t-dialin.net[79.224.110.10]
    and this is my /etc/postfix/main.conf:
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_use_tls=yes
      smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      virtual_mailbox_domains = firstdomain.de, seconddomain.de
      virtual_mailbox_maps = hash:/etc/postfix/virtual
      virtual_alias_maps = hash:/etc/postfix/virtual
      virtual_transport = lmtp:127.0.0.1:2003
      myhostname = mail.firstdomain.de
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      myorigin = /etc/mailname
      mydestination = localhost
      relayhost =
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      inet_interfaces = all
      inet_protocols = ipv4
    I don't know what to do, because sending mail to internal and external addresses actually works from the web access. Perhaps somebody can help me?
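    A possible direction (a sketch under the assumption that the rejection happens because the Thunderbird client is neither listed in mynetworks nor authenticated): Postfix only relays for trusted or authenticated clients, so enabling SMTP AUTH and permitting authenticated clients to relay is the usual fix. The choice of SASL backend (Dovecot here) is an assumption.
      # main.cf additions (sketch)
      smtpd_sasl_type = dovecot
      smtpd_sasl_path = private/auth
      smtpd_sasl_auth_enable = yes
      smtpd_recipient_restrictions = permit_mynetworks,
          permit_sasl_authenticated, reject_unauth_destination
    Thunderbird then needs password authentication enabled on the outgoing server, so that it authenticates before relaying.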

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members, and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection, and cover the following topics:
    SOA Practitioner Guides
    Creating an SOA Roadmap (PDF, 54 pages, published: February 2012) The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects.
    A Framework for SOA Governance (PDF, 58 pages, published: February 2012) Successful SOA requires a strong governance strategy that designs in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc., which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model.
    Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012) SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio.
    Identifying and Discovering Services (PDF, 64 pages, published: March 2012) What services should we build? How can we promote the reuse of existing services? A sound approach to answering these questions is a primary measure of the success of an SOA initiative. This document describes a pragmatic approach to collecting the necessary information for identifying proper services and facilitating service reuse.
    Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012) Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative.
    SOA Reference Architectures
    SOA Foundation (PDF, 70 pages, published: February 2012) This document describes the key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards.
    SOA Infrastructure (PDF, 86 pages, published: February 2012) Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping.
    SOA White Papers and Data Sheets
    Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012) Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This executive whitepaper describes Oracle's proven approach to SOA.
    Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012) SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update

    - by tags2k
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK user to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the app domains failed to initialize. I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My major suspect at the moment is the 3.5 update, as all of the sites on the server are running 3.5.
      Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
      Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
      Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
      Windows Server 2003 Security Update for Windows Server 2003 (KB958687)
    Thanks for any light you can shed on this.
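    A sketch of one way to re-grant the account rights if this recurs (assuming the affected identity really is NETWORK SERVICE; the framework path is the .NET 2.0 default, and the site folder path is hypothetical):
      REM Grant the ASP.NET process account access to the metabase and framework folders
      "%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -ga "NT AUTHORITY\NETWORK SERVICE"
      REM Re-apply NTFS read permissions on a site folder (cacls ships with Server 2003)
      cacls "D:\websites\example" /E /T /G "NT AUTHORITY\NETWORK SERVICE":R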

    Read the article

  • Fedora 12 on Vmware network disabled on restore

    - by Chaitanya
    I have a Fedora 12 guest running on VMware Player on Windows 7. I use it mainly for occasional Linux development. Whenever I restart the guest, networking works fine. But if I close VMware Player and save state, then the next time I start the image networking is disabled (a red X on the network icon and a message saying networking is disabled). I can't seem to find a way to restore networking; I have to reboot the guest to get network access back again. My Ubuntu image doesn't have this problem: I can close the player, and when I re-run the image I can pick up where I left off, with all the open Firefox windows and application windows as I left them. Fedora saves state, but doesn't seem to re-enable networking. There is a relevant warning I have seen: "SELinux is preventing /sbin/ifconfig "read" access to /var/run/vmware-active-nics." But I am not sure how to solve it. I know Fedora isn't officially supported by VMware, but it seems to be working fine for the most part and meeting my needs, except for this one little issue. Any help would be much appreciated.
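    A sketch of how that SELinux denial could be turned into a local policy module (assuming the audit log contains the matching AVC; audit2allow comes from the usual Fedora policycoreutils-python package):
      # Find the denial and generate a local policy module from it
      ausearch -m avc -c ifconfig | audit2allow -M vmwarenics
      semodule -i vmwarenics.pp
    Re-labelling the file first (restorecon -v /var/run/vmware-active-nics) is a gentler option if the denial is only caused by a wrong file context.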

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a Unix socket file. To start the fcgiwrap daemon, I run (as a daemontools service):
      setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"
    The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for non-owners. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of Unix socket files, which is quite annoying. Moreover, if I manage to make the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:
      Failed to bind: Address already in use
    The only solution I found is to start the server the following way:
      rm -f fastcgi.sock # Ensure that the socket doesn't already exist
      (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
      exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"
    which is far from the most elegant solution. Can you think of anything better? Thanks
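    One cleaner possibility (a sketch, assuming spawn-fcgi is available; the fcgiwrap path is the usual Debian one and the rest are the poster's): let spawn-fcgi create the socket with the desired owner, group and mode, and have it exec fcgiwrap:
      # -U/-G set the socket's owner/group, -M its mode, -u/-g the process identity,
      # -n keeps it in the foreground for daemontools
      exec spawn-fcgi -n -s "$PWD/fastcgi.sock" -U git -G www-data -M 0660 \
          -u git -g git -- /usr/sbin/fcgiwrap
    The socket then comes up group-writable by www-data without the sleep/chgrp race.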

    Read the article

  • django : nginx : jquery css not being served

    - by PlanetUnknown
    I'm using Apache + mod_wsgi for Django, and all CSS/JS/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, the jQuery/CSS is not loaded for them, so the page looks jumbled up. My HTML files use code like this:
      <link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
      <script type="text/javascript" src="http://x.x.x.x:8000/js/custom.js"></script>
    My nginx configuration in sites-available is like this:
      server {
        listen 8000;
        server_name localhost;
        access_log /var/log/nginx/aa8000.access.log;
        error_log /var/log/nginx/aa8000.error.log;
        location / {
          index index.html index.htm;
        }
        location /static/ {
          autoindex on;
          root /opt/aa/webroot/;
        }
      }
    There is a directory /opt/aa/webroot/static/ which has the corresponding css and js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the page loads fine for me from various browsers. Also, I don't see any 404s or other errors in the nginx log files. Actually, the logs for nginx are not getting refreshed at all. I restarted the nginx server as root; is that incorrect? There is a user www-data defined in the nginx configuration file. Any pointers would be great.
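    A diagnostic sketch (an assumption about where to look, not a confirmed cause): if the nginx access log never moves when a colleague loads the page, their browser is probably not reaching port 8000 at all, which points at a firewall or at the hard-coded http://x.x.x.x:8000 URLs rather than at the nginx config itself.
      # From a colleague's machine, request one asset directly
      curl -I http://x.x.x.x:8000/css/custom.css
      # On the server, confirm nginx is listening on 8000 on all interfaces
      netstat -tlnp | grep :8000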

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows. In /etc/apache2/httpd.conf:
      LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog
    In /etc/apache2/sites-enabled/app.example.com:
      ServerName app.example.com
      ServerAlias *.app.example.com
      ...
      CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog
    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now %v appears to match the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:
      LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog
    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. The only concern I have is that this worked in the past and I can't find any indication that the behaviour has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger with %v? Is it working for you?
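    For reference, a hedged note on the format specifiers (standard Apache 2.2 behaviour, not specific to this setup): %v always logs the canonical ServerName of the vhost that served the request, while %V honours the UseCanonicalName setting, so with UseCanonicalName Off it logs the hostname the client supplied, much like %{Host}i.
      # Sketch: per-request hostname in the first field
      UseCanonicalName Off
      LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog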

    Read the article

  • VLANs and subinterfaces

    - by Adeodatus
    I've inherited a moderate-size network that I'm trying to bring some sanity to. Basically, it's 8 public class Cs and a slew of private ranges, all on one VLAN (vlan1, of course). Most of the network is located throughout dark sites. I need to start separating some of the network. I've changed the ports from the main Cisco switch (3560) to the Cisco router (3825) and the other remote switches to trunking with dot1q encapsulation. I'd like to start moving a few select subnets to different VLANs. To get some of the different services provided on our address space (and to separate customers) onto different VLANs, do I need to create a subinterface on the router for each VLAN, and if so, how do I get the switch port to work on a specific VLAN? Keep in mind these are dark sites and getting console access is difficult if not impossible at the moment. I was planning on creating a subinterface on the router for each VLAN, then setting the ports with services I want to move to a different VLAN to allow only that VLAN. Example of vlan3 on the 3825:
      interface GigabitEthernet0/1.3
        description Vlan-3
        encapsulation dot1Q 3
        ip address 192.168.0.81 255.255.255.240
    The connection between the switch and router:
      interface GigabitEthernet0/48
        description Core-router
        switchport trunk encapsulation dot1q
        switchport mode trunk
      show interfaces gi0/48 switchport
        Name: Gi0/48
        Switchport: Enabled
        Administrative Mode: trunk
        Operational Mode: trunk
        Administrative Trunking Encapsulation: dot1q
        Operational Trunking Encapsulation: dot1q
        Negotiation of Trunking: On
        Access Mode VLAN: 1 (default)
        Trunking Native Mode VLAN: 1 (default)
        Administrative Native VLAN tagging: enabled
        Voice VLAN: none
        Administrative private-vlan host-association: none
        Administrative private-vlan mapping: none
        Administrative private-vlan trunk native VLAN: none
        Administrative private-vlan trunk Native VLAN tagging: enabled
        Administrative private-vlan trunk encapsulation: dot1q
        Administrative private-vlan trunk normal VLANs: none
        Administrative private-vlan trunk private VLANs: none
        Operational private-vlan: none
        Trunking VLANs Enabled: ALL
        Pruning VLANs Enabled: 2-1001
        Capture Mode Disabled
        Capture VLANs Allowed: ALL
        Protected: false
        Unknown unicast blocked: disabled
        Unknown multicast blocked: disabled
        Appliance trust: none
    So, if the boxen hanging off of gi0/18 on the 3560 are on an unmanaged layer-2 switch, are all within the 192.168.0.82-95 range, and are using 192.168.0.81 as their gateway, what is left to do, especially on gi0/18, to get this working on vlan3? Are there any recommendations for a better setup without taking everything offline?
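    A sketch of the usual missing piece (standard IOS access-port configuration; the VLAN must also exist in the switch's VLAN database, and the port and VLAN numbers below are the poster's):
      vlan 3
        name Vlan-3
      interface GigabitEthernet0/18
        switchport mode access
        switchport access vlan 3
    With gi0/18 placed in VLAN 3, traffic from those hosts is tagged onto the trunk and routed by the Gi0/1.3 subinterface.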

    Read the article

  • RAID administration in Debian Lenny

    - by Siim K
    I've got an old box that I don't want to scrap yet because it's got a nice working 5-disk RAID assembly. I want to create 2 arrays: RAID 1 with 2 disks and RAID 5 with the other 3 disks. The RAID card is an Intel SRCU31L. I can create the RAID 1 volume in the console that you access with Ctrl+C at startup, but it only allows creation of one volume, so I can't do anything with the 3 remaining disks. I installed Debian Lenny on the RAID 1 volume and it worked out nicely. What utilities could I now use to create/manage the RAID volumes in Debian Linux? I installed the raidutils package but get an error when trying to fetch a list with "raidutil -L controller" or "raidutil -L physical":
      # raidutil -L controller
      osdOpenEngine : 11/08/110-18:16:08 Fatal error, no active controller device files found.
      Engine connect failed: Open
    What could I try to get this thing working? Can you suggest any other tools? The command "lspci -vv" gives me this about the controller:
      00:06.1 I2O: Intel Corporation Integrated RAID (rev 02) (prog-if 01)
        Subsystem: Intel Corporation Device 0001
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 64, Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 26
        Region 0: Memory at f9800000 (32-bit, prefetchable) [size=8M]
        [virtual] Expansion ROM at 30020000 [disabled] [size=64K]
        Capabilities: <access denied>
        Kernel driver in use: PCI_I2O
        Kernel modules: i2o_core

    Read the article

  • Tips and suggestions for IP address re-addressing?

    - by RSXAdmin
    Hello Server Fault universe, My ever-evolving and expanding local area network is currently using class-C addresses. My network consists of multiple subnets depending on site/location:
      192.168.1.x is site HQ
      192.168.5.x is the secondary site
      192.168.10.x is so on and so forth
    Long story short: I have inherited this network design from the previous admin, who has left the company, which started off with a dozen people and now has just over 300 full-time/part-time employees. We do not yet have client VPN access, but we do have site-to-site VPN set up. My question is: in preparation for outside client access to my network via a Cisco ASA, I would like to re-address the HQ site, because I understand 192.168.1.x or 192.168.0.x are not very good choices for a company subnet; they may conflict with a home user's LAN when connecting to my LAN. Through your experience, does anyone out there have any suggestions and tips on how I can proceed with re-addressing my subnets? If I had designed this network I would have gone with 10.0.0.0 subnets (mask 255.255.255.0), so I am leaning towards changing it to fit. Thank you.

    Read the article

  • Fail2Ban adds iptables rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP for 3 SSH attempts. It added the iptables rule and I can see it using the "sudo iptables -L -n" command. But I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set nginx to write the real IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?
      Chain fail2ban-ssh (1 references)
      target     prot opt source        destination
      DROP       all  --  119.235.14.8  0.0.0.0/0
      RETURN     all  --  0.0.0.0/0     0.0.0.0/0
    The INPUT chain:
      Chain INPUT (policy DROP)
      target                     prot opt source     destination
      fail2ban-NoAuthFailures    tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
      fail2ban-nginx-dos         tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,8090
      fail2ban-postfix           tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 25,465
      fail2ban-ssh-ddos          tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
      fail2ban-ssh               tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
      ufw-before-logging-input   all  --  0.0.0.0/0  0.0.0.0/0
      ufw-before-input           all  --  0.0.0.0/0  0.0.0.0/0
      ufw-after-input            all  --  0.0.0.0/0  0.0.0.0/0
      ufw-after-logging-input    all  --  0.0.0.0/0  0.0.0.0/0
      ufw-reject-input           all  --  0.0.0.0/0  0.0.0.0/0
      ufw-track-input            all  --  0.0.0.0/0  0.0.0.0/0
      LOG                        all  --  0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
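    A diagnostic sketch (an assumption about the likely cause, not a confirmed answer): the DROP rule only bites if the live SSH session really arrives from 119.235.14.8, so it is worth confirming which source address the server sees for your connection, and whether the ban rule is ever matched.
      # Which address is this SSH session coming from?
      echo $SSH_CLIENT
      # Packet/byte counters show whether the DROP rule ever matches
      sudo iptables -L fail2ban-ssh -n -v
    CloudFlare only proxies HTTP/HTTPS, so the "real IP" rewrite in the nginx logs affects the web jails, not the SSH jail.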

    Read the article

  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: a number of developers all need to be able to install Ruby gems and Python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want to be able to install things hosted on that server from outside our firewall. Since some of the gems and eggs that we host are proprietary, I would like to lock down access to that machine somewhat, as unobtrusively as possible for the developers. My first thought was using something like SSH keys, so I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh-agent that I can set up to provide that information automatically, so that I can push the certificates and keys to the developers' machines and the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the HTTP connection. Any other suggestions that may solve this problem (not related to SSL mutual auth) are also welcome. EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for; any feedback regarding stunnel would also be great!
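    A sketch of the stunnel idea the poster mentions (hostnames, paths and ports here are hypothetical): stunnel runs on each developer machine as an SSL client that holds the certificate and key, so gem/pip can talk plain HTTP to a local port while stunnel performs the mutual-TLS handshake to the gem server.
      ; /etc/stunnel/gems.conf (client mode)
      client = yes
      [gems]
      accept  = 127.0.0.1:8443
      connect = gems.example.com:443
      cert    = /etc/stunnel/dev-client.pem
      key     = /etc/stunnel/dev-client.key
      CAfile  = /etc/stunnel/internal-ca.pem
      verify  = 2
    Gem sources and pip index URLs would then point at http://127.0.0.1:8443/, and the certificate never has to be supplied on each command line.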

    Read the article

  • Why is my biometric logon method no longer the default, and how do I fix it? (Windows 7-based Lenovo laptop)

    - by StormRyder
    My laptop has been having some random problems with hibernating properly. That is a whole other topic that I still haven't resolved, but this is a different issue. The issues are connected, I guess, because one time after my computer experienced a failed hibernate, my login screen changed: since then my login screen always appears as the standard one with a prompt to type in a password. I can still use the finger scanner by clicking the "other credentials" button, but it's annoying having to do that every time; previously the prompt to use the finger scanner was the default, and the typed-password access was the alternative. How do I bring this arrangement back? In other words, how do I switch the default from the typed-password prompt to the finger-scan prompt? From online searches, I have only found discussions of turning biometric access on or off... but clearly it is turned on and working, since I can use it. It's just not the default for some reason...

    Read the article

  • Options for a small Windows network setup without a dedicated server?

    - by Mitch
    I'm very weak on networking and hope someone can point me in the right direction: I have written some windows client/server software which incorporates a database which is located on a windows server. I have a test installation running at a customer's office where the server has a static IP address. In this case its easy for the clients to access the database because of the fixed IP address. Also, customers with network servers generally have specialist support staff to set up my software, so its not such a problem for me. However I also need to offer the software to customers who have small offices with less than 10 PCs and no dedicated network server. In this case I want the customer to be able to nominate one PC as the database "server" and install my software and have the clients access it. But in this situation I believe the "server" PC may not have a dedicated IP address. Q1: What is the best way to set this up simply and make it work? Can I reliably reference the "server" by using its name, or is there a way to assign dummy fixed IP addresses? Ideally this needs to be workable on small networks running a mixture of XP/Vista/Windows7 as my target market may well have mixed OSes etc. I guess this would be akin to home networking? Many thanks Mitch

    Read the article

  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OS X ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split routing, and the client's VPN host is not willing to enable split routing. Is there a way for me to override this configuration, or to do something on my workstation so that non-client network traffic bypasses the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology.
    ** edit ** The Juniper client changes my system's resolv.conf file from:
      nameserver 192.168.0.1
    to:
      search XXX.com [redacted]
      nameserver 10.30.16.140
      nameserver 10.30.8.140
    I've attempted to restore my preferred DNS entry to the file:
      $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf
    but this results in the following error:
      -bash: /etc/resolv.conf: Permission denied
    How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to it?
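    A side note on that "Permission denied" (a general shell point, not a fix for the split-routing itself): the ">>" redirection is performed by the unprivileged shell before sudo ever runs, so only the echo is elevated. Writing through tee keeps the append under sudo:
      echo "nameserver 192.168.0.1" | sudo tee -a /etc/resolv.conf
    On OS X the supported way to change DNS servers is networksetup (e.g. "networksetup -setdnsservers Wi-Fi 192.168.0.1", with your actual network service name), since resolv.conf is largely informational there.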

    Read the article

  • Mac OS X Lion Apache Server not Found

    - by Burak Erdem
    After upgrading to Lion 10.7.2 today, my Apache virtual hosts are not working anymore. When I go to http://XYZ.localhost, it says "server not found". I am using Apache on Mac OS X Lion, and until today it was working fine. I can access http://localhost but I can't access http://XYZ.localhost. My /etc/hosts file is like below:
      127.0.0.1 XYZ.localhost
    My /etc/apache2/extra/httpd-vhosts.conf file is like below:
      <VirtualHost *:80>
        ServerName XYZ.localhost
        DocumentRoot /Library/WebServer/Documents/XYZ
        <Directory /Library/WebServer/Documents/XYZ>
          DirectoryIndex index.php
          AllowOverride All
          Order allow,deny
          Allow from all
        </Directory>
      </VirtualHost>
    I think I once had this problem after another OS X update, but I can't remember how I solved it. Is it a user permission issue? Or is there something wrong with Apache or some other setting? EDIT: It seems like my /etc/hosts file is not working correctly. Even if I add something like "127.0.0.1 apple.com" it still goes to the real apple.com. Maybe this might help to solve the problem.
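    A sketch of the usual first checks when /etc/hosts seems to be ignored on Lion (general OS X troubleshooting, not a confirmed fix for this case): flush the directory-services cache and confirm the hosts entry is actually consulted.
      # Flush the resolver cache on 10.7
      sudo killall -HUP mDNSResponder
      dscacheutil -flushcache
      # Confirm the entry resolves as expected
      dscacheutil -q host -a name XYZ.localhost
    If dscacheutil returns 127.0.0.1 but the browser still resolves externally, a configured proxy (which bypasses /etc/hosts) is a likely culprit.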

    Read the article

  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up:
      <location path="MyWebService.asmx">
        <system.webServer>
          <security>
            <access sslFlags="Ssl, SslNegotiateCert"/>
            <authentication>
              <windowsAuthentication enabled="false"/>
              <anonymousAuthentication enabled="false"/>
              <digestAuthentication enabled="false"/>
              <basicAuthentication enabled="false"/>
              <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true">
                <oneToOneMappings>
                  <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/>
                </oneToOneMappings>
              </iisClientCertificateMappingAuthentication>
            </authentication>
          </security>
        </system.webServer>
      </location>
    The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?
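    Two hedged observations (assumptions based on how IIS one-to-one mapping normally works, not a verified diagnosis of this setup): the <add> element usually also needs a Windows account to map the certificate to, and the sections involved are locked at the server level by default, so they must be unlocked (or set in applicationHost.config) before a web.config <location> can override them.
      REM Unlock the sections so web.config may set them (run from an elevated prompt)
      %windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/authentication/iisClientCertificateMappingAuthentication
      %windir%\system32\inetsrv\appcmd unlock config /section:system.webServer/security/access
    and, in the mapping itself, something like <add enabled="true" userName="DOMAIN\certuser" password="..." certificate="MIIC..."/>, where certuser is a hypothetical account created for this purpose.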

    Read the article

  • nginx: how do I add a new site/server_name in nginx?

    - by Neo
    I'm just starting to explore nginx on my Ubuntu 10.04. I installed nginx and I'm able to get the "Welcome to nginx" page on localhost. However, I'm not able to add a new server_name, even when I make the changes in the sites-available/default file. I've tried reloading/restarting nginx, but nothing works. One interesting observation: "http://mycomputername" in the browser works, so somehow a directive like 'server_name $hostname' is overriding my rule somewhere. File sites-available/mine.enpass:
      server {
        listen 80;
        server_name mine.enpass;
        access_log /var/log/nginx/localhost.access.log;
        location / {
          root /var/www/nginx-default;
          index index.html index.htm;
        }
      }
    File nginx.conf:
      user www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;
      events {
        worker_connections 1024;
        # multi_accept on;
      }
      http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;
        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_comp_level 2;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
      }
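    A sketch of the usual steps for a new nginx site on Ubuntu (assuming the made-up .enpass name is only meant to resolve locally; nothing here is specific to the poster's box beyond the file name):
      # The config in sites-available only takes effect once linked into sites-enabled
      sudo ln -s /etc/nginx/sites-available/mine.enpass /etc/nginx/sites-enabled/mine.enpass
      # The invented hostname has to resolve somewhere, e.g. via /etc/hosts
      echo "127.0.0.1 mine.enpass" | sudo tee -a /etc/hosts
      # Validate and reload
      sudo nginx -t && sudo service nginx reload
    nginx picks the server block whose server_name matches the Host header; with no match it falls back to the default (first) server for that listen port, which is why the hostname-based page still answers.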

    Read the article

  • Multiple domains on my dedicated server with Apache2

    - by x4vier
    I set up a server with Ubuntu 10.04 Server Edition. It has worked for a long time with a single domain name. Now I want to add another domain which will point to a new directory. I tried to change my Apache2 configuration but it does not seem to work properly. Here is my /etc/apache2/sites-available/default:
      <VirtualHost *:80>
        DocumentRoot /var/www/
        <Directory />
          Options FollowSymLinks
          AllowOverride None
        </Directory>
        <Directory /var/www/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride None
          Order allow,deny
          allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
          AllowOverride None
          Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
          Order allow,deny
          Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
          Options Indexes MultiViews FollowSymLinks
          AllowOverride None
          Order deny,allow
          Deny from all
          Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
      </VirtualHost>
      <VirtualHost *:80>
        ServerName mydomain.com
        ServerAlias www.mydomain.com
        DocumentRoot /var/www/mydomain
      </VirtualHost>
    Here is my /etc/hosts:
      127.0.0.1 localhost
      **.***.133.29 sd-***.****.fr sd-****
      **.***.133.29 mediousgame.com
      # The following lines are desirable for IPv6 capable hosts
      ::1 localhost ip6-localhost ip6-loopback
      ****::0 ip6-localnet
      ****::0 ip6-mcastprefix
      ****::1 ip6-allnodes
      ****::2 ip6-allrouters
      ****::3 ip6-allhosts
    With this configuration, when I try to access mydomain it serves the /var/www/ content instead. Do you have any idea how to make it serve the right folder?
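    A hedged sketch of the usual culprit on Apache 2.2 (an assumption, since the poster's ports.conf isn't shown): name-based vhosts on *:80 only take effect when NameVirtualHost *:80 is declared (on Ubuntu this normally lives in /etc/apache2/ports.conf); without it the first vhost, the catch-all default, answers every request regardless of ServerName.
      # /etc/apache2/ports.conf should contain:
      NameVirtualHost *:80
      Listen 80
      # then check the vhost layout and reload
      sudo apache2ctl -S
      sudo apache2ctl configtest && sudo /etc/init.d/apache2 reload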

    Read the article

  • Adding users to SharePoint when they are not in the same domain

    - by jim-work
    Bear with me as I explain this; I'm working my way through SharePoint access as I go, but I'll clarify my question as I go along. The Problem: We have about 10,000 users who need access to our SharePoint 2005-based reporting. Because our organization is migrating from one domain to another, we need to add each user twice, once for each domain. For the current domain this is no problem: we've got a PowerShell script that I tweaked to add all the users in a given CSV file, and it takes about 5 minutes to run. The big problem we're having is with users who are NOT in our currently active domain. Because the SharePoint server cannot authenticate the new users, we can't add them directly. What we're doing is creating a temp user, then using STSADM.EXE to migrate that temp user to the proper domain/user_name for each of our 10,000 users. The creation and migration takes about 5 seconds per user, or well over 12 hours to run. The Question: Has anyone encountered this before? Is there a way to add users without requiring AD authentication? Why is STSADM.EXE running so slowly? Thanks a lot for any advice or direction anyone can give me.

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control.
    But there is a current use of the word “Cloud” that does not necessarily mean that the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – which might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.
    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”.
    A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering in this space is based around System Center; they have a “cloud in a box” provisioning system that’s actually pretty slick.
    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways, and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.
    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can just put that right on your resume.

    Read the article

  • Possible DNS Injection and/or SSL hijack?

    - by Anthony
    So if I go to my site without indicating the protocol, I'm taken to http://example.org/test.php. But if I go directly to https://example.org/test.php, I get a 404 back. If I go to just https://example.org, I get a totally different site (a page about martial arts). I went to the site via https not very long ago (maybe a week?) and it was fine. This is a shared server, as I understand it, and I do not have shell access, so I'm limited to the site's cPanel to do any further investigation. But when I go to example.org:2083, I'm taken to https://example.org:2083, which, if someone has taken over the SSL port, could mean they have taken over the 2083 part as well (at least in my paranoid mind). I'm made more nervous by the fact that the cPanel login page at the above address looks very new (better, really) compared to the last time I went to it over the weekend. It's possible that wires got crossed somewhere after a system update, but I don't want to put in my username and password in case it's a phishing attempt. Is there any way to know for sure, without shell access, whether someone has taken over? If I look up the IP address for the host name, the IP address matches what I have on a phpinfo page I can get to over http. If I go to the IP address directly on port 2083, I get the same login mentioned above (new and suspiciously nice), but the SSL cert shows as good when I go this route. So if that's the case (I know the IP is right, the cert checks out, and there isn't any DNS involved), is that enough to feel safe at that point of entry? Finally, if I can safely log in via the IP, does anyone have any advice on where to check first in cPanel for why the SSL port is forwarding to a karate site? Thanks.
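    A sketch of a check that doesn't require shell access on the server (run from any machine you trust; example.org stands in for the real domain, as in the question): compare the certificates presented on 443 and on 2083 and see which names and issuers each one carries.
      openssl s_client -connect example.org:443 -servername example.org </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer -fingerprint -dates
      openssl s_client -connect example.org:2083 </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -issuer -fingerprint -dates
    If port 443 serves a certificate for someone else's domain, the most common mundane explanation on shared hosting is that HTTPS for that IP is simply mapped to another account's virtual host, rather than an actual hijack.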

    Read the article
