Search Results

Search found 6834 results on 274 pages for 'dojo require'.


  • Getting 401 when using client certificate with IIS 7.5

    - by Jacob
    I'm trying to configure a web site hosted under IIS 7.5 so that requests to a specific location require client certificate authentication. With my current setup, I still get a "401 - Unauthorized: Access is denied due to invalid credentials" when accessing the location with my client cert. Here's the web.config fragment that sets things up: <location path="MyWebService.asmx"> <system.webServer> <security> <access sslFlags="Ssl, SslNegotiateCert"/> <authentication> <windowsAuthentication enabled="false"/> <anonymousAuthentication enabled="false"/> <digestAuthentication enabled="false"/> <basicAuthentication enabled="false"/> <iisClientCertificateMappingAuthentication enabled="true" oneToOneCertificateMappingsEnabled="true"> <oneToOneMappings> <add enabled="true" certificate="MIICFDCCAYGgAwIBAgIQ+I0z6z8OWqpBIJt2lJHi6jAJBgUrDgMCHQUAMCQxIjAgBgNVBAMTGURldiBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkwHhcNMTAxMjI5MjI1ODE0WhcNMzkxMjMxMjM1OTU5WjAaMRgwFgYDVQQDEw9kZXYgY2xpZW50IGNlcnQwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBANJi10hI+Zt0OuNr6eduiUe6WwPtyMxh+hZtr/7eY3YezeJHC95Z+NqJCAW0n+ODHOsbkd3DuyK1YV+nKzyeGAJBDSFNdaMSnMtR6hQG47xKgtUphPFBKe64XXTG+ueQHkzOHmGuyHHD1fSli62i2V+NMG1SQqW9ed8NBN+lmqWZAgMBAAGjWTBXMFUGA1UdAQROMEyAENGUhUP+dENeJJ1nw3gR0NahJjAkMSIwIAYDVQQDExlEZXYgQ2VydGlmaWNhdGUgQXV0aG9yaXR5ghB6CLh2g6i5ikrpVODj8CpBMAkGBSsOAwIdBQADgYEAwwHjpVNWddgEY17i1kyG4gKxSTq0F3CMf1AdWVRUbNvJc+O68vcRaWEBZDo99MESIUjmNhjXxk4LDuvV1buPpwQmPbhb6mkm0BNIISapVP/cK0Htu4bbjYAraT6JP5Km5qZCc0iHZQJZuch7Uy6G9kXQXaweJMiHL06+GHx355Y="/> </oneToOneMappings> </iisClientCertificateMappingAuthentication> </authentication> </security> </system.webServer> </location> The client certificate I'm using in my web browser matches what I've placed in the web.config. What am I doing wrong here?
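
    Since the one-to-one mapping is matched against the exact base64 encoding of the client certificate, it can be worth re-exporting the certificate and diffing it against the value in web.config. A quick sketch (certutil ships with Windows; the file names are placeholders):

        rem Export the client certificate as a DER-encoded .cer from certmgr.msc first, then:
        certutil -encode devclient.cer devclient.b64
        rem Compare the base64 body of devclient.b64 (minus the BEGIN/END lines) against the certificate="..." value

    It is also worth confirming that the iisClientCertificateMappingAuthentication section is unlocked at the server level; the authentication sections are locked by default in applicationHost.config.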

    Read the article

  • Windows 7, network shares, and authentication via local group instead of local user

    - by Donovan
    I have been doing some troubleshooting of my home network lately and have come to an odd conclusion that I was hoping to get some clarification on. I'm used to managing share permissions in a domain environment via groups instead of individual user accounts. I have a box at home running Windows 7 Ultimate and I decided to share some directories on that machine. I set it up to disallow guest access and require specifically granted permissions (password mode?). Anyway, after a whole bunch of time I figured out that even though the shares I created were allowed via a local group, I could not access them until I gave specific allowance to the intended user. I just didn't think I would have to do that. So here is the breakdown. The network is a Windows workgroup, not a homegroup or NT domain.
    - PC_1 - Win 7 Ultimate - sharing in classic mode - user BOB - groups: Admins
    - PC_2 - Win 7 Starter - client - user BOB - groups: Admins
    - PC_3 - Win XP Pro - client - user BOB - groups: Admins
    The share on PC_1 granted permission only to the local group Administrators. Local user BOB on PC_1 was a member of Administrators. Both PC_2 and PC_3 could not browse the intended share on PC_1 because they were denied access. Also, no challenge was presented; they were simply denied. After adding BOB specifically to the intended share, everything works just fine. Remember, it's not an NT domain, just a workgroup. But still, shouldn't I be able to manage share permissions via groups instead of individual user accounts? D.

    Read the article

  • Proper way to use hiera with puppetlabs-spec-helper?

    - by Lee Lowder
    I am trying to write some rspec tests for my modules. Most of them now use hiera. I have a .fixtures.yml:

        fixtures:
          repositories:
            stdlib: https://github.com/puppetlabs/puppetlabs-stdlib.git
            hiera-puppet: https://github.com/puppetlabs/hiera-puppet.git
          symlinks:
            mongodb: "#{source_dir}"

    and a spec/classes/mongodb_spec.rb:

        require 'spec_helper'
        describe 'mongodb', :type => 'class' do
          context "On an Ubuntu install, admin and single user" do
            let :facts do
              {
                :osfamily => 'Debian',
                :operatingsystem => 'Ubuntu',
                :operatingsystemrelease => '12.04'
              }
            end
            it {
              should contain_user('XXXX').with( { 'uid' => '***' } )
              should contain_group('XXXX').with( { 'gid' => '***' } )
              should contain_package('mongodb').with( { 'name' => 'mongodb' } )
              should contain_service('mongodb').with( { 'name' => 'mongodb' } )
            }
          end
        end

    but when I run the spec test, I get:

        # rake spec
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color
        F
        Failures:
          1) mongodb On an Ubuntu install, admin and single user
             Failure/Error: should contain_user('XXXX').with( { 'uid' => '***' } )
             LoadError:
               no such file to load -- hiera_puppet
             # ./spec/fixtures/modules/hiera-puppet/lib/puppet/parser/functions/hiera.rb:3:in `function_hiera'
             # ./spec/classes/mongodb_spec.rb:15
        Finished in 0.05415 seconds
        1 example, 1 failure
        Failed examples:
        rspec ./spec/classes/mongodb_spec.rb:14 # mongodb On an Ubuntu install, admin and single user
        rake aborted!
        /usr/bin/ruby1.8 -S rspec spec/classes/mongodb_spec.rb --color failed
        Tasks: TOP => spec_standalone
        (See full trace by running task with --trace)

    Module spec testing is relatively new, as is hiera. So far I have been unable to find any suitable solutions (the back and forth on puppet-dev was interesting, but not helpful). What changes do I need to make to get this to work? Installing puppet from a gem and hacking on rubylib isn't a viable solution due to corporate policy. I am using Ubuntu 12.04 LTS + Puppet 2.7.17 + hiera 0.3.0.
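
    One workaround that may help here, offered as a sketch only: it assumes the fixture checkout really lands in spec/fixtures/modules/hiera-puppet (as the error path suggests), and simply puts that module's lib directory on Ruby's load path before the hiera() function tries to require hiera_puppet.

        # spec/spec_helper.rb -- minimal sketch, not the official puppetlabs-spec-helper way
        require 'puppetlabs_spec_helper/module_spec_helper'

        # Make the hiera_puppet library pulled in via .fixtures.yml resolvable,
        # so `require 'hiera_puppet'` inside hiera.rb stops raising LoadError.
        fixture_lib = File.expand_path('fixtures/modules/hiera-puppet/lib', File.dirname(__FILE__))
        $LOAD_PATH.unshift(fixture_lib) if File.directory?(fixture_lib)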

    Read the article

  • Recommendation on remote access setup for accessing customer systems

    - by gregmac
    I'm looking for a product recommendation (open or commercial) that will allow remote access to customer sites for tech support purposes. We need to be able to gain access to help troubleshoot problems on servers. Currently we end up using anything from RDP on a public IP, to various VPNs that clients happen to have, to webex-type sessions that require lots of interaction from both sides to get things working. This often means a problem that could take 10 minutes to solve takes an extra 30+ minutes messing around trying to get a connection up. There are multiple customer sites, which should NOT have access to each other. At each site, there is anywhere from 1 to 8 servers (Windows 2003 or 2008) that need to be accessed. Requirements:
    - Support connection to machines even if they're behind a firewall/router with no public IP
    - Be able to selectively allow/deny access from the customer site; the customer site should not be able to connect outbound to anywhere else (our systems, or other customer sites)
    - Support multiple users from our end
    - If not a VPN connection (where RDP could be used over top), should support remote desktop access (including copy/paste) and file transfers
    - Preferably would have some way to list all remote systems, showing online/offline.
    Anyone have any suggestions?

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }

        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]
          host { $address:
            ip           => $ip,
            host_aliases => $aliases
          }
        }

    Is there a way to stop the function getting run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?
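
    One common workaround, shown as a sketch only (it assumes the function lives in lib/puppet/parser/functions/get_hosts.rb and that returning an empty list on the very first run is acceptable), is to load the gem lazily inside the function and degrade gracefully when it is missing:

        # lib/puppet/parser/functions/get_hosts.rb (hypothetical layout)
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            hosts = []
            begin
              require 'cloudservers'   # only attempted when the function is actually evaluated
              # ... the real lookup against the Rackspace API goes here, filling `hosts` ...
            rescue LoadError
              # Gem not installed yet (first run): leave the list empty so the catalog still
              # compiles; the next run, after Package["cloudservers"] is applied, returns real data.
            end
            hosts
          end
        end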

    Read the article

  • Sometimes, Synaptics Touchpad Pushes the Cursor Toward the Top-Right Corner

    - by John Chadwick
    This has been a recurring issue for me, pretty much since I've owned this laptop (an ASUS G60JX). Sometimes the cursor will stop working properly and instead tend toward the top right of the screen. Basically, sometime into my usage (maybe after a couple of hours) the touchpad will inevitably begin to malfunction, where it has confusing patterns of pushing the mouse cursor toward the top right of the screen. In addition, certain features (like momentum) seem to quit working entirely. It makes using the cursor extremely difficult. I've been having this issue across very many drivers, pretty much ever since multitouch came into the mix, although I don't believe multitouch has anything to do with it. It appears that, in the state of malfunction, it doesn't matter how you touch the touchpad, but where you touch it. Certain regions do not seem to trigger the cursor to move to the top right corner. In fact, no specific region seems to, but some areas do so more often than others. The issue can be resolved temporarily by putting the computer into sleep mode and awakening it. I have found no way to recreate this success without sleeping the computer or rebooting. Disabling and re-enabling the touchpad device does not do anything to resolve the problem. This issue does not affect my WACOM tablet nor any USB mice, and can be resolved (not to my satisfaction) by uninstalling the touchpad drivers. I'm looking for a solution, or at least a workaround that doesn't require sleep mode.

    Read the article

  • Wake OSX 10.8 over WiFi (WoWL - Wake on WiFi Lan)

    - by WrinkledCheese
    I have a stack of Apple Mac minis running SSH servers for remote login. The problem is that I can't seem to get them to wake up. From what I gathered, as of Mac OS X 10.7 you are required to have a boot-time option set: darkwake=0 on 10.7 and darkwake=no on 10.8. So I tried this, and then I came to the realization that this will probably work for a wired connection, but I'm using WiFi. My wired connections are used for another local subnet without Internet access, so I have to get it to wake on WiFi. I realize that I can just set the stack of Mac minis to not sleep at all, but I'm looking for a sleep-enabled option. These services don't require fast initial response; once the connection is made they will be active, and once they are no longer active they will hopefully go back to sleep. I have a FreeBSD box running avahi-daemon in order to try to wake the Macs with the Bonjour service, but it doesn't seem to work. I tried registering the service as Gordon suggested in the post below, but that just makes it so that there isn't a timeout when discovering services and resolving them. It still doesn't allow SSH connections to port 22 when the machine is asleep. For reference, I want what seems like what Gordon Davisson explained on this question: Wake on Demand for Apache server in OS X 10.8
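
    For anyone comparing settings, a quick checklist along these lines may help. Hedged: whether "Wake for network access" actually covers wake-over-WiFi depends on the AirPort hardware and on having an Apple base station or another Bonjour Sleep Proxy on the network.

        # Show the current power-management flags (womp = wake on magic packet)
        pmset -g

        # Enable "Wake for network access" on all power sources
        sudo pmset -a womp 1

        # The boot-arg mentioned above, for 10.8 (takes effect after a reboot)
        sudo nvram boot-args="darkwake=no"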

    Read the article

  • Iptables QUEUE Target and Snort

    - by bradlis7
    I'm trying to set up a firewall with support for snort, and it is dropping all of my packets when I add the QUEUE target. I've made it like this, but the QUEUE target is not allowing the packets to be processed any further:

        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -j QUEUE
        -A INPUT -j ACCEPT

    It's not allowing anything past QUEUE, as you can see below in the counters:

        > iptables -L INPUT -nv
         pkts bytes target prot opt in  out  source     destination
         6707  395K ACCEPT tcp  --  *   *    0.0.0.0/0  0.0.0.0/0    tcp dpt:22
          933  138K QUEUE  all  --  *   *    0.0.0.0/0  0.0.0.0/0
            0     0 ACCEPT all  --  *   *    0.0.0.0/0  0.0.0.0/0

    I'm eventually going to change it to FORWARD, but I'm just trying to get it working for now. I start snort like so:

        snort -Q -D -c /etc/snort/snort.conf

    EDIT: More information. When I run it, it still sees the packets without having an iptables QUEUE target rule, but when I add a QUEUE target, it starts losing all of my packets.

        # snort -Qc /etc/snort/snort.conf -N -A console
        Enabling inline operation
        Running in IDS mode
                --== Initializing Snort ==--
        Initializing Output Plugins!
        Initializing Preprocessors!
        Initializing Plug-ins!
        Parsing Rules file "/etc/snort/snort.conf"
        ## === CUT ===
        ***
        *** interface device lookup found: bond0
        ***
        Initializing Network Interface bond0
        Decoding Ethernet on interface bond0
        ## === CUT ===
        Not Using PCAP_FRAMES

    So it says inline, but then it says it's using bond0. Inline should not require an interface, right?
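
    Two things worth checking, sketched below under assumptions (a 2.6-era kernel where the QUEUE target hands packets to the ip_queue module, and a Snort build with the corresponding DAQ module compiled in). Packets sent to QUEUE are dropped whenever no userspace program is attached to the queue, so the rule behaves exactly as described until Snort is really listening on it.

        # Is the legacy ip_queue handler present for the QUEUE target?
        lsmod | grep ip_queue
        modprobe ip_queue

        # The more current approach: use NFQUEUE and point Snort's inline DAQ at the same queue number
        iptables -R INPUT 2 -j NFQUEUE --queue-num 0
        snort -Q --daq nfq --daq-var queue=0 -c /etc/snort/snort.conf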

    Read the article

  • How do I ensure a process is running, even if it kills itself? (it needs to be restarted then)

    - by le_me
    I'm using Linux. I want a process (an IRC bot) to run every time I start the computer. But I've got a problem: the network is bad and it disconnects often, so I need to manually restart the bot a few times a day. How do I automate that? Additional information: the bot creates a pid file, called bot.pid; the bot reconnects itself, but only a few times, and the network is bad enough that the bot sometimes kills itself because it gets no response. What I do currently (aka my approach ;) ): I have a cron job executing startbot.rb every 5 minutes. (The script itself is in the same directory as the bot.) The script:

        #!/usr/bin/ruby
        require 'fileutils'

        if File.exists?(File.expand_path('tmp/bot.pid'))
          @pid = File.read(File.expand_path('tmp/bot.pid')).chomp!.to_i
          begin
            raise "ouch" if Process.kill(0, @pid) != 1
          rescue
            puts "Removing abandoned pid file"
            FileUtils.rm(File.expand_path('tmp/bot.pid'))
            puts "Starting the bot!"
            Kernel.exec(File.expand_path('./bot.rb'))
          else
            puts "Bot up and running!"
          end
        else
          puts "Starting the bot!"
          Kernel.exec(File.expand_path('./bot.rb'))
        end

    What this does: it checks if the pid file exists; if so, it checks whether kill -s 0 BOT_PID == 1 (i.e. whether the bot is running), and it starts the bot if either of the two checks fails. My approach seems to be quite dirty, so how do I do it better?
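
    A simpler pattern, sketched here under the assumptions that the bot can run in the foreground and that the pid-file logic can be dropped, is to let a tiny wrapper respawn the bot whenever it exits, and to start only the wrapper at boot:

        #!/bin/sh
        # keepalive.sh -- hypothetical wrapper: restart bot.rb every time it exits
        cd "$(dirname "$0")" || exit 1
        while true; do
            ./bot.rb
            echo "bot exited, restarting in 10 seconds" >&2
            sleep 10
        done

    Start it once with an @reboot crontab entry (@reboot /path/to/keepalive.sh), or look at a process supervisor such as runit, daemontools or an inittab respawn entry, which exist for exactly this job.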

    Read the article

  • Do you really need cable management for a cabinet with just switches and patch panels?

    - by ObligatoryMoniker
    We are about to start wiring out a building expansion and our vendor has laid out the racks in the following configuration:
    Option 1: 1U Fiber patch panel, 2U Cable Manager, 2U 48 port Patch Panel, 2U Cable Manager, 2U 48 port Patch Panel, 2U Cable Manager, 1U 48 port Switch, 2U Cable Manager, 1U 48 port Switch. Total = 15U. All the patch panels would be connected to the switches with 1ft+ cables fed through cable management. What I am considering instead is:
    Option 2: 1U Fiber patch panel, 1U 24 port Patch Panel, 1U 48 port Switch, 2U 48 port Patch Panel, 1U 48 port Switch, 2U 48 port Patch Panel. Total = 8U. All of the patch panels would be connected to the switches with .5 ft cables directly on their face, with the top 24 ports of each switch patched to the patch panel above it and the bottom 24 ports of each switch patched to the patch panel beneath it, which would not require any cable management.
    If I go with option 2 it saves all of the space used by cable management and allows us to keep adding switches and patch panels at the end without having to re-cable all of the patch panels above. Our vendor has indicated that this is not best practice and that .5 ft cables will introduce crosstalk. I could understand that being the case if we were connecting the .5 ft cable directly into another switch, but we are connecting it to a patch panel that likely has another 150 ft cable run from the back of the patch panel out to the port in the building, in which case the real resulting cable is 150.5 ft at minimum before even connecting it to a PC. It seems like it makes much more sense to go with option 2. It is easier to expand, saves space, and saves money on cabling and cable management. Does this kind of configuration make sense, or is there a legitimate reason to choose Option 1 over Option 2?

    Read the article

  • apache using mod_auth_kerb always asks for the password twice

    - by DrStalker
    (Debian Squeeze) I'm trying to set Apache up to use Kerberos authentication to allow AD users to log in. It is working, but it prompts the user twice for a username and password, with the first prompt being ignored (no matter what is put in). Only the second prompt includes the AuthName string from the config (i.e. the first window is a generic username/password one, the second includes the title "Kerberos Login"). I'm not worried about integrated Windows authentication working at this stage; I just want users to be able to log in with their AD account so we don't need to set up a second repository of user accounts. How do I fix this to eliminate that first useless prompt? The directives in the apache2.conf file:

        <Directory /var/www/kerberos>
          AuthType Kerberos
          AuthName "Kerberos Login"
          KrbMethodNegotiate On
          KrbMethodK5Passwd On
          KrbAuthRealms ONEVUE.COM.AU.LOCAL
          Krb5KeyTab /etc/krb5.keytab
          KrbServiceName HTTP/[email protected]
          require valid-user
        </Directory>

    krb5.conf:

        [libdefaults]
          default_realm = ONEVUE.COM.AU.LOCAL
        [realms]
          ONEVUE.COM.AU.LOCAL = {
            kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
            master_kdc = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
            admin_server = SYD01PWDC01.ONEVUE.COM.AU.LOCAL
            default_domain = ONEVUE.COM.AU.LOCAL
          }
        [login]
          krb4_convert = true
          krb4_get_tickets = false

    The access log when accessing the secured directory (note the two separate 401s):

        192.168.10.115 - - [24/Aug/2012:15:52:01 +1000] "GET /kerberos/ HTTP/1.1" 401 710 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - - [24/Aug/2012:15:52:06 +1000] "GET /kerberos/ HTTP/1.1" 401 680 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"
        192.168.10.115 - [email protected] [24/Aug/2012:15:52:10 +1000] "GET /kerberos/ HTTP/1.1" 200 375 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.83 Safari/537.1"

    And one line in error.log:

        [Fri Aug 24 15:52:06 2012] [error] [client 192.168.0.115] gss_accept_sec_context(2) failed: An unsupported mechanism was requested (, Unknown error)
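
    The generic first prompt is typically the browser reacting to the Negotiate challenge, which the gss_accept_sec_context error about an unsupported mechanism also hints at. A quick way to confirm, offered as a diagnostic sketch rather than a fix, is to turn the Negotiate method off so only the Kerberos password method is offered, and see whether the double prompt disappears:

        <Directory /var/www/kerberos>
          AuthType Kerberos
          AuthName "Kerberos Login"
          KrbMethodNegotiate Off
          KrbMethodK5Passwd On
          KrbAuthRealms ONEVUE.COM.AU.LOCAL
          Krb5KeyTab /etc/krb5.keytab
          KrbServiceName HTTP/[email protected]
          require valid-user
        </Directory>

    If Negotiate is wanted long-term, the usual suspects are the keytab's service principal (it needs to match the HTTP/<fqdn>@REALM name the browser asks for) and the browser's trusted-sites/negotiate settings.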

    Read the article

  • Share Exchange Calendar Outside Organization

    - by CalCurious
    I'm trying to figure out the best way to meet a user's (Corp-A-User) request to share their calendar with someone at another company (Corp-B-User). We're running Microsoft SBS 2008 with Exchange 2007 and SharePoint. The remote user is running Exchange, version unknown. Corp-A-User wants to give Corp-B-User the ability to create appointments on Corp-A-User's calendar. This will naturally require sharing of free/busy information. Corp-A-User naturally lacks the vision to see ANY problem with giving Corp-B-User full access to their calendar. But I see the problems with that and would prefer that Corp-B-User have only the ability to see free/busy and create appointments. Most of the external publishing options that I have thought of, such as WebDAV, allow displaying a user's calendar, but there are problems with security and with the ability to create appointments. Right now, I'm thinking the cleanest solution would be to use a Google calendar along with Google Calendar Sync for the two users' Outlook clients. But I'm not sure there isn't a better way, and I hate the idea of pushing a corporate calendar up to Google, not to mention the issues likely to pop up from the multiple sync paths. Does anyone have a good solution for this scenario that they would be willing to share?

    Read the article

  • Wiring my internet

    - by u8sand
    I have Verizon internet service and am currently using WiFi. My router is in the basement and my desktop computer is two floors up and on the other side of the house above it... the worst possible positioning, but that's just how things worked out. My wireless is currently extremely unstable, so I've decided to correct the problem by wiring my computer directly. The problem lies here: when redoing the room next to it (when the wall was open), we went ahead and ran some coaxial cable from our attic to our basement (with plenty of slack on both ends; don't ask me why we didn't go ahead and run a CAT6 cable). The question is: can I use the coaxial cable to carry my internet connection? Naturally the router (which needs to stay where it is) takes a coaxial cable input and has Ethernet outputs. So maybe I would have to convert Ethernet to coax at the router end, run over the coax, and convert back to Ethernet at my computer. Is it even possible to convert from coaxial to Ethernet, or do I have to attempt to go ahead and fish a CAT6 cable through my house? I cannot just split the signal, because that would require two routers and two networks (which I don't believe would work with one cable and one ISP; correct me if I'm wrong). Thanks

    Read the article

  • Getting Dell E6320 with I7 to work with 3 monitors at 1920x1080p x 3

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) with Intel HD Graphics 3000. The laptop will come with a docking station. I want to connect 3 monitors to that docking station so that working at home would give me some additional boost. The docking station will allow me to connect only 2 monitors, so I'm looking at the following other options:
    - A Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected, and it gets even worse since it seems the monitors would have to be able to work at 50 Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate monitors but simply as one big desktop (which is a bit the opposite of what I need).
    - Buy 2 or maybe just 1 USB-based monitor, but that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables to plug in (nice but costly; I couldn't find a decent monitor with only USB for the signal and power connected normally). But the docking station has only one USB 3.0 port. Can I use a hub and still get it to work?
    - Find some converters from digital to USB (I think DisplayLink makes some?).
    - Buy a different laptop, but what kind? I need it to be an i7, small (13"), fast and lightweight. At the same time it needs a docking station that I can use at home to connect 3 external monitors.
    - Some other suggested solution...
    Edit: I need 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • Cisco Configuration backup with Windows Script.

    - by Jeff
    We have a client with a lot of Cisco devices and we would like to automate the backups of these devices through telnet. We have both 2003 and 2008 servers and would ideally use TFTP to back them up. I wrote this:

        Set WshShell = WScript.CreateObject("WScript.Shell")
        Dim fso
        Set fso = CreateObject("Scripting.FileSystemObject")
        Dim ciscoList
        ciscoList = "D:\Scripts\SwitchList.txt"
        Set theSwitchList = fso.OpenTextFile(ciscoList, 1)
        Do While theSwitchList.AtEndOfStream <> True
            cisco = theSwitchList.ReadLine
            Run "cmd.exe"
            SendKeys "telnet "
            SendKeys cisco
            SendKeys "{ENTER}"
            SendKeys "USERNAME"
            SendKeys "{ENTER}"
            SendKeys "PASSWORD"
            SendKeys "{ENTER}"
            SendKeys "en"
            SendKeys "{ENTER}"
            SendKeys "PASSWORD"
            SendKeys "{ENTER}"
            SendKeys "copy startup-config tftp{ENTER}"
            SendKeys "(TFTP IP){ENTER}"
            SendKeys "FileName.txt{ENTER}"
            SendKeys "exit{ENTER}" 'close telnet session
            SendKeys "{ENTER}" 'get command prompt back
            SendKeys "{ENTER}"
            SendKeys "exit{ENTER}" 'close cmd.exe
            On Error Resume Next
            WScript.Sleep 3000
        Loop

        Sub SendKeys(s)
            WshShell.SendKeys s
            WScript.Sleep 300
        End Sub

        Sub Run(command)
            WshShell.Run command
            WScript.Sleep 100
            WshShell.AppActivate command
            WScript.Sleep 300
        End Sub

    But the problem with this is that the SendKeys calls are sent to the console session; I'm trying to find a solution that would not require a user to be logged in. Does anyone have any ideas? I have some knowledge of VBS and PowerShell and a pretty good grasp of batch files.
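
    One direction that avoids SendKeys entirely, sketched under assumptions (PuTTY's plink.exe is available, the devices accept scripted telnet logins, and keeping the login dialogue in a plain text file is acceptable under your security policy), is to drive the session non-interactively so it can run from Task Scheduler with no console session at all:

        rem backup-one-switch.cmd -- hypothetical sketch, untested against real IOS prompts
        rem commands.txt would hold the username, passwords and the
        rem "copy startup-config tftp" dialogue, one line per prompt.
        plink.exe -telnet 192.168.1.10 -P 23 < commands.txt > session-192.168.1.10.log

    If the devices can take SSH instead, plink -ssh with -m (a file of commands to run) is cleaner still, and a scheduled task runs fine without anyone logged on.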

    Read the article

  • Controlling clone access to multiple mercurial repos served via hgwebdir.cgi

    - by chrislawlor
    I'm trying to host multiple hg repositories to use for my clients. I need to control access to each repository individually, not just push access, but clone as well. I've got an .htaccess set up which requires authentication globally:

        AuthUserFile /path/to/hgweb.passwd
        AuthGroupFile /dev/null
        AuthName "Chris Lawlor Client Mercurial Repositories"
        AuthType Basic
        <Limit GET POST PUT>
            Require valid-user
        </Limit>
        <FilesMatch "\.(htaccess|passwd|config|bak)$">
            Order Allow,Deny
            Deny from all
        </FilesMatch>

    Then in each repository, I've got a .hg/hgrc file requiring a valid user:

        [web]
        allow_push = <comma separated user list>

    This almost does what I need. The problem is that I need to add ALL my clients to hgweb.passwd, which gives them clone access to ALL of the repositories. The only solution I can think of is to have another .htaccess and .passwd file in EACH repository. I don't really want to do that, though; it seems a little convoluted. I can already specify a list of authorized users for each repository in that repo's hgrc file with the allow_push setting. If only there were an allow_clone setting as well... All the documentation I've found for hgwebdir.cgi is incomplete. I've read:
    http://mercurial.selenic.com/wiki/HgWebDirStepByStep
    http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
    http://hgbook.red-bean.com/read/collaborating-with-other-people.html
    and others. I've yet to find a comprehensive list of hgrc settings. I guess this is as much an Apache question as a Mercurial question. Unless I can find a better approach, I'll be going with a separate .htaccess and .passwd file for each repo. This is a virtual host on Webfaction if it matters, set up roughly like this: http://docs.webfaction.com/software/mercurial.html
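
    For what it's worth, Mercurial's [web] section does have a read-side counterpart to allow_push: the allow_read option takes a list of users, so something along these lines in each repository's .hg/hgrc may give per-repository clone control without extra .htaccess/.passwd files (sketch; the user names are placeholders):

        [web]
        # only these authenticated users may browse or clone this repository
        allow_read = clienta, clientb
        # and only this one may push
        allow_push = clienta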

    Read the article

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point of sale systems, a few workstations and a Windows 2003 SBS Server. The point of sale systems typically run QuickBooks Point of Sale and are logged in with a user who has restricted permissions/access (via Group Policy). Occasionally, one of the managers needs to be able to run a few additional applications, including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system is disruptive, as QuickBooks takes a few minutes to close (as POSUser) and then reopen (as ManagerUser). If customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double-click something, authenticate against the network drive / domain controller, and then the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In a worst-case scenario, they forget to switch back and leave the accounting software wide open.
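
    One low-tech option that may fit, sketched here with placeholder domain, user and path (and assuming the accounting package is happy being launched under a second set of credentials while POSUser stays logged in), is a desktop shortcut that calls runas, so the manager authenticates per launch instead of switching sessions:

        rem Hypothetical shortcut target: prompts for ManagerUser's password, POSUser stays logged in
        runas /user:YOURDOMAIN\ManagerUser "C:\Apps\AccountingApp.exe"

    If the executable only lives on the network share, the usual trick is to have the shortcut run cmd /c, map the drive with net use under those credentials, and then start the program, since runas is picky about UNC paths.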

    Read the article

  • One Apache VirtualHost entry overrides another?

    - by johnlai2004
    I can't tell why one Apache virtual host entry keeps overriding another. The following file

        // filename: cbl
        <VirtualHost 74.207.237.23:80>
            ServerAdmin [email protected]
            ServerName completebeautylist.com
            ServerAlias www.completebeautylist.com
            DocumentRoot /srv/www/cbl/production/public_html/
            ErrorLog /srv/www/cbl/production/logs/error.log
            CustomLog /srv/www/cbl/production/logs/access.log combined
        </VirtualHost>

    keeps overriding this file

        // filename: theccco.org
        <VirtualHost 74.207.237.23:80>
            SuexecUserGroup "#1010" "#1010"
            ServerName theccco.org
            ServerAlias www.theccco.org
            ServerAlias webmail.theccco.org
            ServerAlias admin.theccco.org
            DocumentRoot /home/theccco/public_html
            ErrorLog /var/log/virtualmin/theccco.org_error_log
            CustomLog /var/log/virtualmin/theccco.org_access_log combined
            ScriptAlias /cgi-bin/ /home/theccco/cgi-bin/
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory /home/theccco/public_html>
                Options -Indexes +IncludesNOEXEC +FollowSymLinks
                allow from all
                AllowOverride All
            </Directory>
            <Directory /home/theccco/cgi-bin>
                allow from all
            </Directory>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} =webmail.theccco.org
            RewriteRule ^(.*) https://theccco.org:20000/ [R]
            RewriteCond %{HTTP_HOST} =admin.theccco.org
            RewriteRule ^(.*) https://theccco.org:10000/ [R]
            Alias /dav /home/theccco/public_html
            <Location /dav>
                DAV On
                AuthType Basic
                AuthName theccco.org
                AuthUserFile /home/theccco/etc/dav.digest.passwd
                Require valid-user
                ForceType text/plain
                Satisfy All
                RewriteEngine off
            </Location>
        </VirtualHost>

    I tried a2ensite, a2dissite, and reloading. I get this message:

        * Reloading web server config apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Thu Apr 15 10:47:36 2010] [warn] NameVirtualHost 74.207.237.23:443 has no VirtualHosts

    Aside from that, I don't know what else could be wrong. Can anyone tell me what to do?
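
    On Apache 2.2, the usual cause of "the first vhost always wins" is that name-based matching was never enabled for that address:port, and the warning about 74.207.237.23:443 having no VirtualHosts is a hint that the NameVirtualHost lines deserve a look. A sketch of what is normally needed in ports.conf or the main config (the hostname is a placeholder):

        # Name-based vhosts must be enabled for the exact address:port the <VirtualHost> blocks use
        NameVirtualHost 74.207.237.23:80

        # A global ServerName also silences the "could not reliably determine" warning
        ServerName server.example.com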

    Read the article

  • su not giving proper message for restricted LDAP groups

    - by user1743881
    I have configured PAM authentication on a Linux box to restrict logins to a particular group. I have enabled PAM and LDAP through authconfig and modified access.conf like below:

        [root@test root]# tail -1 /etc/security/access.conf
        - : ALL EXCEPT root test-auth : ALL

    I also modified the sudoers file to allow su for this group:

        [root@test ~]# tail -1 /etc/sudoers
        %test-auth ALL=/bin/su

    Now only this LDAP group's members can log in to the system. However, when I try su from any of these authorized users, it asks for a password, and then even though I enter the correct password it gives a message like "incorrect password" and the login fails. /var/log/secure shows that the user does not have permission to get access, but then it should print a message like "Access denied", the way it does for console login. The functionality is working, but it's not giving proper messages. Could anyone please help with this? My /etc/pam.d/su file:

        [root@test root]# cat /etc/pam.d/su
        #%PAM-1.0
        auth            sufficient      pam_rootok.so
        # Uncomment the following line to implicitly trust users in the "wheel" group.
        #auth           sufficient      pam_wheel.so trust use_uid
        # Uncomment the following line to require a user to be in the "wheel" group.
        #auth           required        pam_wheel.so use_uid
        auth            include         system-auth
        account         sufficient      pam_succeed_if.so uid = 0 use_uid quiet
        account         include         system-auth
        password        include         system-auth
        session         include         system-auth
        session         optional        pam_xauth.so
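
    If the goal is a clearer denial, one thing worth trying (an assumption, not a confirmed fix) is to make the access.conf check explicit in su's own stack by adding pam_access in debug mode, so /var/log/secure records exactly which rule rejected the user rather than su reporting what looks like a password failure:

        # /etc/pam.d/su (excerpt) -- hypothetical addition above the existing account lines
        account         required        pam_access.so debug
        account         include         system-auth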

    Read the article

  • Timeout settings for Remote Desktop Sessions to lock

    - by atroon
    Our office uses a Windows 2003 server to provide access to an accounting application. Recently I was asked to increase the amount of time it takes for the session to lock itself and require the entry of the user's password to resume. That seems to be about ten minutes, at present. I am familiar with group policy and have tweaked those settings to scavenge sessions (and thereby licenses) from sessions that have been disconnected (by the user closing the mstsc.exe client or by a network issue). That's simple and straightforward. But I can't find anything in GP to allow a longer time period before the RDP client window goes black and then, when clicked upon, requires a username and password to resume the session. I must admit this would be nice personally as well, since most of my time is spent documenting the application and/or monitoring its database, so I usually have a window open to the terminal server along with the rest of the staff in the accounting center, but I interact with it very little. I usually enter my password 10-15 times per workday, but I'm pretty good at it by now. ;) So, can this timeout period be adjusted, or are we out of luck?
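
    If the lock is coming from the screen saver inside the Terminal Services session (a common setup), the per-user values live under HKCU\Control Panel\Desktop, and a Group Policy screen-saver setting will override manual edits. A sketch for a test account, bumping the timeout to 30 minutes (the numbers are assumptions to adjust):

        reg add "HKCU\Control Panel\Desktop" /v ScreenSaveTimeOut /t REG_SZ /d 1800 /f
        reg add "HKCU\Control Panel\Desktop" /v ScreenSaverIsSecure /t REG_SZ /d 1 /f

    The equivalent GPO settings sit under User Configuration > Administrative Templates > Control Panel > Display ("Screen Saver timeout" and "Password protect the screen saver").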

    Read the article

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, and my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade PHP and risk the other app on the server having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file in /etc/httpd/conf.d/zcid.conf with all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            authtype Basic
            authname "oh dear how did this get here i am no good with computer"
            authuserfile /path/to/auth
            require valid-user
            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]
            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (what it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
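
    The 404 for /cgi-bin/php53.cgi suggests that URL path was never mapped to the directory that actually holds the wrapper. A sketch of the usual Action/ScriptAlias pairing, with paths assumed from the layout described above (note that ScriptAlias belongs in the vhost or server config, not inside a <Directory> block):

        # Map the URL path to the physical cgi-bin and allow it to execute
        ScriptAlias /cgi-bin/ /var/www/cid/app/cgi-bin/
        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>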

    Read the article

  • DNS: how to get local server to superimpose results over authoritative server?

    - by growse
    I've got a domain whose DNS I control, hosted on the internet. I also have a NAT'd internal network (192.168.0.0/24) which has internet access, and which I also control. On this internal network, I also have a DNS resolver. The DNS software on both is PowerDNS. What I want is for the DNS resolver on the internal network to be able to add to or change records in the answers that come down from the authoritative server. For example, the authoritative server might have a single record for animal.example.com:

        animal.example.com. IN AAAA 2001:140:283::1

    However, I'd like it so that when internal clients do a DNS lookup for animal.example.com, they might get back the following:

        animal.example.com. IN AAAA 2001:140:283::1
        animal.example.com. IN A 192.168.0.2

    Obviously, I could set up the internal DNS server to pretend to be authoritative for example.com, but that would require a fair bit of effort to keep the main DNS server and the internal DNS server in sync for the records which are the same between both. If the internal DNS server could somehow be made a slave of the main DNS server, but also have the provision to add its own results in, that would be ideal. Is this possible?

    Read the article

  • how could application installations/configurations be easier in linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations with Ubuntu/Debian are heading in the right direction. But how could Linux be more user-friendly for us in the future? I thought it would help if more were automated, like in an IDE environment: e.g. typing svn would give us all the commands and a description of each command as you move between commands with your keyboard. That would be great, but that's just one example. Another is navigation in the terminal between folders; right now you have to type a lot just to jump from/to different folders. It would be great with some more automation here too. I know that these extra features will slow down the server, but it's 2010 now, and these features are not that heavy for the CPU; they make things more user-friendly and encourage maintenance of a server rather than frightening you off. What do you think about this? Should/could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently it's so repetitive today and difficult to do easy tasks. It should be easier, I think.
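
    On the terminal-navigation point specifically, a few things that already ship with bash go a long way (shown as a sketch; zsh and tools like autojump go further, but these are stock):

        cd -                          # jump back to the previous directory
        pushd /etc                    # change directory and remember where you came from
        popd                          # return to the remembered directory
        CDPATH=~/projects:/var/www    # lets "cd sitename" work from anywhere under those roots

    And the bash-completion package already covers the svn example: with it installed, pressing Tab after "svn" lists the available subcommands.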

    Read the article

  • What's the best way to completely remove everything from a computer, without re-installing?

    - by Connor W
    I have a friend who wants to sell their computer, but obviously all personal information and software on it needs to be removed before doing so. Usually I would format and reinstall it, but I cannot easily get hold of the required XP DVDs, and I'm not 100% sure the serial number is stuck on the case as usual, so getting hold of it would probably require more effort than I'm prepared to spend. So, what's the best and quickest way to remove and uninstall everything from the PC without reinstalling it? Thanks. EDITS: I'm looking to remove things like internet history and all installed programs, too. I know how to remove the history and each individual program, but that could take hours. The machine is not branded, and therefore there is no website I can go to in order to download recovery software. There is no recovery partition on the computer and I'm not aware of any recovery DVDs for it either. I can only assume it was installed from a retail copy, and therefore there is no way to recover it to factory settings. It needs to have XP installed, not any distribution of Linux. Like most average people, the person getting the computer will not understand what to do with a computer that doesn't have Windows installed, and software like Office does not work on Linux either. Buying another licence is not really an option either. She has just bought a laptop to replace the computer, so buying another licence for a computer that she's getting rid of doesn't really make sense. Thanks for all the help so far!
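
    Whatever uninstall/cleanup route is taken, deleted files remain recoverable until the free space is overwritten. One built-in way to handle that on XP, as a final step (it can take a long time on a large drive):

        rem Overwrite the deallocated space on drive C: (built into Windows 2000/XP and later)
        cipher /w:C:\

    Sysinternals sdelete -c does the same job if a third-party tool is acceptable.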

    Read the article

  • FreeBSD rc.d script doesn't work when starting up

    - by kastermester
    I am trying to write an rc.d script to start up fastcgi-mono-server4 on FreeBSD when the computer starts up, in order to run it with nginx. The script works when I execute it while logged in on the server, but when booting I get the following message:

        eval: -applications=192.168.50.133:/:/usr/local/www/nginx: not found

    The script looks as follows:

        #!/bin/sh
        # PROVIDE: monofcgid
        # REQUIRE: LOGIN nginx
        # KEYWORD: shutdown

        . /etc/rc.subr

        name="monofcgid"
        rcvar="monofcgid_enable"
        stop_cmd="${name}_stop"
        start_cmd="${name}_start"
        start_precmd="${name}_prestart"
        start_postcmd="${name}_poststart"
        stop_postcmd="${name}_poststop"
        command=$(which fastcgi-mono-server4)
        apps="192.168.50.133:/:/usr/local/www/nginx"
        pidfile="/var/run/${name}.pid"

        monofcgid_prestart() {
            if [ -f $pidfile ]; then
                echo "monofcgid is already running."
                exit 0
            fi
        }

        monofcgid_start() {
            echo "Starting monofcgid."
            ${command} -applications=${apps} -socket=tcp:127.0.0.1:9000 &
        }

        monofcgid_poststart() {
            MONOSERVER_PID=$(ps ax | grep mono/4.0/fastcgi-m | grep -v grep | awk '{print $1}')
            if [ -f $pidfile ]; then
                rm $pidfile
            fi
            if [ -n $MONOSERVER_PID ]; then
                echo $MONOSERVER_PID > $pidfile
            fi
        }

        monofcgid_stop() {
            if [ -f $pidfile ]; then
                echo "Stopping monofcgid."
                kill $(cat $pidfile)
                echo "Stopped monofcgid."
            else
                echo "monofcgid is not running."
                exit 0
            fi
        }

        monofcgid_poststop() {
            rm $pidfile
        }

        load_rc_config $name
        run_rc_command "$1"

    In case it is not already super clear, I am fairly new to both FreeBSD and sh scripts, so I'm kind of prepared for some obvious little detail I overlooked. I would very much like to know exactly why this is failing and how to solve it, but I am also open to ideas if anyone has a better way of accomplishing this.
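
    For what it's worth, the wording of "eval: -applications=...: not found" usually means ${command} expanded to nothing, so the first argument was treated as the command name; $(which fastcgi-mono-server4) can easily come back empty in the boot-time environment even though it works from a login shell. A hedged sketch of the usual fix (the path is an assumption; check where the port or pkg actually installed the binary):

        # Don't rely on $PATH at boot; point at the binary directly
        command="/usr/local/bin/fastcgi-mono-server4"

    Hard-coding the path (or setting PATH explicitly near the top of the script) keeps the custom monofcgid_start working as written.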

    Read the article
