Search Results


  • Set up site folders on Apache and PHP

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home, and I'm having real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good. The point of this is to develop a website, of course, and to make my life easier (irony?) I wanted to let the website root point to my Eclipse project folder. So I opened httpd-vhosts.conf, uncommented a VirtualHost block and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the specified folder, or am I going to have to clear another hurdle?

    Update: I tried to simplify things a little, so I reinstalled XAMPP and got back to a working http://localhost. Then I confirmed that httpd-vhosts.conf is included in httpd.conf and made the following changes to httpd-vhosts.conf: uncommented the line NameVirtualHost *:80, added the virtual host shown below, restarted Apache, and saw the expected page on http://localhost.

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/"
            ServerName localhost
            ErrorLog "logs/dummy-host2.localhost-error.log"
            CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    I then created a new folder named C:\testweb, added an index.html file, and changed the DocumentRoot line shown above. For all intents and purposes I would then expect the two configurations to be equivalent, but this setup gives me a 403 error. Even though C:\testweb already had the same permissions as C:\xampp\htdocs, I went further and gave the Everyone group full control of C:\testweb, and got exactly the same problem. So what did I miss?
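    What typically causes this is Apache's own access control rather than Windows permissions: the default XAMPP config only grants access to its htdocs tree, so any new DocumentRoot needs its own grant. A minimal sketch, assuming the Apache 2.2 syntax that XAMPP shipped at the time:

        <VirtualHost *:80>
            DocumentRoot "C:/testweb"
            ServerName localhost
            # Apache denies directories it has not been told about,
            # regardless of NTFS permissions; allow this one explicitly.
            <Directory "C:/testweb">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>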

  • Is on-the-fly string replacement possible using GreaseMonkey and Firefox?

    - by Gary M. Mugford
    I have looked for a way to stop Brightcove videos from autostarting in Firefox and have come to the conclusion that it isn't possible without external programming via something like GreaseMonkey. However, I'm not proficient in JavaScript, let alone GM. So I thought I'd ask here first whether what I want to do is feasible, or whether it's a fool's errand. What I want to accomplish is to have a site-specific script replace a string value on the fly in that site's code. Specifically, I am looking for something GM-style that would do this:

        if site_domain = 'www.SiteWithAutoPlayVideos.com' then
            replace_all('<param name="autoStart" value="true" />',
                        '<param name="autoStart" value="false" />');

    Having looked through Super User for anything GreaseMonkey-related, I see notices that the sandbox GM executes scripts in has to remain separate for security reasons, so I suspect I might be in for disappointment. BUT if it is accomplishable and somebody here can confirm it, then I will do my best to struggle through the learning curve and get this noisome little problem put to rest. Yes, I have tried Flash Block and FlashDisable in order to attack this issue, to no avail. Thanks in advance for your time.
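    For what it's worth, here is a rough sketch of what such a userscript could look like. It is hedged heavily: the @include domain comes straight from the question, the param name is Brightcove's as quoted, and the big caveat is timing, since a DOM rewrite only helps if the player reads the param after the script has run.

        // ==UserScript==
        // @name        Disable autoStart (sketch)
        // @include     http://www.SiteWithAutoPlayVideos.com/*
        // ==/UserScript==

        // Flip every <param name="autoStart" value="true"> to "false".
        var params = document.getElementsByTagName('param');
        for (var i = 0; i < params.length; i++) {
          if (params[i].name === 'autoStart' && params[i].value === 'true') {
            params[i].value = 'false';
          }
        }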

  • How do I use a self encrypting drive?

    - by Unique_Key
    I recently purchased a Micron RealSSD C400 self-encrypting drive, and I am having a few issues when trying to get it recognized by my laptop (HP EliteBook 8440p running Windows 7 x64; I also tried a custom-built desktop). When I try to initialize the drive from Disk Management, I get a CRC error; also, when attempting to partition it from Windows Setup, the program can't create the partitions. I also tried with UBCD; nothing. I assume this is due to drive security, but I haven't been able to find much information about this online; do I need management software or something? I'm completely stumped here.

    EDIT: As requested, when I try partitioning the device from Windows Setup I get a 0x80300024 error; when I try initializing it from Disk Management, I get a "Data error (cyclic redundancy check)" message, and the event log shows the following under System:

        Source: VDS Basic Provider, message: unexpected failure. Error code: 490@01010004 (2x)
        Source: Virtual Disk Service, message: VDS fails to write boot code on a disk during clean operation. Error code: 80070001@02070008 (1x)
        Source: Disk, message: The device \Device\Harddisk2\DR2 has a bad block (2x)

    The security logs show nothing related. Also, when attempting to configure it from UBCD (utility: HDAT2), I get an error along the lines of "can't edit partition information" or something to that tune.

  • chrooting php-fpm with nginx

    - by dragonmantank
    I'm setting up a new server with PHP 5.3.9 and nginx, so I compiled PHP with the php-fpm SAPI option. By itself it works great using the following server entry in nginx:

        server {
            listen 80;
            server_name domain.com www.domain.com;
            root /var/www/clients/domain.com/www/public;
            index index.php;
            log_format gzip '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent "$http_referer" "$http_user_agent" "$gzip_ratio"';
            access_log /var/www/clients/domain.com/logs/www-access.log;
            error_log /var/www/clients/domain.com/logs/www-error.log error;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/clients/domain.com/www/public$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    It serves my PHP files just fine. For added security I wanted to chroot my FPM instance, so I added the following lines to the conf file for this FPM instance:

        # FPM config
        chroot = /var/www/clients/domain.com

    and changed the nginx config:

        # nginx config for chroot
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME www/public$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }

    With those changes, nginx gives me a "File not found" message for any PHP script. Looking in the error log I can see that it's prepending the root path to the DOCUMENT_ROOT variable that's passed to FastCGI, so I tried to override it in the location block like this:

        fastcgi_param DOCUMENT_ROOT /www/public/;
        fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;

    but I still get the same error, and the debug log shows the full, unchrooted path being sent to PHP-FPM. What am I missing to get this to work?
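    A hedged sketch of the usual fix for chrooted FPM: PHP-FPM resolves SCRIPT_FILENAME inside its chroot, so the path must be absolute with the chroot as "/" (the www/public form with no leading slash will not resolve), and the stock /etc/nginx/fastcgi_params keeps re-adding its own DOCUMENT_ROOT unless a trimmed copy is used (fastcgi_params_chroot below is a hypothetical name for such a copy):

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            fastcgi_index index.php;
            # Paths exactly as PHP-FPM sees them inside the chroot
            # /var/www/clients/domain.com -- note the leading slash.
            fastcgi_param SCRIPT_FILENAME /www/public$fastcgi_script_name;
            fastcgi_param DOCUMENT_ROOT /www/public;
            # A copy of fastcgi_params that does not redefine
            # the two parameters set above.
            include /etc/nginx/fastcgi_params_chroot;
        }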

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need nginx to serve a file relative to the document root if it exists, and fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;

            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream is not serving the contents of the URI directly, but instead returning X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem, though, is that nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force nginx to serve the file relative to the right document root? According to the X-Sendfile documentation the returned path should be relative, so my upstream is doing the right thing.
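    One hedged observation: the mangled path in the error log ("htmlanimals/...") is nginx concatenating its compiled-in root with the header value verbatim, which is what happens when the X-Accel-Redirect URI has no leading slash. nginx treats the value as the URI for an internal redirect, so it needs to start with "/" and land in a location block. A sketch of the receiving side under that assumption (the /protected/ prefix is made up):

        # The upstream would send:
        #   X-Accel-Redirect: /protected/animals/kitten.jpg__100x100__crop.jpg
        location /protected/ {
            internal;                 # reachable only via internal redirect
            alias /var/www/nginx/;    # /protected/foo -> /var/www/nginx/foo
        }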

  • Error with procmail script to use Maildir format

    - by bradlis7
    I have this code in /etc/procmailrc:

        DROPPRIVS=yes
        DEFAULT=$HOME/Maildir/

        :0
        * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
        { }

        :0 E
        {
            # Bail out if directory could not be created
            EXITCODE=127
            HOST=bail.out
        }

        MAILDIR=$HOME/Maildir/

    But when the directory already exists, sometimes it sends a return email with this error: "554 5.3.0 unknown mailer error 127". The email still gets delivered, mind you, but an error code goes back to the sending user as well. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution. I found this block of code in multiple places across the net, but couldn't really find why this error was coming back to me. It seems to happen when I send an email to a local user. Sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, in case it was messing up the forwarding, but it did not seem to affect anything. Is the line starting with * ? /usr/bin/test a problem? The * introduces a condition (normally a regex), but the ? makes procmail run the command and use its exit code, correct? What is the exit code being compared against? Or is it just testing for success? Do I need a space between the two blocks? Thanks for the help.
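    A hedged alternative that sidesteps the bail-out recipe entirely: mkdir -p exits 0 whether or not the directory already exists, and creating the cur/new/tmp triple up front covers maildir delivery in case procmail does not build those subdirectories itself (an assumption; the trailing slash on $DEFAULT is what selects maildir-style delivery):

        DROPPRIVS=yes
        DEFAULT=$HOME/Maildir/

        # mkdir -p is quiet and succeeds when the tree already exists,
        # so no E-flag recipe (and no EXITCODE=127 bounce) is needed.
        :0
        * ? /bin/mkdir -p $HOME/Maildir/cur $HOME/Maildir/new $HOME/Maildir/tmp
        { }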

  • Best way to mount 3-4 monitors like this?

    - by jasondavis
    I just purchased two HP 2009m widescreen monitors. They are not the biggest thing on the block (they are 19-20" and only around $150-200), so I think they are perfect. I bought two of them just to make sure I like them, with the full intention of purchasing more to make either a triple or quad display. Now I am stuck trying to decide. If I purchase one more to have a triple display, I would just wrap the third monitor to either the right or left side; I could most likely do this without a mount pretty easily. If I decide to go with two more monitors to make a quad display, then I would like to add the two new monitors directly above the two that I have now, making a grid two wide and two high. I have posted a few photos below showing the two I have; you will notice that I have them tilted inwards to make more of a "V" shape instead of them being side by side and straight. Now, if I decide to make the grid of four, then I will need to buy or build a stand to hold them all tightly together (no whitespace or gap between the grid of monitors), but I would still like both rows to angle inward to make the slight "V". Do you know of any existing stands I could purchase that would hold all four monitors without forcing them to be straight, without the "V" shape? Any tips appreciated, please. They do have holes in the back for VESA mounts. A few photos... (they are from an iPhone and the lighting made them not very good, but you can see what I am working with here)

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

        D0: 500Gb (System, Boot)
        D1: 1Tb
        D2: 500Gb
        D3: 250Gb

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

        D0 P1: 100Mb: System-Reserved Boot
        D0 P2: 50Gb:  Virtual machine VMDKs for OS
        D0 P3: 350Gb: Data
        D1 P1: 100Mb: System-Reserved Boot
        D1 P2: 50Gb:  Virtual machine VMDKs for OS
        D1 P3: 800Gb: Data
        D2 P1: 450Gb: Data
        D3 P1: 200Gb: Data

    This will result in: a mirrored boot partition, a mirrored operating system, mirrored virtual machine OS disks, and four partitions for data. In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

        SYSTEM  100Mb  the small boot partition created by the Windows 7 installer (RAID-1)
        HOST    50Gb   the Windows 7 partition (RAID-1)
        GUESTS  50Gb   virtual machine guest VMDKs (RAID-1)
        VG1     900Gb  volume group consisting of a RAID-5 and two RAID-1s
        VG2     300Gb  volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

  • PostgreSQL: No space left on device

    - by pstanton
    Postgres is reporting that it is out of disk space while performing a rather large aggregation query:

        Caused by: org.postgresql.util.PSQLException: ERROR: could not write block 31840050 of temporary file: No space left on device
          at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
          at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
          at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:192)
          at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)
          at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)
          at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)
          at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:189)
          ... 8 more

    However the disk has quite a lot of space:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             386G  123G  243G  34% /
        udev                  5.9G  172K  5.9G   1% /dev
        none                  5.9G     0  5.9G   0% /dev/shm
        none                  5.9G  628K  5.9G   1% /var/run
        none                  5.9G     0  5.9G   0% /var/lock
        none                  5.9G     0  5.9G   0% /lib/init/rw

    The query is doing the following:

        INSERT INTO summary_table
        SELECT t.a, t.b, SUM(t.c) AS c, COUNT(t.*) AS count, t.d, t.e,
               DATE_TRUNC('month', t.start) AS month,
               tt.type AS type, FALSE, tt.duration
        FROM detail_table_1 t, detail_table_2 tt
        WHERE t.trid=tt.id
          AND tt.type='a'
          AND DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')>=23
           OR DATE_PART('hour', t.start AT TIME ZONE 'Australia/Sydney' AT TIME ZONE 'America/New_York')<13
        GROUP BY month, type, t.a, t.b, t.d, t.e, FALSE, tt.duration

    Any tips?
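    Two hedged things to check. First, PostgreSQL writes sort/hash spill files under the cluster's data directory (base/pgsql_tmp) or under temp_tablespaces, and that filesystem may not be the roomy one shown by df. Second, the unparenthesized AND/OR mix means the OR branch applies on its own to nearly every row, which can balloon the amount spilled; parentheses around the two DATE_PART tests are probably what was intended.

        -- Where would temporary files actually land?
        SHOW data_directory;     -- the filesystem holding <data>/base/pgsql_tmp
        SHOW temp_tablespaces;   -- empty means the default tablespace is used

        -- Likely intended filter (an assumption about intent):
        -- WHERE t.trid = tt.id AND tt.type = 'a'
        --   AND ( DATE_PART(...) >= 23 OR DATE_PART(...) < 13 )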

  • Xen guests accessing LUNs

    - by mechcow
    We are using RHEL 5.3 with a Clarion SAN attached by FC. Our situation is that we have a number of LUNs presented to hosts, and we want to dynamically present the LUNs to Xen guests. We are not sure what the best-practice approach is to set this up. The Xen guests will form a cluster together and need the LUNs only for data partitions, i.e. when they are actively running services. So one approach would be to always present all disks to all Xen guests, and then rely upon the cluster software and mount logic not to mount the disk twice in two locations. This sounds kind of risky and also is not very secure (one cracked guest can see/destroy all the data). Another approach would be to dynamically add and remove the disks from the Xen guests at the dom0 level (using xm block-attach). This could work, but sounds slightly complicated; I'm wondering whether Red Hat Cluster Suite supports this in some way, or whether there are scripts to do this. Yet another approach would be to have the LUNs terminate at the Xen guests themselves; I'm not sure whether this is technically possible, since the multipathing has to be done at the host level.
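    For reference, the dom0-side commands for the dynamic approach look roughly like this. It is only a sketch: the domain name and device paths are invented, and the cluster tooling would still have to coordinate which node holds each LUN at any moment.

        # Attach a multipathed LUN to the running guest "node1" as xvdb, writable
        xm block-attach node1 phy:/dev/mapper/mpath-lun1 xvdb w

        # Detach it again when the service migrates away
        xm block-detach node1 xvdb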

  • How to whitelist external access to an internal webserver via Cisco ACLs?

    - by Josh
    This is our company's internet gateway router, a Cisco 2691. This is what I want to accomplish: all employees need unrestricted access to the internet (I've blocked Facebook with an ACL, but other than that, full access), and there is an internal webserver that should be accessible from any internal IP address but only a select few external IP addresses. Basically, I want to whitelist access from outside the network. I don't have a hardware firewall appliance. Until now, the webserver has not needed to be accessible externally; in any case, the occasional VPN has sufficed when needed. As such, the following config has been sufficient:

        access-list 106 deny ip 66.220.144.0 0.0.7.255 any
        access-list 106 deny ip ... (and so on, for the Facebook blocking)
        access-list 106 permit ip any any
        !
        interface FastEthernet0/0
         ip address x.x.x.x 255.255.255.248
         ip access-group 106 in
         ip nat outside

    (fa0/0 is the interface with the public IP.) However, when I add...

        ip nat inside source static tcp 192.168.0.52 80 x.x.x.x 80 extendable

    ...in order to forward web traffic to the webserver, that just opens it up entirely. That much makes sense to me. This is where I get stumped, though. If I add a line to the ACL to explicitly permit (whitelist) an IP range, something like this:

        access-list 106 permit tcp x.x.x.x 0.0.255.255 192.168.0.52 0.0.0.0 eq 80

    ...how do I then block other external access to the webserver while still maintaining unrestricted internet access for internal employees? I tried removing the access-list 106 permit ip any any. That ended up being a very short-lived config :) Would something like access-list 106 permit ip 192.168.0.0 0.0.0.255 any on an "outside-inbound" work?
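    A sketch of the usual ordering, with one caveat worth hedging: on an ACL applied inbound on the outside interface, packets are matched before the static NAT translation, so the webserver rules reference the public address (x.x.x.x below); a.a.a.a stands in for the whitelisted external range.

        ! Whitelisted external range may reach the forwarded port (first match wins)
        access-list 106 permit tcp a.a.a.a 0.0.0.255 host x.x.x.x eq 80
        ! Everyone else is cut off from the webserver only
        access-list 106 deny   tcp any host x.x.x.x eq 80
        ! Existing Facebook blocks, then the catch-all that keeps
        ! employees' return traffic flowing
        access-list 106 deny   ip 66.220.144.0 0.0.7.255 any
        access-list 106 permit ip any any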

  • How can I get more low memory with the following setup?

    - by user539484
    Modules using memory below 1 MB:

        Name           Total      =   Conventional  +   Upper Memory
        --------  --------------     --------------    --------------
        MSDOS      14 317   (14K)     14 317   (14K)         0    (0K)
        HIMEM       1 120    (1K)      1 120    (1K)         0    (0K)
        EMM386      3 120    (3K)      3 120    (3K)         0    (0K)
        OAKCDROM   36 064   (35K)     36 064   (35K)         0    (0K)
        POWER          80    (0K)         80    (0K)         0    (0K)
        NLSFUNC     2 784    (3K)      2 784    (3K)         0    (0K)
        COMMAND     2 928    (3K)      2 928    (3K)         0    (0K)
        MSCDEX     15 712   (15K)     15 712   (15K)         0    (0K)
        SMARTDRV   30 384   (30K)     13 984   (14K)    16 400   (16K)
        KEYB        6 752    (7K)      6 752    (7K)         0    (0K)
        MOUSE      17 296   (17K)     17 296   (17K)         0    (0K)
        DISPLAY     8 336    (8K)          0    (0K)     8 336    (8K)
        SETVER        512    (1K)          0    (0K)       512    (1K)
        DOSKEY      4 144    (4K)          0    (0K)     4 144    (4K)
        POWER       4 672    (5K)          0    (0K)     4 672    (5K)
        Free      552 944  (540K)    539 088  (526K)    13 856   (14K)

    Memory Summary:

        Type of Memory       Total    =      Used     +      Free
        ----------------  ----------     ----------      ----------
        Conventional         653 312        114 224         539 088
        Upper                 47 920         34 064          13 856
        Reserved                   0              0               0
        Extended (XMS)*   64 898 256      2 671 824      62 226 432
        ----------------  ----------     ----------      ----------
        Total memory      65 599 488      2 820 112      62 779 376

        Total under 1 MB     701 232        148 288         552 944

        Total Expanded (EMS)        33 947 648   (33 152K)
        Free Expanded (EMS)*        33 538 048   (32 752K)

        * EMM386 is using XMS memory to simulate EMS memory as needed.
          Free EMS memory may change as free XMS memory changes.

        Largest executable program size       538 976   (526K)
        Largest free upper memory block         7 488     (7K)
        MS-DOS is resident in the high memory area.

    I'm running MS-DOS 6.22 on VMware virtual hardware. This is the memory state after a MemMaker pass, so I'm looking for optimization beyond MemMaker. Note: the NLS drivers (DISPLAY, KEYB, NLSFUNC) are essential for me. Thanks to @mtone for the valuable reminder about MSCDEX /E, which gave me 16 KiB of low memory (see the diff)!
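    For whatever additional headroom is available, the classic next step is to enlarge the UMB pool and load the small TSRs high. A sketch only, since the usable I= ranges depend on the adapter map (B000-B7FF is the mono video region, often reclaimable when nothing uses MDA/Hercules), and with only a 7K largest free UMB as shown, only the small drivers are candidates:

        REM CONFIG.SYS
        DEVICE=C:\DOS\HIMEM.SYS
        DEVICE=C:\DOS\EMM386.EXE RAM I=B000-B7FF
        DOS=HIGH,UMB
        DEVICEHIGH=C:\DOS\SETVER.EXE

        REM AUTOEXEC.BAT (load the small resident programs high)
        LH C:\DOS\NLSFUNC.EXE
        LH C:\DOS\DOSKEY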

  • Squid Authentication & streaming

    - by Steve Butler
    I've got Squid set up using Kerberos authentication. I'm also using squidGuard as a URL redirector to block out the usual nastiness of the web. There are some sites, though, that we allow certain users to visit, and others not. This all works well, as long as no streaming is involved. From what I can determine from the Squid logs and the Wireshark traces I've done, when the initial request to stream is sent, everything is good: the authenticated username is sent with the request to squidGuard. The problem is that on subsequent traffic the username is not sent to squidGuard, causing it to be blocked based on the default policy. I've tried using Squid's built-in allow/deny stuff, but it's relatively clunky, and so far squidGuard has been pretty easy and fast. Here come the questions:

        1. How do I get Squid to pass the username on all requests? (Something tells me this isn't the best way.)
        2. How do I get squidGuard to see that traffic is authenticated to a specific user even when a username isn't passed?
        3. Is there any other way of accomplishing this?

    A few details that may be of importance: I'm using a list of users stored in a text file for squidGuard to compare against; I'm using full Kerberos auth with Squid; CentOS 6.0, Squid 3.1.4, squidGuard 1.3.
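    If the built-in route gets another look, the rules are at least compact; a sketch, where both file paths are invented and each file holds one entry per line:

        # Users who may reach the otherwise-restricted sites
        acl special_users proxy_auth "/etc/squid/special_users.txt"
        acl special_sites dstdomain "/etc/squid/special_sites.txt"

        http_access allow special_users special_sites
        http_access deny special_sites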

  • How to prevent blocking HTTP auth popups on Firefox restart with many tabs open

    - by Glen S. Dalton
    I am using the latest Firefox with Tab Mix Plus and TabGroups Manager. I have maybe 50 or 100 tabs open in different tab groups. When I shut down Firefox and start it again, all tabs and tab groups are perfectly rebuilt. But I also have many pages open that sit behind standard HTTP auth, and these pages all request their usernames and passwords. So during startup Firefox pops up all these pages' HTTP auth windows, and they block everything else in Firefox; they are like modal windows. (I am involved in website development, and the beta versions are behind Apache HTTP auth.) I have to click the OK button many times in the popups before I can do anything, even though all the usernames and passwords are already filled in. (The Firefox taskbar entry blinks, the Firefox window heading also blinks, and focus switches back and forth, which also annoys me. And sometimes the popups do not react to my clicks, because Firefox is maybe just switching focus somewhere else. This is the worst.) I want a plugin or some way to skip those popups. There are some plugins I tried some time ago, but they did not do what I need, because they require a mouse click for each login, which is no improvement over the situation as it already is. This is not about password storage (Firefox already stores them), but of course, if some password-storing plugin could heal this, it would be great.

  • Keyboard's media keys are blocked by a program

    - by Mike Hanson
    I've got a Microsoft Natural Ergonomic Keyboard 4000. In addition to the regular keys, it's also got keys for Web/Home, Search, Mail, Favorites (5), Calculator, and Media functions (Mute, Volume Up/Down, and Play/Pause). Everything works most of the time, and the exception is rather odd. I use a programming system called Clarion. When that has focus, the media keys don't work. (All the others still do.) I've also discovered that programs I create using Clarion also block the media keys (only when they have focus). This indicates that it's probably something in Clarion's run-time library (RTL) that's causing the trouble. The keys will work if I click on a non-Clarion window before hitting the media key, but that's an undesirable hassle. The odd thing is that I have many colleagues with the same keyboard, and they have no problem. When I recently upgraded from Vista Professional to Win7 Ultimate, I noticed that various things "appear" differently. For example, on my old system, when I changed or muted the volume, the volume-bar visualization always appeared at the bottom right of the screen; now it doesn't appear in certain programs, even when the keys work. This indicates an order of precedence for visual elements, and I'm fairly certain a similar order of precedence exists for keyboard hooks. Depending on how the hooks are defined, and the order in which they're applied, it would seem that sometimes the IntelliType drivers don't see the media keystrokes. The media keys probably behave differently than the rest of the "special" keys, because they are more of a standard across all keyboards, so perhaps they are handled by a different driver hooking mechanism. Does anyone have any suggestions of how I might fix this problem? Is there some way to change the order of hooks? Delay the loading of the IntelliType driver? Thanks in advance!

  • Change the number of consecutive frequent SSH logins before a user is temporarily blocked

    - by Kenneth
    My server currently refuses a user's login temporarily (for maybe ~20 min) if the user makes 3 consecutive frequent SSH logins. Can I change this behaviour? Say, relax the definition of "frequent" from 'within 5 sec' to 'within 10 sec', or increase the number of consecutive logins from 3 to 5? Thanks.

    Added: Ah, now I think the problem is not with SSH. I just tried on another newly installed server: consecutive successful logins won't block the user. I have no sudo permission on the server I mentioned above. Now I suspect this behaviour is caused by the firewall on the system. Thanks for everyone's comments.

    Added 2: Ah, after some searching, I think the server is using /sbin/iptables to do it, as I can see the iptables program is there even though I don't have permission to list the rules. Thanks everyone, special thanks to jaume and Mark!
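    If it is indeed iptables, this kind of throttling is typically done with the recent match, and both knobs live in one rule. A sketch (an assumption about how this particular server is configured, and changing it requires root):

        # Track new SSH connections per source address
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
                 -m recent --name SSH --set

        # Drop a source once it opens a 5th new connection within
        # 10 seconds (the question's old limit was 3 within 5 seconds)
        iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
                 -m recent --name SSH --update --seconds 10 --hitcount 5 -j DROP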

  • Joining an Ubuntu 14.04 machine to active directory with realm and sssd

    - by tubaguy50035
    I've tried following this guide to set up realmd and sssd with Active Directory: http://funwithlinux.net/2014/04/join-ubuntu-14-04-to-active-directory-domain-using-realmd/

    When I run the command

        realm --verbose join domain.company.com --user-principal=c-u14-dev1/[email protected] --unattended

    everything seems to connect. My sssd.conf looks like the following:

        [nss]
        filter_groups = root
        filter_users = root
        reconnection_retries = 3

        [pam]
        reconnection_retries = 3

        [sssd]
        domains = DOMAIN.COMPANY.COM
        config_file_version = 2
        services = nss, pam

        [domain/DOMAIN.COMPANY.COM]
        ad_domain = DOMAIN.COMPANY.COM
        krb5_realm = DOMAIN.COMPANY.COM
        realmd_tags = manages-system joined-with-adcli
        cache_credentials = True
        id_provider = ad
        krb5_store_password_if_offline = True
        default_shell = /bin/bash
        ldap_id_mapping = True
        use_fully_qualified_names = True
        fallback_homedir = /home/%d/%u
        access_provider = ad

    My /etc/pam.d/common-auth looks like this:

        auth    [success=3 default=ignore]  pam_krb5.so minimum_uid=1000
        auth    [success=2 default=ignore]  pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]  pam_sss.so use_first_pass
        # here's the fallback if no module succeeds
        auth    requisite                   pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth    required                    pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth    optional                    pam_cap.so

    However, when I try to SSH into the machine with my Active Directory user, I see the following in auth.log:

        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: Invalid user nwalke from myip
        Aug 21 10:35:59 c-u14-dev1 sshd[11285]: input_userauth_request: invalid user nwalke [preauth]
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_krb5(sshd:auth): authentication failure; logname=nwalke uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): check pass; user unknown
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=myiphostname user=nwalke
        Aug 21 10:36:10 c-u14-dev1 sshd[11285]: pam_sss(sshd:auth): received for user nwalke: 10 (User not known to the underlying authentication module)
        Aug 21 10:36:12 c-u14-dev1 sshd[11285]: Failed password for invalid user nwalke from myip port 34455 ssh2

    What do I need to do to allow Active Directory users the ability to log in?
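    One detail stands out, offered as a hedged guess: with use_fully_qualified_names = True, sssd only recognizes names of the form [email protected], so a bare nwalke is exactly the "User not known" in the log. Two ways to test that theory:

        # 1) Log in with the fully qualified name
        ssh [email protected]@c-u14-dev1

        # 2) Or allow short names by editing the domain section of sssd.conf:
        #      [domain/DOMAIN.COMPANY.COM]
        #      use_fully_qualified_names = False
        #    then restart sssd
        sudo service sssd restart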

  • How do you monitor SSD wear in Windows when the drives are presented as 'generic' devices?

    - by MikeyB
    Under Linux, we can monitor SSD wear fairly easily with smartmontools, whether the drive is presented as a normal block device or a generic device (which happens when the drive has been hardware-RAIDed by certain controllers such as the one on the IBM HS22). How can we do the equivalent under Windows? Does anyone actually use smartmontools? Or are there other packages out there? The problem is that SCSI generic devices just don't show up in Windows. If the drives aren't RAIDed we can see them fine.

    How I'd do it in Linux:

        sles11-live:~ # lsscsi -g
        [1:0:0:0]  disk  SMART     USB-IBM           8989  /dev/sda  /dev/sg0
        [2:0:0:0]  disk  ATA       MTFDDAK256MAR-1K  MA44  -         /dev/sg1
        [2:0:1:0]  disk  ATA       MTFDDAK256MAR-1K  MA44  -         /dev/sg2
        [2:1:8:0]  disk  LSILOGIC  Logical Volume    3000  /dev/sdb  /dev/sg3

        sles11-live:~ # smartctl -l ssd /dev/sg1
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size         Value  Description
          7  =====  =                =  == Solid State Device Statistics (rev 1) ==
          7  0x008  1              26~  Percentage Used Endurance Indicator
                                        |_ ~ normalized value

        sles11-live:~ # smartctl -l ssd /dev/sg2
        smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32.49-0.3-default] (local build)
        Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

        Device Statistics (GP Log 0x04)
        Page Offset Size         Value  Description
          7  =====  =                =  == Solid State Device Statistics (rev 1) ==
          7  0x008  1               3~  Percentage Used Endurance Indicator
                                        |_ ~ normalized value
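    One avenue that may be worth testing, hedged because it depends on the RAID controller: the Windows build of smartmontools can address drives behind some controllers directly, for example via CSMI device names, so something like the following might reach the physical SSDs even when Windows only shows the logical volume:

        rem List the devices smartmontools can see on this machine
        smartctl --scan

        rem Query drive 0 behind the first CSMI-capable controller
        smartctl -l ssd /dev/csmi0,0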

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format-and-reinstall; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everybody is connected with RJ-45 wired Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network, and its name is no longer in the machine list in the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC. But from B-Ubuntu and C-MAC, I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine, and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of Avahi and mDNS. Those packages are running. I checked multiple config files for Samba, Avahi and mDNS to see whether they look like the examples on the web and also similar to what I find on the working B-Ubuntu machine; they are the same. I did multiple service restarts of Samba and Avahi, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the updates I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it can be? Where should I look next?
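    Name browsing on a mixed Windows-style LAN usually comes from Samba's nmbd (NetBIOS) rather than Avahi, so a few checks on A-Ubuntu may narrow it down; a sketch, assuming stock 12.04 packages:

        # Is the NetBIOS name service actually running and announcing?
        sudo service nmbd status
        sudo service nmbd restart

        # What name and workgroup does Samba think it has?
        testparm -s 2>/dev/null | grep -i -e netbios -e workgroup

        # Can the name be resolved over broadcast from B-Ubuntu?
        nmblookup A-Ubuntu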

  • If spambots are filling out forms on my website, should I mark the mail as spam?

    - by rob
    Apparently spambots have discovered the contact forms on my website. I'm considering adding a CAPTCHA, but in the meantime I've been getting dozens of completed website forms in my Gmail account each day. I know I can configure a filter to always allow e-mails from my website to be delivered (i.e., "never send to spam"), but I'm wondering whether there are also larger implications. At first, I was marking the spam forms as spam because I figured it would block the sender's e-mail address from getting through again. But since then, I've given it some more thought. I know many spam filters use Bayesian filtering and other techniques to identify spam based on its content. If I mark all these e-mails as spam, will legitimate e-mails from my website be marked as spam too? Even if I stop marking the messages as spam, I can see another potential problem. My form is pretty standard (in fact, it's probably very similar to other online forms). If I mark my spambot-submitted website forms as spam, will Gmail begin to mark other people's similar-looking messages as spam?

  • New XPC: No video, no ethernet link, but drive spins

    - by Mike Pennington
    I bought a Shuttle XPC SH67H3 with integrated video. I installed:

        - An Intel i5-2450P
        - 16GB DDR3 RAM
        - A SATA hard drive from my old Linux server that is still bootable

    I have both power connectors plugged into the motherboard. I realize that the Intel i5-2450P doesn't have video capabilities; however, the drive spins like it's doing something useful. It seems like I should get an Ethernet link light when I fire this up. I plan to run this headless anyway, so it would be really nice if I could figure out how to run this without a video card at all. I know the IP address and login info for the Linux install on the disk. I plugged in speakers, but get no BIOS beeps when I power it up. Shuttle's BIOS manual has nothing in there that indicates I should have problems in this configuration. My questions: Is there a reason that the missing video card would block usage of the Ethernet port? Are there settings on the motherboard/BIOS I can change to get this working?

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external usb hdd?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created using the dd command:

        dd if=/dev/sda3 of=/path/to/disk.img

    I deleted the home partition via GParted in order to enlarge my root partition, then recreated the /dev/sda3 partition, which is smaller than the one I backed up to the image. I was wondering: since I have a 2TB external HDD, would it be possible to mount my backed-up image from the external HDD and then copy the files into the /home directory? Since the external HDD would already be in a mounted state, I'm unsure whether this is a good idea, mounting on a mounted device. I'm running Slackware 13.37 (64-bit) with ext4 on all the partitions, and I resized the root partition with the GParted live CD. I tried

        mount -t ext4 /path/to/disk.img /mnt/image -o loop

    It gave me an fs error (wrong fs type, bad option, bad superblock on /dev/loop0). Then I did dmesg | tail, which outputs:

        EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks)

    I have no idea what to do. I want to restore my /home data from the image I backed up.
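    Mounting a loop image that lives on an already-mounted external drive is fine in itself; the kernel message points at something else. The ext4 superblock inside the image claims about 29 million 4K blocks (roughly 110 GiB), while the image file only supplies about 1.7 million (roughly 6.4 GiB), which suggests the copy was truncated. A couple of hedged checks, with paths as in the question:

        # How big is the image file really?
        ls -lh /path/to/disk.img

        # What geometry does the ext4 superblock inside the image claim?
        dumpe2fs -h /path/to/disk.img | grep -i -e 'block count' -e 'block size'

        # If the byte count from ls is far below (block count x block size),
        # the dd was interrupted or the target filesystem cut the file short.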

  • nginx rewrite or internal redirection cycle

    - by gyre
    I'm banging my head against a table trying to figure out what is causing the redirection cycle in my nginx configuration when trying to access a URL which does not exist. The configuration goes as follows:

        server {
            listen 127.0.0.1:8080;
            server_name .somedomain.com;
            root /var/www/somedomain.com;

            access_log /var/log/nginx/somedomain.com-access.nginx.log;
            error_log /var/log/nginx/somedomain.com-error.nginx.log debug;

            location ~* \.php.$ {
                # Proxy all requests with an URI ending with .php*
                # (includes PHP, PHP3, PHP4, PHP5...)
                include /etc/nginx/fastcgi.conf;
            }

            # all other files
            location / {
                root /var/www/somedomain.com;
                try_files $uri $uri/ ;
            }

            error_page 404 /errors/404.html;
            location /errors/ {
                alias /var/www/errors/;
            }

            # this loads custom logging configuration which disables favicon error logging
            include /etc/nginx/drop.conf;
        }

    This domain is a simple static HTML site, just for testing purposes. I'd expect the error_page directive to kick in in response to PHP-FPM not being able to find the given files, as I have fastcgi_intercept_errors on; in the http block and have error_page set up, but I'm guessing the request fails even before that, somewhere in the internal redirects. Any help would be much appreciated.
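    A hedged observation about the try_files line: without a final fallback, try_files $uri $uri/ ends with an internal redirect to the directory form, and a missing directory or index can then bounce between try_files and error_page. Giving try_files an explicit terminal action is the usual fix:

        location / {
            root /var/www/somedomain.com;
            # If neither the file nor the directory exists,
            # return 404 directly instead of redirecting again.
            try_files $uri $uri/ =404;
        }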

  • Debian/OVH: How to configure multiple failover IPs on the same Xen (Debian) virtual machine?

    - by D.S.
    I have a problem on a Xen virtual machine (running the latest Debian) when I try to configure a second failover IP address. OVH reports that my IP is misconfigured, and they complain that they receive a massive quantity of ARP packets from these IPs, so they are going to block my IP unless I fix the issue. I suspect there's a routing problem, but I don't know (and can't find any useful info on the provider's website, and their support doesn't provide a valid solution, just bounces me to their online, useless, guides). My /etc/network/interfaces looks like this:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address AAA.AAA.AAA.AAA
            netmask 255.255.255.255
            broadcast AAA.AAA.AAA.AAA
            post-up route add 000.000.000.254 dev eth0
            post-up route add default gw 000.000.000.254 dev eth0

        # Secondary NIC
        auto eth0:0
        iface eth0:0 inet static
            address BBB.BBB.BBB.BBB
            netmask 255.255.255.255
            broadcast BBB.BBB.BBB.BBB

    And the routing table is:

        Kernel IP routing table
        Destination      Gateway          Genmask          Flags Metric Ref  Use Iface
        000.000.000.254  0.0.0.0          255.255.255.255  UH    0      0      0 eth0
        0.0.0.0          000.000.000.254  0.0.0.0          UG    0      0      0 eth0

    In these examples (true IP addresses are replaced by fake ones, guess why :)), 000.000.000.000 is my main server's IP address (dom0), 000.000.000.254 is the default gateway OVH recommends, AAA.AAA.AAA.AAA is the first failover IP and BBB.BBB.BBB.BBB is the second one. I need both AAA.AAA.AAA.AAA and BBB.BBB.BBB.BBB to be publicly reachable from the Internet and point to my domU, and to be able to access the Internet from inside the virtual machine (domU). I am using eth0 and eth0:0 because, per OVH support, I have to assign both IPs to the same MAC address and then create a virtual eth0:0 interface for the second IP. Any suggestion? What am I doing wrong? How can I stop OVH complaining about ARP flood? Many thanks in advance, DS

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution, but I keep getting the following errors showing up in my error log:

        [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar
        [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php
        [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/
        [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar

    I don't really want these errors to show up, but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            #ErrorLog /var/log/apache2/error.log
            #LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>
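    These denials are logged at the error level, so one blunt but common approach is to raise the vhost's LogLevel; a sketch, with the trade-off that anything below critical is silenced for this vhost too:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            # Keep an error log, but record only critical problems;
            # "client denied" messages are level error and drop out.
            ErrorLog /var/log/apache2/error.log
            LogLevel crit
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>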
