Search Results

Search found 17977 results on 720 pages for 'someone smiley'.

  • Change A Password

    - by Thomas
    I have a non-domain machine that I use regularly with our company's domain resources over VPN. I switched to Windows 8 (fresh install), and the "Change a password" option disappeared from the Ctrl-Alt-Del screen. I can't seem to find anything about this by searching, or any other way to reach that password-change dialog. I tried running the .reg file from http://www.sevenforums.com/tutorials/63014-ctrl-alt-del-screen-add-remove-change-password.html with no luck. I also tried disabling "Remove Change Password" via gpedit.msc. I could change the password from my domain laptop, but I like to do it on this machine because it updates all my saved copies of those credentials. My local account is tied to my Hotmail account, if that matters.

    Updates: It's an administrator account. I apologize for stating this was an upgrade; it was a fresh install to a diff't drive. 64-bit Pro install. The bounty's almost up: if someone can just confirm that "Change a password..." should or should not be present on a non-domain, Live-tied Windows 8 install, I'll be satisfied that I can or cannot expect to fix it.
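
    (For reference, the tweak that the linked .reg file applies is generally the "Remove Change Password" policy value. A minimal sketch of what such a file typically sets is below; the exact contents are an assumption based on the standard policy location, not a copy of the linked download. A value of 0 shows "Change a password" on the Ctrl-Alt-Del screen, 1 hides it.)

        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System]
        "DisableChangePassword"=dword:00000000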

    Read the article

  • Can ZFS ACL's be used over NFSv3 on host without /etc/group?

    - by Sandra
    Question at the bottom.

    Background: My server setup is shown below. I have an LDAP host with a group called group1 that contains user1 and user2. The NAS is FreeBSD 8.3 with ZFS, with one zpool and a volume. serv1 gets /etc/passwd and /etc/group from the LDAP host. serv2 gets /etc/passwd from the LDAP host, but its /etc/group is local and read-only, so it doesn't know anything about which groups the LDAP host has. Both servers connect to the NAS with NFSv3.

    What I would like to achieve: I would like to be able to create/modify groups in LDAP to allow/deny users read/write access to NFSv3-shared directories on the NAS. Example: group1 should have read/write to /zfs/vol1/project1 and nothing more.

    Question: The problem is that serv2 doesn't have an LDAP-controlled /etc/group file. So the only way I can think of to solve this is to use ZFS permissions with inheritance, but I can't figure out how, or what permissions I should set. Does someone know if this can be solved at all, and if so, any suggestions?

        +----------------------+
        |         LDAP         |
        | group1: user1, user2 |
        +----------------------+
           |        |        |
           |ldap    |ldap    |ldap
           |        v        |
           |  +-----------+  |
           |  |    NAS    |  |
           |  | /zfs/vol1 |  |
           |  +-----------+  |
           |     ^     ^     |
           |     |nfs3 |nfs3 |
           v     |     |     v
        +-----------------------+    +----------------------------+
        |         serv1         |    |           serv2            |
        | /etc/passwd from LDAP |    | /etc/passwd from LDAP      |
        | /etc/group from LDAP  |    | /etc/group local/read only |
        +-----------------------+    +----------------------------+
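
    (A hedged sketch of the FreeBSD syntax for an inheriting NFSv4-style ACL on ZFS, in case that is the direction taken. group1 and the path come from the question; the modify_set permission set is only an example, and whether this actually gets around the missing /etc/group on serv2 over NFSv3 is exactly the open question.)

        # grant group1 modify rights, inherited by new files (f) and directories (d)
        setfacl -m g:group1:modify_set:fd:allow /zfs/vol1/project1
        # inspect the resulting ACL
        getfacl /zfs/vol1/project1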

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40-character random strings (alphanumerics only -- example below). Today, for the first time, we ran into an issue with this approach. The following url:

        http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS

    is returning a 404 error. This url format has been working for 6 months now without an issue, and other urls following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", there is no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks!

    Update: Adding a slash to the end of the url allowed it to be handled properly... Would still like to get to the bottom of the issue though.

    Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any url that ended in, say, "css" or "js" and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss) was interpreted as a request for a file, and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:

        location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {

    with this:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {

    And now urls ending in css, js, png, etc., are properly routed through the front controller. Hopefully that helps someone else out.
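
    (For anyone adapting this, a hedged sketch of the kind of static-asset block the poster describes; the access_log and expires values are illustrative assumptions, not the poster's actual settings.)

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            access_log off;    # disable logging for static assets
            expires    max;    # send far-future expiration headers
        }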

    Read the article

  • Internet setup for my office

    - by prakash
    We have two internet connections to our office, and our current setup is like this: the connections require PPPoE login, so I take each cable and plug it into a wifi router, configure the router to log in to the PPPoE service, and then run a cable from the router to a switch and distribute the internet throughout the office. Each WAN router is connected to a different switch and is distributed to users accordingly; we have around 40 users in the office.

    The problem with this setup is that it is really hard to monitor: I'm not able to see who is hogging internet usage or what he or she is actually using it for. Apart from this, we also have a NAS, which is routed through another switch. Could someone please throw a little light on how I can restructure this setup for easier monitoring and better transparency? We want to set up a single Linux box to which I can connect the two WAN connections and from there distribute them to all our users. I'm looking for a solution where we do not have to invest more than buying a single PC and a couple of NICs.
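
    (A very rough sketch of the Linux-box approach under stated assumptions: interface names ppp0/ppp1 for the two PPPoE links and eth0 for the LAN side are hypothetical, and this only covers forwarding/NAT, not the per-user monitoring part.)

        # enable routing between the LAN NIC and the two WAN links
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # masquerade LAN traffic leaving via either PPPoE interface
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o ppp1 -j MASQUERADE
        # per-host byte counts can then be read with iptables -L -v -n or a tool such as ntop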

    Read the article

  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
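
    (Postfix's built-in sender address verification does roughly this probe-back check. A minimal sketch, assuming Postfix 2.1 or later; the restriction list is an illustration, not a complete policy.)

        # main.cf
        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unverified_sender
        # reject outright instead of the default 450 temporary failure
        unverified_sender_reject_code = 550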

    Read the article

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files.

    Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say, I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data-forensics solutions, I found that Autopsy, or more specifically the Sleuth Kit, was the most helpful. So I will accept that as the final answer. I would also like to note, for anyone that comes across this later on, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type (very many, and some being very large) of files I was dealing with. So thanks to all of you that provided suggestions, and I wish you all the best!
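
    (For readers who do end up needing recovery rather than a cd, a rough sketch of the Sleuth Kit workflow the poster mentions; the device name and inode number are placeholders.)

        fls -rd /dev/sdb1                      # recursively list deleted entries and their inode numbers
        icat /dev/sdb1 1234 > recovered.file   # dump the contents of inode 1234 into a new file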

    Read the article

  • Google Apps: MX records for zonefile

    - by 23tux
    Hi everybody, I have a question about using Google Apps for handling email. I don't want to set up an entire mail system on my server, so I decided to use Google Apps. The ownership of my domain is approved, and now I'm trying to change the MX records in the zone file of my domain. But I think I'm doing something wrong; it doesn't work. I want to use mail.mydomain.com as the address of the mail server for POP, SMTP and IMAP. My zone file looks like this:

        $TTL 86400
        @   IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. (
                2011011700   ; serial
                14400        ; refresh
                1800         ; retry
                604800       ; expire
                86400 )      ; minimum
        @          IN NS     robotns3.second-ns.com.
        @          IN NS     robotns2.second-ns.de.
        @          IN NS     ns1.first-ns.de.
        @          IN A      111.111.111.111
        localhost  IN A      127.0.0.1
        www        IN A      111.111.111.111
        ftp        IN CNAME  www
        loopback   IN CNAME  localhost
        mail       IN CNAME  @
        relay      IN CNAME  www
        @          IN MX 10  ALT1.ASPMX.L.GOOGLE.COM.
        @          IN MX 10  ASPMX3.GOOGLEMAIL.COM.
        @          IN MX 10  ASPMX2.GOOGLEMAIL.COM.
        @          IN MX 10  ASPMX.L.GOOGLE.COM.
        @          IN MX 10  ALT2.ASPMX.L.GOOGLE.COM.

    I hope someone can figure out what's wrong with this configuration. When I ping mail.mydomain.org I get an answer from 111.111.111.111 and not from the Google server ALT1.ASPMX.L.GOOGLE.COM. thx, tux
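
    (One hedged observation: the MX records above only control where other mail servers deliver mail for the domain; POP/IMAP/SMTP clients for Google Apps normally talk to Google's own hostnames -- pop.gmail.com, imap.gmail.com, smtp.gmail.com -- rather than mail.mydomain.com. If mail.mydomain.com should nonetheless stop resolving to 111.111.111.111, something along these lines was the common Google Apps pattern at the time; treat it as an assumption to verify against Google's documentation.)

        mail       IN CNAME  ghs.google.com.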

    Read the article

  • Simple, centralized user management on a small LAN - NIS or LDAP?

    - by einpoklum
    I'm setting up a small LAN for my team. It will, for all intents and purposes, not be connected to any external networks. I would like it to have centralized control of user accounts (at least, I think I'd like that; I'm also considering using Puppet, so theoretically I could just push /etc/passwd changes, or something). The number of machines is fixed, but not very small. Mostly they're 'attached' to a single user, but sometimes people work remotely on someone else's box; and there are a couple of servers. I've read this question, but my scenario is much simpler (even simpler than in this question) and I'd like to do something (relatively) quick, with not much hassle, but not a dirty, totally-insecure hack. Is NIS relevant for my scenario? If not, what's the most hassle-free way to set up LDAP (or LDAP+Kerberos) to achieve the same? Notes: I have no experience with setting up either NIS or LDAP. We use Debian-flavored Linux distributions, mainly Kubuntu 12.04 (not my choice, but that's the way it is).
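
    (For the "just push the files with Puppet" option mentioned in passing, a minimal hypothetical sketch; the module name and source path are invented, and distributing account files this way has obvious security caveats.)

        # site.pp (sketch)
        file { '/etc/passwd':
          ensure => file,
          owner  => 'root',
          group  => 'root',
          mode   => '0644',
          source => 'puppet:///modules/accounts/passwd',
        }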

    Read the article

  • Fixed ruby/mysql connection with new libmysql.dll, and broke Apache in the process

    - by jmtoporek
    OK, so a bit of background: all my development has been on a local Windows 7 machine. I had Apache with PHP/MySQL running with no issues. I've been using Ruby (1.9.3 and the latest Rails release, 3.2.9) with the built-in WEBrick server, but had a devil of a time connecting to MySQL. Did some research, updated my libmysql.dll file in C:/Ruby/bin, and it worked! Very happy... except now Apache stopped working. In my attempt to resolve the issue I found an older copy of libmysql.dll, renamed the new file, copied the old file back to C:/Ruby/bin, and Apache works, Ruby does not. So I can take this ass-backwards approach, but obviously this seems pretty stupid. I was surprised that Apache was using the dll file in the Ruby bin folder; I presume this is related to path variables, perhaps? I guess I was hoping someone could direct me as to how I can use one dll file for Apache and another for Ruby. Or if you have some other, smarter approach -- I'm smart enough to follow directions to install Apache from scratch and enable PHP on Windows as well as Ubuntu, but I'm not much of a sysadmin, just a semi-competent web developer.
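
    (A hedged illustration of how to see which copy wins the DLL search, and one way to keep the two apart; all paths here are examples, not the poster's actual layout.)

        :: show every libmysql.dll the current PATH would find, in search order
        where libmysql.dll
        :: one approach: keep PHP's copy next to PHP (e.g. C:\php\libmysql.dll) and Ruby's in C:\Ruby193\bin,
        :: then make sure the PHP directory comes before the Ruby bin directory in the system PATH
        echo %PATH%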

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; there seems to be a lot of disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):

        *** system and process activity since boot ***
         PID    RDDSK    WRDSK   WCANCL  DSK  CMD          1/18
        2176     1.7G     7.3G   854.4M   39  mysqld
         671    1248K     3.0G       0K   13  flush-8:0
         566       0K     1.1G       0K    5  jbd2/sda2-8
        2401   124.2M   529.1M   22408K    3  crond
        2032     2.2G   502.0M       0K   12  nginx
        2360   425.8M   115.3M    4188K    2  httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the hdd (after mysql). From what I found on Google, this could be caused by some ext4-related bug; the current kernel is:

        Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does someone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
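
    (A few hedged things commonly checked when flush and jbd2 dominate writes; the device comes from the output above, the sysctl values are just defaults to compare against, and none of this is presented as the fix.)

        # how aggressively dirty pages are flushed (EL6 defaults are typically 20 and 10)
        cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
        # is /dev/sda2 mounted with atime updates enabled? every read then causes journal writes
        mount | grep sda2
        # a common low-risk mitigation is remounting with relatime or noatime, e.g.:
        mount -o remount,noatime /dev/sda2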

    Read the article

  • How can I get multiple video cards to work on linux?

    - by user17943
    I installed Fedora 12. I have 2 ATI cards that I used on Windows to run 4 monitors. A recurring problem has been getting them detected in Linux: only my secondary card is picked up, and when I manage the displays it detects the 2 monitors connected to that card. What are the specific steps I should take to get the second card detected? Supposedly there is a tool, system-config-xfree; I don't have it, and yum can't find it. I also heard it has something to do with editing some xorg.conf file, or something to that effect. I have absolutely no idea how to find the "bus id" of my card, or look up the horizontal refresh rates, etc. I would probably have no problem following the documentation and editing the file if I knew a good way to find these values. Someone also suggested installing Linux twice, saving the xorg.conf it generates each time (with a different card each time), and then merging the two by hand. That is like killing a fly with a hammer, though; when I do this again in the future it'd be nice to not have to take twice as long. Thanks
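
    (For the "bus id" part, a hedged sketch: lspci is the usual way to find it, and the Device section below is a hypothetical illustration of the xorg.conf syntax, with an invented BusID and driver name to be replaced by whatever lspci and the actual card require.)

        # list the graphics adapters and their PCI addresses (e.g. "01:00.0 VGA compatible controller ...")
        lspci | grep -i vga

        # xorg.conf fragment -- "PCI:1:0:0" corresponds to the 01:00.0 address above
        Section "Device"
            Identifier "Card0"
            Driver     "radeon"
            BusID      "PCI:1:0:0"
        EndSection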

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I do curl -v http://site-wordpress.com:81 I receive this result:

        About to connect() to site-wordpress.com port 81 (#0)
        Trying ip... connected
        Connected to site-wordpress.com (ip) port 81 (#0)
        GET / HTTP/1.1
        User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        Host: site-wordpress.com:81
        Accept: */*
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    It seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK but a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;   # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help? Thanks.
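
    (One hedged avenue, given that the X-Pingback header shows WordPress itself is answering: WordPress 301-redirects to its configured home/siteurl, so those options need to include the :81 port. A sketch of checking them; the wp_ table prefix is an assumption.)

        SELECT option_name, option_value
          FROM wp_options
         WHERE option_name IN ('siteurl', 'home');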

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup:

      - Single m1.small instance (web server)
      - Single RDS m1.small db
      - Joomla 1.5

    Generally, the site is performant, but is fairly low-traffic -- say around 50-100 visits/hour. However, at peak time, we see about double that traffic. During peak time, pretty much every day:

      - CPU usage on the web server slowly climbs to 100%
      - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15
      - Database connections shoot up to about 140, from a normal average of about 2 or 3

    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the mysql process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate.

    UPDATE: At least, can someone confirm that I'm definitely right in saying that the level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked, and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
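
    (For the "something more concrete" part, two stock MySQL statements that can feed the periodic dump, plus a hedged note: on RDS the slow query log can usually be enabled through a DB parameter group with slow_query_log=1 and log_output=TABLE, after which mysql.slow_log is queryable.)

        SHOW FULL PROCESSLIST;               -- what every connection is executing right now
        SHOW OPEN TABLES WHERE In_use > 0;   -- tables currently locked or in use, to test the locking theory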

    Read the article

  • ASUS laptop doesn't charge/use the battery after reinstalling Windows 7

    - by Stan
    I've done a clean install of Windows 7 x64 on an ASUS X501A laptop. The battery is detected and shows in the system tray as "plugged in, charging". However, the charge level stays at 76%, and if the AC cord is unplugged the laptop turns off. The laptop does not turn on without being plugged in, either. Everything worked perfectly prior to the reinstall. I've tried:

      - Downloading and installing all the ASUS drivers, including the ATK ACPI driver
      - Checking the BIOS - there do not seem to be any battery-related settings
      - Flashing the BIOS to the latest version
      - Uninstalling "Microsoft ACPI-Compliant Control Method Battery" in Device Manager, as suggested on the internet
      - A full power discharge/ATX reset as suggested by ASUS support: remove the mains power charger, remove the battery, press and hold the power button for 10 seconds, reconnect battery and mains and turn on

    I have a feeling all this may have something to do with the EFI BIOS that comes on the laptop. During the reinstall I had to delete all partitions and start anew, because the Windows installer complained about the improper order of GPT partitions. The EFI System Partition was recreated by the installer, and I am guessing that it may be missing the particular ACPI driver needed to make the battery work. I've tried researching this, but could not come up with any useful info. I am hoping someone here may know a bit more about this and maybe help me understand what's going on and how to fix it. Barring that, I'll have to re-image the drive off an identical ASUS laptop with a stock install and hope it fixes things.

    Read the article

  • How to fix high Load_Cycle_Count laptop drive (TOSHIBA MK6006GAH in Vaio TX1XP)?

    - by Sam Brightman
    Hoping someone knows exactly what's going on here. It seems this drive has some combination of aggressive power-saving settings and Ubuntu defaults that has massively increased the Load_Cycle_Count for the drive: https://wiki.ubuntu.com/DanielHahler/Bug59695 So the drive is now so slow that it cannot boot, because it takes long enough to access the data that the kernel will not recognise it properly. I'm not worried about the data on the drive, but would really like to keep the laptop functioning. There is some indication that this is possible, because the figure is still in the low 200,000s and most drives supposedly go to 600,000. Additionally, SMART tests pass and consider the drive healthy and without errors. But the really surprising thing was when I ran mhdd... Every single read came up red (slow) until I pressed 'R' for reset drive. I noticed the next read was normal speed, so held down 'R'. Magically the drive read perfectly for as long as I held the key BUT resumed slow (and noisy) seeking/reading after releasing. I don't think the source code to mhdd is available, so I'm not exactly sure what this means (besides, I don't know enough low-level HDD stuff either). It seems like the drive should be able to work, but is stuck trying to power save or something. There are no BIOS options on the laptop. Does anyone know how I can stop the drive from doing extremely slow/noisy operations like this? Or is constantly resetting the drive also damaging, and only causing it to work well by luck (i.e. not a suggestion that it's fixable)?
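
    (The usual mitigation for the linked head-parking issue, offered as a hedged sketch rather than a diagnosis of this particular drive; the device name is a placeholder, and not every drive honours the APM setting.)

        # tell the drive to stop aggressive Advanced Power Management head parking (255 disables APM entirely)
        hdparm -B 254 /dev/sda
        # watch whether the counter keeps climbing afterwards
        smartctl -A /dev/sda | grep Load_Cycle_Count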

    Read the article

  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like Gmail) which clearing the history should not affect. Is this a website problem or a Firefox problem, and what can I do to fix it without clearing my history? Also, as a web developer I am interested in how to make this happen (or not happen). I'm using Firefox 8, and I confirmed the problem by copying my profile to Firefox 11 (portable). To reproduce: go to gmail.com and sign in, with your task manager open. Once you click sign-in or hit enter, Gmail will bring up your emails. Keep your eye on the CPU usage. I checked, and right now on this machine it's using all my CPU for 22 seconds!!!! Yes, 22 seconds. Once I cleared my "browser & download history" it's <6 seconds. WTF. I have no idea why or how the size of the history and the CPU usage when loading up Gmail are correlated. I have Firefox set up so it never clears the history. But... 22 seconds is a disaster. Can someone explain why this is happening, or a fix that isn't clearing my history? I tried visiting a few websites and only Gmail eats up that much CPU. Most websites only take <5 sec of max CPU. So maybe this is a Gmail problem? Or a Firefox problem that Gmail happens to hit? I still don't understand why it happens. -edit- I forgot to mention places.sqlite is 90 MB. I don't think that matters. I have a sqlite file of 400 MB which is pretty much 2 large tables, and it has no performance issues.
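
    (One hedged thing to try before giving up history entirely: compacting places.sqlite, which holds the history, sometimes helps once it has grown large. This assumes Firefox is closed and a sqlite3 command-line binary is available; the profile path is a placeholder.)

        cd "%APPDATA%\Mozilla\Firefox\Profiles\xxxxxxxx.default"
        sqlite3 places.sqlite "VACUUM;"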

    Read the article

  • VMWare vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs

    - by gravyface
    New SAN for me, never used before: it's an IBM DS3512, dual controller with a quad 1GbE NIC per controller, that a client bought and needs help setting up. Hosts (x2) have 8 pNICs, and while I usually reserve 2 pNICs for iSCSI per host (and 2 for VM, 2 for management, 2 for vMotion, staggered across adapters), these extra ports on the SAN have me wondering if storage I/O would be significantly improved with 2 additional NICs per host, or if the limitations of the vmkernel/initiator would prevent the additional multipaths from ever being realized. I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 is the de facto standard from what I've read/seen online. I could and probably will do some I/O testing, but I'm just wondering if there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC-per-host iSCSI setup somewhat pointless. Just to clarify: I'm not looking for a how-to, but an explanation (link to a paper, VMware recommendation, benchmark, etc.) as to why 2-NIC configurations are the norm vs. 4-NIC iSCSI configurations -- i.e. storage vendor limitations, VMkernel/initiator limitations, etc.

    Read the article

  • Which program is locking all my executable files?

    - by Tom Wijsman
    When updating any software product, as well as when manually trying to replace .exe files, it says that access is denied to the file, and in fact the System process is holding a handle to the file when I check it with Process Explorer. "This must be a driver or something that is malfunctioning" was my first thought, but now I wonder how I can figure out which driver or program is doing this and why. Unlocker doesn't seem to be working for me, unless someone can tell me how to use it properly other than making it appear as a magic wand in the notification area... This is what Unlocker puts in my event log:

        The description for Event ID 1060 from source Application Popup cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer. If the event originated on another computer, the display information had to be saved with the event. The following information was included with the event: \??\C:\Program Files (x86)\Unlocker\UnlockerDriver5.sys the message resource is present but the message is not found in the string/message table

    Upon searching event 1060 I get: "<file name> has been blocked from loading due to incompatibility with this system." Perhaps it is because I have 64-bit Windows?
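
    (A hedged alternative to Unlocker for the "who has this file open" question: the Sysinternals Handle tool, run from an elevated prompt. The path below is only an example.)

        :: list every process holding an open handle whose name contains the given string
        handle.exe "C:\Program Files\SomeApp\someapp.exe"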

    Read the article

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running a VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on it. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to an (e.g. AVI) video file?

    Background: One of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure to find out if he's doing something wrong or if there's a bug in the software. Currently, this is done live via VNC. That has a few disadvantages:

      - The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
      - If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
      - Only a single computer can be watched by one service technician at a time.

    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU and HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but I can only record one remote computer that way, and I often have to watch up to 8 screens in parallel. (And I don't think that screen-grabbing VNC would improve image quality, either.)

    Read the article

  • Error authenticating git repository with Redmine

    - by woni
    I've setup Redmine 2.1 on my Debian Squeeze server following this Tutorial HowTo configure Redmine for advanced git integration (I tried to use the grack path). Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says: error: The requested URL returned error: 500 while accessing The apache error.log shows this entry: [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module: <VirtualHost *:80> ServerName my.server.at DocumentRoot "/var/www/my.server.at/public" PerlLoadModule Apache::Redmine <Directory "/var/www/my.server.at/public"> Options None AllowOverride None Order allow,deny Allow from all </Directory> SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER" SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/ SetEnv GIT_HTTP_EXPORT_ALL ScriptAlias /git/ /usr/lib/git-core/git-http-backend <Location /> Order allow,deny Allow from all AuthType Basic AuthName Git Require valid-user AuthBasicAuthoritative Off AuthUserFile /dev/null AuthGroupFile /dev/null PerlAccessHandler Apache::Authn::Redmine::access_handler PerlAuthenHandler Apache::Authn::Redmine::authen_handler RedmineDSN "DBI:mysql:database=redmine;host=localhost" RedmineDbUser "user" RedmineDbPass "password" RedmineGitSmartHttp yes </Location> </VirtualHost> Can someone help me please and explain the error and what I can do to solve my problem?

    Read the article

  • CPU not working on a specific motherboard

    - by Shaman
    I'm making a computer for someone and I met a weird problem. The CPU that I have doesn't work on this motherboard. The CPU is an Intel Pentium D 925 and the motherboard is an ECS G41T-M6, which in theory should work together. The only thing reused is the power source(400W). When I start the computer, the fans start, and that's it. The BIOS doesn't boot. I tried my own power source (600W Corsair) and nothing. Removed the RAM, no warning. In desperation I tried the last thing, swaped my own CPU with this one (Core2Duo E7200). Lo and behold, it worked. Both. The Core2Duo worked on the ECS with the old power source and the RAM that I used in the first place, and the Pentium D worked on my Gigabyte G31M-ES2L. What I discovered was that the Pentium D didn't receive power on the ECS, because I tried running it without the cooler and it remained at room temperature. On a side note, I also removed the HDDs just in case. So, in conclusion, any ideas? I can't return it, and I can still use it to upgrade another PC, but I would really prefer not to buy another CPU if possible.

    Read the article

  • LSI1068E hidden drives after failed raid volume creation

    - by silk
    We are using the LSI 1068E RAID chipset with SAS drives. We had added new drives to the system and tried to create a new RAID volume with lsiutil; unfortunately the creation failed. The problem is that now we do not have the new RAID volume, and the disks 'disappeared' and are not available as targets for RAID. lsiutil option 8 (scan for devices) does not display these disks at all. lsiutil option 16 (display attached devices) does list them as targets. lsiutil option 21+30 (create raid) does not list these disks. Just after inserting them into the enclosure these disks appeared in the system, as expected. During the RAID creation the kernel logged:

        Mar  4 08:40:02 kilo kernel: [57555.687946] mptbase: ioc0: RAID STATUS CHANGE for PhysDisk 2 id=0
        Mar  4 08:40:02 kilo kernel: [57555.687978] mptbase: ioc0: PhysDisk has been created
        Mar  4 08:40:02 kilo kernel: [57555.695438] scsi target0:0:2: mptsas: ioc0: RAID Hidding: fw_channel=0, fw_id=0, physdsk 2, sas_addr 0x5000c50008ebe5fd

    for both of them, again as expected. Unfortunately they did not come back even though the volume was not created. The same situation appears in the controller's BIOS after a reboot. Taking the disks out and inserting them in different slots did not help, either. Has anyone seen a similar problem? And does anyone know how to 'get back' our disks?

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.

    Read the article

  • how do you view / access the contents of a mounted dmg drive through TERMINAL hdiutil diskmount

    - by A. O.
    My external USB drive failed. I made a .dmg image file of the drive using disk utility. Later I was not able to mount the .dmg image. I used terminal hdiutil attach -noverify -nomount name.dmg diskutil list diskutil mountDisk /dev/disk4 then received the following message: Volume(s) mounted successfully However, I cant see the drive or access its contents through Finder. DUtility shows the drive as ghost but I still cant mount it using diskutility. Terminal tells me that the drive is mounted and constantly shows it in the diskutil list. pwd is not the mounted .dmg image. I dont know how to enter into the mounted image drive to see its contents. So in case what I said sounds like I see the files in the mounted image no this is not the case. I do not know how to access or even change the pwd within Terminal. I was hoping to see the mounted drive tru finder but I do not see that. So I need help as to how to find a way to access the mounted image drive if it was really mounted. Terminal says that it was and it shows it under diskutil list as a /dev/disk4. Can someone please help me access the files on this drive?

    Read the article
