Search Results

Search found 6834 results on 274 pages for 'dojo require'.

  • no administrator password for Windows 7

    - by huskergirl78
    I'm a secretary, and my boss set up my new Windows 7 OptiPlex 7010 (Dell) computer for me while I was on vacation (he does not remember setting any "administrator" password). We are a small office, so there is no system password set, either. I've used it for 6 months, all the while unable to access network drives, etc., without an administrator password. It was annoying, but I could still get my work done. Finally, on a slow day I took it upon myself to "fix" the problem, and in all my infinite wisdom I managed to change my user account from administrator to standard user, so now I really can't do anything: I can't download or install any programs, move or rename files, etc. I tried the Dell-suggested solution, but the BIOS tells me there is no password set, so it has to be a Windows 7 problem. All the solutions I have come across require an administrator password to let me do them. What can I do to find out the admin password so I can use my own darn computer!? Is there a default admin password?

  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up front, which then seems to be tried for all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:

    - a StackOverflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
    - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
    - this Security StackExchange question, which takes it as read that NOPASSWD is required
    - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
    - the article "Basic Ansible Playbooks", which says "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't". My thoughts exactly, but then how to extend beyond a single server?
    - ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing the same password across hosts, or enabling root SSH login all seem like rather drastic reductions in security.
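
    One pattern that fits this (hedged: variable name per the Ansible 1.x inventory docs; hostnames and passwords are placeholders) is a per-host sudo password kept in host_vars, so -K is no longer needed:

        # host_vars/host1 -- one YAML file per host, next to ansible_hosts
        ---
        ansible_sudo_pass: "host1-secret"

        # then run without -K; each host gets its own sudo password
        ansible -i ansible_hosts all --sudo -m ping

    Newer releases can encrypt such files with ansible-vault, which keeps the per-host passwords off disk in cleartext.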

  • Subversion error: Repository moved permanently, please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it didn't say anything (it does now; see edit). My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    -- Edit: So I restarted Apache (again) and tried again, and now it gives me an error message, but it doesn't really help. Anyone have an idea what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001]

    -- Edit 2: If I do svn info it doesn't give anything useful:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried doing a local checkout (svn checkout file:///var/www/svn/repos/reposname) and that works fine (adding / committing works fine too). So it seems it has something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3, Subversion 1.6.9 (r901367).

    -- Edit 3: I tried moving the repositories, but it didn't make any difference. selinux is disabled, so that isn't it either.

    -- Edit 4: Really? Nobody :(?
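
    For what it's worth, this error is typically what mod_dir produces when a request that should have gone to mod_dav_svn is served as a plain directory (the trailing-slash redirect), e.g. because another vhost or the subdomain's DocumentRoot shadows the <Location>. A hedged way to rule that out is to mount the repositories on a sub-path nothing else serves:

        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

        # and then check out via the new path:
        # svn checkout http://svn.host.com/svn/reposname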

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a mobo with 24GB RAM. Of those 24GB, 20GB are assigned as a RAMDISK via ASRock XFastRAM, with the drive letter X: assigned to it. On X:\ I'm storing the temporary files folder as well as pagefile.sys. Pagefile.sys is 6GB in size. X:\ usually has around 14GB free, so the temporary files are negligible; it's mostly the browsers storing their caches there. Now my issue is that Firefox is crashing a lot on me. No error message pops up, but I know it's because it runs out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know I'm in trouble, because the Java VMs fail to allocate memory, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file sits constantly at only around 15% usage (checked with the Performance Monitor); 15% equals roughly 1GB. I know the correct solution is to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess I'd have them again if I reformatted and installed the 64-bit version. Also, the machine runs quite stably; the only issue is memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have more physical memory available. Is that possible?

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will let Apache use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this would conflict with the existing ones, i.e. two references to libcrypto could cause everything to have issues. The main reason for keeping the old libraries around is PHP: cURLing to external servers has issues with the latest OpenSSL libs, which would require edits to our PHP code. Would love some guidance on how best to accomplish this.
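
    A sketch of the usual way to do this (paths are assumptions): point Apache's build at the alternate OpenSSL and bake the library path into the httpd binary with an rpath, so /etc/ld.so.conf stays untouched and PHP, cURL and openssh keep resolving the system 0.9.8e copies:

        cd httpd-2.2.x
        ./configure --prefix=/usr/local/apache2 \
                    --enable-ssl \
                    --with-ssl=/opt/openssl-1.0.1c \
                    LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install

        # verify which libcrypto/libssl the new httpd actually loads:
        ldd /usr/local/apache2/bin/httpd | grep -E 'libssl|libcrypto'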

  • How to repair a corrupted Excel file

    - by SjoerdV
    I'm trying to restore some files for a friend who messed up a USB stick by pulling it out of the computer too soon. By searching invisible partitions I was able to restore the files. All files are intact, except one Excel file that seems to be corrupted. I've googled ways to repair an Excel file, because the default repair options in Excel 2003-2010 do not work for this file. I've come across numerous third-party apps, one of which seems able to do the job: Recovery Toolbox for Excel. Of course, this and all the other apps I found require a purchase to actually repair the file. It seems a bit silly to pay 30 dollars for a one-time thing, and something that is not for myself, so I would like to keep that as a last-resort option. Is there anything I can try to fix this? I've attached a screenshot of how the file looks when I open it on my OS X machine after converting it to an HTML file (to view the code), in case it is a character-set problem.

  • Reading email from Emacs VM using a secure server (Gmail)

    - by Alan Wehmann
    This is a question (see below) originally entered at https://answers.launchpad.net/vm/+question/108267 and, upon the recommendation of Uday Reddy, the question and answers are being moved here. The date of the original question was May 4, 2010. One subject of the question is the use of the program stunnel with the program ViewMail (run within Emacs) on a PC running Microsoft Windows, in order to read email from a server that requires TLS/SSL (Gmail). See the related question, How to configure Emacs smtp for secure server, about using a secure server for sending email. The programs discussed are Emacs, VM (ViewMail) and stunnel. The platform under discussion is MS Windows. The original question was asked by usr345 on 2010-04-24: I tried to install VM on Windows, but when I tried to get the mail from Gmail using SSL, an error emerges and Emacs hangs. Here is the code from .emacs:

        (add-to-list 'load-path (expand-file-name "~/vm/lisp"))
        (add-to-list 'Info-default-directory-list (expand-file-name "~/vm/info"))
        (require 'vm-autoloads)
        (setq vm-primary-inbox "~/mail/inbox.mbox")
        (setq vm-crash-box "~/mail/inbox.crash.mbox")
        (setq vm-spool-files
              `((,vm-primary-inbox
                 "pop-ssl:pop.gmail.com:995:pass:usr345:PASSWORD"
                 ,vm-crash-box)))
        (setq vm-stunnel-program "g:/program files/stunnel/stunnel.exe")

    So, the question: how to configure pop-ssl on Windows?
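
    One hedged workaround, if the hang comes from VM driving stunnel with stunnel 3 style command-line flags while a stunnel 4 binary expects a config file: run stunnel on Windows as a standalone forwarder and let VM speak plain POP to localhost (the local port number is an arbitrary choice):

        ; stunnel.conf -- stunnel 4.x syntax
        client = yes
        [gmail-pop3]
        accept  = 127.0.0.1:1995
        connect = pop.gmail.com:995

    and in .emacs point the spool at the local port instead of pop-ssl:

        (setq vm-spool-files
              `((,vm-primary-inbox
                 "pop:127.0.0.1:1995:pass:usr345:PASSWORD"
                 ,vm-crash-box)))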

  • OpenVZ: Choosing the right MySQL server depending on host

    - by Scheintod
    What I have:

    - two servers running Wheezy/OpenVZ
    - one MySQL container on each host, master/master replicated (mysql1/mysql2)
    - replicated DNS on each host (dns1/dns2)
    - different web containers on each host, but regularly backed up to the other

    What I want:

    - each container should use the "local" MySQL server (the one which runs on the same hardware node)
    - I'd like to be able to move the web containers between the two hosts
    - each container should choose the MySQL server (semi-)automatically
    - this scheme should continue working if one host is down

    What I tried: currently I keep track of which container should run on which host via DNS entries, which are queried by scripts, e.g. for questions like "which container should be backed up on/to which host". For choosing the right MySQL server I have one extra entry like "mysql.container_abc" which resolves to either mysql1 or mysql2. So in the applications in the container I can use "mysql.container_abc" for e.g. mysql_connect, and if I want to move the container around I just need to change the DNS. Now I've noticed one problem with this approach: every mysql_connect generates one DNS query, because the DNS is not cached, and this slows the request down unnecessarily. What I would like better: some way of passing the information on which host we are running to the container and using it directly instead of going through DNS, e.g. some way of setting a custom /etc/hosts entry in the container, or any other great idea. It doesn't have to include DNS, but it shouldn't require too much special "magic" inside the container.
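
    For the "custom /etc/hosts entry" idea, OpenVZ's global mount script runs on the hardware node every time a container is mounted and can edit the container's filesystem before it starts. A sketch, assuming the stock /etc/vz layout (the name mysql.local and the IP are placeholders; each node would carry its own copy with its local MySQL container's IP):

        #!/bin/bash
        # /etc/vz/conf/vps.mount -- runs for every container on this node
        . /etc/vz/vz.conf       # global defaults
        . "$VE_CONFFILE"        # provides VE_ROOT for this container
        MYSQL_IP=10.0.0.11      # the MySQL container on THIS node
        grep -q ' mysql.local$' "${VE_ROOT}/etc/hosts" ||
            echo "${MYSQL_IP} mysql.local" >> "${VE_ROOT}/etc/hosts"

    The containers then connect to mysql.local with no DNS query per mysql_connect, and a moved container picks up the right server on its next start.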

  • optimal folder structure for storing 100k files on a USB drive

    - by cherouvim
    I need to store 100k files (around 40GB) in a USB drive. Each file has a unique int id (e.g. 45000). Option one is to put all files in a single folder:

        root/
        root/1.pdf
        root/2.pdf
        root/3.pdf
        ...
        root/567.pdf
        root/568.pdf
        root/569.pdf
        ...
        root/10001.pdf
        root/10002.pdf
        root/10003.pdf
        ...
        root/99998.pdf
        root/99999.pdf
        root/100000.pdf

    Option two is to create a [1-9][0-9]* folder hierarchy based on that id:

        root/
        root/1/file.pdf
        root/2/file.pdf
        root/3/file.pdf
        ...
        root/5/6/7/file.pdf
        root/5/6/8/file.pdf
        root/5/6/9/file.pdf
        ...
        root/1/0/0/0/1/file.pdf
        root/1/0/0/0/2/file.pdf
        root/1/0/0/0/3/file.pdf
        ...
        root/9/9/9/9/8/file.pdf
        root/9/9/9/9/9/file.pdf
        root/1/0/0/0/0/0/file.pdf

    Which option will scale better? I understand that the second option will require tons of folders, but each folder will contain at most 10 folders and 1 file. Maintenance will not be an issue, since everything will be controlled by an application. Note that this is a USB drive on Linux, and based on the above I'd also like to know whether I should go with FAT32 or NTFS.
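
    Since the application mediates every access, the nested path can be derived mechanically from the id; a sketch of the digit split in shell (the same logic ports to any language):

        id=45000
        path="root/$(echo "$id" | sed 's|.|&/|g')file.pdf"
        echo "$path"   # -> root/4/5/0/0/0/file.pdf

    A flatter fixed-depth split (e.g. root/45/00/45000.pdf, two levels of at most 100 entries each) is a common middle ground: it bounds directory size without creating six levels of mostly empty folders.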

  • How can one keep secure regular backups of one's desktop on a remote server through ADSL? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9G, and I expect this to increase. Uploading through ADSL at 1Mbit would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient (but that's not a big deal). Maybe also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
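
    On the encfs half of this: the volume key lives in the .encfs6.xml control file (protected by the passphrase), so carrying that file plus the same passphrase to other machines is what lets several computers open the same tree; there is no per-machine key. A hedged sketch of the reverse-mode variant, which keeps plaintext locally and exposes an encrypted view for syncing (paths are placeholders):

        # present an encrypted view of the plaintext directory
        encfs --reverse ~/private /tmp/private-enc
        # ship only ciphertext to the remote server
        rsync -a --delete /tmp/private-enc/ backup@server:backup/
        fusermount -u /tmp/private-enc

    In reverse mode the .encfs6.xml sits in the plaintext directory, so it must be backed up separately (and kept out of the synced ciphertext).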

  • .htaccess issue on Apache Web Server in Ubuntu VM

    - by Neon Flash
    I just installed the Apache web server on Ubuntu 11.04 in VMware Workstation. I created a basic HTML page, named it index.html and placed it in /var/www (the document root). I am able to access this web page from my host OS (Windows 7) by pointing the browser to http://192.168.2.2/index.html, where 192.168.2.2 is the IP address of the Ubuntu VM. Next, to test various configurations of .htaccess files, I created a new directory in /var/www called members. Inside this directory, I created and placed a .htaccess file with the following configuration:

        AuthUserFile /www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user
        IndexIgnore */*

    I created the directory path /var/www/Neon/auth/ and placed a .htpasswd file inside it. To fill the .htpasswd file, I created a username "neon", calculated the DES hash of a password, and placed them inside in username:hash format. Now, when I try to access the web page http://192.168.2.2/members/ it does not prompt me to enter the username and password with a popup box. Instead it just displays the index.html placed inside the members directory. I would like to get this configuration working :)
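
    Two things worth checking, hedged against the details above: the AuthUserFile path in the .htaccess (/www/...) doesn't match where the .htpasswd was created (/var/www/...), and a stock Ubuntu Apache configures /var/www with AllowOverride None, which makes Apache ignore .htaccess entirely. A sketch of the relevant fixes:

        # in /etc/apache2/sites-available/default (stock Ubuntu layout):
        <Directory /var/www/>
            AllowOverride AuthConfig Indexes
        </Directory>

        # /var/www/members/.htaccess -- absolute path must match reality
        AuthUserFile /var/www/Neon/auth/.htpasswd
        AuthName "neon's home"
        AuthType Basic
        require valid-user

        # then: sudo /etc/init.d/apache2 reload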

  • Windows 2008 Server can't connect to FTP

    - by stivlo
    I have Windows 2008 Server R2 and I am trying to install FTP services. My problem is that I can't connect from outside; FileZilla complains with:

        Error: Connection timed out
        Error: Could not connect to server

    Here is what I did. With the Server Manager, I installed the roles FTP Server, FTP Service and FTP Extensibility. In Internet Information Services 7.5, I chose Add FTP Site, enabled Basic Authentication, and allowed a user to connect with Read and Write. In FTP Firewall Support on the main server, just after the start page, I set the Data Channel Port Range to 49100-49250 and set the external IP address to the one I see from outside. In FTP IPv4 Address and Domain Restrictions, under Edit Feature Settings, access for unspecified clients is set to Allow, so I clicked OK without changing those defaults. In FTP SSL Policy, I set Require SSL connection; the certificate is self-signed. I tried to connect with FileZilla from the same host and it works; however, it doesn't work remotely, as I said above. I've enabled pfirewall.log, but apparently nothing gets logged. The server is in Amazon EC2, and in the security group's inbound firewall rules I've allowed connections to port 21 and ports 49100-49250 from everywhere. What else should I be checking to solve the problem?
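
    Since FileZilla times out rather than failing authentication, the EC2 security group may not be the only firewall in play; the IIS 7.5 FTP walkthroughs include Windows Firewall rules of their own. A hedged checklist to run on the server:

        rem allow the FTP service through Windows Firewall
        netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in

        rem stateful FTP filtering can interfere with FTPS (encrypted control channel)
        netsh advfirewall set global StatefulFtp disable

    With Require SSL set, firewalls can't inspect the PASV replies, so the explicit data port range plus the external IP (both already configured above) are what make passive mode work.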

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, does not get built. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        ** ** ** Warning: inter-library dependencies are not known to be supported.
        ** ** ** All declared inter-library dependencies are being dropped.
        ** ** ** Warning: libtool could not satisfy all declared inter-library
        ** ** ** dependencies of module libphp5. Therefore, libtool will create
        ** ** ** a static module, that should work as long as the dlopening
        ** ** ** application is linked with the -dlopen flag.
        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!
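
    That libtool warning is what silently downgrades libphp5 from a shared module to a static one, so no .so ever lands in libs/. A frequent culprit is a build tree left over from an earlier compile; a hedged first step before digging deeper:

        # start from a pristine tree so stale libtool state can't leak in
        make distclean || true
        ./configure --disable-debug --disable-rpath \
                    --with-apxs2=/usr/local/apache2/bin/apxs
        make
        ls -l libs/libphp5.so   # present only if the shared build succeeded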

  • Internet setup for my office

    - by prakash
    We have two internet connections to our office, and our current setup is like this: the connections require PPPoE login, so I take each cable, plug it into a WiFi router, configure the router to log in via PPPoE, and then run a cable from the router to a switch to distribute the internet throughout the office. Each WAN router is connected to a different switch and distributed to users accordingly; we have around 40 users in the office. The problem with this setup is that it is really hard to monitor: I'm not able to see who is hogging internet usage or what he or she is actually using it for. Apart from this, we also have a NAS, routed through another switch. Could someone please shed a little light on how I can restructure this setup for easy monitoring and better transparency? We want to set up a single Linux box to which I can connect the two WAN connections and from there distribute to all our users. I'm looking for a solution where we do not have to invest more than buying a single PC and a couple of NICs.
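
    The single-box shape of this is: both PPPoE sessions terminated on the Linux box (e.g. via pppoeconf or rp-pppoe), one LAN NIC to the switch, NAT on the uplinks, and per-client accounting so usage becomes visible. A compressed, hedged sketch (interface names and the LAN address are assumptions):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        # NAT out of both uplinks
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o ppp1 -j MASQUERADE
        # per-user byte counters: a match-only rule per LAN address
        iptables -N ACCT
        iptables -A FORWARD -j ACCT
        iptables -A ACCT -s 192.168.1.10    # counts one user's traffic
        iptables -L ACCT -nvx               # read the counters

    For the "what are they using it for" half, a transparent proxy (squid) or a flow collector (ntop, bandwidthd) on the same box is the usual next step.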

  • How does one debug Windows network share authentication?

    - by ajs410
    I have machine0 with 32-bit Vista, logged in as a domain user, running a VMware image of 32-bit Vista, logged in as a local user, with the VM set to bridge the network. From an administrator account (called admin) within the VM, I try to access the hidden C$ share on machine0 (i.e. Start - Run - "\\machine0\C$\"). I get no prompts for credentials. Worse, machine0 has an admin account (different password), and machine0\admin gets locked out when VM\admin tries to access the network share. I get a message several seconds later, which feels like a cached credential failure leading to the lockout. I have checked several places for cached credentials: net use, Stored User Names and Passwords, mapped shares. I rebooted (both machine0 and the VM) to make sure the session was clear of any cached credentials. I can force net use to use my domain credentials when accessing machine0, and then I can see the share. I can also see shares that do not require credentials. I decided to try another machine on the network (machine1), 64-bit Vista, local user. This machine has no lockout policy, and after several seconds (feels like failed cached credentials again) it prompts me for credentials. After I enter them, it re-prompts me, saying "logon unsuccessful" (I tried my domain credentials and also machine1\admin's). Which is bogus, because I can proceed to log on with Remote Desktop using the machine1\admin credentials. I have tried this on another machine (machine2, 64-bit Vista), running a copy of the same 32-bit VM, and I don't remember having this problem. machine0 has a fingerprint reader... could that try storing passwords and interfere? Are there any places I'm missing where there could be cached credentials? Is there a way to see what credentials are flying around when I try to connect?
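
    On the Vista machines involved, the credentials that get tried silently before any prompt can at least be listed and purged from the command line; a short hedged sketch:

        rem list every saved credential Windows will try automatically
        cmdkey /list
        rem drop a stale entry for the target machine
        cmdkey /delete:machine0
        rem then force a specific account while testing (prompts for password)
        net use \\machine0\C$ /user:machine0\admin *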

  • How large is the performance loss for a 64-bit VirtualBox guest running on a 32-bit host?

    - by IllvilJa
    I have a 64-bit VirtualBox guest running Gentoo Linux (amd64), currently hosted on a 32-bit Gentoo laptop. I've noticed that the performance of the VM is very slow compared to the performance of the 32-bit host itself. Also, when I compare with another 32-bit Linux VM running on the same host, performance is significantly worse on the 64-bit VM. I know that running a 64-bit VM on a 32-bit host incurs some performance penalty, but does anyone have any deeper knowledge of how large a penalty one might expect in this scenario, roughly speaking? Is a 10% slowdown something to expect, or should it be a slowdown in the 90% range (running at 1/10 the normal speed)? Or to phrase it another way: would it be reasonable to expect the performance improvement for the 64-bit VM to be so large that it is worth reinstalling the host machine to run 64-bit Gentoo instead? I'm currently seriously considering that upgrade, but am curious about other people's experience of the current scenario. I am aware that the host OS will require more RAM when running in 64-bit, but that's OK for me. Also, I do know that one usually doesn't run a 64-bit VM on a 32-bit host (I'm surprised I even got the VM started in the first place), but things turned out that way when I tried to future-proof the VM I was setting up and decided to make it 64-bit anyway.

  • Active Directory: Determining DN or OU from login credentials [closed]

    - by Christopher Broome
    I'm updating a PHP login process to leverage Active Directory on a Windows server. The login process seems pretty straightforward via ldap_bind, but I also want to pull some profile information from the AD server (first name, last name, etc.), which seems to require a full distinguished name (DN). On the Windows server I can grab this via dsquery user at the command prompt, but is there a way to get the same value from just the user's login credentials in PHP? I want to avoid getting a list of hundreds of DNs when on-boarding clients and associating each with one of our users, so any means to determine this programmatically would be preferential. Otherwise, I'll know the domain and host for the request, so I can at least set the DC portions of the DN, but the organizational units (OU) seem to be pretty important for querying data. If I can find some of the root-level OU values associated with the user, I can do an ldap_search and crawl. I browsed through the existing questions and found some similar ones, but nothing that really addressed this, so my apologies if the obvious answer is out there. Thanks for the help.
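
    The directory itself will resolve a bare login to its DN: bind with the user's own credentials, then search the domain base for that sAMAccountName and read the DN off the result, so no OU has to be known in advance. Sketched with the ldapsearch CLI (the PHP ldap_bind/ldap_search flow is identical; host and base DN are placeholders):

        ldapsearch -x -H ldap://dc.example.com \
            -D "jdoe@example.com" -W \
            -b "dc=example,dc=com" \
            "(sAMAccountName=jdoe)" dn givenName sn mail

    The dn of the single match carries the full OU path for that user.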

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access in order to enable an external user to upload the content of his web page to his folder on my server, which Apache serves to the web. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course the .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded via WebDAV anymore if the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to fixing this problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid in <Location /> context. The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like to have the web page accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that permits the .htaccess to exist, so that the user still has the power to set up access rules for his web page.
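
    Given that AllowOverride is only valid in <Directory> context, and the alias points at the same physical directory (so the .htaccess fires for both URLs), one hedged workaround is to make the user's rewrite rules step aside for DAV traffic, e.g. at the top of the rewrite section:

        # pass WebDAV verbs and the DAV alias through untouched
        RewriteCond %{REQUEST_METHOD} ^(PUT|DELETE|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK)$ [OR]
        RewriteCond %{REQUEST_URI} ^/folderDAV/
        RewriteRule ^ - [L]

    This keeps the .htaccess in the user's hands while stopping the front-controller rule from swallowing uploads.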

  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial: HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says:

        error: The requested URL returned error: 500 while accessing

    The Apache error.log shows this entry:

        [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs

    It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module:

        <VirtualHost *:80>
            ServerName my.server.at
            DocumentRoot "/var/www/my.server.at/public"

            PerlLoadModule Apache::Redmine

            <Directory "/var/www/my.server.at/public">
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
            SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend

            <Location />
                Order allow,deny
                Allow from all

                AuthType Basic
                AuthName Git
                Require valid-user
                AuthBasicAuthoritative Off
                AuthUserFile /dev/null
                AuthGroupFile /dev/null

                PerlAccessHandler Apache::Authn::Redmine::access_handler
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler

                RedmineDSN "DBI:mysql:database=redmine;host=localhost"
                RedmineDbUser "user"
                RedmineDbPass "password"
                RedmineGitSmartHttp yes
            </Location>
        </VirtualHost>

    Can someone please help me explain the error and what I can do to solve my problem?
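
    One detail that stands out in the config above: the handlers reference Apache::Authn::Redmine::*, but the module loaded is Apache::Redmine. "couldn't check user. Check your authn provider!" is the error mod_perl gives when the authen handler it was pointed at never got defined. A hedged correction (the path to Redmine.pm depends on the install):

        PerlRequire /usr/share/redmine/extra/svn/Redmine.pm
        PerlLoadModule Apache::Authn::Redmine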

  • How to speed up request/response to django using apache or another solution?

    - by jbcurtin
    Hey all, I'm mainly a developer, but every now and again I jump into the sysadmin position. For the most part I've gotten away with deploying PHP and Python apps using Apache. I write today because I'm starting to research faster alternatives to Apache that still have some of the core features I require, like PUT and DELETE methods and the ability to connect to a socket via the web server (this I have not tried, but it might be a nice whistle if I ever employ comet on my apps). As you've probably guessed, I use JavaScript exclusively for all my websites, utilizing deep linking for SEO support. The main area where I'm looking to increase performance is the connection between the Django apps and the web server, through to the client response. Every day I work my best to keep the smallest memory footprint possible; however, I am getting to the end of my rope when it comes to working with Apache. In general, keep in mind that I'm just starting this research, so I'm looking more for material to read than solutions at this moment. My main questions: Am I missing something about Apache that makes it faster than everything else? What would be a good server environment to deploy just static files on? What are some of the leading open-source and paid alternatives?
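
    On the static-file sub-question: the event-driven servers (nginx, lighttpd) serve files from a small fixed pool of workers regardless of connection count, which is why they keep coming up in this kind of research. A minimal, hedged nginx vhost for static-only serving (names and paths are placeholders):

        server {
            listen      80;
            server_name static.example.com;
            root        /srv/static;
            access_log  off;    # skip per-request logging
            expires     30d;    # long client-side caching for assets
        }

    The dynamic side is usually nginx proxying to a WSGI server (gunicorn, uWSGI) for the Django apps, with Apache dropped entirely or kept only where mod_* features are needed.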

  • Windows Explorer Keeps On Crashing

    - by Josefvz
    Hey folks, I'm lost... I'm using Windows 7 Ultimate 64-bit. My PC is up to date (Windows updates) and I've used WinUtilities to scan my registry. My explorer.exe keeps crashing, seemingly at random; I don't even need to be doing anything in particular. I do have experience with PCs in general, as I'm a software developer. I know you will require additional info, but I don't know what, so just leave a comment and I'll update. I should also mention that Explorer is the only app that crashes on my PC. The crash report I got now:

        Description: A problem caused this program to stop interacting with Windows.
        Problem signature:
          Problem Event Name: AppHangB1
          Application Name: explorer.exe
          Application Version: 6.1.7600.16450
          Application Timestamp: 4aebab8d
          Hang Signature: 0a1b
          Hang Type: 16897
          OS Version: 6.1.7600.2.0.0.256.1
          Locale ID: 7177
          Additional Hang Signature 1: 0a1bdae38ae7300761c516c4416d992c
          Additional Hang Signature 2: 1c51
          Additional Hang Signature 3: 1c518a49cc7d37652d26c521e96f66c2
          Additional Hang Signature 4: 521e
          Additional Hang Signature 5: 521e607ec26a72aab4ae5a7126916ef3
          Additional Hang Signature 6: e5e3
          Additional Hang Signature 7: e5e3ca31dad607fa7b858ff5ea5c0fa9

  • MySQL based authentication with crypt()ed password fails in Apache 2.2

    - by Fester Bestertester
    I'm trying to set up a simple CalDAV/CardDAV server with a Radicale backend and an Apache 2.2 frontend. So far it's all nice and simple, but I can't get the MySQL-based authentication to work. I'd like to authenticate users against an existing MySQL database, and I need the REMOTE_USER variable to be set (pretty much like in the configuration examples for Radicale). I've tried mod_auth_mysql, which authenticated the users nicely but failed to set the REMOTE_USER variable. The newer alternative seems to be mod_authn_dbd, which doesn't seem to like the crypted passwords in the MySQL database. According to the documentation, crypted passwords should work, so maybe I'm just missing a simple parameter. The configuration looks like this:

        DBDriver mysql
        DBDParams "sock=/var/run/mysqld/mysqld.sock dbname=myAuthDB user=myAuthUser pass=myAuthPW"

        <Directory />
            AllowOverride None
            Order allow,deny
            allow from all

            AuthName 'CalDav'
            AuthType Basic
            AuthBasicProvider dbd
            require valid-user
            AuthDBDUserPWQuery "SELECT crypt FROM myAuthTable WHERE id=%s"
        </Directory>

    I've tested the query and it works fine. And as mentioned before, mod_auth_mysql worked nicely against the same database but didn't set the required variables. Am I just missing some configuration parameter? Or is mod_authn_dbd just not the right tool to achieve what I want?
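
    Apache 2.2's dbd provider hands the fetched string to apr_password_validate, which does understand traditional Unix crypt() hashes, so it's worth confirming the stored format is really what it expects (13-character DES crypt, no trailing whitespace in the column). A quick hedged cross-check:

        # generate a crypt()-format hash the way Apache would accept it
        htpasswd -nbd someuser somepassword

        # compare against what the query actually returns
        mysql -u myAuthUser -p myAuthDB \
            -e "SELECT CONCAT('[', crypt, ']') FROM myAuthTable WHERE id='someuser'"

    The brackets make stray whitespace in the column visible, which would silently break validation.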

  • Packet flooding while configuring a Debian L2TP/IPSec client?

    - by Joseph B.
    I'm currently at my wits' end trying to configure an L2TP over IPSec VPN connection on my Debian box, using openswan and xl2tpd, connecting to a server of unknown configuration. I've managed to successfully establish the connection, and everything appears to be working well until I attempt to set the VPN connection as my default route, at which point I see a massive flood of packets being transmitted (to the tune of ~1.5 GB in about 2 minutes) until the server drops my connection. Prior to this, network traffic on all my interfaces is minimal. According to iftop, the majority of this traffic appears to be coming out of port 12, although I can't seem to pin it to a specific process. If I instead just route traffic destined for 74.0.0.0/8 through the tunnel, I'm able to access Google's servers through the VPN without issue. My xl2tpd.conf file is:

        [lac vpn-nl]
        lns = example.vpn.com
        name = myusername
        pppoptfile = /etc/ppp/options.l2tpd.client

    My options.l2tpd.client file is:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        require-mschap-v2
        noccp
        noauth
        idle 1800
        mtu 1410
        mru 1410
        usepeerdns
        lock
        name myusername
        password mypassword
        connect-delay 5000

    And my routing table looks like:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.5.2.1        *               255.255.255.255 UH    0      0        0 ppp0
        10.0.50.0       *               255.255.255.0   U     0      0        0 eth0
        10.50.0.0       *               255.255.0.0     U     0      0        0 eth0
        10.0.0.0        *               255.255.0.0     U     0      0        0 eth0
        192.168.0.0     *               255.255.0.0     U     0      0        0 eth0
        loopback        *               255.0.0.0       U     0      0        0 lo
        default         *               0.0.0.0         U     0      0        0 ppp0

    I'm seeing absolutely nothing in auth.log and syslog during this time, and I can't seem to find any other log files it might be writing to. Any suggestions would be appreciated!
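
    The flood has the signature of a routing feedback loop: once ppp0 is the default, the tunnel's own encapsulated packets to the VPN server are themselves routed into the tunnel, re-wrapped, and sent again until the server drops the session (note the routing table above has no gateway left for the default route). The standard cure is to pin a host route to the VPN server via the physical gateway before switching the default (addresses are placeholders):

        VPN_SERVER=203.0.113.5   # resolved address of example.vpn.com
        OLD_GW=10.0.0.1          # the pre-VPN default gateway on eth0
        ip route add ${VPN_SERVER}/32 via ${OLD_GW} dev eth0
        # only now is it safe to point everything else at the tunnel
        ip route replace default dev ppp0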

  • How do I play nicely with MS SQL/SQL Server 2008

    - by acidzombie24
    Big problem. I have nearly given up. I am trying to port my prototype to use MS SQL so it will work on a server once I get it (the server will be SQL Server 2008, shared; I don't know any more than that). So I tried to connect to SQL Server via the Visual Studio IDE and had no luck. I enabled TCP and named pipes and restarted the service (and the computer), still with no luck. Then I remembered .mdf files, so I created one; after the obstacle of not being able to write the connection string by hand, I figured out that Visual Studio shows it in the file's properties, and I successfully connected with that. Then I had a problem with nested transactions. After not being able to figure out how to check for them, I wondered if I could configure the server to allow them somehow. I always thought all MS SQL editions were the same except for limitations, but SQL Server does seem to support nested transactions, so there is no point trying to work around the problem with .mdf files, since I won't need them and really just used them to port the base of my SQL code and to check whether the syntax is correct. I tried installing SQL Server Management Studio, since people mentioned it several times (as a solution, or at least as help). When installing it on Windows 7 it says it may not be compatible. After running it, it launched SQL Server Installation Center (64-bit), which doesn't seem to be the same thing, as I don't see a way to modify any of my server (networking) configurations or edit user permissions, etc. I am clueless about what to do next. Does anyone have any ideas? I'm posting here because I think my problem is more about configuration and SQL Server than programming.

  • Latitude D600 USB port problem

    - by Moab
    Both USB ports stopped communicating on my D600. They have power (my optical mouse still lights up), but no device works on the ports, and everything looks fine in Device Manager in dual-boot XP and W7. I checked the BIOS; there's not much in there for USB. No USB device shows up when I use the F12 boot device menu either, so it must be a hardware issue. I have another hard drive with Ubuntu on it; I popped it in, and USB does not communicate with it either. The ports appear to have 5V but no communication. Any ideas besides another motherboard, or a USB card for the PCMCIA slot? (From my research, those don't work too well; I mostly use the ports for mass storage devices, and PCMCIA slots don't supply enough power for them.) Thanks to all who answer with last-ditch efforts. I hate to give up on it; it's been good to me and still runs rather well for its vintage.
