Search Results

Search found 6730 results on 270 pages for 'loaded'.


  • Is there any proper documentation for mod-evasive?

    - by Question Overflow
    mod_evasive20 is one of the loaded modules on my httpd server. I have read good things about how it can stop a DoS attack and wanted to try it out on my localhost. A search for mod_evasive turns up a blog post by the author which briefly describes what it does, but other than that I can't seem to find any reference documentation on the Apache modules site. I am wondering whether it is a module recognised by Apache at all, since there is no mention of it on the Apache website.

    I have a mod_evasive.conf file sitting in the /etc/http/conf.d folder that contains the following lines:

        LoadModule evasive20_module modules/mod_evasive20.so
        <IfModule mod_evasive20.c>
            DOSHashTableSize 3097
            DOSPageCount 2
            DOSSiteCount 50
            DOSPageInterval 1
            DOSSiteInterval 1
            DOSBlockingPeriod 10
        </IfModule>

    My understanding of these settings is that if I were to click refresh or submit a form more than twice within a one-second interval, Apache should issue a 403 error and bar me from the site for 10 seconds. But that is not happening on my localhost, and I would like to know why. Thanks.
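
    One way to verify whether the module is actually firing - a minimal sketch using ApacheBench, which ships with httpd (the URL, request counts and log path are illustrative assumptions):

        # fire 100 requests, 10 concurrently, at the local server; with the
        # DOSPageCount above, most responses after the first few in each
        # interval should come back as 403 if mod_evasive is active
        ab -n 100 -c 10 http://localhost/

        # then watch the Apache error log for mod_evasive blocking messages
        tail -f /var/log/httpd/error_log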


  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate database for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I manually dump the oldest to a much larger 2 TB spinning disk every week, and drop the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence) I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process.

    Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead "moved onto the slow physical device". The time taken to run my queries against a spinning disk would be less than that taken to completely dump, drop, load, drop and reload two entire databases, so this is a win.

    I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it merges at the directory level (from what I understand), so I'm still stuck with multiple directories. Any help appreciated, thanks in advance.
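
    One approach that is sometimes used for exactly this - a sketch only, with assumed paths, and it requires the server to be stopped while the files move; symlinked database directories are a long-standing technique that needs extra care with InnoDB unless the tables live in their own files:

        # stop mysqld, then relocate one database directory to the spinning
        # disk and leave a symlink behind in the data directory
        mv /var/lib/mysql/archive_db /mnt/2tb/mysql/archive_db
        ln -s /mnt/2tb/mysql/archive_db /var/lib/mysql/archive_db
        # restart mysqld; queries against that database now hit the slow disk

    On newer MySQL versions, per-table placement via CREATE TABLE ... DATA DIRECTORY = '...' is another option, though it works per table rather than per database.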


  • How to verify the system is using the right GPU after a system reset [duplicate]

    - by Antoros
    This question already has an answer here: Is my mobile AMD card being used? 2 answers

    OS: Windows 8
    CPU: Intel® Core™ i7 Processor 3635QM
    GPU 1: Intel HD Graphics 4000
    GPU 2: AMD Radeon™ HD 8870M
    Other info: System Specs

    Problem: I am unsure whether CCC is using the AMD card instead of Intel's. I have encountered several issues since updating to 8.1 and I don't know what to do.

    What happened: I installed the 8.1 patch on the first day. After 1 minute of use, BSOD, and Windows never loaded again. System Restore wouldn't recognise the 8.0 restore points, so I did a system reset to Windows 8 since the laptop was only 3 weeks old. That broke the system: it did restore to factory BUT kept the registry almost intact, so I had to install almost everything again, since the factory drivers were working against the updated OS's registry. This caused several problems, and CCC broke too.

    What I've already done: Installing new drivers on top of the old ones didn't work, so I used the AMD uninstaller first. Uninstalled and re-installed Intel's HD Graphics driver. Tried to install Mobile Center, but AMD told me that it wasn't compatible (even though that's the only driver they provide via their page, as seen Here). Tried to use Auto-Detect, but it couldn't install the driver because the card was disabled, because it didn't have the drivers... (see what they did there?). Had to use a workaround with Samsung Update; the driver didn't appear as a download, so I had to search for it and download it manually.

    Now the graphics card appears in Device Manager and Catalyst, but as "8800 series" (not the exact model), and I can't check the card with dxdiag, GPU-Z or HWMonitor. When right-clicking in CCC, only the Intel card appears. Launching a game set to "high performance" speeds it up a little, but I can't be sure.

    How do I verify it is working properly? HWMonitor won't show the AMD card even when set to high performance; the latest GPU-Z won't work because of a problem with Intel's card, and legacy versions won't either. What can I do now? I don't even know if I fixed my problem or not, and I also want to use Adobe Premiere with it, but the option to run it with the AMD card instead of Intel's is locked.

    Edit: now it seems to work, but I can't change the setting for Adobe Premiere and the other programs that I need to.
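
    One quick way to see what Windows itself reports for both adapters, without third-party tools - a PowerShell sketch using the built-in WMI video controller class:

        # lists every video controller with its driver version and status;
        # the AMD card should appear here under a sensible name if its
        # driver installed correctly
        Get-WmiObject Win32_VideoController |
            Select-Object Name, DriverVersion, Status, VideoProcessor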


  • TEMP environment variable occasionally set incorrectly

    - by Roger Lipscombe
    Occasionally, I find my TEMP and TMP environment variables set to C:\Windows\TEMP. They should be set to %USERPROFILE%\AppData\Local\Temp, and are configured correctly in System Properties. This manifests itself as error messages like the following:

        ---> System.InvalidOperationException: Unable to generate a temporary class (result=1).
        error CS2001: Source file 'C:\Windows\TEMP\gb_pz65v.0.cs' could not be found
        error CS2008: No inputs specified

    ...which occurs in various .NET applications (in particular Visual Studio 2010 or SQL Server Management Studio). Alternatively, SQL Server Management Studio will report:

        Value cannot be null. Parameter name: viewInfo (Microsoft.SqlServer.Management.SqlStudio.Explorer)

    If I run PowerShell elevated, then $env:TEMP is set correctly. If I run PowerShell non-elevated, then it's not. I believe that it should be set correctly in both cases. If not, it's the wrong way round. The same is true for CMD.EXE. Rebooting fixes it, temporarily, until something breaks it again. Presumably something loaded into Explorer.exe is messing with its environment variables, but what?

    The values in the registry are correct, even while this is happening:

        HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment has TEMP = %SYSTEMROOT%\Temp
        HKCU\Environment has TEMP = %USERPROFILE%\AppData\Local\Temp

    By setting a breakpoint on shell32!RegenerateUserEnvironment, I'm able to trap it when it happens, but I still don't know why explorer.exe is reading the wrong environment variables. I can reproduce it consistently by broadcasting a WM_SETTINGCHANGE message (I wrote a one-line C++ program to do this). Watching the activity in Process Monitor shows that explorer.exe doesn't even look at HKCU\Environment. What is going on?
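
    For anyone wanting to reproduce the comparison, a small PowerShell sketch that contrasts the process's inherited TEMP with the values stored in the registry:

        # what this process inherited (via Explorer)
        $env:TEMP

        # the per-user value, read from HKCU\Environment
        [Environment]::GetEnvironmentVariable('TEMP', 'User')

        # the machine-wide value that Explorer appears to be handing out instead
        [Environment]::GetEnvironmentVariable('TEMP', 'Machine')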


  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and to obtain better SEO results as well. We have already applied some minification measures: combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted a light + standard HTTP server split, so the first can serve images and static content and the other one PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, keep more MinSpareServers running, etc.

    So, since browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same paths at, say, cdn.main.com, thus letting the browser open more connections.

    The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? That is, will the additional load on main's Apache from having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all the images on a page be penalised by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths?
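
    For reference, the kind of rule being discussed would look roughly like this - a sketch using the hostnames and extensions from the example above, assuming mod_rewrite is enabled:

        # send requests for static assets on the main host over to the cdn host
        RewriteCond %{HTTP_HOST} ^main\.com$ [NC]
        RewriteRule \.(jpe?g|gif|png|css)$ http://cdn.main.com%{REQUEST_URI} [R=301,L]

    Rewriting the HTML itself so images carry absolute cdn.main.com URLs avoids the extra 301 round trip entirely, which is why that approach is usually preferred when the templates can be changed.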


  • Script to list users' mapped drives not giving results or errors

    - by user223631
    We are in the process of migrating two file servers to a new server. We have mapped drives via user group in Group Policy, but many users have manually mapped drives and we need to find these mappings. I have created a PowerShell script to remotely get the drive mappings. It works on most computers, but there are many that return no results, and I am not getting any error messages. Each workstation on the list creates a text file, and the ones that are not returning results leave no text in their files. I can ping these machines. If a machine is not turned on, I do get an error message that the RPC server is not available. My domain user account is in a group that is in the local admin group. I have no idea why some are not working. Here is the script:

        # Load list into variable, which will become an array of strings
        If (!(Test-Path C:\Scripts)) { New-Item C:\Scripts -ItemType directory }
        If (!(Test-Path C:\Scripts\Computers)) { New-Item C:\Scripts\Computers -ItemType directory }
        If (!(Test-Path C:\Scripts\Workstations.txt)) { "No Workstations found. Please enter a list of Workstations under Workstation.txt"; Return }
        If (!(Test-Path C:\Scripts\KnownMaps.txt)) { "No Mapping to check against. Please enter a list of Known Mappings under KnownMaps.txt"; Return }

        $computerlist = Get-Content C:\Scripts\Workstations.txt

        # Loop through each item in the array (each computer in the list of
        # computers we loaded into the variable)
        # NOTE: the checks above create/expect C:\Scripts\..., while everything
        # below reads and writes under C:\Tester\...
        ForEach ($computer in $computerlist) {
            $diskObject = Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer |
                Select Name, ProviderName |
                Out-File C:\Tester\Computers\$computer.txt -Width 200
        }

        Select-String -Path C:\Tester\Computers\*.txt -Pattern cmsfiles | Out-File C:\Tester\Drivemaps-all.txt
        $strings = Get-Content C:\Tester\KnownMaps.txt
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -NotMatch -SimpleMatch | Out-File C:\Tester\Drivemaps-nonmatch.txt -Width 200
        Select-String -Path C:\Tester\Drivemaps-all.txt -Pattern $strings -SimpleMatch | Out-File C:\Tester\Drivemaps-match.txt -Width 200
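
    Since the failing machines produce no output and no error, one experiment worth trying is forcing the WMI errors to surface - a sketch reusing the same paths as the script above:

        ForEach ($computer in $computerlist) {
            Try {
                Get-WmiObject Win32_MappedLogicalDisk -ComputerName $computer -ErrorAction Stop |
                    Select Name, ProviderName |
                    Out-File C:\Tester\Computers\$computer.txt -Width 200
            } Catch {
                # record the failure reason so silent machines explain themselves
                "$computer : $($_.Exception.Message)" |
                    Out-File C:\Tester\Computers\errors.txt -Append
            }
        }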


  • configuration issue with respect to .htaccess file on Ubuntu

    - by Registered User
    I am building an application, tshirtshop. I have the following configuration in /etc/apache2/sites-enabled/tshirtshop:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/tshirtshop
            <Directory /var/www/tshirtshop>
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    and the following in the .htaccess file at /var/www/tshirtshop/.htaccess:

        <IfModule mod_rewrite.c>
            # Enable mod_rewrite
            RewriteEngine On
            # Specify the folder in which the application resides.
            # Use / if the application is in the root.
            RewriteBase /tshirtshop
            #RewriteBase /
            # Rewrite to correct domain to avoid canonicalization problems
            # RewriteCond %{HTTP_HOST} !^www\.example\.com
            # RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
            # Rewrite URLs ending in /index.php or /index.html to /
            RewriteCond %{THE_REQUEST} ^GET\ .*/index\.(php|html?)\ HTTP
            RewriteRule ^(.*)index\.(php|html?)$ $1 [R=301,L]
            # Rewrite category pages
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2&Page=$3 [L]
            RewriteRule ^.*-d([0-9]+)/.*-c([0-9]+)/?$ index.php?DepartmentId=$1&CategoryId=$2 [L]
            # Rewrite department pages
            RewriteRule ^.*-d([0-9]+)/page-([0-9]+)/?$ index.php?DepartmentId=$1&Page=$2 [L]
            RewriteRule ^.*-d([0-9]+)/?$ index.php?DepartmentId=$1 [L]
            # Rewrite subpages of the home page
            RewriteRule ^page-([0-9]+)/?$ index.php?Page=$1 [L]
            # Rewrite product details pages
            RewriteRule ^.*-p([0-9]+)/?$ index.php?ProductId=$1 [L]
        </IfModule>

    The site is working on localhost, but it behaves as if no .htaccess rule were specified: if I view a page as http://localhost/tshirtshop/nature-d2 I get a 404 error, but if I view the same page as http://localhost/tshirtshop/index.php?DepartmentId=2 then I can view it. Can anyone point out the mistake, if any, in the above configuration, or is there anything else I should check?

        sudo apache2ctl -M
        Loaded Modules:
          core_module (static)
          log_config_module (static)
          logio_module (static)
          mpm_prefork_module (static)
          http_module (static)
          so_module (static)
          alias_module (shared)
          auth_basic_module (shared)
          authn_file_module (shared)
          authz_default_module (shared)
          authz_groupfile_module (shared)
          authz_host_module (shared)
          authz_user_module (shared)
          autoindex_module (shared)
          cgi_module (shared)
          deflate_module (shared)
          dir_module (shared)
          env_module (shared)
          mime_module (shared)
          negotiation_module (shared)
          php5_module (shared)
          reqtimeout_module (shared)
          rewrite_module (shared)
          setenvif_module (shared)
          status_module (shared)
        Syntax OK

    I am using Apache2 on Ubuntu 12.04.
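
    One way to see what mod_rewrite is (or is not) doing with these requests - a sketch for Apache 2.2, where the logging directives must live in the vhost configuration rather than in .htaccess:

        # inside the <VirtualHost> block; remove once the problem is found,
        # since level 9 logging is extremely verbose
        RewriteLog ${APACHE_LOG_DIR}/rewrite.log
        RewriteLogLevel 9

    If requests for /tshirtshop/nature-d2 never show up in the log, the .htaccess file is not being consulted at all, which points at AllowOverride or the file's location/permissions rather than at the rules themselves.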


  • External HDD incorrectly detected as internal - how to change this to enable hot swap/eject?

    - by Sam
    I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", Serial: 3ms006n6, Firmware: 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:

        WD320 with the OS installed
        WD750 for data storage (internal)
        Seagate 120 (external), connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo)

    Tried uninstalling the HDD in Device Manager, to no effect. Also, the internal WD750 is detected as an external drive, and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick). The HDD is no longer "Active", but that did not fix the problem.

    Background: Originally, I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive.

    I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently: e.g. in the driver details the capabilities value for the WD is set to 0000006 (CM_DEVCAP_REMOVABLE & EJECTSUPPORTED), whereas the Seagate shows 0000080 (CM_DEVCAP_SURPRISEREMOVALOK). Is there an easy way to configure this? I tried physically swapping the SATA connections on the mainboard, without success. So far I have found that a solution to my problem might be to perform some registry changes: How do I remove the option to eject SATA drives from the Windows 7 tray icon?
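
    A quick way to compare how Windows classifies the three drives, without digging through the registry - a PowerShell sketch using the built-in disk WMI class:

        # CapabilityDescriptions spells out flags such as "Supports Removable
        # Media"; comparing the WD and Seagate entries shows how differently
        # Windows classifies them
        Get-WmiObject Win32_DiskDrive |
            Select-Object Model, InterfaceType, MediaType, CapabilityDescriptions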


  • I just got a virus 6 minutes ago - how?

    - by acidzombie24
    (Edit, for the people who say it isn't a virus: Norton does detect it as a virus, an icon was placed in my system tray, and rkpg.exe appeared in my C: drive 6 minutes ago, around the time my computer rebooted on its own, causing me to lose data.)

    Situation: I am on Windows XP, behind a Linksys router, and I don't have DMZ on, so nothing should be connecting to me. I had Firefox, MSN and Visual Studio open. With C# I had programmed a quick application to scan some pages with Internet Explorer. The site it was scanning was deviantART (which is pretty trustworthy); I doubt any banners there would carry a virus. I went to a suspicious site called freetxt.com, but that was in Firefox and it didn't load the site. As an extra check I pinged it and got the message "Ping request could not find host freetxt.com."

    The virus seems to be called braviax. Right now it has brought up a message saying my computer may be infected. How on earth did it get in? I don't have uTorrent or any other torrent or P2P applications installed. Nothing is installed on my computer that I haven't installed before, and I know the exact time it got in because I can see rkpg.exe on my C drive and my computer restarted on its own around the same time. For the previous 30 minutes (actually the previous hour) all I did was talk on MSN, not click any links (I went to freetxt on my own) and run that Internet Explorer thing (which I programmed myself). How did it get in? I really doubt it came from a banner on deviantART and installed itself when I loaded the page with the WebBrowser control, so did something else happen? Are there any system defaults I should turn off? I have Remote Assistance off, but even if it were on, I shouldn't be infectable, given that the router isn't forwarding any ports?


  • WAMP Installation - Multiple PHP Version Issues

    - by Pete171
    I have installed WAMP because I am attempting to modify an application which uses Zend Optimizer (I cannot use Z.O. with PHP 5.3+, which is why I decided to install WAMP). I downloaded the latest version, WampServer 2.1, which comes bundled with PHP 5.3.5. I then downloaded a PHP version that would be compatible with Z.O., 5.2.9, and a compatible Apache version, 2.0.63.

    My problem: PHP scripts run fine, but anything with MySQL does not work. Running the testmysql.php script returns the fatal error:

        Fatal error: Call to undefined function mysql_connect() in C:\wamp\www\testmysql.php on line 2

    I have looked in the php.ini files inside both PHP versions, and I'm fairly sure the relevant information is there. At least, there are parts inside that mention MySQL! Perhaps somebody could clarify exactly what information should be present? Also, when visiting a page that called phpinfo(), I noticed that the 'Loaded Configuration File' was pointing to C:\wamp\bin\php\php5.3.5\php.ini, even though I have enabled the older PHP version. I've stopped and started Apache, too, and that hasn't made a difference. Is anybody able to offer any assistance? Anything at all would be great; I'm not very good at messing around with Apache/
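
    For reference, the MySQL-related lines that normally need to be active in whichever php.ini phpinfo() reports as loaded - a sketch for a PHP 5.2 install on Windows, where the extension_dir path is an assumption depending on where 5.2.9 was unpacked:

        ; directory where the bundled extension DLLs live
        extension_dir = "C:\wamp\bin\php\php5.2.9\ext"

        ; uncomment these so mysql_connect() and friends exist
        extension=php_mysql.dll
        extension=php_mysqli.dll

    Since phpinfo() shows the 5.3.5 php.ini being loaded, edits to the 5.2.9 ini will have no effect until Apache is actually running the 5.2.9 PHP module.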


  • How can I configure Samba to share (read/write) any folder with root permissions?

    - by Mike Toews
    I have a CentOS 5 VirtualBox guest on a Win7 x64 host. I am attempting to set up a read/write share of a directory owned by root with my Windows host using Samba, but I'm having no luck after running around in circles. To simplify matters, I've disabled the firewall (/etc/init.d/iptables stop). As security and permissions are irrelevant for this purpose, I'd rather not have to set up another Unix user/group/password.

    Here is the output from testparm:

        Load smb config files from /etc/samba/smb.conf
        rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)
        Processing section "[Guest Share]"
        Loaded services file OK.
        Server role: ROLE_STANDALONE

    and the source of /etc/samba/smb.conf:

        [global]
        workgroup = WRKGRP
        netbios name = SMBSERVER
        security = SHARE
        load printers = No

        [Guest Share]
        comment = Guest access share
        path = /root/src
        read only = No
        guest ok = Yes

    Running /etc/init.d/smb restart shows an OK status. However, on my Windows host I can only see the shared folder on the guest at \\IPv4; I cannot go into "Guest Share": "The network name cannot be found". That is a common error message, with a likely cause: the user you are trying to access the share with does not have sufficient permissions to access the path for the share; both read (r) and access (x) should be possible.

    Am I trying to use root as a passwordless Samba guest? I'd like to; is it possible? How can I configure Samba to share (read/write) any folder with root permissions?
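
    For what it's worth, the usual way to express "guests act as root" in smb.conf is along these lines - a sketch only, and deliberately insecure, so it is only sensible on a throwaway VM like this one:

        [global]
        security = SHARE
        # map unknown users to the guest account
        map to guest = Bad User
        # make the guest account root (normally a very bad idea)
        guest account = root

        [Guest Share]
        path = /root/src
        read only = No
        guest ok = Yes
        # run all file operations on this share as root
        force user = root

    Note that some Samba builds refuse to run guest access as root, so a dedicated user that owns the shared directory may still turn out to be the path of least resistance.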


  • Emacs 24.1: How do I restore i-search Ctrl-Y behavior from older versions?

    - by Eric
    In Emacs 24.1, when you do Ctrl-Y in an interactive search, it yanks the kill ring into the search string ("it pastes the clipboard contents", in any-other-app's language) and tries to match it. In the previous 20 versions or so, pressing Ctrl-Y matched the rest of the current line. I have two very common use cases:

        Match this line, revert the buffer, and search for the line
        (less often:) Where else is this text in the buffer?

    I tried modifying lisp/isearch.el, switching the bindings for isearch-yank-line (which I want) and isearch-yank-kill (which I'm fine binding to the ridiculous \M-s\C-e key sequence). But I don't think this file even gets loaded; if I explicitly load it, I still get the 24.1 behavior. Here's my change:

        (add-hook 'isearch-mode-hook
                  (lambda ()
                    (define-key isearch-mode-map "\C-y" 'isearch-yank-line)
                    (define-key isearch-mode-map "\M-s\C-e" 'isearch-yank-kill)))

    No change in the behavior. I even tried hacking isearch.el itself, still no change. This is on Windows, by the way, but I suspect it doesn't matter. Could someone tell me how I can restore the old binding?
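
    A sketch of the same rebinding done from the init file instead of by editing isearch.el; isearch is preloaded, so isearch-mode-map already exists at startup and no hook should be needed:

        ;; restore the pre-24 isearch behaviour: C-y matches the rest of the
        ;; line, and the new yank-kill behaviour moves to M-s C-e
        (define-key isearch-mode-map (kbd "C-y") 'isearch-yank-line)
        (define-key isearch-mode-map (kbd "M-s C-e") 'isearch-yank-kill)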


  • Need help troubleshooting PC

    - by brux
    I have had problems since my dog peed on my computer.

    Problem: it loads Windows fine, then at random intervals, anywhere from 5 to 30 minutes, it restarts itself. There is nothing in the event log such as errors, and no BSOD, just a cold restart. After restarting, it sometimes POSTs and then restarts itself at the end of POST. It will do this many times and then finally load Windows. The cycle then begins again; it will eventually restart.

    What I have done: I thought it was the HDD at first, since this is the only part of the computer which actually got wet (the case is off the PC and the dog peed down the front where the HDD is located). SeaTools, the Seagate HDD tool, found errors when I ran it inside Windows, so I ran it in DOS mode from a bootable USB stick. It found the same number of errors and fixed them all. I ran the scan again and it said "Good". I loaded Windows and ran the scan there, and it also said "Good". So the HDD appears to be fine, but the problem persists: random restarts.

    What else could this be? I have taken the computer apart and cleaned everything, and also taken the PSU apart and cleaned it thoroughly. The problem still persists. What should my next steps be?


  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files are ready for rollforward

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem with the databases:

        1. Copy the full backup from server A to server B (10+ GB)
        2. Open SQL Server Management Studio on server B
        3. Right-click on Databases > Restore Database
        4. Type in the new DB name
        5. Choose "From Device" and browse to the backup file
        6. Click Okay. This restores the original "full" backup.
        7. Test the new DB with the dev application - everything works :)
        8. On the original database, right-click on the DB > Tasks > Back Up...
        9. Backup type = Differential, back up to disk, add a new file and
           remove the old one (it needs to be a small file to transfer for the
           smallest amount of outage)
        10. Copy the diff backup over to the new server
        11. Right-click on the DB > Tasks > Restore > Database

    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get the error:

        The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally.

    But if I try to restore using just the differential file, I get:

        System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

    Any idea how to do it? Is there a better way of restoring backups with limited downtime?
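
    A sketch of the same restore done in T-SQL, which makes the key requirement visible: the full backup must be restored WITH NORECOVERY, leaving the database ready to roll forward, before the differential is applied (the paths and database name here are placeholders):

        -- restore the full backup, leaving the database in a restoring state
        RESTORE DATABASE NewDb
        FROM DISK = 'M:\path\to\backup\full.bak'
        WITH NORECOVERY;

        -- now apply the differential and bring the database online
        RESTORE DATABASE NewDb
        FROM DISK = 'M:\path\to\backup\diff.bak'
        WITH RECOVERY;

    Restoring the full backup through the GUI with its default of RECOVERY is exactly what produces the "no files are ready to rollforward" error when the differential is applied afterwards.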


  • CentOS 6.2 postfix install dependency issues

    - by Mishari
    I am administrating a VPS running cPanel and I'm trying to install postfix. redhat-release says the version is CentOS release 6.2 (Final), and uname -a says:

        Linux server.mydomain.com 2.6.32-220.el6.i686 #1 SMP Tue Dec 6 16:15:40 GMT 2011 i686 i686 i386 GNU/Linux

    This is how I'm installing postfix (I had tried to solve the problem earlier by installing EPEL):

        # yum install postfix
        Loaded plugins: fastestmirror, security
        Loading mirror speeds from cached hostfile
         * epel: mirror.cogentco.com
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package postfix.i686 2:2.6.6-2.2.el6_1 will be installed
        --> Processing Dependency: mysql-libs for package: 2:postfix-2.6.6-2.2.el6_1.i686
        --> Finished Dependency Resolution
        Error: Package: 2:postfix-2.6.6-2.2.el6_1.i686 (centos-burstnet)
               Requires: mysql-libs
        You could try using --skip-broken to work around the problem

    Attempts to install mysql-libs tell me several files conflict with "MySQL-server-5.1.61-0.glibc23.i386". I'm not sure why or how this is happening; does anyone know how to resolve this? Surely CentOS 6.2 could not have shipped with a broken postfix.
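
    A few diagnostic commands that can narrow this down - a sketch; the package names come from the output above, and the conflict suggests cPanel's own MySQL-server package is supplying the client library outside of yum's knowledge of mysql-libs:

        # see which MySQL packages are actually installed, and from where
        rpm -qa | grep -i mysql

        # see what claims to provide the dependency postfix is asking for
        yum provides mysql-libs

        # check whether anything already supplies the shared client library
        rpm -q --whatprovides libmysqlclient.so.16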


  • WAMP running extremely slow on Windows 7

    - by JavaCake
    After 2 days of tough fighting, trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly; performance is the same whether they are accessed locally (127.0.0.1) or from another computer on the intranet.

    First, to explain the system:

        WAMP version: Apache 2.2.22 - MySQL 5.5.24 - PHP 5.4.3
        XDebug 2.1.2, XDC 1.5, phpMyAdmin 3.4.10.1, SQLBuddy 1.3.3, webGrind 1.0
        DocumentRoot: located on a network drive
        MySQL: InnoDB
        Pages: PHP, MySQL, AJAX etc.

    The changes I have made so far in order to get greater performance:

    Changed C:\Windows\System32\drivers\etc\hosts:

        127.0.0.1 localhost
        127.0.0.1 127.0.0.1

    Modified my.ini:

        innodb_flush_log_at_trx_commit = 2

    Modified httpd.conf:

        EnableMMAP on
        EnableSendfile on

    Modified php.ini:

        realpath_cache_size = 4M

    How I measure the performance is the overall load time of the page. I run the site locally on my Mac OS X machine as well (MAMP), where the front-page load time is typically 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load times with the developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.
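
    Two things worth testing in httpd.conf, given the DocumentRoot on a network drive - a sketch; the Apache documentation warns that memory-mapping and sendfile can misbehave on network filesystems, and disabling AcceptEx() is a known workaround for slowness on some Windows builds of Apache 2.2:

        # avoid mmap/sendfile on a network-mounted DocumentRoot
        EnableMMAP Off
        EnableSendfile Off

        # disable the winsock AcceptEx() optimization (Apache 2.2 on Windows)
        Win32DisableAcceptEx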


  • .htaccess with GoDaddy not working in subdomain

    - by explorex
    I have a site uploaded to a shared subdomain (which is inside a folder), and .htaccess is not working; please get the details from here.

    EDIT (copied from Stack Overflow): I uploaded a website to a subdomain, and every page is not working except the front page; please check it here. What could be the possible reason? I should have 8 pages at the front level and many more at the admin level, but I am getting a 404 error, as you can see. Does anyone have an idea or suggestion?

    UPDATE: the .htaccess file:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    UPDATE on URL routing: I do have a few URL routers like the one below, BUT I don't have any default router:

        $router->addRoute(
            'get-destination',
            new Zend_Controller_Router_Route('destination/get/:id/:dest-name', array(
                'controller' => 'destination',
                'action' => 'get',
                'id' => 'id',
                'dest-name' => 'dest-name'
            ))
        );

    just to make the URLs look cooler. And my navigation (which is loaded from XML) has entries like:

        <nav>
            <home>
                <label>HOME</label>
                <controller>index</controller>
                <action>index</action>
                <route>default</route>
            </home>

    since I was getting a problem with where the URL was routed. Please also check the phpinfo at http://websmartus.com/demo/globaltours/public_html/phpinfo.php
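
    One thing worth checking with this layout - a sketch, and the exact path is an assumption based on the phpinfo URL above: when an application lives in a folder rather than at the domain root, mod_rewrite usually needs a RewriteBase matching the public URL path:

        RewriteEngine On
        # the path component under which index.php is actually reachable
        RewriteBase /demo/globaltours/public_html
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]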


  • Windows 7 hangs on black screen for a while after logging in

    - by steini
    I get the welcome screen. I click on my user and get the "logging on" screen. After that, all I get is a black screen with a mouse cursor. I can't even start Task Manager: no Ctrl+Alt+Del or Ctrl+Shift+Escape. It stays like this for about 10 minutes, then the desktop finally starts loading. According to the HDD LED on my case, Windows isn't even trying to access the hard drive for that whole time; it's just hanging, doing nothing, it seems.

    What I have tried:

        Uninstalled the video driver and removed leftovers with Driver Sweeper
        Disabled all startup programs and non-Microsoft services
        Loaded "last known good configuration"
        Ran the alleged "black screen fix" from Prevx against my better judgement (I don't really like running random EXEs without knowing what they do at all)

    None of that works. I can boot into safe mode normally. My specs:

        i7 920
        Gigabyte X58-UD3R
        Gigabyte HD5870 1GB
        12GB Mushkin Silverline 1333MHz
        Windows 7 Ultimate x64

    I'm also having another problem which I suspect is related. After I have gotten the computer up and running, everything works perfectly, but when it has been on for a while it starts behaving strangely when changing display modes: when I start a game or anything that changes the screen resolution, the computer freezes for about a minute, every time, until I reboot. I think this is probably related to the black screen problem. Just thought I'd check to see if anyone has had the same problem. Let me know if I should post any more details about my system to help diagnose this. Thanks in advance.


  • DVI output only working on Windows, not during booting or on Linux

    - by Mononofu
    So yesterday I booted my laptop up and the external monitor I have it connected to just stayed black. At first, I thought the problem would go away when Ubuntu was loaded, but it didn't. I tried to reboot a few times, to no avail. Then I decided to give Windows 7 a try, and suddenly (at the login screen), my external monitor turned on and worked like normal. I have connected the monitor via DVI, and this only seems to work with Windows now. I don't even get a signal in my BIOS! Mind you, everything was working fine before that, and I didn't change a single thing. I then tried to connect the monitor via VGA (from my DVI jack, which can output VGA using an adaptor), and it worked again. However, 1920x1200 over VGA looks like crap: black print on a white background is basically illegible.

    Do you have any ideas how to fix this peculiar problem? I only use Windows for gaming, so it's no real help that it still works normally. Please also excuse any spelling mistakes, I am practically typing this blindly.

    Edit: I only have one graphics card in my laptop, and I can't select anything related to it in my BIOS. In fact, I can do almost nothing there. My laptop is a Nexoc Osiris E703; the graphics card is a GeForce Go 7900 GTX. As I mentioned before, DVI output during booting and on Ubuntu was working fine for years before yesterday!


  • nginx: js file loads inconsistently on every refresh

    - by poymode
    I have an nginx problem wherein a JS file in a Rails app loads inconsistently. Whenever I access the JS file in the browser and refresh the page, the scrollbar changes length, meaning sometimes it loads half the JS file, sometimes the whole file and sometimes just a part of it. The JS file size is 71K. My nginx server is on a different server, separate from the Rails app. When I access the JS file directly through the app server, say 10.48.30.150:3000/javascripts/file.js, it works fine and never shows a half-loaded page. But when I go through the nginx server, which upstreams the Rails app, I get the inconsistent page loads.

    Here is my nginx http conf:

        error_log /usr/local/nginx/logs/error.log;
        pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 256;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 0;
            tcp_nodelay on;
            #gzip on;
            #gzip_min_length 4096;
            #gzip_buffers 16 8k;
            #gzip_types application/x-javascript text/css text/plain;
            large_client_header_buffers 4 8k;
            client_max_body_size 2G;
            include /usr/local/nginx/conf.d/*.conf;
        }
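
    When nginx truncates proxied responses like this, the error log usually says why; a common culprit is a permissions problem writing buffered responses to the proxy temp directory. Two things worth trying - a sketch following the paths in the config above:

        # watch for "open() ... failed" or "upstream prematurely closed"
        # entries while refreshing the JS file
        tail -f /usr/local/nginx/logs/error.log

    And as a quick experiment inside the location block that proxies the Rails app:

        # bypass response buffering entirely; if the file then always loads
        # fully, the problem is in the buffering/temp-file path
        proxy_buffering off;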


  • Windows 7 breaks, even in safe mode

    - by delenda
    Hi, I have a Dell XPS M1730 with Windows 7 installed. I noticed last night that after a few hours of use the fans kicked up to full speed and I couldn't do anything without it taking forever. Minimising windows, opening Device Manager or even opening Process Explorer took minutes, and a game install I had just started took nearly 4 hours to complete. When procexp finally loaded, the refresh was so slow that it was mostly useless. From what I could gather, it was reporting 60% idle processes with procexp itself using nearly 40%. There were no hardware interrupts listed.

    When I rebooted, the problem went away for about 10 minutes and then the same thing happened. The issue persists in safe mode, and even after I removed the graphics drivers, which have been an issue in the past, it still happens. Icons flash quite quickly on the desktop periodically and screen refresh is painfully slow. When booting now, the fans kick up to full as soon as the Windows logon box comes up, and it takes 10 minutes to bring the desktop up. Chkdsk reports nothing and the RAID check says that everything is fine.

    I'm thinking hardware failure, probably the HDD, but wanted some other opinions. I'm planning to try a Linux live CD to see if it works without using the hard disks. If anyone has any input, it would be greatly appreciated. Delenda


  • What hardware would I need (approx) to run ESXi server?

    - by mr.b
    Hi, I am considering purchasing off-the-shelf commodity hardware in order to build a server that will host virtual machines using ESXi. The intended purpose for this server is NOT mission-critical tasks. It will have to run perhaps 20-50 Windows XP/Vista/7 virtual machines (in total, but closer to the 20 figure). Each guest would have 1-2 GB of RAM, and probably two to three times more disk space than the guest OS needs with a clean install and all updates applied (that would be around 6-8 GB for XP, and I believe closer to 10-15 GB for Win 7). Those guests will act as a test ground for a new product that is network management software, so the guests will idle most of their time once initially loaded, but if I give them some task to complete, they should be able to perform reasonably well.

    Now, from what I have learned, CPU is usually not much of an issue (6 cores would do it), and memory should not be lacking but doesn't have to be the sum of all guests because of overcommitment. That leads me to IO, which, as it seems, is the bottleneck. Since I have very little experience with ESXi (and ESX, too), I'd like to ask:

        How much memory could I save by overcommitment, and how does it affect performance?
        Is a 6-core CPU enough to run the system described above?
        Would it be possible to run the entire server off two (or even one) SSD drives (to host the system virtual disks), with a few additional HDDs (2-3) in RAID 0 to be used as secondary storage?
        I read somewhere that ESXi allows something like a "master image", essentially a virtual machine that is "deployed" many times, so that disk space can be saved by storing only the differences for each specific guest, instead of copying around whole virtual disks. Is this true, and how can it help me?
        Are there any other things I need to take into consideration when building this off-the-shelf solution?

    I should probably mention here that I'm fully aware of issues like SPOF regarding the power supply, RAID 0, etc., but since it's only a testing ground and not a production system, it's not so important to me. Thanks, B.


  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace down a problem that is crashing our nginx server running Magento to the following point:

    Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the nginx error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call in Varien_Image_Adapter_Abstract::getMimeType() never returns:

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL back to the same server that is executing the script: a link to a static .gif that does not exist. Sample URL:

        http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    When the above line is executed, any subsequent request to the nginx server stops being answered. After waiting around 10 minutes, the nginx server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
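
    One mitigation sketch on the PHP side, assuming the hang is the loopback HTTP request blocking on a socket that never completes: cap how long URL-based file operations may block before getimagesize() gives up.

        <?php
        // give stream-based URL fetches (including the one inside
        // getimagesize) a hard timeout, so a stuck loopback request
        // releases its FastCGI worker after a few seconds
        ini_set('default_socket_timeout', 5);

        $size = @getimagesize('http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif');
        var_dump($size); // false when the image cannot be fetched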


  • Who should I run mysql as, on a personal computer?

    - by user664833
    I just installed MySQL via Homebrew (with brew install mysql, on Mac OS X Mountain Lion, recently installed from scratch). Following the installation, there is a "caveats" section with options around further necessary actions to take:

        ==> Caveats
        Set up databases to run AS YOUR USER ACCOUNT with:
            unset TMPDIR
            mysql_install_db --verbose --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/usr/local/var/mysql --tmpdir=/tmp

        To set up base tables in another folder, or use a different user to run
        mysqld, view the help for mysqld_install_db:
            mysql_install_db --help
        and view the MySQL documentation:
          * http://dev.mysql.com/doc/refman/5.5/en/mysql-install-db.html
          * http://dev.mysql.com/doc/refman/5.5/en/default-privileges.html

        To run as, for instance, user "mysql", you may need to `sudo`:
            sudo mysql_install_db ...options...

        Start mysqld manually with:
            mysql.server start
        Note: if this fails, you probably forgot to run the first two steps up above

        A "/etc/my.cnf" from another install may interfere with a Homebrew-built
        server starting up correctly.

        To connect:
            mysql -uroot

        To launch on startup:
        * if this is your first install:
            mkdir -p ~/Library/LaunchAgents
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
        * if this is an upgrade and you already have the homebrew.mxcl.mysql.plist loaded:
            launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
            cp /usr/local/Cellar/mysql/5.5.27/homebrew.mxcl.mysql.plist ~/Library/LaunchAgents/
            launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
        You may also need to edit the plist to use the correct "UserName".

    On previous versions of Mac OS X I ran MySQL as the mysql user, but now I am confronted by the idea of running it as myself. I am the only one who uses this computer (which happens to be my laptop), and I do programming for work and for pleasure. What are the pros and cons, or best practices, around choosing whether to run MySQL AS YOUR USER ACCOUNT, as mysql, or as something else entirely?


