Search Results

Search found 19525 results on 781 pages for 'say'.


  • MySQL permissions error when showing databases

    - by Tony
    I was trying to install Homebrew and very, very stupidly ran this:

        sudo chown -R $USER /usr/local

    The Homebrew instructions say to do this, and I'm not much of a sysadmin, so I took their word for it. Lesson learned (although I wouldn't really know how to test this... an "undo" script would be super valuable here). Anyway, what's done is done, but now I get this error:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 548
        Server version: 5.1.33 Source distribution

        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

        mysql> show databases;
        ERROR 1018 (HY000): Can't read dir of '.' (errno: 13)

    I tried chown-ing back to root, to no avail. Does anyone know how I can fix this without reinstalling MySQL? Optionally, if I have to reinstall MySQL, how can I dump my databases without command-line access so I don't lose all of my data? Thanks!
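
    A hedged recovery sketch (the paths and the _mysql service account are assumptions based on a typical source/Homebrew MySQL layout on OS X; verify both before running anything):

        # Find where the data directory actually lives:
        mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"
        # errno 13 is EACCES, so hand the datadir back to the user mysqld runs as:
        sudo chown -R _mysql:_mysql /usr/local/var/mysql
        # Restart via whatever init script the install uses (path is hypothetical):
        sudo /usr/local/share/mysql/mysql.server restart
        # Once permissions are fixed, a full dump is the cheap insurance:
        mysqldump -u root -p --all-databases > all-databases.sql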

    Read the article

  • Gnome, open with, custom command, filename reference

    - by Tergiver
    I want to execute this custom command on a file from the Gnome file browser:

        hexdump -C $f > $f.dump

    That would create a hexdump of the file, named after the file plus .dump, in the directory the file lives in. By $f above I mean something that would be substituted with the name of the opened file. I've tried "Open with" / "Use a custom command" and can't get it to work; I've tried a number of symbols in place of $f. Is it even possible? Before you suggest a GUI hexdump program: this is just one example, and I need to do this sort of thing with many terminal-type programs. Am I the only person on Earth who wishes for a hybrid file-browser-slash-command-terminal? That would be a file browser containing a terminal pane whose current directory always matches that of the browser, so one could execute shell commands in the context of whatever one is viewing.
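
    For what it's worth, a wrapper-script sketch: Gnome's custom command typically passes the selected file's path as an argument but does no shell interpretation, so the redirection has to live inside a script (script name and location are made up here):

        #!/bin/sh
        # ~/bin/hexdump-to-file.sh - run "hexdump -C" on the file Gnome hands us
        # ($1) and write the output next to it; quoting guards spaces in names.
        hexdump -C "$1" > "$1.dump"

    Point "Use a custom command" at the script (e.g. /home/you/bin/hexdump-to-file.sh) after making it executable with chmod +x.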

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while, mostly out of interest in speeding up my own internet connections, but also to speed up the office connection. At home, I have two cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen are prohibitively expensive, and also seem built around the idea of having multiple branches around the world. What I am looking for, ideally:

    - Software install: I'm guessing I need to install it in two places, one in the office or house and one in "the cloud".
    - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be tunnelled through the optimizer.
    - When downloading or uploading large files, it would open multiple connections between "the cloud" and the optimizer; this is where a lot of speed could be gained.
    - Finally, items not already compressed would be compressed on the cloud side, and items already on the optimizer would not be sent again, kind of like rsync or proxy servers.

    So, is there something that can do this? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
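
    Not a full answer to the multi-connection and deduplication points, but a hedged off-the-shelf sketch of the "tunnel through a cloud box" part using only OpenSSH (the hostname is hypothetical):

        # Compressed SOCKS tunnel to a rented VPS near the backup site;
        # -C compresses, -D opens a local SOCKS5 proxy, -N runs no remote command:
        ssh -C -D 1080 -N user@optimizer.example.com
        # Then point the backup client at the SOCKS proxy on localhost:1080.

    Squid on the VPS would add the "don't send what's already cached" behaviour for HTTP traffic, and rsync with --compress covers the file-sync case.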

    Read the article

  • Heavy write to Galera cluster - table locked, cluster practically unusable

    - by Joe
    I set up a Galera cluster on 3 nodes. It works perfectly for reading data. I wrote a simple application to run some tests on the cluster. Unfortunately, I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently, or maybe I'm doing something wrong? I have a simple stored procedure:

        CREATE PROCEDURE testproc(IN p_idWorker INTEGER)
        BEGIN
            DECLARE t_id INT DEFAULT -1;
            DECLARE t_counter INT;
            UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL LIMIT 1;
            SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id;
            SELECT ABS(MAX(counter)/MIN(counter)) FROM test INTO t_counter;
            SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter;
            IF t_id >= 0 THEN
                UPDATE test SET counter = counter + 1 WHERE id = t_id;
                UPDATE test SET idWorker = NULL WHERE id = t_id;
                SELECT t_counter AS res;
            ELSE
                SELECT 'end' AS res;
            END IF;
        END $$

    Now my simple C# application creates, for example, 3 MySQL clients in separate threads, and each one executes the procedure every 100 ms until there is no record where the 'counter' column = 0. Unfortunately, after about 10 seconds something goes bad: on the servers there is a 'query_end' process that never ends. After that, you cannot update the test table; MySQL returns:

        ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

    You can't even restart mysql. What you can do is restart the server, sometimes the whole cluster. Is Galera Cluster really so unreliable under massive concurrent writes/updates? Hard to believe.
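
    A hedged aside on the symptom: all of the workers hammer the same few rows, and in Galera such hot spots surface as cluster-wide certification conflicts and lock waits rather than the local blocking a single InnoDB node would do. One mitigation knob, assuming autocommit statements are in play (it does not remove the underlying hot spot):

        # my.cnf sketch
        [mysqld]
        wsrep_retry_autocommit = 4   # retry an autocommit statement up to 4
                                     # times after a certification conflict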

    Read the article

  • Can mod-rewrite be used to set environmental variables?

    - by VLostBoy
    Hi, I've got an existing simple rewrite rule, like so:

        <Directory /path>
            RewriteEngine on
            RewriteBase /
            # if the requested resource does not exist
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # route the uri to a front controller
            RewriteRule ^(.*)$ index.php/$1 [L]
        </Directory>

    This works fine, but I want to do one of two things. On the basis of the client's Accept-Language header, I want to either (i) set the detected language as an environment variable that the script can use, or (ii) rewrite the request so that the URL begins with the language code (e.g. www.example.com/en/some/resource). For (i), I defined this rule:

        <Directory /path>
            RewriteEngine on
            RewriteBase /
            # if the requested resource does not exist
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # if the user's preferred language is supported...
            RewriteCond %{HTTP:Accept-Language} ^.*(de|es|fr|it|ja|ru|en).*$ [NC]
            # define an environment variable PREFER_LANG
            RewriteRule ^(.*)$ - [env=PREFER_LANG:%1]
            # route the uri to a front controller
            RewriteRule ^(.*)$ index.php/$1 [L]
        </Directory>

    I've tried a few variations, but PREFER_LANG is not defined in $_SERVER, nor retrievable by getenv. As for (ii)... let's just say it's messy; I'll post it if I can't get an answer to (i). Can anyone advise me? Thanks!
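
    One mod_rewrite behaviour worth ruling out (offered as a likely cause, not a confirmed one): when a later rule such as the front-controller rewrite triggers another internal pass, Apache re-creates variables set with env= under a REDIRECT_ prefix. A quick check against any URL:

        curl -H 'Accept-Language: de' http://localhost/some/uri
        # then, in index.php, dump $_SERVER: the value commonly arrives as
        # $_SERVER['REDIRECT_PREFER_LANG'] instead of PREFER_LANG.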

    Read the article

  • Exchange 2003 Internet Mail Size Limits

    - by scampbell
    I have unsuccessfully tried to increase per-user incoming mail size limits by editing user account settings on our Exchange server, but large incoming mail from external domains is still blocked by the default global settings. After reading http://support.microsoft.com/default.aspx?scid=kb;en-us;322679 I see that:

        All Internet e-mail messages use the global setting for limits on sending and on receiving. The message categorizer evaluates the sender's sending limit and the recipient's receiving limit. In example 2 earlier, a user with a user mailbox limit of 3 MB could receive messages from another user with a 3-MB sending limit. Because Internet users use the global setting, they can send only a 2-MB message.

    Which to me is madness! Surely if I want to allow a user to receive mail up to a certain size, I should be able to set it as such? Is there a specific way of getting round this? Would setting the global defaults high and setting a lower limit, say 10 MB, on the SMTP connector do the trick? Thanks.

    Read the article

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial

    In the tutorial, I find the wording around migrating kernel threads (from having access to all CPUs to running only in a certain cpuset) a bit ambiguous. The tutorial says the following:

        Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs.

    The tutorial then goes on to say:

        However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command.

        [zuul:cpuset-trunk]# cset shield -k on
        cset: --> activating kthread shielding
        cset: kthread shield activated, moving 70 tasks into system cpuset...
        [==================================================]%
        cset: done

    I am confused by this final sentence. The word "however" seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is that the case, or is it safe to move the kernel threads that can be moved into a cpuset, thereby preventing them from running on some CPUs?
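
    As a way to see which kernel threads are CPU-bound (and therefore the dangerous ones to move) before deciding, a small sketch: kernel threads are children of kthreadd (PID 2), and taskset prints each one's allowed-CPU list:

        for pid in $(ps --no-headers --ppid 2 -o pid); do
          taskset -pc "$pid"     # threads bound to one CPU show a single CPU here
        done

    cset's -k on only moves threads it considers movable, so threads pinned to a single CPU in this listing should stay put either way.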

    Read the article

  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."
        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail. Now, I have AT&T U-verse at home and a few devices connected to the gateway, including one I'll call "My-Server". When I run SmtpDiag from, say, datc@... to [email protected], the SOA serial number match passes, the local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and remote DNS (hotmail.com) tests do *not* pass, and ultimately I get:

        Connecting to the server failed. Error: 10060.
        Failed to submit mail to mx2.hotmail.com.

    When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, the SOA, local DNS, and remote DNS tests all pass, but I get the same 10060 error. Same for yahoo.com, gmail.com, and so forth. Is it my ISP's job to fix this? Is some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
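
    One cause that matches these exact symptoms (an assumption worth testing, not a diagnosis): residential ISPs, AT&T U-verse included, commonly block outbound port 25, which yields precisely a 10060 timeout to every MX host even while DNS tests pass. A quick check from the server:

        telnet mx2.hotmail.com 25
        # A banner starting with "220" means port 25 is open outbound; a hang
        # followed by a timeout suggests the ISP or gateway is blocking it.

    If it is blocked, relaying through the ISP's smarthost or an authenticated relay on port 587 is the usual workaround, and a missing PTR record would still get the mail flagged as spam even with port 25 open.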

    Read the article

  • Change to different user, or let different user execute a command

    - by WG-
    I have a problem. There is a server that I can access over ssh with an account, let's say WG. Now, there is a folder with the following permissions:

        drwxr-s---+ 855 vvz www-data 20K Aug 21 17:56 pictures

    I want to copy this folder using rsync; however, since I am the user WG and not www-data, I cannot read it. So I want www-data to execute the rsync command for me; however, I do not possess sudo powers. My friend tells me that I actually am able to execute the rsync command as www-data, but he will not tell me how. I asked him for some clues, and he told me it has something to do with a reverse shell (which I figured out to mean that you connect by ssh to your server and then it connects back to your own machine, or something like that). I also asked if it was by design or actually a flaw in the system; he tells me it is both. Furthermore, I think it has something to do with the group permissions: if I can act with the group's permissions, then I can read the files. Does anybody have a clue?
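
    Before anything exotic, a small check sketch: the trailing "+" on drwxr-s---+ means an ACL is present, and group membership alone is enough for read access here, so it's worth confirming what you already have (the path is hypothetical):

        id                            # does WG already belong to www-data?
        getfacl /path/to/pictures     # the ACL may grant WG read access directly

    If id lists www-data, a plain rsync -av over ssh should simply work without impersonating anyone.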

    Read the article

  • How to use LVM on Rackspace Cloud

    - by batrick
    Dear all, I am trying to set up a simple but effective solution to back up my Rackspace cloud servers. These servers each run Subversion, Trac, and some database-backed custom PHP applications. My idea is to set up LVM and mount a volume under, say, /srv, and keep the data from all applications in that volume. Instead of caring about how to back up each app in a different way (svn hotcopy, trac-admin hotcopy, a huge mess for MySQL), I simply take an LVM snapshot and back that up to Cloud Files using the excellent cloudcity script (http://github.com/jspringman/cloudcity/blob/master/cloudcity). The advantage of this solution is that it is quick and easy, and LVM allows decent backups. As more apps are added, the backup script should not need to change much. The downside, and the main point of my question here, is that I am not sure how to get LVM working on the Rackspace cloud, because there is only one root volume and no service like Amazon's EBS. I was thinking it may be possible to create a large empty file and use it as a "physical volume". Has anybody done anything like this before? Or do you know why it can never work? It would be great to hear from you. Thanks, batrick
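
    A loopback-file "physical volume" sketch, since that is exactly the idea floated above (untested on Rackspace specifically, and snapshot I/O through a loop device will be slow, but nothing in LVM forbids it):

        dd if=/dev/zero of=/srv-pv.img bs=1M count=20480   # 20 GB backing file
        losetup /dev/loop0 /srv-pv.img
        pvcreate /dev/loop0
        vgcreate vg_srv /dev/loop0
        lvcreate -n lv_srv -L 15G vg_srv    # leave free extents for snapshots
        mkfs.ext4 /dev/vg_srv/lv_srv
        mount /dev/vg_srv/lv_srv /srv

    The losetup would need re-running from an init script after each reboot, and the file lives on the same single root volume, so this buys snapshot semantics rather than any extra durability.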

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it seems like an I/O subsystem limitation, but I'm skeptical: perfmon stats for I/O on the production (source) server are well within normal trends for that server; the destination server shows a sustained 7 MB/s write rate, which seems incredibly low even for a slow disk; and the network link is gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that, surprisingly enough, it's not specifically a wait on I/O. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005. And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance.") Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
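
    For what it's worth, a way to watch the wait accumulate while a backup runs (sys.dm_os_wait_stats is standard in 2008; the server name is yours to fill in):

        sqlcmd -S MYSERVER -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type IN ('BACKUPIO','BACKUPBUFFER','ASYNC_IO_COMPLETION');"

    A rough reading, hedged: BACKUPIO tends to accrue while backup buffers wait to be filled or drained, so a slow destination share can show up here even when the source disks look idle; comparing it against BACKUPBUFFER helps tell the two sides apart.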

    Read the article

  • Terminal is not letting me enter commands unless I hit Enter a bunch of times

    - by ninja08
    Whenever I open Terminal, it normally lets me start entering commands immediately. Earlier today I did the GitHub setup at https://help.github.com/articles/set-up-git, and all of a sudden the prompt won't accept commands unless I hit Enter a few times first. This is what it looks like:

        Last login: Fri Nov 9 11:43:28 on ttys001
        mysql.save: Permission denied
        mysql.save: Permission denied
        /Users/Nick/.zshrc:32: command not found:
        . ~ git: ?
        ~ git: ?
        ~ git: ?

    See the big space? That's because it simply never shows the ~ git: prompt unless I hit Enter 3-4 times. Also, it never said ~ git: before I did the Git setup; I'm not sure what I changed. I've checked the .zshrc file and commented everything out to find the line causing the problem, and it turns out to be source $ZSH/oh-my-zsh.sh. Within the oh-my-zsh.sh file I've commented out each block of code, starting at the top, and found that this block is causing it:

        # Load the theme
        if [ "$ZSH_THEME" = "random" ]
        then
          themes=($ZSH/themes/*zsh-theme)
          N=${#themes[@]}
          ((N=(RANDOM%N)+1))
          RANDOM_THEME=${themes[$N]}
          source "$RANDOM_THEME"
          echo "[oh-my-zsh] Random theme '$RANDOM_THEME' loaded..."
        else
          if [ ! "$ZSH_THEME" = "" ]
          then
            if [ -f "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme" ]
            then
              source "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme"
            else
              source "$ZSH/themes/$ZSH_THEME.zsh-theme"
            fi
          fi
        fi
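
    A bisecting sketch for the theme block above (assuming the hang is in whatever theme file gets sourced, which is what the comment-out experiment suggests): override the theme in ~/.zshrc before oh-my-zsh loads and see if the prompt comes back.

        # in ~/.zshrc, before "source $ZSH/oh-my-zsh.sh":
        ZSH_THEME=""               # skip theme loading entirely, or
        ZSH_THEME="robbyrussell"   # the stock default theme

    If an empty theme fixes it, the previously configured theme (or a stray non-theme file matched by the $ZSH/themes/*zsh-theme glob under "random") is the thing to chase.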

    Read the article

  • Windows 8 install app for multiple user accounts

    - by Robert Graves
    I purchased Adera episode 2, intending to play through it with my son. We each have our own user account on the same PC. When my son logged in, he was prompted to purchase the app, which I had already purchased, installed, and played on the same PC. So I checked the Terms of Use. After selecting an app in the Store, there is a Terms of Use link on the left side under the Install button; it is almost impossible to identify as a link unless you put your mouse over it, and the terms are standard across all apps in the Store, not specific to particular apps. The Terms of Use indicate that an app may be installed on up to five devices, but say nothing about multiple user accounts on those devices. However, this Microsoft blog article indicates that it is allowed:

        Say, for example, that your family has a shared PC. You have previously used your Microsoft account to purchase a game that all your kids like to play. You can install it for each of your kids by having each of them sign in to their Windows accounts on the shared PC, then launch the Store and sign in to the Store using your own Microsoft account. There, you'll see all your apps and you can re-install the app on your kid's Windows account. Installing apps on multiple user accounts on a shared PC still only counts as one of the five allowable PCs where you can install apps.

    So I have two questions: (1) Is it permissible under the Terms of Use to install the app under multiple accounts on the same device? (2) If so, how do I do so, given that my son has already signed into the Store using his own Microsoft account?

    Read the article

  • Limited bandwidth and transfer rates per user

    - by Cx03
    I searched for a while but couldn't find anything concrete; hopefully someone can help me. I'm going to be running a Debian server on a gigabit port and want to give each user his/her fair share of internet access. The first objective is easy: transfer rates (speed) per user. From what I've looked at, iptables/Shorewall could do the job easily. Is this easy to set up, or could one of you point me at a config? I was hoping to limit users to 300 Mbit or 650 Mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT count toward the quota. However, I still need to limit the external traffic, and if a user goes over, cut off access (or throttle them to a very low speed, 10 Mbit?). Let's say the user has a 3 TB external traffic limit. The IF part is: if the hostname they are exchanging traffic with does NOT match .ovh. or .kimsufi. (the company owns multiple TLDs), count it toward the quota. Once the quota exceeds 3 TB, choke them. Where could I find a system to count that for me? It would also need to reset, or be able to be manually reset, on a monthly basis. Thanks ahead of time!
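
    A very rough counting sketch for the quota half (hedged: iptables matches addresses, not hostnames, so the .ovh./.kimsufi. exclusion has to be translated into address sets first; the range and UID below are placeholders):

        ipset create internal_nets hash:net
        ipset add internal_nets 192.0.2.0/24    # stand-in for the OVH/Kimsufi ranges
        # A rule with no -j target only counts; per-user via the owner match:
        iptables -A OUTPUT -m owner --uid-owner 1001 \
                 -m set ! --match-set internal_nets dst
        iptables -nvx -L OUTPUT                 # byte counters feed the 3 TB check

    A cron job could read the counters, swap in a 10 Mbit tc class for any user over quota, and zero everything monthly with iptables -Z.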

    Read the article

  • CPU usage always below 10% in Windows Server 2008 R2 x64

    - by ???
    I am using a server running Windows Server 2008 R2 to run my program. The CPU is an Intel Xeon X5570 at 2.93 GHz, with 2 processors and 8 cores per processor. However, I have found that CPU usage is almost always below 10%, even when I use 32 threads in my program. I have also seen, in Task Manager, that CPU usage occasionally reaches as high as 93% while my program runs; at those moments it processes over 1000 files per second, while normally it only manages about 50 files per second. This does not happen often, though. I used tools downloaded from the internet to make sure no core sleeps while the server is on; nothing changed. I also edited the Windows registry to make sure that I, as an administrator, have no CPU usage limit, but that changed nothing either. Is there any way I can make full use of my CPU, so that each core runs a thread of my program and total CPU usage can reach over 50% with a reasonable number of threads? Has this happened to any of you, and could you help me with it? Thank you!
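
    A hedged first check: the jump from ~50 to ~1000 files per second at 93% CPU suggests the threads usually sit in I/O waits rather than compute, in which case no scheduler or registry tweak will raise CPU usage. The disk queue is a quick way to test that theory from a command prompt:

        typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -sc 30

    Sustained values well above the number of spindles while the program runs would point at storage, and the fix becomes overlapped I/O or batching files rather than more threads.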

    Read the article

  • Apps won't start after vanilla reboot

    - by Daniel R Hicks
    I had Adobe and Norton nagging me to reboot, so I did: clicked Reboot from the Start button. Everything seemed pretty normal as it shut down and came back up, but once up, a bunch of apps won't start. The first one I noticed was Firefox: it flashes the disk light normally but never appears on the screen. Then I tried to bring up an OpenOffice Calc window, same thing. I tried MS Word; the splash screen appeared, but never the main window, and the splash screen just sat there with a swirly over it. But Solitaire, Notepad++, Paint, and several others pop up just fine, and I'm typing this from IE 8, which, if anything, came up faster than usual. When I try to open Network and Sharing Center, the window appears, but nothing appears in it, and eventually it's tagged "not responding"; when I kill that window I get (after a delay) "Windows Explorer is not responding", and when I say OK, the screen resets. I've done nothing particularly strange on this box, it's not generally at significant risk for malware, and I haven't installed anything new other than the aforementioned updates. I tried rebooting again, and no joy: same as before. One other thing: several minutes after rebooting I get the message "Error: Unable to start Bluetooth Stack Service." The Bluetooth radio is turned on, I rarely have anything Bluetooth attached, and I don't recall ever seeing this message before. Added: Looking at Event Viewer, I'm getting a lot of "The description for Event ID 1 from source xxx cannot be found." Is there any significance to this? Added: I'm looking at restoring from backup, but the procedure is, at best, unclear. Is it sufficient to restore from "Backup and Restore Center", or must I restore from the restore DVD first?

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to chase every single thing that needs fixing and tuning; I just want the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables it recommends adjusting, I don't even see most of them in the mysql.cnf file:

        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user    = mysql
        socket  = /var/run/mysqld/mysqld.sock
        port    = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
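
    A minimal sketch of applying the tuner's variable suggestions, hedged for this box: on 512 MB total, the innodb_buffer_pool_size >= 326M recommendation should be ignored (the existing 220M is already generous for the RAM available), while the rest are safe, modest bumps:

        [mysqld]
        tmp_table_size      = 32M
        max_heap_table_size = 32M   # keep equal to tmp_table_size, per the tuner
        table_cache         = 128
        query_cache_limit   = 2M

    The duplicated skip-locking / innodb_file_per_table / big-tables lines near the end of the [mysqld] section are also worth deleting; they are harmless but obscure which setting is actually in force.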

    Read the article

  • Data recovery: nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scant traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything was fine with the data on this drive, except for my movie collection. There were two top-level folders, one called "2sort" and the other "segregated". Of all the segregated subfolders, only letters C, D, and 2 or 3 others remain. And the 2sort folder, which had umpteen subfolders amounting to more than 0.5 TB, is just gone! This is a great downfall. Now, this is a personal cloud drive with no USB port or the like, so unfortunately there is no easy way to hardwire it and recover the files. I'm sure there are programs out there that can help me recover my beloved movies from such an interestingly hard-to-reach (should I say?) device? Whatever that software may be, compadre, my happiness lies within your answer. Remember: recovery software for a (WD) personal cloud. These movies were all hand-picked over the course of ten years; I just never catalogued my collection. If I could just get the list of my lost collection, that would be enough; recovering them would be a bonus, though they ought to be damaged if I were to somehow recover them, you know? Still, I'm certain they're all intact; I guess the file index just got corrupted. There surely is a veil of some sort that needs to be thrown or pushed aside to reveal my movies. What software can do that? Thanks immensely!

    Read the article

  • Is it possible to map static IP to computer name instead of MAC address?

    - by xenon
    I have a number of computers with different hostnames connected to the network. They currently hold static IP addresses mapped to their MAC addresses. This gives rise to a problem: when we swap the hard drive from one computer to another, the MAC address changes, and the application we run from that hard drive has trouble getting the static IP it needs to work. We can't reconfigure the IP address in the application every time, and changing the static mappings over to the computer's new MAC address is quite a pain. Since all the computers have a unique computer name as their hostname, is it possible to configure things so that when these computers request addresses from the DHCP server, DHCP learns their hostname and assigns the correct IP address? That is to say, the static IP would be mapped to the computer's hostname instead of its MAC address. All the computers are running Windows 7. Would this be possible? If so, how should I go about doing it?
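
    If swapping in a different DHCP server is an option, dnsmasq does exactly this: a reservation keyed on the client-supplied hostname rather than the MAC. A sketch (names and addresses are hypothetical):

        # /etc/dnsmasq.conf
        dhcp-range=192.168.1.50,192.168.1.150,12h
        dhcp-host=workstation-01,192.168.1.60   # matched by hostname, not MAC
        dhcp-host=workstation-02,192.168.1.61

    Windows 7 clients send their computer name in DHCP option 12 by default, which is what the match keys on. The native Windows DHCP server, by contrast, reserves by MAC (or client ID), so as far as I know it cannot do this out of the box.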

    Read the article

  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out nginx and porting my existing Apache configuration to it. I have managed to reroute the CodeIgniter URLs successfully, but I'm having a problem with one particular controller whose name coincides with a directory in the site root: a URL, say http://localhost/hello, coincides with a hello directory in the site root. Apache had no problem with this, but nginx routes to the directory instead of the controller. My reroute structure is as follows:

        http://host_name/incoming_url  =>  http://host_name/index.php/incoming_url

    All the CodeIgniter files are in the site root. My nginx configuration (relevant parts):

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php/$request_uri;

            # apache rewrite rule conversion
            if (!-e $request_filename){
                rewrite ^(.*)/?$ /index.php?/$1 last;
            }

            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }

        location ~ \.php.*$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/var/run/php5-fpm.sock;

            fastcgi_index index.php;
            include fastcgi_params;
        }

    I'm new to nginx and need help figuring out this directory conflict with the controller name. I pieced this configuration together from various sources on the web, and any better way of writing it is greatly appreciated.
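
    One common fix sketch, assuming the goal is "controllers always win over root-level directories": drop the $uri/ element from try_files, so a directory named hello can no longer satisfy the lookup and the request falls through to CodeIgniter (the if/rewrite block then becomes redundant):

        location / {
            index index.php index.html index.htm;
            try_files $uri /index.php?/$request_uri;
        }

    The trade-off is that directory indexes under / stop being served implicitly; any directory that genuinely should be browsable would need its own location block.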

    Read the article

  • Every month, scheduled task fails and password must be reset - why?

    - by Ducain
    [NOTE: I posted this originally at Stack Overflow but it got no traction there; reposting here.] We have a bit of software installed at a few client locations that runs (via Windows Task Scheduler) a few times each day. In ONLY ONE of the client locations we have a unique problem: each month, the task stops working, after running every day for weeks. Twice now it has failed on the 2nd of the month. When I walk the client through troubleshooting, we find that the task can't start: access denied. To fix it, we simply re-enter the same exact password, and off it goes, happy as a clam. I've never heard of this issue, and their IT people say they don't have anything running once a month that might cause it. I'm at a complete loss here. Any ideas as to why this might be happening? Further details: Windows XP Pro machine; the task is fired with credentials from a local admin account; the computer is always on and connected to the net.
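
    A monthly failure with "access denied", cured by re-entering the same password, smells like a password-expiration policy on that one machine's local admin account (offered as a theory to check, not a diagnosis). The account name below is a placeholder:

        net accounts                 # shows "Maximum password age" in the local policy
        net user TaskAdminAccount    # shows the "Password expires" date for the account

    If the maximum age is around 30 days, either marking the account's password as never expiring or running the task as SYSTEM would confirm or refute the theory.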

    Read the article

  • Uninstalled Server 2008, now router won't handle DHCP

    - by john
    My setup is this: a server behind a router, and the router has the server and a switch connected to it, with multiple computers on the switch. The router used to serve DHCP and DNS. A couple of days ago I installed AD, DNS, and DHCP on the server, and the server gave out IPs. For various reasons we had to uninstall the domain on our server, so I removed the AD, DHCP, and DNS roles and set the router back to serving DHCP and DNS. Now I can't get computers on the network. I reset the router back to factory defaults, and if I plug a computer directly into the router it gets an IP address, but the computers behind the switch can't get an IP address and can't see the router. All my computers say "unidentified network", and if I ping the router it says the host is unreachable. On the other hand, my wireless devices are just fine and connect with no problem. But for the desktops, ipconfig /release doesn't release anything, and /renew can't find a server to renew on. My router log shows several FIN scans, but they are from innocuous websites (Google, NetGear), and it shows a couple of smurf attacks, but they are all from my external IP. Any ideas? The server isn't even connected to the router right now, and all the computers are set for dynamic IP addresses. I don't know what else to try. Any help?

    Read the article

  • Laptop Synaptics Touchpad No Longer Functional After Windows 7 Upgrade

    - by Chance
    I have a Toshiba laptop with an integrated Synaptics (PS/2 port) touchpad, which I bought pre-installed with 32-bit Windows Vista. After doing a clean install of 32-bit Windows 7, I can no longer get the touchpad to respond. The function (FN) keys on the laptop still operate correctly, but they no longer display on screen when used, so I have no way of knowing whether the function key to enable/disable the touchpad is working correctly, although the function keys for dimming and brightening the display, as well as others, work fine. I have removed the device from Device Manager and allowed it to reinstall, with no success. I have removed the previous drivers and updated them with the current 32-bit Windows 7-compatible versions from the Synaptics website, with no success. The Synaptics icon displays in the taskbar, and the touchpad is listed in Device Manager; both say the device is installed and working correctly. I have checked that the touchpad is enabled in the Synaptics menu, and have tried toggling it enabled/disabled with no success. If anyone has any suggestions, or knows where I can find a solution, I would be very appreciative.

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect, and/or rather ill-behaved (replacing system drivers, disrupting each other, etc.). One solution I had never used before is the one built into Windows, mostly because the infrastructure guys always refuse to use it, claiming it's "not secure". Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze: easy to set up, well-behaved, connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy never to use another VPN product again. I gather the Windows VPN used to rely on PPTP, which is not considered secure, but in Windows 7/2008 it supports L2TP/IPsec, SSTP, and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up to date to me. But I'm just a lowly developer; can someone in the know give me the lowdown on this?

    Read the article

  • Remote reboot of Windows to Knoppix

    - by user64452
    I am attempting to develop an auditing application. This application will be deployed on Windows networks and will need to discover hardware and software details of all machines attached to the network (including printers). I do not want to have to install the application on each workstation, and it needs to discover the IP addresses of all the networked workstations by itself. I have been prototyping this app for the last couple of months and have decided to try a new tack. Is this possible?

    a) You have a Windows network, minimum Windows XP SP3 and upwards.
    b) Maximum of 100 networked machines (if that matters).
    c) I need to remotely reboot each Windows machine in turn on the entire network and get it to start up into Linux, say Knoppix, for example.
    d) However, the Knoppix live CD is only available on one of the networked machines.

    Questions... Morphology? Longevity? Incept dates? Cheers, DD
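
    The usual shape of a solution here, sketched with heavy assumptions (a Linux-capable box acting as PXE/TFTP server, admin credentials on the workstations, and machines whose BIOS boot order tries the network): serve Knoppix over PXE from the one machine that has it, then reboot the others remotely.

        # On the Knoppix-hosting machine (Linux), offer PXE boot files via dnsmasq:
        dnsmasq --interface=eth0 --dhcp-range=192.168.0.100,192.168.0.200,12h \
                --enable-tftp --tftp-root=/srv/tftp --dhcp-boot=pxelinux.0
        # From any Windows box with admin rights, reboot a target remotely:
        shutdown /r /t 0 /m \\WORKSTATION01

    If the existing router must keep handing out addresses, dnsmasq would need to run in proxy-DHCP mode instead, and any machine that doesn't network-boot by default would need a one-time BIOS change.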

    Read the article
