Search Results

Search found 6981 results on 280 pages for 'force flow'.

Page 204/280 | < Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    I'm having so many issues with my NginX configuration, since I'm new to NginX; I'd been using Lighttpd for quite some time. Here is the base info.

    New machine: CentOS 6.3 64-bit, NginX 1.2.4-1.el6.ngx, PHP-FPM 5.3.18-1.el6.remi
    Old machine: CentOS 6.2 64-bit, Lighttpd 1.4.25-3.el6

    Original Lighttpd config file:

        #######################################################################
        ##
        ##  /etc/lighttpd/lighttpd.conf
        ##
        ##  check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Some Variable definition which will make chrooting easier.
        ##
        ##  if you add a variable here. Add the corresponding variable in the
        ##  chroot example aswell.
        ##
        var.log_root    = "/var/log/lighttpd"
        var.server_root = "/var/www"
        var.state_dir   = "/var/run"
        var.home_dir    = "/var/lib/lighttpd"
        var.conf_dir    = "/etc/lighttpd"

        ##
        ## run the server chrooted.
        ##
        ## This requires root permissions during startup.
        ##
        ## If you run Chrooted set the the variables to directories relative to
        ## the chroot dir.
        ##
        ## example chroot configuration:
        ##
        #var.log_root    = "/logs"
        #var.server_root = "/"
        #var.state_dir   = "/run"
        #var.home_dir    = "/lib/lighttpd"
        #var.vhosts_dir  = "/vhosts"
        #var.conf_dir    = "/etc"
        #
        #server.chroot   = "/srv/www"

        ##
        ## Some additional variables to make the configuration easier
        ##

        ##
        ## Base directory for all virtual hosts
        ##
        ## used in:
        ## conf.d/evhost.conf
        ## conf.d/simple_vhost.conf
        ## vhosts.d/vhosts.template
        ##
        var.vhosts_dir  = server_root + "/vhosts"

        ##
        ## Cache for mod_compress
        ##
        ## used in:
        ## conf.d/compress.conf
        ##
        var.cache_dir   = "/var/cache/lighttpd"

        ##
        ## Base directory for sockets.
        ##
        ## used in:
        ## conf.d/fastcgi.conf
        ## conf.d/scgi.conf
        ##
        var.socket_dir  = home_dir + "/sockets"

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Load the modules.
        include "modules.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Basic Configuration
        ## ---------------------
        ##
        server.port = 80

        ##
        ## Use IPv6?
        ##
        #server.use-ipv6 = "enable"

        ##
        ## bind to a specific IP
        ##
        #server.bind = "localhost"

        ##
        ## Run as a different username/groupname.
        ## This requires root permissions during startup.
        ##
        server.username  = "lighttpd"
        server.groupname = "lighttpd"

        ##
        ## enable core files.
        ##
        #server.core-files = "disable"

        ##
        ## Document root
        ##
        server.document-root = server_root + "/lighttpd"

        ##
        ## The value for the "Server:" response field.
        ##
        ## It would be nice to keep it at "lighttpd".
        ##
        #server.tag = "lighttpd"

        ##
        ## store a pid file
        ##
        server.pid-file = state_dir + "/lighttpd.pid"

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Logging Options
        ## ------------------
        ##
        ## all logging options can be overwritten per vhost.
        ##
        ## Path to the error log file
        ##
        server.errorlog = log_root + "/error.log"

        ##
        ## If you want to log to syslog you have to unset the
        ## server.errorlog setting and uncomment the next line.
        ##
        #server.errorlog-use-syslog = "enable"

        ##
        ## Access log config
        ##
        include "conf.d/access_log.conf"

        ##
        ## The debug options are moved into their own file.
        ## see conf.d/debug.conf for various options for request debugging.
        ##
        include "conf.d/debug.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Tuning/Performance
        ## --------------------
        ##
        ## corresponding documentation:
        ## http://www.lighttpd.net/documentation/performance.html
        ##
        ## set the event-handler (read the performance section in the manual)
        ##
        ## possible options on linux are:
        ##
        ## select
        ## poll
        ## linux-sysepoll
        ##
        ## linux-sysepoll is recommended on kernel 2.6.
        ##
        server.event-handler = "linux-sysepoll"

        ##
        ## The basic network interface for all platforms at the syscalls read()
        ## and write(). Every modern OS provides its own syscall to help network
        ## servers transfer files as fast as possible
        ##
        ## linux-sendfile - is recommended for small files.
        ## writev - is recommended for sending many large files
        ##
        server.network-backend = "linux-sendfile"

        ##
        ## As lighttpd is a single-threaded server, its main resource limit is
        ## the number of file descriptors, which is set to 1024 by default (on
        ## most systems).
        ##
        ## If you are running a high-traffic site you might want to increase this
        ## limit by setting server.max-fds.
        ##
        ## Changing this setting requires root permissions on startup. see
        ## server.username/server.groupname.
        ##
        ## By default lighttpd would not change the operation system default.
        ## But setting it to 2048 is a better default for busy servers.
        ##
        ## With SELinux enabled, this is denied by default and needs to be allowed
        ## by running the following once: setsebool -P httpd_setrlimit on
        server.max-fds = 2048

        ##
        ## Stat() call caching.
        ##
        ## lighttpd can utilize FAM/Gamin to cache stat call.
        ##
        ## possible values are:
        ## disable, simple or fam.
        ##
        server.stat-cache-engine = "simple"

        ##
        ## Fine tuning for the request handling
        ##
        ## max-connections == max-fds/2 (maybe /3)
        ## means the other file handles are used for fastcgi/files
        ##
        server.max-connections = 1024

        ##
        ## How many seconds to keep a keep-alive connection open,
        ## until we consider it idle.
        ##
        ## Default: 5
        ##
        #server.max-keep-alive-idle = 5

        ##
        ## How many keep-alive requests until closing the connection.
        ##
        ## Default: 16
        ##
        #server.max-keep-alive-requests = 18

        ##
        ## Maximum size of a request in kilobytes.
        ## By default it is unlimited (0).
        ##
        ## Uploads to your server cant be larger than this value.
        ##
        #server.max-request-size = 0

        ##
        ## Time to read from a socket before we consider it idle.
        ##
        ## Default: 60
        ##
        #server.max-read-idle = 60

        ##
        ## Time to write to a socket before we consider it idle.
        ##
        ## Default: 360
        ##
        #server.max-write-idle = 360

        ##
        ##  Traffic Shaping
        ## -----------------
        ##
        ## see /usr/share/doc/lighttpd/traffic-shaping.txt
        ##
        ## Values are in kilobyte per second.
        ##
        ## Keep in mind that a limit below 32kB/s might actually limit the
        ## traffic to 32kB/s. This is caused by the size of the TCP send
        ## buffer.
        ##
        ## per server:
        ##
        #server.kbytes-per-second = 128

        ##
        ## per connection:
        ##
        #connection.kbytes-per-second = 32

        ##
        #######################################################################

        #######################################################################
        ##
        ##  Filename/File handling
        ## ------------------------
        ##
        ## files to check for if .../ is requested
        ## index-file.names = ( "index.php", "index.rb", "index.html",
        ##                      "index.htm", "default.htm" )
        ##
        index-file.names += ( "index.xhtml", "index.html", "index.htm",
                              "default.htm", "index.php" )

        ##
        ## deny access the file-extensions
        ##
        ## ~    is for backupfiles from vi, emacs, joe, ...
        ## .inc is often used for code includes which should in general not be part
        ##      of the document-root
        url.access-deny = ( "~", ".inc" )

        ##
        ## disable range requests for pdf files
        ## workaround for a bug in the Acrobat Reader plugin.
        ##
        $HTTP["url"] =~ "\.pdf$" {
          server.range-requests = "disable"
        }

        ##
        ## url handling modules (rewrite, redirect)
        ##
        #url.rewrite  = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

        ##
        ## both rewrite/redirect support back reference to regex conditional using %n
        ##
        #$HTTP["host"] =~ "^www\.(.*)" {
        #  url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}

        ##
        ## which extensions should not be handle via static-file transfer
        ##
        ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        ##
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

        ##
        ## error-handler for status 404
        ##
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        ##
        #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

        ##
        ## mimetype mapping
        ##
        include "conf.d/mime.conf"

        ##
        ## directory listing configuration
        ##
        include "conf.d/dirlisting.conf"

        ##
        ## Should lighttpd follow symlinks?
        ##
        server.follow-symlink = "enable"

        ##
        ## force all filenames to be lowercase?
        ##
        #server.force-lowercase-filenames = "disable"

        ##
        ## defaults to /var/tmp as we assume it is a local harddisk
        ##
        server.upload-dirs = ( "/var/tmp" )

        ##
        #######################################################################

        #######################################################################
        ##
        ##  SSL Support
        ## -------------
        ##
        ## To enable SSL for the whole server you have to provide a valid
        ## certificate and have to enable the SSL engine.::
        ##
        ##   ssl.engine = "enable"
        ##   ssl.pemfile = "/path/to/server.pem"
        ##
        ## The HTTPS protocol does not allow you to use name-based virtual
        ## hosting with SSL. If you want to run multiple SSL servers with
        ## one lighttpd instance you must use IP-based virtual hosting: ::
        ##
        ##   $SERVER["socket"] == "10.0.0.1:443" {
        ##     ssl.engine           = "enable"
        ##     ssl.pemfile          = "/etc/ssl/private/www.example.com.pem"
        ##     server.name          = "www.example.com"
        ##
        ##     server.document-root = "/srv/www/vhosts/example.com/www/"
        ##   }
        ##
        ## If you have a .crt and a .key file, cat them together into a
        ## single PEM file:
        ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
        ##   > /etc/ssl/private/lighttpd.pem
        ##
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        ##
        ## optionally pass the CA certificate here.
        ##
        ##
        #ssl.ca-file = ""

        ##
        #######################################################################

        #######################################################################
        ##
        ## custom includes like vhosts.
        ##
        #include "conf.d/config.conf"
        #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
        ##
        #######################################################################

        #######################################################################
        ### Custom Added by me
        #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php")
        url.rewrite-once = (
          ".*\?(.*)$" => "/index.php?$1",
          "^/js/.*$" => "$0",
          "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
          "" => "/index.php"
        )
        # expire.url = ( "" => "access 1 days" )
        include "myvhost-vhosts.conf"
        #######################################################################

    Here is my vhost file for Lighttpd:

        $HTTP["host"] =~ "192.168.8.35$" {
          server.document-root = "/var/www/lighttpd/qc41022012/public"
          server.errorlog = "/var/log/lighttpd/error.log"
          accesslog.filename = "/var/log/lighttpd/access.log"
          server.error-handler-404 = "/e404.php"
        }

    and here is my nginx.conf file:

        user nginx;
        worker_processes 5;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/testsite/logs/access.log main;

            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 65;
            #gzip on;
            # include /etc/nginx/conf.d/*.conf;

            ## I added this ##
            include /etc/nginx/sites-available/*;
        }

    Here is my NginX vhost file:

        server {
            server_name 192.168.8.91;
            access_log /var/log/nginx/myapps/logs/access.log;
            error_log /var/log/nginx/myapps/logs/error.log;
            root /var/www/html/myapps/public;

            location / {
                index index.html index.htm index.php;
            }

            location = /favicon.ico {
                return 204;
                access_log off;
                log_not_found off;
            }

        #   location ~ \.php$ {
        #       try_files $uri /index.php;
        #       include /etc/nginx/fastcgi_params;
        #       fastcgi_pass 127.0.0.1:9000;
        #       fastcgi_index index.php;
        #       fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #       fastcgi_param SCRIPT_NAME $fastcgi_script_name;

            location ~ \.php.*$ {
                rewrite ^(.*.php)/ $1 last;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        #       fastcgi_intercept_errors on;
        #       fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        #       fastcgi_param PATH_INFO $uri;
        #       fastcgi_pass 127.0.0.1:9000;
        #       include fastcgi_params;
            }
        }

    We have a custom app that we created that works great with Lighttpd. We went through some headaches too when we were trying to figure out how to make it work with Lighttpd; this is the line that makes it work there:

        url.rewrite-once = (
          ".*\?(.*)$" => "/index.php?$1",
          "^/js/.*$" => "$0",
          "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
          "" => "/index.php"
        )

    but I couldn't figure out how to make it work in NginX. The webserver runs just fine when we use the phpinfo.php test file; however, as soon as I point it at my app, nothing comes up. I checked the error.log file and there's no error. Very mind-boggling. I've spent over a week trying to figure it out with no luck. Please help?

    Read the article

  • MySQL remote access not working - Port Closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes and then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...
        # netstat -anp | grep mysql
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  2  [ ACC ] STREAM LISTENING 12286 6349/mysqld /DATA/mysql/mysql.sock
        # netstat -anp | grep 3306
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  3  [ ] STREAM CONNECTED 3306 1411/audispd
        # lsof -i TCP:3306
        COMMAND  PID  USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
        mysqld  6349 mysql 10u   IPv4  12285      0t0  TCP [domain]:mysql (LISTEN)

    I am running CentOS release 5.8 (Final) with mysql 5.5.28 (Remi). Note: internal connections to MySQL work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and ssh with no problem. I had followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html

    I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [server ip]
        port=3306

    I even started over by deleting the mysql folder in my datastore and running:

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html). I have created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks
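
    One reading of the output above: the "closed" result is an artifact of the scan target, not the port. mysqld is bound to [server ip], so an nmap of localhost finds nothing there. A quick sketch to separate a bind problem from a filtering problem (run on the same box):

        # Scan the address MySQL is actually bound to, not the loopback
        nmap -sT -p 3306 [server ip]
        # Confirm nothing is silently dropping packets despite iptables being "off"
        iptables -L -n -v
        # If the port shows open locally but remote connects still hang,
        # the filter sits upstream (hosting provider firewall, for instance)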

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine, my desktop running Ubuntu with GNOME, and remote-machine, a virtual machine also running Ubuntu but without X. On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY), but opening a file that lives on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that /some/file exists on remote-machine, and I can SSH normally from remote-machine to local-machine using my SSH key without any problems. I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems, but the terminal I ran it in is itself part of the DISPLAY :0 session (it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation), but it didn't work. If I run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but from a tty (for example tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work either; I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So if I do:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not at local-machine, so I can't get the correct value for this environment variable in advance. Is there any other way to make this work?
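
    One approach worth sketching: the session bus address can be read out of a running session process's environment over the same SSH connection, so no env.txt needs to be copied in advance. This assumes a /proc filesystem on local-machine and a gnome-session process owned by myuser:

        # Run from remote-machine: recover DBUS_SESSION_BUS_ADDRESS from the
        # live GNOME session on local-machine, then launch gedit with it
        ssh local-machine '
          pid=$(pgrep -u myuser gnome-session | head -n 1)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '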

    Read the article

  • Windows 7 install detects SSD but doesn't list it to install to

    - by Mohamed Meligy
    I'm having quite a weird problem when trying to install Windows 7 SP1 on a new Corsair Force Series 3 SSD to replace a failing HDD in my wife's laptop. When I boot to the Windows installer, it shows that I have no disks to install to, and asks me to provide a driver for any custom disk controller I may have. When I go to the repair option on the first install window and open a command prompt, I can see the disk using diskpart, partition it, format partitions, and then access them from the command prompt and copy files to them. After creating partitions, clicking the "browse" button in the Windows install screen (the one that shows no disks available to install Windows to) does show the partitions created by diskpart! So it does detect the disk and partitions, but refuses to list them as options to install to. People on the Interwebs seem to suggest that just running diskpart's "clean" solved the issue for most people; creating an "active" "primary" partition is all most tutorials suggest. Both got me only as far as described above. The BIOS doesn't have a RAID option, and switching between "ATA" and "AHCI" (the only available options) made no difference. It might be worth mentioning that this is a laptop with a SATA III controller for the main drive (which I connected the SATA 3 SSD to) and SATA II for the DVD drive (which I used for the Windows install media); that's at least what googling brings up (Dell XPS 15 L502).

    Any ideas?

    Update: The SSD is 460 GB. I tried setting it up as one partition and also creating a 70-90 GB partition (NTFS). More importantly, Windows doesn't list the partition as one it cannot install to (which it does with disks in general, for example when they are too small). What happens here is different: it doesn't list anything at all. It shows an empty list of drives.
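
    For reference, the diskpart sequence those tutorials allude to looks like this (a sketch run from the install media's repair command prompt; "clean" erases the disk, and the disk number must be verified against the size column):

        diskpart
        list disk
        select disk 0
        clean
        create partition primary
        select partition 1
        active
        format fs=ntfs quick
        exit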

    Read the article

  • Puppet classes out of order despite explicit arrow operator use

    - by Alexandr Kurilin
    Absolute Puppet beginner here. I'm experiencing interesting behavior with my Puppet manifests and would love to know what I'm doing wrong. Let's say, for example, that I'm configuring the instance with the following ordered classes:

        class { 'update_system': } ->
        class { 'facter': } ->
        class { 'user_sshkey':
          user => 'ubuntu',
          type => 'rsa',
        } ->
        class { 'tmux':
          user => 'ubuntu',
        } ->
        class { 'vim':
          user => 'ubuntu',
        } ->
        class { 'bashrc':
          user => 'ubuntu',
        } ->
        notify { "Configuring DB role": } ->
        class { 'postgresql': }

    When I run the manifest with the --debug switch, looking at the notify statements I can see the classes executed in the following order:

        1. update_system starts
        2. a cron type inside the postgresql class (the very last class in the ordered list above) is executed
        3. postgres::install starts
        4. facter starts installing
        5. postgres::configure and postgres::service start
        6. the vim class is executed
        7. the "Configuring DB role" notification is made, all the way at the end
        ...and so on.

    Basically the ordering is all over the place; it doesn't seem to follow the arrow operators in any way. I'm guessing I'm missing something that would force the classes to execute one at a time. Could it be that I'm missing some kind of anchor pattern here? Invalid containment?
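
    The guess at the end is on the right track: the arrows order the class resources themselves, but a class that merely declares other classes (as a postgresql module typically does with include) does not contain them, so their resources float free of the chain. The classic fix is the anchor pattern inside the module, sketched here against the class names from the debug output (hypothetical, since the module's own code isn't shown):

        class postgresql {
          anchor { 'postgresql::begin': } ->
          class  { 'postgresql::install': } ->
          class  { 'postgresql::configure': } ->
          class  { 'postgresql::service': } ->
          anchor { 'postgresql::end': }
        }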

    Read the article

  • Ubuntu: package installed, but files missing?

    - by jeckyll2hide
    I have been playing around with the /etc/asterisk directory, installing the related package (asterisk-config), removing it, and removing the directory manually (just playing around to get the configuration synced to my configuration repo). Now I just want to reinstall the official package, so I do:

        root@tethys:/etc# apt-get install asterisk-config
        root@tethys:/etc# tree asterisk/
        asterisk/
        +-- manager.d

    What?! Empty?! Have I installed it?

        root@tethys:/etc# dpkg --get-selections | grep asterisk
        asterisk                        install
        asterisk-config                 install
        asterisk-core-sounds-en         install
        asterisk-core-sounds-en-gsm     install
        asterisk-modules                install
        asterisk-moh-opsound-gsm        install
        asterisk-voicemail              install

    Indeed! Let me check the contents of the package:

        root@tethys:/etc# dpkg -L asterisk-config
        ...
        /etc
        /etc/asterisk
        /etc/asterisk/res_snmp.conf
        /etc/asterisk/dbsep.conf
        /etc/asterisk/cel_custom.conf
        /etc/asterisk/cel.conf
        /etc/asterisk/meetme.conf
        /etc/asterisk/jingle.conf
        /etc/asterisk/queuerules.conf
        ...

    So, what have I done that the package gets installed but the contents are nowhere to be seen? And, more importantly, how can I force the contents to be installed, no matter what I have done before?
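
    A likely explanation: the files under /etc/asterisk are conffiles, and dpkg treats a conffile the admin deleted as a deliberate choice, so a plain reinstall will not bring them back. Forcing the missing conffiles to be restored looks roughly like this:

        # Reinstall the package, telling dpkg to recreate deleted conffiles
        apt-get --reinstall -o Dpkg::Options::="--force-confmiss" install asterisk-config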

    Read the article

  • fail2ban Error Gentoo

    - by Mark Davidson
    Hi all. I've recently set up a new VPS running Gentoo (my first time using the distro, so please forgive me if this is a really easy one) and, as I've done with other servers, installed fail2ban, setting it up to block hosts via iptables after too many unsuccessful SSH logins. However, I'm getting a strange error that I can't quite solve. When I start fail2ban I get these lines in the error log:

        2009-11-13 18:02:01,290 fail2ban.jail   : INFO   Jail 'ssh-iptables' started
        2009-11-13 18:02:01,480 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100

    If I try to force a ban, these errors show up in the log and the host is not banned:

        2009-11-13 11:23:26,905 fail2ban.actions: WARNING [ssh-iptables] Ban XXX.XXX.XXX.XXX
        2009-11-13 11:23:26,929 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:26,930 fail2ban.actions.action: ERROR  Invariant check failed. Trying to restore a sane environment
        2009-11-13 11:23:27,007 fail2ban.actions.action: ERROR  iptables -N fail2ban-SSH
        iptables -A fail2ban-SSH -j RETURN
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: ERROR  iptables -n -L INPUT | grep -q fail2ban-SSH returned 100
        2009-11-13 11:23:27,016 fail2ban.actions.action: CRITICAL Unable to restore environment

    My versions are as follows:

        Linux masked 2.6.18-xen-r12 #2 SMP Wed Mar 4 11:45:03 GMT 2009 x86_64 Intel(R) Xeon(R) CPU E5504 @ 2.00GHz GenuineIntel GNU/Linux
        net-analyzer/fail2ban-0.8.4
        net-firewall/iptables-1.4.3.2

    If anyone could shed some light on these errors, that would be great. I did wonder if it was a problem with iptables or some kernel modules, but I can block an IP if I do:

        iptables -I INPUT -s 25.55.55.55 -j DROP

    so that makes me think it's something a bit more unusual. Thanks a lot in advance
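
    A useful first step (a diagnostic sketch, not a fix): the non-zero return covers a chain of three commands, so run them one at a time to see which one actually fails and with what message. On a Xen kernel, a missing netfilter module is a common culprit for --dport matches:

        iptables -N fail2ban-SSH;                             echo "N: $?"
        iptables -A fail2ban-SSH -j RETURN;                   echo "A: $?"
        iptables -I INPUT -p tcp --dport ssh -j fail2ban-SSH; echo "I: $?"
        # --dport needs the tcp match support compiled in or loaded:
        lsmod | grep -E 'xt_tcpudp|iptable_filter'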

    Read the article

  • What is the best keyboard for typing speed (not layouts)

    - by Gapton
    So I am a programmer, and I like playing typing-speed games. My typing speed for common English words is 85 to 90 WPM, 95 max. I type on various devices (my laptop, desktop, office PC...) and they all have slightly different keyboards. Being a curious programmer, I wonder what type of keyboard allows the highest possible typing speed. Or, to phrase it another way: what type of keyboard do people use in typing-speed contests?

    Here is what I know, or feel I can share:

    - It must be a wired keyboard. I can feel the lag as I am typing this on my wireless keyboard, even though it is a slightly more expensive model which claims to have zero lag.
    - I know people prefer a mechanical keyboard for the haptic feedback, though I have not tried one. It lasts longer and is noisy, and it avoids the problem of ordinary keyboards where pressing many keys at once jams the signals so the computer only receives one or two of them (ghosting).
    - I personally prefer "thin profile" keyboards. I type a lot, and 95 WPM puts me in the top 5%, though of course that's just on a gaming site. On fat keyboards my fingers have to travel a much longer distance before the keys actually register; with the thin-profile keyboards found on my laptop, my fingers only hover over the keys and each press is short. Light, rapid strokes are what make me type fast, and on a fat keyboard I am forced to use heavy strokes, which slows me down.

    There must be some people out there who are keyboard scientists and actually run experiments and user tests with different setups. It would be interesting to understand more about the things we use every day for not just work but the majority of our communications.

    P.S. This is about hardware, not about switching keyboard layouts to Dvorak.

    Read the article

  • Drop database on DB2 9.5 - SQL1035N The database is currently in use

    - by Tommy
    I never got this working on the first try, and now I can't seem to do it at all. There is a connection pool somewhere using the database, so trying to drop the database while an application is using it should give this error. The problem is that there are no connections to the database when I issue these commands:

        db2 connect to mydatabase
        db2 quiesce database immediate force connections
        db2 connect reset
        db2 drop database mydatabase

    This always gives:

        SQL1035N The database is currently in use. SQLSTATE=57019

    Running this command shows no connections/applications:

        db2 list applications

    I can even deactivate the database, but still can't drop it:

        db2 => deactivate database mydatabase
        DB20000I The DEACTIVATE DATABASE command completed successfully.
        db2 => drop database mydatabase
        SQL1035N The database is currently in use. SQLSTATE=57019
        db2 =>

    Anyone got any clues? I'm running the cmd windows as the local administrator (Windows 2008), who is also the admin for DB2. The connection-pool user cannot connect during the quiesce state.
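
    One thing worth trying: connecting in order to quiesce creates a connection of its own, and a pooled application can re-attach in the gap after connect reset. Forcing everything off at the instance level just before the drop avoids both problems (a sketch, assuming SYSADM authority):

        db2 force applications all
        db2 deactivate database mydatabase
        db2 drop database mydatabase
        rem If something still holds it, bouncing the instance releases it:
        db2stop force
        db2start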

    Read the article

  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (I'm unable at the moment to find out the reason for the crash), and after crashing, the TCP port appears to be orphaned: I'm unable to get a PID to kill whatever process may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name
        tcp        0      0 *:ssh                       *:*                         LISTEN      1894/sshd
        tcp        0      0 *:6007                      *:*                         LISTEN      -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh c-71-194-253-238.hsd1:51689 ESTABLISHED 2981/0
        udp        0      0 *:bootpc                    *:*                                     1817/dhclient
        udp        0      0 *:bootpc                    *:*                                     1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags  Type   State     I-Node PID/Program name    Path
        unix  2      [ ]    DGRAM            871    668/udevd           @/org/kernel/udev/udevd
        unix  2      [ ACC ] STREAM LISTENING 5385  1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  6      [ ]    DGRAM            5353   1867/rsyslogd       /dev/log
        unix  2      [ ]    DGRAM            11861  2981/0
        unix  2      [ ]    DGRAM            5461   1974/crond
        unix  2      [ ]    DGRAM            5451   1904/console-kit-da
        unix  3      [ ]    STREAM CONNECTED 5438   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  3      [ ]    STREAM CONNECTED 5437   1904/console-kit-da
        unix  3      [ ]    STREAM CONNECTED 5396   1880/dbus-daemon
        unix  3      [ ]    STREAM CONNECTED 5395   1880/dbus-daemon
        unix  2      [ ]    DGRAM            5361   1871/rklogd

        # lsof -i
        COMMAND    PID USER FD   TYPE DEVICE SIZE NODE NAME
        dhclient  1463 root  3u  IPv4   4704      UDP  *:bootpc
        dhclient  1817 root  4u  IPv4   5173      UDP  *:bootpc
        sshd      1894 root  3u  IPv4   5414      TCP  *:ssh (LISTEN)
        sshd      2981 root  3u  IPv4  11825      TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
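
    Two checks worth adding before giving up on the port (a sketch; fuser ships with the psmisc package on FC8). A listener that netstat shows with no PID is sometimes still traceable, and if an owner turns up it can be killed directly:

        # Ask for the port's owner from different angles
        fuser -v 6007/tcp
        lsof -i TCP:6007
        # If an owning process appears, kill it and free the socket
        fuser -k 6007/tcp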

    Read the article

  • Xbox 360 formatting for MP4 h.264?

    - by Kayle
    For the life of me, I can't get the Xbox 360 to recognize anything other than a WMV. It's fully updated with Xbox Live, and I even added all the video apps (Zune, Netflix, etc.) just to see if it would force some sort of codec update. I'm using Handbrake at the moment to try to convert an MKV with subtitles; Handbrake burns the subtitles into the video for me, which is important. I can't figure out what is missing, as the Xbox refuses to acknowledge my MP4s. Here's the VLC codec info output:

        [VLC codec information screenshot not reproduced]

    As far as I can tell, this is EXACTLY what the Xbox supports. I don't want to convert every video twice by running it through Windows Movie Maker just to get a WMV. None of the converters mentioned above outputs WMV, and I don't like WMV anyway, since it's not as universal as MP4. I've tried changing the container extension manually to .m4v, .mov, and .avi to see if I can trick the Xbox... no luck. The video plays perfectly in WMP, of course. If I convert it to WMV, suddenly it works fine. But I don't find this acceptable, as I know the Xbox is supposed to support other file types. If anyone knows why my Xbox won't play anything but WMVs, I would be very appreciative! It's driving me nuts. The only other information I can think to include is that it's about 3 years old and of the cheapest variety (Xbox Arcade); I've added a hard drive, Xbox Live, and have thoroughly updated it. Maybe there's a specific video update I'm unaware of??
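
    If it helps, a re-encode that targets the 360's published MP4 limits (H.264 up to High profile at level 4.1, AAC-LC stereo) can also be expressed with ffmpeg. A sketch, where the subtitle burn-in assumes the MKV carries text subtitles and an ffmpeg build with libass:

        # Burn in the MKV's subtitles and encode within the 360's H.264 limits
        ffmpeg -i input.mkv -vf "subtitles=input.mkv" \
          -c:v libx264 -profile:v high -level 4.1 \
          -c:a aac -b:a 160k -ac 2 \
          -movflags +faststart output.mp4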

    Read the article

  • Embedded Spotlight does not function in Outlook 2011

    - by syntaxcollector
    I have a rather strange problem. I manage a network of about 35 Macs, and we all recently switched from Mail.app to Outlook 2011 (please don't debate this; I've already had that conversation ad nauseam). We are using network home directories (NHD) served from a Windows file server over the SMB protocol. The problem I'm having is that Spotlight does not function inside of Outlook, but ONLY inside of Outlook: the global Spotlight can find all email and contacts within Outlook, but the embedded Spotlight cannot. As a test, I took one of my users and switched them from a network home directory to a portable home directory (PHD), which means the home folder was copied to the local hard drive. This resulted in a working Spotlight within Outlook; as soon as I switched the user back to an NHD, however, it stopped working. I have already tried erasing the Spotlight index and killing the process to force re-indexing. I have exhausted all Spotlight troubleshooting, and since the global search works, that is obviously not the issue. I believe it has something to do with the Spotlight plugin Microsoft wrote, which is located in /Library/Spotlight. Any ideas?
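
    One angle worth checking, given that only the network-home case breaks: Spotlight does not index network volumes by default, and the embedded search queries the index for the volume the mail database lives on. A sketch, where the volume path is a placeholder for however the SMB home share actually mounts:

        # Is the server volume indexed at all?
        mdutil -s /Volumes/UserHomes
        # Enable indexing and rebuild the index for that volume
        sudo mdutil -i on /Volumes/UserHomes
        sudo mdutil -E /Volumes/UserHomes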

    Read the article

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is properly processed and login works fine, but the CSS files are not loaded, and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login
        Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems to be nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force CSS and JS content types. That's ugly, and I never had the problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
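
    A side note on the diagnosis: nginx derives the MIME type from the URI extension with the query string already stripped, so the ?s= suffix alone shouldn't cause this; the usual cause of everything defaulting to text/html is a server block that never sees the http-level mime.types include. And if the per-location overrides are kept, default_type is the cleaner mechanism, since add_header appends an extra header rather than replacing Content-Type. A sketch:

        # If the vhost file doesn't inherit the http-level include, add it there:
        #     include /etc/nginx/mime.types;

        # When forcing types by hand, prefer default_type over add_header:
        location ~ \.css$ {
            default_type text/css;
        }
        location ~ \.js$ {
            default_type application/javascript;
        }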

    Read the article

  • Set default MySQL connect charset for PHP (in RHEL)?

    - by Martijn Heemels
    We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built, latin1 was still the common charset, both in PHP and MySQL. To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

        mysql.connect_charset = latin1
        mysqli.connect_charset = latin1
        pdo_mysql.connect_charset = latin1

    Specific, more modern sites could override this in their bootstrapping code with:

        <?php
        mysql_set_charset("utf8", $dsn );

    ...and all was well. Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as its OS of choice. In setting up this new server I discovered, to my surprise, that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! Now how do I make PHP connect to MySQL with the latin1 charset? I thought about setting a default in my.cnf but would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the "touch 'em and you break 'em" kind. We simply don't have the time to fix them all. How would I set a default mysql/mysqli/pdo_mysql connection charset of latin1 for PHP, while still allowing individual scripts to override it to utf8 with mysql_set_charset()?
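
    One server-side compromise worth sketching, since stock PHP has no equivalent of those Gentoo directives: have the server ignore the charset requested in the client handshake and fall back to a latin1 default. An explicit SET NAMES, which is what mysql_set_charset() issues after connecting, still takes effect afterwards, so modern sites keep utf8. The trade-off to test first is that this applies to every client of that MySQL instance, not only PHP:

        # /etc/my.cnf
        [mysqld]
        character-set-server = latin1
        skip-character-set-client-handshake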

    Read the article

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain, provide credentials of a domain user, and then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: if I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful detail:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up the same way as the one in our production domain (I'm working on this, but I wanted to post ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob
        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)
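
    A diagnostic sketch: exporting both registered task definitions as XML and diffing them isolates what, beyond the trigger element, the scheduler actually recorded differently for the weekly job. On 2008 R2 this is available through schtasks:

        schtasks /Query /TN "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" /XML > test1.xml
        schtasks /Query /TN "\Microsoft\Windows\PowerShell\ScheduledJobs\Test2" /XML > test2.xml
        fc test1.xml test2.xml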

    Read the article

  • Windows Server 2008 R2 SP1 Drivers Missing Would Not Install?

    - by bleepzter
    I am building my dev machine with WS2008R2SP1. I consider Windows Server the best development environment for C#, since I tend to focus on server-centric (integration) applications; the desktop flavor of Windows (7) doesn't play nicely with things like Commerce Server or BizTalk... so I like to stick to the environment I develop for. Previously I developed inside VMs, but I've found that it is super inefficient and tends to take a toll on the laptops (I've gone through two of them in 6 months). The problem is that multiple devices refuse to be recognized by Windows. My machine is a Dell Precision M4500: Intel Core i7-Q740, 1TB HDD, 8GB RAM, Dell-rebranded Broadcom DW1501 802.11n half-mini card, Dell-rebranded Broadcom DW375 Bluetooth module, Intel 82577LM gigabit network connection, and NVIDIA Quadro FX1800 graphics. The devices in question are the Dell-rebranded Broadcom network and Bluetooth adapters:

        Broadcom USH:
        USB\VID_0A5C&PID_5800&REV_0101&MI_00
        USB\VID_0A5C&PID_5800&MI_00

        DW375 Bluetooth Module:
        USB\VID_413C&PID_8187&REV_0517
        USB\VID_413C&PID_8187

    When I run the Broadcom installers I get "Operating System not supported", which I think is a big oversight on Broadcom's part. Why check the OS version string? UGHGHGH. Moreover, if I try to manually force the driver in Windows, I get an error:

        Driver Management concluded the process to install driver FileRepository\btwampsecfl.inf_amd64_neutral_d8fc2b85d035ed47\btwampsecfl.inf for Device Instance ID USB\VID_0A5C&PID_5800&MI_00\7&66DE6C9&0&0000 with the following status: 0xe0000217.

    - or -

        Driver Management concluded the process to install driver FileRepository\btwampfl.inf_amd64_neutral_d4c4acf036c61299\btwampfl.inf for Device Instance ID USB\VID_413C&PID_8187\90004EEEF5A6 with the following status: 0xe0000217.

    I googled the 0xe0000217 error code and it means "Bad service install section in the driver inf file"... Any ideas on how to fix this? I also tried the post by BetaIQ on the MSDN Forum; unfortunately, the links to the driver package included in the post were dead :(

    P.S. On a side note, I also do mobile development for Android, iOS, Windows Phone, and BB, so having Bluetooth is quite useful with mobile devices.
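
    If the vendor package extracts cleanly, one way around a version-gated setup.exe is to stage the INF into the driver store directly and let Plug and Play match it (a sketch; the extraction path is hypothetical, and an INF whose install sections are genuinely broken for this OS will still fail with the same status):

        :: Stage the extracted Broadcom package into the driver store
        pnputil -a C:\Drivers\DW375\btwampfl.inf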

    Read the article

  • Windows 7, network connection with no default gateway: any way to change the "Unknown network" status?

    - by e-t172
    Hi, I have a computer running Windows 7 Pro RTM. This computer has two network connections: a Wi-Fi connection to the Internet (through a home router) which works just fine, and an OpenVPN virtual network connection. More precisely, the latter is a virtual Ethernet connection which behaves exactly like a physical wired Ethernet connection. My problem is that the Network and Sharing Center shows "Unknown network" for the OpenVPN connection. After some research I found that logical networks (outside a domain) are identified by the MAC address of the connection's default gateway. The problem is, the OpenVPN connection has no default gateway: it is a private network, so I don't need one. Consequently, the "Unknown network" is always considered public, so the firewall is always in "public mode" for it, which I don't want. Plus, I can't rename "Unknown network" or anything (which makes sense), so it is kind of ugly. My goal is to define a proper logical network for the OpenVPN connection with the private profile. I know of some workarounds (disable the firewall, modify the security policy to make all unknown networks "private"), but they're still workarounds. I just want my clients to connect to the VPN without having to disable their firewall settings, without changing global configuration with potential side effects (the security-policy solution), and without having to look at an ugly "Unknown network" in the Network and Sharing Center. Is there any way I can do this? I tried to check what was going on in the registry (HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList is interesting), but I still didn't find a way to "force" the OpenVPN connection to be assigned to a logical network. Any help would be very appreciated. A related question showed up at Super User: http://superuser.com/questions/37355/windows-7-cant-identify-network/37422
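
    One workaround often reported for exactly this NLA behaviour, offered here as a sketch to validate rather than a confirmed fix: give the TAP adapter a gateway entry that Network Location Awareness can fingerprint, with a metric high enough that the route never carries traffic. In an OpenVPN client config that could look like this (10.8.0.1 standing in for the VPN server's internal address):

        # Hypothetical example; the high metric (512) keeps this default
        # route from ever being preferred over the real one
        route-method exe
        route-delay 2
        route 0.0.0.0 0.0.0.0 10.8.0.1 512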

    Read the article

  • Can't get SSH public key authentication to work

    - by Trey Parkman
    My server is running CentOS 5.3; I'm on a Mac running Leopard. I don't know which is responsible for this. I can log on to my server just fine via password authentication. I've gone through all of the steps for setting up PKA (as described at http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-ssh-beyondshell.html), but when I use SSH, it refuses to even attempt publickey authentication. Using the command ssh -vvv user@host (where -vvv cranks verbosity up to the maximum level) I get the following relevant output:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred keyboard-interactive,password
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password

    followed by a prompt for my password. If I try to force the issue with ssh -vvv -o PreferredAuthentications=publickey user@host I get:

        debug2: key: /Users/me/.ssh/id_dsa (0x123456)
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: start over, passed a different list publickey,gssapi-with-mic,password
        debug3: preferred publickey
        debug3: authmethod_lookup publickey
        debug3: No more authentication methods to try.

    So, even though the server says it accepts the publickey authentication method, and my SSH client insists on it, I'm rebuffed. (Note the conspicuous absence of an "Offering public key:" line above.) Any suggestions?
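
    Reading the traces, the client never even tries: in the first run the "preferred" list contains only keyboard-interactive and password, and in the forced run the method lookup finds publickey disabled. That points at the Mac's client-side configuration rather than the server. A check worth sketching (on Leopard the system-wide client config lives at /etc/ssh_config):

        # Does anything turn public key auth off client-side?
        grep -i pubkey ~/.ssh/config /etc/ssh_config 2>/dev/null
        # Force it on for one attempt, explicitly naming the key
        ssh -vv -o PubkeyAuthentication=yes -o IdentitiesOnly=yes \
            -i ~/.ssh/id_dsa user@host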

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 midline (7,200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris Bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server:

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the Bugzilla workarounds: "Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled."

    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
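
    For the record, the slow part is the deferred destroy replaying against a dedup table that doesn't fit in 8GB of RAM, so every freed block costs disk I/O. One recovery sketch that has been suggested for this situation, usable only if the installed ZFS version supports read-only imports: a read-only import skips the destroy replay, which at least makes the data reachable for copying off while deciding what to do with the pool:

        # From a rescue boot: import without mounting, read-only
        zpool import -N -o readonly=on vol1
        # Gauge how big the dedup table grew (hence how long replay may take)
        zdb -DD vol1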

    Read the article

  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    Hi, I have a pair of ProCurve 2810-24G switches that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between them? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean I want the VLAN tagged on the trunk ports? This is more or less the configuration:

        trunk 1-4 Trk1 Trunk
        snmp-server community "public" Unrestricted
        vlan 1
           name "DEFAULT_VLAN"
           untagged 24,Trk1
           ip address 10.180.3.1 255.255.255.0
           no untagged 5-23
        exit
        vlan 801
           name "Storage"
           untagged 5-23
           tagged Trk1
           jumbo
        exit
        no fault-finder broadcast-storm
        stack commander "sanstack"
        spanning-tree
        spanning-tree Trk1 priority 4
        spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers, with two network interfaces each. Dell recommends connecting each controller to each of the switches. From the VMware documentation, it seems creating one vmkernel port per pNIC is recommended. With MPIO, this could allow more than 1 Gbps of throughput.
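
    The configuration above already follows the standard pattern: the storage VLAN untagged on the edge ports and tagged across the inter-switch trunk. One related detail worth flagging: EqualLogic guidance for ProCurve switches usually pairs jumbo frames with per-port flow control on the iSCSI ports, which would look roughly like this (a sketch; the exact syntax varies by firmware, so verify against the 2810 manual):

        interface 5-23
           flow-control
        exit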

    Read the article

  • Freeradius authentication failed for unknown reason

    - by Moein7tl
    I followed this instruction to force FreeRADIUS to use a MySQL database, and I run freeradius in debug mode, but it rejects all authentication. MySQL database:

        mysql> select * from radcheck;
        +----+----------+-----------+----+---------+
        | id | username | attribute | op | value   |
        +----+----------+-----------+----+---------+
        |  1 | test     | Password  | == | test123 |
        |  2 | test     | Auth-Type | == | Local   |
        +----+----------+-----------+----+---------+
        2 rows in set (0.02 sec)

    radtest command:

        # radtest test test123 localhost 0 testing123
        Sending Access-Request of id 235 to 127.0.0.1 port 1812
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0x00000000000000000000000000000000
        rad_recv: Access-Reject packet from host 127.0.0.1 port 1812, id=235, length=20

    radiusd debug-mode log:

        rad_recv: Access-Request packet from host 127.0.0.1 port 51034, id=235, length=74
            User-Name = "test"
            User-Password = "test123"
            NAS-IP-Address = 127.0.0.1
            NAS-Port = 0
            Message-Authenticator = 0xbf111cbbae24fb0f0a558bfa26f53476
        # Executing section authorize from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group authorize {...}
        ++[preprocess] returns ok
        ++[chap] returns noop
        ++[mschap] returns noop
        ++[digest] returns noop
        [suffix] No '@' in User-Name = "test", looking up realm NULL
        [suffix] No such realm "NULL"
        ++[suffix] returns noop
        [eap] No EAP-Message, not doing EAP
        ++[eap] returns noop
        ++[files] returns noop
        ++[expiration] returns noop
        ++[logintime] returns noop
        [pap] WARNING! No "known good" password found for the user. Authentication may fail because of this.
        ++[pap] returns noop
        ERROR: No authenticate method (Auth-Type) found for the request: Rejecting the user
        Failed to authenticate the user.
        Using Post-Auth-Type Reject
        # Executing group from file /usr/local/etc/raddb/sites-enabled/default
        +- entering group REJECT {...}
        [attr_filter.access_reject]    expand: %{User-Name} -> test
        attr_filter: Matched entry DEFAULT at line 11
        ++[attr_filter.access_reject] returns updated
        Delaying reject of request 20 for 1 seconds
        Going to the next request
        Waking up in 0.9 seconds.
        Sending delayed reject for request 20
        Sending Access-Reject of id 235 to 127.0.0.1 port 51034
        Waking up in 4.9 seconds.
        Cleaning up request 20 ID 235 with timestamp +4325
        Ready to process requests.

    Where is the problem and how should I solve it?
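
    Two things stand out in that debug output. First, the authorize list that actually runs (preprocess, chap, mschap, digest, suffix, eap, files, expiration, logintime, pap) never calls sql, so the sql module still needs to be enabled in the authorize section of /usr/local/etc/raddb/sites-enabled/default. Second, "No 'known good' password found" plus the radcheck rows point at the attribute format: rlm_sql expects Cleartext-Password with the := operator, not Password with ==, and a forced Auth-Type row is almost never wanted. A fix sketch for the rows shown above:

        UPDATE radcheck SET attribute = 'Cleartext-Password', op = ':='
          WHERE username = 'test' AND attribute = 'Password';
        DELETE FROM radcheck
          WHERE username = 'test' AND attribute = 'Auth-Type';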

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    - All NFS-based VMs remain up and working perfectly, through the failover and after.
    - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (I have 5 in my pool).

    If I don't boot on a different server, I get the error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!"). The only way I have found to fix that error is to reboot the entire server it was previously running on, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be, but the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly, but XenServer does not keep the connection alive. As a thought, the error in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
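
    On the timeout angle, the one knob that directly controls how long the initiator queues I/O before failing it up to the guest is replacement_timeout; raising it makes writes stall through the takeover window instead of erroring out. A sketch, assuming the stock open-iscsi layout; note that with multipathing enabled the advice inverts and this value is normally kept low so path failover stays fast:

        # /etc/iscsi/iscsid.conf
        # Seconds to wait for a session to re-establish before failing
        # commands back to the VM (the default is 120)
        node.session.timeo.replacement_timeout = 7200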

    Read the article

  • Dual VGA monitors with Quadro FX 580?

    - by dentrasi
    I have a machine with a Quadro FX 580 card (one DVI port and two DisplayPorts). Attached to it are two 19" Acer screens, which are both (annoyingly) VGA. The first one works perfectly with a DVI-VGA adapter. The second one doesn't work: it has a VGA cable, which goes into a VGA-DVI adapter, which then goes into a DVI-DisplayPort adapter. Initially I was getting 'Cable Unplugged' on the screen, and it couldn't be seen by Windows or the nVidia control panel. After swapping the VGA-DVI adapter (which works perfectly on another machine), Windows can now see the monitor. The nVidia panel sees the model and native resolution, but I get a constant 'No Signal' error, and switching to the other DisplayPort makes no difference. I suspect that the card sees a DVI connection plugged into it (the nVidia control panel shows the monitor as having a DVI connection) and is only sending out a digital signal because of this. Does anyone know of a solution (other than trying to get a DisplayPort-VGA adapter), or of a way to force the card to treat it as VGA? Thanks, ~Dentrasi

    Read the article

  • ATI X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own a 2006 MacBook Pro 1,1, and for some months I have had recurring display bugs and artifacts. A quick search shows that a lot of other Mac users (iMac or MacBook Pro) have the same problem, due to a known defect in the X1600 video card. Apparently it's caused by overheating; in my case, even without heavy load, I get very bad display bugs such as lines of colorful pixels, glitches, freezes and crashes, all of this on Tiger, Leopard and Snow Leopard. I found this interesting article here talking about this problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named with "radeon", and then had no more problems under Leopard; it seems to work fine on Snow Leopard too. I did the same thing: I removed the driver bundles and restarted, and had no more problems, but also no more 3D acceleration, which is not an acceptable solution. For those interested, here is the list of files to delete to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by creating a group to force Apple to put a replacement program in place.

    Edit: how to locate those files on your hard drive if you are not on Snow Leopard:

        sudo find / -iname "*radeon*"

    Read the article

  • MySQL binlogs seem incomplete?

    - by warl0ck
    I created a database, created a table, and inserted some data, and found binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the output below; it seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)

        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1  end_log_pos 107  Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;

    If this warning was the cause ("Warning: this binlog is either in use or was not closed properly."), how do I force-close the log?

    EDIT: After a flush logs command, I see "0 rows affected" and a few new files: binlog.000001, binlog.000002, binlog.000003, binlog.000004, and binlog.index. Their contents are nearly the same as binlog.000001's. I then dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered.

    EDIT 2: Here is the relevant my.cnf:

        [mysqld]
        user            = mysql
        pid-file        = /var/run/mysqld/mysqld.pid
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin         = /var/log/mysql/binlog
        binlog-do-db    = mydb
        bind-address    = 127.0.0.1
        key_buffer          = 16M
        max_allowed_packet  = 16M
        thread_stack        = 192K
        thread_cache_size   = 8
        myisam-recover      = BACKUP
        query_cache_limit   = 1M
        query_cache_size    = 16M
        expire_logs_days    = 10
        max_binlog_size     = 100M
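
    The empty log is consistent with the binlog-do-db line in that my.cnf: with statement-based logging, that filter matches on the default database at statement time, so work done without USE mydb (or in any other schema) is never written to the binlog at all. Two checks worth running before anything else:

        -- Was anything logged beyond the format-description event?
        SHOW BINLOG EVENTS IN 'binlog.000001';
        -- Statement vs. row logging changes how binlog-do-db filters
        SHOW VARIABLES LIKE 'binlog_format';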

    Read the article

< Previous Page | 200 201 202 203 204 205 206 207 208 209 210 211  | Next Page >