Search Results

Search found 6395 results on 256 pages for 'resource mailbox'.

Page 197 of 256

  • Moodle serves on IP only - will not work with mod_proxy

    - by Jon H
    I'm trying to set a Moodle server up on an Ubuntu box which already serves Plone and Trac via Apache. In my Moodle config I have $CFG->wwwroot = 'http://www.server-name.org/moodle'. The configuration below works fine for the first two, but when I visit www.server-name.com/moodle I get:

        Incorrect access detected, this server may be accessed only through "http://xxx.xxx.xxx.xxx:8888/moodle" address, sorry

    It then forwards to the IP address, where Moodle functions fine. What am I missing to get the server-name approach working correctly? Apache config follows:

        LoadModule transform_module /usr/lib/apache2/modules/mod_transform.so
        Listen 8080
        Listen 8888
        Include /etc/phpmyadmin/apache.conf

        <VirtualHost xxx.xxx.xxx.xxx:8080>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            <Location />
                ProxyPass http://127.0.0.1:8082/
                ProxyPassReverse http://127.0.0.1:8082/
            </Location>
        </VirtualHost>

        <VirtualHost xxx.xxx.xxx.xxx:80>
            ServerName www.server-name.org
            ServerAlias server-name.org
            ProxyRequests Off
            FilterDeclare MyStyle RESOURCE
            FilterProvider MyStyle XSLT resp=Content-Type $text/html
            TransformOptions +ApacheFS +HTML
            TransformCache /theme.xsl /home/web/webapps/plone/theme.xsl
            TransformSet /theme.xsl
            FilterChain MyStyle
            ProxyPass /issue-tracker !
            ProxyPass /moodle !
            <Location /issue-tracker/login>
                AuthType Basic
                AuthName "Trac"
                AuthUserFile /home/web/webapps/plone/parts/trac/trac.htpasswd
                Require valid-user
            </Location>
            Alias /moodle /usr/share/moodle/
            <Directory /usr/share/moodle/>
                Options +FollowSymLinks
                AllowOverride None
                order allow,deny
                allow from all
                <IfModule mod_dir.c>
                    DirectoryIndex index.php
                </IfModule>
            </Directory>
        </VirtualHost>

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

        Deadlock graph:
                              ---------Blocker(s)--------  ---------Waiter(s)---------
        Resource Name         process session holds waits  process session holds waits
        TM-0000cb52-00000000       22     131   S               23     143         SS
        TM-0000ceec-00000000       23     143   SX              32     138   SX    SSX
        TM-0000cb52-00000000       30     138   SX              22     131         S
        session 131: DID 0001-0016-00000D1C    session 143: DID 0001-0017-000055D5
        session 143: DID 0001-0017-000055D5    session 138: DID 0001-001E-000067A0
        session 138: DID 0001-001E-000067A0    session 131: DID 0001-0016-00000D1C
        Rows waited on:
        Session 143: no row
        Session 138: no row
        Session 131: no row

    There are no bit-map indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the waiter's waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 inserts or updates concurrently, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility.

    I've increased the INITRANS parameter of the table and its indexes to 100 and increased the PCT_FREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times.

    My questions are two-fold:

    1) Is this actually an ITL deadlock?
    2) If it is an ITL deadlock, what else can be done to avoid it?

    Cross-posted from Stack Overflow. Deadlocks are normally a programming problem, but ITL deadlocks relate directly to how Oracle writes to disk, so this may be an area where DBAs have more experience.

  • Postfix Postscreen: how to use postscreen for smtp and smtps both

    - by petermolnar
    I'm trying to get postscreen to work. I've followed the man page and it's already running correctly for smtp. But if I want to use it for smtps as well (adding the same line as smtp in master.cf, but with smtps), I receive failure messages in syslog like:

        postfix/postscreen[8851]: fatal: btree:/var/lib/postfix/postscreen_cache: unable to get exclusive lock: Resource temporarily unavailable

    Some say that postscreen can only run once; that's ok. But can I use the same postscreen session for both smtp and smtps? If not, how do I enable postscreen for smtps as well? Any help would be appreciated!

    The relevant parts of the configs:

    main.cf:

        postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr
        postscreen_dnsbl_threshold = 8
        postscreen_dnsbl_sites = dnsbl.ahbl.org*3 dnsbl.njabl.org*3 dnsbl.sorbs.net*3 pbl.spamhaus.org*3 cbl.abuseat.org*3 bl.spamcannibal.org*3 nsbl.inps.de*3 spamrbl.imp.ch*3
        postscreen_dnsbl_action = enforce
        postscreen_greet_action = enforce

    master.cf (full):

        smtpd      pass  -       -       n       -       -       smtpd
        smtp       inet  n       -       n       -       1       postscreen
        tlsproxy   unix  -       -       n       -       0       tlsproxy
        dnsblog    unix  -       -       n       -       0       dnsblog
        ### the problematic line ###
        smtps      inet  n       -       -       -       -       smtpd
        pickup     fifo  n       -       -       60      1       pickup
        cleanup    unix  n       -       -       -       0       cleanup
        qmgr       fifo  n       -       n       300     1       qmgr
        tlsmgr     unix  -       -       -       1000?   1       tlsmgr
        rewrite    unix  -       -       -       -       -       trivial-rewrite
        bounce     unix  -       -       -       -       0       bounce
        defer      unix  -       -       -       -       0       bounce
        trace      unix  -       -       -       -       0       bounce
        verify     unix  -       -       -       -       1       verify
        flush      unix  n       -       -       1000?   0       flush
        proxymap   unix  -       -       n       -       -       proxymap
        proxywrite unix  -       -       n       -       1       proxymap
        smtp       unix  -       -       -       -       -       smtp
        relay      unix  -       -       -       -       -       smtp
        showq      unix  n       -       -       -       -       showq
        error      unix  -       -       -       -       -       error
        retry      unix  -       -       -       -       -       error
        discard    unix  -       -       -       -       -       discard
        local      unix  -       n       n       -       -       local
        virtual    unix  -       n       n       -       -       virtual
        lmtp       unix  -       -       -       -       -       lmtp
        anvil      unix  -       -       -       -       1       anvil
        scache     unix  -       -       -       -       1       scache
        dovecot    unix  -       n       n       -       -       pipe
            flags=DRhu user=virtuser:virtuser argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}
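
    A minimal master.cf sketch of one possible workaround (an assumption on my part, not something verified on this box): since postscreen takes an exclusive lock on its cache, it stays in front of port 25 only, and smtps is handed straight to smtpd in TLS wrapper mode instead:

        smtp      inet  n       -       n       -       1       postscreen
        smtpd     pass  -       -       n       -       -       smtpd
        smtps     inet  n       -       n       -       -       smtpd
            -o smtpd_tls_wrappermode=yes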

  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying:

        SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status

    ... then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads do take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might then receive even more requests.

    QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full?

    Below is my server.xml in any case:

        <?xml version='1.0' encoding='utf-8'?>
        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <GlobalNamingResources>
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>
          <Service name="Catalina">
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" enableLookups="false"
                       useBodyEncodingForURI="true" backlog="150" maxThreads="150"
                       executor="tomcatThreadPool" keepAliveTimeout="5000" connectionTimeout="300000" />
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
              </Host>
            </Engine>
          </Service>
        </Server>
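
    For context, one direction I have considered is shedding excess connections at the firewall before they reach Tomcat at all; a rough sketch, assuming iptables with the connlimit match is available on the box (the 150 threshold mirrors maxThreads and is purely illustrative):

        # refuse new AJP connections once ~150 are already open in total
        # (--connlimit-mask 0 counts all source addresses together)
        iptables -A INPUT -p tcp --syn --dport 8009 \
            -m connlimit --connlimit-above 150 --connlimit-mask 0 \
            -j REJECT --reject-with tcp-reset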

  • Spotlight actually searching every file on "This Mac"

    - by Cawas
    I know of 2 ways to search for any file on your machine using Finder (some say it's Spotlight) and no Terminal. To pre-empt answers and comments about Terminal: I consider it either for scripting something or as a last resort. It's not practical for lots of usages. For instance, if you want to find something to attach to a mail, or embed in iTunes or any other app, you can just drag and drop one or many of the results. That's definitely not practical to do from Terminal. There are many use cases for "any file", but the focus here is the graphical user interface.

    Well, the 2 ways basically are:

    1. Press Cmd + Opt + Spacebar and type in your search. Press the + button, select "System files" and "are included". This is so far my preferred way, but I'm not sure it will go through every file.
    2. Open Finder, press Cmd + Shift + G and/or select just one folder. Type in your search and select the folder rather than "This Mac". This will bring up files not shown in "This Mac" if you select a folder outside of the default scope.

    Thing is, neither of those is really convenient or has the nice presentation of regular Spotlight, which you get from Cmd + Spacebar and just typing. And, as far as I've heard, the default behavior of Spotlight in Tiger was actually to find files anywhere.

    So, is there any way to make the process significantly simpler? Maybe some tweak, configuration, or a really good Spotlight alternative? I'd rather keep it simple and tweak Spotlight.

  • What would cause an IIS6 website to be unavailable remotely randomly for a few minutes at a time?

    - by jskunkle
    The website is served by IIS 6 on Windows Server 2003. We never saw this problem once in months of beta. We made the new site live yesterday - it's getting more traffic than in beta, but not that much - resource utilization on the server and speed are fine.

    Today the site has been unavailable remotely a few (4?) times, for a few minutes at a time. If you visit any page on the site, nothing is ever returned and eventually the request times out. While this is happening, I can connect to the server via remote desktop, and the site loads fine from the live URL when running a browser on the server locally. Other websites on the server continue to function fine the entire time (using the same instance of IIS, different app pools). Other computers on the same network can't access the website either.

    Other than not serving content, the server seems to behave normally - scheduled jobs in our custom job system continue to run, etc. We've looked at the IIS logs quickly and we don't see any traffic out of the ordinary - no traffic spikes, etc.

    Any ideas?

    Thanks, Shane

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly without any obvious resource issues.

    Examples of the slow performance:

    - The time taken to bring up the password prompt over ssh is a few seconds, when it was previously instantaneous.
    - The time taken to unzip a zip file, which was previously a few seconds, is now around 30 seconds.
    - The time taken to compile vmware tools has increased by a similar factor.

    Both the VMware console and monitoring commands don't report any issues with high CPU or memory usage, but something is obviously slowing the server down. Does anyone have any ideas what may be causing this issue and how it can be resolved?

    Thanks, Tom

    Edit: As per your questions I've looked at some of the performance indicators on both the VM host and the VM guest.

    - Firstly I tried reserving the full amount of memory (3 GB) for this VM - no other machines on this server have any memory reservation.
    - The swap-in rate and swap-out rate for the VM host and guest are now both zero.
    - Balloon memory on the guest is zero and on the host is 3.5 GB (total memory on the host is 12 GB).
    - The swap rate for the guest is also zero. Swap used by the host is 200 MB on average.
    - Compression and decompression rates for the host and guest are zero.
    - Command aborts for the host are zero.
    - Read latency is very low - maximum 10 ms, average 0.8 ms.
    - Write latency is higher - a few spikes to 170 ms but mostly around 25 ms - is this bad?
    - Queued command latency is zero.
    - Physical disk read latency averages 5 ms but is often 10 ms.
    - Physical disk write latency averages 15 ms but is often 20 ms.

    I hope this helps - let me know if you need any more information.

  • openvpn & iptables -- port forwarding and gateway

    - by Smith.Lai
    The problem is similar to this scenario: "iptables rule still takes effect after being deleted".

    Scenario: There are several clients (C1~C10) providing some services, such as SSH and HTTP. The clients are actually personal computers behind NAT; their IPs might be 192.168.0.x. To make these machines easy to access through the internet, I built an OpenVPN server (S1). All of C1~C10 connect to S1 with VPN addresses 10.8.0.x.

    If a user (U1) wants to access C1's SSH through the internet, he can connect to S1 on port 55555, and S1 port-forwards 55555 to 10.8.0.6:22:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 55555 -j DNAT --to-destination 10.8.0.6:22

    It worked well until I marked (commented out) the following in the OpenVPN server.conf. I marked it because I think it makes all connections go through S1:

        ;push "redirect-gateway"

              |-------(NAT)--------|
        (C1)--|                    |----(INTERNET)----(U1)
              |-----(VPN)----(S1)--|

    The C1~C10 machines have their own path to reach internet resources through NAT. The server load would be heavy if all C1~C10 connections went through S1 (for example, C1 sending data to C2, or C1 downloading data from an FTP site). Is there a way to solve this quandary?
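
    A rough sketch of what I assume would have to happen for the forward to keep working without redirect-gateway (assuming eth0 is S1's public interface and tun0 its VPN side; untested): the MASQUERADE on tun0 forces C1's replies for forwarded connections back through S1, while everything else still uses C1's own NAT:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING  -i eth0 -p tcp --dport 55555 -j DNAT --to-destination 10.8.0.6:22
        iptables -A FORWARD -i eth0 -o tun0 -p tcp -d 10.8.0.6 --dport 22 -j ACCEPT
        iptables -t nat -A POSTROUTING -o tun0 -p tcp -d 10.8.0.6 --dport 22 -j MASQUERADE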

  • CentOS networking BNX2

    - by james moore
    I'm having some trouble with my NICs. The server starts fine and I can wget/ping etc. However, when I run /etc/init.d/networking restart I then receive the following error:

        Bringing up interface eth0:  bnx2: fw sync timeout, reset code = 1030009
        SIOCSIFFLAGS: Device or resource busy

    Consequently, the task fails. I have searched around on Google; users suggest disabling PnP in the BIOS, but I see no such option. Here is some system information:

        $ ethtool -i eth0
        driver: bnx2
        version: 2.0.8-rh
        firmware-version: bc 2.9.1

        $ uname -a
        Linux host 2.6.18-238.9.1.el5 #1 SMP Tue Apr 12 18:10:13 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

        $ /sbin/lspci | grep Broadcom
        04:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        05:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)
        08:00.0 PCI bridge: Broadcom EPB PCI-Express to PCI-X Bridge (rev c3)
        09:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5700 Gigabit Ethernet (rev 12)

        $ lsmod | grep bnx2
        bnx2i                  81704  0
        cnic                  109512  1 bnx2i
        libiscsi2              77765  6 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp
        scsi_transport_iscsi2  73945  8 be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2
        bnx2                  224780  0
        scsi_mod              199001  15 mpt2sas,scsi_transport_sas,mptctl,be2iscsi,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,libata,megaraid_sas,sd_mod

        $ rmmod bnx2; modprobe bnx2
        PCI: Enabling device 0000:05:00.0 (0158 -> 015a)
        PCI: Enabling device 0000:09:00.0 (0158 -> 015a)
        bnx2: fw sync timeout, reset code = 10300003

    Any help would be appreciated as I am at a loss.

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a RAID5 array and need to stop it, but when I try I get an error:

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1]
              2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU]
        unused devices: <none>

        # mdadm --stop
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: No devices given.

        # mdadm --stop /dev/md0
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: fail to stop array /dev/md0: Device or resource busy

    and

        # lsof | grep md0
        md0_raid5   965  root  cwd   DIR      8,1   4096   2  /
        md0_raid5   965  root  rtd   DIR      8,1   4096   2  /
        md0_raid5   965  root  txt   unknown               /proc/965/exe

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1]
              2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU]

        # grep md0 /proc/mdstat
        md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1]

        # grep md0 /proc/partitions
           9     0    2120320 md0

    While booting, md1 is mounted OK but md0 fails for some unknown reason:

        # dmesg | grep md[0-9]
        [    4.399658] raid5: allocated 3179kB for md1
        [    4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2
        [    4.400678] md1: detected capacity change from 0 to 2121793536
        [    4.403135]  md1: unknown partition table
        [   38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed
        [   38.941969] XFS mounting filesystem md1
        [   41.058808] Ending clean XFS mount for filesystem: md1
        [   46.325684] raid5: allocated 3179kB for md0
        [   46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
        [   46.330620] md0: detected capacity change from 0 to 2171207680
        [   46.335598]  md0: unknown partition table
        [   46.410195] md: recovery of RAID array md0
        [  117.970104] md: md0: recovery done.

        # cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sde1[0] sdf1[2] sdd1[1]
              2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU]
        md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1]
              2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]
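
    For reference, these are the kinds of read-only checks that should reveal whatever is still holding md0 open (the paths are the standard ones, nothing specific to this box):

        grep md0 /proc/mounts          # mounted after all?
        grep md0 /proc/swaps           # in use as swap?
        ls /sys/block/md0/holders/     # LVM / device-mapper stacked on top?
        fuser -vm /dev/md0             # any process holding it open?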

  • MySQL problem with many concurrent connections

    - by user48303
    Hi, here's a six-core machine with 32 GB RAM. I've installed MySQL 5.1.47 (backport). The config is nearly standard, except max_connections, which is set to 2000. On the other hand there is PHP 5.3/FastCGI on nginx. There is a very simple PHP application which should be served; nginx can handle thousands of requests in parallel on this machine.

    This application accesses MySQL via mysqli. When using non-persistent connections in mysqli there is a problem when reaching 100 concurrent connections:

        [error] 14074#0: *296 FastCGI sent in stderr: "PHP Warning:  mysqli::mysqli(): [2002] Resource temporarily unavailable (trying to connect via unix:///tmp/mysqld.sock) in /var/www/libs/db.php on line 7

    I've no idea how to solve this. Connecting to MySQL via TCP is terribly slow. The interesting thing is, when using persistent connections (adding 'p:' to the hostname in mysqli), the first 5000-10000 requests fail with the same error as above until max connections (from the webserver, set to 1500) is reached. After the first requests MySQL keeps its 1500 open connections and all is fine, so that I can make my 1500 concurrent requests. Huh? Is it possible that this is a problem with PHP FastCGI?
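
    One hedged guess, offered only as a sketch: the symptom looks like the listen backlog on the MySQL socket overflowing under connection bursts rather than max_connections itself, in which case the relevant knobs would be along these lines:

        # my.cnf  (back_log defaults to 50 in MySQL 5.1)
        [mysqld]
        back_log = 512

        # kernel-side cap on any listen backlog
        sysctl -w net.core.somaxconn=512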

  • Apache mod_proxy with SSL not redirecting

    - by simonszu
    I have a custom server running behind an Apache reverse proxy. Since the custom server can only handle HTTP traffic, I am trying to use Apache to wrap proper SSL around it, and to add some kind of HTTP authentication. So I enabled mod_proxy and mod_ssl and modified sites-available/default-ssl. The config is as follows:

        <Location /server>
            order deny,allow
            allow from all
            AuthType Basic
            AuthName "Please log in"
            AuthUserFile /etc/apache2/htpasswd
            Require valid-user
            ProxyPass http://192.168.1.102:8181/server
            ProxyPassReverse http://192.168.1.102:8181/server
        </Location>

    The custom server is accessible from the internal network via the location specified in the ProxyPass directive. However, when the proxy is accessed from the outside, it presents the login prompt, and after successfully authenticating, I get a blank page with the words "The resource can be found at http://192.168.1.102:8181/server". When I type the external URL again in an already-authenticated browser instance, I am properly redirected to the server frontend.

    The access.log is full of entries stating that my browser makes successful GET requests, and the proxy is happily serving the /server resource. However, the resource isn't the server's frontend, but this blank page with these words on it.
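
    My working assumption about the cause, sketched here in case it helps: the backend issues a redirect to its own host:port, and ProxyPassReverse only rewrites the Location header, not the redirect text in the response body. Preserving the original Host header often stops the backend from redirecting at all:

        # vhost level, alongside (not inside) the existing <Location /server> block;
        # on older Apache, ProxyPreserveHost is only valid at server/vhost scope
        ProxyPreserveHost On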

  • How to manage mounted partitions (fstab + mount points) from puppet

    - by Cristian Ciupitu
    I want to manage mounted partitions from puppet, which includes both modifying /etc/fstab and creating the directories used as mount points. The mount resource type updates fstab just fine, but using file for creating the mount points is a bit tricky. For example, by default the owner of the directory is root, and if the root (/) of the mounted partition has another owner, puppet will try to change it, and I don't want that. I know that I can set the owner of that directory, but why should I care what's on the mounted partition? All I want to do is mount it. Is there a way to make puppet not care about the permissions of the directory used as the mount point?

    This is what I'm using right now:

        define extra_mount_point(
            $device,
            $location = "/mnt",
            $fstype   = "xfs",
            $owner    = "root",
            $group    = "root",
            $mode     = 0755,
            $seltype  = "public_content_t",
            $options  = "ro,relatime,nosuid,nodev,noexec",
        ) {
            file { "${location}/${name}":
                ensure  => directory,
                owner   => "${owner}",
                group   => "${group}",
                mode    => $mode,
                seltype => "${seltype}",
            }
            mount { "${location}/${name}":
                atboot  => true,
                ensure  => mounted,
                device  => "${device}",
                fstype  => "${fstype}",
                options => "${options}",
                dump    => 0,
                pass    => 2,
                require => File["${location}/${name}"],
            }
        }

        extra_mount_point { "sda3":
            device  => "/dev/sda3",
            fstype  => "xfs",
            owner   => "ciupicri",
            group   => "ciupicri",
            options => "relatime,nosuid,nodev,noexec",
        }

    In case it matters, I'm using puppet-0.25.4-1.fc13.noarch.rpm and puppet-server-0.25.4-1.fc13.noarch.rpm.
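
    For comparison, a stripped-down sketch of the file resource with the ownership-related attributes simply left out, on the assumption that puppet then only ensures the directory exists and leaves whatever the mounted filesystem presents alone:

        file { "${location}/${name}":
            ensure => directory,
        }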

  • nconf deployment.ini configuration for a basic Nagios server on CentOS 6.2

    - by jshin47
    I have set up nconf and Nagios, but I cannot figure out how to configure deployment.ini to properly deploy the generated configuration to /usr/local/nagios/etc. Here are the directory listings of interest:

        [jshin@nag0 tmp]$ ls
        Default_collector  global
        [jshin@nag0 tmp]$ cd Default_collector/
        [jshin@nag0 Default_collector]$ ls
        advanced_services.cfg  hostgroups.cfg  service_dependencies.cfg  services.cfg
        host_dependencies.cfg  hosts.cfg       servicegroups.cfg
        [jshin@nag0 Default_collector]$ cd ..
        [jshin@nag0 tmp]$ cd global/
        [jshin@nag0 global]$ ls
        checkcommands.cfg  contacts.cfg        misccommands.cfg       timeperiods.cfg
        contactgroups.cfg  host_templates.cfg  service_templates.cfg
        [jshin@nag0 global]$ cd ..
        [jshin@nag0 tmp]$ cd /usr/local/nagios/etc/
        [jshin@nag0 etc]$ ls
        cgi.cfg  htpasswd.users  nagios.cfg  objects  resource.cfg
        [jshin@nag0 etc]$ cd objects/
        [jshin@nag0 objects]$ ls
        commands.cfg  localhost.cfg  switch.cfg     timeperiods.cfg
        contacts.cfg  printer.cfg    templates.cfg  windows.cfg

    Here is my deployment.ini (pretty much the default settings):

        ;; LOCAL deployment ;;
        [extract config]
        type = local
        source_file = "/var/www/html/nconf/output/NagiosConfig.tgz"
        target_file = "/tmp/"
        action = extract

        [copy collector config]
        type = local
        source_file = "/tmp/Default_collector/"
        target_file = "/usr/local/nagios/etc/Default_collector/"
        action = copy

        [copy global config]
        type = local
        source_file = "/tmp/global/"
        target_file = "/usr/local/nagios/etc/global"
        action = copy
        reload_command = "service nagios restart"

    What I am wondering is why the directory structure that the default deployment.ini seems to suggest, with Default_collector and global, is different from the one that Nagios has by default, with only a folder called objects. What am I missing? Or more importantly, how does your deployment.ini look?
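
    In case it is relevant to an answer, my working assumption is that nagios.cfg also has to be pointed at the two deployed directories, along the lines of:

        # /usr/local/nagios/etc/nagios.cfg  (illustrative; not currently in place here)
        cfg_dir=/usr/local/nagios/etc/Default_collector
        cfg_dir=/usr/local/nagios/etc/global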

  • How to diagnose a hang when creating a new folder in explorer.exe

    - by Jack Ukleja
    I have been having some issues with explorer.exe hanging when I create a new folder. If I use Analyze Wait Chain in Resource Monitor it says "One or more threads of explorer.exe are waiting to finish network I/O". When I look at the offending thread in Process Explorer it reveals nothing interesting:

        ntdll.dll!ZwWaitForMultipleObjects+0xa
        KERNELBASE.dll!GetCurrentThread+0x36
        kernel32.dll!WaitForMultipleObjectsEx+0xb3
        USER32.dll!PeekMessageW+0x1cd
        USER32.dll!MsgWaitForMultipleObjectsEx+0x2a
        USER32.dll!MsgWaitForMultipleObjects+0x20
        SHELL32.dll!SHAppBarMessage+0x41e
        SHELL32.dll!DragAcceptFiles+0x2a3c
        SHELL32.dll!DragAcceptFiles+0x2a4f
        SHELL32.dll!Ordinal211+0x124
        SHELL32.dll!SHChangeNotification_Unlock+0x12f4
        USER32.dll!GetSystemMetrics+0x2b1
        USER32.dll!IsDialogMessageW+0x19b
        USER32.dll!IsDialogMessageW+0x1e1
        ntdll.dll!KiUserCallbackDispatcher+0x1f
        USER32.dll!PeekMessageW+0xba
        USER32.dll!PeekMessageW+0x89
        SHELL32.dll!SHChangeNotification_Unlock+0xd9f
        SHELL32.dll!Ordinal885+0x1407
        SHLWAPI.dll!SHRegGetUSValueW+0x306
        kernel32.dll!BaseThreadInitThunk+0xd
        ntdll.dll!RtlUserThreadStart+0x21

    While I was looking at the explorer.exe threads I did notice a fair few that talk about ETW (Event Tracing for Windows), so obviously explorer.exe uses tracing. So I decided to try TraceView.exe to listen in on the explorer.exe traces. The problem is that TraceView requires some difficult-to-come-by material: either PDBs, or CTL files and .TMF files. I tried using the explorer.pdb that comes with the Windows SDK, but that did not work; I do not see explorer.exe in the "named providers". And I have no idea where to locate the CTL or TMF files for explorer.exe.

    So the question is: is there a way to view the ETW trace messages from explorer? Or shall I just not bother and go back to the age-old technique of disabling every explorer extension one by one in the hope it's one of them? (I prefer the former, as I like to get to the bottom of things!)

  • Windows 7 - Windows XP - sharing - why isn't it working?

    - by durumdara
    Hi!

    This seems to be a "hardware" rather than a "software" / "programming" question, but I need to use this share in my programs, so it is close to programming.

    We had an XP-based wireless network. The server is XP Professional, the clients are XP Home (notebooks). This was working well with folder sharing (with user rights, not simple sharing). Then we replaced one of the notebooks with a Win7/x64 notebook. At first it could reach the server, and the other client too. Later I went to other sites and connected to other servers and networks. When I returned to this network, I found I could no longer connect to this server: I see none of its resources, and when I try to double-click the computer I get a login window where, whatever I enter, I can never log in.

    The interesting part:

    - Another XP Home machine can see the server and can log in as guest or as another user.
    - The server can see the XP Home notebook.
    - The Win7 machine can see the notebook's shared folders, and the XP Home machine can see the Win7 shared folders.
    - The server can see the Win7 folders, BUT the Win7 machine cannot see the server's folders. It cannot see the server's resources either.

    The Win7 machine is in a "work network" group; the workgroup name is not MSHOME. I tried everything on the server: removing the MS client, restoring it with simple sharing, setting a guest password, etc., but I have lost the ability to access this server from Win7.

    Does anyone have any idea what I need to check or set to access these resources, so I can use them in my programs?

    Thanks for every info or link,
    dd

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after playing with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago, I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and also was messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of what I did. Many of my changes stemmed from suggestions in this Stack Overflow post.

    What I've tried:

    - I commented out my additions to the hosts file.
    - I turned off XAMPP (thus hopefully negating any Apache config file effect).
    - I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs).

    localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved; thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed, so I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce the situation. I must have stumbled upon a XAMPP bug of some sort.

  • Throughput; capacity planning help for C10K like design

    - by z8000
    I am designing a network service in which clients connect and stay connected - the model is not far off from IRC, minus the s2s connections. I could use some help understanding how to do capacity planning, in particular the system resource costs associated with handling messages from/to clients.

    There's an article that tried to get 1 million clients connected to the same server [1]. Of course, most of these clients were completely idle in the test. If the clients sent a message every 5 seconds or so, the system would surely be brought to its knees. But... how do you do less hand-waving and, you know, measure such a breaking point? We're talking about messages being sent by a client over a TCP socket, into the kernel, and read by an application. The data is shuffled around in memory from one buffer to another. Do I need to consider memory throughput ("5 GT/s" [2], etc.)?

    I'm pretty sure I have the ability to measure the basic memory requirements due to TCP/IP buffers, expected bandwidth, and the CPU resources required to process messages. I'm a little dim on what I'm calling "throughput". Help!

    Also, does anyone really do this? Or do most people sort of hand-wave, see what the real world offers, and then react appropriately?

    [1] http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3/
    [2] http://en.wikipedia.org/wiki/GT/s
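
    To make the hand-waving concrete, a back-of-envelope figure under stated assumptions (1 million connected clients, one 512-byte message every 5 seconds, protocol overhead ignored):

        1,000,000 clients / 5 s               = 200,000 messages/s
        200,000 msg/s x 512 bytes             = ~100 MB/s (~0.8 Gbit/s) through the NIC
        plus at least one kernel-to-user copy = ~200 MB/s of memory traffic before any application work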

  • Multicast image restoration with adaptive speed

    - by Clinton Blackmore
    I'm curious to know if there are any tools for restoring disk images (or even transferring files) via multicast - for any platform, especially if the project has source available - where the multicast rate adjusts itself on the fly.

    On the Mac, all the multicast solutions I am aware of (such as DeployStudio, and NetRestore before it) make use of multicast ASR (Apple Software Restore), which has one glaring deficiency: you have to set the multicast speed before you start sending a disk image over the network, and that speed is locked in. Either your clients can keep up and restore, or they can't.*

    It seems to me that it must be possible for the multicast server to adjust the data rate, so you basically say "start sending this image", clients connect, and, if they can't keep up, they tell the server so it slows down. (Likewise, I'd expect the server to try speeding up if no client is having difficulties keeping up, and I'd expect to be able to cap the maximum throughput so that other network activities can go on without being starved of resources.)

    So, what sort of tools are out there? For Linux? Windows? Is there something for the Mac I've overlooked?

    [It just kills me that, by the time you get multicast up and going at a good speed to restore a lab, you could have unicasted the data to all the computers and been done.]

    * There is a little leeway involved. I think individual clients can say, "I missed a little bit of data" and get it, and they can opt to listen in the next time the image is sent over the network, but on the whole, if they missed it the first time around, you have to image the machine again, and there is no time savings.

  • How to set it up so that specific domains are tunneled to another server

    - by Peter Smit
    I am working at a university as a research assistant. Often I would like to connect from home to university resources over HTTP or SSH, but they are blocked from outside access. Therefore, there is a front-end SSH server we can ssh into, and from there to other hosts. For HTTP access they advise setting up an SSH tunnel like this:

        ssh -L 1234:proxyserver.university.fi:8080 publicsshserver.university.fi

    and pointing the proxy settings of your browser to port 1234.

    All nice and working, but I would not like to send all my other internet traffic over this proxy server, and every time I want to connect to the university I have to do these steps again.

    What I would like:

    - Set up an SSH tunnel every time I log in to my computer. I have a certificate, so no passwords are needed.
    - Have a way to always redirect some wildcard domains through the SSH server first, so that when I type intra.university.fi in my browser, the request transparently goes through the tunnel. The same when I want to ssh into another resource within the university.

    Is this possible? For the HTTP part I think maybe I should set up my own local transparent proxy to have this done easily. How about the SSH part?
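
    For the SSH half, the kind of client-side config I have in mind (host names are the ones above; ProxyCommand with -W needs a reasonably recent OpenSSH, otherwise netcat on the gateway does the same job):

        # ~/.ssh/config
        Host *.university.fi !publicsshserver.university.fi
            ProxyCommand ssh -W %h:%p publicsshserver.university.fi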

  • Apache2 slow serving static while healthy

    - by user45339
    My Apache status looks like:

        201 requests/sec - 98.8 kB/second - 504 B/request
        85 requests currently being processed, 345 idle workers
        _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____
        __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R
        CC_C_______C___C____C______________C______C__C________________C_
        ___________________C______________________C_______C___C_____C___
        CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___
        ______C______W_W__W___C____________________C__WCC__R__R_C_______
        R__RC________________________C___R____W__C____..................
        ....................................................

    Server load averages 2 on a 4-core machine. I/O utilization is 10-15% and doesn't have many jumps over 70%. The machine has almost 4 GB free and uses no swap.

    The site on the machine is a PHP site. All the PHP code is optimized and fast when it gets accessed; however, sometimes requests get stuck. Stuck meaning: no response for at least 10 seconds. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it until we decided to test requesting a plain static test.html page containing only:

        <html><body>test</body></html>

    This static resource also gets "stuck" in the same manner the PHP pages get "stuck". How is this possible given the health of the system? I tested the network, but when the site monitoring shows this slowness, the HTML test file also takes far longer than 10 seconds to load using:

        time lynx -dump http://127.0.0.1/test.html

    We are kind of desperate to solve this problem, but we cannot seem to tackle it.

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new LikeWise Open), which works well. Now I'm trying to get the server to automount each user's AD home directory, located at "//ad.server.dom/shares/home directories" (yeah, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user.

    I've tried to get pam_mount working, but it has a series of issues on Red Hat and friends, and I can't seem to get it working. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest that there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires Red Hat (thus CentOS) 6 or higher, and newer packages than I have.

    I can manually (as root) mount the AD directory using the command

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_share directory.

    Any tips in the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and it would lead towards more Linux adoption there.

  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios, comparing a simple multi-AZ read/write deployment against a multi-AZ MySQL master (hot standby) with additional read-only slaves. The issue I'm trying to cost-optimize involves their reserved instances vs on-demand instances.

    Situation 1: purchase a reserved multi-AZ extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all the resources of the 17 GB RAM all the time, and therefore especially not a hot standby, what savings could a better topology create, like situation 2 below?

    Situation 2: purchase a reserved multi-AZ setup using smaller master instances than above for the master-master hot standby to receive the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up/down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is, if I have a variable read-intensive application load with a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and/or out on the read level according to demand as needed.

    Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice are always welcome, of course.

    Thanks in advance, R

  • Help with memory usage issues on VPS

    - by Niall Collins
    Hi there, I am running a VPS with 6 .NET web sites/applications on it. I am having performance issues with the server, mainly it running out of memory. I contacted the company that leases the server to me and they told me it was because I also had SQL Server 2008 Express running on the server, so I went ahead and removed it, uninstalled it, etc. However, I still seem to be having issues.

    For example, at present, looking at resource consumption, the virtual memory is:

        ID: vprvmem
        Current Use: 894,328,832 bytes
        Limit: 1,073,741,824 bytes

    This means usage of ~80%. Is there any way I can check exactly which applications, web sites, or software are taking up most of the server's memory, so I can look at rectifying it? I feel that 80% is much too high to allow any contingency for a spike in traffic.

    I had extra memory resources added to the box recently, but I would prefer finding the source of the problem rather than throwing extra memory at it. Maybe these levels are correct and all is running OK, but I would like to investigate to make sure. My knowledge of hardware is limited, as I mostly deal in the realm of software, so any tools that can help me, or any pertinent advice, would be appreciated.

  • Zenoss No space left on device Error

    - by Pastelinux
    Went in to check my server through Zenoss today and it looks like somehow my server is full:

        Site Error
        An error was encountered while publishing this resource.
        Sorry, a site error occurred.

        Traceback (innermost last):
          Module ZPublisher.Publish, line 231, in publish_module_standard
          Module ZPublisher.Publish, line 165, in publish
          Module Zope2.App.startup, line 211, in __call__
          Module Products.ZenUI3.browser, line 105, in __call__
          Module Products.Five.browser.pagetemplatefile, line 60, in __call__
          Module zope.pagetemplate.pagetemplate, line 115, in pt_render
          Module zope.tal.talinterpreter, line 271, in __call__
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 858, in do_defineMacro
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 533, in do_optTag_tal
          Module zope.tal.talinterpreter, line 518, in do_optTag
          Module zope.tal.talinterpreter, line 513, in no_tag
          Module zope.tal.talinterpreter, line 343, in interpret
          Module zope.tal.talinterpreter, line 620, in do_insertText_tal
          Module Products.PageTemplates.Expressions, line 203, in evaluateText
          Module Products.PageTemplates.Expressions, line 222, in _handleText
          Module zope.component._api, line 174, in queryUtility
          Module zope.component.registry, line 165, in queryUtility
          Module ZODB.Connection, line 834, in setstate
          Module ZODB.Connection, line 884, in _setstate
          Module ZEO.ClientStorage, line 815, in load
          Module ZEO.cache, line 143, in call
          Module ZEO.cache, line 607, in store
        IOError: [Errno 28] No space left on device

    Which is strange, because when I look at the server it's only 85% full:

        unclebob:~# df -h
        Filesystem                                Size  Used Avail Use% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root  1.9G  1.5G  335M  82% /
        tmpfs                                     471M     0  471M   0% /lib/init/rw
        udev                                       10M  820K  9.2M   9% /dev
        tmpfs                                     471M     0  471M   0% /dev/shm
        overflow                                  1.0M  1.0M     0 100% /tmp
        /dev/hde1                                 942M   36M  859M   5% /boot

        unclebob:/tmp# df -i
        Filesystem                                Inodes  IUsed  IFree IUse% Mounted on
        /dev/mapper/unclebob--vg0-unclebob--root  121920  54844  67076   45% /
        tmpfs                                     120489      3 120486    1% /lib/init/rw
        udev                                      120489   1520 118969    2% /dev
        tmpfs                                     120489      1 120488    1% /dev/shm
        overflow                                  120489     14 120475    1% /tmp
        /dev/hde1                                  61312     33  61279    1% /boot

    It looks like there are these two directories: .ICE-unix/ and .X11-unix/. They had been hidden; I'll remove those. Any idea what they may be? Any ideas on a fix? It probably has something to do with Zenoss.
