Search Results


  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am, our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There was no unusual traffic volume and no unusual processes running; all of a sudden the server started killing fcgid processes:

        [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL

    ...and so on, for as many fcgid processes as we have. CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure whether it was due to killed processes being swapped in to die, or because some process ramped up memory usage faster than my process-watching scripts could see it. The oom-killer wasn't triggered (at least it's not logged), so I think this was Apache restarting the processes for some reason. This is not a regular occurrence, and nothing obvious appears in cron. Is there a normal Apache process which might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low (maybe 200 requests in a 10-minute period).
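
    In case it helps anyone hitting the same message: mod_fcgid first sends SIGTERM when it retires a worker (idle timeout or process-lifetime expiry) and escalates to SIGKILL if the worker doesn't exit in time, so a burst of these warnings usually means many workers were retired at once and were too slow to die (for example because they had been swapped out). A tuning sketch for httpd.conf, assuming mod_fcgid 2.3.x directive names, with values you would adjust for your own load:

        # Retire idle workers gradually rather than in one sweep
        FcgidIdleTimeout        300
        FcgidIdleScanInterval   120
        # Give long-lived workers more time before retirement
        FcgidProcessLifeTime    3600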

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install
        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
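
    For what it's worth, that "Permission denied (publickey,...)" error usually comes from the app server (not your workstation) failing to authenticate against the git host when Capistrano clones the repository there. One common fix, assuming the key on your workstation is what the repo host trusts, is to forward your ssh agent through the deploy connection in the Capistrano config:

        # let the git clone running on the server reuse the ssh agent on your workstation
        ssh_options[:forward_agent] = true

    The alternative is to generate a key pair on the app server itself and authorize its public key on the git host.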

  • Patch management on multiple systems

    - by Pierre
    I'm in charge of auditing the security configuration of a large farm of Unix servers. So far, I've come up with a way to assess the basic configuration, but not the installed updates. The core problem is that I just can't trust the package management tools on those machines: some of them have not synced with their repository for a long time (so I can't just run "yum check-update" on Red Hat, for example). Some of the servers are not even connected to the internet and use a company repository. Another problem is that I have multiple target systems - AIX, Debian, CentOS/Red Hat, etc. - so the versions may differ (AIX) and the tools available will differ too. And, last but not least, I can't install anything on the target systems. So I need a script that retrieves the information and either processes it directly or saves it to be processed later on a server (which may well run a different distribution than the one the information was retrieved from). The best ideas I could come up with were: either retrieve the list of installed packages on each machine (dpkg -l on Debian, for example) and process it on a dedicated server (directly parsing the "Packages" file of the Debian repositories) - though the problem remains the same for AIX and Red Hat - or use Nessus' scripts to assess vulnerabilities in the installed packages, but I find this a bit dirty. Does anyone know a better/more efficient way of doing this? P.S.: I already took time to review some answers to similar problems. Unfortunately Chef, Puppet, etc. don't meet the requirements I have to meet. Edit: Long story short, I need the list of missing updates on a Unix system, just like MBSA on Windows. I'm not authorized to install anything on these systems, as they're not mine. All I have are scripting languages. Thanks.
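
    A minimal collection sketch along those lines, assuming you can run a POSIX shell script on each box and copy one text file back to a central server for offline comparison against a trusted repository index:

        #!/bin/sh
        # Dump the installed-package inventory in a per-platform format;
        # the resulting file is processed offline on the audit server.
        host=$(hostname)
        if [ -x /usr/bin/dpkg-query ]; then
            dpkg-query -W -f '${Package} ${Version}\n' > "/tmp/pkgs-$host.txt"       # Debian
        elif command -v rpm > /dev/null 2>&1; then
            rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE}\n' > "/tmp/pkgs-$host.txt"   # RHEL/CentOS
        elif [ -x /usr/bin/lslpp ]; then
            lslpp -Lc > "/tmp/pkgs-$host.txt"                                        # AIX fileset listing
        fi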

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and I'd like some tips on the best way to proceed. Background: the SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating there correctly. This was mainly done to provide fault tolerance for the domain - I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost. Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1.) P2V the primary DC:
    - Power off the primary DC
    - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    - P2V (cold clone) the primary DC
    - Boot the new PDC VM
    - Allow the new hardware to be detected
    - Remove the old NIC hardware from Device Manager
    - Assign the old IPs to the new virtual NICs
    - Reboot the PDC VM, confirm connectivity and no major issues
    - Power on the secondary DC, confirm replication

    Option 2.) Create a new VM, transfer roles, remove the original DC from the domain:
    - Create a new VM, install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them)
    - Add the VM to the domain, promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
    - Set up RRAS on the new VM
    - Set up IIS/FTP on the new VM
    - Move the file shares to the new VM
    - Transfer the FSMO roles to the new VM
    - dcpromo the original primary DC out of the domain

  • How to prevent asymmetric routing with multiple eBGP routers?

    - by Andy Shinn
    I have 2 routers announcing a /22 subnet to different providers (one provider connects to each of the 2 routers). I have split the /22 into two /23s, announcing one /23 from each router plus the /22 from both (the providers will take the more specific route). This allows me to fail over while keeping traffic for each /23 in and out of the same provider. What are other ways I could announce just the /22 from both routers and still have packets from the servers on the network behind the routers go back out the same router they came in from? EDIT: The main problem I come across, and the one end users and clients complain about the most, is that the least-hop route is sometimes not the "optimal" route. In my case, I know that provider B may have better latency to nation X. But when packets come in from provider B, they may go out provider A or provider B. The reverse is also true: if I send a packet to nation X out provider A, the reply will likely come in from provider B, even though it may have more hops back (and may have higher latency, packet loss, etc. to that nation).
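
    For reference, the split-announcement setup described above looks roughly like this in Cisco-style BGP configuration on one of the two routers (prefix and AS number are placeholders, and the other router announces the upper /23 instead):

        router bgp 64512
         network 203.0.112.0 mask 255.255.252.0   ! the /22, announced by both routers
         network 203.0.112.0 mask 255.255.254.0   ! this router's /23 (more specific wins inbound)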

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file:

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        #acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        #acl to_localhost dst ::1/128

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 192.168.1.0/24
        acl localnet src 192.168.2.0/24

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        #
        # Recommended minimum Access Permission configuration:
        #
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        #http_access deny to_localhost

        #
        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
        #
        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port xxx.xxx.xxx.yyy:3128 transparent
        visible_hostname proxyserver.local

        # We recommend you to use at least the following line.
        hierarchy_stoplist cgi-bin ?

        # Uncomment and adjust the following to add a disk cache directory.
        cache_dir ufs /var/spool/squid 1024 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/spool/squid

        # Add any of your own refresh_pattern entries above these.
        refresh_pattern ^ftp:             1440    20%     10080
        refresh_pattern ^gopher:          1440    0%      1440
        refresh_pattern -i (/cgi-bin/|\?) 0       0%      0
        refresh_pattern .                 0       20%     4320

        access_log /var/log/squid/squid.log squid
        access_log syslog squid
        redirect_program /usr/local/adzap/scripts/wrapzap

    Fixed using:

        acl allow_domains dstdomain www.cnn.com
        always_direct allow allow_domains
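
    Note that always_direct only makes Squid fetch matching requests directly instead of through a cache peer; if the goal is to skip caching for the domain entirely, something like the following should also work (a sketch using the standard cache directive, with a leading dot to cover subdomains):

        acl bypass_domains dstdomain .cnn.com
        cache deny bypass_domains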

  • Strange VMware behaviour when shutting down a guest

    - by adza77
    Hi, I've been having a problem for a while now with VMware Workstation (originally with 6.5, but now with 7.0 and 7.0.1 too). The problem occurs when I choose to shut down a guest: VMware itself seems to hang. If I shut down a guest that's open full screen, and during the process minimise it to work on other applications on the host, then often (not all the time) when I return to the guest I get a 'greyed out' screen and the system becomes unresponsive. The host OS still seems to be working, but I am unable to switch to other applications. (I can bring up the taskbar on the host and 'see' other applications, and even switch to them, but VMware still stays 'on top', unresponsive.) I cannot terminate VMware even when Windows says the application has become unresponsive and gives me the option to terminate; VMware stays on top, and I'm forced to either shut down, or log off and back on, in order to regain control of my computer. This happens with both Windows 7 and Windows Vista guests (32-bit), and I have had it happen on multiple host machines and multiple guest machines too. Current host: Windows 7 64-bit, 8GB RAM, 500GB HDD, i7 processor. I have been searching for more than 6 months for a solution but have found none, so I finally decided to post here. Does anyone know what might be causing the problem? Even knowing how to minimise the VM so I can at least access other applications and save my work before forcing a logoff/reboot would be extremely handy. If I know the correct keystrokes to save and close in an application on the host, I can do it by task-switching to the desired app, but I can't see what I'm doing because VMware Workstation is still 'greyed out' on top. Cheers, Adam.

  • Certificate revocation check fails for non-domain guest in spite of accessible CRL

    - by 0xFE
    When we try to use certificates on computers that are not part of the domain, Windows complains that "The revocation function was unable to check revocation because the revocation server was offline." However, if I manually open the certificate and check the CRL Distribution Point property, I see an ldap:/// URL and an http:// URL that points to an externally-accessible IIS site hosting the CRLs. Of course, the non-domain-joined client cannot access the ldap:/// URL, but it can download the CRL from the http:// link (at least in a browser). I enabled CAPI logging and I see the event that corresponds to this failed revocation check. The RevocationInfo section is:

        RevocationInfo
          [ freshnessTime ]  PT11H27M4S
          RevocationResult   The revocation function was unable to check revocation
                             because the revocation server was offline.
          [ value ]          80092013
          CertificateRevocationList
            [ location ]   UrlCache
            [ url ]        http://(the correct URL)
            [ fileRef ]    6E463C2583E17C63EF9EAC4EFBF2AEAFA04794EB.crl
            [ issuerName ] (the name of the CA)

    Furthermore, I can see the HTTP request to the correct URL and the server's response (HTTP 304 Not Modified) with Microsoft Network Monitor. I ran certutil -verify -urlfetch, and it seems to show the same thing: the computer recognizes both URLs and tries both, and even though the http:// fetch succeeds, it returns the same error. Is there a way to have non-domain-joined clients skip the ldap:/// link and only check the http:// one? Edit: The ldap:/// URL is

        ldap:///CN=<name of CA>,CN=<name of server that is running the CA>,CN=CDP,CN=Public Key Services,CN=Services,CN=Configuration,DC=<domain name>?certificateRevocationList?base?objectClass=cRLDistributionPoint

    The non-domain-joined clients may be on the domain network or on an external network. The http:// CDP is accessible from the public internet.

  • What's the best way to clone multiple PCs from one machine?

    - by Jason T.
    Where I work we have dozens and dozens of old ThinkPad laptops. A lot of them can be reused, just not for our needs - they have long since been replaced, and the higher-ups have decided to donate them to charity. For better or for worse, I have been tasked with reimaging them. I took one laptop, installed the factory copy of Windows, updated it, and configured it appropriately. Now I'm trying to reimage dozens of other laptops from it. What's some good software to do this? First I used Clonezilla to clone the laptop's hdd to an internal drive in an external enclosure, and it worked. Then I tried connecting the base-image drive externally to a laptop that needed to be imaged, and I got that to work a few times. So far so good, right? Well, once I informed my boss of my findings and what I wanted to do, the images started to fail on new laptops. One of three things would happen:
    - the ThinkPads would just blink at me and Windows wouldn't load;
    - Windows would load but freeze within two minutes;
    - last but not least, the laptops would BSOD during Windows XP bootup.
    These laptops are not going to be used by the company - they're going to charity. So can anyone recommend a way to reimage multiple laptops?
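
    One likely culprit, for what it's worth: an XP image restored onto a different ThinkPad model will BSOD or hang at boot if the HAL or mass-storage driver doesn't match the new hardware. The usual approach is to generalize the master installation with sysprep before capturing the image - a sketch, assuming the XP-era sysprep from DEPLOY.CAB in \SUPPORT\TOOLS on the installation CD:

        sysprep.exe -reseal -mini -pnp -quiet

    Here -reseal strips machine-specific state, -mini runs mini-setup on the next boot, and -pnp forces a full Plug and Play re-detection on the target hardware.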

  • Requests are making it to my app server, but not into node.js -- why?

    - by Zane Claes
    I detailed in this question on StackOverflow how some random requests are not making it from the client to my Node.js app server, resulting in a gateway timeout. In summary, identical requests are, at random, not even making it far enough to trigger a console.log() in my first line of Express middleware. I need to narrow the problem down to find out WHERE the traffic is being lost, and it was suggested that I try a packet sniffer on my app servers. Here's my setup:
    - 2x load balancers (m1.large)
    - 2x node.js servers (also m1.large)
    Here's what's interesting/unusual: the node.js servers started as PHP servers with an Apache stack and continue to serve PHP files for my domain (streamified.me). However, I use a little httpd.conf magic on the app servers so that requests to api.streamified.me get routed over port 8888 to the node.js server:

        RewriteCond %{HTTP_HOST} ^api.streamified.me
        RewriteRule ^(.*) http://localhost:8888$1 [P]

    So the request hits the load balancer, goes to an app server, gets routed to port 8888 if it's intended for the API, and gets handled by node.js. In the same httpd.conf file, I turned on RewriteLogLevel 5 and then created a simple PHP+CURL script on my localhost to hit api.streamified.me with a random URL (which should cause node.js to return a simple "not found" response) until it resulted in a gateway timeout. Here, you can see that it has happened - the rewrite log shows that the request was definitely received by the app server and forwarded to port 8888... but it was never received by node.js (or, at least, the first line of code in the first line of middleware never saw it...). Image link: http://i.stack.imgur.com/3OQxS.png
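
    Since the proxy hop is Apache to localhost:8888, a capture on the loopback interface should show whether the forwarded request ever leaves Apache. A sketch of the kind of sniff that was suggested (interface name may differ per distro):

        # capture loopback traffic to/from the node.js port, save for Wireshark
        sudo tcpdump -i lo -n -s 0 port 8888 -w node8888.pcap

    If the request shows up in the capture but never reaches the middleware, the problem is inside node.js (a saturated listen backlog, for instance); if it never shows up at all, the problem is on the Apache side.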

  • Windows VPN always disconnects after < 3 minutes, only from my network

    - by hemp
    First, this problem has existed for almost two years. Until serverfault was born, I pretty much gave up on solving it - but now, hope is reborn! I've set up a Windows 2003 server as a domain controller and VPN server at a remote office. I am able to connect to and work over the VPN from every Windows client I've tried - XP, Vista, and Windows 7 - without issue, from at least five different networks (corporate and home, domain and non-domain). It works fine from all of them. However, whenever I connect from clients on my home network, the connection drops (silently) after 3 minutes or less. After a short while, it will eventually tell me the connection has dropped and attempt to redial/reconnect (if I've configured the client that way). If I reconnect, the connection re-establishes and appears to work correctly, but again drops silently, this time after a seemingly shorter period. These are not intermittent drops: it happens every single time, in exactly the same way. The only variable is how long the connection survives. It doesn't matter what type of traffic I send - I can sit idle, send continuous pings, RDP, transfer files, all of that at once - the result is always the same. Connected for a few minutes, then silent death. Since I doubt anyone has experienced this exact situation, what steps can I take to troubleshoot my evanescing VPN?
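
    In case it's useful as a troubleshooting step: on the Windows 2003 side you can enable the RAS component trace logs, which record each phase of the PPP session including the reason the tunnel was torn down. A sketch of the relevant commands (the logs land in %windir%\tracing):

        netsh ras set tracing * enabled
        rem ...reproduce the disconnect, then:
        netsh ras set tracing * disabled

    PPP.LOG in that directory usually names the disconnect reason; comparing a failing session from the home network against a working one from elsewhere should narrow down whether the drop is initiated by the client, the server, or something in between (NAT keepalive timeouts on a home router are a classic cause for PPTP/GRE).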

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to various sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly none of these memory errors. I admit I never paid special attention to the subject; however, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either. Some of the factors that IMO should aggravate memory problems:
    - I build my computers out of the most "bulk commodity" parts: mainstream budget motherboards and the next-to-cheapest memory.
    - I usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with the current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM.
    - Memory usage is moderately heavy, with lots of virtual machines up and running small and big tasks 24/7/365.
    But nevertheless, no memory-related problems ever found! How's that?
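
    For a sense of scale on those claims, a back-of-envelope check: if one DIMM really had an 8% chance of an error per month (the figure quoted above), a machine with 8 DIMMs would see at least one error in a month with probability 1 - 0.92^8, about 49%, and would go a full year error-free with probability 0.92^96, about 0.03%. Observing zero errors over 10 years is therefore wildly inconsistent with rates that high - either the quoted rates describe very different hardware populations, or most single-bit errors simply pass unnoticed when there is no ECC to count them.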

  • Replacing the LCD panel in a netbook (Asus Eee PC 1005)

    - by neilfein
    Yesterday, I was cleaning up and dropped my Asus Eee 1005PE. The screen is cracked inside (i.e., cracks are visible only when on), and no longer works properly. I booted up with another monitor attached, and the computer itself is fine, but needs a new screen. Best Buy wants at least $250 to repair it (that includes their $150 fee to breathe in the same room as the unit), and Asus was of no help at all. (They're incredibly cagey and won't provide any money numbers at all, not even the cost of the part.) If replacing the LCD is no more trouble than replacing memory or a hard drive, I can do that. It's within my means to buy the part (18G241010402, a TFT LCD), but I'd like to know more about the procedure involved. My question: How does one replace the screen in this unit? Do I simply open the case and swap out the unit, or do I need to disassemble anything else to get to the screen? I don't want to order the part and then end up in a situation like this. Is the case screwed shut, or is it like an iPod where they glue things closed? I know enough about my abilities with a soldering gun to not attempt to solder tiny wires, would any of that be involved?

  • How to minimize the risk of employees spreading critical information?

    - by Industrial
    Hi everyone, what's common sense when it comes to minimising the risk of employees spreading critical information to rival companies? As of today, it's clear that not even the US government and military can be sure their data stays safely within their doors. So I understand my question should probably instead be written as: "What is common sense to make it harder for employees to spread business-critical information?" If anyone wants to spread information, they will find a way - that's how life works and always has. If we make the scenario a bit more realistic by narrowing our workforce down to regular John Does, and not Linux-loving sysadmins, what would be good precautions to at least make it harder for employees to send business-critical information to the competition? As far as I can tell, there are a few obvious solutions that clearly have both pros and cons:
    - Block services such as Dropbox and similar, preventing anyone from sending gigabytes of data through the wire (a sketch of this follows below).
    - Ensure that only files below a set size can be sent as email (?)
    - Set up VLANs between departments to make it harder for kleptomaniacs and curious people to snoop around.
    - Plug all removable media units - CD/DVD, floppy drives and USB.
    - Make sure that no configuration changes to hardware can be made (?)
    - Monitor network traffic for non-linear events (how?)
    What is realistic to do in the real world? How do big companies handle this? Sure, we can take the former employee to court and sue, but by then the damage has already been caused... Thanks a lot
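
    On the first point, the usual mechanical enforcement is an egress proxy with a domain blacklist. A minimal sketch in squid.conf terms, assuming all outbound HTTP is forced through the proxy (the domain listed is just an example):

        acl filesharing dstdomain .dropbox.com
        http_access deny filesharing

    The obvious caveat is that this only raises the bar; it does nothing against HTTPS tunnels, webmail attachments, or a phone camera.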

  • Windows 7 access denied to executables.. by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:
    1. with Explorer, navigate to a directory containing at least one exe file
    2. go one directory up
    3. immediately delete the directory just navigated to
    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action. You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.

    The Visual Studio way:
    1. build a project producing an exe file
    2. run the executable, then close it
    3. immediately build the project again (by changing a single character in a source file, for example)
    This yields "fatal error LNK1168: cannot open /path/to/the.exe for writing". Note: if in step 2 I wait a minute or more before building again, the problem does not occur.

    Some specs:
    - happens on both Windows 7 32 and 64 bit, with VS2008/2010/2011
    - happens on 3 different machines
    - I do not have a virus scanner of any kind
    - I do have a bunch of services disabled, but nothing that prevents Windows from running normally; UAC is disabled as well
    - happens on any type of disk
    - I always use a user account that is in the Administrators group

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' "handle -a", the exe file in question never shows up (that is the correct way to use handle, right?). So while Explorer/VS report they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?

  • How to break Folder Protection, Folder Guard, or a folder lock in Windows XP?

    - by SonyAdi
    I was making a new partition with Partition Magic when all of a sudden the power failed. Unfortunately my computer is not equipped with a UPS (uninterruptible power supply), so it died too. When power was restored, I tried to turn the computer on, but it would no longer boot into Windows at all. I tried safe mode and all the other boot options, and everything failed - it cannot boot at all, not even into safe mode. And I know the cause: Partition Magic did not finish its work, stopped in the middle of moving data, and some of the default Windows files were destroyed as well. Unfortunately, my important data is stored in My Documents. So I took my hard drive to a friend, hoping that by attaching it to his computer I could at least save my important data, and then install Windows again after reformatting my hard drive. I can read the hard drive in Explorer on my friend's machine, complete with its data, but I cannot get into my important data in My Documents, because opening that folder requires administrator privileges or the original user account from my Windows installation. This is actually very similar to how Folder Protection or Folder Guard works. I am disappointed and almost desperate to get my important data back. How do I break a Folder Protection, Folder Guard or folder lock in Windows XP?
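
    Assuming the folder is just NTFS-permission-locked (which is what XP does to My Documents when "Make this folder private" is set), an administrator on the friend's machine can take ownership and re-grant access. Roughly (drive letter, path and account name are placeholders, and on XP you may first need to disable Simple File Sharing to see the Security tab): right-click the folder > Properties > Security > Advanced > Owner, take ownership as the local Administrators group with "Replace owner on subcontainers and objects" ticked, then from a command prompt:

        cacls "E:\Documents and Settings\OldUser\My Documents" /T /E /G Administrators:F

    Here /T recurses into subfolders, /E edits the existing ACL instead of replacing it, and /G grants Full control.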

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v9.7.1, 64-bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server moving 450MB/s at 25000 IO/s (yes, there is probably some storage-system caching happening here) while all CPU cores were happily working in an even mix of usermode/wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single, rather complex query with at least one large table scan. db2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to LOCKSIZE=TABLE, so I would think that lock-list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?
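
    A couple of starting points, sketched from the standard DB2 9.7 monitoring toolbox (the database name is a placeholder):

        # what each engine thread (EDU) is doing and where CPU time goes
        db2pd -edus
        # buffer pool hit ratios and prefetcher activity during the scan
        db2 get snapshot for bufferpools on MYDB

    A big table scan pinning two cores in wait state with tiny IO rates often points at small-block, insufficiently parallel prefetch; in the bufferpool snapshot, the ratio of asynchronous (prefetcher) reads to total physical reads should show whether the prefetchers are keeping up with the scanning agent.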

  • dom0 enable IPv6 for guests

    - by user98651
    I am looking at deploying IPv6 to my virtual machines. Right now I have v6 working great on the dom0, using a 6in4 tunnel provided by Hurricane Electric since I do not have native v6. However, I would like to distribute some of the /48 I am receiving to the domUs (/64 per machine would be ideal, but I am open to your suggestions). Static configuration on the domU side is fine. All I want to accomplish is getting the traffic to pass through the dom0 to the domU. To say the least, I'm still trying to wrap my head around all the virtual interfaces and bridges Xen creates. Yes, I have Googled around for this a bit and have not found anything great. I tried using two "vif-route6" bash scripts with no luck (possibly due to my ignorance of Xen networking), and I am still stuck, mainly on how to configure the dom0. I would like to imagine this problem is relatively easy to solve, and I look forward to your suggestions and help! Edit: to clarify my end goal: getting IPv6 to domU guests. I am completely open to suggestions, but am hoping for something other than setting up a tunnel for every guest.
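
    A minimal routed sketch, assuming routed (not bridged) vifs and using placeholder prefixes from 2001:db8::/48 in place of the HE allocation - the idea is that the /48 is already routed at the dom0 via the tunnel, so each guest just needs a /64 on its vif link and a default route back:

        # on the dom0 (vif1.0 is domain 1's first vif; the name is a placeholder)
        sysctl -w net.ipv6.conf.all.forwarding=1
        ip -6 addr add 2001:db8:0:1::1/64 dev vif1.0
        # on the domU, statically
        ip -6 addr add 2001:db8:0:1::2/64 dev eth0
        ip -6 route add default via 2001:db8:0:1::1

    Assigning the address on the dom0 side of the vif creates the connected route for that /64, so no extra route entry or NDP proxying should be needed.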

  • a brand new FS based on a database without using fuse

    - by Devrim
    Hi all, to serve millions of files out of a single directory, to be able to connect to a drive from hundreds of endpoints, and for some other reasons (to avoid gluster/NFS and other FS-based networking solutions), I want to evaluate the possibility of making a filesystem based on MongoDB (or any other database). Basically, it works like fusefs: every single file is kept in Mongo's GridFS. In theory, I do:

        mount mongodbfs /mountPoint mongodb://localhost

    then when I say "touch /mountPoint/test.txt", this file is inserted into MongoDB. This FS will also store uid/gid and perms with the file; we can throw hundreds of servers at it, and no useradd will be necessary. I'm not thinking of including all the features of a FS, just the ones we need. My question is: how do I start my quest in finding resources, books, links, people, developers who'd help me implement this? At least a proof of concept. Is it feasible? What should I expect as a timeline for such an undertaking? Please only think about a gazillion small files and folders.
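
    For a quick feel of the storage layer itself, the stock mongofiles utility that ships with MongoDB already exercises GridFS from the shell, so a proof of concept can start there before any mount logic exists (the database name is arbitrary):

        mongofiles -d fsproto put test.txt     # store a file in GridFS
        mongofiles -d fsproto list             # enumerate stored files
        mongofiles -d fsproto get test.txt     # read it back

    The uid/gid/permission bits would then live as extra fields on the GridFS files collection, since GridFS itself only records filename, length, chunk size, upload date and an MD5 checksum.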

  • Messed up partitions... system will not boot!

    - by someguy
    I did a really dumb thing. cfdisk threw an error at me saying "FATAL ERROR: Bad primary partition 3: Partition ends in the final partial cylinder", so I installed Partition Table Doctor to see if I could fix the problem. When the program started up, it told me there were problems with my partitions and asked if I wanted them fixed (I cannot remember the exact message, but I believe it had something to do with the cylinder boundaries), so, blindly, without thinking of the consequences, I accepted. Now my system will not boot. I tried booting from the Windows 7 installation CD and went to install a fresh copy, but it said that "No drives were found". I then opened up diskpart. According to diskpart, there is only one partition, containing one volume, assigned the letter "C". Before, I had four partitions! It is also saying that the file system is RAW. Is there any way I can fix this? I have important data that I do not want to lose. Later on, I tried fdisk with the option -l, which lists the partition table(s), and this is what I got:

        Ignoring extra extended partition 4

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 64 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x163df116

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               6          18      102400    7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2              18        7851    62918572+   7  HPFS/NTFS
        /dev/sda3           13073       30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3           13073       30402   139196416    f  W95 Ext'd (LBA)
        /dev/sda3           13073       30403   139203193    7  HPFS/NTFS

    I don't know if this will help, but it's extra information, at least. Also, this is how I had my partitions:
    - 40MB (unallocated)
    - 100MB (System Reserved)
    - 60GB (Windows, C:)
    - 40GB (was reserved for a secondary OS)
    - ~132GB (Home, E:)
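
    Before writing anything else to the disk, it may be worth letting TestDisk (a free partition-recovery tool) scan for the old NTFS boot sectors and rebuild the table from what it finds; the remembered layout above is exactly the kind of thing its Analyse / Quick Search pass can usually reconstruct:

        testdisk /dev/sda

    Work from a live CD, write nothing until the partitions TestDisk finds match the remembered layout, and if the data is irreplaceable, image the whole disk first (e.g. with ddrescue) so every recovery attempt starts from a pristine copy.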

  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine with one odd (and really annoying at the moment) exception: Microsoft's Silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I search for code examples, tutorials, or whatnot, at least 50% of the results are hosted in the silverlight.net forums. The error message that I get is:

        Oops! Google Chrome could not find www.silverlight.net

    It doesn't work in my other browsers either (both IE and Firefox). What's odd is that while the error message would lead me to assume it's a DNS error, I can ping the URL just fine:

        C:\Users\The Doot>ping silverlight.net

        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
        Reply from 206.72.125.201: bytes=32 time=106ms TTL=106

        Ping statistics for 206.72.125.201:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 103ms, Maximum = 110ms, Average = 106ms

    I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!?? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate. Thanks in advance! -Sootah
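
    Worth noting: a successful ping of silverlight.net doesn't clear DNS for www.silverlight.net - they are separate records, and the browser error names the www host. A quick way to separate a local resolver problem from an upstream one (8.8.8.8 is used here just as an example of an independent resolver):

        nslookup www.silverlight.net
        nslookup www.silverlight.net 8.8.8.8
        ipconfig /flushdns

    If the first lookup fails while the second succeeds, the configured resolver is the culprit; if both return an address, flush the local cache as above and retest the browsers.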

  • Windows Server Connected to Domain Without Being Domain Controller

    - by saluce
    Can a Windows Server be connected to an Active Directory domain without being a domain controller? Here's the scenario: I want to use Windows Server 2012 to run several virtual machines for testing our web application in a variety of environments. We have a corporate domain, and I'd like to use the corporate login (or at least a common login) on each of the virtual machines without necessarily having to get IT to set up each virtual machine on the corporate domain. Also, I need the server itself to be able to authenticate domain logins (the app uses domain login information for users to login). However, I absolutely do NOT want it to be a DC on the corporate network. Thus, my questions:
    - Can a Windows Server be connected to an Active Directory domain without being a DC?
    - Can a Windows Server authenticate users on another domain without being a part of that domain?
    - Can a Windows Server be a domain controller in a small network (comprised of just the server and itself) and use the corporate domain's Active Directory for authenticating user logins to the server, the web app, and the virtual machines?

  • Wireshark WPA 4-way handshake

    - by cYrus
    From this wiki page: "WPA and WPA2 use keys derived from an EAPOL handshake to encrypt traffic. Unless all four handshake packets are present for the session you're trying to decrypt, Wireshark won't be able to decrypt the traffic. You can use the display filter eapol to locate EAPOL packets in your capture." I've noticed that the decryption works with (1, 2, 4) too, but not with (1, 2, 3). As far as I know, the first two packets are enough, at least where unicast traffic is concerned. Can someone please explain exactly how Wireshark deals with this - in other words, why does only the former sequence work, given that the fourth packet is just an acknowledgement? Also, is it guaranteed that (1, 2, 4) will always work when (1, 2, 3, 4) works?

    Test case: this is the gzipped handshake (1, 2, 4) plus an encrypted ARP packet (SSID: SSID, password: password) in base64 encoding:

        H4sICEarjU8AA2hhbmRzaGFrZS5jYXAAu3J400ImBhYGGPj/n4GhHkhfXNHr37KQgWEqAwQzMAgx
        6HkAKbFWzgUMhxgZGDiYrjIwKGUqcW5g4Ldd3rcFQn5IXbWKGaiso4+RmSH+H0MngwLUZMarj4Rn
        S8vInf5yfO7mgrMyr9g/Jpa9XVbRdaxH58v1fO3vDCQDkCNv7mFgWMsAwXBHMoEceQ3kSMZbDFDn
        ITk1gBnJkeX/GDkRjmyccfus4BKl75HC2cnW1eXrjExNf66uYz+VGLl+snrF7j2EnHQy3JjDKPb9
        3fOd9zT0TmofYZC4K8YQ8IkR6JaAT0zIJMjxtWaMmCEMdvwNnI5PYEYJYSTHM5EegqhggYbFhgsJ
        9gJXy42PMx9JzYKEcFkcG0MJULYE2ZEGrZwHIMnASwc1GSw4mmH1JCCNQYEF7C7tjasVT+0/J3LP
        gie59HFL+5RDIdmZ8rGMEldN5s668eb/tp8vQ+7OrT9jPj/B7425QIGJI3Pft72dLxav8BefvcGU
        7+kfABxJX+SjAgAA

    Decode with:

        $ base64 -d | gunzip > handshake.cap

    Run tshark to see if it correctly decrypts the ARP packet:

        $ tshark -r handshake.cap -o wlan.enable_decryption:TRUE -o wlan.wep_key1:wpa-pwd:password:SSID

    It should print:

        1   0.000000  D-Link_a7:8e:b4 -> HonHaiPr_22:09:b0  EAPOL Key
        2   0.006997  HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4  EAPOL Key
        3   0.038137  HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4  EAPOL Key
        4   0.376050  ZyxelCom_68:3a:e4 -> HonHaiPr_22:09:b0  ARP 192.168.1.1 is at 00:a0:c5:68:3a:e4
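
    Incidentally, a useful cross-check for which packets are really required is airdecap-ng from the aircrack-ng suite, which implements the PTK derivation independently of Wireshark (same SSID/passphrase as the test case above):

        airdecap-ng -e SSID -p password handshake.cap

    It writes the decrypted frames to handshake-dec.cap. At the protocol level, the PTK is derived from the ANonce in message 1 and the SNonce in message 2; a MIC'd frame from the later part of the exchange then serves to confirm the derived key, and the observation above suggests Wireshark accepts message 4 in that confirming role even though it carries no nonce itself.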

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3GB of RAM is "free". Anyone who has ever tried running a ram-drive can witness that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1GB/4GB, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading capabilities by use of RAM? More specifically, I am not looking for a 'hack' here. I want file system calls to serve the files from RAM without needing to create a ram-drive and copy the files there manually - or at least a script that does this for me (see the sketch below). Possible applications here are:
    - web servers with static files that get read a lot
    - application servers with large libraries
    - desktop computers with too much RAM
    Any ideas? Edit: I found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free; what I mean is that it's not being used by applications, and I want to control what should be cached in memory.
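
    A crude but effective version of "a script that does this for me", relying on nothing but the kernel's page cache (the path is a placeholder): reading the files once pulls them into RAM, and with 3GB spare they will stay cached until memory pressure evicts them.

        # warm the page cache for everything under the web root
        find /var/www -type f -exec cat {} + > /dev/null

    The vmtouch utility does the same job with more control - "vmtouch -t /var/www" touches files into the cache, and "vmtouch -l" locks them there - but unlike the cat loop it's an extra package to install.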
