Search Results

Search found 21283 results on 852 pages for 'control flow'.

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, upgraded from Tiger. Every once in a while the SyncServer process starts up and eats up all the CPU. The fans run at full blast and the laptop slows to a crawl, and I have to force-quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually starts again. I do have an iPhone that I sync, so I'm wondering whether SyncServer might be an Apple process checking for my phone being plugged in.

    Edit: I tried iSync and the manual resetsync as suggested, but got this output:

        Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full
        2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply
        PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object
            name: "ISyncServerUnavailableException"
            reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))"
            userInfo: ""
            location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16"
        ** PerlObjCBridge: dying due to NSException
        Vince-2:~ vince$

    And while that ran, SyncServer spun up to 95-100% CPU just like it always does.
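
    Since resetsync.pl cannot even reach the server, a blunter reset often suggested for Leopard is to quit the process and move its data store aside so it gets rebuilt on the next sync. A minimal sketch, assuming the standard Leopard location for the sync data (move the folder rather than deleting it, so it can be restored):

        # Stop the runaway process, then move the sync database out of the way;
        # SyncServices rebuilds it the next time a sync runs.
        killall SyncServer
        mv ~/Library/Application\ Support/SyncServices ~/Desktop/SyncServices.bak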

  • HTTP responses: curl and wget give different results

    - by Fab
    To check the HTTP response headers for a set of URLs, I send the following request headers with curl:

        foreach ($urls as $url) {
            // Setup headers - I used the same headers from Firefox version 2.0.0.6
            $header[] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[] = "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: "; // browsers keep this blank.

            curl_setopt( $ch, CURLOPT_URL, $url );
            curl_setopt( $ch, CURLOPT_USERAGENT, 'Googlebot/2.1 (+http://www.google.com/bot.html)');
            curl_setopt( $ch, CURLOPT_HTTPHEADER, $header);
            curl_setopt( $ch, CURLOPT_REFERER, 'http://www.google.com');
            curl_setopt( $ch, CURLOPT_HEADER, true );
            curl_setopt( $ch, CURLOPT_NOBODY, true );
            curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
            curl_setopt( $ch, CURLOPT_FOLLOWLOCATION, true );
            curl_setopt( $ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY );
            curl_setopt( $ch, CURLOPT_TIMEOUT, 10 ); // timeout 10 seconds
        }

    Sometimes I receive 200 OK, which is good; other times 301, 302, or 307, which I consider good as well. But sometimes I receive odd statuses such as 406, 500, or 504, which should indicate an invalid URL, yet when I open those URLs in a browser they are fine. For example, the script returns:

        http://www.awe.co.uk/ => HTTP/1.1 406 Not Acceptable

    while wget returns 200 OK:

        wget http://www.awe.co.uk/
        --2011-06-23 15:26:26--  http://www.awe.co.uk/
        Resolving www.awe.co.uk... 77.73.123.140
        Connecting to www.awe.co.uk|77.73.123.140|:80... connected.
        HTTP request sent, awaiting response... 200 OK

    Does anyone know which request header I am missing or sending in excess?
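
    One thing worth checking: each element of the CURLOPT_HTTPHEADER array is sent as its own header line, so the second element above goes out as a standalone line "text/html;q=0.9,..." rather than as a continuation of the Accept header, and a server with strict request filtering may answer such a malformed request with 406 Not Acceptable. A quick shell sketch to test that hypothesis against the URL from the question, replaying the same request with the Accept header intact on one line:

        # Replay the request with a single, well-formed Accept header;
        # a 200 here (where the PHP script got 406) points at the split header.
        curl -sI -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' \
             -e 'http://www.google.com' \
             -H 'Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5' \
             http://www.awe.co.uk/ | head -n 1

    If that returns 200, concatenating the two Accept strings into a single $header[] element in the PHP should fix the script.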

  • Scanning a website for vulnerabilities

    - by Kristen
    I have found that the local school's website installed a Perl calendar years ago. It has not been used for ages, but Google has it indexed (which is how I found it) and it is full of Viagra links and the like. The program was by Matt Kruse; here are details of the exploit: http://www.securiteam.com/exploits/5IP040A1QI.html

    I've got the school to remove that, but I think they also have MySQL installed, and I'm aware that out of the box there have been some exploits of admin tools / login in old versions. For all I know they also have phpBB and the like installed. The school is just using some cheap, shared hosting; the HTTP response header I get is:

        Apache/1.3.29 (Unix) (Red-Hat/Linux) Chili!Soft-ASP/3.6.2 mod_ssl/2.8.14 OpenSSL/0.9.6b PHP/4.4.9 FrontPage/5.0.2.2510

    I'm looking for some means of checking whether they have other junk installed (quite possibly from way back, and now unused) that might put the site at risk. I'm more interested in something that can scan for things like the MySQL admin exploit than for open ports etc. My guess is that they have little control over the hosting space they have - but I'm a Windows dev, so this *nix stuff is all Greek to me.

    I found http://www.beyondsecurity.com/ which looks like it might do what I want (within their evaluation :) ), but I worry about how to find out whether they are well known / honest - otherwise I will be tipping them the wink with a domain name that may be at risk! Many thanks.
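
    For exactly this kind of sweep - known-vulnerable scripts and outdated server components rather than open ports - a common free starting point is Nikto. A minimal sketch (the hostname is a placeholder, since the school's domain isn't given here); only run it with the site owner's permission:

        # Scan a web server for known vulnerable CGI/PHP scripts, outdated
        # software versions and common misconfigurations.
        nikto -h www.example.sch.uk -p 80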

  • mod_cache not working

    - by Pistos
    I have a PHP site that has many dynamically generated pages. I'm trying to turn to mod_cache to help boost performance, because in most cases content does not change within a given day. I have configured mod_cache as best I could, following examples around the web, including the mod_cache page on apache.org. When I set LogLevel debug, I see a bit of information about the caching that is [not] happening. There are plenty of pairs of lines like this:

        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(141): Adding CACHE_SAVE filter for /foo/bar
        [Fri Jun 01 17:28:18 2012] [debug] mod_cache.c(148): Adding CACHE_REMOVE_URL filter for /foo/bar

    Which is fine, because I've set CacheEnable disk /foo to indicate that I want everything under /foo cached. I'm new to mod_cache, but my understanding of these lines is that mod_cache has acknowledged that the URL is supposed to be cached; there should then be further lines indicating that it is saving the data to the cache and later retrieving it on subsequent hits to the same URL. I can hit the same URL till I'm blue in the face, whether with F5 refreshing or not, with different browsers, or with different computers. It's always that pair of lines that shows in the logs, and nothing else.

    When I set CacheEnable disk /, I do see more activity. But I don't want to cache the entire site, and there are many, many different subpaths, so I don't want to have to modify code to set no-cache headers in all the necessary places. I'll mention that mod_rewrite is in use here, rewriting /foo/bar to something like index.php?baz=/foo/bar, but my understanding is that mod_cache uses the pre-rewrite URL, not the post-rewrite URL. As far as I can tell, the response headers are not getting in the way of caching. Here's an example from one hit:

        Cache-Control: must-revalidate, max-age=3600
        Connection: Keep-Alive
        Content-Encoding: gzip
        Content-Length: 16790
        Content-Type: text/html
        Date: Fri, 01 Jun 2012 21:43:09 GMT
        Expires: Fri, 1 Jun 2012 18:43:09 -0400
        Keep-Alive: timeout=15, max=100
        Pragma:
        Server: Apache
        Vary: Accept-Encoding

    The mod_cache config is as follows:

        CacheRoot /var/cache/apache2/
        CacheDirLevels 3
        CacheDirLength 2
        CacheEnable disk /foo

    What is getting in the way of mod_cache doing its job of caching?
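
    One detail that stands out: the Expires header ("Fri, 1 Jun 2012 18:43:09 -0400") is not in the RFC 1123 format HTTP requires (two-digit day, GMT zone), and a date mod_cache cannot parse can make every response look uncacheable or already stale. A hedged sketch of directives that commonly unblock disk caching while that is verified (values are illustrative, not taken from the question):

        CacheEnable disk /foo
        # Cache responses that lack a Last-Modified header:
        CacheIgnoreNoLastMod On
        # Don't let a client's Cache-Control: no-cache (e.g. F5 refresh) bypass the cache:
        CacheIgnoreCacheControl On
        # Upper bound on how long entries may live (seconds):
        CacheMaxExpire 86400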

  • Problems with Synaptics touchpad

    - by penlite
    The "enhanced" features of the synaptics touchpad (ie scroll, pinch zoom) on my windows 7 64 bit dell studio 17 only work intermittently. This was an issue with the original OEM driver and also with the new driver from synaptics - please do not suggest that i update, reinstall, install as administrator, check or jiggle the driver - if i hear that one more time i will freak out. I have researched this for days and appears to be a common problem without a solution (many suggestions to update the driver though -argh). Dell and synaptics have not been helpful. I have checked that syntpehn.exe and syntphelper.exe are running (sometimes the UAC seems to block syntpehn.exe from running at startup). When the touchpad fails i can still use it to control the pointer and access the synaptics settings although some of the options are disabled (greyed out or simply unresponsive). It is the only pointing device i have installed - i had a logitech trackball attached but have uninstalled all software and drivers (including hidden devices) as per synaptic. It is also the only device using IRQ 12. Any help would be greatly appreciated except "update the driver".

  • SQL Server Agent job to execute SSIS package fails, package succeeds if run manually

    - by growse
    I've got an SSIS package installed on a SQL Server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it to a local table. The remote connection string uses SQL Server authentication, while the local connection uses Windows auth. The remote connection password is protected, and the package was imported with the protection level set to "Rely on server storage and roles for access control".

    If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL Server Agent runs under, and then run the package using dtexec, it works. If I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL Server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following:

        Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.".

    If I try to add the password to the connection string manually under data sources in the job step dialog, it refuses to save it, always stripping the password out of the connection string. I thought SQL Server Agent jobs always ran under the context of the account the SQL Server Agent runs under. That account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when run as an Agent job?
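
    A common workaround while diagnosing this is to bypass the SSIS job step entirely: make the job step type "Operating system (CmdExec)" and have it run the exact dtexec command line that already works, so the Agent executes it as a plain process. A hedged sketch (the package path is a placeholder, assuming MSDB storage as the "server storage" protection level suggests):

        :: Same invocation that succeeds interactively, run by the Agent as a CmdExec step.
        dtexec /SQL "\MyPackage" /SERVER localhost

    If the CmdExec step succeeds where the SSIS step fails, the difference lies in how the SSIS subsystem loads the package (and decrypts its stored password), not in the account it runs under.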

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top, there is nothing above 10%. The individual processes do not add up to 100%. I tried killing lots of processes, but the overall usage continues to be 100%. How can I find out what's hogging the CPU?

    This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or whether I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading.

    Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: how did a single process manage to use 100% CPU while top only showed it at 5%? How do I identify culprits like this in the future? It looks like I'm not the only one who's had this problem: it is still a problem in both Jaunty and Karmic. Interestingly, both System Monitor and htop show the sum of individual processes as nowhere near 100% CPU.
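
    When the per-process numbers don't add up to the system-wide figure, a few shell checks usually localize the missing time (kernel threads, iowait, or load hiding in short-lived spikes between top's sampling intervals). A sketch:

        # Sum per-process CPU as ps sees it, to compare with the global gauge.
        ps -eo pcpu --no-headers | awk '{s += $1} END {print "sum of per-process %CPU:", s}'

        # Per-thread view: a single hot thread can hide inside a quiet process.
        top -H -b -n 1 | head -n 20

        # Where the time actually goes: watch the us/sy/id/wa columns for five seconds.
        vmstat 1 5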

  • Setfacl configuration issue in Linux

    - by Balualways
    I am configuring a Linux server with ACLs (access control lists). It is not allowing me to perform a setfacl operation on one of the directories, /xfiles. I am able to perform setfacl on other directories such as /tmp and /opt/applocal/. I am getting this error:

        root@asifdl01devv # setfacl -m user:eqtrd:rw-,user:feedmgr:r--,user::---,group::r--,mask:rw-,other:--- /xfiles/change1/testfile
        setfacl: /xfiles/change1/testfile: Operation not supported

    My /etc/fstab is defined as:

        /dev/ROOTVG/rootlv    /            ext3    defaults         1 1
        /dev/ROOTVG/varlv     /var         ext3    defaults         1 2
        /dev/ROOTVG/optlv     /opt         ext3    defaults         1 2
        /dev/ROOTVG/crashlv   /var/crash   ext3    defaults         1 2
        /dev/ROOTVG/tmplv     /tmp         ext3    defaults         1 2
        LABEL=/boot           /boot        ext3    defaults         1 2
        tmpfs                 /dev/shm     tmpfs   defaults         0 0
        devpts                /dev/pts     devpts  gid=5,mode=620   0 0
        sysfs                 /sys         sysfs   defaults         0 0
        proc                  /proc        proc    defaults         0 0
        /dev/ROOTVG/swaplv    swap         swap    defaults         0 0
        /dev/APPVG/home       /home        ext3    defaults         1 2
        /dev/APPVG/archives   /archives    ext3    defaults         1 2
        /dev/APPVG/test       /test        ext3    defaults         1 2
        /dev/APPVG/oracle     /opt/oracle  ext3    defaults         1 2
        /dev/APPVG/ifeeds     /xfiles      ext3    defaults         1 2

    I have a Solaris server where the vfstab is defined as:

        #device                    device                      mount        FS     fsck  mount    mount
        #to mount                  to fsck                     point        type   pass  at boot  options
        fd                         -                           /dev/fd      fd     -     no       -
        /proc                      -                           /proc        proc   -     no       -
        /dev/vx/dsk/bootdg/swapvol -                           -            swap   -     no       -
        swap                       -                           /tmp         tmpfs  -     yes      size=1024m
        /dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol /            ufs    1     no       logging
        /dev/vx/dsk/bootdg/var     /dev/vx/rdsk/bootdg/var     /var         ufs    1     no       logging
        /dev/vx/dsk/bootdg/home    /dev/vx/rdsk/bootdg/home    /home        ufs    2     yes      logging
        /dev/vx/dsk/APP/test       /dev/vx/rdsk/APP/test       /test        vxfs   3     yes      -
        /dev/vx/dsk/APP/archives   /dev/vx/rdsk/APP/archives   /archives    vxfs   3     yes      -
        /dev/vx/dsk/APP/oracle     /dev/vx/rdsk/APP/oracle     /opt/oracle  vxfs   3     yes      -
        /dev/vx/dsk/APP/xfiles     /dev/vx/rdsk/APP/xfiles     /xfiles      vxfs   3     yes      -

    I am not able to find out the issue. Any help would be appreciated.
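
    "Operation not supported" from setfacl on ext3 almost always means the filesystem was mounted without the acl option, and the fstab above mounts /xfiles with plain "defaults". A sketch of the usual fix (run as root; the device path matches the fstab entry above):

        # Remount with ACL support to test immediately:
        mount -o remount,acl /xfiles

        # Make it permanent, either in /etc/fstab ...
        #   /dev/APPVG/ifeeds  /xfiles  ext3  defaults,acl  1 2
        # ... or as a default mount option stored in the superblock:
        tune2fs -o acl /dev/APPVG/ifeeds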

  • Remote management interface for managing ip6tables (or an alternative firewall)

    - by Matthew Iselin
    I'm working with IPv6 and have run into an issue configuring ip6tables on our main router in order to control what can come into the network. A default DROP rule in the FORWARD chain has worked well (leaving ESTABLISHED,RELATED as ACCEPT, obviously) to keep internal clients' open ports from being accessed. However, running an ip6tables command for every little change is unwieldy. While we are able to continue creating rules manually, I'm wondering if there's some sort of management interface we could use to create the rules quickly and easily. We're looking to save time working on our firewall, as well as to provide a simple method of modifying rules for those who will eventually replace us.

    I know webmin (heavily locked down on our network, naturally) has support for modifying iptables rules, but seemingly no support for ip6tables. Something similar would be fantastic. Alternatively, suggestions for a firewall solution other than iptables/ip6tables that can be managed remotely wouldn't be out of order. A web interface for management is certainly preferable, even if it is just a wrapper with shiny buttons over the raw config files.
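
    Even without a GUI, ip6tables-save and ip6tables-restore already give most of what's described: keep the whole ruleset in one text file (which can live in version control), edit it, and load it atomically instead of issuing one-off commands. A sketch, with an illustrative path:

        ip6tables-save > /etc/ip6tables.rules      # snapshot the live ruleset to a file
        vi /etc/ip6tables.rules                    # edit rules as plain text
        ip6tables-restore < /etc/ip6tables.rules   # apply the whole set in one atomic step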

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level system admin tools such as Chef, Puppet, cfengine, bcfg2 and more. My understanding, and the reason I call them "low level", is that they are great for system-level administration tasks such as setting up a mount, file permissions, or users, but weren't designed for, say, Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task it can get uncomfortable. ControlTier, on the other hand, seems to have been designed just for that - Java application deployments - at least that's what all the tutorials on its site demonstrate.

    But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open-source CT product is very responsive (pushy...). However, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters - nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: first, it makes me suspicious about the reason for the poor adoption; and second, what do I do if I get stuck, there's no help page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing.

    So my question: I'd be interested to hear whether anyone has real-world experience with CT for Java-based web app deployments, and whether you'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course.

  • Using GUI FTP on Win7 and Vista without additional software

    - by Stephen Jones
    Goal: provide a "no-software" method for less technical users to access a password-protected FTP location from Windows 7 and Vista (the existing approach for Windows XP works). "No software" means without installing additional software (e.g. FileZilla, WinSCP) - the solution is supplied to external non-technical users.

    Windows XP (works): using Windows Explorer, XP supports non-technical FTP access by pasting

        ftp://username:[email protected]

    into the address bar. The remote FTP site's files and directory structure become available and can be copied to and from easily (in the style of local file copy/paste) by a less technical user.

    Windows 7 / Vista (doesn't work): pasting the same URL into Windows Explorer on Win7 or Vista causes an error:

        An error occurred opening that folder on the FTP server. Make sure you have
        permission to access that folder.
        Details: The connection with the server was reset.

    Notes:
    a) The same username/password/server typed from the (DOS) command line achieves access to the server, but this is a more technical solution than desired. I am looking for a Windows XP equivalent solution.
    b) Under Control Panel / Internet Options / Advanced, the boxes for "Enable FTP folder view" and "Use Passive FTP" are ticked (enabled).
    c) Adding an inbound firewall rule for local port 20 (TCP) was attempted, with no difference in results (i.e. failure).
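
    If Explorer can't be made to cooperate, one "no install" fallback that stays within stock Windows is a small batch file driving the built-in ftp.exe from a command script, which a non-technical user can simply double-click. A sketch with placeholder host, credentials and file name (note that ftp.exe only does active mode, so the firewall must allow it):

        @echo off
        rem Write a throwaway command script for ftp.exe, then run it.
        > "%TEMP%\ftpcmds.txt" (
            echo open ftp.example.com
            echo username
            echo password
            echo binary
            echo get report.pdf
            echo bye
        )
        ftp -s:"%TEMP%\ftpcmds.txt"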

  • LG W3000H-BN monitor cannot go above 1280x800

    - by Jo Profit
    I noticed that there are many people complaining about this issue with the W3000H, but I have yet to find a solution that works for me. I am using Windows 7 Professional and an nVidia Quadro NVS 240 video card with a 4-monitor splitter cable. The cable from the monitor and the splitter are rated DVI-D dual link, and the video card itself is rated for 2560x1600. I have installed the latest drivers for the video card and grabbed the .inf, .icm and .cat files from the LG website to install the monitor drivers manually.

    Does anyone have problems with the same setup? I have 3 other monitors (two at 1920x1080 and one at 1280x1024). I really would like to be able to display the full resolution, or else the large screen is useless. (I have triple-checked that the monitor itself supports this resolution.) So monitor, cable, splitter and card all supposedly support 2560x1600, and the drivers are up to date, but I cannot select that resolution in the "Screen Resolution" menu, nor through the nVidia control panel. Please save me from madness :)

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code they document. My Subversion client (TortoiseSVN) comes with a little VBA script that leverages the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source document tagged as revisions, so I can then review the differences one by one and confirm or discard each change.

    While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the one actual change. My question is: do you know of an application or plug-in that's able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old, but that's what my company gives me; I'm open to solutions that use newer versions, though.)

  • No internet access when using static IP

    - by Endy Tjahjono
    I have just upgraded to Windows 8.1, and after the upgrade process finished I couldn't connect to the internet. I ran "Troubleshoot problems", which concluded that DHCP needed to be activated. I let it activate DHCP, and I got my internet connection back. The problem is that I want to give this PC a static IP address (the address it has been using all this time). I am also using Hyper-V, which I suspect has something to do with this problem.

    After I regained internet connection, I ran one of my Hyper-V VMs. From inside the VM I can connect to the internet, and that VM has a static IP address. I also noticed that in "Control Panel\Network and Internet\Network Connections" I usually have a network connection called vEthernet (Realtek PCIe GBE Family Controller Virtual Switch); it was gone after the upgrade. How do I set my PC to a static IP while retaining internet access in Windows 8.1?

    EDIT: I have managed to recreate vEthernet (Realtek PCIe GBE Family Controller Virtual Switch) by unchecking "Allow management operating system to share this network adapter" in Hyper-V's Virtual Switch Manager and then checking it again. But when I changed the adapter to use a static IP, it still can't connect to the internet. Result of Get-NetAdapter -Name * | fl (with MAC addresses removed):

        Name                       : vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #2
        InterfaceIndex             : 5
        MacAddress                 : 55-55-55-55-55-55
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet 3
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #3
        InterfaceIndex             : 6
        MacAddress                 : 55-55-55-55-55-56
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Gbps)            : 10
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet
        InterfaceDescription       : Realtek PCIe GBE Family Controller
        InterfaceIndex             : 2
        MacAddress                 : 55-55-55-55-55-57
        MediaType                  : 802.3
        PhysicalMediaType          : 802.3
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : True
        DriverInformation          : Driver Date 2013-05-10 Version 8.1.510.2013 NDIS 6.30
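
    With an external virtual switch sharing the physical NIC, the static IP settings have to live on the vEthernet adapter; the physical "Ethernet" adapter is effectively just a switch port and should keep nothing bound but the virtual-switch protocol. A sketch of setting the address from an elevated prompt (addresses, mask, gateway and DNS are placeholders for the LAN's actual values):

        netsh interface ipv4 set address name="vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)" static 192.168.1.50 255.255.255.0 192.168.1.1
        netsh interface ipv4 set dnsservers name="vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)" static 192.168.1.1 primary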

  • Windows XP over two monitors, but one of them switches off at boot... how to fix? How to switch back?

    - by jae
    When booting into XP (x64, Athlon II X2 245, 4GB RAM), my main monitor switches off. I have two 19" TFTs hooked up to two graphics cards: a 4650 (1GB, the primary monitor is on this) and a 4350 (512MB). Logging in blind (cursor-down key, typing the password) gets me one screen, the secondary. It booted correctly until about two days ago. I have no clue what the cause is; the last change (unless I'm overlooking something) was installing the ATI 9-12 hotfix and booting into Windows 7. After returning from 7, it was like this.

    For some weird reason, I cannot start Catalyst Control Center: I right-click the desktop, choose the CCC entry, the pointer changes to an hourglass for half a second... and nothing. Likewise with "Properties" - I think all windows open on the primary (off) screen, and no entry appears in the task bar for Properties. Completely stumped. Windows 7, same setup, works without a hitch. The primary monitor appears to run at some unknown, but pretty low, resolution, as the mouse pointer only moves onto it at about half-height. But without CCC or display properties, I cannot check, and obviously cannot change anything. Hope this was not too long-winded. And I'm sure I still forgot essential stuff. :P

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through re-arranging our network and VLAN configuration. Here's the situation: we already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end-user devices. There are just too many to lump onto the same VLAN without being hammered with broadcasts! Our current segmentation strategy splits them into VLANs like this:

        Student iPads
        Staff iPads
        Student MacBooks
        Staff MacBooks
        Gaming devices
        Staff (other)
        Student (other)

    (Note that our network has many more iPads and MacBooks than most.) Since the primary reason for splitting them is just to put them in smaller groups, this has mostly been working for us. However, it requires our staff to maintain access control lists (MAC addresses) of all devices belonging to these groups. It also has the unfortunate side effect of grouping broadcast traffic illogically. For example, with this setup, students on opposite ends of campus using iPads share broadcasts, while two devices belonging to the same user (in the same room) will likely be on completely separate VLANs.

    I feel like there must be a better way of doing this. I've done a lot of research and am having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location, which makes sense because, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices. Are there other campuses or large-scale networks out there segmenting VLANs by end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criterion) be more effective?

    EDIT: I've been told that we will soon be able to determine device OS dynamically without maintaining access lists, although I'm not sure how much that affects the answers to these questions.

  • Closing laptop lid and using external display causes mouse to move on its own

    - by PolishHurricane
    I plugged my ASUS Taichi laptop into an external monitor via the VGA adapter it comes with. It was working fine, but then I configured a power option so I could shut the lid without the laptop going to sleep (so I could use just the external monitor with an external keyboard and mouse). The problem is that now, when the lid is closed and I move the mouse, the pointer moves around on its own; when I open the laptop lid back up, the mouse is fine. I looked under the lid when it was closed, and the display properly shuts off on the laptop.

    It's not the external mouse/keyboard, because I completely unplugged them and it was still happening. Nobody is hacking me or anything; I went into airplane mode and pulled the wire. I have a touchscreen, but I put a piece of paper over it and it wasn't the cause. I was thinking the trackpad might somehow be touched by the screen when the lid is closed? But I went into the Windows 8 Control Panel options and couldn't find anywhere to disable it (it sees the USB mouse, I think).

  • Creating metapackages

    - by olpanis
    I've got some problems creating metapackages. I tried to create a metapackage containing forensic toolkits; on Ubuntu I even had problems creating the .deb file:

        dpkg-source: error: can't build with source format '3.0 (quilt)': no orig.tar file found
        dpkg-buildpackage: error: dpkg-source -b forensics-0.1 gave error exit status 255

    Later I tried it using Debian 5. Creating the .deb works, but there I ran into problems with Depends:

        debian:/home/matthias/Desktop/meta# dpkg --install *.deb
        Selecting previously deselected package meta.
        (Reading database ... 96897 files and directories currently installed.)
        Unpacking meta (from meta_0.1-1_i386.deb) ...
        dpkg: dependency problems prevent configuration of meta:
         meta depends on python (>= 2.6.6-2); however:
          Version of python on system is 2.5.2-3.
        dpkg: error processing meta (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         meta

    The control file looks like this:

        Source: meta
        Section: unknown
        Priority: extra
        Maintainer: root <[email protected]>
        Build-Depends: debhelper (>= 7)
        Standards-Version: 3.7.3
        Homepage: <insert the upstream URL, if relevant>

        Package: meta
        Architecture: any
        Depends: python (>= 2.6.6-2)
        Description: short description
         forensic toolkits

    Any ideas? Kind regards.
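
    Both errors have simple escapes. The source-format error goes away if the package is declared native (a metapackage has no upstream tarball to provide an orig.tar), and the Depends failure is just the control file demanding a newer python than Debian 5 ships. That said, the path of least resistance for a pure metapackage is equivs, which builds a .deb from nothing but a control file. A sketch:

        # Fix the original build: declare a native source package (no orig.tar needed).
        echo "3.0 (native)" > debian/source/format

        # Or skip dpkg-buildpackage entirely and use equivs for the metapackage:
        sudo apt-get install equivs
        equivs-control forensics     # writes a template control file named "forensics"
        # edit "forensics": set Package:, Version:, and a Depends: line listing the toolkits
        equivs-build forensics       # produces forensics_*.deb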

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server, file sizes are between ~15MB and 1GB, and a typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file.

    When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically:

        6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf)
        16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA)

    I haven't tested other ciphers, but from what I can see of the CPU usage during the transfer, vsftpd seems to use only a single CPU/core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
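
    Those numbers are about what software 3DES manages on one core, so the cipher choice, rather than vsftpd itself, looks like the bottleneck. AES is far faster in software (and hardware-accelerated on CPUs with AES-NI), so a hedged first step is to benchmark the ciphers locally and then point vsftpd at an AES suite:

        # Compare raw single-core cipher throughput on the server:
        openssl speed des-ede3 aes-128-cbc

        # vsftpd.conf: negotiate AES instead of 3DES (one possible suite):
        #   ssl_ciphers=AES128-SHA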

  • Fully FOSS email solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements:

        Approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-))
        Mailing lists for different groups, with a web-based archive - Mailman? Sympa?
        Centralised identity store - OpenLDAP? Fedora 389DS?
        Secure IMAP only, no POP3 required - Courier? Dovecot? Cyrus?
        Anti-spam - SpamAssassin? What else?
        Calendaring - ??
        Webmail - good to have, not mandatory, needs to be very secure... so SquirrelMail is out ;-)?

    Other questions:

        What mailbox storage format to use, and where to store it - database or file system?
        Simple and effective HA options?
        Is there a web proxy equivalent to Squid in the mail server world?
        Software load balancers? CARP?
        Monitoring and alerting?
        Backup?

    The government wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach, in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would think of these sorts of initiatives. As for OS: Debian or FreeBSD would be first preference, with CentOS as a second-tier option; commercial OSes need not apply.

  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical / 24 logical cores), 192GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8TB. The app reads from only one drive at a time, and the nature of the data is such that if (when) a drive dies, I can restore the data to a new disk from tape. While it would be nice to use the fastest drives and controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2TB drives and putting them into a bunch of cheap enclosures. All this is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:

        Is the cheap drive/enclosure solution the best one for this situation?
        Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
        For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
        For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
        If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12?
        Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would assume the server to be bored by the data flow, but as usual with this server, it really slowed down. At least I could work within the remote session, though even there I saw some serious latency. The copy took its time (20 min?). During this time I went to a colleague, and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... Icons were loading maybe one per second. We saw the following (from memory):

        CPU: 2%
        Avg. Queue Length: 50
        Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

        Windows 2003
        Seagate ST3500631NS (7200 rpm, 500 GB)
        LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
        Write Through
        No read-ahead
        Direct Cache Mode
        Harddisk cache mode: off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!
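
    A queue length of 50 with 2% CPU points straight at the RAID 5 write path: with write-through, no read-ahead and the disk caches off, every small write pays the full parity read-modify-write penalty. If the controller has (or can be fitted with) a battery-backed cache, switching the logical drive to write-back usually transforms exactly this workload. A hedged MegaCLI sketch (binary name and flags vary by controller and version; -LAll/-aAll address all logical drives and adapters):

        MegaCli -LDSetProp WB -LAll -aAll      # write-back cache (needs a healthy BBU)
        MegaCli -LDSetProp RA -LAll -aAll      # enable read-ahead
        MegaCli -LDGetProp -Cache -LAll -aAll  # confirm the resulting cache policy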

  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have spent quite a while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green; I fill in the appropriate information regarding the site name, MySQL details, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so, as Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so even if I managed to delete this folder, I have a feeling there would be problems. I suspected some kind of permissions issue and gave IIS_IUSRS read, write, and execute permissions on the entire folder Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg

    EDIT: I decided to try installing Joomla by hand, editing the configuration.php file myself. This has worked great, and now I am certain there is some kind of permissions issue going on: I am able to do everything that involves the MySQL database (create an article, edit menu items, etc.), but anything that involves writing to the Joomla installation's directory does not work (installing plugins, editing configuration settings via Joomla's Global Configuration menu, etc.). I have granted IIS_IUSRS every permission except Full Control (reading the Joomla! forums suggests this should be enough for everything to work). This is confusing to me, and I am quite stuck on this problem.

    EDIT 2: The bizarre thing is that in System Info, under Directory Permissions, everything shows as Writable, but whenever I try to actually use Joomla to, for example, edit the configuration.php file through the interface, it says it is unable to edit the file.
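
    When the permission checkboxes look right but writes still fail, the usual suspects on IIS 7.0 are inheritance (the grant not propagating to files Joomla creates) or the worker process running as an identity outside IIS_IUSRS (e.g. the application pool identity). A hedged sketch from an elevated command prompt; the path is a placeholder for the actual Joomla directory:

        :: (OI)(CI) makes the Modify grant inherit to all new files and subfolders;
        :: /T re-applies it to everything already in the tree.
        icacls "C:\inetpub\wwwroot\joomla" /grant "IIS_IUSRS:(OI)(CI)M" /T

    If that doesn't help, it's worth checking which identity the site's application pool runs under (IIS Manager > Application Pools) and granting that account the same rights.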

  • DriveImage XML fails with a Windows Volume Shadow Service Error

    - by ssvarc
    I'm trying to image a SATA laptop hard drive, attached to my computer via a USB adapter, using DriveImage XML. I'm running Windows 7 Ultimate 64-bit. DriveImage XML returns:

        Could not initialize Windows Volume Shadow Service (VSS).
        ERROR C:\Program Files (x86)\Runtime Software\DriveImage XML\vss64.exe failed to start.
        ERROR TIMEOUT
        Make sure VSSVC.EXE is running in your task manager. Click Help for more information.

    VSSVC.EXE is running in Task Manager, as is vss64.exe. The FAQ on the Runtime web page says:

        Please verify in Settings - Control Panel - Administrative Tools - Services that the
        following services are enabled: MS Software Shadow Copy Provider, Volume Shadow Copy.
        Also make sure you are able to stop and start these services. Possible reasons for
        VSS failures: for VSS to work, at least one volume in your computer must be NTFS.
        If you use only FAT drives, VSS will not function. The required NTFS volume does not
        need to be identical with the volume you want to image. You should make sure that
        VSSVC.EXE is running in your task manager. If the problems persist, registering
        "oleaut.dll" and "oleaut32.dll" using "regsvr32" might help.

    Both of those services are running and can be stopped and started without issue. Using regsvr32 to register oleaut32.dll succeeds, but oleaut.dll returns:

        The module "oleaut.dll" failed to load. Make sure the binary is stored at the
        specified path or debug it to check for problems with the binary or dependent
        .DLL files. The specified module could not be found.

    Some other information that might be relevant: browsing to the drive succeeds, but accessing certain folders returns an "access" error, and Windows runs a permissions dialog that adds the current user profile to the NTFS permissions. Could this be the cause of the issue? DriveImage XML is running as Administrator. Thoughts?
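
    The oleaut.dll failure is probably a red herring: that file does not ship with 64-bit Windows 7 (the current library is oleaut32.dll), so the FAQ likely just predates your OS. Before re-registering anything else, it's worth asking VSS itself what state it is in; a sketch from an elevated command prompt:

        rem Any writer in a failed state usually names the culprit:
        vssadmin list writers
        rem Confirms the software shadow-copy provider is registered:
        vssadmin list providers
        rem Shows whether any snapshots are being created at all:
        vssadmin list shadows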

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection over IPv4 or IPv6, whether set up as DHCP or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The Ethernet controller is reported as X540-AT2 (which might be a problem I can't figure out how to fix), but the subsystem is X540-T1, so I believe that might be intended. Does anyone have any experience with this that could assist?

        ifconfig eth2
        eth2   Link encap:Ethernet  HWaddr a0:36:9f:14:5f:ea
               inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
               UP BROADCAST MULTICAST  MTU:1500  Metric:1
               RX packets:0 errors:0 dropped:0 overruns:0 frame:0
               TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes

        lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
            Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0, Cache Line Size: 32 bytes
            Interrupt: pin A routed to IRQ 16
            Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
            Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
            [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: ixgbe
            Kernel modules: ixgbe
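
    The X540-AT2 ID is most likely expected: the single-port X540-T1 adapter is built on the dual-port X540-AT2 controller silicon, which is why the subsystem string still reads X540-T1. Since ethtool reports "supports-test: yes", the card's own diagnostics are a cheap next step before swapping hardware; a sketch:

        # Offline self-test of the NIC (takes the interface down briefly):
        ethtool -t eth2 offline

        # Link state and which speeds the two ends advertised/negotiated:
        ethtool eth2

        # What the driver said while probing the port and watching for link:
        dmesg | grep -i ixgbe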
