Search Results

Search found 926 results on 38 pages for 'nokia lumia 900'.


  • Ballooning Mac OS X kernel_task and Wired memory usage. How to diagnose / fix?

    - by user28930
    I have a very strange issue, which I'm having a hard time diagnosing as to the root cause. I have a Mac Pro (2008, 8-core 2.8 GHz, 8800GT) with 14 GB of RAM (recently upgraded because of this issue!). When I boot my system and log in, vm_stat / top / Activity Monitor will show that kernel_task has about 150 MB allocated, and the machine has about 800 MB of Wired memory allocated. Even initially, 800 MB seems an awful lot of wired memory to be allocated with no applications running - but it gets worse. (NB: Wired is locked, unswappable memory.)

    After a very short time, sometimes triggered by something as simple as launching a terminal, kernel_task will balloon to 800-900 MB of Real Mem (RSIZE), and Wired memory will accelerate to 1.6 GB (implying that all the extra memory requests are for wired RAM in the kernel). If I quit everything (i.e. no running applications, bar an Activity Monitor or terminal to view top), there is no appreciable reduction in either kernel_task RSIZE or Wired memory usage. Going the opposite way and loading the system with tasks also shows that wired memory does not get reduced - and, importantly, that it is not reduced in preference to heavy swapping. If I log out and log back in again, it reduces a bit (450 MB kernel_task, 1.28 GB Wired), but not back to the start.

    I'm not running any wacky kexts - and furthermore, kextstat shows no huge memory allocations there; the largest is com.apple.nvidia.nv50hal at about 4 MB. The machine feels overall more sluggish when this has happened - unsurprisingly, because such a huge amount of RAM has been marked as non-pageable. So I have a few questions:

    1) Is there a good way to diagnose what has allocated all of this wired memory? It's often over 2 times the kernel_task size, running no applications. The real memory total doesn't seem to add up - it seems that there is a bunch of RAM that isn't being accounted for anywhere.

    2) What is happening to cause the kernel to suddenly require 6 times as much memory?
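
    A starting point for question 1, as a hedged sketch (assumes the stock vm_stat and kextstat tools, and the 4 KB page size this hardware uses):

        # Wired memory in MB (vm_stat reports counts of 4 KB pages here)
        vm_stat | awk '/Pages wired down/ { printf "wired: %.0f MB\n", $4 * 4096 / 1048576 }'
        # Log it once a minute to correlate growth with specific activity
        while sleep 60; do
            printf '%s ' "$(date '+%H:%M:%S')"
            vm_stat | awk '/Pages wired down/ { printf "%.0f MB\n", $4 * 4096 / 1048576 }'
        done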

    Read the article

  • Odd IIS FTP Failure

    - by Monkey Boson
    We're running a script on our production box that zips up our database and FTPs it to a backup box every night. Our production box is running Red Hat Enterprise 5. Our backup box is running Windows XP Pro / IIS 5.1. Both machines are on the same VLAN (not sure if this is important). The backup file usually clocks in at around 3 GB.

    Every now and again (~5% of the time), the backup script fails. The shell script on the "client side" - which looks at return codes - never identifies any problem, since ftp always returns 0. On the "server side", IIS writes out a log that looks like this:

        #Software: Microsoft Internet Information Services 5.1
        #Version: 1.0
        #Date: 2009-08-08 07:04:25
        #Fields: time c-ip cs-method cs-uri-stem sc-status sc-win32-status
        07:04:25 192.168.111.235 [15]USER backup 331 0
        07:04:25 192.168.111.235 [15]PASS - 230 0
        07:05:54 192.168.111.235 [15]created backup_20090808.zip 426 10035
        07:06:16 192.168.111.235 [15]QUIT - 426 0

    Now, I know that 426 means "Connection closed, transfer aborted", which is sort-of a catch-all for "IIS was not happy". The real puzzler is the Win32 status code: 10035 (WSAEWOULDBLOCK - resource temporarily unavailable). My understanding is that this code is normal when using non-blocking socket calls - which would almost certainly be used by any FTP server implementation. My first guess, that it might be a timeout issue, doesn't make sense, since we're only talking about a few minutes here and the timeout was left at the default 900 s. Does anybody have any ideas about what is causing this problem, and how it may be fixed? Thanks!
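
    Since ftp(1)'s exit status is useless here, one common workaround is to parse the client-side transcript instead; a hedged sketch (host, credentials, and filename are placeholders):

        # ftp(1) exits 0 even when a transfer fails, so grep the transcript
        ftp -inv backuphost <<'EOF' > /tmp/ftp.log 2>&1
        user backup secret
        binary
        put backup_20090808.zip
        quit
        EOF
        # A completed upload ends with a "226 Transfer complete" reply
        grep -q '^226' /tmp/ftp.log || { echo "FTP upload failed" >&2; exit 1; }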

    Read the article

  • Dual Monitor support rdp 7 to win 7 on esxi

    - by rphilli5
    I am trying to RDP from a Windows 7 Professional dual-monitor physical machine to a Windows 7 Professional VM hosted on ESXi 4.0. I can get the spanning option to work across both monitors, but I have tried 3 different methods of connecting and have not been able to use true multiple monitors. At different times, I tried checking the "use all monitors" option, the command line mstsc /multimon, and adding the line use multimon:i:1 to the .rdp file. None of these worked. Any ideas? The physical machine can connect to other Windows 7 physical machines with true multi-monitor access. I also have the same issue when going from a 32-bit RC1 machine to a Windows 7 Professional x64, but not when going in the reverse direction. Here's the .rdp:

        screen mode id:i:2
        use multimon:i:1
        desktopwidth:i:1440
        desktopheight:i:900
        session bpp:i:16
        winposstr:s:0,1,341,118,1139,568
        compression:i:1
        keyboardhook:i:2
        audiocapturemode:i:0
        videoplaybackmode:i:1
        connection type:i:1
        displayconnectionbar:i:1
        disable wallpaper:i:1
        allow font smoothing:i:0
        allow desktop composition:i:0
        disable full window drag:i:1
        disable menu anims:i:1
        disable themes:i:1
        disable cursor setting:i:0
        bitmapcachepersistenable:i:1
        full address:s:192.168.1.5
        audiomode:i:0
        redirectprinters:i:1
        redirectcomports:i:0
        redirectsmartcards:i:1
        redirectclipboard:i:1
        redirectposdevices:i:0
        redirectdirectx:i:1
        autoreconnection enabled:i:1
        authentication level:i:2
        prompt for credentials:i:0
        negotiate security layer:i:1
        remoteapplicationmode:i:0
        alternate shell:s:
        shell working directory:s:
        gatewayhostname:s:
        gatewayusagemethod:i:4
        gatewaycredentialssource:i:4
        gatewayprofileusagemethod:i:0
        promptcredentialonce:i:1
        use redirection server name:i:0
        drivestoredirect:s:

    Read the article

  • PHP output buffer settings ignored by server

    - by Ecom Evolution
    I have been trying to flush the output of certain scripts to the browser on demand, but they do not work on our production server. For instance, I tried running the "Phoca Changing Collation tool" (find it on Google) and I don't see any output until the script finishes executing. I've tried immediately flushing the buffer on other scripts - which work fine on any server but this one - using the following code:

        echo "something";
        ob_flush();
        flush();

    Setting ob_implicit_flush(1); doesn't help either. The server is Apache 2.2.21 with PHP 5.2.17 running on Linux. You can see our php.ini file here if that will help: http://www.smallfiles.org/download/1123/php.ini.html

    This isn't the only problem we are having with the server ignoring in-script directives. The server also ignores timeout coding such as:

        ini_set('max_execution_time', 900*60);
        set_time_limit(86400);

    The script always times out at the php.ini default. It doesn't seem to matter whether the script is executed in IE or Firefox. I tried ini_set('zlib.output_compression_level', 'Off'); and checked that it is "Off" in the php.ini file. The code apache_setenv('no-gzip', 1); causes a fatal error, so I tried uploading a .htaccess file with the "mod_gzip_on No" directive. Neither helps. I tried running Apache as fcgi and suphp, but got the same results. The server is NOT in safe mode. Pullin ma hair out!
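
    A minimal end-to-end flush test can rule the scripts themselves out; a hedged sketch (a hypothetical flushtest.php, not part of the Phoca tool - the 4 KB padding is there because the server may hold small writes):

        <?php
        // flushtest.php - hypothetical test page
        @ini_set('zlib.output_compression', '0'); // no PHP-level gzip
        @ini_set('implicit_flush', '1');
        while (ob_get_level() > 0) { ob_end_flush(); } // drain stacked buffers
        echo str_repeat(' ', 4096); // push past any small-write buffer
        for ($i = 1; $i <= 5; $i++) {
            echo "chunk $i\n";
            flush();
            sleep(1); // chunks should reach the browser one per second
        }

    If the chunks still arrive in one burst, the buffering sits outside PHP (mod_deflate, the fcgi layer, or a proxy) - which would also fit the ignored ini_set() calls, since a host-level php_admin_value override silently wins over in-script settings.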

    Read the article

  • Openldap with ppolicy

    - by nitins
    We have a working installation of OpenLDAP version 2.4 which is using shadowAccount attributes. I want to enable the ppolicy overlay. I have gone through the steps provided in the OpenLDAP and ppolicy howto. I have made the changes to slapd.conf and imported the password policy. On restart OpenLDAP is working fine and I can see the password policy when I do an ldapsearch. The user object looks like this:

        # extended LDIF
        #
        # LDAPv3
        # base <dc=xxxxx,dc=in> with scope subtree
        # filter: uid=testuser
        # requesting: ALL
        #
        # testuser, People, xxxxxx.in
        dn: uid=testuser,ou=People,dc=xxxxx,dc=in
        uid: testuser
        cn: testuser
        objectClass: account
        objectClass: posixAccount
        objectClass: top
        objectClass: shadowAccount
        shadowMax: 90
        shadowWarning: 7
        loginShell: /bin/bash
        uidNumber: 569
        gidNumber: 1005
        homeDirectory: /data/testuser
        userPassword:: xxxxxxxxxxxxx
        shadowLastChange: 15079

    The password policy is given below:

        # default, policies, xxxxxx.in
        dn: cn=default,ou=policies,dc=xxxxxx,dc=in
        objectClass: top
        objectClass: device
        objectClass: pwdPolicy
        cn: default
        pwdAttribute: userPassword
        pwdMaxAge: 7776002
        pwdExpireWarning: 432000
        pwdInHistory: 0
        pwdCheckQuality: 1
        pwdMinLength: 8
        pwdMaxFailure: 5
        pwdLockout: TRUE
        pwdLockoutDuration: 900
        pwdGraceAuthNLimit: 0
        pwdFailureCountInterval: 0
        pwdMustChange: TRUE
        pwdAllowUserChange: TRUE
        pwdSafeModify: FALSE

    I do not know what should be done after this. How can the shadowAccount attributes be replaced with the password policy?
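
    Assuming cn=default is in force as the database's default policy, the usual next step is stripping the shadow attributes from the user entries so the two expiry mechanisms don't fight; a hedged ldapmodify sketch (the bind DN is a placeholder):

        ldapmodify -x -W -D "cn=admin,dc=xxxxx,dc=in" <<'EOF'
        dn: uid=testuser,ou=People,dc=xxxxx,dc=in
        changetype: modify
        delete: shadowMax
        -
        delete: shadowWarning
        -
        delete: shadowLastChange
        -
        delete: objectClass
        objectClass: shadowAccount
        EOF

    With ppolicy, expiry is then driven by pwdMaxAge on the policy plus the pwdChangedTime operational attribute, which slapd sets on the next password change - so nothing needs copying across from the shadow fields.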

    Read the article

  • Connection reset by peer: mod_fcgid: error reading data from FastCGI server Issues

    - by user145857
    Help is greatly needed for our server. We are experiencing random "Connection reset by peer: mod_fcgid: error reading data from FastCGI server" errors which cause a 500 internal server error. If the page is then reloaded, it loads normally as it should. We are running the worker MPM with mod_fcgid to handle PHP. We had APC cache enabled but disabled it recently to see if it would fix the problem, but the random mod_fcgid errors are still continuing. No other opcode cache is active now. Our settings are below:

        <IfModule worker.c>
        MinSpareThreads 25
        MaxSpareThreads 150
        ThreadsPerChild 25
        ThreadLimit 100
        ServerLimit 700
        MaxClients 700
        MaxRequestsPerChild 0
        </IfModule>

        <IfModule mod_fcgid.c>
        FcgidMaxRequestLen 1073741824
        FcgidMaxRequestsPerProcess 2000
        FcgidMaxProcessesPerClass 100
        FcgidMinProcessesPerClass 0
        FcgidConnectTimeout 300
        FcgidIOTimeout 900
        FcgidFixPathinfo 1
        FcgidIdleTimeout 300
        FcgidIdleScanInterval 120
        FcgidBusyTimeout 300
        FcgidBusyScanInterval 120
        FcgidErrorScanInterval 12
        FcgidZombieScanInterval 12
        FcgidProcessLifeTime 3600
        </IfModule>

    The server is a 64-core 2.1 GHz machine with 94 GB of RAM, so it has some power. Some of the fcgid timeout settings are higher because we run large reports which take up to 15 minutes. Any help is greatly appreciated! Just to clarify: the random fcgid errors occur when a user clicks a page on our site and the 500 error page loads instantly. This is random and occurs less than 1% of the time, but it is still an issue.
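
    One classic cause of exactly this error, offered as a hedged guess: PHP's own PHP_FCGI_MAX_REQUESTS (default 500) being lower than FcgidMaxRequestsPerProcess (2000 above), so PHP exits while mod_fcgid still expects to reuse the process. A sketch of a hypothetical wrapper script that rules this out:

        #!/bin/sh
        # Hypothetical fcgid wrapper. PHP quits of its own accord after
        # PHP_FCGI_MAX_REQUESTS requests (default 500); if that is lower than
        # FcgidMaxRequestsPerProcess, mod_fcgid can hand a request to a
        # process that has just exited - surfacing as "error reading data
        # from FastCGI server".
        PHP_FCGI_MAX_REQUESTS=2000
        export PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi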

    Read the article

  • How to set up TightVNC Java viewer index.html on web server?

    - by penyuan
    I've got the Java TightVNC viewer applet set up with the provided index.html on my Mac OS X 10.6.3 machine with web sharing enabled. Using a remote computer I was able to get to the webpage, but I only see a white box with an X (for error?) where the viewer is supposed to be. Any ideas on how to get this to work? I've tried setting the port (in index.html) to 5900 and 5901; neither worked. Is either of these the default VNC port for Mac OS X 10.6.3? Also, I've activated Screen Sharing and Remote Login in System Preferences, allowing VNC viewers to connect. Here is the code for my index.html:

        <HTML>
        <TITLE>
        TightVNC desktop
        </TITLE>
        <APPLET CODE="classes/VncViewer.class" ARCHIVE="classes/VncViewer.jar"
                WIDTH="1440" HEIGHT="900">
        <PARAM NAME="PORT" VALUE="5900">
        <PARAM NAME="Scaling factor" VALUE="50">
        </APPLET>
        <BR>
        <A href="http://www.tightvnc.com/">TightVNC site</A>
        </HTML>

    Again, I can get to this page, but the applet doesn't seem to work; the Java console also doesn't say anything. Thanks in advance for your help!
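
    As a quick sanity check (hedged; assumes the stock nc on 10.6), it is worth confirming from the remote machine that the Mac is reachable on the VNC port at all - Apple's Screen Sharing normally listens on 5900:

        # Run from the remote computer; the address is a placeholder
        # for the Mac's IP
        nc -vz 192.168.1.10 5900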

    Read the article

  • Hard drive had reallocated sectors...but now it magically doesn't! Can I trust it?

    - by rob
    Last week my SMART diagnostics utility, CrystalDiskInfo, reported that the external hard drive that I was saving my backups to had suddenly reported 900+ reallocated sectors. I double-checked to confirm, then ordered a replacement drive. I spent all of this week copying data from that drive to the new drive. But toward the end of the copy, something peculiar happened: CrystalDiskInfo popped up an alert that the reallocated sector count had gone back down to 0.

    I know that when SMART detects a read error on a block, it adds that block to the current pending reallocation list. If it subsequently is successfully written or read later, it is removed from the list and assumed to be fine, but if a subsequent write fails, it is marked bad and added to the reallocated sector count. What concerns me most is that I've never read anywhere that a sector can be recovered as "good" after it has been marked as a bad sector and remapped.

    I've just finished running an extended SMART diagnostic, and it found no surface errors. Now I'm doubtful that the manufacturer will honor a warranty claim if the SMART info does not report any problems. Has anyone had this happen? If so, then is the drive, indeed, okay, or should I be concerned about an imminent failure?
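
    For a second opinion on the raw counters, a hedged sketch (assumes smartmontools is installed and that the USB/FireWire bridge passes SMART through, which not every enclosure does):

        # Raw values of the remapping-related attributes (IDs 5, 196-198);
        # /dev/sdX is a placeholder for the external drive's device node
        smartctl -A /dev/sdX | egrep -i 'Reallocat|Pending|Uncorrect'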

    Read the article

  • How to get gigabit network speeds on Windows XP?

    - by JB
    We've just installed gigabit switches at work, and things on the Linux side are going well. Our Linux boxes, which use an Intel Corporation 82566DM-2 Gigabit NIC (according to lspci), consistently get over 900 Mbits/sec:

        iperf -c ipserver
        ------------------------------------------------------------
        Client connecting to ipserver, TCP port 5001
        TCP window size: 16.0 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.40.9 port 39823 connected with 192.168.1.115 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  1.08 GBytes  929 Mbits/sec

    We have a bunch of Windows XP 64-bit machines that use Broadcom NetXtreme 57xx cards. I spent around a day trying to get equivalent speeds on them, but couldn't get above 200 Mbits/sec. I noticed the Windows iperf tests said that the TCP window size was 8 KB by default (as opposed to 16 KB on Linux), so I modified my test to reflect that. Still no love. I went to Broadcom's site, downloaded the latest drivers for the card and installed them. Still no love. However, finally, I tried a 64 KB window size with the new drivers, and finally an improvement!

        $ iperf -c ipserver -w64k
        ------------------------------------------------------------
        Client connecting to ipserver, TCP port 5001
        TCP window size: 64.0 KByte
        ------------------------------------------------------------
        [  3] local 192.168.40.214 port 1848 connected with 192.168.1.115 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  933 MBytes  782 Mbits/sec

    Much better, but still not really taking advantage of the full capabilities of the network. If the Linux box can reach 950 Mbits/sec consistently, this box should be able to as well. Also, if you're wondering about the medium, this is over the same cable... I'm switching back and forth. Any suggestions or ideas would be really welcome. Thanks!
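
    A quick follow-up that may separate a windowing ceiling from a driver problem, as a hedged sketch (iperf 2.x flags; note that XP only honors windows above 64 KB once RFC 1323 window scaling is enabled via the Tcp1323Opts registry value):

        # Larger window plus four parallel streams for 30 seconds; if -P 4
        # saturates the link, the ceiling is per-connection windowing,
        # not the cable or the NIC
        iperf -c ipserver -w 256k -P 4 -t 30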

    Read the article

  • Access 2007: How can I make this EXPRESSION less complex?

    - by Mike
    Access is telling me that my new expression is too complex. It used to work when we had 10 service levels, but now we have 19! Great! My expression is checking the COST of our services in the [PriceCharged] field and then assigning the appropriate HOURS [ServiceLevel] when I perform a calculation to work out how much REVENUE each colleague has made when working for a client. The [EstimatedTime] field stores the actual hours each colleague has worked.

        [EstimatedTime]/[ServiceLevel]*[PriceCharged]

    Great. Below is the breakdown of my COST-to-HOURS expression. I've put each level on its own line to make it easier to read - please do not be put off by the length of this post, it's all the same info in the end. Many thanks, Mike

        ServiceLevel: IIf([pricecharged]=100(COST),6(HOURS),
        IIf([pricecharged]=200 Or [pricecharged]=210,12.5,
        IIf([pricecharged]=300,19,
        IIf([pricecharged]=400 Or [pricecharged]=410,25,
        IIf([pricecharged]=500,31,
        IIf([pricecharged]=600,37.5,
        IIf([pricecharged]=700,43,
        IIf([pricecharged]=800 Or [pricecharged]=810,50,
        IIf([pricecharged]=900,56,
        IIf([pricecharged]=1000,62.5,
        IIf([pricecharged]=1100,69,
        IIf([pricecharged]=1200 Or [pricecharged]=1210,75,
        IIf([pricecharged]=1300 Or [pricecharged]=1310,100,
        IIf([pricecharged]=1400,125,
        IIf([pricecharged]=1500,150,
        IIf([pricecharged]=1600,175,
        IIf([pricecharged]=1700,200,
        IIf([pricecharged]=1800,225,
        IIf([pricecharged]=1900,250,0)))))))))))))))))))
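
    For what it's worth, Access's Switch() function takes flat condition/value pairs instead of nested calls, which usually avoids the nesting limit behind the "too complex" error; a hedged sketch of the first few levels (same values, annotations removed, wrapped across lines only for readability):

        ServiceLevel: Switch(
            [pricecharged]=100, 6,
            [pricecharged]=200 Or [pricecharged]=210, 12.5,
            [pricecharged]=300, 19,
            [pricecharged]=400 Or [pricecharged]=410, 25,
            True, 0)

    Switch() returns the value paired with the first true condition; the trailing "True, 0" is the fallback. A lookup table joined in the query would scale better still, but Switch() is the drop-in change.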

    Read the article

  • Why does the screen resolution of 1440x900 suddenly disappear from Intel GMA Control Panel?

    - by GeneQ
    I'm using a Vostro 1200 laptop with the Mobile Intel(R) 965 Express Chipset powering its graphics and running Vista 32-bit SP2. I've been using the Vostro with a Dell SE198WFP LCD monitor as the external display since day one, for about two years, without any problems. Recently, I plugged the Vostro into a couple of other monitors. The problem is, now my main monitor's (the SE198WFP) native resolution of 1440x900 @ 60 Hz is no longer available. (See below.)

    I've tried everything from uninstalling and reinstalling the Intel drivers as well as the monitor drivers, to no avail. I've Googled this problem and it appears that it has happened to other people, but all the answers involve people giving up in frustration or reinstalling; both terrible outcomes. Has anybody ever figured out why this happens, and is there a good solution?

    UPDATE: This dude has a complicated solution, which I haven't tried yet. His explanation of the problem was:

        After an exausting search for an answer to the matter of why my brand new 19″ widescreen monitor's native resolution (1440×900) was unavailible (sic) in the display properties, I finally stumbled upon an article a person posted on Intel's forums that basically explained what shannanigans Intel had been up to with their GMA 950 line of onboard graphic solutions.

    Not very comforting.

    Read the article

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated hosting) and host the websites at our office's datacentre. We're running 7 websites; the average unique hits per day at the moment is about 900. We have 2 servers set aside for this: one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD), the other an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.

    b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances of the DB server and the frontend, though I do not know if this is the best way to go.

    c) Should I try to use cloud stacks like OpenStack and make it look like websites hosted on some sort of public cloud? (Something that I checked out here.) Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7 websites, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, wherein the end user can view products and order them online. I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic??). I am confused here and would appreciate all the help I can get. Thanks in advance.
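
    For the single-IP requirement in a), name-based virtual hosting is the standard mechanism; a hedged sketch in Apache 2.2 syntax (hostnames and paths are placeholders):

        # Hypothetical name-based vhosts, all sharing one IP
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.site1.example
            DocumentRoot /srv/www/site1
        </VirtualHost>
        <VirtualHost *:80>
            ServerName shop.site2.example
            DocumentRoot /srv/www/site2
        </VirtualHost>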

    Read the article

  • Cannot increase Datastore

    - by k4w4zz
    Hello, we have an ESX 4.0 cluster with 2 hosts and EMC CLARiiON SAN storage with 10 LUNs. We have added 2 new 400 GB LUNs. All the LUNs are visible from both hosts. I have extended an existing 500 GB datastore with one of these 400 GB LUNs - the new datastore size is now 900 GB. I'd like to do the same operation with the second 400 GB LUN to extend another existing datastore, but I'm not able to do it. The LUN is available to create a brand new datastore but is not visible to extend an existing one. I don't understand why everything was fine with the other one and why I can't do the same exact operation with this LUN. The result is the same on both hosts. The SAN admin has erased and re-created this LUN several times. I have rescanned the HBAs each time. In the attachment you can find the result of the esxcfg-mpath -l and fdisk -l commands on both servers. Does somebody have an idea, please?
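
    For anyone trying to reproduce this, a hedged sketch of the service-console checks involved (the vmhba number and datastore name are placeholders):

        # Rescan a single HBA, then look at how the new LUN presents
        esxcfg-rescan vmhba1
        esxcfg-mpath -l
        # Show the extent layout of the datastore that refuses to grow
        vmkfstools -P -h /vmfs/volumes/datastore2

    A leftover partition table on the re-created LUN is one common reason the extend wizard will offer it only as a new datastore; the attached fdisk -l output should show whether that is the case here.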

    Read the article

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem. I get those errors from a client connecting to a server. Sadly, the setup is complicated, making debugging complex - and we have run out of options.

    The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection goes from C# through OleDb to an EasySoft JDBC driver to a custom-written JDBC server that then hosts logic in C++. Yeah, complex - but the third-party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;)

    The symptom: at (ir)regular intervals we get an "Address already in use: connect" error from the JDBC driver. They seem to come from one particular service we run.

    Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here. Anyone an idea what ELSE could cause this?

    It is a Windows 2003 Server machine, 64-bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection, which is installed on the server - being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone an idea what else may be the cause? Thanks
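
    In case it helps anyone build the same port census, a hedged batch-file sketch of the counting (find /c just counts matching lines; the port number is a placeholder):

        :: Count client-side TCP states once a minute (schedule as needed)
        netstat -an | find /c "TIME_WAIT"
        netstat -an | find /c "ESTABLISHED"
        :: Replace 5099 with the JDBC server's listening port to spot a
        :: second process bound to, or connecting from, the same port
        netstat -ano | find ":5099"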

    Read the article

  • Need Corrected htaccess File

    - by Vince Kronlein
    I'm attempting to use a WordPress plugin called WP Fast Cache which creates static HTML files from all your posts, pages and categories. It creates the following directory structure inside wp-content:

        wp_fast_cache
            example.com
                pagename
                    index.html
                categoryname
                    postname
                        index.html

    Basically just a nested directory structure and a final index.html for each item. But the htaccess edits it makes are crazy:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
        RewriteCond %{REQUEST_METHOD} ^(GET)
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
        RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
        RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found. And not the WordPress 404 - a janky Apache 404. I need to find the correct syntax to route all requests that don't exist (i.e. files or directories) to wp-content/wp_fast_cache/hostname/request_uri/. So for example:

        Page:     example.com/about-us/            => wp-content/wp_fast_cache/example.com/about-us/index.html
        Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
        Category: example.com/news/                => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
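
    A hedged, untested sketch of the usual fix: in per-directory .htaccess context the RewriteRule substitution must be a URL path under the document root, not an absolute filesystem path (those only work in server config context), while the file check keeps the full path via %{DOCUMENT_ROOT}. The mobile user-agent condition is trimmed for brevity:

        #start_wp_fast_cache - do not remove this comment
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} ^GET$
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
        RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
        RewriteRule .* /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
        </IfModule>
        #end_wp_fast_cache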

    Read the article

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen.

    For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true.

    Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display, and simply discard the extra pixels?

    Relevant screenshots (windowed, maximized, and full-screen, respectively): You can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

    Read the article

  • Get the desktop/viewport of a window in enlightenment?

    - by Zorf
    Okay, so, given an XID of a window, I need to get its desktop or viewport as well as the currently active one. Enlightenment does not seem to respond properly to wmctrl, which leads to:

        ***@note:~ > wmctrl -lG
        0x01e00002 -1 21   395  310  146 note Conky (note)   # it places conky windows on -1 for some reason?
        0x01c00002 -1 65   655  230  158 note Conky (note)
        0x01a00002 -1 25   215  230  182 note Conky (note)
        0x01800002 -1 25   550  310  110 note Conky (note)
        0x01600002 -1 685  145  230  120 note Conky (note)
        0x01400002 -1 1120 245  280  206 note Conky (note)
        0x01200002 -1 1095 35   230  186 note Conky (note)
        0x01000002 -1 1145 470  250  266 note Conky (note)
        0x00c00002 -1 40   34   230  182 note Conky (note)
        0x00e00029 0  0    0    1440 900 note ~ : bash – Konsole              # desktop 2, fullscreen
        0x03a00060 0  505  231  899  642 note Downloads – 'Dolphin'           # desktop 0
        0x0480001a 0  206  222  958  526 note Lifelover - Kärlek - becksvart melankoli # desk 2
        0x034000e6 0  116  32   984  767 note clemctrl – Kate                 # desk 0
        0x02c01b78 0  309  314  549  520 note *************                   # desk 1
        0x04e00062 0  104  31   990  619 note XChat: *** @ Free / #*** (+Ccnt) # desk 1
        0x05c00112 0  22   35   1396 834 note StarCraft on Reddit - Chromium  # desk 3
        0x02c0f292 0  453  356  549  520 note ***                             # desk 1
        0x02c000c0 0  860  216  557  645 note Buddy List                      # desk 1

    As can be seen, all windows are on desk 0 in wmctrl except the Conky windows. Furthermore, the geometry-viewport trick that works in some WMs also doesn't seem to work here. Are there any other tricks to get the viewport/desktop a window is on? There has to be some way to get it, right?
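
    One more avenue, as a hedged sketch (_NET_WM_DESKTOP is only meaningful if Enlightenment maps its desktops/viewports onto the EWMH properties, which it may not):

        # EWMH desktop property for one window, by XID (example from above)
        xprop -id 0x05c00112 _NET_WM_DESKTOP
        # Currently active desktop and viewport, from the root window
        xprop -root _NET_CURRENT_DESKTOP _NET_DESKTOP_VIEWPORT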

    Read the article

  • Can I use a CNAME with an IP address? Why does it work (sometimes)?

    - by Maciek Sawicki
    I believe that the easiest answer to the first question is "No, you have 'A' for this", but I accidentally set up a subdomain using a CNAME pointing to an IP address, and it worked on a few computers in my office. I wonder how that was possible? Now, when I check it from home, I get the following error:

        beast:~ viroos$ host somesubdomain.somedomain.com
        Host somesubdomain.somedomain.com not found: 3(NXDOMAIN)

    I'm 100% sure it used to work at my office (currently it looks like it doesn't, but I'm checking it on a different machine). Therefore I'm not 100% sure whether it worked due to some special network setup or because I tested it just after adding the DNS entry. I know this story sounds a little crazy/incredible, but can someone help me solve this puzzle?

    //edit: I'm adding dig output

        ; <<>> DiG 9.6-ESV-R4-P3 <<>> somesubdomain.somedomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 60224
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;somesubdomain.somedomain.com. IN A

        ;; ANSWER SECTION:
        somesubdomain.somedomain.com. 67 IN CNAME xxx.xxx.xxx.xx1.

        ;; AUTHORITY SECTION:
        . 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012040901 1800 900 604800 86400

        ;; Query time: 72 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Tue Apr 10 00:11:01 2012
        ;; MSG SIZE rcvd: 136

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older fileserver to evaluate its use for our needs. Our current setup is as follows:

        Dual core (2.4 GHz) with 4 GB RAM
        3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

            NAME        STATE     READ WRITE CKSUM
            afstank     ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                c11t0d0 ONLINE       0     0     0
                c11t1d0 ONLINE       0     0     0
                c11t2d0 ONLINE       0     0     0
                c11t3d0 ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                c13t0d0 ONLINE       0     0     0
                c13t1d0 ONLINE       0     0     0
                c13t2d0 ONLINE       0     0     0
                c13t3d0 ONLINE       0     0     0
            cache
              c14t3d0   ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD.

    So, my questions are mainly:

    1) Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD is impairing the performance, it should be the same in each bonnie++ run.

    2) Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data would perform better, while random seeks on other data would just perform similarly to before.
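
    One thing worth watching during the runs, as a hedged sketch (standard OpenSolaris tools): whether the cache device is actually being hit yet:

        # Per-vdev I/O every 5 seconds; the cache line shows L2ARC traffic
        zpool iostat -v tank 5
        # ARC/L2ARC counters: hits, misses and current size
        kstat -m zfs -n arcstats | egrep 'l2_(hits|misses|size)'

    If l2_hits stays near zero, the run-to-run spread may simply reflect L2ARC fill traffic competing with the benchmark reads on the same SSD.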

    Read the article

  • Internet Troubles - PPPoE vs PPPoA?

    - by AkkA
    I have been having some internet troubles at home (ADSL2+ connection in Australia). We get random drop-outs of the authentication connection. It will keep the connection to the DSL service, but we lose authentication and either have to restart the router/modem (it's a combined unit, a Belkin one, not sure of the model number) or unplug the phone cable, wait about 30 seconds and plug it in again.

    I've called the ISP (Telstra) a few times, but they only offer limited support when we don't use their supported hardware. Apparently something had happened on their side; they checked the box again (at least it sounded that simple) and told me it would be fine. It wasn't. I've replaced all the filters around the house, but that didn't help either. We do live a little bit away from the exchange (we get a sync speed of about 3000/900), so I thought it could be due to line noise, but that hasn't helped.

    Telstra allow both PPPoE and PPPoA connections (which I'm configuring through my router; I don't have software on the PC side). I've been running PPPoA the whole time; would it make any difference changing it to PPPoE? If not, are there any other theories as to why we would be experiencing these drop-outs? It had been fine for at least 12 months, then this suddenly started about 2 months ago.

    Read the article

  • How does VirtualBox's memory usage work?

    - by DrFredEdison
    I've been running several VMs with VirtualBox and watching the memory usage reported from various perspectives, and I'm having trouble figuring out how much memory my VMs actually use. Here is an example: I have a VM running Windows 7 (as the guest OS) on my Windows XP host machine.

        The host machine has 3 GB of RAM
        The guest VM is set up to have a base memory of 1 GB
        If I run Task Manager on the guest OS, I see memory usage of 430 MB
        If I run Task Manager on the host OS, I see 3 processes that seem to belong to VirtualBox:
            VirtualBox.exe (1), using 60 MB of memory (this one seems to have the most CPU usage)
            VirtualBox.exe (2), using 20 MB of memory
            VBoxSvc.exe, using 11.5 MB of memory
        While running the VM, the host OS's memory usage is about 2 GB
        When I shut down the VM, the host OS's memory usage goes back down to about 900 MB

    So clearly, there are some huge differences here. I really don't understand how the guest OS can use 400+ MB while the host OS only shows about 75 MB allocated to the VM. Are there other processes used by VirtualBox that aren't as obviously named? Also, I'd like to know: if I run a machine with 1 GB, is that going to take 1 GB away from my host OS, or only the amount of memory the guest machine is currently using?

    update: Someone expressed distrust over my memory usage numbers, and I'm not sure if that distrust was directed at me or at my host OS's Task Manager's reporting (which is perhaps the culprit), but for any skeptics, here is a screenshot of those processes on the host machine:
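
    For a third perspective besides the two Task Managers, a hedged sketch using VirtualBox's metrics facility (metric names vary by version, guest metrics need the Guest Additions, and "Win7" is a placeholder VM name):

        # See which metric names this VirtualBox version exposes for the VM
        VBoxManage metrics list "Win7"
        # Sample host-side memory attributed to the VM
        VBoxManage metrics setup --period 5 --samples 1 "Win7"
        VBoxManage metrics query "Win7" RAM/Usage/Used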

    Read the article

  • Torrent upload ratio not updated on Synology DS212+

    - by user179271
    I have a Synology DS212+ NAS running DSM 4.2-3211 (the current version). I use it for several purposes, including torrent downloads using Download Station and a tracker that needs authentication. My problem is that my download/upload ratio isn't updated, so it constantly falls. My NAS is behind a router, and I configured the NAT to forward ports 6890 to 6999 to the internal IP address of the NAS.

    Here are the Download Station settings: TCP port: 6990, sharing ratio: 900%, sharing time: infinite, max download speed: 0 (no limit), max upload speed: 0 (no limit), BT protocol encryption: checked, max number of peers allowed per torrent file: 4000, DHT: checked, with port 6889. When the DHT option is not checked, the NAS doesn't upload any files. I don't know what this option is for.

    Can someone help me to solve this problem? Did I miss any step, or does it come from the NAT? How is the authentication managed by Download Station? (Sorry for my English.) Thanks.

    Read the article

  • Recovering a mdadm+lvm+ext4 partition with read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS is running Linux, and it uses mdadm + LVM technology for its filesystems. I do have a backup for most of the contents, but not for the very last changes, and if possible, I'd like to recover that from this failing disk. The disk (a 'green drive' WD10EARS, 1 TB in size) throws this kind of error:

        Oct 3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
        Oct 3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
        Oct 3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
        Oct 3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
        Oct 3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
        Oct 3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
        Oct 3 12:00:41 kernel: [ 3625.650000] res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
        Oct 3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }

    However, while testing with dd, I noticed that if I skip the first 4 kB, the read seems to be OK, i.e. a command like

        dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1

    doesn't return any read error. Supposing that there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as I mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, all but the first 4 kB?
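
    If it comes to the clone, GNU ddrescue is built for exactly this skip-the-bad-spots copy; a hedged sketch (the target device and map file are placeholders, and the target must be at least as large as sde5):

        # First pass copies everything readable and skips bad areas quickly;
        # the map file makes later retry passes (and resuming) possible
        ddrescue -f /dev/sde5 /dev/sdf1 /root/sde5.map

    From there, the usual route is assembling the copy read-only with mdadm --assemble, then vgscan and a read-only fsck against the recovered volume.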

    Read the article
