Search Results

Search found 18151 results on 727 pages for 'upside down'.

  • Lenovo B450 won't boot on battery only?

    - by Mywiki Witwiki
    We bought a Lenovo B450 laptop almost a year ago. It comes with NVIDIA GeForce graphics with CUDA, so the battery life is terrible: it lasts an hour and a half at most. We try to run it on battery as much as possible, but because the battery life is so short we sometimes don't notice how low the battery is until the computer blacks out. Because of the short battery life, the laptop is always plugged into AC power. One night the computer froze. Because it was already late, I just reset the laptop by pressing the power button for 10 seconds. The laptop shut off, but I did not bother restarting it. The next morning, the laptop would not turn on on battery alone; it will only turn on on AC power, and it instantly shuts down (improperly) as soon as the adapter is removed. The battery was at 100% at that point, but now it is slowly losing charge (currently at 74%) and the battery indicator says, "Plugged in, not charging". I want to bring the laptop to school but I can't, because it won't be portable at all. To summarize: 1) The laptop has already suffered some blackouts. 2) The laptop was on AC power most of the time. 3) When the computer froze, it was reset (hard shutdown). 4) Since then, the laptop won't boot on battery alone. 5) The laptop shuts down instantly when the AC adapter is removed. 6) The battery won't charge and is gradually losing charge. UPDATE: We got the battery replaced. Unfortunately, it delivers only 2 hours of power at most.

  • Very poor SCSI hd performance on IBM x336 with LSI 1030 RAID1

    - by David Tschoepe
    I'm experiencing very poor performance on an IBM x336 server with dual 73GB 15k hard drives on a U320 controller (LSI 1030). We're getting maybe 3.5MB/sec max (per the HD Tune utility). It should be at least 100MB/sec, I would think (another x335 box is doing 70-80MB/sec). The server was set up recently and I didn't really notice the problem at first, so it may have been there from the beginning; I'm not sure. I have installed the IBM ServeRAID Windows utility. The server is running Windows 2008 R2 Web edition (if that matters). I thought maybe one of the drives was bad, so I have removed one of the drives from the array and tested again, but I still get the same results. I'm waiting for the RAID1 to resync, and then I will try pulling the other drive. I've also looked through the ServeRAID utility but haven't noticed anything in there that might indicate a problem. I'm not sure I'm on the right path here, so I'm looking for advice on how to track this down.
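
    As a rough cross-check on the HD Tune figure, sequential-read throughput can also be measured with a short script run against any large existing file on the array. This is only a sketch: the file path is a placeholder, and the result will include OS cache effects unless the file is much larger than RAM.

      # Sketch: rough sequential-read throughput test to cross-check the HD Tune figure.
      # The path is a placeholder; point it at any multi-GB file on the RAID1 volume.
      import time

      PATH = r"C:\test\largefile.bin"   # placeholder
      CHUNK = 1024 * 1024               # read in 1 MB chunks

      read_bytes = 0
      start = time.monotonic()
      with open(PATH, "rb", buffering=0) as f:
          while time.monotonic() - start < 30:      # sample for about 30 seconds
              chunk = f.read(CHUNK)
              if not chunk:
                  break
              read_bytes += len(chunk)
      elapsed = time.monotonic() - start
      print(f"{read_bytes / (1024 * 1024) / elapsed:.1f} MB/s sequential read")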

  • Run command remotely on Windows computer

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this for a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are the specific restrictions: the remote computer is not part of my domain, hence no Kerberos, and the remote computer does not have a cert I trust, or vice versa. I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
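
    One possible approach, sketched below, is to sidestep WMI/DCOM and use WinRM over HTTPS instead, since it works with local-account (Basic/NTLM) authentication and does not depend on domain trust or a trusted certificate. This is only a sketch: it assumes the WinRM service has been enabled on the instance, it uses the third-party pywinrm package, and the address and credentials are placeholders.

      # Sketch: run a single command on a non-domain Windows host over WinRM (HTTPS).
      # Assumes WinRM is enabled on the instance and the pywinrm package is installed.
      import winrm

      session = winrm.Session(
          'https://203.0.113.10:5986/wsman',   # placeholder address of the instance
          auth=('Administrator', 'secret'),    # local admin account on the instance
          transport='ntlm',
          server_cert_validation='ignore',     # the instance only has a self-signed cert
      )

      result = session.run_cmd('powershell.exe', ['-Command', 'Get-Date'])
      print(result.status_code)
      print(result.std_out.decode())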

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it. Goals: create a 'road-warrior'-capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers; enable CIFS access to a cloud-server-based installation of Alfresco; allow eventual implementation of some form of single sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future. Given: all servers will live in the public internet cloud (Rackspace Cloud Servers); the OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start); staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops. Based on my research thus far, to accomplish this I'll need: OpenVPN with bridging enabled to create a star-shaped "virtual" LAN http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html ; a road-warrior network configuration, as described in this Shorewall article (lower down the page) http://www.shorewall.net/OPENVPN.html ; configure bridge addressing (probably DHCP) http://openvpn.net/index.php/open-source/faq.html#bridge-addressing ; configure CIFS / Samba to accept the VPN IP addresses http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel ; set up client software, with keys configured for access (potentially through an OpenVPN AS client portal) http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    Setting up a brand new Dell Optiplex 980 with Windows XP SP3; everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot following the updates, I now get this error message: "Explorer.EXE - Ordinal not found. The ordinal 423 could not be located in the dynamic link library urlmon.dll". From my cursory Google search, this forum thread places the blame squarely on IE8; the solution provided is to enter safe mode and remove IE8. Unfortunately, when I press F8 to boot into safe mode, I only get the option "Windows XP SP3 Professional" and no safe mode options. Any other ideas? Thanks in advance. FYI, I can get to the Windows Task Manager by holding down Control-Alt-Delete, but programs don't seem to run properly when I start them from there. I tried chatting with Dell Support, and we tried to launch System Restore at c:\windows\system32\restore\rstrui.exe, but that gave a similar "ordinal 423 not found in urlmon.dll" error.

  • A fatal exception 0E occurred at 0028:xxxxxx in VxD IOS(01)

    - by winlin
    I get a blue screen of death on my Windows 98 machine every time I boot it; I can't even reach the desktop. The error reads:

      A fatal exception 0E occurred at 0028:C003CC2F in VxD IOS(01) + 0000156B
      This was called from 0028:C0082E60 in VxD VKD(01) + 000001D0

    I then have to give it the three-finger salute to restart the system; there is no other way to shut the system down at this point except pressing the power button. What could be the problem? My Windows system.ini is:

      [boot]
      oemfonts.fon=vgaoem.fon
      shell=Explorer.exe
      system.drv=system.drv
      drivers=mmsystem.dll power.drv
      user.exe=user.exe
      gdi.exe=gdi.exe
      sound.drv=mmsound.drv
      dibeng.drv=dibeng.dll
      comm.drv=comm.drv
      mouse.drv=mouse.drv
      keyboard.drv=keyboard.drv
      *DisplayFallback=0
      fonts.fon=vgasys.fon
      fixedfon.fon=vgafix.fon
      386Grabber=vgafull.3gr
      display.drv=pnpdrvr.drv
      [keyboard]
      keyboard.dll=
      oemansi.bin=
      subtype=
      type=4
      [boot.description]
      system.drv=Standard PC
      mouse.drv=Standard mouse
      keyboard.typ=Standard 101/102-Key or Microsoft Natural Keyboard
      aspect=100,96,96
      display.drv=Standard PCI Graphics Adapter (VGA)
      [386Enh]
      ;device=tddebug.386
      ;device=D:\TC\TASM\BIN\WINDPMI.386
      ebios=*ebios
      woafont=dosapp.fon
      mouse=*vmouse, msmouse.vxd
      device=*dynapage
      device=*vcd
      device=*vpd
      device=*int13
      keyboard=*vkd
      display=*vdd,*vflatd
      ConservativeSwapfileUsage=0
      Paging=on
      [NonWindowsApp]
      TTInitialSizes=4 5 6 7 8 9 10 11 12 13 14 15 16 18 20 22
      [power.drv]
      [drivers]
      wavemapper=*.drv
      MSACM.imaadpcm=*.acm
      ;msvideo.STV680=STV680sg.drv
      midi=mmsystem.dll
      wave=mmsystem.dll
      MSACM.msadpcm=*.acm
      [iccvid.drv]
      [mciseq.drv]
      [mci]
      cdaudio=mcicda.drv
      sequencer=mciseq.drv
      waveaudio=mciwave.drv
      avivideo=mciavi.drv
      videodisc=mcipionr.drv
      vcr=mcivisca.drv
      MPEGVideo=mciqtz.drv
      MPEGVideo2=mciqtz.drv
      [vcache]
      [MSNP32]
      [DISPLAY]
      BusThrottle=1
      [network]
      SSID=1438661605
      [vicax]
      msacm711=74603
      msacm811=148933
      msacm911=42405
      [Sessew]
      VideoManufacturer=Standard VGA
      VideoBoard=Standard Display Adapter (VGA)
      MouseType=0
      VidType=0
      Mono=0
      Ddraw=1
      [drivers32]
      msacm.lhacm=lhacm.acm
      VIDC.IV50=ir50_32.dll
      msacm.iac2=C:\WINDOWS\SYSTEM\IAC25_32.AX
      VIDC.YUY2=msyuv.dll
      VIDC.UYVY=msyuv.dll
      VIDC.YVYU=msyuv.dll
      msacm.msaudio1=msaud32.acm
      msacm.vorbis=vorbis.acm
      msacm.l3acm=C:\WINDOWS\SYSTEM\L3CODECA.ACM
      msacm.sl_anet=sl_anet.acm
      VIDC.TSCC=tsccvid.dll
      VIDC.IV41=IR41_32.AX
      vidc.mpg4=mpg4c32.dll
      vidc.mp43=mpg4c32.dll
      msacm.voxacm160=vct3216.acm
      MSACM.msadpcm=msadp32.acm
      [TTFontDimenCache]
      0 4=2 4
      0 5=3 5
      0 6=4 6
      0 7=4 7
      0 8=5 8
      0 9=5 9
      0 10=6 10
      0 11=7 11
      0 12=7 12
      0 13=8 13
      0 14=8 14
      0 15=9 15
      0 16=10 16
      0 18=11 18
      0 20=12 20
      0 22=13 22

  • java max heap size, how much is too much

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return, even though the Rails logs show the request was processed in seconds, so it's obviously a Tomcat issue. I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup. I'm on a small EC2 instance which has 1.7GB of RAM, and I have the following JAVA_OPTS: -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled. My first thought is that Xmx is too high: if I only have 1.7GB and I allocate 1.5GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1GB resident memory and 2GB virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person, but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated! UPDATE: I've started analyzing the garbage collection logs using -XX:+PrintGCDetails. When I hit one of these occasional long load times, the GC logs go nuts. During the last one (a request which took 25s to complete) I got GC log lines such as:

      1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
      1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    About 300 of them on a single request! Now, I don't totally understand why it keeps collecting from ~28MB down to 0 over and over.
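
    As a rough way to see how much of a slow request is spent in collections like those above, the pause times reported by -XX:+PrintGCDetails can simply be summed. The sketch below is written against the log line format quoted above; whether the rest of the log matches that format is an assumption.

      # Sketch: sum GC pause times from a -XX:+PrintGCDetails log.
      # The pattern matches lines shaped like the two samples quoted above.
      import re
      import sys

      PAUSE = re.compile(r"\[GC .*?(\d+\.\d+) secs\]\s*$")

      def total_gc_pause(path):
          total, count = 0.0, 0
          with open(path) as log:
              for line in log:
                  match = PAUSE.search(line)
                  if match:
                      total += float(match.group(1))
                      count += 1
          return count, total

      if __name__ == "__main__":
          events, secs = total_gc_pause(sys.argv[1])
          print(f"{events} GC events, {secs:.2f} s total pause time")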

  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying "SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status", and then Tomcat shuts down and users can't access the webapp until I restart Tomcat manually. Some of the threads do indeed take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might then receive even more requests. QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full? Below is my server.xml in any case:

      <?xml version='1.0' encoding='utf-8'?>
      <Server port="8005" shutdown="SHUTDOWN">
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <Listener className="org.apache.catalina.core.JasperListener" />
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
        <GlobalNamingResources>
          <Resource name="UserDatabase" auth="Container"
                    type="org.apache.catalina.UserDatabase"
                    description="User database that can be updated and saved"
                    factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                    pathname="conf/tomcat-users.xml" />
        </GlobalNamingResources>
        <Service name="Catalina">
          <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
          <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
          <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" enableLookups="false"
                     useBodyEncodingForURI="true" backlog="150" maxThreads="150"
                     executor="tomcatThreadPool" keepAliveTimeout="5000" connectionTimeout="300000" />
          <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
            <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
            <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                  xmlValidation="false" xmlNamespaceAware="false">
            </Host>
          </Engine>
        </Service>
      </Server>

  • Make Google Chrome's address bar prefer page titles to domain names when offering completions?

    - by Ryan Thompson
    I've recently switched from Firefox to Chrome, and the thing I miss most from Firefox is the "Awesome Bar", which suggests completions for what I type based primarily on page titles and secondarily on domain names. Chrome offers both matching URLs and titles, just like Firefox, but Chrome seems to always prefer a matching domain name over a matching page title, or a match in another part of the URL (besides the domain), no matter how many times I pass over the former for the latter. In fact, Chrome also prefers to suggest a search rather than matching anything other than a domain name. So is there any hidden preference I can change to tell Chrome that I care more about page titles than domain names? Example: I want to go to Google Reader, so I press Control+L and begin typing "reader". The URL for Google Reader is http://www.google.com/reader/view/#overview-page, so the domain name is www.google.com, which does not contain the word "reader". So the first option that Chrome suggests is either another site that has "reader" as part of the domain, or a search for "reader" with the default search engine. No matter how many times I scroll down and select Google Reader, Chrome never "learns" that that's what I want.

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case, since it allows both of the changes I want on a live array, and performance isn't at all important; low cost is also great. Now, I also want to defend against file corruption, so RAID-Z1 looked like a good fit, but evidently I would only be able to add whole additional RAID-Z1 sets at a time, rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers, which resulted in a strange issue. Even though the assigned speed was 5mbps, when tests were done by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2mbps. But BitTorrent downloads reached 5mbps, and even file downloads from the ISP's own servers were fine. So, at the ISP, our link was then attached directly to their edge router. After this, file downloads from high-bandwidth servers, like Google and MS, reached the 5mbps limit. Sometimes the speed falls below 2mbps and then suddenly goes back up to the 5mbps limit (this keeps happening within any single file download). But other downloads, like Ubuntu apt repositories, still struggle to go above 2mbps. The engineers at the ISP have not been able to sort out the issue. After they moved us to their edge router, instead of giving us 8 public IPs they gave only 4. When we enquired about it, they told us that giving more IPs would result in ARP overload at their edge router, but somehow I was able to convince them to give us the 8 IPs we wanted. The file download issue has remained, though. What might be the reason for files from different locations being downloaded at different speeds, and with such heavy fluctuation in speed? I have downloaded files from the same URLs on a connection belonging to another, smaller ISP, and the speeds were fine and reached the full 5mbps limit.

  • Game login server

    - by Tar
    I have a setup like this: a website, with a database. This database holds the accounts and all their details: password hashes, salts, join dates, etc. What I want is to be able to use this same database as our game database. The game will be on servers in the United States, while the web server and its database are in the Netherlands. I know there is a big problem with using remote SQL, and we really don't want to do that, as operation of the website is just as vital as operation of the game server. One solution we tried involved sending account details to another database hosted on the same server as the game server, but that was incredibly unreliable, because if the website was down, no new people could register to play the game. The solution we want is a login server that is used to check credentials for everything. Is this possible/viable, and could anyone point me in the right direction? So, in summary: 2 game servers, 1 web server, 1 central database used for authorization. The game accounts and website accounts need to be one and the same.
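
    For illustration only, here is a minimal sketch of what such a central login server could look like: a small HTTP service, colocated with the account database, that both the game servers and the website call to verify credentials instead of querying the database remotely. The endpoint, the salted-SHA-256 scheme and the sqlite stand-in for the real accounts database are all assumptions, not part of the original setup.

      # Sketch of a central login service: game servers POST credentials here instead
      # of querying the accounts database directly. Storage scheme is an assumption.
      import hashlib
      import json
      import sqlite3
      from http.server import BaseHTTPRequestHandler, HTTPServer

      DB = "accounts.db"  # stand-in for the real website/account database

      def verify(username, password):
          row = sqlite3.connect(DB).execute(
              "SELECT salt, password_hash FROM accounts WHERE username = ?", (username,)
          ).fetchone()
          if row is None:
              return False
          salt, stored = row
          return hashlib.sha256((salt + password).encode()).hexdigest() == stored

      class AuthHandler(BaseHTTPRequestHandler):
          def do_POST(self):
              body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
              ok = verify(body.get("username", ""), body.get("password", ""))
              payload = json.dumps({"ok": ok}).encode()
              self.send_response(200 if ok else 403)
              self.send_header("Content-Type", "application/json")
              self.send_header("Content-Length", str(len(payload)))
              self.end_headers()
              self.wfile.write(payload)

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 8081), AuthHandler).serve_forever()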

  • Maxtor 500GB external hard drive not being detected but power is going to it?

    - by ClarkeyBoy
    I have two Maxtor OneTouch 4 Lite 500GB external hard drives (part no. 9NT2A4-500). They both used to work fine on my old laptop (an Acer), but I have not used them for about a year, since my laptop was stolen and I got this one (also an Acer, an Aspire 7738G). I have one plugged into the mains with one of the leads I believe was supplied with them. It appears to be receiving power, as it is warm and the power light (on the unit itself) is on; the mains adapter is also fairly warm. I also have it plugged into my laptop with a USB lead which I have tested on my MP3 player (so I know it works). However, the hard drive is not showing up on my computer. I have tried checking for new hardware, installing the software that was supplied with it, checking drive letters in case it is registered as C: or something stupid, checking for problems, etc., and I can't find any cause for it to do this. It does appear to be starting up and, possibly, shutting down and restarting constantly (that's what it sounds like, although I can't be certain). I have had both hard drives stored in different places for the last year and they're both doing the same thing; if it were only one, then I'd guess it had got damaged or corrupted, but since it is both of them I doubt that's it. The only things in common with both of them are the leads and the laptop; however, I know the USB lead works and assume the mains lead works, as there is power going to the unit. Has anyone come across this before, or does anyone have any idea what the cause / solution to the problem is? Any help would be greatly appreciated. Regards, Richard

  • Unable to connect to FTP server using Filezilla with router in-between

    - by pkswatch
    While connecting to my web server using FileZilla, I am getting this error:

      Status: Resolving address of ftp.mysite.org.in
      Status: Connecting to 199.199.199.18:21...
      Status: Connection established, waiting for welcome message...
      Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
      Response: 220-You are user number 1 of 150 allowed.
      Response: 220-Local time is now 17:58. Server port: 21.
      Response: 220-This is a private system - No anonymous login
      Response: 220-IPv6 connections are also welcome on this server.
      Response: 220 You will be disconnected after 5 minutes of inactivity.
      Command: AUTH TLS
      Response: 234 AUTH TLS OK.
      Status: Initializing TLS...
      Error: GnuTLS error -9: A TLS packet with unexpected length was received.
      Status: Server did not properly shut down TLS connection
      Error: Could not connect to server

    I use a Cradlepoint CTR35 Wi-Fi router to connect to the wired internet connection. When I connect to the same server without this router, the connection works flawlessly. So I guess there is some problem with my router's firewall settings, but I don't know what! Can somebody help me out, please? Note: the server requires explicit FTP over TLS and does not accept plain FTP sessions, yet I can connect to other servers using plain FTP with the router in between.
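
    One way to reproduce the failure outside FileZilla (for example, once from behind the router and once without it) is a short test of explicit FTP over TLS using Python's ftplib; this is only a diagnostic sketch, and the host name and credentials below are placeholders.

      # Sketch: test explicit FTP over TLS (AUTH TLS) without FileZilla.
      # Host and credentials are placeholders for the real account.
      import ftplib

      ftps = ftplib.FTP_TLS()
      ftps.connect("ftp.mysite.org.in", 21, timeout=30)
      ftps.auth()           # issue AUTH TLS, as FileZilla does
      ftps.login("user", "password")
      ftps.prot_p()         # encrypt the data channel as well
      print(ftps.retrlines("LIST"))
      ftps.quit()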

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    I'm trying to come up with a viable backup/restore and log shipping solution to achieve the following: a 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time) and a 5-minute Recovery Time Objective (the db must be back up and running within 5 minutes). I'm considering using log shipping only (which I think is pushing it, but I want to know if anyone else knows how to achieve this). Some other information for consideration: there is a 40 Mbit/sec fibre link between the primary and disaster recovery (DR) sites; the sites are about 600 km apart; at close of business, the amount of data generated is predicted to be about 150 MB/sec; and log backup is planned for every 5 minutes. Doing some rough calculation, I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. So if we cut it down to 3 minutes of log shipping time, that equals ~900 MB over 3 minutes at 100% network efficiency, which leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 minute, but assume it can. For the close-of-business scenario: at 150 MB/sec, and considering the 3-minute log shipping window, that equals about 27 GB of data over 3 minutes... I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this.
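
    A quick back-of-the-envelope check of the arithmetic above, using only the figures from the question (link rate, the 3-minute log shipping window assumed above, and the close-of-business data rate):

      # Back-of-the-envelope check of the log-shipping numbers quoted above.
      LINK_MBIT_PER_S = 40          # dedicated link between primary and DR site
      WINDOW_S = 3 * 60             # log-shipping window assumed in the question
      COB_DATA_MB_PER_S = 150       # data generated per second at close of business

      link_mb_per_s = LINK_MBIT_PER_S / 8                  # 5 MB/s at 100% efficiency
      transferable_mb = link_mb_per_s * WINDOW_S           # ~900 MB in 3 minutes
      generated_mb = COB_DATA_MB_PER_S * WINDOW_S          # ~27,000 MB in the same window

      print(f"Can ship  : {transferable_mb:,.0f} MB in {WINDOW_S} s")
      print(f"Generated : {generated_mb:,.0f} MB in {WINDOW_S} s")
      print("SLA breaks" if generated_mb > transferable_mb else "SLA holds")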

  • Router failover not detecting outside interface link lost

    - by Matt
    Suppose I have two routers configured in a master/slave configuration. They look something like this (addresses are not real ones):

      123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                  | 10.1.1.1 | ===> LAN
      172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    10.1.1.1 is the default route for the network (10.1.1.0). What's slightly different in this config compared to others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.x addresses are, in real life, public IPs (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT here. Now, the issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2. However, it will if I instead unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need to have virtual IPs or VRRP instances, rather than actual interfaces, assigned to it. I hope my explanation is clear. How do I get around this?

  • Enabling mod_rewrite on Amazon Linux

    - by L. De Leo
    I'm trying to enable mod_rewrite on an Amazon Linux instance. My Directory directives look like this:

      <Directory />
          Order deny,allow
          Allow from all
          Options None
          AllowOverride None
      </Directory>
      <Directory "/var/www/vhosts">
          Order allow,deny
          Allow from all
          Options None
          AllowOverride All
      </Directory>

    And then further down in httpd.conf I have the LoadModule directive:

      ... other modules...
      #LoadModule substitute_module modules/mod_substitute.so
      LoadModule rewrite_module modules/mod_rewrite.so
      #LoadModule proxy_module modules/mod_proxy.so
      ... other modules...

    I have commented out all the Apache modules not needed by WordPress. Still, when I restart httpd and then check the loaded modules with /usr/sbin/httpd -l, I get only:

      [root@foobar]# /usr/sbin/httpd -l
      Compiled in modules:
        core.c
        prefork.c
        http_core.c
        mod_so.c

    Inside the virtual host containing the WordPress site I have an .htaccess containing:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress

    The .htaccess is owned by apache, which is the user Apache runs as. The apachectl -t command returns Syntax OK. What am I doing wrong? What should I check?

  • What speed are Wi-Fi management and control frames sent at?

    - by Bryce Thomas
    There are a bunch of different 802.11 Wi-Fi standards, e.g. 802.11a, 802.11b, 802.11g, 802.11n, etc., that all support different speeds. Wi-Fi frames are generally categorised as one of the following: data frames, which carry the actual application data; control frames, which coordinate when it's safe to send and reduce collisions; and management frames, which handle connection discovery, setup and teardown (e.g. AP discovery, association, disassociation). My question is whether all these frames, and specifically management frames, are transmitted at the fastest supported speed available, or whether certain classes of frames are transmitted at some lowest-common-denominator speed. I have noticed that when I put an 802.11b/g-only device into monitor mode and capture traffic over the air, I still see management frames (e.g. association/disassociation) being transmitted between my phone and AP, which are both 802.11n, even though 802.11n has a higher transfer rate. So I am imagining one of two possibilities: 1) my 802.11n phone/AP had to negotiate a slower speed for some reason, and that's why I can see their frames on my 802.11b/g monitoring device; or 2) management frames (and perhaps control frames also?) are sent at a lower speed, and it's only data frames that are transmitted faster with newer 802.11 standards. The reason I would like to know which of these possibilities (or perhaps a third) is the case is that I want to capture management frames, and I need to know whether using an 802.11b/g card is going to lead to me missing some frames sent at higher speeds than the monitoring card can observe. If management frames are indeed sent at a slower rate, then it's all good. If I just happen to be seeing the management frames because my phone/AP have negotiated a slower rate, though, then I need to reconsider what card I use for packet capture.
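
    One way to check empirically which rate the management frames actually arrive at is to read the rate field from the Radiotap header while capturing in monitor mode. The sketch below uses the third-party scapy package; the interface name is a placeholder, and Radiotap field decoding varies between scapy versions and drivers, so treat this strictly as a sketch.

      # Sketch: print the Radiotap-reported data rate of captured management frames.
      # Requires a monitor-mode interface (placeholder "mon0") and root privileges.
      from scapy.all import sniff
      from scapy.layers.dot11 import Dot11, RadioTap

      MGMT_SUBTYPES = {0: "assoc-req", 1: "assoc-resp", 4: "probe-req", 5: "probe-resp",
                       8: "beacon", 10: "disassoc", 11: "auth", 12: "deauth"}

      def show(pkt):
          if pkt.haslayer(Dot11) and pkt[Dot11].type == 0:      # type 0 = management
              rate = getattr(pkt[RadioTap], "Rate", None) if pkt.haslayer(RadioTap) else None
              kind = MGMT_SUBTYPES.get(pkt[Dot11].subtype, str(pkt[Dot11].subtype))
              # Radiotap reports the rate in units of 500 kb/s
              print(kind, f"{rate * 0.5:.1f} Mb/s" if rate is not None else "rate not reported")

      sniff(iface="mon0", prn=show, store=0)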

  • Which Linux distribution for vehicle LCD instrument panel

    - by Brent
    I will be designing an instrument panel for a vehicle to display the common gauges that you would find in a car - (speedometer, rpm, fuel level, oil pressure, etc.). We have selected a 7" LCD and are in the process of narrowing down the hardware (This will use an ARM processor). The idea is to read these values off of the CAN Bus and update the UI with those values. This needs to have a fairly quick boot time, 5-10 seconds would be acceptable from the time the ignigtion is turned on to the time the UI is running. I have been doing a lot of research on which linux distribution to use, but I wanted to ask the question here to get the community's suggestions. I have been a .NET programmer for years, so linux is a new world to me. Here is what I have found so far... Tizen is geared for In-Vehicle Infotainment (IVI) (plus some others). However, this project is not an IVI, and I do not need the phone dialer, navigation, etc. Meego is dead, and Tizen seems to be the replacement Angstrom, Debian... would either of these be useful? I am not tied to a particular programming language or IDE. Any help and direction is appreciated!
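
    For the CAN-reading side of this, here is a small sketch of how gauge values could be read on Linux using SocketCAN and the third-party python-can package. The arbitration ID and the scaling of the speed signal are made-up placeholders, since the real ones depend on the vehicle's CAN matrix.

      # Sketch: read frames from the CAN bus via SocketCAN and decode a speed value.
      # Requires python-can; the CAN ID and scaling below are hypothetical placeholders.
      import can

      SPEED_FRAME_ID = 0x3E9          # hypothetical ID of the frame carrying vehicle speed
      SPEED_SCALE_KPH_PER_BIT = 0.01  # hypothetical scaling factor

      def read_speed(bus):
          msg = bus.recv(timeout=1.0)
          if msg is None or msg.arbitration_id != SPEED_FRAME_ID:
              return None
          raw = int.from_bytes(msg.data[0:2], "big")
          return raw * SPEED_SCALE_KPH_PER_BIT

      if __name__ == "__main__":
          bus = can.interface.Bus(channel="can0", bustype="socketcan")
          while True:
              speed = read_speed(bus)
              if speed is not None:
                  print(f"speed: {speed:.1f} km/h")   # a real UI would update the gauge here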

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module; if dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command:

      localhost*CLI> timing test
      Attempting to test a timer with 50 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards:

      localhost*CLI> timing test 1024
      Attempting to test a timer with 1024 ticks per second.
      Using the 'timerfd' timing module for this test.
      It has been 1000 milliseconds, and we got 1024 timer ticks

    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?
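
    To get a feel for what the command is measuring, here is a rough userspace analogue (not the Asterisk implementation): request a tick rate for one second and count how many ticks are actually delivered on time. At 50 ticks per second almost any machine keeps up; at 1024 the scheduler granularity and system load start to show, which is why the ramped-up test is more revealing.

      # Rough userspace analogue of Asterisk's "timing test": request N ticks per
      # second for one second and count how many arrive on time. Illustrative only.
      import time

      def count_ticks(rate_hz, duration_s=1.0):
          interval = 1.0 / rate_hz
          deadline = time.monotonic() + duration_s
          next_tick = time.monotonic() + interval
          ticks = 0
          while time.monotonic() < deadline:
              now = time.monotonic()
              if now >= next_tick:
                  ticks += 1
                  next_tick = now + interval   # missed ticks are not made up, so a
                                               # loaded or coarse system reports fewer
              else:
                  time.sleep(min(interval / 10, next_tick - now))
          return ticks

      for rate in (50, 1024):
          print(f"requested {rate} ticks/s, delivered {count_ticks(rate)}")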

  • Slow performance on VMWare Linux server after Tomcat install

    - by Loftx
    We have a VMware ESXi 4.1 server hosting a number of Linux and Windows guests. Recently a new Linux guest was added to this server and seemed to be performing well. Tomcat and some other applications were then installed on this guest, which seems to have caused the server to run really slowly, without any obvious resource issues. Examples of the slow performance: the time taken to bring up the password prompt over SSH is now a few seconds, when it was previously instantaneous; the time taken to unzip a zip file, previously a few seconds, is now around 30 seconds; and the time taken to compile VMware Tools has increased by similar factors. Neither the VMware console nor the monitoring commands report any issues with high CPU or memory usage, but something is obviously slowing the server down somehow. Does anyone have any ideas what may be causing this issue and how it can be resolved? Thanks, Tom. EDIT: As per your questions, I've looked at some of the performance indicators on both the VM host and the VM guest. First I tried reserving the full amount of memory (3GB) for this VM; no other machines on this server have any memory reservation. The swap-in and swap-out rates for the VM host and guest are now both zero. Balloon memory on the guest is zero and on the host is 3.5GB (total memory on the host is 12GB). The swap rate for the guest is also zero; swap used by the host is 200MB on average. Compression and decompression rates for the host and guest are zero. Command aborts for the host are zero. Read latency is very low: maximum 10ms, average 0.8ms. Write latency is higher: a few spikes to 170ms but mostly around 25ms; is this bad? Queue command latency is zero. Physical disk read latency averages 5ms but is often 10ms, and physical disk write latency averages 15ms but is often 20ms. I hope this helps; let me know if you need any more information.

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server, and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes that are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those up a bit), and a third, which is the only current VM (already VirtualBox), which is the thin-client host. My question comes down to performance and/or recommendations for the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage or disadvantage that would have compared to using shared folders from the VM host: basically, having each whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (a disk image). For context, but not to solicit additional advice: I want to virtualize these machines because I intend to regularly use VirtualBox's snapshot capabilities for the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).

  • How to troubleshoot Hyper-V VSS writer causing backup failure on Server 2008 R2

    - by Tim Anderson
    I have a Windows Server 2008 R2 machine running Hyper-V. Backups using Windows Server Backup fail with the error: "The backup operation that started at '2011-01-02T10:37:01.230000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved." I have traced this to a problem with the Hyper-V VSS writer. vssadmin list writers reports:

      Writer name: 'Microsoft Hyper-V VSS Writer'
        Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        Writer Instance Id: {fcf0dd79-d282-4465-88ae-7b6857e055c2}
        State: [8] Failed
        Last error: Inconsistent shadow copy

    However, I can't get any further. A few relevant facts: I get the error even if all the VMs are shut down; if I disable the Hyper-V VSS writer by stopping the Hyper-V Management Service, backup completes OK; and there are no errors in the Hyper-V-VMMS application log. I tried to set tracing for VSS but can't get any output for some reason: I set the correct registry entries, but no trace log is generated. Tim
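
    While testing, it can help to keep an eye on writer state from a script. The sketch below just shells out to vssadmin and parses the same output format quoted above; it only reports, it does not fix anything, and it assumes it is run from an elevated prompt on the Hyper-V host.

      # Sketch: list VSS writers and their state by parsing "vssadmin list writers".
      import re
      import subprocess

      out = subprocess.run(
          ["vssadmin", "list", "writers"], capture_output=True, text=True, check=True
      ).stdout

      for name, state in re.findall(
          r"Writer name: '([^']+)'.*?State: \[\d+\] (\w+)", out, flags=re.DOTALL
      ):
          print(f"{state:10} {name}")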

  • Virtual PC duplication process

    - by Toddintr
    This is the process I use for duplicating a Virtual PC (on Windows 7):
    1 - Create a new VPC.
    2 - Install Windows 7 on the new VPC.
    3 - Configure the new Windows 7 installation (install Windows updates, install applications, etc.).
    4 - Run Sysprep on the new VPC.
    5 - Shut down the new VPC.
    6 - Make a copy of the new VPC's VHD file.
    7 - Create a new VPC, specify "use existing VHD file" in the wizard, and provide the name of the copied VHD file.
    The above works fine, but one point threw me off: during the OOBE for the duplicated VPC, when asked for a user name, I had to specify a different user name than the one I had specified for the base VPC. This makes sense, because the copied VPC already has that user name. What I did not understand is why I was asked for a new user name at all. Is it because it is part of the OOBE process, and when Microsoft designed the OOBE they did not consider that base OS images could be copied? Thanks, Todd

  • Exchange 2003: Unrestrict send mail size for specific users / groups?

    - by Kip
    Good (insert appropriate time of day here) SF folks, I have the following situation: we have a message size limit for sending set at 20MB in Global Settings | Message Delivery, and a limit of 50MB set at an external third-party spam vendor. I need to enable some users to send messages that are upwards of around 40MB in size. However, when I set the sending message size maximum to 50MB within the delivery restrictions of a user's Exchange properties, that setting does not appear to win; the lowest value seems to win in this situation. I need to allow certain users to send messages larger than the 20MB limit, while keeping the 20MB limit in place for everyone else. How can I do this? The only way I could see was to raise the limit set in Global Settings | Message Delivery to 50MB and then set everyone else's delivery-restriction maximum size down (bar the people who need the increased limit). But I cannot see an easy way to do that last bit, hence my post here looking for advice. There are valid reasons we need to send mail this size, and whilst we are putting together other mechanisms for delivering this data, we still need to get this in place. Thanks in advance, Kip
