Search Results

Search found 16427 results on 658 pages for 'brendan long'.


  • Prevalent, recurring hard drive failure on Intel MacBooks from 2006/2007

    - by SimonSalman
    Hi,

    Long story: My MacBook's hard drive failed one year ago, just a month after its warranty ended, or: a year and a month after I bought it. After about ten phone calls to Apple's service, they agreed to extend the warranty for another year, so I got the drive replaced free of charge. In the meantime, I got to know that many MacBook users experience/report hard drive failures:

    - Every reported crash was preceded by a slowdown of system performance, an increased occurrence of the spinning beach ball wait cursor, and frequent crashes of applications that used to run very robustly until then.
    - It happened (as far as I know) with MacBooks from 2006/2007. All these MacBooks additionally suffer from recurring wear on their "top case".
    - Many heavy users have had to replace their HDDs three times since 2006/2007, ending in a head crash that made it impossible to recover the data (diagnosis of recovery specialists).
    - In most, but not all, cases the HDD was a Seagate (which doesn't necessarily have to be the cause, if the majority of that MacBook batch contained Seagate drives).

    And right now, one year after my first disk crash, these symptoms are accumulating on my system again...

    Short version: prevalent hard drive failure on the MacBook batch from 2006/2007 (i.e. 2.16 GHz Intel Core 2 Duo). I am looking for any (preferably open source) tool for checking the hard drive condition, especially to detect the known "MacBook problem", so that I can replace the disk in time. If any Mac user has found a solution to prevent the repeated failure of their hard drive, I would be very glad to hear about it. I really enjoy my old MacBook, but I hate being interrupted every year by an HDD crash. BTW, the issue has already been under discussion for a long time, but there seems to be no solution so far.

    Thanks, Simon
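
    Since the condition such tools report comes from SMART data, a minimal check with the open source smartmontools is one option. This is a sketch; it assumes smartctl is installed (e.g. via MacPorts) and that the internal drive is /dev/disk0:

        sudo smartctl -H /dev/disk0    # overall health self-assessment: PASSED or FAILED
        sudo smartctl -A /dev/disk0    # attribute table; rising Reallocated_Sector_Ct or
                                       # Current_Pending_Sector counts are early failure signs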


  • Painfully slow login to an AD-bound Mac OS X Leopard machine when off the home network

    - by GeeBee
    Dear all,

    Just looking for a little help with this problem, which seems to trip a lot of people up and is causing me no end of grief. I have a number of fully patched OS X Leopard machines that are bound to my AD (Server 2003). When on the home network, logging in is swift and works as expected. When users take the machines off site, login can take 5 minutes or more: the user enters correct credentials, but the desktop does not appear for a very long time.

    Outside the office, I have tried logging in using a local admin account, switching off AirPort, and then logging in using an AD account. In this situation login is immediate again. It all seems as if Leopard is finding a suitable wireless network and then spending far too long looking for the domain before eventually giving up and using the cached credentials instead.

    I have read that disabling Bonjour on the machine will stop this problem (I have not yet tested it): http://www.macwindows.com/leopardAD.html#111607z ...but I am reluctant to use this "solution", as I would like to be able to use Bonjour on the local network as well as having AD-bound machines. However, is disabling Bonjour really the only answer? Is there not some timeout setting somewhere that could be amended to stop Leopard spending forever looking for home?

    Any help would be very gratefully received.

    Thanks, Gordon


  • MacBook Pro 15" with i7 processor: any problems with heat?

    - by webworm
    You may have already heard about the review done by the folks at PC Authority in Australia, where an i7 MacBook Pro reached 100 degrees Celsius during benchmarking. Here is the URL in case you have not read it:
    http://www.pcauthority.com.au/News/172791,macbook-pro-helps-core-i7-hit-100-degrees.aspx

    In any case, I was considering purchasing a 15" MacBook Pro with the i7 processor and the NVIDIA GeForce GT 330M with 512 MB of video memory. Having read how hot the computer got, I started to become hesitant about purchasing. My main concern is long-term damage to the computer due to excessive heat.

    I plan to use the MacBook Pro as a development machine, where I will be running Windows 7 within VMware Fusion or VirtualBox. Within the VM I will be running IIS, SQL Server, Visual Studio, and SharePoint Server; hence why I would like the power of the i7 processor. That is why I wanted to check with actual owners of the MacBooks with the i7 processor and hear what their experiences have been. Have you noticed excessive heat? How does your MacBook handle processor-intensive apps over long periods of time? Thank you!


  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server, and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All the following occurs on the client.

    When I disable SSL by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd, and there is a long timeout period before it asks for any password:

        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        ..... long delay .....
        New SMB password:
        Retype new SMB password:
        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
        Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf, and /etc/pam_ldap.conf, it started working. So the relevant TLS sections in all those files are:

        ssl start_tls
        tls_checkpeer no
        tls_cacertfile /path/to/ca-root.pem
        TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continued giving the error. I tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package):

        verify="optional"
        cafile="/path/to/ca-root.pem"

    But as I see it, smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities.

    My question is: how can I set which SSL CA certificate is used by the smbpasswd program?
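
    Two checks that may help isolate whether the certificate chain itself is at fault (the hostname is a placeholder; the second command assumes the directory server also listens on ldaps/636):

        ldapsearch -x -ZZ -H ldap://ldap.example.com -d 1 -b "" -s base   # -d 1 dumps the TLS negotiation step by step
        openssl s_client -connect ldap.example.com:636 -CAfile /path/to/ca-root.pem
                                                                          # should end with "Verify return code: 0 (ok)"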


  • How do hdparm's -S and -B options interact?

    - by user697683
    These two options seem confusing. For example, according to the man page, -B 254 "does not permit spin-down". However, testing with -B 254 -S 1, the drive does spin down after 5 seconds. The man page says:

        -B  Query/set Advanced Power Management feature, if the drive supports it. A low value
            means aggressive power management and a high value means better performance.
            Possible settings range from values 1 through 127 (which permit spin-down), and
            values 128 through 254 (which do not permit spin-down). The highest degree of
            power management is attained with a setting of 1, and the highest I/O performance
            with a setting of 254. A value of 255 tells hdparm to disable Advanced Power
            Management altogether on the drive (not all drives support disabling it, but
            most do).

        -S  Put the drive into idle (low-power) mode, and also set the standby (spindown)
            timeout for the drive. This timeout value is used by the drive to determine how
            long to wait (with no disk activity) before turning off the spindle motor to save
            power. Under such circumstances, the drive may take as long as 30 seconds to
            respond to a subsequent disk access, though most drives are much quicker. The
            encoding of the timeout value is somewhat peculiar. A value of zero means
            "timeouts are disabled": the device will not automatically enter standby mode.
            Values from 1 to 240 specify multiples of 5 seconds, yielding timeouts from 5
            seconds to 20 minutes. Values from 241 to 251 specify from 1 to 11 units of 30
            minutes, yielding timeouts from 30 minutes to 5.5 hours. A value of 252 signifies
            a timeout of 21 minutes. A value of 253 sets a vendor-defined timeout period
            between 8 and 12 hours, and the value 254 is reserved. 255 is interpreted as 21
            minutes plus 15 seconds. Note that some older drives may have very different
            interpretations of these values.
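
    To make the -S encoding concrete, a sketch (the device name is illustrative; the arithmetic follows the man page excerpt above):

        # -S 12  -> 12 * 5 s   = 60 s idle timeout
        # -S 241 -> 1 * 30 min = 30 min idle timeout
        sudo hdparm -B 127 -S 12 /dev/sda   # APM level that permits spin-down, 60 s timer
        sudo hdparm -B 254 -S 0  /dev/sda   # maximum performance, spindown timer disabled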


  • Looking for an application to record audio and video on a Linux "embedded" device

    - by Luke404
    I am working with a Linux x86 device with limited CPU resources (as a prototype we just use a Pentium M netbook). We'd like to record video from one V4L2 device (we'll probably end up using just USB Video Class devices, like all modern webcams) and one audio stream from an ALSA source. The thing will have no screen and keyboard, and obviously no X11 environment. The goals are:

    - do as little work as possible to cope with the limited CPU resources; for example, I'd like to record video in the native MJPEG I get out of the UVC devices
    - encoding audio to MPEG Audio Layer II (aka mp2) is OK, since it lets us save a lot of space (compared to raw PCM samples) and uses little CPU power
    - I don't mind losing some video frames here and there (UVC devices do that) as long as I can keep the audio and video streams synchronized
    - it must not require user input to start (a Python script takes care of initialization, startup, shutdown, etc.)
    - we must be able to open the resulting files for postprocessing without too much effort (i.e., if mplayer or VLC can play it, it's fine)

    So far the only app I found that could be started from the command line and record V4L2 video + ALSA audio is MEncoder, but I'm having some difficulties with it. It should be able to do that, but I cannot record audio and video together, just one of the two. And if I use two different processes to record to two different files, I have no means to get them in sync (audio is more or less always correct, but the video framerate will vary over time, and it seems to lack the timestamps needed to play it back at the correct speed).

    Long story short: how do you record an unconverted MJPEG stream (from a UVC device) and an audio stream (from an ALSA device, possibly encoding to any standard format) using a command line tool, to a single file (MPEG or any other container), keeping audio and video in sync?
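
    For comparison, a minimal sketch of how this might look with ffmpeg instead of MEncoder (assuming a reasonably recent build with v4l2 and ALSA input support; device names, framerate, and bitrate are illustrative):

        ffmpeg -f v4l2 -input_format mjpeg -framerate 15 -i /dev/video0 \
               -f alsa -i hw:0 \
               -c:v copy -c:a mp2 -b:a 128k output.mkv
        # -c:v copy stores the camera's MJPEG untouched (no re-encode, so almost no CPU cost);
        # muxing both streams in one process keeps them timestamped against the same clock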


  • How can I recover a Fedora 12 installation that is showing signs of disk errors?

    - by Bob Cross
    I am currently overseas (i.e., very far from my normal library of tools), and my primary machine, which would normally act as the data server in the performance test we're trying to run, is failing to boot into Fedora 12 properly. This is a machine that, as of yesterday, was booting fine. This morning, however, very strange portions of the boot process were complaining with messages such as "unexpected 0x0 in rpcbind" and "bad file descriptor" (I don't have the exact errors in front of me; I scavenged a Windows installation to get onto Server Fault). Eventually, the boot hung for a long time at the NFS service and then brought up what looked like the KDE login screen, but neither the mouse nor keyboard functioned.

    In olden days, I would try to get to a point where I could manage to run fsck and pray that the bad sectors would come back into alignment just long enough for me to scrape the critical data off of the machine. However, now that we live in the future, it seems like our options in situations like this should be a little more varied. Is there a way to recover a Fedora 12 installation with bad disk sectors that won't boot properly?

    For completeness: I am comfortable working with bootable recovery distros-on-CD and such, but I don't know which one is likely to work best with modern Fedora. In the absence of guidance, I'm frantically torrenting the Fedora 12 Live CD and DVD, hoping to try rescue mode before tomorrow morning.
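
    If the disk really is developing bad sectors, one conservative sequence from any live CD is to image first and repair the copy, so fsck never writes to the failing drive. A sketch with GNU ddrescue (device, mount points, and filesystem type are illustrative):

        ddrescue -n  /dev/sda1 /mnt/usb/root.img /mnt/usb/root.log   # fast pass, skip bad areas
        ddrescue -r3 /dev/sda1 /mnt/usb/root.img /mnt/usb/root.log   # retry the bad sectors up to 3 times
        fsck.ext4 -y /mnt/usb/root.img                               # repair the image, not the disk
        mount -o loop,ro /mnt/usb/root.img /mnt/rescue               # then copy the critical data off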


  • What to look for in a reliable backup hard disk?

    - by Senthil
    I want to buy an internal hard disk and use a docking station along with it for backing up important data. The size will be around 500 GB to 1 TB. I have a budget, and several models fit into it. So far, they only seem to vary in size, speed, and brand; these are the only things I can compare from the specs. I guess asking which brand is best is completely subjective, so I won't do that. I want my disk to have a long life and be reliable. It doesn't matter if it is somewhat slow.

    - Size: Should I go for the one with the highest size within my budget? Will higher density cause problems? Or should I go for a moderately sized one? Does the number of platters have an impact?
    - Speed: I do not want high performance. I want it to be reliable and last long. I am definitely not going to choose the expensive 10,000 rpm ones. Should I go for 5400 or 7200 rpm? Do these numbers affect longevity and reliability?
    - Are there any other technical and objective factors that I should look for?


  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short

    I am working with tar archives that contain PNG images in base64 encoding. I would like to use Bash (or whatever else works) to hook into the extraction function of tar, to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple

        base64 -d <"$input_file" >"$output_file"

    will successfully decode an image. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images?

    In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions to be hooked into various moments of tar program execution. However, the documentation explains that these variables, along with other variables that can be set to configure tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.

    Background Information not included in the Long Story Short

    I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited ability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving the effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64.
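
    For what it's worth, backup-specs appears to configure GNU tar's separate backup/restore scripts rather than tar -xf itself, so a thin wrapper may be the simplest hook. A sketch (the script name is made up; it decodes every extracted .png in place):

        #!/bin/bash
        # untar-decode.sh <archive.tar> <dest-dir>
        set -e
        mkdir -p "$2"
        tar -xf "$1" -C "$2"
        # decode every extracted PNG from base64 in place
        find "$2" -name '*.png' -print0 |
        while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done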


  • Two parts: Linux startup script to connect to Bluetooth, and cron to keep it connected

    - by D.R.
    I have a mini Bluetooth keyboard and a Raspberry Pi running a Debian-based distro. I know the MAC address of the keyboard, but for this question let's just use AA:BB:CC:DD:EE:FF. Right now I have to keep a wired keyboard connected as well as the Bluetooth dongle for the mini keyboard, and on the wired keyboard I have to run the following when the device boots up:

        sudo hidd --connect AA:BB:CC:DD:EE:FF

    If the device goes idle for too long, the Bluetooth disconnects and I have to pull out my wired keyboard and retype that same command. What I'm looking for is a way to have that command run at startup, and a way to sense whether it gets disconnected so that it will auto-reconnect. The annoying thing is that the keyboard has to be in pairing mode (even though it has already been paired) when I run that command; otherwise it tells me the host is down. So perhaps the script needs to prevent it from disconnecting due to inactivity; otherwise I'll have to put it back in pairing mode to reconnect.

    So to recap:
    - A script to connect at startup (I can make sure to put the keyboard into pairing mode before turning it on)
    - A script to prevent it from disconnecting (maybe some sort of signal to send to it every 60 seconds or something?)

    Any help with this is greatly appreciated! StackOverflow is always the best place to find answers to weird questions! I've been searching long and hard for an answer, but finally had to resort to coming here! Thanks!
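
    A sketch of the watchdog half, suitable for a one-minute cron entry (it assumes the legacy BlueZ tools that provide hidd and hcitool; whether polling the link is enough to keep the keyboard from idling out is an open question):

        #!/bin/bash
        # bt-kbd-watchdog.sh: reconnect the keyboard if the HID link has dropped
        MAC="AA:BB:CC:DD:EE:FF"
        if ! hcitool con | grep -qi "$MAC"; then    # hcitool con lists active Bluetooth connections
            hidd --connect "$MAC"
        fi

    and in the crontab:

        * * * * * /usr/local/bin/bt-kbd-watchdog.sh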


  • What parts of a motherboard age, and how can I choose one with the longest possible life?

    - by Robert Harvey
    I have a home-built computer that's probably about four years old. I realize this probably seems ancient to some folks, but computers have no moving parts (except the fans), so theoretically they should last a long time, as long as I still have software to run on them.

    A few weeks ago, it began blue-screening and freezing up, with various error messages, almost always about five minutes after startup. I assumed that the video card was overheating, since the cheap little fan on the heatsink had died, so I replaced it. Long story short, after upgrading the video drivers a couple of times and performing some other troubleshooting, I remembered that the last time this happened, I took out the memory DIMMs and cleaned the contacts with a gum eraser, so I did that again (noting that the SATA cables were very close to the chips on the DIMMs). I re-routed the cables and reinstalled the DIMMs. So far, so good; the machine has been trouble-free since.

    But blue screens are distressing; I never know what bits are being chewed up in my OS installation when something like this happens. So I'm wondering if I'm choosing my components properly. If it matters, it's an Intel D915GAG motherboard and Corsair memory. What I'm wondering is: should I be looking for certain characteristics when I choose these parts for my next computer, so that I can avoid this problem in my next build?


  • SSD seems dead after wakeup from Windows Sleep, BIOS stalls but doesn't find it anymore

    - by Abel
    This morning, the following scary scenario happened:

    1. I woke up my Windows system
    2. I typed in my username and got an error (something like "could not load security xxx", but I'm unsure of the exact wording)
    3. The system auto-restarted after clicking OK
    4. It didn't boot up anymore to the SSD holding the Windows 7 OS (I have another disk I can boot to, but that doesn't see the disk either)

    Obviously, this happened right after I initiated a backup procedure, which hasn't succeeded either. The BIOS can't find the drive when I connect it to SATA, and it can't find the drive when I connect it to SAS. I have a Dell workstation T7400 with the most recent BIOS (version A06); the version of the SAS Host Bus Adapter BIOS (HBA) is MPTBIOS 6.14.10.00 (2007.09.29) from LSI Logic Corp.

    Other findings:

    - When connecting to SATA, the Dell logo screen stays up really long (5 minutes), and then at the end of POST it says that a drive is not found
    - When connecting to SAS, the SAS HBA initializing phase takes long (2 minutes, against normally 15 seconds)
    - When running Dell Diagnostics, it doesn't finish, and it gives the error: Exception occurred in module MPCACHE.MDM file "IOAPICSP.ASM" line 1645

    I contacted Dell. On their advice I tried different slots and different cables, to no avail. I use an APC battery backup, so spikes in the power are unlikely. My conclusion so far: the disk is dead.

    I need this disk very badly because it contains the last few days of important development, of which not all code was checked in the moment this happened. Are there any ways to recover dead SSD drives? The drive is a new Intel X25-M G2 160 GB, model SSDSA2M160G2GC, 2.5" in an extension bay, and it had been running without issues for 3 months on SAS.


  • MySQL InnoDB inserts very slow

    - by 133794m3r
    The database's schema is as follows:

        CREATE TABLE `items` (
          `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(45) NOT NULL,
          `main_type` tinyint(4) NOT NULL,
          `rarity` tinyint(4) NOT NULL,
          `stack_size` smallint(6) NOT NULL,
          `sub_type` tinyint(4) NOT NULL,
          `cost` mediumint(8) unsigned NOT NULL,
          `ilvl` smallint(6) unsigned NOT NULL DEFAULT '0',
          `flavor_text` varchar(250) NOT NULL,
          `rlvl` tinyint(3) unsigned NOT NULL,
          `final` tinyint(4) NOT NULL DEFAULT '0',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=ascii;

    Doing an insert on this table takes 0.22 seconds, and I don't know why a single-row insert takes so long. Reads are really, really fast, something like 0.005 seconds. Using the example configuration from the MySQL InnoDB documentation, it averages ~0.002 to ~0.005 seconds. Why an insert takes more than 100x longer makes no sense to me.

    My computer is as follows. OS: Debian Sid x86-64, MySQL 5.1, RAM: 4 GB DDR2, CPU: 2.0 GHz dual core, HDD: 7200 RPM, 32 MB cache, 640 GB.

    Why it takes almost 100x as much time for an INSERT INTO items ...; as for a SELECT * FROM items; will never make sense to me. It's still a small table at only 70 rows, and an insert took that long even when it had 0 rows.
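
    For what it's worth, a per-insert cost in this range is often the log flush on commit: with the default innodb_flush_log_at_trx_commit = 1, every autocommitted INSERT waits for an fsync to the disk. A sketch of two things to try (the column values are made up):

        -- batch inserts into one transaction so they share a single log flush
        START TRANSACTION;
        INSERT INTO items (name, main_type, rarity, stack_size, sub_type, cost, ilvl, flavor_text, rlvl, final)
          VALUES ('copper sword', 1, 0, 1, 2, 150, 5, 'A dull blade.', 3, 0);
        INSERT INTO items (name, main_type, rarity, stack_size, sub_type, cost, ilvl, flavor_text, rlvl, final)
          VALUES ('oak shield', 2, 0, 1, 4, 200, 6, 'Sturdy enough.', 4, 0);
        COMMIT;

        -- or trade a little durability for speed: write the log at commit
        -- but fsync it only about once per second
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;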


  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as that file holds only 3-4 zones, the bind9 daemon loads fine, but when I use the config file I actually need (about 10000 lines long), BIND can't start up, and in the syslog I find this:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the config file size bind9 is allowed to load?
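
    For what it's worth, the failure at line 1 ("unknown option 'zone'") looks like a context or syntax problem rather than a size limit, e.g. the include statement landing inside another block where a zone statement is not valid. A quick way to check, assuming the stock BIND tools:

        named-checkconf /etc/bind/named.conf    # parses the whole config, includes and all,
                                                # and reports the first real error with file and line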


  • RAID 0 data recovery?

    - by Fred
    Hi all,

    I have two identical Seagate 7200.9 500 GB drives configured as a RAID 0 spanned disk in Windows. One of the drives has lost power and won't spin up at all. I know this normally means death for the data on both drives, but I have a cunning plan:

        DISK 1 - no power, RAID 0 disk
        DISK 2 - fully functional, RAID 0 disk
        DISK 3 - fully functional, spare disk

    Copy the working drive's (disk 2) data to the third 500 GB disk (disk 3), remove the logic board from the working disk (disk 2) and fit it to the broken drive (disk 1), then hopefully recreate the RAID 0 with disk 1 and disk 3 just long enough to get the data off it. I hope this makes sense. Here are my questions:

    1. Windows Disk Manager currently recognises disk 2 but won't let me access it in any way, so copying the data off it (or taking a disk image) can't be done in Windows. Does anyone know of any software (in Linux, or self-booting) that would allow me to access this disk?
    2. Does anyone know of any software that will recreate the spanned drive from the two disk images?
    3. Am I missing any key information that means I definitely shouldn't even bother starting this? I know it's a long shot anyway, but it's worth a try unless I definitely can't do it.

    The irritating thing is that I am sure it's a logic board failure on disk 1, as it simply won't power up at all: suddenly, no signs of life. So I am sure the data is intact!

    Any help would be really appreciated! Thanks
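
    On questions 1 and 2, a speculative sketch from a Linux live CD (GNU ddrescue for the imaging; the mdadm --build step only works if the chunk size and disk order happen to match what Windows used for the stripe, so treat it as an experiment on copies, never on the originals):

        ddrescue -n /dev/sdb /mnt/spare/disk2.img /mnt/spare/disk2.log   # image the readable member
        # ...after the board swap, image disk 1 the same way...
        losetup /dev/loop1 /mnt/spare/disk1.img
        losetup /dev/loop2 /mnt/spare/disk2.img
        mdadm --build /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/loop1 /dev/loop2
        mount -o ro /dev/md0 /mnt/rescue   # if the guess is right, the volume appears here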


  • iSCSI performance questions

    - by RyanLambert
    Hi everyone, apologies for the long-winded post in advance...

    I'm attempting to troubleshoot some iSCSI sluggishness on a brand new vSphere deployment (still in test). The layout is as follows: 3 vSphere hosts, each with 2x 10 GbE NICs plugged into a pair of Nexus 5020s, with a 10-gig back-to-back link between them. The NICs are port-channeled in an active/active redundant fashion (using vPC mac-pinning, for those of you familiar with the N1KV). Both NICs carry service console, vMotion, iSCSI, and guest traffic. iSCSI is on a single subnet/single VLAN that is not routed through our IP network (strictly layer 2).

    Had this been a 1-gig deployment, we probably would have split the iSCSI traffic off onto separate NICs, but the price per port gets rather ridiculous when you start throwing 4+ NICs at a server in a 10-gigabit infrastructure, and I'm not really convinced it's necessary. Open to dialogue/tech facts on this, though.

    At this point, even a single VM guest will boot slowly from iSCSI storage (an EMC CX4 on the same Nexus 5020 10-gig switches), and restores of VMs from iSCSI take about twice as long as we'd expect. Our server folks mentioned that if we split the iSCSI off onto its own NIC, performance seems significantly better. From a network perspective, I've run through the variables I can think of (port configuration errors, MTU problems, congestion, etc.) and I'm coming up dry. There really is no other traffic on these hosts beyond the very specific test being performed at the time. An important thing to note is that guest traffic works just fine... it seems storage is the only thing affected by whatever gremlin exists.

    Concluding that we're not 'overutilizing' the network infrastructure, since we're doing hardly anything, I'm just looking for some helpful tips/ideas we can use to resolve this... preferably without hurling extra 10-gig NICs at it that are going to sit at around 10% utilization while we've got 70+% left on our others.
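
    On the MTU angle, one quick end-to-end check from each host is to ping the array at full frame size with fragmentation disabled (vmkping ships with ESX/ESXi; the target IP is a placeholder, and 8972 assumes jumbo frames end to end; use 1472 for a standard 1500-byte MTU):

        vmkping -d -s 8972 192.168.50.10   # -d sets don't-fragment; a failure here, when normal
                                           # pings succeed, points at an MTU mismatch on the path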


  • My internet drops and will only reconnect if I troubleshoot it or restart my computer

    - by Paul
    My internet loses its connection; at the same time, my sound cuts out and my mouse stops moving as smoothly as before. It has happened before and I didn't mind it, but now it's happening literally all the time. After startup, my connection seems OK, but after a few minutes, even without me doing anything, my laptop just loses its connection. It says that it is not connected and that connections are available, but if I check for connections to join, it shows nothing, even though there actually is a Wi-Fi network that my phone and other laptops connect to. They don't seem to have a problem with it; it's just my laptop.

    When that happens, I usually restart my laptop, but I discovered that troubleshooting it sometimes repairs the problem, though not every time. It says that restarting my wireless adapter does the job. Is there any long-term fix, something that could last or totally erase the problem?

    I have an Atheros AR5B97 Wireless Network Adapter and a Microsoft Virtual WiFi Miniport Adapter, both enabled. In Services, I have Wired AutoConfig and WLAN AutoConfig set to Automatic. Aspire 4750, Windows 7 (x86).
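
    Since the troubleshooter's adapter restart is what fixes it, the same reset can be scripted from an elevated command prompt as a stopgap while a real fix is hunted down (the interface name is illustrative; the first command lists the actual one):

        netsh interface show interface
        netsh interface set interface name="Wireless Network Connection" admin=disabled
        netsh interface set interface name="Wireless Network Connection" admin=enabled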


  • Nginx: Proper use of limit_req_zone and limit_req

    - by xperator
    I have 2 websites running on a VPS. Their purpose is sharing music files and publishing news, and both of them use WordPress. What I am trying to do is prevent little hackers from flooding the webserver and putting enough stress on the server to make it crash. The problem is that after using limit_req_zone and limit_req, my websites became very slow: browsing the WordPress control panel takes a long, long time. I tried changing the values, but it didn't improve much. I guess the problem is WordPress, because it's the only script I am using on both the front and back end. Here is the last setting, which seems to be more responsive than the others:

        limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m;

        location ~ \.php$ {
            limit_req zone=flood burst=100 nodelay;
        }

    What are the optimal values that should be used in my case (WordPress)? I want the website to keep its normal behavior while, on the other hand, stopping lifeless people from flooding it. Another question: is it safe and sufficient to use limit_req only on PHP files?
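
    For context: rate=10r/m admits only one request every six seconds per IP, and a single WordPress admin page can fire several PHP requests at once, so legitimate users eat through the burst almost immediately. A sketch of a looser zone (the numbers are illustrative starting points to tune, not recommendations):

        limit_req_zone $binary_remote_addr zone=flood:10m rate=2r/s;

        location ~ \.php$ {
            limit_req zone=flood burst=10 nodelay;
            # existing fastcgi_pass / include directives stay here
        }

    Limiting only the PHP locations is a common choice, since static files are cheap to serve; whether it is "enough" depends on whether the floods you see actually target PHP URLs.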


  • How to make Project auto-estimate duration based on work?

    - by Bruno Brant
    This one has bothered me for a long while. I like to do estimates by thinking about how much work a certain task will take (I'm in the IT business), so let's say it takes 12 hours to build a program. Now, say I tell Project that my start date is today. If I allocate one resource to the task, that should mean the task lasts 1.5 days, implying that it will end tomorrow. But right now, that is not what it's doing: I say that the task will take 1 hour, and when I add a resource to it, Project allocates the resource at a [13%] basis, which means the duration is still fixed... Project is trying to make the task last a whole day.

    I have, on many occasions, accomplished what I want: I build a plan based on these rough effort estimates, then allocate tasks to resources. Times conflict, so I level resources, and then Project magically tells me how long, in days, it will take. But every time I start estimating again, I end up struggling to make Project work like that.


  • Wireless very slow on one laptop on network, all other machines normal?

    - by th3dude19
    My new laptop (Acer Aspire Timeline 3810TZ running Windows 7 Home Premium 64-bit) is acting very strangely on my wireless network. These are the issues I'm noticing:

    - The connection frequently drops. I occasionally see the icon change from 'full bars' to 'empty bars with a yellow star' (meaning no connection).
    - Almost every website I visit (in Firefox) hangs for a long time on 'Looking up www.amazon.com', for example. After a long pause, it finally starts loading the website.

    Neither of these problems exists on any other machine on my network. I also have a desktop running the same OS wirelessly, and it works fine. I've run several Speedtest.net tests and the speeds are great (20 Mbit down / 4 up). Results from pingtest.net are as follows:

        Line quality: D
        Ping: 46 ms
        Jitter: 65 ms
        Packet loss: 9%

    These results are against a server that is less than 10 miles from my residence. The results on the other machines in my house are normal. Any suggestions? This is becoming very annoying, as I purchased this machine primarily for browsing.
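
    The hang at 'Looking up...' is the DNS resolution phase, so one thing worth trying is to separate name resolution from raw connectivity (the interface name and resolver below are illustrative):

        nslookup www.amazon.com
        rem slow or failing output above confirms DNS rather than bandwidth
        netsh interface ip set dns name="Wireless Network Connection" static 8.8.8.8
        rem pins a known-good resolver on the wireless adapter as a test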


  • Laptop Most Likely to Have Good Driver Support

    - by ShabbyDoo
    Through numerous bad experiences, I have learned that the most likely cause of laptop "failure" is the lack of updated drivers for new operating systems. As an example, I have a perfectly good ThinkPad T42 at home which runs Windows 7 just fine for my purposes, except that no compatible ATI video drivers are available, and the generic drivers have flicker effects. I recently saw an ASUS laptop which looked quite nice, except that I would be beholden to them to release ATI video driver updates customized for it, and I can't trust them to do that for more than six months.

    What laptops (manufacturer/line) should I consider so that I can expect at least a couple of years of frequent updates? I plan on running Windows 7 and installing whatever successor comes out. I like Intel components (especially Wi-Fi) because I can install their drivers directly from Intel, and they have a long history of providing updates for years after shipping a particular component. More generally, components from companies that are likely to update drivers frequently are good, as long as I can install the component manufacturer's drivers without laptop-specific customization (unlike the ATI drivers). Also, if a component can be replaced easily, I am less concerned. For example, Dell stopped pumping out updated drivers for one of its mini-PCI Wi-Fi cards. The solution was to buy an Intel replacement on eBay for $12! That's fine, I can deal with that.

    So, what laptops should I consider so that I'm not likely to be stuck between a rock and a hard place?


  • Win7 - Opening "Programs and Features" as Admin from command line (logged in as regular user)

    - by user1741264
    We have Win7 machines on a domain, and we'd like to open the "Programs and Features" control applet from the command line while a regular user is logged in. Here's the catch: I know how to do this using runas from the command line, BUT after "Programs and Features" opens, I don't truly have the ability to remove a program; I am told that I need to be an admin to do so. Here are the commands I have tried:

        runas /user:%computername%\administrator cmd.exe
            (then, in the new cmd window: control appwiz.cpl)
        runas /user:%companydomain%\%domainadminacct% cmd.exe
            (then, in the new cmd window: control appwiz.cpl)
        runas /user:%computername%\administrator cmd.exe
            (then, in the new cmd window: rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl)
        runas /user:%companydomain%\%domainadminacct% cmd.exe
            (then, in the new cmd window: rundll32.exe shell32.dll,Control_RunDLL appwiz.cpl)

    I have also tried all of the above as one long command line instead of launching cmd.exe as admin first. As you can see, I have tried running the command using both a local admin account (Administrator) AND a domain admin account, and I have tried both launching runas as one long command (opening "Programs and Features" directly) AND first launching cmd.exe with admin rights and THEN launching "Programs and Features". The result is the same: the "Programs and Features" window opens, but when I try to perform an uninstall, I am told I need admin rights. Thus I am led to believe that this instance of "Programs and Features" is not truly being run as an admin.

    I am trying to avoid logging the regular user out. I am also aware that every program has its own uninstaller; I do not want to uninstall that way. I want to use the uninstaller in "Programs and Features". Any help is appreciated.
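
    A plausible explanation is UAC: on Windows 7, runas starts the process with the admin account's filtered (non-elevated) token, so the applet still lacks admin rights. One workaround sketch uses Sysinternals PsExec, whose -h switch requests the elevated token (the account is a placeholder):

        psexec -h -u %companydomain%\%domainadminacct% cmd.exe
        rem then, in the elevated prompt:
        control appwiz.cpl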


  • libvirt's dnsmasq does not respond to DNS queries or provide DHCP

    - by Jeremy
    This is on Ubuntu 10.04 server, using KVM to run Ubuntu guests. This system has been working for a long time and I have not changed anything (other than applying security updates), but today I found that dnsmasq no longer responds to requests. I cannot say how long this has been broken, because I don't frequently use the NAT'd guests, so it could have started just after the last updates or some other event and I only found it now.

    I can connect to port 53 with telnet at 192.168.122.1. I've flushed iptables to be sure it wasn't firewall rules; that is not the problem. dnsmasq is running, and virsh reports the default network as started. I can't find ANY information on troubleshooting libvirt's dnsmasq, except that it won't play well with other instances of dnsmasq, which is not the problem here. I cannot even find where the log entries for this service might be. Any ideas on where to look for more information?

    Edit to add: I added another network, and that one works fine. I guess I have a workaround, but I would still like to figure out how to troubleshoot this problem.
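
    A few probes that may narrow it down (they assume the default libvirt network; the test hostname is illustrative):

        dig @192.168.122.1 example.com       # does the listener on 53 actually answer DNS?
        ps -ef | grep [d]nsmasq              # confirm which instance is running and with what flags
        sudo virsh net-destroy default && sudo virsh net-start default
                                             # recreates the network, its dnsmasq, and the iptables rules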


  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database on SUSE (potentially migrating to Red Hat). Our database is approximately 100 GB and performs adequately on our current x86_64 hardware with approximately 6 GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value of each license by choosing the most appropriate CPU to run the software on. The questions are:

    - Are there substantial benefits to looking at Itanium or SPARC hardware, and are there any drawbacks? Is there a point where one starts to scale out better?
    - What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long-term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium or SPARC over x86_64? Is this an issue at all, or will other factors (I/O, RAM) cap out first?

    If anyone can point me towards solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Edit: added SPARC as an option, as it was previously not considered; with the recent Oracle-Sun acquisition it seems very relevant.


  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on the exact behavior of browsers when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):

    1. Will the browser get 2 IPs from the OS, or will it get only one?
    2. Which IP will the browser try first (random, or always the first one)?
    3. Now, let's say the browser started with the failed ip1. For how long will the browser try ip1?
    4. If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try?
    5. What will happen when it times out: will it start trying ip2, or give an error? (And if an error, which IP will the browser try when the user clicks refresh?)
    6. When the user clicks refresh, will any browser attempt a new DNS lookup?
    7. Now let's assume the browser tried the working ip2 first. For the next page request, will the browser still use ip2, or may it randomly switch IPs?
    8. For how long do browsers keep IPs in their cache?
    9. When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch, where it may try either of the two?

    Of course it all may be browser dependent, and may also vary between versions and platforms; I'd be happy to have the maximum amount of detail. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails.

    Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever", and so on.
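
    As a starting point for experiments, it helps to see what the OS-level resolver actually hands to applications, and in what order, before any browser-specific logic kicks in (the hostname is illustrative):

        getent ahosts www.example.com   # addresses in the order the system resolver returns them
        dig +short www.example.com A    # the raw A records from DNS; repeat to watch round-robin rotation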

