Search Results

Search found 1757 results on 71 pages for 'jeremy smith'.

Page 21/71 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Losing partitions after every reboot

    - by Winston Smith
    I have an Acer laptop with one hard disk, which up until yesterday had 4 partitions: Recovery Partition (13GB), C: (140GB), D: (130GB), and OEM Partition (10GB). I read that the OEM partition has all the stuff needed to restore the laptop to the factory settings, but since I'd already created restore disks and I needed the space, I wanted to get rid of it. Yesterday, I used diskpart to do that. In diskpart, I selected the OEM partition and issued the delete partition override command, which removed it. Then I extended the D: partition into the unused space using Windows Disk Management. Everything worked fine until I rebooted my laptop, at which point the D: drive vanished. Looking in Windows Disk Management again, I can see that there's an OEM partition of 140GB, which is obviously my D: drive. So I used EASEUS Partition Master and assigned a drive letter to the 'OEM' partition and I was able to access my files again. However, every time I reboot, it reverts back. How do I fix this permanently?
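
    A common cause (a guess, since it isn't stated in the question) is that the volume's partition type ID is still the OEM one, so Windows re-hides it on every boot. Assuming an MBR disk, a diskpart session along these lines changes the type to a standard NTFS data partition; the partition number below is a placeholder:

        diskpart
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> select partition 3     (whichever 140GB partition shows as OEM)
        DISKPART> detail partition       (note the current type ID)
        DISKPART> set id=07 override     (07 = ordinary NTFS data partition on MBR)
        DISKPART> assign letter=D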

    Read the article

  • Odd domain switching behavior in Firefox and Chrome

    - by Jeremy Detrempe
    We have different development servers and a production server. Testing is done on the development servers. As a QA engineer, I'm switching between these servers quite often throughout the day. In Chrome, sometimes I need to reload a page a few times to get it to pull from the newly switched server. In Firefox, sometimes I need to quit the browser in order to get it to pull from the newly switched server. (We have small tags that indicate which server you are pulling from, which is how I know in-browser.) Why does that happen? I'd love to know how that happens (maybe what it's called?) and what the best way to deal with it is. (I know that Firefox has an extension for domain switching; is that the best solution?)

    Read the article

  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by Akamai)? [closed]

    - by Andrew Smith
    I have just set up my website as a static page served via Rackspace CDN / Akamai. The DNS chain looks like this:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP headers:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS, which is not accurate; because of that, the ping from many places is above 300ms (for example, if the WHOIS record places the ISP in Bangalore, requests keep being routed to Bangalore even if the resulting ping is 300ms, and it stays that way for a month). By contrast, just using Amazon Web Services with Route 53's Anycast DNS servers and only 4 EC2 instances, India for example is always below 100ms, while with Akamai it goes above 300ms in some cases, because Route 53 is using BGP. Quickly checking Akamai, it seems that they are not getting feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / Anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, but I don't know how much it costs to deploy a static website there. Actually, I don't know whether EdgeCast's numbers hold up either, because from isolated places there are many errors, so their performance may come at the cost of delivery quality, with BGP switching routes during transfers of large files. So I am wondering what Akamai is really good for, because they don't seem to have a clear strength in anything I understand so far, apart from the software-based WAF they offer on their website, but what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
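
    If the application itself can't be changed, one workaround worth testing (a sketch only; the tool name, flags, and paths below are made up) is to let it write to local disk and then push the finished file to the share in a single sequential copy, which matches the fast plain-copy behaviour already observed:

        rem Hypothetical wrapper batch file
        set OUT=%TEMP%\generated.dat
        generator.exe --source \\nas\data\a.dat --source \\nas\data\b.dat --output "%OUT%"
        robocopy "%TEMP%" \\nas\dest generated.dat /Z /NP
        del "%OUT%"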

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a 3 year old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with Windows 7 and a fresh install. Before I do the fresh install I want to create a Virtual Image of the laptop that I can keep and potentially run on my desktop machine should I ever need to access any of the old files/projects that it contains currently. I know that most people will say just copy the files over to your desktop, but my concern is the configuration of the laptop. I used to use it for development and it has older versions of Visual Studio, SQL Server, Active X controls etc, etc than I currently use so I really want to preserve the environment not just the files. So really I am asking what is the best tool-set/method to achieve this? I understand there are free VM tools available but I have never done this before and would appreciate any help.

    Read the article

  • Powershell script to append an extension to a file, input from CSV

    - by Jeremy
    Hi all, all I need is to have an Excel list of file paths and use PowerShell to append (not replace) the same extension onto each file. It should be very simple, right? The problem I'm seeing is that if I run

        import-csv -path myfile.csv | write-host

    I get the following output:

        @{FullName=C:\Users\jpalumbo\test\appendto.me}
        @{FullName=C:\Users\jpalumbo\test\append_list.csv}
        @{FullName=C:\Users\jpalumbo\test\leavemealone.txt}

    In other words, it looks like it's outputting the CSV "formatting" as well. However, if I just issue import-csv -path myfile.csv, the output is what I expect:

        FullName
        --------
        C:\Users\jpalumbo\test\appendto.me
        C:\Users\jpalumbo\test\append_list.csv
        C:\Users\jpalumbo\test\leavemealone.txt

    Clearly there's no file called "@{FullName=C:\Users\jpalumbo\test\leavemealone.txt}", and a rename on that won't work, so I'm not sure how best to get this data out of the import-csv command, or whether to store it in an object, or what. Thanks!!
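
    For what it's worth, the @{FullName=...} strings just mean Write-Host is stringifying the objects Import-Csv produces; the data underneath is intact. A minimal sketch of the rename (".bak" is an assumed extension, not from the question):

        # Each row from Import-Csv is an object with a FullName property.
        Import-Csv -Path myfile.csv | ForEach-Object {
            $leaf = Split-Path -Path $_.FullName -Leaf
            Rename-Item -Path $_.FullName -NewName "$leaf.bak"
        }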

    Read the article

  • libvirt's dnsmasq does not respond to dns queries or provide dhcp

    - by Jeremy
    This is on Ubuntu 10.04 server, using KVM to run Ubuntu guests. This system has been working for a long time and I have not changed anything (other than applying security updates), but today I found that dnsmasq no longer responds to requests. I cannot say how long this has been broken for me because I don't frequently use the NAT'd guests, so it could have started just after the last updates or some other event and I just now found it. I can connect to port 53 with telnet at 192.168.122.1. I've flushed iptables to be sure it wasn't firewall rules, and that is not the problem. dnsmasq is running, and virsh reports the default network as started. I can't find ANY information on troubleshooting libvirt's dnsmasq except that it won't play well with other instances of dnsmasq, which is not the problem. I cannot even find where log entries might be for this service. Any ideas on where to look for more information? Edit to add: I added another network and that one works fine. I guess I have a workaround but would still like to figure out how to troubleshoot this problem.
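
    A few starting points (generic suggestions, not from the question) for separating DNS, DHCP, and libvirt configuration problems:

        # Is libvirt's dnsmasq actually running, and with which options?
        ps aux | grep [d]nsmasq
        # What does libvirt think the default network should look like?
        virsh net-dumpxml default
        # Query the listener directly (dig is in the dnsutils package)
        dig @192.168.122.1 ubuntu.com
        # Is anything bound to :53 / :67 on the bridge?
        netstat -lnup | grep -E ':53|:67'
        # Lease/host files libvirt hands to dnsmasq (path varies by version)
        find /var/lib/libvirt -name '*lease*' -o -name '*host*'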

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • Search selected text in Firefox

    - by Jeremy Rudd
    What are the different Firefox extensions that can start a search with the selected text? Firefox has an inbuilt feature to search using the currently selected engine: select any text, right-click the selection, and choose "Search Google for ...". I'm looking for something that will let me choose which search engine I want to search with, from my current list of installed search engines.

    Read the article

  • Windows 7 hangs with 100% disk activity but only when online

    - by jeremy
    I have the same problem as seemingly many other people here, and I think we might all be experiencing the same issue: a compatibility issue in Windows 7 between the hard drive and the network controller or their drivers. I've tried firmware updates of my entire board, and wiping my drive and reinstalling from scratch. And yet the problem persists, which suggests it is an operating system error, as the hard drive checks out 100% physically. Additionally, the only time it does not occur is in safe mode WITHOUT networking. With networking, there are spikes in disk access every so often and a huge flow of processes accessing the disk simultaneously that literally "stick" the disk; physically jolting my computer unsticks it. Again, this has been tested for hours in a professional service environment, and without network access on, things are fine. As soon as there's network access available, the disk access occasionally cranks up to 100% and sticks everything. I'm using Microsoft Security Essentials, but this also happened under Norton, then McAfee. Again, this happened after a complete wipe, so the likelihood of malware causing it seems low. I don't visit insecure sites anyway, as far as I know. This, to me, narrows it down to a Windows 7 process that is somehow repeatedly corrupted, perhaps a corrupt .dll or driver, causing a conflict at the operating system level and temporary hard drive failure. I would encourage anyone who knows more about this stuff (which is probably most people!) to take a shot at this one, and I would encourage anyone else with a sticking hard drive in Windows 7 64-bit to check whether it occurs during safe mode without networking.

    Read the article

  • If I make a mail server, can I send bulk email?

    - by Jake Smith
    I work for a small company and we have fallen into the fad of "email campaigns", a.k.a. junk mail. So far the company has gotten a subscriber list from our website and paid a good chunk of change for an emailer program. The problem is, our list has close to 4,000 people on it and growing. With Gmail only allowing 100 emails per account through SMTP, and with me on a tight budget so I can't hire anyone else, I was thinking of running a dedicated mail server off of the website server we have running in the office. Is it possible to generate emails on your own server and then send them through your own SMTP? If it is, what software would I need, and is it free, or at least low cost? We run a WAMP server (I set it up just for information), but I could switch it to LAMP or whatever if need be. Thank you for your time and your answers.

    Read the article

  • How can I remove the DRM from books I have purchased for my Kindle? [closed]

    - by Jeremy Banks
    How can I remove the DRM from books I have purchased for my Kindle? A solution that works on Mac OS X and/or Linux would be ideal, but a Windows-only solution is also acceptable. Moral aside: I do not plan to share or pirate books; I am very happy with the Amazon Kindle store and have purchased more books from there since getting my Kindle a month ago than I had in the previous two years. However, I do not feel comfortable keeping them in an encrypted format that only a single company's software will allow me to access. I may also want to convert them into a format for other readers in the future, which is not possible without first removing the DRM.

    Read the article

  • Is it really necessary to call /bin/sync twice before an unmanaged power-off?

    - by Jeremy Friesner
    Hi all, my company sells an "embedded device" which is implemented as a headless Linux box with ext4 on an internal SSD. Some of our users have a habit of doing a "save current settings" on this box and then cutting power to the unit as soon as the unit reports that the save completed (i.e. two seconds later). This was causing occasional corruption of the saved files, as the data wouldn't always get flushed to the SSD before the power went out. So I tweaked my software to run /bin/sync immediately after writing the file (after closing the file handle but before notifying the user that the save completed). This appears to fix the issue, but my coworker says that one call to /bin/sync isn't sufficient, and that to be really safe I ought to run /bin/sync twice in a row. That sounds like paranoia to me... perhaps a habit from earlier versions of Linux or Unix whose sync utility didn't work reliably. Does his advice have merit, or should one call to /bin/sync suffice?
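
    Tangentially, a sketch of the per-file alternative that usually comes up in this discussion: instead of flushing every dirty buffer on the box with /bin/sync, flush just the settings file (and its directory entry) with fsync(2) before reporting success. The paths and function names here are made up:

        #include <fcntl.h>
        #include <unistd.h>

        /* A sketch, not the product code: flush this one file and its
         * directory entry so the save is durable before we report success. */
        int save_settings(const char *path, const char *dir, const void *buf, size_t len)
        {
            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0)
                return -1;
            if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
                close(fd);
                return -1;
            }
            close(fd);

            /* fsync the containing directory so the file's entry itself survives power loss. */
            int dfd = open(dir, O_RDONLY);
            if (dfd < 0)
                return -1;
            int rc = fsync(dfd);
            close(dfd);
            return rc;
        }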

    Read the article

  • Squid - Selective reverse proxy and forward proxy

    - by Dean Smith
    I'd like to set up a Squid instance to do selective reverse proxying for a configured list of URLs whilst acting as a normal forward proxy for everything else. We are building new infrastructure, parallel live as it were, and I want a proxy that people can use that will force selected traffic onto the new platform whilst just acting as a forward proxy for anything else. This makes it very easy for people/systems to test the portions of the new platform we want without having to change too much; they just use a proxy address. Is such a setup possible?
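
    Squid's cache_peer with originserver is the usual building block for this. A sketch only: the hostnames, IP and ACL names are placeholders, and the normal http_access rules for your clients still apply:

        # Domains that must be served by the new platform
        acl new_platform dstdomain .app.example.com .api.example.com
        # The new platform's origin server, addressed as a reverse-proxy peer
        cache_peer 10.0.0.50 parent 80 0 no-query originserver name=new_origin
        cache_peer_access new_origin allow new_platform
        cache_peer_access new_origin deny all
        # Never resolve these domains directly; always go via the peer.
        # Everything else falls through to ordinary forward proxying.
        never_direct allow new_platform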

    Read the article

  • My new favourite traceflag

    - by Dave Ballantyne
    As we are all aware, there are a number of traceflags.  Some documented, some semi-documented and some completely undocumented.  Here is one that is undocumented that Paul White(b|t) mentioned almost as an aside in one of his excellent blog posts. Much has been written about residual predicates and how a predicate can be pushed into a seek/scan operation.  This is a good thing to happen; it saves a lot of processing from having to be done.  For the uninitiated, though: if we have a simple SELECT statement such as the first query in the sketch below, the process that SQL Server goes through to resolve it is: the index IX_Person_LastName_FirstName_MiddleName is navigated to find the first "Smith", and for each "Smith" the middle name is checked for being null. Two operations! And the execution plan doesn't fully represent all the work that is being undertaken. As you can see, there is only a single seek operation; the work undertaken to resolve the condition "MiddleName is not null" has been pushed into it.  This can be seen in the properties: "Seek predicate" is how the index has been navigated, and "Predicate" is the condition run over every row, a scan inside a seek! So the question is: how many rows were resolved by the seek and how many by the scan? How many rows did the filter remove? Wouldn't it be nice if this operation could be split? That's exactly what traceflag 9130 does. Executing the query with the traceflag (the second query below) changes the plan rather dramatically, and should change how we think about the index seek itself.  A Filter operator has been added and, unsurprisingly, the condition in it is "MiddleName is not null". So it is now evident that the seek operation found 103 Smiths, and 60 of those Smiths had a non-null MiddleName. This traceflag has no place on a production system; don't even think about it.
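
    The queries in the original post were embedded as images and did not survive the copy. A plausible reconstruction, assuming the usual AdventureWorks Person.Person example that the index name points at (the column list is a guess):

        -- Residual predicate pushed into the seek: one Index Seek in the plan
        SELECT FirstName, MiddleName, LastName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
          AND  MiddleName IS NOT NULL;

        -- Same query with the undocumented flag: the residual predicate shows
        -- up as a separate Filter operator with its own row count
        SELECT FirstName, MiddleName, LastName
        FROM   Person.Person
        WHERE  LastName = 'Smith'
          AND  MiddleName IS NOT NULL
        OPTION (QUERYTRACEON 9130);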

    Read the article

  • Why does a redirect from a local IP address assume localhost?

    - by Jeremy
    I am developing a web application on my desktop and it is running on port 80. I am able to access the application from my laptop, connected to the same LAN, by entering my desktop's LAN IP address, 192.168.1.8. Now, my application sends a redirect after login, but my laptop assumes the final address is localhost/login. If I manually type in the IP address and URI for any page, it shows that I am logged in, so the login itself works as expected. So why does the redirect assume localhost? Both of my machines are Linux-based; the laptop runs Chrome OS. I am running nginx, which proxies non-static file requests to Jetty on port 8080.
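
    One common culprit (a guess, since the actual config isn't shown) is the proxy not preserving the Host header, so Jetty builds the redirect URL from its own idea of the host. A minimal nginx sketch of the usual fix:

        server {
            listen 80;

            location / {
                proxy_pass       http://127.0.0.1:8080;
                # Hand the client's original Host (192.168.1.8) through to Jetty,
                # so redirects it generates point back at the same address.
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
                # Or, rewrite Location headers coming back from the backend:
                # proxy_redirect http://localhost:8080/ /;
            }
        }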

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well, but the part about the table cache makes no sense:

        TABLE CACHE
        Current table_cache value = 900 tables.
        You have a total of 0 tables
        You have 900 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use.
        You should probably increase your table_cache

    When I do SHOW STATUS; I get the following table-related numbers:

        Open_tables = 900
        Opened_tables = 0

    It seems like something is going wrong. I have some extra memory I could use to increase the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it would just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:

        TEMP TABLES
        Current max_heap_table_size = 90 M
        Current tmp_table_size = 90 M
        Of 11032358 temp tables, 40% were created on disk
        Perhaps you should increase your tmp_table_size and/or max_heap_table_size
        to reduce the number of disk-based temporary tables.
        Note! BLOB and TEXT columns are not allowed in memory tables.
        If you are using these columns, raising these values might not impact your
        ratio of on-disk temp tables.
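
    A quick way to see whether the cache is really idle (generic commands, not from the question) is to sample the raw counters before and after some load, rather than trusting the script's percentages:

        -- Opened_tables counts table opens since startup (or FLUSH STATUS);
        -- if it stays low while Open_tables sits at the cap, the cached
        -- handles are being reused rather than wasted.
        SHOW GLOBAL STATUS LIKE 'Open%tables';
        SHOW GLOBAL STATUS LIKE 'Uptime';
        -- The setting is table_cache on 5.0 and table_open_cache on 5.1+.
        SHOW GLOBAL VARIABLES LIKE 'table%cache%';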

    Read the article

  • Purchasing Laptop Case Online

    "Laptops are meant to be carried around but to achieve the ultimate ease of carrying it from one place to another and to protect the computer as well as precious information on it you need a quality ... [Author: Jeremy Mezzi - Computers and Internet - May 29, 2010]

    Read the article

  • How to reliably receive a message from AWS that my instance was rebooted / terminated / stopped?

    - by Andrew Smith
    I have Nagios, and I want it to stop monitoring instances when they are stopped from the console. The requirements are: the message passed from AWS is 100% reliable, e.g. when Nagios is down and the message cannot be delivered, it will be re-delivered promptly once Nagios is back up; the message arrives quickly; and there is no need to scan the status of all instances via the EC2 API all the time, only once in a while. Many thanks!

    Read the article

  • Stress test a server for simultaneous connections

    - by weston smith
    I am trying to figure out a practical way to stress test a server for 300 to 600 simultaneous connections. Any advice? Thank you everyone for the help. To be more specific (sorry I wasn't before), this is a Flash Media Server on AWS that will be streaming live video. I've been having problems with the video freezing/buffering for everyone, and I need to verify whether it's on the user end, the upload end, or the server end. I mainly need help with stress testing the server with 300-600 simultaneous requests before going live.

    Read the article

  • deploying AV via GPO only to workstations

    - by jeremy
    We have a small (100 machines) Windows domain running Server 2008 R2. We use Symantec Endpoint Protection 12.1. I want to have a GPO deploy the AV software to client machines automatically, but only to client workstations, not to servers, which run different software. I've set it up before using a GPO linked to the domain mycompany.local and it works, but it deploys the AV software to ALL machines on the domain, including my servers. I can create an OU in Active Directory for servers, and perhaps create one for client machines too, but I'd rather not have to go and move new domain members from the default Computers container into a different folder. How can I use GPO to deploy this AV software only to workstations on our network, and not to servers?
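
    One way to avoid the OU shuffle entirely (a standard technique, though not mentioned in the question) is to leave the GPO linked at the domain and attach a WMI filter so only workstation operating systems evaluate it; Win32_OperatingSystem reports ProductType 1 for workstations, 2 for domain controllers and 3 for member servers:

        SELECT * FROM Win32_OperatingSystem WHERE ProductType = 1

    New machines then pick up the policy from the default Computers container without being moved, and servers skip it.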

    Read the article

  • How do I set "MaxPermSize" for Atlassian Fisheye/Crucible running as service on Win2k3?

    - by Jeremy
    I have been trying to set up Atlassian Fisheye/Crucible as a service on Windows 2003 R2 for two weeks. I keep getting various "java.lang.OutOfMemoryError: PermGen space" errors, which crash Fisheye and force me to restart the service. I've followed the example on the Atlassian support site to configure MaxPermSize within the service wrapper. However, when I check SysInfo inside the Fisheye admin pages and the debug log, I don't see any confirmation. The Java heap info is in both places, so I'd expect the MaxPermSize setting to show up in both places as well. The error persists, and Atlassian support has been little help. I appreciate any help.
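
    For comparison, a wrapper entry of the sort the Atlassian page describes usually looks like the sketch below; the index 3 is an assumption, and it has to continue the existing wrapper.java.additional numbering or the wrapper may skip it. Whether the flag actually reached the JVM can be checked with jps -v from the JDK, which lists each Java process's arguments.

        # In the service wrapper's wrapper.conf (index must follow on from the
        # existing wrapper.java.additional.N entries):
        wrapper.java.additional.3=-XX:MaxPermSize=256m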

    Read the article

  • Setting up phpMemCacheAdmin on CentOS 5.5

    - by Bill Smith
    I have been able to set up phpMemCacheAdmin (http://code.google.com/p/phpmemcacheadmin/) on CentOS and am able to view the localhost memcached statistics; however, whenever I add other memcached nodes, the config is never changed. I am fairly certain it has something to do with permissions, however I am unable to track down what exactly needs to be done, or how to do it. The install was pretty straightforward:

        wget http://phpmemcacheadmin.googlecode.com/files/phpMemCacheAdmin-1.1.3r161.tar.gz
        tar xvzf phpMemCacheAdmin-1.1.3r161.tar.gz
        chmod +w Config/Memcache.ini

    But it also states that Apache needs read/write rights in the temp file folder (default: Temp/) and the entire config directory (Config/), and that is the part I am unsure of. Help!
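
    A sketch of the permission change (assuming Apache runs as the apache user on CentOS and the app was unpacked under the web root; adjust the path to match the actual install location):

        cd /var/www/html/phpMemCacheAdmin
        # Give the web server ownership of the writable areas so the UI can
        # save new nodes into Config/Memcache.ini and write its temp files
        chown -R apache:apache Config/ Temp/
        chmod -R u+rwX Config/ Temp/
        # Confirm the change landed after adding a node in the web UI
        cat Config/Memcache.ini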

    Read the article
