Search Results

Search found 5161 results on 207 pages for 'jonathan low'.

Page 165/207

  • Troubles Installing Windows 7 via USB. Flat install?

    - by Brian
    Hi friends, I've been struggling with this for a while. Windows 7 64-bit Enterprise edition just will not install on my Shuttle K45 system via a USB key. It hangs during the install while copying files or while creating the partitions. The system is pretty standard and low-tech: IDE hard drives, no CD, 2 GB RAM. I am not sure what the problem is.

    Other than the Shuttle, I have an Apple MacBook Pro. On the MBP, I am running OS X, plus Mint Linux and Windows XP under Parallels. I have an ISO of Win7 that works (I installed it as a Parallels VM to make sure). I have used UltraISO and the MS Windows 7 USB/DVD Download Tool to write it to the 8 GB USB key. Both seem to copy all the files correctly (with UltraISO, I asked it to verify). It boots via USB and the install looks just fine, until it hangs, most of the time with copying error 0x80070241.

    So now I am trying to figure out if there are other ways I can install Windows 7 on this Shuttle system that has no CD drive. I've heard about a flat installation, but those instructions all seem to involve doing something from within Windows. I do have access to a command prompt from the Windows 7 installer. Does anyone know if/how I can prep the Shuttle hard drive with the Windows 7 installation files and have Windows 7 install from the hard disk? I do not have an external enclosure for the IDE HD, and I do not have any other system I can hook up to the hard drives. I do have an external Maxtor OneTouch drive available.
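
    (For anyone comparing notes, the flat install can in principle be prepared entirely from the installer's command prompt (Shift+F10) with diskpart and xcopy. This is only a sketch; the disk number and the drive letters for the target disk and the USB key are assumptions that differ per system.)

        diskpart
          select disk 0
          clean
          create partition primary
          format fs=ntfs quick
          active
          assign letter=c
          exit
        xcopy d:\*.* c:\ /e /h /k        rem copy the install media from the USB key to the hard disk
        c:\boot\bootsect /nt60 c:        rem make the hard disk partition bootable
        rem reboot without the USB key; setup should then run from the hard disk (c:\setup.exe)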

    Read the article

  • Measure Upload Speed between a client and our server

    - by tresstylez
    We host a SaaS application specially customized for multiple clients. One customer in particular is reporting sporadic performance issues from various locations on their network, in particular UPLOADING documents through a form on our website. The client claims they have "bandwidth to spare" and that utilization of their "pipe" is so low that it MUST be our application, but our application has MANY clients and all features are working fine for all other clients. Interestingly enough, DOWNLOADS (i.e. just accessing the website, or downloading documents) work fine.

    A speed test shows that they should get 1.2 Mbps up, so a 3 MB file should take about 20 seconds to upload. It takes 60+ seconds on their network. Sometimes even small files take OVER 10 minutes to upload, or they time out. Pings and traceroutes don't show any abnormally long hops or response times. They claim other SaaS applications they use allow them to upload just fine.

    Both IT teams are working together to resolve this issue. What kind of data can I request from the clients to begin ruling things out? It seems like we need to somehow measure the LATENCY of the networks involved, or even at the switch level we need to understand if packets are getting dropped somewhere and why. Where should I start? Any help is appreciated; I'll provide more info upon request.
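
    (One cheap data point, sketched here with a placeholder URL: time a raw HTTP upload from the client's network with curl and compare it to the speed-test figure. It measures raw PUT throughput rather than the form itself, so it helps separate "their uplink is slow" from "our upload endpoint is slow".)

        dd if=/dev/zero of=test3mb.bin bs=1M count=3      # create a 3 MB test file
        curl -T test3mb.bin https://upload.example.com/probe \
             -o /dev/null -w 'sent %{size_upload} bytes at %{speed_upload} bytes/sec\n'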

    Read the article

  • CPU operating temperature ranges

    - by osij2is
    I have an AMD Phenom II 960T with 2 cores unlocked for a total of 6 cores. I don't overclock at all. I have an Arctic Cooling ACALP64 heatsink/fan installed. I'm currently running ESXi 5.0, so I have to go into the BIOS to read the CPU temperatures, which at idle seem to be in the 71-74C range. To me, this is pretty high, but I cannot find any official temperature ranges that AMD says the CPU can work well within.

    There seem to be a lot of questions on Super User and numerous forums around CPU temperatures, but no one seems to have a clear consensus as to what the manufacturer temperature ranges are for specific CPUs. I've tried searching through AMD's site to no avail. At this point, I'd be willing to shut off the 2 extra cores if it keeps the heat down, but until I get some sort of tolerance or range for temperature, I have no idea if the CPU is being damaged or not.

    Can anyone point to a direct source, article, or FAQ from AMD that specifically states their CPUs' temperature ranges? Or are CPU temperature ranges so varied that there's no possible baseline? Am I being too paranoid about this? To me, anything over 65C is a bit much, and if I'm in the low-to-mid 70s range with NO VMs running, what can I expect once I have several VMs running?

    Read the article

  • Log and debug/decrypt a Windows application's HTTPS traffic

    - by cweiske
    I've got a proprietary Windows-only application that uses HTTPS to speak with an (also proprietary, undocumented) web service. To ultimately be able to use the web service's functionality on my Linux machines, I want to reverse-engineer the web service API by analyzing the requests sent by the application. Now the question: how can I decrypt and log the HTTPS traffic?

    I know of several solutions which don't apply in my case:

      - Fiddler is a man-in-the-middle HTTPS proxy which I cannot use since the application doesn't support proxies. Also, I do not (yet) know if it works with self-signed server certificates, which I doubt.
      - Wireshark is able to decrypt SSL streams if you have the server's private certificate, which I don't have.
      - Any browser extension, since the application is not a browser.

    If I remember correctly, there have been some trojans that capture online banking information by hooking into or replacing the Windows crypto API. Since the machine is mine, low-level changes are possible. Maybe there is a non-trojan (white-hat) network logging application out there which does the same? There is a Black Hat presentation with some details available to read; they refer to Microsoft Research's Detours for easy API hooking.
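
    (One angle worth sketching, under the assumption that the application resolves the service hostname via DNS and does not pin its certificate: point the hostname at a local reverse proxy and log the decrypted traffic there. The hostname and upstream IP below are placeholders, and the application's machine has to trust the proxy's CA certificate.)

        # hosts file on the machine running the application
        127.0.0.1  service.example.com

        # mitmproxy forwards to the real server and shows the decrypted requests/responses
        mitmproxy --mode reverse:https://203.0.113.10 --listen-port 443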

    Read the article

  • Many different BSODs

    - by Exa
    I've been experiencing multiple bluescreens for a couple of months now. Their error codes are as diverse as their times of occurrence: sometimes it happens during gaming, sometimes when watching videos, sometimes when the computer is idle. These are the bluescreens I see most often:

      PAGE_FAULT_IN_NONPAGED_AREA
      KMODE_EXCEPTION_NOT_HANDLED
      IRQL_NOT_LESS_OR_EQUAL
      SYSTEM_SERVICE_EXCEPTION
      SYSTEM_THREAD_EXCEPTION
      INTERRUPT_EXCEPTION_NOT_HANDLED
      DRIVER_IRQL_NOT_LESS_OR_EQUAL
      DRIVER_OVERRAN_STACK_BUFFER

    Responsible drivers (according to the memory dumps): hal.dll, tcpip.sys, dxgmms1.sys, ndis.sys, mouhid.sys, atikmdag.sys, dump_atapi.sys, and of course ntoskrnl.exe.

    My first thought was a driver incompatibility, because I am using Windows 8 and some of the bluescreens seem to come from driver issues. All drivers are up to date. I'm afraid that my memory is broken, or the mainboard, or both. The Windows integrated memory test didn't find any errors; Memtest86 found some errors. Does it make sense to buy new memory? Couldn't it be a problem with the board as well? I also read that my memory could be running at too low a voltage, but it's set to 1.5 V as recommended. Another guess would be to set the memory's latencies manually, but how do I know which ones to try?

    Here is a screenshot of BlueScreenView showing the latest bluescreens. Maybe someone has faced the same behavior before and found a solution. Any ideas or suggestions?

    Current setup on which the bluescreens occur:
      Windows 8 RTM (6.2.9200)
      Asrock 970 Extreme4
      AMD FX-8150
      ATI Radeon HD5850
      16 GB RAM (DDR3-1800)
      Latest drivers for all devices

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web-service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer.

    As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale.

    My question is this: has anyone had real-world experience running a high-scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated goal of calibrating 10,000 CCs as a single 1.2 GHz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • Which is the best smart automatic file replication solution for cloud-storage-based systems?

    - by TORr0t
    I am looking for a solution for a project I am working on. We are developing a web system where people can upload their files and other people can download them (similar to the rapidshare.com model). The problem is that some files are in much higher demand than others. The scenario is like this: I upload my birthday video and share it with all of my friends; it goes to myproject.com and is stored on one of the cluster nodes, which has a 100 Mbit connection. Once all of my friends want to download the file, they can't, because the bottleneck is that 100 Mbit link, which is roughly 12.5 MB per second; with 1000 friends downloading, each one only gets about 12.5 KB per second. (I am not even taking into account that the HDD is serving the same file to all of them.)

    My network infrastructure is as follows: a 1 Gbit (client-facing) server connected to 4 storage nodes that each have a 100 Mbit connection. The 1 Gbit server can handle the traffic of 1000 users if the storage side can stream more than 15 MB per second to it, and visitors then stream directly from the client-facing server instead of from the storage nodes. I can achieve that by replicating the file onto 2 nodes, but I don't want to replicate every file uploaded to my network, since that costs much more.

    So I need a cloud-based system which will push files onto additional replica nodes automatically when demand for those files is high, and when demand is low, delete the extra replicas so the file stays on only 1 node. I have looked at Gluster and asked in their IRC channel; Gluster can't do such a thing. It can only replicate all of the files or none of them, but I need the cluster software to do this automatically. Any solutions (other than recommending Amazon S3)?

    Read the article

  • Configure Linux machine as bridge/switch and end device

    - by leemes
    At my home, I have two desktop PCs in two rooms. The router / DSL modem is in one of these rooms. Now I want to configure a home server (having 2 LAN ports, running 24/7) in the corridor between the two rooms, using only one LAN cable at each door. This gives me roughly the following physical configuration:

        PC1 ----(door)---- Server ----(door)---- FritzBox (DSL router) ---- PC2

    As just said, the server has 2 network interfaces and is running Ubuntu. What I need now is a network configuration which enables both the server and PC1 to connect to the router. I think the server needs to act as a bridge or switch. Currently, all computers are configured with static IP addresses.

    If I'm understanding it correctly, a bridge / switch doesn't have its own IP address, but since the server also needs to be an end device in its own right, it needs to have one. My first question is: do I have to configure both interfaces separately, giving both the same static IP address? My next question is: how do I bridge the two physical networks into one? I have a basic understanding of bridges and switches (but am always confused again and again), and I don't know how to configure this in software; I only know that it's possible to do so :) The third question is: can this be configured in a way that network packets between PC1 and the router either pass straight through in hardware or at least consume little CPU on the server? Can you help me? Thanks in advance!
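
    (A rough sketch of the usual Linux answer, assuming the server's two NICs are eth0 and eth1: put both into a software bridge and give the bridge itself the server's static IP, so the same box acts as both switch and end device. Interface names and addresses below are assumptions; 192.168.178.x is just the common FritzBox subnet.)

        apt-get install bridge-utils

        # /etc/network/interfaces on the server
        auto br0
        iface br0 inet static
            address 192.168.178.10
            netmask 255.255.255.0
            gateway 192.168.178.1        # the FritzBox
            bridge_ports eth0 eth1       # enslave both physical NICs to the bridge

    The bridging itself happens in the kernel, so PC1-to-router frames are forwarded without touching userspace; they still cost some CPU on the server, but for home traffic that is usually negligible.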

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical box only has one PSU. The P2V only completed 99% of the way. We have a VMware ticket opened, but they marked the ticket as low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and was made by an outside provider, and we do not have a document outlining the modifications.

    My question is: is it at all possible to copy the initrd off of the physical server, replace it on the virtual one, and somehow have the virtual machine boot? Thanks for any input.

    Edit: I copied the initrd image from the physical machine and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg

    Edit 2: the relevant part of the initrd's init script looks like this:

        echo Scanning logical volumes
        lvm vgscan --ignorelockingfailure
        echo Activating logical volumes
        lvm vgchange -ay --ignorelockingfailure VolGroup00
        resume /dev/VolGroup00/LogVol01
        echo Creating root device.
        mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
        echo Mounting root filesystem.
        mount /sysroot
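
    (If it helps, a sketch of one way to see what the outside provider actually changed, assuming both images are gzip-compressed cpio archives as is normal for CentOS 5; the paths are placeholders.)

        mkdir /tmp/initrd-phys /tmp/initrd-virt
        cd /tmp/initrd-phys && zcat /path/to/initrd-physical.img | cpio -idmv    # unpack the physical server's initrd
        cd /tmp/initrd-virt && zcat /path/to/initrd-rebuilt.img  | cpio -idmv    # unpack the rebuilt one
        diff -ru /tmp/initrd-virt /tmp/initrd-phys                               # compare init scripts and bundled modules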

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts over the internet. What software could you recommend for that? What kind of hardware should it run on; should there be anything special? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. A general answer on what direction to choose is fine, but here are more details on my specific case:

    We are starting almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet, and we are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with the content changing every day. We also want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out which parameters we should be aware of, it would be great. The uplink to the internet is still to be determined; we plan to start with something and scale up along the way.

    If you already run some kind of media streaming server, please describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think that could help people approaching this task.
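
    (As one illustrative direction only, not a recommendation: a minimal nginx + nginx-rtmp-module configuration that accepts a live feed from an encoder and repackages it as HLS for viewers. The application name and paths are assumptions.)

        rtmp {
            server {
                listen 1935;                 # RTMP ingest point for the encoder
                application live {
                    live on;
                    hls on;                  # repackage the stream as HLS segments
                    hls_path /var/www/hls;   # served to viewers over plain HTTP
                }
            }
        }

    Viewers would then pull the .m3u8 playlist over HTTP, which scales out with ordinary web caching; whether that fits depends on the latency and player requirements.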

    Read the article

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, for about 20-50 minutes, at a random time, the server becomes completely unresponsive to the outside world. After this period, it becomes responsive again as if nothing had happened; it doesn't lose uptime, it doesn't restart, it just starts responding again as if it had been in suspended animation.

    These outages occur under conditions of unexceptional memory and CPU use, for example 70% memory and 5% CPU. I have stopped all non-essential services, so the usage is very even. The outages don't particularly occur during times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), but in the past they also occurred during swapping. These blackouts have been occurring both under Ubuntu 12.04 LTS and Ubuntu 14.04 LTS with no change at all (I upgraded Ubuntu specifically to see if it helped this problem).

    It is possible to log into our web host's site and use their administration console to see error messages from during this time. Presumably these messages come from the Xen virtualization; the main one goes like this (repeated many times, and sometimes it is the only message in the console):

        BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3]
        SysRq : Emergency Sync

    Others seen previously under different load situations include (repeated many times, with t getting bigger in the second case):

        BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]
        INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies)

    From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Anyone got any ideas or experience with this problem?
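
    (One thing that can be checked from inside the guest, sketched here: on Xen, soft lockups with no lost uptime often just mean the VM was not being scheduled by the host for a while. The "st" (steal) column of vmstat shows how much CPU time the hypervisor is taking away; consistently high steal around the blackout windows would point at contention on the host rather than a bug in the guest.)

        vmstat 5     # watch the last column, "st" (steal time), over a few minutes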

    Read the article

  • Gigabit network limited to 25MB/s by CPU. How to make it faster?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and apparently it is limited by the single-core Intel Atom 230: when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading-enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04.

    On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled, and there is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

        Offload parameters for eth0:
        rx-checksumming: on
        tx-checksumming: on
        scatter-gather: on
        tcp segmentation offload: on
        udp fragmentation offload: off
        generic segmentation offload: off

    The following is the output of powertop when the network is idle:

        Wakeups-from-idle per second : 61.9     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          90.9% (101.3)   <interrupt>   : eth0
           4.5% (  5.0)   iftop         : schedule_timeout (process_timeout)
           1.8% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.9% (  1.0)   dhcdbd        : schedule_timeout (process_timeout)
           0.5% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25 MB/s is reached:

        Wakeups-from-idle per second : 11175.5     interval: 10.0s
        no ACPI power usage estimate available
        Top causes for wakeups:
          99.9% (22097.4) <interrupt>   : eth0
           0.0% (  5.0)   iftop         : schedule_timeout (process_timeout)
           0.0% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
           0.0% (  1.0)   dhcdbd        : schedule_timeout (process_timeout)
           0.0% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the 20000+ interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? The other computers in the network can usually transfer at 50+ MB/s without problems. And a minor question: how can I find out which driver is in use for eth0?
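
    (Two hedged pointers, as a sketch only: ethtool can report which driver is bound to the interface, and, if that driver supports it, the interrupt coalescing thresholds can be raised so the NIC batches more packets per interrupt instead of firing 20k+ times a second. The value below is a starting point to experiment with, not a known-good setting for this chipset.)

        ethtool -i eth0                  # driver name, version and bus info for eth0
        ethtool -c eth0                  # current interrupt coalescing settings
        ethtool -C eth0 rx-usecs 100     # let the NIC wait up to 100 us before raising an RX interrupt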

    Read the article

  • Is it a good idea to switch to an SSD to use less battery?

    - by Walter Maier-Murdnelch
    I am thinking of buying an SSD for my laptop, mainly for the purpose of extended operating time when running on battery. At the moment I use a Hitachi HTS545032B9A300 (320GB) (datasheet) as the main drive and a Seagate Momentus 5400.3 120GB as the secondary drive. I dual-boot Windows and Linux, but I don't need the Windows partition any longer, so a 120GB SSD would be more than sufficient space-wise. Speed is not an issue for me: I make heavy use of tmpfs (ramdisk) within Linux, and transfers of bigger files mostly go through some network filesystem anyway, so a cheaper SSD should do.

    For the purpose of comparison I chose the OCZ Vertex Plus 120GB. Power consumption is always a big promotional point the industry uses to make me want to buy their SSDs; a sheet on the OCZ page provides an astonishing comparison of desktop HDDs and SSDs. The numbers I got comparing my laptop HDD and their SSD were not so astonishing any more:

        Hitachi 320GB HDD:
          Startup (W, peak, max.)      4.5
          Seek (W, avg.)               1.7
          Read / Write (W, avg.)       1.4
          Performance idle (W, avg.)   1.3
          Active idle (W, avg.)        0.8
          Low power idle (W, avg.)     0.5
          Standby (W, avg.)            0.2
          Sleep                        0.1

        OCZ 120GB SSD:
          Active                       1.5 W
          Standby                      0.3 W

    I see that there are differences, but they don't seem as big as I thought they were, and compared to the power consumption of the rest of my system I wonder if it makes a difference at all. Have I looked at this the wrong way, or might I be better off buying another battery for my laptop?
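
    (A rough back-of-the-envelope check under assumed numbers: say the whole laptop draws about 15 W on battery from a 50 Wh pack, and swapping the HDD for the SSD saves roughly 0.5 to 1 W on average.)

        50 Wh / 15.0 W  =  3.33 h   (about 3 h 20 min, today)
        50 Wh / 14.5 W  =  3.45 h   (about 3 h 27 min, saving ~0.5 W)
        50 Wh / 14.0 W  =  3.57 h   (about 3 h 34 min, saving ~1 W)

    With these assumptions the SSD buys on the order of 5-15 minutes per charge, while a second battery roughly doubles the runtime.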

    Read the article

  • Why am I missing 4GB of RAM on Windows Server 2008 R2 64bit?

    - by Nick G
    I noticed today that a server was very low on memory. It physically has 8GB installed and runs Windows 2008 R2 Standard 64-bit. It also hosts 2 virtual machines using Hyper-V. The server is a Dell PowerEdge R510. However, the host OS reports in Task Manager that it only has 4GB of RAM, despite actually having 8GB and being a 64-bit OS. Computer properties shows "Installed memory: 8.00GB (3.99GB usable)". Why would "usable" be half the real RAM installed under a 64-bit OS?

    Additionally, nearly all of the 4GB of visible RAM on the host OS is being used by something without anything showing up in Task Manager (presumably Hyper-V, as it has allocated 3.6GB to the virtual machines it's hosting). However, that doesn't explain where the other 4GB has gone which Windows can't even see. Where is my missing 4GB of RAM?

    Update: Dell OpenManage says this:

        Total Installed Capacity                       8192 MB
        Total Installed Capacity Available to the OS   4096 MB

    So it looks like Nathan's suggestion of memory mirroring might be correct. I'll have to reboot to check this (I think?).

    Update 2: OK, so I rebooted and got a message saying "the amount of system memory has changed" (despite not having touched the hardware in a year). Once Windows had booted, all 8GB was visible again. It looks like I probably have a hardware RAM issue (I'll perhaps try reseating the modules whenever I can chuck everyone off the server next). Thanks for your answers and comments. I was hoping it was going to be the mirrored-RAM option, but it seems not; that isn't even mentioned in the BIOS.

    Read the article

  • HP/IBM alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with direct-attached storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP).

    Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reticent to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you get what you pay for.

    Therefore, can anyone recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether it is all the hardware I will need. Some options appear to need separate controllers from the disk enclosures, etc.

    Read the article

  • Extend Linux Desktop to another X Windows Display

    - by unknown (google)
    Hello, I am a long-time Linux user of Xinerama and other technologies for extending a desktop to multiple monitors. However, when I travel with my laptop I miss the multi-monitor support I enjoy at home. Recently I acquired a second laptop for a low price. Both laptops are running Fedora (versions 10 and 11 respectively), and I use GNOME as my primary desktop environment.

    I know about Synergy. I use Synergy all the time to control the screens of other Windows / Linux systems I use. What I would like to know is: can I sit both my primary and secondary laptops together and achieve a Xinerama-like extended desktop environment? Ideally I would like to start a GNOME session on my primary laptop, then start an X desktop on my secondary laptop and extend my primary laptop's desktop onto it. I would like to be able to move windows from the primary desktop to the secondary laptop's desktop.

    Would I need to use Synergy to do this, together with some other bit of X Windows technology? Or is there X Windows technology that will do all of this for me? I am familiar with X Windows' ability to display applications remotely. I am also familiar with NoMachine's NX.
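
    (For reference, the closest X-native approach I know of is Xdmx, the Distributed Multihead X server, which joins displays on several machines into one logical screen. A sketch only: it assumes the secondary laptop is reachable as laptop2, is already running an X server, and allows connections from the primary (xhost/TCP listening configured); performance over WiFi may be poor.)

        # on the primary laptop: start a proxy X server spanning both machines' displays
        Xdmx :1 +xinerama -display :0 -display laptop2:0

        # then run the desktop session on the combined display
        DISPLAY=:1 gnome-session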

    Read the article

  • Recover data from hard drive with partitions (but not most data) overwritten

    - by Macha
    I have a 500GB hard drive I've been keeping around to recover data from; I removed it from a failing NAS that got sort of... erratic at the end. I finally got rid of the NAS when a firmware update removed the partition table. Fast forward to a week ago, when I was building a new PC, and a mixup resulted in me placing the hard drive in question in the new PC and installing Windows XP on the first 100GB. I'm presuming any data on that first 100GB is now gone, but for the rest of it, is there any way I can recover it at home, as professional data recovery is currently too expensive? I have a blank 1TB HDD that I can store any images of that hard drive on.

    The problem was definitely with the NAS and not the hard drive, as the hard drive took a successful install of Windows when mistakenly placed in the new PC, and there were capacitors in the NAS's circuitry that were clearly broken. The data I want to recover (in order of priority) is:

      High:   some JPGs of family photos.
      Medium: some RAW files. (There are also JPG versions of all of these.)
      Low:    some MP3s, AVIs and ISOs; I can re-rip most of these if need be, but it'd be handy not to have to.

    (I don't need a backup lecture, and if you can hold it in from nagging Jeff Atwood for it, you can hold it in from nagging me for it.)

    In short: the partition tables are gone and overwritten. The data is not overwritten, except for an amount equal to the size of a Windows XP SP3 installation.
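
    (A sketch of the usual at-home approach, assuming the 500GB drive shows up as /dev/sdb and the blank 1TB drive is mounted at /mnt/backup: image the drive read-only first, then let TestDisk look for the old partition boundaries and PhotoRec carve JPG/RAW/MP3 files out of the image if the partitions can't be rebuilt.)

        ddrescue -v /dev/sdb /mnt/backup/old-nas.img /mnt/backup/old-nas.log   # image the whole drive, logging bad sectors
        testdisk /mnt/backup/old-nas.img                                       # search the image for the lost partitions
        photorec /mnt/backup/old-nas.img                                       # file carving as a fallback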

    Read the article

  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google. Here's the situation. We have a public library with 20 PCs on a LAN and also public wifi access. Previously we were running all of this on 1 ADSL connection and people complained about low speeds. We hired a networking company to set up a Cisco dual-WAN router for us, and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest.

    The results were far worse than what we expected, and all we got from the company was excuses; they've since washed their hands of us. During busy periods, network performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails, far from the "guaranteed amount of bandwidth each" that we hoped for. Sometimes it feels like performance is worse than before, when we had 1 ADSL link and an unconfigured router.

    Anyway, surely this is a problem encountered a million times over across the world: sharing internet across many users effectively. What are the standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources/guides that might help me get a better understanding of the problem and its solutions, and perhaps some stories of your own experience in solving similar problems. This will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a section "policy-map" with "bandwidth percent" for each class of user (LAN, wifi), and "fair queue".
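
    (For jargon purposes: "policy-map" with "bandwidth percent" is Cisco's class-based weighted fair queueing, CBWFQ. A stripped-down sketch of what such a policy typically looks like in IOS; class names, ACL numbers and percentages are placeholders, and the guarantees only take effect when the outbound link is actually congested.)

        class-map match-any LAN-PCS
         match access-group 110          ! ACL matching the wired library PCs
        class-map match-any WIFI
         match access-group 120          ! ACL matching the public wifi subnet
        !
        policy-map WAN-OUT
         class LAN-PCS
          bandwidth percent 60           ! guaranteed share during congestion
         class WIFI
          bandwidth percent 30
         class class-default
          fair-queue
        !
        interface Dialer1
         service-policy output WAN-OUT

    One caveat worth raising with consultants: a policy like this only shapes traffic leaving the router (uploads); download fairness on ADSL is much harder to guarantee from the customer side.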

    Read the article

  • Block SMTP connections from mail domains which don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the original sender refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem.

    How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place.

    I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
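
    (Postfix has a built-in probe that is close to this: sender address verification, which connects back to the declared sender's MX before accepting the message, so a domain that refuses our connections gets its mail rejected. A minimal sketch for main.cf; the reject code and cache map shown are just common choices, and the probes do generate extra outbound SMTP traffic.)

        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unverified_sender
        unverified_sender_reject_code = 550
        address_verify_map = btree:$data_directory/verify_cache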

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable.

    The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in east Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider (though we do our own backups as well). Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for HTTPS, of course.

    Does such a dream exist? If not, what are some approaches to accomplish the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS boxes running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old; we have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS; we use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems.

    Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small compute cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab).

    So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on OpenIndiana, for example) or btrfs per server, with GlusterFS running on top of that, if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3PAR storage solutions. Any suggestions or experiences are appreciated.
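
    (Purely to illustrate the "different pools with different operating characteristics" idea, a sketch; device names are placeholders in OpenIndiana's cXtYdZ style.)

        # capacity-oriented pool: wide raidz2 vdevs for large sequential genomics data
        zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

        # performance-oriented pool: mirrored vdevs for metadata-heavy, highly concurrent access
        zpool create fast mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0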

    Read the article

  • High disk I/O activity in CentOS server

    - by triiim
    I have about 16 websites on a dedicated CentOS server, and I am having some problems during high-traffic hours; it seems to be high disk I/O activity causing a general slowdown. I've installed atop and this is what I see at the bottom (the server has been restarted, that's why the values are so low):

        *** system and process activity since boot ***
        PID    RDDSK    WRDSK   WCANCL  DSK  CMD          1/18
        2176    1.7G     7.3G   854.4M   39  mysqld
         671   1248K     3.0G       0K   13  flush-8:0
         566      0K     1.1G       0K    5  jbd2/sda2-8
        2401  124.2M   529.1M   22408K    3  crond
        2032    2.2G   502.0M       0K   12  nginx
        2360  425.8M   115.3M    4188K    2  httpd

    flush-8:0 and jbd2/sda2-8 are the processes I see with iotop using 99% in the IO column, and they are the processes that write the most to the HDD (after MySQL). From what I found on Google, this could be caused by some ext4-related bug. The current kernel is:

        Linux srvr.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

    I asked the hosting support to update the kernel and they tried, but they now say that the server won't boot with the newly installed kernel and they had to go back to the previous one; they are not helping very much. Does anyone have any idea how I could solve the high disk usage caused by the flush-8:0 and jbd2/sda2-8 processes?
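
    (For what it's worth: flush-8:0 is the kernel's page-writeback thread and jbd2/sda2-8 is the ext4 journal thread, so they largely mirror whatever the applications, mostly MySQL here, are writing. One knob sometimes worth experimenting with, sketched with guessed values, is how much dirty data the kernel accumulates before flushing, so writeback happens in smaller, earlier bursts.)

        sysctl vm.dirty_background_ratio vm.dirty_ratio   # show current values
        sysctl -w vm.dirty_background_ratio=5             # start background writeback earlier
        sysctl -w vm.dirty_ratio=10                       # cap dirty memory so flushes stay smaller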

    Read the article

  • How do I send e-mails with attachments to a Microsoft WebTV user?

    - by Petr 'PePa' Pavel
    My friend uses Microsoft WebTV (his e-mail address ends with @webtv.net) and I'd like to send him an e-mail with a picture attached to it. We went through a series of attempts, one of which ended up a success and all the others failures. He just can't see my e-mail in his mailbox when it contains an attachment; e-mails without attachments always go through all right.

    What seemed to help in the one successful case was that he added my e-mail address to his address book, and my e-mail suddenly showed up; it seemed to have been delivered before but hidden. He kept my address in his address book; however, it didn't help with the following trials. He did look into his junk folder, nothing there. I made sure the file name contains no spaces. It's a regular JPEG, named something-like-this.jpg. I downsized it to only about 50k, as I've read somewhere that that's a limit; I actually doubt this piece of information, because I think the successful attempt was larger.

    webtv.net contains zero information. I watched their video demo of the e-mail client, so I at least know what the user interface looks like; I've never laid my hands on the real thing. I'm an advanced user myself (a programmer), but I can't wrap my mind around this. He, on the other hand, is a very technically inexperienced user, and because he's halfway across the globe, I can't come and look over his shoulder. He doesn't have a computer; AFAIK there's no way I could see what he sees. Any ideas on how to debug this? Thanks for your time, guys.

    P.S. I can't tag this "webtv" because such a tag doesn't exist yet and my reputation is too low, sorry.

    Read the article

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

      - a PC running Windows XP with an internal WiFi adapter
      - a base station with both WiFi and a wireless modem (WiModem)
      - a mobile device with both WiFi and a WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
          WiFi 1: 192.168.2.10/24
          WiFi 2: 192.168.3.10/24
          Default route: 192.168.2.1

        Base Station
          WiFi 1: 192.168.2.1/24
          WiFi 2: 192.168.3.1/24
          WiModem: 192.168.4.1/24

        Mobile
          WiFi: 192.168.3.20/24
          WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and the PC, as the manual setup is problematic once you start having multiple mobiles and PCs. This means that the PC can only have 1 IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
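
    (For reference, the manual setup described above boils down to adding a persistent static route on the XP PC for the WiModem subnet. A sketch only: which base-station address is the right next hop depends on details not fully spelled out above, so 192.168.3.1 is just an assumption taken from the listing.)

        rem route traffic for the mobile's WiModem subnet via the base station
        route -p add 192.168.4.0 mask 255.255.255.0 192.168.3.1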

    Read the article

  • Slow DB Performance. Seems to be memory related.

    - by David
    I am seeing a poorly performing web app with a SQL Server 2005 backend. The DB is on a Windows 2003 machine with 4GB of RAM. When I run perfmon on it I see the following: page life expectancy is low, consistently under 300, while the buffer cache hit ratio is always 99%+. The target server memory is always 1618304 and the total server memory is always a number just below that. So it seems that it isn't grabbing enough of the available memory. I have AWE enabled, with the Lock Pages in Memory right granted to the SQL service account, and have set a maximum of 2.25GB... but it doesn't go near that.

    When I restart the SQL service, the page life expectancy goes much higher (1000+), and the total server memory starts at 0 and slowly works its way back up to the previous limit. Then it hits the limit and the page life expectancy drops massively to <300. So I'm guessing there is something limiting the amount of memory. Any ideas on what that would be and how I can fix it?
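
    (A hedged sketch of how the relevant settings can be double-checked from a query window; on 32-bit Windows 2003 with 4GB the /3GB and /PAE boot.ini switches also influence how much memory AWE can actually use, so these are the usual suspects to confirm rather than a guaranteed fix.)

        -- show the current memory-related settings
        EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
        EXEC sp_configure 'awe enabled';
        EXEC sp_configure 'max server memory (MB)';

        -- example: make sure the cap matches the intended 2.25 GB
        EXEC sp_configure 'max server memory (MB)', 2304;  RECONFIGURE;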

    Read the article
