Search Results

Search found 10106 results on 405 pages for 'fail fast'.

Page 298/405 | < Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >

  • Is there a way to use VirtualBox without using it's resource registry?

    - by Catskul
    Summary VirtualBox seems to want everything to be "registered" which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration will complicate the task for the following reasons. leaves state information around that can cause unpredicted edgecases causing scripts to fail. creates potential name space collisions for multiple process creating VMs with the same name moving/copying resources on the same machine is more complicated because references in the registry need to be updated copying resources (disk + vm combination) to another machine require reconfiguration once they reach their target machine, and require the transfer of extra meta data to do the reconfiguration. If something unexpectedly fails, and an unregister thus fails to happen, left over configuration information can cause problems in subsequent runs. Use Case My specific use case is for a continuous integration server which creates and destroys VMs and Disk images potentially with the same name, and would require more logic to deal with the registry's statefulness. Imaginary Example It seems that I should just be able to for example (using some imaginary and/or incorrect commands): mkdir foobar customdiskimg_script ./foo/foo.vdi vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml vboxmanage storagectl ./foo/foo.xml --name foo --add ide vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml vboxmanage startvm ./foo/foo.xml TLDR Is there a way to use virtualbox without "registering" harddisks and VMs?

    Read the article

  • SQL queries break our game! (Back-end server is at capacity)

    - by TimH
    We have a Facebook game that stores all persistent data in a MySQL database that is running on a large Amazon RDS instance. One of our tables is 2GB in size. If I run any queries on that table that take more than a couple of seconds, any SQL actions performed by our game will fail with the error: HTTP/1.1 503 Service Unavailable: Back-end server is at capacity This obviously brings down our game! I've monitored CPU usage on the RDS instance during these periods, and though it does spike, it doesn't go much over 50%. Previously we were on a smaller instance size and it did hit 100%, so I'd hoped just throwing more CPU capacity at the problem would solve it. I now think it's an issue with the number of open connections. However, I've only been working with SQL for 8 months or so, so I'm no expert on MySQL configuration. Is there perhaps some configuration setting I can change to prevent these queries from overloading the server, or should I just not be running them whilst our game is up? I'm using MySQL Workbench to run the queries. Here's an example.... SELECT * FROM BlueBoxEngineDB.Transfer WHERE Amount = 1000 AND FromUserId = 4 AND Status='Complete'; As you can see, it's not overly complex. There are only 5 columns in the table. Any help would be very much appreciated - Thanks!

    Read the article

  • troubleshooting postifx -> exchange connection issues

    - by Systemspoet
    I have three linux-based mail routers that run postfix and relay mail to our on-premise exchange server as well as to outlook.com, splitting the mail based on ldap atttributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to exchange. Postfix logs it as "lost connection with exchange.contosso.edu" ; this problem almost always occurs to all three mail relays at the same time, and lasts for slightly under 20 minutes. If I can catch it while it's occuring, and I manually do "telnet exchange.contosso.edu 25" from one mail relay and force a message through (helo, mail from, rcpt to, data, etc), then it clears that relay up. The exchange "server" is actually two machines with the HT role on them, load balanced via windows NLB. I've worked pretty hard to figure out what's happening from the postfix side and I can't see any evidence of any misbehavior. My question is, how do I attack the problem from the exchange side? Is there a connection log, or a debug setting, or something I can do to log all of the inbound connections and tell me what's causing exchange to drop them?

    Read the article

  • Touchpad scroll slow and jumpy

    - by IR
    I have a laptop with a synaptics touchpad running on win7 x64. When i use the scrolling region of the touchpad in some applications, for example in Visual Studio 2008, Notepad and Windows Media Player 12, the scroll is very slow. If i pull the edge of the touchpad slowly the program will scroll one row at a time(regardless of the number of lines-settings in the mouse configuration). If i pull the edge quickly though, the program will instantly jump like 20 rows making it way too fast. In some applications, like Firefox, the scrolling work as expected. Changing the scrollspeed-setting for the touchpad does not help. If you make it slower it doesn't do the 20-row jump but then it's horribly slow and if you try to make it faster it will do the jumps all the time. I have tried both synaptics generic drivers and the "special" drivers that HP provides but they both have the same problem (except that the generic one can't adjust the scrolling speed, even though that didn't help anyway). With windows generic drivers the scrolling region doesn't even work. Other mice i've tried with scrollwheel work as they should do.

    Read the article

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with ConfigServer Security & Firewall plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now, i'd like to push those IP blocks to the border router to drop packets as soon as possible (and doing so protecting the other machines on the network). Keep in mind that after N failed logins IP is blocked for 5 minutes, then re-allowed. If multiple bans occours in an hour IP is blocked permanently and should be unlocked "by hand". So I need a near realtime solution. What I'm looking for is a better way than firing some cronjobs both on cPanels and border router to: dump the rules to file transfer the file to border router (via scp/sftp) load the rules from the file in the border router I'm aware that I will need some scripts to parse and modify the rules as cPanels have one ethernet interface and some aliases while border router has two ehternet interfaces and some loopbacks. All machines involved use Linux. EDIT as per @pjmorse comment. The plugin consists of a bunch of perl and config files. The part I'm intrested in is a process which scans logfiles (lfd) and installs iptables rules (and sends an alert email). Fact is, it upgrades quite often (one or two times a week) and itself is 7000 lines of perl so I'm not comfortable on tampering with it.

    Read the article

  • Skipping nginx PHP cache for certain areas of a site?

    - by DisgruntledGoat
    I have just set up a new server with nginx (which I am new to) and PHP. On my site there are essentially 3 different types of files: static content like CSS, JS, and some images (most images are on an external CDN) main PHP/MySQL database-driven website which essentially acts like a static site dynamic PHP/MySQL forum It is my understanding from this question and this page that the static files need no special treatment and will be served as fast as possible. I followed the answer from the above question to set up caching for PHP files and now I have a config like this: location ~ \.php$ { try_files $uri =404; fastcgi_cache one; fastcgi_cache_key $scheme$host$request_uri; fastcgi_cache_valid 200 302 304 30m; fastcgi_cache_valid 301 1h; include /etc/nginx/fastcgi_params; fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/example$fastcgi_script_name; fastcgi_param HTTPS off; } However, now I want to prevent caching on the forum (either for everyone or only for logged-in users - haven't checked if the latter is feasible with the forum software). I've heard that "if is evil" inside location blocks, so I am unsure how to proceed. With the if inside the location block I would probably add this in the middle: if ($request_uri ~* "^/forum/") { fastcgi_cache_bypass 1; } # or possible this, if I'm able to cache pages for anonymous visitors if ($request_uri ~* "^/forum/" && $http_cookie ~* "loggedincookie") { fastcgi_cache_bypass 1; } Will that work fine, or is there a better way to achieve this?

    Read the article

  • 2.6.9 Kernel on virtual server (non upgradable) - any expected problems?

    - by chris_l
    Hi, I'm considering to rent a virtual server (for me personally). The product I'm currently looking at offers IMO fair pricing, very good hardware etc. The only problem is, that I won't be able to do an upgrade to a newer kernel than 2.6.9 (running Debian Etch). Also, I can't install my own kernel modules. (The server runs with Virtuozzo, so as far as I understand it, it just does some chroot instead of a real virtualization (?)) I want to run GlassFish, Postgres, Subversion, Trac and maybe some other things on it. It will also have to employ a firewall, and provide OpenSSL for https. Ideally, it would also be able to do AIO (asynchronous IO), which could speed up some server I/O. Should I expect problems with that old kernel version, in conjunction with the software I want to install (I'd like to use current versions of the software)? One thing I already found out, is that you can't do everything with iptables, since some kernel modules are missing/things are not build into the kernel. GlassFish v3 appears to run fine at first glance. I was able to test the server for a few hours. Installing my whole setup wasn't feasible in that time, but what I can say is, that it's amazingly fast for an entry-level vserver, especially hard disk and network performance (averaging at ca. 400MBit/s). So if the kernel won't be a problem, I'd really like to take it. Thanks, Chris PS Exact kernel version: 2.6.9-023stab051.3-smp

    Read the article

  • Partitioning of Ubuntu server which will use OpenVZ and encrypted partitions (unlocked through SSH l

    - by DeletedAccount
    Hi, I'm about to install a server. Some context: My HDD is 1 TB and I have 2 GB RAM Ubuntu Server Lucid Lynx AMD 64 I will use OpenVZ and have most functionality separated into containers. To support disk quotas I need to use ext3 (not ext4) for the container partition. Each time I reboot the server I want to be forced to login through SSH and mount the encrypted partitions by typing my password (if someone steals the server, no critical data should be available). I want to have as much as possible encrypted. Yet I want to be able to login through SSH as I don't have a monitor or keyboard at the server. I am not sure how big I need my partitions to be. Being able to resize them later would be nice. I guess it implies using LVM? But the manual partition mount using SSH is also very important (in fact it's more important, if I have to pick one). How do you recommend that I partition the HDD? If I have daemons which needs the encrypted partitions, will they fail and can I just restart them after mounting the needed partitions?

    Read the article

  • How to fix Windows 7 when System Recovery Options hangs?

    - by seansand
    The battery power ran out on my HP G60 laptop and it shut down. Even after recharging, Windows 7 will now not start up. After any attempted startup, it bluescreens and takes me to the "Startup Repair (recommended)" / "Start Windows Normally" console screen. "Startup Repair (recommended)" appears to be the right choice, but when I choose it, I get taken to a screen which appears to be System Recovery Options (it's the same wallpaper as the screenshots here: http://www.sevenforums.com/tutorials/668-system-recovery-options.html). However, I just get a cursor with nothing else; no "System Recovery Options" window ever pops up. (A black console screen does pop up for a split-second but too fast to be able to read the text.) The empty screen with cursor hangs indefinitely. System Recovery Options normally runs off of a partition on the laptop hard drive. When I got the laptop, I also created a System Repair Disc (in fact I have more than one) and when I try use any of them; they all result in the same wallpaper and empty screen with lone cursor. Ctrl-Alt-Del does nothing. The computer did not come with a Windows 7 installation disc, so there's no obvious way to reinstall Windows 7. Safe mode does not work; startup fails and I just get sent back to the "Startup Repair (recommended)"/"Start Windows Normally" console screen. "Start in last good state" does not work either, same result as above. Running a memory & hard disk check found no errors. Do I have any options at all? "System Recovery Options" seems to be what I want, but the screen that is supposed to take me to them just hangs.

    Read the article

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running an web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes onto a site *.example-host.com, they are sent to one of the two servers and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics etc. The main requirement is speed, but uptime is also important (if a server goes out, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with: 1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a forum requested, it can request it from the other by remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time, users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled. 2) [MySQL] Dual master replication. This would attempt to mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is. 3) [MySQL] Use a standard replication, distribute read only queries on both nodes and read/write queries to the master. This sounds like a good option, but again, I'm not sure on speed. I also don't know what would happen if the master server went down. If you have any other suggestions, please post them :)

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move folders Users Program Files Program Files (x86), ProgramData (at the root of the C drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious to do things that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on same HDD. I have done a bit of research on this, read up on some things that involved booting into Audit Mode, using the RoboCopy command to copy folders via booting into my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an auto-attend file which Windows Setup processes automatically before the user is ever booted in for the 1st time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, certain files can't be found/opened (even if I click on them directly), an example is Regedit. Also, I can't run the Command/DOS (CMD) prompt as Administrator (or otherwise, as any other user), can't activate the real Administrator account or open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86) so far, creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found the following blog article which describes how to do this for Windows 7 So, where should I go from here and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData. Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files\Program Files (x86) or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.

    Read the article

  • Exchange 2007 Backup - For a newbie

    - by mew3900
    I am trying to setup an exchange 2007 backup solution. After doing a lot of reading, Microsoft have decided in server 2008 unless you are willing to spend a great deal on a 3rd party solution you are pretty stuck! Essentially what I have been asked to do is perform an off-line file backup of our current exchange server and replicate this onto a new 2nd server. The reasoning behind this is that we need to upgrade our current installation of exchange 2007 to SP2 so that the exchange plug-in for windows server backup will be available to us. From this I can then actually take an exchange aware backup weekly and take it off site. Ideally then also we can migrate to this new server and keep the old one as a fail over. Is there a way I can copy across the files required onto a second server, although I doubt very much it is that simple. I may be barking up completely the wrong tree, however I have very limited knowledge with Exchange and any help and advice on how I would resolve this would be much appreciated. Thanks in advance

    Read the article

  • Which scripting language to use to asynchronously ssh into equipment, run several commands, parse the output, and save to a file on my computer?

    - by Fujin
    There are several points I'd like to stress in my question. I'd like to login by asynchronously ssh'ing into our infrastructure equipment. Meaning, I do not want to connect to only one device, do all the tasks I need, disconnect, then connect to the next device. I want to connect to several devices at once in order to make the process as fast as possible. By equipment I mean 'infrastructure equipment' and not servers. I say this because I will not have the luxury of saving files to the device then transferring them to myself with scp or another method. The output of the scripts that are run will have to be saved directly to my computer. The output of the commands that are run will need to be cleaned up and parsed. Also I want the outputs of each device to be combined into one nice and neat file, not a separate file for each device. This will all be done from a linux box, using ssh, into devices that all use linux'ish proprietary OSes. My guess is the answer to my question will either be a Bash, Perl, or Python script but I figured it wouldn't hurt to ask and to hear the reasons why one way is better than another. Thanks everyone. EXTRA CREDIT: With you answer, include links to resources that will help create the script I described in the language that you suggested.

    Read the article

  • How to run a restricted set of programs with Administrator privileges without giving up Admin acces (Win7 Pro)

    - by frLich
    I have a shared system, running Windows7 X64, restricted to a 'standard user' with no password. Not everyone who has access to the system has the administrator password. This works rather well, except for some applications - specially the unlock-applications for encrypted hard drives/USB flash drives. The specific ones either require Administrator access (eg. Seagate Blackarmor) or simply fail without it -- since these programs are sending raw commands to a device, this is to be expected. I would like to be able to add the hashes of these particular programs to a whitelist, and have them run as administrator without needing any prompts. Since these are by definition on removable media, I can't simply use a filename or even a path. One of the users who shares the system can be considered 'crafty', so anything which temporarily grants administrator rights to an user account is certain to cause problems. What i'd like to be able to do: 1) Create an admin account that can only run programs from a whitelist (or, failing that, from a directory) I can't find a good way to do this: As far as I can tell, SRP applies equally to ALL users? Even if I put a "Deny" token on all directories on the system, such that new directories would inherit it, it could still potentially run things from the mounted USB devices. I also don't know whether it's possible to create a new directory that DOESN'T inherit from the parent, that would lake the deny token, and provide admin access. 2) Find a lightweight service that will run these programs in its local context Windows7 seems to block cross-privilege level communication by default, and I haven't found such for windows 7. One example seems to be "sudo" (http://pages.cpsc.ucalgary.ca/~nfriess/sudo/) but because it uses a WLNOTIFY hook, it won't work under Vista nor Windows7 Non-Solutions: - RunAs: Requires administrator password! (but everyone calls it "sudo" anyway) - SuRun: From Google: "Surun uses its own Windows service that adds the user to the group of administrators during program start and removes him automatically from that group again"

    Read the article

  • Windows 8 not shutting down properly

    - by Patrick
    Since installing Windows 8, the computer hasn't been shutting down properly. When selecting to power down, the PC quickly displays the shutting down screen, the monitor powers off, and the computer remains on but unresponsive. After about 5 minutes, the computer will turn off. Upon booting into windows again, I am informed that Windows didn't properly shut down. I'm running a fast SSD, and it's a clean install of Windows 8, so there's no way Windows is taking that time to do some sort of hibernate on shutdown or whatever - not to mention the error when entering Windows the next time. This happens on every shut down. Restart works as expected. EDIT: Formatting again didn't work. Fails regardless of drivers installed. Event viewer Always these two messages in close succession: Error (event ID 6008): The previous system shutdown at 7:45:21 PM on ?27/?10/?2012 was unexpected. Critical (kernel power, event ID 41): The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

    Read the article

  • Sharepoint web part fails intermittently

    - by pringly
    I have a MOSS 2007 environment, 2 web servers and a DB server, load balanced between the two web servers. I deployed a web part recently, which worked fine for a while, but failed on web server 2 after a day. When it fails, it gets the error message: 'A Web Part or Web Form Control on this Page cannot be displayed or imported. The type could not be found or it is not registered as safe’ Once it has failed, it will stay that way until an IIS reset is done. The other web server never fails, I tried to force the second web server to fail to recreate the issue and have been unable to do it. I tried placing it under heavy http traffic and it handled it fine. Put it back in the pool and it failed again after about 7 hours. So, if i remove the .dll for the webpart from the affected web server, the webpart doesnt stop working. Is this normal behavior? I checked the bin directory for the site and the global assembly and it there is no other copy of the .dll anywhere else on the server. Also, when checking the web part gallery, if the web part has failed it will appear in the gallery, but by trying to add a new webpart, the .dll wont be listed. I have no idea how to continue troubleshooting from here or even fix it, any ideas?

    Read the article

  • Setup Apache with IPv6

    - by mrz
    I have two virtual machine on my computer. I have Apache installed on one of them (would be referred to as "server" after this), and I have set the Apache to listen to an IPv6. And when I enter the IPv6 into web browser on the "server" I see my index.html file. so far so good ... I want to be able to open a web browser on the other virtual machine("client") and see the index.html. But, when I try entering the IPv6 of the "server" in a web browser on the "client" I get an Unable to establish a connection to the server error. I can ping6 "client" from "server" and vice verse. There is only one thing to mention. ifconfig of the "server" shows 3 different IPv6 which two of them are scoped Global and there is one Link scope IPv6. On the "client" there is only one Link scope IPv6 though. I only can ping the Link IPv6s. Pinging other IPv6s would result connect:Network is unreachable. And if I set Apache to listen to Link IPv6, The rcapache2 start will fail the job. Any thoughts on what I am probably missing/doing wrong?

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTC ping between 2 machines on our network and am getting the following error ++++++++++++++++++++++++++++++++++++++++++++++ DTCping 1.9 Report for WEB2 ++++++++++++++++++++++++++++++++++++++++++++++ RPC server is ready ++++++++++++Validating Remote Computer Name++++++++++++ 03-03, 13:39:45.099-->Start DTC connection test Name Resolution: internal-->10.20.3.236-->internal.something 03-03, 13:39:45.114-->Start RPC test (WEB2-->internal) Problem:fail to invoke remote RPC method Error(0x6BA) at dtcping.cpp @303 -->RPC pinging exception -->1722(The RPC server is unavailable.) RPC test failed I have also run RPC ping where I get what I beleive is the same error: C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal Exception 1722 (0x000006BA) Number of records is: 4 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 1722 Detection location is 323 Flags is 0 NumberOfParameters is 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 1237 Detection location is 313 Flags is 0 NumberOfParameters is 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 10060 Detection location is 311 Flags is 0 NumberOfParameters is 3 Long val: 135 Pointer val: 0 Pointer val: 0 ProcessID is 5876 System Time is: 3/3/2011 2:44:12:822 Generating component is 8 Status is 10060 Detection location is 318 Flags is 0 NumberOfParameters is 0 I'm pretty sure that the exception number 1722 is the key but I can't find any info about it. There may be a firewall with ports that need opening between the machines which I am checking with our sys admins now. But I can do a regular ping between the machines. Other than that I am reading a lot of articles talking about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI the machine is running Windows Server 2003 RS SP2.

    Read the article

  • Why is /dev/urandom only readable by root since Ubuntu 12.04 and how can I "fix" it?

    - by Joe Hopfgartner
    I used to work with Ubuntu 10.04 templates on a lot of servers. Since changing to 12.04 I have problems that I've now isolated. The /dev/urandom device is only accessible to root. This caused SSL engines, at least in PHP, for example file_get_contents(https://... to fail. It also broke redmine. After a chmod 644 it works fine, but that doesnt stay upon reboot. So my question. why is this? I see no security risk because... i mean.. wanna steal some random data? How can I "fix" it? The servers are isolated and used by only one application, thats why I use openvz. I think about something like a runlevel script or so... but how do I do it efficiently? Maby with dpkg or apt? The same goes vor /dev/shm. in this case i totally understand why its not accessible, but I assume I can "fix" it the same way to fix /dev/urandom

    Read the article

  • USB Hub vs. Dockinstation USB vs. Laptop USB

    - by Will
    I recently had thougts about my current setup in my office, especially about the UBS ports distribution. Here's my setup: I have a Lenovo T410 docked to a Lenovo Dockingstation Series 3, that providey me with 6 USB ports, which I use all (3 ext. drives, mouse, keyboard, USB Hub of monitor). The USB hub on my ext. monitor (most probably powered by the ext. monitor's power supply) provides me with 2 USB ports, where I use one for my webcam and another for USB sticks. On my T410 itself I have 4 USB slots, that are usually not used, as don't want to mess with USB plugs when undocking my laptop, now and then I plug my printer on one of these, just because I don't have any UBS ports left. Now I'm wondering how fast each of these slots are: I assume that all the 6 USB ports from the dockinstation somehow go through the docking connector on the bottom of my laptop. Does this connector has such a big bandwidth for all these 6 USB ports to perform like if they were dedicated ports as the 4 ones on my laptop? Also how is generally the performance of USB hubs (like the one on my ext. monitor?)?

    Read the article

  • External HDD connecting via USB disconnects wireless LAN connection

    - by Kensai
    Strange problem. I have this MEDION Akoya PC that has a dedicated bay to slide an external HDD sold separately. It's very handy indeed cause the slot is providing a fast USB 3 connection and power to the HDD unit, without extra cables. All works fine except this show-stopper behavior to disconnect me from the router once I slide in the unit and it powers up. The moment I connect the unit the (normally) three-four WiFi connections I see in my neighborhood disappear and my own to the router loses its signal strength (no Internet traffic is possible). After a while it throws me off that one as well, never to connect me again as long as the unit is powered. Once I disconnect the HDD the various signals come back and it automatically reconnects to my own. What takes? Are we in front of a serious design fault by MEDION here? Does the spinning of the HDD on top of the PC cause electromagnetic interference strong enough to throw off my WiFi connectivity? Is it a simple USB problem? Some kind of strange hardware conflict? Where should I look?

    Read the article

  • Compiz & Linux compositing: how does it fit into the X architecture?

    - by Latanius
    Not a really "how to solve stuff" question, but... I was wondering how the modern X architecture works, with compiz & all. What I know about it: in the beginning, there was the X server, clients connected (presumably on TCP), and then sent messages to the server to instruct it to show windows etc. because this didn't work (at all? or just fast enough?) for OpenGL & 3D acceleration, additional APIs were created for direct rendering (DRI? and, in addition to the X server, what things did the X clients talk to to render stuff and through what interfaces?) and, finally, enter Compiz: X clients end up (somehow) rendering to OpenGL textures, which is then put together to form a fancy-looking screen with translucent windows, and rendered to the screen. What I'm especially interested in is what components does the system have and how do they connect to each other? Like... if there is a box labelled "compiz" in the system... is it inside the X server? If it's not, how do the rendered images from the apps end up in it? And where does it render to? Is that another X server? Or DRI? Of course, I'd be equally happy if pointed to some docs capable of clearing up the confusion described above (conditional on they being significantly shorter than book-sized entities).

    Read the article

  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection wont terminate

    - by anonymous-one
    Ok, to get the details out of the way: The php script can be anything as simple as: <? header('Content-Type: image/jpeg'); readfile('/local/image.jpg'); ?> When I try to execute this via nginx + php-fpm what happens is the image shows up in the browser, here is what happens: IE - The page stays blank for a long period of time, and eventually the image is shown. Chrome - The image shows, but the loading spinner spins and spins for a long period of time. Eventually the debugger will show the image in red as in error, but the image shows up fine. Everything else on the server works great. Its pushing out about 100mbit steady serving static content. So this is definatly a php-fpm related issue. I THINK this may have something to do with the chunked encoding being sent back wrong? Also, I threw in a pause before the image was read, and got the pid of the fpm process, and it looks as tho its terminatly correctly (from strace): shutdown(3, 1 /* send */) = 0 recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8 recvfrom(3, "", 8, 0, NULL, NULL) = 0 close(3) = 0 The above was dumped long before ie/chrome decided to give up (even tho the image was shown) loading the image. Displaying HTML / text content is fine. Big bodies etc all load nice and fast and terminate right away (as they should). Doing something like: THIS IS THE IMAGE ---BINARY DUMP OF IMAGE--- Works fine too. Any ideas?

    Read the article

  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY that is causing this error to be logged and for the squid process to fail on start up. I can't show you the exact content of the file but I can tell you It has several lines of JavaScript When we remove the JavaScript, the problem goes away. This same file does not cause any issues other 3 instances of squid that we have running internally. All instances of squid came from the same VM images so they should be the same. We are unable to reproduce this issue except on the one box and we are unable to debug more on this box right now because it is running in production. I know these files are interpreted so squid can replace certain values available in the session so it may be that a syntax error caused this issue. That does not explain why we cannot reproduce it on other (virtually the same) images. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. thanks!

    Read the article

  • Connection reset to some websites

    - by user143271
    I'm using a 2wire 3600HGV modem/router. Starting around this afternoon, any time I try to access anything from i.imgur.com I get The connection to i.imgur.com was interrupted in chrome, and the actual error is Error 101 (net::ERR_CONNECTION_RESET). It's network wide (tested multiple browsers on multiple computers and phones). I can access imgur.com just fine, but nothing from its content server i.imgur.com. If I disable wifi on my phone and use its 4G connection, I can access it just fine, so obviously imgur isn't down. I haven't changed any configuration on the router, and I have tried changing DNS servers (I tried google and OpenDNS). It also seems that imgur is not the only site; howtogeek and a couple of others seem to have the same problem. It looks like they are all edgecast cdn content servers, but not all edgecast cdn servers fail. Tumblr, for instance, works just fine. Does anyone have any idea what would be causing this? EDIT: Related to the edgecast remark, it would appear that this is a specific edgecast server: gs1.wpc.edgecastcdn.net. Tumbler's content is on gs1.wac.edgecastcdn.net, so it might be on a different server. Edit #2: These sites all respond to ping just fine as well.

    Read the article

< Previous Page | 294 295 296 297 298 299 300 301 302 303 304 305  | Next Page >