Search Results

Search found 12768 results on 511 pages for 'little snitch'.


  • How to avoid sshfs freezing?

    - by Andreas Hagen
    So the issue is this: I've installed sshfs on Ubuntu 12.04 and I'm trying to connect to a couple of remote servers. Initially the mount seems successful. Sometimes Gnome even picks it up and displays the "new device found" box at the bottom of the screen, but from here on there is not much that works. Or at least not any more. The first couple of times I connected it seemed to work fine, and I was able to transfer some files; then I disconnected using fusermount -u <folder>, and after reconnecting a little later the trouble started. Now, after executing sshfs -o ServerAliveInterval=15 -o reconnect -C -o workaround=all -o idmap=user root@<host>:/ <folder>, when I change directory into the mount point, the shell just freezes. Strangely, ls -al <folder> works when listing just the root of the remote system, but nothing more. Also, every file explorer I've tried freezes just like cd <folder>. To me it seemed like there was some kind of zombie thread or something hanging around my system, given that it did work the first time, so I have tried rebooting, but no luck. sshfs -V gives this: SSHFS version 2.3 FUSE library version: 2.8.6 fusermount version: 2.8.6 using FUSE kernel interface version 7.12 So yeah, any ideas?
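
    One hedged first step, offered only as a sketch since the underlying cause isn't known: lazily unmount the wedged mount, then remount with ssh keepalives so a dead connection gets detected and re-established instead of leaving the shell hanging. <host> and <folder> are the same placeholders used above.

        # Lazy-unmount the stale FUSE mount, then remount with keepalives enabled.
        fusermount -uz <folder>
        sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3 \
              -o workaround=all -o idmap=user root@<host>:/ <folder>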

    Read the article

  • organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes Library, at least 5k with ratings and play counts, and apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... Well, a big mess with no tags! Probably there's no single piece of software capable of just organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile... Please consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of the iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

    Read the article

  • Windows Server 2008 R2 running at a snail's pace

    - by Django Reinhardt
    Really weird problem here. Our main web server has started running at a snail's pace, for absolutely no reason we can discern. Even after restarting the machine, when there's little or no RAM usage and CPU usage is fluctuating between 0 and 30%, simple tasks, like opening Internet Explorer or waiting for My Computer to open, take forever. There are no processes hogging system resources that we can see... the machine itself is just exhibiting extremely slow behaviour. I've never seen a machine do this. A lot of security updates had built up, so we decided to let Windows install them. When we looked through the history upon restarting, though, they had failed with error code 800706BA. I don't know if this could be related or not. Any help in this matter would be greatly appreciated. As mentioned in the title, we're running a Windows Server 2008 R2 machine. It's also running SQL Server and IIS. It has 16 GB of RAM and a decent quad-core processor. It's also been fine until now -- and we haven't changed a thing. Thanks for any help.

    Read the article

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic: We ran Log Parser on our IIS logs and it turns out this was a perfect storm of Yahoo and Google bots indexing us. In that three-hour period, we saw 287k hits from three different Google IPs, plus 104k from Yahoo. Ouch. While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting that in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time. Perhaps some bit of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop, but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
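
    If a Linux box ends up in front of the web servers, one possible shape of the answer is a per-source-IP rate limit. This is only a sketch with made-up thresholds, not a recommendation over the PIX:

        # Drop traffic from any single source IP that exceeds the rate on port 80;
        # the 600/minute figure is illustrative and would need tuning against real logs.
        iptables -A FORWARD -p tcp --dport 80 -m hashlimit \
            --hashlimit-name http --hashlimit-mode srcip \
            --hashlimit-above 600/minute -j DROP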

    Read the article

  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noise and then sort of resets itself (with a flash of orange light in the usually-green LEDs); it then repeats this as if stuck in a loop. In fact, even the power button does nothing now – the only way I can affect the device at all is to plug in or pull out the power cord! (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.) Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting the disk(s) on one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues – that's secondary to recovering the data.)
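
    If the disks do come out, a rough sketch of the software side, assuming (as is common for this class of NAS) a Linux software-RAID1 member with a mountable data partition; a SATA-to-USB adapter would cover the hardware side, and /dev/sdb and the partition number are placeholders:

        # On a Linux box, with one member disk attached as /dev/sdb:
        fdisk -l /dev/sdb                          # locate the data partition
        mdadm --assemble --run /dev/md0 /dev/sdb2  # start the RAID1 degraded, from one member
        mount -o ro /dev/md0 /mnt/recovery         # mount read-only and copy the data off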

    Read the article

  • Improve wireless performance

    - by djechelon
    Hello, I have a Trust Speedshare Turbo Pro router, which is running on channel 6. I found that the wireless signal (and network performance) drops dramatically on my PDA (I can barely attach to the network, even if I set the PDA's energy settings to maximum wireless performance) as soon as I leave my room, and I don't have shielded walls or anything like that. I can't even stream an SD video from my desktop (connected via LAN) to my laptop over WiFi, while over LAN it works fine. I read that changing the router's channel could improve performance by reducing interference. I found that almost all wireless networks around here run on channels 6 and 11. I tried to go to my router's settings page to change the channel, but I found that the combo box only allows me to select 6!! I'm not sure, but I may have been able to change the channel in the past, though not to all of the available channels. A few minutes ago I tried a firmware upgrade, but it didn't solve my problem. My question is: is it possible that my router is somehow locked to its channel? I bought it on my own; I didn't receive it from my ISP. Apart from boosting the antenna power to the maximum (which, by the way, increases the EM radiation my family's bodies and mine absorb 24/7 and is a little more environmentally unfriendly), do you have any tips on getting high-quality transmission up to 5 metres from the antenna? Thank you
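
    Before fighting the router's UI, it can help to see how crowded each channel actually is. A quick survey from a Linux laptop, as a sketch (wlan0 is an assumed interface name):

        # List nearby access points with their channels to spot the least-used one.
        sudo iwlist wlan0 scan | grep -E 'ESSID|Channel'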

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. GridFS is a module for MongoDB that allows you to store large files in the database. I am pondering the different options for storing the database. But since I am inexperienced at deployment and it is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration. Hence, I would like a fully managed hosting solution. But I would like to know about any other option, if you think it is worth it. It should be able to scale. Cloud style. Pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ And they seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly. MongoHQ: the larger plan's max storage is 20 GB. Seems like very little storage for GridFS. MongoMachine: flat price, $2.50 per GB. I didn't find the limit. Seems like a good price compared to the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect. $8 per GB, quite costly. CloudControl: the larger plan is 20 GB. The custom service starts at 250€ plus some unspecified charge per GB. What is your experience with these services? Any downtimes? Other possibilities? Edit: Added the meaning of GridFS
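
    For sizing experiments against any of these hosted plans, the stock mongofiles tool is enough to push test files into GridFS and watch how storage grows. A sketch; the host and database names are placeholders:

        # Put, list and fetch a file in GridFS using the mongofiles CLI.
        mongofiles --host db.example.com --db myapp put report.pdf
        mongofiles --host db.example.com --db myapp list
        mongofiles --host db.example.com --db myapp get report.pdf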

    Read the article

  • Why isn't my phone charging with some micro USB cables?

    - by Jacxel
    I ordered three micro USB cables on eBay. My phone was at about 50% battery and I wanted to use it as a hotspot for some browsing on my laptop, so I plugged it in, a charging icon appeared on the phone, my laptop showed it as a connected USB device, and so I went about my business. About 30 minutes later I checked the phone and to my dismay saw 45% battery. But ah, I thought, I have been putting the poor little thing under too much pressure: acting as a WiFi hotspot must drain the battery quicker than it can charge via USB; perhaps even my laptop's USB port wouldn't output enough power. Undeterred, I continued on, and when I was going to bed I plugged the USB cable into a mains adapter, switched everything battery-consuming off and, content, went to sleep. The next morning I was awoken by my phone's alarm, which got cut off unexpectedly. I attempted to unlock my phone, which showed no more signs of life. Why isn't my phone charging with these new USB cables? For clarity: they transfer data with no problems; the phone appears to be charging, showing all the signs and lights it normally would; the cable that came with the phone works as you would expect, so it's not a fault with the phone; I think they just slow the discharging of the phone, but I could be wrong. Are these just bad-quality cables? Is there a way to fix this issue?

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41ms but Document Length: 0 bytes and HTML transferred: 0 bytes, with a Transfer rate of 7.9Kb/s. 2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83ms along with the accurate document length and HTML transferred total. The APC cache status reports identical hit counts for both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
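
    One quick way to narrow this down (a guess at the cause, not a definitive diagnosis): check what /index.php returns when requested directly, since a redirect or an empty body would explain the 0-byte document length in the first run.

        # Compare the raw response headers for the two URLs used in the ab runs.
        curl -sI http://mysite.com/index.php
        curl -sI http://mysite.com/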

    Read the article

  • Using NFS for scalable PHP/MySQL web application

    - by Jeroen Moons
    Here's the situation: I have a PHP/MySQL web application that accepts user uploads (PDF files). From these PDF files' pages a preview image is made on the fly and presented to the web app's users. Some PDFs might be on the large side; most will be under 50 MB, but some extreme cases could be as large as a few hundred MB. A little waiting for the preview image for large PDF files is acceptable, but no more than a minute, let's say. Everything is running on one server for now, but soon the app will hit the server's limit on both storage and processing power. My idea to solve the problem: to deal with this situation I had the idea of having one or more PDF processing servers as needed, and one or more file storage servers. These two types of servers are mounted to the server on which the actual app runs using NFS. The app could then use Gearman to delegate PDF processing tasks to these processing servers. The processing server can mount the storage server and read the file stored there, process it, and write its output to that server. The servers I'm talking about will be Amazon EC2 instances. The web app returns a link to the resulting PDF preview image on the storage server that was used, which can then be used on the front end to show the image to the user. My question: I have zero experience with apps that use multiple servers; is this idea viable, or is there a better way to do it? Is an NFS setup fast and reliable enough for this situation?
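
    The NFS wiring itself is simple for this layout. A minimal sketch, assuming a storage instance at 10.0.0.10 exporting /data to an app or processing instance at 10.0.0.20 (addresses and export options are illustrative only):

        # On the storage server, export the directory (line in /etc/exports):
        #   /data 10.0.0.20(rw,sync,no_subtree_check)
        exportfs -ra                       # re-read /etc/exports
        # On the app/processing server, mount it:
        mount -t nfs 10.0.0.10:/data /mnt/data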

    Read the article

  • HP/Lenovo alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo Terrastation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen for more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys? HP/Lenovo? I have searched high and low on the HP site and seen many options but am struggling to work out if it is all the hardware I will need. Some options appear to need separate controllers from disk enclosures, etc.

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted into receiving email messages from our application. When preparing such a message, we set the following SMTP headers: FROM: [email protected] TO: [email protected] SENDER: [email protected] We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient; when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the sender header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like: 571 incorrect IP - psmtp (in reply to RCPT TO command) I think this means the receiving server only saw that the FROM header address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved -- senders and recipients -- want to participate and receive these messages. One alternative is to always use our company email address in the FROM header, and prepend the author's name/address to the subject, but this seems a little clumsy.
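
    Purely as an illustration of the header layout described above (the addresses are placeholders, not real accounts, and the local-MTA invocation is an assumption about how the message gets handed off), with a Reply-To added for the author as a common companion to keeping SENDER on our own domain:

        # Hand a message with From/Sender/Reply-To set to the local MTA; -t tells
        # sendmail to take the recipient list from the headers themselves.
        printf '%s\n' \
          'From: Author Name <author@customer.example>' \
          'Sender: notifications@ourapp.example' \
          'Reply-To: author@customer.example' \
          'To: recipient@customer.example' \
          'Subject: New content posted' \
          '' \
          'Body of the notification goes here.' \
        | /usr/sbin/sendmail -t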

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a SWF movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local XML file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the Flash movie to run its tests in. I've passed this information on to our sysadmin team (along with a link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
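
    One avenue that may be worth asking the sysadmins about (offered as an assumption, not something verified on this particular box): Xvfb, the plain X virtual framebuffer, often ships with the Solaris X distribution and serves the same purpose as Xvnc here, namely giving the Flash player a display to render into.

        # Start a virtual framebuffer on display :99, then run the test movie against it
        # (the standalone player binary name and path are placeholders).
        Xvfb :99 -screen 0 1024x768x24 &
        DISPLAY=:99 flashplayer /path/to/FlexUnitTests.swf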

    Read the article

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory, which of course I need to keep, so I added --protect 'lost+found'. The problem is that tmpreaper outputs: error: chdir() to directory 'lost+found' (inode 11) failed: Permission denied (PID 30604) Back from recursing down `lost+found'. Entry matching `--protect' pattern skipped. `lost+found' I have tried other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges: I own all the files (except lost+found). Am I forced to run tmpreaper as root? Or are my shell skills not as good as I thought? I guess the problem is: tmpreaper will chdir(2) into each of the directories you've specified for cleanup, and check for files matching the <shell_pattern> there. It then builds a list of them, and uses that to protect them from removal. Any thoughts and/or advice? Edit: The command I'm trying to run is something like $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null error: chdir() to directory `lost+found' (inode 11) failed: Permission denied Edit 2: I read the source code for tmpreaper-1.6.13 and found this: if (safe_chdir (dirname)) exit(1); and if (chdir (dirname)) { message (LOG_ERROR, "chdir() to directory `%s' (inode %lu) failed: %s\n", dirname, (u_long) sb1.st_ino, strerror (errno)); return 1; } So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see three options left: run tmpreaper as root, move the download directory, or find an alternative tool (tmpwatch?). I will give it some more research before I make a choice.
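
    Another hedged possibility, if tmpreaper really must enter every directory: skip it entirely and let find do the pruning while never descending into lost+found. A sketch only; test it with the rm removed first.

        # Prune lost+found from the walk, then delete regular files older than 30 days.
        find /mydir -mindepth 1 \( -name lost+found -prune \) -o \
             \( -type f -mtime +30 -print0 \) | xargs -0 -r rm --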

    Read the article

  • Wireless router setup for 1-1 NAT

    - by Carlos
    What I have: a Linksys WAG160N router with firmware version 2; a "pool" of five external static IPs provided by my ISP (213.xx.xxx.n); and all the required configuration values for the static IPs (subnet mask, gateway, and static DNS 1, 2, 3). Current WAN configuration: Encapsulation: RFC 2364 PPPoA; Multiplexing: VC; QoS type: UBR; DSL modulation: MultiMode. What's connected to the network: 1 x server (that I want to make available to the outside), 5 x desktops with static internal IPs such as 192.168.0.xx, 2 x network printers, also with internal static IPs, 2 x laptops, and 1 x NAS (network attached storage), also on a static IP. What I want to do: I would like to make the server available from outside the network, for example from your house. The problem is that I'm not really sure how to do this. I have tried following the steps in the Linksys instruction manual, but they do not seem to work; once I set it up as shown below, I lose internet and all hell breaks loose. Going into further detail, I would prefer the network to be changed as little as possible; by this I mean that all the computers stay networked with each other and only the server is accessible from outside the network. What I need HELP with: I have read around that it is possible to set up 1-1 NAT (I know where it is in the menu but have no clue what it does...) so that I can NAT a single public IP directly to a single private IP (in our case the server). But please, how do I do that? Or maybe an alternative?
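
    For what it's worth, here is what 1-1 NAT amounts to, expressed as iptables rules. This is purely conceptual (the router's own 1-1 NAT menu entry does the equivalent internally), and the addresses are placeholders:

        # 203.0.113.10 stands in for one of the public IPs, 192.168.0.20 for the server.
        iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 192.168.0.20
        iptables -t nat -A POSTROUTING -s 192.168.0.20 -j SNAT --to-source 203.0.113.10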

    Read the article

  • Internet Problem: Wireless connected but no connection to internet

    - by Josh K
    Hey, I have an interesting network setup on a laptop here and for some reason the internet isn't working. I am connected to a secure network via a wireless router, and the taskbar says I am connected with good signal strength, but in my web browser I can't connect to any websites; the error is: "This webpage is not available." (Chrome). I am using Chrome, but websites don't work in IE either. Here's a little background on the setup I have. I have Ethernet connected to the laptop with a static IP, and then I have the wireless set up with DHCP enabled. I am using the Ethernet to connect to the network (for Remote Desktop) and the wireless for internet (to avoid the network firewalls). This setup has worked fine for a few months, but I can't figure out what is going on now. It might be worth noting that it is a Lenovo ThinkPad and I just uninstalled ThinkVantage Access Connections (as it was giving me ample problems prior to this one, which I consider a step up). I've tried repairing the connection as well; let me know if you guys have any ideas, please! EDIT: Solved: dead modem in the server room. Sorry guys, I didn't have access to that myself.

    Read the article

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this: The machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network) Everyone who has the ability to set up a man-in-the-middle attack already has root on the machine The machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react); I am only trying to make my machine nicer to use. I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine and blowing away the old key and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate this. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
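
    A sketch of the per-host refresh step that either the wrapper or the daemon could run (reasonable only on this isolated network, for exactly the reasons laid out above); the hostname is a placeholder:

        # Replace the stale key for one freshly reinstalled machine.
        host=qa-box-01.example
        ssh-keygen -R "$host"                                # remove the old entry
        ssh-keyscan -t rsa "$host" >> ~/.ssh/known_hosts     # fetch and trust the new key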

    Read the article

  • Fedora installed in Legacy mode, how to make it work in UEFI?

    - by TryntaLearn
    I am trying to install a Linux distribution on my new laptop. It's an MSI GE40, which comes preinstalled with Windows 8. It's a UEFI machine. I have tried installing Ubuntu and Fedora with limited success. I've tried: running it in UEFI, UEFI with CSM mode, with Secure Boot enabled, ... with Secure Boot disabled, ... with Secure Boot enabled but in user mode. I have had no success with any of these methods. With Ubuntu the GRUB loader shows up, but when I pick 'Try Ubuntu' or 'Install Ubuntu', it's just a blank screen (I've been using live USBs, btw). With Fedora, it'll show me the next screen, on which it says 'binary authorised by vendor certificate' or 'Secure boot not enabled', and then stop doing anything. The closest thing to success I reached was switching to legacy mode to install Ubuntu, in which case I was able to get to the Ubuntu installer, but it wouldn't recognize Windows 8 on my computer, so instead of continuing I rebooted and removed the USB pendrive, only to find my computer couldn't find Windows 8. After a little dicking about I got it to find Windows 8 again. Any ideas on how I should go about trying to install a distro on my computer? UPDATE: So I ended up installing Fedora in Legacy mode. To use both it and Windows at boot, I manually enter Automatic Repair so I can get to my UEFI settings and switch the boot mode to UEFI to boot Windows 8. I guess my question needs to be modified: how do I get all of this to work in UEFI mode, so I can dual-boot via a bootloader menu, and not by repeatedly switching boot modes?

    Read the article

  • Laptop Overheating with Windows 8

    - by Dany Khalife
    I recently installed Windows 8 on my HP G62 laptop and I have been noticing a very strange problem with it. Left alone for, let's say, 5 minutes, without even touching it, it starts to heat up and reaches about 60 degrees Celsius with absolutely no applications open (not just on the desktop, but overall). I dug in a little deeper and found out that Maintenance was running when the computer was idle, so I turned that off from the system's Task Scheduler, and while there I also turned off other services I did not need, hoping that would solve the problem. After a few days, I noticed that the average temperature of my laptop had dropped from 55 to 48 degrees while working in Visual Studio. And just when I thought the problem had disappeared, it showed up again, only not after 5 minutes, but more like 10 minutes... Here is what I have done so far: replacing the thermal paste on the CPU and the fan and cleaning the fan (this was like 6 months ago); using a laptop cooler; running a virus scan (I just formatted my laptop, so it would be really weird if I had already caught something, but who knows). Right now, I believe it has something to do with my gfx driver (even though it IS up to date); looking closely at the screen, I can see the pixels slowly refresh (kinda like watching static on TV), which I wasn't able to do on Windows 7. If you have any ideas, let me know. Thanks

    Read the article

  • Looking to get a small server – need web, PHP, PostgreSQL.

    - by Javawag
    Hi all! I'm looking to get a cheap (low-end) server to serve web pages (xHTML/PHP), but I also need to be able to set up PostgreSQL on the system too. Ideally the server would have low power consumption, run Linux (I prefer Mac OS X, but a Mac Mini, although it's the size I'm looking for, is too much money!) and be around £100 (~$160US). EDIT: Just to make it clearer, I'm looking to purchase the server hardware myself – but I want something about Mac Mini-sized. I don't want to pay for hosting! Also, quick question – if it's to serve web pages from my home (standard ISP connection, no static IP!), what do I need in place to get this working? I'm guessing I would sign up with some service like No-IP, and register a domain to point to my No-IP address (then install the No-IP software on the server to update that with the current IP). I know the idea of running a server behind a normal ISP connection isn't very elegant, but I'd prefer to have the server where I can see it than pay over the odds for a hosting service where I have little to no control over what happens. Also, I could write my own server software for apps/etc to connect to as well. Anyways, I'm rambling! What do you guys think?! Javawag
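
    On the software side, almost any small Linux box covers those requirements with stock packages. A hedged sketch for a Debian/Ubuntu install (the package names are assumptions about the distro chosen, and No-IP's own update client can stand in for ddclient):

        # Web server, PHP, PostgreSQL bindings, the database itself, and a dynamic-DNS updater.
        sudo apt-get install apache2 php5 php5-pgsql postgresql ddclient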

    Read the article

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, Recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I now suspect that it is in fact only 32-bit SQL; however, I'd like to verify this. Here are some details: The OS is definitely 64-bit. xp_msver shows Platform as NT INTEL X86 SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)... However, sqlservr.exe is not shown with '*32' in Task Manager; does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, it does give back the expected 64-bit readings, which only reinforces that the server in question is running in 32-bit mode. Now, that being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform; however, it is currently heavily in production, so this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
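
    To see whether AWE is in play at all on the suspect instance, one hedged check from a command prompt on the server (the instance name is a placeholder, and note this flips 'show advanced options' on in order to read the setting):

        sqlcmd -S MYDBSERVER -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'awe enabled';"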

    Read the article

  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I refresh the page, I reboot the router, I reboot the other computers on my home network and turn off any bandwidth-hogging services on them, I've turned on QoS on the router to prioritize SSH. I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes, then it will be responsive for a bit and then get clogged again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (to the router's web config through its own tunnel) is also slow/laggy/halting. The realtime bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidths. How do I troubleshoot something like this?
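
    One variable worth ruling out (an assumption, not a diagnosis; in Tunnelier the equivalent knob would be its own keepalive settings): client-side SSH keepalives, so a half-dead TCP session gets detected and torn down instead of leaving every forwarded channel hanging. With plain OpenSSH, a sketch:

        # ~/.ssh/config entry for the home router (names are placeholders):
        #   Host home-router
        #       HostName router.example.net
        #       ServerAliveInterval 15
        #       ServerAliveCountMax 3
        #       Compression yes
        # Then open a SOCKS proxy for the browser on localhost:1080:
        ssh -N -D 1080 home-router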

    Read the article

  • SamFS performance problem on file creation

    - by Gregor Longariva
    I have two SamFS filesystems (samfs1 and samfs2), both on the same 6130, both with the same config/watermarks/timeouts, etc. Creating a file on samfs2 works as it should; on samfs1 it does not. A simple little script shows that every now and then file creation takes between 11 and 28 seconds: stan 12:32 [scratch]# while ( 1 ) while? echo - while? time echo test file while? time mv file file2 while? echo + while? sleep 1 while? end 0.00u 0.00s 0:00.01 0.0% 0.00u 0.00s 0:00.00 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.03 0.0% + 0.00u 0.00s 0:23.71 0.0% 0.00u 0.00s 0:00.14 0.0% + 0.00u 0.00s 0:00.18 0.0% 0.00u 0.00s 0:00.13 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.05 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.06 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.05 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.05 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.05 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.04 0.0% + 0.00u 0.00s 0:00.04 0.0% 0.00u 0.00s 0:00.05 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.01 0.0% + 0.00u 0.00s 0:26.05 0.0% 0.00u 0.00s 0:00.50 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.06 0.0% + 0.00u 0.00s 0:00.00 0.0% 0.00u 0.00s 0:00.12 0.0% + Any idea where the problem could be?

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    I'm trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. I have valid data in hdfs:// ready for processing, and I wrote a little streaming mapper in Python. When I launch a mapper-only job using: hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py -input /incoming/STBFlow/* -output testOP Hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state and nothing else happens. CPU usage stays at zero. (The job is created, though.) If I give it a single file (instead of the entire directory) as input, I get the same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues, maybe: ports are enabled explicitly, case by case, in the cluster security group.
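
    A few hedged sanity checks from the CDH3 command line that help separate a firewall problem from a cluster-side one (it's the tasktrackers, not just the datanodes, that must be visible to the jobtracker before maps leave "pending"):

        hadoop dfsadmin -report          # are the datanodes alive and reporting in?
        hadoop job -list                 # does the jobtracker know about the submitted job?
        hadoop fsck /incoming/STBFlow    # is the input path healthy and readable?
        # Also check the jobtracker UI (port 50030) "Nodes" page: zero live
        # tasktrackers would explain maps that never leave the pending state.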

    Read the article

  • Removing extended partition without deleting logical in it

    - by HisDudeness
    I'm running a Linux-based laptop, and in order to multi-boot several distros on it, I created, with GParted, an extended partition containing a bunch of logical ones. Now, after quite a long time with this setup, I've changed my mind because of the resulting lack of storage space for my data partition. I want to keep just one distro, as is normal, and eventually have some other operating systems stored on external media to plug in and use if I want. Obviously, the partition I want to keep (and to enlarge a little, too) is just a logical partition inside the extended one I want to get rid of. As far as the partition count is concerned I'm OK, meaning I currently have the big distro-dedicated extended partition, the swap and the data partitions, so there's room for another primary before I delete the extended. But I don't know how to delete it without touching the logical inside it; I don't want to reinstall the system and lose all my changes and settings, and I don't want to keep an extended partition for a single logical one. How can I do it? Do I have to create a new primary, copy the logical's contents into it and then delete everything? Will the system boot and keep exactly all the features it has now? Or is there a way to convert an extended into a primary once it contains just one logical? Or can I directly move a logical out of an extended, turning it into a primary? Or, again, am I screwed?

    Read the article
