Search Results

Search found 4077 results on 164 pages for 'throw'.

Page 101/164 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • What do I need to do to set my computer as Default Gateway?

    - by Vaibhav
    We are trying to put together a box with dual LAN cards (let's say Outer and Inner), where the Inner LAN card is supposed to act as a default gateway on the network it is connected to. This box is running Ubuntu. The basic purpose of this box is to take messages generated on the inner network, do some work with them, and forward them out the Outer LAN card to a server. The inner network is completely isolated, with just a regular switch connecting the Inner LAN card to two other boxes. These other boxes either throw out multicast messages (which the Inner LAN card is listening to), or send out unicast messages meant for the server, which is not on this inner network. So, we need the Inner LAN card to act as a default gateway, where these unicast messages will then be sent, and the code on the dual-LAN-card box can then intercept and forward these messages to the server. Questions: 1. How do we set up the LAN card to be the default gateway (does it need some configuration on Ubuntu)? 2. Once we have this set up, is it a simple matter of listening on the interface to intercept the incoming messages? Any help (pointers in the right direction) is appreciated. Thanks.
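    A minimal sketch of what this usually involves on Ubuntu, assuming the Inner card is eth1 on 192.168.10.1/24 and the Outer card is eth0 (interface names and addresses are made up for illustration):

        # give the inner-facing card a static address
        sudo ip addr add 192.168.10.1/24 dev eth1

        # let the kernel forward packets between the two cards
        sudo sysctl -w net.ipv4.ip_forward=1

        # if traffic should leave via the Outer card under this box's own address,
        # masquerade it on the way out
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    The two inner boxes would then simply be configured with 192.168.10.1 as their default gateway. For intercepting the unicast traffic in your own code rather than plainly forwarding it, a raw/packet socket on eth1 or a DNAT rule redirecting the relevant ports to a local listener are the usual options.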

    Read the article

  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins. Long version: A couple of days ago, I ran yum update at the behest of the SSH login MoTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL starting to run out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message: Loaded plugins: priorities, security, update-motd, upgrade-helper Config error: Command "updateinfo" already defined — and I get that for any command I can think to run using yum. However, if I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins, which spat out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction, but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (probably because I should have completed the first transaction before trying to update again). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (it fails the same way with plugins enabled), which just gives me: Cleaning repos: amzn-main amzn-updates rpmforge / Cleaning up everything. I also tried package-cleanup --problems, which reports "Loaded plugins: priorities, update-motd, upgrade-helper / No Problems Found", and package-cleanup --dupes, which gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
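    A hedged guess at what is going on: "Command "updateinfo" already defined" usually means two yum plugins are both trying to register the same subcommand, which can happen when an interrupted update leaves both an old and a new copy of a plugin (or of yum itself) installed. A minimal sketch of how I would dig into it — the plugin named in the sed line is only an example, not a confirmed culprit:

        # finish or clear the interrupted transaction first
        sudo yum-complete-transaction --noplugins

        # see which plugins are present; two of them likely both define "updateinfo"
        ls /etc/yum/pluginconf.d/
        rpm -qa | grep -i yum

        # temporarily disable the suspect plugin (the name here is an assumption)
        sudo sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/security.conf

        # then retry a normal update with plugins back on
        sudo yum update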

    Read the article

  • Warning for "Reallocated Event Count" S.M.A.R.T. attribute with a new/unused drive. How serious is this?

    - by Developer Art
    I've just looked at the health status of my old 2.5-inch 500 GB Fujitsu drive with the popular "HD Tune" utility. It shows a warning for the "Reallocated Event Count" attribute. How serious is that? The thing is that the drive is practically new. I pulled it out of a new laptop over a year ago and never used it since. Right now it only has 53 "Power On" hours, which sounds about right, since I only had it running a few evenings overnight before switching it for something more performant. Does this warning indicate that the drive is likely to fail some time in the future? I'm somewhat perplexed, since the drive is effectively unused. What is more, I have arranged with somebody to buy this drive off me, since I don't really need it. It is 12.5 mm thick (it has 3 platters), meaning it doesn't fit into an external enclosure, which makes it quite useless to me. Can I give away the drive without having it on my conscience, or should I rather cancel the deal? In other words, can the drive be used safely for years to come, or is it better to throw it away? I'm running a sector test now to see if there are any real problems. I will post the results as soon as they're available.
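    One thing worth checking before deciding is the raw SMART values rather than the utility's summary. A quick sketch using smartmontools from a Linux live CD or any machine the drive can be attached to (the device name is an assumption):

        # dump all SMART attributes; look at the RAW_VALUE column for
        # Reallocated_Sector_Ct (ID 5) and Reallocated_Event_Count (ID 196)
        sudo smartctl -a /dev/sda

        # optionally run the drive's own long self-test and read the result later
        sudo smartctl -t long /dev/sda
        sudo smartctl -l selftest /dev/sda

    A raw value that is small and stays flat over time is usually not fatal; a value that keeps climbing, or pending/uncorrectable sectors appearing, is a much stronger reason to cancel the deal.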

    Read the article

  • Netbook thinks it is a desktop

    - by Narcolapser
    Question: Are there packages for download (and if so, which ones) that can get my netbook to understand that it is a netbook and not a desktop? Info: I'm running an Acer Aspire One with Ubuntu Desktop 9.10. I tried Ubuntu Netbook Remix first, but it has graphics issues with the Aspire One, so I changed to Ubuntu Desktop. It was the only distro (after Debian, CentOS, Fedora, and Knoppix all failed me) that I managed to get working. The only thing is that it is having issues doing things that a netbook/laptop should be doing. Most notably, it will run its battery dead if I close the lid and throw it into my backpack; it seems to just stay fully on and runs itself to death. Also, it will sometimes lock up if I close the lid and come back to it 10 or 20 minutes later. It also won't retain volume settings when I reboot, nor screen brightness, and there are just a couple of other things that I can't quite put my finger on, but which seem amiss. Like I said, essentially my netbook thinks it is a desktop. How can I fix this? ~N
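    A rough sketch of where I would start, assuming the stock GNOME desktop of 9.10 is in use (the package names are real; whether they fix the Aspire One's lid handling is an assumption):

        # make sure the laptop power-management bits are installed
        sudo apt-get install pm-utils acpid acpi-support laptop-mode-tools

        # check that closing the lid actually generates an ACPI event
        acpi_listen        # close and open the lid and watch for output

        # test whether suspend itself works at all
        sudo pm-suspend

    If pm-suspend works but closing the lid does nothing, the lid action under System > Preferences > Power Management is the next place to look; if no ACPI event shows up at all, it is more likely a driver/DSDT issue than a missing package.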

    Read the article

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created and put a 45-minute, 300 MB video file on my website and told a mailing list about it. Checking my site stats, I see that I've used 20% of my "unlimited" bandwidth for the month. As I want to be able to have several videos like this, clearly, I need to consider other options. The appeal of hosting the files on my own site (aside from the supposedly unlimited disk space and bandwidth) is being able to control the format, resolution, and quality of the video(s), as well as to ensure that it is clear that I'm the copyright holder (although the videos will be under a Creative Commons license). I find that for the screencasts I'm making, having a high resolution (say 3/4 of 1024 * 768) really makes seeing what is going on on the screen easier. It is also always a plus to not have the experience marred by advertisements. One more wrench to throw in is that while the videos are non-commercial, they do promote a club, and it seems that that falls afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money). What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I be considering a filesharing site [I have no qualms with someone downloading the entire video first -- I wouldn't want to watch 45 minutes in my browser!], hosting files with BitTorrent (ugh -- I think that'd reduce my audience), or should I be looking into other web hosts (and if so, who)?

    Read the article

  • Problem with Windows Service and network printers.

    - by Mohammadreza
    I have a Windows Service application that every now and then should print some documents. As far as I know, to print those documents my service must run under a user account other than Local Service or Network Service, so I have created a user account, added it to the Administrators group, and run the service with it. With locally installed printers I don't have any problems, because those printers are automatically installed for all accounts. To be able to print to network printers, I have created another application that syncs the installed printers of the currently logged-in user to the user account that my service uses, with the rundll32.exe printui.dll,PrintUIEntry command. In Vista and Windows 7 I don't have any problems with syncing the printers, because every time a printer should be installed the authentication window opens and asks for the appropriate user account to install that printer (the service user account does not exist on the network printers' computers). But in XP, a dialog with the caption "Connecting to {printername}" appears and stops responding, or sometimes it installs the printer but every time the service tries to print, a Win32Exception with the message "A StartDocPrinter call was not issued" is thrown, and in the user account that runs the sync application a duplicate printer shows up, which I couldn't delete except by force (through the registry). Am I doing the right thing for printing documents from a Windows Service at all? If yes, how can I solve the above-mentioned problem? And if not, what the heck should I do? Thanks.
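    One avenue that may sidestep the per-user syncing entirely: printui.dll supports per-machine printer connections, which are visible to every account on the box, including service accounts. A minimal sketch of that approach (the printer path is a placeholder, and whether it resolves the XP StartDocPrinter error specifically is an assumption):

        rem add the network printer as a per-machine connection
        rundll32 printui.dll,PrintUIEntry /ga /n"\\printserver\SalesPrinter"

        rem the connection is picked up after the spooler restarts (or at the next logon)
        net stop spooler
        net start spooler

    Because the connection is machine-wide, there is no longer a per-account copy to keep in sync, which also avoids the duplicate-printer entries in the syncing account.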

    Read the article

  • USB Device Not Recognized

    - by Franky Chanyau
    Ok, this one gets a little bit complicated, so bear with me :D A client brought her computer in to be fixed about a week ago. She says she tried charging a new phone she bought from China, and immediately afterwards her USB keyboard and mouse stopped working (typical). I had a quick look at it, but because I did not have time, I did a simple System Restore and it seemed as if the issue was fixed. I promptly sent it back to her, but a few days ago she called saying that the issue had returned. It turns out the computer was riddled with some virus that also corrupted her XP install, so I had to format the whole thing (yes, I tried repairing first). I hoped that the format would fix the keyboard and mouse issue, but the whole thing has escalated and the computer now throws the "USB Device Not Recognized" error when I plug anything into the many USB ports it has. I have installed all the drivers for the PC (including the chipset drivers) and even tried the unplugging-from-the-power-for-a-while trick; still no luck. I am sure it is not a hardware issue, but I may be wrong. This is way over my head. Can anyone help? Computer: HP Compaq DC7100, Intel Pentium 4, 512 MB RAM. OS: Windows XP Professional SP2.

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, it needs to throw an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious if anyone knew how expensive this operation would be. I plan on using the NLIST command because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency at which I check for new files.
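    For a feel of how cheap NLST is, it can be approximated from the command line before writing the watcher: each poll is one control-connection round trip plus roughly 26 bytes per file name. A small sketch (host, credentials, and path are placeholders, and this is not the eventual implementation):

        # one NLST poll: curl's --list-only uses NLST for FTP
        curl --silent --list-only --user watcher:secret ftp://ftp.example.com/incoming/ > now.txt

        # names that appeared since the previous poll
        comm -13 <(sort prev.txt) <(sort now.txt)
        mv now.txt prev.txt

    At ~2000 names per folder that is on the order of 50 KB per poll per folder, so even a few hundred folders polled every minute is modest traffic compared with downloading the files themselves.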

    Read the article

  • Laptop hangs on POST and does not finish except on rare occasions

    - by user1049697
    I have an old Toshiba Satellite A100 laptop that hangs on POST when I try to start it. On rare occasions it does finish the POST and boots Windows successfully, but most of the time it only gets partway through and then hangs. I can enter the BIOS when it has frozen, though, but for some reason I have to open the DVD drive first. The keyboard is not quite right either, and I can't navigate the BIOS properly because the arrow keys don't work. I tried an external keyboard, but the problem persisted. I have tried removing the memory, hard drive, and battery to see if any of these were the problem, but it did not solve it. The one logical thing left to do would be to remove the CMOS battery, but the "brilliant" engineers at Toshiba have placed it such that a complete disassembly of the machine is necessary. What this all boils down to is basically the question of whether I can "save" this machine and get it to boot properly, or whether I should just send it off to recycling. I suspect it might need costly repairs, but I can't bring myself to throw it away before I have made sure it's completely dead.

    Read the article

  • Tri-head Linux system with Xmonad: is it possible to have HW acceleration?

    - by progo
    What means exist to have three monitors, all controlled by Xmonad, and have hardware 3D acceleration as well? I had the pleasure of using three monitors earlier this year, and while Xmonad and Xinerama handle three monitors easily, I had to throw in an extra display driver and also let go of Nvidia's own TwinView (which is a hack on Xinerama). This left me with no HW acceleration and some flickering, as double buffering wouldn't work with certain applications. However, three monitors handle so beautifully that I had a hard time coming back to two. I understand the easiest way to achieve a HW-accelerated tri-head combo is to split into two Xorgs. I wouldn't be able to switch windows between the Xorgs, so I'm not really into this solution. What's more, having a cheap and old PCI card alongside an even slightly better PCIe card seemed to slow things down. Even when I occasionally disabled the third monitor in the Xorg configuration, I couldn't get HW acceleration to work; only after I physically disconnected the old PCI card could I get the games back in business. Would a Matrox DualHead2Go/TripleHead2Go and a powerful Nvidia GPU do the trick? I understand Xmonad can be configured to "believe" that a "single" (as the DualHead2Go will merge it) 3360x1050 display is actually two different ones, so that Xmonad's Mod-w and Mod-e would work properly there?

    Read the article

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:
      CORE SERVER - Server 2012 Datacentre (all below are Hyper-V guests):
        Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
        Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
        Server3: Cloud-SG01 (Remote Desktop Gateway)
      CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V guests):
        Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
        Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
        Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
        Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)
    What I thought about doing was setting up each organisation in its own OU (perhaps creating the OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and authenticates, so they are pushed onto the correct server (based on session collections in 2012). I won't lie, this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.

    Read the article

  • Is UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like):
      Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms, etc.)
      Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help)
      One external hard disk still claiming to function, but corrupting data
      One hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned)
    (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth out a power cut. Is a home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.

    Read the article

  • Can any postfix guru assist me in determining how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently, though, I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see that there are many repeated attempts at mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it's getting through. Unfortunately I'm not a Linux or Postfix expert; I can get by, but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers. I get lots of entries like this in my MAIL.INFO log:
      Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
      Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)
    and
      Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
      Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
    Then there tend to be connection timeouts. So, from what I see, even though I have relaying disabled, something is getting by and trying to send. If you can help, that would be greatly appreciated, and I can supply any further logging/config info. Thanks
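    A few hedged starting points: "relaying disabled" does not rule out the two most common sources of this, namely a compromised web script injecting mail locally, or a stolen SASL password submitting mail as an authenticated user. A quick triage sketch (log paths assume a Debian-style layout):

        # how much is queued right now, and what does it look like?
        mailq | tail -n 20

        # was the spam submitted with SMTP auth? (look for sasl_username= entries)
        grep -i 'sasl_username' /var/log/mail.log | awk '{print $NF}' | sort | uniq -c | sort -rn

        # or injected locally by a web application via the sendmail binary / pickup service?
        grep 'postfix/pickup' /var/log/mail.log | tail -n 20

        # sanity-check what is trusted for relaying and what restrictions apply
        postconf mynetworks smtpd_recipient_restrictions

    If the queue IDs SpamCop reported show up with a sasl_username, changing that password stops it; if they come in via pickup, the web applications on the box are the place to look.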

    Read the article

  • Proxy settings in Java mail API

    - by coder
    I've written a piece of Java code where user1 sends email to user2. I'm behind a proxy and hence I'm getting a javax.mail.MessagingException. How do I solve this problem? Here is the code:

      import java.util.Properties;
      import javax.mail.Message;
      import javax.mail.MessagingException;
      import javax.mail.PasswordAuthentication;
      import javax.mail.Session;
      import javax.mail.Transport;
      import javax.mail.internet.InternetAddress;
      import javax.mail.internet.MimeMessage;

      public class Mail {
          public static void main(String[] args) {
              final String username = "[email protected]";
              final String password = "abc";

              Properties props = new Properties();
              props = System.getProperties();
              props.put("mail.smtp.auth", "true");
              props.put("mail.smtp.starttls.enable", "true");
              props.put("mail.smtp.host", "smtp.gmail.com");
              props.put("mail.smtp.port", "587");

              Session session = Session.getInstance(props, new javax.mail.Authenticator() {
                  protected PasswordAuthentication getPasswordAuthentication() {
                      return new PasswordAuthentication(username, password);
                  }
              });

              try {
                  Message message = new MimeMessage(session);
                  message.setFrom(new InternetAddress("[email protected]"));
                  message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("[email protected]"));
                  message.setSubject("Testing Subject");
                  message.setText("Dear Mail Crawler," + "\n\n No spam to my email, please!");
                  Transport.send(message);
                  System.out.println("Done");
              } catch (MessagingException e) {
                  throw new RuntimeException(e);
              }
          }
      }
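    For the proxy part: plain JavaMail cannot talk to an SMTP server through an HTTP proxy, but if the proxy is a SOCKS proxy, reasonably recent JavaMail versions (1.5 and later, as far as I recall) let you point the SMTP transport at it with two extra properties. A hedged sketch of the addition (host and port are placeholders):

      // only works for a SOCKS proxy, and only with a sufficiently recent JavaMail jar
      props.put("mail.smtp.socks.host", "proxy.example.com");
      props.put("mail.smtp.socks.port", "1080");

    With an HTTP-only proxy, the usual workaround is an external tunnel (for example a local port forwarded through the proxy to smtp.gmail.com:587) and pointing mail.smtp.host/mail.smtp.port at that local endpoint instead.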

    Read the article

  • Migrating away from LVM

    - by Kye
    I have an Ubuntu home media server setup with 4.5 TB split across a few hard drives (1x3TB, 2x1TB), and I'm using LVM2 to manage the volumes. I have recently added a 60 GB SSD to my server, and I wish to use it to house the 'root' partition of my server (which is currently under the LVM group). I don't want to simply add it to the LVM volume group, because (AFAIK) there's no way to ensure that the SSD will be used for the root filesystem. If I just throw it at the VG, it may be used to house my media, which would defeat the purpose of having the SSD in the first place. I feel that my only solution is to somehow remove my root partition from the LVM setup and copy it across to the SSD. My boot partition is, of course, not part of the LVM group. My disk setup is as follows:
      60 GB SSD: empty.
      1 TB HDD: /boot, LVM space.
      1 TB HDD: LVM space.
      3 TB HDD: LVM space.
    I have a few logical volumes: my root (/), a 'media' volume for my media collection, a backup one for my network backups, etc. Does anyone have any advice as to how to go about this? My end goal is to have the 60 GB SSD used for my boot and root partitions, with everything else on the 3TB/1TB/1TB hard drives.
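    A rough sketch of the usual copy-and-repoint approach, assuming the SSD shows up as /dev/sdd and will hold a single ext4 root partition (the device name, layout, and ext4 choice are all assumptions; the old root logical volume stays untouched until the new root has proven bootable):

        # partition and format the SSD
        sudo parted /dev/sdd -- mklabel msdos mkpart primary ext4 1MiB 100%
        sudo mkfs.ext4 -L ssdroot /dev/sdd1

        # copy the current root filesystem across (one filesystem only)
        sudo mount /dev/sdd1 /mnt
        sudo rsync -axHAX / /mnt/

        # point the copied system's fstab at the SSD (use the UUID from blkid)
        sudo blkid /dev/sdd1
        sudoedit /mnt/etc/fstab

        # reinstall the bootloader from inside the copied system
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt update-grub
        sudo chroot /mnt grub-install /dev/sdd

    Once the machine boots cleanly from the SSD, the old root logical volume can be removed with lvremove and its space handed back to the media or backup volumes.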

    Read the article

  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64bit Windows 7 Professional machine. I'm running WampServer Version 2.1 with Apache 2.2.4. It was installed on a clean machine. I'm using the default ini/conf files as they come. Wamp is installed in C:\wamp\, with php5.2 at C:\wamp\bin\php\php5.2.11 and php5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions. When I run WAMP with 5.2.11 picked, it starts fine. When I run it with 5.3.4 picked, there are no errors in the Apache or PHP error logs, but I get The Apache service named reported the following error: httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf: Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server: The Apache service named is not a valid Win32 application. in my system application error logs. 5.2.11 calls C:/wamp/bin/php/php5.2.11/php5apache2_2.dll and that doesn't throw an error. What am I doing wrong?

    Read the article

  • Why does hiberfil.sys come back from the dead on Windows 7?

    - by Corey White
    I have Windows 7 running on a small (40 GB) partition, with 4 GB RAM. This means that the hiberfil.sys file created by Hibernate takes up a significant portion of the available disk space. I would like to remove it. I am aware that I can disable Hibernate and remove hiberfil.sys by entering powercfg -h off in an elevated command prompt. This works -- the file is immediately removed, and after doing so, the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key is (correctly) set to 0. However, the next time I reboot the PC, hiberfil.sys returns from the dead, Hibernate is re-enabled, and that registry key has returned to 1. I'm pretty much at my wits' end with this. Almost everything I can find online related to removing the hiberfil.sys file simply suggests using powercfg to turn off hibernation, and that appears to work for just about everyone. But it just keeps coming back for me! (Like a vampire, sucking up my disk space.) I did find one other thread from someone who seems to have had the same issue, but none of the suggestions there worked for the original poster (or for me). Still, I have tried everything listed there, including:
      Disabling hybrid sleep
      Disabling Hibernate through the command prompt, through the Power Options GUI, and through both (in both orders)
      Manually changing the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key
      Pretty much everything else I can think of!
    I do want to reiterate that I have no problem removing the file -- that works great. It just comes back after every reboot. I'm about ready to throw in the towel and just run a script on login to disable Hibernate each time, even though that seems like a crazily hacky "solution"... but I was hoping someone here could suggest something else first. Thanks!
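    Since the file only comes back across a reboot, one hedged way to narrow it down is to confirm what the setting looks like immediately before and after a restart, and to rule out a group policy or vendor power tool flipping it back. A small sketch from an elevated prompt (nothing here is specific to this machine):

        powercfg -h off
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Power" /v HibernateEnabled

        rem after the next reboot, check whether the value really flipped back on its own
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Power" /v HibernateEnabled

        rem produce a report of applied policies, in case one of them re-enables hibernation
        gpresult /h gpreport.html

    If the value is already 1 again at the first prompt after a reboot with no policy in the report touching power settings, an OEM power utility or backup tool running at startup becomes the prime suspect.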

    Read the article

  • My Linux server's "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have a strange behaviour on my server :-/. It is an OpenVZ VPS (I think it is OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also, ifconfig returns venet0). When I do cat /proc/stat, I can see that each second about 50-100 processes are created and about 800k-1200k context switches happen! All of that is with the server completely idle, no traffic nor programs running; top shows 0 load average and 100% idle CPU. I've closed all non-needed services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens. I also run ps -ALf each second and I don't see any changes: only a new ps process is created each time, and its PID is just the previous one + 1, so new processes are not actually being created. So I thought that the process growth in cat /proc/stat must be threads (yes, it seems the processes counter in /proc/stat counts thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1). I've changed to the /proc dir and done cat [PID]/status for all PIDs listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at anywhere near the speed cat /proc/stat reports (just a few tens per second); Threads stays the same as well. I've done strace -p PID on all processes too, so I can see if any process is creating threads or something, but the only process that has a bit of movement is ssh, and that movement is read/write operations because of the data being sent to my terminal. After that, I ran vmstat -s and saw that forks is growing at the same speed as processes in /proc/stat does. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing! The last thing I can think of is that all the process data that /proc/stat and vmstat -s show is shared with all the other VPSes stored on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
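    Two hedged checks that might separate "my container is doing this" from "I am seeing the host's counters": pidstat attributes context switches per task from inside the container, and watching only the relevant /proc/stat lines makes the growth rate explicit. A small sketch (pidstat comes from the sysstat package and may need installing):

        # per-task voluntary/involuntary context switches, refreshed every second
        pidstat -w 1

        # watch just the fork and context-switch counters
        watch -n 1 "grep -E '^(processes|ctxt)' /proc/stat"

    If pidstat shows nothing remotely close to the rate /proc/stat reports, that supports the suspicion that on OpenVZ those counters are whole-host values rather than per-container ones.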

    Read the article

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon CloudFront distribution that was originally set up as secured; objects in this distribution required URL signing. For example, a valid URL used to be of the following format: https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC The distribution points to an S3 bucket that also used to be secured (it only allowed access through CloudFront). What happened: At some point, the URL signing expired and requests would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it points to, both to be public. I then tried to invalidate objects in this distribution. The invalidation did not throw any errors, however it did not seem to succeed: requests to the same CloudFront URL (with or without the query string) still return 403. The response headers look like:
      HTTP/1.1 403 Forbidden
      Server: CloudFront
      Date: Mon, 18 Aug 2014 15:16:08 GMT
      Content-Type: text/xml
      Content-Length: 110
      Connection: keep-alive
      X-Cache: Error from cloudfront
      Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
      X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
    Things I tried: I set up another CloudFront distribution that points to the same S3 bucket as origin server. Requests to the same object in the new distribution were successful. The question: Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object would not get invalidated? Thanks for your help!
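    It is worth separating the two layers involved: an invalidation only clears CloudFront's cached copy, while the 403 may be coming freshly from S3 if the object itself is still private, or from the distribution's cache behaviour if "Restrict Viewer Access (signed URLs)" is still enabled there. A hedged checking sketch with the AWS CLI (bucket, key, and distribution ID are placeholders; older CLI versions required the CloudFront preview commands to be enabled first):

        # does S3 itself serve the object anonymously?
        curl -I https://s3.amazonaws.com/my-bucket/images/TheImage.jpg

        # if not, making the object public is the fix, not invalidation
        aws s3api put-object-acl --bucket my-bucket --key images/TheImage.jpg --acl public-read

        # then clear the CloudFront copy of that path
        aws cloudfront create-invalidation --distribution-id E123EXAMPLE --paths /images/TheImage.jpg

    If the cache behaviour on the old distribution still requires signed URLs, unsigned requests will keep returning 403 no matter how often the object is invalidated, which would also explain why the fresh distribution works.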

    Read the article

  • What is the best way/software to manage multiple short-lived instances of virtual machines?

    - by Newtopian
    Hi, we have a QA department that has to test our software on multiple combinations of OS and DBMS. With Windows spewing out many different versions, the combinatorial math of all this can be daunting. So we decided on virtualizing our setups, but so far that only displaces the problem: hardware is expensive, and we need many different combinations, far exceeding our server capacity to deliver. Also, these instances are throwaway: once the test is complete we no longer need them, and furthermore, to ensure proper test isolation we should start fresh from a new instance. Lastly, we only need a small subset of these systems online at any given time. What I am looking for is a way to manage inventory so that our QA staff can order instances to be put online as required and discarded once used. Instances are spawned from a pool of freshly installed systems with the appropriate combination, ready to accept our software. It should also be possible for two or more people to start the same instance at the same time, though we could manage without this if it proves too complex to put in place. Finally, our budget is pretty thin; we can probably make some purchases, but ideally expenditures should be kept to a minimum. To summarize, we should be able to:
      Bring instances online on demand (ideally with queue and scheduling management)
      Destroy instances on demand
      Keep masters in inventory but not online
      Manage a large inventory of VMs (30-100, maybe more) with a small staff of users (5-10)
      Allow adding, deleting, and changing instances in inventory (bring online, make changes and check back in, or create new and check in)
      Allow a few long-lived instances for support tools (normal VM server usage)
    Thanks for your answers

    Read the article

  • Juniper SSG 5 VPN

    - by Ethabelle
    I have a host who set up our Juniper SSG 5 VPN with firmware version 6.2.0r5.0. I've been trying to set up VPN on it using this guide: http://kb.juniper.net/InfoCenter/index?page=content&id=KB4094. I've followed the steps (summary: create user and give them L2TP auth ability, create group, place user in group, create VPN gateway, create VPN, create IP pool, change default L2TP settings, create Untrust-to-Trust policy), but on my Mac, whenever I try to connect using L2TP over IPSec I get the following error: "The L2TP-VPN server did not respond. Try reconnecting. If the problem continues, verify your settings and contact your Administrator." I looked in my firewall's logs, but I don't even see anything under Reports > Logs > Events. I'm... obviously missing something, I just don't know what I'm missing at this point. I'm just starting networking and this is sort of Step 101; I'm getting annoyed and just want to throw up OpenVPN instead, but I've read that has problems with Juniper firewalls. Hooray.

    Read the article

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far: sudo apt-get install imagemagick libmagickcore-dev — this did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem: sudo gem install rmagick. This resulted in the following error:
      ERROR:  Error installing rmagick:
      ERROR:  Failed to build gem native extension.
      /usr/bin/ruby1.8 extconf.rb
      checking for Ruby version >= 1.8.5... yes
      checking for gcc... yes
      checking for Magick-config... yes
      checking for ImageMagick version >= 6.4.9... yes
      checking for HDRI disabled version of ImageMagick... yes
      checking for stdint.h... yes
      checking for sys/types.h... yes
      checking for wand/MagickWand.h... no
      Can't install RMagick 2.13.1. Can't find MagickWand.h.
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
      Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/bin/ruby1.8
      Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
      Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out
    What do I need to do to install rmagick on Ubuntu 10.04?
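    The failing check is for the MagickWand development header, which on Ubuntu lives in a separate package from libmagickcore-dev. A likely fix (package name as it exists on 10.04, to the best of my knowledge):

        sudo apt-get install libmagickwand-dev
        sudo gem install rmagick

    If the build still cannot find MagickWand.h after that, Magick-config --cflags shows which include directory the gem's extconf.rb is being pointed at.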

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions: C (50 GB) and D (250 GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB), and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual machine (say, QEMU, VMware, etc.)? An alternative is using Wubi; in that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5 GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs, which would be accessing the mounted Windows drive D? Is it feasible to use Linux this way? Which of the above is better, if they are a way out at all? Note: Since my post in July 2010, I have been using, and have tried, several ways of maintaining a disk image that I can mount in Linux. I had a 100 GB qcow2 disk and a 100 GB raw disk, both formatted with an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop-mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point make was waiting while mount.ntfs showed 100% CPU usage.

    Read the article

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if it is not sysprepped and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is, if we clone using these tools in Linux, how would we generate a new SID thereafter (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.

    Read the article

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu VM (10.04 Lucid Lynx). I recently installed a DHCP server and TFTP (xinetd's tftpd) using these instructions. I've mapped a network drive so that my Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to throw some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never goes past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):
      service tftp
      {
          protocol    = udp
          port        = 69
          socket_type = dgram
          wait        = yes
          user        = nobody
          server      = /usr/sbin/in.tftpd
          server_args = -s /home/user/tftp/
          disable     = no
          cps         = 300 2
          per_source  = 60
      }
    I've done some searching but can't find any parameters for this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help? Either with a timeout configuration, or maybe even by recommending a different TFTP server? Thanks!
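    Two hedged notes: cps and per_source are xinetd's own rate limits (connections per second with a cool-off period, and simultaneous connections per source IP), not TFTP timeouts. If the in.tftpd here is the tftpd-hpa binary, retransmission behaviour is set through server_args instead; a sketch of what that could look like (the flags are from tftpd-hpa's man page, so they should be double-checked against the installed version):

      service tftp
      {
          protocol    = udp
          port        = 69
          socket_type = dgram
          wait        = yes
          user        = nobody
          server      = /usr/sbin/in.tftpd
          # -v logs each transfer to syslog; -T sets the first retransmission
          # timeout in microseconds (here 5 seconds)
          server_args = -v -T 5000000 -s /home/user/tftp/
          disable     = no
      }

    A transfer that always stalls at roughly the same percentage is also a classic sign of dropped UDP on the path rather than a server timeout, so ruling out VMware NAT versus bridged networking for the VM is worth a try as well.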

    Read the article
