
  • Network speed between a VM and another machine not residing on the same host is 11 MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine not residing on the same host is 11 MB/s at most.

    Topology and facts:
    - ESXi 5, version 5.0.0.504890
    - the VM has the latest VMware Tools installed and uses the E1000 network driver
    - the physical box runs Windows Server 2008 R2 as the OS
    - CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
    - vCenter is another VM on the ESX host
    - both the VM and the physical box show a 1 Gbps link speed, and Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: the VM runs FileZilla FTP Server (default settings, one user account created) and the physical box runs FileZilla FTP Client (default settings). The physical box uploads a big file to the FTP server: transfer speed (as observed by Windows Task Manager on both machines) is ~11 MB/s (bad). The physical box downloads that file from the FTP server: still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: the physical box runs "ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS" and the VM runs "ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS". Transfer speed (as observed by Windows Task Manager on both machines): ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: the physical box runs the vSphere Client; I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed (as observed by Windows Task Manager on the physical box): ~26-36 MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed NTttcp on another VM on the same ESX server and measured network performance between VMs on the same ESX server with NTttcp. Transfer speed (as observed by Windows Task Manager): ~90-120 MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connecting to the same datastore and the same switch. The two ESX servers each have 2 NICs: one goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between VMs on different ESX servers with NTttcp. Transfer speed (as observed by Windows Task Manager): ~11 MB/s (bad).

    While I'm aware of these threads: "ESXi 4.1 slow file transfer", "ESXi 5 network performance is slow", "Debian Etch and ESXi slow network speeds", "VMWare ESXi slow file copy to guest", they did not help (or I must have missed something).
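
    11 MB/s is suspiciously close to the wire speed of a 100 Mbps link, so one quick sanity check (a sketch, assuming SSH access to the ESXi host; not part of the original tests) is to confirm the negotiated speed of every physical uplink and to test the host-to-box path with the guest taken out of the picture:

        # On the ESXi host: list physical NICs with negotiated speed/duplex.
        # A hop stuck at 100 Mbps full duplex would explain a hard ~11 MB/s ceiling.
        esxcfg-nics -l

        # Ping the physical box over the vmkernel interface, to confirm the
        # host-to-box path independently of the guest's E1000 driver.
        vmkping PHY_BOX_IP_ADDRESS

    Swapping the guest's E1000 driver for VMXNET3 (available once VMware Tools is installed) would be another variable worth isolating.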


  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:

    - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
    - in iCal, write down events for each day, or for a concrete time (13 to 14 hours); I set up each day the tasks I want to accomplish and allocate hours to them
    - use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name = someday)
    - use Mail on Mac and create folders, named after the different projects, for the mails I want to process
    - save the main info for each project in FreeMind maps

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

    - must be 100% offline accessible
    - should use as few programs/resources as possible; ideally just one program able to manage all my info
    - lets me use the GTD methodology mixed with priorities, and lets me allocate each task, converted to an event, on my calendar
    - gives me different daily/weekly etc. views on a calendar to see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter; I will pay for this

    So, according to your experience, can you recommend something like this? Thanks


  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I will go with Apache for dynamic content.

    That apart, the confusion begins when it comes to choosing web servers for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose?

    1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
    2. Lighttpd's development has evidently gotten slower than it was a good while ago.
    3. The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stack Overflow/Server Fault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these:

    - Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
    - If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface + really good documentation. Yes?

    I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.
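
    To make the comparison concrete: serving static files takes only a handful of lines in any of the three. Here is a minimal sketch of an Nginx server block (the domain and docroot are placeholders, not from the original question):

        # Minimal static-file server block for Nginx.
        server {
            listen       80;
            server_name  static.example.com;   # placeholder domain
            root         /var/www/static;      # placeholder docroot

            location / {
                # Let clients cache static assets for a week.
                expires    7d;
                try_files  $uri =404;
            }
        }

    Cherokee expresses roughly the same thing through its web admin interface, and Lighttpd through its own config syntax, so the setup effort difference between the three is small for a plain static site.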


  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of these results a little.

    This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory. The server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first.

    The question: Because these files are like 14 MB in total, and a true lightweight Linux distribution isn't that big either, I'm thinking: what if next year we run a full RAM drive? Is there any? Is that useful? Is that worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
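
    Short of booting an entire distribution from RAM, the document root alone can live in memory. A minimal sketch using tmpfs on Debian (mount point and size are illustrative):

        # Create a RAM-backed filesystem comfortably larger than the ~14 MB of results.
        mkdir -p /srv/results-ram
        mount -t tmpfs -o size=64m tmpfs /srv/results-ram

        # Copy the static results in and point the web server's docroot here.
        cp -a /srv/results/. /srv/results-ram/

    Worth noting: Linux will usually serve a 14 MB working set straight from the page cache anyway after the first few hits, so the tmpfs mainly buys predictability rather than raw speed.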


  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well, and we never felt the need to upgrade it. The database is about 2 GB, holding 6000 sessions. (The sessions are a little big, but we need them.) We have another server with SQL Server 2008 R2, with a much faster CPU, much more RAM, and a much faster hard disk.

    Situation: one day, we had a huge surge in traffic. The transaction log growth on SQL Server froze the server for tens of seconds at a time, allowing only a few requests through in minutes. So we loaded up the new server with ASPState, with very large data and log files, and pointed all of our applications at the new server. It chugged along fine for about 5 minutes, and then CPU usage jumped up to 50% of the 16 cores that Standard Edition can use, and it froze for tens of seconds at a time. The files do not record any autogrowth events. The disk queue is nice and low. RAM usage is low. CPU usage on our old server has never been higher than 5%.

    What happened on the new server? Alternatively, I would like to hear success stories of ASP.NET session state running on SQL Server 2008 R2 with an average write load of 30 MB/s, with bursts up to 200 MB/s.
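
    To narrow down where the new box spends those frozen stretches, one hedged starting point is the instance's accumulated wait statistics. A sketch, runnable from any machine with the SQL Server client tools (the server name is a placeholder):

        rem Top accumulated wait types on the 2008 R2 instance. High LATCH_*
        rem or PAGELATCH_* waits against the ASPState pages would point at
        rem contention rather than disk I/O.
        sqlcmd -S NEWSERVER -E -Q "SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC"

    Running this during an incident and comparing against a quiet period shows which wait type actually grows while the server is frozen.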


  • Simple customized Mac mini, Mac mini server, or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 superuser, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

    - centralize and share 90 GB of files using Dropbox (this way we will have LAN sync of local working directories + internet backup + access to our files wherever we are)
    - centralize our plotter, printers and fax machine
    - back up all the workstations
    - share Outlook calendars and tasks
    - run 24x7 while saving some energy

    Of course this setup is just the first step toward a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€. We are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about a Mac mini, either normal or with Snow Leopard Server. I've heard about Windows Home Server too, on the Lifehacker website, but I'm in a sort of analysis paralysis. Can you help me?


  • Green flickering pixels that move with black images

    - by user568458
    Strange question... Occasionally, on my LCD screen, pixels that should be black flicker rapidly and constantly between black and green, about 4 flickers a second. The crazy part is that, unlike dead/stuck pixels, they are relative to content on the screen and move with it.

    For example, I might be looking at a web page with a picture that has lots of black. There might be a couple of green flashing pixels in that black that shouldn't be there. I scroll the page, and the green flickering pixels move with the image. It seems that every physical pixel is fine, but somehow something interprets part of the image in a way that causes flickering green...

    It's not just in a web browser. My first thought was to blame a trolling blogger cunningly uploading an animated GIF that simulates a failing pixel... but it happens in a wide range of applications. It seems to occur randomly, other than that it seems to only occur in areas of pure black, and it's always pure 100% green. It happens rarely enough that it's not a big deal, but it's such a strange problem that it bugs me. I can't find any info on anything like this. I'm not even sure if it's hardware or software. Any ideas? (Windows 7 laptop connected to the LCD by a DVI-to-HDMI cable.)


  • Why is the Windows 8 recycle bin using more space than it is allocated?

    - by oldmankit
    I ran WinDirStat to scan the contents of my hard drive. I was surprised to see that the $RECYCLE.BIN folder on my D: drive takes 26 GB of space.

    - I emptied the recycle bin and refreshed the folder in WinDirStat, but it still takes 26 GB of space.
    - I reduced the maximum size of the recycle bin for this drive to 10000 MB for the main user of this computer, disabled the recycle bin for the other user, and refreshed the folder in WinDirStat, but it still takes 26 GB of space.
    - I ran (in an elevated window) "rd /s D:\$Recycle.bin", refreshed the folder in WinDirStat, and finally it became empty.

    Why was it taking up space even after I emptied it? Why was it taking more space (26 GB) than the maximum allowed amount (10 GB)?

    Update: after six months of using Windows (no re-install and no changes to settings related to the Recycle Bin), I used WinDirStat to check how big D:\$RECYCLE.BIN has become. It is now 29 GB. In Recycle Bin Properties, I select drive D, and it still shows a custom maximum size (10000 MB).
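
    One detail worth checking: the recycle bin keeps a hidden per-user (per-SID) subfolder for every account that has ever deleted something on the volume, and the size cap is applied per user, not per volume. A sketch for inspecting them from an elevated command prompt (the SID folder names will vary by machine):

        rem List the hidden per-SID subfolders inside the recycle bin on D:
        dir /a D:\$Recycle.bin

        rem Then list each SID subfolder individually to see which account
        rem (or orphaned SID from a deleted account) owns the 26 GB.

    Orphaned SID folders from removed accounts are not governed by any current user's size setting, which is one way a bin can exceed the configured maximum.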


  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    - Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    - Several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64 GB).

    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way, by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles.

    Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
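
    Since the whole argument rests on random-I/O numbers, it is worth measuring them on the actual candidate drive rather than trusting the datasheet. A sketch using fio (the device path is a placeholder, and note this write test is destructive to the target device):

        # 4 KiB random writes, queue depth 32, direct I/O against a raw test
        # device. Reports IOPS directly; run it against both the SSD and a
        # spinning disk to see the claimed ~100x gap for yourself.
        fio --name=randwrite --filename=/dev/sdX --rw=randwrite \
            --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
            --runtime=60 --time_based

    The same invocation with --rw=randread gives the read side of the comparison.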


  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I've signed up for a cool job website that, unfortunately, asks if you want to "invite your friends", and if you say so, you can give it access to your Gmail contacts to send the invites. However, contrary to what everyone would expect, it doesn't give you a list of people to choose from; instead, it simply sends spam directly to your entire contact list, like an old-fashioned Outlook virus. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users".

    For me and some other friends (who fell for the same prank), this is a clear breach of web best practices and a big disrespect of the users' trust. So I would like to know what we can do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way.

    P.S.: I wonder what would have happened if I had given this website access to post on my Facebook timeline as well. I've gotten a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.


  • Legal IT documents

    - by TylerShads
    I have been wondering about this for the past week, because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc. Which is reasonable, and I have been doing it anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of EULAs, ToS, etc. (correct me please if I'm using the wrong terms), or, more specifically, a policy, so to speak, for the users and such. I can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up/have on hand?

    EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know something must be done to comply with HIPAA policies. I do like what anthonysomerset said about the "if I get hit by a bus" scenario, and I want to apply it not only to the documentation I am currently writing but also to edge cases, like an employee stealing info from the server, theft, etc. As for our staff, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me as the only IT person on staff, plus a guy who is no more than a Mac superuser.


  • MongoDB on a 128 MB 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128 MB of memory (UPDATE: now I'm considering 360 MB too):

    - nginx and Redis
    - three instances of Tornado apps (one is for the mobile site; a limited app, not my primary audience); around 8 collections. A social webapp for my community.
    - MongoDB

    Everything besides MongoDB seems to have a small footprint. Memory-mapping-wise, I don't know how MongoDB will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have... hmm... maybe ~50 15rps. I did my homework doing a lot of frontend optimizations, and YSlow gives me grade A 91 (ruleset V2) :-)

    Anyone willing to share experiences? E.g. how big the data set was when Mongo hit the ceiling, performance when Mongo does a lot of disk I/O, etc. Thanks.

    UPDATE: this is my pet project. I'll get back to you when I next have spare time to run httperf in a VirtualBox with the exact spec. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff.
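
    Because mongod memory-maps the whole data set, the number to watch on a box this small is resident vs. mapped size. A quick check from the mongo shell (a sketch; host is a placeholder):

        # Print mongod's view of its memory usage (values in MB).
        # "resident" is what actually competes with nginx/Redis/Tornado for
        # the 128 MB; "mapped" roughly tracks the data set size.
        mongo --host localhost --eval 'printjson(db.serverStatus().mem)'

    Also worth noting as a hard constraint: 32-bit MongoDB builds cap the total data size at roughly 2 GB because of the memory-mapped address space, which on this VPS may bite before RAM does.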


  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi. We use Backup Exec to take care of the backups for our data server, our Exchange server, and one more set of systems. Each of these three is done on a separate "set" of tapes. Our goal is to be able to roll back a full 2 weeks, with 1 full backup each weekend and differential/incremental backups in between (the difference between the two in our case isn't very big, because the employees mostly use a very similar set of files throughout the week).

    While playing around with the settings on how to achieve this, we set the time for BE to keep the full backup to 14 days, but because we have too much data this requires manual intervention each time, to erase a certain tape and use that one.

    What I would like to know is what kind of guidelines, tricks, tips and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone. I made this community wiki as it's not a very specific question. Thanks in advance!


  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going: addresses, memorizing of.

    Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project.

    Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done.

    We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6, even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what shortcut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
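
    One illustration of the kind of shortcut in common use (a sketch, using the 2001:db8::/32 prefix that is reserved for documentation): choose small, meaningful group values so that only the tail of an address ever needs remembering, and let DNS carry everything else.

        # Addressing plan sketch: memorable tails, the rest lives in DNS.
        #   2001:db8:42::/64     server subnet ("42" mirrors the old v4 habit)
        #   2001:db8:42::53      DNS server   (53 = the DNS port)
        #   2001:db8:42::25      mail relay   (25 = SMTP)
        #
        # Zone file entries mean a full address is only ever typed once:
        #   dns1   IN  AAAA  2001:db8:42::53
        #   relay  IN  AAAA  2001:db8:42::25

    With a plan like this, the technician's "42.42" reflex translates to "42::53", and everything non-critical is reached by name rather than by memory.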


  • Copy a large LVM volume (14 TB) from one server to another

    - by bruce
    Recently I have to copy a very large LVM volume from server A to server B. Below are the filesystems of server A and server B.

    Server A:

        [root@AVDVD-Filer ~]# df -h
        Filesystem                         Size  Used  Avail  Use%  Mounted on
        /dev/mapper/vg_avdvdfiler-lv_root   16T   14T   1.5T   91%  /
        tmpfs                              3.0G     0   3.0G    0%  /dev/shm
        /dev/cciss/c0d0p1                  194M   23M   162M   13%  /boot
        /dev/mapper/vg_avdvdfiler-test     2.3T  201M   2.1T    1%  /test
        /dev/sr0                           3.3G  3.3G      0  100%  /mnt

    Server B:

        [root@localhost ~]# df -h
        Filesystem                         Size  Used  Avail  Use%  Mounted on
        /dev/mapper/VolGroup-LogVol00       20G  2.5G    16G   14%  /
        tmpfs                              3.0G     0   3.0G    0%  /dev/shm
        /dev/cciss/c0d0p1                  194M   23M   162M   13%  /boot
        /dev/mapper/VolGroup00-LogVol00     16T  133M    15T    1%  /xiangao/lv1
        /dev/mapper/VolGroup00-LogVol01    4.7T  190M   4.5T    1%  /xiangao/lv2

    I want to copy the LVM volume /dev/mapper/vg_avdvdfiler-lv_root on server A to the LVM volume /dev/mapper/VolGroup00-LogVol00 on server B. Server A and server B are in the same IP segment. The LVM volume on server A holds files averaging around 500 MB (avi, wmv, mp4, etc.).

    I tried mounting /dev/mapper/vg_avdvdfiler-lv_root from server A on server B through NFS, then copying with cp. It is clear I failed: the LVM volume is too big. I don't have a good idea; I hope for a good solution here. I'm Chinese and my English is poor, sorry. Thanks, everyone!
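
    For bulk copies of this size on a trusted LAN, a common alternative to NFS+cp is streaming a tar archive over a raw TCP connection, which avoids per-file NFS round trips. A sketch, assuming the media files live under a single tree on A (shown here as /data, a placeholder) and should land in /xiangao/lv1 on B (netcat flag syntax varies slightly between variants):

        # On server B (receiver): listen and unpack as data arrives.
        nc -l 5000 | tar -xf - -C /xiangao/lv1

        # On server A (sender): stream the tree to B.
        tar -cf - -C /data . | nc SERVER_B_IP 5000

    The trade-off is that an interrupted run has to be restarted from scratch; if resumability matters more than throughput, rsync over ssh in batches of subdirectories is the usual fallback.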


  • How to share text, pictures and video with a restricted set of users

    - by joaoc
    I want to share pictures and videos of my kid growing up, but I don't want them to be open to the public. I just want me and my wife to be able to add material, and then share it with grandparents and friends, who would be given a username/password or some other solution that authenticates them. This content would preferably be available remotely over the internet, or downloaded periodically to the allowed viewers' computers. The second option most likely means they would have to install a client, which I am not too fond of (I would have to ask them to install it, help configure it for different platforms, ...). I'm at the start, but I would like software that also allows export of the data (in case one day I want to migrate to a different solution).

    Folder sharing: I don't know if Windows folder sharing is reliable, robust and safe enough to share over the internet. Dropbox could be a software solution, but it's limited to 2 GB and requires the receivers to also install and configure software. And a folder is just a collection of files, so it would be harder to keep them organized, with text describing the pictures/videos.

    Email: I could email the content every now and then, but videos are almost always too big to go as attachments. This would also not provide a history of updates to anyone joining in recently.

    Are there other solutions to achieve this? NOTE: I also thought that software to run a local or remote blog could be an option, but we aren't allowed to ask those questions on Super User because one admin doesn't like cloud-computing questions. If you do think it's the best approach and can suggest a solution, do state it in the answers anyway!


  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello. I recently went to the Dell service center, and they tell me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit, and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind:

    1. Get a 1 TB external HDD.
    2. Make an image of the existing 500 GB HDD and store it on the external disk.
    3. Install the new HDD, then install a brand new copy of Windows and the imaging software on it.
    4. Using the same software I used to make the image, restore the old HDD's image onto the new one.

    However, I have some questions. First, is this possible? Second, I live in a country where piracy is a big issue, and I am sure the support executive who comes to change the HDD will have a pirated copy. But I have a genuine Windows 7 Pro license and don't want to lose it. Dell does not supply OS disks, so I can't install Windows on the new HDD myself! If I follow the steps above, which copy of Windows 7 will be retained: the one in the image (genuine) or the one on the new HDD (pirated)?

    I am ready to purchase good software for this task; my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.
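
    One free route, offered as a sketch rather than a recommendation: boot the laptop from a Linux live CD/USB and image the whole disk to the external drive with dd. Restoring that image onto the new disk carries the existing (genuine) Windows install over byte for byte, so no intermediate Windows copy is ever involved. Device names below are placeholders; double-check them with fdisk -l before running:

        # Booted into a live Linux environment: /dev/sda is the internal
        # disk, the external drive is mounted at /mnt/external (placeholders).
        dd if=/dev/sda of=/mnt/external/laptop.img bs=4M conv=noerror,sync

        # After the new disk is installed, boot the live environment again
        # and write the image back:
        dd if=/mnt/external/laptop.img of=/dev/sda bs=4M

    Since the drive is reportedly failing, GNU ddrescue is the more forgiving tool for the first step: it retries and maps around bad sectors instead of just padding them.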


  • Are there compact external USB audio interfaces which are better than on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice recognition software and dictates a lot of text using a headset. Now he has a new laptop, which only has a combined mic/headphone jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi, he said it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small things the size of a flash memory stick, with only one mic input and one headphone output, period.

    Now I am not sure whether these mini interfaces would produce sound any better than onboard audio. They all seem to come not from established audio interface manufacturers, but from electronics accessory manufacturers like Speedlink, or just no-name brands. Is there a compact audio interface with good A/D quality? (It is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like cinch in-/outputs etc.) And if there isn't, will the no-name sound-card sticks offer any advantage over a simple adapter for the onboard sound?


  • Reduce "Metafile" memory usage?

    - by Jay Conrod
    My work computer (Windows 7 64-bit) spends a lot of time swapping memory when I switch between programs. This surprises me, since I have 4 GB of RAM and the programs I use aren't particularly RAM-hungry (Outlook, Emacs, p4win, Firefox, various build tools). I downloaded RAMMap, and it shows over a gigabyte of memory used by "Metafile". From the Sysinternals blog:

        Metafile is part of the system cache and consists of NTFS metadata.
        NTFS metadata includes the MFT as well as the other various NTFS
        metadata files. ... In the MFT each file attribute record takes 1k
        and each file has at least one attribute record. Add to this the
        other NTFS metadata files and you can see why the Metafile category
        can grow quite large on servers with lots of files.

    So I understand what the "Metafile" data is... I work on large builds comprising hundreds of thousands of files (none are that big, but they add up to several gigabytes). My question is: how can I reduce the amount of memory used by "Metafile"? I'm not actively using all those files at once, so why does Windows need to keep the info in RAM? Restarting my machine every time I sync a new build is really annoying.
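
    The arithmetic in that quote lines up with the observation: at roughly 1 KB of MFT record per file, a few hundred thousand files account for a few hundred megabytes of Metafile on their own, before the other NTFS metadata is counted. A quick way to count the files in a build tree (a PowerShell sketch; the path is a placeholder):

        # Count files under the build root; multiply by ~1 KB of MFT record
        # each for a rough lower bound on the Metafile working set.
        (Get-ChildItem C:\build -Recurse | Where-Object { -not $_.PSIsContainer } | Measure-Object).Count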


  • Backup solution to back up terabytes and lots of static files on a Linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server?

    Note that the files are all different and almost never modified, and usage mostly means adding files: the data volume is today 3 TB, growing all the time at around +15 GB/day.

    Please do not reply rsync. Basic Unix tools are not enough: rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover, these are all file-based backups, which generate a lot of iowait just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that.

    We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. The engineers of R1Soft and the datacenter have spent more than 2 months playing a hot-potato game... and there is still no backup except regular rsync. We never tried the big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.


  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address.

    OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate, or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too; machine B will be coming from a known IP address and can be authenticated based on that.

    How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that could work in this situation? Ideally, it would also be able to handle multiple clients (e.g. share all four spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
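
    For what it's worth, here is a hedged sketch of the routed (tun) OpenVPN approach: run the VPN on a private transfer network, pin B's tunnel address from the server side, then route the public /32 at it. All of this assumes A is also set up to answer ARP for 8.8.8.123 on eth0 (e.g. via proxy ARP); the details below are illustrative, not tested config:

        # /etc/openvpn/server.conf on server A (sketch)
        dev tun
        topology subnet
        server 10.9.0.0 255.255.255.0        # private transfer network
        client-config-dir /etc/openvpn/ccd   # per-client overrides

        # /etc/openvpn/ccd/machineB -- pin B's tunnel address so no other
        # client can claim it:
        #   ifconfig-push 10.9.0.123 255.255.255.0

        # On A, route the public address to B over the tunnel:
        #   ip route add 8.8.8.123/32 via 10.9.0.123

    The per-client ccd file is also the mechanism that keeps additional clients from grabbing addresses they are not entitled to: each one gets its own pinned tunnel IP and its own routed /32.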


  • Someone from China wants to kill my entry bandwidth??

    - by yes123
    Hi guys. Someone from China, with two different IPs, is downloading the same big file from my server. Their IPs are 122.89.45.210 and 60.210.7.62. They request and download this file more than 20 times per minute. What can I do to prevent this? (I am on Gentoo with root access.) And WHY are they doing this to a site that doesn't have anything to do with China?

    ADD 1: Other IPs: 221.8.60.131, 124.67.47.56, 119.249.179.139, 60.9.0.176.

    ADD 2: The stupid thing is they are requesting only 1 single file, lol. Either they want that file removed (though I don't see why), or they are pretty stupid.

    ADD 3: The situation is getting worse. The IPs are spreading to other countries too (USA and Korea, if www.geobytes.com/iplocator.htm is right). And now they are requesting another file.

    ADD 4: It seems that after they realized I removed the file, they stopped attacking me. I will monitor the situation. They started again, after a sleep of 3-4 minutes, with the same file (lucky me). Hard to say why this is happening.
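
    With root on Gentoo, the immediate stopgap is to drop the offending sources at the firewall. A minimal sketch (addresses taken from the post; extend the list as new ones appear):

        # Drop all traffic from the offending source addresses.
        iptables -A INPUT -s 122.89.45.210 -j DROP
        iptables -A INPUT -s 60.210.7.62   -j DROP

    Since the addresses later spread across countries, per-IP blocking won't scale forever; rate-limiting requests for the one hot file at the web-server level is the more durable follow-up.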


  • Organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... Well, a big mess with no tags!

    Probably there's no single piece of software capable of organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile...

    Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least the iTunes ratings, because of the iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.


  • Apache server rewrite rules: how to avoid "implicitly forcing redirect (rc=302)"?

    - by Olivier Pons
    Hi! I've got a very annoying problem. Our web server handles 2 domains (more actually, but let's say 2 for a simpler example):

        pretassur.fr
        pretassuragentimmobilier.fr

    Here's what I want to do: change (whatever1).pretassuragentimmobilier.fr(/whatever2) to (whatever1).pretassur.fr(/whatever2)?theme=agentimmobilier. So here's my rewrite rule:

        RewriteCond %{SERVER_NAME} (([a-z]+\.)*)pretassuragentimmobilier.(fr|com)
        RewriteRule ^(.+) http://%1pretassur.fr$1 [E=THEME:pretassur_agent,QSA]
        # if THEME not empty, set it:
        RewriteCond %{ENV:THEME} ^(.+)$
        RewriteRule (.*) $1?IDP=%{ENV:THEME} [QSA]

    The big (huge) problem: let's have a look at the rewrite logs:

        [pretassurmandataireimmo.com] (5) setting env variable 'THEME' to 'pretassur_mandataire'
        [pretassurmandataireimmo.com] (2) implicitly forcing redirect (rc=302) with http://pretassur.fr/

    Aaaaaaaaarg! "Implicitly forcing redirect" = I don't want that! I want to rewrite internally to pretassur.fr, not make a real redirect! Now if you type http://pretassurmandataireimmo.com, it is redirected to http://pretassur.fr/?IDP=pretassur_mandataire (try it). I don't want that! I want to display the page http://pretassur.fr/?IDP=pretassur_mandataire but without touching the original host! Any idea? Thanks a lot!
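
    For context: mod_rewrite behaves this way by design. As soon as the substitution is an absolute URL whose hostname differs from the current virtual host, the rewrite is turned into an external redirect (or a proxy request, if the [P] flag and mod_proxy are used). One hedged way around it, assuming both domains can be served by the same virtual host, is to stop rewriting to an absolute URL and only attach the theme parameter, so the rewrite stays internal:

        # Sketch: one vhost answers for both names...
        #   ServerName  pretassur.fr
        #   ServerAlias *.pretassuragentimmobilier.fr *.pretassuragentimmobilier.com
        # ...and the rule rewrites the path only, so no redirect is forced:
        RewriteCond %{HTTP_HOST} (([a-z]+\.)*)pretassuragentimmobilier\.(fr|com) [NC]
        RewriteCond %{QUERY_STRING} !(^|&)IDP=
        RewriteRule ^(.*)$ $1?IDP=pretassur_agent [QSA,L]

    The QUERY_STRING guard prevents the rule from re-appending IDP on every pass through the rewrite engine.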


  • HP/Lenovo alternative to Buffalo iSCSI TeraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with direct-attached storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP).

    Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TeraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reluctant to go that way, as it's a bit too "entry level" for my liking. In particular, I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for".

    Therefore, can anyone recommend equivalents from the big boys, HP/Lenovo? I have searched high and low on the HP site and seen many options, but I am struggling to work out whether any single option is all the hardware I will need. Some options appear to need controllers separate from the disk enclosures, etc.

