Search Results

Search found 1797 results on 72 pages for 'bandwidth measuring'.

Page 49/72 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Network speed between a VM and another machine not residing on the same host is 11 MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine not residing on the same host is at most 11 MB/s.

    Facts:
    - ESXi 5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed
    - The VM uses the E1000 network driver
    - The physical box runs Windows Server 2008 R2
    - CrystalDiskMark says the drive on the physical box can read/write 100 MB/s
    - vCenter is another VM on the ESXi host
    - Both the VM and the physical box show a 1 Gbps link speed
    - Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far:

    Test 1: The VM runs FileZilla FTP Server (default settings, one user account created) and the physical box runs FileZilla FTP Client (default settings). Uploading a big file from the physical box to the FTP server runs at ~11 MB/s (bad), as observed by Windows Task Manager on both machines; downloading the same file is still ~11 MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS and the VM runs ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS. Transfer speed (Windows Task Manager on both machines): ~11 MB/s (bad). Could it be a switch performance issue?

    Test 3: The physical box runs the vSphere Client. Opening Summary > Storage > datastore > Browse Datastore... and uploading a file to the datastore runs at ~26-36 MB/s (good), as observed on the physical box. Could it be a VM-specific issue?

    Test 4: Installed NTttcp on another VM on the same ESXi server and measured network performance between VMs on the same host with NTttcp: ~90-120 MB/s (excellent).

    Test 5: I have another ESXi server on the same site, connected to the same datastore and the same switch. The two ESXi servers each have two NICs: one goes to the switch, the other goes directly to the other ESXi server. I vMotioned one of the test VMs to the other ESXi host and measured network performance between VMs on different ESXi servers with NTttcp: ~11 MB/s (bad).

    While I'm aware of these threads - ESXi 4.1 slow file transfer, ESXi 5 network performance is slow, Debian Etch and ESXi slow network speeds, VMware ESXi slow file copy to guest - they did not help (or I must have missed something).
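
    For anyone wanting a quick sanity check of raw TCP throughput in the spirit of NTttcp, a minimal sketch along these lines can help rule the disk subsystem in or out, since nothing in it touches storage. This is only an illustration; the port, buffer size and duration are arbitrary choices, not values taken from the setup above.

      # Minimal TCP throughput probe (sketch). Run receiver() on one machine
      # and sender("receiver-ip") on the other; port/buffer/duration are arbitrary.
      import socket
      import time

      CHUNK = 64 * 1024      # bytes per send/recv call
      DURATION = 10          # seconds the sender transmits

      def receiver(port=5001):
          """Accept one connection, count received bytes, print MB/s."""
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("", port))
          srv.listen(1)
          conn, _addr = srv.accept()
          total, start = 0, time.time()
          while True:
              data = conn.recv(CHUNK)
              if not data:
                  break
              total += len(data)
          elapsed = time.time() - start
          print("received %.1f MB in %.1f s -> %.1f MB/s"
                % (total / 1e6, elapsed, total / 1e6 / elapsed))

      def sender(host, port=5001):
          """Stream zero-filled buffers to the receiver for DURATION seconds."""
          payload = b"\0" * CHUNK
          s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          s.connect((host, port))
          deadline = time.time() + DURATION
          while time.time() < deadline:
              s.sendall(payload)
          s.close()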

    Read the article

  • Download the Anime Angels Theme for Windows 7

    - by Asian Angel
    Do you have a passion for all things anime? Then you will definitely want to have a look at the Anime Angels Theme for Windows 7. This cute theme will give your desktop that extra bit of fun and spunk to help bring a smile to your face. The theme comes with 21 Hi-Res wallpapers of the cutest Anime Angels from around the web, a wonderful set of anime icons, and great system sounds to round out the perfect anime theme. Anime Angels Theme For Windows (Anime Themes) [VikiTech]

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    The Brinks Bolivia eSAC (Customer Service) system allows registering all the different kinds of contacts for a customer, in addition to maintaining an updated status for each service or customer request, so that the company has accurate information and can perform the appropriate procedures for all applications. The system was originally developed in VB6, and since web access was essential it was offered via Citrix. Because the application's performance was a critical issue, and because the company needed to offer the system without client-specific installations, it looked for a solution that would remove those drawbacks of using Citrix. The search for a solution that would let it offer the eSAC system over the web, with no client installations and with sufficient performance even on limited bandwidth, led Brinks to the decision to migrate its VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to web environment and even add new features in less time which allows us to offer it over a standard web browser with better performance and no installations as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • My WiFi gets deauthenticated every few minutes or seconds (Reason: 7)

    - by dan
    My Wifi on my new Thinkpad W520 running Natty keeps dropping out and coming back on. Output from dmesg below. Any advice?

      [30493.687552] wlan0: authenticate with e0:91:f5:ef:7b:b2 (try 1)
      [30493.689127] wlan0: authenticated
      [30493.689144] wlan0: associate with e0:91:f5:ef:7b:b2 (try 1)
      [30493.693592] wlan0: RX AssocResp from e0:91:f5:ef:7b:b2 (capab=0x411 status=0 aid=4)
      [30493.693595] wlan0: associated
      [31631.172868] wlan0: deauthenticated from e0:91:f5:ef:7b:b2 (Reason: 7)
      [31631.211847] cfg80211: All devices are disconnected, going to restore regulatory settings
      [31631.211868] cfg80211: Restoring regulatory settings
      [31631.211873] cfg80211: Calling CRDA to update world regulatory domain
      [31631.215037] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
      [31631.215042] cfg80211: World regulatory domain updated:
      [31631.215044] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [31631.215046] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [31631.215049] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [31631.215051] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [31631.215053] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [31631.215055] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [31632.289638] wlan0: authenticate with e0:91:f5:ef:7b:b2 (try 1)
      [31632.291262] wlan0: authenticated
      [31632.291276] wlan0: associate with e0:91:f5:ef:7b:b2 (try 1)
      [31632.295119] wlan0: RX AssocResp from e0:91:f5:ef:7b:b2 (capab=0x411 status=0 aid=4)
      [31632.295123] wlan0: associated
      [31886.234836] wlan0: deauthenticated from e0:91:f5:ef:7b:b2 (Reason: 7)
      [31886.306735] cfg80211: All devices are disconnected, going to restore regulatory settings
      [31886.306740] cfg80211: Restoring regulatory settings
      [31886.306744] cfg80211: Calling CRDA to update world regulatory domain
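
    If it helps to quantify how often this happens before and after any driver or power-management change, a small log-scraping sketch like the one below can count the deauthentication events. The log path and the "Reason: N" message format are assumptions based on the dmesg output above.

      # Count wlan0 deauthentication events per reason code (sketch).
      import re
      from collections import Counter

      PATTERN = re.compile(r"wlan0: deauthenticated from \S+ \(Reason: (\d+)\)")

      def count_deauths(log_path="/var/log/kern.log"):
          reasons = Counter()
          with open(log_path, errors="replace") as log:
              for line in log:
                  match = PATTERN.search(line)
                  if match:
                      reasons[match.group(1)] += 1
          return reasons

      if __name__ == "__main__":
          for reason, count in sorted(count_deauths().items()):
              print("Reason %s: %d deauthentications" % (reason, count))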

    Read the article

  • Use two networks at the same time?

    - by Christopher
    I want to use Ubuntu 10.10 Server in a classroom, a computer lab whose bandwidth is provided by a local cable ISP. That's no problem, but the school network has an IP printer that I want to use, and I cannot reach the printer through the cable Internet. However, I have two network cards, so how is it possible to use both networks at once? eth0 (static 192.168.1.254) is plugged into a four-port router, 192.168.1.1. On the public side of the four-port router is the Internet provided by the cable company. I also have the classroom workstations plugged into a switch, and the switch is plugged into the four-port router, so the whole classroom is wired into the cable Internet. Could the other NIC, eth1, be plugged into an Ethernet jack in the wall? It would use the school network, and I might receive by DHCP an IP address like 10.140.10.100, with the printer on maybe 10.120.50.10. I was thinking about installing the printer on the server so that it could be shared with the workstations. But how does this work? Can I just plug eth1 into the school network and access both LANs? Thanks for any insight.

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me to either perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    - reading 5 private entity variables (fetched from the datastore),
    - fetching as many as 20 static variables (from the datastore or persisted in server memory), and
    - writing 5 entity variables.
    Clients of the game would authenticate and set state directly against GAE as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore; this would be more bandwidth intensive for the datastore.
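
    To make the "update on GAE" option concrete, a minimal sketch using the classic Python runtime (webapp2 + ndb) is shown below. The entity, its properties and the update rule are placeholders, and a real job over 10,000 entities would batch the work with query cursors or the task queue rather than a single fetch.

      # Sketch of a cron-driven world update on App Engine (classic Python runtime).
      import webapp2
      from google.appengine.ext import ndb

      class WorldEntity(ndb.Model):
          # stand-ins for the five private entity variables mentioned above
          hp = ndb.IntegerProperty(default=100)
          gold = ndb.IntegerProperty(default=0)
          x = ndb.IntegerProperty(default=0)
          y = ndb.IntegerProperty(default=0)
          tick = ndb.IntegerProperty(default=0)

      def advance(entity):
          """Apply one turn of world simulation to a single entity (placeholder rule)."""
          entity.tick += 1
          entity.gold += 1
          return entity

      class UpdateWorldHandler(webapp2.RequestHandler):
          def get(self):
              # cron.yaml would point at this URL with "schedule: every 1 minutes"
              entities = WorldEntity.query().fetch()
              ndb.put_multi([advance(e) for e in entities])
              self.response.write('updated %d entities' % len(entities))

      app = webapp2.WSGIApplication([('/tasks/update_world', UpdateWorldHandler)])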

    Read the article

  • Frequent disconnects using wlan AR9285

    - by John Neil
    I'm getting a large number of disconnects from my AR9285 wireless LAN device since I switched to Oneiric server (I did not see these happen with Oneiric desktop). Here is the syslog snippet:

      Oct 17 09:43:17 weather kernel: [ 1537.329138] wlan0: deauthenticated from 00:12:17:7a:8e:42 (Reason: 7)
      Oct 17 09:43:17 weather kernel: [ 1537.340409] cfg80211: All devices are disconnected, going to restore regulatory settings
      Oct 17 09:43:17 weather kernel: [ 1537.340423] cfg80211: Restoring regulatory settings
      Oct 17 09:43:17 weather kernel: [ 1537.340435] cfg80211: Calling CRDA to update world regulatory domain
      Oct 17 09:43:17 weather kernel: [ 1537.348571] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
      Oct 17 09:43:17 weather kernel: [ 1537.348581] cfg80211: World regulatory domain updated:
      Oct 17 09:43:17 weather kernel: [ 1537.348586] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      Oct 17 09:43:17 weather kernel: [ 1537.348594] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348600] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348607] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348613] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      Oct 17 09:43:17 weather kernel: [ 1537.348620] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Here is the relevant lspci output:

      # lspci | grep Atheros
      02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

    I have done quite a bit of searching and saw discussions for previous versions of Ubuntu that recommended installing the linux-backports-modules package. However, this does not appear to be available for Oneiric (just the headers are listed as a package). Any advice on how to achieve a stable wireless connection for this server? Its location rules out using a wired connection.

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5 minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading... the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:

      ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
      ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly highly recommend that you use solid state storage for your databases with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code
    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed
    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

      // With the SDK
      public class MyData1 : TableServiceEntity
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

      // With the Enzo Azure API
      public class MyData2 : BaseAzureTable
      {
          public string Message { get; set; }
          public string Level { get; set; }
          public string Severity { get; set; }
      }

    Now that the classes representing an Azure Table entity are defined, let's review what a method fetching all the entities from an Azure Table would look like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

      // With the Azure SDK
      public List<MyData1> FetchAllEntities()
      {
          CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
          CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
          TableServiceContext serviceContext = tableClient.GetDataServiceContext();
          CloudTableQuery<MyData1> partitionQuery =
              (from e in serviceContext.CreateQuery<MyData1>(_tableName)
               select new MyData1()
               {
                   PartitionKey = e.PartitionKey,
                   RowKey = e.RowKey,
                   Timestamp = e.Timestamp,
                   Message = e.Message,
                   Level = e.Level,
                   Severity = e.Severity
               }).AsTableServiceQuery<MyData1>();
          return partitionQuery.ToList();
      }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

      // With the Enzo Azure API
      public List<MyData2> FetchAllEntities()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
          return res;
      }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies
    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

      public List<MyData2> FetchAllEntitiesGUID()
      {
          AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
          List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
          return res;
      }

    Faster Results With Sequential Fetch Methods
    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1 KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies
    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1 KB per entity), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy; the fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods
    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero
    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • USB seems to pause system

    - by Marco van de Voort
    I have an application that does some simple measuring, for which it polls a few 100 KB several times a second (8-25 times). The behaviour is not really dependent on the chipset (it happens on several mobos, Intel 965 to P55) or the OS (XP SP3 and Win 7). The make of the USB keyboard doesn't seem to matter either. I notice that sometimes when a USB keyboard is plugged in, the system pauses for say 500-1000 ms (about 900-1000 ms on disconnect, and 400-500 ms on the subsequent connect). It also happens for other USB devices (most notably mice and mass-storage devices), but only the first time such a device is connected to an installation. This disrupts the measurement and I really would like to get rid of it. I have already tried to disable as much as possible (powersave, teletubby mode (*), etc.), and while this helped with the non-USB related disruptions of the measurement, it doesn't help with the USB related ones. (*) FYI, turning off themes (to classic/non-Aero, respectively) and turning off effects in the system settings solved problems that occurred when minimizing/maximizing the app. Any pointers to look into? I'm a bit stuck with this.

    Read the article

  • Sharing password-protected videos on social media

    - by PaulJ
    We are developing a site where users will be able to watch and download videos that they've recorded of themselves at a public event. The videos will be password protected, and will be available only to users who have paid for them at the event... but on the other hand, we also want users to share those videos on social media, since they will be attractive publicity for our events. Having people log into our site with their password, download the video and then re-upload it to YouTube/Facebook will be too cumbersome, and I suspect that few users will be willing to do that. So the obvious alternative is to have one of those convenient "share" buttons, but the problems with that approach are that:
    - The video will be physically hosted (and linked to) on our site. What happens if those videos go viral and our bandwidth cost explodes?
    - The video is password protected.
    The solution I've thought of for this is: upload the user's video to our password-protected site and to YouTube at the same time, as an unlisted video. The user can access our site with his password and download his video (to watch on his TV or whatever). If the user hits the "share" button, we show him the YouTube link... and we turn the video into a listed one. This seems in line with the ideas in Using YouTube as a CDN, and there didn't seem to be any objections in that question. I'm posting this just to confirm that my idea doesn't violate any YouTube TOS, and also to see if it is a good one or whether there might be better alternatives.
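
    If the unlisted-then-listed flow is adopted, the "share" handler boils down to one status update against the YouTube Data API v3 (making the video public, i.e. listed). A hedged sketch with the google-api-python-client is below; OAuth credential setup is omitted and video_id is assumed to come from our own upload records.

      # Flip an unlisted YouTube video to public when the user hits "share" (sketch).
      from googleapiclient.discovery import build

      def publish_on_share(credentials, video_id):
          """Change the video's privacy status from unlisted to public."""
          youtube = build("youtube", "v3", credentials=credentials)
          request = youtube.videos().update(
              part="status",
              body={"id": video_id, "status": {"privacyStatus": "public"}},
          )
          return request.execute()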

    Read the article

  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server:
    - I could send a list of the files I want to download to my server, and the server zips the files and simply returns this compressed file to the application.
    - The updater sends a request for each separate file to the server, which simply returns the file.
    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files. I expect at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
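
    As a rough sketch of the first approach, the server can build one compressed archive in memory for exactly the files the updater asked for. The sketch below uses Python's standard library purely to show the shape of the idea; in the Java updater the same step maps to java.util.zip, and the directory layout is an assumption.

      # Build a single in-memory ZIP for the requested release files (sketch).
      import io
      import os
      import zipfile

      def build_update_archive(requested_files, release_dir="releases/latest"):
          """Return the requested files as one compressed archive (bytes)."""
          buffer = io.BytesIO()
          with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
              for name in requested_files:
                  archive.write(os.path.join(release_dir, name), arcname=name)
          return buffer.getvalue()   # hand these bytes to whatever serves the HTTP response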

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault; hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay for what I need, I thought I could get a good answer here. I have a custom-made forum in Django (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum, or phpBB-type software). I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200 GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256 MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternative to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances. Thanks in advance.

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following: <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/> Some more background for SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to a tablet width, shows this particular file at its widest. It is found under the following URL:
      http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg
    If the viewport is increased to desktop width, then a smaller image is served in line with the design; see this URL:
      http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg
    If I change the viewport to be about half-way between those two, then the image's URL is:
      http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg
    In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but here are two problems I see for SEO:
    - The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    - You are serving a vast array of different image files to different people; some may link to one, others to another size. Then there's the question of what different search engine crawlers will see.
    Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see those as concerns?

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action. I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure if that is a wise approach due to the nature of "deny all, allow only a few". My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but I'm assuming the idea would be the same. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to take accessibility into account, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. I fully understand the need not to depend on whitelisting alone to keep the site away from harm; other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.
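
    For what it's worth, the whitelist check itself is tiny regardless of the web server; a hedged sketch as WSGI middleware is below. The allowed substrings are examples only, not a vetted browser/bot list, and User-Agent strings are trivially spoofed, which is why the other protections mentioned above still matter.

      # User-agent whitelist as WSGI middleware (sketch; allowed list is illustrative).
      ALLOWED_AGENTS = ("Firefox", "Chrome", "Safari", "Opera", "Googlebot", "bingbot")

      class WhitelistMiddleware(object):
          def __init__(self, app, allowed=ALLOWED_AGENTS):
              self.app = app
              self.allowed = allowed

          def __call__(self, environ, start_response):
              agent = environ.get("HTTP_USER_AGENT", "")
              if not any(token in agent for token in self.allowed):
                  start_response("403 Forbidden", [("Content-Type", "text/plain")])
                  return [b"Forbidden"]
              return self.app(environ, start_response)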

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter.

    CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers).

    REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • Packing up files on my machine, sending it to a server, and unpacking it

    - by MxyL
    I am implementing a feature in my application that sends all files in a specified folder to a server. I have the basic FTP transaction set up using Apache Commons FTPClient: it sets up a connection and transfers a file from one place to another, so I can simply loop over the directory and use this connection to transfer all the files. However, this could be better. Rather than transferring each file one by one, it makes more sense to pack them up in a compressed archive and then send the whole file at once; this saves time and bandwidth, since these are just text files and they compress nicely. So I would like to add automatic archive packing and unpacking. This is the workflow I have planned out, using zip compression:
    1. Zip all files in the folder
    2. Send the file over
    3. Unzip the files at their destination
    Steps 1 and 2 are easy since the files are on the local machine, but I'm not sure how to accomplish the last step, when the files are now on a remote server. What are my options? I have control over what I can put and run on the server. Perhaps it is not necessary to do the packing/unpacking myself?
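
    The workflow itself is language-agnostic; a hedged sketch of all three steps with Python's standard library (zipfile + ftplib) is below, with placeholder host and credentials. In the Java application the same steps map to java.util.zip and the Commons FTPClient already in use, and step 3 still requires something running on the remote machine (a cron job, an SSH command, or a small service) to invoke the extraction.

      # Pack a folder, upload it over FTP, and extract it (sketch; host/credentials are placeholders).
      import os
      import zipfile
      from ftplib import FTP

      def pack_folder(folder, archive_path="payload.zip"):
          with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
              for root, _dirs, files in os.walk(folder):
                  for name in files:
                      full = os.path.join(root, name)
                      archive.write(full, arcname=os.path.relpath(full, folder))
          return archive_path

      def upload(archive_path, host="ftp.example.com", user="user", password="secret"):
          with FTP(host) as ftp:
              ftp.login(user, password)
              with open(archive_path, "rb") as fh:
                  ftp.storbinary("STOR " + os.path.basename(archive_path), fh)

      def extract_archive(archive_path, target_dir="."):
          # This function is what would run on the remote server after the upload.
          with zipfile.ZipFile(archive_path) as archive:
              archive.extractall(target_dir)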

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce the availability of Oracle Solaris 11.1. Download Solaris 11.1, order the Solaris 11.1 media kit, or, for existing customers, quickly and simply update using the network-based repository. Highlights include:
    - 8x faster database startup and shutdown, and online resizing of the database SGA, with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    - Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    - Expanded support for Software Defined Networks (SDN) with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    - 4x faster Solaris Zone updates with parallel operations that shorten maintenance windows
    - A new built-in memory predictor that monitors application memory use and provides optimized memory page sizes and resource location to speed overall application performance
    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update; now is the time to encourage them to install Oracle Solaris 11.1. See also: the Oracle Solaris 11.1 Data Sheet, What's New in Oracle Solaris 11.1, the Oracle Solaris 11.1 FAQs, and the Oracle Solaris 11.1 Customer Presentation. Oracle Solaris 11.1 is recommended for all SPARC T4 systems and will soon be available preinstalled.

    Read the article

  • Free Webinar: A faster, cheaper, better IT Department with Azure

    - by Herve Roggero
    Join me for a free webinar on Wednesday, October 17th at 1:30 PM Eastern Time. I will discuss the benefits of cloud computing with the Azure platform. There isn't a company out there that would say "No" to reduced IT costs and unlimited scaling bandwidth. This webinar will focus on the specific benefits of the Microsoft Azure cloud platform and will convince you of the sound business rationale behind moving to the cloud. From Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), Azure supports quick deployments, virtual machines, native SQL databases and much more. Topics that will be discussed:
    - Why use Azure for your cloud computing needs
    - IaaS and PaaS offerings
    - Differing project approaches to cloud computing
    - How Azure's agility and reduced costs lead to better solutions
    Attendees of this webinar will also be eligible to receive a free two-hour consultation, which can include:
    - Review of your cloud strategy
    - Cloud roadmap review
    - Review of data-mart strategies
    - Review of mobility strategies
    Click Here to Register Now. About Herve Roggero: Hervé Roggero, Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Hervé's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Hervé holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Hervé is the co-author of "PRO SQL Azure" from Apress. For more information, visit www.bluesyntax.net.

    Read the article

  • Any examples of fair mmo games with quick completion

    - by Keith Player
    I'm looking for some example games for inspiration that allow from 10 up to a large number of players at a time and can be completed in 10 to 30 minutes. I'm looking for something that would have extremely low bandwidth and not be dependent on chance or luck (i.e. one player can't gain an unfair advantage because the computer put them in a better position). I realized on the way home that more clarification might be helpful: I'm looking to develop a pay-to-play competition that would allow a large number of players to compete in a relatively short period of time. One way would be to have an MMO that can be completed in 30 minutes; another way would be to have 10-person games that finish in under 5 minutes and then have the winners compete against each other until a winner is decided. I'm interested in any genre that would make for a fun/interesting game that doesn't depend on luck, so all players should have the same choice/availability of activities/resources and follow the same rules. Some possible games that could be modified into what I want would be bztanks (too easy to create a bot), Diplomacy (takes too long), Risk, or some chess-like game. I was just wondering if there are other game types besides the ones I have been considering.

    Read the article

  • Upgrading to 12.10 on an external hard drive

    - by Tom Childers
    I did some googling on this and didn't find anything specific for my situation. I currently have 12.04 installed on an external USB hard drive, and it's working great. I want to upgrade it to 12.10. My bandwidth is very limited, so I have a friend who will download 12.10 for me and put it on a flash stick; then I can upgrade without having to do the download myself. Which particular version of the 12.10 download file(s) should I get? Are there alternate 12.10 downloads that have all the packages? How do I set it up so that when I upgrade 12.04 I can specify that it look in some local repository for the 12.10 files? Can I just dump the 12.10 files in some local directory, or do I have to go through some complex commands to create a local repository? I'm pretty new to Linux, so a long process of complex terminal commands will probably be a show stopper for me. Remember that my 12.04 install resides on an external hard drive, and I have a laptop with multiple USB ports. Thanks! Advait

    Read the article

  • Web hosting company basically forces me to use their domain name [closed]

    - by Jinx
    I've recently stumbled upon an unusual problem with one of the hosting companies, giga-international.com. Anyway, I ordered a com.hr domain from a Croatian domain name registration company, and my client insisted on using this hosting provider because a couple of his friends are already hosted with them. I thought something was fishy when the first result on Google for Giga International was a little forum rant instead of their webpage. When I was checking their services they listed many features: space available, bandwidth, etc. I just wanted to check how much RAM I get for my PHP scripts, so I emailed them, and they told me that was a company secret. Seriously? Anyway, since my client still insisted on hosting with them, I bought their Webspace package. During registration I had to choose a free domain name, because I couldn't advance the registration without it. Nowhere was it said, not even in the general terms and conditions, that I wouldn't be able to change that domain name, at least not for double the price of a domain name per year. They said I can either move my domain name over to them (and pay them for domain registration), or pay them 1 euro per month for managing a DNS entry. On every previous hosting solution I was able to manage my domain names just by pointing my domain to their name servers, and this is something completely new and absurd for me. They also said that the usual approach is not possible because of security and hardware limitations. I'd like to know what you think about this case, and whether and where I should report it. In short: they forced me to register a free domain name which doesn't suit my needs in order to register for their webspace package, and they refuse to change the domain name for my account until I either transfer my domain to them or pay them for DNS management, which costs double the price of the domain name per year.

    Read the article

  • Web Hosting Checklist

    - by Chris
    I am a web developer who is starting to look into hosting his own website. I would like to showcase my programming skills (PHP, MySQL, C#, WordPress). I am OK with my knowledge of the languages, but the actual hosting side is where my knowledge starts to get a little shaky. I know the basics (bandwidth, sub-domains, rewrite rules), but I would love your input to help me formulate a checklist of things to look out for in a web-hosting service. Also, I was wondering if there are any reliable hosting providers who give you the option to host both C# code-behinds and PHP code, as I would like to have two versions of my site, one in C# and one in PHP; the hope is that if I need to look for another job, this website will help me show possible employers my server-side knowledge. I hope this is enough info; I did some research online but found a bunch of useless articles, and I've always had luck on the Stack Exchange sites, so hopefully you can help me. Thanks a lot.

    Read the article

  • What is involved with writing a lobby server?

    - by Kira
    So I'm writing a chess matchmaking system based on a lobby view with gaming rooms, general chat, etc. So far I have a working prototype, but I have big doubts regarding some things I did with the server. Writing a gaming lobby server is a new programming experience to me, and so I don't have a clear or precise programming model for it. I also couldn't find a paper that describes how it should work. I ordered "Java Network Programming, 3rd edition" from Amazon and am still waiting for shipment; hopefully I'll find some useful examples/information in this book. Meanwhile, I'd like to gather your opinions and see how you would handle some things so I can learn how to write a server correctly. Here are a few questions off the top of my head (maybe more will come):
    - First, let's define what a server does. Its primary functionality is to hold TCP connections with clients, listen to the events they generate and dispatch them to the other players. But is there more to it than that?
    - Should I use one thread per client? If so, 300 clients = 300 threads. Isn't that too much? What hardware is needed to support that? And how much bandwidth does a lobby consume then, approximately?
    - What kind of data structure should be used to hold the clients' sockets? How do you protect it from concurrent modification (e.g. a player enters or exits the lobby) when iterating through it to dispatch an event, without hurting throughput? Is ConcurrentHashMap the correct answer here, or are there some techniques I should know?
    - When a user enters the lobby, what mechanism would you use to transfer the state of the lobby to him? And while this is happening, where do the other events bubble up?
    Screenshot: http://imageshack.us/photo/my-images/695/sansrewyh.png/
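
    As one data point on the "one thread per client" question: an event-loop design keeps every connection on a single thread, which also sidesteps the concurrent-modification problem because the client map is only touched from that loop. The hedged sketch below is in Python (asyncio) rather than the Java stack being discussed, and the line-per-event protocol and in-memory lobby state are placeholders; it only illustrates holding many sockets, broadcasting events, and sending the current lobby state to a newcomer.

      # Single-threaded lobby sketch: one event loop, a plain dict of writers,
      # newline-delimited text events (all placeholder protocol choices).
      import asyncio

      clients = {}       # peername -> StreamWriter
      lobby_state = []   # events replayed to newcomers (chat lines, room list, ...)

      async def broadcast(message):
          writers = list(clients.values())
          for writer in writers:
              writer.write(message.encode() + b"\n")
          await asyncio.gather(*(w.drain() for w in writers))

      async def handle_client(reader, writer):
          peer = writer.get_extra_info("peername")
          clients[peer] = writer
          for line in lobby_state:          # transfer current lobby state first
              writer.write(line.encode() + b"\n")
          await writer.drain()
          try:
              while True:
                  data = await reader.readline()
                  if not data:              # client disconnected
                      break
                  event = data.decode().rstrip()
                  lobby_state.append(event)
                  await broadcast(event)
          finally:
              del clients[peer]
              writer.close()

      async def main(port=5000):
          server = await asyncio.start_server(handle_client, "0.0.0.0", port)
          async with server:
              await server.serve_forever()

      if __name__ == "__main__":
          asyncio.run(main())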

    Read the article

  • Solaris 11 VNC Server is "blurry" or "smeared"

    - by user12620111
    I've been annoyed by the quality of the image displayed by my VNC viewer when I visit a Solaris 11 VNC server. How should I describe the image? Blurry? Grainy? Smeared? Low resolution? Compressed? Badly encoded? That is what I had gotten used to seeing on Solaris 11. This is not a problem for me when I view Solaris 10 VNC servers. I've finally taken the time to investigate, and the solution is simple: on the VNC client, don't allow "Tight" encoding. My VNC viewer will negotiate Tight encoding if it is available. When negotiating with the Solaris 10 VNC server, Tight is not a supported option, so the Solaris 10 server and my client will agree on ZRLE. Now that I have disabled Tight encoding on my VNC client, the Solaris 11 VNC server looks much better. How should I describe the display when my VNC client is forced to negotiate ZRLE encoding with the Solaris 11 VNC server? Crisp? Clear? Higher resolution? Using a lossless compression algorithm? When I'm on a low-bandwidth connection, I may re-enable Tight compression on my laptop. In the meantime, the ZRLE compression is sufficient for a coast-to-coast desktop, through the corporate firewall, encrypted over VPN, through my ISP and onto my laptop. YMMV.

    Read the article
