Search Results

Search found 1018 results on 41 pages for 'galaxy s3'.

Page 7/41 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Samsung Galaxy S2 ROM compilation error?

    - by Bashir
    I'm running Ubuntu 12.04 x64 and have installed all the tools (toolchain arm-2010q1 and the Samsung Galaxy S2 source, as given here) to compile a ROM. When I run the make command I get this error:

        kernel/built-in.o: In function `cpufreq_table_show':
        cpu_pm.c:(.text+0x39b64): undefined reference to `cpufreq_frequency_get_table'
        kernel/built-in.o: In function `cpufreq_max_limit_store':
        cpu_pm.c:(.text+0x39cd4): undefined reference to `omap_cpufreq_max_limit'
        cpu_pm.c:(.text+0x39d04): undefined reference to `omap_cpufreq_max_limit_free'
        cpu_pm.c:(.text+0x39d24): undefined reference to `omap_cpufreq_max_limit_free'
        kernel/built-in.o: In function `cpufreq_min_limit_store':
        cpu_pm.c:(.text+0x39dd4): undefined reference to `omap_cpufreq_min_limit'
        cpu_pm.c:(.text+0x39e04): undefined reference to `omap_cpufreq_min_limit_free'
        cpu_pm.c:(.text+0x39e24): undefined reference to `omap_cpufreq_min_limit_free'
        make: *** [.tmp_vmlinux1] Error 1

    Read the article

  • Can an S3 mount be used as the document root for Apache?

    - by Hesse
    Has anyone been successful in having their DocumentRoot reside on an S3 mount (using s3fs)? I currently have a mounted bucket at /mnt/s3. I can read and write files to it with no problem. In my httpd.conf I have DocumentRoot "/mnt/s3". When I restart Apache I get the error "DocumentRoot must be a directory". Has anyone tried something similar? My goal is to have a shared storage space so my nodes can scale easily and access the same document root. Thanks

    Read the article

  • Any need to make backup of data on Amazon S3?

    - by Chrille
    I'm hosting 200 GB of product images at S3 (this is my primary file host). Do I need to back that data up somewhere else, or is S3 safe as it is? I have been experimenting with mounting the S3 bucket to an EC2 instance and then making a nightly rsync backup. The problem is that it's about 3 million files, so it takes a long time to generate the list of differences rsync needs. The backup actually takes about 3 days to complete. Any ideas on how to do this better? (If it's even necessary?)
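    One commonly suggested alternative to rsync over a mount is a server-side bucket-to-bucket copy through the S3 API, so the object data never leaves Amazon. A minimal sketch using the boto3 SDK (a later Python SDK than the tools current when this question was asked; the backup bucket name is a placeholder):

        import boto3

        s3 = boto3.client("s3")
        src, dst = "my-product-images", "my-product-images-backup"  # placeholder bucket names

        # Page through the source bucket and issue server-side copies;
        # CopySource keeps the transfer inside S3 instead of downloading and re-uploading.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=src):
            for obj in page.get("Contents", []):
                s3.copy_object(
                    Bucket=dst,
                    Key=obj["Key"],
                    CopySource={"Bucket": src, "Key": obj["Key"]},
                )

    With 3 million objects this is still 3 million API calls, so in practice it would be parallelised or restricted to keys changed since the last run.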

    Read the article

  • How to configure S3 or DNS to handle incomplete name (sans www) for web site?

    - by user193116
    I have set up a bucket called "www.mydomainname.com" to host my website and have configured a CNAME so that "www.mydomainname.com" points to my endpoint http://www.mydomainname.com.s3-website-us-east-1.amazonaws.com/. It works: people who type the full URL "www.mydomainname.com" can see my index page. But most people are in the habit of typing an incomplete domain name -- they just type "mydomainname.com" and their browser fails to find my site. Is there a way to configure the CNAME or the S3 bucket so that typing "mydomainname.com" takes them to my S3 website? (I am using Network Solutions as my DNS provider.)
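    One approach Amazon later documented is to create a second bucket named for the bare domain and configure it to redirect every request to the www bucket; the bare-domain DNS record then has to point at that bucket's website endpoint, which not every DNS provider supports at the zone apex. A rough sketch of the redirect configuration using the boto3 SDK (a later SDK than the question's era; the bucket and host names below are placeholders):

        import boto3

        s3 = boto3.client("s3")

        # Bucket named after the bare domain; it serves no content of its own,
        # it only redirects every request to the www site.
        s3.put_bucket_website(
            Bucket="mydomainname.com",
            WebsiteConfiguration={
                "RedirectAllRequestsTo": {
                    "HostName": "www.mydomainname.com",
                    "Protocol": "http",
                }
            },
        )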

    Read the article

  • C# code to GZip and upload a string to Amazon S3

    - by BigJoe714
    Hello. I currently use the following code to retrieve and decompress string data from Amazon S3 in C#:

        GetObjectRequest getObjectRequest = new GetObjectRequest().WithBucketName(bucketName).WithKey(key);
        using (S3Response getObjectResponse = client.GetObject(getObjectRequest))
        {
            using (Stream s = getObjectResponse.ResponseStream)
            {
                using (GZipStream gzipStream = new GZipStream(s, CompressionMode.Decompress))
                {
                    StreamReader Reader = new StreamReader(gzipStream, Encoding.Default);
                    string Html = Reader.ReadToEnd();
                    parseFile(Html);
                }
            }
        }

    I want to reverse this code so that I can compress and upload string data to S3 without writing it to disk. I tried the following, but I am getting an exception:

        using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(AWSAccessKeyID, AWSSecretAccessKeyID))
        {
            string awsPath = AWSS3PrefixPath + "/" + keyName + ".htm.gz";
            byte[] buffer = Encoding.UTF8.GetBytes(content);
            using (MemoryStream ms = new MemoryStream())
            {
                using (GZipStream zip = new GZipStream(ms, CompressionMode.Compress))
                {
                    zip.Write(buffer, 0, buffer.Length);
                    PutObjectRequest request = new PutObjectRequest();
                    request.InputStream = ms;
                    request.Key = awsPath;
                    request.BucketName = AWSS3BuckenName;
                    using (S3Response putResponse = client.PutObject(request))
                    {
                        //process response
                    }
                }
            }
        }

    The exception I am getting is: Cannot access a closed Stream. What am I doing wrong?
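    A likely cause is that the compressed data is only complete once the GZipStream is closed, and disposing the GZipStream also closes the underlying MemoryStream by default, so the upload sees an incomplete or already-closed stream. The general pattern -- finish the compression, rewind the buffer, then upload -- looks like this as a Python/boto3 sketch (bucket and key names are placeholders):

        import gzip
        import io
        import boto3

        s3 = boto3.client("s3")
        content = "<html>example page</html>"  # placeholder for the string to store

        buf = io.BytesIO()
        # Close the gzip wrapper BEFORE uploading so the compressed stream is
        # complete; GzipFile does not close the BytesIO it wraps.
        with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
            gz.write(content.encode("utf-8"))
        buf.seek(0)  # rewind to the start of the compressed data

        s3.put_object(Bucket="my-bucket", Key="page.htm.gz", Body=buf)

    In C# the equivalent would be constructing the GZipStream with the leaveOpen overload, closing it, resetting ms.Position to 0, and only then calling PutObject.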

    Read the article

  • Adding S3 metadata using jets3t

    - by billintx
    I'm just starting to use the jets3t API for S3, using version 0.7.2. I can't seem to save metadata with the S3Objects I'm creating. What am I doing wrong? The object is successfully saved when I putObject, but I don't see the metadata after I get the object.

        S3Service s3Service = new RestS3Service(awsCredentials);
        S3Bucket bucket = s3Service.getBucket(BUCKET_NAME);
        String key = "/1783c05a/p1";
        String data = "This is test data at key " + key;
        S3Object object = new S3Object(key, data);
        object.addMetadata("color", "green");
        for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + object.getMetadataMap().get(type));
        }
        s3Service.putObject(bucket, object);
        S3Object retreivedObject = s3Service.getObject(bucket, key);
        for (Iterator iterator = object.getMetadataMap().keySet().iterator(); iterator.hasNext();) {
            String type = (String) iterator.next();
            System.out.println(type + "==" + object.getMetadataMap().get(type));
        }

    Here's the output before putObject:

        Content-Length==37
        color==green
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8
        md5-hash==00e764936dd5ea4fab2c4574df5ef550

    Here's the output after putObject/getObject:

        Content-Length==37
        ETag=="00e764936dd5ea4fab2c4574df5ef550"
        request-id==9ED1633672C0BAE9
        Date==Wed Mar 24 09:51:44 CDT 2010
        Content-MD5==AOdkk23V6k+rLEV03171UA==
        Content-Type==text/plain; charset=utf-8
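    For comparison, this is how a user-metadata round trip behaves through the S3 API in general (a boto3 sketch, not jets3t; the bucket name is a placeholder). User metadata travels under the x-amz-meta- prefix and only shows up on the object actually fetched from S3, so the second loop above printing object rather than retreivedObject may be worth double-checking:

        import boto3

        s3 = boto3.client("s3")
        bucket, key = "my-test-bucket", "1783c05a/p1"

        # Upload with user metadata; on the wire this becomes x-amz-meta-color.
        s3.put_object(Bucket=bucket, Key=key, Body=b"This is test data",
                      Metadata={"color": "green"})

        # Read the metadata back from the stored object, not from the local copy.
        head = s3.head_object(Bucket=bucket, Key=key)
        print(head["Metadata"])  # expected: {'color': 'green'}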

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8 GB RAM, an Intel X25MG2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04 LTS 64-bit. So far so good. Looking at "Setting up Hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change jets3t.properties to set the endpoint to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the endpoint to a different hostname. Regards, --Zack

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/back up to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB disk, RAID 1 via HW controller, 15,000 RPM). It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar, running Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of: do a "simple" file-based backup with rsync, dumping the database and other files and uploading to Amazon via S3 API commands, or to an EC2 instance, or something; or do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool, take a snapshot, and upload it to Amazon. Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4? Or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server. Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose between two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize the two options are more like this. With XFS only: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS. With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot. Thanks a lot in advance. Let me know if I need to clarify something.

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I ran into is that, while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated. Most out-of-the-box plugins would tell the client the file has finished uploading because it is on my server, but there would then be a decent delay while the file moves from my server to S3. This also uses bandwidth that does not seem necessary. The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that succeeds, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
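    For the direct-to-S3 approach, S3 supports browser uploads against a signed POST policy, so the app never proxies the file itself. A rough boto3 sketch of issuing the signed form fields on the server (a later Python SDK, shown only to illustrate the flow; the bucket and key are placeholders):

        import boto3

        s3 = boto3.client("s3")

        # The server generates a short-lived signed policy; the browser then POSTs
        # the file straight to S3 using these fields, and a second request records
        # the document in the application database once the upload succeeds.
        post = s3.generate_presigned_post(
            Bucket="my-app-documents",
            Key="uploads/${filename}",
            ExpiresIn=600,  # seconds the policy stays valid
        )
        print(post["url"])     # form action for the upload
        print(post["fields"])  # hidden inputs to include in the upload form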

    Read the article

  • Problem installing ubuntu touch on galaxy nexus

    - by Francesco
    I've installed Ubuntu Touch on my Galaxy Nexus following the tutorial on the official site. However, the tutorial is not very clear. In particular, during the installation, user action on the phone is requested but not documented in the tutorial: 1) The phone asked me whether to reboot, wipe the cache, or something else (I did nothing and the phone rebooted). 2) The phone asked whether or not to replace CWM (or something similar); I answered no. After the installation everything seemed to work correctly. However, after shutting down, the phone can't power on anymore... When I push the power button the battery icon appears, showing that the battery is completely charged. What am I supposed to do?

    Read the article

  • Mounting Samsung Galaxy S2 via USB on Ubuntu 13.04

    - by argvar
    Connecting my Samsung Galaxy S2 to Windows 7 worked seamlessly. Now that I'm on Ubuntu 13.04 I'd like to access my phone's drive via the USB cable, but that's not working. When I plug it in I get an error message in Ubuntu: "Unable to mount SAMSUNG_Android: Unable to open MTP device '[usb:002,008]'". Then when I unplug it I see an error message on the phone: "Attention: Unable to find software on your PC that can recognize your device. Service Pack 3, Windows Media Player version 10 or higher for Windows XP, or Android File Transfer for Mac OS must be installed. You can download and install Kies from http://www.samsung.com/kies in order to sync data with your device, back up data, and upgrade your device (Windows and Mac OS are supported)." What can I do here to fix this? I don't want to use any special software with the phone, just access the phone's drive.

    Read the article

  • A weekend with the Samsung Galaxy Tab

    - by Richard Mitchell
    This weekend I took home one of the Samsung Galaxy Tabs we have lying around the office here, to see how I got on with it, as I've been thinking of buying one. Initial impressions: the look and feel of the Tab is quite nice. It's a lot smaller than an iPad, but that is no bad thing, as I imagine they are targeted at different markets. The Tab fits into my inside coat pocket nicely and doesn't feel like it's weighing me down too much. Connecting the Tab to the network at work was fine, typing in...(read more)

    Read the article

  • How to connect Samsung Galaxy S3 via USB?

    - by dez93_2000
    Connecting as either MTP or PTP: neither lets me see pictures saved by default by the phone camera to the DCIM folder on the external SD card. Similar problems with previous models (e.g. the S2) were solvable via 'USB utilities' in the wireless & networking settings, but that option is no longer present. Other suggestions have mentioned uninstalling various libraries, but I don't want to just start removing things without knowing it will help. Any thoughts on how to mount a Samsung Galaxy S3 over USB?

    Read the article

  • Taking web sites offline for demonstration on Galaxy Tablet

    This article is the Android sequel to the initial article about how to prepare an offline version of your web site for the purpose of demonstration or for exhibitions: Taking web sites offline for demonstration. If you didn't read the original article, please take a few minutes (5 to 10 at most) to gain a better understanding of the following. Thanks. I'm going to describe my steps using a Samsung Galaxy Tab 10.1 running Ice Cream Sandwich (ICS, version 4.0.4), but I would assume that any other Android-based device will show more or less the same results. Transferring the prepared archive to your Android device

    Read the article

  • Deleting multiple objects in an AWS S3 bucket with s3curl.pl?

    - by user183394
    I have been trying to use the AWS "official" command line tool s3curl.pl to test out the recently announced multi-object delete. Here is what I have done. First, I tested s3curl.pl with a set of credentials without a hitch:

        $ s3curl.pl --id=s3 -- http://testbucket-0.s3.amazonaws.com/ | xmllint --format -
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
        100   884    0   884    0     0   4399      0 --:--:-- --:--:-- --:--:--  5703
        <?xml version="1.0" encoding="UTF-8"?>
        <ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
          <Name>testbucket-0</Name>
          <Prefix/>
          <Marker/>
          <MaxKeys>1000</MaxKeys>
          <IsTruncated>false</IsTruncated>
          <Contents>
            <Key>file_1</Key>
            <LastModified>2012-03-22T17:08:17.000Z</LastModified>
            <ETag>"ee0e521a76524034aaa5b331842a8b4e"</ETag>
            <Size>400000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
          <Contents>
            <Key>file_2</Key>
            <LastModified>2012-03-22T17:08:19.000Z</LastModified>
            <ETag>"6b32cbf8219a59690a9f69ba6ff3f590"</ETag>
            <Size>600000</Size>
            <Owner>
              <ID>e6d81ea69572270e58d3814ab674df8c8f1fd5d502669633a4951bdd5185f7f4</ID>
              <DisplayName>zackp</DisplayName>
            </Owner>
            <StorageClass>STANDARD</StorageClass>
          </Contents>
        </ListBucketResult>

    Then I followed s3curl.pl's usage instructions:

        $ s3curl.pl --help
        Usage /usr/local/bin/s3curl.pl --id friendly-name (or AWSAccessKeyId) [options] -- [curl-options] [URL]
         options:
          --key SecretAccessKey        id/key are AWSAcessKeyId and Secret (unsafe)
          --contentType text/plain     set content-type header
          --acl public-read            use a 'canned' ACL (x-amz-acl header)
          --contentMd5 content_md5     add x-amz-content-md5 header
          --put <filename>             PUT request (from the provided local file)
          --post [<filename>]          POST request (optional local file)
          --copySrc bucket/key         Copy from this source key
          --createBucket [<region>]    create-bucket with optional location constraint
          --head                       HEAD request
          --debug                      enable debug logging
         common curl options:
          -H 'x-amz-acl: public-read'  another way of using canned ACLs
          -v                           verbose logging

    Then I tried the following and always got back an error. I would appreciate it very much if someone could point out where I made a mistake.

        $ s3curl.pl --id=s3 --post multi_delete.xml -- http://testbucket-0.s3.amazonaws.com/?delete
        <?xml version="1.0" encoding="UTF-8"?>
        <Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><StringToSignBytes>50 4f 53 54 0a 0a 0a 54 68 75 2c 20 30 35 20 41 70 72 20 32 30 31 32 20 30 30 3a 35 30 3a 30 38 20 2b 30 30 30 30 0a 2f 7a 65 74 74 61 72 2d 74 2f 3f 64 65 6c 65 74 65</StringToSignBytes><RequestId>707FBE0EB4A571A8</RequestId><HostId>mP3ZwlPTcRqARQZd6gU4UvBrxGBNIVa0VVe5p0rqGmq5hM65RprwcG/qcXe+pmDT</HostId><SignatureProvided>edkNGuugiSFe0ku4eGzkh8kYgHw=</SignatureProvided><StringToSign>POST
        Thu, 05 Apr 2012 00:50:08 +0000

    The file multi_delete.xml contains the following:

        $ cat multi_delete.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <Delete>
          <Quiet>true</Quiet>
          <Object>
            <Key>file_1</Key>
            <VersionId> </VersionId>>
          </Object>
          <Object>
            <Key>file_2</Key>
            <VersionId> </VersionId>
          </Object>
        </Delete>

    Thanks for any help! --Zack
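    For comparison, the same multi-object delete issued through a Python SDK rather than a hand-signed request (a boto3 sketch; boto3 postdates this question and handles request signing itself):

        import boto3

        s3 = boto3.client("s3")

        # Delete both test keys in a single request, mirroring multi_delete.xml.
        response = s3.delete_objects(
            Bucket="testbucket-0",
            Delete={
                "Objects": [{"Key": "file_1"}, {"Key": "file_2"}],
                "Quiet": True,
            },
        )
        print(response.get("Errors", []))  # an empty list means every key was deleted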

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html When I uploaded files smaller than 4 GB, the upload completed without any problem. When I uploaded a file of 13 GB, the code started to throw IO exceptions (broken pipe). After retries, it still failed. Here is the way to repeat the scenario: take the 1.1.7.1 release; create a new bucket in the US Standard region; create a large EC2 instance as the client to upload the file; create a 13 GB file on the EC2 instance; run the sample code from either the high-level or the low-level API S3 documentation page on the EC2 instance; test any one of three part sizes: the default part size (5 MB), or a part size of 100,000,000 or 200,000,000 bytes. So far the problem shows up consistently. I attached a tcpdump file here for you to compare; in it, the host on the S3 side kept resetting the socket.
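    For reference, this is roughly how part size and concurrency are configured in the Python SDK's managed multipart uploader (a boto3 sketch, not the Java SDK the question uses; the file and bucket names are placeholders):

        import boto3
        from boto3.s3.transfer import TransferConfig

        s3 = boto3.client("s3")

        # 100 MB parts with several parts in flight at once; parts are uploaded
        # independently, so one failed part does not force restarting the whole
        # 13 GB object.
        config = TransferConfig(
            multipart_threshold=100 * 1024 * 1024,
            multipart_chunksize=100 * 1024 * 1024,
            max_concurrency=4,
        )
        s3.upload_file("/data/big-file.bin", "my-bucket", "big-file.bin", Config=config)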

    Read the article

  • Galaxy Note II MTP on Ubuntu 12.04

    - by Anass Ahmed
    I bought a brand new Galaxy Note II and tried to mount its storage on my Ubuntu laptop. As you know, Android 4.0+ uses MTP by default, and Android 4.1 doesn't support USB Mass Storage anymore, so I have to use MTP to open my files via USB. I followed this article to get it to work. It worked only for the external memory card; the internal storage cannot be reached!

        $ mount
        /dev/sda3 on / type ext4 (rw)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        /dev/sda5 on /media/Islamics type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096)
        /dev/sda8 on /media/Technology type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096)
        /dev/sda7 on /media/Misc type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/anass/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=anass)
        gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)
        mtpfs on /media/GalaxyNote2 type fuse.mtpfs (rw,nosuid,nodev,allow_other,user=anass)

    Read the article

  • Not All iPhone 5 and Galaxy SIII in Some Markets #UX #mobile #BBC #L10n

    - by ultan o'broin
    The BBC World Service provides news content to more people across the globe and has launched a series of new apps tailored for Nokia devices, allowing mobile owners to receive news updates in 11 different languages. So, not everyone is using an iPhone 5 or Samsung Galaxy SIII then? Hardly surprising, given that one of these devices could cost you a large chunk of your annual income in some countries! The story is a reminder to take local market requirements into account and to use a toolkit to develop solutions for them. The article tells us: "The BBC World Service apps will feature content from the following BBC websites: BBC Arabic, BBC Brasil (in Portuguese), BBC Chinese, BBC Hindi, BBC Indonesia, BBC Mundo (in Spanish), BBC Russian, BBC Turkce, BBC Ukrainian, BBC Urdu and BBC Vietnamese. Users of the Chinese, Indonesian and Arabic apps will receive news content but will also be able to listen to radio bulletins. It’s a big move for the BBC, particularly as Nokia has sold more than 675 million Series 40 handsets to date. While the company’s smartphone sales dwindle, its feature phone business has continued to prop up its balance sheet." Ah, feature phones. Remember them? You should! Don't forget the Oracle Application Development Framework solution for feature phones too: Mobile Browser. So, don't cut yourself off from a huge market segment and an opportunity to grow your business by disregarding feature phones, when Oracle makes it easy for you to develop mobile solutions for a full range of devices and users. Let's remind ourselves of the mobile toolkit solutions offered by Oracle, or coming soon, that make it possible to reach users of global content: Mobile Development with ADF Mobile (Oracle makes no contractual claims about development, release, and timing of future products). All that said, check out where the next big markets for mobile apps are coming from in my post on Blogos: Where Will The Next 10 Million Apps Come From? BRIC to MIST.

    Read the article

  • Amazon S3 File Uploads

    - by Abdul Latif
    I can upload files from a form using POST, but I am trying to find out how to add extra fields to the form, e.g. file description, type, etc. Also, can I upload more than one file at once? I know the documentation says you can't do that with POST, but are there any workarounds? Thanks in advance.

    Read the article

  • Amazon S3 collisions with heroku and paperclip

    - by poseid
    I have an app on my localhost for development and an app for testing on Heroku. Image upload on localhost with Paperclip always works. However, when I try the same image upload on my Heroku app, the app hangs and the upload seems to go on forever. I suspect that there is a collision going on. What is needed to get the uploads working? Or do I need to use a different bucket for each environment?

    Read the article

  • Special chars in Amazon S3 keys?

    - by Martin
    Is it possible to have special characters like åäö in the key? If I URL-encode the key before storing, it works, but I can't really find a way to access the object. If I write åäö in the URL I get access denied (as I would if the object were not found). If I URL-encode the URL I paste in the browser I get "InvalidURI: Couldn't parse the specified URI". Is there some way to do this?
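    A small sketch of the distinction, assuming the key is stored as the raw UTF-8 string and only the request URL is percent-encoded (Python, with boto3 standing in for whichever SDK is in use; the bucket name is a placeholder):

        import urllib.parse
        import boto3

        s3 = boto3.client("s3")
        bucket, key = "my-bucket", "åäö.txt"  # raw non-ASCII key

        # Store under the raw key; the SDK encodes it correctly on the wire.
        s3.put_object(Bucket=bucket, Key=key, Body=b"hello")

        # When building a URL by hand, percent-encode the UTF-8 bytes of the key
        # (but not the rest of the URL), otherwise S3 looks up a different key.
        encoded_key = urllib.parse.quote(key.encode("utf-8"))
        print(f"https://{bucket}.s3.amazonaws.com/{encoded_key}")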

    Read the article

  • Automated incremental backups from Plesk on Centos to Amazon S3

    - by ChrisS
    Hi, I've done a fair bit of research on this via Google and there seem to be quite a few ways of possibly doing this. I'm looking to incrementally back up new and updated files in two directories on my Plesk-run CentOS 5.2 server: /backups and /var/www/vhosts (preferably only httpdocs within each vhost). Has anyone got some good feedback from using the various solutions? There seem to be various Java, Perl, and Ruby based solutions out there. Many thanks, Chris

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 Storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure a ClickOnce deployment hosted with Amazon S3. S3 storage is secured by ACL, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by: applying encryption to the URL, or restricting by IP. The problem with CloudFront is that the encryption of the URL is mandatory. ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably can if you start editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though. Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs or blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.

    Read the article
