Search Results

Search found 21640 results on 866 pages for 'local storage'.


  • Which browsers support html5 offline storage?

    - by Marcin
    Essentially, I wanted to run a piece of demo code from the W3C Offline Web Apps page. It looks like this: var db = window.openDatabase("notes", "", "The Example Notes App!", 1048576); Firefox 3.5, IE8 and Chrome do not seem to support it. Has anybody out there actually implemented support for this, or is it wishful thinking about 'the standard of the future'?
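
    For what it's worth, window.openDatabase (Web SQL) was only ever implemented by WebKit-based browsers and Opera, never by Firefox or IE, and the W3C later abandoned the spec; Web Storage is the part with broad support. A minimal feature-detection sketch (not from the W3C demo):

        // prefer whichever storage API the browser actually provides
        if (window.openDatabase) {
            // Web SQL -- WebKit/Opera only; the spec was eventually abandoned
            var db = window.openDatabase("notes", "1.0", "The Example Notes App", 1048576);
        } else if (window.localStorage) {
            // Web Storage -- the broadly supported option (Firefox 3.5+, IE8+, Chrome 4+, Safari 4+)
            localStorage.setItem("notes", JSON.stringify([]));
        } else {
            alert("No client-side storage available in this browser");
        }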

    Read the article

  • extra storage merge sort

    - by davit-datuashvili
    I need make a merge sort using an additional array. Here is my code: public class extra_storage{ public static void main(String[]args) { int x[]=new int[]{12,9,4,99,120,1,3,10}; int a[]=new int[x.length]; mergesort(x,0,x.length-1,a); for (int i=0;i<x.length;i++){ System.out.println(x[i]); } } public static void mergesort(int x[],int low,int high, int a[]){ if (low>=high) { return; } int middle=(low+high)/2; mergesort(x,low,middle,a); mergesort(x,middle+1,high,a); int k; int lo=low; int h=high; for (k=low;k<high;k++) if ((lo<=middle ) && ((h>high)||(x[lo]<x[h]))){ a[k]=x[lo++]; } else { a[k]=x[h++]; } for (k=low;k<=high;k++){ x[k]=a[k]; } } } But something is wrong. When I run it the output is this: 1 0 3 0 4 0 9 0 What is the problem?
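
    Two off-by-one slips seem to explain the output: the right-half index h should start at middle + 1 rather than high, and the merge loop should run while k <= high so the last slot is filled. A sketch of the corrected merge step only:

        int lo = low;          // walks the left half  [low, middle]
        int h  = middle + 1;   // walks the right half [middle + 1, high]
        for (int k = low; k <= high; k++) {      // note <=, otherwise x[high] is never merged
            if (lo <= middle && (h > high || x[lo] < x[h])) {
                a[k] = x[lo++];
            } else {
                a[k] = x[h++];
            }
        }
        for (int k = low; k <= high; k++) {
            x[k] = a[k];       // copy the merged run back into x
        }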

    Read the article

  • PHP & MySQL username validation and storage problem.

    - by php
    For some reason when a user enters a brand new username the error message <p>Username unavailable</p> is displayed and the name is not stored. I was wondering if some can help find the flaw in my code so I can fix this error? Thanks Here is the PHP code. if($_POST['username'] && trim($_POST['username'])!=='') { $u = "SELECT * FROM users WHERE username = '$username' AND user_id <> '$user_id'"; $r = mysqli_query ($mysqli, $u) or trigger_error("Query: $u\n<br />MySQL Error: " . mysqli_error($mysqli)); if (mysqli_num_rows($r) == TRUE) { echo '<p>Username unavailable</p>'; $_POST['username'] = NULL; } else if(isset($_POST['username']) && mysqli_num_rows($r) == 0 && strlen($_POST['username']) <= 255) { $username = mysqli_real_escape_string($mysqli, $_POST['username']); } else if($_POST['username'] && strlen($_POST['username']) >= 256) { echo '<p>Username can not exceed 255 characters</p>'; } }
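
    One likely flaw: $username is interpolated into the SELECT before it has been assigned from $_POST['username'] (that assignment only happens in a later else-if branch), so the availability check never looks at the name the user actually typed. A hedged sketch of the reordered logic, reusing the question's $mysqli and $user_id ($name is just a helper variable for the sketch):

        if (isset($_POST['username']) && trim($_POST['username']) !== '') {
            $name = trim($_POST['username']);
            if (strlen($name) > 255) {
                echo '<p>Username can not exceed 255 characters</p>';
            } else {
                // escape the submitted name before it is used in the query
                $username = mysqli_real_escape_string($mysqli, $name);
                $u = "SELECT * FROM users WHERE username = '$username' AND user_id <> '$user_id'";
                $r = mysqli_query($mysqli, $u);
                if (mysqli_num_rows($r) > 0) {
                    echo '<p>Username unavailable</p>';
                } else {
                    // the name is available -- safe to keep $username and store it
                }
            }
        }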

    Read the article

  • storage classes

    - by ramyabanu
    What is the difference between a variable declared auto and one declared static? How does memory allocation differ between auto and static variables? Why do we use static with an array of pointers, and what is its significance?
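
    A small C illustration of the practical difference: an auto variable lives on the stack and is re-created on every call, while a static one lives in the data segment for the whole run and is initialised exactly once. That same lifetime guarantee is why an array of pointers is often declared static: it can still be referred to after the function returns and is set up only once.

        #include <stdio.h>

        void counter(void)
        {
            int a = 0;         /* auto: allocated on the stack, re-initialised on every call */
            static int s = 0;  /* static: allocated in the data segment, initialised once,
                                  keeps its value between calls                              */
            a++;
            s++;
            printf("auto = %d, static = %d\n", a, s);
        }

        int main(void)
        {
            counter();   /* prints: auto = 1, static = 1 */
            counter();   /* prints: auto = 1, static = 2 */
            return 0;
        }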

    Read the article

  • Webservice connection data storage

    - by Mey
    Hello, I would like to ask how I can implement the following: I have a tabBar where each tab makes a different connection to a web service and should show different data. I would like to save this data somewhere and load the different tabs from that place. Where should I save this received data? Thank you, best regards

    Read the article

  • Physical storage of data in Access 2007

    - by ste
    I've been trying to estimate the size of an Access table with a certain number of records. It has 4 Longs (4 bytes each), and a Currency (8 bytes). In theory: 1 Record = 24 bytes, 500,000 = ~11.5MB However, the accdb file (even after compacting) increases by almost 30MB (~61 bytes per record). A few extra bytes for padding wouldn't be so bad, but 2.5X seems a bit excessive - even for Microsoft bloat. What's with the discrepancy? The four longs are compound keys, would that matter?

    Read the article

  • Using AWS S3 for photo storage

    - by Sam
    I'm going to be using S3 to store user-uploaded photos. Obviously, I won't be serving the image files to user agents without resizing them down. However, no single size will do, since some thumbnails will be smaller than other, larger previews. So I was thinking of making a standard set of dimensions, scaling from a lowest of 16x16 to a highest of 1024x1024. Is this a good way to solve the problem? What if I need a new size later on? How would you solve this?

    Read the article

  • Objects with inheritance memory storage

    - by nikitas350
    Say that I have some classes like in this example. class A { int k, m; public: A(int a, int b) { k = a; m = b; } }; class B { int k, m; public: B() { k = 2; m = 3; } }; class C : private A, private B { int k, m; public: C(int a, int b) : A(a, b) { k = b; m = a; } }; Now, in a class C object, are the variables stored in a specific way? I know what happens in a POD object, but this is not a POD object...
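
    The standard only promises that members within one access section are laid out in declaration order; where the A and B base subobjects sit relative to C's own k and m is implementation-defined, although most compilers place the bases first, in declaration order. A small sketch for poking at a particular compiler's layout (the where() helper is added purely for inspection):

        #include <cstdio>

        class A { int k, m; public: A(int a, int b) { k = a; m = b; } };
        class B { int k, m; public: B() { k = 2; m = 3; } };
        class C : private A, private B {
            int k, m;
        public:
            C(int a, int b) : A(a, b) { k = b; m = a; }
            void where() {   // print where each subobject starts inside this C
                std::printf("C at %p, A part at %p, B part at %p, sizeof(C) = %zu\n",
                            (void*)this, (void*)static_cast<A*>(this),
                            (void*)static_cast<B*>(this), sizeof(C));
            }
        };

        int main() { C c(1, 2); c.where(); return 0; }
        // typically: the A part first, then the B part, then C's own k and m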

    Read the article

  • C - Error with read() of a file, storage in an array, and printing output properly

    - by ns1
    I am new to C, so I am not exactly sure where my error is. However, I do know that the great portion of the issue lies either in how I am storing the doubles in the d_buffer (double) array or the way I am printing it. Specifically, my output keeps printing extremely large numbers (with around 10-12 digits before the decimal point and a trail of zeros after it. Additionally, this is an adaptation of an older program to allow for double inputs, so I only really added the two if statements (in the "read" for loop and the "printf" for loop) and the d_buffer declaration. I would appreciate any input whatsoever as I have spent several hours on this error. #include <stdio.h> #include <fcntl.h> #include <sys/types.h> #include <unistd.h> #include <string.h> struct DataDescription { char fieldname[30]; char fieldtype; int fieldsize; }; /* ----------------------------------------------- eof(fd): returns 1 if file `fd' is out of data ----------------------------------------------- */ int eof(int fd) { char c; if ( read(fd, &c, 1) != 1 ) return(1); else { lseek(fd, -1, SEEK_CUR); return(0); } } void main() { FILE *fp; /* Used to access meta data */ int fd; /* Used to access user data */ /* ---------------------------------------------------------------- Variables to hold the description of the data - max 10 fields ---------------------------------------------------------------- */ struct DataDescription DataDes[10]; /* Holds data descriptions for upto 10 fields */ int n_fields; /* Actual # fields */ /* ------------------------------------------------------ Variables to hold the data - max 10 fields.... ------------------------------------------------------ */ char c_buffer[10][100]; /* For character data */ int i_buffer[10]; /* For integer data */ double d_buffer[10]; int i, j; int found; printf("Program for searching a mini database:\n"); /* ============================= Read in meta information ============================= */ fp = fopen("db-description", "r"); n_fields = 0; while ( fscanf(fp, "%s %c %d", DataDes[n_fields].fieldname, &DataDes[n_fields].fieldtype, &DataDes[n_fields].fieldsize) > 0 ) n_fields++; /* --- Prints meta information --- */ printf("\nThe database consists of these fields:\n"); for (i = 0; i < n_fields; i++) printf("Index %d: Fieldname `%s',\ttype = %c,\tsize = %d\n", i, DataDes[i].fieldname, DataDes[i].fieldtype, DataDes[i].fieldsize); printf("\n\n"); /* --- Open database file --- */ fd = open("db-data", O_RDONLY); /* --- Print content of the database file --- */ printf("\nThe database content is:\n"); while ( ! eof(fd) ) { /* ------------------ Read next record ------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) read(fd, &i_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'F' ) read(fd, &d_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'C' ) read(fd, &c_buffer[j], DataDes[j].fieldsize); } double d; /* ------------------ Print it... 
------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) printf("%d ", i_buffer[j]); if ( DataDes[j].fieldtype == 'F' ) d = d_buffer[j]; printf("%lf ", d); if ( DataDes[j].fieldtype == 'C' ) printf("%s ", c_buffer[j]); } printf("\n"); } printf("\n"); printf("\n"); } Post edits output: 16777216 0.000000 107245694331284094976.000000 107245694331284094976.000000 Pi 33554432 107245694331284094976.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 Secret Key 50331648 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 -0.000000 -0.000000 The number E Expected Output: 3 rows of data ending with the number "e = 2.18281828" To reproduce the problem, the following two files need to be in the same directory as the lookup-data.c file: - db-data - db-description
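
    Two things stand out in the printing loop: the 'F' branch has no braces, so printf("%lf ", d) runs for every field and prints an uninitialised or stale d; and values that large can also mean the fieldsize recorded in db-description for 'F' fields does not equal sizeof(double), so read() only fills part of the 8-byte slot. A hedged sketch of the corrected print loop:

        /* braces on the 'F' branch, and the value printed straight from d_buffer[j] */
        for (j = 0; j < n_fields; j++) {
            if (DataDes[j].fieldtype == 'I')
                printf("%d ", i_buffer[j]);
            if (DataDes[j].fieldtype == 'F') {
                printf("%lf ", d_buffer[j]);
            }
            if (DataDes[j].fieldtype == 'C')
                printf("%s ", c_buffer[j]);
        }
        printf("\n");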

    Read the article

  • iOS AS3 AIR local storage

    - by Kere Puki
    I am developing an app which requires a SQL DB on the device. I am using File.applicationStorageDirectory and folder.resolvePath to add a new DB. When debugging the app, everything looks like it executes correctly and I am able to successfully create a new table. I haven't gone far with inserting and reading records yet; I just wanted to ask: when I re-run the app, does the existing DB file get replaced with a new, empty one? If so, do I need to check whether the file already exists? And how can I look at the DB on the device (iOS at this stage)? Thanks
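
    Files created under File.applicationStorageDirectory persist between runs, so the database is only replaced if the code explicitly recreates it; checking exists before running CREATE TABLE statements is the usual pattern (or simply use CREATE TABLE IF NOT EXISTS). A short AS3 sketch, with the file name as an assumption; to inspect the data you would typically copy the .db file off the device and open it in any SQLite browser:

        import flash.data.SQLConnection;
        import flash.filesystem.File;

        var dbFile:File = File.applicationStorageDirectory.resolvePath("survey.db");
        var firstRun:Boolean = !dbFile.exists;   // true only the very first time

        var conn:SQLConnection = new SQLConnection();
        conn.open(dbFile);                       // opens the existing DB, creates it if missing
        if (firstRun) {
            // run the CREATE TABLE statements here only
        }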

    Read the article

  • Java reference storage question

    - by aab
    In Java, when you pass an object to a method as a parameter, you are actually passing a reference (a pointer) to that object, because object variables in Java hold references. Inside the method, there is a pointer to that object, which is a location in memory. I am wondering where this pointer itself lives in memory. Is a new memory location created inside the method to hold this reference?
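
    The reference itself is just a parameter slot: it lives in the called method's stack frame as a copy of the caller's reference, while the object it points to stays on the heap, and no new object storage is allocated by the call. A tiny sketch:

        class RefDemo {
            static void caller() {
                StringBuilder sb = new StringBuilder();   // 'sb' lives in caller()'s stack frame;
                append(sb);                               // the StringBuilder object lives on the heap
            }

            static void append(StringBuilder b) {         // 'b' is a copy of the reference, held in
                b.append("hi");                           // append()'s own frame -- same heap object
            }
        }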

    Read the article

  • PHP PDO changes remote host to local hostname

    - by Wade Urry
    I'm trying to connect to a remote mysql server using PDO. However, regardless of the hostname or ip address i supply in the dsn, when the script is run it always reverts the address to the hostname of the local server where the webserver is running. Google suggests this could be something to do with SELinux and apaches ability to connect to remote databases, however i have SELinux disabled. Distro: Ubuntu 11.04 x64 Apache version: 2.2.17 PHP Version: PHP 5.3.5-1ubuntu7.11 with Suhosin-Patch (cli) Edit: Added code as requested. Though i dont believe this is an issue with my coding as it works fine on the local server, but doesnt allow remote connection. public function db_connect($driver, $dbhost, $dbname, $user, $pass) { $dsn = $driver . ':host=' . $dbhost . ';dbname=' . $dbname; try { $this->DB = new PDO($dsn, $user, $pass); } catch (PDOException $err) { print 'Database Connection Failed: ' . $err->getMessage(); die(); } } $remote_db = new DB('mysql', 'remote_server.domain.tld', 'database_name', 'user_name', 'password'); This is the error message i am receiving. Database Connection Failed: SQLSTATE[28000] [1045] Access denied for user 'user_name'@'local_server.domain.tld' (using password: YES)
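
    PDO is not rewriting the DSN here: a MySQL "Access denied" message always reports the account as user@<host the connection came from>, so seeing the web server's own hostname is expected when connecting out to a remote server. The fix is usually on the MySQL side, granting the account access from that client host (and making sure the server's bind-address allows remote connections). A hedged sketch using the placeholder names from the question:

        -- run on the remote MySQL server
        CREATE USER 'user_name'@'local_server.domain.tld' IDENTIFIED BY 'password';
        GRANT SELECT, INSERT, UPDATE, DELETE
            ON database_name.* TO 'user_name'@'local_server.domain.tld';
        FLUSH PRIVILEGES;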

    Read the article

  • /etc/resolv.conf nameserver fd00::1

    - by user88631
    My /etc/resolv.conf constantly get a mysterious entry, i run a home network with ipv6 provided by ravd, the interface is auto-configured by Network manager (all name server lookups are lost when this line is first in my /etc/resolv.conf) . Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) **# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN** nameserver fd00::1 nameserver 192.168.1.1 search home.int When ping is working cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 192.168.1.1 search home.int So something is putting fd00::1 at start of file, not if I ping6 fd00::1 I get Destination unreachable: Administratively prohibited To diagnose this I ran the router with single cable to connected to ubuntu machine. Ran tcpdump + restarted network on ubuntu. "tcpdump ip6 -e -i eth0 | grep fd00" finds nothing, it's not being advertised via the network.. The only hit I got was when an upstream router refused a connection attempt from the ubuntu machine to fd00::1. I have also switched on debug for network manager & it appears to set the mystery line.. 15:22:14 storage-pc NetworkManager[349]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. 15:22:14 storage-pc NetworkManager[349]: <warn> dnsmasq exited with error: Other problem (5) 15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281528] [nm-dns-manager.c:598] update_dns(): updating resolv.conf 15:22:14 storage-pc NetworkManager[349]: <debug> [1346822534.281875] [nm-dns-manager.c:719] update_dns(): DNS: plugin dnsmasq ignored (caching disabled) 15:22:14 storage-pc NetworkManager[349]: <info> ((null)): writing resolv.conf to /sbin/resolvconf 15:22:14 storage-pc dbus[2184]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' 15:22:14 storage-pc dnsmasq[2875]: reading /etc/resolv.conf 15:22:14 storage-pc dnsmasq[2875]: using nameserver 192.168.1.1#53 15:22:14 storage-pc dnsmasq[2875]: using nameserver fd00::1#53 Any suggestions on how to find out where this comes from?
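
    A text grep of the capture will not match fd00::1, because the address travels as a binary RDNSS option inside the router advertisements (or as a DHCPv6 DNS option), which NetworkManager then hands to resolvconf; the dnsmasq line "using nameserver fd00::1#53" fits that. A couple of hedged diagnostic commands:

        # dump received router advertisements, including any RDNSS (DNS server) option
        sudo radvdump                                     # ships with the radvd package
        # or capture just ICMPv6 router advertisements (type 134) with tcpdump
        sudo tcpdump -vv -i eth0 'icmp6 and ip6[40] == 134'

    If the advertisements really do carry fd00::1, the cleanest fix is on the router itself, or per connection in NetworkManager by setting the IPv6 method to "Automatic, addresses only" so advertised DNS servers are ignored.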

    Read the article

  • Increase samba space on open suse 12.1

    - by Kapil Sharma
    I know linux basics but not an expert. IT guy left the job here and there is some time before new hire. So sorry if question is very basic. We have local testing server based on Open SUSE 12.1, which also act as shared drive between dev/mgmt team here and using Samba for that. Now we are running out of space on samba, even though server's 2*1TB harddisk is nearly 90% free. My question is, what is limiting Samba and how can I increase its limit? We need around at least 500 GB as shared drive but currently its just 25 GB. I don't need step by step answer, just a link to any helpful article would be sufficient. Probably I'm putting wrong keywords in google so not getting any helpful link. EDIT: Output of commands in the first comment. All commands were run as root user df -h (getting error with df -ht) Filesystem Size Used Avail Use% Mounted on rootfs 30G 5.1G 23G 19% / devtmpfs 2.0G 36K 2.0G 1% /dev tmpfs 2.0G 1.1M 2.0G 1% /dev/shm tmpfs 2.0G 676K 2.0G 1% /run /dev/sda2 30G 5.1G 23G 19% / tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup tmpfs 2.0G 676K 2.0G 1% /var/run tmpfs 2.0G 0 2.0G 0% /media tmpfs 2.0G 676K 2.0G 1% /var/lock /dev/sda3 36G 31G 3.3G 91% /home fdisk -l /dev/[hmsv]d* Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2d4a2d49 Device Boot Start End Blocks Id System /dev/sda1 2048 16771071 8384512 82 Linux swap / Solaris /dev/sda2 * 16771072 79681535 31455232 83 Linux /dev/sda3 79681536 156301311 38309888 83 Linux Disk /dev/sda1: 8585 MB, 8585740288 bytes 255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda1 doesn't contain a valid partition table Disk /dev/sda2: 32.2 GB, 32210157568 bytes 255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System Disk /dev/sda3: 39.2 GB, 39229325312 bytes 255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sda3 doesn't contain a valid partition table vgs No volume groups found lvs No volume groups found output of vi /etc/samba/smb.conf # smb.conf is the main Samba configuration file. You find a full commented # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the # samba-doc package is installed. 
# Date: 2011-11-02 [global] workgroup = WORKGROUP passdb backend = tdbsam printing = cups printcap name = cups printcap cache time = 750 cups options = raw map to guest = Bad User include = /etc/samba/dhcp.conf logon path = \\%L\profiles\.msprofile logon home = \\%L\%U\.9xprofile logon drive = P: usershare allow guests = Yes [homes] comment = Home Directories valid users = %S, %D%w%S browseable = No read only = No inherit acls = Yes [profiles] comment = Network Profiles Service path = %H read only = No store dos attributes = Yes create mask = 0600 directory mask = 0700 [users] comment = All users path = /home read only = No inherit acls = Yes veto files = /aquota.user/groups/shares/ [groups] comment = All groups path = /home/groups read only = No inherit acls = Yes [printers] comment = All Printers path = /var/tmp printable = Yes create mask = 0600 browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/drivers write list = @ntadmin root force group = ntadmin create mask = 0664 directory mask = 0775 [allusers] comment = All Users path = /home/shares/allusers valid users = @users force group = users create mask = 0660 directory mask = 0771 writable = yes
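
    Samba itself does not impose a size cap; a share can only be as large as the filesystem its path sits on. In this df output the shares under /home/shares live on /dev/sda3, a 36 GB partition that is already 91% full, and fdisk only shows a single 80 GB disk, so the 1 TB drives are apparently not partitioned or mounted at all. A hedged sketch of how to check and extend the space (device names are assumptions):

        # which filesystem does the share actually live on?
        df -h /home/shares/allusers

        # if a second disk really is unused, put it to work for the shares
        sudo fdisk -l                                  # look for e.g. /dev/sdb
        sudo mkfs.ext4 /dev/sdb1                       # assumes a new partition /dev/sdb1
        sudo mount /dev/sdb1 /home/shares
        echo '/dev/sdb1  /home/shares  ext4  defaults  0  2' | sudo tee -a /etc/fstab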

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a REST remote storage? The data of a certain client is stored with localStorage (using an ember-data indexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. In mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios: A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage. A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?). Would you use two stores (one for localStorage, one for REST) and sync between them, or use a hybrid indexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?
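
    One sketch of the client half (plain JS, not ember-data specific), assuming client-generated UUIDs, which sidestep the id-conflict problem because the server no longer has to mint the id, and a made-up /api/records endpoint; a per-record synced flag marks what still has to be pushed, while server-side changes would still need polling or a socket to reach the client:

        // assumes a modern browser for crypto.randomUUID and fetch; any UUID helper works
        function createRecord(attrs) {
          var record = Object.assign({ id: crypto.randomUUID(), synced: false }, attrs);
          localStorage.setItem("record:" + record.id, JSON.stringify(record));
          return record;
        }

        function pushUnsynced() {
          Object.keys(localStorage)
            .filter(function (k) { return k.indexOf("record:") === 0; })
            .map(function (k) { return JSON.parse(localStorage.getItem(k)); })
            .filter(function (r) { return !r.synced; })
            .forEach(function (r) {
              fetch("/api/records/" + r.id, { method: "PUT", body: JSON.stringify(r) })
                .then(function () {
                  r.synced = true;                     // server accepted it; mark as clean
                  localStorage.setItem("record:" + r.id, JSON.stringify(r));
                });
            });
        }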

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page: <script> var info = { poke: 1, width: 1, height: 1, path: "widget.html" } chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) { if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") { chrome.extension.sendRequest( sender.id, { head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback", body: info, } ); } }); function initSelectedTab() { localStorage.setItem("selectedTab", "Something"); } initSelectedTab(); </script> Here is manifest.json: { "update_url": "http://clients2.google.com/service/update2/crx", "background_page": "background.html", "name": "Test Widget", "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.", "icons": { "128": "icon.png" }, "version": "0.0.1" } Here is the relevant part of widget.html: <script> var selectedTab = localStorage.getItem("selectedTab"); document.write(selectedTab); </script> Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?
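
    Since background.html and widget.html belong to the same extension, they share one chrome-extension:// origin, so localStorage written by one is visible to the other; that makes "the background page never ran" the more likely explanation. A quick check (a debugging sketch, not a fix): open chrome://extensions, enable Developer mode, click the background.html entry under the extension to open its console, and add a log line such as:

        // temporary debugging line for background.html
        console.log("background ran, selectedTab =", localStorage.getItem("selectedTab"));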

    Read the article

  • How to auto-scan any plugged in usb storage device with clamav?

    - by ossi
    I'd like to run an automatic virus scan on any plugged-in USB device using ClamAV. I'm using Ubuntu 12.04. The closest things I found were "Run clamav on mount of flashdrive" and "How to run a shell script when a new USB storage device is detected?" The first one is not working for me and the second one seems to target a known device. Is there a tutorial I've missed? Or can I get some help with udev rules that apply to any USB storage device that is added? Currently nothing seems to do anything.
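
    One reason udev-based approaches tend to fall over is that programs launched from RUN+= are expected to finish quickly and are killed otherwise, which a full virus scan never will be. A hedged alternative sketch: watch the auto-mount directory with inotify and scan whatever appears (paths assume Ubuntu 12.04's /media automount; needs the inotify-tools and clamav packages):

        #!/bin/bash
        # scan any filesystem that gets auto-mounted under /media
        MOUNT_ROOT=/media
        inotifywait -m -e create --format '%w%f' "$MOUNT_ROOT" | while read -r mountpoint; do
            sleep 5                                   # give the automounter time to finish
            clamscan -r --infected --log=/var/log/clamav/usb-scan.log "$mountpoint"
        done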

    Read the article

  • Need to set up shared storage for Guest virtual machines that are running on a Xen host

    - by Sajith
    My environment: I am doing this at home to learn about virtualization techniques. My machine has a quad-core processor that supports Intel VT and 8 GB of RAM. Xen is the virtualization platform, and all domUs are LVM-based. I mainly have two questions: (1) I need shared storage for these VMs, something like NFS / NAS / iSCSI, but I don't know which is the best solution, so can someone tell me which suits best? Please note that this shared storage also needs to be accessible from the other physical machines on the network. (2) How do I implement the solution chosen for question 1? Any tutorials / guidelines / ebooks would be a great help and highly appreciated. Thank you in advance :)
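
    For a home lab where ordinary physical machines must also reach the storage, NFS is usually the path of least resistance: every Linux box can mount it, and there is no initiator setup as with iSCSI, which is mainly worth it when guests need raw block devices. A minimal NFS sketch, with the export path and subnet as assumptions (package commands assume a Debian-style distro; adjust for yours):

        # on the storage host
        sudo apt-get install nfs-kernel-server
        echo '/srv/vmshare 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra

        # on dom0 or any other machine on the network
        sudo mount -t nfs storagehost:/srv/vmshare /mnt/vmshare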

    Read the article

  • T-SQL Snack: How Much Free Storage Space is Available?

    - by andyleonard
    Introduction Ever have a need to calculate the total available storage space for a server? Recently I did. Here's a solution I came up with - I bet someone can do this better! xp_fixeddrives There's a handy stored procedure called xp_fixeddrives that reports the available storage space: exec xp_fixeddrives This returns: drive MB free ----- ----------- C 6998 E 201066 Problem solved right? Maybe. The Sum What I really want is the sum total of all available space presented to the server. I built this...(read more)
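
    The article's own solution sits behind the "read more" link; one straightforward way to get the total (not necessarily the author's) is to capture the procedure's output and SUM it:

        DECLARE @drives TABLE (drive CHAR(1), MBfree INT);
        INSERT INTO @drives EXEC xp_fixeddrives;        -- capture the per-drive rows
        SELECT SUM(MBfree)          AS TotalMBFree,
               SUM(MBfree) / 1024.0 AS TotalGBFree
        FROM   @drives;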

    Read the article

  • Google strengthens the security of its hosted services with certificate-based authentication for Cloud Storage and the Prediction API

    Google strengthens the security of its hosted services with certificate-based authentication for Cloud Storage, the Prediction API and URL Shortener. Google has updated its Cloud services for developers, reinforcing their security. The firm's hosted services can now communicate with applications that use certificate-based service accounts for authentication. For example, a web application's request to the Google Cloud Storage service can be authenticated with a certificate instead of a shared key. Security certificates offer a stronger authentication method...

    Read the article

  • Error mounting CloudDrive snapshot in Azure

    - by Dave
    Hi, I've been running a cloud drive snapshot in dev for a while now with no probs. I'm now trying to get this working in Azure. I can't for the life of me get it to work. This is my latest error: Microsoft.WindowsAzure.Storage.CloudDriveException: Unknown Error HRESULT=D000000D ---> Microsoft.Window.CloudDrive.Interop.InteropCloudDriveException: Exception of type 'Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDriveException' was thrown. at ThrowIfFailed(UInt32 hr) at Microsoft.WindowsAzure.CloudDrive.Interop.InteropCloudDrive.Mount(String url, SignatureCallBack sign, String mount, Int32 cacheSize, UInt32 flags) at Microsoft.WindowsAzure.StorageClient.CloudDrive.Mount(Int32 cacheSize, DriveMountOptions options) Any idea what is causing this? I'm running both the WorkerRole and Storage in Azure so it's nothing to do with the dev simulation environment disconnect. This is my code to mount the snapshot: CloudDrive.InitializeCache(localPath.TrimEnd('\\'), size); var container = _blobStorage.GetContainerReference(containerName); var blob = container.GetPageBlobReference(driveName); CloudDrive cloudDrive = _cloudStorageAccount.CreateCloudDrive(blob.Uri.AbsoluteUri); string snapshotUri; try { snapshotUri = cloudDrive.Snapshot().AbsoluteUri; Log.Info("CloudDrive Snapshot = '{0}'", snapshotUri); } catch (Exception ex) { throw new InvalidCloudDriveException(string.Format( "An exception has been thrown trying to create the CloudDrive '{0}'. This may be because it doesn't exist.", cloudDrive.Uri.AbsoluteUri), ex); } cloudDrive = _cloudStorageAccount.CreateCloudDrive(snapshotUri); Log.Info("CloudDrive created: {0}", snapshotUri, cloudDrive); string driveLetter = cloudDrive.Mount(size, DriveMountOptions.None); The .Mount() method at the end is what's now failing. Please help as this has me royally stumped! Thanks in advance. Dave

    Read the article

  • javascript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey webapp which will let the user add new records to the survey while offline and upload them when the browser reconnects to the server. We've identified that this will need offline storage, and hence Google Gears seems an obvious choice (we understand that Adobe Flash has offline storage, but we're not sure that's the best way). I am aware of the Dojo Offline JavaScript toolkit, which uses Google Gears for the underlying functionality; however, Dojo Offline is not part of the Dojo toolkit after version 1.3 (Dojo is currently 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost a year) because they think HTML5 is the way forward. Hence, we're looking for a higher abstraction on top of the Google Gears engine today, one which will in the future switch the underlying engine to HTML5 if the browser supports the HTML5 standards. We'd love to use Dojo, but Dojo Offline has been discontinued and we'd prefer something that will be maintained for some time. What are possible good strategies and JS toolkits/libraries to use for building this webapp? Please advise.
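
    One approach, given that both the Gears database and HTML5 storage hide comfortably behind a very small interface: write the thin wrapper yourself, prefer HTML5 Web Storage when the browser provides it, and fall back to Gears otherwise. A sketch (the key/value schema is an assumption, and the Gears branch assumes gears_init.js has been loaded):

        var offlineStore = (function () {
          if (window.localStorage) {                       // HTML5 path
            return {
              put: function (k, v) { localStorage.setItem(k, JSON.stringify(v)); },
              get: function (k)    { return JSON.parse(localStorage.getItem(k)); }
            };
          }
          if (window.google && google.gears) {             // Gears fallback
            var db = google.gears.factory.create('beta.database');
            db.open('survey');
            db.execute('CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)');
            return {
              put: function (k, v) {
                db.execute('INSERT OR REPLACE INTO kv VALUES (?, ?)', [k, JSON.stringify(v)]);
              },
              get: function (k) {
                var rs = db.execute('SELECT v FROM kv WHERE k = ?', [k]);
                var v = rs.isValidRow() ? JSON.parse(rs.field(0)) : null;
                rs.close();
                return v;
              }
            };
          }
          throw new Error('no offline storage available');
        })();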

    Read the article

  • Xap file contents changes if built in Visual Studio or build server

    - by arch
    I'm using MEF with my Silverlight 4 app to dynamically load xap files. To optimize this process, I've removed various assemblies from my xaps since I know they've already been loaded by the base xap. This reduces the size of my dynamically loaded xaps. I accomplished this by setting the "Copy Local" flag for each assembly reference to "false". This all seems to work fine when I build in Visual Studio 2010 - my xaps are much smaller. However, when the same projects are built by the build server, all the excluded references are once again in the xap file hence tripling the size of the xap. I've read several blogs/articles regarding similar experiences but no resolution. Very frustrating - any help is appreciated.
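
    One hedged guess: "Copy Local" is only persisted to the .csproj when it differs from the default MSBuild infers, and that default depends on what assembly resolution finds on the machine, so a build server with different SDK/GAC contents can silently flip it back to true. Making the intent explicit in the project file removes the guesswork (the assembly name below is just an example):

        <!-- inside the .csproj, for each reference that must stay out of the xap -->
        <Reference Include="System.Windows.Controls">
          <Private>False</Private>
        </Reference>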

    Read the article
