Search Results

Search found 2714 results on 109 pages for 'extremely frustrated'.

Page 54/109 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently.

    Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent: when you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the 4KB limitation on cookie storage, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now.

    Why use HTML5 Local Storage?

    I use HTML5 local storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference

    The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript. (You can read more about the application and download its source code here.) When you open the application for the first time, all of the entries (300+ of them) are transferred from the server to the browser and stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser.

    The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application: interacting with data on the client is almost always faster than interacting with the same data on the server.

    Retrieving Data from the Server

    In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps:

    1. Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entries.
    2. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entry.
    3. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:

        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Web;
        using JavaScriptReference.Models;

        namespace JavaScriptReference.Services
        {
            [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            public class EntryService : DataService<ReferenceDBEntities>
            {
                // This method is called only once to initialize service-wide policies.
                public static void InitializeService(DataServiceConfiguration config)
                {
                    config.UseVerboseErrors = true;
                    config.SetEntitySetAccessRule("*", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }

                // Define a change interceptor for the Entries entity set.
                [ChangeInterceptor("Entries")]
                public void OnChangeEntries(Entry entry, UpdateOperations operations)
                {
                    if (!HttpContext.Current.Request.IsAuthenticated)
                    {
                        throw new DataServiceException("Cannot update reference unless authenticated.");
                    }
                }
            }
        }

    The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceDBEntities>; because it does, the data service exposes the contents of the ReferenceDB database. In the code above, I defined a ChangeInterceptor to prevent unauthenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes.

    After you expose data through a WCF Data Service, you can use jQuery to retrieve it by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service:

        $.ajax({
            dataType: "json",
            url: "/Services/EntryService.svc/Entries",
            success: function (result) {
                var data = callback(result["d"]);
            }
        });

    Notice that you must unwrap the data using result["d"]. After you unwrap the data, you have a JavaScript array of the entries.

    I'm transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred as JSON. Here is a fragment:

        { "d" : [
          {
            "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" },
            "Id": 1,
            "Name": "Global",
            "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
            "Syntax": "object",
            "ShortDescription": "Contains global variables and functions",
            "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the window object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n",
            "LastUpdated": "634298578273756641",
            "IsDeleted": false,
            "OwnerId": null
          },
          {
            "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" },
            "Id": 2,
            "Name": "eval(string)",
            "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5",
            "Syntax": "function",
            "ShortDescription": "Evaluates and executes JavaScript code dynamically",
            "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>",
            "LastUpdated": "634298580913817644",
            "IsDeleted": false,
            "OwnerId": 1
          }
        … ]}

    I worried about the amount of time that it would take to transfer the records.
    According to Google Chrome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. Five seconds is a small price to pay to avoid performing any server fetches of the data in the future. Here are the estimated times using different types of connections, according to Fiddler: using a modem, it takes about 33 seconds to download the database. Thirty-three seconds is a significant chunk of time, so I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem.

    Adding Data to HTML5 Local Storage

    After the JavaScript entries are retrieved from the server, they are stored in HTML5 local storage. Here's the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx

    You access local storage through the window.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage:

        <script type="text/javascript">
            window.localStorage.setItem("message", "Hello World!");
        </script>

    You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL+SHIFT+I in Chrome) to view items added to local storage.

    After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method:

        <script type="text/javascript">
            var message = window.localStorage.getItem("message");
        </script>

    You can only add strings to local storage, not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this:

        function Storage() {
            this.get = function (name) {
                return JSON.parse(window.localStorage.getItem(name));
            };
            this.set = function (name, value) {
                window.localStorage.setItem(name, JSON.stringify(value));
            };
            this.clear = function () {
                window.localStorage.clear();
            };
        }

    If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this:

        var store = new Storage();

        // Add an array to storage
        var products = [
            { name: "Fish", price: 2.33 },
            { name: "Bacon", price: 1.33 }
        ];
        store.set("products", products);

        // Retrieve items from storage
        var products = store.get("products");

    Modern browsers support the JSON object natively. If you need the script above to work with older browsers, then you should download the JSON2.js library from https://github.com/douglascrockford/JSON-js. The JSON2 library will use the native JSON object if a browser already supports it.

    Merging Server Changes with Browser Local Storage

    When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser, and two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server, and only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser.
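    Putting those pieces together, the client-side check for updates might look something like the following sketch. (This is illustrative, not lifted from the application's source: the fetchUpdates name and the callback wiring are my assumptions; the Storage wrapper is the one shown above.)

        var store = new Storage();

        function fetchUpdates(callback) {
            // The version timestamp saved the last time we synced with the server
            var lastUpdated = store.get("entriesLastUpdated") || 0;

            $.ajax({
                dataType: "json",
                url: "/Services/EntryService.svc/Entries?$filter=(LastUpdated gt " + lastUpdated + "L)",
                success: function (result) {
                    // Unwrap the OData payload; these are only the changed entries
                    callback(result["d"]);
                }
            });
        }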
    The OData query to get the latest updates looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L)

    With the URL encoding removed, the query looks like this:

        http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L)

    This query returns only those entries where the value of LastUpdated is greater than 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:

        merge: function (oldEntries, newEntries) {
            // concat (this performs the add)
            oldEntries = oldEntries || [];
            var mergedEntries = oldEntries.concat(newEntries);

            // sort
            this.sortByIdThenLastUpdated(mergedEntries);

            // prune duplicates (this performs the update)
            mergedEntries = this.pruneDuplicates(mergedEntries);

            // delete
            mergedEntries = this.removeIsDeleted(mergedEntries);

            // sort
            this.sortByName(mergedEntries);

            return mergedEntries;
        },

    The contents of local storage are then updated with the merged entries.

    I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method, using server-side JavaScript; I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx

    One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item, but I quickly discovered that working with dates in JSON is a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column; however, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and creating the value manually whenever a record is updated. I overrode the SaveChanges() method to look something like this:

        public override int SaveChanges(SaveOptions options)
        {
            var changes = this.ObjectStateManager.GetObjectStateEntries(
                EntityState.Modified | EntityState.Added | EntityState.Deleted);
            foreach (var change in changes)
            {
                var entity = change.Entity as IEntityTracking;
                if (entity != null)
                {
                    entity.LastUpdated = DateTime.Now.Ticks;
                }
            }
            return base.SaveChanges(options);
        }

    Notice that I assign DateTime.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted.

    Summary

    After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data, then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • C#/.NET Little Wonders: The Useful But Overlooked Sets

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. Today we will be looking at two set implementations in the System.Collections.Generic namespace: HashSet<T> and SortedSet<T>. Even though most people think of sets as mathematical constructs, they are actually very useful classes that can be used to help make your application more performant if used appropriately.

    A Background From Math

    In mathematical terms, a set is an unordered collection of unique items. In other words, the set {2,3,5} is identical to the set {3,5,2}. In addition, the set {2, 2, 4, 1} would be invalid because it would have a duplicate item (2). You can also perform set arithmetic on sets, such as:

    - Intersection: The intersection of two sets is the collection of elements common to both. Example: the intersection of {1,2,5} and {2,4,9} is the set {2}.
    - Union: The union of two sets is the collection of unique items present in either or both sets. Example: the union of {1,2,5} and {2,4,9} is {1,2,4,5,9}.
    - Difference: The difference of two sets is the removal of all items from the first set that are common between the sets. Example: the difference of {1,2,5} and {2,4,9} is {1,5}.
    - Superset: One set is a superset of a second set if it contains all elements that are in the second set. Example: the set {1,2,5} is a superset of {1,5}.
    - Subset: One set is a subset of a second set if all of its elements are contained in the second set. Example: the set {1,5} is a subset of {1,2,5}.

    If We're Not Doing Math, Why Do We Care?

    Now, you may be thinking: why bother with the set classes in C# if you have no need for mathematical set manipulation? The answer is simple: they are extremely efficient ways to determine membership in a collection.

    For example, let's say you are designing an order system that tracks the price of a particular equity and, once it reaches a certain point, will trigger an order. Since there are tens of thousands of equities on the markets, you don't want to track market data for every ticker, as that would be a waste of time and processing power for symbols you don't have orders for. Thus, we want to subscribe to the stock symbol for an equity order only if it is a symbol we are not already subscribed to. Every time a new order comes in, we will check the list of subscriptions to see if the new order's stock symbol is in that list. If it is, great, we already have that market data feed! If not, then and only then should we subscribe to the feed for that symbol.

    So far so good: we have a collection of symbols, and we want to see if a symbol is present in that collection and, if not, add it. This really is the essence of set processing, but for the sake of comparison, let's say you use a list instead:

        // class that handles our order processing service
        public sealed class OrderProcessor
        {
            // contains list of all symbols we are currently subscribed to
            private readonly List<string> _subscriptions = new List<string>();

            ...
        }

    Now, whenever you add a new order, it would look something like:

        public PlaceOrderResponse PlaceOrder(Order newOrder)
        {
            // do some validation, of course...

            // check to see if already subscribed, if not add a subscription
            if (!_subscriptions.Contains(newOrder.Symbol))
            {
                // add the symbol to the list
                _subscriptions.Add(newOrder.Symbol);

                // do whatever magic is needed to start a subscription for the symbol
            }

            // place the order logic!
        }

    What's wrong with this?
    In short: performance! Finding an item inside a List<T> is a linear – O(n) – operation, which is not a very performant way to find whether an item exists in a collection. (I used to teach algorithms and data structures in my spare time at a local university, and when you began talking about big-O notation you could immediately see eyes glossing over, as if it were pure, useless theory that would never apply in the real world. I did, and still do, believe it is something worth understanding well to make the best choices in computer science.)

    Let's think about this: a linear operation means that as the number of items increases, the time that it takes to perform the operation tends to increase in a linear fashion. Put crudely, this means that if you double the collection size, you might expect the operation to take something on the order of twice as long. Linear operations tend to be bad for performance because they mean that to perform some operation on a collection, you must potentially "visit" every item in the collection. Consider finding an item in a List<T>: if you want to see if the list has an item, you must potentially check every item in the list before you find it or determine it's not there.

    Now, we could of course sort our list and then perform a binary search on it, but sorting is typically linear-logarithmic – O(n * log n) – complexity and could involve temporary storage, so performing a sort after each add would probably add more time. As an alternative, we could use a SortedList<TKey, TValue>, which sorts the list on every Add(), but this has a similar level of complexity to move the items and also requires a key and a value, and in our case the key is the value.

    This is why sets tend to be the best choice for this type of processing: they don't rely on separate keys and values for ordering – so they save space – and they typically don't care about ordering – so they tend to be extremely performant. The .NET BCL (Base Class Library) has had the HashSet<T> since .NET 3.5, but at that time it did not implement the ISet<T> interface. As of .NET 4.0, HashSet<T> implements ISet<T>, and a new set, SortedSet<T>, was added that gives you a set with ordering.

    HashSet<T> – For Unordered Storage of Sets

    When used right, HashSet<T> is a beautiful collection; you can think of it as a simplified Dictionary<T,T>, that is, a Dictionary where the TKey and TValue refer to the same object. This is really an oversimplification, but logically it makes sense. I've actually seen people code a Dictionary<T,T> where they store the same thing in the key and the value, and that's just inefficient because of the extra storage to hold both.

    As its name implies, the HashSet<T> uses a hashing algorithm to find the items in the set, which means it does take up some additional space, but it has lightning-fast lookups! Compare the times below between HashSet<T> and List<T>:

        Operation    HashSet<T>   List<T>
        Add()        O(1)         O(1) at end; O(n) in middle
        Remove()     O(1)         O(n)
        Contains()   O(1)         O(n)

    Now, these times are amortized and represent the typical case. In the very worst case, the operations could be linear if they involve a resizing of the collection, but this is true for both the List and the HashSet, so it's less of an issue when comparing the two. The key thing to note is that in the general case, HashSet is constant time for adds, removes, and contains! This means that no matter how large the collection is, it takes roughly the same amount of time to find an item or determine that it's not in the collection. Compare this to the List, where almost any add or remove must rearrange potentially all the elements, and where finding an item in an unsorted list means searching every item.

    So as you can see, if you want an unordered collection with very fast lookup and manipulation, the HashSet is a great choice. And since HashSet<T> implements ICollection<T> and IEnumerable<T>, it supports nearly all the same basic operations as List<T> and can use the System.Linq extension methods as well.

    All we have to do to switch from a List<T> to a HashSet<T> is change our declaration. Since List and HashSet support many of the same members, chances are we won't need to change much else:

        public sealed class OrderProcessor
        {
            private readonly HashSet<string> _subscriptions = new HashSet<string>();

            // ...

            public PlaceOrderResponse PlaceOrder(Order newOrder)
            {
                // do some validation, of course...

                // check to see if already subscribed, if not add a subscription
                if (!_subscriptions.Contains(newOrder.Symbol))
                {
                    // add the symbol to the list
                    _subscriptions.Add(newOrder.Symbol);

                    // do whatever magic is needed to start a subscription for the symbol
                }

                // place the order logic!
            }

            // ...
        }

    Notice, we didn't change any code other than the declaration for _subscriptions to be a HashSet<T>. Thus, we can pick up the performance improvements in this case with minimal code changes.
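    One small bonus the article doesn't call out: HashSet<T>.Add returns a bool telling you whether the item was actually added, so the Contains-then-Add pair can be collapsed into a single call. Here is a sketch against the article's same OrderProcessor skeleton (the return value is elided just as in the article's pseudocode):

        public PlaceOrderResponse PlaceOrder(Order newOrder)
        {
            // do some validation, of course...

            // Add returns false if the symbol is already present, so this one
            // call is both the membership test and the insert
            if (_subscriptions.Add(newOrder.Symbol))
            {
                // do whatever magic is needed to start a subscription for the symbol
            }

            // place the order logic!
        }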
    SortedSet<T> – Ordered Storage of Sets

    Just like HashSet<T> is logically similar to Dictionary<T,T>, SortedSet<T> is logically similar to SortedDictionary<T,T>. The SortedSet can be used when you want to perform set operations on a collection but also want to maintain that collection in sorted order. Now, this is not necessarily mathematically relevant, but if your collection's needs include ordering, this is the set to use.

    The SortedSet appears to be implemented internally as a binary tree (possibly a red-black tree). Since binary trees are dynamic, non-contiguous structures (unlike List and SortedList), inserts and deletes do not involve rearranging elements, only changing the linking of the nodes. There is some overhead in keeping the nodes in order, but it is much smaller than in a contiguous-storage collection like a List<T>. Let's compare the three:

        Operation    HashSet<T>   SortedSet<T>   List<T>
        Add()        O(1)         O(log n)       O(1) at end; O(n) in middle
        Remove()     O(1)         O(log n)       O(n)
        Contains()   O(1)         O(log n)       O(n)

    The MSDN documentation seems to indicate that operations on SortedSet are O(1), but this is inconsistent with its implementation and seems to be a documentation error. There's actually a separate MSDN document (here) on SortedSet that indicates it is, in fact, logarithmic in complexity. In layman's terms, logarithmic means you can double the collection size and typically add only a single extra "visit" to an item in the collection. Contrast that with List<T>'s linear operations, where doubling the size of the collection doubles the "visits" to items in the collection. This is very good performance! It's still not as performant as HashSet<T>, which always visits just one item (amortized), but in exchange you get sorting.

    Consider the following table. This is just illustrative data for the relative complexities, but it's enough to get the point:

        Collection Size   O(1) Visits   O(log n) Visits   O(n) Visits
        1                 1             1                 1
        10                1             4                 10
        100               1             7                 100
        1000              1             10                1000

    Notice that the logarithmic – O(log n) – visit count grows very slowly compared to the linear – O(n) – visit count. This is because, since the collection is sorted, the search can check the item in the middle, determine which half of the collection the data is in, and discard the other half (a binary search). So, if you need your set to be sorted, you can use SortedSet<T> just like HashSet<T> and gain sorting for a small performance hit; it's still faster than a List<T>.

    Unique Set Operations

    Now, if you do want to perform more set-like operations, both implementations of ISet<T> support the following, which map back to the mathematical set operations described before (see the sketch after this list):

    - IntersectWith() – Performs the set intersection of two sets. Modifies the current set so that it contains only elements also in the second set.
    - UnionWith() – Performs a set union of two sets. Modifies the current set so that it contains all unique elements present in either the current set or the second set.
    - ExceptWith() – Performs a set difference of two sets. Modifies the current set so that it removes all elements present in the second set.
    - IsSupersetOf() – Checks if the current set is a superset of the second set.
    - IsSubsetOf() – Checks if the current set is a subset of the second set.

    For more information on the set operations themselves, see the MSDN description of ISet<T> (here).
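    To make those concrete, here is a small demo of my own (not from the article) using the example sets from the math refresher at the top. Note that the ...With() methods mutate the set they are called on, which is why each operation below starts from a fresh copy:

        using System;
        using System.Collections.Generic;

        class SetOperationsDemo
        {
            static void Main()
            {
                var second = new[] { 2, 4, 9 };

                var intersection = new HashSet<int> { 1, 2, 5 };
                intersection.IntersectWith(second);   // intersection is now { 2 }

                var union = new HashSet<int> { 1, 2, 5 };
                union.UnionWith(second);              // union is now { 1, 2, 4, 5, 9 }

                var difference = new HashSet<int> { 1, 2, 5 };
                difference.ExceptWith(second);        // difference is now { 1, 5 }

                var set = new HashSet<int> { 1, 2, 5 };
                Console.WriteLine(set.IsSupersetOf(new[] { 1, 5 }));           // True
                Console.WriteLine(new HashSet<int> { 1, 5 }.IsSubsetOf(set));  // True
            }
        }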
    What Sets Don't Do

    Don't get me wrong, sets are not silver bullets. You don't really want to use a set when you need separate key-to-value lookups; that's what the IDictionary implementations are best for.

    Sets also don't store temporal add order. That is, if you are always adding items to the end of a list, your list is ordered in terms of when items were added to it. This is something the sets don't do naturally (though you could use a SortedSet with an IComparer over a DateTime, that's overkill), but List<T> can.

    Also, List<T> allows indexing, which is a blazingly fast way to iterate through items in the collection. Iterating over all the items in a List<T> is generally much, much faster than iterating over a set.

    Summary

    Sets are an excellent tool for maintaining a lookup table where the item is both the key and the value. In addition, if you have need of the mathematical set operations, the C# sets support those as well. The HashSet<T> is the set of choice if you want the fastest possible lookups but don't care about order. In contrast, SortedSet<T> will give you a sorted collection at a slight reduction in performance.

    Technorati Tags: C#,.Net,Little Wonders,BlackRabbitCoder,ISet,HashSet,SortedSet

    Read the article

  • C++: C++ Primer (Stanley Lippman) or The C++ Programming Language (Special Edition)

    - by Kim
    I have a Computer Science degree (from a long, long time ago). I know Java and OOP, but I am now trying to pick up C++. I have done C and, of course, data structures using C or Pascal. I have started reading Bjarne Stroustrup's book (The C++ Programming Language - Special Edition) but find it extremely difficult, especially the sections where I have no exposure to the material, such as the recursive descent parser (chapter 6). In terms of the language itself I don't foresee a problem, but as mentioned, I do have problems because those topics are usually covered in a Master's degree program, such as compiler construction. I just bought a book called C++ Primer (Stanley Lippman), which I hear is a very good book for C++. The only setback is that, of course, it can't match the amount of information from the original C++ creator. Please advise. Thanks.

    Read the article

  • Adding array of images to Firebase using AngularFire

    - by user2833143
    I'm trying to allow users to upload images and then store the images, base64 encoded, in Firebase. I'm trying to make my Firebase structured as follows:

        |--Feed
        |----Feed Item
        |------username
        |------epic
        |---------name,etc.
        |------images
        |---------image1, image 2, etc.

    However, I can't get the remote data in Firebase to mirror the local data in the client. When I print the array of images to the console in the client, it shows that the uploaded images have been added to the array of images... but these images never make it to Firebase. I've tried doing this multiple ways to no avail: implicit syncing, explicit syncing, and a mixture of both. I can't for the life of me figure out why this isn't working, and I'm getting pretty frustrated. Here's my code:

        $scope.complete = function(epicName) {
            for (var i = 0; i < $scope.activeEpics.length; i++) {
                if ($scope.activeEpics[i].id === epicName) {
                    var epicToAdd = $scope.activeEpics[i];
                }
            }
            var epicToAddToFeed = {
                epic: epicToAdd,
                username: $scope.currUser.username,
                userImage: $scope.currUser.image,
                props: 0,
                images: ['empty']
            };

            // connect to feed data
            var feedUrl = "https://epicly.firebaseio.com/feed";
            $scope.feed = angularFireCollection(new Firebase(feedUrl));

            // add epic
            var added = $scope.feed.add(epicToAddToFeed).name();

            // connect to added epic in firebase
            var addedUrl = "https://epicly.firebaseio.com/feed/" + added;
            var addedRef = new Firebase(addedUrl);
            angularFire(addedRef, $scope, 'added').then(function() {
                // for each image uploaded, add image to the added epic's array of images
                for (var i = 0, f; f = $scope.files[i]; i++) {
                    var reader = new FileReader();
                    reader.onload = (function(theFile) {
                        return function(e) {
                            var filePayload = e.target.result;
                            $scope.added.images.push(filePayload);
                        };
                    })(f);
                    reader.readAsDataURL(f);
                }
            });
        }

    EDIT: Figured it out, had to connect to "https://epicly.firebaseio.com/feed/" + added + "/images"
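    For anyone landing here with the same problem, a sketch of what the fix described in that edit might look like in code (the variable names follow the question; the angularFire wiring is an assumption based on the original snippet):

        // Sync the images array at its own /images path rather than the parent item
        var imagesRef = new Firebase("https://epicly.firebaseio.com/feed/" + added + "/images");
        angularFire(imagesRef, $scope, 'images').then(function() {
            // read the uploaded files and push each data URL exactly as before,
            // but into $scope.images so the changes sync to the /images node
        });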

    Read the article

  • Scrollbar problem with jquery ui dialog in Chrome and Safari

    - by alexis.kennedy
    I'm using the jQuery UI dialog with modal=true. In Chrome and Safari, this disables scrolling via the scroll bar and cursor keys (scrolling with the mouse wheel and Page Up/Down still works). This is a problem if the dialog is too tall to fit on one page; users on a laptop get frustrated. Someone raised this three months ago on the jQuery bug tracker (http://dev.jqueryui.com/ticket/4671); it doesn't look like fixing it is a priority. :) So does anyone (i) have a fix for this, or (ii) have a suggested workaround that would give a decent usability experience? I'm experimenting with mouseover/scrollTo on bits of the form, but it's not a great solution. :(

    EDIT: Props to Rowan Beentje (who's not on SO, afaict) for finding a solution to this. jQuery UI prevents scrolling by capturing the mousedown/mouseup events. So this:

        $("#dialogId").dialog({
            open: function(event, ui) {
                window.setTimeout(function() {
                    jQuery(document)
                        .unbind('mousedown.dialog-overlay')
                        .unbind('mouseup.dialog-overlay');
                }, 100);
            },
            modal: true
        });

    seems to fix it. Use at your own risk; I don't know what other non-modal behaviour unbinding this stuff might allow.

    Read the article

  • Sending a Soap Header with a WSDL Soap Request with PHP

    - by Josh Smeaton
    I'm extremely new to SOAP, and I'm trying to implement a quick test client in PHP that consumes an ASP.NET web service. The web service relies on a SOAP header that contains authorization parameters. Is it possible to send the auth header along with a SOAP request when using WSDL? My code:

        <?php
        $service = new SoapClient("http://localhost:16840/CTI.ConfigStack.WS/ATeamService.asmx?WSDL");
        $service->AddPendingUsers($users, 3);

    And the example web service (C#):

        [SoapHeader("AuthorisationHeader")]
        [WebMethod]
        public void AddPendingUsers(List<PendingUser> users, int templateUserId)
        {
            ateamService.AddPendingUsers(users, templateUserId, AuthorisationHeader.UserId);
        }

    How would the auth header be passed in this context? Or will I need to do a low-level __soapCall() to pass in the header? Also, am I invoking the correct SOAP call within PHP?
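    In case it helps future readers: PHP's built-in SoapHeader class plus SoapClient::__setSoapHeaders() is the usual way to attach a header before the call. A sketch, where the namespace URI and the header's field names are assumptions (take the real ones from the service's WSDL; http://tempuri.org/ is only the ASP.NET default):

        <?php
        $service = new SoapClient("http://localhost:16840/CTI.ConfigStack.WS/ATeamService.asmx?WSDL");

        // Build the authorization header and attach it to all subsequent calls
        $headerBody = array('UserId' => 42);
        $header = new SoapHeader('http://tempuri.org/', 'AuthorisationHeader', $headerBody);
        $service->__setSoapHeaders(array($header));

        $service->AddPendingUsers($users, 3);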

    Read the article

  • What useful things could a JavaScript library provide?

    - by Delan Azabani
    In many of my answers I repeatedly urge users not to use JavaScript libraries like jQuery. I even wrote a blog post about the problems that using a library creates. Some of these problems include holding back native standards development, keeping users comfortably on IE, and abstracting the developer away from real JavaScript. If a site doesn't require IE as part of its audience, then how are libraries useful? The other popular browsers share extremely similar implementations and work well with things like JavaScript 1.6 arrays and AJAX. This is not a troll question; I'm genuinely wondering what they're useful for.

    Read the article

  • SmartGWT Canvas width problem

    - by Doug
    I am having problems with showing my entire SmartGWT Canvas on my initial page. I've stripped everything out and am left with an extremely basic EntryPoint to illustrate my issue. When I just create a Canvas and add it to the root panel, I get a horizontal scrollbar in my browser. Can anyone explain why, and what I can do to have the Canvas sized to the width of the window? Thanks in advance.

        public class TestModule implements EntryPoint {

            protected Canvas view = null;

            /**
             * This is the entry point method.
             */
            public void onModuleLoad() {
                view = new Canvas();
                view.setWidth100();
                view.setHeight100();
                view.setShowEdges(true);
                RootPanel.get().add(view);
            }
        }

    Read the article

  • C# System.Data.SQLite Designer Code

    - by Nathan
    I've been messing around with the SQLite designer in Visual Studio 2008, and I have noticed that the generated insert/update statements run extremely slowly. Example: I have a data table with four columns and 5700 rows; it took ~5 minutes to insert the data into the database table. However, I wrote my own database connection and insert methods using parameters and a single transaction, and the same 5700 rows were inserted in under 1 second. Why is the generated code so slow, and what is the benefit of even using it? Thanks. Nathan
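    For comparison, the hand-rolled approach described above (parameterized inserts inside a single transaction) might look something like this sketch; the connection string, table, and column names are illustrative assumptions, not taken from the generated designer code:

        using System.Data.SQLite;

        class BulkInsertDemo
        {
            static void Main()
            {
                string[] names = { "Fish", "Bacon" };
                double[] prices = { 2.33, 1.33 };

                using (var conn = new SQLiteConnection("Data Source=test.db3"))
                {
                    conn.Open();
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText =
                            "INSERT INTO Products (Name, Price) VALUES (@name, @price)";
                        var pName = cmd.Parameters.Add("@name", System.Data.DbType.String);
                        var pPrice = cmd.Parameters.Add("@price", System.Data.DbType.Double);

                        for (int i = 0; i < names.Length; i++)
                        {
                            pName.Value = names[i];
                            pPrice.Value = prices[i];
                            cmd.ExecuteNonQuery(); // reuses the same prepared statement
                        }

                        tx.Commit(); // one commit for all rows is where the speedup comes from
                    }
                }
            }
        }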

    Read the article

  • WCF client service call very slow

    - by JohnIdol
    I have a WCF client (running on Win7) pointing to a WebSphere service. All is good from a test harness, but when my calls to the service originate from an HttpHandler or a standard aspx page, the calls are extremely slow (they take up to 10 times longer), and not just the first time. If I send the exact same envelopes through SoapUI or without an HttpHandler, it's all good. If the envelope is the same, the only thing left is the HTTP header. What should I be looking for, and is there some config setting that might do the trick? Any help appreciated!

    Read the article

  • VS 11 Beta Cannot start process because a file name has not been provided

    - by Leniel Macaferi
    This is what I'm getting when I build my test project. Because of it, I'm unable to run my tests, since they're not being discovered by VS. See the message "Unexpected error detected. Check the Tests Output Pane for details." at the bottom of the window. Now, if you look at the Tests output pane, you'll have no clue what the problem is. This is extremely helpful... :) I know VS 11 is in beta, but it used to work... I've already restarted VS, but it didn't work after that either. Any ideas about what's going on? Could it be a bug somewhere? Note: the only thing I can think of is related to a VS 2010 uninstall I did some time ago. Maybe it removed some necessary bits. Beats me...

    Read the article

  • Problem with importing an mdf created with SQL Server Express 2008 into SQL Server 2005

    - by user252160
    The question is probably extremely easy to resolve, but I need to resolve it so I can carry on with my project. I am using SQL Server Express 2008 at home, and I've been working on an ASP.NET MVC app that stores my DB in an .mdf file in the project's folder. The problem is that the SQL Server in the uni labs is SQL Server 2005, and when I try to open the .mdf file with the VS Server Explorer, it says that the version of the .mdf file is newer than the server can accept. The only option that comes to my mind is exporting the DB as a .sql file, just like I've done a thousand times with phpMyAdmin. The thing is, SQL Server Management Studio Express is not the most usable tool in the world, and for some strange reason all the articles I could find on Google were irrelevant. Please help.

    Read the article

  • WPF - Render text in Viewport3D

    - by eWolf
    I want to present up to 300 strings (just a few words each) in a Viewport3D - fast! I want to render them at different Z positions and zoom in and out fluidly. The ways I have found so far to render text in a Viewport3D:

    - Put a TextBlock in a Viewport2DVisual3D.
    - This guy's PlanarText class.
    - The same guy's SolidText class.
    - Create my own 2D panel and align TextBlocks on it, calling InvalidateArrange() every time I update the camera position.

    All of these are extremely slow and nowhere near fluid zooming, even with only 10 strings. Does anyone have a solution for this handy? It's got to be possible to render some text in a Viewport3D without waiting seconds!

    Read the article

  • Brute force characters into a textbox in C#

    - by Fred Dunly
    Hey everyone, I am VERY new to programming, and the only language I know is C#, so I will have to stick with that... I want to make a program that "tests passwords" to see how long they would take to break with a basic brute-force attack. So what I did was make two text boxes (textBox1 and textBox2) and wrote the program so that if the text boxes had matching input, a "correct password" label would appear. But I want to write the program so that textBox2 will run a brute-force algorithm in it, and when it comes across the correct password, it will stop. I REALLY need help, and if you could just post my attached code with the correct additions in it, that would be great. The program so far is extremely simple, but I am very new to this. Thanks in advance.

        private void textBox2_TextChanged(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            if (textBox2.Text == textBox1.Text)
            {
                label1.Text = "Password Correct";
            }
            else
            {
                label1.Text = "Password Wrong";
            }
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }
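    Since the question is really about the brute-force loop itself, here is a minimal sketch of one (mine, not from the original post). It assumes a lowercase a-z alphabet and compares candidates directly in code, treating each candidate as an odometer over the character set, rather than typing them into the TextBox:

        private static string BruteForce(string target, int maxLength)
        {
            const string alphabet = "abcdefghijklmnopqrstuvwxyz";
            for (int length = 1; length <= maxLength; length++)
            {
                var indices = new int[length];
                var candidate = new char[length];
                while (true)
                {
                    for (int i = 0; i < length; i++)
                    {
                        candidate[i] = alphabet[indices[i]];
                    }
                    string attempt = new string(candidate);
                    if (attempt == target)
                    {
                        return attempt; // found it, so stop here
                    }

                    // advance the "odometer": rightmost position first, carrying left
                    int pos = length - 1;
                    while (pos >= 0 && ++indices[pos] == alphabet.Length)
                    {
                        indices[pos] = 0;
                        pos--;
                    }
                    if (pos < 0)
                    {
                        break; // exhausted every string of this length
                    }
                }
            }
            return null; // not found up to maxLength
        }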

    Read the article

  • iPhone GameKit Picker Fundamental Connection Issues

    - by Kyle
    Hello... This is one of the more interesting things I've seen in iPhone development. The following question has nothing to do with code, because I'm using an SDK example from Apple (the Tanks example). I have a 3GS iPhone and a 3G iPhone, both showing the GameKit picker screen. Each will eventually show the other phone in range just fine (it does take about 25 seconds, though). If I pick the 3G iPhone with my 3GS, the 3G gets a connection request and a connection can be made. However, it ABSOLUTELY will not work in the other direction. Both phones have Bluetooth switched on, and both phones are running the latest OS version. The simple fact is I'm using the SDK example, and it just doesn't work when the 3G issues the connection. Is there any way to explain this extremely odd behavior? Thanks a lot for reading!

    Read the article

  • Using Raw SQL with Doctrine

    - by Levi Hackwith
    I have some extremely complex queries that I need in order to generate a report in my application. I'm using symfony as my framework and Doctrine as my ORM. My question is this: what is the best way to pass highly complex SQL queries directly to Doctrine without converting them to the Doctrine Query Language? I've been reading about the RawSql extension, but it appears that you still need to pass the query in sections (like from()). Is there anything for just dumping in a bunch of raw SQL commands?
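    If it's useful to anyone reading along: one low-tech option in Doctrine 1.x is to drop down to the PDO handle that backs the Doctrine connection and run the report SQL directly. A sketch, with a placeholder query (and note this bypasses Doctrine's hydration entirely, which is often what you want for reports):

        <?php
        $conn = Doctrine_Manager::connection();
        $pdo  = $conn->getDbh(); // the raw PDO instance behind the Doctrine connection

        // Placeholder for one of the complex report queries
        $sql  = 'SELECT r.id, SUM(r.total) AS total FROM reports r GROUP BY r.id';
        $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);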

    Read the article

  • Ruby Irb reacts strangely to control keys

    - by eegg
    Hi. I'm (extremely) new to Ruby, having started today. I just moved from my system's Ruby 1.8 installation to Ruby 1.9, compiled from source. In doing so, irb has taken a turn for the worse. It reacts in a most unfriendly way to the non-alphanumeric control keys:

    - UP key prints: ^[[A
    - DOWN key prints: ^[[B
    - DELETE key prints: ^[[3~

    ...and so on. The main result of this for me is that I have no access to previously issued commands. Nor does tab completion work; though none of this seems to be an issue with Wirble, as the same happens when I remove my ~/.irbrc. I'm using:

    - Ubuntu 9.10
    - GNOME Terminal 2.28.1
    - ruby 1.9.1p376 (2009-12-07 revision 26041) [i686-linux]
    - irb 0.9.5 (05/04/13)

    Any ideas? :(

    Read the article

  • WCF fault logging and SQL Exception 4060 error.

    - by Bill
    I have been attempting for several days to compile and run a sample WCF application from the website of Juval Lowy (author of Programming WCF Services and founder of IDesign). The example app utilizes Juval's ServiceModelEx library, which logs faults/errors to a "WCFLogbook" SQL database. Unfortunately, when the sample app faults, I get the following error:

        SQL Exception 4060: "Cannot open database \"WCFLogbook\" requested by the login. The login failed.\r\nLogin failed for user 'Bill-PC\Bill'."

    I confirmed that the WCFLogbook SQL database has been created, and I have granted all of the appropriate permissions for my login (Bill-PC\Bill) to access the database. Additionally, ports 8006 and 1433 have been opened in the firewall, TCP/IP has been enabled, and "Allow remote connections to this server" has been checked. I am using the following endpoint within the App.Config file:

        <client>
            <endpoint name="LogbookTCP"
                      address="net.tcp://Bill-PC:8006/LogbookManager"
                      binding="netTcpBinding"
                      contract="ILogbookManager" />
        </client>

    Unfortunately, SQL is a 'world' that I hadn't needed to venture into before now, and I am terribly frustrated by my lack of success. Would anyone have any suggestions on how to get this working? Have I missed anything?

    Read the article

  • Creating a user-defined code generation system in PHP

    - by user270766
    My newest project, which I'm looking to build with CodeIgniter, would require some sort of system that allows a user to drag and drop pre-defined functions/methods into mini classes/objects and then run/test them in the browser. So it'd be something similar to Scratch. I've designed a relational database that I think could work for this (storing the function names and having these classes "subscribe" to those functions), but I'm wondering whether or not to go ahead with it. Is there a better way to do this, or is there a system out there that would accomplish this for me? EDIT: It would have to be extremely simple for an end user, but hopefully flexible enough to easily add more complex functionality in the future.

    Read the article

  • Ruby GUI (non-complex layouts)

    - by Ruby Novice
    I've done quite a bit of research on Ruby GUI design, and it appears to be the one area where Ruby tends to be behind the curve. I've explored the options of MonkeyBars, wxRuby, FXRuby, Shoes, etc., and just wanted to get some input from the Ruby community. While they're definitely usable, development on each seems to have fallen off, and there is not a great deal of useful documentation or much of a user base for any of them (minus the FXRuby book). I'm just looking to make a simple GUI, so I don't really want to spend hundreds of hours learning the intricacies of the more complex tools or attempt to use something that is no longer being developed. (Shoes is the type of application I'm looking for, but it's extremely buggy and not being actively developed.) Out of all of the options, which would you recommend as being the quickest to pick up while still having some sort of development base? Thanks!

    Read the article

  • Visual Studio Windows Forms Designer keyboard shortcuts

    - by Dan Tao
    Extremely basic question: are there common actions I can perform using keyboard shortcuts in the Windows Forms designer in Visual Studio (2008)? Alternatively, could I add my own keyboard shortcuts (either through settings or macros)? It'd really be nice if I could, for example, set a control to dock/undock in its parent container by typing Alt+D, or set a control's name just by typing Alt+N and then typing the name. Things like that. It's just kind of tedious to click on the item, scroll in the Properties grid to the property I want to change, type the new value, scroll to the next property I want to change, and so on. Which is why I have a feeling this functionality is already in there, or is easily configurable, and I just don't know about it.

    Read the article

  • Database design for very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the delicious website. The available data is in files of the form "Date,UserId,Url,Tags" (one line per bookmark). I normalized my database to 3NF, and because of the nature of the queries we wanted to use in combination, I came down to 6 tables... The design looks fine; however, now that a large amount of data is in the database, most of the queries need to join at least 2 tables together to get the answer, sometimes 3 or 4. At first we didn't have any performance issues, because for testing purposes we hadn't added much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, this is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do big companies handle large amounts of data in their databases? Don't they do normalization? Thanks.

    Read the article

  • Building using CMake-GUI

    - by goodwince
    I am trying to build Marble on Windows (XP) and have been extremely unsuccessful, following the instructions written here. I want to build it as Qt-only. Configuration of the build tree fails. I am using the cmake-gui, and since I have never tried to compile using CMake before, it's possible the problem is something in there. I found a similar issue; however, they compiled Qt using Visual Studio, and I am using Qt Creator, which compiles with MinGW, so I need to compile in a different manner.

    CMake-GUI configuration:

    - Source code: MyDirectory/Marble
    - Where to build: MyDirectory/Build

    I also added an entry QTONLY with its value set to ON (see the command-line equivalent below; the -D prefix is only needed on the command line).
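    For reference, the command-line equivalent of that GUI setup would be something like the sketch below. The generator name is an assumption based on the MinGW toolchain that Qt Creator uses; the QTONLY flag is the one from the question:

        cd MyDirectory/Build
        cmake -G "MinGW Makefiles" -DQTONLY=ON ../Marble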

    Read the article

  • Which programming languages aren't considered high-level?

    - by hilo
    In informatics theory I hear and read about high-level and low-level languages all the time. Yet I don't understand why this is still relevant, as there aren't any (relevant) low-level languages except assembler in use today. So you get:

    Low-level:
    - Assembler

    Definitely not low-level:
    - C
    - BASIC
    - FORTRAN
    - COBOL
    - ...

    High-level:
    - C++
    - Ruby
    - Python
    - PHP
    - ...

    And if assembler is low-level, how could you put, for example, C into the same list? I mean, C is extremely high-level compared to assembler. The same goes for COBOL, Fortran, etc. So why does everybody keep mentioning high- and low-level languages if assembler is really the only low-level language?

    Read the article

  • Bash alias to open Vim at last edit mark

    - by Pierre LaFayette
    The mark " in Vim takes you to your last edit position. I want to create an alias that will open my Vim instance and jump to that mark, something that is obviously extremely useful. This works from the command line:

        $ vim -c "'\"" File.cpp

    Now I want to make an alias for this:

        $ alias v='vim -c "'\"" File.cpp'

    Well, that's not going to work! You need to escape the first single quote, you say...

        $ alias v='vim -c "\'\"" File.cpp'

    Hmm. That didn't work either... So I tried a whole lot of variations of single-quoted and double-quoted madness, banged my head against the table, and loaded up stackoverflow in my browser, and here we are. How do I properly escape this alias?
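    For what it's worth, here are two quotings that do expand to the intended command, vim -c "'\"" File.cpp (worked out by hand, so test before trusting). The first double-quotes the alias body and backslash-escapes the inner double quotes; the second stays in single quotes and splices each literal single quote in as '\'':

        # Double-quoted alias body; \" and \\ survive inside double quotes
        alias v="vim -c \"'\\\"\" File.cpp"

        # Single-quoted alias body; close the quote, emit \' , then reopen
        alias v='vim -c "'\''\"" File.cpp'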

    Read the article
