Search Results

Search found 17727 results on 710 pages for 'large apps'.

Page 10/710 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Web apps or Desktop apps

    - by Ram
    If we compare Windows and Web applications against the following criteria - insight into .NET and the OS, design patterns, logic development, and developing a fresher into a good developer - which one is better?

    Read the article

  • Sending a large file over network continuously

    - by David Parunakian
    Hello, We need to write software that would continuously (i.e. new data is sent as it becomes available) send very large files (several TB) to several destinations simultaneously. Some destinations have a dedicated fiber connection to the source, while some do not. Several questions arise: We plan to use TCP sockets for this task. What failover procedure would you recommend in order to handle network outages and dropped connections? What should happen upon upload completion: should the server close the socket? If so, is it a good design decision to have another daemon provide file checksums on another port? Could you recommend a method to handle corrupted files, aside from downloading them again? Perhaps I could break them into 10 MB chunks and calculate checksums for each chunk separately? Thanks.
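
    A per-chunk checksum pass along the lines the question proposes could look like the following C# sketch; the 10 MB chunk size and SHA-256 are assumptions, and in practice the chunk hashes would be exchanged over the side channel mentioned above so that only mismatched chunks are re-sent:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Security.Cryptography;

        class ChunkHasher
        {
            const int ChunkSize = 10 * 1024 * 1024; // 10 MB per chunk, as proposed in the question

            // Returns one SHA-256 hash per chunk; a mismatched chunk can be re-requested individually.
            static List<byte[]> HashChunks(string path)
            {
                var hashes = new List<byte[]>();
                var buffer = new byte[ChunkSize];
                using (var stream = File.OpenRead(path))
                using (var sha = SHA256.Create())
                {
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                        hashes.Add(sha.ComputeHash(buffer, 0, read));
                }
                return hashes;
            }

            static void Main(string[] args)
            {
                List<byte[]> hashes = HashChunks(args[0]);
                Console.WriteLine("{0} chunks hashed", hashes.Count);
            }
        }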

    Read the article

  • how do I download a large file (via HTTP) in .NET

    - by nickcartwright
    I need to download a LARGE file (2 GB) over HTTP in a C# console app. Problem is, after about 1.2 GB, the app runs out of memory. Here's the code I'm using:

        WebClient request = new WebClient();
        request.Credentials = new NetworkCredential(username, password);
        byte[] fileData = request.DownloadData(baseURL + fName);

    As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk. Does anyone know how I could do this?
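
    One way to keep memory flat is to stream the response to disk in small chunks instead of buffering the whole body; the sketch below uses WebClient.OpenRead for this (WebClient.DownloadFile would also stream straight to a file). The buffer size and the placeholder URL/credentials are arbitrary stand-ins for the question's variables:

        using System;
        using System.IO;
        using System.Net;

        class Downloader
        {
            static void Main()
            {
                string baseURL = "http://example.com/"; // placeholders for the question's variables
                string fName = "bigfile.bin";

                var client = new WebClient();
                client.Credentials = new NetworkCredential("username", "password");

                // Stream the response to disk in 64 KB chunks instead of holding 2 GB in memory.
                using (Stream response = client.OpenRead(baseURL + fName))
                using (Stream file = File.Create(fName))
                {
                    var buffer = new byte[64 * 1024];
                    int read;
                    while ((read = response.Read(buffer, 0, buffer.Length)) > 0)
                        file.Write(buffer, 0, read);
                }
            }
        }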

    Read the article

  • Edit very large sql dump/text file (on linux)

    - by geo
    I have to import a large MySQL dump (up to 10 GB). However, the dump comes with the database structure predefined, including index definitions. I want to speed up the inserts by removing the index and table definitions, which means I have to remove/edit the first few lines of a 10 GB text file. What is the most efficient way to do this on Linux? Programs that require loading the entire file into RAM would be overkill for me.
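
    Whatever tool is used, the efficient pattern is a sequential stream copy that drops or rewrites only the leading lines and never holds the whole dump in memory; standard utilities such as sed or tail work this way, and the same idea is sketched below in C# for illustration (the number of header lines to skip is an assumed input):

        using System;
        using System.IO;

        class StripDumpHeader
        {
            // Usage sketch: StripDumpHeader dump.sql trimmed.sql 40
            static void Main(string[] args)
            {
                int linesToSkip = int.Parse(args[2]); // e.g. the index/table definitions at the top
                using (var reader = new StreamReader(args[0]))
                using (var writer = new StreamWriter(args[1]))
                {
                    string line;
                    long n = 0;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (n++ < linesToSkip) continue; // drop the header lines
                        writer.WriteLine(line);          // pass everything else through untouched
                    }
                }
            }
        }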

    Read the article

  • iPhone/iPad: Get Alerts When Paid Apps Go Free

    - by Gopinath
    iPhone users have thousands of cool applications to choose from. These apps are either paid or absolutely free. Many of the paid applications go free, either for a limited time or forever, depending on the mood of their developers. Wouldn't it be cool to get alerts whenever a paid app goes free? Yeah, it would be great. Free App Alert is a handy website that checks the iTunes store regularly and alerts its subscribers about apps that have gone from paid to free. You can receive the alerts by following them on Twitter or Facebook, or by subscribing to the traditional RSS feeds (yeah, RSS is a traditional technology). The home page of the website shows the apps that have gone free today, and you can browse the previous days' free-app listings with the help of links available at the bottom. Free App Alert is definitely a cool site for iPhone/iPod/iPad users to check out, and certainly easier than scrolling through the iTunes store and checking prices. Tip: Immediately download any app that has gone from paid to free, as many apps are free for a limited time only. You can see many free apps going back to paid if you go through the previous pages of the website.

    Read the article

  • Bypass Facebook Social Reader Apps using Google Chrome Extension

    - by Gopinath
    One of the most annoying features of Facebook is its Social Reader apps, which automatically share whatever you read, watch, or listen to online. I don't like sharing everything I do online to Facebook; I want my privacy. A few of my friends, knowingly or unknowingly, are using Social Reader apps, and their online activity is automatically posted to their walls. To read the articles or watch the videos shared by a Social Reader application, I would need to add the application and allow it to post automatically. I don't like Social Reader apps, and if you are one like me, here is a Google Chrome browser extension that lets us bypass them. The extension, Facebook Unsocial Reader, smartly rewrites Facebook links in such a way that you can access the linked content without adding Social Reader apps to your account. To rewrite the links, the extension cleverly uses Google's I'm Feeling Lucky service, searching for the article's title; the first Google result is almost always the original article link. If you are a heavy Facebook user concerned about Social Reader apps, this extension is a must-have. Photo (cc) Josh Hallett. Facebook Unsocial Reader Extension for Google Chrome

    Read the article

  • Updating large icon in iTunes Connect

    - by Shaggy Frog
    Just wanted to see if I understand properly how/when one can change the "Large Icon" for an iOS app in iTunes Connect. To start, the facts (as I gather them) from version 6.6 of the iTC guide (March 2, 2011): The Large Icon is a "locked" piece of version information, and you will only be permitted to edit locked version information when your app is in an editable state. The "editable" states are: Prepare For Upload, Waiting For Upload, Waiting For Review, Waiting For Export Compliance, Upload Received, Rejected, Developer Rejected, Invalid Binary, and Missing Screenshot. Am I missing anything up to this point? If not, am I correct to say that the only time I can change an app's Large Icon is when I update the application? Here's a more specific use case: My app is currently on sale, version 2.0. I have version 2.1 ready, and I want the update to coincide with a sale, so I also put a "SALE" banner on top of my large icon (what most devs are doing). I have to upload this "SALE" large icon when I upload the binary; if I wait until it's been reviewed, it's too late, and I'll have to developer-reject the binary so I can fix it. Is this correct? Say I want the sale to last a week. At the end of that week, I'll want to switch my large icon back to the pre-"SALE" version. Will I necessarily have to upload a new binary at that time? (Also posted on the Developer Forums, but it's getting no love there...)

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA-style ‘application’, this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is what the Angular ng-cloak directive is supposed to address, but in this case I had no luck getting it to work properly.

    This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor views, which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server-side Razor page much easier. We were also able to leverage most of our server-side localization without a lot of changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it – resulting in a fairly large DOM tree that Angular must manage.

    Large Page and Angular Startup

    The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree, and there are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads, before Angular has a chance to render the data into the markup expressions.

        <div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak>

    Note the ng-cloak attribute on this element, which here is an outer wrapper element of most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup. It’s brief, but plenty ugly – right? And depending on the speed of the machine, this flash gets more noticeable on slow machines that take longer to process the initial HTML DOM.

    ng-cloak Styles

    ng-cloak works by temporarily hiding the marked-up element, and it does this by essentially applying a style like this:

        [ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
            display: none !important;
        }

    This style is inlined as part of AngularJs itself.
    If you look at the angular.js source file you’ll find this at the very end of the file:

        !angular.$$csp() && angular.element(document)
            .find('head')
            .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' +
                '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' +
                '.ng-hide{display:none !important;}ng\\:form{display:block;}' +
                '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' +
                '</style>');

    This is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive markup permutations. Unfortunately, on this particular web page ng-cloak had no effect – I still see the flicker.

    Why doesn’t ng-cloak work?

    The problem is of course – timing. Angular actually needs to get control of the page before it ever starts doing anything, even processing the ng-cloak attribute (or style etc.). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page, after the HTML DOM content, there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem.

    Workarounds

    There are a number of simple ways around this issue, and some of them are hinted at in the Angular documentation.

    Load Angular Sooner

    One obvious thing that would help is to load Angular at the top of the page, BEFORE the DOM loads, which would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery, which should be loaded prior to Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do, and also something that I’d likely forget in the future and end up right back here :-).

    Use ng-include for Child Content

    Angular supports nesting of child templates via the ng-include directive, which essentially delay-loads HTML content. This helps by removing a lot of the template content from the main page, giving control to Angular a lot sooner so it can hide the template markup. In the application in question, I realize in hindsight that it might have been smarter to break this page out with client-side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (i.e. the HTML) back together into a single, rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful for a few security-related blocks handled via server-side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client-side exclusion like ng-hide/ng-show: client-side content is always there, whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request, as templates are loaded from the server dynamically as needed.
    Given that this page was already heavy with resources, adding another 10 separate ng-include directives wouldn’t be beneficial :-) ng-include is a valid option if you start from scratch and partition your logic. Of course, if you don’t have complex pages, having completely separate views that are swapped in as they are accessed is even better, but we didn’t have this option due to the information having to be on screen all at once.

    Avoid using {{ }} Expressions

    The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: markup junk, empty views, then views filled with data. If you can remove {{ }} expressions from the page you remove most of the perceived double-draw effect, as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }} expressions and replace them with ng-bind directives on DOM elements. For example you can turn:

        <div class="list-item-name listViewOrderNo">
            <a href='#'>{{lineItem.MpsOrderNo}}</a>
        </div>

    into:

        <div class="list-item-name listViewOrderNo">
            <a href="#" ng-bind="lineItem.MpsOrderNo"></a>
        </div>

    to get identical results, but because the {{ }} expression has been removed there’s no double-draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to, which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}}, for example. It’s an option, though, especially if you think of this issue up front and you don’t have a ton of expressions to deal with.

    Add the ng-cloak Styles manually

    You can also explicitly define the CSS styles that Angular injects via code manually in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:

        [ng-cloak] { display: none !important; }

    which works for:

        <div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak>

    If you use one of the other combinations, add the other CSS selectors as well, or use the full style shown earlier. Angular will still load its version of the ng-cloak styling and override those settings later, but this will do the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option.

    The nuclear option: Hiding the Content manually

    Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight into how you can hide/show content manually on load for other frameworks or in your own markup-based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control.
    To do this I used:

        <div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none">

    Notice the display:none style that explicitly hides the element initially on the page. Then, once Angular has run its initialization and effectively processed the template markup on the page, you can show the content. For Angular this ‘ready’ event is the app.run() function:

        app.run(function ($rootScope, $location, cellService) {
            $("#mainContainer").show();
            …
        });

    This effectively removes the display:none style and the content displays. By the time app.run() fires, the DOM is ready to be displayed with filled data, or at least empty data – Angular has gotten control.

    Edge Case

    Clearly this is an edge case. In general, initial HTML pages tend to be reasonably sized and the load times for the HTML and Angular are fast enough that there’s no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style into your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow a browser somebody might be running, and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might…

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML.

    Read the article

  • Unable to login using OpenID for google apps using vanity URL

    - by GeekTantra
    I keep getting the following error whenever I use ajatus.co.in/openid as the OpenID URL. The Allow Access screen appears, but it is followed by this error:

        Unable to log in with your OpenID provider: The OpenID Provider issued an assertion for an Identifier whose discovery information did not match.

        Assertion endpoint info:
        ClaimedIdentifier: http://ajatus.co.in/openid?id=1134xxxxxxxxxxxxxxx39
        ProviderLocalIdentifier: http://ajatus.co.in/openid?id=1134xxxxxxxxxxxxxxx39
        ProviderEndpoint: https://www.google.com/a/ajatus.co.in/o8/ud?be=o8
        OpenID version: 2.0
        Service Type URIs:

        Discovered endpoint info:
        [{
          ClaimedIdentifier: http://specs.openid.net/auth/2.0/identifier_select
          ProviderLocalIdentifier: http://specs.openid.net/auth/2.0/identifier_select
          ProviderEndpoint: https://www.google.com/a/ajatus.co.in/o8/ud?be=o8
          OpenID version: 2.0
          Service Type URIs: http://specs.openid.net/auth/2.0/server
        },]

    Contents of ajatus.co.in/openid:

        <?xml version="1.0" encoding="UTF-8"?>
        <xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
          <XRD>
            <Service priority="0">
              <Type>http://specs.openid.net/auth/2.0/signon</Type>
              <URI>https://www.google.com/a/ajatus.co.in/o8/ud?be=o8</URI>
            </Service>
            <Service priority="0">
              <Type>http://specs.openid.net/auth/2.0/server</Type>
              <URI>https://www.google.com/a/ajatus.co.in/o8/ud?be=o8</URI>
            </Service>
          </XRD>
        </xrds:XRDS>

    Contents of ajatus.co.in/.well-known/host-meta:

        Link: <https://www.google.com/accounts/o8/site-xrds?hd=ajatus.co.in>; rel="describedby http://reltype.google.com/openid/xrd-op"; type="application/xrds+xml"

    Read the article

  • Read large file into sqlite table in objective-C on iPhone

    - by James Testa
    I have a 2 MB file, not too large, that I'd like to put into an SQLite database so that I can search it. There are about 30K entries in CSV format, with six fields per line. My understanding is that SQLite on the iPhone can handle a database of this size, but the approaches I've taken have all been slow (> 30 s). I've tried: 1) Using C code to read the file and parse the fields into arrays. 2) Using the following Objective-C code to parse the file and insert it directly into the SQLite database:

        NSString *file_text = [NSString stringWithContentsOfFile:filePath usedEncoding:NULL error:NULL];
        NSArray *lineArray = [file_text componentsSeparatedByString:@"\n"];
        for (int k = 0; k < [lineArray count]; k++) {
            NSArray *parts = [[lineArray objectAtIndex:k] componentsSeparatedByString:@","];
            NSString *field0 = [parts objectAtIndex:0];
            NSString *field2 = [parts objectAtIndex:2];
            NSString *field3 = [parts objectAtIndex:3];
            NSString *loadSQLi = [[NSString alloc] initWithFormat:
                @"INSERT INTO TABLE (FIELD0, FIELD2, FIELD3) VALUES ('%@', '%@', '%@');",
                field0, field2, field3];
            if (sqlite3_exec(db_table, [loadSQLi UTF8String], NULL, NULL, &errorMsg) != SQLITE_OK) {
                sqlite3_close(db_table);
                NSAssert1(0, @"Error loading table: %s", errorMsg);
            }
        }

    Am I missing something? Does anyone know of a fast way to get the file into a database? Or is it possible to translate the file into an SQLite format that can be read directly into SQLite? Or should I turn the file into a plist and load it into a dictionary? Unfortunately I need to search on two of the fields, and I think a dictionary can only have one key? Jim

    Read the article

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product data that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process, and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that is used in the view and has to be updated each week. The problem we are now encountering is that as volume increases on our transactions against tables A & B, the update to table C is causing deadlocks. I have tried several different methods, to no avail: 1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views. 2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view. 3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table; however, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them. 4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks. Questions: a) We are using SQL Server 2005. Does SQL Server 2008 have new indexed-view functionality that would help us? Is there now a way to do dirty reads with an indexed view? b) Is there a better approach to altering an existing view to point to a new table? Thanks!

    Read the article

  • Managing large binary files with git

    - by pi
    Hi there. I am looking for opinions on how to handle large binary files on which my source code (web application) is dependent. We are currently discussing several alternatives: Copy the binary files by hand. Pro: not sure. Contra: I am strongly against this, as it increases the likelihood of errors when setting up a new site/migrating the old one, and builds up another hurdle to take. Manage them all with git. Pro: removes the possibility of 'forgetting' to copy an important file. Contra: bloats the repository, decreases flexibility in managing the code base, and checkouts/clones/etc. will take quite a while. Separate repositories. Pro: checking out/cloning the source code is as fast as ever, and the images are properly archived in their own repository. Contra: removes the simplicity of having the one and only git repository on the project, and surely introduces some other things I haven't thought about. What are your experiences/thoughts regarding this? Also: does anybody have experience with multiple git repositories and managing them in one project? Update: The files are images for a program which generates PDFs with those files in it. The files will not change very often (as in years), but they are very relevant to the program; it will not work without them. Update 2: I found a really nice screencast on using git-submodule at GitCasts.

    Read the article

  • ASP.Net : Error in sending EMail from Google Apps hosted at Godaddy

    - by user279244
    Hello, I want to send email from my website hosted at GoDaddy, and we are using Google Apps for email. Here is my code in ASP.NET/C#:

        MailMessage mMailMessage = new MailMessage();
        mMailMessage.From = new MailAddress("[email protected]", "GotFeedback", System.Text.Encoding.UTF8);
        mMailMessage.To.Add(new MailAddress("[email protected]"));
        mMailMessage.Subject = "subject";
        mMailMessage.SubjectEncoding = System.Text.Encoding.UTF8;
        mMailMessage.Body = body;
        mMailMessage.BodyEncoding = System.Text.Encoding.UTF8;
        mMailMessage.IsBodyHtml = true;
        mMailMessage.Priority = MailPriority.Normal;

        SmtpClient mSmtpClient = new SmtpClient();
        mSmtpClient.UseDefaultCredentials = false;
        mSmtpClient.Credentials = new System.Net.NetworkCredential("[email protected]", "mypassword");
        mSmtpClient.Host = "SMTP.GOOGLE.COM";
        mSmtpClient.Port = 587;
        mSmtpClient.EnableSsl = true;

        try
        {
            mSmtpClient.Send(mMailMessage);
        }
        catch (Exception e)
        {
            string edsf = e.ToString();
        }

    But I am getting an exception that it is unable to connect to the remote server. Please help. Thanks
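
    A hedged guess at the cause: Google's SMTP submission host is smtp.gmail.com (used for Google Apps accounts as well), not SMTP.GOOGLE.COM, which would explain the connection failing before authentication even starts:

        // Assumption: Google Apps mail goes out through Google's standard submission host.
        mSmtpClient.Host = "smtp.gmail.com"; // instead of SMTP.GOOGLE.COM
        mSmtpClient.Port = 587;              // STARTTLS submission port
        mSmtpClient.EnableSsl = true;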

    Read the article

  • How can I quickly parse large (>10GB) files?

    - by Andrew
    Hi - I have to process text files 10-20 GB in size with the format: field1 field2 field3 field4 field5. I would like to parse the data from field2 of each line into one of several files; which file a line gets pushed into is determined, line by line, by the value in field4. There are 25 different possible values in field4, and hence 25 different files the data can get parsed into. I have tried using Perl (slow) and awk (faster but still slow) - does anyone have any suggestions or pointers toward alternative approaches? FYI, here is the awk code I was trying to use; note I had to resort to going through the large file 25 times because I wasn't able to keep 25 output files open at once in awk:

        chromosomes=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25)
        for chr in ${chromosomes[@]}
        do
            awk < my_in_file_here -v pat="$chr" '{if ($4 == pat) for (i = $2; i <= $2+52; i++) print i}' >> my_out_file_"$chr".query
        done
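
    For a single-pass alternative, the trick is simply to keep all 25 output writers open at once, opening each lazily the first time its field4 value appears. A sketch of that idea in C# (file names are carried over from the question; whitespace-delimited fields are assumed):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class SplitByField4
        {
            static void Main()
            {
                // One StreamWriter per distinct field4 value, opened lazily and kept open.
                var writers = new Dictionary<string, StreamWriter>();
                foreach (string line in File.ReadLines("my_in_file_here"))
                {
                    string[] f = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
                    if (f.Length < 4) continue;

                    StreamWriter w;
                    if (!writers.TryGetValue(f[3], out w))
                    {
                        w = new StreamWriter("my_out_file_" + f[3] + ".query");
                        writers[f[3]] = w;
                    }

                    // Same logic as the awk body: print $2 .. $2+52, one value per line.
                    long start = long.Parse(f[1]);
                    for (long i = start; i <= start + 52; i++)
                        w.WriteLine(i);
                }
                foreach (StreamWriter w in writers.Values)
                    w.Dispose();
            }
        }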

    Read the article

  • Getting Started Building Windows 8 Store Apps with XAML/C#

    - by dwahlin
    Technology is fun, isn’t it? As soon as you think you’ve figured out where things are heading, a new technology comes onto the scene, changes things up, and offers new opportunities. One of the new technologies I’ve been spending quite a bit of time with lately is Windows 8 store applications. I posted my thoughts about Windows 8 during the BUILD conference in 2011 and still feel excited about the opportunity there. Time will tell how well it ends up being accepted by consumers, but I’m hopeful that it’ll take off. I currently have two Windows 8 store application concepts I’m working on, one being built in XAML/C# and another in HTML/JavaScript. I really like that Microsoft supports both options, since it caters to a variety of developers and makes it easy to get started regardless of whether you’re a desktop developer or a Web developer.

    In this post I’ll focus on the basics of Windows 8 store XAML/C# apps by looking at the features, files, and code provided by the Visual Studio projects. To get started building these types of apps you’ll definitely need some knowledge of XAML and C#. Let’s get started by looking at the Windows 8 store project types available in Visual Studio 2012.

    Windows 8 Store XAML/C# Project Types

    When you open Visual Studio 2012 you’ll see a new entry under C# named Windows Store. It includes 6 different project types. The Blank App project provides initial starter code and a single page, whereas the Grid App and Split App templates provide quite a bit more code as well as multiple pages for your application. The other projects available can be used to create a class library project that runs in Windows 8 store apps, a WinRT component such as a custom control, and a unit test library project, respectively.

    If you’re building an application that displays data in groups using the “tile” concept, then the Grid App or Split App project templates are a good place to start. When a user clicks a tile in a Grid App they can view details about the tile data. With a Split App, groups/categories are shown, and when the user clicks on a group they can see a list of all the different items and then drill down into them.

    For the remainder of this post I’ll focus on the functionality provided by the Blank App project, since it provides a simple way to get started learning the fundamentals of building Windows 8 store apps.

    Blank App Project Walkthrough

    The Blank App project is a great place to start since it’s simple and lets you focus on the basics. In this post I’ll focus on what it provides out of the box and cover additional details in future posts. Once you have the basics down you can move to the other project types if you need the functionality they provide. The Blank App project template does exactly what it says – you get an empty project with a few starter files added to help get you going. This is a good option if you’ll be building an app that doesn’t fit the grid layout view that you see a lot of Windows 8 store apps following (such as on the Windows 8 start screen). I ended up starting with the Blank App project template for the app I’m currently working on, since I’m not displaying data/image tiles (something the Grid App project does well) or drilling down into lists of data (functionality that the Split App project provides).
    The Blank App project provides images for the tiles and splash screen (you’ll definitely want to change these), a StandardStyles.xaml resource dictionary that includes a lot of helpful styles such as buttons for the AppBar (a special type of menu in Windows 8 store apps), an App.xaml file, and the app’s main page, which is named MainPage.xaml. It also adds a Package.appxmanifest that is used to define functionality that your app requires, app information used in the store, plus more.

    The App.xaml, App.xaml.cs and StandardStyles.xaml Files

    The App.xaml file handles loading a resource dictionary named StandardStyles.xaml, which has several key styles used throughout the application:

        <Application
            x:Class="BlankApp.App"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:local="using:BlankApp">
            <Application.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                        <!-- Styles that define common aspects of the platform look and feel
                             Required by Visual Studio project and item templates -->
                        <ResourceDictionary Source="Common/StandardStyles.xaml"/>
                    </ResourceDictionary.MergedDictionaries>
                </ResourceDictionary>
            </Application.Resources>
        </Application>

    StandardStyles.xaml has style definitions for different text styles and AppBar buttons. If you scroll down toward the middle of the file you’ll see that many AppBar button styles are included, such as one for an edit icon. Button styles like this can be used to quickly and easily add icons/buttons into your application without having to be an expert in design.

        <Style x:Key="EditAppBarButtonStyle" TargetType="ButtonBase" BasedOn="{StaticResource AppBarButtonStyle}">
            <Setter Property="AutomationProperties.AutomationId" Value="EditAppBarButton"/>
            <Setter Property="AutomationProperties.Name" Value="Edit"/>
            <Setter Property="Content" Value="&#xE104;"/>
        </Style>

    Switching over to App.xaml.cs, it includes some code to help get you started. An OnLaunched() method is added to handle creating a Frame that child pages such as MainPage.xaml can be loaded into. The Frame has the same overall purpose as the one found in WPF and Silverlight applications – it’s used to navigate between pages in an application.

        /// <summary>
        /// Invoked when the application is launched normally by the end user. Other entry points
        /// will be used when the application is launched to open a specific file, to display
        /// search results, and so forth.
        /// </summary>
        /// <param name="args">Details about the launch request and process.</param>
        protected override void OnLaunched(LaunchActivatedEventArgs args)
        {
            Frame rootFrame = Window.Current.Content as Frame;

            // Do not repeat app initialization when the Window already has content,
            // just ensure that the window is active
            if (rootFrame == null)
            {
                // Create a Frame to act as the navigation context and navigate to the first page
                rootFrame = new Frame();
                if (args.PreviousExecutionState == ApplicationExecutionState.Terminated)
                {
                    //TODO: Load state from previously suspended application
                }

                // Place the frame in the current Window
                Window.Current.Content = rootFrame;
            }

            if (rootFrame.Content == null)
            {
                // When the navigation stack isn't restored navigate to the first page,
                // configuring the new page by passing required information as a navigation
                // parameter
                if (!rootFrame.Navigate(typeof(MainPage), args.Arguments))
                {
                    throw new Exception("Failed to create initial page");
                }
            }

            // Ensure the current window is active
            Window.Current.Activate();
        }

    Notice that in addition to creating a Frame, the code also checks whether the app was previously terminated so that you can load any state/data the user may need when the app is launched again. If you’re new to the lifecycle of Windows 8 store apps: an app can be running, suspended, or terminated. If the user switches away from a running app, the app will be suspended in memory. It may stay suspended or be terminated depending on how much memory the OS thinks it needs, so it’s important to save state in case the application is ultimately terminated and has to be started fresh. Although I won’t cover saving application state here, additional information can be found at http://msdn.microsoft.com/en-us/library/windows/apps/xaml/hh465099.aspx. Another method named OnSuspending() is also included in App.xaml.cs that can be used to store state as the user switches to another application:

        /// <summary>
        /// Invoked when application execution is being suspended. Application state is saved
        /// without knowing whether the application will be terminated or resumed with the contents
        /// of memory still intact.
        /// </summary>
        /// <param name="sender">The source of the suspend request.</param>
        /// <param name="e">Details about the suspend request.</param>
        private void OnSuspending(object sender, SuspendingEventArgs e)
        {
            var deferral = e.SuspendingOperation.GetDeferral();
            //TODO: Save application state and stop any background activity
            deferral.Complete();
        }

    The MainPage.xaml and MainPage.xaml.cs Files

    The Blank App project adds a file named MainPage.xaml that acts as the initial screen for the application. It doesn’t include anything aside from an empty <Grid> XAML element. The code-behind class named MainPage.xaml.cs includes a constructor as well as a method named OnNavigatedTo() that is called once the page is displayed in the frame.

        /// <summary>
        /// An empty page that can be used on its own or navigated to within a Frame.
        /// </summary>
        public sealed partial class MainPage : Page
        {
            public MainPage()
            {
                this.InitializeComponent();
            }

            /// <summary>
            /// Invoked when this page is about to be displayed in a Frame.
            /// </summary>
            /// <param name="e">Event data that describes how this page was reached.
            /// The Parameter property is typically used to configure the page.</param>
            protected override void OnNavigatedTo(NavigationEventArgs e)
            {
            }
        }

    If you’re experienced with XAML you can switch to Design mode and start dragging and dropping XAML controls from the Toolbox in Visual Studio. If you prefer to type XAML you can do that as well in the XAML editor or while in split mode. Many of the controls available in WPF and Silverlight are included, such as Canvas, Grid, StackPanel, and Border for layout. Standard input controls are also included, such as TextBox, CheckBox, PasswordBox, RadioButton, ComboBox, ListBox, and more. MediaElement is available for rendering video or playing audio files.

    Although XAML/C# Windows 8 store apps don’t include all of the functionality available in Silverlight 5, the core functionality required to build store apps is there, with additional functionality available in open source projects such as Callisto (started by Microsoft’s Tim Heuer), Q42.WinRT, and others. Standard XAML data binding can be used to bind C# objects to controls, converters can be used to manipulate data during the data binding process, and custom styles and templates can be applied to controls to modify them. Although Visual Studio 2012 doesn’t support visually creating styles or templates, Expression Blend 5 handles that very well.

    To get started building the initial screen of a Windows 8 app you can start adding controls, as mentioned earlier. Simply place them inside of the <Grid> element that’s included. You can arrange controls in a stacked manner using the StackPanel control, add a border around controls using the Border control, arrange controls in columns and rows using the Grid control, or absolutely position controls using the Canvas control.

    One of the controls that may be new to you is the AppBar. It can be used to add menu/toolbar functionality into a store app while keeping the app clean and focused. You can place an AppBar at the top or bottom of the screen. A user on a touch device can swipe up to display the bottom AppBar, or right-click when using a mouse. An example of defining an AppBar that contains an Edit button is shown next; the EditAppBarButtonStyle is available in the StandardStyles.xaml file mentioned earlier.

        <Page.BottomAppBar>
            <AppBar x:Name="ApplicationAppBar" Padding="10,0,10,0" AutomationProperties.Name="Bottom App Bar">
                <Grid>
                    <StackPanel x:Name="RightPanel" Orientation="Horizontal" Grid.Column="1" HorizontalAlignment="Right">
                        <Button x:Name="Edit" Style="{StaticResource EditAppBarButtonStyle}" Tag="Edit" />
                    </StackPanel>
                </Grid>
            </AppBar>
        </Page.BottomAppBar>

    Like standard XAML controls, the <Button> control in the AppBar can be wired to an event handler method in the MainPage.xaml.cs file, or even bound to a ViewModel object using “commanding” if your app follows the Model-View-ViewModel (MVVM) pattern (check out the MVVM Light package available through NuGet if you’re using MVVM with Windows 8 store apps). The AppBar can be used to navigate to different screens, show and hide controls, display dialogs, show settings screens, and more.

    The Package.appxmanifest File

    The Package.appxmanifest file contains configuration details about your Windows 8 store app. By double-clicking it in Visual Studio you can define the splash screen image, small and wide logo images used for tiles on the start screen, orientation information, and more.
    You can also define what capabilities the app has, such as whether it uses the Internet, supports geolocation functionality, requires a microphone or webcam, etc. App declarations such as background processes, file picker functionality, and sharing can also be defined. Finally, information about how the app is packaged for deployment to the store can be defined as well.

    Summary

    If you already have some experience working with XAML technologies you’ll find that getting started building Windows 8 applications is pretty straightforward. Many of the controls available in Silverlight and WPF are available, making it easy to get started without having to relearn a lot of new technologies. In the next post in this series I’ll discuss additional features that can be used in your Windows 8 store apps.

    Read the article

  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose: The XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tags (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, for optimizing the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used; are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, shortened or not, I should be able to search for the id attribute I mentioned, this being time-critical. Estimations using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each; the r-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in the first instance, with porting it to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file? Are the optimizations mentioned in the first part even necessary, or are there parsers known to be just as fast on the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any notes or recommendations. Thank you all very much in advance, and regards
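
    Whichever library is chosen, the property that matters here is streaming (pull) parsing, so memory use stays constant regardless of file size; in Java 6 that points to StAX (javax.xml.stream) or SAX. To make the pattern concrete, here is the equivalent idea sketched in C# with XmlReader, using the tag and attribute names from the question:

        using System;
        using System.Xml;

        class IdScanner
        {
            static void Main(string[] args)
            {
                string file = args[0];     // the 800 MB document
                string wantedId = args[1]; // the id attribute value to find

                // Pull parsing: only the current node is ever materialized,
                // so memory stays flat no matter how big the file is.
                var settings = new XmlReaderSettings { IgnoreWhitespace = true };
                using (XmlReader reader = XmlReader.Create(file, settings))
                {
                    while (reader.Read())
                    {
                        if (reader.NodeType != XmlNodeType.Element) continue;
                        // Element names "n", "w", "r" and the "id" attribute
                        // are taken from the question.
                        string id = reader.GetAttribute("id");
                        if (id == wantedId)
                            Console.WriteLine("found <{0}> with id {1}", reader.LocalName, id);
                    }
                }
            }
        }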

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I'd built a WSSv3 application which uploads files in small chunks; when every data piece arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons**. The problem comes when the upload ends; I need to move the data from my SQL Server to the SharePoint document library through the WSSv3 object model. Right now, I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and:

        SPFile file = folder.Files.Add("filename", new byte[]{ });
        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice a performance degradation as the file grows (> 100 MB).

    Read the article

  • Python: slicing a very large binary file

    - by Duncan Tait
    Say I have a binary file of 12 GB and I want to slice 8 GB out of the middle of it. I know the position indices I want to cut between. How do I do this? Obviously 12 GB won't fit into memory; that's fine, but 8 GB won't either... which I thought was fine, but it appears binary data doesn't like being handled in chunks! I was appending 10 MB at a time to a new binary file, and there are discontinuities on the edges of each 10 MB chunk in the new file. Is there a Pythonic way of doing this easily?
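
    As long as both files are opened in binary mode ('rb'/'wb' in Python; text mode is a classic source of exactly this kind of chunk-edge corruption), a chunked copy of a byte range works fine. The same loop, sketched in C# for illustration:

        using System;
        using System.IO;

        class BinarySlice
        {
            // Copies bytes [start, end) from src to dst in fixed-size binary chunks.
            static void Slice(string src, string dst, long start, long end)
            {
                var buffer = new byte[10 * 1024 * 1024]; // 10 MB chunks, as in the question
                using (var input = File.OpenRead(src))
                using (var output = File.Create(dst))
                {
                    input.Seek(start, SeekOrigin.Begin);
                    long remaining = end - start;
                    while (remaining > 0)
                    {
                        int toRead = (int)Math.Min(buffer.Length, remaining);
                        int read = input.Read(buffer, 0, toRead);
                        if (read == 0) break; // unexpected end of file
                        output.Write(buffer, 0, read);
                        remaining -= read;
                    }
                }
            }

            static void Main(string[] args)
            {
                Slice(args[0], args[1], long.Parse(args[2]), long.Parse(args[3]));
            }
        }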

    Read the article

  • Use php to zip large files

    - by Joseph
    Hi, I have a PHP form with a bunch of checkboxes that each contain links to files. Once a user checks which files they want, the script zips up those files and forces a download. I got a simple PHP zip force-download to work, but when one of the files is huge, or if someone, say, selects the whole list to zip up and download, my server errors out. I understand that I can increase the server's limits, but are there any other ways? Thanks!

    Read the article

  • Any specifications/docs around optimization of Google Apps Script, to avoid timeouts and "hangs"

    - by BruceM
    From my experience so far, it seems that if you write a script that makes lots of expensive calls close together, the functionality just "hangs", or you get inconsistent responses and have to refresh the browser because sheets stop updating, etc. Are there any docs or specs that clarify this? Releasing an app for real-world use is not possible if users can only expect it to work most of the time, with random results every now and then...

    Read the article

  • Reading a large file into Perl array of arrays and manipulating the output for different purposes

    - by Brian D.
    Hello, I am relatively new to Perl and have only used it for converting small files into different formats and feeding data between programs. Now I need to step it up a little. I have a file of DNA data that is 5,905 lines long, with 32 fields per line. The fields are not delimited by anything and vary in length within the line, but each field is the same size on all 5,905 lines. I need each line from the file fed into a separate array, and each field within the line stored as its own variable. I have no problems storing one line, but I am having difficulty storing each line successively through the entire file. This is how I separate the first line of the full array into individual variables:

        my $SampleID = substr("@HorseArray", 0, 7);
        my $PopulationID = substr("@HorseArray", 9, 4);
        my $Allele1A = substr("@HorseArray", 14, 3);
        my $Allele1B = substr("@HorseArray", 17, 3);
        my $Allele2A = substr("@HorseArray", 21, 3);
        my $Allele2B = substr("@HorseArray", 24, 3);
        ...etc.

    My issues are: 1) I need to store each of the 5,905 lines as a separate array. 2) I need to be able to reference each line based on the sample ID, or a group of lines based on population ID, and sort them. I can sort and manipulate the data fine once it is defined in variables; I am just having trouble constructing a multidimensional array with each of these fields so I can reference each line at will. Any help or direction is much appreciated. I've pored over the Q&A sections on here but have not found the answer to my questions yet. Thanks!! -Brian
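
    Since every field sits at a fixed offset on all 5,905 lines, one pass can slice each line into fields and index the records both by sample ID and by population ID; in Perl the conventional shape is a hash of array references keyed by sample ID. The same structure is sketched below in C#, reusing the offsets from the question (only the first few fields shown; the rest follow the same pattern):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class HorseRecords
        {
            static void Main(string[] args)
            {
                var recordsBySample = new Dictionary<string, string[]>();           // one record per line
                var samplesByPopulation = new Dictionary<string, List<string[]>>(); // groups for sorting

                foreach (string line in File.ReadLines(args[0]))
                {
                    // Offsets/widths carried over from the question's substr calls.
                    string sampleId     = line.Substring(0, 7).Trim();
                    string populationId = line.Substring(9, 4).Trim();
                    string allele1A     = line.Substring(14, 3);
                    string allele1B     = line.Substring(17, 3);

                    string[] record = { sampleId, populationId, allele1A, allele1B };
                    recordsBySample[sampleId] = record;

                    List<string[]> group;
                    if (!samplesByPopulation.TryGetValue(populationId, out group))
                        samplesByPopulation[populationId] = group = new List<string[]>();
                    group.Add(record);
                }

                Console.WriteLine("{0} samples in {1} populations",
                    recordsBySample.Count, samplesByPopulation.Count);
            }
        }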

    Read the article

  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have an auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that are to be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options activated) way to do this? Best wishes, Alexander. Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting - what does GCC do behind the scenes that costs so much memory? (I compiled with all optimizations deactivated as well and got the same results.) The solution that I switched to now is reading the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do it in C++. However, getting this to run under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by default named after the file name; e.g. converting the file inbuilt_training_data.bin results in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web claiming that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right - only extern char binary_inbuilt_training_data_bin_start; worked for me.

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table and update each record's detail with the output from a 3rd-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' .accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.
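
    A common reason ADO.NET row-by-row updates crawl against Access is rebuilding the command for every record; reusing one parameterized, prepared command inside a single connection and transaction usually helps dramatically. A hedged sketch of that pattern (the table name, column names, and connection string are made up for illustration):

        using System;
        using System.Data.OleDb;

        class BatchUpdater
        {
            static void Main()
            {
                // The ACE provider reads/writes the 2007 .accdb format.
                const string connStr =
                    "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=names.accdb";

                using (var conn = new OleDbConnection(connStr))
                {
                    conn.Open();
                    using (OleDbTransaction tx = conn.BeginTransaction())
                    using (var cmd = new OleDbCommand(
                        "UPDATE Contacts SET Detail = ? WHERE ID = ?", conn, tx))
                    {
                        OleDbParameter pDetail = cmd.Parameters.Add("Detail", OleDbType.VarWChar, 255);
                        OleDbParameter pId     = cmd.Parameters.Add("ID", OleDbType.Integer);
                        cmd.Prepare(); // compiled once, executed once per record

                        for (int id = 1; id <= 100000; id++) // stand-in for the real record loop
                        {
                            pDetail.Value = "output from the 3rd-party dll"; // hypothetical value
                            pId.Value = id;
                            cmd.ExecuteNonQuery();
                        }
                        tx.Commit();
                    }
                }
            }
        }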

    Read the article

  • Windows store apps: ScrollViewer with dynamic content

    - by Alexandru Circus
    I have a scrollViewer with an ItemsControl (which holds rows with data) as content. The data from these rows is grabbed from the server so I want to display a ProgressRing with a text until the data arrives. Basically I want the content of the ScrollViewer to be a grid with progress ring and a text and after the data arrives the content to be changed with my ItemsControl. The problem is that the ScrollViewer does not accept more than 1 element as content. Please tell me how can I solve this problem. (I'm a C# beginner) <FlipView x:Name="OptionPagesFlipView" Grid.Row="1" TabNavigation="Cycle" SelectionChanged="OptionPagesFlipView_SelectionChanged" ItemsSource="{Binding OptionsPageItems}"> <FlipView.ItemTemplate> <DataTemplate x:Name="OptionMonthPageTemplate"> <ScrollViewer x:Name="OptionsScrollViewer" HorizontalScrollMode="Disabled" HorizontalAlignment="Stretch" VerticalScrollBarVisibility="Auto"> <ItemsControl x:Name="OptionItemsControl" ItemsSource="{Binding OptionItems, Mode=OneWay}" Visibility="Collapsed"> <ItemsControl.ItemTemplate> <DataTemplate x:Name="OptionsChainItemTemplate"> <Grid x:Name="OptionItemGrid" Background="#FF9DBDF7" HorizontalAlignment="Stretch"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <!-- CALL BID --> <TextBlock Text="Bid" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="0" Grid.Column="0" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallBidTextBlock" Text="{Binding CallBid}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="1" Grid.Column="0" Margin="5,0,5,5" FontSize="18"/> <!-- CALL ASK --> <TextBlock Text="Ask" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="2" Grid.Column="0" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallAskTextBlock" Text="{Binding CallAsk}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="3" Grid.Column="0" Margin="5,0,5,0" FontSize="18"/> <!-- CALL LAST --> <TextBlock Text="Last" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="0" Grid.Column="1" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallLastTextBlock" Text="{Binding CallLast}" Foreground="Blue" HorizontalAlignment="Left" Grid.Row="1" Grid.Column="1" Margin="5,0,5,5" FontSize="18"/> <!-- CALL NET CHANGE --> <TextBlock Text="Net Ch" Foreground="Gray" HorizontalAlignment="Left" Grid.Row="2" Grid.Column="1" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="CallNetChTextBlock" Text="{Binding CallNetChange}" Foreground="{Binding CallNetChangeForeground}" HorizontalAlignment="Left" Grid.Row="3" Grid.Column="1" Margin="5,0,5,5" FontSize="18"/> <!-- STRIKE --> <TextBlock Text="Strike" Foreground="Gray" HorizontalAlignment="Center" Grid.Row="1" Grid.Column="2" FontSize="18" Margin="5,0,5,0"/> <Border Background="{Binding StrikeBackground}" HorizontalAlignment="Center" Grid.Row="2" Grid.Column="2" Margin="5,0,5,5"> <TextBlock x:Name="StrikeTextBlock" Text="{Binding Strike}" Foreground="Blue" FontSize="18"/> </Border> <!-- PUT LAST --> <TextBlock Text="Last" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="0" Grid.Column="3" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="PutLastTextBlock" Text="{Binding PutLast}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="1" Grid.Column="3" 
Margin="5,0,5,5" FontSize="18"/> <!-- PUT NET CHANGE --> <TextBlock Text="Net Ch" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="2" Grid.Column="3" FontSize="18" Margin="5,0,5,0"/> <TextBlock x:Name="PutNetChangeTextBlock" Text="{Binding PutNetChange}" Foreground="{Binding PutNetChangeForeground}" HorizontalAlignment="Right" Grid.Row="3" Grid.Column="3" Margin="5,0,5,5" FontSize="18"/> <!-- PUT BID --> <TextBlock Text="Bid" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="0" Grid.Column="4" FontSize="18" Margin="5,0,15,0"/> <TextBlock x:Name="PutBidTextBlock" Text="{Binding PutBid}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="1" Grid.Column="4" Margin="5,0,15,5" FontSize="18"/> <!-- PUT ASK --> <TextBlock Text="Ask" Foreground="Gray" HorizontalAlignment="Right" Grid.Row="2" Grid.Column="4" FontSize="18" Margin="5,0,15,0"/> <TextBlock x:Name="PutAskTextBlock" Text="{Binding PutAsk}" Foreground="Blue" HorizontalAlignment="Right" Grid.Row="3" Grid.Column="4" Margin="5,0,15,5" FontSize="18"/> <!-- BOTTOM LINE SEPARATOR--> <Rectangle Fill="Black" Height="1" Grid.ColumnSpan="5" VerticalAlignment="Bottom" Grid.Row="3"/> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <!--<Grid> <Grid.RowDefinitions> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <ProgressRing x:Name="CustomProgressRing" Height="40" Width="40" IsActive="true" Grid.Column="0" Margin="20" Foreground="White"/> <TextBlock x:Name="CustomTextBlock" Height="auto" Width="auto" FontSize="25" Grid.Column="1" Margin="20"/> <Border BorderBrush="#FFFFFF" BorderThickness="1" Grid.ColumnSpan="2"/> </Grid>--> </ScrollViewer> </DataTemplate> </FlipView.ItemTemplate>
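
    Since ScrollViewer.Content accepts exactly one child, one fix is to make that child a panel (for example a Grid, as in the commented-out block above) holding the progress UI and the ItemsControl together, then toggle Visibility once the data arrives. The same structure sketched in code-behind C# (all names are illustrative):

        // Assumes the usual Windows.UI.Xaml / Windows.UI.Xaml.Controls namespaces
        // of a Windows Store code-behind file.
        private void ShowLoadingState(ScrollViewer optionsScrollViewer, ItemsControl optionItemsControl)
        {
            var progressRing = new ProgressRing { IsActive = true, Width = 40, Height = 40 };
            var loadingText  = new TextBlock { Text = "Loading...", FontSize = 25 };

            // ScrollViewer.Content takes exactly one element, so a Grid hosts all three.
            var host = new Grid();
            host.Children.Add(progressRing);
            host.Children.Add(loadingText);
            host.Children.Add(optionItemsControl); // starts out with Visibility.Collapsed

            optionsScrollViewer.Content = host;
        }

        private void OnDataArrived(ProgressRing progressRing, TextBlock loadingText, ItemsControl optionItemsControl)
        {
            // Once the server data has been bound, swap what is visible.
            progressRing.IsActive = false;
            progressRing.Visibility = Visibility.Collapsed;
            loadingText.Visibility  = Visibility.Collapsed;
            optionItemsControl.Visibility = Visibility.Visible;
        }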

    Read the article

  • Are large include files like iostream efficient? (C++)

    - by Keand64
    iostream, together with all of the files it includes (and the files those include, and so on), adds up to about 3,000 lines. Consider the hello world program, which needs no more functionality than to print something to the screen:

        #include <iostream> // +3000 lines right there

        int main()
        {
            std::cout << "Hello, World!";
            return 0;
        }

    This should be a very simple piece of code, but iostream adds 3,000+ lines to a marginal piece of code. So, are these 3,000+ lines of code really needed to simply display a single line on the screen, and if not, do they create a less efficient program than if I simply copied the relevant lines into the code?

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >