Search Results

Search found 13004 results on 521 pages for 'pretty printing'.


  • Unable to get to remote samba share

    - by tubaguy50035
    I have a remote VPS that I would like to set up Samba on and only allow my IP access to it. I currently have in my smb.conf: [global] netbios name = apollo security = user encrypt passwords = true socket options = TCP_NODELAY printing = bsd log level = 3 log file = /var/log/samba/log/%m debug timestamp = yes max log size = 100 [hosting] path = /hosting/ comment = Hosting Folder browseable = yes read only = yes guest account = yes valid users = nick I have the ports (137,138,139,445) open in iptables (they're open to everyone right now while I debug) and I see nothing in the syslog about iptables blocking my requests. When I try to open a file browser to my address \\ipaddress, it hangs for a good thirty seconds, and then opens a login box. I enter my user name and password for the server, hit okay. It then opens the same box, I enter my credentials again and hit enter. Windows then tells me it could not connect. My user account is added to Samba already. Does anybody have any suggestions for what I can do to get this working?
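
    For illustration, a minimal sketch of how the share could be locked down to a single address with Samba's own hosts allow / hosts deny directives (the IP below is a placeholder, not taken from the question):

        [global]
            hosts allow = 127.0.0.1 203.0.113.7   ; replace with your own IP
            hosts deny  = 0.0.0.0/0

    With that in place the iptables rules become a second line of defense, and testparm will confirm that smb.conf still parses cleanly.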

    Read the article

  • How to change internal page numbers in the meta data of a PDF?

    - by YGA
    I have a pdf document I created through non-Acrobat means (printing to pdf, then merging a bunch of pdfs), but I'd like to manually change the page numbers (i.e. the first several pages are simply title pages, the page that is labeled "page 1" is really the 7th sheet of the pdf). What's the simplest (and ideally, free) way to do this? To be clear, I am not trying to change the numbers on the pages themselves, but the page numbers in the "metadata" that the pdf stores (the pages themselves are already numbered correctly; I just want "go to page 1" to go to the page labeled 1, which could be sheet 7). For what it's worth, I'm on Windows, though I have access to Macs as well.
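
    One possible free route, sketched here as an untested example (it assumes Ghostscript is installed; file names and the label split are placeholders): the numbers used by "go to page" live in the PDF catalog's /PageLabels tree, and that tree can be rewritten by running the file through Ghostscript together with a small pdfmark snippet.

        % pagelabels.ps -- label sheets 1-6 as i..vi, then restart at 1 from sheet 7
        [{Catalog} <<
          /PageLabels <<
            /Nums [
              0 << /S /r >>          % page index 0-5: lowercase roman numerals
              6 << /S /D /St 1 >>    % page index 6 onward: decimal, starting at 1
            ]
          >>
        >> /PUT pdfmark

        gswin64c -o relabeled.pdf -sDEVICE=pdfwrite input.pdf pagelabels.ps

    Other free tools expose the same PageLabels feature through a GUI, if Ghostscript feels too low-level.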

    Read the article

  • How to add an image as a full-page background in Word 2010

    - by Oak
    I'm trying to add an image as a full-page background in Word. I've tried Page Layout -> Page Color -> Fill Effects -> Picture, which looks fine in the preview (though when I try to zoom in or out it no longer looks the same), but when printing it tiled the image instead of just showing it once. I've tried Insert -> Picture and then setting it to "behind text" and setting its location to (0,0), but then when trying to change the image size the "relative" option is greyed out, so I can't set it to 100% of page size. I guess I can set it manually to the page size, but is there another, simpler way to just set a single image as a background?

    Read the article

  • Printer recommendation

    - by Coding District
    Hi guys, I'm looking to buy a printer for home use and I'm not sure which one to get. I'm not very good when it comes to printers. Here's what I'm looking for: cheap (least $ per page) good quality (last longest, any specific brands to avoid?) not heavy printing (let's say ~5 pages per week) OK quality (I don't need "the best". I'm not going to print any photos but will need color) can scan, fax, and print I'm currently looking at these two since it's boxing day tomorrow and they're on sale: http://www.bestbuy.ca/EN-CA/product/id/10155178.aspx http://www.bestbuy.ca/en-CA/product/hewlett-packard-hp-officejet-wireless-all-in-one-inkjet-printer-4500-wl-4500-wl/10146663.aspx?path=14c256643988a02e34424eec10028145en02 Can I get some opinions about the above?

    Read the article

  • parallel to usb driver windows 7 dot matrix printer

    - by user975234
    I recently built a new Core i5 system with Windows 7 Professional. I had planned to hook up my TVS MSP 250 XL printer using a USB to parallel cable. Once I plug the cable in, Windows 7 recognizes it as an IEEE-1284 controller and automatically installs the appropriate driver. However, in the status window it reports the following: "USB Printing Support -- Ready to use" "No Printer Attached -- Ready to use" When I then go ahead and manually add the printer using the "virtual printer port for USB" I can add the printer seemingly without problem. Once finished, it appears in the Devices and Printers panel. Yet, all attempts to print on this printer fail. It appears that simply no data is sent to the printer (either from programs like Word or Adobe, or by attempting to print a test page). Does anybody know how to fix this?

    Read the article

  • How to Split a Big Postscript file (3000 pages) into one individual file per page (using Windows 7)?

    - by Pablo
    Hi, I'm having trouble doing the following: I have a big PDF file that I converted to PostScript (for commercial printing). The resulting file is too big to be processed by the printer (machine). I've been trying to find a way to either: Convert the original (many-page) PDF file into many PostScript files (one PostScript file per PDF page in the original PDF file). Or convert from PDF to PS (or even EPS), which I managed to do, and then split the PS file into a collection of smaller files. I've tried using Ghostscript, but it is all gibberish to me. Thanks. PS. If you have a good GS tutorial (for dummies?), please share the link.
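
    For the first route, here is a hedged sketch (it assumes Ghostscript for Windows is installed, whose console binary is gswin64c or gswin32c; file names are placeholders). Ghostscript can read the PDF directly and write one page at a time using -dFirstPage/-dLastPage:

        rem run from a cmd prompt; inside a .bat file, double the % signs (%%p)
        for /L %p in (1,1,3000) do gswin64c -q -dBATCH -dNOPAUSE -sDEVICE=ps2write -dFirstPage=%p -dLastPage=%p -sOutputFile=page_%p.ps big.pdf

    Each page_N.ps can then be sent to the printer on its own. If the split has to happen on the already-converted PostScript file instead, psselect from the psutils package is the usual tool.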

    Read the article

  • Netgear WNDR3700 2.4GHz and 5GHz Wi-Fi are segregated

    - by Tim Sullivan
    I have a Netgear WNDR3700 with both the 2.4GHz (B/G/N) and 5GHz (A/N) broadcasting. I'd assumed that devices connecting on either channel would work together without issues, but there are some devices (such as my iPhone 4 or HP wireless printer) that will only connect at 2.4GHz, and some that connect at 5GHz. However, it seems that they're segregated on the network, so a computer at 5GHz can't print to the printer, but the iPhone can. This is causing issues not just with printing, but with AppleTV and so on. Is there a way to have the router let these devices all talk to each other?

    Read the article

  • Moving cpanel backup of magento site to VPS

    - by user2564024
    I had my site on shared hosting and took an entire cPanel backup; its structure is like: addons homedir mysql resellerpackages suspendinfo bandwidth homedir_paths mysql.sql sds userconfig counters httpfiles mysql-timestamps sds2 userdata cp locale nobodyfiles shadow va cron logaholic pds shell vad digestshadow logs proftpdpasswd ssl version dnszones meta psql sslcerts vf domainkeys mm quota ssldomain fp mma resellerconfig sslkeys has_sslstorage mms resellerfeatures suspended Now I have subscribed to a VPS and copied the files inside homedir/public_html to /var/www/html of my new hosting, but I am seeing the following error when I view it in a browser: "There has been an error processing your request. Exception printing is disabled by default for security reasons. Error log record number: 259343920016" I have just created a database with the name magenhto inside MySQL. Previously I had cPanel and used a one-click installer, so I am not aware of how to get that data into MySQL on this new system, or whether there are any more changes to make.
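
    A hedged sketch of the usual next steps (database name and paths below are placeholders, not taken from the post): restore the dump that the cPanel backup stores as mysql.sql, point app/etc/local.xml at the new credentials, and re-enable exception printing while debugging (Magento 1.x ships a sample config for that under errors/).

        # import the dump shipped inside the cPanel backup
        mysql -u root -p -e "CREATE DATABASE magento"
        mysql -u root -p magento < mysql.sql

        # show the real exception instead of the report number (Magento 1.x)
        cp /var/www/html/errors/local.xml.sample /var/www/html/errors/local.xml

    If the domain changed, the base URLs stored in core_config_data (web/unsecure/base_url and web/secure/base_url) usually need updating as well, and var/cache should be cleared.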

    Read the article

  • Mac OS X keeps "old" environment variable around

    - by Xymak1y
    So far I had /Applications/play-1.2.5/ added to my $PATH variable. Now I'm working with 2.2.1, which I installed in /Applications/play-2.2.1 and changed in ~/.bash_profile (which is getting sourced at startup). However, when printing $PATH, 1.2.5 is somehow still around: mbp:~ user$ echo $PATH /usr/local/share/npm/bin:/Applications/play-2.2.1:/usr/local/heroku/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Applications/play-1.2.5:/Applications/XAMPP/xamppfiles/bin/:/opt/X11/bin As far as I know, I only entered $PATH variables in .bash_profile, which looks like this: mbp:~ user$ cat .bash_profile source ~/.git-completion.bash ### Added by the Heroku Toolbelt export PATH="/usr/local/heroku/bin:$PATH" ### Play Framework export PATH="/Applications/play-2.2.1:$PATH" export PATH="/usr/local/share/npm/bin:$PATH" I'm also not sure where the XAMPP extension to the variable comes from. Can I see somewhere which other files are being sourced on startup?
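
    A quick sketch for hunting the stray entries down (these are the usual places OS X builds $PATH from; not every file necessarily exists on a given machine):

        # search the files bash normally sources for a login shell
        grep -n "play-1.2.5\|XAMPP" /etc/profile /etc/bashrc ~/.bash_profile ~/.bashrc ~/.profile 2>/dev/null

        # OS X also assembles PATH from these via path_helper, which /etc/profile runs
        cat /etc/paths
        ls /etc/paths.d/
        grep -rn "play-1.2.5\|XAMPP" /etc/paths /etc/paths.d/ 2>/dev/null

    Whichever file the grep hits is the one still being read (directly or via path_helper) before your edited ~/.bash_profile.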

    Read the article

  • Some printers are merged into control panel on Windows 7

    - by Bertrand SCHITS
    I install the same printer several times to allow easy selection for users. They just choose the desired printer to get the corresponding functionality (instead of going to the printer properties and changing them back and forth): one with double-sided printing, one for draft quality, one for the upper tray, etc. On Windows 7, when I install a printer several times, the printer icons in the control panel are merged. I then have only one icon for two or three identical printers. I can right-click on the icon to display the usual menu, and some items of the menu have an arrow to access each printer. This is very misleading for users and admins. I see this behaviour on some computers, while others don't exhibit it. I haven't found what the difference is between computers with and without the problem. Does anyone know how to prevent this?

    Read the article

  • Vagrant and cups port forwarding not working. Not accessible

    - by AAlvz
    I'm working with vagrant and I'm trying to use it as a printing server. I installed cups. Internally everything works just fine. I can even make a quick curl to my localhost:631 (cups port inside my vagrant) and there's everything. The thing is I can't access it in any way I try from the host machine. Obviously I forwarded the port and I've tried with several ports. I've also tried with Debian squeeze and Ubuntu 12.04. Here is my current Vagrantfile Vagrant.configure("2") do |config| config.vm.box = "guruDebian" config.vm.network :forwarded_port, guest: 80, host: 8080 config.vm.network :forwarded_port, guest: 631, host: 6363 ## HERE IS CUPS end Any ideas? ... I'll upload any file if necessary.
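
    One hedged guess worth checking (a sketch of the default config, not the poster's actual file): cupsd normally listens only on localhost and only trusts local connections, so requests arriving through VirtualBox's forwarded port get refused even though the forward itself works. The relevant parts of /etc/cups/cupsd.conf inside the VM would look roughly like this:

        # listen on every interface instead of the default "Listen localhost:631"
        Port 631

        <Location />
          Order allow,deny
          Allow @LOCAL          # or "Allow all" while debugging
        </Location>

        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>

    After editing, restart CUPS inside the guest (sudo service cups restart) and retry curl http://localhost:6363 from the host.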

    Read the article

  • Diagramming Software for a Developer/Designer

    - by Craig Walker
    For a long time I've been looking for a good diagramming/vector-based drawing program that meets my needs as a developer. I'd like to: draw database diagrams, flow charts, object-modeling diagrams (UML being the standard), other free-form diagrams (basically boxes & arrows with the occasional clipart), and mockups of user interfaces and web pages. EDIT: I want good-looking electronic-format diagrams that I can show to 3rd parties, not just something for my own internal use. EDIT 2: I'm also looking for Windows software, although I'm toying with the idea of switching to Mac, so a really good Mac-only product might get me to switch. Basically I need a good vector graphic program (with decent grouping, connecting lines, and ideally auto-routing). I'd prefer a diagramming tool that can also be used for drawing (for the UI mockups) rather than a drawing tool that can also be used for diagrams. I've tried Visio on several occasions, and every time I've been disappointed. The interface always seems to get in my way at some point. It's pretty close to what I want, and the latest version (I got the trial from MS) seems to be better than previous ones in terms of usability, but I really don't want to plunk down that sort of cash for a mediocre product. I've tried Dia and Inkscape, and while initially promising and with the right price tag, I found both of them to be lacking in several ways (including some recurring bugs). I've toyed with getting Adobe Illustrator, but I've never used it before, and I have a feeling that it wouldn't handle the diagramming aspect very well, and I don't want to buy a copy just to find out it doesn't meet my needs. So far, the product that I've had the most success with is, sadly, OpenOffice Draw. It's free of course (which lowers my expectations and thus improves my view of it) and its usability is pretty good, but in the end I'd like something more suited to diagramming. I'm willing to spend real money (in the $500-$1K range) for a really good piece of software if it does everything I want it to. The front runner is of course Visio but I'm hoping for more. Does anybody have any recommendations? CONCLUSION: @dlamblin had the most informative post, but the part I gained the most from was his/her (and others') mention of OmniGraffle, not Gliffy. I gave Gliffy a try, and it seemed neat for occasional use, but since it's a Flash app (note: not AJAX as dlamblin mentioned) it's still a bit of a pain to use (no keyboard shortcuts for copy/paste was pretty much a deal breaker for me). I also tried SmartDraw, but it had 3-strikes-you're-out against it: the trial period was only 7 days long; it used some nonstandard (and visually jarring) GUI widget toolkit for its UI, which at the very least makes me suspicious (how do I know it will actually work & support the standard Windows features?); and it crashed on me early into my trial. OmniGraffle looks like exactly what I want... except that it's Mac-only (so I couldn't give it a try). However, it got good reviews from my Mac-owning coworker, and I hope to try it on a friend's Mac soon. If it's good enough then I might spring for a new MacBook.

    Read the article

  • Adding UL to jQuery UI Tabs

    - by Dave Kiss
    It seems like whenever I try to add a UL inside of the containers when using jQuery UI - Tabs, it breaks the javascript. Is there a way I can use a UL inside of these containers that I am missing? Thanks <div id="tabs"> <div id="fragment-1"> <h4>Pre-Press Requirements</h4> ? SAMPLE of final artwork with noted sizes. ? NATIVE FILES & High Resolution PDF preferred. Files must be created or saved to the listed accepted file formats. ? FONTS used in files need to be supplied in a separate folder marked "fonts". Please ensure all families (screen & printer) fonts are supplied for the job. ? IMAGES must be saved as CMYK and no less than 300dpi. NO RGB FILES! We prefer images to be TIF or EPS formats. If you are submitting artwork for spot color printing vector artwork is preferred. Your images should be supplied in a separate folder marked "links". This will ensure proper reproduction of your artwork. ? COLORS need to be clearly specified. Pantone (PMS) colors preferred. Please specify if job is to be printed CMYK, spot color, etc. ? BLEEDS should be no less than .25" ? PDF's should be High Resolution. Please include any spot colors or CMYK format for full color printing. NO RGB FILES! All bleeds should be included with trim marks. All fonts must be embedded or outlined. No "layered" PDF files. </div> <div id="fragment-2"> <p>Our presses are all capable of sizes up to 11" x 17" using spot color or full color.</p> </div> <div id="fragment-3"> <p>USA Quickprint has complete in house bindery to finish each job to meet your needs.</p> <ul> <li>Folding</li> <li>Scoring</li> <li>Perforation</li> <li>Drilling</li> <li>Shrink Wrap</li> <li>Trimming</li> <li>Collating</li> <li>Spiral Binding</li> <li>GBC Binding</li> <li>Padding</li> <li>Stapling</li> <li>Numbering</li> <li>Lamination</li> </ul> </div>
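
    For reference, a hedged sketch of the structure jQuery UI Tabs of that era expects (tab text and panel contents abridged, anchors assumed): the widget grabs the first ul/ol it finds inside the container as its navigation list, so if that nav list is missing, a content list such as the bindery ul gets hijacked and the script breaks. Keeping the nav ul as the first child lets lists nested inside the panels work normally:

        <div id="tabs">
          <ul>
            <li><a href="#fragment-1">Pre-Press</a></li>
            <li><a href="#fragment-2">Presses</a></li>
            <li><a href="#fragment-3">Bindery</a></li>
          </ul>
          <div id="fragment-1"> <!-- pre-press requirements --> </div>
          <div id="fragment-2"> <!-- press capabilities --> </div>
          <div id="fragment-3">
            <ul>
              <li>Folding</li>
              <li>Scoring</li>
              <!-- ... -->
            </ul>
          </div>
        </div>
        <script>
          $(function () { $("#tabs").tabs(); });
        </script>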

    Read the article

  • JPA @Version behaviour

    - by Albert Kam
    Hello, im using JPA2 with Hibernate 3.6.x I have made a simple testing on the @Version. Let's say we have 2 entities, Entity Team has a List of Player Entities, bidirectional relationship, lazy fetchtype, cascade-type All Both entities have @Version And here are the scenarios : Whenever a modification is made to one of the team/player entity, the team/player's version will be increased when flushed/commited (version on the modified record is increased). Adding a new player entity to team's collection using persist, the entity the team's version will be assigned after persist (adding a new entity, that new entity will got it's version). Whenever an addition/modification/removal is made to one of the player entity, the team's version will be increased when flushed/commited. (add/modify/remove child record, parent's version got increased also) I can understand the number 1 and 2, but the number 3, i dont understand, why the team's version got increased ? And that makes me think of other questions : What if i got Parent <- child <- granchildren relation ship. Will an addition or modification on the grandchildren increase the version of child and parent ? In scenario number 2, how can i get the version on the team before it's commited, like perhaps by using flush ? Is it a recommended way to get the parent's version after we do something to the child[s] ? Here's a code sample from my experiment, proving that when ReceivingGoodDetail is the owning side, and the version got increased in the ReceivingGood after flushing. Sorry that this use other entities, but ReceivingGood is like the Team, ReceivingGoodDetail is like the Player. 1 ReceivingGood/Team, many ReceivingGoodDetail/Player. /* Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, .. too long Hibernate: select product0_.id as id0_4_, product0_.creationDate as creation2_0_4_, .. too long before persisting the new detail, version of header is : 14 persisting the detail 1c9f81e1-8a49-4189-83f5-4484508e71a7 printing the size of the header : Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, details0_.id as id10_7_, .. too long 7 after persisting the new detail, version of header is : 14 Hibernate: insert into ReceivingGoodDetail (creationDate, modificationDate, usercreate_id, usermodify_id, version, buyQuantity, buyUnit, internalQuantity, internalUnit, product_id, receivinggood_id, supplierLotNumber, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=? 
after flushing, version of header is now : 15 */ public void addDetailWithoutTouchingCollection() { String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f"; ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId); // create a new detail ReceivingGoodDetail receivingGoodDetailCumi = new ReceivingGoodDetail(); receivingGoodDetailCumi.setBuyUnit("Drum"); receivingGoodDetailCumi.setBuyQuantity(1L); receivingGoodDetailCumi.setInternalUnit("Liter"); receivingGoodDetailCumi.setInternalQuantity(10L); receivingGoodDetailCumi.setProduct(getProduct("b3e83b2c-d27b-4572-bf8d-ac32f6de5eaa")); receivingGoodDetailCumi.setSupplierLotNumber("Supplier Lot 1"); decorateEntity(receivingGoodDetailCumi, getUser("3978fee3-9690-4377-84bd-9fb05928a6fc")); receivingGoodDetailCumi.setReceivingGood(receivingGood); System.out.println("before persisting the new detail, version of header is : " + receivingGood.getVersion()); // persist it System.out.println("persisting the detail " + receivingGoodDetailCumi.getId()); em.persist(receivingGoodDetailCumi); System.out.println("printing the size of the header : "); System.out.println(receivingGood.getDetails().size()); System.out.println("after persisting the new detail, version of header is : " + receivingGood.getVersion()); em.flush(); System.out.println("after flushing, version of header is now : " + receivingGood.getVersion()); }
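
    On question 2, a minimal sketch (entity names reused from the post above, everything else an assumption): flushing inside the transaction is enough to make the owner's new @Version value visible before the commit.

        em.getTransaction().begin();

        ReceivingGood header = em.find(ReceivingGood.class, headerId);
        ReceivingGoodDetail detail = new ReceivingGoodDetail();
        detail.setReceivingGood(header);
        em.persist(detail);

        em.flush();                               // INSERT plus the version UPDATE run here
        System.out.println(header.getVersion());  // already shows the incremented value

        em.getTransaction().commit();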

    Read the article

  • destructor and copy-constructor calling..(why does it get called at these times)

    - by sil3nt
    Hello there, I have the following code #include <iostream> using namespace std; class Object { public: Object(int id){ cout << "Construct(" << id << ")" << endl; m_id = id; } Object(const Object& obj){ cout << "Copy-construct(" << obj.m_id << ")" << endl; m_id = obj.m_id; } Object& operator=(const Object& obj){ cout << m_id << " = " << obj.m_id << endl; m_id = obj.m_id; return *this; } ~Object(){ cout << "Destruct(" << m_id << ")" << endl; } private: int m_id; }; Object func(Object var) { return var; } int main(){ Object v1(1); cout << "( a )" << endl; Object v2(2); v2 = v1; cout << "( b )" << endl; Object v4 = v1; Object *pv5; pv5 = &v1; pv5 = new Object(5); cout << "( c )" << endl; func(v1); cout << "( d )" << endl; delete pv5; } which outputs Construct(1) ( a ) Construct(2) 2 = 1 ( b ) Copy-construct(1) Construct(5) ( c ) Copy-construct(1) Copy-construct(1) Destruct(1) Destruct(1) ( d ) Destruct(5) Destruct(1) Destruct(1) Destruct(1) I have some issues with this, firstly why does Object v4 = v1; call the copy constructor and produce Copy-construct(1) after the printing of ( b ). Also after the printing of ( c ) the copy-constructor is again called twice?, Im not certain of how this function works to produce that Object func(Object var) { return var; } and just after that Destruct(1) gets called twice before ( d ) is printed. sorry for the long question, I'm confused with the above.
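
    Two short notes, plus a self-contained sketch (a stand-in class, not the Object class from the question). Object v4 = v1; initializes a brand-new object, so it uses the copy constructor rather than operator= (assignment only applies to an already-constructed object, as with v2 = v1). And func(Object var) takes its parameter by value and returns by value, so one copy builds the parameter and a second copy builds the returned temporary, which is why Copy-construct(1) prints twice before ( d ); passing and returning by reference avoids both copies:

        #include <iostream>

        // Minimal stand-in that just counts copies.
        struct Traced {
            Traced() {}
            Traced(const Traced&) { std::cout << "copy\n"; }
        };

        Traced by_value(Traced t) { return t; }              // copy in, copy out
        const Traced& by_ref(const Traced& t) { return t; }  // no copies

        int main() {
            Traced t;
            by_value(t);   // prints "copy" twice: parameter + return value
            by_ref(t);     // prints nothing
        }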

    Read the article

  • How to manipulate data after its retrieved via remote database

    - by bMon
    So I've used code examples from all over the net and got my app to accurately call a .php file on my server, retrieve the JSON data, then parse the data, and print it. The problem is that its just printing to the screen for sake of the tutorial I was following, but now I need to use that data in other places and need help figuring out that process. The ultimate goal is to return my db query with map coordinates, then plot them on a google map. I have another app in which I manually plot points on a map, so I'll be integrating this app with that once I can get my head around how to correctly manipulate the data returned. public class Remote extends Activity { /** Called when the activity is first created. */ TextView txt; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); // Create a crude view - this should really be set via the layout resources // but since its an example saves declaring them in the XML. LinearLayout rootLayout = new LinearLayout(getApplicationContext()); txt = new TextView(getApplicationContext()); rootLayout.addView(txt); setContentView(rootLayout); // Set the text and call the connect function. txt.setText("Connecting..."); //call the method to run the data retreival txt.setText(getServerData(KEY_121)); } public static final String KEY_121 = "http://example.com/mydbcall.php"; private String getServerData(String returnString) { InputStream is = null; String result = ""; //the year data to send //ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); //nameValuePairs.add(new BasicNameValuePair("year","1970")); try{ HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost(KEY_121); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); is = entity.getContent(); }catch(Exception e){ Log.e("log_tag", "Error in http connection "+e.toString()); } //convert response to string try{ BufferedReader reader = new BufferedReader(new InputStreamReader(is,"iso-8859-1"),8); StringBuilder sb = new StringBuilder(); String line = null; while ((line = reader.readLine()) != null) { sb.append(line + "\n"); } is.close(); result=sb.toString(); }catch(Exception e){ Log.e("log_tag", "Error converting result "+e.toString()); } //parse json data try{ JSONArray jArray = new JSONArray(result); for(int i=0;i<jArray.length();i++){ JSONObject json_data = jArray.getJSONObject(i); Log.i("log_tag","longitude: "+json_data.getDouble("longitude")+ ", latitude: "+json_data.getDouble("latitude") ); //Get an output to the screen returnString += "\n\t" + jArray.getJSONObject(i); } }catch(JSONException e){ Log.e("log_tag", "Error parsing data "+e.toString()); } return returnString; } } So the code: returnString += "\n\t" + jArray.getJSONObject(i); is what is currently printing to the screen. What I have to figure out is how to get the data into something I can reference in other spots in the program, and access the individual elements ie: double longitude = jArray.getJSONObject(3).longitude; or something to that effect.. I figure the class getServerData will have to return a Array type or something? Any help is appreciated, thanks.
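
    A hedged sketch of one way to do that (class and method names here are made up): parse the JSON into a small list of value objects instead of building a display string, and hand that list to whatever plots the map.

        import java.util.ArrayList;
        import java.util.List;
        import org.json.JSONArray;
        import org.json.JSONException;
        import org.json.JSONObject;
        import android.util.Log;

        // MapPoint.java -- simple value object for one coordinate pair
        class MapPoint {
            public final double latitude;
            public final double longitude;
            public MapPoint(double latitude, double longitude) {
                this.latitude = latitude;
                this.longitude = longitude;
            }
        }

        // inside the activity: turn the server response into reusable objects
        private List<MapPoint> parsePoints(String json) {
            List<MapPoint> points = new ArrayList<MapPoint>();
            try {
                JSONArray jArray = new JSONArray(json);
                for (int i = 0; i < jArray.length(); i++) {
                    JSONObject row = jArray.getJSONObject(i);
                    points.add(new MapPoint(row.getDouble("latitude"),
                                            row.getDouble("longitude")));
                }
            } catch (JSONException e) {
                Log.e("log_tag", "Error parsing data " + e.toString());
            }
            return points;
        }

    getServerData() would then return this List<MapPoint> (or the raw JSON string) rather than the concatenated display text, so the map code can read point.latitude and point.longitude directly.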

    Read the article

  • Export data to Excel from Silverlight/WPF DataGrid

    - by outcoldman
    Exporting data from a DataGrid to Excel is a very common task, and it can be solved in different ways; the right choice depends on the kind of app you are designing. If you are developing an app for the enterprise that will be installed on several computers, you can impose system requirements that the client machines must meet, or the customer will dictate the environment your app has to work in. In that case you can use COM for the export (driving the infrastructure of Excel or OpenOffice). This approach gives you much more flexibility and lets you use all the features of the Excel application; I'll talk about it below. The other case: your app is for personal use and can be installed on any home computer, so it is not reasonable to ask the user to install MS Office or OpenOffice just to use your app. Here you can use third-party tools for export, or export to an XML/HTML format that MS Office can read (the approach used by JIRA). But then it is more difficult to satisfy user tasks such as creating a document with landscape orientation and defined margins for printing. In this article I'll show you how to work with the Excel object from .NET 4 and Silverlight 4 using dynamic objects, and give you an approach that allows you to export data from the Silverlight and WPF DataGrid controls. Read more...
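
    The COM route boils down to a few lines once dynamic binding is available; here is a rough sketch (it assumes Excel is installed on the client, and in Silverlight 4 it additionally requires an out-of-browser app with elevated trust; names and the cell layout are illustrative only):

        // using System.Runtime.InteropServices.Automation;  (Silverlight 4)
        dynamic excel = AutomationFactory.CreateObject("Excel.Application");
        // WPF / full .NET 4 equivalent:
        // dynamic excel = Activator.CreateInstance(Type.GetTypeFromProgID("Excel.Application"));

        excel.Visible = true;
        dynamic workbook = excel.Workbooks.Add();
        dynamic sheet = workbook.ActiveSheet;

        sheet.Cells[1, 1] = "Name";      // header row
        sheet.Cells[1, 2] = "Amount";
        sheet.Cells[2, 1] = "Widget";    // data rows would come from the DataGrid's ItemsSource
        sheet.Cells[2, 2] = 42;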

    Read the article

  • Webby Nominations for Retail

    - by David Dorf
    The Webby Awards were created back in 1996 when the internet was just a baby. This is their 14th year of highlighting excellence on the Web, and there are lots of great nominations. It's quite amazing to see the rich content and interactivity of today's websites. Some interesting nominees for this year are: Sephora did a campaign at Christmas, and what remains of the Sephora Clause website is a bunch of wishes. The Starbucks "All you need is Love" campaign has lots of cool videos. The Sound Check from Walmart highlights rising music artists. Refinery29 has their fashion info hub. The five nominees in the retail category are: Bugaboo.com's website for selling high-end baby strollers. If you're looking for a high-end bag, check out Crumpler's flash-based e-commerce site. It's highly interactive, but a little on the slow side. I Make My Case sells custom designed iPod/iPhone and Blackberry cases. At MOO.com, they love to print. Tons of art for printing customized business cards, post cards, etc. If you want light shoes, check out Puma L.I.F.T. and see just how light shoes can be. Check them out, cast a few votes, and see if you're inspired to create something even better.

    Read the article

  • ASP.NET GZip Encoding Caveats

    - by Rick Strahl
    GZip encoding in ASP.NET is pretty easy to accomplish using the built-in GZipStream and DeflateStream classes and applying them to the Response.Filter property.  While applying GZip and Deflate behavior is pretty easy there are a few caveats that you have watch out for as I found out today for myself with an application that was throwing up some garbage data. But before looking at caveats let’s review GZip implementation for ASP.NET. ASP.NET GZip/Deflate Basics Response filters basically are applied to the Response.OutputStream and transform it as data is written to it through the ASP.NET Response object. So a Response.Write eventually gets written into the output stream which if a filter is also written through the filter stream’s interface. To perform the actual GZip (and Deflate) encoding typically used by Web pages .NET includes the GZipStream and DeflateStream stream classes which can be readily assigned to the Repsonse.OutputStream. With these two stream classes in place it’s almost trivially easy to create a couple of reusable methods that allow you to compress your HTTP output. In my standard WebUtils utility class (from the West Wind West Wind Web Toolkit) created two static utility methods – IsGZipSupported and GZipEncodePage – that check whether the client supports GZip encoding and then actually encodes the current output (note that although the method includes ‘Page’ in its name this code will work with any ASP.NET output). /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter /// IMPORTANT: /// You have to call this method before any output is generated! /// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.Headers.Remove("Content-Encoding"); Response.AppendHeader("Content-Encoding", "gzip"); } } } As you can see the actual assignment of the Filter is as simple as: Response.Filter = new DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); which applies the filter to the OutputStream. You also need to ensure that your response reflects the new GZip or Deflate encoding and ensure that any pages that are cached in Proxy servers can differentiate between pages that were encoded with the various different encodings (or no encoding). 
To use this utility function now is trivially easy: In any ASP.NET code that wants to compress its Response output you simply use: protected void Page_Load(object sender, EventArgs e) { WebUtils.GZipEncodePage(); Entry = WebLogFactory.GetEntry(); var entries = Entry.GetLastEntries(App.Configuration.ShowEntryCount, "pk,Title,SafeTitle,Body,Entered,Feedback,Location,ShowTopAd", "TEntries"); if (entries == null) throw new ApplicationException("Couldn't load WebLog Entries: " + Entry.ErrorMessage); this.repEntries.DataSource = entries; this.repEntries.DataBind(); } Here I use an ASP.NET page, but the above WebUtils.GZipEncode() method call will work in any ASP.NET application type including HTTP Handlers. The only requirement is that the filter needs to be applied before any other output is sent to the OutputStream. For example, in my CallbackHandler service implementation by default output over a certain size is GZip encoded. The output that is generated is JSON or XML and if the output is over 5k in size I apply WebUtils.GZipEncode(): if (sbOutput.Length > GZIP_ENCODE_TRESHOLD) WebUtils.GZipEncodePage(); Response.ContentType = ControlResources.STR_JsonContentType; HttpContext.Current.Response.Write(sbOutput.ToString()); Ok, so you probably get the idea: Encoding GZip/Deflate content is pretty easy. Hold on there Hoss –Watch your Caching Or is it? There are a few caveats that you need to watch out for when dealing with GZip content. The fist issue is that you need to deal with the fact that some clients don’t support GZip or Deflate content. Most modern browsers support it, but if you have a programmatic Http client accessing your content GZip/Deflate support is by no means guaranteed. For example, WinInet Http clients don’t support GZip out of the box – it has to be explicitly implemented. Other low level HTTP clients on other platforms too don’t support GZip out of the box. The problem is that your application, your Web Server and Proxy Servers on the Internet might be caching your generated content. If you return content with GZip once and then again without, either caching is not applied or worse the wrong type of content is returned back to the client from a cache or proxy. The result is an unreadable response for *some clients* which is also very hard to debug and fix once in production. You already saw the issue of Proxy servers addressed in the GZipEncodePage() function: // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Content-Encoding"); This ensures that any Proxy servers also check for the Content-Encoding HTTP Header to cache their content – not just the URL. The same thing applies if you do OutputCaching in your own ASP.NET code. If you generate output for GZip on an OutputCached page the GZipped content will be cached (either by ASP.NET’s cache or in some cases by the IIS Kernel Cache). But what if the next client doesn’t support GZip? She’ll get served a cached GZip page that won’t decode and she’ll get a page full of garbage. Wholly undesirable. 
To fix this you need to add some custom OutputCache rules by way of the GetVaryByCustom() HttpApplication method in your global_ASAX file: public override string GetVaryByCustomString(HttpContext context, string custom) { // Override Caching for compression if (custom == "GZIP") { string acceptEncoding = HttpContext.Current.Response.Headers["Content-Encoding"]; if (string.IsNullOrEmpty(acceptEncoding)) return ""; else if (acceptEncoding.Contains("gzip")) return "GZIP"; else if (acceptEncoding.Contains("deflate")) return "DEFLATE"; return ""; } return base.GetVaryByCustomString(context, custom); } In a page that use Output caching you then specify: <%@ OutputCache Duration="180" VaryByParam="none" VaryByCustom="GZIP" %> To use that custom rule. It’s all Fun and Games until ASP.NET throws an Error Ok, so you’re up and running with GZip, you have your caching squared away and your pages that you are applying it to are jamming along. Then BOOM, something strange happens and you get a lovely garbled page that look like this: Lovely isn’t it? What’s happened here is that I have WebUtils.GZipEncode() applied to my page, but there’s an error in the page. The error falls back to the ASP.NET error handler and the error handler removes all existing output (good) and removes all the custom HTTP headers I’ve set manually (usually good, but very bad here). Since I applied the Response.Filter (via GZipEncode) the output is now GZip encoded, but ASP.NET has removed my Content-Encoding header, so the browser receives the GZip encoded content without a notification that it is encoded as GZip. The result is binary output. Here’s what Fiddler says about the raw HTTP header output when an error occurs when GZip encoding was applied: HTTP/1.1 500 Internal Server Error Cache-Control: private Content-Type: text/html; charset=utf-8 Date: Sat, 30 Apr 2011 22:21:08 GMT Content-Length: 2138 Connection: close ?`I?%&/m?{J?J??t??` … binary output striped here Notice: no Content-Encoding header and that’s why we’re seeing this garbage. ASP.NET has stripped the Content-Encoding header but left our filter intact. So how do we fix this? In my applications I typically have a global Application_Error handler set up and in this case I’ve been using that. One thing that you can do in the Application_Error handler is explicitly clear out the Response.Filter and set it to null at the top: protected void Application_Error(object sender, EventArgs e) { // Remove any special filtering especially GZip filtering Response.Filter = null; … } And voila I get my Yellow Screen of Death or my custom generated error output back via uncompressed content. BTW, the same is true for Page level errors handled in Page_Error or ASP.NET MVC Error handling methods in a controller. 
Another and possibly even better solution is to check whether a filter is attached just before the headers are sent to the client as pointed out by Adam Schroeder in the comments: protected void Application_PreSendRequestHeaders() { // ensure that if GZip/Deflate Encoding is applied that headers are set // also works when error occurs if filters are still active HttpResponse response = HttpContext.Current.Response; if (response.Filter is GZipStream && response.Headers["Content-encoding"] != "gzip") response.AppendHeader("Content-encoding", "gzip"); else if (response.Filter is DeflateStream && response.Headers["Content-encoding"] != "deflate") response.AppendHeader("Content-encoding", "deflate"); } This uses the Application_PreSendRequestHeaders() pipeline event to check for compression encoding in a filter and adjusts the content accordingly. This is actually a better solution since this is generic – it’ll work regardless of how the content is cleaned up. For example, an error Response.Redirect() or short error display might get changed and the filter not cleared and this code actually handles that. Sweet, thanks Adam. It’s unfortunate that ASP.NET doesn’t natively clear out Response.Filters when an error occurs just as it clears the Response and Headers. I can’t see where leaving a Filter in place in an error situation would make any sense, but hey - this is what it is and it’s easy enough to fix as long as you know where to look. Riiiight! IIS and GZip I should also mention that IIS 7 includes good support for compression natively. If you can defer encoding to let IIS perform it for you rather than doing it in your code by all means you should do it! Especially any static or semi-dynamic content that can be made static should be using IIS built-in compression. Dynamic caching is also supported but is a bit more tricky to judge in terms of performance and footprint. John Forsyth has a great article on the benefits and drawbacks of IIS 7 compression which gives some detailed performance comparisons and impact reviews. I’ll post another entry next with some more info on IIS compression since information on it seems to be a bit hard to come by. Related Content Built-in GZip/Deflate Compression in IIS 7.x HttpWebRequest and GZip Responses © Rick Strahl, West Wind Technologies, 2005-2011Posted in ASP.NET   IIS7  

    Read the article

  • Back to Basics: When does a .NET Assembly Dependency get loaded

    - by Rick Strahl
    When we work on typical day to day applications, it's easy to forget some of the core features of the .NET framework. For me personally it's been a long time since I've learned about some of the underlying CLR system level services even though I rely on them on a daily basis. I often think only about high level application constructs and/or high level framework functionality, but the low level stuff is often just taken for granted. Over the last week at DevConnections I had all sorts of low level discussions with other developers about the inner workings of this or that technology (especially in light of my Low Level ASP.NET Architecture talk and the Razor Hosting talk). One topic that came up a couple of times and ended up a point of confusion even amongst some seasoned developers (including some folks from Microsoft <snicker>) is when assemblies actually load into a .NET process. There are a number of different ways that assemblies are loaded in .NET. When you create a typical project assemblies usually come from: The Assembly reference list of the top level 'executable' project The Assembly references of referenced projects Dynamically loaded at runtime via AppDomain/Reflection loading In addition .NET automatically loads mscorlib (most of the System namespace) the boot process that hosts the .NET runtime in EXE apps, or some other kind of runtime hosting environment (runtime hosting in servers like IIS, SQL Server or COM Interop). In hosting environments the runtime host may also pre-load a bunch of assemblies on its own (for example the ASP.NET host requires all sorts of assemblies just to run itself, before ever routing into your user specific code). Assembly Loading The most obvious source of loaded assemblies is the top level application's assembly reference list. You can add assembly references to a top level application and those assembly references are then available to the application. In a nutshell, referenced assemblies are not immediately loaded - they are loaded on the fly as needed. So regardless of whether you have an assembly reference in a top level project, or a dependent assembly assemblies typically load on an as needed basis, unless explicitly loaded by user code. The same is true of dependent assemblies. To check this out I ran a simple test: I have a utility assembly Westwind.Utilities which is a general purpose library that can work in any type of project. Due to a couple of small requirements for encoding and a logging piece that allows logging Web content (dependency on HttpContext.Current) this utility library has a dependency on System.Web. Now System.Web is a pretty large assembly and generally you'd want to avoid adding it to a non-Web project if it can be helped. So I created a Console Application that loads my utility library: You can see that the top level Console app a reference to Westwind.Utilities and System.Data (beyond the core .NET libs). The Westwind.Utilities project on the other hand has quite a few dependencies including System.Web. I then add a main program that accesses only a simple utillity method in the Westwind.Utilities library that doesn't require any of the classes that access System.Web: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); } StringUtils.NewStringId() calls into Westwind.Utilities, but it doesn't rely on System.Web. Any guesses what the assembly list looks like when I stop the code on the ReadLine() command? 
I'll wait here while you think about it… … … So, when I stop on ReadLine() and then fire up Process Explorer and check the assembly list I get: We can see here that .NET has not actually loaded any of the dependencies of the Westwind.Utilities assembly. Also not loaded is the top level System.Data reference even though it's in the dependent assembly list of the top level project. Since this particular function I called only uses core System functionality (contained in mscorlib) there's in fact nothing else loaded beyond the main application and my Westwind.Utilities assembly that contains the method accessed. None of the dependencies of Westwind.Utilities loaded. If you were to open the assembly in a disassembler like Reflector or ILSpy, you would however see all the compiled in dependencies. The referenced assemblies are in the dependency list and they are loadable, but they are not immediately loaded by the application. In other words the C# compiler and .NET linker are smart enough to figure out the dependencies based on the code that actually is referenced from your application and any dependencies cascading down into the dependencies from your top level application into the referenced assemblies. In the example above the usage requirement is pretty obvious since I'm only calling a single static method and then exiting the app, but in more complex applications these dependency relationships become very complicated - however it's all taken care of by the compiler and linker figuring out what types and members are actually referenced and including only those assemblies that are in fact referenced in your code or required by any of your dependencies. The good news here is: That if you are referencing an assembly that has a dependency on something like System.Web in a few places that are not actually accessed by any of your code or any dependent assembly code that you are calling, that assembly is never loaded into memory! Some Hosting Environments pre-load Assemblies The load behavior can vary however. In Console and desktop applications we have full control over assembly loading so we see the core CLR behavior. However other environments like ASP.NET for example will preload referenced assemblies explicitly as part of the startup process - primarily to minimize load conflicts. Specifically ASP.NET pre-loads all assemblies referenced in the assembly list and the /bin folder. So in Web applications it definitely pays to minimize your top level assemblies if they are not used. Understanding when Assemblies Load To clarify and see it actually happen what I described in the first example , let's look at a couple of other scenarios. To see assemblies loading at runtime in real time lets create a utility function to print out loaded assemblies to the console: public static void PrintAssemblies() { var assemblies = AppDomain.CurrentDomain.GetAssemblies(); foreach (var assembly in assemblies) { Console.WriteLine(assembly.GetName()); } } Now let's look at the first scenario where I have class method that references internally uses System.Web. 
In the first scenario lets add a method to my main program like this: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.ReadLine(); PrintAssemblies(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } UpdateFromWebRequest() internally accesses HttpContext.Current to read some information of the ASP.NET Request object so it clearly needs a reference System.Web to work. In this first example, the method that holds the calling code is never called, but exists as a static method that can potentially be called externally at some point. What do you think will happen here with the assembly loading? Will System.Web load in this example? No - it doesn't. Because the WebLogEntry() method is never called by the mainline application (or anywhere else) System.Web is not loaded. .NET dynamically loads assemblies as code that needs it is called. No code references the WebLogEntry() method and so System.Web is never loaded. Next, let's add the call to this method, which should trigger System.Web to be loaded because a dependency exists. Let's change the code to: static void Main(string[] args) { Console.WriteLine(StringUtils.NewStringId()); Console.WriteLine("--- Before:"); PrintAssemblies(); WebLogEntry(); Console.WriteLine("--- After:"); PrintAssemblies(); Console.ReadLine(); } public static void WebLogEntry() { var entry = new WebLogEntry(); entry.UpdateFromRequest(); Console.WriteLine(entry.QueryString); } Looking at the code now, when do you think System.Web will be loaded? Will the before list include it? Yup System.Web gets loaded, but only after it's actually referenced. In fact, just until before the call to UpdateFromRequest() System.Web is not loaded - it only loads when the method is actually called and requires the reference in the executing code. Moral of the Story So what have we learned - or maybe remembered again? Dependent Assembly References are not pre-loaded when an application starts (by default) Dependent Assemblies that are not referenced by executing code are never loaded Dependent Assemblies are just in time loaded when first referenced in code All of this is nothing new - .NET has always worked like this. But it's good to have a refresher now and then and go through the exercise of seeing it work in action. It's not one of those things we think about everyday, and as I found out last week, I couldn't remember exactly how it worked since it's been so long since I've learned about this. And apparently I'm not the only one as several other people I had discussions with in relation to loaded assemblies also didn't recall exactly what should happen or assumed incorrectly that just having a reference automatically loads the assembly. The moral of the story for me is: Trying at all costs to eliminate an assembly reference from a component is not quite as important as it's often made out to be. For example, the Westwind.Utilities module described above has a logging component, including a Web specific logging entry that supports pulling information from the active HTTP Context. Adding that feature requires a reference to System.Web. Should I worry about this in the scope of this library? Probably not, because if I don't use that one class of nearly a hundred, System.Web never gets pulled into the parent process. 
    IOW, System.Web only loads when I use that specific feature and if I am, well I clearly have to be running in a Web environment anyway to use it realistically. The alternative would be considerably uglier: Pulling out the WebLogEntry class and sticking it into another assembly and breaking up the logging code. In this case - definitely not worth it. So, .NET definitely goes through some pretty nifty optimizations to ensure that it loads only what it needs and in most cases you can just rely on .NET to do the right thing. Sometimes though assembly loading can go wrong (especially when signed and versioned local assemblies are involved), but that's subject for a whole other post… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, CSharp

    Read the article

  • Ask How-To Geek: iPad Battery Life, Batch Resizing Photos, and Syncing Massive Music Collections

    - by Jason Fitzpatrick
    Christmas was good to many of you and now you’ve got all sorts of tech questions related to your holiday spoils. Come on in and we’ll clear up how to squeeze more life out of your iPad, resize all those photos, and sync massive music collections to mobile devices. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Read the article

  • virtualbox, MAAS: help needed

    - by Roberto Attias
    Ok, I made some progress wrt the original question (still below). I found /etc/maas/dhcpd.conf contained option domain-name-servers 10.0.3.15, and changed it to 192.168.0.11. After restarting the daemon, I now see "node" getting the right DNS; unfortunately this doesn't fix the main problem, which I believe is the reference to 169.254.169.254. It does introduce a new question: while the remaining information from /etc/maas/dhcp.conf is present in the MAAS GUI, there is no field to enter the DNS address. Why? Anyway, my original problem still stands... Any idea? Original question follows. In VirtualBox, I have: master VM: ubuntu 12.04.3 server eth0: Internal Network, IP= 192.168.0.11 eth1: NAT, IP= 10.0.3.15 eth2: Host-only, IP= 192.168.56.102 running MAAS region and cluster controller, with DHCP and DNS enabled node VM: eth0: Internal Network The node VM boots via PXE. DHCP succeeds, and the boot process starts, but during boot I see some issues. One of them is "disk drive not ready yet or not present" for / and /tmp. I've googled this issue, and some people say it happens when the physical disk is an SSD, which is my case. Anyway, the system seems to recover from this eventually. Immediately after, it starts printing a lot of messages of the form: 2013-10-01 16:52:37,142 - url_helper.py[WARNING]: Calling 'http://169.254.168.254/2009-04-04/meta-data/instance-id failed [x/y]: url error [[Errno 113] No route to host] That IP address is clearly bogus, not sure where it came from. Before that point, I had seen the following network configuration: address: 192.168.0.100 broadcast: 192.168.0.255 netmask: 255.255.255.0 gateway: 192.168.0.1 dns0 : 10.0.3.15 dns1 : 0.0.0.0 Not sure if related, but the DNS doesn't seem right, as the node doesn't have an interface to reach 10.0.3.15. If that's the problem, what should I change to have the DNS point to 192.168.0.11? Thanks, Roberto

    Read the article

  • How to Create Quality Photo Prints With Free Software

    - by Eric Z Goodnight
    Photoshop may be the professional standard for high quality photo prints, but that doesn’t mean you have to pay hundreds of dollars for printing software. Freeware program Google Picasa can create excellent quality photo prints that’ll only cost you a download. Picasa and other freeware graphics programs are hardly news to savvy geeks, although with a little patience, they can produce quality prints few could tell apart from thousand dollar graphics suites. Stay tuned for links to various graphics programs, and a simple how-to on getting the perfect print settings for your photos.

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspxMicrosoft is in the process of reinventing itself. While some may argue that it’s “too little, too late” or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a “devices and services” company, that’s not entirely accurate. Let’s look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions: Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue. Server and Tools, which for FY13 made up approximately 26% of overall revenue. Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue. Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue. Online Services, which for FY13 made up approximately 4% of overall revenue. It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also fall under the Entertainment and Devices division.) It’s also unclear where Microsoft’s recent acquisition of Nokia’s handset division will fall, but let’s assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let’s assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it: Consumer Products and Devices, which would probably make up approximately 9% of overall revenue. Developer Tools, which would probably make up approximately 13% of overall revenue. Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue. Entertainment, which would probably make up approximately 13% of overall revenue. Online Services, which would probably make up approximately 17% of overall revenue. (Just so we’re clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I’m assuming that more of the income for the Windows division is coming from enterprise/volume licenses so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division so half of the Server and Tools income went there as well.) No matter how you look at it, the bulk of Microsoft’s income still comes from not just the enterprise but also software sales, and this really shouldn’t surprise anyone. So, now that the stage is set…what’s the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut-feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let’s look at it a different way. Microsoft is an enterprise-focused company trying to create a larger consumer presence.  To a large extent, this is the exact opposite of Apple, who is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the “bring your own device” phenomenon. 
    Yes, Apple has created some “game-changing” products, but its enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple reports revenue by hardware product rather than by business division, so it’s not possible to do a direct comparison. However, in the interest of transparency, from Apple’s quarterly report (filed 24 July 2013), the revenue breakdown for the 3 months ending 29 June 2013 is:
    - iPhone: approximately 51% of revenue.
    - iPad: approximately 18% of revenue.
    - Mac: approximately 14% of revenue.
    - iPod: approximately 2% of revenue.
    - iTunes, Software, and Services: approximately 11% of revenue.
    - Accessories: approximately 3% of revenue.
    From this, it’s pretty clear that Apple is a consumer-and-hardware-focused company.
    At this point, you may be asking yourself, “Where is all of this going?” The answer lies in Microsoft’s shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least the one that’s been in the news lately) is the pending purchase of Nokia’s handset division. This, in combination with the Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split toward revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side).
    If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems:
    - Windows Phone
      - Windows Phone 7.x, which runs on top of the Windows CE kernel
      - Windows Phone 8.x+, which runs on top of the Windows 8 kernel
    - Windows RT
      - The ARM-based version of Windows 8, which runs on top of the Windows 8 kernel
    - Windows (Pro)
      - The Intel-based version of Windows 8, which runs on top of the Windows 8 kernel
    - Xbox
      - The Xbox 360, which runs its own proprietary OS.
      - The Xbox One, which runs its own proprietary OS, a version of Windows running on top of the Windows 8 kernel, and a proprietary “manager” OS which manages the other two.
    Over time, Windows Phone 7.x devices will fade, so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting picture. Right now, all mobile phone devices run on some sort of ARM chip, and that doesn’t look like it will change any time soon. That means Microsoft has two different Windows-based operating systems for the ARM platform. Long term, it doesn’t make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows, and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button, and continue to improve API parity between Windows Phone and Windows RT.
    At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (from both a financial and a development-resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar, in fact, that visually it’s difficult to tell them apart. To illustrate this, here are two screen captures [not reproduced in this excerpt]: other than a few variations (the Bing News app, the picture shown in the Pictures tile, and the spacing between the tiles), they are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them.
    For the consumer, your entertainment device (Xbox One), your phone (Windows Phone), your mobile computing device (Surface [or some other vendor’s tablet], laptop, netbook, or ultrabook), and your desktop computer will all look and feel the same. While many people will denounce this consistency of user experience, I think it will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year-old son knows how to use my tablet, phone, and Xbox because they all feature nearly identical user experiences.
    When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will all be running the same operating system (with minor differences related to the physical hardware chipset), that means I could potentially purchase my son’s favorite Angry Birds game once and install it on all of the devices I own. (And for those of you wondering, it’s only 7 [at the moment].)
    From an app developer perspective, the story becomes even more compelling. Right now there are differences between the operating systems, but those differences are shrinking. The user interface technology for both is XAML, but there are different controls available and different user experience concepts. Some of the APIs are the same while some are not. You can’t develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those differences become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same “modern” (tile-based) user interface, and the visible distinctions between the operating systems become even smaller.
    Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it’s easier to add features and fix bugs, and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now, the choice of which devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem.
    Is all of this just rumor, speculation and conjecture?
    Of course, but it’s not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. Microsoft itself has even hinted at this future with its recent organizational changes and by telling developers, “if you want to develop for Xbox One, start developing for Windows 8 now.” I think this pretty clearly paints the following picture:
    - Microsoft is committed to the “modern” user interface paradigm.
    - Microsoft is changing its release cadence (for all products, not just operating systems) to be faster and more modular.
    - Microsoft is going to continue to unify its OS platforms, from both a consumer perspective and a developer perspective.
    While this direction will certainly concern some people, it will excite many others. Microsoft’s biggest failing has always been its inability to follow through with a strong and sustained marketing strategy: one that presents a consistent viewpoint and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We’ve started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn’t going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • Design and Print Your Own Christmas Cards in MS Word, Part 1

    - by Eric Z Goodnight
    Looking for a little DIY fun this holiday season? Open up the familiar MS Word and create simple, beautiful Christmas and holiday cards, and impress your family with your crafting skills. This is the first part of a two-part article. In this first section, we’ll tackle design in MS Word. In our second, we’ll cover supplies and proper printing methods to get a great look out of your dusty old inkjet.

    Read the article
