Search Results

Search found 23102 results on 925 pages for 'browser width'.


  • Coding error at open URL

    - by Lobo
    Hi, I have the following method to open a URL of the Stack Overflow API:

        String c = "";
        URL direccionURL;
        try {
            direccionURL = new URL("http://api.stackoverflow.com/1.0/users/523725");
            BufferedReader in = new BufferedReader(new InputStreamReader(direccionURL.openStream()));
            String inputLine;
            while ((inputLine = in.readLine()) != null)
                c += inputLine;
            in.close();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return c;

    In the end, the "c" variable contains a set of characters that is not the same as what I get if I open the same URL in a browser. Why? What am I doing wrong? Thanks for the help. Regards!
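    A likely cause (not stated in the question): the 1.0 Stack Overflow API gzip-compresses its responses, so reading the raw stream gives compressed bytes rather than the JSON the browser shows after decompressing. A minimal sketch of a fix, assuming the response really is gzip-encoded, wraps the stream in a GZIPInputStream before reading:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;
        import java.util.zip.GZIPInputStream;

        public class FetchUser {
            public static String fetch() throws IOException {
                URL url = new URL("http://api.stackoverflow.com/1.0/users/523725");
                URLConnection connection = url.openConnection();
                // Decompress the gzip-encoded body before turning bytes into characters.
                BufferedReader in = new BufferedReader(new InputStreamReader(
                        new GZIPInputStream(connection.getInputStream()), "UTF-8"));
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                in.close();
                return body.toString();
            }
        }

    In the general case it is safer to check connection.getContentEncoding() and only wrap the stream when it reports "gzip".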

    Read the article

  • ASP.NET Web API - Screencast series Part 3: Delete and Update

    - by Jon Galloway
    We're continuing a six-part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks from File / New Project through to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we'll start to modify data on the server using DELETE and POST methods. So far we've been looking at GET requests, and the difference between standard browsing in a web browser and navigating an HTTP API isn't quite as clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something you'd probably have a form that POSTs a request back to a controller that needs to know that it's really supposed to be deleting something even though POST was really designed to create things, so it does the work and then returns some HTML back to the client that says whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server. That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works:

        public Comment DeleteComment(int id) {
            Comment comment;
            if (!repository.TryGet(id, out comment))
                throw new HttpResponseException(HttpStatusCode.NotFound);
            repository.Delete(id);
            return comment;
        }

    If you look back at the GET /api/comments code in Part 2, you'll see that they start out exactly the same because the use cases are kind of similar - we're looking up an item by id and either displaying it or deleting it. So the only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found, as the same HTTP 404 handling works fine here, too. Pretty much all "traditional" browsing uses just two HTTP verbs: GET and POST, so you might not be all that used to DELETE requests and think they're hard. Not so! Here's the jQuery method that calls /api/comments with the DELETE verb:

        $(function () {
            $("a.delete").live('click', function () {
                var id = $(this).data('comment-id');
                $.ajax({
                    url: "/api/comments/" + id,
                    type: 'DELETE',
                    cache: false,
                    statusCode: {
                        200: function (data) {
                            viewModel.comments.remove(function (comment) {
                                return comment.ID == data.ID;
                            });
                        }
                    }
                });
                return false;
            });
        });

    So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard. But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That makes the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful, so the comment can be removed from the view model. Adding a new comment uses the POST verb.
It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a C# model object to add to the repository, but there are some interesting differences.

        public HttpResponseMessage<Comment> PostComment(Comment comment) {
            comment = repository.Add(comment);
            var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created);
            response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString());
            return response;
        }

First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the model object and Web API would wrap it up in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, and they'll be automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL that points to the newly created comment in the Location header, and we can do that by explicitly creating that HttpResponseMessage and then setting the header information. And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the Location header tells it where to get the new comment, and all it needs in the JSON payload is the actual content. Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web APIs, and especially in methods that allow creating or deleting data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if binding directly to model objects, etc. In Part 4, we'll extend our simple querying methods from Part 2, adding in support for paging and querying.
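    The excerpt shows only the server side of POST. For completeness, a client-side call might look like the sketch below; the form selector, the Text field, and the viewModel callback are illustrative assumptions, not part of the article's sample.

        $(function () {
            $("form.new-comment").submit(function () {
                $.ajax({
                    url: "/api/comments",
                    type: 'POST',
                    data: { Text: $("#commentText").val() },
                    statusCode: {
                        // 201 Created is what the PostComment action above returns.
                        201: function (data) {
                            viewModel.comments.push(data);
                        }
                    }
                });
                return false;
            });
        });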

    Read the article

  • Increase Security by Enabling Two-Factor Authentication on Your Google Account

    - by Jason Fitzpatrick
    You can easily increase the security of your Google account by enabling two-factor authentication; flip it on today for a free security boost. It’s not a new feature but it’s a feature worth giving a second look. Watch the above video for a quick overview of Google’s two-factor authentication system. Essentially your mobile phone becomes the second authentication tool–you use your password + a code sent to your phone to log into your account. It’s a great way to easily increase the security of your Google account, it’s free, and you can set it so that you only have to validate your home computer once every 30 days. Google Two-Step Verification [via Google+]

    Read the article

  • DIY Grid-It Clone Organizes Your Tech Gear in Style

    - by Jason Fitzpatrick
    If you’re looking for a customizable way to organize your cables and small electronics, this DIY Grid-It clone uses a series of elastic straps to hold everything in place. Grid-It is a commercial cable and device organizer that is, essentially, a stiff insert for your briefcase or bag that is wrapped in interwoven elastic straps. You lift and slide the straps to secure your items in place, creating on-the-fly customized organization for your cables and small devices. This DIY project recreates the Grid-It system using an old hardcover book as the foundation for the straps–it doubles the amount of usable space, provides a stiff cover, and (if you pick the right book) looks striking at the same time. Hit up the link below to check out the full DIY guide. DIY Project: Vintage Book Travel-Tech Organizer [Design Sponge via GeekSugar]

    Read the article

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

    - by Maria Zakourdaev
    I really like database monitoring. My email inbox has a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less useful. Usually database alerts look really simple: a plain-text email saying “Prod1 Database data file on Server X is 80% used. You’d better grow it manually before some query triggers the AutoGrowth process”. Imagine you could have received an email like the one below. In addition to the alert description it could have also included the database file growth chart over the past 6 months. Wouldn’t it give you much more information about whether the data growth is natural or extreme? That’s truly what data visualization is for. Believe it or not, I have sent the graph below from a SQL Server stored procedure without buying any additional data monitoring/visualization tool. Would you like to visualize your database alerts like I do? Then, like me, you’ll love Google Charts. All you need is to know a little HTML and to have a mail profile configured on your SQL Server instance, regardless of the SQL Server version. First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = ‘HTML’, which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create the HTML code. This is how, for instance, you can create a table and populate it with some data:

        DECLARE @html varchar(max)
        SET @html =
            '<html>' +
            '<H3><font id="Text" style="color: Green;">Top Databases: </H3>' +
            '<table border="1" bordercolor="#3300FF" style="background-color:#DDF8CC" width="70%" cellpadding="3" cellspacing="3">' +
            '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' +
            CAST( (SELECT TOP 10
                          td = name, '',
                          td = size * 8/1024, '',
                          td = physical_name
                   FROM sys.master_files
                   ORDER BY size DESC
                   FOR XML PATH ('tr'), TYPE ) AS VARCHAR(MAX)) +
            '</table>'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Top databases',
            @body = @html,
            @body_format = 'HTML'

    This is the result. If you want to add more visualization effects, you can use the Google Chart Tools (https://google-developers.appspot.com/chart/interactive/docs/index), a free and rich library of data visualization charts that are also easy to populate and embed. There are two versions of the Google Charts. Image-based charts (https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall): this is the old version; it’s officially deprecated, although it will be up for the next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the “Load remote images” property in your email application configuration. Charts based on JavaScript classes (https://google-developers.appspot.com/chart/interactive/docs/gallery): this API is newer, with rich and highly interactive charts, and it’s much easier to understand and configure. The only downside is that these charts cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript that’s placed within the email body. However, you can still enjoy this API by sending the report as an email attachment.
Here is an example of the old version of the Google Charts API, sending the same top databases report as in the previous example, but instead of a simple table this script uses a pie chart, right from the T-SQL code:

        DECLARE @html varchar(8000)
        DECLARE @Series varchar(800), @Labels varchar(8000), @Legend varchar(8000);

        SET @Series = '';
        SET @Labels = '';
        SET @Legend = '';

        SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',
                     @Labels = @Labels + CAST(size * 8/1024 as varchar) + 'MB' + '|',
                     @Legend = @Legend + name + '|'
        FROM sys.master_files
        ORDER BY size DESC

        SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),
               @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),
               @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1)

        SET @html =
            '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+
            '<br>'+
            '<img src="http://chart.apis.google.com/chart?'+
            'chf=bg,s,DDF8CC&'+
            'cht=p&'+
            'chs=400x200&'+
            'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+
            'chd=t:'+@Series+'&'+
            'chl='+@Labels+'&'+
            'chma=0,0,0,0&'+
            'chdl='+@Legend+'&'+
            'chdlp=b" '+
            'alt="'+@@ServerName+' top 5 databases" />'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Top databases',
            @body = @html,
            @body_format = 'HTML'

    This is what you get. Isn’t it great? Chart parameters reference:
    chf - gradient/solid fill (bg - background; s - solid)
    cht - chart type (p - pie)
    chs - chart size (width x height)
    chco - series colors
    chd - chart data string (1,2,3,2)
    chl - pie chart labels (a|b|c|d)
    chma - chart margins
    chdl - chart legend (a|b|c|d)
    chdlp - chart legend text position (b - bottom of chart)

    Line graph implementation is also really easy and powerful:

        DECLARE @html varchar(max)
        DECLARE @Series varchar(max)
        DECLARE @HourList varchar(max)

        SET @Series = '';
        SET @HourList = '';

        SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2) + '|',
               @Series = @Series + CAST( COUNT(1) as varchar) + ','
        FROM sys.dm_exec_query_stats s
            CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
        WHERE last_execution_time >= getdate()-1
        GROUP BY CONVERT(varchar(13),last_execution_time,121)
        ORDER BY CONVERT(varchar(13),last_execution_time,121)

        SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1)

        SET @html =
            '<img src="http://chart.apis.google.com/chart?'+
            'chco=CA3D05,87CEEB&'+
            'chd=t:'+@Series+'&'+
            'chds=1,350&'+
            'chdl= Proc executions from cache&'+
            'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+
            'chg=25.0,25.0,3,2&'+
            'chls=3|3&'+
            'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+
            'chs=600x450&'+
            'cht=lc&'+
            'chts=FFFFFF,14&'+
            'chtt=Executions from ' +
            (SELECT CONVERT(varchar(16),min(last_execution_time),121)
             FROM sys.dm_exec_query_stats
             WHERE last_execution_time >= getdate()-1) +
            ' till ' +
            (SELECT CONVERT(varchar(16),max(last_execution_time),121)
             FROM sys.dm_exec_query_stats) +
            '&'+
            'chxp=1,50.0|4,50.0&'+
            'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+
            'chxt=y,y,x,x,x&'+
            'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+
            'chma=55,120,0,0" alt="" />'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Daily number of executions',
            @body = @html,
            @body_format = 'HTML'
    Chart parameters reference:
    chco - series colors
    chd - series data
    chds - scale format
    chdl - chart legend
    chf - background fills
    chg - grid line
    chls - line style
    chm - line fill
    chs - chart size
    cht - chart type
    chts - chart style
    chtt - chart title
    chxp - axis label positions
    chxs - axis label styles
    chxt - axis tick mark styles
    chxl - axis labels
    chma - chart margins

    If you don’t mind getting your charts as an email attachment, you can enjoy the JavaScript-based Google Charts, which are even easier to configure and have much more advanced graphics. In the example below, the sp_send_dbmail procedure uses the @query parameter; the query is executed at the time sp_send_dbmail runs, and the HTML result of that execution is attached to the email.

        DECLARE @html varchar(max), @query varchar(max)
        DECLARE @SeriesDBusers varchar(800);

        SET @SeriesDBusers = '';

        SELECT @SeriesDBusers = @SeriesDBusers + ' ["'+DB_NAME(r.database_id) +'", ' + cast(count(1) as varchar)+'],'
        FROM sys.dm_exec_requests r
        GROUP BY DB_NAME(database_id)
        ORDER BY count(1) desc;

        SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1)

        SET @query = '
        PRINT ''
        <html>
          <head>
            <script type="text/javascript" src="https://www.google.com/jsapi"></script>
            <script type="text/javascript">
              google.load("visualization", "1", {packages:["corechart"]});
              google.setOnLoadCallback(drawChart);
              function drawChart() {
                var data = google.visualization.arrayToDataTable([
                  ["Database Name", "Active users"],
                  '+@SeriesDBusers+'
                ]);
                var options = {
                  title: "Active users",
                  pieSliceText: "value"
                };
                var chart = new google.visualization.PieChart(document.getElementById("chart_div"));
                chart.draw(data, options);
              };
            </script>
          </head>
          <body>
            <table>
            <tr><td>
                <div id="chart_div" style="width: 800px; height: 300px;"></div>
                </td></tr>
            </table>
          </body>
        </html> '''

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Active users',
            @body = @html,
            @body_format = 'HTML',
            @query = @Query,
            @attach_query_result_as_file = 1,
            @query_attachment_filename = 'Results.htm'

    After opening the email attachment in the browser you get this kind of report. In fact, the above is not only for database alerts. It can be used for application reports if you need a level of customization that you cannot achieve using standard methods like SSRS. If you need more information on how to customize the charts, you can try the following:
    Image Based Charts wizard: https://google-developers.appspot.com/chart/image/docs/chart_wizard
    Live Image Charts Playground: https://google-developers.appspot.com/chart/image/docs/chart_playground
    Image Based Charts Parameters List: https://google-developers.appspot.com/chart/image/docs/chart_params
    JavaScript Charts Playground: https://code.google.com/apis/ajax/playground/?type=visualization
    Use the above examples as a starting point for your procedures and I’d be more than happy to hear of your implementations of the above techniques. Yours, Maria

    Read the article

  • Follow your friends on StackOverflow with FriendOverflow

    - by Mike Grace
    About: I created this app because I wanted to see what my friends and co-workers were doing on StackOverflow. I was previously going to their profiles to see what they were asking, answering, and commenting on, because most of the time I found what they were doing was interesting or relevant to what I was doing. This app is for anyone who visits StackOverflow using their desktop browser and has 'friends' they would like to follow on StackOverflow.
    Cost: Free
    Download: Google Chrome extension http://goo.gl/ooE34 | Mozilla Firefox extension http://goo.gl/3Pnqa | Bookmarklet http://goo.gl/FkuQW
    Platform: Desktop browsers via Google Chrome extension, Mozilla Firefox extension, and bookmarklet
    Contact: @MikeGrace
    Code: App was built on the Kynetx platform using KRL (Kynetx Rule Language)

    Read the article

  • Which SSL do I need?

    - by Maik Klein
    I need to buy an SSL certificate. There are so many different alternatives with a huge price range. I know the very basic differences in browser compatibility and security level, but I need a "cheap" SSL certificate. My homepage looks like this: http://www.test.com. Now if I go to the login page I should switch to HTTPS, like this: https://www.test.com/login. I am also considering securing the whole site once the user has signed in. There are sites offering SSL for $7/year. Would this do the job? Or would you recommend getting something more expensive, like this one? I want to add PayPal support in a later version of my website and I don't want to save money on the wrong end. What would you recommend?

    Read the article

  • Visualising data a different way with Pivot collections

    - by Rob Farley
    Roger’s been doing a great job extending PivotViewer recently, and you can find the list of LobsterPot pivots at http://pivot.lobsterpot.com.au. Many months back, the TED Talk that Gary Flake did about Pivot caught my imagination, and I did some research into it. At the time, most of what we did with Pivot was geared towards what we could do for clients, including making Pivot collections based on students at a school, and using it to browse PDF invoices by their various properties. We had actual commercial work based on Pivot collections back then, and it was all kinds of fun. Later, we made some collections for events that were happening, and even got featured in the TechEd Australia keynote. But I’m getting ahead of myself... let me explain the concept. A Pivot collection is an XML file (with .cxml extension) which lists Items, each linking to an image that’s stored in a Deep Zoom format (this means that it contains tiles like Bing Maps, so that the browser can request only the ones of interest according to the zoom level). This collection can be shown in a Silverlight application that uses the PivotViewer control, or in the Pivot Browser that’s available from getpivot.com. Filtering and sorting the items according to their facets (attributes, such as size, age, category, etc), the PivotViewer rearranges the way that these are shown in a very dynamic way. To quote Gary Flake, this lets us “see patterns which are otherwise hidden”. This browsing mechanism is very suited to a number of different methods, because it’s just that – browsing. It’s not searching, it’s more akin to window-shopping than doing an internet search. When we decided to put something together for conferences such as TechEd Australia 2010 and the PASS Summit 2010, we did some screen-scraping to provide a different view of data that was already available online. Nick Hodge and Michael Kordahi from Microsoft liked the idea a lot, and after a bit of tweaking, we produced one that Michael used in the TechEd Australia keynote to show the variety of talks on offer. It’s interesting to see a pattern in this data: The Office track has the most sessions, but if the Interactive Sessions and Instructor-Led Labs are removed, it drops down to only the sixth most popular track, with Cloud Computing taking over. This is something which just isn’t obvious when you look at an ordinary search tool. You get a much better feel for the data when moving around it like this. The more observant amongst you will have noticed some differences between the collection that Michael is demonstrating in the picture above and the screenshots I’ve shown. That’s because it’s been extended some more. At the SQLBits conference in the UK this year, I had some interesting discussions with the guys from Xpert360, particularly Phil Carter, who I’d met in 2009 at an earlier SQLBits conference. They had got around to producing a Pivot collection based on the SQLBits data, which we had been planning to do but ran out of time. We discussed some of the ways that Pivot could be used, including the ways that my old friend Howard Dierking had extended it for the MSDN Magazine. I’m not suggesting I influenced Xpert360 at all, but they certainly inspired us with some of their posts on the matter. So with LobsterPot guys David Gardiner and Roger Noble both having dabbled in Pivot collections (and Dave doing some for clients), I set Roger to work on extending it some more.
He’s used various events and so on to be able to make an environment that allows us to do quick deployment of new collections, as well as showing the data in a grid view which behaves as if it were simply a third view of the data (the other two being the array of images and the ‘histogram’ view). I see PivotViewer as being a significant step in data visualisation – so much so that I feature it when I deliver talks on Spatial Data Visualisation methods. Any time there is information that can be conveyed through an image, you have to ask yourself how best to show that image, and whether that image is the focal point. For Spatial data, the image is most often a map, and the map becomes the central mode for navigation. I show Pivot with postcode areas, since I can browse the postcodes based on their data, and many of the images are recognisable (to locals of South Australia). Naturally, the images could link through to the map itself, and so on, but generally people think of Spatial data in terms of navigating a map, which doesn’t always gel with the information you’re trying to extract. Roger’s even looking into ways to hook PivotViewer into the Bing Maps API, in a similar way to the Deep Earth project, displaying different levels of map detail according to how ‘zoomed in’ the images are. Some of the work that Dave did with one of the schools was generating the Deep Zoom tiles “on the fly”, based on images stored in a database, and Roger has produced a collection which uses images from flickr that lets you move from one search term to another. Pulling the images down from flickr.com isn’t particularly ideal from a performance aspect, and flickr doesn’t store images in a small-enough format to really lend itself to this use, but you might agree that it’s an interesting concept which compares nicely to using Maps. I’m looking forward to future versions of the PivotViewer control, and hope they provide many more events that can be used, and even more hooks into it. Naturally, LobsterPot could help provide your business with a PivotViewer experience, but you can probably do a lot of it yourself too. There’s a thorough guide at getpivot.com, which is how we got into it. For some examples of what we’ve done, have a look at http://pivot.lobsterpot.com.au. I’d like to see PivotViewer really catch on as a data visualisation tool.
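    For readers who haven't seen one, the skeleton of a .cxml collection file looks roughly like the sketch below. This is a from-memory sketch only; the element names, namespace URI and the .dzc Deep Zoom reference should be checked against the CXML documentation at getpivot.com before use.

        <?xml version="1.0" encoding="utf-8"?>
        <!-- A minimal Pivot collection: facet categories plus items that point into a Deep Zoom collection. -->
        <Collection Name="Conference Sessions" SchemaVersion="1.0"
                    xmlns="http://schemas.microsoft.com/collection/metadata/2009">
          <FacetCategories>
            <FacetCategory Name="Track" Type="String" />
            <FacetCategory Name="Level" Type="Number" />
          </FacetCategories>
          <Items ImgBase="sessions.dzc">
            <Item Id="0" Img="#0" Name="Example session" Href="http://example.com/sessions/0">
              <Facets>
                <Facet Name="Track">
                  <String Value="Cloud Computing" />
                </Facet>
                <Facet Name="Level">
                  <Number Value="300" />
                </Facet>
              </Facets>
            </Item>
          </Items>
        </Collection>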

    Read the article

  • Introduction to Reading Electronics Schematics [Video]

    - by Jason Fitzpatrick
    If you’re interested in electronics tinkering but a bit overwhelmed by learning to read electronics schematics, this helpful introductory video will get you started. Courtesy of Make magazine, this video tutorial covers what a schematic is, how schematics are laid out, and the basics of reading a schematic and its component symbols. When you’re done with the video you’ll have a better grasp of electronics circuit schematics than most of the population and, hopefully, an increased comfort reading schematics for all those DIY projects we post here. Collin’s Lab: Schematics [Make via Hacked Gadgets]

    Read the article

  • A tale of two useful utilities

    - by TATWORTH
    This time I want to introduce you to two utilities that both have a tail! The first is the BeaverTail ADSI browser at http://adsi.mvps.org/adsi/CSharp/beavertail.html. This is a useful utility for doing Active Directory queries. It is free for both personal and commercial use, and the source code is also available. The second is a Windows equivalent of the Unix tail command, to allow easy reading of flat-file logs. This is free for personal use but must be registered for commercial use. Download it from http://www.uvviewsoft.com/logviewer/

    Read the article

  • Window management shortcuts?

    - by pwnguin
    I've got a single massive monitor at home, and I've decided to mimic the Windows 7 window tiling shortcuts. I found a few guides online using wmctrl, and it's going well, save one thing: maximized windows don't respond to it. gconftool-2 --type string --set /apps/metacity/keybinding_commands/command_1 "wmctrl -r :ACTIVE: -e 0, 0,0, `xwininfo -root | grep Width | awk '{ print ($2/2)}'`, `xwininfo -root | grep Height | awk '{ print $2 }'`" (I've added line returns to make an otherwise massive one-liner readable.) I've bound this to a hotkey and it works, unless the window is maximized. Any ideas on how to fix this up?
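    Maximized windows ignore move/resize requests from wmctrl until their maximized state is cleared, so one approach is to un-maximize the active window first and then apply the geometry. A sketch of the adjusted command, keeping the half-width/full-height geometry from the question (run from a shell, or wrapped in sh -c inside the keybinding):

        # Clear the maximized state so the move/resize request is honored...
        wmctrl -r :ACTIVE: -b remove,maximized_vert,maximized_horz
        # ...then tile the window to the left half of the screen, as before.
        wmctrl -r :ACTIVE: -e 0,0,0,$(( $(xwininfo -root | awk '/Width/ {print $2}') / 2 )),$(xwininfo -root | awk '/Height/ {print $2}')

    The two commands can be chained with && inside the command_1 string bound to the hotkey.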

    Read the article

  • Add Firefox’s Awesome Bar Bookmark Search Function to Chrome and Iron

    - by Asian Angel
    Do you have a large number of bookmarks saved in your Chromium-based browser and need a quick way to search through them? Then see how easy it is to search through those bookmarks just like Firefox users do with the AwesomeBar extension. To engage the bookmark search function type “ab” in the Address Bar as seen above and press either Tab or the Space Bar. That will display the AwesomeBar prefix-bar as seen below. Enter the desired text to begin your search. For our example we decided to conduct a search for bookmarks related to the Ubuntu Twitter client Hotot. The results will continue to narrow down nicely as you type… Typing just a bit more finishes narrowing our search down the rest of the way for Hotot related items. Install the AwesomeBar Extension [Google Chrome Extensions]

    Read the article

  • E_FAIL: An undetermined error occurred (-2147467259) when loading a cube texture

    - by Boreal
    I'm trying to implement a skybox into my engine, and I'm having some trouble loading the image as a cube map. Everything works (but it doesn't look right) if I don't load using an ImageLoadInformation struct in the ShaderResourceView.FromFile() method, but it breaks if I do. I need to, of course, because I need to tell SlimDX to load it as a cubemap. How can I fix this? Here is my new loading code after the "fix": public static void LoadCubeTexture(string filename) { ImageLoadInformation loadInfo = new ImageLoadInformation() { BindFlags = BindFlags.ShaderResource, CpuAccessFlags = CpuAccessFlags.None, Depth = 32, FilterFlags = FilterFlags.None, FirstMipLevel = 0, Format = SlimDX.DXGI.Format.B8G8R8A8_UNorm, Height = 512, MipFilterFlags = FilterFlags.Linear, MipLevels = 1, OptionFlags = ResourceOptionFlags.TextureCube, Usage = ResourceUsage.Default, Width = 512 }; textures.Add(filename, ShaderResourceView.FromFile(Graphics.device, "Resources/" + filename, loadInfo)); } Each of the faces of my cube texture are 512x512.

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder: As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the projects only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors. 
Once you create the class library project there are a few easy steps to change it into a static file project. The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file because the project doesn’t contain any C# code:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

        <Target Name="Build">
          <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
        </Target>
        <Target Name="Clean">
          <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
        </Target>
        <Target Name="Rebuild" DependsOnTargets="Clean;Build">
        </Target>

The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!
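    For that third step, the static files simply end up as ordinary Content items in the project file. A minimal sketch of what that looks like (the file names here are illustrative, not taken from the MvcFuturesFiles project):

        <ItemGroup>
          <Content Include="Scripts\MicrosoftMvcAjax.debug.js" />
          <Content Include="Scripts\MicrosoftMvcAjax.js" />
        </ItemGroup>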

    Read the article

  • Anti-Forgery Request in ASP.NET MVC and AJAX

    - by Dixin
    Background: To secure websites from cross-site request forgery (CSRF, or XSRF) attacks, ASP.NET MVC provides an excellent mechanism: the server prints tokens to a cookie and inside the form; when the form is submitted to the server, the token in the cookie and the token inside the form are sent with the HTTP request; the server validates the tokens. To print tokens to the browser, just invoke HtmlHelper.AntiForgeryToken(): <% using (Html.BeginForm()) { %> <%: this.Html.AntiForgeryToken(Constants.AntiForgeryTokenSalt)%> <%-- Other fields. --%> <input type="submit" value="Submit" /> <% } %> which writes the token to the form: <form action="..." method="post"> <input name="__RequestVerificationToken" type="hidden" value="J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP" /> <!-- Other fields. --> <input type="submit" value="Submit" /> </form> and the cookie: __RequestVerificationToken_Lw__=J56khgCvbE3bVcsCSZkNVuH9Cclm9SSIT/ywruFsXEgmV8CL2eW5C/gGsQUf/YuP When the above form is submitted, they are both sent to the server. The [ValidateAntiForgeryToken] attribute is used to specify the controllers or actions that validate them: [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult Action(/* ... */) { // ... } This is very productive for form scenarios. But recently, when resolving security vulnerabilities for Web products, I encountered 2 problems: it is expected that [ValidateAntiForgeryToken] can be added to each controller, but actually I have to add it to each POST action, which is a little crazy; and after anti-forgery validation is turned on on the server side, AJAX POST requests consistently fail. Specify validation on the controller (not on each action). Problem: For the first problem, usually a controller contains actions for both HTTP GET and HTTP POST requests, and usually validation is expected for the HTTP POST requests. So, if [ValidateAntiForgeryToken] is declared on the controller, the HTTP GET requests always become invalid: [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { [HttpGet] public ActionResult Index() // Index page cannot work at all. { // ... } [HttpPost] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] public ActionResult PostAction2(/* ... */) { // ... } // ... } If the user sends an HTTP GET request from a link: http://Site/Some/Index, validation definitely fails, because no token is provided. So the result is that the [ValidateAntiForgeryToken] attribute must be distributed to each HTTP POST action in the application: public class SomeController : Controller { [HttpGet] public ActionResult Index() // Works. { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction1(/* ... */) { // ... } [HttpPost] [ValidateAntiForgeryToken(Salt = Constants.AntiForgeryTokenSalt)] public ActionResult PostAction2(/* ... */) { // ... } // ... }
Solution: To avoid a large number of [ValidateAntiForgeryToken] attributes (one attribute for one HTTP POST action), I created a wrapper class for ValidateAntiForgeryTokenAttribute, where HTTP verbs can be specified: [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = true)] public class ValidateAntiForgeryTokenWrapperAttribute : FilterAttribute, IAuthorizationFilter { private readonly ValidateAntiForgeryTokenAttribute _validator; private readonly AcceptVerbsAttribute _verbs; public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs) : this(verbs, null) { } public ValidateAntiForgeryTokenWrapperAttribute(HttpVerbs verbs, string salt) { this._verbs = new AcceptVerbsAttribute(verbs); this._validator = new ValidateAntiForgeryTokenAttribute() { Salt = salt }; } public void OnAuthorization(AuthorizationContext filterContext) { string httpMethodOverride = filterContext.HttpContext.Request.GetHttpMethodOverride(); if (this._verbs.Verbs.Contains(httpMethodOverride, StringComparer.OrdinalIgnoreCase)) { this._validator.OnAuthorization(filterContext); } } } When this attribute is declared on the controller, only HTTP requests with the specified verbs are validated: [ValidateAntiForgeryTokenWrapper(HttpVerbs.Post, Constants.AntiForgeryTokenSalt)] public class SomeController : Controller { // Actions for HTTP GET requests are not affected. // Only HTTP POST requests are validated. } Now one single attribute on the controller turns on validation for all HTTP POST actions. Submit token via AJAX. Problem: For AJAX scenarios, when the request is sent by JavaScript instead of a form: $.post(url, { productName: "Tofu", categoryId: 1 // Token is not posted. }, callback); this kind of AJAX POST request will always be invalid, because the server-side code cannot see the token in the posted data. Solution: The token must be printed to the browser and then submitted back to the server. So first of all, HtmlHelper.AntiForgeryToken() must be called in the page where the AJAX POST will be sent. Then jQuery must find the printed token in the page, and post it: $.post(url, { productName: "Tofu", categoryId: 1, __RequestVerificationToken: getToken() // Token is posted. }, callback); To be reusable, this can be encapsulated in a tiny jQuery plugin: (function ($) { $.getAntiForgeryToken = function () { // HtmlHelper.AntiForgeryToken() must be invoked to print the token. return $("input[type='hidden'][name='__RequestVerificationToken']").val(); }; var addToken = function (data) { // Converts data if not already a string. if (data && typeof data !== "string") { data = $.param(data); } data = data ? data + "&" : ""; return data + "__RequestVerificationToken=" + encodeURIComponent($.getAntiForgeryToken()); }; $.postAntiForgery = function (url, data, callback, type) { return $.post(url, addToken(data), callback, type); }; $.ajaxAntiForgery = function (settings) { settings.data = addToken(settings.data); return $.ajax(settings); }; })(jQuery); Then in the application just replace each $.post() invocation with $.postAntiForgery(), and use $.ajaxAntiForgery() instead of $.ajax(): $.postAntiForgery(url, { productName: "Tofu", categoryId: 1 }, callback); // Token is posted. This solution looks hard-coded and stupid. If you have a more elegant solution, please do tell me.

    Read the article

  • MUD source code

    - by Tchalvak
    I haven't been able to find many of the old, open-source MUD source codes. I find the way they did things very applicable to text-based/browser-based games, and I'd love to be able to skim through parts of 'em for inspiration. For instance, we have this huge list of MUDs and the relationships between them, but little by way of access to source code: http://en.wikipedia.org/wiki/MUD_trees Often (I'm looking at you, DikuMUD, http://www.dikumud.com/links.aspx ) the site of the MUD itself doesn't even have a working link to the source. https://github.com/alexmchale/merc-mud has a copy of Merc that I found, which certainly contains other works within its history, but the pickings seem sparse. Does anyone have better resources for gaining access to MUD source code than these?

    Read the article

  • Easily Customize Internet Explorer 9 Using IE9 Tweaker Plus

    - by Lori Kaufman
    If you use Internet Explorer 9, we found a useful program, called IE9 Tweaker Plus, that allows you to easily tweak and customize over 27 settings in the browser, as well as create customized IE9 shortcuts that automatically open IE in InPrivate mode. IE9 Tweaker Plus does not need to be installed. To run it, simply extract the .zip file you downloaded (see the link at the end of this article) and double-click on the .exe file. If the User Account Control dialog box displays, click Yes to continue.

    Read the article

  • See the Geeky Work Done Behind the Scenes to Add Sounds to Movies [Video]

    - by Asian Angel
    Ever wondered about all the work that goes into adding awesome sound effects large and small to your favorite movies? Then here is your chance! Watch as award-winning Foley artist Gary Hecker shows how it is done using the props in his studio. SoundWorks Collection: Gary Hecker – Veteran Foley Artist [via kottke.org & Michal Csanaky]

    Read the article

  • Understanding “Dispatcher” in WPF

    - by Pawan_Mishra
    Level: Beginner to intermediate. Consider the following program, MainWindow.xaml:

        <Window x:Class="DispatcherTrial.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525">
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions...(read more)

    Read the article

  • WebGL, security, and Microsoft

    - by 3412132
    I was writing a post about a link I saw, but realized it was also about what companies do to this industry, so I'd like to ask your opinions on that first (the original post is below). Is it ok for companies to act childish (not wanting to share, not-invented-here syndrome, etc)? ORIGINAL POST: http://news.cnet.com/8301-30685_3-20071726-264/microsoft-declares-webgl-harmful-to-security/ What gives? I understand they're making some real points here, but haven't they been doing similar things with ActiveX? Also who are they to talk when their browser has more security problems than modern browsers do?

    Read the article

  • Good, free isometric game engine?

    - by posfan12
    Any recommendations for a good isometric game engine that is also free? Should be possible to develop entirely using freely available tools (meaning: no Flash, and no I don't want to learn haXe...) Works-in-a-browser is a plus, but not required. Support for 32-bit images is required! Good performance. Excellent documentation. I have looked at FIFE but it is still too unfinished, and the documentation sucks! Thanks!

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you. There are three main components of SharePoint visual customization:
    1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes. Installing a theme once it’s been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance.
    2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft’s web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They’re not all used at once, but if they’re not there when they’re needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I’ve been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.
    3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there’s really no way to quantify this without a set of requirements).
    That’s basically it. To recap: Fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project.
Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically:
- Maintainability: the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere
- Replaceability: the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively
- Reusability: we'd like to be able to combine the different pieces of the system in different ways for different sites
- Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine
- Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development
In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
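    To make the reverse-proxy arrangement described above concrete, a minimal nginx sketch might look like the following. The server name, ports and the /blogs/ prefix are illustrative assumptions, not Simple-Talk's actual configuration.

        # Illustrative nginx reverse-proxy: blog URLs go to WordPress, everything else to the legacy IIS site.
        server {
            listen 80;
            server_name www.simple-talk.com;

            # Blog requests are delegated to WordPress.
            location /blogs/ {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }

            # Everything else is passed along to the IIS applications, which respond much as they always have.
            location / {
                proxy_pass http://127.0.0.1:8081;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }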

    Read the article

  • IE 11 Updates its Developers Tools

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/08/01/ie-11-updates-its-developers-tools.aspxI installed the IE 11 preview for Windows 7 (I’m getting upgraded to Windows 8 at work next week). I’ve never been a fan of the IE 8 – 10 developer tools so I’ve mostly been using Chrome or Firefox’s Firebug. This revamp looks great and seems to work well. I think I’ll be spending more time in IE with the developer tools, once IE 11 is released. “F12 Tools in Internet Explorer 11 Preview has been rebuilt from the ground up to give you: a new, cleaner user interface. new Responsiveness, Memory, and Emulation tools. new and improved functionality in familiar tools. an easier and faster workflow.” http://msdn.microsoft.com/en-us/library/ie/bg182632(v=vs.85).aspxhttp://ie.microsoft.com/testdrive/Browser/F12Adventure/ has a nice visual walk through of the new features.

    Read the article

  • How to sync tasks between Ubuntu and Android?

    - by andrewsomething
    I was using first Tasque and then GTG on Ubuntu and Astrid on Android with Remember The Milk as the backend. Recently, Astrid has dropped support for Remember The Milk, and both Tasque and GTG's support for syncing with RTM has always left something to be desired. So I'm looking for a new solution, and I'm open to leaving RTM especially if it is for something that isn't proprietary. What have you found to keep your tasks lists in-sync between Ubuntu and Android? In particular, I'm looking for a desktop based program on the Ubuntu side rather than something in the browser. Astrid has treated me well, but I wouldn't mind trying something else. Currently, it can sync with Astrid.com, Google Tasks, and Producteev. Anyone know of a desktop app that supports any of those?

    Read the article
