Search Results

Search found 19406 results on 777 pages for 'satellite image'.


  • How to install Windows on a server with no CD or DVD drive

    - by user29266
    I've found a few posts on this site, but my situation is different. I have a new Dell server with no OS installed, and I would like to install Windows Server 2008 Web Edition. I have a few USB ports and Ethernet, but no CD or DVD drive. Is this article the best and only way to proceed: Installing Windows 2008 via USB thumbdrive? Or should I just get an external hard drive and hook it up via USB? Once the OS is installed I'll never need a DVD drive again, so that idea is a waste of money.
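
    For reference, a generic outline of the thumbdrive approach (sketched from memory, not taken from the linked article, so verify the details before relying on it) is to prepare the stick with diskpart on another Windows machine and copy the install media onto it:

        :: Run from an elevated command prompt on a working Windows machine.
        :: Assumes the USB stick is disk 1 and gets drive letter E:, and the
        :: Windows 2008 install files are available under D:\.
        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        format fs=ntfs quick
        active
        assign
        exit
        :: Make the stick bootable, then copy the installer onto it.
        D:\boot\bootsect /nt60 E:
        xcopy D:\*.* E:\ /e /f

    After that, booting the server from the stick starts Windows setup just as a DVD would.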

    Read the article

  • What's the best way to be able to reimage Windows computers?

    - by mos
    I've got a low-end machine for testing our software. It needs to be tested under various versions of Windows, so I was planning on installing each one on its own partition. Then I realized that after testing our software, I'd want to roll back to the previous, clean state. I don't want to use any virtualization software because it tends to interfere with the workings of our app. That said, what's the best way to achieve my goal? Norton Ghost? Edit: I work for a pretty monstrously huge organization. Money is no object here (and sometimes, if the wrong people get wind of it, "open source" software is bad).

    Read the article

  • Where can I find a download of both Vista (Home Premium) and Windows 7 OEM ISO?

    - by AridDecay
    I'm trying to find a place where I can download both Vista Home Premium and Windows 7 OEM ISOs. I own both; the hard drive died in my Vista computer, so I ran out, bought another one, and now need to re-install my OS. The computer came with Vista preinstalled but didn't come with a disc (thanks, Acer!). So, is there a place I can download an ISO of my Windows that ISN'T illegally activated? I can't find any torrents that are legitimate. Thanks in advance!

    Read the article

  • Images as links bad for SEO?

    - by Karl
    I am just about finished with my website, and as I was reading over Google's SEO information, they mentioned that images used as links without text are bad. What if it is simply a link to enlarge the picture? Such as the following: <a href="images/image1.jpg"><img src="images/image1.jpg" height="200" width="100"></a> Any help on this would be great! Thanks, -K
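
    For what it's worth, the remedy Google's guidelines usually point to is descriptive alt text on the image, so the link still carries readable text for crawlers; a minimal sketch of the same markup with that added:

        <!-- Same enlarge-on-click link, but the alt attribute gives the
             link text for search engines and screen readers. -->
        <a href="images/image1.jpg">
          <img src="images/image1.jpg" height="200" width="100"
               alt="Thumbnail of image1, click to enlarge">
        </a>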

    Read the article

  • "Windows failed to start" loop with 0xc0000225. No install discs, EasyRE/USB iso hasn't worked

    - by mvidaure
    I've been suffering from this "Windows failed to start" loop with 0xc0000225 for 3 days now and I still can't fix it. The major problem is that I don't have any sort of installation disc. However, I have tried EasyRE via both CD and USB, but both result in the same problem. I try to perform an 'Automated Repair' on my computer and I get, in red text, "The selected partition is corrupted and could not be accessed or repaired. Please select a different drive to continue." The partition is also labeled as NO under Active. Since I do not have the installation discs, I made a USB with a Windows_7_Recovery_Disc iso (as shown here: http://www.sevenforums.com/tutorials/31541-windows-7-usb-dvd-download-tool.html), but it also doesn't work. I get a blue screen that says "RECOVERY Your PC needs to be repaired. The application or operating system could not be loaded because a required file is missing or contains errors... File:\WINDOWS\system32\winload.efi Error code: 0xc0000225 You'll need to use the recovery tools on your installation media. If you don't have any installation media, contact your system administrator or PC manufacturer." Thanks in advance! Miguel
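
    If a recovery command prompt can be reached from any of those boot media, the standard sequence for rebuilding the boot files (generic Windows RE commands, not specific to this machine) is worth trying:

        REM From the recovery environment's command prompt.
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd
        REM chkdsk may also help, given the "partition is corrupted" message:
        chkdsk C: /f /r

    Note that 0xc0000225 with winload.efi typically points at a damaged BCD store or a partition problem, which is consistent with the "corrupted partition" error above.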

    Read the article

  • Information on the BMPP File Extension/Format

    - by Angel Brighteyes
    I am looking for information on the file type BMPp; namely, I need an application that can create this file type, preferably open source or free. Wikipedia's article on the BMP file format says that 'BMPp' is a "type code", which is the "mechanism used by pre-OSX Macs ... to denote a file's format..." (look in the little info-box of general information under "Type code"). Continuing my research, I found an old 2009 archived mailing list thread, "Re: Incorrect png file type 'PNG'", that talks about something related to another problem a developer was having. In the response he talks about there being variant file types, and lists BMPp as being linked to an old version of GraphicConverter. The company Lemkesoft sells GraphicConverter, which I am not willing to purchase. I can't imagine that the only program in existence that makes a BMPp file is that one. There has got to be another way to make that file type, other than creating a BMP file and just renaming it to BMPp (unless of course it is really that easy)? This is the first time I've run into this file format, and it took a bit of searching on Google, Bing, and Wikipedia to find the information that I've posted here. Any further help would be appreciated.
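
    One point that may help: a type code is HFS metadata, not part of the file's bytes, so "creating a BMPp file" really means creating an ordinary BMP and stamping the type code onto it. On a Mac with the developer command-line tools installed, that can in principle be done with SetFile (a sketch based on that assumption, untested against whatever app expects 'BMPp'):

        # Stamp the classic HFS type code 'BMPp' onto an ordinary BMP file.
        SetFile -t BMPp picture.bmp
        # GetFileInfo shows the type/creator codes, to verify the change.
        GetFileInfo picture.bmp

    So renaming alone would not do it, but the file data itself is just a normal BMP.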

    Read the article

  • Is it possible to "virtualize" an existing PC?

    - by pnongrata
    I'm running Ubuntu Desktop 12.04, and I was wondering if it was possible to somehow take my whole filesystem (everything under /) and create an ISO from it. Then, perhaps, use that ISO as the file system of a VBox VM (obviously, it would have to be Ubuntu, and probably 12.04). Basically, I've spent a lot of time configuring my development machine, but need to be able to work on it from whatever computer I happen to be at. VMs seem like the perfect solution. Thanks in advance!
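
    An ISO isn't quite the right container for a live filesystem, but the same goal can be reached by imaging the whole disk and converting it into a VirtualBox disk. A hedged sketch using standard tools (the device name is an assumption, so check yours with lsblk first):

        # Boot from a live USB so the source filesystem is not in use,
        # then image the whole disk (assumed here to be /dev/sda).
        sudo dd if=/dev/sda of=/media/external/devbox.img bs=4M conv=sync,noerror

        # Convert the raw image into a VDI that VirtualBox can attach.
        VBoxManage convertfromraw /media/external/devbox.img devbox.vdi --format VDI

    Attaching devbox.vdi to a new VM then boots the cloned machine.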

    Read the article

  • Where can I get a Linux distro iso based on a 2.4 kernel?

    - by Mike
    I need to get my hands on an ISO file for a Linux distribution with a 2.4 kernel. I'm looking for an ISO specifically so I can use it with my Oracle VirtualBox. Since 2.4 is so old these days, I'll explain: I'm looking for it because my company uses an ancient 2.4 uClinux distro on the ancient hardware in our devices. I'd like to run some "desktop" tests using the same kernel version as what's in the hardware. As far as I can tell I can't run uClinux on a desktop, so as the next best thing, I'd like to get anything running 2.4.

    Read the article

  • Making a non-trivial Image and Video Gallery with a really nice interface

    - by Cawas
    Short part: I'm starting to build an image and video gallery for our intranet. It's pretty much an image gallery with video thumbnails that play on click. That's worth keeping in mind, because caching and streaming happen in very different ways for the two. It will serve to browse our reference database, which will also support searching, tagging and voting. There are 3 features that I need to begin with:

    - Quick preview of thumbnails from each gallery
    - Fast zoom in / out
    - Animated scrolling

    Now to the long part, since I can't seem to reduce this: I hope this question belongs here. Maybe people can read this through and identify with it, especially since I expect answers to be pretty specific.

    The first plan was using Apple's MobileMe gallery as a base, because it just so happens to have all those 3 features. If you've never seen it, you should check it out. Move the mouse over each collection and you'll get a nice preview "per pixel". Carousel has really good scrolling, not just because of the pretty effect. I like it much more than Cooliris, but it would be nice to have it in several rows, maybe without the magnifying effect... Then it could be all in the same place with the zoom. The more the features blend in together, the better: zooming out with the mouse, scrolling by dragging, and once the zoom is really far out it becomes browsing through galleries, with the quick preview of each, all properly cached and fast. That'd be perfect. A very compelling interface will be important in this project.

    Well, my point here is just describing what I need, hoping to hear from people with more experience about what's already done and what I'd have to do myself. And how (i.e. which framework to use) to do it. To begin with, I've found a gallery demo and its source code (I think it was from here, but the link seems broken now). I guess it was made in SproutCore, which is what MobileMe was based upon. Definitely with jQuery (which already seems a little slow). But I'm still missing two features there: the carousel and the fast zoom (it's just not as slow as the zoom in this demo with Cappuccino). Then I've found a supposedly better one with a pretty good and similar zoom, which I'm not sure is using any framework, and it's already in PHP. Right now it'd be better to have just the HTML, no server side, since I'm building the interface first.

    So, can anyone point me in some direction? There are way too many options! Should I look for another solution closer to what I need, or try and tweak one of these? I'm not familiar with any framework at all. Sorry for bringing a question that I will have to answer myself anyway sooner or later, and sorry that I couldn't make this smaller. Thanks.

    Read the article

  • Error while trying to insert image into WordML

    - by Kiru
    Hi, help needed. I am getting the error {"The xml has invalid content and cannot be constructed as an element.\r\nParameter name: outerXml"} while passing constructed XML into the DocumentFormat.OpenXml.Office.Drawing.Drawing() constructor, like this:

        DocumentFormat.OpenXml.Office.Drawing.Drawing d =
            new DocumentFormat.OpenXml.Office.Drawing.Drawing(img);

    Here is the XML which is passed in:

        <w:drawing xmlns:w="http://schemas.openxmlformats.org/drawingml/2006/main">
          <wp:anchor distT="0" distB="0" distL="114300" distR="114300" simplePos="0"
                     relativeHeight="251658240" behindDoc="0" locked="0" layoutInCell="1"
                     allowOverlap="1"
                     xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing">
            <wp:simplePos x="0" y="0"/>
            <wp:positionH relativeFrom="column">
              <wp:align>right</wp:align>
            </wp:positionH>
            <wp:positionV relativeFrom="paragraph">
              <wp:align>top</wp:align>
            </wp:positionV>
            <wp:extent cx="400" cy="400"/>
            <wp:effectExtent l="19050" t="0" r="0" b="0"/>
            <wp:wrapSquare wrapText="bothSides"/>
            <wp:docPr id="1" name="image"/>
            <wp:cNvGraphicFramePr>
              <a:graphicFrameLocks noChangeAspect="1"
                  xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main"/>
            </wp:cNvGraphicFramePr>
            <a:graphic xmlns:a="http://schemas.openxmlformats.org/drawingml/2006/main">
              <a:graphicData uri="http://schemas.openxmlformats.org/drawingml/2006/picture">
                <pic:pic xmlns:pic="http://schemas.openxmlformats.org/drawingml/2006/picture">
                  <pic:nvPicPr>
                    <pic:cNvPr id="0" name="image"/>
                    <pic:cNvPicPr>
                      <a:picLocks noChangeAspect="1" noChangeArrowheads="1"/>
                    </pic:cNvPicPr>
                  </pic:nvPicPr>
                  <pic:blipFill>
                    <a:blip r:embed="rIdImg4" cstate="print"
                        xmlns:r="http://schemas.openxmlformats.org/drawingml/2006/relationships"/>
                    <a:stretch>
                      <a:fillRect/>
                    </a:stretch>
                  </pic:blipFill>
                  <pic:spPr bwMode="auto">
                    <a:xfrm>
                      <a:off x="0" y="0"/>
                      <a:ext cx="400" cy="400"/>
                    </a:xfrm>
                    <a:prstGeom prst="rect">
                      <a:avLst/>
                    </a:prstGeom>
                    <a:noFill/>
                    <a:ln w="9525">
                      <a:noFill/>
                      <a:miter lim="800000"/>
                      <a:headEnd/>
                      <a:tailEnd/>
                    </a:ln>
                  </pic:spPr>
                </pic:pic>
              </a:graphicData>
            </a:graphic>
          </wp:anchor>
        </w:drawing>

    Thanks, Kiru
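
    One observation that may explain the error (an assumption on my part, not confirmed in the question): w:drawing belongs to WordprocessingML, whose namespace is http://schemas.openxmlformats.org/wordprocessingml/2006/main, and the r: prefix normally maps to http://schemas.openxmlformats.org/officeDocument/2006/relationships; the drawingml URIs used for them above would make the SDK reject the element. Under that assumption, the matching SDK class would also be the WordprocessingML one:

        // Hypothetical fix sketch: use the Wordprocessing.Drawing class, and make
        // sure the w: namespace in the XML is the wordprocessingml/2006/main URI.
        var d = new DocumentFormat.OpenXml.Wordprocessing.Drawing(img);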

    Read the article

  • Add an image in a ListView

    - by danish
    Hi, I would like to add more images to my list view, as the code below only displays images 1 and 2 alternately in each row. What I want to do is display a different image for each row. Here is my code below; thanks for any help. I am not good at Java, so please change the code where necessary and I can then refer to it.

        public class starters extends ListActivity {

            private static class EfficientAdapter extends BaseAdapter {
                private LayoutInflater mInflater;
                private Bitmap mIcon1;
                private Bitmap mIcon2;
                private Bitmap mIcon3;
                private Bitmap mIcon4;
                private Bitmap mIcon5;
                private Bitmap mIcon6;
                private Bitmap mIcon7;
                private Bitmap mIcon8;
                private Bitmap mIcon9;
                private Bitmap mIcon10;

                public EfficientAdapter(Context context) {
                    // Cache the LayoutInflater to avoid asking for a new one each time.
                    mInflater = LayoutInflater.from(context);

                    // Icons bound to the rows.
                    mIcon1 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters1);
                    mIcon2 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters2);
                    mIcon3 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters3);
                    mIcon4 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters4);
                    mIcon5 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters5);
                    mIcon6 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters6);
                    mIcon7 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters7);
                    mIcon8 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters8);
                    mIcon9 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters9);
                    mIcon10 = BitmapFactory.decodeResource(context.getResources(), R.drawable.starters10);
                }

                public int getCount() {
                    return DATA.length;
                }

                public Object getItem(int position) {
                    return position;
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    // A ViewHolder keeps references to children views to avoid
                    // unnecessary calls to findViewById() on each row.
                    ViewHolder holder;

                    // When convertView is not null, we can reuse it directly; there is
                    // no need to reinflate it. We only inflate a new View when the
                    // convertView supplied by ListView is null.
                    if (convertView == null) {
                        convertView = mInflater.inflate(R.layout.starters, null);

                        // Creates a ViewHolder and stores references to the two
                        // children views we want to bind data to.
                        holder = new ViewHolder();
                        holder.text = (TextView) convertView.findViewById(R.id.text01);
                        holder.text = (TextView) convertView.findViewById(R.id.secondLine);
                        holder.icon = (ImageView) convertView.findViewById(R.id.icon01);
                        convertView.setTag(holder);
                    } else {
                        // Get the ViewHolder back to get fast access to the TextView
                        // and the ImageView.
                        holder = (ViewHolder) convertView.getTag();
                    }

                    // Bind the data efficiently with the holder.
                    holder.text.setText(DATA[position]);
                    holder.icon.setImageBitmap((position & 1) == 1 ? mIcon1 : mIcon2);
                    return convertView;
                }

                static class ViewHolder {
                    TextView text;
                    ImageView icon;
                }
            }

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setListAdapter(new EfficientAdapter(this));
            }

            private static final String[] DATA = {
                "Original nachos",
                "Toasted chicken and cheese quesadillas",
                "Chicken, lime and coriander nachos",
                "Spicy bean and cheese quesadillas",
                "Tuna and corn quesadillas",
                "Cheesy bean and sweetcorn nachos",
                "Crispy chicken, avocado and lime salad",
                "Beef and baby corn tostada",
                "Spicy mexican rice with chicken and prawns",
                "Chilli potato boats" };
        }
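
    A minimal sketch of the per-row lookup the question asks for (assuming the same starters1..starters10 drawables as above; this replaces the ten Bitmap fields and the (position & 1) line):

        // Hypothetical: one drawable resource per row, indexed by position.
        private static final int[] ICONS = {
            R.drawable.starters1, R.drawable.starters2, R.drawable.starters3,
            R.drawable.starters4, R.drawable.starters5, R.drawable.starters6,
            R.drawable.starters7, R.drawable.starters8, R.drawable.starters9,
            R.drawable.starters10 };

        // In getView(), bind the icon that matches the row:
        holder.icon.setImageResource(ICONS[position % ICONS.length]);

    The modulo guards against DATA growing longer than the icon list; with ten dishes and ten drawables, each row gets its own image.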

    Read the article

  • Div Background Image

    - by marskie
    I just wanted to put the second grey image background on the bottom part of my body. Sorry for this kind of newbie question; thanks a lot for helping a newbie like me.

    CSS:

        #bgtop {
            background-image: url(images/bgtop.png);
            background-repeat: repeat-x;
        }
        #bgbottom {
            background: url(images/bgbottom.png) repeat-x bottom;
        }
        body {
            font: 100% Verdana, Arial, Helvetica, sans-serif;
            background: #ededed;
            margin: 0;
            padding: 0;
            text-align: center;
            color: #000000;
        }
        #container {
            width: 80%;
            background: #FFFFFF;
            margin: 0 auto;
            border: 1px solid #000000;
            text-align: left;
        }
        #header {
            background: #DDDDDD;
            padding: 0 10px 0 20px;
        }
        #header h1 {
            margin: 0;
            padding: 10px 0;
        }
        #mainContent {
            padding: 0 20px;
            background: #FFFFFF;
        }
        #footer {
            padding: 0 10px;
            background: #DDDDDD;
        }
        #footer p {
            margin: 0;
            padding: 10px 0;
        }

    HTML:

        <body>
        <div id="bgtop">
          <div id="bgbottom">
            <div id="container">
              <div id="header">
                <h1>Header</h1>
              <!-- end #header --></div>
              <div id="mainContent">
                <h1> Main Content </h1>
                <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Praesent aliquam, justo convallis luctus rutrum, erat nulla fermentum diam, at nonummy quam ante ac quam. Maecenas urna purus, fermentum id, molestie in, commodo porttitor, felis. Nam blandit quam ut lacus. Quisque ornare risus quis ligula. Phasellus tristique purus a augue condimentum adipiscing. Aenean sagittis. Etiam leo pede, rhoncus venenatis, tristique in, vulputate at, odio. Donec et ipsum et sapien vehicula nonummy. Suspendisse potenti. Fusce varius urna id quam. Sed neque mi, varius eget, tincidunt nec, suscipit id, libero. In eget purus. Vestibulum ut nisl. Donec eu mi sed turpis feugiat feugiat. Integer turpis arcu, pellentesque eget, cursus et, fermentum ut, sapien. Fusce metus mi, eleifend sollicitudin, molestie id, varius et, nibh. Donec nec libero.</p>
                <h2>H2 level heading</h2>
                <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Praesent aliquam, justo convallis luctus rutrum, erat nulla fermentum diam, at nonummy quam ante ac quam. Maecenas urna purus, fermentum id, molestie in, commodo porttitor, felis. Nam blandit quam ut lacus. Quisque ornare risus quis ligula. Phasellus tristique purus a augue condimentum adipiscing. Aenean sagittis. Etiam leo pede, rhoncus venenatis, tristique in, vulputate at, odio.</p>
              <!-- end #mainContent --></div>
              <div id="footer">
                <p>Footer</p>
              <!-- end #footer --></div>
            </div>
          </div>
        <!-- end #container --></div>
        </body>
        </html>
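
    If the goal is just the grey strip pinned to the bottom of the page, one hedged alternative (assuming images/bgbottom.png is the second grey image) is to put it straight on the body, which already carries the page background:

        /* Sketch: keep the top image on its wrapper div, and anchor the
           bottom image to the body itself. */
        body {
            background: #ededed url(images/bgbottom.png) repeat-x bottom;
        }

    A nested-div approach like the one above can also work, but each wrapper needs a height that actually reaches the bottom of the page before "background-position: bottom" has any visible effect, since a div's background bottom is the div's bottom, not the page's.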

    Read the article

  • Get Image Source URLs from a Different Page Using JS

    - by SDD
    Everyone: I'm trying to grab the source URLs of images from one page and use them in some JavaScript in another page. I know how to pull in images using jQuery .load(). However, rather than load all the images and display them on the page, I want to just grab the source URLs so I can use them in a JS array.

    Page 1 is just a page with images:

        <html>
        <head>
        </head>
        <body>
        <img id="image0" src="image0.jpg" />
        <img id="image1" src="image1.jpg" />
        <img id="image2" src="image2.jpg" />
        <img id="image3" src="image3.jpg" />
        </body>
        </html>

    Page 2 contains my JS. (Please note that the end goal is to load images into an array, randomize them, and, using cookies, show a new image on page load every 10 seconds. All this is working. However, rather than hard-code the image paths into my JavaScript as shown below, I'd prefer to take the paths from Page 1 based on their IDs. This way, the images won't always need to be titled "image1.jpg," etc.)

        <script type="text/javascript">
        var days = 730;
        var rotator = new Object();
        var currentTime = new Date();
        var currentMilli = currentTime.getTime();
        var images = [], index = 0;
        images[0] = "image0.jpg";
        images[1] = "image1.jpg";
        images[2] = "image2.jpg";
        images[3] = "image3.jpg";

        rotator.getCookie = function(Name) {
            var re = new RegExp(Name + "=[^;]+", "i");
            if (document.cookie.match(re))
                return document.cookie.match(re)[0].split("=")[1];
            return '';
        }

        rotator.setCookie = function(name, value, days) {
            var expireDate = new Date();
            var expstring = expireDate.setDate(expireDate.getDate() + parseInt(days));
            document.cookie = name + "=" + value + "; expires=" + expireDate.toGMTString() + "; path=/";
        }

        rotator.randomize = function() {
            index = Math.floor(Math.random() * images.length);
            randomImageSrc = images[index];
        }

        rotator.check = function() {
            if (rotator.getCookie("randomImage") == "") {
                rotator.randomize();
                document.write("<img src=" + randomImageSrc + ">");
                rotator.setCookie("randomImage", randomImageSrc, days);
                rotator.setCookie("timeClock", currentMilli, days);
            } else {
                var writtenTime = parseInt(rotator.getCookie("timeClock"), 10);
                if (currentMilli > writtenTime + 10000) {
                    rotator.randomize();
                    var writtenImage = rotator.getCookie("randomImage");
                    while (randomImageSrc == writtenImage) {
                        rotator.randomize();
                    }
                    document.write("<img src=" + randomImageSrc + ">");
                    rotator.setCookie("randomImage", randomImageSrc, days);
                    rotator.setCookie("timeClock", currentMilli, days);
                } else {
                    var writtenImage = rotator.getCookie("randomImage");
                    document.write("<img src=" + writtenImage + ">");
                }
            }
        }

        rotator.check()
        </script>

    Can anyone point me in the right direction? My hunch is to use jQuery .get(), but I've been unsuccessful so far. Please let me know if I can clarify!
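
    The .get() hunch is workable; a minimal sketch (assuming Page 1 is same-origin at page1.html, since cross-domain requests would be blocked):

        // Fetch Page 1, parse it off-DOM, and collect every img's src.
        $.get('page1.html', function (data) {
            var images = $('<div>').html(data).find('img').map(function () {
                return $(this).attr('src');
            }).get();
            // images is now e.g. ["image0.jpg", "image1.jpg", ...],
            // ready to hand to the rotator in place of the hard-coded array.
        });

    Two caveats: the callback runs asynchronously, so rotator.check() would need to be called from inside it once the array is filled, and parsing the markup may cause the browser to start fetching the images.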

    Read the article

  • Complex CSS image centering help?

    - by Tenshiko
    My problem is a bit more complex than the title says. Sorry, I don't know how to be more specific... I'm working on a website and I came across a part where I should display some thumbnails. The thing is, the thumbnails do not match in dimensions. (I know, it sounds ridiculous, since this is what thumbnails are for, right?) No, there is simply NO WAY to create them in the same dimensions!

    I've managed to create an HTML+CSS structure to work around this problem, and the images are not stretched to fit their containers if they are smaller, while keeping their aspect ratio. The only remaining issue is centering the images. Since setting margin to "0 auto" or "auto 0" is not helping, I've tried setting up multiple containers and using their margins to position the images. This is also not working: if I put a 120x120 picture in a 120x80 inner container and set the container's top and left margins to -50%, both margins become -60px. Can this be fixed? Or is there yet another way to center the images? I'm open to any suggestions!

    HTML:

        <div id="roll">
          <div class="imgfix">
            <div class="outer">
              <div class="inner">
                @if (ImageDimensionHelper.WhereToAlignImg(item.Width, item.Height, 120, 82) == ImgAlign.Width)
                <!-- ImageDimensionHelper tells me if the image should fit the
                     container with its width or height. I set the class of the
                     img accordingly. -->
                {
                    <img class="width" src="@Url.Content(item.URL)" alt="@item.Name"/>
                }
                else
                {
                    <img class="height" src="@Url.Content(item.URL)" alt="@item.Name"/>
                }
              </div>
            </div>
          </div>
        </div>

    CSS:

        .imgfix { overflow: hidden; }
        .imgfix .outer { width: 100%; height: 100%; }
        .imgfix .inner { width: 100%; height: 100%; margin-top: -50%; margin-left: -50%; }
        /* This div (.inner) gets -60px for both margins every time, regardless
           of the size of itself or of the image inside it. */

        #roll .imgfix { width: 120px; height: 82px; border: 1px #5b91ba solid; }
        #roll .imgfix .outer { margin-top: 41px; margin-left: 60px; }
        /* Since I know specifically what these margins should be, I set them
           explicitly, because 50% got the wrong size. */

        #roll .imgfix img.width { width: 120px; height: auto; margin: auto 0; }
        #roll .imgfix img.height { height: 82px; width: auto; margin: 0 auto; }
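
    Incidentally, the -60px result is expected: percentage margins, top margins included, resolve against the containing block's width, and 50% of the 120px box is 60px in both directions. A hedged alternative that avoids the margin math entirely is table-cell alignment (a sketch against the same 120x82 box, reusing the question's own selectors):

        /* Center images of unknown size both ways inside the fixed box. */
        #roll .imgfix {
            display: table-cell;      /* enables vertical-align on the box   */
            vertical-align: middle;   /* vertical centering                  */
            text-align: center;       /* horizontal centering of inline img  */
            width: 120px;
            height: 82px;
        }

    With this, the .outer/.inner wrappers and the negative margins can usually be dropped, since an img is inline content and centers like text.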

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500-frame 3D ray-traced animation. At the end of the presentation, when the animation is complete, I play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”: the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed within a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also with the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
            noise surface {
                ambient 0.2  diffuse 0.7  specular white, 0.5
                microfacet Reitz 10
                position_fn position_cylindrical
                position_scale 1
                lookup_fn lookup_sawtooth
                octaves 1
                turbulence 1
                color_map(
                    [0.0, 0.2, light_wood, light_wood]
                    [0.2, 0.3, light_wood, median_wood]
                    [0.3, 0.4, median_wood, light_wood]
                    [0.4, 0.7, light_wood, light_wood]
                    [0.7, 0.8, light_wood, median_wood]
                    [0.8, 0.9, median_wood, light_wood]
                    [0.9, 1.0, light_wood, dark_wood])
            }
        }
        define glass texture {
            surface {
                ambient 0 diffuse 0 specular 0.2
                reflection white, 0.1
                transmission white, 1, 1.5
            }
        }
        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture {
            surface {
                color white ambient 0.0 diffuse 0.2 specular 0.4
                microfacet Phong 10
                reflection 0.8
            }
        }

        viewpoint {
            from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
            resolution 640, 480 aspect 1.6 image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin-board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of each pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated.

    The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the Xbox 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the Xbox 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the Xbox Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screenshot of the modified Kinect Explorer application is shown below.

    The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below.

    Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below.

    The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling.

    The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below (256 instance-hours at the small-instance rate of $0.12 per hour, the same rate behind the $1.92 figure above, works out to $30.72).

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure results in the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
    The service definition for the worker role, with the local storage configuration, is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="CloudRay" >
          <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
            <Imports>
            </Imports>
            <ConfigurationSettings>
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
            <LocalResources>
              <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
            </LocalResources>
          </WorkerRole>
        </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's.

    Each worker role will use the following process to render the animation frames:

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

        public override void Run()
        {
            // Set environment variables
            string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
            string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

            LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
            string localStorageRootPath = rayStorage.RootPath;

            JobQueue jobQueue = new JobQueue("renderjobs");
            JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
            CloudRayBlob imageBlob = new CloudRayBlob("images");
            RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

            Frames = 0;

            while (true)
            {
                // Get the render job from the queue
                CloudQueueMessage jobMsg = jobQueue.Get();

                if (jobMsg != null)
                {
                    // Get the file details
                    string sceneFile = jobMsg.AsString;
                    string tgaFile = sceneFile.Replace(".pi", ".tga");
                    string jpgFile = sceneFile.Replace(".pi", ".jpg");

                    string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                    string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                    string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                    // Copy the scene file to local storage
                    sceneBlob.DownloadFile(sceneFilePath);

                    // Run the ray tracer.
                    string polyrayArguments =
                        string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                    Process polyRayProcess = new Process();
                    polyRayProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                    polyRayProcess.StartInfo.Arguments = polyrayArguments;
                    polyRayProcess.Start();
                    polyRayProcess.WaitForExit();

                    // Convert the image
                    string dtaArguments =
                        string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                    Process dtaProcess = new Process();
                    dtaProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                    dtaProcess.StartInfo.Arguments = dtaArguments;
                    dtaProcess.Start();
                    dtaProcess.WaitForExit();

                    // Upload the image to blob storage
                    imageBlob.UploadFile(jpgFilePath);

                    // Add a download job.
                    downloadQueue.Add(jpgFile);

                    // Delete the render job message
                    jobQueue.Delete(jobMsg);

                    Frames++;
                }
                else
                {
                    Thread.Sleep(1000);
                }

                // Log the worker role activity.
                roleLifecycleDataSource.Alive
                    ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
            }
        }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

        public class RoleLifecycle : TableServiceEntity
        {
            public string ServerName { get; set; }
            public string Status { get; set; }
            public DateTime StartTime { get; set; }
            public DateTime EndTime { get; set; }
            public long SecondsRunning { get; set; }
            public DateTime LastActiveTime { get; set; }
            public int Frames { get; set; }
            public string Comment { get; set; }

            public RoleLifecycle()
            {
            }

            public RoleLifecycle(string roleName)
            {
                PartitionKey = roleName;
                RowKey = Utils.GetAscendingRowKey();
                Status = "Started";
                StartTime = DateTime.UtcNow;
                LastActiveTime = StartTime;
                EndTime = StartTime;
                SecondsRunning = 0;
                Frames = 0;
            }
        }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively resources are being used in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups, I often start with 16 instances and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="16" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

    Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.

    With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay"
            xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
            osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="256" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Six minutes after the new configuration was applied, 75 new worker roles had activated and were processing their first frames. Five minutes later the full configuration of 256 worker roles was up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

    We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.

    The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled rendered between 6 and 9 frames each.

    Completed Animation

    I've uploaded the completed animation to YouTube; a low resolution preview is shown below: Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application.

    The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to the Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for Using Windows Azure for Grid Computing Scenarios

    I found a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • Google authorship image for my blogspot has disappeared from the Google SERP. Why?

    - by Sathiya Kumar
    I have a blogspot blog, and I used my image to appear in the Google SERP for my keywords using Google authorship markup. My image showed for the last 2 months, but while checking the SERP for my blog, I found that my authorship markup is no longer working: my image, name and G+ followers count are not appearing near my blogspot URL in the SERP. I didn't make any changes to my Google+ profile or to my blogspot header tag where I had put the authorship code. I tried to find the reason but didn't find any helpful answer. Can anyone answer this question? Please let me know if you have already experienced something like this.

    Read the article

  • Why was my Facebook game rejected with the note that "your app icon must not overlap with content in your cover image?"

    - by peterwilli
    My FB game just recently got rejected for two reasons. The first I fixed, but I just can't seem to figure out what they mean by the second, and I was hoping someone else got the same issue and knows what they meant. The remaining error is: "Cover Image: Your app icon must not overlap with content in your cover image. Click on 'Web Preview' in the 'App Details' section to check for overlap prior to submitting your app. See more here." All I know is that the rejection has something to do with the cover image, not the icons or the screenshots. The web preview of my game looks like this now: Please let me know what to do to get approved.

    Read the article

  • Why is Firefox changing the color calibration of this image?

    - by eoinoc
    The symptom of my problem is that the same hex color in a PNG image does not match the CSS-defined color defined by the same hex code. This problem only happens in Firefox when gfx.color_management.mode is set to 2 (tagged images only) rather than 0 (off). (Firefox ICC color correction described here). The image is http://dzfk93w6juz0e.cloudfront.net/images/background-top-light.png which at the bottom has the color #c8e8bd. However, the shade of green is different to that color when Firefox color calibration is enabled. Is this image inadvertently "tagged" for color correction?
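
    A quick way to check whether the PNG is actually tagged (a hedged sketch using ImageMagick, assuming it is installed) is to dump the image's metadata and look for an embedded ICC profile or gamma/sRGB chunk, which may be what mode 2 reacts to:

        # List the PNG's properties and profiles; an "icc" profile or a
        # gAMA/sRGB entry suggests Firefox will treat the image as tagged.
        identify -verbose background-top-light.png | grep -i -E 'icc|gamma|srgb'

    If a profile or gamma chunk shows up, stripping it (for example with 'convert in.png -strip out.png') should make the PNG match the untagged CSS color.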

    Read the article

  • "Light Field" : prendre une photo puis faire le point, la technologie est-elle la prochaine révolution du traitement informatique de l'image ?

    Lytro: take a photo, then focus. Is "light field" technology the next revolution in computer image processing? For several years, various research centers have been studying a capture technique known as "light field". The goal is to replace the capture of an image with the capture of the flow of light in the scene. It then becomes possible to generate an image afterwards while changing the point of focus. After industrial exploitation by the company RayTrix, and a smartphone solution from the company Pelican Imagine, it is now the turn of

    Read the article
