Search Results

Search found 30417 results on 1217 pages for 'background service'.


  • Application for taking pretty screenshots (like OS X does)

    - by Oli
    I've been building a website for a guy who uses Mac OS X and occasionally he sends me screenshots of bugs. They come out looking like this: This is fairly typical of Mac screenshots. You get the window decorations, the shadow from the window and a white or transparent background (not the desktop wallpaper -- I've checked). Compare this to an Ubuntu window-shot (Alt+Print Screen): It's impossible to keep a straight face and say the Ubuntu one is anywhere near as elegant. My question is: Is there an application that can do this in Ubuntu? Edit/follow-up: Is there an application that can do this in one move? Shutter is pretty good, but running the plugin for every screenshot is pretty tiresome as it doesn't seem to remember my preference (I want a south shadow, and that requires selecting south, then clicking refresh, then save) and it's more clicks than I'd like. Is there a simple way of telling Shutter I want a south shadow for all screenshots (except entire desktop and area selection)?
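    For what it's worth, the effect itself is easy to reproduce by hand; the following is a minimal sketch (assuming the Pillow imaging library and a window capture with transparent corners, e.g. from gnome-screenshot -w; the offsets and blur radius are arbitrary illustration values, not Shutter's defaults) of compositing a south drop shadow under a screenshot:

    from PIL import Image, ImageFilter

    def add_drop_shadow(path, out_path, offset=(0, 12), blur=16, margin=40):
        window = Image.open(path).convert("RGBA")
        w, h = window.size
        # Canvas large enough for the window plus the shadow spill.
        result = Image.new("RGBA", (w + 2 * margin, h + 2 * margin), (255, 255, 255, 0))
        # Build the shadow from the window's own alpha channel, then blur it.
        silhouette = Image.new("RGBA", window.size, (0, 0, 0, 180))
        result.paste(silhouette, (margin + offset[0], margin + offset[1]), window)
        result = result.filter(ImageFilter.GaussianBlur(blur))
        # Paste the window on top of its shadow and save.
        result.paste(window, (margin, margin), window)
        result.save(out_path)

    add_drop_shadow("window.png", "window-shadow.png")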

    Read the article

  • how to access inaccessible mac os x hard drive via ubuntu

    - by jon
    Background: My intention was to load a virtual machine (VM) on my Mac OS X Snow Leopard system. My Mac had just enough room for a VM (my thought process was that a VM was the same as a partition). I burned the newest version of Ubuntu onto a CD, thinking that partitioning and running a virtual machine would be the same, and restarted my computer, booting into the Ubuntu installer. The installation would not allow me to partition, forcing me to force-shutdown my laptop. Now when I turn on my laptop, I see that my computer is "missing operating system". So, can someone help me a) fix Boot Camp, b) get my files back, and, if a and b are fixed, c) install Ubuntu as a VM?

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed to be a good idea to dedicate some time to cluster basic concepts and theory. This post misses a lot of details that would explode the article size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you complete definitions of cluster, cluster agent, quorum, voting, fencing and split brain condition, so the following is more of an explanation. Here we go. -------- Cluster, HA, failover, switchover, scalability -------- An attempted definition of a cluster: A cluster is a set (2+) of server nodes dedicated to keeping application services alive, communicating with each other through the cluster software/framework, testing and probing the health status of servernodes/services, and keeping the application services running on them available with quorum-based decisions and switchover/failover techniques. That is, should a node that runs a service unexpectedly lose functionality/connection, the other ones take over and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.  At this point we have to add that this defines an HA cluster, a High-Availability cluster, where the clusternodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate clusternodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way. Or a database running in an active-active RAC setup.  -------- Cluster architecture, interconnect, topologies -------- Now, what is a cluster made of? Servers, right. These servers (the clusternodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the clusternodes. These connections are called interconnects. How many clusternodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two clusternodes. There are several more topologies; clicking the image above will take you to the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these clusternodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to survive only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case of course you should place them at least in different fire zones. Or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the clusternodes being several kilometers away from each other. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal.  What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.  -------- Cluster resource types, agents, resources, resource groups -------- So how does the cluster manage my applications? 
    The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop on node A, start on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc. We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following: on node A, shut down the DB, unconfigure the LogicalHost IP the DB listener listens on, and unmount the filesystem; then, on node B, mount the FS, configure the IP and start up the DB. -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia -------- How do the clusternodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; clusternodes think very much alike. Still, every action in a production system needs to be governed and agreed upon. Agreeing is easy as long as the clusternodes all behave and talk to each other over the interconnect. But if the interconnect is gone/down, this all gets tricky and confusing. Clusternodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, clusternodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster they both have only 50% of the votes, that is, they themselves alone are not allowed to run a cluster.  This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster. For this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split brain condition), both nodes start to race and try to reserve the quorum device for themselves. 
    They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The one that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build/run a cluster; the other one - realizing it was late - panics/reboots to ensure the cluster config stays consistent.  Losing the interconnect doesn't only endanger the availability of services, it also endangers the consistency of the cluster configuration. Just imagine node A being down and, during that time, the cluster configuration changes. Now node B goes down, and node A comes up. It isn't up to date about the cluster configuration changes, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older cluster configuration repository state, as if it had forgotten about the changes.  Also, to ensure application data consistency, the clusternode that wins the race makes sure that a server that isn't part of, or can't currently join, the cluster cannot access the devices. This procedure is called fencing. This usually happens to storage LUNs via SCSI reservation.  Now, another important question: Where do I place the quorum disk?  Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one clusternode, which isn't a problem, but you also lose your quorum, and the south clusternode can't keep the cluster running, lacking the votes. This problem can't be solved with two sites and a campus cluster. You will need a third site to either place the quorum server at, or host a third clusternode. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster. Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, upgrade procedures - should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation.  Oh, as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
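    To make the vote arithmetic above concrete, here is a minimal sketch (plain Python, purely illustrative; not Solaris Cluster code) of why a 2-node cluster needs the extra quorum-device vote:

    def can_form_cluster(partition_votes: int, total_votes: int) -> bool:
        # A partition may run the cluster only with a strict majority of all configured votes.
        return partition_votes > total_votes // 2

    # Two nodes, one vote each, no quorum device: neither side alone may proceed.
    print(can_form_cluster(1, 2))   # False
    # With a quorum device (one extra vote), the node that reserves it wins.
    print(can_form_cluster(2, 3))   # True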

    Read the article

  • White Paper/Case Study on ICONICS’ Use of StreamInsight for its Energy AnalytiX® Solution

    - by Roman Schindlauer
    A couple of days ago, we released a new StreamInsight white paper/case study on TechNet and MSDN. The paper is joint work with ICONICS and discusses how ICONICS is using StreamInsight technology for its Energy AnalytiX® solution. The paper is available for download here in the Technical Articles section of the StreamInsight documentation. Today, businesses and organizations need to pay more and more attention to energy usage, as customers and the general public are becoming increasingly concerned about a respectful and sustainable use of resources. Organizations therefore need to carefully manage their use of energy and provide better visibility into their energy consumption. In this paper, we discuss how software solutions can help address these challenges. Besides providing some background on the drivers behind energy management, the paper discusses how organizations manage their use of energy with current product and service offerings from Microsoft and ICONICS. In the main body of the paper, a case study explains in depth how ICONICS Energy AnalytiX® is using Microsoft data platform components such as SQL Server StreamInsight to deliver market leading energy management solutions. Regards, The StreamInsight Team

    Read the article

  • Need help choosing between Grails and Yii Framework

    - by user530207
    I recently started developing in PHP with the Yii framework. I then came across the Grails framework and I'm pretty impressed by the sites built with it; bigger companies seem to use Grails for their web development. Looking at Yii, not many big companies are using it. I'm just starting out with Yii and I don't want to turn back halfway through learning it, so I hope someone can give me a comparison of the two in terms of power. Does Grails make things much easier and benefit me in the long run? I only have a C++ background for now. It boils down to this: I want a powerful framework which will serve me for a very long time, and looking at the number of big companies using Grails, I feel discouraged to take the Yii path. Thank you! Some sites built with Grails: http://video.sky.com/ http://espn.go.com/ http://www.atlassian.com/ http://www.linkedin.com/

    Read the article

  • Google I/O 2010 - Creating positive user experiences

    Google I/O 2010 - Beyond design: Creating positive user experiences (Tech Talks) - John Zeratsky, Matt Shobe. Good user experience isn't just about good design. Learn how to create a positive user experience by being fast, open, engaged, surprising, polite, and, well... being yourself. Chock full of examples from the web and beyond, this talk is a practical introduction for developers who are passionate about user experience but may not have a background in design. For all I/O 2010 sessions, please go to code.google.com

    Read the article

  • Residual packages Ubuntu 12.04

    - by hydroxide
    I have an Asus Q500A with win8 and Ubuntu 12.04 64 bit; Linux kernel 3.8.0-32-generic. I have been having residual package issues which have been giving me trouble trying to reconfigure xserver-xorg-lts-raring. I tried removing all residual packages from synaptic but the following were not removed. Output of sudo dpkg -l | grep "^rc" rc gstreamer0.10-plugins-good:i386 0.10.31-1ubuntu1.2 GStreamer plugins from the "good" set rc libaa1:i386 1.4p5-39ubuntu1 ASCII art library rc libaio1:i386 0.3.109-2ubuntu1 Linux kernel AIO access library - shared library rc libao4:i386 1.1.0-1ubuntu2 Cross Platform Audio Output Library rc libasn1-8-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - ASN.1 library rc libasound2:i386 1.0.25-1ubuntu10.2 shared library for ALSA applications rc libasyncns0:i386 0.8-4 Asynchronous name service query library rc libatk1.0-0:i386 2.4.0-0ubuntu1 ATK accessibility toolkit rc libavahi-client3:i386 0.6.30-5ubuntu2 Avahi client library rc libavahi-common3:i386 0.6.30-5ubuntu2 Avahi common library rc libavc1394-0:i386 0.5.3-1ubuntu2 control IEEE 1394 audio/video devices rc libcaca0:i386 0.99.beta17-2.1ubuntu2 colour ASCII art library rc libcairo-gobject2:i386 1.10.2-6.1ubuntu3 The Cairo 2D vector graphics library (GObject library) rc libcairo2:i386 1.10.2-6.1ubuntu3 The Cairo 2D vector graphics library rc libcanberra-gtk0:i386 0.28-3ubuntu3 GTK+ helper for playing widget event sounds with libcanberra rc libcanberra0:i386 0.28-3ubuntu3 simple abstract interface for playing event sounds rc libcap2:i386 1:2.22-1ubuntu3 support for getting/setting POSIX.1e capabilities rc libcdparanoia0:i386 3.10.2+debian-10ubuntu1 audio extraction tool for sampling CDs (library) rc libcroco3:i386 0.6.5-1ubuntu0.1 Cascading Style Sheet (CSS) parsing and manipulation toolkit rc libcups2:i386 1.5.3-0ubuntu8 Common UNIX Printing System(tm) - Core library rc libcupsimage2:i386 1.5.3-0ubuntu8 Common UNIX Printing System(tm) - Raster image library rc libcurl3:i386 7.22.0-3ubuntu4.3 Multi-protocol file transfer library (OpenSSL) rc libdatrie1:i386 0.2.5-3 Double-array trie library rc libdbus-glib-1-2:i386 0.98-1ubuntu1.1 simple interprocess messaging system (GLib-based shared library) rc libdbusmenu-qt2:i386 0.9.2-0ubuntu1 Qt implementation of the DBusMenu protocol rc libdrm-nouveau2:i386 2.4.43-0ubuntu0.0.3 Userspace interface to nouveau-specific kernel DRM services -- runtime rc libdv4:i386 1.0.0-3ubuntu1 software library for DV format digital video (runtime lib) rc libesd0:i386 0.2.41-10build3 Enlightened Sound Daemon - Shared libraries rc libexif12:i386 0.6.20-2ubuntu0.1 library to parse EXIF files rc libexpat1:i386 2.0.1-7.2ubuntu1.1 XML parsing C library - runtime library rc libflac8:i386 1.2.1-6 Free Lossless Audio Codec - runtime C library rc libfontconfig1:i386 2.8.0-3ubuntu9.1 generic font configuration library - runtime rc libfreetype6:i386 2.4.8-1ubuntu2.1 FreeType 2 font engine, shared library files rc libgail18:i386 2.24.10-0ubuntu6 GNOME Accessibility Implementation Library -- shared libraries rc libgconf-2-4:i386 3.2.5-0ubuntu2 GNOME configuration database system (shared libraries) rc libgcrypt11:i386 1.5.0-3ubuntu0.2 LGPL Crypto library - runtime library rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6ubuntu2 GD Graphics Library version 2 rc libgdbm3:i386 1.8.3-10 GNU dbm database routines (runtime version) rc libgdk-pixbuf2.0-0:i386 2.26.1-1 GDK Pixbuf library rc libgif4:i386 4.1.6-9ubuntu1 library for GIF images (library) rc libgl1-mesa-dri-lts-quantal:i386 
9.0.3-0ubuntu0.4~precise1 free implementation of the OpenGL API -- DRI modules rc libgl1-mesa-dri-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the OpenGL API -- DRI modules rc libgl1-mesa-glx:i386 8.0.4-0ubuntu0.6 free implementation of the OpenGL API -- GLX runtime rc libgl1-mesa-glx-lts-quantal:i386 9.0.3-0ubuntu0.4~precise1 free implementation of the OpenGL API -- GLX runtime rc libgl1-mesa-glx-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the OpenGL API -- GLX runtime rc libglapi-mesa:i386 8.0.4-0ubuntu0.6 free implementation of the GL API -- shared library rc libglapi-mesa-lts-quantal:i386 9.0.3-0ubuntu0.4~precise1 free implementation of the GL API -- shared library rc libglapi-mesa-lts-raring:i386 9.1.4-0ubuntu0.1~precise2 free implementation of the GL API -- shared library rc libglu1-mesa:i386 8.0.4-0ubuntu0.6 Mesa OpenGL utility library (GLU) rc libgnome-keyring0:i386 3.2.2-2 GNOME keyring services library rc libgnutls26:i386 2.12.14-5ubuntu3.5 GNU TLS library - runtime library rc libgomp1:i386 4.6.3-1ubuntu5 GCC OpenMP (GOMP) support library rc libgpg-error0:i386 1.10-2ubuntu1 library for common error values and messages in GnuPG components rc libgphoto2-2:i386 2.4.13-1ubuntu1.2 gphoto2 digital camera library rc libgphoto2-port0:i386 2.4.13-1ubuntu1.2 gphoto2 digital camera port library rc libgssapi-krb5-2:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - krb5 GSS-API Mechanism rc libgssapi3-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - GSSAPI support library rc libgstreamer-plugins-base0.10-0:i386 0.10.36-1ubuntu0.1 GStreamer libraries from the "base" set rc libgstreamer0.10-0:i386 0.10.36-1ubuntu1 Core GStreamer libraries and elements rc libgtk2.0-0:i386 2.24.10-0ubuntu6 GTK+ graphical user interface library rc libgudev-1.0-0:i386 1:175-0ubuntu9.4 GObject-based wrapper library for libudev rc libhcrypto4-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - crypto library rc libheimbase1-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - Base library rc libheimntlm0-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - NTLM support library rc libhx509-5-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - X509 support library rc libibus-1.0-0:i386 1.4.1-3ubuntu1 Intelligent Input Bus - shared library rc libice6:i386 2:1.0.7-2build1 X11 Inter-Client Exchange library rc libidn11:i386 1.23-2 GNU Libidn library, implementation of IETF IDN specifications rc libiec61883-0:i386 1.2.0-0.1ubuntu1 an partial implementation of IEC 61883 rc libieee1284-3:i386 0.2.11-10build1 cross-platform library for parallel port access rc libjack-jackd2-0:i386 1.9.8~dfsg.1-1ubuntu2 JACK Audio Connection Kit (libraries) rc libjasper1:i386 1.900.1-13 JasPer JPEG-2000 runtime library rc libjpeg-turbo8:i386 1.1.90+svn733-0ubuntu4.2 IJG JPEG compliant runtime library. 
rc libjson0:i386 0.9-1ubuntu1 JSON manipulation library - shared library rc libk5crypto3:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - Crypto Library rc libkeyutils1:i386 1.5.2-2 Linux Key Management Utilities (library) rc libkrb5-26-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - libraries rc libkrb5-3:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries rc libkrb5support0:i386 1.10+dfsg~beta1-2ubuntu0.3 MIT Kerberos runtime libraries - Support library rc liblcms1:i386 1.19.dfsg-1ubuntu3 Little CMS color management library rc libldap-2.4-2:i386 2.4.28-1.1ubuntu4.4 OpenLDAP libraries rc libllvm3.0:i386 3.0-4ubuntu1 Low-Level Virtual Machine (LLVM), runtime library rc libllvm3.1:i386 3.1-2ubuntu1~12.04.1 Low-Level Virtual Machine (LLVM), runtime library rc libllvm3.2:i386 3.2-2ubuntu5~precise1 Low-Level Virtual Machine (LLVM), runtime library rc libltdl7:i386 2.4.2-1ubuntu1 A system independent dlopen wrapper for GNU libtool rc libmad0:i386 0.15.1b-7ubuntu1 MPEG audio decoder library rc libmikmod2:i386 3.1.12-2 Portable sound library rc libmng1:i386 1.0.10-3 Multiple-image Network Graphics library rc libmpg123-0:i386 1.12.1-3.2ubuntu1 MPEG layer 1/2/3 audio decoder -- runtime library rc libmysqlclient18:i386 5.5.32-0ubuntu0.12.04.1 MySQL database client library rc libnspr4:i386 4.9.5-0ubuntu0.12.04.1 NetScape Portable Runtime Library rc libnss3:i386 3.14.3-0ubuntu0.12.04.1 Network Security Service libraries rc libodbc1:i386 2.2.14p2-5ubuntu3 ODBC library for Unix rc libogg0:i386 1.2.2~dfsg-1ubuntu1 Ogg bitstream library rc libopenal1:i386 1:1.13-4ubuntu3 Software implementation of the OpenAL API (shared library) rc liborc-0.4-0:i386 1:0.4.16-1ubuntu2 Library of Optimized Inner Loops Runtime Compiler rc libosmesa6:i386 8.0.4-0ubuntu0.6 Mesa Off-screen rendering extension rc libp11-kit0:i386 0.12-2ubuntu1 Library for loading and coordinating access to PKCS#11 modules - runtime rc libpango1.0-0:i386 1.30.0-0ubuntu3.1 Layout and rendering of internationalized text rc libpixman-1-0:i386 0.24.4-1 pixel-manipulation library for X and cairo rc libproxy1:i386 0.4.7-0ubuntu4.1 automatic proxy configuration management library (shared) rc libpulse-mainloop-glib0:i386 1:1.1-0ubuntu15.4 PulseAudio client libraries (glib support) rc libpulse0:i386 1:1.1-0ubuntu15.4 PulseAudio client libraries rc libqt4-dbus:i386 4:4.8.1-0ubuntu4.4 Qt 4 D-Bus module rc libqt4-declarative:i386 4:4.8.1-0ubuntu4.4 Qt 4 Declarative module rc libqt4-designer:i386 4:4.8.1-0ubuntu4.4 Qt 4 designer module rc libqt4-network:i386 4:4.8.1-0ubuntu4.4 Qt 4 network module rc libqt4-opengl:i386 4:4.8.1-0ubuntu4.4 Qt 4 OpenGL module rc libqt4-qt3support:i386 4:4.8.1-0ubuntu4.4 Qt 3 compatibility library for Qt 4 rc libqt4-script:i386 4:4.8.1-0ubuntu4.4 Qt 4 script module rc libqt4-scripttools:i386 4:4.8.1-0ubuntu4.4 Qt 4 script tools module rc libqt4-sql:i386 4:4.8.1-0ubuntu4.4 Qt 4 SQL module rc libqt4-svg:i386 4:4.8.1-0ubuntu4.4 Qt 4 SVG module rc libqt4-test:i386 4:4.8.1-0ubuntu4.4 Qt 4 test module rc libqt4-xml:i386 4:4.8.1-0ubuntu4.4 Qt 4 XML module rc libqt4-xmlpatterns:i386 4:4.8.1-0ubuntu4.4 Qt 4 XML patterns module rc libqtcore4:i386 4:4.8.1-0ubuntu4.4 Qt 4 core module rc libqtgui4:i386 4:4.8.1-0ubuntu4.4 Qt 4 GUI module rc libqtwebkit4:i386 2.2.1-1ubuntu4 Web content engine library for Qt rc libraw1394-11:i386 2.0.7-1ubuntu1 library for direct access to IEEE 1394 bus (aka FireWire) rc libroken18-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - roken support library 
rc librsvg2-2:i386 2.36.1-0ubuntu1 SAX-based renderer library for SVG files (runtime) rc librtmp0:i386 2.4~20110711.gitc28f1bab-1 toolkit for RTMP streams (shared library) rc libsamplerate0:i386 0.1.8-4 Audio sample rate conversion library rc libsane:i386 1.0.22-7ubuntu1 API library for scanners rc libsasl2-2:i386 2.1.25.dfsg1-3ubuntu0.1 Cyrus SASL - authentication abstraction library rc libsdl-image1.2:i386 1.2.10-3 image loading library for Simple DirectMedia Layer 1.2 rc libsdl-mixer1.2:i386 1.2.11-7 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsdl-net1.2:i386 1.2.7-5 Network library for Simple DirectMedia Layer 1.2, libraries rc libsdl-ttf2.0-0:i386 2.0.9-1.1ubuntu1 ttf library for Simple DirectMedia Layer with FreeType 2 support rc libsdl1.2debian:i386 1.2.14-6.4ubuntu3 Simple DirectMedia Layer rc libshout3:i386 2.2.2-7ubuntu1 MP3/Ogg Vorbis broadcast streaming library rc libsm6:i386 2:1.2.0-2build1 X11 Session Management library rc libsndfile1:i386 1.0.25-4 Library for reading/writing audio files rc libsoup-gnome2.4-1:i386 2.38.1-1 HTTP library implementation in C -- GNOME support library rc libsoup2.4-1:i386 2.38.1-1 HTTP library implementation in C -- Shared library rc libspeex1:i386 1.2~rc1-3ubuntu2 The Speex codec runtime library rc libspeexdsp1:i386 1.2~rc1-3ubuntu2 The Speex extended runtime library rc libsqlite3-0:i386 3.7.9-2ubuntu1.1 SQLite 3 shared library rc libssl0.9.8:i386 0.9.8o-7ubuntu3.1 SSL shared libraries rc libstdc++5:i386 1:3.3.6-25ubuntu1 The GNU Standard C++ Library v3 rc libstdc++6:i386 4.6.3-1ubuntu5 GNU Standard C++ Library v3 rc libtag1-vanilla:i386 1.7-1ubuntu5 audio meta-data library - vanilla flavour rc libtasn1-3:i386 2.10-1ubuntu1.1 Manage ASN.1 structures (runtime) rc libtdb1:i386 1.2.9-4 Trivial Database - shared library rc libthai0:i386 0.1.16-3 Thai language support library rc libtheora0:i386 1.1.1+dfsg.1-3ubuntu2 The Theora Video Compression Codec rc libtiff4:i386 3.9.5-2ubuntu1.5 Tag Image File Format (TIFF) library rc libtxc-dxtn-s2tc0:i386 0~git20110809-2.1 Texture compression library for Mesa rc libunistring0:i386 0.9.3-5 Unicode string library for C rc libusb-0.1-4:i386 2:0.1.12-20 userspace USB programming library rc libv4l-0:i386 0.8.6-1ubuntu2 Collection of video4linux support libraries rc libv4lconvert0:i386 0.8.6-1ubuntu2 Video4linux frame format conversion library rc libvisual-0.4-0:i386 0.4.0-4 Audio visualization framework rc libvorbis0a:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (Decoder library) rc libvorbisenc2:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (Encoder library) rc libvorbisfile3:i386 1.3.2-1ubuntu3 The Vorbis General Audio Compression Codec (High Level API) rc libwavpack1:i386 4.60.1-2 audio codec (lossy and lossless) - library rc libwind0-heimdal:i386 1.6~git20120311.dfsg.1-2ubuntu0.1 Heimdal Kerberos - stringprep implementation rc libwrap0:i386 7.6.q-21 Wietse Venema's TCP wrappers library rc libx11-6:i386 2:1.4.99.1-0ubuntu2.2 X11 client-side library rc libx11-xcb1:i386 2:1.4.99.1-0ubuntu2.2 Xlib/XCB interface library rc libxau6:i386 1:1.0.6-4 X11 authorisation library rc libxaw7:i386 2:1.0.9-3ubuntu1 X11 Athena Widget library rc libxcb-dri2-0:i386 1.8.1-1ubuntu0.2 X C Binding, dri2 extension rc libxcb-glx0:i386 1.8.1-1ubuntu0.2 X C Binding, glx extension rc libxcb-render0:i386 1.8.1-1ubuntu0.2 X C Binding, render extension rc libxcb-shm0:i386 1.8.1-1ubuntu0.2 X C Binding, shm extension rc libxcb1:i386 1.8.1-1ubuntu0.2 X C Binding rc libxcomposite1:i386 
1:0.4.3-2build1 X11 Composite extension library rc libxcursor1:i386 1:1.1.12-1ubuntu0.1 X cursor management library rc libxdamage1:i386 1:1.1.3-2build1 X11 damaged region extension library rc libxdmcp6:i386 1:1.1.0-4 X11 Display Manager Control Protocol library rc libxext6:i386 2:1.3.0-3ubuntu0.1 X11 miscellaneous extension library rc libxfixes3:i386 1:5.0-4ubuntu4.1 X11 miscellaneous 'fixes' extension library rc libxft2:i386 2.2.0-3ubuntu2 FreeType-based font drawing library for X rc libxi6:i386 2:1.6.0-0ubuntu2.1 X11 Input extension library rc libxinerama1:i386 2:1.1.1-3ubuntu0.1 X11 Xinerama extension library rc libxml2:i386 2.7.8.dfsg-5.1ubuntu4.6 GNOME XML library rc libxmu6:i386 2:1.1.0-3 X11 miscellaneous utility library rc libxp6:i386 1:1.0.1-2ubuntu0.12.04.1 X Printing Extension (Xprint) client library rc libxpm4:i386 1:3.5.9-4 X11 pixmap library rc libxrandr2:i386 2:1.3.2-2ubuntu0.2 X11 RandR extension library rc libxrender1:i386 1:0.9.6-2ubuntu0.1 X Rendering Extension client library rc libxslt1.1:i386 1.1.26-8ubuntu1.3 XSLT 1.0 processing library - runtime library rc libxss1:i386 1:1.2.1-2 X11 Screen Saver extension library rc libxt6:i386 1:1.1.1-2ubuntu0.1 X11 toolkit intrinsics library rc libxtst6:i386 2:1.2.0-4ubuntu0.1 X11 Testing -- Record extension library rc libxv1:i386 2:1.0.6-2ubuntu0.1 X11 Video extension library rc libxxf86vm1:i386 1:1.1.1-2ubuntu0.1 X11 XFree86 video mode extension library rc odbcinst1debian2:i386 2.2.14p2-5ubuntu3 Support library for accessing odbc ini files rc skype-bin:i386 4.2.0.11-0ubuntu0.12.04.2 client for Skype VOIP and instant messaging service - binary files rc sni-qt:i386 0.2.5-0ubuntu3 indicator support for Qt rc wine-compholio:i386 1.7.4~ubuntu12.04.1 The Compholio Edition is a special build of the popular Wine software rc xaw3dg:i386 1.5+E-18.1ubuntu1 Xaw3d widget set
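    For context, the usual way to clear packages stuck in this "rc" state (removed, but configuration files remaining) is to purge them; a minimal sketch of that, assuming Python 3.7+ and dpkg, is below. Run it with care, since --purge deletes the leftover configuration files.

    import subprocess

    def residual_packages():
        # `dpkg -l` marks removed-but-not-purged packages with the "rc" status.
        out = subprocess.run(["dpkg", "-l"], capture_output=True, text=True, check=True).stdout
        return [line.split()[1] for line in out.splitlines() if line.startswith("rc")]

    for pkg in residual_packages():
        subprocess.run(["sudo", "dpkg", "--purge", pkg], check=True)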

    Read the article

  • SharePoint 2010 Hosting :: How to Customize SharePoint 2010 Global Navigation

    - by mbridge
    Requirements - SharePoint Foundation or SharePoint Server 2010 site - SharePoint Designer 2010 Steps 1. The first step in my process was to download from codeplex a starter masterpage http://startermasterpages.codeplex.com/ . 2. Once you downloaded the starter master page, open up your SharePoint site in SharePoint Designer 2010 and on the left in the “Site Objects “ area click on the folder “All Files” and drill down to catalogs >> masterpages . Once you are in the Masterpage folder copy and paste the _starter.master into this folder. 3. The first step in the customization process is to create your custom style sheet. To create your custom style sheet, click on the “all Files” folder and click on “Style Library.” Right click in the style library section and choose Style sheet. Once the style sheet is created, rename it style.css. Now open the style sheet you created in SharePoint Designer. 4. In this next step you will copy and paste the SharePoint core styles for the global navigation into your custom style sheet. Copy and paste the css below into the style sheet and save file .s4-tn{ padding:0px; margin:0px; } .s4-tn ul.static{ white-space:nowrap; } .s4-tn li.static > .menu-item{ /* [ReplaceColor(themeColor:"Dark2")] */ color:#3b4f65; white-space:nowrap; border:1px solid transparent; padding:4px 10px; display:inline-block; height:15px; vertical-align:middle; } .s4-tn ul.dynamic{ /* [ReplaceColor(themeColor:"Light2")] */ background-color:white; /* [ReplaceColor(themeColor:"Dark2-Lighter")] */ border:1px solid #D9D9D9; } .s4-tn li.dynamic > .menu-item{ display:block; padding:3px 10px; white-space:nowrap; font-weight:normal; } .s4-tn li.dynamic > a:hover{ font-weight:normal; /* [ReplaceColor(themeColor:"Light2-Lighter")] */ background-color:#D9D9D9; } .s4-tn li.static > a:hover { /* [ReplaceColor(themeColor:"Accent1")] */ color:#44aff6; text-decoration:underline; } 5. Once you created the style sheet, go back to the masterpage folder and open the _starter.master file and in the Customization category click edit file. 6. Next, when the edit file opens make sure you view it in split view. Now you are going to search for the reference to our custom masterpage in the code. Make sure you are scrolled to the top in the code section and press “ctrl f” on the key board. This will pop up the find and replace tool. In the” find what field”, copy and paste and then click find next. 7. Now, in the code replace You have now referenced your custom style sheet in your masterpage. 8. The next step is to locate your Global Navigation control, make sure you are scrolled to the top in the code section and press “ctrl f” on the key board. This will pop up the find and replace tool. In the” find what field”, copy and paste ID="TopNavigationMenuV4” and then click find next. Once you find ID="TopNavigationMenuV4” , you should see the following block of code which is the global navigation control: ID="TopNavigationMenuV4" Runat="server" EnableViewState="false" DataSourceID="topSiteMap" AccessKey="" UseSimpleRendering="true" UseSeparateCss="false" Orientation="Horizontal" StaticDisplayLevels="1" MaximumDynamicDisplayLevels="1" SkipLinkText="" CssClass="s4-tn" 9. In the global navigation code above you should see CssClass="s4-tn" . As an additional step you can replace "s4-tn" your own custom name like CssClass="MyNav" . 
    If you change the name of the CSS class, make sure you update your custom style sheet with the new name, for example: .MyNav{ padding:0px; margin:0px; } .MyNav ul.static{ white-space:nowrap; } 10. At this point you are ready to brand your global navigation. The next step is to modify your style.css with your customizations to the default SharePoint styles. Have fun styling and make sure you save your work often. Hope it helps!!

    Read the article

  • How to Change and Manually Start and Stop Automatic Maintenance in Windows 8

    - by Lori Kaufman
    Windows 8 has a new feature that allows you to automatically run scheduled daily maintenance on your computer. These maintenance tasks run in the background and include security updating and scanning, Windows software updates, disk defragmentation and system diagnostics, among other tasks. We’ve previously shown you how to automate maintenance in Windows 7, Vista, and XP. Windows 8 maintenance is automatic by default, and its performance and energy efficiency have been improved over Windows 7. The program for Windows 8 automatic maintenance is called MSchedExe.exe and it is located in the C:\Windows\System32 directory. We will show you how you can change the automatic maintenance settings in Windows 8 and how you can start and stop the maintenance manually. NOTE: It seems that you cannot turn off the automatic maintenance in Windows 8. You can only change the settings and start and stop it manually.

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. On my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words. I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, to SQL Server and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest stuff at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, it is better to run. Remember that the Core Servers are the future for SQL Server, and consequently so is the shell. You don't want to be out of this, right? Let's see some PowerShell magic now. To start our tour, first we need to download these two functions from PowerShell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1. To have some data to play with, I created 10 CSV files with the same content. I just put the SQL Server error log into a CSV file and created 10 copies of it:

    # Just create a CSV with data to import, using the SQL Server error log
    [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
    $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:Computername
    $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
    for ($Count = 1; $Count -le 10; $Count++) {
        Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
    }

    Now in my path c:\sqlauthority, I have 10 CSV files. Now it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository and the table is CSV_SQLAuthority.

    CREATE TABLE [dbo].[CSV_SQLAuthority](
        [LogDate] [datetime] NULL,
        [Processinfo] [varchar](20) NULL,
        [Text] [varchar](MAX) NULL
    )

    Let's play a little bit. I want to import all CSV files from the path into the table synchronously:

    # Importing synchronously
    $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
    $DataTable = Out-DataTable -InputObject $DataImport
    Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable

    Very cool, right? Let's do it asynchronously and in the background using PowerShell jobs:

    # If you want to do it all asynchronously
    Start-Job -Name 'ImportingAsynchronously' `
        -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
        -ScriptBlock {
            $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
            $DataTable = Out-DataTable -InputObject $DataImport
            Write-DataTable -ServerInstance "R2D2" `
                            -Database "SQLServerRepository" `
                            -TableName "CSV_SQLAuthority" `
                            -Data $DataTable
        }

    Oh, but what if I have CSV files that are large in size and I want to import each one asynchronously? 
    In this case, this is what should be done:

    Get-ChildItem "c:\SQLAuthority\*.csv" | % {
        Start-Job -Name "$($_)" `
            -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
            -ScriptBlock {
                $DataImport = Import-Csv -Path $args[0]
                $DataTable = Out-DataTable -InputObject $DataImport
                Write-DataTable -ServerInstance "R2D2" `
                                -Database "SQLServerRepository" `
                                -TableName "CSV_SQLAuthority" `
                                -Data $DataTable
            } -ArgumentList $_.fullname
    }

    How cool is that? Let's do the fun stuff now. Let's schedule it as a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step. Otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is in c:\temp\automation. Just add these lines at the end:

    Get-ChildItem "c:\SQLAuthority\*.csv" | % {
        Start-Job -Name "$($_)" `
            -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
            -ScriptBlock {
                $DataImport = Import-Csv -Path $args[0]
                $DataTable = Out-DataTable -InputObject $DataImport
                Write-DataTable -ServerInstance "R2D2" `
                                -Database "SQLServerRepository" `
                                -TableName "CSV_SQLAuthority" `
                                -Data $DataTable
            } -ArgumentList $_.fullname
    }
    Get-Job | Wait-Job | Out-Null
    Remove-Job -State Completed

    Why? See my post Dooh PowerShell Trick – Running Scripts That Have Posh Jobs on a SQL Agent Job. Remember, this trick is for ALL scripts that will use PowerShell jobs under any kind of scheduling tool (SQL Server Agent, Windows Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV and choose CmdExec. Then you just need to schedule or run it. I did a short video (with matching good background music) and you can see it at: That's it guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information at: LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server With PowerShell – English. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Pure virtual or abstract, what's in a name?

    - by Steven Jeuris
    While discussing a question about virtual functions on Stack Overflow, I wondered whether there was any official naming for pure (abstract) and non-pure virtual functions. I always relied on Wikipedia for my information, which states that pure and non-pure virtual functions are the general terms. Unfortunately, the article doesn't back that up with an origin or references. To quote Jon Skeet's answer to my reply that pure and non-pure are the general terms used: @Steven: Hmm... possibly, but I've only ever seen it in the context of C++ before. I suspect anyone talking about them is likely to have a C++ background :) Did the terms originate from C++, or were they first defined or implemented in an earlier language, and are they the 'official' scientific terms?
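    Purely as an illustration of the terminology outside C++, here is a minimal Python sketch (using the standard abc module; the class names are made up): what C++ calls a "pure virtual" function is declared as an "abstract method" here, while a non-pure virtual function corresponds to an ordinary method that subclasses may override.

    from abc import ABC, abstractmethod

    class Shape(ABC):
        @abstractmethod
        def area(self) -> float: ...       # "pure": subclasses must implement this

        def describe(self) -> str:         # "non-pure": default behaviour, overridable
            return f"shape with area {self.area()}"

    class Square(Shape):
        def __init__(self, side: float):
            self.side = side

        def area(self) -> float:
            return self.side * self.side

    print(Square(2.0).describe())           # shape with area 4.0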

    Read the article

  • Does "diff" exist for images?

    - by moose
    You can compare two text files very easily with diff, and even better with meld. If you use diff on images, you get something like this: $ diff zivi-besch.tif zivildienst.tif Binary files zivi-besch.tif and zivildienst.tif differ Here is an example: Original from http://commons.wikimedia.org/wiki/File:Tux.svg Edited: I've added a white background to both images and applied GIMP's "Difference" filter to get this: It is a very simple method for how a diff could work, but I can imagine much better (and more complicated) ones. Do you know a program which works for images like meld does for text? (If a program existed that could give a percentage (0% the same image - 100% the same image) I would also be interested in it, but I am looking for one that gives me visual hints of where the differences are.)
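    For a rough idea of what such a tool does under the hood, here is a minimal sketch (assuming the Pillow library; the file names are placeholders and both images must have the same dimensions) that produces a visual difference image plus a crude changed-pixel percentage:

    from PIL import Image, ImageChops

    a = Image.open("zivi-besch.tif").convert("RGB")
    b = Image.open("zivildienst.tif").convert("RGB")

    diff = ImageChops.difference(a, b)   # black wherever the pixels are identical
    diff.save("diff.png")                # visual hint of where the differences are

    # Crude similarity score: fraction of RGB channel values that differ at all.
    hist = diff.histogram()              # 3 x 256 buckets for an RGB image
    total = sum(hist)
    unchanged = hist[0] + hist[256] + hist[512]
    print("changed: %.2f%%" % (100.0 * (total - unchanged) / total))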

    Read the article

  • Silverlight Cream for February 26, 2011 -- #1052

    - by Dave Campbell
    In this Issue: Mark Monster, Gill Cleeren, Pencho Popadiyn, Kevin Dockx, Joost van Schaik, Jesse Liberty, John Papa, Jeremy Likness, Arik Poznanski(-2-), Page Brooks, Deborah Kurata, Mike Snow, Alfred Astort, Samuel Jack, XAMLNinja, and Shawn Wildermuth. Above the Fold: Silverlight: "Asynchronous Callbacks with Rx" Jesse Liberty WP7: "Phoney Windows Phone 7 Project Now Available!" Shawn Wildermuth MVVM: "Validating our ViewModel" Mark Monster Shoutouts: Shawn Wildermuth has a video up of his FadingMessage class to show it off: Introducing Phoney's FadingMessage Class From SilverlightCream.com: Validating our ViewModel Mark Monster discusses Validation in his latest post... using INotifyDataErrorInfo and his own implementation of a ViewModel base that supports it and INPC. Getting ready for Microsoft Silverlight Exam 70-506 (Part 7) Gill Cleeren hits part 7 of his series at SilverlightShow on a great walk through Silverlight and getting ready for the exam. This is the final part and concentrates on deploying apps. Windows Phone 7–Creating Custom Keyboard Pencho Popadiyn has a post at SilverlightShow discussing problems with WP7 keyboards in his native Bulgaria, and his solution to the problem... create his own. 360 Degrees Feedback by Kevin Dockx Kevin Dockx produced a white paper for his company about an employee review solution they did in Silverlight. The white paper is available, and SilverlightShow interviewd Kevin to answer questions about the app. Extended Windows Phone 7 page for handling rotation, focused element updates and back key press Looks like Joost van Schaik has a few posts I've missed... and I'm not going to get to them all today! ... this one is about the base class he uses for WP7 apps... a bunch of utilities he uses... definitely worth a look (and a take). Asynchronous Callbacks with Rx Jesse Liberty has his 8th post in the Rx series up and this one's on Asynchronous Callbacks... if you haven't seen this before, you should definitely look into it... cool stuff, Jesse! Silverlight TV 63: Exploring National Instruments' App Using Data and Business Features John Papa has Silverlight TV number 63 up and is talking to Steve Lasker about National Instruments and their Lab View product. Great demo and discussion. Jounce Part 11: Debugging MEF Jeremy Likness's latest (number 11) in his series on his MVVM framework Jounce is out, and he's discussing how to debug MEF, which Jounce handles nicely through the logging he provides... and you can use it externally to Jounce. Get Twitter Trends on Windows Phone 7 Arik Poznanski has a couple Twitter for WP7 posts up... first is one for pulling Twitter trends from whatthetrend.com... plus the code to do it. Searching Twitter on Windows Phone 7 In his next post, Arik Poznanski shows how to search twitter from your WP7 ... again with code. Tiled Background Control in Silverlight Page Brooks shows how to get a tiled background control in Silverlight ... did you know there was one in the JetPack them? Silverlight Charting: Displaying Data Above the Column Deborah Kurata continues her charting posts with this one displaying the column value above the column. I like this... it has a clean look and all the data is available at a glance. Silverlight: Tasks on the Win7 Mobile Phone Mike Snow has a list of the WP7 tasks available and an example of using them... looks like a pretty good reference! 10 of 10 - Aesthetics and alignment matter Alfred Astort discusses aesthetics and WP7 dev... 
    looks like it's the same as any app development, but if you're not doing it, you should be. Simon Squared – We have Multi-player: Days 4, 5 and (ahem!) 6 Samuel Jack details the completion of his multi-player game for WP7 utilizing Azure, in the same hour-by-hour detail as the rest... plus a video of the final product! Who ate all the pies!! XAMLNinja has a very good discussion/link set of charting posts, all leading up to a portrait-only version of charting for WP7 with labels that looks great. Phoney Windows Phone 7 Project Now Available! Shawn Wildermuth has a collection of classes he always uses with WP7 dev, and he's sharing them with all of us as a "Phoney" Tools project on CodePlex... and now has a NuGet project also. Stay in the 'Light!

    Read the article

  • How to Fix the “Firefox Is Already Running” Error

    - by Chris Hoffman
    The “Firefox is already running, but is not responding” error has haunted Firefox users for years. You don’t have to restart your computer when you see this error – you can usually fix it with a quick trip to the Task Manager. This error occurs when Firefox is closed but is still running in the background. Firefox is either in the process of closing or is frozen and hasn’t quit properly. In rare situations, there may be a problem with your profile.
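    The Task Manager fix boils down to ending the leftover firefox.exe process; a minimal sketch of the same idea (assuming the third-party psutil package) looks like this:

    import psutil

    for proc in psutil.process_iter(attrs=["name"]):
        if (proc.info["name"] or "").lower() == "firefox.exe":
            proc.kill()   # same effect as "End Task" in the Task Manager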

    Read the article

  • Why the version of Chrome does not matter as much as Firefox's, and Firefox's not as much as IE's

    - by anirudha
    Everything not perfect. in software the software make and growth by user feedback like what user expected from the software and want in next version of software. In a chrome Event i hear about the Chromium. you can find some interesting things here Video 1 Video 2 come to the point. when i hear about some good website of india. many of them talking a little thing in common that. We are #1 because we not thing that we make a great application and deploy them and think that we finished own works preharps in a small days we make a small website deploy them and improve them always latter. what the point they all talking about:- the conclusion is that software make by user feedback. they tell that he not spent much time and wait for a long time when their project was finish and they launch their website. preharps they tell that they make a small website in a small time and launched them. make a research on them later and make them better later and website growth as they thing. if they are late then someone else can win even their project was much good them other. not more but a little story:-  before few month i hear about a great website who sold many of books daily i myself purchase some from them to track how they work and how they provided service. i not found any problem with their service. the service they provided is good but when i see their website i found that the mockup code was very badly designed. i am not know the matter how they growth because they used very other stuff who make their website slow. when i research something more i found that their is very hard to implement the website look like them. on their blog they writing about a mail they have. the clone of them make by many other but not goes good as well as they make. after few month later website is looking great. many thing they improved and make them better as  other thing. a another conclusion that same as another story that user feedback. well now come to the point. we talking about Chrome,firefox and IE. what thing is goes common that they all are browser. but something goes different that Chrome is a one of the best browser. from a month many of issue submitted to chrome that user found when they use them. so what is make this different the different is that when feedback goes to someone they take a action and think to make them better so improvement of chrome based on feedback user put using many things. secondly because it's goes open-source many of developer contribute them and make them real browser not real [tape] browser as like IE [a good example]. as you see in video they talking about silent update in chrome and futurecoming chromium. the thing they implement is too good. because by this thing user not worry about a new version. i myself never find a problem that you need to user new version as we found same problem in other application. Well think are great in chrome and now talking about Firefox. Firefox is a best option for development as well as chrome best for surfing the internet. in firefox many thing are great like plugin [ex: Firebug] , addons personas themes and many other thing and customization in firefox make them really a browser not like a joker [IE a good example]. well now come to IE. are IE really great no. someone from Microsoft can say that ha ha hi hi because they can't see the power of open-source. they thing that they make a software and they never need user feedback because they produced windows who really great for user because they used them. 
An example: a few months ago Microsoft shipped Windows Live 2011. The older versions of Live Writer were fine, but in 2011 the user interface was changed completely, so users had to spend time relearning a tool whose purpose and features were exactly the same. I was not the only one who gave up on Live 2011 after seeing the redesigned interface. Microsoft's answer is "we built it in WPF" — but what does WPF matter to the user? Users have no problem using a WPF application; the technology is not the point. Microsoft seems more interested in rebuilding every product in WPF than in building a better user interface.
The IE9 RC was released on 10 February, but how much user feedback was collected, and how much was acted on? With Firefox and Chrome, feedback matters because it comes from the public — from the people who use the software, not only from the people who build it — and that feedback makes the software better and less of a hassle in the user's hands, not only in the developer's. IE9 does not feel like that kind of product. (See also this post I read the other day on the Google blog: http://googleblog.blogspot.com/2011/02/microsofts-bing-uses-google-search.html)
There is not much in IE9 for developers either, whatever MSDN and Microsoft's many sites say about it. You will never see Firebug mentioned in their books — understandable, since competitors rarely promote each other's tools. Yahoo does the same: one of their newsletters told me to "use IE or Firefox to experience a better Web" and forgot to mention Chrome. Corporate rules come first; I understand.
So what is IE, and why should we use it? The things we expect and never find: customization, which Chrome and Firefox have through extensions, themes and personas, and something like Firefox's about:config; real developer tooling — the IE9 developer tools added three new panels (one for the network, and so on), while Firefox and Chrome gain a new plugin or a major upgrade of an old one practically every day, and the only Firebug for IE was built by the Firebug team themselves, not by Microsoft; and timely updates — Firefox (2004) and Chrome (2008) arrived long after IE, yet they ship improvements as soon as they are ready (Chrome even updates itself silently), while IE waits years between versions (6, then 7, then 8, now 9) and seems mainly interested in pushing users onto a new version of Windows — why not simply wait and force users to buy Windows 8 instead of 7? When you report a bug in Chrome or Firefox you can see it being worked on and fixed in the next release; report the same thing against IE and you find the feature was never implemented at all. For developers, IE9 is the next IE6.
Conclusion: after reading this whole post you can see that I hate almost everything about IE. Why write such a long post about one small, pitiful piece of software? Because as a developer I have given time to making Chrome and Firefox better, and I have seen that contributions and feedback actually matter there. And it is simply not true that nothing good can be built outside Microsoft — what a developer builds matters, whether it uses Microsoft technologies or not.
Answer: if you still want the answer in a couple of lines, it is this — IE updates as late as it possibly can, and those updates are used first to push users toward a new version of Windows, while Firefox and Chrome ship upgrades as soon as they are ready. IE is the late, forced-upgrade browser.
Thanks for giving me your time and reading my blah about — I mean, about IE9. Thanks again, Anirudha

    Read the article

  • Switch interface implementation using configuration

    - by Marcos
    We want to allow the same core service to be either fully implemented or, as another option, to act as a proxy toward a client legacy system (via a WSDL, for example). That way we have both implementations (proxy and full) and we switch between them in the application's configuration. In a nutshell, the desired features are: two different implementations (proxy, full) instead of one implementation with a switch inside; switching implementations through configuration (dependency injection? reflection?); nice-to-have: the package delivered to the client doesn't have to change depending on the choice between proxy and full; nice-to-have: the client can develop their own custom implementation of the core interface and configure the application to use that one. With this background, the question is: what alternatives do we have to choose one implementation or another of an interface just by changing configuration? Thanks
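One common way to get this in .NET, sketched below, is to keep the implementation's assembly-qualified type name in configuration and create the instance through reflection (a DI container can do the same job). This is an illustrative sketch only — ICoreService, FullCoreService, LegacyProxyCoreService and the "CoreServiceType" setting are invented names, not something from the question:

using System;
using System.Configuration;

// Hypothetical core interface with the two implementations shipped in the package.
public interface ICoreService { string Process(string request); }

public class FullCoreService : ICoreService          // full, local implementation
{ public string Process(string request) { return "full:" + request; } }

public class LegacyProxyCoreService : ICoreService   // would call the legacy system via its WSDL
{ public string Process(string request) { return "proxy:" + request; } }

public static class CoreServiceFactory
{
    // app.config / web.config:
    //   <appSettings>
    //     <add key="CoreServiceType" value="MyApp.FullCoreService, MyApp" />
    //   </appSettings>
    public static ICoreService Create()
    {
        string typeName = ConfigurationManager.AppSettings["CoreServiceType"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (ICoreService)Activator.CreateInstance(type);
    }
}

Because the type is resolved by name at runtime, the same package can ship both implementations, and a client could even drop in an assembly with their own ICoreService implementation and point the configuration at it — which covers both nice-to-haves.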

    Read the article

  • Bing Desktop Automatically Downloads Bing Wallpapers to Your Computer

    - by Jason Fitzpatrick
    Windows 7: Bing Desktop is a new and lightweight offering from Microsoft that automatically swaps your desktop background every day and offers quick access to the Bing search engine. In addition to downloading the Bing wallpaper, Bing Desktop also includes a small search box that allows you to search Bing from your desktop–although most users will likely grab the app simply to get the daily wallpaper update. Hit up the link below to download a copy. Bing Desktop is free, Windows 7 only. Bing Desktop [via Quick Online Tips]

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspx
Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home. It is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change with the app. At this point, I will find some good way to get community feedback. I want to break all this down screen by screen. We'll start with the screen I got right. The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here. When I wrote v1 last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations. This created another problem with the achievements. Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache these results in my database and serve them upon request. This gives me great control over the content. I just have to remember to flush past tweets after a period, to limit database growth. The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a Foursquare model: you are checked out after 3 hours or when you check in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later. Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this, I came up with the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea, if I had come up with it about two months earlier... When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app. We provided data and centralized it. With version 2, the app became a much more interactive experience. The API was not ready for this change in such a short period of time. Most of this became apparent when I started implementing the achievements. The achievements based on count and specific person were fairly easy. The problem came with tying them to locations and events. This took some true SQL kung fu. This also showed me the rookie mistake of putting CST, not UTC, in the database. Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements. I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements. This, unfortunately, required a decent data connection.
Some achievements required GPS levels of location accuracy in areas where I only had network triangulation. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to reach the higher levels. Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.
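As an aside, the two lessons called out above — store timestamps in UTC, and expire check-ins Foursquare-style instead of running a background task — boil down to only a few lines. This is a hypothetical sketch, not the actual Krewe code; the CheckIn type and its members are invented for illustration:

using System;

// Hypothetical check-in record; the timestamp is stored in UTC to avoid
// the CST-in-the-database problem described above.
public class CheckIn
{
    public string UserId { get; set; }
    public string LocationId { get; set; }
    public DateTime CheckedInAtUtc { get; set; }
}

public static class CheckInRules
{
    private static readonly TimeSpan MaxDuration = TimeSpan.FromHours(3);

    // Foursquare-style rule: a check-in is active only while it is less than
    // 3 hours old; a newer check-in at another location implicitly replaces it,
    // so no background task is needed to check people out.
    public static bool IsActive(CheckIn checkIn, DateTime nowUtc)
    {
        return nowUtc - checkIn.CheckedInAtUtc < MaxDuration;
    }
}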

    Read the article

  • Why does Clojure neglect the uniform access principle?

    - by Alexey
    My background is Ruby, C#, JavaScript and Java, and now I'm learning Clojure. What makes me feel uncomfortable about the latter is that idiomatic Clojure seems to neglect the Uniform Access Principle (wiki, c2), and thus to a certain degree encapsulation as well, by suggesting the use of maps instead of some sort of "structures" or "classes". It feels like a step back. So, a couple of questions, if anyone is informed: Which other design decisions or concerns did it conflict with, and why was it considered less important? Did you have the same concern, and how did it turn out when you switched from a language supporting UAP by default (Ruby, Eiffel, Python, C#) to Clojure?
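For readers unfamiliar with the term, here is a tiny illustration of what the Uniform Access Principle buys you — written in C# rather than Clojure, with an invented Person class, purely to make the contrast concrete:

using System;

public class Person
{
    // Version 1 stored the age directly:
    //   public int Age { get; set; }
    //
    // Version 2 computes it from the birth date. Callers still write
    // person.Age, so the change of representation does not break them —
    // that is the Uniform Access Principle at work.
    public DateTime BirthDate { get; set; }
    public int Age
    {
        get { return (int)((DateTime.UtcNow - BirthDate).TotalDays / 365.25); }
    }
}

// With a plain map (the idiomatic Clojure approach), the caller reaches into
// the data directly, e.g. (:age person), so if :age later becomes a value
// derived from :birth-date, every call site has to know about it — unless you
// wrap the map in a function, which effectively reintroduces an accessor.

Whether that trade-off is a real loss is exactly what the question is asking.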

    Read the article

  • Any Good Cocos2d Pause Menu Library

    - by Mahbubur R Aaman
    Background : From http://code.google.com/p/cocos2d-iphone/issues/detail?id=173 Scenes/Nodes doesn't support the CocosNodeOpacity protocol. From http://playsnackgames.com/blog/2011/09/cocos2d-tutorial-creating-a-reusable-pause-layer/ Cocos2d offers a simple method to pause and resume itself, but these methods stop the CCDirector (the class that manages most aspects of a Cocos2d’s app lifetime) from running actions and lower the fps to 5 to conserve battery life. Related issues http://www.cocos2d-iphone.org/forum/topic/4368 http://www.cocos2d-iphone.org/forum/topic/151 http://stackoverflow.com/questions/5852354/cocos2d-engine-pause-resume http://stackoverflow.com/questions/11878450/how-to-pause-a-layer-in-cocos2d-2-0 Question : Is there any Good Cocos2d Pause Menu Library solving these tricky issues? This will save many hours of Game Developer's life.

    Read the article

  • Applicability of the Joel Test to web development companies

    - by dreftymac
    QUESTION: How can you re-write the questions of the Joel Test to apply to web developers? 1. Do you use source control? (source control for all aspects of your app, including configuration, database and user-based settings?) 2. Can you make a build in one step? (can you deploy a site from staging to prod in one step?) ... 10. Do you have testers? (how do you test AJAX and CSS?) BACKGROUND: This is for people who work in a shop that does some web development, uses some off-the-shelf tools like Drupal and WordPress, but does custom development on top of that. RELATED LINKS: http://www.joelonsoftware.com/articles/fog0000000043.html What do you think about the Joel Test?

    Read the article

  • What discipline does Computer Science belong to?

    - by Macneil
    Is Computer Science science, applied mathematics, engineering, art, philosophy? "Other"? To provide background, here is Steven Wartik's blog posting for Scientific American titled "I'm not a real scientist, and that's okay." The article covers some good topics for this question, but it leaves open more than it answers. If you can think of a discipline it belongs to, how would computer science fit into that discipline's definition? Should the discipline for Computer Science be based on what programmers do, or on what academics do? What kind of answers do you get from people who seem to have thought deeply about this? What reasons do they give?

    Read the article

  • Help with algorithmic complexity in custom merge sort implementation

    - by bitcycle
    I've got an implementation of the merge sort in C++ using a custom doubly linked list. I'm coming up with a big O complexity of n^2, based on the merge_sort() slice operation. But, from what I've read, this algorithm should be n*log(n), where the log has a base of two. Can someone help me determine if I'm just determining the complexity incorrectly, or if the implementation can/should be improved to achieve n*log(n) complexity? If you would like some background on my goals for this project, see my blog. I've added comments in the code outlining what I understand the complexity of each method to be. Clarification - I'm focusing on the C++ implementation with this question. I've got another implementation written in Python, but that was something that was added in addition to my original goal(s).
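For reference, the usual way to keep a linked-list merge sort at O(n log n) is to split the list with a single slow/fast-pointer pass (O(n) per recursion level) and merge with a single linear pass; if the slice operation instead walks from the head to locate each element, or uses index-based access on the list, the extra traversals push the total toward O(n²). The sketch below is not the poster's C++ code — it is a minimal version written in C# with an invented Node type, just to show where the costs sit:

// Merge sort on a linked list that stays O(n log n): the split walks the
// list once with slow/fast pointers, and the merge visits each node once.
public class Node
{
    public int Value;
    public Node Next;
    public Node(int value) { Value = value; }
}

public static class LinkedListMergeSort
{
    public static Node Sort(Node head)
    {
        if (head == null || head.Next == null) return head;

        // Split: O(n) — one traversal, no per-element re-scanning.
        Node slow = head, fast = head.Next;
        while (fast != null && fast.Next != null)
        {
            slow = slow.Next;
            fast = fast.Next.Next;
        }
        Node right = slow.Next;
        slow.Next = null;              // cut the list into two halves

        return Merge(Sort(head), Sort(right));
    }

    // Merge: O(n) — each node is relinked exactly once.
    private static Node Merge(Node a, Node b)
    {
        Node dummy = new Node(0);
        Node tail = dummy;
        while (a != null && b != null)
        {
            if (a.Value <= b.Value) { tail.Next = a; a = a.Next; }
            else                    { tail.Next = b; b = b.Next; }
            tail = tail.Next;
        }
        tail.Next = a ?? b;
        return dummy.Next;
    }
}

The same structure carries over to a doubly linked list; the backward pointers just need to be fixed up during the merge.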

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
Ray Tracing
Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
Pin-Board Toys
Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
PolyRay
Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.
Windows Kinect
The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
Creating a Depth Field Animation
The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
Rendering a Test Frame
In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.
Windows Azure Worker Roles
The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below (256 small-instance hours at the rate implied earlier — $1.92 for 16 instances for an hour, or $0.12 per instance hour — comes to $30.72 whether those hours run on one instance or across 256 in parallel).

Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72

Using worker roles in Windows Azure, the cost of the 256 hour job is the same irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
Creating a Render Farm in Windows Azure
The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
- On-Premise
  - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
  - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
  - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
  - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
- Windows Azure
  - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles became active and started to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal: the slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in balancing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.
Completed Animation
I've uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc
Effective Use of Resources
According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.
Grid Computing Scenarios
Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
- Windows Azure can provide massive compute power, on demand, in a matter of minutes.
- The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
- Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
- No charges for inbound data transfer make the uploading of large data sets to the Windows Azure Storage services cost effective. (Transaction charges still apply.)
Tips for using Windows Azure for Grid Computing Scenarios
I found the render farm a fairly simple scenario to implement in Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
- Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
- Windows Azure accounts are typically limited to 20 cores.
If you need to use more than this, a call to support and a credit card check will be required.
- Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed, so schedule the workload to start just after the clock hour has started.
- Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
- If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
- Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
- Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • How Windows 8's Backup System Differs From Windows 7's

    - by Chris Hoffman
    Windows 8 contains a completely revamped backup system. Windows 8’s File History replaces Windows 7’s Windows Backup – if you use Windows Backup and update to Windows 8, you’ll find quite a few differences. Microsoft redesigned Windows’ backup features because less than 5% of PCs used Windows Backup. The new File History system is designed to be simple to set up and work automatically in the background. This post will focus on the differences between File History and the Windows Backup feature you may be familiar with from Windows 7 – check out our full walkthrough of File History for more information.

    Read the article

< Previous Page | 552 553 554 555 556 557 558 559 560 561 562 563  | Next Page >