Search Results

Search found 7229 results on 290 pages for 'solaris tips'.


  • How to know who accessed a file, or whether a file has an 'access' monitor, in Linux

    - by J L
    I'm a noob and have some questions about viewing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online (according to http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html), to 'watch/monitor' a file I have to set a watch with a command like:

        # auditctl -w /etc/passwd -p war -k password-file

    So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to 'watch' who accessed the new file? Also, is there a way to know whether a directory is being 'watched' through the audit subsystem or inotify? How/where can I check the log of a file?

    Edit: from further googling, I found this page: http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html, which says "The inotify API provides no information about the user or process that triggered the inotify event." So I guess this means that I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?
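    For reference, assuming auditd is running and a watch like the one above has been set, the access log can be queried with the standard audit userspace tools (the key name here matches the example):

        ausearch -k password-file    # events recorded for that watch key, including uid and pid
        ausearch -f /etc/passwd      # or search by the file's path
        aureport -f -i               # summary report of file events, with ids resolved to names
        auditctl -l                  # list the active watch rules, to see whether a path is watched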

    Read the article

  • Network Sniffing and Hubs

    - by Chris_K
    This will likely seem naive to the experts... but it has been on my mind lately. For years I've been using ntop and a cheap 4 port hub to sniff client networks to determine who's doing what -- and how much. Great way to see what's going on when they call and say "Geeze, the network seems really slow today." No need to bring in a managed switch (or access the existing one) and no need to configure spanning or mirroring. I just drop in the hub inline where I want to measure. Lately I noticed it is just about impossible to buy a real honest-to-goodness hub anymore. While looking for a new one, I had someone tell me that I should be sure to get a full-duplex hub or I'd only be seeing half the traffic when I monitor. Really? I've been using a crusty old Netgear DS104 all this time. No clue if it is half or FD. Have I really been understating my measurements? I'm just not bright enough about the physical layer to really know... Side note: Just ordered a Dualcomm Ethernet Switch TAP as a hub replacement. Seems like a nifty gadget. Any notes or tips about it would be welcome in the comments :-)

    Read the article

  • Java constructor using generic types

    - by user37903
    I'm having a hard time wrapping my head around Java generic types. Here's a simple piece of code that in my mind should work, but I'm obviously doing something wrong. Eclipse reports this error in BreweryList.java:

        The method initBreweryFromObject() is undefined for the type <T>

    The idea is to fill a Vector with instances of objects that are a subclass of the Brewery class, so the invocation would be something like:

        BreweryList breweryList = new BreweryList(BrewerySubClass.class, list);

    BreweryList.java:

        package com.beerme.test;

        import java.util.Vector;

        public class BreweryList<T extends Brewery> extends Vector<T> {
            public BreweryList(Class<T> c, Object[] j) {
                super();
                for (int i = 0; i < j.length; i++) {
                    T item = c.newInstance();
                    // initBreweryFromObject() is an instance method
                    // of Brewery, of which <T> is a subclass (right?)
                    c.initBreweryFromObject();
                    // "The method initBreweryFromObject() is undefined
                    // for the type <T>"
                }
            }
        }

    Brewery.java:

        package com.beerme.test;

        public class Brewery {
            public Brewery() {
                super();
            }

            protected void breweryMethod() {
            }
        }

    BrewerySubClass.java:

        package com.beerme.test;

        public class BrewerySubClass extends Brewery {
            public BrewerySubClass() {
                super();
            }

            public void androidMethod() {
            }
        }

    I'm sure this is a complete-generics-noob question, but I'm stuck. Thanks for any tips!
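    For what it's worth, the compile error above comes from invoking the method on the Class object (c) instead of on the constructed instance; a sketch of a corrected constructor, assuming initBreweryFromObject() is declared with at least protected visibility on Brewery (it is not shown in the classes above), would be:

        public BreweryList(Class<T> c, Object[] j) {
            super();
            for (int i = 0; i < j.length; i++) {
                try {
                    T item = c.newInstance();      // reflective no-arg construction
                    item.initBreweryFromObject();  // resolved via the T extends Brewery bound
                    add(item);
                } catch (InstantiationException e) {
                    throw new IllegalArgumentException(e);
                } catch (IllegalAccessException e) {
                    throw new IllegalArgumentException(e);
                }
            }
        }

    newInstance() declares the two checked exceptions, so they must be caught or rethrown as shown.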

    Read the article

  • Why does my FTP(E)S server fail about half of the time?

    - by user1092608
    I have this discussion at work regarding our FTP server running via vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward solution for our server to have secure file transmission. Afterwards, our FTP server seemed to become a source of issues for our end users: half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (in the field, at random times, at random places) and indeed, behind some configurations (no idea how they are configured, because of the 'field' testing), I sometimes receive errors such as:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny this. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and the FTP service), I would need some advice on:

    1) What could potentially be the general cause?
    2) Do you have some general tips?
    3) Would you mind having a look at my configuration file?

        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks
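    One avenue worth checking for intermittent "failed to retrieve directory listing" errors with FTPS (a sketch of a hypothesis, not a diagnosis from the post): because the control channel is encrypted, NAT devices and firewall FTP helpers along the path cannot read the PASV reply and open data ports on the fly, so the configured passive range has to be reachable end to end. On the server side that would look something like:

        # iptables sketch; the range matches pasv_min_port/pasv_max_port above
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A INPUT -p tcp --dport 30000:30030 -j ACCEPT

    Client-side firewalls and NAT boxes in "the field" remain outside your control, which would fit the observation that the same server works from home but fails behind some infrastructures.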

    Read the article

  • 100% CPU when doing 4 or more concurrent requests with Magento

    - by pancake
    Currently I'm having trouble with a server running Magento; it's unbelievably slow. It's a VPS with a few Magento installations on it used for development, so I'm the only one using them. When I do 4 requests, each 2 seconds after the other, I'm finished in 10 seconds. Slow, but still within the limits of my patience. When I do 4 "concurrent" requests, however (opening 4 tabs in a row, very quickly), all four cores go to 100% and stay there for like a minute. How is this possible? I know that there are a lot of possibilities here, so any tips on how to make an Apache/PHP server go faster are also welcome. It used to go a lot faster before, and I've also tried APC, but it kept causing problems (PHP errors, something with memory pools) so I've disabled it. By the way, the Magento cache is off and compiling is also off. I know this makes Magento slower than usual, but I don't think a 60-second response time is normal for any Magento installation.

    Virtual hardware:
    - 4 cores and 4096MB RAM
    - Swap is never used (checked with htop)
    - 100GB disk space, of which 10% is in use

    Software:
    - Debian 6
    - DirectAdmin and Apache custombuild
    - PHP 5.2.17 (CLI)

    If you need more info, please tell me how to get it, because I probably don't know how. I do know how to use the command line in Linux and the usage of quite a few commands, but my experience with managing a server is limited.
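    To make the two cases repeatable while watching the cores, ApacheBench can issue the requests (the URL is a placeholder for one of the development shops):

        ab -n 4 -c 1 http://dev-shop.example/    # sequential: one request at a time
        ab -n 4 -c 4 http://dev-shop.example/    # concurrent: the case that pegs all four cores

    Running htop in a second terminal during the concurrent run shows whether the time is going to the apache/php processes or somewhere else entirely.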

    Read the article

  • 4GB Memory Upgrade for Acer Aspire 5102WLMi

    - by Richard Slater
    I have bought a 4GB memory upgrade (2x 2GB PC2-5300 SODIMM) for my Acer Aspire 5102WLMi (Aspire 5100 Series) laptop. I installed the two memory modules correctly; however, with 4GB installed the laptop refuses to POST. I have tried the following:

    - Tried both 2GB SODIMMs without the other (worked fine)
    - Tried the original 512MB SODIMMs (worked fine)
    - Tried with an original 512MB SODIMM and a new 2GB SODIMM (worked fine)
    - Tried swapping over the 2GB SODIMMs (didn't boot)
    - Left the computer for 10 minutes with both 2GB SODIMMs installed (didn't boot)
    - Checked that the latest BIOS is installed (no change)

    The Crucial website said that the laptop supports 4GB of RAM, as do several other sites found through Google, so up until now I was fairly confident this would work. A couple of questions that would be good to have answered:

    Question: Has anyone got an Acer Aspire 5100 Series running with 4GB RAM?
    Answer: Yes, I have now got one working with 3.75GB usable; the rest is utilized by the graphics card.

    Question: Any tips on getting this to work; is there a CMOS reset switch?
    Answer: Yes there is. If both SODIMMs are removed, two very small interlocking PCB tracks are revealed. If these are shorted together with a screwdriver, the BIOS will be reset.

    Thanks.

    Read the article

  • Windows XP blinking underscore after BIOS

    - by heyjoe
    So this is for an older PC I have to repair for a friend. The PC has an HDD of about 60-something GB and runs Windows XP. On roughly 60-70% of boots it hangs after the BIOS screen, showing only a blinking underscore; the rest of the time it boots fine, or the computer shuts down on the XP loading screen. Sometimes if you leave it alone while the underscore is blinking, it will boot after a while, like a few minutes; sometimes it won't boot at all, even if you give it more time, like one hour. When it boots successfully, the PC seems to work fine. I think it's a bad hard disk and I'm about to suggest buying a new one and switching it, but I don't have enough experience, and I would hate making him buy a new HDD without solving the problem. Does anyone have any tips? I know there are other topics about blinking underscores or cursors while XP is booting, but the issue of the PC shutting itself down, or only sometimes booting, really freaks me out. I can't format everything and reinstall until about 10 days from now, because the guy has some program for his business on this PC and I have to migrate it when the next computer arrives; he needs to use it until then. So please advise, thanks.
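    One way to test the bad-disk theory before buying anything, assuming you can boot a Linux live CD with smartmontools and the drive appears as /dev/sda (adjust the device name):

        smartctl -H /dev/sda    # the drive's overall SMART health verdict
        smartctl -a /dev/sda    # full attributes; growing Reallocated_Sector_Ct or
                                # Current_Pending_Sector counts point to a dying drive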

    Read the article

  • Website with login: everything works. Website without login: menu items don't redirect to content

    - by user3660755
    I wanted to put the website online. I saw it still had a login screen; it needs to be accessible for everyone. Unpublishing the module did not work. I checked the user access view and took the login URL out of the template manager. After this the login screen was gone from the page. But when I click on any of the website menu items, it doesn't redirect to the content. When I do have the login screen and put in the username and password, all the menus work just fine. I checked the URL; it is the same with or without login. How is this possible? I have been asking a lot of people, but no one seems to be able to give me a hint. I have been searching for the solution myself for more than a week; I just don't know anymore. My guess is that there must be a conflict in the redirection, but I am not skilled enough to recognize it, I am afraid. Any tips would be more than welcome. Thank you.

    Read the article

  • Partition Magic 8 made TrueCrypt partition invisible

    - by gmancoda
    Partition Magic 8 took a dump on my TrueCrypt partition... and I let it happen! And now I am left with cleaning up the mess. In short, my encrypted partition is now invisible. TestDisk analysis says of the disk containing the encrypted partition: "Space conflict between the following two partitions". From googling and searching on various sites, I have learned the following:

    - Hex editing is beyond me.
    - Partition recovery tools are useless.
    - I am not ready to drop the big bucks for professional help.
    - ... and that I should have kept an external backup of the volume header.

    Now, to get back the volume header, I am planning on recreating the exact same partitions on a new disk of the exact same model, then encrypting it with the exact same password/keyfiles, and then exporting its volume header to a file. Finally, I hope to be able to restore this volume header onto my damaged drive. Before I undertake this plan, I would like to know if anyone else out there has tried it and, if so, how successful they were. All other suggestions and tips are welcome!! Thanks.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective.

    The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate whether the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.

        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture { noise surface { ambient 0.2 diffuse 0.7 specular white, 0.5
            microfacet Reitz 10 position_fn position_cylindrical position_scale 1
            lookup_fn lookup_sawtooth octaves 1 turbulence 1
            color_map(
                [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood]
                [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood]
                [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood]
                [0.9, 1.0, light_wood, dark_wood]) } }
        define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

        viewpoint {
            from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
            resolution 640, 480 aspect 1.6 image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The "camera" is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct; this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes and six cylinders; the front box is modeled using a clear, slightly reflective solid with the same refractive index as glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board, I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.

    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used: the position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range, the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used, the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

        Number of Servers    Cost
        1                    $500
        16                   $8,000
        256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

        Number of Worker Roles    Render Time    Cost
        1                         256 hours      $30.72
        16                        16 hours       $30.72
        256                       1 hour         $30.72

    Using worker roles in Windows Azure provides the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    On-Premise
    - Windows Kinect: used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator: this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor: this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader: this application polls the image queue and downloads the rendered animation files once they are complete.

    Windows Azure
    - Azure Storage: queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. The service definition for the worker role, with the local storage configuration highlighted, is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="CloudRay" >
          <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
            <Imports>
            </Imports>
            <ConfigurationSettings>
              <Setting name="DataConnectionString" />
            </ConfigurationSettings>
            <LocalResources>
              <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
            </LocalResources>
          </WorkerRole>
        </ServiceDefinition>

    The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90's, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90's. Each worker role will use the following process to render the animation frames:

    1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
    2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
    3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
    4. The JPG file is uploaded from local storage to the images blob container.
    5. A message is placed on the images queue to indicate a new image is available for download.
    6. The job message is deleted from the job queue.
    7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

    The code for this is shown below.

        public override void Run()
        {
            // Set environment variables
            string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
            string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

            LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
            string localStorageRootPath = rayStorage.RootPath;

            JobQueue jobQueue = new JobQueue("renderjobs");
            JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
            CloudRayBlob imageBlob = new CloudRayBlob("images");
            RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

            Frames = 0;

            while (true)
            {
                // Get the render job from the queue
                CloudQueueMessage jobMsg = jobQueue.Get();

                if (jobMsg != null)
                {
                    // Get the file details
                    string sceneFile = jobMsg.AsString;
                    string tgaFile = sceneFile.Replace(".pi", ".tga");
                    string jpgFile = sceneFile.Replace(".pi", ".jpg");

                    string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
                    string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
                    string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

                    // Copy the scene file to local storage
                    sceneBlob.DownloadFile(sceneFilePath);

                    // Run the ray tracer.
                    string polyrayArguments =
                        string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
                    Process polyRayProcess = new Process();
                    polyRayProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);
                    polyRayProcess.StartInfo.Arguments = polyrayArguments;
                    polyRayProcess.Start();
                    polyRayProcess.WaitForExit();

                    // Convert the image
                    string dtaArguments =
                        string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
                    Process dtaProcess = new Process();
                    dtaProcess.StartInfo.FileName =
                        Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);
                    dtaProcess.StartInfo.Arguments = dtaArguments;
                    dtaProcess.Start();
                    dtaProcess.WaitForExit();

                    // Upload the image to blob storage
                    imageBlob.UploadFile(jpgFilePath);

                    // Add a download job.
                    downloadQueue.Add(jpgFile);

                    // Delete the render job message
                    jobQueue.Delete(jobMsg);

                    Frames++;
                }
                else
                {
                    Thread.Sleep(1000);
                }

                // Log the worker role activity.
                roleLifecycleDataSource.Alive
                    ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
            }
        }

    Monitoring Worker Role Instance Lifecycle

    In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

        public class RoleLifecycle : TableServiceEntity
        {
            public string ServerName { get; set; }
            public string Status { get; set; }
            public DateTime StartTime { get; set; }
            public DateTime EndTime { get; set; }
            public long SecondsRunning { get; set; }
            public DateTime LastActiveTime { get; set; }
            public int Frames { get; set; }
            public string Comment { get; set; }

            public RoleLifecycle()
            {
            }

            public RoleLifecycle(string roleName)
            {
                PartitionKey = roleName;
                RowKey = Utils.GetAscendingRowKey();
                Status = "Started";
                StartTime = DateTime.UtcNow;
                LastActiveTime = StartTime;
                EndTime = StartTime;
                SecondsRunning = 0;
                Frames = 0;
            }
        }

    A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine the effectiveness of the use of resources in the render farm.

    Rendering the Animation

    The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment and its performance determined. When I demo the application at conferences and user groups, I often start with 16 instances and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="16" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    About six minutes after deploying the application, the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames. Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

    Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="CloudRayWorkerRole">
            <Instances count="256" />
            <ConfigurationSettings>
              <Setting name="DataConnectionString"
                value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>

    Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds. We are now half way through the rendering, with 1,000 frames complete; this has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.

    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I've uploaded the completed animation to YouTube; a low resolution preview is shown below.

    Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles

    The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances is billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances: in this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal, it could get very expensive!

    Read the article

  • Regarding playing a media file in an Android media player application

    - by Mangesh
    Hi. I am new to Android development. I just started creating my own media player application by looking at the code samples given in the Android SDK. While trying to play a local media file (m.3gp), I am getting an IOException error: error(1, -4). Can somebody please help me in this regard? Here is my code.

        package com.mediaPlayer;

        import java.io.IOException;
        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.DialogInterface;
        import android.os.Bundle;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.media.MediaPlayer;
        import android.media.MediaPlayer.OnBufferingUpdateListener;
        import android.media.MediaPlayer.OnCompletionListener;
        import android.media.MediaPlayer.OnPreparedListener;
        import android.media.MediaPlayer.OnVideoSizeChangedListener;
        import android.view.SurfaceHolder;
        import android.util.Log;

        public class MediaPlayer1 extends Activity implements OnBufferingUpdateListener,
                OnCompletionListener, OnPreparedListener, OnVideoSizeChangedListener,
                SurfaceHolder.Callback {

            private static final String TAG = "MediaPlayerByMangesh";

            // Widgets in the application
            private Button btnPlay;
            private Button btnPause;
            private Button btnStop;
            private MediaPlayer mMediaPlayer;
            private String path = "m.3gp";
            private SurfaceHolder holder;
            private int mVideoWidth;
            private int mVideoHeight;
            private boolean mIsVideoSizeKnown = false;
            private boolean mIsVideoReadyToBePlayed = false;

            // For the id of radio button selected
            private int radioCheckedId = -1;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                Log.d(TAG, "Entered OnCreate:");
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                Log.d(TAG, "Creating Buttons:");
                btnPlay = (Button) findViewById(R.id.btnPlay);
                btnPause = (Button) findViewById(R.id.btnPause);
                // On app load, the Pause button is disabled
                btnPause.setEnabled(false);
                btnStop = (Button) findViewById(R.id.btnStop);
                btnStop.setEnabled(false);

                /*
                 * Attach a OnCheckedChangeListener to the radio group to monitor radio
                 * buttons selected by user
                 */
                Log.d(TAG, "Watching for Click");

                /* Attach listener to the Calculate and Reset buttons */
                btnPlay.setOnClickListener(mClickListener);
                btnPause.setOnClickListener(mClickListener);
                btnStop.setOnClickListener(mClickListener);
            }

            /*
             * ClickListener for the Calculate and Reset buttons. Depending on the
             * button clicked, the corresponding method is called.
             */
            private OnClickListener mClickListener = new OnClickListener() {
                @Override
                public void onClick(View v) {
                    switch (v.getId()) {
                    case R.id.btnPlay:
                        Log.d(TAG, "Clicked Play Button");
                        Log.d(TAG, "Calling Play Function");
                        Play();
                        break;
                    case R.id.btnPause:
                        Pause();
                        break;
                    case R.id.btnStop:
                        Stop();
                        break;
                    }
                }
            };

            /**
             * Play the Video.
             */
            private void Play() {
                // Create a new media player and set the listeners
                mMediaPlayer = new MediaPlayer();
                Log.d(TAG, "Entered Play function:");
                try {
                    mMediaPlayer.setDataSource(path);
                } catch (IOException ie) {
                    Log.d(TAG, "IO Exception:" + path);
                }
                mMediaPlayer.setDisplay(holder);
                try {
                    mMediaPlayer.prepare();
                } catch (IOException ie) {
                    Log.d(TAG, "IO Exception:" + path);
                }
                mMediaPlayer.setOnBufferingUpdateListener(this);
                mMediaPlayer.setOnCompletionListener(this);
                mMediaPlayer.setOnPreparedListener(this);
                //mMediaPlayer.setOnVideoSizeChangedListener(this);
                //mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
            }

            public void onBufferingUpdate(MediaPlayer arg0, int percent) {
                Log.d(TAG, "onBufferingUpdate percent:" + percent);
            }

            public void onCompletion(MediaPlayer arg0) {
                Log.d(TAG, "onCompletion called");
            }

            public void onVideoSizeChanged(MediaPlayer mp, int width, int height) {
                Log.v(TAG, "onVideoSizeChanged called");
                if (width == 0 || height == 0) {
                    Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")");
                    return;
                }
                mIsVideoSizeKnown = true;
                mVideoWidth = width;
                mVideoHeight = height;
                if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) {
                    startVideoPlayback();
                }
            }

            public void onPrepared(MediaPlayer mediaplayer) {
                Log.d(TAG, "onPrepared called");
                mIsVideoReadyToBePlayed = true;
                if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) {
                    startVideoPlayback();
                }
            }

            public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) {
                Log.d(TAG, "surfaceChanged called");
            }

            public void surfaceDestroyed(SurfaceHolder surfaceholder) {
                Log.d(TAG, "surfaceDestroyed called");
            }

            public void surfaceCreated(SurfaceHolder holder) {
                Log.d(TAG, "surfaceCreated called");
                Play();
            }

            private void startVideoPlayback() {
                Log.v(TAG, "startVideoPlayback");
                holder.setFixedSize(176, 144);
                mMediaPlayer.start();
            }

            /**
             * Pause the Video
             */
            private void Pause() {
                ;
                /*
                 * If all fields are populated with valid values, then proceed to
                 * calculate the tips
                 */
            }

            /**
             * Stop the Video.
             */
            private void Stop() {
                ;
                /*
                 * If all fields are populated with valid values, then proceed to
                 * calculate the tips
                 */
            }

            /**
             * Shows the error message in an alert dialog
             *
             * @param errorMessage
             *            String the error message to show
             * @param fieldId
             *            the Id of the field which caused the error. This is required
             *            so that the focus can be set on that field once the dialog is
             *            dismissed.
             */
            private void showErrorAlert(String errorMessage, final int fieldId) {
                new AlertDialog.Builder(this).setTitle("Error")
                        .setMessage(errorMessage).setNeutralButton("Close",
                                new DialogInterface.OnClickListener() {
                                    @Override
                                    public void onClick(DialogInterface dialog, int which) {
                                        findViewById(fieldId).requestFocus();
                                    }
                                }).show();
            }
        }

    Thanks,
    Mangesh Kumar K.
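    For reference, one likely culprit is the relative path: MediaPlayer.setDataSource() generally needs an absolute path, a URI, or a resource; a bare file name like "m.3gp" will not resolve to anything on the device. Two common alternatives (the sdcard path is only an example location):

        // if the clip is bundled with the app, place it in res/raw/m.3gp:
        MediaPlayer mp = MediaPlayer.create(this, R.raw.m);  // returns an already-prepared player
        mp.start();

        // or point at an absolute location on storage:
        mMediaPlayer.setDataSource("/sdcard/m.3gp");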

    Read the article

  • DBD::Oracle and utf8

    - by golemwashere
    Hello, I am having some trouble inserting a UTF-8 string into an Oracle 10 database on Solaris, using the latest DBD::Oracle on Perl v5.8.4. These are my DB settings (SELECT * FROM NLS_DATABASE_PARAMETERS):

        NLS_NCHAR_CHARACTERSET   AL16UTF16
        NLS_LANGUAGE             AMERICAN
        NLS_TERRITORY            AMERICA
        NLS_CURRENCY             $
        NLS_ISO_CURRENCY         AMERICA
        NLS_NUMERIC_CHARACTERS   .,
        NLS_CHARACTERSET         UTF8
        NLS_CALENDAR             GREGORIAN
        NLS_DATE_FORMAT          DD-MON-RR
        NLS_DATE_LANGUAGE        AMERICAN
        NLS_SORT                 BINARY
        NLS_TIME_FORMAT          HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY        $
        NLS_COMP                 BINARY
        NLS_LENGTH_SEMANTICS     CHAR
        NLS_NCHAR_CONV_EXCP      FALSE
        NLS_RDBMS_VERSION        10.2.0.4.0

    These are my Perl $dbh->ora_nls_parameters():

        $VAR1 = {
            'NLS_LANGUAGE' => 'AMERICAN',
            'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
            'NLS_SORT' => 'BINARY',
            'NLS_NUMERIC_CHARACTERS' => '.,',
            'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
            'NLS_ISO_CURRENCY' => 'AMERICA',
            'NLS_COMP' => 'BINARY',
            'NLS_CALENDAR' => 'GREGORIAN',
            'NLS_DATE_FORMAT' => 'DD-MON-RR',
            'NLS_DATE_LANGUAGE' => 'AMERICAN',
            'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
            'NLS_TERRITORY' => 'AMERICA',
            'NLS_LENGTH_SEMANTICS' => 'CHAR',
            'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
            'NLS_DUAL_CURRENCY' => '$',
            'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
            'NLS_NCHAR_CONV_EXCP' => 'FALSE',
            'NLS_CHARACTERSET' => 'UTF8',
            'NLS_CURRENCY' => '$'
        };

    In my script I have:

        use utf8;
        $ENV{NLS_LANG} = 'AMERICAN_AMERICA.UTF8';
        ...
        $sth->bind_param(5, $myclobfield, {ora_type => ORA_CLOB, ora_csform => SQLCS_NCHAR});
        ...

    print Encode::is_utf8($myclobfield) prints 1, but characters like òàè are not correctly inserted into the DB. (I tested with a UTF-8 compliant client that can correctly insert and read them.) Can anyone suggest the best way to do it? Thanks
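    One thing that may be worth trying (a sketch, not a verified fix): the character set of a plain CLOB column is the database character set, which here is already UTF8, so the bind would normally use SQLCS_IMPLICIT; SQLCS_NCHAR is meant for NCLOB columns in the national character set (AL16UTF16). Assuming the target column really is a CLOB:

        use DBI;
        use DBD::Oracle qw(:ora_types SQLCS_IMPLICIT);

        $ENV{NLS_LANG} = 'AMERICAN_AMERICA.UTF8';   # must be set before DBI->connect
        my $dbh = DBI->connect('dbi:Oracle:MYDB', $user, $pass);  # MYDB is a placeholder

        # prepare the insert statement as before, then:
        $sth->bind_param(5, $myclobfield,
            { ora_type => ORA_CLOB, ora_csform => SQLCS_IMPLICIT });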

    Read the article

  • Occasional date or timezone discrepancy in Hudson or Maven with Joda-Time

    - by TheStijn
    Hi, I hope the following explanation will make sense, because it's a weird problem we're facing and hard to describe. We have a Maven project which gets built in Hudson and contains some unit tests where dates are used and asserted. The Hudson server runs on Solaris. Now, occasionally (like 30% of the time) the unit tests using dates fail, because 3.5 hours are deducted from the specified time in the unit test and hence the asserts start failing. The other 70% everything works fine, although nothing at all changed in the code, and we run the Hudson job several times an hour. I added the following code to a unit test to check the time:

        @Test
        public void testDate() {
            System.out.println("new DateMidnight(2011, 1, 5).toDate();");
            System.out.println(new DateMidnight(2011, 1, 5).toDate());
            System.out.println(new DateMidnight(2011, 1, 5).toDate().getTime());

            Calendar cal = Calendar.getInstance();
            cal.set(Calendar.YEAR, 2011);
            cal.set(Calendar.MONTH, 0);
            cal.set(Calendar.DAY_OF_MONTH, 5);
            cal.set(Calendar.HOUR, 0);
            cal.set(Calendar.MINUTE, 0);
            cal.set(Calendar.SECOND, 0);
            cal.set(Calendar.MILLISECOND, 0);
            System.out.println("cal.getTime();");
            System.out.println(cal.getTime());
            System.out.println(cal.getTime().getTime());
        }

    So basically it should print the same thing when using Joda-Time or plain old Calendar. This is the case in 70% of the runs; for the other 30% I get the following printouts:

        Running TestSuite
        new DateMidnight(2011, 1, 5).toDate();
        Tue Jan 04 21:30:00 MET 2011
        1294173000000
        cal.getTime();
        Wed Jan 05 12:00:00 MET 2011
        1294225200000

    Local Maven tests never appear to pose this problem, and we can't figure out what could be the cause of it. Especially, we can't think of a single reason why the tests sometimes pass and sometimes fail without changing any code or any Hudson or server setting. Also, we run the Maven install with Cobertura, which means that the unit tests are run twice. It also happens that they pass the first time and fail the second time, or the other way around, or that they fail both times. Thanks for any help, Stijn
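    As an aside, Calendar.HOUR is the 12-hour field (Calendar.HOUR_OF_DAY is the 24-hour one), which may explain the Calendar branch printing 12:00:00. For the Joda-Time drift, one way to rule out the build host's zone settings is to pin both the JDK and Joda-Time defaults at the start of the test run (a sketch; "Europe/Brussels" is an example zone, and the imports are java.util.TimeZone, org.joda.time.DateTimeZone and org.junit.BeforeClass):

        @BeforeClass
        public static void pinTimeZone() {
            TimeZone.setDefault(TimeZone.getTimeZone("Europe/Brussels"));
            DateTimeZone.setDefault(DateTimeZone.forID("Europe/Brussels"));
        }

    If the failures stop, the discrepancy lies between the JDK's and Joda-Time's time zone data on the Solaris build host rather than in the test code itself.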

    Read the article

  • Problems with CGI wrapper for PHP

    - by user205878
    I'm having a bugger of a time with a CGI wrapper for PHP. I know very little about CGI and PHP as CGI. Here's what I know about the system:

    - Solaris 10 on a 386
    - Suhosin
    - PHP normally running as CGI, with cgiwrap (http://cgiwrap.sourceforge.net/). I am not able to find an example wrapper.cgi on the server to look at.
    - Shared hosting (virtual host), so I don't have access to the Apache config. But the admins are not helpful. Switching hosts is not an option.
    - The Options directive cannot be overridden in .htaccess (ExecCGI, for example).

    .htaccess:

        AddHandler php-handler .php
        Action php-handler "/bin/test.cgi"

    ~/public_html/bin/test.cgi:

        #!/usr/bin/sh
        # Without these 2 lines, I get an Internal Server Error
        echo "Content-type: text/html"
        echo ""
        exec "/path/to/php-cgi" 'foo.php';

    /bin/foo.php:

        <?php echo "this is foo.php!";

    Output of http://mysite.com/bin/test.cgi:

        X-Powered-By: PHP/5.2.11
        Content-type: text/html

        echo "Content-type: text/html"
        echo ""
        exec "/path/to//php-cgi" 'foo.php';

    Output of http://mysite.com/anypage.php:

        X-Powered-By: PHP/5.2.11
        Content-type: text/html

        echo "Content-type: text/html"
        echo ""
        exec "/path/to//php-cgi" 'foo.php';

    The things I note are:

    - PHP is being executed, as noted by the X-Powered-By header.
    - The source of /bin/test.cgi is output in the results.
    - No matter what I put as the second argument of exec, it isn't passed to the php binary. I've tried '-i' to get phpinfo, '-v' to get the version...
    - When I execute test.cgi via the shell, I get the expected results (the argument is passed to php, and it is reflected in the output).

    Any ideas about how to get this to work?

    UPDATE: It appears that the reason the source of test.cgi was appearing was due to errors. Any time a fatal error occurred, either within the CGI itself or with the command being executed by exec, it would cause the source of the CGI to appear. Within test.cgi, I can get the proper output with exec "/path/to/php-cgi" -h (I get the same thing as I would from the CLI).
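    For reference, when PHP runs behind an Apache Action handler, the target script is normally conveyed through CGI environment variables (PATH_TRANSLATED; php-cgi also honors SCRIPT_FILENAME) rather than through command-line arguments; php-cgi largely ignores argv when it detects it is running under a web server, which would explain why the second argument of exec never arrives. A minimal wrapper along these lines (paths are placeholders):

        #!/usr/bin/sh
        # php-cgi emits its own headers, so do not print Content-type here
        SCRIPT_FILENAME="$PATH_TRANSLATED"
        export SCRIPT_FILENAME
        exec /path/to/php-cgi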

    Read the article

  • Is the appassembler plugin broken for the Java Service Wrapper on 64-bit Windows?

    - by Paul McKenzie
    Hi, I'm developing on 32-bit Windows and am using appassembler to create a Java Service Wrapper assembly, and it works OK. But I also need to create a 64-bit assembly for deployment to a dev server. In the following config I have substituted the 32-bit platform with the 64-bit one; see the <includes> section. But it no longer places the wrapper JAR and DLL in the lib folder. If I omit the includes completely, I get Linux, Solaris, Mac OS X and Win32 libraries, but no Win64. Anyone got this working?

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>appassembler-maven-plugin</artifactId>
          <version>1.1-SNAPSHOT</version>
          <configuration>
            <target>${project.build.directory}/appassembler</target>
            <repositoryLayout>flat</repositoryLayout>
            <defaultJvmSettings>
              <initialMemorySize>256M</initialMemorySize>
              <maxMemorySize>1024M</maxMemorySize>
            </defaultJvmSettings>
            <daemons>
              <daemon>
                <id>MyApp</id>
                <mainClass>com.foo.AppMain</mainClass>
                <platforms>
                  <platform>jsw</platform>
                </platforms>
                <generatorConfigurations>
                  <generatorConfiguration>
                    <generator>jsw</generator>
                    <includes>
                      <include>windows-x86-64</include>
                    </includes>
                    <configuration>
                      <property>
                        <name>set.default.REPO_DIR</name>
                        <value>../../repo</value>
                      </property>
                    </configuration>
                  </generatorConfiguration>
                </generatorConfigurations>
              </daemon>
            </daemons>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>generate-daemons</goal>
                <goal>create-repository</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    Read the article

  • dbms_xmlschema fails to validate with complexType

    - by Andrew
    Preface: This works on one Oracle 11gR1 (Solaris 64) database and not on a second, and we can't figure out the difference between the two databases. Somehow the complexType causes the validation to fail with this error:

        ORA-31154: invalid XML document
        ORA-19202: Error occurred in XML processing
        LSX-00200: element "shiporder" not empty
        ORA-06512: at "SYS.XMLTYPE", line 354
        ORA-06512: at line 13

    But the schema is valid (it passes this online test: http://www.xmlme.com/Validator.aspx).

        -- Cleanup any existing schema
        begin
          dbms_xmlschema.deleteschema('shiporder.xsd', dbms_xmlschema.DELETE_CASCADE);
        end;

        -- Define the problem schema (adapted from http://www.w3schools.com/schema/schema_example.asp)
        begin
          dbms_xmlschema.registerSchema('shiporder.xsd', '<?xml version="1.0" encoding="ISO-8859-1" ?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="shiporder">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="orderperson" type="xs:string"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>', owner => 'SCOTT');
        end;

        -- Attempt to validate
        declare
          bbb xmltype;
        begin
          bbb := XMLType('<?xml version="1.0" encoding="ISO-8859-1"?>
        <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:noNamespaceSchemaLocation="shiporder.xsd">
          <orderperson>John Smith</orderperson>
        </shiporder>');
          XMLType.schemaValidate(bbb);
        end;

    Now if I gut the schema definition and leave only a string in the XML, then the validation passes:

        begin
          dbms_xmlschema.deleteschema('shiporder.xsd', dbms_xmlschema.DELETE_CASCADE);
        end;

        begin
          dbms_xmlschema.registerSchema('shiporder.xsd', '<?xml version="1.0" encoding="ISO-8859-1" ?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="shiporder" type="xs:string"/>
        </xs:schema>', owner => 'SCOTT');
        end;

        DECLARE
          xml XMLTYPE;
        BEGIN
          xml := XMLTYPE('<?xml version="1.0" encoding="ISO-8859-1"?>
        <shiporder xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:noNamespaceSchemaLocation="shiporder.xsd">
          John Smith
        </shiporder>');
          XMLTYPE.schemaValidate(xml);
        END;
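    Since the same scripts behave differently on the two databases, a first diagnostic step is to compare what each instance actually has registered and their exact patch levels (a sketch using standard dictionary views):

        SELECT schema_url, local FROM user_xml_schemas;  -- what is registered for this user
        SELECT * FROM v$version;                         -- compare versions/patch levels
        SELECT * FROM nls_database_parameters;           -- rule out character set differences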

    Read the article

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all. In Redis (http://code.google.com/p/redis) there are scores associated with elements, in order to retrieve those elements sorted. These scores are doubles, even though many users actually sort by integers (for instance, Unix times). When the database is saved we need to write these doubles to disk. This is what is used currently:

        snprintf((char*)buf+1, sizeof(buf)-1, "%.17g", val);

    Additionally, infinity and not-a-number conditions are checked so that they can also be represented in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We do have a function in Redis that converts an integer into a string representation in a much faster way. So my idea was to check whether a double could be cast to an integer without loss of data, and then use the faster integer function when that is true. For this to provide a good speedup, of course, the test for integer "equivalence" must itself be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this:

        double x = ... some value ...;
        if (x == (double)((long long)x))
            use_the_fast_integer_function((long long)x);
        else
            use_the_slow_snprintf(x);

    In my reasoning, the cast above converts the double into a long long, and then back into a double. If the value fits the range, and there is no decimal part, the number will survive the conversion and be exactly the same as the initial number. As I wanted to make sure this will not break things on some system, I joined #c on freenode and I got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets? That is, archs where Linux / Mac OS X / *BSD / Solaris run nowadays? What I can add to make the code saner is an explicit check that the double is in range before attempting the cast at all. Thank you for any help.
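    A minimal sketch of that range-checked variant might look like the following. The helper name and the 2^53 bound are illustrative assumptions, not actual Redis code; the point is that the range test happens before the cast, since casting an out-of-range double to long long is undefined behavior:

        #include <stdio.h>

        /* Hypothetical helper: returns 1 and stores the integer value if the
         * double can round-trip through long long without loss. 2^53 is used
         * as a conservative bound because every integer up to 2^53 is exactly
         * representable in a double; NaN and infinities fail the range test. */
        static int double_fits_long_long(double x, long long *out)
        {
            long long ll;
            if (!(x >= -9007199254740992.0 && x <= 9007199254740992.0))
                return 0;
            ll = (long long)x;
            if ((double)ll != x)
                return 0; /* fractional part present */
            *out = ll;
            return 1;
        }

        int main(void)
        {
            double vals[3] = { 42.0, 3.5, 1e300 };
            long long ll;
            int i;
            for (i = 0; i < 3; i++) {
                if (double_fits_long_long(vals[i], &ll))
                    printf("fast integer path: %lld\n", ll);   /* integer-to-string routine */
                else
                    printf("slow snprintf path: %.17g\n", vals[i]);
            }
            return 0;
        }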

    Read the article

  • How do I make dependency generation work for C? (Also... decode this sed/make statement!)

    - by Derek
    Hi all. I have a make build system, written by someone else, that I am trying to decipher. I get an error when I run it on a Red Hat system, but not when I run it on my Solaris system. The versions of gmake are the same major revision (one off on the minor revision). This is for building a C project, and the make system has a global Makefile.global that is inherited by each directory's local Makefile. Makefile.global has all the targets in it, starting with:

        all: $(LIB) $(BIN)

    where LIB builds libraries and BIN builds binaries. Jumping down the targets, I have:

        $(LIB) : $(GEN_LIB)

        $(GEN_LIB) : $(GEN_DEPS) $(GEN_OBJS)
                $(AR) $(ARFLAGS) $(GEN_LIB) $(GEN_OBJS)

        $(GEN_DEPS) :
                @set -e; rm -f $@; \
                $(CC) $(CDEP_FLAG) $(CFLAGS) $(INCDIRS) `basename $@ | sed 's/\.d/\.c/' | sed 's,^,$(HOME_SRC)/,'` | sed 's,\(.*\)\.o: ,$(GEN_OBJDIR)/\1.o $@ :,g' > $@.tmp ; \
                cat $@.tmp > $@ ; \
                cat $@.tmp | cut -d: -f2 | grep '\.h' | sed 's,\.h,.h :,g' >> $@ ; \
                rm $@.tmp

        $(GEN_OBJS) :
                $(CC) $(CFLAGS) $(INCDIRS) -c $(*F).c -lmpi -o $@

    I think these are all the relevant targets I need to include to answer my question. Definitions of those variables:

        CC = icc
        CDEP_FLAG = -M
        CFLAGS = various compiler flags (ifdef-type flags)
        INCDIRS = the include directory where all .h files are
        GEN_OBJDIR = /lib/objs
        HOME_SRC = .
        GEN_LIB = lib/$(LIB)
        GEN_DEPDIR = /lib/deps
        GEN_DEPS = $(addprefix $(GEN_DEPDIR)/,$(addsuffix .d,$(basename $(OBJS))))

    I think this covers everything you need; it is basically self-explanatory from the names. As best I can tell, this generates, in /lib/deps, a .d file that holds the object and source dependencies. In other words, for the utilities.a library I will get a utils.o and utils.c dependency stack, all in the file utils.d. I think some syntax error is being generated in that file, because I get the following error:

        ../lib/deps/util.d:25: *** target pattern contains no '%'.  Stop.
        gmake[2]: *** [all] Error 2
        gmake[1]: *** [all] Error 2
        gmake: *** [all] Error 2

    I am not sure whether my error is in the dependency generation, or in some part further down, like the object generation target. If you need further info, let me know and I will add it to the post.
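    As an aside, a common alternative to hand-rolled sed pipelines is to let the compiler emit the .d files as a side effect of compilation and simply include them. Below is a minimal sketch for GNU make; the -MMD/-MP flags are a gcc convention, and whether a given icc version accepts them is an assumption to verify:

        # Sketch only: compiler-generated dependency files under GNU make.
        SRCS := $(wildcard *.c)
        OBJS := $(SRCS:.c=.o)
        DEPS := $(OBJS:.o=.d)

        %.o: %.c
                $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

        # The leading '-' keeps make from failing on a clean build,
        # before any .d files exist.
        -include $(DEPS)

    Because -MP emits a phony target for every header, this layout also avoids the "target pattern contains no '%'" class of errors that hand-edited .d files can produce.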

    Read the article

  • April 14th Links: ASP.NET, ASP.NET MVC, ASP.NET Web API and Visual Studio

    - by ScottGu
    Here is the latest in my link-listing blog series:

    ASP.NET

    - Easily overlooked features in VS 11 Express for Web: Good post by Scott Hanselman that highlights a bunch of easily overlooked improvements coming to VS 11 (and specifically the free Express editions) for web development: unit testing, browser chooser/launcher, IIS Express, CSS Color Picker, Image Preview in Solution Explorer and more.
    - Get Started with ASP.NET 4.5 Web Forms: Good 5-part tutorial that walks through building an application using ASP.NET Web Forms and highlights some of the nice improvements coming with ASP.NET 4.5.
    - What is New in Razor V2 and What Else is New in Razor V2: Great posts by Andrew Nurse, a dev on the ASP.NET team, about some of the new improvements coming with ASP.NET Razor v2.
    - ASP.NET MVC 4 AllowAnonymous Attribute: Nice post from David Hayden about the new [AllowAnonymous] filter introduced with ASP.NET MVC 4.
    - Introduction to the ASP.NET Web API: Great tutorial by Stephen Walther that covers how to use the new ASP.NET Web API support built into ASP.NET 4.5 and ASP.NET MVC 4.
    - Comprehensive List of ASP.NET Web API Tutorials and Articles: Tugberk Ugurlu links to a huge collection of articles, tutorials, and samples about the new ASP.NET Web API capability.
    - Async Mashups using ASP.NET Web API: Nice post by Henrik on how you can use the new async language support coming with .NET 4.5 to easily and efficiently make asynchronous network requests that do not block threads within ASP.NET.

    ASP.NET and Front-End Web Development

    - Visual Studio 11 and Front End Web Development - JavaScript/HTML5/CSS3: Nice post by Scott Hanselman that highlights some of the great improvements coming with VS 11 (including the free Express edition) for front-end web development.
    - HTML5 Drag/Drop and Async Multi-file Upload with ASP.NET Web API: Great post by Filip W. that demonstrates how to implement an async file drag/drop uploader using HTML5 and ASP.NET Web API.
    - Device Emulator Guide for Mobile Development with ASP.NET: Good post from Rachel Appel that covers how to use various device emulators with ASP.NET and VS to develop cross-platform mobile sites.
    - Fixing these jQuery: A Guide to Debugging: Great presentation by Adam Sontag on debugging with JavaScript and jQuery. Some really good tips, tricks and gotchas that can save a lot of time.

    ASP.NET and Open Source

    - Getting Started with ASP.NET Web Stack Source on CodePlex: Fantastic post by Henrik (an architect on the ASP.NET team) that provides step-by-step instructions on how to work with the ASP.NET source code we recently open sourced.
    - Contributing to ASP.NET Web Stack Source on CodePlex: Follow-on to the post above (also by Henrik) that walks through how you can submit a code contribution to the ASP.NET MVC, Web API and Razor projects.
    - Overview of the WebApiContrib project: Nice post by Pedro Reys on the new open source WebApiContrib project that has been started to deliver cool extensions and libraries for use with ASP.NET Web API.

    Entity Framework

    - Entity Framework 5 Performance Improvements and Performance Considerations for EF5: Good articles that describe some of the big performance wins coming with EF5 (which will ship with both .NET 4.5 and ASP.NET MVC 4). Automatic compilation of LINQ queries will yield some significant performance wins (up to 600% faster).
    - ASP.NET MVC 4 and EF Database Migrations: Good post by David Hayden that covers the new database migrations support within EF 4.3, which allows you to easily update your database schema during development without losing any of the data within it.

    Visual Studio

    - What's New in Visual Studio 11 Unit Testing: Nice post by Peter Provost (from the VS team) about some of the great improvements coming to VS 11 for unit testing, including built-in VS tooling support for a broad set of unit test frameworks (NUnit, XUnit, Jasmine, QUnit and more).

    Hope this helps, Scott

    Read the article

  • Control to Control Binding in WPF/Silverlight

    - by psheriff
    In the past, if you had two controls that you needed to work together, you would have to write code. For example, if you want a label control to display any text a user typed into a text box, you would write code to do that. If you want to turn off a set of controls when a user checks a check box, you would also have to write code. However, with XAML, these operations become very easy to do.

    Bind a Text Box to a Text Block

    As a basic example of this functionality, let's bind a TextBlock control to a TextBox. When the user types into the TextBox, the value typed in will show up in the TextBlock control as well. To try this out, create a new Silverlight or WPF application in Visual Studio. On the main window or user control, type in the following XAML:

        <StackPanel>
          <TextBox Margin="10" x:Name="txtData" />
          <TextBlock Margin="10"
                     Text="{Binding ElementName=txtData,
                                    Path=Text}" />
        </StackPanel>

    Now run the application and type into the TextBox control. As you type, you will see the data you type also appear in the TextBlock control. The {Binding} markup extension is responsible for this behavior. You set the ElementName attribute of the Binding markup to the name of the control you wish to bind to. You then set the Path attribute to the name of the property of that control you wish to bind to. That's all there is to it!

    Bind the IsEnabled Property

    Now let's apply this concept to something you might use in a business application. Consider the following two screen shots. The idea is that if the Add Benefits check box is unchecked, the IsEnabled property of the three "Benefits" check boxes will be set to false (Figure 1). If the Add Benefits check box is checked, the IsEnabled property of the "Benefits" check boxes will be set to true (Figure 2).

    Figure 1: Uncheck Add Benefits and the Benefits will be disabled.
    Figure 2: Check Add Benefits and the Benefits will be enabled.

    To accomplish this, you would write XAML to bind each of the check boxes in the "Benefits To Add" section to the check box named chkBenefits. Below is a fragment of the XAML code that would be used:

        <CheckBox x:Name="chkBenefits" />
        <CheckBox Content="401k"
                  IsEnabled="{Binding ElementName=chkBenefits,
                                      Path=IsChecked}" />

    Since the IsEnabled property is a boolean type and the IsChecked property is also a boolean type, you can bind these two together. If they were different types, or if you needed to set the IsEnabled property to the inverse of the IsChecked property, then you would need to use a ValueConverter class.

    Summary

    Once you understand the basics of data binding in XAML, you can eliminate a lot of code. Connecting controls together is as easy as setting the ElementName and Path properties of the Binding markup extension.

    NOTE: You can download the complete sample code at my website, http://www.pdsa.com/downloads. Choose Tips & Tricks, then "SL – Basic Control Binding" from the drop-down.

    Good luck with your coding,
    Paul Sheriff

    ** SPECIAL OFFER FOR MY BLOG READERS **
    Visit http://www.pdsa.com/Event/Blog for a free eBook on "Fundamentals of N-Tier".
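    To illustrate that last point, here is a minimal sketch of an inverse-boolean value converter. The class name and resource key are illustrative assumptions, not part of the article's sample code:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Hypothetical converter: returns the inverse of a boolean value so a
        // control can be enabled when a check box is UNchecked.
        public class InverseBooleanConverter : IValueConverter
        {
            public object Convert(object value, Type targetType,
                                  object parameter, CultureInfo culture)
            {
                return (value is bool) ? !(bool)value : value;
            }

            public object ConvertBack(object value, Type targetType,
                                      object parameter, CultureInfo culture)
            {
                return (value is bool) ? !(bool)value : value;
            }
        }

    After declaring an instance of the converter as a resource (for example, <local:InverseBooleanConverter x:Key="InverseBool" />), the binding would become IsEnabled="{Binding ElementName=chkBenefits, Path=IsChecked, Converter={StaticResource InverseBool}}".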

    Read the article

  • Cocos2d-xna memory management for WP8

    - by Arkiliknam
    I recently upgraded to VS2012 and tried my in-development game out on the new WP8 emulators, but was dismayed to find that the emulator now crashes and throws an out-of-memory exception during my sprite loading procedure (funnily, it still works in the WP7 emulators and on my WP7 device). Regardless of whether the problem is the emulator or not, I want to get a clear understanding of how I should be managing memory in the game.

    My game consists of a character who has 4 or more different animations. Each animation consists of 4 to 7 frames. On top of that, the character has up to 8 stackable visualization modifications (e.g. eye type, nose type, hair type, clothes type). Before the memory issue, I preloaded all textures for each animation frame and customization and created animate actions out of them. The game then plays animations using the customizations applied to the current character.

    I re-examined this implementation when I received the out-of-memory exceptions and have started playing with CCRenderTexture instead: rather than preloading all possible textures, it only loads the textures needed for the character and renders them onto a single texture, from which the animation is built. This means the animations use 1/8th of the sprites they did before. I thought this would solve my issue, but it hasn't. Here's a snippet of my code:

        var characterTexture = CCRenderTexture.Create((int)width, (int)height);
        characterTexture.BeginWithClear(0, 0, 0, 0);

        // stamp a body onto my texture
        var bodySprite = MethodToCreateSpecificSprite();
        bodySprite.Position = centerPoint;
        bodySprite.Visit();
        bodySprite.Cleanup();
        bodySprite = null;

        // stamp eyes, nose, mouth, clothes, etc...

        characterTexture.End();

    As you can see, I'm calling Cleanup and setting the sprite to null in the hope of releasing the memory, though I don't believe this is the right way, nor does it seem to work... I also tried using SharedTextureCache to load textures before stamping my texture out, and then clearing the SharedTextureCache with:

        CCTextureCache.SharedTextureCache.RemoveAllTextures();

    But this didn't have an effect either. Any tips on what I'm not doing?

    I used VS to do a memory profile of the emulation causing the crash. Both the WP7.1 and WP8 emulators peak at about 150 MB of usage; WP8 crashes and throws an out-of-memory exception. Each customization/frame is 15 KB at the most. Let's say there are 8 layers of customization; that's 120 KB at most, but I render them onto one texture, which I would assume is only 15 KB again. Each animation is 8 frames at the most. That's 15 KB for 1 texture, or 960 KB for 8 textures of customization. There are 4 animation sets. That's 60 KB for 4 sets of 1 texture, or 3.75 MB for 4 sets of 8 textures of customization. So even if it's storing every layer, it's 3.75 MB... nowhere near the 150 MB breaking point my profiler seems to suggest :(

    WP 7.1 memory profile (peaks near 150 MB)
    WP8 memory profile (peaks near 150 MB and crashes)

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements

    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:

    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine whether the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that validate whether the appropriate patches, privileges, setups, and profile options have been configured, shortening the time needed to get up and operational.

    Lifecycle Automation Enhancements

    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine whether the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that validate whether the appropriate patches, privileges, setups, and profile options have been configured, shortening the time needed to get up and operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: Automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File source mappings and E-Business Suite mappings can be created and marked as public or private. Only the creator/owner can define or edit his or her own mappings; other users can use public mappings but cannot edit or change their settings.
    - Test Checkout Command for Versions: Allows you to test and verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the destination path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to use more criteria within the advanced package search (that is, Public, Last Updated By, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, with better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply, supporting highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the patch run for more comprehensive and correct patch runs, removing many manual and laborious tasks and freeing up Apps DBAs for higher value-added work.
    - Automatic Primary Node Identification: Users can now specify which node is the "primary node" (that is, which node hosts the shared APPL_TOP) during the patch run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this provides better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data, allowing customers to reuse existing extracts to provision new instances and enabling comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, providing native support for this industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements

    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real-user and synthetic-transaction-based monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support:

    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default). It is also now possible to view KPI numerator and denominator information and to have it available for export.
    - Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (an alternative database can be configured for its storage), and access to the export data is through SQL. With this enhancement, it is easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, providing external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support parallel loading of items; for dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to let you specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions

    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:

    - Oracle Solaris on SPARC (64-bit) (9, 10)
    - HP-UX Itanium (11.23, 11.31)
    - HP-UX PA-RISC (64-bit) (11.23, 11.31)
    - IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • PowerShell Control over Nikon D3000 Camera

    My wife got me a Nikon D3000 camera for Christmas last year, and I'm loving it but still trying to wrap my head around some of its features. For instance, when you plug it into a computer via USB, it doesn't show up as a drive like most cameras I'm used to; rather, it shows up as Computer\D3000. After a bit of research, I've learned that this is because it implements the MTP/PTP protocol, and thus doesn't actually let Windows mount the camera's storage as a drive letter. Nikon describes the use of the MTP and PTP protocols in their cameras here.

    What I'm really trying to do is gain access to the camera's file system via PowerShell. I've been using a very handy PowerShell script to pull pictures off of my cameras and organize them into folders by date. I'd love to be able to do the same thing with my Nikon D3000, but so far I haven't been able to figure out how to get access to the files in PowerShell. If you know, I'd appreciate any links/tips you can provide. All I could find is a shareware product called PTPdrive, which I'm not prepared to shell out money for (yet). (And yes, you can do much the same thing with Windows 7's Import Pictures and Videos wizard, which is pretty good too.)

    However, in my searching, I did find some really cool stuff you can do with PowerShell and one of these cameras, like actually taking pictures via PowerShell commands. Credit for this goes to James O'Neill and Mark Wilson. Here's what I was able to do:

    Taking Pictures via PowerShell with the D3000

    First, connect your camera, turn it on, and launch PowerShell. Execute the following commands to see what commands your device supports:

        $dialog = New-Object -ComObject "WIA.CommonDialog"
        $device = $dialog.ShowSelectDevice()
        $device.Commands

    You should see something like this: (screenshot of the device's supported commands)

    Now, to take a picture, simply point your camera at something and then execute this command:

        $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")

    Imagine my surprise when this actually took a picture (with auto-focus)!

    Imagine what you could do with a camera completely under the control of your computer. Time-lapse photography would be pretty simple, for instance, with a very simple loop that takes a picture and then sleeps for a minute (or whatever time period). Hooked up to a laptop for portability (and an A/C power supply), this would be pretty trivial to implement. I may have to give it a shot and report back.
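    Following that train of thought, a minimal time-lapse sketch could be as simple as the loop below. The shot count and interval are arbitrary assumptions, and it assumes the camera stays connected and exposes the same take-picture command GUID shown above:

        # Sketch: take a picture every 60 seconds, 30 times.
        $dialog = New-Object -ComObject "WIA.CommonDialog"
        $device = $dialog.ShowSelectDevice()
        for ($i = 0; $i -lt 30; $i++) {
            $device.ExecuteCommand("{AF933CAC-ACAD-11D2-A093-00C04F72DC3C}")
            Start-Sleep -Seconds 60
        }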

    Read the article

  • How did my email end up in spam? Spam only filters this specific email, other email contents work

    - by mugetsu
    My website has users buy our products, and when a purchase completes, it sends the user an email. However, this email always ends up in spam! When a user first registers, the site also sends an email; that email, however, is not filtered and goes into the normal inbox. I'm not quite sure why this is so; Gmail vaguely tells me that "It's similar to messages that were detected by our spam filters." So I'm thinking that I need to reword the following email. Can I get some tips? Or could something else be causing this? Thanks! Here's the unformatted email:

        Delivered-To: [email protected]
        Received: by 10.112.32.98 with SMTP id h2csp61953lbi; Tue, 20 Mar 2012 21:09:13 -0700 (PDT)
        Received: by 10.180.79.72 with SMTP id h8mr22836827wix.1.1332302953175; Tue, 20 Mar 2012 21:09:13 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from mail26.elasticemail.org (mail26.elasticemail.org. [178.32.180.26]) by mx.google.com with SMTP id 6si518487wiz.41.2012.03.20.21.09.12; Tue, 20 Mar 2012 21:09:12 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 178.32.180.26 as permitted sender) client-ip=178.32.180.26;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 178.32.180.26 as permitted sender) [email protected]; dkim=pass [email protected]
        DKIM-Signature: v=1; a=rsa-sha1; bh=qjc8jxQuGy9pLN1YV9TR2PHQYKg=; c=relaxed/relaxed; d=website.com; s=api; h=DomainKey-Signature:MIME-Version:From:To:List-Unsubscribe:Subject:Date:Reply-To:Message-ID:Content-Type; b=Odt+nYhjntXPl7JPVHeJWjkStemt6so+FPVYY6oMKziMFzmW8YiLhN8WwSLY0faMcn/rirKsO2dOm/kvcHlqUJC7ldhaydE6bPekkBDa9kBovlGwPNm6xy9QWPP9I1fXDLDCwqqeAXv8kN0daXbh3pVyqWNUOk5cgQ35OgpQpKI=
        DomainKey-Signature: q=dns; a=rsa-sha1; c=simple; d=website.com; s=api; h=MIME-Version:X-Mailer:From:To:X-Priority:List-Unsubscribe:Subject:Date:Reply-To:Message-ID:Content-Type; b=F7NNZIEyEV+64uYD8pVpe91WLP19Tw2Whk4OKpkLeAfkmrNIA7AjP0XYU1JWTlEyibHQJjjbhR62I3MvVJBSGp75eWfOuwb2AqYWZ/jAlMWznnfQLVv7OlYJsErGxYP6GUNNcuJaqlTPFDanJwtaEvR+tqXZRB7xrUisMd8lq2I=
        MIME-Version: 1.0
        X-Mailer: email.website.com
        From: "Website Contact" <[email protected]>
        To: [email protected]
        X-Priority: 3 (Normal)
        List-Unsubscribe: <http://email.website.com/tracking/unsubscribe?msgid=su6g-8kfd0s0g>, <mailto:[email protected]?subject=unsubscribe>
        Subject: Website Tickets: event
        Date: Wed, 21 Mar 2012 04:09:17 +0000
        Reply-To: "Website Contact" <[email protected]>
        Message-ID: <[email protected]>
        Content-Type: multipart/alternative; boundary="----=_NextPart_000_3F77_7A0DF805.A8C886C0"

        ------=_NextPart_000_3F77_7A0DF805.A8C886C0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: base64

        SGVsbG8hIAoKIEhlcmUgYXJlIHlvdXIgdGlja2V0KHMpIGZvciBDVEFTIGVDc1RBU3kgMjAxMjog
        CgpodHRwczovL2NhbXB1c2FtcC5jb20vP3RpY2tldHMvNy95aGloZ3Znd3Z3cWR3cXhtdnQKClNp
        bXBseSBicmluZyBpdCB3aXRoIHlvdSBvbiB5b3VyIHNtYXJ0cGhvbmUsIG9yIHByaW50IHRoZSB0
        aWNrZXQgb3V0IHRvIGJlIHNjYW5uZWQgYXQgdGhlIGV2ZW50LiBFbmpveSwgYW5kIHdlIGFwcHJl
        Y2lhdGUgeW91ciBwdXJjaGFzZS4KClNpbmNlcmVseSwKVGhlIENhbXB1c0FtcCBUZWFt

        ------=_NextPart_000_3F77_7A0DF805.A8C886C0
        Content-Type: text/html; charset="utf-8"
        Content-Transfer-Encoding: base64

        SGVsbG8hIDxici8+PGJyLz4gSGVyZSBhcmUgeW91ciB0aWNrZXQocykgZm9yIENUQVMgZUNzVEFT
        eSAyMDEyOjxici8+PGEgaHJlZj0iaHR0cDovL2VtYWlsLmNhbXB1c2FtcC5jb20vdHJhY2tpbmcv
        Y2xpY2s/bXNnaWQ9c3U2Zy04a2ZkMHMwZyZ0YXJnZXQ9aHR0cHMlM2ElMmYlMmZjYW1wdXNhbXAu
        Y29tJTJmJTNmdGlja2V0cyUyZjclMmZ5aGloZ3Znd3Z3cWR3cXhtdnQiPiBodHRwczovL2NhbXB1
        c2FtcC5jb20vP3RpY2tldHMvNy95aGloZ3Znd3Z3cWR3cXhtdnQgIDwvYT4gPGJyLz48YnIvPlNp
        bXBseSBicmluZyBpdCB3aXRoIHlvdSBvbiB5b3VyIHNtYXJ0cGhvbmUsIG9yIHByaW50IHRoZSB0
        aWNrZXQgb3V0IHRvIGJlIHNjYW5uZWQgYXQgdGhlIGV2ZW50LiBFbmpveSwgYW5kIHdlIGFwcHJl
        Y2lhdGUgeW91ciBwdXJjaGFzZS48YnIvPjxici8+U2luY2VyZWx5LDxici8+VGhlIENhbXB1c0Ft
        cCBUZWFtPGltZyBzcmM9Imh0dHA6Ly9lbWFpbC5jYW1wdXNhbXAuY29tL3RyYWNraW5nL29wZW4/
        bXNnaWQ9c3U2Zy04a2ZkMHMwZyIgc3R5bGU9IndpZHRoOjFweDtoZWlnaHQ6MXB4IiBhbHQ9IiIg
        Lz4=

        ------=_NextPart_000_3F77_7A0DF805.A8C886C0--
    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we're taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison.

    Gmail Goes Public

    It's hard to believe that Gmail has only been around for seven years, and that for the first three years of its life it was invite-only. In 2007 Gmail dropped the invite-only requirement (although it would hold onto the "beta" tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can't stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we're not kidding.

    Deep Blue Proves Itself a Chess Master

    Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, in which Deep Blue held its own but ultimately lost 4-2, shook a lot of people up. What did it mean if something considered as elegant and quintessentially human an endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win, Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum.

    Birth of Thomas Edison

    Thomas Alva Edison was one of the most prolific inventors in history and holds an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture, including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we'd take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games.

    Other Notable Moments from This Week in Geek History

    Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn't mean we don't have space to highlight a few more in passing. This week in Geek History:

    1971 – Apollo 14 returns to Earth after the third Lunar mission.
    1974 – Birth of Robot Chicken creator Seth Green.
    1986 – Death of Dune creator Frank Herbert. Goodnight Dune.
    1997 – The Simpsons becomes the longest-running animated show on television.

    Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with "history" in the subject line and we'll be sure to add it to our list of trivia.

    Read the article

< Previous Page | 254 255 256 257 258 259 260 261 262 263 264 265  | Next Page >