Search Results

Search found 5528 results on 222 pages for 'offsite storage'.


  • What kind of storage with two-way replication for multi site C# application?

    - by twk
    Hi, I have a web-based system written in ASP.NET backed by MSSQL. A synchronized replica of this system has to run at mobile locations and must be available regardless of the state of the connection to the main system (interruptions of a few hours happen). For now I am using a copy of the main web application and a copy of the MSSQL server with merge replication to the main system. This works unreliably, and setting up the replication is a pain. The amount of data the system contains is not huge, so I can migrate to a different storage type. For the new version of this system I would like to implement a new replication scheme. I am considering migrating to db4o for storage, with its replication support. I am also thinking about other possible solutions like CouchDB, which has native replication support. I would like to stay with C#. Could you recommend a way to go for such a distributed environment? PS. Master-slave replication is not an option: any side must be allowed to add/update data.
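    If you do end up trying CouchDB from C#, a minimal sketch of kicking off continuous two-way replication through its _replicate endpoint might look like the following (the server URL and database names are placeholders, and error handling is omitted):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class ReplicationSetup
    {
        // CouchDB replication is one-way, so two requests are made to get two-way sync.
        static async Task ReplicateBothWays(string localDb, string remoteDb)
        {
            using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5984/") };
            foreach (var (source, target) in new[] { (localDb, remoteDb), (remoteDb, localDb) })
            {
                var json = $"{{\"source\":\"{source}\",\"target\":\"{target}\",\"continuous\":true}}";
                var response = await client.PostAsync("_replicate",
                    new StringContent(json, Encoding.UTF8, "application/json"));
                response.EnsureSuccessStatusCode();
            }
        }
    }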

    Read the article

  • How to design data storage for partitioned tagging system?

    - by Morgan Cheng
    How to design data storage for a huge tagging system (like Digg or Delicious)? There is already discussion about it, but it is about a centralized database. Since the data is expected to grow, we'll need to partition the data into multiple shards sooner or later. So, the question becomes: how to design data storage for a partitioned tagging system? The tagging system basically has 3 tables: Item (item_id, item_content), Tag (tag_id, tag_title), TagMapping (map_id, tag_id, item_id). That works fine for finding all items for a given tag and finding all tags for a given item, as long as the tables are stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy. For table Item, we can partition its content by its key item_id. For table Tag, we can partition its content by its key tag_id. For example, if we want to partition table Tag into K databases, we can simply store a given tag in database number (tag_id % K). But how do we partition table TagMapping? The TagMapping table represents a many-to-many relationship. The only solution I can imagine is duplication: the same TagMapping content is kept in two copies, one partitioned by tag_id and the other partitioned by item_id. To find all tags for a given item we query the copy partitioned by item_id, and to find all items for a given tag we query the copy partitioned by tag_id. As a result, there is data redundancy, and the application level has to keep all tables consistent, which looks hard. Is there any better solution to this many-to-many partitioning problem?
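    For what it's worth, the duplicated-copy approach can be kept fairly mechanical at the application level. A minimal C# sketch of the routing logic (the shard count, table names, and WriteToShard helper are hypothetical placeholders, not part of any particular framework) might look like this:

    // Route each TagMapping row to two shards: one chosen by item_id, one by tag_id.
    // Reads then go to whichever copy is partitioned by the key being queried on.
    class TagMappingRouter
    {
        private readonly int shardCount;

        public TagMappingRouter(int shardCount) => this.shardCount = shardCount;

        public int ShardByItem(long itemId) => (int)(itemId % shardCount);
        public int ShardByTag(long tagId) => (int)(tagId % shardCount);

        public void Insert(long mapId, long tagId, long itemId)
        {
            // Both writes must succeed (or be retried/reconciled) to keep the two copies consistent.
            WriteToShard("tagmapping_by_item", ShardByItem(itemId), mapId, tagId, itemId);
            WriteToShard("tagmapping_by_tag", ShardByTag(tagId), mapId, tagId, itemId);
        }

        private void WriteToShard(string table, int shard, long mapId, long tagId, long itemId)
        {
            // Placeholder: open a connection to the chosen shard and INSERT the row into the given table.
        }
    }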

    Read the article

  • Are C++ exceptions sufficient to implement thread-local storage?

    - by Potatoswatter
    I was commenting on an answer that thread-local storage is nice and recalled another informative discussion about exceptions where I supposed "The only special thing about the execution environment within the throw block is that the exception object is referenced by rethrow." Putting two and two together, wouldn't executing an entire thread inside a function-catch-block of its main function imbue it with thread-local storage? It seems to work fine:
    #include <iostream>
    #include <pthread.h>
    using namespace std;

    struct thlocal {
        string name;
        thlocal( string const &n ) : name(n) {}
    };

    thlocal &get_thread() {
        try { throw; }
        catch( thlocal &local ) { return local; }
    }

    void print_thread() {
        cerr << get_thread().name << endl;
    }

    void *kid( void *local_v ) try {
        thlocal &local = * static_cast< thlocal * >( local_v );
        throw local;
    } catch( thlocal & ) {
        print_thread();
        return NULL;
    }

    int main() try {
        thlocal local( "main" );
        throw local;
    } catch( thlocal & ) {
        print_thread();
        pthread_t th;
        thlocal kid_local( "kid" );
        pthread_create( &th, NULL, &kid, &kid_local );
        pthread_join( th, NULL );
        print_thread();
        return 0;
    }
    Is this novel or well-characterized? Was my initial premise correct? What kind of overhead does get_thread incur in, say, GCC and VC++? It would require throwing only exceptions derived from struct thlocal, but altogether this doesn't feel like an unproductive insomnia-ridden Sunday morning…

    Read the article

  • Write a batch file to copy files from one folder to another

    - by user73628
    I have a storage folder on the network in which all users store their active data on a server. That server is now going to be replaced by a new one due to a space problem, so I need to copy the sub-folders and files from the old server's storage folder to the new server's storage folder. For example: from \\Oldeserver\storage\data & files to \\New server\storage\data & files.
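    A minimal sketch of such a batch file using robocopy (available out of the box on newer Windows Server versions; the UNC paths are the ones from the question and may need adjusting, and the switches shown are just a reasonable starting point) could look like this:

    @echo off
    rem Copy the old storage share to the new one, including all subfolders, with retries for network hiccups.
    robocopy "\\Oldeserver\storage\data & files" "\\New server\storage\data & files" /E /COPY:DAT /R:3 /W:10 /LOG:copylog.txt
    rem On older systems without robocopy, a one-shot alternative is:
    rem xcopy "\\Oldeserver\storage\data & files" "\\New server\storage\data & files" /E /H /C /I /Y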

    Read the article

  • Oracle Systems and Solutions at OpenWorld Tokyo 2012

    - by ferhat
    Oracle OpenWorld Tokyo and JavaOne Tokyo will start next week, April 4th. We will cover Oracle systems and Oracle Optimized Solutions in several keynote talks and general sessions. The full schedule can be found here. Come by the DemoGrounds to learn more about mission-critical integration and optimization of the complete Oracle stack. Our Oracle Optimized Solutions experts will be on hand to discuss one-on-one several of Oracle's systems solutions and technologies. Oracle Optimized Solutions are proven blueprints that eliminate integration guesswork by combining best-in-class hardware and software components to deliver complete system architectures that are fully tested, and include documented best practices that reduce integration risks and deliver better application performance. And because they are highly flexible by design, Oracle Optimized Solutions can be implemented as an end-to-end solution or easily adapted into existing environments.
    Oracle Optimized Solutions, Servers, Storage, and Oracle Solaris - Sessions, Keynotes, and General Session Talks
    Wednesday, April 4
    - 9:00 - 11:15  Keynote: ENGINEERED FOR INNOVATION - Engineered Systems. Mark Hurd, President, Oracle; Takao Endo, President & CEO, Oracle Corporation Japan; John Fowler, EVP of Systems, Oracle; Ed Screven, Chief Corporate Architect, Oracle (English Session) [K1-01]
    - 11:50 - 12:35  Simplifying IT: Transforming the Data Center with Oracle's Engineered Systems. Robert Shimp, Group VP, Product Marketing, Oracle (English Session) [S1-01]
    - 15:20 - 16:05  Introducing Tiered Storage Solution for low cost Big Data Archiving [S1-33]
    - 16:30 - 17:15  Simplifying IT - IT System Consolidation that also Accelerates Business Agility [S1-42]
    Thursday, April 5
    - 9:30 - 11:15  Keynote: Extreme Innovation. Larry Ellison, Chief Executive Officer, Oracle (English Session) [K2-01]
    - 11:50 - 13:20  General Session: Server and Storage Systems Strategy. John Fowler, EVP of Systems, Oracle (English Session) [G2-01]
    - 16:30 - 17:15  Top 5 Reasons why ZFS Storage Appliance is "The cloud storage", by SAKURA Internet Inc. [L2-04]
    - 16:30 - 17:15  The UNIX based Exa* Performance IT Integration Platform - SPARC SuperCluster [S2-42]
    - 17:40 - 18:25  Full stack solutions of hardware and software with SPARC SuperCluster and Oracle E-Business Suite to minimize business cost while maximizing agility, performance, and availability [S2-53]
    Friday, April 6
    - 9:30 - 11:15  Keynote: Oracle Fusion Applications & Cloud. Robert Shimp, Group VP, Product Marketing; Anthony Lye, Senior VP (English Session) [K3-01]
    - 11:50 - 12:35  IT at Oracle: The Art of IT Transformation to Enable Business Growth (English Session) [S3-02]
    - 13:00 - 13:45  ZFS Storage Appliance: Architecture of high efficiency and high performance [S3-13]
    - 14:10 - 14:55  Why "Niko Niko doga" chose ZFS Storage Appliance to support their growing requirements and storage infrastructure, by DWANGO Co., Ltd. [S3-21]
    - 15:20 - 16:05  Osaka University: Lower TCO and higher flexibility for student study by Virtual Desktop, by Osaka University [S3-33]
    Oracle Developer Sessions with Oracle Systems and Oracle Solaris - Friday, April 6
    - 13:00 - 13:45  Oracle Solaris 11 Developers [D3-03]
    - 13:00 - 14:30  Oracle Solaris Tuning Contest (Hands-On Lab) [D3-04]
    - 14:00 - 14:35  How to build a high performance and high security Oracle Database environment with Oracle SPARC/Solaris (English Session) [D3-13]
    - 15:00 - 15:45  IT Assets preservation and constructive migration with Oracle Solaris virtualization [D3-24]
    - 16:00 - 17:30  The best packaging system for cloud environment - Creating an IPS package [D3-34]
    Follow Oracle Infrared at Twitter, Facebook, Google+, and LinkedIn to catch the latest news, developments, announcements, and inside views from Oracle Optimized Solutions.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and delete the Azure deployment. The standing joke with the audience was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour. This article will take a run through how I achieved this.
    Ray Tracing
    Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.
    Pin-Board Toys
    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that became popular in the 80’s, and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.
    PolyRay
    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produces a high quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
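    For context, a minimal C# sketch of the kind of nested-loop generator described above (the pin spacing, grid size, radii, and output path are made-up illustration values, not the ones used for the real scene files) might look like this:

    using System.IO;
    using System.Text;

    class PinBoardSceneWriter
    {
        // Emit PolyRay object definitions for two interleaved grids of pins, one sphere + cylinder per pin.
        static void Main()
        {
            var scene = new StringBuilder();
            double spacing = 0.14;              // distance between pin centres (made-up value)
            for (int row = 0; row < 76; row++)
            {
                for (int col = 0; col < 76; col++)
                {
                    // Offset every second row by half the spacing to interleave the two grids.
                    double xOffset = (row % 2 == 0) ? 0.0 : spacing / 2.0;
                    double x = -5.0 + col * spacing + xOffset;
                    double y = -1.0 + row * spacing;
                    double z = 0.0;             // later set per pin from the depth image
                    scene.AppendLine($"object {{ sphere <{x:F3}, {y:F3}, {z:F3}>, 0.06 chrome }}");
                    scene.AppendLine($"object {{ cylinder <{x:F3}, {y:F3}, {z:F3}>, <{x:F3}, {y:F3}, {z - 0.6:F3}>, 0.02 chrome }}");
                }
            }
            File.WriteAllText("frame0001.pi", scene.ToString());
        }
    }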
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately a series of frames from a depth camera could be used.
    Windows Kinect
    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.
    Creating a Depth Field Animation
    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.
    Rendering a Test Frame
    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.
    Windows Azure Worker Roles
    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
    Number of Servers / Cost
    1 / $500
    16 / $8,000
    256 / $128,000
    As well as the cost of the servers, there would be additional costs for networking, racks etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
    Number of Worker Roles / Render Time / Cost
    1 / 256 hours / $30.72
    16 / 16 hours / $30.72
    256 / 1 hour / $30.72
    Using worker roles in Windows Azure provides the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.
    Creating a Render Farm in Windows Azure
    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
    On-Premise
    - Windows Kinect – used in combination with the Kinect Explorer to create a stream of depth images.
    - Animation Creator – this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
    - Process Monitor – this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
    - Image Downloader – this application polls the image queue and downloads the rendered animation files once they are complete.
    Windows Azure
    - Azure Storage – queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The first 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.   Completed Animation I’ve uploaded the completed animation to YouTube, a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes CPU to render, this works out at 152 hours of compute time, rounded up to the nearest hour. As the usage for the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilized the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective. ·         Windows Azure can provide massive compute power, on demand, in a matter of minutes. ·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution. ·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget. ·         No charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.) Tips for using Windows Azure for Grid Computing Scenarios I found the implementation of a render farm using Windows Azure a fairly simple scenario to implement. I was impressed by ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes, in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure. ·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. ·         Windows Azure accounts are typically limited to 20 cores. 
If you need to use more than this, a call to support and a credit card check will be required. ·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started. ·         Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle. ·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective. ·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles. ·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!

    Read the article

  • How do I get the data from a surveillance camera to storage I can stream from?

    - by radbyx
    Hi, my sister's house was robbed Christmas evening :( I talked with her about building a surveillance system for her. The idea is to have a system that detects intruders, sends an SMS to you, and streams the footage to a private website. The hard part: how and where do I store the data from the camera so it's streamable? I think I can manage the streaming, the website and the SMS server, but I need the data layer as a foundation. Thanks, any help is much appreciated.

    Read the article

  • Improving TCP performance over a gigabit network with lots of connections and high traffic for storage and streaming services

    - by Linux Guy
    I have two servers, Both servers hardware Specification are Processor : Dual Processor RAM : over 128 G.B Hard disk : SSD Hard disk Outging Traffic bandwidth : 3 Gbps network cards speed : 10 Gbps Server A : for Encoding videos Server B : for storage videos andstream videos over web interface like youtube The inbound bandwidth between two servers is 10Gbps , the outbound bandwidth internet bandwidth is 500Mpbs Both servers using public ip addresses in public and private network Both servers transfer and connection on nginx port , and the server B used for streaming media , like youtube stream videos Both servers in same network , when i do ping from Server A to Server B i got high time latency above 1.0ms , the time range time=52.7 ms to time=215.7 ms - This is the output of iftop utility 353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb mqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqq server.example.com => ip.address 6.36Mb 4.31Mb 1.66Mb <= 158Kb 94.8Kb 35.1Kb server.example.com => ip.address 1.23Mb 4.28Mb 1.12Mb <= 17.1Kb 83.5Kb 21.9Kb server.example.com => ip.address 395Kb 3.89Mb 1.07Mb <= 6.09Kb 109Kb 28.6Kb server.example.com => ip.address 4.55Mb 3.83Mb 1.04Mb <= 55.6Kb 45.4Kb 13.0Kb server.example.com => ip.address 649Kb 3.38Mb 1.47Mb <= 9.00Kb 38.7Kb 16.7Kb server.example.com => ip.address 5.00Mb 3.32Mb 1.80Mb <= 65.7Kb 55.1Kb 29.4Kb server.example.com => ip.address 387Kb 3.13Mb 1.06Mb <= 18.4Kb 39.9Kb 15.0Kb server.example.com => ip.address 3.27Mb 3.11Mb 1.01Mb <= 81.2Kb 64.5Kb 20.9Kb server.example.com => ip.address 1.75Mb 3.08Mb 2.72Mb <= 16.6Kb 35.6Kb 32.5Kb server.example.com => ip.address 1.75Mb 2.90Mb 2.79Mb <= 22.4Kb 32.6Kb 35.6Kb server.example.com => ip.address 3.03Mb 2.78Mb 1.82Mb <= 26.6Kb 27.4Kb 20.2Kb server.example.com => ip.address 2.26Mb 2.66Mb 1.36Mb <= 51.7Kb 49.1Kb 24.4Kb server.example.com => ip.address 586Kb 2.50Mb 1.03Mb <= 4.17Kb 26.1Kb 10.7Kb server.example.com => ip.address 2.42Mb 2.49Mb 2.44Mb <= 31.6Kb 29.7Kb 29.9Kb server.example.com => ip.address 2.41Mb 2.46Mb 2.41Mb <= 26.4Kb 24.5Kb 23.8Kb server.example.com => ip.address 2.37Mb 2.39Mb 2.40Mb <= 28.9Kb 27.0Kb 28.5Kb server.example.com => ip.address 525Kb 2.20Mb 1.05Mb <= 7.03Kb 26.0Kb 12.8Kb qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq TX: cum: 102GB peak: 1.65Gb rates: 1.46Gb 1.44Gb 1.48Gb RX: 1.31GB 24.3Mb 19.5Mb 18.9Mb 20.0Mb TOTAL: 103GB 1.67Gb 1.48Gb 1.46Gb 1.50Gb I check the transfer speed using iperf utility From Server A to Server B # iperf -c 0.0.0.2 -p 8777 ------------------------------------------------------------ Client connecting to 0.0.0.2, TCP port 8777 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec My Current Connections in Server B # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c 2072 TIME_WAIT 28 SYN_RECV 1 LISTEN 189 LAST_ACK 139 FIN_WAIT2 373 FIN_WAIT1 3381 ESTABLISHED 34 CLOSING Server A Network Card Information Settings for eth0: Supported ports: [ TP ] Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 10000Mb/s 
Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: external Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes Server B Network Card Information Settings for eth2: Supported ports: [ FIBRE ] Supported link modes: 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: No Speed: 10000Mb/s Duplex: Full Port: Direct Attach Copper PHYAD: 0 Transceiver: external Auto-negotiation: off Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes ifconfig server A eth0 Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0 TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0 TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB) ifconfig Server B eth2 Link encap:Ethernet HWaddr 00:25:90:82:C4:FE inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0 TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0 TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB) Netstat -i on Server B # netstat -i Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 9000 0 42098629968 0 2131223717 0 73698797854 0 0 0 BMRU lo 65536 0 1077908 0 0 0 1077908 0 0 0 LRU I Turn up send/receive buffers on the network card to 2048 and problem still persist I increase the MTU for server A and problem still persist and i increase the MTU for server B for better connectivity and transfer speed but it couldn't transfer at all The problem is : as you can see from iperf utility, the transfer speed from server A to server B slow when i restart network service in server B the transfer in server A at full speed, after 2 minutes , it's getting slow How could i troubleshoot slow speed issue and fix it in server B ? Notice : if there any other commands i should execute in servers for more information, so it might help resolve the problem , let me know in comments

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"
    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM   lineorder
    WHERE  lo_shipmode = 5;
    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views) the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords "IN MEMORY" in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, rather than the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.
    1. Access only the column data needed
    The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.
    2. Scan and filter data in its compressed format
    When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than in the same query against the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.
    3. Prune out any unnecessary data within each column
    The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.
    4. Use SIMD to apply filter predicates
    For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core scan rate that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session level statistics. You can monitor the session level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing:
    column display_name format a30
    SELECT display_name
    FROM   v$statname
    WHERE  display_name LIKE 'IM%';
    If we check the session statistics after we execute our query, the results would be as follows:
    SELECT Max(lo_ordtotalprice) most_expensive_order
    FROM   lineorder
    WHERE  lo_shipmode = 5;
    SELECT display_name
    FROM   v$statname
    WHERE  display_name IN ('IM scan CUs columns accessed',
                            'IM scan segments minmax eligible',
                            'IM scan CUs pruned');
    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • Computer Networks UNISA - Chap 14 – Ensuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    - Identify the characteristics of a network that keep data safe from loss or damage
    - Protect an enterprise-wide network from viruses
    - Explain network and system level fault tolerance techniques
    - Discuss issues related to network backup and recovery strategies
    - Describe the components of a useful disaster recovery plan and the options for disaster contingencies
    What are integrity and availability?
    Integrity – the soundness of a network's programs, data, services, devices, and connections. Availability – how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, human error, users, etc. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines…
    - Allow only network administrators to create or modify NOS and application system users
    - Monitor the network for unauthorized access or changes
    - Record authorized system changes in a change management system
    - Install redundant components
    - Perform regular health checks on the network
    - Check system performance, error logs, and the system log book regularly
    - Keep backups
    - Implement and enforce security and disaster recovery policies
    These are just some of the basics…
    Malware
    Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware…
    - Boot sector viruses
    - Macro viruses
    - File infector viruses
    - Worms
    - Trojan horses
    - Network viruses
    - Bots
    Malware Characteristics
    Some common characteristics of malware include…
    - Encryption
    - Stealth
    - Polymorphism
    - Time dependence
    Malware Protection
    There are various tools available to protect you from malware, called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware, including…
    - Signature scanning
    - Integrity checking
    - Monitoring for unexpected file changes or virus-like behaviours
    It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general purpose malware policies that can be implemented to protect your network, including…
    - Every computer in an organization should be equipped with malware detection and cleaning software that runs regularly
    - Users should not be allowed to alter or disable the anti-malware software
    - Users should know what to do in case the anti-malware program detects a malware virus
    - Users should be prohibited from installing any unauthorized software on their systems
    - System wide alerts should be issued to network users notifying them if a serious malware virus has been detected
    Fault Tolerance
    Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. The following describe some of the areas that need to be considered for fault tolerance.
    - Environment (temperature and humidity)
    - Power
    - Topology and connectivity
    - Servers
    - Storage
    Power
    Typical power flaws include:
    - Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power
    - Noise – fluctuation in voltage levels caused by other devices on the network or electromagnetic interference
    - Brownout – a sag in voltage for just a moment
    - Blackout – a complete power loss
    There are various alternate power sources to consider, including UPSs and generators. UPSs are found in two categories…
    - Standby UPS – provides continuous power when mains goes down (brief period of switching over)
    - Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously)
    Servers
    There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.
    Storage
    There are various techniques available, including the following…
    - RAID arrays
    - NAS (Network Attached Storage)
    - SANs (Storage Area Networks)
    Data Backup
    A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist with various media, which vary in cost and speed, including…
    - Optical media
    - Tape backup
    - External disk drives
    - Network backups
    Backup Strategy
    After selecting the appropriate tool for performing your server's backup, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include…
    - What data must be backed up
    - At what time of day or night will the backups occur
    - How will you verify the accuracy of the backups
    - Where and for how long will backup media be stored
    - Who will take responsibility for ensuring that backups occurred
    - How long will you save backups
    - Where will backup and recovery documentation be stored
    Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, including the following (a small illustrative sketch follows at the end of these notes)…
    - Full backup – all data on all servers is copied to storage media
    - Incremental backup – only data that has changed since the last full or incremental backup is copied to a storage medium
    - Differential backup – only data that has changed since the last full backup is copied to a storage medium
    Disaster Recovery
    Disaster recovery is the process of restoring your critical functionality and data after an enterprise wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some appropriately configured devices. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
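    To make the difference between the three backup methods concrete, here is a small illustrative C# sketch (the weekly schedule and day numbers are made up purely for the example) showing which backup sets a restore would need under each method:

    using System;
    using System.Collections.Generic;

    class BackupRestoreExample
    {
        // Assume a full backup on day 0 and a daily backup on days 1..6 (made-up schedule).
        // Shows which sets are needed to restore the data as of a given day under each method.
        static void Main()
        {
            int restoreDay = 4;

            // Full: every backup is complete, so only the latest one is needed.
            Console.WriteLine($"Full:         restore from the day {restoreDay} backup only");

            // Incremental: the last full backup plus every incremental taken after it, applied in order.
            var incremental = new List<string> { "full (day 0)" };
            for (int day = 1; day <= restoreDay; day++) incremental.Add($"incremental (day {day})");
            Console.WriteLine("Incremental:  " + string.Join(" -> ", incremental));

            // Differential: the last full backup plus only the most recent differential.
            Console.WriteLine($"Differential: full (day 0) -> differential (day {restoreDay})");
        }
    }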

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by Graeme Donaldson
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each. lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, the same version as the existing hosts, and it's still not recognising the VMFS. I think I'm going to SVmotion everything off that datastore then format it. Update 2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via the datastore browser, but the datastore is not seen by host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point it's also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage it for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing dramatically in day-to-day business. An enterprise can't survive without a proper mobility strategy, and a better mobility strategy with faster delivery of your mobile apps will give your business and IT strategy extra mileage. So organizations and mobile developers are looking for different strategies to meet this demand and adopting different development approaches for their mobile apps. Some developers adopt hybrid mobile app development platforms to deliver their products on multiple platforms with fast time-to-market. Others adopt a Mobile Enterprise Application Platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration.

    The Challenges of Enterprise Mobility
    The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front end for multiple platforms. The most important part of enterprise mobile apps is exposing your enterprise data to mobile devices, and the real pain is that your business data might reside in a lot of different systems, including legacy systems, ERP systems, etc., deployed with a lot of security restrictions. Exposing data from on-premises servers is not an easy thing for most business organizations. Many organizations spend too much time on their front-end development strategy but lack a back-end strategy for exposing business data to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services. For some mobile apps we can't predict the user base, especially for products where customers can grow at any time. And for today's mobile apps, faster time-to-market is critical, so spending too much time on a mobile app's scalability will not be worthwhile. The real power of the Cloud is its agility and on-demand scalability, where we can scale our applications up and down very easily. It would be great if we could apply the power of the Cloud to mobile apps. Using the Cloud for mobile apps is a natural fit: we can use the Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, enjoying the full power of the Cloud with a greater level of on-demand scalability and operational agility. So Cloud based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy the massive scalability provided by public cloud vendors such as Microsoft Windows Azure.

    Mobile Backend as a Service (MBaaS)
    We have discussed the key challenges of enterprise mobile apps and how we can leverage the Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and HTML5, which can be used as a backend for your mobile apps with the scalability power of the Cloud.
    The key features of a typical MBaaS platform include:
    - Cloud-based storage for your application data.
    - Automatic REST API services on the application data, for CRUD operations.
    - Native push notification services with massive scalability.
    - User management services for authenticating users.
    - User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter.
    - Scheduler services for periodically sending data to mobile devices.
    - Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily and securely accessing the mobile services from mobile apps.
    Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services, and MBaaS-based REST APIs can be used for integrating with enterprise backend systems (a short sketch of consuming such an API appears below). We can use the same mobile services for multiple platforms, so we can reuse the application logic across mobile platforms. Public cloud vendors are building these mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great MBaaS offering that leverages the Windows Azure cloud platform's PaaS capabilities. The hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service from the Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers.

    The Challenges of MBaaS Solutions
    If you are building your mobile apps against new data storage, it will be very easy, since there are no integration challenges to face. But in most use cases you have to expose application data that is stored on on-premises servers, which might sit behind VPNs and firewalls. Exposing this data to your MBaaS solution with proper security is a big challenge. The capability of your MBaaS vendor is very important, as many enterprise mobile apps have to interact with legacy systems, so you should be very careful when choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing application data stored in on-premises legacy systems, where your solution architecture and strategy matter more than platforms and tools.

    Windows Azure Mobile Services
    Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices and can be used as a cloud backend for your mobile apps, providing global availability and reach. It provides storage services, user management with social network integration, push notification services, and scheduler services, along with native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules. You can leverage the Windows Azure cloud platform to solve many of the MBaaS challenges discussed above: the entire Windows Azure platform can play a key role as the backend for your mobile apps.
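
    As a rough illustration of the "automatic REST API on the application data" feature listed above (this sketch is not from the original post; the service URL, table name, and API-key header are assumptions standing in for whatever a particular MBaaS vendor documents, and the third-party requests library is assumed to be available), a client might perform CRUD calls against a generic table endpoint like this, in Python:

        import json
        import requests

        # Illustrative values only; replace with the endpoint and key your MBaaS provider issues.
        SERVICE_URL = "https://example-mbaas.invalid/tables/todoitem"
        HEADERS = {
            "Content-Type": "application/json",
            "X-Api-Key": "<application key from the MBaaS provider>",
        }

        def create_item(text):
            # POST to the table endpoint creates a record in the cloud-hosted storage.
            resp = requests.post(SERVICE_URL, headers=HEADERS, data=json.dumps({"text": text}))
            resp.raise_for_status()
            return resp.json()  # usually echoes the record, including its generated id

        def list_items():
            # GET returns the stored records; many platforms also accept query filters here.
            resp = requests.get(SERVICE_URL, headers=HEADERS)
            resp.raise_for_status()
            return resp.json()

        def delete_item(item_id):
            # DELETE removes a single record by id.
            resp = requests.delete(SERVICE_URL + "/" + str(item_id), headers=HEADERS)
            resp.raise_for_status()

        if __name__ == "__main__":
            created = create_item("buy milk")
            print(list_items())
            delete_item(created["id"])

    The point is not the client language but the shape of the service: the CRUD surface is generated by the MBaaS platform from the stored data, and the same endpoint can be consumed from the native SDKs on each mobile platform.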
    With Windows Azure you can easily connect to your on-premises systems, which is a key requirement for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes it the de facto platform for enterprise mobility for enterprises that have been leveraging the Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services is heading into its next evolution, where you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose among multiple storage providers (say, a SQL Server provider and a Table storage provider) when creating a new mobile service instance. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well.

    MBaaS Over MEAP
    Recently, many larger enterprises have adopted a Mobile Enterprise Application Platform (MEAP) for their mobile apps. I haven't worked on any production MEAP solution, but I hear that developers really struggle with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high, and I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front end, with a cloud-hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used to integrate with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems; these REST APIs can be hosted on the Cloud, where we can enjoy the power of the Cloud for our services. If you have REST APIs for your enterprise data, you can easily build mobile front ends for multiple platforms. You can follow me on Twitter @shijucv

    Read the article

  • More information wanted on error: CREATE ASSEMBLY for assembly failed because assembly failed verif

    - by turnip.cyberveggie
    I have a small application that uses SQL Server 2005 Express with CLR stored procedures. It has been successfully installed and runs on many computers running XP and Vista. To create the assembly the following SQL is executed (names changed to protect the innocent): CREATE ASSEMBLY myAssemblyName FROM 'c:\pathtoAssembly\myAssembly.dll' On one computer (a test machine that reflects other computers targeted for installation) that is running Vista and has some very aggressive security policy restrictions I receive the following error: << Start Error Message Msg 6218, Level 16, State 2, Server domain\servername, Line 2 CREATE ASSEMBLY for assembly 'myAssembly' failed because assembly 'myAssembly' failed verification. Check if the referenced assemblies are up-to-date and trusted (for external_access or unsafe) to execute in the database. CLR Verifier error messages if any will follow this message [ : myProcSupport.Axis::Proc1][mdToken=0x6000004] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc2][mdToken=0x6000005] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc3][mdToken=0x6000006] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::.ctor][mdToken=0x600000a] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc4][mdToken=0x6000001] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc5][mdToken=0x6000002] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc6][mdToken=0x6000007] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc7][mdToken=0x6000008] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc8][mdToken=0x6000009] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc8][mdToken=0x600000b] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc9][mdToken=0x600000c] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.... << End Error Message The C# DLL is defined as “Safe” as it only uses data contained in the database. The DLL is not normally signed, but I provided a signed version to test and received the same results. The installation is being done by someone else, and I don’t have access to the box, but they are executing scripts that I provided and work on other computers. I have tried to find information about this error beyond what the results of the script provide, but I haven’t found anything helpful. The person executing the script to create the assembly is logged in with an Admin account, is running CMD as admin, is connecting to the DB via Windows Authentication, has been added to the dbo_owner role, and added to the server role SysAdmin with the hopes that it is a permissions issue. This hasn't changed anything. Do I need to configure SQL Server 2005 Express differently for this environment? Is this error logged anywhere other than just the output from SQLCMD? What could cause this error? Could Vista security policies cause this? I don’t have access to the computer (the customer is doing the testing) so I can’t examine the box myself. TIA

    Read the article

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an off-site backup which is filled with 1.7TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people as to what the best option is from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server that is an exact duplicate of the first, so that if the first server is down, users are re-routed to the second server without noticing the first is even down? This would create an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is the above a true solution to pricey backup, or is offsite backup absolutely necessary? How would I go about doing this (obviously it's pretty complex, so links to some reading material or the terminology for the procedure would be great)? Appreciate the help and advice.

    Read the article

  • Can't make AWUS036H work in Ubuntu 12.10

    - by sfrj
    I am using 64-bit Ubuntu 12.10. This is my kernel version: Linux 3.5.0-19-generic #30-Ubuntu SMP Tue Nov 13 17:48:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux. My wireless card is an AWUS036H. The first thing I do to install the driver is copy it from the CD to the Downloads folder: cd /media/me/AWUS036H/Drivers/RTL8187L/Unix (Linux)/Linux driver for kernel 2.6.X$ cp rtl8187_linux_26.1025.0328.2007.tar.gz ~/Downloads/ Then I extract the tar: tar xvfz rtl8187_linux_26.1025.0328.2007.tar.gz I navigate into the extracted folder and try to follow the instructions in the ReadMe.txt: cd rtl8187_linux_26.1025.0328.2007 These are the contents of the folder: drv.tar.gz makedrv stack.tar.gz wlan0rmv ieee80211 ReadMe.txt wlan0dhcp wlan0up ifcfg-wlan0 rtl8187 wlan0down wpa_supplicant-0.4.9 This is what the ReadMe.txt says: Release Date: 2006-02-09, ver 1.2 RTL8187 Linux driver version 1.2 --This driver supports RealTek RTL8187 Wireless LAN driver for Fedora Core 2/3/4/5, Debian 3.1, Mandrake 10.2/Mandriva 2006, SUSE 9.3/10.1/10.2, Gentoo 3.1, etc. - Support Client mode for either infrastructure or adhoc mode - Support WEP and WPAPSK connection < Component > The driver is composed of several parts: 1. Module source code stack.tar.gz drv.tar.gz 2. Script to build the modules makedrv 3. Script to load/unload modules wlan0up wlan0down 4. Script and configuration for DHCP "ReadMe.txt" [readonly] 140 lines, 4590 characters So what I do next is extract both of the compressed files: sudo tar xvfz drv.tar.gz sudo tar xvfz stack.tar.gz These two commands add some data to the folders ieee80211 and rtl8187. At this point I get lost, and I don't know what to do. If I go into each of these two folders and run the sudo make command, then I get errors like this one: sudo make make -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187 modules make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic' CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.c:64:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187.h:29:26: fatal error: linux/config.h: No such file or directory compilation terminated. 
make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o] Error 1 make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic' make: *** [modules] Error 2 If I try to run any of the script ./makedrv that the instructions describe, then I also get an error: ~/Downloads/rtl8187_linux_26.1025.0328.2007$ sudo ./makedrv [sudo] password for me: ieee80211/ ieee80211/license ieee80211/ieee80211_crypt.c ieee80211/ieee80211_tx.c ieee80211/ieee80211_softmac.c ieee80211/ieee80211_softmac_wx.c ieee80211/ieee80211_module.c ieee80211/ieee80211_crypt_ccmp.c ieee80211/ieee80211_rx.c ieee80211/tags ieee80211/ieee80211_crypt_tkip.c ieee80211/Makefile ieee80211/readme ieee80211/.tmp_versions/ ieee80211/.tmp_versions/ieee80211-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_wep-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_tkip-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_ccmp-rtl.mod ieee80211/ieee80211_crypt_wep.c ieee80211/ieee80211.h ieee80211/ieee80211_wx.c ieee80211/ieee80211_crypt.h rtl8187/ rtl8187/license rtl8187/r8180_rtl8225z2.c rtl8187/r8180_rtl8225.h rtl8187/r8187_led.c rtl8187/r8180_93cx6.h rtl8187/r8180_wx.h rtl8187/r8180_hw.h rtl8187/copying rtl8187/r8187_led.h rtl8187/r8180_pm.h rtl8187/tags rtl8187/r8187.h rtl8187/Makefile rtl8187/r8180_rtl8225.c rtl8187/readme rtl8187/install rtl8187/.tmp_versions/ rtl8187/.tmp_versions/r8187.mod rtl8187/changes rtl8187/r8180_wx.c rtl8187/r8180_pm.c rtl8187/r8187_core.c rtl8187/r8180_93cx6.c rtl8187/authors rtl8187/ieee80211.h rtl8187/ieee80211_crypt.h rm -f *.mod.c *.mod *.o .*.cmd *.ko *~ rm -rf /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/tmp make -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211 modules make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic' CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.o In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:1019:24: error: field ‘ps_task’ has incomplete type /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_scan_wq’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:421:2: warning: passing argument 2 of ‘queue_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:371:12: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_stop_scan’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:495:3: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by 
default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_associate_abort’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:915:2: warning: passing argument 2 of ‘queue_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:371:12: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_rx_frame_softmac’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:1527:3: error: implicit declaration of function ‘tasklet_schedule’ [-Werror=implicit-function-declaration] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_stop_protocol_rtl’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2120:2: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_init’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:78: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:2: error: ‘INIT_WORK’ undeclared (first use in this function) /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:2: note: each undeclared identifier is reported only once for each function it appears in /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2230:88: error: macro "INIT_WORK" passed 3 arguments, 
but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2231:94: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2232:96: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2233:82: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2234:82: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2244:2: error: implicit declaration of function ‘tasklet_init’ [-Werror=implicit-function-declaration] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_free’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2255:2: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_wpa_set_encryption’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2489:3: error: implicit declaration of function ‘request_module’ [-Werror=implicit-function-declaration] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2518:3: error: implicit declaration of function ‘try_module_get’ [-Werror=implicit-function-declaration] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: At top level: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: data definition has no type or storage class [enabled by default] 
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: data definition has no type or storage 
class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: parameter names (without types) in function declaration [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: data definition has no type or storage class [enabled by default] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: parameter names (without types) in function declaration [enabled by default] In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:1212:37: warning: ‘netdev_priv’ is static but used in inline function ‘ieee80211_priv’ which is not static [enabled by default] cc1: some warnings being treated as errors make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.o] Error 1 make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic' make: *** [modules] Error 2 rm -f *.mod.c *.mod *.o .*.cmd *.ko *~ rm -rf /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/tmp make -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187 modules make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic' CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.c:64:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187.h:29:26: fatal error: linux/config.h: No such file or directory compilation terminated. make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o] Error 1 make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic' make: *** [modules] Error 2 Can somebody give me a hand finding out what I need to do to make my wifi card work? Update This is the output of the lsusb command lsusb Bus 003 Device 002: ID 147e:1000 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Read the article

  • Django app using Amazon AWS S3 storage instead of a DB?

    - by farble1670
    New to Python here, so bear with me ... I'm looking at Django for a rapid prototype of a photo-sharing app with an Amazon AWS S3 storage back end. However, as far as I can tell, Django is tailored toward the typical database MVC type of pattern. Is there a way to, for example, provide a custom Django model implementation that talks to S3 instead of a DB? A custom DB engine? Would either of these be practical, or am I looking in the wrong direction? Thanks.
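
    (Not part of the original question, but a sketch of the direction being asked about.) Django separates file storage from the model layer, so one common approach is to keep an ordinary relational model for the photo metadata and plug a custom storage backend into the ImageField so that only the image bytes go to S3. The django-storages project provides a ready-made backend; a hand-rolled minimal version, assuming boto3 and an illustrative bucket setting name, could look roughly like this:

        import boto3
        from django.conf import settings
        from django.core.files.base import ContentFile
        from django.core.files.storage import Storage

        class S3Storage(Storage):
            """Minimal illustrative S3 storage backend; bucket/setting names are assumptions."""

            def __init__(self, bucket=None):
                self.bucket = bucket or settings.AWS_STORAGE_BUCKET_NAME
                self.client = boto3.client("s3")

            def _save(self, name, content):
                # Called by Django when a FileField/ImageField is saved.
                self.client.put_object(Bucket=self.bucket, Key=name, Body=content.read())
                return name

            def _open(self, name, mode="rb"):
                obj = self.client.get_object(Bucket=self.bucket, Key=name)
                return ContentFile(obj["Body"].read(), name=name)

            def exists(self, name):
                try:
                    self.client.head_object(Bucket=self.bucket, Key=name)
                    return True
                except self.client.exceptions.ClientError:
                    return False

            def url(self, name):
                # Presigned URL so browsers can fetch the photo straight from S3.
                return self.client.generate_presigned_url(
                    "get_object", Params={"Bucket": self.bucket, "Key": name}
                )

    A model could then declare something like photo = models.ImageField(storage=S3Storage()) while the rest of the record stays in normal database-backed fields, which avoids having to replace the DB engine altogether.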

    Read the article

  • How can I remove .NET isolated storage setting folders during WiX uninstallation?

    - by Luke
    I would like to remove the isolated storage folders that are created by a .NET application when using My.Settings etc. The settings files are stored in a location like C:\Users\%Username%\AppData\Roaming\App\App.exe_Url_r0q1rvlnrqsgjkcosowa0vckbjarici4 As per this Stack Overflow question (Removing files when uninstalling WiX), I can uninstall a folder using: <Directory Id="AppDataFolder" Name="AppDataFolder"> <Directory Id="MyAppFolder" Name="My"> <Component Id="MyAppFolder" Guid="YOURGUID-7A34-4085-A8B0-8B7051905B24"> <CreateFolder /> <RemoveFile Id="PurgeAppFolder" Name="*.*" On="uninstall" /> </Component> </Directory> </Directory> <!-- LocalAppDataFolder--> This doesn't support sub-folders etc. Is the only option a custom .NET action, or is there a simpler approach for removing these .NET-generated settings folders?

    Read the article

  • Opening Office 2007 Documents from in memory storage - How?

    - by John S
    Hi there, I'm a C++ developer wrestling with updating an application that had made extensive use of the IStorage interface to open pre-Office 2007 documents from in-memory storage (via ILockBytes). If you are still following me so far, you probably know that the new Office Document formats are incompatible with IStorage containers. The application I'm trying to update, relied upon the IPersistStorage interface that all Office applications have, and the code as written calls the load method of IPersistStorage to read in a document from IStorage interface. So the question is.... What kind of COM interfaces are available to me to read in, from an in memory container, an Office 2007 document? John "S"

    Read the article

  • I want to use the fleximage gem and s3 for storage, but don't want dev/qa/test env's to use s3

    - by Kevin Bedell
    I have a Rails app that I'm going to host on Engine Yard and want to store image files on S3. But I don't know if I want all developer machines to be using S3 for storage of all our test and dev images. Maybe it's not an issue -- but it seems like a waste to have everyone storing all our images in S3. I've heard of some people who store images on S3 'hacking' dev environments to store images locally on the file system -- and then using S3 in prod only. What are other people doing?

    Read the article

  • What mail storage should I choose for our web application; IMAP, key-value store, RDBMS, ...

    - by tvrtko
    I have to store e-mail messages for use with our application. I have "metadata" for all messages inside a relational database, but I don't feel comfortable keeping message content (gigabytes and terabytes of email data) inside a database. I'm currently using IMAP as the storage, but I have my doubts whether I chose correctly. First of all, there is the problem of UIDVALIDITY and how to keep a permanent reference to a message inside IMAP. Second, I'm not sure if this is the most robust solution in terms of backup/restore strategies, corruption of the store, replication ... The positive side is that I can query IMAP using the headers, because the data is mostly indexed. I don't know if key-value stores are a better approach (Cassandra, Tokyo Cabinet, Redis). How do they handle storing 1KB and 50MB of data? How do they prevent corruption, and when corruption or device failure happens, how can I repair the store?

    Read the article

  • What is a valid and reasonable alternative to a massive storage approach?

    - by Backo
    I am using Ruby on Rails 3.2.2 and MySQL. After my previous question on "how to handle massive storage of records in a database for user authorization purposes", and since the related answers (on how to solve the issue or how to accomplish what I am looking for) aren't sufficiently detailed or require too many resources (at least for me), I would like to know what valid and reasonable alternatives to that approach exist. In a few words, this question could be phrased as: how do you handle "complex" (at the level of SQL querying) user authorizations when you have to fetch "authorized" records? That is, for example, how do you retrieve records when you would use code like the following (mostly in index controller actions): Article.readable_by_user(@current_user) # => Returns all articles readable by the current user.

    Read the article

  • Which file types are worth compressing (zipping) for remote storage? For which of them the compresse

    - by user193655
    I am storing documents in SQL Server in varbinary(max) fields, and I optionally use FILESTREAM when a user has: (DB_Size + Docs_Size) ~> 0.8 * ExpressEdition_Max_DB_Size I am currently zipping all the files; this is done because the document read/write code was developed 10 years ago, when storage was more expensive than it is now. Many files, when zipped, are almost as big as the original (a zipped PDF is about 95% of the original size). And anyway, unzipping has some overhead, which doubles when I also need to "check in"/update the file, because then I need to zip it again. So I was thinking of giving users the option to choose whether a file type will be zipped or not, by providing some meaningful default values. From my experience I would impose the following rules: 1) zip by default: txt, bmp, rtf 2) do not zip by default: jpg, jpeg, Microsoft Office files, Open Office files, png, tif, tiff Could you suggest other file types chosen among the most common, or comment on the ones I listed here?
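
    (Not from the original question.) One way to turn the "is it worth zipping?" decision into data is to measure per-extension compression ratios on a sample of real documents before fixing the defaults. A rough Python sketch, where the sample path and the 0.9 threshold are illustrative assumptions:

        import os
        import zlib
        from collections import defaultdict

        def compression_ratios(root):
            """Return {extension: compressed_bytes / original_bytes} over a sample tree."""
            original = defaultdict(int)
            compressed = defaultdict(int)
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    ext = os.path.splitext(name)[1].lower() or "<none>"
                    with open(os.path.join(dirpath, name), "rb") as fh:
                        data = fh.read()
                    original[ext] += len(data)
                    compressed[ext] += len(zlib.compress(data, 6))  # deflate at a mid compression level
            return {ext: compressed[ext] / original[ext] for ext in original if original[ext]}

        if __name__ == "__main__":
            ratios = compression_ratios(r"C:\sample_docs")
            for ext, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
                verdict = "zip by default" if ratio < 0.9 else "store as-is"
                print("%-10s %.2f  -> %s" % (ext, ratio, verdict))

    Such a measurement usually confirms the intuition in the question: formats that are already compressed containers (JPEG, PNG, and the Office 2007+ formats, which are ZIP archives internally) gain very little, while plain text, BMP, and RTF compress well.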

    Read the article

  • How to implement collection with covariance when delegating to another collection for storage?

    - by memelet
    I'm trying to implement a type of SortedMap with extended semantics. I'm trying to delegate to SortedMap as the storage but can't get around the variance constraints: class IntervalMap[A, +B](implicit val ordering: Ordering[A]) //extends ... { var underlying = SortedMap.empty[A, List[B]] } Here is the error I get. I understand why I get the error (I understand variance). What I don't get is how to implement this type of delegation. And yes, the covariance on B is required. error: covariant type B occurs in contravariant position in type scala.collection.immutable.SortedMap[A,List[B]] of parameter of setter underlying_=

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >