Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • links for 2010-04-22

    - by Bob Rhubart
    Barry N. Perkins: Unique Business Value vs. Unique IT -- "Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution." -- Barry N. Perkins, VP Oracle Modernization & Oracle Integrated Solutions (tags: oracle otn enterprisearchitecture modernization)

    Paul Homchick: The Information Driven Value Chain - Part 2 -- Paul Homchick continues his series with a look "at the way investments have been made in enterprise software in an effort to create and manage value, and how systems are moving from a controlled-process approach design towards gathering and using dynamically using information." (tags: oracle otn enterprisearchitecture)

    @vambenepe: The battle of the Cloud Frameworks: Application Servers redux? -- "The battle of the Cloud Frameworks has started," says William Vambenepe, "and it will look a lot like the battle of the Application Servers which played out over the last decade and a half." (tags: oracle otn cloud frameworks appserver)

    @ORACLENERD: COLLABORATE: Day 4 Wrap Up -- Oraclenerd fesses up: "The day started out with the realization that I pulled off the best (COLLABORATE - self-anointed) prank ever. Twitter was...all atwitter about the fact that Mark Rittman was Oracle's Person of the Year. Of course it wasn't true. If you look at the picture, you'll realize that he's wearing exactly the same clothes in the magazine cover as he is in real life." (tags: collaborate2010 oracleace)

    Oracle's Hal Stern at Cloud Expo: "We've Moved from 'What' to 'How'" | Cloud Computing Journal -- "Hal also spoke a bit about building 'a sustainable IT model.' By this, he said he didn't mean the various Green IT and similar efforts that 'are all about data center efficiency. I think the operational model is just as important. Many enterprises are managing a tremendous amount of complexity, and it's hard to make this sustainable.'" -- Cloud News Desk (tags: oracle cloud cloudexpo halstern)

    @ORACLENERD: COLLABORATE: The Beach Party -- "Then tiki statues somehow were incorporated into various dances" -- Oracle ACE Chet "oraclenerd" Justice (tags: oracle otn oracleace collaborate2010 oaug ioug lasvegas)

    David Andrews: Collaborate Day Two -- "Collaborate 2010 has focused on helping attendees understand what is already available and how to make more effective use of it. This does not sound exciting but it is extremely valuable. Most customers use only a small fraction of the capability of the products they already own. Helping them understand all the additional things they could be doing without buying anything more is very valuable." -- David Andrews (tags: oracle oaug collaborate2010 ioug)

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by J Swaroop
    Cross-posting Dain Hansen's excellent recap of the Big Data/Fast Data announcement during OOW: For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes as well as folks just trying to get a better hair cut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle that supports this idea that whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it is a convergence of four essential technology elements:

    - Event processing for event filtering and business rules -- with Oracle Event Processing
    - Data transformation and loading -- with Oracle Data Integrator
    - Real-time replication and integration -- with Oracle GoldenGate
    - Analytics and data discovery -- with Oracle Business Intelligence

    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.

    Read the article

  • Enterprise Manager 12c: New DSS Demos Available

    - by Javier Puerta
    Three new demos are now available:

    - Enterprise Manager Cloud Control 12c Application Replay Demo
    - User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1
    - Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade

    Enterprise Manager Cloud Control 12c Application Replay Demo Now Available! We are pleased to announce the availability of the Oracle Application Replay demo that showcases some of the key capabilities of performing realistic, production-scale testing of your web and packaged Oracle applications. This demo specifically focuses on capturing production web traffic from an E-Business Suite application and replaying the captured workload on a test E-Business Suite application to assess the impact of an application infrastructure change on the workload. The target audiences are application developers, quality assurance teams, IT managers, and production control staff who deal in day-to-day change management activities and troubleshooting of production environments. Demo highlights:

    - Enterprise Manager 12c workflows for capturing application workload
    - Seamless integration of Application Replay with Real User Experience Insight for application workload capture
    - Enterprise Manager 12c centralized workflows for replaying captured application workloads in a test environment
    - Demonstrates how to minimize risk when deploying a complex E-Business Suite application infrastructure change
    - Rich reporting capability for performance analysis and problem detection

    User Experience Monitoring with Enterprise Manager Cloud Control 12c and Real User Experience Insight 12R1 Now Available! We are pleased to announce the availability of the Oracle Real User Experience Insight demo that showcases some of the key capabilities of user experience monitoring. This demo specifically focuses on business reporting, integrated performance diagnostics, tracking of customer journeys through RUEI's userflow tracking capabilities, and its Key Performance Indicators tracking and configuration. Demo highlights:

    - Application-centric dashboard
    - Integration with Oracle Enterprise Manager 12c -- JVMD, ADP and BTM
    - Session diagnostics and user session replay
    - Monitoring through "Key Performance Indicators" (KPI) -- create alerts/incidents
    - FUSION Application-centric dashboards & integrated BI

    Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo upgrade. DSS is pleased to announce an upgrade to the Oracle Enterprise Manager Cloud Control 12c: Database Management Packs demo. While retaining the content from the initial release of the demo -- Diagnostic and Tuning Packs, Test Data Management and Data Masking, and Real Application Testing -- the demo now includes a new Data Masking for Real Application Testing scenario. Demo features:

    - Diagnostic and Tuning Packs
    - SQL Performance Analyzer
    - Database Replay
    - Data Masking
    - Masking Real Application Testing workloads
    - Testing pending Optimizer statistics
    - Test Data Management

    Read the article

  • Software engineering project idea feedback [on hold]

    - by Chris Sewell
    I'm a third-year student currently undertaking the project/dissertation component of my degree. I have drafted a proposal for my final year project and would appreciate any feedback. The feedback can be anything constructive, either specific to this proposal, the area that I will be working and researching in, or my ideas. I will accept all input.

    Aims

    My aim is to attempt a proof of concept and prototype a runtime-as-a-service (RaaS). This cloud-based runtime will allow clients to dynamically offload tasks or create cloud applications. Currently, software-as-a-service (SaaS) cloud applications are purpose-built and have a predefined scope in which they can assist or serve the client; this scope cannot be changed without physical alteration to the client and server software. With RaaS the client could potentially define any task it wanted at any time depending on its environment variables; the client and server would then communicate parameters and returns for that task. For the client to utilize a RaaS it must be able to conceive and then define a task using an appropriate XML vocabulary. As the scope of the cloud solution is defined by the client at its runtime, the cloud solution only has to exist for as long as the client requires it to, as opposed to a client using a dedicated service.

    Deliverables

    The crux of the project will require an XML vocabulary in which the client and server will communicate. I'll prototype the server application that will dynamically create and manage cloud solutions. The solution will be coded using an interpreted language, such as Python or JavaScript, which can evaluate expressions at runtime, or a language that can compile code dynamically, such as C# or Java. As a further proof of concept I will also produce a mock client that offloads tasks to the server. The report will attempt to explain the different flavours of cloud computing solutions, including infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and SaaS, with real-world examples and where the use of a RaaS could have improved the overall example solution.

    Disclaimer: I'm not requesting stakeholders in my project, nor am I delegating work. Any materials other than feedback, advice or directions will not be utilized.
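    To make the "dynamically compile" option concrete, here is a minimal, purely illustrative C# sketch of a server evaluating a client-defined task at runtime. It is not part of the proposal; the class and method names (DynamicTask, Run) are hypothetical stand-ins for whatever the XML vocabulary would describe, and a real RaaS would need sandboxing and validation around anything it compiles.

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    class RaaSServerSketch
    {
        static void Main()
        {
            // In the proposal the task would arrive as an XML document; here a plain
            // C# source string stands in for the parsed task definition.
            string taskSource = @"
                namespace Tasks
                {
                    public class DynamicTask
                    {
                        public int Run(int a, int b) { return a + b; }
                    }
                }";

            var provider = new CSharpCodeProvider();
            var parameters = new CompilerParameters { GenerateInMemory = true };
            parameters.ReferencedAssemblies.Add("System.dll");

            // Compile the client-defined task at runtime.
            CompilerResults results = provider.CompileAssemblyFromSource(parameters, taskSource);
            if (results.Errors.HasErrors)
            {
                Console.WriteLine("Task failed to compile.");
                return;
            }

            // Load the compiled task and invoke it with parameters supplied by the client.
            Type taskType = results.CompiledAssembly.GetType("Tasks.DynamicTask");
            object task = Activator.CreateInstance(taskType);
            object result = taskType.GetMethod("Run").Invoke(task, new object[] { 2, 3 });
            Console.WriteLine("Task result: " + result);   // prints 5
        }
    }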

    Read the article

  • Which one to select for my future career: Java, C#, Azure or Apex?

    - by user636195
    Hi folks, I am going to start studying for a Master's in Computer Science in the U.S.A. in a week. I have been doing my B.Sc. for the past three years, and after my freshman year I started working on projects (in C# and very rarely in Java) for the past two years (i.e., while I was a second- and third-year student). Now I am in a college where all of the programming courses are going to be taught in Java only (using Eclipse), and I am going to stay at this college for 8 months on campus and then be fully employed for two years at other companies on CPT. I really love to work on Microsoft products because, for me, they are simple and easy to use and understand. My future plan is to work in cloud computing and be a cloud-based business owner in the near future. Since the college is going to teach us and let us do every project in Java, I am confused about which programming language to focus on to enhance my career, and of course I want to select the one I like working with. I have also heard a lot about Azure (Microsoft's cloud platform) and Apex (Salesforce.com's cloud computing programming language). Would you please give me your advice and recommendations based on my situation? Should I study only Java, or should I study C# or Azure besides Java on my own? The reason I ask is that, since I have no clue how Azure works and how long it will take me to learn it, I am really confused about which one to select (Java vs. C#, and Azure vs. Apex, or whether there is another popular and widely used cloud computing language). Do you think I can get a job in cloud computing if I study Azure or Apex on my own, without experience? There is also one short-term issue I want to consider: salary. Since I have to pay my student loan, I also need to get a good job which will let me pay off the loan within two years. But, as I said, my long-term plan is to get experience in cloud computing (from programming to administration, i.e., every area of cloud computing) and then have my own business, maybe within 5-10 years. What do you think? Thank you for your time.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation and then delete the Azure deployment. The standing joke with the audience was that it was a "$2 demo", as the compute charges for running the 16 instances for an hour were $1.92; factor in the bandwidth charges and it's a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing: you pay for what you use, and if you need massive compute power for a short period of time, using Windows Azure can work out very cost effective. The "$2 demo" was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one-hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a "$2 demo" to a "$30 demo". The challenge was to create a visually appealing animation in high-definition format and keep the demo time down to one hour. This article will run through how I achieved this.

    Ray Tracing

    Ray tracing, a technique for generating high-quality photorealistic images, gained popularity in the 90's with companies like Pixar creating feature-length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray-traced image, the ray of light that would pass from the viewpoint must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image.

    Pin-Board Toys

    Having very little artistic talent and a basic understanding of maths, I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I've always liked the pin-board desktop toys that became popular in the 80's, and when I was working as a 3D animator back in the 90's I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months.

    PolyRay

    Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours of processing on a 486, produces a high-quality ray-traced image. The following is an example of a basic PolyRay scene file.
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
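    The article does not include the generator itself, but a minimal, hypothetical C# sketch of the "few nested loops" console application might look like the following. The pin spacing, grid dimensions and pin sizes here are invented for illustration; only the object syntax (the sphere and cylinder lines with the chrome texture) comes from the scene file above.

    using System;
    using System.IO;

    class PinBoardGenerator
    {
        static void Main()
        {
            using (var scene = new StreamWriter("pinboard.pi"))
            {
                const double spacing = 0.1;          // hypothetical distance between pins
                const int rows = 75, cols = 75;      // two 75x75 grids give roughly 11,000 pins

                for (int grid = 0; grid < 2; grid++)                 // two intersecting matrices
                {
                    double offset = grid * spacing / 2.0;            // second grid sits between the first
                    for (int row = 0; row < rows; row++)
                    {
                        for (int col = 0; col < cols; col++)
                        {
                            double x = -3.7 + col * spacing + offset;
                            double y = -0.7 + row * spacing + offset;
                            double z = 0.0;                          // later set per pin from the depth image

                            // Each pin is a sphere for the head and a cylinder for the shaft,
                            // using the chrome texture defined in the scene file.
                            scene.WriteLine("object {{ sphere <{0:F3}, {1:F3}, {2:F3}>, 0.04 chrome }}", x, y, z);
                            scene.WriteLine("object {{ cylinder <{0:F3}, {1:F3}, {2:F3}>, <{0:F3}, {1:F3}, {3:F3}>, 0.02 chrome }}",
                                x, y, z, z - 0.5);
                        }
                    }
                }
            }
        }
    }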
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of the pin can be set based on the color of the pixel at the appropriate position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not be the same as the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

    Windows Kinect

    The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode to allow depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

    Creating a Depth Field Animation

    The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine-tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

    Rendering a Test Frame

    In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

    Windows Azure Worker Roles

    The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

    As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

    (256 instance-hours at the same $0.12 per instance-hour rate as the "$2 demo" above works out to $30.72.) Using worker roles in Windows Azure provides the same cost for the 256-hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

    Creating a Render Farm in Windows Azure

    The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

    - On-Premise
      - Windows Kinect - used in combination with the Kinect Explorer to create a stream of depth images.
      - Animation Creator - this application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
      - Process Monitor - this application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
      - Image Downloader - this application polls the image queue and downloads the rendered animation files once they are complete.
    - Windows Azure
      - Azure Storage - queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

    The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in distributing the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

    Completed Animation

    I've uploaded the completed animation to YouTube; a low-resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

    Effective Use of Resources

    According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As the worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

    Grid Computing Scenarios

    Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

    - Windows Azure can provide massive compute power, on demand, in a matter of minutes.
    - The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
    - Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
    - No charges for inbound data transfer make the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

    Tips for using Windows Azure for Grid Computing Scenarios

    I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

    - Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances. (A minimal sketch of the job-submission side of this queue appears after these tips.)
    - Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
    - Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
    - Monitor the utilization of the resources you are provisioning; ensure that you are not paying for worker roles that are idle.
    - If you are deploying third-party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
    - Third-party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
    - Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
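    The article shows the worker-role side of the queue in full; as a closing illustration, a minimal hypothetical sketch of the submission side (the Animation Creator uploading scene files and queuing job messages) might look like the following. It reuses the author's JobQueue and CloudRayBlob helper classes from the worker role code above; the folder path and scene-file naming are invented.

    using System.IO;

    class JobSubmitter
    {
        static void Main()
        {
            JobQueue jobQueue = new JobQueue("renderjobs");      // the same queue the worker roles poll
            CloudRayBlob sceneBlob = new CloudRayBlob("scenes"); // the same container the worker roles read from

            // One PolyRay scene file per frame, e.g. frame0001.pi ... frame2000.pi (hypothetical names).
            foreach (string scenePath in Directory.GetFiles(@"C:\CloudRay\Scenes", "*.pi"))
            {
                // Upload the scene description, then queue a message containing just the
                // file name, which is all the worker role needs to pick up the job.
                sceneBlob.UploadFile(scenePath);
                jobQueue.Add(Path.GetFileName(scenePath));
            }
        }
    }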

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include the support for HTML 5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.

    Read the article

  • How to study for an Informatics Olympiad [on hold]

    - by Cloud
    One of my goals for next year is to participate in the Australian Informatics Olympiad. As far as I'm aware, it is not too different from Informatics Olympiads in other countries. What would be the best way to study for this? What content should I pay particular attention to while learning in Python? I am currently using the book 'Learn Python the hard way', but are there any other books worthy of a mention? This is the link to their site: http://orac.amt.edu.au/aio/ It contains sample questions, so you can get an idea for the structure or nature of the competition. I know this isn't really a specific programming question, but it would be great if someone could share their experience or give some suggestions for me.

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
[2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421 [4] Oracle Solaris 11 Networking Virtualization Technology, http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • How Does a 724% Return on Your Salesforce Automation Investment Sound?

    - by Brian Dayton
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that, a 724% return on investment (ROI), when they implemented these Oracle Cloud solutions in their fast-moving, rapidly-growing business.

    Congratulations Apex IT! Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.

    Fast Facts:

    - Return on Investment - 724%
    - Payback - 2 months
    - Average annual benefit - $91,534
    - Cost: Benefit Ratio - 1:48

    Business Benefits

    In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams:

    - Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    - Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    - Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
    - Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations

    Please join us in congratulating Apex IT on this award and their business achievements.

    Want More Details? Don't take our word for it. Read the full Apex IT ROI Case Study and learn more about Apex IT's business, including their work with Oracle Sales and Marketing Cloud on behalf of their clients in leading Sales organizations.

    Learn More About Oracle Sales Cloud

    - www.oracle.com/salescloud
    - www.facebook.com/oraclesalescloud
    - www.youtube.com/oraclesalescloud

    Oracle Customer Experience and Complementary Sales Solutions

    - Oracle Configure, Price and Quote (CPQ) Cloud
    - Oracle Marketing Cloud
    - Oracle Customer Experience

    Read the article

  • How do I get the data from a surveillance camera to storage I can stream from?

    - by radbyx
    Hi, my sister's house was robbed on Christmas evening :( I talked with her about making a surveillance system for her. The idea is to have a system that detects intruders and then sends an SMS to you while streaming the footage to a private website. The hard part: how and where do I store the data from the camera so that it's streamable? I think I can manage the streaming, the website and the SMS server, but I need the data part (the foundation). Thanks, any help is much appreciated.

    Read the article

  • Improving TCP performance over a gigabit network with lots of connections and high traffic for storage and streaming services

    - by Linux Guy
    I have two servers, Both servers hardware Specification are Processor : Dual Processor RAM : over 128 G.B Hard disk : SSD Hard disk Outging Traffic bandwidth : 3 Gbps network cards speed : 10 Gbps Server A : for Encoding videos Server B : for storage videos andstream videos over web interface like youtube The inbound bandwidth between two servers is 10Gbps , the outbound bandwidth internet bandwidth is 500Mpbs Both servers using public ip addresses in public and private network Both servers transfer and connection on nginx port , and the server B used for streaming media , like youtube stream videos Both servers in same network , when i do ping from Server A to Server B i got high time latency above 1.0ms , the time range time=52.7 ms to time=215.7 ms - This is the output of iftop utility 353Mb 707Mb 1.04Gb 1.38Gb 1.73Gb mqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqqvqqqqqqqqqqqqqqqqqqqqqqqqqqq server.example.com => ip.address 6.36Mb 4.31Mb 1.66Mb <= 158Kb 94.8Kb 35.1Kb server.example.com => ip.address 1.23Mb 4.28Mb 1.12Mb <= 17.1Kb 83.5Kb 21.9Kb server.example.com => ip.address 395Kb 3.89Mb 1.07Mb <= 6.09Kb 109Kb 28.6Kb server.example.com => ip.address 4.55Mb 3.83Mb 1.04Mb <= 55.6Kb 45.4Kb 13.0Kb server.example.com => ip.address 649Kb 3.38Mb 1.47Mb <= 9.00Kb 38.7Kb 16.7Kb server.example.com => ip.address 5.00Mb 3.32Mb 1.80Mb <= 65.7Kb 55.1Kb 29.4Kb server.example.com => ip.address 387Kb 3.13Mb 1.06Mb <= 18.4Kb 39.9Kb 15.0Kb server.example.com => ip.address 3.27Mb 3.11Mb 1.01Mb <= 81.2Kb 64.5Kb 20.9Kb server.example.com => ip.address 1.75Mb 3.08Mb 2.72Mb <= 16.6Kb 35.6Kb 32.5Kb server.example.com => ip.address 1.75Mb 2.90Mb 2.79Mb <= 22.4Kb 32.6Kb 35.6Kb server.example.com => ip.address 3.03Mb 2.78Mb 1.82Mb <= 26.6Kb 27.4Kb 20.2Kb server.example.com => ip.address 2.26Mb 2.66Mb 1.36Mb <= 51.7Kb 49.1Kb 24.4Kb server.example.com => ip.address 586Kb 2.50Mb 1.03Mb <= 4.17Kb 26.1Kb 10.7Kb server.example.com => ip.address 2.42Mb 2.49Mb 2.44Mb <= 31.6Kb 29.7Kb 29.9Kb server.example.com => ip.address 2.41Mb 2.46Mb 2.41Mb <= 26.4Kb 24.5Kb 23.8Kb server.example.com => ip.address 2.37Mb 2.39Mb 2.40Mb <= 28.9Kb 27.0Kb 28.5Kb server.example.com => ip.address 525Kb 2.20Mb 1.05Mb <= 7.03Kb 26.0Kb 12.8Kb qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq TX: cum: 102GB peak: 1.65Gb rates: 1.46Gb 1.44Gb 1.48Gb RX: 1.31GB 24.3Mb 19.5Mb 18.9Mb 20.0Mb TOTAL: 103GB 1.67Gb 1.48Gb 1.46Gb 1.50Gb I check the transfer speed using iperf utility From Server A to Server B # iperf -c 0.0.0.2 -p 8777 ------------------------------------------------------------ Client connecting to 0.0.0.2, TCP port 8777 TCP window size: 85.3 KByte (default) ------------------------------------------------------------ [ 3] local 0.0.0.1 port 38895 connected with 0.0.0.2 port 8777 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.8 sec 528 KBytes 399 Kbits/sec My Current Connections in Server B # netstat -an|grep ":8777"|awk '/tcp/ {print $6}'|sort -nr| uniq -c 2072 TIME_WAIT 28 SYN_RECV 1 LISTEN 189 LAST_ACK 139 FIN_WAIT2 373 FIN_WAIT1 3381 ESTABLISHED 34 CLOSING Server A Network Card Information Settings for eth0: Supported ports: [ TP ] Supported link modes: 100baseT/Full 1000baseT/Full 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Speed: 10000Mb/s 
Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: external Auto-negotiation: on MDI-X: Unknown Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes Server B Network Card Information Settings for eth2: Supported ports: [ FIBRE ] Supported link modes: 10000baseT/Full Supported pause frame use: No Supports auto-negotiation: No Advertised link modes: 10000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: No Speed: 10000Mb/s Duplex: Full Port: Direct Attach Copper PHYAD: 0 Transceiver: external Auto-negotiation: off Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: yes ifconfig server A eth0 Link encap:Ethernet HWaddr 00:25:90:ED:9E:AA inet addr:0.0.0.1 Bcast:0.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1202795665 errors:0 dropped:64334 overruns:0 frame:0 TX packets:2313161968 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:893413096188 (832.0 GiB) TX bytes:3360949570454 (3.0 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:2207544 errors:0 dropped:0 overruns:0 frame:0 TX packets:2207544 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:247769175 (236.2 MiB) TX bytes:247769175 (236.2 MiB) ifconfig Server B eth2 Link encap:Ethernet HWaddr 00:25:90:82:C4:FE inet addr:0.0.0.2 Bcast:0.0.0.2 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:39973046980 errors:0 dropped:1828387600 overruns:0 frame:0 TX packets:69618752480 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3013976063688 (2.7 TiB) TX bytes:102250230803933 (92.9 TiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:1049495 errors:0 dropped:0 overruns:0 frame:0 TX packets:1049495 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:129012422 (123.0 MiB) TX bytes:129012422 (123.0 MiB) Netstat -i on Server B # netstat -i Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth2 9000 0 42098629968 0 2131223717 0 73698797854 0 0 0 BMRU lo 65536 0 1077908 0 0 0 1077908 0 0 0 LRU I Turn up send/receive buffers on the network card to 2048 and problem still persist I increase the MTU for server A and problem still persist and i increase the MTU for server B for better connectivity and transfer speed but it couldn't transfer at all The problem is : as you can see from iperf utility, the transfer speed from server A to server B slow when i restart network service in server B the transfer in server A at full speed, after 2 minutes , it's getting slow How could i troubleshoot slow speed issue and fix it in server B ? Notice : if there any other commands i should execute in servers for more information, so it might help resolve the problem , let me know in comments
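    A minimal troubleshooting sketch for this kind of single-stream slowdown follows. It assumes the interface names (eth0/eth2), the masked IP 0.0.0.2 and port 8777 from the question; substitute the real values before running anything.

    # Sketch only -- names, IPs and port are taken from the question above.

    # 1. Watch whether the RX "dropped" counter on the storage server keeps
    #    climbing (eth2 already shows ~1.8 billion drops above).
    ip -s link show eth2

    # 2. Check the NIC ring buffers and enlarge them if the hardware allows it.
    ethtool -g eth2
    sudo ethtool -G eth2 rx 4096 tx 4096    # only up to the reported pre-set maximums

    # 3. Compare offload settings (GRO/GSO/TSO) on both hosts.
    ethtool -k eth2

    # 4. Compare kernel TCP buffer limits on both hosts.
    sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem

    # 5. Re-test with parallel streams and a longer run to separate a
    #    per-connection limit from a link or NIC problem.
    iperf -c 0.0.0.2 -p 8777 -P 8 -t 60

    If several parallel streams together fill the link while a single stream stays slow, the limit is per-connection (latency, TCP window, or the application) rather than the path itself.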

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: “What is the most expensive air-mail order we have received to date?” SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, “IN MEMORY”, in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by “may use”? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used on Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient. 1. Access only the column data needed The IM column store only has to scan two columns – lo_shipmode and lo_ordtotalprice – to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns. 2. Scan and filter data in its compressed format When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger. 3. Prune out any unnecessary data within each column The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory Storage Indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine whether our specified column value 5 exists in any IMCU by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used in an IMCU. The dictionary contains a list of the unique column values within the IMCU.
    Since we have an equality predicate, we can easily determine whether 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs. 4. Use SIMD to apply filter predicates For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache. I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory Column Store begin with IM. You can see the full list of these statistics by typing: column display_name format a30 SELECT display_name FROM v$statname WHERE display_name LIKE 'IM%'; If we check the session statistics after we execute our query, the results are as follows: SELECT Max(lo_ordtotalprice) most_expensive_order FROM lineorder WHERE lo_shipmode = 5; SELECT display_name FROM v$statname WHERE display_name IN ('IM scan CUs columns accessed', 'IM scan segments minmax eligible', 'IM scan CUs pruned'); As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan
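    As a rough sketch of the check described above, the session-level IM statistics can be read by joining v$mystat to v$statname in the same session that ran the test query (SELECT privilege on the V$ views assumed):

    column display_name format a30
    SELECT sn.display_name, ms.value
    FROM   v$mystat ms, v$statname sn
    WHERE  ms.statistic# = sn.statistic#
    AND    sn.display_name IN ('IM scan CUs columns accessed',
                               'IM scan segments minmax eligible',
                               'IM scan CUs pruned')
    ORDER BY sn.display_name;

    Running it once before and once after the test query and taking the delta shows what the query itself consumed.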

    Read the article

  • Computer Networks UNISA - Chap 14 – Insuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to Identify the characteristics of a network that keep data safe from loss or damage Protect an enterprise-wide network from viruses Explain network and system level fault tolerance techniques Discuss issues related to network backup and recovery strategies Describe the components of a useful disaster recovery plan and the options for disaster contingencies What are integrity and availability? Integrity – the soundness of a networks programs, data, services, devices, and connections Availability – How consistently and reliably a file or system can be accessed by authorized personnel A number of phenomena can compromise both integrity and availability including… security breaches natural disasters malicious intruders power flaws human error users etc Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. The following are some guidelines… Allow only network administrators to create or modify NOS and application system users. Monitor the network for unauthorized access or changes Record authorized system changes in a change management system’ Install redundant components Perform regular health checks on the network Check system performance, error logs, and the system log book regularly Keep backups Implement and enforce security and disaster recovery policies These are just some of the basics… Malware Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of Malware… Boot sector viruses Macro viruses File infector viruses Worms Trojan Horse Network Viruses Bots Malware characteristics Some common characteristics of Malware include… Encryption Stealth Polymorphism Time dependence Malware Protection There are various tools available to protect you from malware called anti-malware software. These monitor your system for indications that a program is performing potential malware operations. A number of techniques are used to detect malware including… Signature Scanning Integrity Checking Monitoring unexpected file changes or virus like behaviours It is important to decide where anti-malware tools will be installed and find a balance between performance and protection. There are several general purpose malware policies that can be implemented to protect your network including… Every compute in an organization should be equipped with malware detection and cleaning software that regularly runs Users should not be allowed to alter or disable the anti-malware software Users should know what to do in case the anti-malware program detects a malware virus Users should be prohibited from installing any unauthorized software on their systems System wide alerts should be issued to network users notifying them if a serious malware virus has been detected. Fault Tolerance Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance. Fault tolerance is the ability for a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees, the optimal level of fault tolerance for a system depends on how critical its services and files are to productivity. Generally the more fault tolerant the system, the more expensive it is. The following describe some of the areas that need to be considered for fault tolerance. 
    Environment (Temperature and humidity) Power Topology and Connectivity Servers Storage Power Typical power flaws include Surges – a brief increase in voltage due to lightning strikes, solar flares or some idiot at City Power Noise – Fluctuation in voltage levels caused by other devices on the network or electromagnetic interference Brownout – A sag in voltage for just a moment Blackout – A complete power loss There are various alternate power sources to consider, including UPSs and Generators. UPSs fall into two categories… Standby UPS – provides continuous power when mains goes down (brief period of switching over) Online UPS – is online all the time and the device receives power from the UPS all the time (the UPS is charged continuously) Servers There are various techniques for fault tolerance with servers. Server mirroring is an option where one device or component duplicates the activities of another. It is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities and if one unit in the cluster goes down, another unit can be brought in to replace it. Storage There are various techniques available including the following… RAID Arrays NAS (Network Attached Storage) SANs (Storage Area Networks) Data Backup A backup is a copy of data or program files created for archiving or safekeeping. Many different options for backups exist, with various media including… Optical Media Tape Backup External Disk Drives Network Backups These vary in cost and speed. Backup Strategy After selecting the appropriate tool for performing your servers' backups, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include… What data must be backed up At what time of day or night will the backups occur How will you verify the accuracy of the backups Where and for how long will backup media be stored Who will take responsibility for ensuring that backups occurred How long will you save backups Where will backup and recovery documentation be stored Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up, including… Full backup – all data on all servers is copied to storage media Incremental backup – Only data that has changed since the last full or incremental backup is copied to a storage medium Differential backup – Only data that has changed since the last full backup is copied to a storage medium Disaster Recovery Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some devices appropriately configured. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
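    As an illustration of the full/incremental/differential distinction above, here is a small GNU tar sketch; /srv/data, /backup and the snapshot file are placeholder paths, not part of the original notes.

    # Full backup (e.g. weekly) -- the snapshot file records what was captured.
    tar --listed-incremental=/backup/data.snar -czf /backup/full-$(date +%F).tar.gz /srv/data

    # Incremental backup (e.g. daily) -- only files changed since the previous
    # run recorded in data.snar are written.
    tar --listed-incremental=/backup/data.snar -czf /backup/incr-$(date +%F).tar.gz /srv/data

    # A differential scheme can be approximated by saving a copy of data.snar
    # right after the full backup and restoring that copy before each later run,
    # so every archive holds all changes since the full backup.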

    Read the article

  • ESXi 4.1 host not recognising existing VMFS datastore

    - by Graeme Donaldson
    Existing setup: host1 and host2, ESX 4.0, 2 HBAs each. lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here). This has been working just fine all along. I added host3, ESXi 4.1, 2 HBAs. If I view Configuration / Storage Adapters, I can see that both HBAs see both LUNs, but if I view Configuration / Storage, I only see 1 datastore. host1/2 can see both LUNs and I have VMs running on both too. I have rescanned, refreshed and even rebooted, but host3 refuses to acknowledge 1 of the datastores. Does anyone know what's going on? Update: I re-installed the host with ESX (not i) 4.0, same version as the existing hosts and it's still not recognising the vmfs. I think I'm going to SVmotion everything off that datastore then format it. Update2: I've created the LUN from scratch and the problem gets even weirder. I've presented the LUN to all 3 hosts, and I can see the LUN in the vSphere client's Configuration / Storage Adapters section on all 3 hosts. If I create a datastore on the LUN via the Configuration / Storage section on host1, it works fine and I can create an empty folder via datastore browser, but the datastore is not seen by the host2 and host3. I can use the Add Storage wizard on host2 and it will see the LUN. At this point the "VMFS Label" column has the label I gave with "(head)" appended. If I try the Add Storage wizard's "Keep the existing signature" option, it fails with an error "Cannot change the host configuration." and a dialog box that says 'Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storageSystem-17" on vCenter Server "vcenter.company.local" failed.' If I try the Add Storage wizard's "Assign a new signature" option on host2, it will complete and the VMFS label will have "snap-(hexnumber)-" prepended. At this point its also visible on host3, but not host1. I have a similar setup in a different datacenter which didn't give me all this trouble.
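    This usually means the new host is treating the LUN as a snapshot/unresolved VMFS volume. On ESX/ESXi 4.x the esxcfg-volume utility can normally list and mount such volumes from the host's shell; treat the flags below as a sketch and verify them against your build before running anything.

    esxcfg-volume -l            # list VMFS volumes detected as snapshots / unresolved
    esxcfg-volume -M <label>    # mount persistently, keeping the existing signature
    esxcfg-volume -r <label>    # or resignature, if the copy should get a new identity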

    Read the article

  • ownCloud WebDAV interface seems to be broken

    - by Nobleleader13245
    I've been trying to host ownCloud on my server but everytime I try to it tells me this : Your web server is not yet properly setup to allow files synchronization because the WebDAV interface seems to be broken. Please double check the installation guides. This is my setup : Windows Server 2012 R2 IIS 8.5 PHP 5.5.11 ownCloud 6.0.3 MySQL 5.6.17 I tried google the error but I can't seem to find anything usefull. Some say I should try if this works : https://cloud.mcsoftworks.net/remote.php/webdav/ and yes I can navigate to this folder and I can open files from there. The calendar works and I can also just upload files from here https://cloud.mcsoftworks.net/ the only thing that doesn't seem to work is the sync client. The sync client doesn't say anything it just doesn't connect (Screenshot : http://prntscr.com/3p2apz) This is the error log : Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:56:00+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:47+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:55:34+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:37+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Fatal webdav Sabre_DAV_Exception_Forbidden: Path does not exist, or escaping from the base path was detected 2014-06-02T19:54:36+00:00 Warning core isWebDAVWorking: NO - Reason: [CURL] Error while making request: Could not resolve host: cloud.mcsoftworks.net (error code: 6) (Sabre_DAV_Exception) 2014-06-02T19:51:24+00:00 This is my php.ini : http://pastebin.com/es3MB8Uh Does anyone have any idea on how I should get this to work? I've been trying to get this to work for about 14 days now and it starts to annoy me =P
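    The log lines above show cURL on the server itself failing to resolve cloud.mcsoftworks.net, so ownCloud's WebDAV self-check never reaches the site even though browsers elsewhere can. A PowerShell sketch to confirm this from the Windows server follows; 203.0.113.10 is a placeholder for the site's real public IP.

    # 1. Can the server resolve its own public hostname?
    Resolve-DnsName cloud.mcsoftworks.net

    # 2. Repeat ownCloud's WebDAV self-check by hand; an authentication error
    #    here is fine, "could not resolve host" is not.
    Invoke-WebRequest https://cloud.mcsoftworks.net/remote.php/webdav/ -UseBasicParsing

    # 3. If the name only fails from the server itself (split DNS), a hosts
    #    entry is a common workaround:
    Add-Content C:\Windows\System32\drivers\etc\hosts "203.0.113.10 cloud.mcsoftworks.net"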

    Read the article

  • Announcing Windows Azure Mobile Services

    - by ScottGu
    I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications.  It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, as well as send out updates to clients via push notifications. Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas.  We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.  Or watch this video of me showing how to do it step by step. Getting Started If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial.  Once you are signed-up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview.   Instructions on how to enable this can be found here. Once you have the mobile services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend.  Once created, you’ll see a quick-start page like below with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it: Read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app  that stores data in Windows Azure. Storing Data in the Cloud Storing data in the cloud with Windows Azure Mobile Services is incredibly easy.  When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure.  The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based ODATA format) – without you having to write or deploy any custom server code.  Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions. This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning.  They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.  Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service.  Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.   
Developers don’t have to write or deploy any custom server-side code in order to enable client-side code below to execute and asynchronously populate their client UI: Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want.  This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs. User Authentication and Push Notifications Windows Azure Mobile Services also make it incredibly easy to integrate user authentication/authorization and push notifications within your applications.  You can use these capabilities to enable authentication and fine grain access control permissions to the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes.  Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that executes in response to actions) that make it really easy to enable these scenarios. Below are some tutorials that walkthrough common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps: Enabling User Authentication Authorizing Users  Get Started with Push Notifications Push Notifications to multiple Users Manage and Monitor your Mobile Service Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The dashboard tab provides a built-in monitoring view of the API calls, Bandwidth, and server CPU cycles of your Windows Azure Mobile Service.   You can also use the “Logs” tab within the portal to review error messages.  This makes it easy to monitor and track how your application is doing. Scale Up as Your Business Grows Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources).  This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services). If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode.  Doing so allows you to isolate your apps so that you are the only customer within a virtual machine.  This allows you to elastically scale the amount of resources your apps use – allowing you to scale-up (or scale-down) your capacity as your traffic grows: With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need.  This enables a super flexible model that is ideal for new mobile app scenarios, as well as startups who are just getting going.  Summary I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore.  With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud. 
Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today.  And read this getting started tutorial to walkthrough how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
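    As a sketch of the client-side pattern described above (a strongly typed POCO plus LINQ-style queries translated to REST), something along these lines was the shape of the managed Mobile Services SDK at the time; the service URL, application key and TodoItem class are placeholders for your own service and model.

    // Sketch only -- endpoint, key and model are placeholders.
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public class TodoItem
    {
        public int Id { get; set; }
        public string Text { get; set; }
        public bool Complete { get; set; }
    }

    public static class TodoClient
    {
        // Endpoint and application key come from the portal's quick-start page.
        private static readonly MobileServiceClient Client =
            new MobileServiceClient("https://your-service.azure-mobile.net/", "YOUR-APPLICATION-KEY");

        public static async Task AddAsync(string text)
        {
            // InsertAsync issues a JSON POST to the service's REST endpoint for this table.
            await Client.GetTable<TodoItem>().InsertAsync(new TodoItem { Text = text });
        }

        public static async Task<List<TodoItem>> GetOpenItemsAsync()
        {
            // The LINQ-style Where clause is translated into an OData query string.
            var items = await Client.GetTable<TodoItem>()
                                    .Where(item => !item.Complete)
                                    .ToListAsync();
            return new List<TodoItem>(items);
        }
    }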

    Read the article

  • More information wanted on error: CREATE ASSEMBLY for assembly failed because assembly failed verif

    - by turnip.cyberveggie
    I have a small application that uses SQL Server 2005 Express with CLR stored procedures. It has been successfully installed and runs on many computers running XP and Vista. To create the assembly the following SQL is executed (names changed to protect the innocent): CREATE ASSEMBLY myAssemblyName FROM 'c:\pathtoAssembly\myAssembly.dll' On one computer (a test machine that reflects other computers targeted for installation) that is running Vista and has some very aggressive security policy restrictions I receive the following error: << Start Error Message Msg 6218, Level 16, State 2, Server domain\servername, Line 2 CREATE ASSEMBLY for assembly 'myAssembly' failed because assembly 'myAssembly' failed verification. Check if the referenced assemblies are up-to-date and trusted (for external_access or unsafe) to execute in the database. CLR Verifier error messages if any will follow this message [ : myProcSupport.Axis::Proc1][mdToken=0x6000004] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc2][mdToken=0x6000005] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc3][mdToken=0x6000006] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::.ctor][mdToken=0x600000a] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc4][mdToken=0x6000001] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc5][mdToken=0x6000002] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc6][mdToken=0x6000007] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc7][mdToken=0x6000008] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc8][mdToken=0x6000009] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc8][mdToken=0x600000b] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation. [ : myProcSupport.Axis::Proc9][mdToken=0x600000c] [HRESULT 0x8007000E] - Not enough storage is available to complete this operation.... << End Error Message The C# DLL is defined as “Safe” as it only uses data contained in the database. The DLL is not normally signed, but I provided a signed version to test and received the same results. The installation is being done by someone else, and I don’t have access to the box, but they are executing scripts that I provided and work on other computers. I have tried to find information about this error beyond what the results of the script provide, but I haven’t found anything helpful. The person executing the script to create the assembly is logged in with an Admin account, is running CMD as admin, is connecting to the DB via Windows Authentication, has been added to the dbo_owner role, and added to the server role SysAdmin with the hopes that it is a permissions issue. This hasn't changed anything. Do I need to configure SQL Server 2005 Express differently for this environment? Is this error logged anywhere other than just the output from SQLCMD? What could cause this error? Could Vista security policies cause this? I don’t have access to the computer (the customer is doing the testing) so I can’t examine the box myself. TIA
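    Two diagnostics that may help narrow this down, assuming the same assembly path as the script above: verify the IL outside SQL Server with the .NET SDK's PEVerify tool, and try loading the assembly from its bits rather than a file path so the locked-down file system and its policies are taken out of the picture. Sketch only:

    -- 1. Outside SQL Server, verify the IL with the .NET SDK tool:
    --      peverify.exe "c:\pathtoAssembly\myAssembly.dll"
    --    A clean PEVerify result points at the SQL Server host (memory pressure
    --    or policy) rather than the assembly itself.

    -- 2. Load the assembly from its bits instead of a file path. The literal
    --    below is deliberately truncated -- script the full bytes from the DLL first.
    CREATE ASSEMBLY myAssemblyName
    FROM 0x4D5A90000300000004000000FFFF0000 -- ...full assembly bytes here...
    WITH PERMISSION_SET = SAFE;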

    Read the article

  • Windows Azure ASP.NET MVC 2 Role with Silverlight

    - by GeekAgilistMercenary
    I was working through some scenarios recently with Azure and Silverlight.  I immediately decided a quick walk through for setting up a Silverlight Application running in an ASP.NET MVC 2 Application would be a cool project. This walk through I have Visual Studio 2010, Silverlight 4, and the Azure SDK all installed.  If you need to download any of those go get em? now. Launch Visual Studio 2010 and start a new project.  Click on the section for cloud templates as shown below. After you name the project, the dialog for what type of Windows Azure Cloud Service Role will display.  I selected ASP.NET MVC 2 Web Role, which adds the MvcWebRole1 Project to the Cloud Service Solution. Since I selected the ASP.NET MVC 2 Project type, it immediately prompts for a unit test project.  Because I just want to get everything running first, I will probably be unit testing the Silverlight and just using the MVC Project as a host for the Silverlight for now, and because I would prefer to just add the unit test project later, I am going to select no here. Once you've created the ASP.NET MVC 2 project to host the Silverlight, then create another new project.  Select the Silverlight section under the Installed Templates in the Add New Project dialog.  Then select Silverlight Application. The next dialog that comes up will inquire about using the existing ASP.NET MVC Application I just created, which I do want it to use that so I leave it checked.  The options section however I do not want to check RIA Web Services, do not want a test page added to the project, and I want Silverlight debugging enabled so I leave that checked.  Once those options are appropriately set, just click on OK and the Silverlight Project will be added to the overall solution. The next steps now are to get the Silverlight object appropriately embedded in the web page.  First open up the Site.Master file in the ASP.NET MVC 2 Project located under the Veiws/Shared/ location.  After you open the file review the content of the <header></header> section.  In that section add another <contentplaceholder></contentplaceholder> tag as shown in the code snippet below. <head runat="server"> <title> <asp:ContentPlaceHolder ID="TitleContent" runat="server" /> </title> <link href="../../Content/Site.css" rel="stylesheet" type="text/css" /> <asp:ContentPlaceHolder ID="HeaderContent" runat="server" /> </head> I usually put it toward the bottom of the header section.  It just seems the <title></title> should be on the top of the section and I like to keep it that way. Now open up the Index.aspx page under the ASP.NET MVC 2 Project located in the Views/Home/ directory.  When you open up that file add a <asp:Content><asp:Content> tag as shown in the next snippet. <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Home Page </asp:Content>   <asp:Content ID=headerContent ContentPlaceHolderID=HeaderContent runat=server>   </asp:Content>   <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2><%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. </p> </asp:Content> In that center tag, I am now going to add what is needed to appropriately embed the Silverlight object into the page.  The first thing I needed is a reference to the Silverlight.js file. <script type="text/javascript" src="Silverlight.js"></script> After that comes a bit of nitty gritty Javascript.  
I create another tag (and for those in the know, this is exactly like the generated code that is dumped into the *.html page generated with any Silverlight Project if you select to "add a test page that references the application".  The complete Javascript is below. function onSilverlightError(sender, args) { var appSource = ""; if (sender != null && sender != 0) { appSource = sender.getHost().Source; }   var errorType = args.ErrorType; var iErrorCode = args.ErrorCode;   if (errorType == "ImageError" || errorType == "MediaError") { return; }   var errMsg = "Unhandled Error in Silverlight Application " + appSource + "\n";   errMsg += "Code: " + iErrorCode + " \n"; errMsg += "Category: " + errorType + " \n"; errMsg += "Message: " + args.ErrorMessage + " \n";   if (errorType == "ParserError") { errMsg += "File: " + args.xamlFile + " \n"; errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } else if (errorType == "RuntimeError") { if (args.lineNumber != 0) { errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } errMsg += "MethodName: " + args.methodName + " \n"; }   throw new Error(errMsg); } I literally, since it seems to work fine, just use what is populated in the automatically generated page.  After getting the appropriate Javascript into place I put the actual Silverlight Object Embed code into the HTML itself.  Just so I know the positioning and for final verification when running the application I insert the embed code just below the Index.aspx page message.  As shown below. <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2> <%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website"> http://asp.net/mvc</a>. </p> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/CloudySilverlight.xap" /> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration: none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" /> </a> </object> <iframe id="_sl_historyFrame" style="visibility: hidden; height: 0px; width: 0px; border: 0px"></iframe> </div> </asp:Content> I then open up the Silverlight Project MainPage.xaml.  Just to make it visibly obvious that the Silverlight Application is running in the page, I added a button as shown below. <UserControl x:Class="CloudySilverlight.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">   <Grid x:Name="LayoutRoot" Background="White"> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="48,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> </Grid> </UserControl> Just for kicks, I added a message box that would popup, just to show executing functionality also. 
private void button1_Click(object sender, RoutedEventArgs e) { MessageBox.Show("It runs in the cloud!"); } I then executed the ASP.NET MVC 2 and could see the Silverlight Application in page.  With a quick click of the button, I got a message box.  Success! Now the next step is getting the ASP.NET MVC 2 Project and Silverlight published to the cloud.  As of Visual Studio 2010, Silverlight 4, and the latest Azure SDK, this is actually a ridiculously easy process. Navigate to the Azure Cloud Services web site. Once that is open go back in Visual Studio and right click on the cloud project and select publish. This will publish two files into a directory.  Copy that directory so you can easily paste it into the Azure Cloud Services web site.  You'll have to click on the application role in the cloud (I will have another blog entry soon about where, how, and best practices in the cloud). In the text boxes shown, select the application package file and the configuration file and place them in the appropriate text boxes.  This is the part were it comes in handy to have copied the directory path of the file location.  That way when you click on browser you can just paste that in, then hit enter.  The two files will be listed and you can select the appropriate file. Once that is done, name the service deployment.  Then click on publish.  After a minute or so you will see the following screen. Now click on run.  Once the MvcWebRole1 goes green (the little light symbol to the left of the status) click on the Web Site URL.  Be patient during this process too, it could take a minute or two.  The Silverlight application should again come up just like you ran it on your local machine. Once staging is up and running, click on the circular icon with two arrows to move staging to production.  Once you are done make sure the green light is again go for the production deploy, then click on the Web Site URL to verify the site is working.  At this point I had a successful development, staging, and production deployment. Thanks for reading, hope this was helpful.  I have more Windows Azure and other cloud related material coming, so stay tuned. Original Entry

    Read the article

  • Can't make AWUS036H work in Ubuntu 12.10

    - by sfrj
    I am using 64 bit Ubuntu 12.10. This is my kernel version: Linux 3.5.0-19-generic #30-Ubuntu SMP Tue Nov 13 17:48:01 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux My wireless card is an AWUSU36H The first thing I do to install the driver is copy the driver from the CD to the Downloads folder. cd /media/me/AWUS036H/Drivers/RTL8187L/Unix (Linux)/Linux driver for kernel 2.6.X$ cp rtl8187_linux_26.1025.0328.2007.tar.gz ~/Downloads/ Then I extract the tar tar xvfz rtl8187_linux_26.1025.0328.2007.tar.gz I navigate into the extracted folder, and I try to follow the instructions in the Readme.txt cd rtl8187_linux_26.1025.0328.2007 This are the contents of the folder: drv.tar.gz makedrv stack.tar.gz wlan0rmv ieee80211 ReadMe.txt wlan0dhcp wlan0up ifcfg-wlan0 rtl8187 wlan0down wpa_supplicant-0.4.9 This is what the Readme.txt says: Release Date: 2006-02-09, ver 1.2^M RTL8187 Linux driver version 1.2^M ^M --This driver supports RealTek RTL8187 Wireless LAN driver for ^M Fedora Core 2/3/4/5, Debian 3.1, Mandrake 10.2/Mandriva 2006, ^M SUSE 9.3/10.1/10.2, Gentoo 3.1, etc.^M - Support Client mode for either infrastructure or adhoc mode^M - Support WEP and WPAPSK connection^M ^M < Component >^M The driver is composed of several parts:^M 1. Module source code^M stack.tar.gz^M drv.tar.gz^M ^M 2. Script ot build the modules^M makedrv^M ^M 3. Script to load/unload modules^M wlan0up^M wlan0down ^M ^M 4. Script and configuration for DHCP^M "ReadMe.txt" [readonly] 140 lines, 4590 characters So what I do know is extract both of the compressed files: sudo tar xvfz drv.tar.gz sudo tar xvfz stack.tar.gz This 2 commands will add some data to the folders ieee80211 and rtl8187 At this point I get lost, and I don't know what to do. If I go in each of this 2 folders and I run the sudo make command then I get errors like this one: sudo makemake -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187 modules make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic' CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.c:64:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187.h:29:26: fatal error: linux/config.h: No such file or directory compilation terminated. 
make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o] Error 1 make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic' make: *** [modules] Error 2 If I try to run any of the script ./makedrv that the instructions describe, then I also get an error: ~/Downloads/rtl8187_linux_26.1025.0328.2007$ sudo ./makedrv [sudo] password for me: ieee80211/ ieee80211/license ieee80211/ieee80211_crypt.c ieee80211/ieee80211_tx.c ieee80211/ieee80211_softmac.c ieee80211/ieee80211_softmac_wx.c ieee80211/ieee80211_module.c ieee80211/ieee80211_crypt_ccmp.c ieee80211/ieee80211_rx.c ieee80211/tags ieee80211/ieee80211_crypt_tkip.c ieee80211/Makefile ieee80211/readme ieee80211/.tmp_versions/ ieee80211/.tmp_versions/ieee80211-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_wep-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_tkip-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt-rtl.mod ieee80211/.tmp_versions/ieee80211_crypt_ccmp-rtl.mod ieee80211/ieee80211_crypt_wep.c ieee80211/ieee80211.h ieee80211/ieee80211_wx.c ieee80211/ieee80211_crypt.h rtl8187/ rtl8187/license rtl8187/r8180_rtl8225z2.c rtl8187/r8180_rtl8225.h rtl8187/r8187_led.c rtl8187/r8180_93cx6.h rtl8187/r8180_wx.h rtl8187/r8180_hw.h rtl8187/copying rtl8187/r8187_led.h rtl8187/r8180_pm.h rtl8187/tags rtl8187/r8187.h rtl8187/Makefile rtl8187/r8180_rtl8225.c rtl8187/readme rtl8187/install rtl8187/.tmp_versions/ rtl8187/.tmp_versions/r8187.mod rtl8187/changes rtl8187/r8180_wx.c rtl8187/r8180_pm.c rtl8187/r8187_core.c rtl8187/r8180_93cx6.c rtl8187/authors rtl8187/ieee80211.h rtl8187/ieee80211_crypt.h rm -f *.mod.c *.mod *.o .*.cmd *.ko *~ rm -rf /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/tmp make -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211 modules make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic' CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.o In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17:0: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:1019:24: error: field ‘ps_task’ has incomplete type /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_scan_wq’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:421:2: warning: passing argument 2 of ‘queue_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:371:12: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_stop_scan’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:495:3: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by 
default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_associate_abort’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:915:2: warning: passing argument 2 of ‘queue_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:371:12: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_rx_frame_softmac’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:1527:3: error: implicit declaration of function ‘tasklet_schedule’ [-Werror=implicit-function-declaration] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_stop_protocol_rtl’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2120:2: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by default] In file included from include/linux/srcu.h:32:0, from include/linux/notifier.h:15, from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26, from include/linux/uprobes.h:35, from include/linux/mm_types.h:15, from include/linux/kmemcheck.h:4, from include/linux/skbuff.h:18, from include/linux/if_ether.h:134, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26, from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17: include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’ /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_init’: /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:78: error: macro "INIT_WORK" passed 3 arguments, but takes just 2 /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:2: error: ‘INIT_WORK’ undeclared (first use in this function) /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2229:2: note: each undeclared identifier is reported only once for each function it appears in /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2230:88: error: macro "INIT_WORK" passed 3 arguments, 
but takes just 2
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2231:94: error: macro "INIT_WORK" passed 3 arguments, but takes just 2
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2232:96: error: macro "INIT_WORK" passed 3 arguments, but takes just 2
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2233:82: error: macro "INIT_WORK" passed 3 arguments, but takes just 2
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2234:82: error: macro "INIT_WORK" passed 3 arguments, but takes just 2
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2244:2: error: implicit declaration of function ‘tasklet_init’ [-Werror=implicit-function-declaration]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_softmac_free’:
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2255:2: warning: passing argument 1 of ‘cancel_delayed_work’ from incompatible pointer type [enabled by default]
In file included from include/linux/srcu.h:32:0,
                 from include/linux/notifier.h:15,
                 from /usr/src/linux-headers-3.5.0-19-generic/arch/x86/include/asm/uprobes.h:26,
                 from include/linux/uprobes.h:35,
                 from include/linux/mm_types.h:15,
                 from include/linux/kmemcheck.h:4,
                 from include/linux/skbuff.h:18,
                 from include/linux/if_ether.h:134,
                 from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:26,
                 from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17:
include/linux/workqueue.h:410:20: note: expected ‘struct delayed_work *’ but argument is of type ‘struct work_struct *’
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: In function ‘ieee80211_wpa_set_encryption’:
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2489:3: error: implicit declaration of function ‘request_module’ [-Werror=implicit-function-declaration]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2518:3: error: implicit declaration of function ‘try_module_get’ [-Werror=implicit-function-declaration]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c: At top level:
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2663:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2664:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2665:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2666:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2667:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2668:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2669:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2670:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2671:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2672:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2673:1: warning: parameter names (without types) in function declaration [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: data definition has no type or storage class [enabled by default]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL’ [-Wimplicit-int]
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:2674:1: warning: parameter names (without types) in function declaration [enabled by default]
In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.c:17:0:
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211.h:1212:37: warning: ‘netdev_priv’ is static but used in inline function ‘ieee80211_priv’ which is not static [enabled by default]
cc1: some warnings being treated as errors
make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211/ieee80211_softmac.o] Error 1
make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/ieee80211] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic'
make: *** [modules] Error 2
rm -f *.mod.c *.mod *.o .*.cmd *.ko *~
rm -rf /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/tmp
make -C /lib/modules/3.5.0-19-generic/build M=/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187 modules
make[1]: Entering directory `/usr/src/linux-headers-3.5.0-19-generic'
CC [M] /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o
In file included from /home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.c:64:0:
/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187.h:29:26: fatal error: linux/config.h: No such file or directory
compilation terminated.
make[2]: *** [/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187/r8187_core.o] Error 1
make[1]: *** [_module_/home/me/Downloads/rtl8187_linux_26.1025.0328.2007/rtl8187] Error 2
make[1]: Leaving directory `/usr/src/linux-headers-3.5.0-19-generic'
make: *** [modules] Error 2

Can somebody give me a hand finding out what I need to do to make my wifi card work?

Update: this is the output of the lsusb command:

lsusb
Bus 003 Device 002: ID 147e:1000 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Read the article

  • Django app using Amazon AWS S3 storage instead of a DB?

    - by farble1670
    I'm new to Python, so bear with me ... I'm looking at Django for a rapid prototype of a photo-sharing app with an Amazon AWS S3 storage back end. However, as far as I can tell, Django is tailored toward the typical database-backed MVC pattern. Is there a way to, for example, provide a custom Django model implementation that talks to S3 instead of a DB? A custom DB engine? Would either of these be practical, or am I looking in the wrong direction? Thanks.
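    A minimal sketch of the approach usually suggested for this (not part of the original question): keep an ordinary database for the photo metadata and let Django's file-storage layer write the image bytes to S3, for example via the third-party django-storages package with boto. The bucket name, credentials, and the Photo model below are hypothetical placeholders.

        # settings.py -- assumes the django-storages and boto packages are installed
        DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
        AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY'        # placeholder credentials
        AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-KEY'
        AWS_STORAGE_BUCKET_NAME = 'my-photo-bucket'  # hypothetical bucket name

        # models.py -- metadata rows stay in the regular DB; only file contents go to S3
        from django.db import models

        class Photo(models.Model):
            owner = models.CharField(max_length=100)
            caption = models.CharField(max_length=255, blank=True)
            # Written to S3 through DEFAULT_FILE_STORAGE (ImageField needs PIL/Pillow)
            image = models.ImageField(upload_to='photos/')
            created = models.DateTimeField(auto_now_add=True)

    With something like this in place, photo.image.url resolves to the object's S3 URL, so neither a custom model base class nor a custom DB engine is needed: the ORM keeps handling queries while the storage backend handles the files.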

    Read the article

  • How can I remove .NET isolated storage setting folders during WiX uninstallation?

    - by Luke
    I would like to remove the isolated storage folders that are created by a .NET application when using My.Settings etc. The setting files are stored in a location like C:\Users\%Username%\AppData\Roaming\App\App.exe_Url_r0q1rvlnrqsgjkcosowa0vckbjarici4. As per this StackOverflow question (Removing files when uninstalling WiX), I can remove a folder's files on uninstall using:

        <Directory Id="AppDataFolder" Name="AppDataFolder">
          <Directory Id="MyAppFolder" Name="My">
            <Component Id="MyAppFolder" Guid="YOURGUID-7A34-4085-A8B0-8B7051905B24">
              <CreateFolder />
              <RemoveFile Id="PurgeAppFolder" Name="*.*" On="uninstall" />
            </Component>
          </Directory>
        </Directory>
        <!-- LocalAppDataFolder -->

    This doesn't cover sub-folders, though. Is the only option a custom .NET action, or is there a simpler approach for removing these .NET-generated settings folders?

    Read the article
