Search Results

Search found 2253 results on 91 pages for 'constant dean'.


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in nature, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post. We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB. The first chart illustrates some interesting points about the results: when the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size); for some of the moderately-sized source files, small blocks (256KB) are best; as the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs); once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant; and the 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. A second chart gives another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes. 
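    Before moving on to the remaining charts, here is a minimal sketch of the blocked, parallel upload approach for readers who want its shape without opening the linked solution. It is not the author's test harness (that code is linked above); it assumes the 1.x storage client types (CloudBlobContainer, CloudBlockBlob) and their PutBlock/PutBlockList methods, and the helper name and the absence of retry/error handling are illustrative only.

        // Sketch only: split a local file into fixed-size blocks, upload the blocks in
        // parallel with per-block MD5s, then commit the block list. Assumes the v1.x
        // Microsoft.WindowsAzure.StorageClient API discussed in the post.
        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockedUploadSketch
        {
            public static void UploadInBlocks(CloudBlobContainer container, string filePath, int blockSize)
            {
                CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(filePath));
                long fileLength = new FileInfo(filePath).Length;
                int blockCount = (int)((fileLength + blockSize - 1) / blockSize);
                string[] blockIds = new string[blockCount];

                Parallel.For(0, blockCount, i =>
                {
                    // Each worker reads its own slice of the source file.
                    int length = (int)Math.Min(blockSize, fileLength - (long)i * blockSize);
                    byte[] buffer = new byte[length];
                    using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read))
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        int read = 0;
                        while (read < length)
                            read += fs.Read(buffer, read, length - read);
                    }

                    // Block IDs must be base64 strings of equal length within a blob.
                    blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));

                    // Send the block with its MD5 so the service can verify the bits that arrived.
                    string md5 = Convert.ToBase64String(MD5.Create().ComputeHash(buffer));
                    blob.PutBlock(blockIds[i], new MemoryStream(buffer), md5);
                });

                // Assemble/commit the uploaded blocks into the final blob.
                blob.PutBlockList(blockIds);
            }
        }

    The flow mirrors the description above: block, hash, PutBlock in parallel, then a single PutBlockList to commit.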
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources: Source code for upload test application; Source code for random file generator; OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data); OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks); Excel worksheet showing summarizations and comparisons

    Read the article

  • Quaternion Camera Orbiting around a Sphere

    - by jessejuicer
    Background: I'm trying to create a game where the camera is always rotating around a single sphere. I'm using the DirectX D3DX math functions in C++ on Windows. The Problem: I cannot get both the camera position and orientation both working properly at the same time. Either one works but not both together. Here's the code for my quaternion camera that revolves around a sphere, always looking at the centerpoint of the sphere, ... as far as I understand it (but which isn't working properly): (I'm only going to present rotation around the X axis here, to simplify this post) Whenever the UP key is pressed or held down, the camera should rotate around the X axis, while looking at the centerpoint of the sphere (which is at 0,0,0 in the world). So, I build a quaternion that represents a small angle of rotation around the x axis like this (where 'deltaAngle' is a small enough number for a slow rotation): D3DXVECTOR3 rotAxis; D3DXQUATERNION tempQuat; tempQuat.x = 0.0f; tempQuat.y = 0.0f; tempQuat.z = 0.0f; tempQuat.w = 1.0f; rotAxis.x = 1.0f; rotAxis.y = 0.0f; rotAxis.z = 0.0f; D3DXQuaternionRotationAxis(&tempQuat, &rotAxis, deltaAngle); ...and I accumulate the result into the camera's current orientation quat, like this: D3DXQuaternionMultiply(&cameraOrientationQuat, &cameraOrientationQuat, &tempQuat); ...which all works fine. Now I need to build a view matrix to pass to DirectX SetTransform function. So I build a rotation matrix from the camera orientation quat as follows: D3DXMATRIXA16 rotationMatrix; D3DXMatrixIdentity(&rotationMatrix); D3DXMatrixRotationQuaternion(&rotationMatrix, &cameraOrientationQuat); ...Now (as seen below) if I just transpose that rotationMatrix and plug it into the 3x3 section of the view matrix, then negate the camera's position and plug it into the translation section of the view matrix, the rotation magically works. Perfectly. (even when I add in rotations for all three axes). There's no gimbal lock, just a smooth rotation all around in any direction. BUT- this works even though I never change the camera's position. At all. Which sorta blows my mind. I even display the camera position and can watch it stay constant at it's starting point (0.0, 0.0, -4000.0). It never moves, but the rotation around the sphere is perfect. I don't understand that. For proper view rotation, the camera position should be revolving around the sphere. Here's the rest of building the view matrix (I'll talk about the commented code below). Note that the camera starts out at (0.0, 0.0, -4000.0) and m_camDistToTarget is 4000.0: /* D3DXVECTOR3 vec1; D3DXVECTOR4 vec2; vec1.x = 0.0f; vec1.y = 0.0f; vec1.z = -1.0f; D3DXVec3Transform(&vec2, &vec1, &rotationMatrix); g_cameraActor->pos.x = vec2.x * g_cameraActor->m_camDistToTarget; g_cameraActor->pos.y = vec2.y * g_cameraActor->m_camDistToTarget; g_cameraActor->pos.z = vec2.z * g_cameraActor->m_camDistToTarget; */ D3DXMatrixTranspose(&g_viewMatrix, &rotationMatrix); g_viewMatrix._41 = -g_cameraActor->pos.x; g_viewMatrix._42 = -g_cameraActor->pos.y; g_viewMatrix._43 = -g_cameraActor->pos.z; g_viewMatrix._44 = 1.0f; g_direct3DDevice9->SetTransform( D3DTS_VIEW, &g_viewMatrix ); ...(The world matrix is always an identity, and the perspective projection works fine). ...So, without the commented code being compiled, the rotation works fine. But to be proper, for obvious reasons, the camera position should be rotating around the sphere, which it currently is not. That's what the commented code is supposed to do. 
    And when I add in that chunk of code to do that, and look at all the data as I hold the keys down (using UP, DOWN, LEFT, RIGHT to rotate different directions), all the values look correct! The camera position is rotating around the sphere just fine, and I can watch that happen visually too. The problem is that the camera orientation does not look at the center of the sphere. It always looks straight forward down the z axis (toward positive z) as it revolves around the sphere. Yet the values of both the rotation matrix and the view matrix seem to be behaving correctly. (The view matrix orientation is the same as the rotation matrix, just transposed.) For instance, if I just hold down the key to spin around the x axis, I can watch the values of the three axes represented in the view matrix (x, y, and z axes)... the view x-axis stays at (1.0, 0.0, 0.0), and the view y-axis and z-axis both spin around the x axis just fine. All the numbers are changing as they should be... well, almost. As far as I can tell, the position of the view matrix is spinning around the sphere in one direction (like clockwise), and the orientation (the axes in the view matrix) is spinning in the opposite direction (like counter-clockwise). Which I guess explains why the orientation appears to stay straight ahead. I know the position is correct. It revolves properly. It's the orientation that's wrong. Can anyone see what I am doing wrong? Am I using these functions incorrectly? Or is my algorithm flawed? As usual, I've been combing my code for simple mistakes for many hours. I'm willing to post the actual code, and a video of the behavior, but that will take much more effort. Thought I'd ask this way first.
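    A general note that may help readers puzzling over the same symptom (this is a hedged sketch, not a confirmed diagnosis of the code above): in a row-major, D3D-style view matrix the translation row is the camera position expressed in view space and negated, that is, dotted against the view basis vectors, rather than the raw world-space position negated. The tiny C# illustration below uses a made-up Vec3 type purely to show the arithmetic.

        // Illustration only: assemble a row-major view matrix from camera basis vectors
        // (right/up/forward, i.e. the rows of the transposed rotation) and the camera
        // position. Note the translation row uses dot products, not simply -eye.
        using System;

        struct Vec3
        {
            public double X, Y, Z;
            public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
            public static double Dot(Vec3 a, Vec3 b) { return a.X * b.X + a.Y * b.Y + a.Z * b.Z; }
        }

        static class OrbitViewSketch
        {
            public static double[,] BuildViewMatrix(Vec3 right, Vec3 up, Vec3 forward, Vec3 eye)
            {
                return new double[4, 4]
                {
                    { right.X,               up.X,               forward.X,               0 },
                    { right.Y,               up.Y,               forward.Y,               0 },
                    { right.Z,               up.Z,               forward.Z,               0 },
                    { -Vec3.Dot(eye, right), -Vec3.Dot(eye, up), -Vec3.Dot(eye, forward), 1 },
                };
            }
        }

    When the eye position is itself derived by rotating the initial offset with the same orientation quaternion, keeping the translation row in this dotted form keeps position and orientation consistent with each other.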

    Read the article

  • Expectations + Rewards = Innovation

    - by D'Arcy Lussier
    “Innovation” is a heavy word. We regard those that embrace it as “Innovators”. We describe organizations as being “Innovative”. We hold those associated with the word in high regard, even though its dictionary definition is very simple: Introducing something new. What our culture has done is wrapped Innovation in white robes and a gold crown. Innovation is rarely just introducing something new. Innovations and innovators are typically associated with other terms: groundbreaking, genius, industry-changing, creative, leading. Being a true innovator and creating innovations are a big deal, and something companies try to strive for…or at least say they strive for. There’s huge value in being recognized as an innovator in an industry, since the idea is that innovation equates to increased profitability. IBM ran an ad a few years back that showed what their view of innovation is: “The point of innovation is to make actual money.” If the money aspect makes you feel uneasy, consider it another way: the point of innovation is to <insert payoff here>. Companies that innovate will be more successful. Non-profits that innovate can better serve their target clients. Governments that innovate can better provide services to their citizens. True innovation is not easy to come by though. As with anything in business, how well an organization will innovate is reliant on the employees it retains, the expectations placed on those employees, and the rewards available to them. In a previous blog post I talked about one formula: Right Employees + Happy Employees = Productive Employees I want to introduce a new one, that builds upon the previous one: Expectations + Rewards = Innovation  The level of innovation your organization will realize is directly associated with the expectations you place on your staff and the rewards you make available to them. Expectations We may feel uncomfortable with the idea of placing expectations on our staff, mainly because expectation has somewhat of a negative or cold connotation to it: “I expect you to act this way or else!” The problem is in the or-else part…we focus on the negative aspects of failing to meet expectations instead of looking at the positive side. “I expect you to act this way because it will produce <insert benefit here>”. Expectations should not be set to punish but instead be set to ensure quality. At a recent conference I spoke with some Microsoft employees who told me that you have five years from starting with the company to reach a “Senior” level. If you don’t, then you’re let go. The expectation Microsoft placed on their staff is that they should be working towards improving themselves, taking more responsibility, and thus ensure that there is a constant level of quality in the workforce. Rewards Let me be clear: a paycheck is not a reward. A paycheck is simply the employer’s responsibility in the employee/employer relationship. A paycheck will never be the key motivator to drive innovation. Offering employees something over and above their required compensation can spur them to greater performance and achievement. Working in the food service industry, this tactic was used again and again: whoever has the highest sales over lunch will receive a free lunch/gift certificate/entry into a draw/etc. There was something to strive for, to try beyond the baseline of what our serving jobs were. It was through this that innovative sales techniques would be tried and honed, with key servers being top sellers time and time again. 
At a code camp I spoke at, I was amazed to see that all the employees from one company receive $100 Visa gift cards as a thank you for taking time to speak. Again, offering something over and above that can give that extra push for employees. Rewards work. But what about the fairness angle? In the restaurant example I gave, there were servers that would never win the competition. They just weren’t good enough at selling and never seemed to get better. So should those that did work at performing better and produce more sales for the restaurant not get rewarded because those who weren’t working at performing better might get upset? Of course not! Organizations succeed because of their top performers and those that strive to join their ranks. The Expectation/Reward Graph While the Expectations + Rewards = Innovation formula may seem like a simple mathematics formula, there’s much more going under the hood. In fact there are three different outcomes that could occur based on what you put in as values for Expectations and Rewards. Consider the graph below and the descriptions that follow: Disgruntled – High Expectation, Low Reward I worked at a company where the mantra was “Company First, Because We Pay You”. Even today I still hear stories of how this sentiment continues to be perpetuated: They provide you a paycheck and a means to live, therefore you should always put them as your top priority. Of course, this is a huge imbalance in the expectation/reward equation. Why would anyone willingly meet high expectations of availability, workload, deadlines, etc. when there is no reward other than a paycheck to show for it? Remember: paychecks are not rewards! Instead, you see employees be disgruntled which not only affects the level of production but also the level of quality within an organization. It also means that you see higher turnover. Complacent – Low Expectation, Low Reward Complacency is a systemic problem that typically exists throughout all levels of an organization. With no real expectations or rewards, nobody needs to excel. In fact, those that do try to innovate, improve, or introduce new things into the organization might be shunned or pushed out by the rest of the staff who are just doing things the same way they’ve always done it. The bigger issue for the organization with low/low values is that at best they’ll never grow beyond their current size (and may shrink actually), and at worst will cease to exist. Entitled – Low Expectation, High Reward It’s one thing to say you have the best people and reward them as such, but its another thing to actually have the best people and reward them as such. Organizations with Entitled employees are the former: their organization provides them with all types of comforts, benefits, and perks. But there’s no requirement before the rewards are dolled out, and there’s no short-list of who receives the rewards. Everyone in the company is treated the same and is given equal share of the spoils. Entitlement is actually almost identical with Complacency with one notable difference: just try to introduce higher expectations into an entitled organization! Entitled employees have been spoiled for so long that they can’t fathom having rewards taken from them, or having to achieve specific levels of performance before attaining them. Those running the organization also buy in to the Entitled sentiment, feeling that they must persist the same level of comforts to appease their staff…even though the quality of the employee pool may be suspect. 
Innovative – High Expectation, High Reward Finally we have the Innovative organization which places high expectations but also provides high rewards. This organization gets it: if you truly want the best employees you need to apply equal doses of pressure and praise. Realize that I’m not suggesting crazy overtime or un-realistic working conditions. I do not agree with the “Glengary-Glenross” method of encouragement. But as anyone who follows sports can tell you, the teams that win are the ones where the coaches push their players to be their best; to achieve new levels of performance that they didn’t know they could receive. And the result for the players is more money, fame, and opportunity. It’s in this environment that organizations can focus on innovation – true innovation that builds the business and allows everyone involved to truly benefit. In Closing Organizations love to use the word “Innovation” and its derivatives, but very few actually do innovate. For many, the term has just become another marketing buzzword to lump in with all the other business terms that get overused. But for those organizations that truly get the value of innovation, they will be the ones surging forward while other companies simply fade into the background. And they will be the organizations that expect more from their employees, and give them their just rewards.

    Read the article

  • CodePlex Daily Summary for Monday, July 15, 2013

    CodePlex Daily Summary for Monday, July 15, 2013Popular ReleasesMVC Forum: MVC Forum v1.0: Finally version 1.0 is here! We have been fixing a few bugs, and making sure the release is as stable as possible. We have also changed the way configuration of the application works, mostly how to add your own code or replace some of the standard code with your own. If you download and use our software, please give us some sort of feedback, good or bad!SharePoint 2013 TypeScript Definitions: Release 1.1: TypeScript 0.9 support SharePoint TypeScript Definitions are now compliant with the new version of TypeScript TypeScript generics are now used for defining collection classes (derivatives of SP.ClientCollection object) Improved coverage Added mQuery definitions (m$) Added SPClientAutoFill definitions SP.Utilities namespace is now fully covered SP.UI modal dialog definitions improved CSR definitions improved, added some missing methods and context properties, mostly related to list ...GoAgent GUI: GoAgent GUI ??? 1.0.0: ????GoAgent GUI????,???????????.Net Framework 4.0 ???????: Windows 8 x64 Windows 8 x86 Windows 7 x64 Windows 7 x86 ???????: Windows 8.1 Preview (x86/x64) ???????: Windows XP ????: ????????GoAgent????,????????,?????????????,???????????????????,??????????,????。PiGraph: PiGraph 2.0.8.13: C?p nh?t:Các l?i dã s?a: S?a l?i không nh?p du?c s? âm. L?i tabindex trong giao di?n Thêm hàm Các l?i chua kh?c ph?c: L?i ghi chú nh?p nháy màu. L?i khung ghi chú vu?t ra kh?i biên khi luu file. Luu ý:N?u không kh?i d?ng duoc chuong trình, b?n nên c?p nh?t driver card d? h?a phiên b?n m?i nh?t: AMD Graphics Drivers NVIDIA Driver Xem yêu c?u h? th?ngD3D9Client: D3D9Client R12 for Orbiter Beta: D3D9Client release for orbiter BetaVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Project Server 2013 Event Handler Admin Tool: PSI Event Admin Tool: Download & exract the File. Use LoggerAdmin to upload the event handlers in project server 2013. PSIEventLogger\LoggerAdmin\bin\DebugGherkin editor: Gherkin Editor Beta 2: Fix issue #7 and add some refactoring and code cleanupNew-NuGetPackage PowerShell Script: New-NuGetPackage.ps1 PowerShell Script v1.2: Show nuget gallery to push to when prompting user if they want to push their package.Site Templates By Steve: SharePoint 2010 CORE Site Theme By Steve WSP: Great Site Theme to start with from Steve. See project home page for install instructions. This is a nice centered, mega-menu, fixed width masterpage with styles. Remember to update the mega menu lists.SharePoint Solution Installer: SharePoint Solution Installer V1.2.8: setup2013.exe now supports CompatibilityLevel to target specific hive Use setup.exe for SP2007 & SP2010. Use setup2013.exe for SP2013.TBox - tool to make developer's life easier.: TBox 1.021: 1)Add console unit tests runner, to run unit tests in parallel from console. Also this new sub tool can save valid xml nunit report. It can help you in continuous integration. 
2)Fix build scripts.LifeInSharepoint Modern UI Update: Version 2: Some minor improvements, including Audience Targeting support for quick launch links. Also removing all NextDocs references.Virtual Photonics: VTS MATLAB Package 1.0.13 Beta: This is the beta release of the MATLAB package with examples of using the VTS libraries within MATLAB. Changes for this release: Added two new examples to vts_solver_demo.m that: 1) generates and plots R(lambda) at a given rho, and chromophore concentrations assuming a power law for scattering, and 2) solves inverse problem for R(lambda) at given rho. This example solves for concentrations of HbO2, Hb and H20 given simulated measurements created using Nurbs scaled Monte Carlo and inverted u...Advanced Resource Tab for Blend: Advanced Resource Tab: This is the first alpha release of the advanced resource tab for Blend for Visual Studio 2012.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...Outlook 2013 Add-In: Categories and Colors: This new version has a major change in the drawing of the list items: - Using owner drawn code to format the appointments using GDI (some flickering may occur, but it looks a little bit better IMHO, with separate sections). - Added category color support (if more than one category, only one color will be shown). Here, the colors Outlook uses are slightly different than the ones available in System.Drawing, so I did a first approach matching. ;-) - Added appointment status support (to show fr...Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsLINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneNew Projects[.Net Intl] harroc_c;mallar_a;olouso_f: The goal of this project is to create a web crawler and a web front who allows you to search in your index. You will create a mini (or large!) search engine basButterfly Storage: Butterfly Storage is a data access technology based on object-oriented database model for Windows Store applications.KaveCompany: KaveCompleave that girl alone: a team project!MyClrProfiler: This project helps you learn about and develop your own CLR profiler.NETDeob: Deobfuscate obfuscated .NET files easilyProgram stomatologie: SummarySimple Graph Library: Simple portable class library for graphs data structures. .NET, Silverlight 4/5, Windows Phone, Windows RT, Xbox 360T6502 Emulator: T6502 is a 6502 emulator written in TypeScript using AngularJS. 
The goal is well-organized, readable code over performance.WP8 File Access Webserver: C# HTTP server and web application on Windows Phone 8. Implements file access, browsing and downloading.wpadk: wpadk????wp7?????? ?????????,?????、SDK、wpadk?????????????。??????????????????。??????????????????,????wpadk?????????????????????????????????????。xlmUnit: xlmUnit, Unit Testing

    Read the article

  • Experiences with learning Chinese

    - by Greg Low
    I've had a few friends asking me about learning Chinese and what I've found works and doesn't work. I was answering a question on a mailing list today and I thought I should post this info where it might be useful to many. The question that was initially asked was whether Rosetta Stone was useful, but I've provided much more info on learning the language here. I've used Rosetta Stone with Chinese but it's really hard to know whether to recommend it or not. Rosetta Stone works the same way in all languages. They show you photos and then let you both see and hear the target language and get you to work out what they're talking about. The thinking is that that's how children learn. However, at first, I found it very frustrating. I'd be staring at photos trying to work out what they were really trying to get at. Sometimes it's far from obvious. I could not have survived without Google Translate open at the same time. The other weird thing is that the photos are from a mixture of countries. While that's good in a way, it also means that they are endlessly showing pictures of something that would never happen in the target language and culture. For any language, constant interaction with a speaker of the target language is needed. Rosetta Stone has a "Studio" option. That's the best part of the program. In my case, it lets me connect around twice a week to a live online class from Beijing. Classes usually have the teacher plus two to four students. You get some Studio access with the initial packages but need to purchase it for ongoing use. I find it very inexpensive. It seems to work out to about $70 (AUD/USD) for six months. That's a real bargain. The other downside to Rosetta Stone is that they tend to teach very formal language, but as with other languages, that's not how the locals speak. It might have been correct at one point but no-one actually says that. As an example, Rosetta Stone teach Gōnggòng qìchē (pronounced roughly like "gong gong chee chure") for bus. Most of my friends from areas like Taiwan would just say Gōngchē. Google Translate says Zǒngxiàn (pronounced somewhat like "dzong sheean") instead. Mind you, the Rosetta Stone option isn't really as bad as "omnibus"; it's more like saying "public bus". If you say the option they provide, people would understand you. I also listen to ChinesePod in the car. They also have SpanishPod. Each podcast is about five minutes of spoken conversation. It is very good for providing current language. Another resource I use is local Meetup groups. Most cities have these, and for a variety of languages. It's way less structured (just standard conversation) but good for getting interaction. The obvious challenge for Asian languages is reading/writing. The input editors for Chinese that are part of Windows are excellent. Many of my Chinese friends speak fluently but cannot read or write. I was determined to learn to do both. For writing, I'm talking about on a computer, not with a pen. (Mind you, I can barely write English with a pen nowadays.) When using Rosetta Stone, you can choose to have the Chinese words displayed in pinyin (Wǒ xǐhuan xuéxí zhōngguó) or in Chinese characters (我喜欢学习中国) or both. This year, I've been forcing myself to just use the Chinese characters. I use a pinyin input editor in Windows though, as it's very fast. (The character recognition input on the iPad is also amazing.) Notice from the example that I provided above that the pronunciation of the pinyin isn't that obvious to us at first either.  
    Since changing to only using characters, I find I can now read many more Chinese characters fluently. It's a major challenge though. I can read about 300 now, and yet you need around 2,500 to be able to read a newspaper fairly well. Tones are a major issue for some Asian languages. Mandarin has four tones (plus a neutral tone) and there is a major difference in meaning between two words that are spelled the same in pinyin but with different tones. For example, Mǎ (3rd tone, 马) is a horse, Mā (1st tone, 妈) is like "mom", and ma (neutral tone, 吗) is a question marker, and so on. Clearly you don't want to mix these up. As in English, they also have words that do sound the same but mean different things in different contexts. What's interesting is that even though we see two words that differ only by tone as very similar, to a native speaker, if you say the right words with the wrong tone, you might as well have said a completely different word. My wife's dialect of Chinese has eight tones. It's much worse. The reason I'm so keen to learn to read/write Chinese is that even though the different dialects are pronounced so differently that speakers of one dialect often cannot understand another dialect, the writing is generally the same. The only difference is that many years ago, the Chinese government created a simplified set of characters for some of the most commonly used ones. Older Chinese and most Cantonese speakers often struggle with the simplified characters. This is the simplified form of "three apples": 三个苹果. This is the traditional form of the same words: 三個蘋果. Note that two of the characters are the same but the middle two are quite different. For most languages, the best thing is to watch current movies in the target language but to watch them with the target language as subtitles, not your native language. You want to know what they actually said, not what it roughly means (which is what the English subtitle would give you). The difficulty with Asian languages like Chinese is that you have the added challenge of understanding the subtitles when they are written in the target language. I wish there were Mandarin Chinese movies with pinyin subtitles. For learning to read characters, I also recommend HanCard on the iPad. It is targeted at the HSK language proficiency levels. (I'm intending to take the first HSK exam as soon as I'm ready.) Hope that info helps someone get started.  

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high level operations like: Diff public types for breaking changes. Who uses a method? Who uses a type? Who implements an interface? Who references me? What format does the binary have (32/64, Managed C++, Pure IL, Unmanaged)? Search for all event subscribers and unsubscribers. A unique feature is to check for event subscription imbalances. Forgotten event subscriptions are the cause of 90% of managed memory leaks. It is done at a per-class level. If one class subscribes to an event more often than it unsubscribes, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant, something that is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we will get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check out the command line help. The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the PDBs, it can show you the source file name and line number as well for all matches. The following picture shows the output of a –WhousesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item along with file and line number. There is even a hyperlink to the match which will open Visual Studio. ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll –excel The "*” is the actual query, which means all types. The syntax is the same as in C#, except that placeholders are allowed ;-). More info can be found in the CodePlex documentation. The tool was developed in a TDD manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you I got the permission to make it public so you can take advantage of it. It is fully instrumented with tracing. If you find bugs, simply add the –trace command line switch to find out what is failing and send me the output. How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multi-threaded. The query above executed in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file. 
    I have no problems unloading a file, but if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in my memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, here is one of the unit tests:

        public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
        {
            // 1. Create an aggregator to collect our matches
            UsageQueryAggregator agg = new UsageQueryAggregator();

            // 2. This is the type we want to search for. Load it via the type query
            var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

            // 3. Register the type query which searches for uses of the Decimal type
            new WhoUsesType(agg, decimalType);

            // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
            agg.Analyze(TestConstants.DependandLibV1Assembly);

            // Extract matches and assert
            Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
        }

    Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure (CCI) Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high-level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is to try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea about what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I have not talked about (yet).
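    Since the post credits Mono.Cecil but does not show it directly, here is a rough sketch of what scanning an assembly for uses of a type can look like with Cecil. This is not ApiChange's actual implementation; it assumes a reasonably recent Mono.Cecil (AssemblyDefinition.ReadAssembly), and the method and variable names are illustrative only.

        // Sketch: report every method in an assembly that references members of a given type.
        using System;
        using Mono.Cecil;
        using Mono.Cecil.Cil;

        static class CecilSketch
        {
            public static void WhoUsesType(string assemblyPath, string targetFullName)
            {
                AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(assemblyPath);

                foreach (TypeDefinition type in assembly.MainModule.Types)
                foreach (MethodDefinition method in type.Methods)
                {
                    if (!method.HasBody)
                        continue;

                    foreach (Instruction instruction in method.Body.Instructions)
                    {
                        // Calls, field accesses and object creations carry a member
                        // reference whose declaring type we can inspect.
                        var member = instruction.Operand as MemberReference;
                        if (member != null && member.DeclaringType != null &&
                            member.DeclaringType.FullName == targetFullName)
                        {
                            Console.WriteLine("{0} uses {1} via {2}",
                                method.FullName, targetFullName, member.FullName);
                            break;
                        }
                    }
                }
            }
        }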

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
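    The scalar-versus-vector distinction above has a close analogue outside SQL. As an aside (this is not from the original post), the same semantics show up in C#/LINQ: Count() over an empty sequence still yields a value (zero), whereas grouping first yields no groups at all, so there is nothing to test for existence.

        // Analogy only: scalar vs. vector aggregation over an empty input.
        using System;
        using System.Linq;

        static class AggregateSemantics
        {
            static void Main()
            {
                // History rows for a product with no history at all (think ProductID 709).
                var history = Enumerable.Empty<int>();

                // "Scalar aggregate": COUNT over an empty input still produces a value.
                Console.WriteLine(history.Count());                        // 0

                // "Vector aggregate": grouping an empty input produces no groups,
                // so there is no count row at all - nothing "EXISTS".
                var grouped = history.GroupBy(id => id).Select(g => g.Count());
                Console.WriteLine(grouped.Any() ? "has rows" : "no rows"); // no rows
            }
        }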
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one. Tiling the extent We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three- dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example: extent<1> e(20); // 20 units in a single dimension with indices from 0-19 grid<1> g(e);      // same as extent tiled_grid<4> tg = g.tile<4>(); …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h). Let's move on swiftly to another example, in pictures, this time 2-dimensional: So we start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48), remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid. Tiled parallel_for_each and tiled_index It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads, also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object, can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things: array_view<int, 2> data(8, 6, p_my_data); parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d) { /* todo */ }); Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variables exposes, when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows: t_idx.global       = index<2> (6,3) t_idx.local          = index<2> (0,1) t_idx.tile_origin = index<2> (6,2) t_idx.tile             = index<2> (3,1) Don't move on until you are comfortable with this… the picture really helps, so use it. 
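    If it helps to see the relationships as arithmetic rather than as a picture, the following is a plain C# illustration (not C++ AMP itself, and not from the original post) of how the four indices relate for the tile<2,2> example and the highlighted thread T:

        // For a thread with a given global index and a tile size, derive the other three
        // indices a C++ AMP tiled_index exposes. Plain arithmetic only - this is an
        // illustration, not the AMP runtime.
        using System;

        static class TiledIndexSketch
        {
            static void Main()
            {
                int tileRows = 2, tileCols = 2;     // tile<2,2>
                int globalRow = 6, globalCol = 3;   // thread T from the example above

                int tileRow = globalRow / tileRows;          // 3
                int tileCol = globalCol / tileCols;          // 1
                int localRow = globalRow % tileRows;         // 0
                int localCol = globalCol % tileCols;         // 1
                int originRow = tileRow * tileRows;          // 6
                int originCol = tileCol * tileCols;          // 2

                Console.WriteLine("global      = ({0},{1})", globalRow, globalCol); // (6,3)
                Console.WriteLine("local       = ({0},{1})", localRow, localCol);   // (0,1)
                Console.WriteLine("tile_origin = ({0},{1})", originRow, originCol); // (6,2)
                Console.WriteLine("tile        = ({0},{1})", tileRow, tileCol);     // (3,1)
            }
        }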
Tiled Matrix Multiplication Example – part 1 Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?) 01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M, N, vC); 07: parallel_for_each(c.grid, 08: [=](index<2> idx) restrict(direct3d) { 09: 10: int row = idx[0]; int col = idx[1]; 11: float sum = 0.0f; 12: for(int i = 0; i < W; i++) 13: sum += a(row, i) * b(i, col); 14: c[idx] = sum; 15: }); 16: } To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant). 03: static const int TS = 16; ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows 07: parallel_for_each(c.grid.tile<TS,TS>(), ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08 08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) { ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows 09: index<2> idx = t_idx.global; ...and now this code just works and it is tiled! Closing thoughts on part 1 The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Who could ask for more with LESS CSS? (Part 3 of 3&ndash;Clrizr)

    - by ToString(theory);
    Welcome back!  In the first two posts in this series, I covered some of the awesome features in CSS precompilers such as SASS and LESS, as well as how to get an initial project setup up and running in ASP.Net MVC 4. In this post, I will cover an actual advanced example of using LESS in a project, and show some of the great productivity features we gain from its usage. Introduction In the first post, I mentioned two subjects that I will be using in this example – constants, and color functions.  I’ve always enjoyed using online color scheme utilities such as Adobe Kuler or Color Scheme Designer to come up with a scheme based off of one primary color.  Using these tools, and requesting a complementary scheme you can get a couple of shades of your primary color, and a couple of shades of a complementary/accent color to display. Because there is no way in regular css to do color operations or store variables, there was no way to accomplish something like defining a primary color, and have a site theme cascade off of that.  However with tools such as LESS, that impossibility becomes a reality!  So, if you haven’t guessed it by now, this post is on the creation of a plugin/module/less file to drop into your project, plugin one color, and have your primary theme cascade from it.  I only went through the trouble of creating a module for getting Complementary colors.  However, it wouldn’t be too much trouble to go through other options such as Triad or Monochromatic to get a module that you could use off of that. Step 1 – Analysis I decided to mimic Adobe Kuler’s Complementary theme algorithm as I liked its simplicity and aesthetics.  Color Scheme Designer is great, but I do believe it can give you too many color options, which can lead to chaos and overload.  The first thing I had to check was if the complementary values for the color schemes were actually hues rotated by 180 degrees at all times – they aren’t.  Apparently Adobe applies some variance to the complementary colors to get colors that are actually more aesthetically appealing to users.  So, I opened up Excel and began to plot complementary hues based on rotation in increments of 10: Long story short, I completed the same calculations for Hue, Saturation, and Lightness.  For Hue, I only had to record the Complementary hue values, however for saturation and lightness, I had to record the values for ALL of the shades.  Since the functions were too complicated to put into LESS since they aren’t constant/linear, but rather interval functions, I instead opted to extrapolate the HSL values using the trendline function for each major interval, onto intervals of spacing 1. For example, using the hue extraction, I got the following values: Interval Function 0-60 60-140 140-270 270-360 Saturation and Lightness were much worse, but in the end, I finally had functions for all of the intervals, and then went the route of just grabbing each shades value in intervals of 1.  Step 2 – Mapping I declared variable names for each of these sections as something that shouldn’t ever conflict with a variable someone would define in their own file.  
After I had each of the values, I extracted them and put them into files of their own for hue variables, saturation variables, and lightness variables…  Example: /*HUE CONVERSIONS*/@clrizr-hue-source-0deg: 133.43;@clrizr-hue-source-1deg: 135.601;@clrizr-hue-source-2deg: 137.772;@clrizr-hue-source-3deg: 139.943;@clrizr-hue-source-4deg: 142.114;.../*SATURATION CONVERSIONS*/@clrizr-saturation-s2SV0px: 0;@clrizr-saturation-s2SV1px: 0;@clrizr-saturation-s2SV2px: 0;@clrizr-saturation-s2SV3px: 0;@clrizr-saturation-s2SV4px: 0;.../*LIGHTNESS CONVERSIONS*/@clrizr-lightness-s2LV0px: 30;@clrizr-lightness-s2LV1px: 31;@clrizr-lightness-s2LV2px: 32;@clrizr-lightness-s2LV3px: 33;@clrizr-lightness-s2LV4px: 34;...   In the end, I have 973 lines of mapping/conversion from source HSL to shade HSL for two extra primary shades, and two complementary shades. The last bit of the work was the file to compose each of the shades from these mappings. Step 3 – Clrizr Mapper The final step was the hardest to overcome, as I was still trying to understand LESS to its fullest extent.  Imports As mentioned previously, I had separated the HSL mappings into different files, so the first necessary step is to import those for use into the Clrizr plugin: @import url("hue.less");@import url("saturation.less");@import url("lightness.less"); Extract Component Values For Each Shade Next, I extracted the necessary information for each shade HSL before shade composition: @clrizr-input-saturation: 1px+floor(saturation(@clrizr-input))-1;@clrizr-input-lightness: 1px+floor(lightness(@clrizr-input))-1; @clrizr-complementary-hue: formatstring("clrizr-hue-source-{0}", ceil(hue(@clrizr-input))); @clrizr-primary-2-saturation: formatstring("clrizr-saturation-s2SV{0}",@clrizr-input-saturation);@clrizr-primary-1-saturation: formatstring("clrizr-saturation-s1SV{0}",@clrizr-input-saturation);@clrizr-complementary-1-saturation: formatstring("clrizr-saturation-c1SV{0}",@clrizr-input-saturation); @clrizr-primary-2-lightness: formatstring("clrizr-lightness-s2LV{0}",@clrizr-input-lightness);@clrizr-primary-1-lightness: formatstring("clrizr-lightness-s1LV{0}",@clrizr-input-lightness);@clrizr-complementary-1-lightness: formatstring("clrizr-lightness-c1LV{0}",@clrizr-input-lightness); Here, you can see a couple of odd things…  On the first line, I am using operations to add units to the saturation and lightness.  This is due to some limitations in the operations that would give me saturation or lightness in %, which can’t be in a variable name.  So, I first add 1px to it, which casts the result of the following functions as px instead of %, and then at the end, I remove that pixel.  You can also see here the formatstring method, which is exactly what it sounds like – something like String.Format(string str, params object[] obj). Get Primary & Complementary Shades Now that I have components for each of the different shades, I can compose them into each of their pieces.
For this, I use the @@ operator which will look for a variable with the name specified in a string, and then call that variable: @clrizr-primary-2: hsl(hue(@clrizr-input), @@clrizr-primary-2-saturation, @@clrizr-primary-2-lightness);@clrizr-primary-1: hsl(hue(@clrizr-input), @@clrizr-primary-1-saturation, @@clrizr-primary-1-lightness);@clrizr-primary: @clrizr-input;@clrizr-complementary-1: hsl(@@clrizr-complementary-hue, @@clrizr-complementary-1-saturation, @@clrizr-complementary-1-lightness);@clrizr-complementary-2: hsl(@@clrizr-complementary-hue, saturation(@clrizr-input), lightness(@clrizr-input)); That’s it, for the most part.  These variables now hold the theme for the one input color – @clrizr-input.  However, I have one last addition… Perceptive Luminance Well, after I got the colors, I decided I wanted to also get the best font color that would go on top of it: black or white, depending on whether the color is light or dark.  Now I couldn’t just go with checking the lightness, as that is only half the story.  You see, the human eye doesn’t see ALL colors equally well but rather has more cells for interpreting green light compared to blue or red.  So, using the ratio, we can calculate the perceptive luminance of each of the shades, and get the font color that best matches it! @clrizr-perceptive-luminance-ps2: round(1 - ( (0.299 * red(@clrizr-primary-2) ) + ( 0.587 * green(@clrizr-primary-2) ) + (0.114 * blue(@clrizr-primary-2)))/255)*255;@clrizr-perceptive-luminance-ps1: round(1 - ( (0.299 * red(@clrizr-primary-1) ) + ( 0.587 * green(@clrizr-primary-1) ) + (0.114 * blue(@clrizr-primary-1)))/255)*255;@clrizr-perceptive-luminance-ps: round(1 - ( (0.299 * red(@clrizr-primary) ) + ( 0.587 * green(@clrizr-primary) ) + (0.114 * blue(@clrizr-primary)))/255)*255;@clrizr-perceptive-luminance-pc1: round(1 - ( (0.299 * red(@clrizr-complementary-1)) + ( 0.587 * green(@clrizr-complementary-1)) + (0.114 * blue(@clrizr-complementary-1)))/255)*255;@clrizr-perceptive-luminance-pc2: round(1 - ( (0.299 * red(@clrizr-complementary-2)) + ( 0.587 * green(@clrizr-complementary-2)) + (0.114 * blue(@clrizr-complementary-2)))/255)*255; @clrizr-col-font-on-primary-2: rgb(@clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2, @clrizr-perceptive-luminance-ps2);@clrizr-col-font-on-primary-1: rgb(@clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1, @clrizr-perceptive-luminance-ps1);@clrizr-col-font-on-primary: rgb(@clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps, @clrizr-perceptive-luminance-ps);@clrizr-col-font-on-complementary-1: rgb(@clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1, @clrizr-perceptive-luminance-pc1);@clrizr-col-font-on-complementary-2: rgb(@clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2, @clrizr-perceptive-luminance-pc2); Conclusion That’s it!  I have posted a project on clrizr.codePlex.com for this, and included a testing page for you to test out how it works.  Feel free to use it in your own project, and if you have any questions, comments or suggestions, please feel free to leave them here as a comment, or on the contact page!
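For readers who want the perceptive-luminance rule outside of LESS, here is a minimal sketch in C++ of the same weighting those variables encode. The 0.299/0.587/0.114 channel weights come straight from the formulas above; the Rgb struct and the function name are just hypothetical helpers for illustration.

    struct Rgb { int r, g, b; };    // hypothetical helper type: one 0-255 value per channel

    // Weight the channels by how sensitive the eye is to them (green most, blue least),
    // then return black text for light backgrounds and white text for dark ones.
    Rgb fontColorFor(const Rgb& background)
    {
        double luminance = (0.299 * background.r +
                            0.587 * background.g +
                            0.114 * background.b) / 255.0;   // 0.0 = very dark, 1.0 = very light
        int channel = (luminance > 0.5) ? 0 : 255;           // light background -> black text, dark -> white
        return Rgb{ channel, channel, channel };
    }

This is the same decision the @clrizr-col-font-on-* variables make, just written as ordinary code instead of LESS variable indirection.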

    Read the article

  • CodePlex Daily Summary for Sunday, September 29, 2013

    CodePlex Daily Summary for Sunday, September 29, 2013Popular ReleasesAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features -------- list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed ----- case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download end...Activity Viewer 2012: Activity Viewer 2012 V 5.0.0.3: Planning to add new features: 1. Import/Export rules 2. Tabular mode multi servers connections.Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...OfflineBrowser: Release v1.2: This release includes some multi-threading support, a better progress bar, more JavaScript fixes, and a help system. This release is also portable (can run with no issues from a flash drive).CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.0.0.34288 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Stereoscopic 3D display support. Based on Firestorm viewer 4.4.2 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-0-0-34288-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of ...CrmSvcUtil Generate Attribute Constants: Generate Attribute Constants (1.0.5018.28159): Built against version 5.0.15 of the CRM SDK Fixed issue where constant for primary key attribute was being duplicated in all entity classes Added ability to override base class for entity classesC# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. 
- MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen v2.1.3.api release: This is for the brave at heart, this is the maint release to update to the new movie api. please send feedback on fix requests.FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. - Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...Rawr: Rawr 5.4.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Sample MVC4 EF Codefirst Architecture: RazMVCWebApp ver 1.1: Signal R sample is added.CODE Framework: 4.0.30923.0: See change notes in the documentation section for details on what's new. Note: If you download the class reference help file with, you have to right-click the file, pick "Properties", and then unblock the file, as many browsers flag the file as blocked during download (for security reasons) and thus hides all content.JayData -The unified data access library for JavaScript: JayData 1.3.2 - Indian Summer Edition: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like WebAPI, OData, MongoDB, WebSQL, SQLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with KendoUI, Angular.js, Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video KendoUI examples: JayData example site Examples for map integration JayData example site What's new in JayData 1.3.2 - Indian Summer Edition For detai...ZXing.Net: ZXing.Net 0.12.0.0: sync with rev. 
2892 of the java version new PDF417 decoder improved Aztec decoder global speed improvements direct Kinect support for ColorImageFrame better Structured Append support many other small bug fixes and improvementsNew ProjectsCACHEDB: CLIENT-DATABASE || CLIENT_CACHEDB-DATABASEClassic WiX Burn Theme: A WiX Burn theme inspired by the classic WiX wizard user interface.CryptStr.Fody: A post-build weaver that encrypts literal strings in your .NET assemblies without breaking ClickOnce.Easy Code: A setting framework.EduSoft: This is a school eg.GameStuff: GameStuff is a library of Physics and Geometrics concepts for video game. Nekora Test Project: Nekora test projectPopCorn Console Game: Simple console gameRadioController: This project started from people installing Tablets in Mustangs. You would typically loose most control of the radio. This projects brings that back!Random searcher i pochodne: Wyszukiwarka plików multimedialnych i czego dusza zapragnie.SporkRandom: A .NET (C#, Visual Basic) interface for the true random number generator service of random.org

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.   Mike Chandler, Qualcomm Q: Did you run into any issues when integrating all of the different applications together? A: Definitely, our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel level agreement on a lot of things so we could ensure seamless transitions across applications. Q: What has been the biggest benefit your end users have seen? A: Wow, there have been several. An SSO enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that? A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution. Q: Given the performance, what do you estimate to be the top end capacity of the system? A: I think our top end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. DB connections, without impacting performance. Q: What's the overview or summary of feedback from the users interacting with the site? A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new LAF, and the performance of the site.
Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy. Q: Can you describe the impressions about the site before and after the project within Qualcomm? A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student.  Vince Casarez & Gourav Goyal, Keste Q: Did you run into any obstacles when implementing the solution? A: It's interesting: some people call them "obstacles"; on this project we just called them "dependencies".  There were both technical and business related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience.  There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used.  With branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler.  But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team.  In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum. Q: Was there a lot of custom work that needed to be done for this particular solution? A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically for the process flow, we had to build those. The framework and tooling we used made things easier so we didn’t have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes.  Task flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time. Q: What do you think were the keys to success for rolling out WebCenter? A: The 5 main keys to success were: 1) Sponsorship from the whole organization around this project, from senior executive agreement, business owners driving functionality, and IT development alignment; 2) Upfront design planning and use case definition to clearly define the project scope and requirements; 3) Focused development and project management aligned with the top level goals and drivers; 4) User acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) Constant prioritization of the issues for development to fix by the business.
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn’t join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING’s drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared with us all on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager at Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications. While we tackled a few questions during the webcast, we have captured the responses to those that we weren’t able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING’s implementation and strategy. Q. Did you include business friendly entitlement descriptions, or is the business seeing application descriptors? A. We include very business friendly descriptions.  The OIA tool has the facility to allow this. Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?   A. The new and old managers are in the workflow.  The tool can check for any Separation of Duties (SOD) violations with both having similar accesses.  It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer. Q. What versions of OIM and OIA are being used at ING?   A. OIM 11gR1 and OIA 11gR1; the very latest versions available. Q. Are you using an entitlements / role catalog?   A. Yes. We use both roles and entitlements. Q. What specific unexpected benefits did the Identity Warehouse provide ING?   A. The most unanticipated was to help Legal Hold identify user IDs in the various applications.   Other benefits included providing a one stop shop for all aggregated ID information. Q. How fine grained are your application and entitlements? Did OIA, OIM support that level of granularity?   A. We have some very fine grained entitlements, but we roll these up into approved Roles to allow for easier management.   For managing very fine grained entitlements, Oracle offers the Oracle Entitlement Server.  We currently do not own this software but are considering it. Q. Do you allow any individual access or is everything truly role based?   A. We are a hybrid environment with roles and individual positive and negative entitlements. Q. Did you use an Agile methodology like scrum to deliver functionality during your project? A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation. Q. How did you handle rolling out the standard ID format to existing users? A.
We just used the standard IDs for new users.  We have not taken on a project to address the existing nonstandard IDs. Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement TYPES? For example, an app may have different levels of access and it may need to know the user's country/state to associate them with particular customers.   A. We focus on the functional user and craft the role around their daily job requirements.  The role captures the required application entitlements.  To keep role explosion down, we use role mining in OIA and also meet and interview the business.  It is an iterative process to get role consensus. Q. Great presentation! How many rounds of Certifications has ING performed so far?  A. Around 7 quarters and constant certifications on transfer. Q. Did you have executive support from the top down? A. Yes. The executive support was key to our success. Q. For your cloud instance are you using OIA or OIM as SaaS?  A. No.  We are just provisioning and deprovisioning to various Cloud providers.  (ServiceNow is an example) Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? For example, a DBA role should not get the right to run something as root, as this would affect the root role. A. We have SOD checks.  Also, all Roles are initially approved by external audit and the role owners have to certify the roles and any changes. Q. What is your ratio of employees to roles?   A. We are still in the process of going through our various lines of business, so I do not have a final ratio.  From what we have seen, the ratio varies greatly depending on the Line of Business and the diversity of Job Functions.  For standardized lines of business such as call centers, the ratio is very good where we can have a single role that covers many employees.  For specialized lines of business like treasury, it can be one or two people per role. Q. Is ING using the Oracle On Demand service?   A. No. Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the identity warehouse as well if you haven't reached OIM yet? A. No, OIM deployment is not required to implement OIA’s Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together. Q. When is the Security Governor product coming out? A. Oracle Security Governor for Healthcare is available today. Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register Today. You can also register for a live event at a city near you where Aberdeen’s Derek Brink will discuss the survey results from the recently published report “Analyzing Platform vs. Point Solution Approach in Identity”. And, you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here’s the slide deck from our ING webcast: ING webcast platform

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, then a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn’t appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, but it’s also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That’s what “right-time retail” means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24hr society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day as the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sale transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed the constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data.
Nordstrom has 150 million SKU/Store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos does 1.78 billion calculations executed each day for replenishment planning (AIP). These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400rpm with PCs stepping up to 7,200rpm and servers hitting 15,000rpm. But the platters can only spin so fast, so to squeeze more performance we’ve had to rely on things like disk striping. Then solid state drives (SSDs) were introduced and prices continue to drop. (Augmenting your harddrive with a SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you’d go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston’s Fenway would hold 370,000 fans. The Exa-family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won’t be able to take advantage of the faster networks and storage. Enter what Harvard calls “The Sexiest Job of the 21st Century” – the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad-hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I’ll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead.  That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and science – this article will try to move it closer to science.  From the beginning, WLS data source has had initial capacity and maximum capacity configuration values.  When the system starts up and when it shrinks, initial capacity is used.  The pool can grow to maximum capacity.  Customers found that they might want to set the initial capacity to 0 (more on that later) but didn’t want the pool to shrink to 0.  In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink.  If minimum capacity is not set, it defaults to the initial capacity for upward compatibility.   We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections. The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity.  Doing this creates all connections at startup, avoids creating connections on demand, and keeps the pool stable.  However, there are a number of reasons not to take this simple approach. When WLS is booted, the deployment of the data source includes synchronously creating the connections.  The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available).  Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time).   WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time. There are a number of cases where it is desirable to set the initial capacity to 0.  By doing that, the overhead of creating connections is deferred out of the boot and the database doesn’t need to be available.  An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations. There are a number of cases where minimum capacity should be less than maximum capacity.  Connections are generally expensive to keep around.  They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process).  Depending on the vendor, connection usage may cost money.  If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment. Shrinking is an effective technique for clearing the pool when connections are not in use.  In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable.  There are also some data source features where the connection has state and cannot be used again unless the state matches the request.  Examples of this are identity based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node.  At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed. So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity.  Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common.  The applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don’t get/close more often than necessary).  This means that the size of the pool is likely to be significantly smaller than the number of users.   If possible, you can pick a size and see how it performs under simulated or real load.  There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used.  In general, you want the size to be big enough so that you never run out of connections but no bigger.   It will need to deal with spikes in usage, which is where shrinking after the spike is important.  Of course, the database capacity also has a big influence on the decision since it’s important not to overload the database machine.  Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down.  For XA affinity, additional headroom is also recommended.  In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
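As a rough illustration of the shrink behavior described above (this is just the arithmetic from the post, not WebLogic code), the 10.3.4 "release half of the unused connections" rule combined with the 10.3.6 minimum capacity floor can be sketched like this:

    #include <algorithm>

    // Sketch of the shrink rule: release half of the currently unused
    // connections, but never drop below the configured minimum capacity.
    int sizeAfterShrink(int currentSize, int inUse, int minCapacity)
    {
        int unused = currentSize - inUse;
        int target = currentSize - unused / 2;     // shrink by half of the idle connections
        return std::max(target, minCapacity);      // since 10.3.6, minimum capacity is the floor
    }

For example, a pool of 30 connections with 10 in use would shrink to 20, unless minimum capacity is set higher.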

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part, I shared how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and why these changes are as critical as the choice of the right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.   Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes.  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.  For cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization.
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Your Mission Critical Applications Efficiently It should not only be able to run your mission critical applications, it should do so better than before.  For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud".  Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.  Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution.  An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.  At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises.  And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year.  Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private, hybrid, infrastructure, platform and applications clouds. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry.  And in large part, this has been made possible thanks to our years of investment in creating cloud enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • flash core engine by Dinesh [closed]

    - by hdinesh
    This post was a dump of the following code (without the highlights). No question, just a dump. Please update this q. with a real question to have it reopened. You (the asker) risk to be flagged as spammer (if not already) and a bad reputation. This is a q/a site, not a site to promote your own code libraries. package facers { import flash.display.*; import flash.events.*; import flash.geom.ColorTransform; import flash.utils.Dictionary; import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.events.FileLoadEvent; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; import caurina.transitions.*; public class Main extends Sprite { public var viewport :BasicView; public var displayObject :DisplayObject3D; private var light :PointLight3D; private var shadowPlane :Plane; private var dataArray :Array; private var material :BitmapFileMaterial; private var planeByContainer :Dictionary = new Dictionary(); private var paperSize :Number = 0.5; private var cloudSize :Number = 1500; private var rotSize :Number = 360; private var maxAlbums :Number = 50; private var num :Number = 0; public function Main():void { trace("START APPLICATION"); viewport = new BasicView(1024, 690, true, true, CameraType.FREE); viewport.camera.zoom = 50; viewport.camera.extra = { goPosition: new DisplayObject3D(),goTarget: new DisplayObject3D() }; addChild(viewport); displayObject = new DisplayObject3D(); viewport.scene.addChild(displayObject); createAlbum(); addEventListener(Event.ENTER_FRAME, onRenderEvent); } private function createAlbum() { dataArray = new Array("images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg", "images/thums/pic1.jpg", "images/thums/pic2.jpg", "images/thums/pic3.jpg", "images/thums/pic4.jpg", "images/thums/pic5.jpg", "images/thums/pic6.jpg", "images/thums/pic7.jpg", "images/thums/pic8.jpg", "images/thums/pic9.jpg", "images/thums/pic10.jpg"); for (var i:int = 0; i < dataArray.length; i++) { material = new BitmapFileMaterial(dataArray[i]); material.doubleSided = true; material.addEventListener(FileLoadEvent.LOAD_COMPLETE, loadMaterial); } } public function loadMaterial(event:Event) { var plane:Plane = new Plane(material, 300, 180); displayObject.addChild(plane); var _x:int = Math.random() * cloudSize - cloudSize/2; var _y:int = Math.random() * cloudSize - cloudSize/2; var _z:int = Math.random() * cloudSize - cloudSize/2; var _rotationX:int = Math.random() * rotSize; var _rotationY:int = Math.random() * rotSize; var _rotationZ:int = Math.random() * rotSize; Tweener.addTween(plane, { x:_x, y:_y, z:_z, rotationX:_rotationX, rotationY:_rotationY, rotationZ:_rotationZ, time:5, transition:"easeIn" } ); } protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = 
(mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; viewport.singleRender(); } } } package designLab.events { import flash.display.BlendMode; import flash.display.Sprite; import flash.events.Event; import flash.filters.BlurFilter; // Import designLab import designLab.layer.IntroLayer; import designLab.shadow.ShadowCaster; import designLab.utils.LayerConstant; // Import Papervision3D import org.papervision3d.cameras.*; import org.papervision3d.scenes.*; import org.papervision3d.objects.*; import org.papervision3d.objects.special.*; import org.papervision3d.objects.primitives.*; import org.papervision3d.materials.*; import org.papervision3d.materials.special.*; import org.papervision3d.materials.shaders.*; import org.papervision3d.materials.utils.*; import org.papervision3d.lights.*; import org.papervision3d.render.*; import org.papervision3d.view.*; import org.papervision3d.events.InteractiveScene3DEvent; import org.papervision3d.events.*; import org.papervision3d.core.utils.*; import org.papervision3d.core.geom.renderables.Vertex3D; public class CoreEnging extends Sprite { public var viewport :BasicView; // Create BasicView public var displayObject :DisplayObject3D; // Create DisplayObject public var shadowCaster :ShadowCaster; // Create ShadowCaster private var light :PointLight3D; // Create PointLight private var shadowPlane :Plane; // Create Plane private var layer :LayerConstant; // Create constant resource layer private static var instance :CoreEnging; // Create CoreEnging class static instance // CoreEnging class static instance mathod function public static function getinstance() { if (instance != null) return instance; else { instance = new CoreEnging(); return instance; } } // CoreEnging constrictor public function CoreEnging () { trace("INFO: Design Lab Application : Core Enging v0.1"); layer = new LayerConstant(); viewport = new BasicView(900, 600, true, true, CameraType.FREE); // pass the width, height, scaleToStage, interactive, cameraType to BasicView viewport.camera.zoom = 100; // Define the zoom level of camera addChild(viewport); createFloor(); // Create the floor displayObject = new DisplayObject3D(); // Create new instance of DisplayObject viewport.scene.addChild(displayObject); // Add the DisplayObject to the BasicView light = new PointLight3D(); // Create new instance of PointLight light.z = -50; // Position the Z of create instance light.x = 0; //Position the X of create instance light.rotationZ = 45; //Position the rotation angel of the Z of create instance light.y = 500; //Position the Y of create instance shadowCaster = new ShadowCaster("shadow", 0x000000, BlendMode.MULTIPLY, .1, [new BlurFilter(20, 20, 1)]); // pass shadowcaster name, color, blend mode, alpha and filters shadowCaster.setType(ShadowCaster.SPOTLIGHT); // Define the shadow type addEventListener(Event.ENTER_FRAME, onRenderEvent); // Add frame render event } // Start create floor public function createFloor() { var spr:Sprite = new Sprite(); // Create Sprite spr.graphics.beginFill(0xFFFFFF); // Define the fill color for sprite spr.graphics.drawRect(0, 0, 600, 600); // Define the X, Y, width, height of the sprite var sprMaterial:MovieMaterial = new MovieMaterial(spr, true, true, true); //Create a texture from an existing sprite instance shadowPlane = new Plane(sprMaterial, 2000, 2000, 1, 1); // create new instance of the Plane and pass the texture 
material, width, height, segmentsW and segmentsH shadowPlane.rotationX = 80; //Position the rotation angel of the X of Plane shadowPlane.y = -200; //Position the Y of Plane viewport.scene.addChild(shadowPlane); // Add the Plane to the BasicView } // switch method function of the page layer control public function addLayer(type:String) { switch (type) { case layer.INTRO: var intro:IntroLayer = new IntroLayer(); break; } } // Create get mathod function for DisplayObject public function getDisplayObject():DisplayObject3D { return displayObject; } // Create get mathod function for BasicView public function getViewport():BasicView { return viewport; } // Rendering function protected function onRenderEvent(event:Event):void { var rotY: Number = (mouseY-(stage.stageHeight/2))/(900/2)*(1200); var rotX: Number = (mouseX-(stage.stageWidth/2))/(600/2)*(-1200); displayObject.rotationY = viewport.camera.x + (rotX - viewport.camera.x) / 50; displayObject.rotationX = viewport.camera.y + (rotY - viewport.camera.y) / 30; // Remove the shadow shadowCaster.invalidate(); // create new shadow on DisplayObject move shadowCaster.castModel(displayObject, light, shadowPlane); viewport.singleRender(); } } } package designLab.layer { import flash.display.Sprite; import flash.events.Event; // Import designLab import designLab.materials.iBusinessCard; import designLab.events.CoreEnging; // Import Papervision3D import org.papervision3d.objects.primitives.Cube; import org.papervision3d.materials.ColorMaterial; import org.papervision3d.materials.MovieMaterial; public class IntroLayer { // IntroLayer constrictor public function IntroLayer() { trace("INFO: Load Intro layer"); var indexDP:DP_index = new DP_index(); //Create the library MovieClip var blackMaterial:MovieMaterial = new MovieMaterial(indexDP, true); //Create a texture from an existing library MovieClip instance blackMaterial.smooth = true; blackMaterial.doubleSided = false; var mycolor:ColorMaterial = new ColorMaterial(0x000000); //Create solid color material var mycard:iBusinessCard = new iBusinessCard(blackMaterial, blackMaterial, mycolor, 372, 10, 207); // Create custom 3D cube object to pass the Front, Back, All, CubeWidth, CubeDepth and CubeHeight CoreEnging.getinstance().getDisplayObject().addChild(mycard.create3DCube()); // Add the custom 3D cube to the DisplayObject } } } package designLab.materials { import flash.display.*; import flash.events.*; // Import Papervision3D import org.papervision3d.materials.*; import org.papervision3d.materials.utils.MaterialsList; import org.papervision3d.objects.primitives.Cube; public class iBusinessCard extends Sprite { private var materialsList :MaterialsList; private var cube :Cube; private var Front :MovieMaterial = new MovieMaterial(); private var Back :MovieMaterial = new MovieMaterial(); private var All :ColorMaterial = new ColorMaterial(); private var CubeWidth :Number; private var CubeDepth :Number; private var CubeHeight :Number; public function iBusinessCard(Front:MovieMaterial, Back:MovieMaterial, All:ColorMaterial, CubeWidth:Number, CubeDepth:Number, CubeHeight:Number) { setFront(Front); setBack(Back); setAll(All); setCubeWidth(CubeWidth); setCubeDepth(CubeDepth); setCubeHeight(CubeHeight); } public function create3DCube():Cube { materialsList = new MaterialsList(); materialsList.addMaterial(Front, "front"); materialsList.addMaterial(Back, "back"); materialsList.addMaterial(All, "left"); materialsList.addMaterial(All, "right"); materialsList.addMaterial(All, "top"); materialsList.addMaterial(All, 
"bottom"); cube = new Cube(materialsList, CubeWidth, CubeDepth, CubeHeight); cube.x = 0; cube.y = 0; cube.z = 0; cube.rotationY = 180; return cube; } public function setFront(Front:MovieMaterial) { this.Front = Front; } public function getFront():MovieMaterial { return Front; } public function setBack(Back:MovieMaterial) { this.Back = Back; } public function getBack():MovieMaterial { return Back; } public function setAll(All:ColorMaterial) { this.All = All; } public function getAll():ColorMaterial { return All; } public function setCubeWidth(CubeWidth:Number) { this.CubeWidth = CubeWidth; } public function getCubeWidth():Number { return CubeWidth; } public function setCubeDepth(CubeDepth:Number) { this.CubeDepth = CubeDepth; } public function getCubeDepth():Number { return CubeDepth; } public function setCubeHeight(CubeHeight:Number) { this.CubeHeight = CubeHeight; } public function getCubeHeight():Number { return CubeHeight; } } } package designLab.shadow { import flash.display.Sprite; import flash.filters.BlurFilter; import flash.geom.Point; import flash.geom.Rectangle; import flash.utils.Dictionary; import org.papervision3d.core.geom.TriangleMesh3D; import org.papervision3d.core.geom.renderables.Triangle3D; import org.papervision3d.core.geom.renderables.Vertex3D; import org.papervision3d.core.math.BoundingSphere; import org.papervision3d.core.math.Matrix3D; import org.papervision3d.core.math.Number3D; import org.papervision3d.core.math.Plane3D; import org.papervision3d.lights.PointLight3D; import org.papervision3d.materials.MovieMaterial; import org.papervision3d.objects.DisplayObject3D; import org.papervision3d.objects.primitives.Plane; public class ShadowCaster { private var vertexRefs:Dictionary; private var numberRefs:Dictionary; private var lightRay:Number3D = new Number3D() private var p3d:Plane3D = new Plane3D(); public var color:uint = 0; public var alpha:Number = 0; public var blend:String = ""; public var filters:Array; public var uid:String; private var _type:String = "point"; private var dir:Number3D; private var planeBounds:Dictionary; private var targetBounds:Dictionary; private var models:Dictionary; public static var DIRECTIONAL:String = "dir"; public static var SPOTLIGHT:String = "spot"; public function ShadowCaster(uid:String, color:uint = 0, blend:String = "multiply", alpha:Number = 1, filters:Array=null) { this.uid = uid; this.color = color; this.alpha = alpha; this.blend = blend; this.filters = filters ? 
filters : [new BlurFilter()]; numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); planeBounds = new Dictionary(true); models = new Dictionary(true); } public function castModel(model:DisplayObject3D, light:PointLight3D, plane:Plane, faces:Boolean = true, cull:Boolean = false):void{ var ar:Array; if(models[model]) { ar = models[model]; }else{ ar = new Array(); getChildMesh(model, ar); models[model] = ar; } var reset:Boolean = true; for each(var t:TriangleMesh3D in ar){ if(faces) castFaces(light, t, plane, cull, reset); else castBoundingSphere(light, t, plane, 0.75, reset); reset = false; } } private function getChildMesh(do3d:DisplayObject3D, ar):void{ if(do3d is TriangleMesh3D) ar.push(do3d); for each(var d:DisplayObject3D in do3d.children) getChildMesh(d, ar); } public function setType(type:String="point"):void{ _type = type; } public function getType():String{ return _type; } public function castBoundingSphere(light:PointLight3D, target:TriangleMesh3D, plane:Plane, scaleRadius:Number=0.8, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); var b:BoundingSphere = target.geometry.boundingSphere; var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var tbounds:Object = targetBounds[target]; if(!tbounds){ tbounds = target.boundingBox(); targetBounds[target] = tbounds; } var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); var tlp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); var center:Number3D = new Number3D(tbounds.min.x+tbounds.size.x*0.5, tbounds.min.y+tbounds.size.y*0.5, tbounds.min.z+tbounds.size.z*0.5); var dif:Number3D = Number3D.sub(lp, center); dif.normalize(); var other:Number3D = new Number3D(); other.x = -dif.y; other.y = dif.x; other.z = 0; other.normalize(); var cross:Number3D = Number3D.cross(new Number3D(plane.transform.n12, plane.transform.n22, plane.transform.n32), p3d.normal); cross.normalize(); //cross = new Number3D(-dif.y, dif.x, 0); //cross.normalize(); cross.multiplyEq(b.radius*scaleRadius); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } //numberRefs = new Dictionary(true); var pos:Number3D; var c2d:Point; var r2d:Point; //_type = SPOTLIGHT; pos = projectVertex(new Vertex3D(center.x, center.y, center.z), lp, inv, target.world); c2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); pos = projectVertex(new Vertex3D(center.x+cross.x, center.y+cross.y, center.z+cross.z), lp, inv, target.world); r2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); var dx:Number = r2d.x-c2d.x; var dy:Number = r2d.y-c2d.y; var rad:Number = Math.sqrt(dx*dx+dy*dy); castClip.graphics.beginFill(color); castClip.graphics.moveTo(c2d.x, c2d.y); 
castClip.graphics.drawCircle(c2d.x, c2d.y, rad); castClip.graphics.endFill(); } public function getCastClip(plane:Plane):Sprite{ var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); var castClip:Sprite;// = new Sprite(); if(planeMovie.getChildByName("castClip"+uid)) return Sprite(planeMovie.getChildByName("castClip"+uid)); else{ castClip = new Sprite(); castClip.name = "castClip"+uid; castClip.scrollRect = new Rectangle(0, 0, movieSize.x, movieSize.y); //castClip.alpha = 0.4; planeMovie.addChild(castClip); return castClip; } } public function castFaces(light:PointLight3D, target:TriangleMesh3D, plane:Plane, cull:Boolean=false, clear:Boolean = true):void{ var planeVertices:Array = plane.geometry.vertices; //convert to target space? var world:Matrix3D = plane.world; var inv:Matrix3D = Matrix3D.inverse(plane.transform); var lp:Number3D = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(inv, lp); var tlp:Number3D; if(cull){ tlp = new Number3D(light.x, light.y, light.z); Matrix3D.multiplyVector(Matrix3D.inverse(target.world), tlp); } //Matrix3D.multiplyVector(Matrix3D.inverse(target.transform), tlp); //p3d.setThreePoints(planeVertices[0].getPosition(), planeVertices[1].getPosition(), planeVertices[2].getPosition()); p3d.setNormalAndPoint(plane.geometry.faces[0].faceNormal, new Number3D()); if(_type == DIRECTIONAL){ var oPos:Number3D = new Number3D(target.x, target.y, target.z); Matrix3D.multiplyVector(target.world, oPos); Matrix3D.multiplyVector(inv, oPos); dir = new Number3D(oPos.x-lp.x, oPos.y-lp.y, oPos.z-lp.z); } var bounds:Object = planeBounds[plane]; if(!bounds){ bounds = plane.boundingBox(); planeBounds[plane] = bounds; } var castClip:Sprite = getCastClip(plane); castClip.blendMode = this.blend; castClip.filters = this.filters; castClip.alpha = this.alpha; var planeMovie:Sprite = Sprite(MovieMaterial(plane.material).movie); var movieSize:Point = new Point(planeMovie.width, planeMovie.height); if(clear) castClip.graphics.clear(); vertexRefs = new Dictionary(true); //numberRefs = new Dictionary(true); var pos:Number3D; var p2d:Point; var s2d:Point; var hitVert:Number3D = new Number3D(); for each(var t:Triangle3D in target.geometry.faces){ if( cull){ hitVert.x = t.v0.x; hitVert.y = t.v0.y; hitVert.z = t.v0.z; if(Number3D.dot(t.faceNormal, Number3D.sub(tlp, hitVert)) <= 0) continue; } castClip.graphics.beginFill(color); pos = projectVertex(t.v0, lp, inv, target.world); s2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.moveTo(s2d.x, s2d.y); pos = projectVertex(t.v1, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); pos = projectVertex(t.v2, lp, inv, target.world); p2d = get2dPoint(pos, bounds.min, bounds.size, movieSize); castClip.graphics.lineTo(p2d.x, p2d.y); castClip.graphics.lineTo(s2d.x, s2d.y); castClip.graphics.endFill(); } } public function invalidate():void{ invalidateModels(); invalidatePlanes(); } public function invalidatePlanes():void{ planeBounds = new Dictionary(true); } public function invalidateTargets():void{ numberRefs = new Dictionary(true); targetBounds = new Dictionary(true); } public function invalidateModels():void{ models = new Dictionary(true); invalidateTargets(); } private function get2dPoint(pos3D:Number3D, min3D:Number3D, size3D:Number3D, movieSize:Point):Point{ return new Point((pos3D.x-min3D.x)/size3D.x*movieSize.x, ((-pos3D.y-min3D.y)/size3D.y*movieSize.y)); } 
private function projectVertex(v:Vertex3D, light:Number3D, invMat:Matrix3D, world:Matrix3D):Number3D{ var pos:Number3D = vertexRefs[v]; if(pos) return pos; var n:Number3D = numberRefs[v]; if(!n){ n = new Number3D(v.x, v.y, v.z); Matrix3D.multiplyVector(world, n); Matrix3D.multiplyVector(invMat, n); numberRefs[v] = n; } if(_type == SPOTLIGHT){ lightRay.x = light.x; lightRay.y = light.y; lightRay.z = light.z; }else{ lightRay.x = n.x-dir.x; lightRay.y = n.y-dir.y; lightRay.z = n.z-dir.z; } pos = p3d.getIntersectionLineNumbers(lightRay, n); vertexRefs[v] = pos; return pos; } } } package designLab.utils { public class LayerConstant { public const INTRO:String = "INTRO"; // Intro layer string constant } }

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straight-forward data. My question is not to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list standard plugins which are enabled by default on my system. So it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think each is used for. Please correct me if I am wrong about any of them. So let me split this question into three parts: plugins where I don't even understand the data, plugins where I understand the data but don't know what I should look out for, and plugins which I think I understand. Plugins where I don't even understand the data: These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge on operating systems/hardware.... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these while just guessing... Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes. Disk latency per device (Average IO wait): Not a clue what an "IO wait" is... IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all. Plugins where I understand the data but don't know what I should look out for: IOStat (blocks/second read/written): I assume the things to look out for here are spikes, which would mean that the device is in heavy use? Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant. VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details. Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph? inode table usage: What should I look for in this graph? Plugins which I think I understand: I'll be guessing some things here... correct me if I am wrong. Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition. Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config. eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware. eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with Firewall throughput.
number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate! processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux. process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some. cpu usage: Fairly straight-forward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing). file table usage: Number of actively open files. If this is reaching max, you may have a process opening but not properly releasing files. load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs. memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers, you are fine. swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!
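    For the disk graphs above it can help to know where munin gets the raw numbers: the kernel keeps per-device counters in /proc/diskstats, an "IO" is one completed read or write request, and the "IO wait"/latency graphs are roughly the time those requests spent being serviced divided by how many completed, sampled over an interval. Below is a rough sketch of that calculation, not munin's actual plugin code (the field positions assume the classic /proc/diskstats layout, and the device name is just an example):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Samples /proc/diskstats twice and derives IOs/second plus the average
    // time spent per IO -- the quantities behind the "Disk IOs per device"
    // and "Disk latency per device" graphs.
    public class DiskStatsSample {
        // Returns {reads completed, ms reading, writes completed, ms writing}.
        static long[] sample(String device) throws IOException {
            for (String line : Files.readAllLines(Paths.get("/proc/diskstats"))) {
                String[] f = line.trim().split("\\s+");
                if (f.length >= 14 && f[2].equals(device)) {
                    return new long[] { Long.parseLong(f[3]), Long.parseLong(f[6]),
                                        Long.parseLong(f[7]), Long.parseLong(f[10]) };
                }
            }
            throw new IllegalArgumentException("device not found: " + device);
        }

        public static void main(String[] args) throws Exception {
            String device = args.length > 0 ? args[0] : "sda"; // example device name
            long[] a = sample(device);
            Thread.sleep(10_000); // 10 second sampling window
            long[] b = sample(device);
            long ios = (b[0] - a[0]) + (b[2] - a[2]);
            long ms  = (b[1] - a[1]) + (b[3] - a[3]);
            System.out.printf("%s: %.1f IOs/s, avg %.2f ms per IO%n",
                    device, ios / 10.0, ios == 0 ? 0.0 : (double) ms / ios);
        }
    }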

    Read the article

  • Autodetect/mount SDCards and run script for them on Linux

    - by Brendan
    Hey Everyone, I'm currently running SME Server, and need to have a script run upon the attachment of SD cards to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues. I have a USB hub consisting of 10 USB ports; it shows up as: [root@server ~]# lsusb Bus 004 Device 002: ID 0000:0000 Bus 004 Device 001: ID 0000:0000 Bus 003 Device 001: ID 0000:0000 Bus 002 Device 001: ID 0000:0000 Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 001: ID 0000:0000 (The hub is the TERMINUS TECHNOLOGY INC entries.) As I cannot plug SD cards directly into the server, I use USB-to-SD-card attachments (10 of them) plugged into the hub to read the cards. Upon plugging the 10 attachments (without cards) into the hub, lsusb yields the following: [root@server ~]# lsusb Bus 004 Device 002: ID 0000:0000 Bus 004 Device 001: ID 0000:0000 Bus 003 Device 001: ID 0000:0000 Bus 002 Device 001: ID 0000:0000 Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc. Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC. Bus 001 Device 001: ID 0000:0000 As you can see, the readers are the "Genesys Logic, Inc" entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above), however my system recognises the cards fine, as indicated by dmesg: Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0 USB Mass Storage device found at 73 SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through sdd: sdd1 SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through sdd: sdd1 SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB) sdd: Write Protect is on sdd: Mode Sense: 03 00 80 00 sdd: assuming drive cache: write through sdd: sdd1 If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/) this works fine. What I'm really after is a solution that automounts each of the cards as they are inserted into the readers and executes a script for them (this will involve copying their contents to another directory).
My problem is that I don't know how to do this; I don't think udev will work as the USB devices don't change; if I could somehow get udev working with /dev/disk/by-path/ however I think this is doable (it seems to keep constant entries). ls /dev/disk returns: pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0 pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1 pci-0000:0b:01.0-scsi-0:0:1:0 pci-0000:0b:01.0-scsi-0:0:1:0-part1 pci-0000:0b:01.0-scsi-0:0:1:0-part2 From above, we can see I have only one card plugged into the reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Going mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 Works and places the card under /media/usbdisk/, however: mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/ doesn't work, and returns "mount: can't get address for /dev/disk/by-path/pci-0000" Any ideas and solutions would be great, I've seen the knowledge of a lot of the guys on here before so I'm hopeful someone can help me out. Thanks
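    Since the reader slots keep stable names under /dev/disk/by-path, one low-tech alternative to fighting udev is a small watcher that polls that directory, mounts any card partition that appears and hands the mount point to the copy script. This is only a sketch of the idea — the hub path prefix, the mount point layout and the script path are assumptions rather than values taken from the question, and unmount/removal handling is left out:

    import java.io.File;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.HashSet;
    import java.util.Set;

    // Polls /dev/disk/by-path for "-part1" entries on the reader hub, mounts
    // each new one under /media/slotN and runs a copy script against it.
    public class CardWatcher {
        private static final String BY_PATH = "/dev/disk/by-path";
        private static final String HUB_PREFIX = "pci-0000:00:1d.7-usb-0:4."; // assumed hub prefix
        private static final String COPY_SCRIPT = "/usr/local/bin/copy-card.sh"; // hypothetical script
        private static final Set<String> handled = new HashSet<String>();

        public static void main(String[] args) throws Exception {
            int slot = 1;
            while (true) {
                try (DirectoryStream<Path> dir = Files.newDirectoryStream(Paths.get(BY_PATH))) {
                    for (Path p : dir) {
                        String name = p.getFileName().toString();
                        // only card partitions on the hub, each handled once
                        if (!name.startsWith(HUB_PREFIX) || !name.endsWith("-part1")
                                || !handled.add(name)) continue;
                        File mountPoint = new File("/media/slot" + slot++);
                        mountPoint.mkdirs();
                        run("mount", p.toString(), mountPoint.getPath());
                        run(COPY_SCRIPT, mountPoint.getPath());
                    }
                }
                Thread.sleep(5000); // poll every 5 seconds
            }
        }

        private static void run(String... cmd) throws Exception {
            new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }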

    Read the article

  • Why is the latency on one LVM volume consistently higher?

    - by David Schmitt
    I've got a server with LVM over RAID1. One of the volumes has a consistently higher IO latency (as measured by the diskstats_latency munin plugin) than the other volumes from the same group. As you can see, the dark orange /root volume has consistently high IO latency. Actually ten times the average latency of the physical devices. It also has the highest Min and Max values. My main concern are not the peaks, which occur under high load, but the constant load on (semi-)idle. The server is running Debian Squeeze with the VServer kernel and has four VServer containers and one KVM guest. I'm looking for ways to fix - or at least understand - this situation. Here're some parts of the system configuration: root@kvmhost2:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/system--host-root 19G 3.8G 14G 22% / tmpfs 16G 0 16G 0% /lib/init/rw udev 16G 224K 16G 1% /dev tmpfs 16G 0 16G 0% /dev/shm /dev/md0 942M 37M 858M 5% /boot /dev/mapper/system--host-isos 28G 19G 8.1G 70% /srv/isos /dev/mapper/system--host-vs_a 30G 23G 6.0G 79% /var/lib/vservers/a /dev/mapper/system--host-vs_b 5.0G 594M 4.1G 13% /var/lib/vservers/b /dev/mapper/system--host-vs_c 5.0G 555M 4.2G 12% /var/lib/vservers/c /dev/loop0 4.4G 4.4G 0 100% /media/debian-6.0.0-amd64-DVD-1 /dev/loop1 4.4G 4.4G 0 100% /media/debian-6.0.0-i386-DVD-1 /dev/mapper/system--host-vs_d 74G 55G 16G 78% /var/lib/vservers/d root@kvmhost2:~# cat /proc/mounts rootfs / rootfs rw 0 0 none /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0 none /proc proc rw,nosuid,nodev,noexec,relatime 0 0 none /dev devtmpfs rw,relatime,size=16500836k,nr_inodes=4125209,mode=755 0 0 none /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0 /dev/mapper/system--host-root / ext3 rw,relatime,errors=remount-ro,data=ordered 0 0 tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0 tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0 fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0 /dev/md0 /boot ext3 rw,sync,relatime,errors=continue,data=ordered 0 0 /dev/mapper/system--host-isos /srv/isos ext3 rw,relatime,errors=continue,data=ordered 0 0 /dev/mapper/system--host-vs_a /var/lib/vservers/a ext3 rw,relatime,errors=continue,data=ordered 0 0 /dev/mapper/system--host-vs_b /var/lib/vservers/b ext3 rw,relatime,errors=continue,data=ordered 0 0 /dev/mapper/system--host-vs_c /var/lib/vservers/c ext3 rw,relatime,errors=continue,data=ordered 0 0 /dev/loop0 /media/debian-6.0.0-amd64-DVD-1 iso9660 ro,relatime 0 0 /dev/loop1 /media/debian-6.0.0-i386-DVD-1 iso9660 ro,relatime 0 0 /dev/mapper/system--host-vs_d /var/lib/vservers/d ext3 rw,relatime,errors=continue,data=ordered 0 0 root@kvmhost2:~# cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sda2[0] sdb2[1] 975779968 blocks [2/2] [UU] md0 : active raid1 sda1[0] sdb1[1] 979840 blocks [2/2] [UU] unused devices: <none> root@kvmhost2:~# iostat -x Linux 2.6.32-5-vserver-amd64 (kvmhost2) 06/28/2012 _x86_64_ (8 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 3.09 0.14 2.92 1.51 0.00 92.35 Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sda 23.25 161.12 7.46 37.90 855.27 1596.62 54.05 0.13 2.80 1.76 8.00 sdb 22.82 161.36 7.36 37.66 850.29 1596.62 54.35 0.54 12.01 1.80 8.09 md0 0.00 0.00 0.00 0.00 0.14 0.02 38.44 0.00 0.00 0.00 0.00 md1 0.00 0.00 53.55 198.16 768.01 1585.25 9.35 0.00 0.00 0.00 0.00 dm-0 0.00 0.00 0.48 20.21 16.70 161.71 8.62 0.26 12.72 0.77 1.60 dm-1 0.00 0.00 3.62 10.03 28.94 80.21 8.00 0.19 13.68 1.59 2.17 dm-2 0.00 0.00 0.00 0.00 0.00 0.00 9.17 0.00 
9.64 6.42 0.00 dm-3 0.00 0.00 6.73 0.41 53.87 3.28 8.00 0.02 3.44 0.12 0.09 dm-4 0.00 0.00 17.45 18.18 139.57 145.47 8.00 0.42 11.81 0.76 2.69 dm-5 0.00 0.00 2.50 46.38 120.50 371.07 10.06 0.69 14.20 0.46 2.26 dm-6 0.00 0.00 0.02 0.10 0.67 0.81 12.53 0.01 75.53 18.58 0.22 dm-7 0.00 0.00 0.00 0.00 0.00 0.00 7.99 0.00 11.24 9.45 0.00 dm-8 0.00 0.00 22.69 102.76 407.25 822.09 9.80 0.97 7.71 0.39 4.95 dm-9 0.00 0.00 0.06 0.08 0.50 0.62 8.00 0.07 481.23 11.72 0.16 root@kvmhost2:~# ls -l /dev/mapper/ total 0 crw------- 1 root root 10, 59 May 11 11:19 control lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm1 -> ../dm-4 lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm2 -> ../dm-3 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-isos -> ../dm-2 lrwxrwxrwx 1 root root 7 May 11 11:19 system--host-root -> ../dm-0 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-swap -> ../dm-9 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_d -> ../dm-8 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_b -> ../dm-6 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_c -> ../dm-7 lrwxrwxrwx 1 root root 7 Jun 5 15:06 system--host-vs_a -> ../dm-5 lrwxrwxrwx 1 root root 7 Jun 5 15:08 system--host-kvm3 -> ../dm-1 root@kvmhost2:~#
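    Not related to the root cause, but when reading iostat output like the above it is handy to print the dm-N to volume-name mapping instead of eyeballing ls -l /dev/mapper; the kernel exposes it under /sys/block/dm-*/dm/name (available on reasonably recent kernels — treat its presence on this system as an assumption). A small sketch:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Prints the mapping from dm-N block devices (as named by iostat) to
    // their device-mapper / LVM names, read from sysfs.
    public class DmNames {
        public static void main(String[] args) throws IOException {
            try (DirectoryStream<Path> dir =
                    Files.newDirectoryStream(Paths.get("/sys/block"), "dm-*")) {
                for (Path dev : dir) {
                    String name = Files.readAllLines(dev.resolve("dm/name"),
                            StandardCharsets.UTF_8).get(0).trim();
                    System.out.printf("%-6s -> %s%n", dev.getFileName(), name);
                }
            }
        }
    }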

    Read the article

  • e2fsck extremly slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
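    For what it's worth, the "will this ever finish" arithmetic is easy to redo whenever a new lseek offset shows up in strace: it is a straight linear extrapolation from one offset and the elapsed time. A tiny sketch with the numbers quoted above (it lands in the same ballpark as the ~134 days mentioned, and the constant-rate, single-pass assumption is exactly what the UPDATE calls into question):

    // Linear extrapolation of an e2fsck run from one lseek offset seen in
    // strace and the elapsed wall-clock time. Assumes a constant rate and a
    // single sequential pass over the device.
    public class FsckEta {
        public static void main(String[] args) {
            double diskBytes = 732566645L * 4096.0; // size reported by dmesg
            double offset    = 48_404_774_912L;     // lseek offset after ~48 h
            double hoursRun  = 48.0;

            double bytesPerHour = offset / hoursRun;
            double daysLeft     = (diskBytes - offset) / bytesPerHour / 24.0;
            System.out.printf("scanned %.1f%% of the disk, ~%.0f days left at this rate%n",
                    100.0 * offset / diskBytes, daysLeft);
        }
    }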

    Read the article

  • Set the JAXB context factory initialization class to be used

    - by user1902288
    I have updated our projects (Java EE based, running on WebSphere 8.5) to use a new release of a company-internal framework (and EJB 3.x deployment descriptors rather than the 2.x ones). Since then my integration tests fail with the following exception: [java.lang.ClassNotFoundException: com.ibm.xml.xlxp2.jaxb.JAXBContextFactory] I can build the application with the previous framework release and everything works fine. While debugging I noticed that within the ContextFinder (javax.xml.bind) there are two different behaviours: Previous version (everything works just fine): none of the different places brings up a factory class, so the default factory class gets loaded, which is com.sun.xml.internal.bind.v2.ContextFactory (defined as a String constant within the class). Upgraded version (ClassNotFound): there is a resource "META-INF/services/javax.xml.bind.JAXBContext" being loaded successfully, and the first line read makes the ContextFinder attempt to load "com.ibm.xml.xlxp2.jaxb.JAXBContextFactory", which causes the error. I now have two questions: What sort of resource is that? Because inside our EAR there are two WARs, and neither of them contains a services folder in its META-INF directory. Where could that value be from otherwise? Because a file diff showed me no new or changed properties files. Needless to say, I am going to read all about the JAXB configuration possibilities, but if you have any first insights on what could have gone wrong, or can help me out with that resource (is it a real file I have to look for?), I'd appreciate it a lot. Many thanks! EDIT (according to comment input/questions): Out of curiosity, does your framework include JAXB JARs? Did the old version of your framework include jaxb.properties? Indeed (I am a bit surprised) the framework has a customized eclipselink-2.4.1-.jar inside the EAR that includes both a JAXB implementation and a jaxb.properties file that shows the following entry in both versions (the one that finds the factory as well as the one that throws the exception): javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory I think this has nothing to do with the current issue since the jar stayed exactly the same in both EARs (the one that runs / the one with the exception). It's also not clear to me why the old version of the framework was ever selecting the com.sun implementation. There is a class javax.xml.bind.ContextFinder which is responsible for initializing the JAXBContextFactory. This class searches various places for the existence of a jaxb.properties file or a "javax.xml.bind.JAXBContext" resource. If ALL of those places fail to say which context factory to use, a default factory is loaded which is hardcoded in the class itself: private static final String PLATFORM_DEFAULT_FACTORY_CLASS = "com.sun.xml.internal.bind.v2.ContextFactory"; Now back to my problem: Building with the previous version of the framework (and EJB 2.x deployment descriptors), everything works fine. While debugging I can see that no configuration is found and therefore the above-mentioned default factory is loaded. Building with the new version of the framework (and EJB 3.x deployment descriptors so I can deploy), ONLY A TESTCASE fails, but the rest of the functionality works (I can send requests to our webservice and they don't trigger any errors). While debugging I can see that a configuration is found. This resource is named "META-INF/services/javax.xml.bind.JAXBContext".
Here are the most important lines showing how this resource leads to the attempt to load 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory', which then throws the ClassNotFoundException. This is simplified source from the mentioned javax.xml.bind.ContextFinder class: URL resourceURL = ClassLoader.getSystemResource("META-INF/services/javax.xml.bind.JAXBContext"); BufferedReader r = new BufferedReader(new InputStreamReader(resourceURL.openStream(), "UTF-8")); String factoryClassName = r.readLine().trim(); The field factoryClassName now has the value 'com.ibm.xml.xlxp2.jaxb.JAXBContextFactory'. (The day I understand how to format source code on Stack Overflow will be my biggest step ahead... sorry for the formatting, after 20 minutes it still looks the same.) Because this has become a super large question, I will also add a bounty :)
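    A quick way to settle the "is it a real file I have to look for?" part is to ask the classloader where that service entry comes from: it is a plain text file packaged inside some JAR on the runtime classpath (on WebSphere most likely one of the IBM-supplied JAXB bundles rather than your WARs — an assumption worth verifying). A small sketch using only the standard classloader API; run it inside the application or a test with the same classpath, so the container's classloaders are in play:

    import java.net.URL;
    import java.util.Enumeration;

    // Lists every classpath entry that contributes the service file the JAXB
    // ContextFinder reads, so you can see which JAR supplies the
    // com.ibm.xml.xlxp2.jaxb.JAXBContextFactory entry.
    public class FindJaxbServiceFiles {
        public static void main(String[] args) throws Exception {
            Enumeration<URL> urls = Thread.currentThread().getContextClassLoader()
                    .getResources("META-INF/services/javax.xml.bind.JAXBContext");
            while (urls.hasMoreElements()) {
                System.out.println(urls.nextElement()); // e.g. jar:file:/...!/META-INF/services/...
            }
        }
    }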

    Read the article

  • WCF net.tcp windows service - call duration and calls outstanding increases over time

    - by Brook
    I have a Windows service which uses the ServiceHost class to host a WCF service using the net.tcp binding. I have done some tweaking to the config to throttle sessions as well as the number of connections, but it seems that every once in a while my "Calls outstanding" and "Call duration" shoot up and stay up in perfmon. It seems to me I have a leak somewhere, but the code I have is all fairly minimal; I'm relying on ServiceHost to handle the details. Here's how I start my service: ServiceHost host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); My Faulted event just does the following (more or less; logging etc. removed): if (host.State == CommunicationState.Faulted) { host.Abort(); } else { host.Close(); } host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); Here are some snippets from my app.config to show some of the things I've tried: <runtime> <gcConcurrent enabled="true" /> <generatePublisherEvidence enabled="false" /> </runtime> ......... <behaviors> <serviceBehaviors> <behavior name="Throttled"> <serviceThrottling maxConcurrentCalls="300" maxConcurrentSessions="300" maxConcurrentInstances="300" /> .......... <services> <service name="MyService" behaviorConfiguration="Throttled"> <endpoint address="net.tcp://localhost:49001/MyService" binding="netTcpBinding" bindingConfiguration="Tcp" contract="IMyService"> </endpoint> </service> </services> .......... <netTcpBinding> <binding name="Tcp" openTimeout="00:00:10" closeTimeout="00:00:10" portSharingEnabled="true" receiveTimeout="00:5:00" sendTimeout="00:5:00" hostNameComparisonMode="WeakWildcard" listenBacklog="1000" maxConnections="1000"> <reliableSession enabled="false"/> <security mode="None"/> </binding> </netTcpBinding> .......... <!--for my diagnostics--> <diagnostics performanceCounters="ServiceOnly" wmiProviderEnabled="true" /> There's obviously some resource getting tied up, but I thought I covered everything with my config. I'm only getting about ~150 clients so I don't think I'm coming up against my "300" limit. "Calls per second" stays constant at anywhere from 2-5 calls per second. The service will run for hours and hours with 0-2 "calls outstanding" and very low "call duration", and then eventually it will shoot up to 30 calls outstanding and 20s call duration. Any tips on what might be causing my "calls outstanding" and "call duration" to spike? Where am I leaking? Point me in the right direction?

    Read the article

  • Rails 2.3.5 and oauth-plugin

    - by pgb
    I started a rails application from scratch, using Rails 2.3.5, and installed oauth-plugin. The installation was done by running script/plugin install git://github.com/pelle/oauth-plugin.git. Now, when I try to start the server, I get the following errors: => Rails 2.3.5 application starting on http://0.0.0.0:3000 /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:440:in `load_missing_constant': uninitialized constant Rails::Railtie (NameError) from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing' from /Users/Pablo/Projects/test.oauth/vendor/plugins/oauth-plugin/lib/oauth-plugin.rb:16 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from /Users/Pablo/Projects/test.oauth/vendor/plugins/oauth-plugin/rails/init.rb:1:in `evaluate_init_rb' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:158:in `evaluate_init_rb' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:154:in `evaluate_init_rb' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin.rb:48:in `load' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:38:in `load_plugins' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:37:in `each' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/plugin/loader.rb:37:in `load_plugins' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:369:in `load_plugins' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:165:in `process' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from /Users/Pablo/Projects/test.oauth/config/environment.rb:9 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from 
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:84 from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 I can't figure out why this is failing. Am I missing a dependency? What other information I can provide to others help me figure this out?

    Read the article

  • Java: micro-optimizing array manipulation

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed): /** * Simple implementation of a feedforward neural network. The network supports * including a bias neuron with a constant output of 1.0 and weighted synapses * to hidden and output layers. * * @author Martin Wiboe */ public class FeedForwardNetwork { private final int outputNeurons; // No of neurons in output layer private final int inputNeurons; // No of neurons in input layer private int largestLayerNeurons; // No of neurons in largest layer private final int numberLayers; // No of layers private final int[] neuronCounts; // Neuron count in each layer, 0 is input // layer. private final float[][][] fWeights; // Weights between neurons. // fWeight[fromLayer][fromNeuron][toNeuron] // is the weight from fromNeuron in // fromLayer to toNeuron in layer // fromLayer+1. private float[][] neuronOutput; // Temporary storage of output from previous layer public float[] compute(float[] input) { // Copy input values to input layer output for (int i = 0; i < inputNeurons; i++) { neuronOutput[0][i] = input[i]; } // Loop through layers for (int layer = 1; layer < numberLayers; layer++) { // Loop over neurons in the layer and determine weighted input sum for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) { // Bias neuron is the last neuron in the previous layer int biasNeuron = neuronCounts[layer - 1]; // Get weighted input from bias neuron - output is always 1.0 float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron]; // Get weighted inputs from rest of neurons in previous layer for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) { activation += neuronOutput[layer-1][inputNeuron] * fWeights[layer - 1][inputNeuron][neuron]; } // Store neuron output for next round of computation neuronOutput[layer][neuron] = sigmoid(activation); } } // Return output from network = output from last layer float[] result = new float[outputNeurons]; for (int i = 0; i < outputNeurons; i++) result[i] = neuronOutput[numberLayers - 1][i]; return result; } private final static float sigmoid(final float input) { return (float) (1.0F / (1.0F + Math.exp(-1.0F * input))); } } I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
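    Not an answer from this thread, just a sketch of the restructuring that is usually tried first on a loop like this: swap the two inner loops so the innermost index walks one contiguous weight row, and hoist the repeated 2-D/3-D array dereferences into locals. It computes the same weighted sums as the compute() method above (bias handled as the last weight row); whether it actually helps has to be measured with the real layer sizes:

    // One layer of the forward pass with the inner loops swapped so that the
    // innermost loop reads one weight row sequentially. Mathematically
    // equivalent to the original weighted-sum loops.
    public final class LayerForward {

        // prevOut: outputs of the previous layer (bias excluded);
        // weights:  [input neuron][target neuron], last row = bias weights;
        // out:      receives the activations of this layer.
        static void forward(float[] prevOut, float[][] weights, float[] out) {
            int inputs  = prevOut.length;
            int neurons = out.length;
            float[] activation = new float[neurons];

            // bias neuron: constant output 1.0, weights stored in the last row
            System.arraycopy(weights[inputs], 0, activation, 0, neurons);

            for (int in = 0; in < inputs; in++) {
                float o = prevOut[in];
                float[] row = weights[in]; // one contiguous row per input neuron
                for (int n = 0; n < neurons; n++) {
                    activation[n] += o * row[n];
                }
            }
            for (int n = 0; n < neurons; n++) {
                out[n] = sigmoid(activation[n]);
            }
        }

        static float sigmoid(float x) {
            return (float) (1.0 / (1.0 + Math.exp(-x)));
        }

        public static void main(String[] args) {
            // tiny smoke test: 3 inputs (plus bias row) -> 2 outputs
            float[] prev = { 0.5f, -1.0f, 2.0f };
            float[][] w = { { 0.1f, 0.2f }, { 0.3f, 0.4f }, { 0.5f, 0.6f }, { 0.7f, 0.8f } };
            float[] out = new float[2];
            forward(prev, w, out);
            System.out.println(out[0] + " " + out[1]);
        }
    }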

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91  | Next Page >