Search Results

Search found 10789 results on 432 pages for 'experience'.


  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS. I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far. Specifically, my intention is to create a couple of virtual machines. One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is the result of a disk failure on the old build server). The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime. They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope).

    So, I need two VMs, and I'm lazy. I don't want to have to install the OS and such twice. No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right? Well, unfortunately, if you look at the UI for VirtualBox, there's no such command. You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs. Not quite so easy with VirtualBox. Finding the files is easy: they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder). The disks have a .vdi extension and will be pretty large if you've installed anything. The one shown here has just Windows Server 2008 R2 installed on it, nothing else.

    If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it, and you can create a new machine and choose the new drive to attach to. Unfortunately, if you simply make a copy of the drive, this won't work, and you'll get an error that says something to the effect of: Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file).

    There are command line tools you can use to do this in a way that avoids this error. Specifically, the c:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this: VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi However, in my case this didn't work. I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp. As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive. I found it in this blog post.

    The Secret setvdiuuid Command

    VBoxManage has a whole bunch of commands you can use with it; just pass it /? to see the list. However, it also has a special command called internalcommands that opens up access to even more commands. The one that's interesting for us here is the setvdiuuid command. By calling this command and passing in the file path to your vdi file, it will reset the UUID to a new (random, apparently) UUID. This then allows the Virtual Media Manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
    The full command line would be: VBoxManage internalcommands setvdiuuid MyCopy.vdi The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid.

    Summary

    Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed. Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.
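    As a follow-up for anyone who would rather script the two steps (copy the .vdi, then reset its UUID) than run them by hand, a minimal C# sketch could look like the following. The install path matches the one quoted earlier in this post, and the folder and file names are just placeholder assumptions; adjust them to your setup.

        using System.Diagnostics;
        using System.IO;

        class VdiCloneHelper
        {
            // Assumption: the same VBoxManage install path mentioned above.
            const string VBoxManage = @"c:\Program Files\Sun\VirtualBox\VBoxManage.exe";

            static void Main()
            {
                string source = @"C:\VMs\Disk1.vdi";    // hypothetical source disk
                string copy = @"C:\VMs\Disk1_Copy.vdi"; // hypothetical copy

                // A plain file copy keeps the old UUID, so reset it afterwards.
                File.Copy(source, copy);
                Run(VBoxManage, "internalcommands setvdiuuid \"" + copy + "\"");
            }

            static void Run(string exe, string args)
            {
                ProcessStartInfo info = new ProcessStartInfo(exe, args);
                info.UseShellExecute = false;
                using (Process p = Process.Start(info))
                {
                    p.WaitForExit(); // a non-zero ExitCode means VBoxManage reported an error
                }
            }
        }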

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
    Oracle Open World (OOW) earlier this month has been a great occasion to discuss with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: “BPM is a journey”, “experiences to share”, “our organization now understands what BPM is”, and my favorite (with some caveats): “BPM is like wine tasting, once you start, you want to try more”. These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and increase efficiency of end-to-end processes partially covered by apps/custom dev. The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets, and equipping them with the right tools. We decided that we needed to identify destinations, and plot routes to show the fastest path to those destinations. In the end we want to enable customers to reach “Process Excellence”: continuously set new targets and consistently and efficiently reach them. The result is Oracle Process Accelerators (PA), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improvements to other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management and support (approvers). “Getting there first” means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers’ perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners.
They all confirmed that both Business and IT at organizations benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across industries and within industries. We'll keep on building sets of requirements, deliver functional design, construct solutions using Oracle BPM, and test them not only functionally but for performance, scalability, clustering, making them robust, product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling API to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember: testing) and for other processes as well. The best thing is that it works from .NET 2.0 up to .NET 4.5, in x86 and x64. To make it more interesting, it can attach to any running .NET process. The reason why I mention this is that commercial profilers support this functionality only in their professional editions, and normally only since .NET 4.0, because the profiling API supports attaching to a running process only from that version on. This library differs in many aspects from “normal” profilers: while profiling yourself, you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which exists only as a method local in another thread, you can get your hands on it now.

    Enough theory. Show me some code

        /// <summary>
        /// Shows the feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects returns the newly allocated objects as an object array.
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems, use new MemoryDumper(true, true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1

    This is really hot stuff. Not only can you get heap statistics, but you can directly examine the new objects and make queries upon them. When I find more time I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the finalization queue to check if you did accidentally forget to dispose a whole bunch of objects:

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                                  finalizable.Length,
                                  String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                              .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint. It employs Windbg and the SOS dll to do the heavy lifting and concentrates on an easy-to-use API which completely hides Windbg. If you do not want to see Windbg, you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in the Readme, and in much greater detail in the exception you are greeted with if it is missing, so I will not go into this here.

    What Next? Depending on the feedback I get, I can imagine some features which might be useful as well:
    Calculate first order GC Roots from the actual object graph
    Identify global statics in Types in the object graph
    Support reading out the finalization queue of .NET 2.0 as well
    Support Memory Dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all)
    Deserialize objects from a memory dump back into a live process (this would need some more investigation, but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store in your live process some logging/tracing data which can become quite big, but since it is never written to disk it is very fast to generate. When your process crashes with a memory dump, you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.

    How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the Main method the scenario you want to check out. If you are greeted with an exception, it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway: now we have something much better.
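    To tie this back to the integration-testing use case mentioned at the top, the calls shown above are already enough for a simple leak check in a test. The sketch below uses only MarkCurrentObjects() and GetNewObjects() from MemoryDumper; RunScenarioUnderTest() and the SqlConnection type name are placeholders for whatever you want to exercise and watch (it also needs using System and System.Linq).

        // Sketch of a leak check built only from the MemoryDumper calls shown above.
        static void AssertNoLeakedConnections()
        {
            using (var dumper = new MemoryDumper())
            {
                dumper.MarkCurrentObjects();

                RunScenarioUnderTest(); // placeholder for the operation under test

                // Collect first so that objects which are no longer referenced do not count as leaks.
                GC.Collect();
                GC.WaitForPendingFinalizers();

                int leaked = dumper.GetNewObjects()
                                   .Count(o => o.GetType().Name == "SqlConnection"); // example type to watch

                if (leaked > 0)
                {
                    throw new InvalidOperationException(
                        string.Format("The scenario left {0} SqlConnection instance(s) behind.", leaked));
                }
            }
        }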

    Read the article

  • Oracle Cloud Applications: The Right Ingredients Baked In

    - by yaldahhakim
    Eggs, flour, milk, and sugar. The magic happens when you mix these ingredients together. The same goes for the hottest technologies fast changing how IT impacts our organizations today: cloud, social, mobile, and big data. By themselves they're pretty good; combining them with a great recipe is what unlocks real transformation power. Choosing the right cloud can be very similar to choosing the right cake. First consider comparing the core ingredients that go into baking a cake and the core design principles in building a cloud-based application. For instance, if flour is the base ingredient of a cake, then rich functionality that spans complete business processes is the base of an enterprise-grade cloud. Cloud computing is more than just consuming an "application as a service" and having someone else manage it for you. Rather, the value of cloud is about making your business more agile in the marketplace, and shortening the time it takes to deliver and adopt new innovation. It's also about improving not only the efficiency at which we communicate but the actual quality of the information shared as well. Data from different systems, like ingredients in a cake, must also be blended together effectively and evaluated through a consolidated lens. When this doesn't happen, for instance when data in your sales cloud doesn't seamlessly connect with your order management and other "back office" applications, the speed and quality of information can decrease drastically. It's like mixing ingredients in a strainer with a straw – you just can't bring it all together without losing something. Mixing ingredients is similar to bringing clouds together, and co-existing cloud applications with traditional on-premise applications. This is where a shared services platform built on open standards and Service Oriented Architecture (SOA) is critical. It's essentially a cloud recipe that calls for not only great ingredients, but also ingredients you can get locally or most likely already have in your kitchen (or IT shop). Open standards are the best way to deliver a cost-effective, durable application integration strategy – regardless of where your apps are deployed. It's also the best way to build your own cloud applications, or extend the ones you consume from a third party. Just like using standard ingredients and tools you already have in your kitchen, a standards-based cloud enables your IT resources to ensure a cloud works easily with other systems. Your IT staff can also make changes using tools they are already familiar with.
Or even more ideal, enable business users to actually tailor their experience without having to call upon IT for help at all. This frees IT resources to focus more on developing new innovative services for the organization vs. run and maintain. Carrying the cake analogy forward, you need to add all the ingredients in before you bake it. The same is true with a modern cloud. To harness the full power of cloud, you can’t leave out some of the most important ingredients and just layer them on top later. This is what a lot of our niche competitors have done when it comes to social, mobile, big data and analytics, and other key technologies impacting the way we do business. The transformational power of these technology trends comes from having a strategy from the get-go that combines them into a winning recipe, and delivers them in a unified way. In looking at ways Oracle’s cloud is different from other clouds – not only is breadth of functionality rich across functional pillars like CRM, HCM, ERP, etc. but it embeds social, mobile, and rich intelligence capabilities where they make the most sense across business processes. This strategy enables the Oracle Cloud to uniquely deliver on all three of these dimensions to help our customers unlock the full power of these transformational technologies.

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of "applied traction". By this, I mean the EA activities – whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn't work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of "Cloud" services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for "Software as a Service" (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution – are actually, ultimately the most cost-effective approach truly depends on the organization's business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a "Hybrid IT" solution) is well-served by leveraging EA methods. If an EA framework doesn't exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project – spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • Navigate Quickly with JustCode and Ctrl+Click

    Ctrl + Click is a widely used shortcut for Go To Definition in many development environments, but not in Visual Studio. We, the JustCode team, find it really useful, so we added it to Visual Studio. But we didn't stop there: we improved it even further. Read on to find the details. With JustCode you get an enhanced Go To Definition. By default you can execute it in the Visual Studio editor using one of the following shortcuts: Middle Click, Ctrl+Left Click, F12, Ctrl+Enter, Ctrl+B. The first usage of this feature is not much different from the default Visual Studio Go To Definition command: use it where a member, type, method, property, etc. is used to navigate to the definition of that item. For example, if you have this method:

        public void Start()
        {
            lion = new Lion();
            lion.Roar();
        }

    If you hold Ctrl and click on the usage of the lion you will go to the lion member definition. If you hold Ctrl and click on the Lion you will go to the Lion class definition. What we added is the ability to easily find all the usages of the item you just navigated to. For example:

        public class Lion
        {
            public void Roar()
            {
                Console.WriteLine("Rhaaaar");
            }
        }

    If you hold Ctrl and click on the Lion definition you will see all the usages of the Lion type; if you click on the Roar method definition you will see all the usages of the Roar method. And if there is only one usage, you will be taken automatically to that usage. In the examples I use C#, but it also works in VB.NET, JavaScript, ASP.NET and XAML. Why we like this feature? Let me first start with how the Ctrl+Click (or Go To Definition command) is used. We noticed that developers use it especially in what we call "code browsing sessions". In simple words, this is when you browse around the code looking for a bug, just reading the code or searching for something. Sound familiar? In our experience, when you go to the definition of some item you often want to know more about it, and the first thing you need is to find its usages. With JustCode this is just one click away. Why Ctrl+Click/Middle Click over F12/Ctrl+Enter/Ctrl+B? Actually you can use all of them. But during these "code browsing sessions" we noticed that most developers use the mouse. So the mouse is already in use, and pressing Ctrl+Click (or the Middle Click) is so natural. During heavy coding sessions, or if you are a keyboard type of developer, F12 (or any of the other keyboard shortcuts) is the key. We really use this feature heavily, not only in our team but in the whole company. It saves us a bit of time many times a day. And it adds up. We hope you will like it too. Your feedback is more than welcome. P.S. If you don't want JustCode to capture the Ctrl+Click and the Middle Click in the editor, you can change that in JustCode->Options->General in the Navigation group. Keyboard shortcuts can be reassigned using the Visual Studio keyboard shortcuts editor.

    Read the article

  • My own personal use of Oracle Linux

    - by wcoekaer
    It always is easier to explain something with examples... Many people still don't seem to understand some of the convenient things around using Oracle Linux and since I personally (surprise!) use it at home, let me give you an idea. I have quite a few servers at home and I also have 2 hosted servers with a hosted provider. The servers at home I use mostly to play with random Linux related things, or with Oracle VM or just try out various new Oracle products to learn more. I like the technology, it's like a hobby really. To be able to have a good installation experience and use an officially certified Linux distribution and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, at least I can get a copy of Oracle Linux for free (even if I was not working for Oracle) and I can/could use that on as many servers at home (or at my company if I worked elsewhere) for testing, development and production. I just go to http://edelivery.oracle.com/linux and download the version(s) I want and off I go. Now, I also have the right (and not because I am an employee) to take those images and put them on my own server and give them to someone else, I in fact, just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything, I can just put the whole binary distribution on my own server without contract. Perfectly free to do so. Of course the source code of all of this is there, I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, the entire changelog, checkins, merges from Linus's tree, complete overview of everything that got changed from kernel to kernel, from patch to patch, errata to errata. No obfuscating, no tar balls and spending time with diff, or go read bug reports to find out what changed (seems silly to me). Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem, my servers are hooked up to http://public-yum.oracle.com which is open, free, and completely up to date, in a consistent, reliable way with any errata, security or bugfix. So I have nothing to worry about. Also, not because I am an employee. Anyone can. And, with this, I also can, and have, set up my own mirror site that hosts these RPMs. both binary and source rpms. Because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization so I don't need to have a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux. Another cool thing. The hosted servers came (unfortunately) with Centos installed. While Centos works just fine as is, I tend to prefer to be current with my security errata(reliably) and I prefer to just maintain one yum repository instead of 2, I converted them over to Oracle Linux as well (in place) so they happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same from a user/application point of view as RHEL, including files like /etc/redhat-release and no changes from .el. to .centos. I know I have nothing to worry about installing one of the RHEL applications. So, OL everywhere makes my life a lot easier and why not... Next! Since I run Oracle VM and I have -tons- of VM's on my machines, in some cases on my big WOPR box I have 15-20 VMs running. 
    Well, no problem, OL is free and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10 ... like some other alternatives started doing... and finally :) I like to try out new stuff, not 3-year-old stuff. So with UEK2 as part of OL6 (and 6.3 in particular) I can play with a 3.0.x based kernel, and it just installs and runs perfectly clean with OL6, so quite current stuff in an environment that I know works, no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software (I have nothing against ubuntu or fedora or opensuse... just not what I can rely on or use for what I need, and I don't need a desktop). Pretty compelling, I say... and again, it doesn't matter that I work for Oracle; if I was working elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 for 2 sockets and one guest and self-support per year to even just get the software bits.

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ to SQL also provides a declarative notation for problem solving – i.e. you don't need to describe in detail how a task is to be solved; you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features this way. Then this construct is conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) – only when you really request entities (foreach, Count(), ToList(), ...) does the processing take place, although the query was already created at a completely different place. Especially when iterating through entities multiple times, during your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a delayed execution in the same context, it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided, and all the features are the same in the second set. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor was already moved through the features. So the extension method ForEach() always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again – that's the magic already mentioned above.

    Perspective

    Now you can also go one step further and realize your own implementation of the interface IEnumerable<IFeature>. This only requires implementing the method and property that give access to the enumerator. In the enumerator itself, the Reset() method organizes the re-execution of the search.
    This could be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side in exactly the same way as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance – and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
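    Going back to the Perspective section: purely as a sketch of the idea (not the author's actual implementation), such an enumerator could look roughly like the following. It reuses the same ArcObjects calls as GetFeatures() above and the delegate-based Reset() just shown; the namespace for the ArcObjects interfaces is assumed to be the usual geodatabase assembly.

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;
        using ESRI.ArcGIS.Geodatabase; // IFeature, IFeatureClass, IFeatureCursor

        // Sketch: a resettable enumerator over IFeature that re-executes the search in Reset().
        public class FeatureEnumerator<TSource> : IEnumerator<IFeature>
        {
            private readonly TSource _t;
            private readonly Func<TSource, IFeatureCursor> _resetCursor;
            private IFeatureCursor _featureCursor;

            public FeatureEnumerator(TSource source, Func<TSource, IFeatureCursor> resetCursor)
            {
                _t = source;
                _resetCursor = resetCursor;
                Reset();
            }

            public IFeature Current { get; private set; }

            object IEnumerator.Current
            {
                get { return Current; }
            }

            public bool MoveNext()
            {
                Current = _featureCursor.NextFeature();
                return Current != null;
            }

            public void Reset()
            {
                _featureCursor = _resetCursor(_t); // re-execute the search, as described above
            }

            public void Dispose()
            {
                if (_featureCursor != null && Marshal.IsComObject(_featureCursor))
                {
                    Marshal.ReleaseComObject(_featureCursor);
                }
            }
        }

    The surrounding IEnumerable<IFeature> would then simply return a new instance of this enumerator from GetEnumerator().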

    Read the article

  • From the Classroom to the Boardroom

    - by Maria Sandu
    Pens and Paper...these are the only things being a student and being a graduate / professional have in common. Walking into the offices of Oracle South Africa as a graduate, the first thing you notice is how polished and sleek all the people who work here look. 80% of the ladies wear sky-scraper heels and walk with the greatest grace. This was the first of many rude awakenings to remind me that I am no longer a student but a graduate. My first struggle was having to wake up at wee hours of the morning to prepare for work. As a student, going to class was almost an optional thing; if you missed a morning class you could always attend an evening class to make up for it or simply attend with another group. But in the workplace, you HAVE to show up every single morning at the same time, with no option of coming in when it suits you, and there is definitely no coming in with the evening class/shift. As a student, the earliest hour I ever woke up was 7:00am; anything earlier than that was considered inhumane torture. My reason for waking up every morning as a student was "you have a degree to go get", but as a graduate having to go to work I have to say to myself "here's to a new day of learning and growing". My second struggle has come in having to change my beloved wardrobe. Everyone who knows me knows how passionate I am about fashion and shopping. For me Shopping is a BASIC HUMAN RIGHT, that should not be messed with. Therefore it was with great sadness that I swopped my rippled skinny jeans for pin-striped formal pants, my long chandelier earrings for simple studs, my flat shoes for heels, my sheer blouses for crisp white shirts; even my beloved wild hair had to make way for a simple ponytail. Our looks as ladies also came under great scrutiny; we had to acquaint ourselves with some serious grooming tools: the mascara, blush, lip-gloss, a touch of lipstick and a manicure set. Language was a struggle of its own as well. Being a student you learn to relate to your peers in an informal way. In the workplace you have to address everyone with the same respect, including your peers. Words like "Hey buddy" had to make way for "good morning friend". The month-long winter school holiday was one of the things I looked forward to as a student. This was a time where we got to be at home and avoid the coldest month of the year, July. It was the most amazing thing ever, just sleeping and snuggling up to all sorts of warm things, but sadly it is now a thing of the past. It is currently winter in South Africa and going to work has become the most unfashionable thing with all the jackets, boots, scarves and gloves. But summer is coming and I will miss those holidays too. As a student the school holidays were like a gift for us to catch a break and not think for a while, which was why it was unimaginable how someone would go on for the entire year without a break, with only the promise of a mere 21 days annual leave!! Right now I am sure we are all looking forward to taking that annual leave when the time is right. The worst rude awakening, I must say, has to be presenting in front of clients and managers. As a student you have the same classmates for almost four years, therefore presenting in front of them becomes the norm over the years and your lecturer will always go gently on you.
    What they don't tell you at University is that in the real world, time is money and clients pay money to see you present; therefore there is no room for error. Clients are not there to give you a score and boost your ego; they expect nothing less than 100% and they will let you know without a second thought. For a graduate this can feel like you are being fed to the sharks: you either get eaten or you swim for your life. At the end of the day, it is all an experience that is meant to groom us into better professionals and make us a part of the Red Team. All the sacrifices are worth it and they lead us to being better and more polished professionals. So if you are interested in joining the ECEMEA Sales and Presales Internship Programme, please have a look at http://campus.oracle.com for more information and for our latest vacancies and internships.

    Read the article

  • Battery life starts at 2:30 hrs (99%), but less than 1 minute later is only 1:30 hrs (99%)

    - by zondu
    After searching this and other forums, I haven't seen this same issue listed anywhere for Ubuntu 12. Prior to installing Ubuntu 12.10, my Netbook (Acer AspireOne D250, SATA HDD) was consistently getting 2:30-3 hrs battery life under Windows XP Home, SP3. However, immediately after installing Ubuntu 12.10, the battery life starts out at 2:30 hrs (99%), but less than 1 minute later suddenly drops to 1:30 hrs (99%), which seems very odd. It could be a complete coincidence that the battery is suddenly flaky at the exact same moment that Ubuntu 12.10 was installed, but that doesn't seem likely. I'm a newbie to Ubuntu, so I don't have much experience tweaking/trouble-shooting yet. Here's what I've tried so far: enabled laptop mode (sudo su, then echo 5 /proc/sys/vm/laptop_mode) and checked that it is running when the A/C adapter is unplugged, but it doesn't seem to have made any noticeable difference in battery life, installed Jupiter, but it didn't work and messed up the system, so I had to uninstall it, disabled bluetooth (wifi is still on b/c it is necessary), set the screen to lowest brightness, etc., run through at least 1 full power cycle (running until the netbook shut itself off due to critical battery) and have been using it normally (sometimes plugged in, often unplugged until the battery gets very low) for a week since installing Ubuntu 12.10. installed powertop, but have no idea how to interpret its results. Here are the results of acpi -b: w/ A/C adapter: Battery 0: Full, 100% immediately after unplugging: Battery 0: Discharging, 99%, 02:30:20 remaining 1 minute after unplugging: Battery 0: Discharging, 99%, 01:37:49 remaining 2-3 minutes after unplugging: Battery 0: Discharging, 95%, 01:33:01 remaining 10 minutes after unplugging: Battery 0: Discharging, 85%, 01:13:38 remaining Results of cat /sys/class/power_supply/BAT0/uevent: w/ A/C adapter: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Full POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=12136000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1956000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= immediately after unplugging: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11886000 POWER_SUPPLY_CURRENT_NOW=773000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= 1 minute later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11728000 POWER_SUPPLY_CURRENT_NOW=1174000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1937000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= 2-3 minutes later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11583000 POWER_SUPPLY_CURRENT_NOW=1209000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 
POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1878000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= 10 minutes later: POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Discharging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=11230000 POWER_SUPPLY_CURRENT_NOW=1239000 POWER_SUPPLY_CHARGE_FULL_DESIGN=4500000 POWER_SUPPLY_CHARGE_FULL=1956000 POWER_SUPPLY_CHARGE_NOW=1644000 POWER_SUPPLY_MODEL_NAME=UM08B32 POWER_SUPPLY_MANUFACTURER=SANYO POWER_SUPPLY_SERIAL_NUMBER= Results of upower -i /org/freedesktop/UPower/devices/battery_BAT0: w/ A/C adapter: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:24:58 2012 (823 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: fully-charged energy: 21.1248 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 12.173 V percentage: 100% capacity: 43.4667% technology: lithium-ion immediately after unplugging: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:41:25 2012 (1 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 8.3484 W voltage: 11.86 V time to empty: 2.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging 1 minute later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:42:31 2012 (17 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.9196 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.5432 W voltage: 11.753 V time to empty: 1.5 hours percentage: 99.0286% capacity: 43.4667% technology: lithium-ion History (charge): 1354023683 99.029 discharging History (rate): 1354023751 13.543 discharging 2-3 minutes later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:45:06 2012 (20 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 20.2824 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.7484 W voltage: 11.545 V time to empty: 1.5 hours percentage: 96.0123% capacity: 43.4667% technology: lithium-ion History (charge): 1354023906 96.012 discharging 1354023844 97.035 discharging History (rate): 1354023906 13.748 discharging 1354023875 12.992 discharging 1354023844 13.284 discharging 10 minutes later: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/device:02/PNP0C0A:00/power_supply/BAT0 vendor: SANYO model: UM08B32 power supply: yes updated: Tue Nov 27 15:54:24 2012 (28 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: discharging energy: 18.1764 Wh energy-empty: 0 Wh energy-full: 21.1248 Wh energy-full-design: 48.6 Wh energy-rate: 13.2948 W voltage: 11.268 V time to 
empty: 1.4 hours percentage: 86.0429% capacity: 43.4667% technology: lithium-ion History (charge): 1354024433 86.043 discharging History (rate): 1354024464 13.295 discharging 1354024433 13.662 discharging 1354024402 13.781 discharging I noticed that between #2 and #3 (0 and 1 minutes after unplugging), while the battery still reports 99% charge and drops from 2:30 hr to 1:30 hr, the energy usage goes from 8.34 W to 13.54 W and the current_now increases, but shouldn't it be using less energy in battery mode since the screen is much dimmer and it's in power saving mode? (or is that normal behavior?) It also seems to drain more quickly than what it predicts, especially with the 1-1.25 hour drop in the first minute of being unplugged, which seems odd. What really concerns me is that Ubuntu 12.10 may not be properly managing the battery (with the sudden change in charge/life from 2:30 to 1:30 or 1:15 within a minute of unplugging), and that a new battery may quickly die under Ubuntu 12.10. I'd greatly appreciate any advice/suggestions on what to do, and especially whether there's a way to get back the 1-1.5 hrs of battery life that were suddenly lost when changing from WinXp to Ubuntu 12.10. Thanks :)

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions) Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions) Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure) Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members each (respectively). This introduces complexity since the backups for the 9 members on the the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances. 
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key take away from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.
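    To make the cost argument above concrete, here is a small, language-agnostic illustration (written in C# like the other code on this page, deliberately not the Coherence API) of the naive score-every-permutation idea: permute the partitions, deal them out to members, score each candidate mapping with a caller-supplied function and keep the best. Everything here is illustrative only.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class NaiveAssignment
        {
            // owner[partition] = member index; score() is the caller's quality measure.
            public static int[] FindBest(int partitionCount, int memberCount, Func<int[], double> score)
            {
                int[] partitions = Enumerable.Range(0, partitionCount).ToArray();
                int[] best = null;
                double bestScore = double.MinValue;

                foreach (int[] perm in Permutations(partitions)) // N! candidate orderings
                {
                    int[] owner = new int[partitionCount];
                    for (int i = 0; i < perm.Length; i++)
                    {
                        owner[perm[i]] = i % memberCount; // deal the permuted partitions out round-robin
                    }
                    double s = score(owner);
                    if (s > bestScore) { bestScore = s; best = owner; }
                }
                return best; // already hopeless: 13 partitions means over 6 billion evaluations
            }

            static IEnumerable<int[]> Permutations(int[] items)
            {
                if (items.Length <= 1) { yield return items; yield break; }
                for (int i = 0; i < items.Length; i++)
                {
                    int[] rest = items.Where((x, idx) => idx != i).ToArray();
                    foreach (int[] tail in Permutations(rest))
                    {
                        yield return new[] { items[i] }.Concat(tail).ToArray();
                    }
                }
            }
        }

    Even before worrying about optimality, the factorial growth alone rules this out, which is why constraining the problem (as described above) is the practical route.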

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find an answer. I have a virtual subdomain www.static.example.com which is a mirror site of www.example.com. It means I have just one root folder for subdomain and domain aswell. I want to redirect crawlers to different robots.txt file - robots_static.txt when they see .static in url in which I will forbid indexing via /disallow command. I want to do this because I have duplicated content in Google search results. Subdomain is showing the exact same content as the main domain. Does anyone know how could I achieve that crawlers sees robots_static.txt instead of robots.txt? What I have managed to find so far is this: RewriteCond %{HTTP_HOST} ^www.static.*$ [NC] RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC] RewriteRule ^robots\.txt /robots_static.txt [NC,L] but when I check in webmaster tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and index everything twice. What did I do wrong? Thanks EDIT: This is my .htaccess file ## # @package Joomla # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved. # @license GNU General Public License version 2 or later; see LICENSE.txt ## ## # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE! # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. ## ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks ## Mod_rewrite in use. RewriteEngine On RewriteEngine On RewriteCond %{HTTP_HOST} !^www\. RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L] RewriteCond %{HTTP_HOST} ^www.static.*$ [NC] RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC] RewriteRule ^robots\.txt /robots_static.txt [NC,L] ## Begin - Rewrite rules to block out some common exploits. # If you experience problems on your site block out the operations listed below # This attempts to block the most common type of exploit `attempts` to Joomla! # # Block out any script trying to base64_encode data within the URL. RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR] # Block out any script that includes a <script> tag in URL. RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL. RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL. RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Return 403 Forbidden header and show the content of the root homepage RewriteRule .* index.php [F] # ## End - Rewrite rules to block out some common exploits. ## Begin - Custom redirects # # If you need to redirect some pages, or set a canonical non-www to # www redirect (or vice versa), place that code here. Ensure those # redirects use the correct RewriteRule syntax and the [R=301,L] flags. # ## End - Custom redirects ## # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root). 
## # RewriteBase / RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC] RewriteCond %{THE_REQUEST} !/system/.* RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L] RewriteCond %{THE_REQUEST} ^GET ## Begin - Joomla! core SEF Section. # RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}] # # If the requested path and file is not /index.php and the request # has not already been internally rewritten to the index.php script RewriteCond %{REQUEST_URI} !^/index\.php # and the request is for something within the component folder, # or for the site root, or for an extensionless URL, or the # requested URL ends with one of the listed extensions RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC] # and the requested path and file doesn't directly match a physical file RewriteCond %{REQUEST_FILENAME} !-f # and the requested path and file doesn't directly match a physical folder RewriteCond %{REQUEST_FILENAME} !-d # internally rewrite the request to the index.php script RewriteRule .* index.php [L] # ## End - Joomla! core SEF Section. <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$"> Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT" Header set Cache-Control "public" </FilesMatch> <ifModule mod_headers.c> Header set Connection keep-alive </ifModule> ########## Begin - Remove Etags # FileETag none # ########## End - Remove Etags
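A minimal sketch of the host-specific rewrite, assuming robots_static.txt sits in the same document root and that only the static host should receive it (untested against the full Joomla .htaccess above, so treat it as a starting point rather than a confirmed fix):

RewriteEngine On
# Serve the restrictive file only when the request arrives on the static host
RewriteCond %{HTTP_HOST} ^www\.static\. [NC]
RewriteRule ^robots\.txt$ /robots_static.txt [L]

Keeping this block above the other redirect rules avoids having another rule rewrite or redirect the request first, and fetching http://www.static.example.com/robots.txt directly in a browser is a quicker way to verify the mapping than waiting for Webmaster Tools to re-crawl.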

    Read the article

  • Custom page sizes in paging dropdown in Telerik RadGrid

    Working with Telerik RadControls for ASP.NET AJAX is actually quite easy and the initial effort to get started with the control suite is very low. Meaning that you can easily get good result with little time. But there are usually cases where you have to go a little further and dig a little bit deeper than the standard scenarios. In this article I am going to describe how you can customize the default values (10, 20 and 50) of the drop-down list in the paging element of RadGrid. Get control over the displayed page sizes while using numeric paging... The default page sizes are good but not always good enough The paging feature in RadGrid offers you 3, well actually 4, possible page sizes in the drop-down element out-of-the box, which are 10, 20 or 50 items. You can get a fourth option by specifying a value different than the three standards for the PageSize attribute, ie. 35 or 100. The drawback in that case is that it is the initial page size. Certainly, the available choices could be more flexible or even a little bit more intelligent. For example, by taking the total count of records into consideration. There are some interesting scenarios that would justify a customized page size element: A low number of records, like 14 or similar shouldn't provide a page size of 50, A high total count of records (ie: 300+) should offer more choices, ie: 100, 200, 500, or display of all records regardless of number of records I am sure that you might have your own requirements, and I hope that the following source code snippets might be helpful. Wiring the ItemCreated event In order to adjust and manipulate the existing RadComboBox in the paging element we have to handle the OnItemCreated event of RadGrid. Simply specify your code behind method in the attribute of the RadGrid tag, like so: <telerik:RadGrid ID="RadGridLive" runat="server" AllowPaging="true" PageSize="20"    AllowSorting="true" AutoGenerateColumns="false" OnNeedDataSource="RadGridLive_NeedDataSource"    OnItemDataBound="RadGrid_ItemDataBound" OnItemCreated="RadGrid_ItemCreated">    <ClientSettings EnableRowHoverStyle="true">        <ClientEvents OnRowCreated="RowCreated" OnRowSelected="RowSelected" />        <Resizing AllowColumnResize="True" AllowRowResize="false" ResizeGridOnColumnResize="false"            ClipCellContentOnResize="true" EnableRealTimeResize="false" AllowResizeToFit="true" />        <Scrolling AllowScroll="true" ScrollHeight="360px" UseStaticHeaders="true" SaveScrollPosition="true" />        <Selecting AllowRowSelect="true" />    </ClientSettings>    <MasterTableView DataKeyNames="AdvertID">        <PagerStyle AlwaysVisible="true" Mode="NextPrevAndNumeric" />        <Columns>            <telerik:GridBoundColumn HeaderText="Listing ID" DataField="AdvertID" DataType="System.Int32"                SortExpression="AdvertID" UniqueName="AdvertID">                <HeaderStyle Width="66px" />            </telerik:GridBoundColumn>             <!--//  ... and some more columns ... -->         </Columns>    </MasterTableView></telerik:RadGrid> To provide a consistent experience for your visitors it might be helpful to display the page size selection always. This is done by setting the AlwaysVisible attribute of the PagerStyle element to true, like highlighted above. 
Customize the values of page size Your delegate method for the ItemCreated event should look like this: protected void RadGrid_ItemCreated(object sender, GridItemEventArgs e){    if (e.Item is GridPagerItem)    {        var dropDown = (RadComboBox)e.Item.FindControl("PageSizeComboBox");        var totalCount = ((GridPagerItem)e.Item).Paging.DataSourceCount;        var sizes = new Dictionary<string, string>() {            {"10", "10"},            {"20", "20"},            {"50", "50"}        };        if (totalCount > 100)        {            sizes.Add("100", "100");        }        if (totalCount > 200)        {            sizes.Add("200", "200");        }        sizes.Add("All", totalCount.ToString());        dropDown.Items.Clear();        foreach (var size in sizes)        {            var cboItem = new RadComboBoxItem() { Text = size.Key, Value = size.Value };            cboItem.Attributes.Add("ownerTableViewId", e.Item.OwnerTableView.ClientID);            dropDown.Items.Add(cboItem);        }        dropDown.FindItemByValue(e.Item.OwnerTableView.PageSize.ToString()).Selected = true;    }} It is important that we explicitly check the event arguments for GridPagerItem as it is the control that contains the PageSizeComboBox control that we want to manipulate. To keep the actual modification and exposure of possible page size values flexible I am filling a Dictionary with the requested 'key/value'-pairs based on the number of total records displayed in the grid. As a final step, ensure that the previously selected value is the active one using the FindItemByValue() method. Of course, there might be different requirements but I hope that the snippet above provide a first insight into customized page size value in Telerik's Grid. The Grid demos describe a more advanced approach to customize the Pager.

    Read the article

  • Is software support an option for your career?

    - by Maria Sandu
    If you have a technical background, why should you choose a career in support? We have invited Serban to answer these questions and to give us an overview of one of the biggest technical teams in Oracle Romania. He’s been with Oracle for 7 years leading the local PeopleSoft Financials & Supply Chain Support team. Back in 2013 Serban started building a new support team in Romania – Fusion HCM. His current focus is building a strong support team for Fusion HCM, Oracle’s latest solution for business HR professionals. The solution is offered both on premise (customer site installation) and, more importantly, as a Cloud offering – SaaS.  So, why should a technical person choose Software Support over other technical areas?  “I think it is mainly because of the high level of technical skills required to provide the best technical solutions to our customers. Oracle Software Support covers complex solutions going from Database or Middleware to a vast area of business applications (basically covering any needs that a large enterprise may have). Working with such software requires very strong skills, both technical and functional, for the different areas, going from Finance, Supply Chain Management, Manufacturing and Sales to other very specific business processes. Our customers are large enterprises that already have a support layer inside their organization, and therefore the Oracle Technical Support Engineers are working with highly specialized staff (DBAs, System/Application Admins, Implementation Consultants). This is a very important aspect for our engineers because they need to be highly skilled to match our customers’ specialists’ expectations.”  What’s the career path in your team? “Technical Analysts joining our teams have a clear growth path. The main focus is to become a master of the product they will support. I think one needs 1 or 2 years to reach a good level of understanding of the product and delivering optimal solutions, because of the complexity of our products. At a later stage, engineers can choose their professional development areas based on the business needs and their preferences, and then further grow towards a technical expert or a management role. We have analysts with more than 15 years of technical expertise who still learn and grow in the technical area. Importantly, due to the expansion of the Romanian Software Support center, there are various management opportunities. So, if you want to leverage your experience and if you want to have people management responsibilities, Oracle Software Support is the place to be!”  Our last question to Serban was about the benefits of being part of Oracle Software Support. Here is what he said: “We believe that Oracle delivers ‘state of the art’ support to our customers. This is not possible without high investment in our staff. We commit from the start to support any technical analyst that joins us (junior or very senior) with any training needs they have for their job. We have various technical trainings as well as soft-skills trainings required for a customer-facing professional to be successful in the role. Last but not least, we’re aiming to make Oracle Romania SW Support a global center of excellence, which means we’re investing a lot in our employees.”  If you’re looking for a job where you can combine your strong technical skills with customer interaction, Oracle Software Support is the place to be! Send us your CV at [email protected]. 

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review.  My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce.  Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical change that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce.  We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning since their long-term (10-year) goal would be to own their own business or be an independent consultant.  Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expect everything to be easy for them – have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology – I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found from these under-30s was their fearlessness, ease of which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.  This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remembered the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “StarTrek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 – how will the workplace look in 15 more years and what type of workforce will be required to operate in the mobile, global, virtual world. I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle. 
The under-70 Traditionalist grew up in a world without TV and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation – we spend much time generalizing on their characteristics. The most important thing to remember is every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you and stay there and be productive is very different than what the next employee is looking for and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age – I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • How do I dig myself out of this DEEP hole? [closed]

    - by user74847
    I may be a bit biased in the way I word this, but any opinions and suggestions are welcome. I should start by saying I have an MSc in CS and a degree in new media, plus 6 years' experience, and I'm probably around a middleweight developer. I started a web development company with my friend from uni a year ago; there was a 4-month gap in the middle where I went miles away to work on a big project. I've since returned and picked up where we left off. A year on, though, I find I'm still staying up til 5am and getting up at 9, sometimes going 2-3 days without sleep. While I was away I was working 9-5 and struggling to keep up with doing stuff for my clients 8 hours ahead, after work, so things stagnated. We currently have about 12 active projects, with one other part-time developer and a full-time freelancer who is dealing with one of our major projects. I am solely responsible for concurrently developing 2 big sites similar to Gumtree in functionality, at the same time as about 5-6+ small WordPress-based 5-10 page sites. A lot of the content isn't in yet or the client is delaying, so I chop and change project every other day, which does my head in. Is it reasonable to expect myself to remember the intricate details of each project when I come back to it a week later? And remember the details of a task which hasn't been written down? My business partner seems to think so. Or am I just forgetful? I'm particularly bad at estimating timescales, which doesn't help; added to that, a lot of the technologies I am using are new to me (a Magento site took weeks to theme rather than days and was full of bugs, even after 1000's of Google searches and hours reading forums). I'm still trying to learn and find the best CMS for us to use and getting my head around the likes of Bootstrap and jQuery, cPanel / Linux (we just got a blank VPS for me to set up, with no experience); even installing an SSL certificate caused everyone's mail clients to go down, which was more stress for me to sort out. I find the pressure of the workload and timescales and trying to learn this stuff so fast is beginning to turn me against my career path. The fact that I never seem to get anything done really winds up my business partner, and I've come to associate him with the stress and pain of the whole situation, especially when I get berated or a look that says "oh you retard" when I forget something. Even today I spent hours learning how a particular ThemeForest theme worked with WordPress and how I could twist it to work for our particular needs; on the surface I had done no work, and that triggered a 30-minute tirade of anger and stress and questioning of what I had done from my business partner. Had I taken too long to work on that? Should I have done it in 2 hours instead of 6? I told him I would take 2 hours. I was wrong. I feel like I'm running myself into the ground. My sleeping pattern has got so bad that when I'm working I'm half asleep and making mistakes, my eyes are constantly purple underneath, and I literally fall asleep at my desk. It's affecting my social life too; I've not slept more than lightly for the last year and grind through impossible code puzzles in my half sleep, which keeps me awake when I'm already exhausted. Plus the work is rushed and buggy when it does get done, so it drags on into the next project. I also procrastinate quite badly, pacing the living room, looking out the window when I'm alone for three days straight in the flat, and start to get cabin fever, which means I do even less work and the negative feedback loop continues. 
I get told I'm the only one with the problem when I say that I can't work from home any more, and examples of other freelancers get brought up. An office wouldn't bring any extra cash into the company, but I'm convinced that moving more than 2 meters away from my bed to go to "work" would get me working; at the moment I feel guilty, like I should be working 24-7. It is important that we do all this work to raise enough cash to get our business to the next level, but every month still feels like a struggle to pay the rent (there is about £20K coming in by Jan) and I often have to borrow money from friends to buy food or get a taxi to a meeting, so it is vital the money keeps coming in. (I'm also 20 mins late for nearly all meetings, but that's a different issue.) Have you experienced anything similar? How can I deal with the issues I've raised? Is it realistic to develop 10 sites at once? How can I improve my relationship with my business partner? Do you struggle to work at home? How do you deal with that? I think if I don't get my life on track by Feb I will seriously consider giving it all up, but that seems like such a waste. Any ideas!!? I need help! Thanks.

    Read the article

  • Adventures in Windows 8: Solving activation errors

    - by Laurent Bugnion
    Note: I tagged this article with “MVVM” because I got a few support requests for MVVM Light regarding this exact issue. In fact it is a Windows 8 issue and has nothing to do with MVVM Light per se… Sometimes when you work on a Windows 8 app, you will get a very annoying issue when starting the app. In that case, the app doesn’t even start past the Splash screen. Putting a breakpoint in App.xaml.cs doesn’t help because the app doesn’t even reach that point! So what exactly is happening? Well, when a Windows 8 app starts, the system performs a few checks first. One of the checks, for instance, is to see if an app with the same package ID is already available. The package ID is a unique value set in the package manifest. In the Solution Explorer, double click on Package.appxmanifest. This opens the manifest in a special editor. Click on the Packaging tab and see the GUID under Package Name. This is the unique ID I am talking about. If there is a conflict (i.e. if an app is already installed with the exact same ID), Windows will warn the user that the app is already installed. However, when you are in the process of developing an app, you install and uninstall the same app many many times (every time that you start in Visual Studio), and sometimes some issues arise, for instance failing to uninstall the app before starting the new instance of the same app. First step if you get such an error When the application fails to start past the splash screen, the first step is to identify what kind of error happened. In my experience the “already installed” one is by far the most frequent (in fact I never had another such error), but it can be something else. An annoying thing is that the popup that shows the error is usually started below the Windows 8 app, and so you don’t even see it! This is especially true if you run this in the Simulator. In that case, do the following: Press on the Simulator’s home button, then press on the Desktop tile on the Start menu. The error popup should be shown on the desktop. If your application runs on the Local machine, you do the same: press the Windows button, and then from the Start menu press the Desktop tile. Deployment error in Studio Sometimes the same error causes Visual Studio to fail launching the application at all with a deployment error. This is a better case, because at least it is clear that there is an issue. In that case, write down the code that is shown in the Error window (for instance 0x80073D05 in the example below). Once you have the error code, go to the “Troubleshooting packaging, deployment, and query of Windows Store apps” page and look up the code in question. In my case, the error was “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED”, “An error occurred while deleting the package's previously existing application data.” Solving the “ERROR_DELETING_EXISTING_APPLICATIONDATA_STORE_FAILED” issue Update: Before trying the below, you can also try these simple steps: Exit Visual Studio. Go to the Start menu. Locate your app’s tile. It should be visible in the Start menu directly, towards the far end on the right. Right click the tile and select Uninstall from the App Bar. Restart Visual Studio and try again. Sometimes it helps. If it doesn’t, then try the following: In order to solve the case where Windows, for any reason, fails to delete the existing application before starting the new instance, follow these steps: Open the Package.appxmanifest in Visual Studio. Open the Packaging tab. Change the Package name. 
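If you prefer editing the raw XML of Package.appxmanifest, the Package Name shown on the Packaging tab is simply the Name attribute of the Identity element. A hedged sketch of that fragment (the GUID and publisher below are placeholders, not values from the article):

<Identity Name="5a3bc282-0f52-4f22-9b3a-1c0d8e9f4a77"
          Publisher="CN=YourPublisherName"
          Version="1.0.0.0" />

Editing that attribute by hand has the same effect as changing the value in the designer.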
For tests you can just try to change the last character of the GUID, though I would recommend creating a brand new GUID. Press Start Type GUID Start the GUID Generator application Select Registry Format Press Copy. Paste the new GUID in place of the Package Name in Visual Studio Important: don’t forget to remove the curly brackets at the beginning and at the end of the newly pasted GUID. Then you just have to cross your fingers and start the application again… If it works, celebrate. if it doesn’t work… well at this point I am not sure so good luck with that ;) Happy coding, Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010.  Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source.  Im not going to take the easy way and tie to a SQL Server data source, though, I will tie it to an XML data file instead.  Why?  Well, why not?  This is purely for learning, there are probably much better ways to get strongly-typed classes around XML but it will force us to go down a path less travelled and maybe learn a few things along the way.  Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I cant get reading, adding, deleting, and updating working smoothly with minimal code.  To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I dont want the NFL to come down and sue me for using their name in this totally football-related article.  The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played: Note that when making the associations between these entities, I was given the option to create the foreign key but I only chose to select this option for the association between Player and Position.  The reason for this is that I am picturing the XML that will contain this data to look somewhat like this: <MFL> <Position/> <Position/> <Position/> <Team>     <Player>         <Statistic/>     </Player> </Team> </MFL> Statistic will be under its associated Player node, and Player will be under its associated Team node no need to have an Id to reference it if we know it will always fall under its parent.  Position, however, is more of a lookup value that will not have any hierarchical relationship to the player.  In fact, the Position data itself may be in a completely different xml file (something Id like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it:  A class for each entity with properties corresponding to each entity property An IO class with methods to get data for each entity, either all instances, by Id or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith).  Surely, there has got to be a more fun way to do this given that we are using the Entity Framework which already has built-in code generation for SQL Server support.  Lets start with that built-in stuff to give us a base to work off of.  Right click anywhere in the canvas of our model and select Add Code Generation Item: So just adding that code item seemed to do quite a bit towards what I was intending: It apparently generated a class for each entity, but also a whole ton more.  I mean a TON more.  Way too much complicated code was generated now that code is likely to be a black box anyway so it shouldnt matter, but we need to understand how to make this work the way we want it to work, so lets get ready to do some stumbling through that text template (tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble there is no color coding, no intellisense no nothing!  
That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I wont have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (Available under the Tools menu), I did a quick search for tt editor in the Online Gallery and quickly found the Tangible T4 Editor: Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, well see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.
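Circling back to the first goal (a plain class for each entity), the sort of output I am ultimately after from that template would look roughly like the sketch below. This is hand-written for illustration, not actual T4 output, and the property names are assumptions based on the model described above:

using System.Collections.Generic;

public class Position
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Statistic
{
    public int Season { get; set; }
    public int GamesPlayed { get; set; }
}

public class Player
{
    public string Name { get; set; }
    public int PositionId { get; set; }   // lookup into Position, per the model
    public List<Statistic> Statistics { get; set; }
}

public class Team
{
    public string Name { get; set; }
    public List<Player> Players { get; set; }
}

The code the default template generates is far more elaborate (change tracking, relationship fix-up and so on), which is exactly why the next step is digging into that .tt file.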

    Read the article

  • If I were in a Silverlight focus group, here is ten things I would say.

    - by mbcrump
    Silverlight is a great product right off the shelf. I use it, love it and spend a lot of time helping the community understand it. This however, doesn’t mean that I don’t think that it can get better. If I were invited to a Microsoft Focus Group about Silverlight here is 10 things I would say:  We need more navigation templates. I’ve found (4) templates that Microsoft has released (Cosmo, Windows 7, Accent and JetPack). This number needs to be around 16. In order to get more people developing for Silverlight, we need to give them a variety of templates to get them off the ground quickly. Silverlight needs to ship with the next version of Windows. At least version 4 needs to be pre-installed on Windows going forward. It’s small, in its own sandbox and I cannot find a reason for it not to be included. Silverlight needs to run on more platforms.  iOS and Android are the key here. I think Microsoft should shoot for Android first since I believe Android will take the lead in the mobile market (at least for the short-term). It would also be great to see Microsoft use Silverlight as the focus on their new tablets / “AppleTV”. I would even invest in getting it working with Kinect. When creating a new project in Silverlight, we should have the option to create a Unit Test. Most Silverlight developers are not unit testing. If this is surprising to you then you need to get out and talk to more developers. I partially blame this on Microsoft. When you create a new ASP.NET MVC application, you simply put a check to create a Unit Test project. We need the same thing for Silverlight. We should steer the developer into the right direction. Design patterns such as MVVM need to be easier to implement in Silverlight solutions.  I’d go so far as to say that MVVM Light should ship with Visual Studio. With the project / item templates and code snippets, Laurent puts you into the right direction. This is the way that it should have been. Easy for the 9-5 developer to grasp. I believe the majority of developers use code behind because that’s what is in all the demos provided by Microsoft. They are not trying to write sucky code it is that they simply don’t know a better way.  The XAP Files should be obfuscated/unused references deleted by default when in “Release” mode. A better Silverlight experience starts with a smaller XAP file. The less that a user has to download is the better, even with the majority of people on broadband. I would also recommend built-in obfuscation by Microsoft. People are paranoid that they can rename the .zip and run it through reflector. Get rid of the boring install experiences. Here is a great write up on what I’m talking about. The default “Install Silverlight” and “Loading screens” suck. They suck bad. We need a choice of templates that a professional designer has created.  Silverlight needs to supports more image formats. For example: it would be great to use .gif’s without converting them to .png.    Switching between Blend 4 and VS2010 to develop a Silverlight application is a pain. Probably one of the biggest issues that I can’t think of a good solution for. It would be nice if VS2012 had the best of both worlds and you never have to leave VS. We need reporting controls with SSRS included with the Silverlight Toolkit. I can’t think of another control that we need built into the toolkit. It would also be helpful to have export to .xls, .pdf and .doc included with the control. I hope that this post will at least get a few people talking. 
Who knows, Microsoft could be working on these things right now. Thanks for reading!  Subscribe to my feed CodeProject

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduces its memory footprint. Prior to this Hudson used to always hold the entire data model (all jobs and all builds) in memory which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and be backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case its additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independant of each other, one for Jobs and the other for Builds. For the jobs cache: hudson.jobs.cache.evict_in_seconds ( default=60 ) Seconds from last access (could be because of a servlet request or a background cron thread) a job should be purged from the cache. Set this to 0 to never purge based on time. hudson.jobs.cache.initial_capacity ( default=1024 ) Initial number of jobs the cache can accomodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead. hudson.jobs.cache.max_entries ( default=1024) Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity when always accessing the hudson home page you might consider increasing this, but first verify if the I/O is caused by frequent eviction (see above), rather than by the cache not being large enough. For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the hudson hom epage. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory. hudson.job.builds.cache.evict_in_seconds ( default=60 ) Same as the equivalent Job cache, applied to Build. hudson.job.builds.cache.initial_capacity" ( default=512 ) Same as equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds you might consider bumping this up. hudson.job.builds.cache.max_entries ( default=10240 ) The default max is large enough for most installations, the builds cache has bigger sized objects, so be careful about increasing the upper limit on this. See section on monitoring below. 
Sample usage: java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \ -Dhudson.job.builds.cache.evict_in_seconds=300 Monitoring cache usage The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the hudson instance and run $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$' Here's a sample output: num #instances #bytes class name 523: 28 896 hudson.model.RunMap$LazyRunValue$Key 1200: 3 96 hudson.model.LazyTopLevelItem$Key These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The size in bytes can be ignored; these are just the sizes of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds can all be from 1 job or spread across all 3 jobs. Over time on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall down to zero. Access of a job or a build by a cron thread resets the eviction timer.
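If Hudson is deployed into Tomcat rather than run from the standalone war, the same flags can normally be supplied through the container's JVM options, for example via CATALINA_OPTS in bin/setenv.sh. A sketch (the file location and the values are assumptions to adapt to your own job and build counts, not recommendations from this article):

# bin/setenv.sh (assumed location): pass the cache flags to the Tomcat JVM
export CATALINA_OPTS="$CATALINA_OPTS \
  -Dhudson.jobs.cache.max_entries=4096 \
  -Dhudson.job.builds.cache.max_entries=20480 \
  -Dhudson.job.builds.cache.evict_in_seconds=300"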

    Read the article

  • Notes from AT&T ARO Session at Oredev 2013

    - by Geertjan
    The mobile internet is 12 times bigger than internet was 12 years ago. Explosive growth, faster networks, and more powerful devices. 85% of users prefer mobile apps, while 56% have problems. Almost 60% want less than 2 second mobile app startup. App with poor mobile experience results in not buying stuff, going to competitor, not liking your company. Battery life. Bad mobile app is worse than no app at all because it turns people away from brand, etc. Apps didn't exist 10 years ago, 72 billion dollars a year in 2013, 151 billion in 2017.Testing performance. Mobile is different than regular app. Need to fix issues before customers discover them. ARO is free and open source AT&T tool for identifying mobile app performance problems. Mobile data is different -- radio resource control state machine. Radio resource control -- radio from idle to continuous reception -- drains battery, sends data, packets coming through, after packets come through radio is still on which is tail time, after 10 seconds of no data coming through radio goes off. For example, YouTube, e.g., 10 to 15 seconds after every connection, can be huge drain on battery, app traffic triggers RRC state. Goal. Balance fast network connectivity against battery usage. ARO is free and open source and test any platform and won awards. How do I test my app? pcap or tcdump network. Native collector: Android and iOS. Android rooted device is needed. Test app on phone, background data, idle for ads and analytics. Graded against 25 best practices. See all the processes, all network traffic mapped to processes, stats about trace, can look just at your app, exlude Facebook, etc. Many tests conducted, e.g., file download, HTML (wrapped applications, e.g., cordova). Best Practices. Make stuff smaller. GZIP, smaller files, download faster, best for files larger than 800 bytes, minification -- remove tabs and commenting -- browser doesn't need that, just give processor what it needs remove wheat from chaff. Images -- make images smaller, 1024x1024 image for a checkmark, swish it, make it 33% smaller, ARO records the screen, probably could be 9 times smaller. Download less stuff. 17% of HTTP content on mobile is duplicate data because of caching, reloading from cache is 75% to 99% faster than downloading again, 75% possible savings which means app will start up faster because using cache -- everyone wants app starting up 2 seconds. Make fewer HTTP requests. Inline and combine CSS and JS when possible reduces the number of requests, spread images used often. Fewer connections. Faster and use less battery, for example, download an image every 60 secs, download an add every 60 seconds, send analytics every 60 seconds -- instead of that, use transaction manager, download everything at once, reduce amount of time connected to network by 40% also -- 80% of applications do NOT close connections when they are finished, e.g., download picture, 10 seconds later the radio turns off, if you do not explicitly close, eventually server closes, 38% more tail time, 40% less energy if you close connection right away, background data traffic is 27% of data and 55% of network time, this kills the battery. Look at redirection. Adds 200 to 600 ms on each connection, waterfall diagram to all the requests -- e.g., xyz.com redirect to www.xyz.com redirect to xyz.mobi to www.xyz.com, waterfall visualization of packets, minimize redirects but redirects are fine. HTML best practices. 
Order matters and hiding code (JS downloading blocks rendering, always do CSS before JS or JS asynchronously, CSS 'display:none' hides images from user but the browser downloads them which adds latency to application. Some apps turn on GPS for no reason. Tell network when down, but maybe some other app is using the radio at the same time. It's all about knowing best practices: everyone wins with ARO (carriers, e.g., AT&T, developers, customers). Faster apps, better battery usage, network traffic better, better app reviews, happier customers. MBTA app, referenced as an example.ARO is free, open source, can test all platforms.
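To make the "order matters" point concrete, here is a minimal illustration of the pattern described in the notes (my own sketch, not a slide from the session): the stylesheet is declared first and the script is loaded asynchronously so it does not block rendering.

<head>
  <link rel="stylesheet" href="styles.css">
  <script src="analytics.js" async></script>
</head>

No images hidden with display:none either, since the browser would still download them.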

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long running taks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) includes something like the following: Common mistake #1 - Fixing concurrency issue by just put a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpNetworkRequestComplete is occuring on. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - fixing deadlock issues by blindly exiting the lock, wait for the other thread to finish, then re-enter the lock - but without handling the case that the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" it's pointer. But forgets to wait for the thread that is still running to release it's instance. As such, components are shutdown cleanly, then spurious or late callbacks are invoked on an object in an state not expecting any more calls. There are other edge cases, but the bottom line is this: Multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer on developing a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a dedicated single thread where all events and network callbacks get marshalled onto. Similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single threaded world. Before I go down that path, I am also very interested if there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process. Including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes) Educating your development team on the issues with multithreaded code. What would you do?
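To make the single-thread idea a bit more concrete, here is a rough sketch of the kind of dispatcher I have in mind (my own illustration, not code from our codebase, and it assumes C++11 is available): callbacks from worker or network threads are posted to a queue and executed by one dedicated thread, so the component code itself can be written for a single-threaded world.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// One dispatcher per component (or shared by a few components): everything
// posted here runs on the same thread, in the order it was posted.
class Dispatcher {
public:
    Dispatcher() : running_(true), worker_([this] { Run(); }) {}

    ~Dispatcher() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            running_ = false;
        }
        cv_.notify_one();
        worker_.join();   // after this returns, no further callbacks can run
    }

    // Callable from any thread, e.g. from a thread-pool completion callback.
    void Post(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !queue_.empty() || !running_; });
                if (!running_ && queue_.empty()) return;
                task = std::move(queue_.front());
                queue_.pop();
            }
            task();   // runs on the dispatcher thread
        }
    }

    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool running_;
    std::thread worker_;
};

Instead of a completion handler calling into the component directly from the HTTP thread, it would do something like dispatcher.Post([this, status] { OnHttpRequestComplete(status); });, and Shutdown would run on the same dispatcher, which removes the race between the two without sprinkling locks through the class.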

    Read the article

  • DirectoryServicesCOMException when working with System.DirectoryServices.AccountManagement

    - by antik
    I'm attempting to determine whether a user is a member of a given group using System.DirectoryServices.AccountManagment. I'm doing this inside a SharePoint WebPart in SharePoint 2007 on a 64-bit system. Project targets .NET 3.5 Impersonation is enabled in the web.config. The IIS Site in question is using an IIS App Pool with a domain user configured as the identity. I am able to instantiate a PrincipalContext as such: PrincipalContext pc = new PrincipalContext(ContextType.Domain) Next, I try to grab a principal: using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) { GroupPrincipal group = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\somegroup"); // snip: exception thrown by line above. } Both the above and UserPrincipal.FindByIdentity with a user SAM throw a DirectoryServicesCOMException: "Logon failure: Unknown user name or bad password" I've tried passing in a complete SAMAccountName to either FindByIdentity (in the form of MYDOMAIN\username) or just the username with no change in behavior. I've tried executing the code with other credentials using both the HostingEnvironment.Impersonate and SPSecurity.RunWithElevatedPrivileges approaches and also experience the same result. I've also tried instantiating my context with the domain name in place: Principal Context pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN"); This throws a PrincipalServerDownException: "The server could not be contacted." I'm working on a reasonably hardened server. I did not lock the system down so I am unsure exactly what has been done to it. If there are credentials I need to allocate to my pool identity's user or in the domain security policy in order for these to work, I can configure the domain accordingly. Are there any settings that would be preventing my code from running? Am I missing something in the code itself? Is this just not possible in a SharePoint web? EDIT: Given further testing, my code functions correctly when tested in a Console application targeting .NET 4.0. I targeted a different framework because I didn't have AccountManagement available to me in the console app when targeting .NET 3.5 for some reason. using (PrincipalContext pc = new PrincipalContext(ContextType.Domain)) using (UserPrincipal adUser = UserPrincipal.FindByIdentity(pc, "MYDOMAIN\joe.user")) using (GroupPrincipal adGroup = GroupPrincipal.FindByIdentity(pc, "MYDOMAIN\user group")) { if (adUser.IsMemberOf(adGroup)) { Console.WriteLine("User is a member!"); } else { Console.WriteLine("User is NOT a member."); } } What varies in my SharePoint environment that might prohibit this function from executing?
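For what it is worth, one variation I have not yet verified in this environment (so treat it as a guess rather than a fix): PrincipalContext has an overload that accepts explicit credentials, which takes the app pool / impersonated identity out of the equation when talking to the domain.

using (PrincipalContext pc = new PrincipalContext(ContextType.Domain, "MYDOMAIN", "serviceUser", "servicePassword"))
using (GroupPrincipal adGroup = GroupPrincipal.FindByIdentity(pc, @"MYDOMAIN\user group"))
{
    // same membership check as in the console test above
}

The account name and password above are placeholders. If this works where the parameterless ContextType.Domain constructor fails, it would suggest that the identity the SharePoint request runs under, rather than the code itself, is what the domain controller is rejecting.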

    Read the article

  • C# and WUAPI: BeginDownload function

    - by Laurus Schluep
    first things first: I have no experience in object-oriented programming, whatsoever. I created my share of VB scripts and a bit of Java in school, but that's it. So my problem most likely lies there. But nevertheless, for the past few days, I've been trying to get a little application together that allows me to scan for, select and install Windows updates. So far I've been able to understand most of the reference and with the help of a few posts around the internet and I'm now at the point where I can select and download updates. So far I've been able to download a collection of updates using the following code: UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument; UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader(); CurrentDownloader.Updates = CurrentInstallCollection; This is run in a background worker and returns once the download is done. It works just fine, I can see the updates getting downloaded on the file system but there isn't really a way to display the progress within the application. To do such a thing, there is the IDownloadJob interface that allows me to use the .BeginDownload method of the downloader (UpdateSession.CreateUpdateDownloader)...i think, at least. :D And here comes the problem: I have now tried for about 6 hours to get the code working, but no matter what I tried nothing worked. Also, there isn't much information around on the internet about the .BeginDownload method (or at least it seems that way), but my call of the method doesn't work: IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload(); I have no clue what arguments to supply...I've tried methods, objects...to no avail. The complete block of code looks like this: UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument; UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader(); CurrentDownloader.Updates = CurrentInstallCollection; IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload(); IDownloadProgress CurrentJobProgess = CurrentDownloadJob.GetProgress(); tbStatus.Text = Convert.ToString(CurrentJobProgess.PercentComplete); I've found one source on the internet that called the method with .BeginDownload(this,this,this), which does not report any error in the code editor but probably won't help with reporting as it is my understanding, that the arguments supplied are the methods that are called when the described event occurrs (progress has changed or the download has finished). I also tried this, but it didn't work either: http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/636a8399-2bc1-46ff-94df-a58cebfe688c A detailed description of the BeginDownload method: http://msdn.microsoft.com/en-us/library/aa386132(v=VS.85).aspx WUAPI Reference: Unfortunately I'm not allowed to post the link, but the link to the BeginDownload method goes to the same place. :) I know, it's quite a bit to ask, but if someone could point me in the right direction (as in which arguments to pass and how), it'd be very much appreciated! :)
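In case it helps, the pattern I am currently experimenting with (pieced together from the reference, so it may well be wrong) is to pass small callback objects as the first two arguments and any state object as the third. A hedged C# sketch:

using System;
using WUApiLib; // COM reference to the Windows Update Agent API

class DownloadProgressCallback : IDownloadProgressChangedCallback
{
    // Invoked by the agent as the download advances.
    public void Invoke(IDownloadJob downloadJob, IDownloadProgressChangedCallbackArgs callbackArgs)
    {
        Console.WriteLine("Downloaded {0}%", callbackArgs.Progress.PercentComplete);
    }
}

class DownloadCompletedCallback : IDownloadCompletedCallback
{
    // Invoked once when the whole job finishes.
    public void Invoke(IDownloadJob downloadJob, IDownloadCompletedCallbackArgs callbackArgs)
    {
        Console.WriteLine("Download finished.");
    }
}

// Inside the existing download code, roughly:
// IDownloadJob job = CurrentDownloader.BeginDownload(
//     new DownloadProgressCallback(), new DownloadCompletedCallback(), null);
// ...and when the completed callback fires: CurrentDownloader.EndDownload(job);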

    Read the article

  • How to send messages between c++ .dll and C# app using named pipe?

    - by Gal
    I'm making an injected .dll written in C++, and I want to communicate with a C# app using named pipes. Now, I am using the built in System.IO.Pipe .net classes in the C# app, and I'm using the regular functions in C++. I don't have much experience in C++ (Read: This is my first C++ code..), tho I'm experienced in C#. It seems that the connection with the server and the client is working, the only problem is the messaged aren't being send. I tried making the .dll the server, the C# app the server, making the pipe direction InOut (duplex) but none seems to work. When I tried to make the .dll the server, which sends messages to the C# app, the code I used was like this: DWORD ServerCreate() // function to create the server and wait till it successfully creates it to return. { hPipe = CreateNamedPipe(pipename,//The unique pipe name. This string must have the following form: \\.\pipe\pipename PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_NOWAIT, //write and read and return right away PIPE_UNLIMITED_INSTANCES,//The maximum number of instances that can be created for this pipe 4096 , // output time-out 4096 , // input time-out 0,//client time-out NULL); if(hPipe== INVALID_HANDLE_VALUE) { return 1;//failed } else return 0;//success } void SendMsg(string msg) { DWORD cbWritten; WriteFile(hPipe,msg.c_str(), msg.length()+1, &cbWritten,NULL); } void ProccesingPipeInstance() { while(ServerCreate() == 1)//if failed { Sleep(1000); } //if created success, wait to connect ConnectNamedPipe(hPipe, NULL); for(;;) { SendMsg("HI!"); if( ConnectNamedPipe(hPipe, NULL)==0) if(GetLastError()==ERROR_NO_DATA) { DebugPrintA("previous closed,ERROR_NO_DATA"); DisconnectNamedPipe(hPipe); ConnectNamedPipe(hPipe, NULL); } Sleep(1000); } And the C# cliend like this: static void Main(string[] args) { Console.WriteLine("Hello!"); using (var pipe = new NamedPipeClientStream(".", "HyprPipe", PipeDirection.In)) { Console.WriteLine("Created Client!"); Console.Write("Connecting to pipe server in the .dll ..."); pipe.Connect(); Console.WriteLine("DONE!"); using (var pr = new StreamReader(pipe)) { string t; while ((t = pr.ReadLine()) != null) { Console.WriteLine("Message: {0}",t); } } } } I see that the C# client connected to the .dll, but it won't receive any message.. I tried doing it the other way around, as I said before, and trying to make the C# send messages to the .dll, which would show them in a message box. The .dll was injected and connected to the C# server, but when It received a message it just crashed the application it was injected to. Please help me, or guide me on how to use named pipes between C++ and C# app
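One mismatch worth flagging, offered as a guess rather than a confirmed diagnosis: the C# client reads with StreamReader.ReadLine, which only returns once a line terminator arrives, while SendMsg on the C++ side writes the raw string (plus its null terminator) and never sends a newline; on top of that, PIPE_NOWAIT puts the pipe into a non-blocking mode that behaves quite differently from the blocking mode most examples assume. A small sketch of a blocking, newline-terminated write (assuming the pipe is created with PIPE_WAIT and both sides use the same \\.\pipe\... name):

void SendLine(const std::string& msg)
{
    std::string line = msg + "\n";                 // StreamReader.ReadLine needs a terminator
    DWORD cbWritten = 0;
    WriteFile(hPipe, line.c_str(), (DWORD)line.length(), &cbWritten, NULL);
    FlushFileBuffers(hPipe);                       // push the message through right away
}

This is only a sketch against the code shown above; hPipe is the same handle the existing server creates.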

    Read the article

< Previous Page | 397 398 399 400 401 402 403 404 405 406 407 408  | Next Page >