Search Results

Search found 1352 results on 55 pages for 'perspective'.


  • Using Resources the Right Way

    - by BuckWoody
It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or educating ourselves on broader topics so that we can solve problems in the future. With dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I – haven’t put in the effort to learn what resources we have. That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact at Microsoft we listen – there is a real live person that reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think. In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them. I’ll try to bring up things like this from time to time as I find them useful, and cover them in more depth. Think of this as a synopsis of a longer set of resources that you can use to decide whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them. Data Driven Design Concepts http://msdn.microsoft.com/en-us/library/windowsazure/jj156154 I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data-end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address:  [email protected]   End-to-End Example of a complete Hybrid Application – with Live Demo https://azurestocktrader.cloudapp.net/Default.aspx I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work.
If you’ve ever wanted to see how to build a complex, complete, hybrid application that bridges on-premises systems with cloud-based databases, code, functions and more, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else. Using a Cloud-Based Service https://azureconfigweb.cloudapp.net/Default.aspx Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment. So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
Storage is inexpensive, and that second copy can be used not only for HA but for DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
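
To make the "detect the failure and re-route the traffic" idea a bit more concrete, here is a minimal C# sketch of a health-check/failover loop. The endpoint URLs and the RouteTrafficTo() hook are placeholders, not part of any Azure API - in a real deployment that hook would flip a DNS entry, update a traffic-manager profile, or re-point a load-balancer alias as described above:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FailoverMonitor
{
    // Placeholder endpoints for the primary and secondary deployments.
    private static readonly Uri Primary   = new Uri("https://primary.example.com/health");
    private static readonly Uri Secondary = new Uri("https://secondary.example.com/health");

    static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    private static async Task RunAsync()
    {
        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
        {
            while (true)
            {
                bool primaryHealthy = await IsHealthyAsync(client, Primary);
                RouteTrafficTo(primaryHealthy ? Primary : Secondary);
                await Task.Delay(TimeSpan.FromSeconds(30));       // probe interval
            }
        }
    }

    private static async Task<bool> IsHealthyAsync(HttpClient client, Uri endpoint)
    {
        try
        {
            HttpResponseMessage response = await client.GetAsync(endpoint);
            return response.IsSuccessStatusCode;
        }
        catch (HttpRequestException) { return false; }            // unreachable
        catch (TaskCanceledException) { return false; }           // timed out
    }

    private static void RouteTrafficTo(Uri endpoint)
    {
        // Placeholder: swap the DNS record, traffic-manager profile, or
        // load-balancer alias so that clients reach the chosen endpoint.
        Console.WriteLine("Serving traffic from {0}", endpoint.Host);
    }
}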

    Read the article

  • Summit reflections

    - by Rob Farley
    So far, my three PASS Summit experiences have been notably different to each other. My first, I wasn’t on the board and I gave two regular sessions and a Lightning Talk in which I told jokes. My second, I was a board advisor, and I delivered a precon, a spotlight and a Lightning Talk in which I sang. My third (last week), I was a full board director, and I didn’t present at all. Let’s not talk about next year. I’m not sure there are many options left. This year, I noticed that a lot more people recognised me and said hello. I guess that’s potentially because of the singing last year, but could also be because board elections can bring a fair bit of attention, and because of the effort I’ve put in through things like 24HOP... Yeah, ok. It’d be the singing. My approach was very different though. I was watching things through different eyes. I looked for the things that seemed to be working and the things that didn’t. I had staff there again, and was curious to know how their things were working out. I knew a lot more about what was going on behind the scenes to make various things happen, and although very little about the Summit was actually my responsibility (based on not having that portfolio), my perspective had moved considerably. Before the Summit started, Board Members had been given notebooks – an idea Tom (who heads up PASS’ marketing) had come up with after being inspired by seeing Bill walk around with a notebook. The plan was to take notes about feedback we got from people. It was a good thing, and the notebook forms a nice pair with the SQLBits one I got a couple of years ago when I last spoke there. I think one of the biggest impacts of this was that during the first keynote, Bill told everyone present about the notebooks. This set a tone of “we’re listening”, and a number of people were definitely keen to tell us things that would cause us to pull out our notebooks. PASSTV was a new thing this year. Justin, the host, featured on the couch and talked a lot of people about a lot of things, including me (he talked to me about a lot of things, I don’t think he talked to a lot people about me). Reaching people through online methods is something which interests me a lot – it has huge potential, and I love the idea of being able to broadcast to people who are unable to attend in person. I’m keen to see how this medium can be developed over time. People who know me will know that I’m a keen advocate of certification – I've been SQL certified since version 6.5, and have even been involved in creating exams. However, I don’t believe in studying for exams. I think training is worthwhile for learning new skills, but the goal should be on learning those skills, not on passing an exam. Exams should be for proving that the skills are there, not a goal in themselves. The PASS Summit is an excellent place to take exams though, and with an attitude of professional development throughout the event, why not? So I did. I wasn’t expecting to take one, but I was persuaded and took the MCM Knowledge Exam. I hadn’t even looked at the syllabus, but tried it anyway. I was very tired, and even fell asleep at one point during it. I’ll find out my result at some point in the future – the Prometric site just says “Tested” at the moment. As I said, it wasn’t something I was expecting to do, but it was good to have something unexpected during the week. Of course it was good to catch up with old friends and make new ones. 
I feel like every time I’m in the US I see things develop a bit more, with more and more people knowing who I am, who my staff are, and recognising the LobsterPot brand. I missed being a presenter, but I definitely enjoyed seeing many friends on the list of presenters. I won’t try to list them, because there are so many these days that people might feel sad if I don’t mention them. For those that I managed to see, I was pleased to see that the majority of them have lifted their presentation skills since I last saw them, and I happily told them as much. One person who I will mention was Paul White, who travelled from New Zealand to his first PASS Summit. He gave two sessions (a regular session and a half-day), packed large rooms of people, and had everyone buzzing with enthusiasm. I spoke to him after the event, and he told me that his expectations were blown away. Paul isn’t normally a fan of crowds, and the thought of 4000 people would have been scary. But he told me he had no idea that people would welcome him so well, be so friendly and so down to earth. He’s seen the significance of the SQL Server community, and says he’ll be back. It’ll be good to see him there. Will you be there too?

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
I recently had the experience of being involved in a “test” scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it. Don’t let the Project Manager be the Product Owner First lesson learnt is to identify the correct product owner – in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner – in fact, they have the most to gain should the project fail. Know the time commitments of team members to the project Second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn’t get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand-up, team members would say that they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse – putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available from the members? By actual time, I mean the result of getting them to go through all their appointments, lunch times and breaks and removing those from their time commitment – this helps get you to a realistic time that they can dedicate. Make sure you meet your minimum team sizes In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is so easy to lose momentum on the project – the more people you have in the team (up to about 9) the more momentum the project will have when people are not around. Swapping from one methodology to another can seem like waste to the customer It sounds bad, but most customers don’t care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach and then before the project started we moved onto a scrum implementation – this seemed like waste to the customer. We should have managed the customer’s expectations properly. Don’t play the role of the scrum master if you can’t be the scrum master With this particular implementation I was the “scrum master”. But all I did was go through the process of the formal meetings of scrum – I attended stand-up, retrospectives and planning – but I was not on the ground with the team. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages “cropping up”.
A better approach would have been to take someone on the team, train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of scrum didn’t mean we were doing scrum. So we failed with this one – if you fail, look at it from an agile perspective As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging and reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focusing on learning why, and how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • The Best BPM Journey: More Exciting Destinations with Process Accelerators

    - by Cesare Rotundo
Oracle Open World (OOW) earlier this month was a great occasion to talk with our BPM customers. It was interesting to hear definite patterns emerging from those conversations: “BPM is a journey”, “experiences to share”, “our organization now understands what BPM is”, and my favorite (with some caveats): “BPM is like wine tasting, once you start, you want to try more”. These customers have started their journey, climbed up the learning curve, and reached a vantage point that allows them to see their next BPM destination. They see the next few processes they are going to tackle and improve with BPM. These processes/destinations target both horizontal processes where BPM replaces or coordinates manual activities, and critical industry processes that the company needs to improve to compete and deliver increasing value. Each new destination generates value, allowing the organization to reduce the cost of manual processes that were not supported by apps/custom development, and increase the efficiency of end-to-end processes partially covered by apps/custom dev. The question we wanted to answer is how to help organizations experience deeper success with BPM, by increasing their awareness of the potential for reaching new targets, and equipping them with the right tools. We decided that we needed to identify destinations, and plot routes to show the fastest path to those destinations. In the end we want to enable customers to reach “Process Excellence”: continuously set new targets and consistently and efficiently reach them. The result is Oracle Process Accelerators (PA), solutions built using the rich functionality in Oracle BPM Suite. PAs offer a rapidly expanding list of exciting destinations. Our launch of the latest installment of Process Accelerators at Oracle Open World includes new industry-focused solutions such as Public Sector Incident Reporting and Financial Services Loan Origination, and improvements to other horizontal PAs, including Travel Request Management, Document Routing and Approval, and Internal Service Requests. Just before OOW we had extended the Oracle deployment of Travel Request Management, riding the enthusiastic response from early adopters among travelers (employees), management and support (approvers). “Getting there first” means being among the first to extract value from the PA approach, while acquiring deeper insights into the customers’ perspective. This is especially noteworthy when it comes to PAs, a set of solutions designed to be quickly deployed and iteratively improved by customers. The OOW launch has generated immediate feedback from customers, non-customers, analysts, and partners.
They all confirmed that both Business and IT at organizations benefit from PAs when it comes to exploring the potential for BPM to improve their business processes. PAs help customers visualize what can be done with BPM, and PAs are made to be extended: you can see your destination, change the path to fit your needs, and deploy. We're discovering new destinations/processes that the market wants us to support, generic enough across industries and within industries. We'll keep on building sets of requirements, deliver functional design, construct solutions using Oracle BPM, and test them not only functionally but for performance, scalability, clustering, making them robust, product-quality. Delivering BPM solutions with product-grade quality is the equivalent of following a tried-and-tested path on a map. Do you know of existing destinations in your industry? If yes, we can draw a path to innovative processes together.

    Read the article

  • SOA, Governance, and Drugs

Why is IT governance important in service-oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder’s responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010). Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system. Governance defines development standards based on proven design principles and patterns that promote reuse and composition. Governance provides developers with a set of proven design principles, standards and practices that promote the reduction of system-based component dependencies. By following these guidelines, individual components will be easier to maintain. For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know that, personally, I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario and I have found that the concept of code reuse and composition is almost nonexistent. Because of this, too much time and money is wasted on redeveloping aspects of an application that already exist within the system as a whole. I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach in which the top layer will be defined by enterprise architects who focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers who further expand on the abstracted guidelines defined by the enterprise architects. This layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied based on the context of the solutions being developed by the department. The final layer will be defined by the system designer or a solutions architect assigned to a project; they will define what design patterns will be used in a solution and the naming conventions, as well as outline how a system will function based on the best practices defined by the previous layers.
This layered approach allows IT departments to be flexible in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, and then the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules based on how they are interpreted by the local community. To further define my example, the United States government defines that marijuana is illegal. Each individual state has the option to determine this regulation as it wishes: the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules: a police officer will arrest anyone in the state of Florida for having this drug on them as they walk down the street, but in California a person who has a medical prescription for the drug will not get arrested. References: Thomas Manes, Anne. (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
Motivation LINQ (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL databases, XML and so on. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you don’t need to describe in detail how a task should be solved, you only describe what should be solved. This frees the developer from error-prone iterator constructs. Ideally, of course, you would access features this way. Then this construct is conceivable:

var largeFeatures =
    from feature in features
    where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
    select feature;

or its equivalent as a lambda expression:

var largeFeatures =
    features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight - you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution) - only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, even though the query was created at a completely different place. Especially with multiple iterations through the entities, in the first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic. Realization A very concise way of constructing an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
{
    IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
    IFeature feature;
    while (null != (feature = featureCursor.NextFeature()))
    {
        yield return feature;
    }
    // this is skipped in unit tests with a cursor mock
    if (Marshal.IsComObject(featureCursor))
    {
        Marshal.ReleaseComObject(featureCursor);
    }
}

So you can now easily generate the IEnumerable<IFeature>:

IEnumerable<IFeature> features = _featureClass.GetFeatures(queryFilter, RecyclingPolicy.DoNotRecycle);

You have to be careful with the recycling cursor. After a deferred execution in the same context it is not a good idea to re-iterate over the features. In that case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore, this expression would be critical:

largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

because ToList() already iterates once through the features, and so the cursor has already been moved through them. The subsequent ForEach() therefore always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated executions of the query itself are not a problem, because each time the state machine is re-instantiated and the cursor runs again - that's the magic already mentioned above. Perspective Now you can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. This only requires programming the method and property that give access to the enumerator. In the enumerator itself, the Reset() method organizes the re-execution of the search.
This can be achieved with an appropriate delegate in the constructor:

new FeatureEnumerator<IFeatureClass>(_featureClass, featureClass => featureClass.Search(_filter, isRecyclingCursor));

which is called in Reset():

public void Reset()
{
    _featureCursor = _resetCursor(_t);
}

In this manner, enumerators for completely different scenarios can be implemented, and they are used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily - a major step towards better software quality. Conclusion Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time - about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance - and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case not dominated by code execution but by waiting for user input. Demo source code The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects
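
A minimal sketch of such an enumerator might look like the following (a rough illustration only: the class and field names are assumptions based on the snippet above, and the disposal of the underlying COM cursor is only hinted at):

using System;
using System.Collections;
using System.Collections.Generic;
using ESRI.ArcGIS.Geodatabase;

// Sketch: the delegate passed to the constructor knows how to (re)open the
// cursor, so Reset() simply invokes it again - which is what makes repeated
// iteration work even though a cursor itself is forward-only.
public class FeatureEnumerator<T> : IEnumerator<IFeature>
{
    private readonly T _source;
    private readonly Func<T, IFeatureCursor> _resetCursor;
    private IFeatureCursor _featureCursor;

    public FeatureEnumerator(T source, Func<T, IFeatureCursor> resetCursor)
    {
        _source = source;
        _resetCursor = resetCursor;
        Reset();
    }

    public IFeature Current { get; private set; }

    object IEnumerator.Current
    {
        get { return Current; }
    }

    public bool MoveNext()
    {
        Current = _featureCursor.NextFeature();
        return Current != null;
    }

    public void Reset()
    {
        // Re-execute the search via the delegate supplied by the caller.
        _featureCursor = _resetCursor(_source);
    }

    public void Dispose()
    {
        // Release the underlying COM cursor here (e.g. Marshal.ReleaseComObject)
        // if a real, non-mocked cursor was used.
    }
}

Used, for example, as new FeatureEnumerator<IFeatureClass>(featureClass, fc => fc.Search(queryFilter, false)); the enumerable wrapper around it then only has to hand out this enumerator from GetEnumerator().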

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective. This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes. Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud. A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider’s datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance. Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments. Public clouds may also offer cloud services such as disaster recovery for a private cloud, or the ability to “cloudburst” a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud. These are most feasible when the private and public clouds are built with similar technologies. Usually people think of a public cloud in terms of a user role, e.g., “Which public cloud should I consider using?” But someone needs to own and manage that public cloud. The company that owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services. When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best-suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle’s portfolio of cloud products and services enables both deployment models. Oracle can manage either model.
Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads – from test and development to the most mission-critical – in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement - it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we can not (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
(I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction - however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads - they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
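
To make the difference between the optimistic and pessimistic patterns concrete, here is a small C# sketch. This is an illustration only - ICache below is a deliberately generic, hypothetical interface, not the Coherence API - but the shape of the two approaches is the same regardless of the cache product:

// Hypothetical cache abstraction used only for illustration.
public interface ICache<TKey, TValue>
{
    TValue Get(TKey key);
    // Atomically replaces the value only if it still equals assumedOldValue;
    // returns false if another writer got there first (i.e. our read was stale).
    bool Replace(TKey key, TValue assumedOldValue, TValue newValue);
    void Put(TKey key, TValue value);
    void Lock(TKey key);
    void Unlock(TKey key);
}

public static class AccountOperations
{
    // Optimistic ("fuzzy read"): state the assumption about the old value
    // explicitly and retry when the assumption turns out to be stale.
    public static void CreditOptimistic(ICache<string, decimal> cache, string account, decimal amount)
    {
        while (true)
        {
            decimal oldBalance = cache.Get(account);              // possibly stale
            if (cache.Replace(account, oldBalance, oldBalance + amount))
            {
                return;                                           // assumption held
            }
            // Stale read detected: loop and re-read the current value.
        }
    }

    // Pessimistic ("coherent read"): lock first so no other thread can mutate
    // the entry between the read and the update.
    public static void CreditPessimistic(ICache<string, decimal> cache, string account, decimal amount)
    {
        cache.Lock(account);
        try
        {
            decimal balance = cache.Get(account);
            cache.Put(account, balance + amount);
        }
        finally
        {
            cache.Unlock(account);
        }
    }
}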

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action packed days, I got to meet a large number of customers and most of them had serious interest in all things cloud.  Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud is pretty straight forward. However, when it comes to private cloud, there were some interesting revelations.  Well, I shouldn’t really call them revelations since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While the interest in enterprise private cloud remains to be very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call true private cloud - with capabilities such as self-service and chargeback/show back. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure which is what the enterprise cloud is all about.  Even a handful of those who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn’t something that they see an immediate need for.  This is in quite contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud.  On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises about transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - to something more agile, transparent and business-focused. At the same time however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, it will get even a sane mind spinning.  Consequently, most  enterprises are still struggling to fully understand the concept and value of enterprise private cloud.  Even among those who have chosen to move forward relatively early, quite a few have made their decisions more based on vendor influence/preferences rather than what their businesses actually need.  Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends.  So what is the way forward?  I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs rather than constrained by operational and procedural inefficiencies. 
It is the new way of delivering and consuming IT services - where the IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and provide the business stakeholders a transparent view of how the resources are being used, what’s the cost of delivering a given service, how well the customers are being served, etc. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how the business users have been at the forefront of public cloud adoption within enterprises - and private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on the Oracle enterprise private cloud solution will join me on this blog, "Zero to Cloud", and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback, suggestions and to having an engaging conversation with you on this blog.

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet.  I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources.  There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6.  There are a couple of more enhancements planned for release WLS 12.1.2 that I will include here for completeness.  This isn’t intended as a teaser.  If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.   The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics.  It also seems that the knowledge of how to apply some of these features isn’t written down.  The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.  Introduction to WebLogic Data Source Security Options By default, you define a single database user and password for a data source.  You can store it in the data source descriptor or make use of the Oracle wallet.  This is a very simple and efficient approach to security.  All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out.  That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity).  Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS.   No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet). java.sql.Connection conn =  mydatasource.getConnection(); Note: You can enter the password as a name-value pair in the Properties field (this not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console.  The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab. The JDBC API can also be used to programmatically specify a database user name and password as in the following.  java.sql.Connection conn = mydatasource.getConnection(“user”, “password”); According to the JDBC specification, it’s supposed to take a database user and associated password but different vendors implement this differently.  WLS, by default, treats this as an application server user and password.  The pair is authenticated to see if it’s a valid user and that user is used for WLS security permission checks.  By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one-step removed from the application code.  More details and the rationale are described later. 
While the default approach is simple, it does mean that only one database user is doing all of the work. You can’t figure out who actually did the update, and you can’t restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database. WebLogic Data Source Security Options This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

Feature: User authentication (default)
  Description: Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor.
  Can be used with: Set client identifier
  Can’t be used with: Proxy Session, Identity pooling, Use database credentials

Feature: Use database credentials
  Description: Instead of using the credential mapper, use the supplied user and password directly.
  Can be used with: Set client identifier, Proxy session, Identity pooling
  Can’t be used with: User authentication, Multi Data Source

Feature: Set Client Identifier
  Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
  Can be used with: Everything
  Can’t be used with: (none)

Feature: Proxy Session
  Description: Set a light-weight proxy user associated with the connection (Oracle-only).
  Can be used with: Set client identifier, Use database credentials
  Can’t be used with: Identity pooling, User authentication

Feature: Identity pooling
  Description: Heterogeneous pool of connections owned by specified users.
  Can be used with: Set client identifier, Use database credentials
  Can’t be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars, that is useful too. I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry average statistics from a credible source.

    EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer), so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best.

    Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive. And this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that on average a software shop spends 5% (making up that number) of their total development budget on tooling (salaries being the large portion of the other 95%, for argument's sake), and I'm only spending 3%, it helps in the justification process. So, it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling.

    Next, let's examine the idea that even considering the relationship between a spend on developer salaries and developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on." Well, yes, it is like saying that, and from a budgeting perspective, this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies, or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers.

    Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find such a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic.

    Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation. But the generalized statistic does help in this evaluation. You can know if your budget is grossly off from what others are doing, and this can help you figure out why. So, I will concede the point that it would be better to find statistics that align with my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality, because large enterprises have very heterogeneous IT environments, but if I had to pick one technology as our focus, that would be it. Still, if you were to point me at some statistics related to a Linux shop doing embedded Java applications, I would still find them useful as a point of comparison.

    SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't so much care whether it is expressed as a percentage of total budget, as X dollars per dev, or as Y percentage of salary. Any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.

    Read the article

  • MS Chart Control Scale - Line graph show 12 months

    - by Mike
    Hi, On my X Axis, I have months. The chart shows up to 11 points, i.e. Jan - Nov of the same year, but when I add 12 points (Jan - Dec), it will do an auto label thing and change the interval for every 4 months. How can I change the graph so that it shows 12 months before it does the auto labels? Here is the server control code I am currently using. <asp:CHART ID="Chart1" runat="server" BorderColor="181, 64, 1" BorderDashStyle="Solid" BorderWidth="2" Height="296px" ImageLocation="~/TempImages/ChartPic_#SEQ(300,3)" ImageType="Png" Palette="None" Width="700px" BorderlineColor=""> <legends> <asp:Legend BackColor="Transparent" Font="Trebuchet MS, 8pt, style=Bold" IsTextAutoFit="False" Name="Default" Alignment="Center" DockedToChartArea="ChartArea1" Docking="Top" IsDockedInsideChartArea="False" Title="Legend"> </asp:Legend> </legends> <series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartType="Line" Color="220, 65, 140, 240" MarkerSize="6" Name="Series1" ShadowColor="Black" ShadowOffset="2" XValueType="DateTime" YValueType="Double" LabelFormat="c0" LegendText="Actual" MarkerStyle="Circle"> </asp:Series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartType="Line" Color="220, 224, 64, 10" MarkerSize="6" Name="Series2" ShadowColor="Black" ShadowOffset="2" XValueType="DateTime" YValueType="Double" LabelFormat="c0" LegendText="Projected" MarkerStyle="Circle"> </asp:Series> <asp:Series BorderColor="180, 26, 59, 105" BorderWidth="2" ChartArea="ChartArea1" ChartType="Line" Legend="Default" Name="Series3" LabelFormat="c0" XValueType="DateTime" YValueType="Double" Color="0, 192, 192" MarkerSize="6" ShadowColor="Black" ShadowOffset="2" LegendText="Actual Credit Limit" MarkerStyle="Circle"> </asp:Series> </series> <chartareas> <asp:ChartArea BackColor="#DEEDF7" BackGradientStyle="TopBottom" BackSecondaryColor="White" BorderColor="64, 64, 64, 64" BorderDashStyle="Solid" Name="ChartArea1" ShadowColor="Transparent"> <area3dstyle inclination="40" isclustered="False" isrightangleaxes="False" lightstyle="Realistic" perspective="9" rotation="25" wallwidth="3" /> <axisy linecolor="64, 64, 64, 64" islabelautofit="False" isstartedfromzero="False"> <LabelStyle Font="Trebuchet MS, 8.25pt, style=Bold" Format="c0" /> <majorgrid linecolor="64, 64, 64, 64" /> </axisy> <axisx linecolor="64, 64, 64, 64" intervaloffsettype="Months" intervaltype="Months" islabelautofit="False" isstartedfromzero="False"> <LabelStyle Font="Trebuchet MS, 8.25pt, style=Bold" Angle="-60" Format="MMM yy" /> <majorgrid linecolor="64, 64, 64, 64" /> </axisx> </asp:ChartArea> </chartareas> </asp:CHART> Thanks.
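
    A fix that is often suggested for this auto-labelling behaviour is to pin the X axis interval explicitly instead of letting the chart compute it; the Chart control's Axis exposes Interval and IntervalType for exactly this. Below is a hedged sketch of just the axisx element - the only change from the markup above is the added interval="1", and the value may need tuning for other date ranges:

        <axisx linecolor="64, 64, 64, 64" interval="1" intervaloffsettype="Months" intervaltype="Months" islabelautofit="False" isstartedfromzero="False">
          <LabelStyle Font="Trebuchet MS, 8.25pt, style=Bold" Angle="-60" Format="MMM yy" />
          <majorgrid linecolor="64, 64, 64, 64" />
        </axisx>

    With interval="1" and intervaltype="Months" in place, the chart draws one label per month regardless of how many points the series holds, so adding the 12th point no longer triggers the every-4-months auto interval.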

    Read the article

  • OCR an RSA key fob (security token)

    - by user130582
    I put together a quick WinForm/embedded IE browser control which logs into our company's bank website each morning and scrapes/exports the desired deposit information (the bank is a smallish regional bank). Since we have a few dozen "pseudoaccounts" that draw from the same master account, this actually takes 10-15 minutes to retrieve. Anyway, the only problem is that our business bank account requires an RSA security token (http://www.rsa.com/node.aspx?id=1156) -- if you are not familiar, it is a small device which shows a random 6-digit number every 15(?) seconds, so I have to prompt for this value before starting. This is on top of the website's login-based security model, so even if you create a read-only account that can't do anything, you still have to put the RSA number in. We have 5 of these tokens for different people in the company. From our perspective this is nuisance security. I was joking about using a web camera to OCR the digits from the key fob so they didn't have to type it in -- mainly so that the scraping/export would be done before anyone arrives in the morning. Well, they asked if I could really do it. So now I ask you, how hard (how many hours) do you think it would take to OCR these digits reliably from a JPEG image produced by the camera? I already know I can get the JPEG easily. I think you get 3 tries to log in, so it really needs to hit a 99% accuracy rate. I could work on this in my off time, but they don't want me to put more than a few hours into it, so I want to leverage as much existing code as possible. This is a 7-segment display (like an alarm clock), so it's not exactly text that an OCR package would be used to seeing. Also, there is a countdown timer on the side of the display; typically when it is down to 1 bar, you wait until the next number appears and it starts over at 5 bars (like signal strength on your cell phone). So this would need to be OCR'd as well, but it is not text. Anyway, the more I think about it as I type this, the less convinced I am that I can truly get this right, so maybe I should just work on it in my spare time?
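
    Because the source is a 7-segment LCD rather than arbitrary text, a general OCR package is arguably overkill: sampling the seven segment regions of each digit and matching the on/off pattern against a lookup table is usually enough. Below is a rough C# sketch of that idea, a starting point rather than working production code: it assumes the image has already been cropped to a single digit, the relative sample points are guesses that would have to be calibrated against the real fob, and averaging a small patch around each point instead of reading one pixel would make it less brittle.

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        static class SevenSegmentReader
        {
            // Relative (x, y) sample points for segments a..g inside a unit square.
            // These coordinates are placeholders and must be calibrated for the actual fob.
            static readonly PointF[] SegmentPoints =
            {
                new PointF(0.5f, 0.05f), // a - top
                new PointF(0.9f, 0.25f), // b - upper right
                new PointF(0.9f, 0.75f), // c - lower right
                new PointF(0.5f, 0.95f), // d - bottom
                new PointF(0.1f, 0.75f), // e - lower left
                new PointF(0.1f, 0.25f), // f - upper left
                new PointF(0.5f, 0.50f), // g - middle
            };

            // Bit i set means segment i is lit; these are the standard encodings for 0-9.
            static readonly Dictionary<int, char> Digits = new Dictionary<int, char>
            {
                { 0x3F, '0' }, { 0x06, '1' }, { 0x5B, '2' }, { 0x4F, '3' }, { 0x66, '4' },
                { 0x6D, '5' }, { 0x7D, '6' }, { 0x07, '7' }, { 0x7F, '8' }, { 0x6F, '9' },
            };

            // Returns the recognised digit, or null if the lit-segment pattern is unknown.
            public static char? ReadDigit(Bitmap digit, float darkThreshold)
            {
                int pattern = 0;
                for (int i = 0; i < SegmentPoints.Length; i++)
                {
                    int x = (int)(SegmentPoints[i].X * (digit.Width - 1));
                    int y = (int)(SegmentPoints[i].Y * (digit.Height - 1));

                    // LCD segments are dark on a light background, so low brightness means "lit".
                    if (digit.GetPixel(x, y).GetBrightness() < darkThreshold)
                        pattern |= 1 << i;
                }

                char value;
                return Digits.TryGetValue(pattern, out value) ? value : (char?)null;
            }
        }

    The countdown bars on the side can be handled the same way, only simpler: they are a fixed row of on/off regions, so counting the dark patches gives the number of bars, and the read can be deferred until a fresh number appears.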

    Read the article

  • How to Practice Unix Programming in C?

    - by danben
    After five years of professional Java (and to a lesser extent, Python) programming, and slowly feeling my CS education slip away, I decided I wanted to broaden my horizons / general usefulness to the world and do something that feels more (to me) like I really have an influence over the machine. I chose to learn C and Unix programming since I feel like that is where many of the most interesting problems are. My end goal is to be able to do this professionally, if for no other reason than the fact that I have to spend 40-50 hours per week on work that pays the bills, so it may as well also be the type of coding I want to get better at. Of course, you don't get hired to do things you haven't done before, so for now I am ramping up on my own. To this end, I started with K&R, which was a great resource in part due to the exercises spread throughout each chapter. After that I moved on to Computer Systems: A Programmer's Perspective, followed by ten chapters of Advanced Programming in the Unix Environment. When I am done with this book, I will read Unix Network Programming. What I'm missing in the Stevens books is programming problems; they mainly document functionality and provide examples, with a few end-of-chapter questions following. I feel that I would benefit much more from being challenged to use the knowledge in each chapter, as in K&R. I could write some test program for each function, but this is a less desirable method as (1) I would probably be less motivated than if I were rising to some external challenge, and (2) I will naturally only think to use the function in the ways that have already occurred to me. So, I'd like to get some recommendations on how to practice. Obviously, my first choice would be to find some resource that has Unix programming challenges. I have also considered finding and attempting to contribute to some open source C project, but this is a bit daunting as there would be some overhead in learning to use the software and then learning the codebase. The only open-source C project I can think of that I use regularly is Python, and I'm not sure how easy that would be to get started on. That said, I'm open to all kinds of suggestions, as there are likely things I haven't even thought of.

    Read the article

  • SHAREPOINT: Custom Field type property storage defined for custom field

    - by Eric Rockenbach
    OK, here is a great question. I have a set of generic custom fields that are highly configurable from an end-user perspective, and the configuration is getting overbearing, as there are nearly 100-plus items each custom field allows you to configure in the areas of Server/Client Validation, Server/Client Events/Actions, Server/Client Bindings (parent/child), display properties for form/control, etc. Right now I'm storing most of these values as "Text" in my field XML for my property schema. I'm very familiar with the multi-column value, but this is not a complex custom type in the sense of an array. I also considered creating serializable objects and stuffing them into the text field, then pulling them out and de-serializing them when editing through the field editor or acting on the rules through the custom SPField. So I'm trying to take the following, for example:

        <PropertySchema>
          <Fields>
            <Field Name="EntityColumnName" Hidden="TRUE" DisplayName="EntityColumnName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnParentPK" Hidden="TRUE" DisplayName="EntityColumnParentPK" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityColumnValueName" Hidden="TRUE" DisplayName="EntityColumnValueName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntityListName" Hidden="TRUE" DisplayName="EntityListName" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
            <Field Name="EntitySiteUrl" Hidden="TRUE" DisplayName="EntitySiteUrl" MaxLength="500" DisplaySize="200" Type="Text">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    And turn it into this...

        <PropertySchema>
          <Fields>
            <Field Name="ServerValidationRules" Hidden="TRUE" DisplayName="ServerValidationRules" Type="ServerValidationRulesType">
              <default></default>
            </Field>
          </Fields>
        </PropertySchema>

    Ideas?
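
    Since the question itself raises the "serializable objects stuffed into a Text property" route, here is a minimal sketch of what that can look like. The ServerValidationRules / ValidationRule classes below are hypothetical stand-ins for whatever the field editor actually configures; the only real APIs assumed are XmlSerializer and, on the SharePoint side, SPField.GetCustomProperty/SetCustomProperty for reading and writing the stored string.

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        // Hypothetical rules container; the real one would mirror the field editor's settings.
        public class ValidationRule
        {
            public string Name;
            public string Expression;
            public string ErrorMessage;
        }

        public class ServerValidationRules
        {
            public List<ValidationRule> Rules = new List<ValidationRule>();
        }

        public static class FieldPropertySerializer
        {
            // Turn the rules object into a string that can live in a single Text property.
            public static string Save(ServerValidationRules rules)
            {
                var serializer = new XmlSerializer(typeof(ServerValidationRules));
                using (var writer = new StringWriter())
                {
                    serializer.Serialize(writer, rules);
                    return writer.ToString();
                }
            }

            // Rebuild the rules object when the field editor or the custom SPField needs it.
            public static ServerValidationRules Load(string stored)
            {
                if (string.IsNullOrEmpty(stored))
                    return new ServerValidationRules();

                var serializer = new XmlSerializer(typeof(ServerValidationRules));
                using (var reader = new StringReader(stored))
                {
                    return (ServerValidationRules)serializer.Deserialize(reader);
                }
            }
        }

    The trade-off is that the property becomes an opaque blob rather than something queryable through the schema, but it collapses dozens of Text properties into one per rule family, which seems to be exactly what the question is after.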

    Read the article

  • create text inside a rectangle using inkscape

    - by mr calendar
    I've put some text inside a rectangle using inkscape so the tree is like <svg:rect><svg:text><svg:tspan>text.... The problem is, I can't see the text. I've tried fiddling with the opacity of the rect to no avail. There should be a way of doing this from the UI? Edit example as requested <?xml version="1.0" encoding="UTF-8" standalone="no"?> <!-- Created with Inkscape (http://www.inkscape.org/) --> <svg xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" width="184.25197" height="262.20471" id="svg2" sodipodi:version="0.32" inkscape:version="0.46" version="1.0" sodipodi:docname="ex1.svg" inkscape:output_extension="org.inkscape.output.svg.inkscape"> <defs id="defs4"> <inkscape:perspective sodipodi:type="inkscape:persp3d" inkscape:vp_x="0 : 526.18109 : 1" inkscape:vp_y="0 : 1000 : 0" inkscape:vp_z="744.09448 : 526.18109 : 1" inkscape:persp3d-origin="372.04724 : 350.78739 : 1" id="perspective10" /> </defs> <sodipodi:namedview id="base" pagecolor="#ffffff" bordercolor="#666666" borderopacity="1.0" gridtolerance="10000" guidetolerance="10" objecttolerance="10" inkscape:pageopacity="0.0" inkscape:pageshadow="2" inkscape:zoom="0.64" inkscape:cx="195.9221" inkscape:cy="335.3072" inkscape:document-units="px" inkscape:current-layer="layer1" showgrid="false" inkscape:window-width="640" inkscape:window-height="675" inkscape:window-x="44" inkscape:window-y="44" /> <metadata id="metadata7"> <rdf:RDF> <cc:Work rdf:about=""> <dc:format>image/svg+xml</dc:format> <dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" /> </cc:Work> </rdf:RDF> </metadata> <g inkscape:label="Layer 1" inkscape:groupmode="layer" id="layer1"> <rect style="opacity:0.25480766;fill:#ff0000;fill-opacity:1;stroke:#000000;stroke-width:12.94795799;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" id="rect2383" width="150.87796" height="84.226181" x="18.221733" y="39.557121"> <text xml:space="preserve" style="font-size:56.0331955px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;font-family:Bitstream Vera Sans" x="44.815186" y="114.0088" id="text2385" transform="scale(1.0054479,0.9945816)"><tspan sodipodi:role="line" id="tspan2387" x="44.815186" y="114.0088">text</tspan></text> </rect> </g> </svg> I'd expect to be able to see this in inkscape. The workaround is to put text on a layer above the box (the intent is that the box obscures the layers below it) and not try and get clever with nested tags. Shame it doesn't work though.
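
    For what it's worth, the text is invisible here because svg:rect is not a container element: in SVG, child content of a rect is simply not rendered, so nesting the text element inside the rect hides it in any conforming renderer, not just Inkscape. The workaround described (text as a sibling drawn after the box) is in fact the normal structure. A minimal sketch of the corrected fragment, reusing the coordinates from the example; grouping the two elements keeps them movable as one unit in the Inkscape UI:

        <g inkscape:label="Layer 1" inkscape:groupmode="layer" id="layer1">
          <rect style="opacity:0.25480766;fill:#ff0000;fill-opacity:1;stroke:#000000;stroke-width:12.94795799"
                id="rect2383" width="150.87796" height="84.226181" x="18.221733" y="39.557121" />
          <text xml:space="preserve"
                style="font-size:56.0331955px;fill:#000000;font-family:Bitstream Vera Sans"
                x="44.815186" y="114.0088" id="text2385"
                transform="scale(1.0054479,0.9945816)">
            <tspan sodipodi:role="line" id="tspan2387" x="44.815186" y="114.0088">text</tspan>
          </text>
        </g>

    Because painting order follows document order, the rect drawn first sits behind the text, which is what the nested markup was presumably trying to achieve.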

    Read the article

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool which I wanted to test out on Oracle 10g. The good part with Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX, etc. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts. I customized these scripts a bit to run in batch mode. That solves one half of my requirement - to create source data for testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it up into the same table in a different schema. Now, I have two issues in creating my target schemas and emptying them. I would like this to run as a batch job. But in the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating because the sample schemas are kind of complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of weird stuff. I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting data and then adding the constraints back. Is there any easy way to do all this, without all the fuss? I would need a complicated data set for my testing (complicated as in tables with triggers and multiple hierarchies - for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.
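
    On the "circular references stop me from deleting" part, the usual pattern is to disable every referential constraint in the target schema first, empty the tables, and then re-enable the constraints (which validate trivially against empty tables). A hedged PL/SQL sketch, to be run as the owner of each TR_* schema; it ignores complications such as materialized view container tables, so treat it as a starting point rather than a finished script:

        BEGIN
          -- 1. Disable all foreign keys so the delete order no longer matters.
          FOR c IN (SELECT table_name, constraint_name
                      FROM user_constraints
                     WHERE constraint_type = 'R') LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name ||
                              '" DISABLE CONSTRAINT "' || c.constraint_name || '"';
          END LOOP;

          -- 2. Empty every table in the schema.
          FOR t IN (SELECT table_name FROM user_tables) LOOP
            EXECUTE IMMEDIATE 'DELETE FROM "' || t.table_name || '"';
          END LOOP;
          COMMIT;

          -- 3. Re-enable the constraints now that everything is empty.
          FOR c IN (SELECT table_name, constraint_name
                      FROM user_constraints
                     WHERE constraint_type = 'R') LOOP
            EXECUTE IMMEDIATE 'ALTER TABLE "' || c.table_name ||
                              '" ENABLE CONSTRAINT "' || c.constraint_name || '"';
          END LOOP;
        END;
        /

    Wrapped in a SQL*Plus script, this can sit at the end of the same batch job that creates the TR_* schemas from the customized demo scripts.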

    Read the article

  • Is the Scala 2.8 collections library a case of "the longest suicide note in history" ?

    - by oxbow_lakes
    First, note that the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW and I'd like some opinions on the matter. Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially). I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...

        > List("Paris", "London").map(_.length)
        res0: List[Int] = List(5, 6)

    ...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:

        def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That

    For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base). Is this going to put people off coming to Scala? Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off? Was the library re-design a sensible idea? If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens? Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java). Note - I should be clear that, whilst I believe that Josh Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held belief that the proposal represented a mistake.

    Read the article

  • Volunteer for a potential employer?

    - by EoRaptor013
    I've been looking for work since March and haven't had much luck. Recently, however, I interviewed with a small company near my home for a C#, .NET, SQL development position. I hit it off very well with the hiring manager during the phone screen, and even more so during the face-to-face. Unfortunately, I failed the practical test: wiring up a web form, creating a couple of SQL stored procedures, saving new data with validation, and creating a minimal search screen. I knew what I was doing, but I was too slow to meet their standards, as all the work needed to be done within an hour. Nevertheless, I really liked the place, the environment, the people I would have been working with, and the boss. (I gave the company an 11 on Joel's 12-point scale.) So, the obvious next step was to scrape the rust off. I've been trying to create little projects for myself, but I don't know that I've been effective in getting any faster. What with all that goes into creating a project, I'm not heads-down coding as much as I think I need to be. Now, with all that introduction, here's the question. I have been thinking about calling the hiring manager at that place and asking him to let me volunteer for three or four weeks, with no strings attached. I think it would benefit me, and it wouldn't cost him anything (as long as I didn't slow the existing people down!). At the end of that period, he might, or might not, be inclined to hire me, but even if not, I would have had as much as 160 hours of in-the-trenches development. Maybe not all shiny, but no more rust, I would think. Does this plan make any sense at all? I certainly don't want to sound desperate (although I'm not far from being there), and I very much need the tune-up, lube, and oil change. What's the downside, if any, to me doing this? Do any of you see red flags going up, either from the perspective of the hiring manager or from the perspective of a developer?

    Read the article

  • What was "The Next Big Thing" when you were just starting out in programming?

    - by Andrew
    I'm at the beginning of my career and there are lots of things which are being touted as "The Next Big Thing". For example: Dependency Injection (Spring, etc) MVC (Struts, ASP.NET MVC) ORMs (Linq To SQL, Hibernate) Agile Software Development These things have probably been around for some time, but I've only just started out. And don't get me wrong, I think these things are great! So, what was "The Next Big Thing" when you were starting out? When was it? Were people sceptical of it at first? Why? Did you think it would catch on? Did it pan out and become widely accepted/used? If not, why not? EDIT It's been nearly a week since I first posted this question and I can safely say that I did not expect such explosive interest. I asked the question so that I could gain a perspective of what kinds of innovations in programming people thought were most important when they were starting out. At the time of writing this I have read ~95% of all answers. To answer a few questions, the "Next Big Things" I listed are ones that I am currently really excited about and that I had not really been exposed to until I started working. I'm hoping to implement some or all of these in the near future at my current workplace. To many people they are probably old news. In regards to the "is this a real question" debate, I can see that obviously hasn't been settled yet. I feel bad whenever I read a comment saying that these kinds of questions take away from the real meaning of SO. I'm not wholly convinced that it doesn't. On the other hand, I have seen a lot of comments saying what a great question it is. Anyway, I have chosen "The Internet!" as my answer to this question. I don't think (in my very humble opinion, and, it seems many SOers opinions) that many things related to programming can compare. Nowadays every business and their dog has a website which can do anything from simply supplying information to purchasing goods halfway around the world to updating your blog. And of course, all these businesses need people like us. Thanks to everyone for all the great answers!

    Read the article

  • function parameter used to store value

    - by user248247
    Hi, I have to define an interface. The API in my homework is stated below: int generate_codes(char * ssn, char * student_id); The int denotes 0 or 1 for pass or fail. student_id is an output param that should return a 6-digit id. ssn is a 9-digit input param; the school program will take SSNs and use my code to generate the student id. Now, from an API perspective, should I not be using const char * for both parameters? Should the student_id not be passed in by reference rather than by pointer? Can someone tell me how I can easily use the pointer in my test app (which uses my API) so that it prints a std::string from a char *? My app code looks something like:

        const char * ssn = "987098765";
        const char * studnt_id = new char [7];
        int value = -1;
        value = generate_codes(ssn, studnt_id);
        std::string test(studnt_id);
        std::cout << "student id= " << test << " Pass/fail= " << value << std::endl;
        delete [] studnt_id;
        return 0;

    I basically got an error about << not being compatible with the right-hand side of the operand. When I changed the code to

        std::cout << "student id= " << test.c_str() << " Pass/fail= " << value << std::endl;

    then it worked, but I get garbage for the value. Not sure how to get the value from the pointer. The value inside the function prints just fine, but when I try to print it outside of the function it prints garbage. Inside the above function I set the studnt_id like so: std::string str_studnt_id = studnt_id; Should that make str_studnt_id point to the address of studnt_id, and thus any changes I make to the value it's pointing to should be reflected outside the function?
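
    For the caller side, the two things that bite here are that the output buffer is declared const (so nothing may legally write through it) and that it is never guaranteed to be null-terminated, which is what makes the later std::string construction print garbage. Inside generate_codes, assigning the pointer to a local std::string copies characters out of the buffer; it does not write anything back, so the implementation has to copy the result into student_id (for example with strncpy) and terminate it. A minimal sketch of a caller that matches the given signature, assuming generate_codes fills in exactly six digits plus a trailing '\0':

        #include <iostream>
        #include <string>

        // Signature fixed by the assignment: returns 0/1, 9-digit ssn in, 6-digit id out.
        int generate_codes(char *ssn, char *student_id);

        int main()
        {
            char ssn[10] = "987098765";   // 9 digits + '\0', writable as the signature requires
            char student_id[7] = "";      // room for 6 digits + '\0', zero-initialised

            int value = generate_codes(ssn, student_id);

            std::string id(student_id);   // safe because the buffer is null-terminated
            std::cout << "student id= " << id << " Pass/fail= " << value << std::endl;
            return 0;
        }

    No new/delete is needed for a fixed-size buffer like this, and because the array is writable the call compiles cleanly against the char * parameters. From a pure API point of view, const char * ssn (input) and char * student_id (output) would express the intent better, but that is a question for whoever owns the interface.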

    Read the article

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone: I'm now trapped by a performance optimization lab in the book "Computer Systems: A Programmer's Perspective", described as follows: in an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

    Transpose: interchange elements M(i,j) and M(j,i)
    Exchange rows: Row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        -------                     -------
        |1|2|3|                     |3|6|9|
        -------                     -------
        |4|5|6|   after rotate is   |2|5|8|
        -------                     -------
        |7|8|9|                     |1|4|7|
        -------                     -------

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner-loop unrolling. The results are:

        Code Version     Speed Up
        original         1x
        unrolled by 2    1.33x
        unrolled by 4    1.33x
        unrolled by 8    1.55x
        unrolled by 16   1.67x
        unrolled by 32   1.61x

    I also got a code snippet from pastebin.com that seems to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while(--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while(--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while(--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat every 32 rows as a data zone and perform the rotation zone by zone. The speedup of this version is 1.85x, beating all the loop-unrolled versions. Here are the questions:

    1. In the inner-loop-unroll version, why does the gain diminish as the unrolling factor increases, especially when going from 8 to 16, which does not have the same effect as going from 4 to 8? Does the result have some relationship with the depth of the CPU pipeline? If so, could the diminishing gain reflect the pipeline length?

    2. What is the probable reason for the speedup of the data-zone version? It seems that there is not much essential difference from the original naive version.

    EDIT: My test environment is an Intel Centrino Duo processor and the version of gcc is 4.4. Any advice will be highly appreciated! Kind regards!
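
    As a point of comparison (not the book's official solution), the same cache-locality idea behind the pastebin version can be written as plain loop tiling, which is usually easier to reason about than the pointer arithmetic: process the matrix in 32x32 blocks so that both the reads from src and the writes to dst stay inside a small, cache-resident working set instead of striding across the whole image on every write. A sketch, assuming as in the lab that dim is a multiple of 32 and that pixel is the lab's struct:

        #define RIDX(i, j, n) ((i) * (n) + (j))
        #define TILE 32   /* tile edge; the lab guarantees dim is a multiple of 32 */

        void tiled_rotate(int dim, pixel *src, pixel *dst)
        {
            int ii, jj, i, j;

            /* Walk the matrix one TILE x TILE block at a time. Within a block the
               destination addresses span at most TILE rows/columns, so the writes
               keep hitting a handful of cache lines instead of touching a new line
               on almost every iteration, which is what hurts naive_rotate. */
            for (ii = 0; ii < dim; ii += TILE)
                for (jj = 0; jj < dim; jj += TILE)
                    for (i = ii; i < ii + TILE; i++)
                        for (j = jj; j < jj + TILE; j++)
                            dst[RIDX(dim - 1 - j, i, dim)] = src[RIDX(i, j, dim)];
        }

    Seen this way, the data-zone version is doing much the same thing with the roles reversed (it keeps the dst pointer moving sequentially 32 elements at a time), which is why it pulls ahead of unrolling: unrolling only reduces loop overhead, while blocking changes the memory access pattern.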

    Read the article

  • In a DDD approach, is this example modelled correctly?

    - by Tag
    Just created an account on SO to ask this :) Assume this simplified example: building a web application to manage projects... The application has the following requirements/rules:

    1) Users should be able to create projects, entering the project name.
    2) Project names cannot be empty.
    3) Two projects can't have the same name.

    I'm using a 4-layered architecture (User Interface, Application, Domain, Infrastructure). In my Application Layer I have the following ProjectService.cs class:

        public class ProjectService
        {
            private IProjectRepository ProjectRepo { get; set; }

            public ProjectService(IProjectRepository projectRepo)
            {
                ProjectRepo = projectRepo;
            }

            public void CreateNewProject(string name)
            {
                IList<Project> projects = ProjectRepo.GetProjectsByName(name);
                if (projects.Count > 0)
                    throw new Exception("Project name already exists.");

                Project project = new Project(name);
                ProjectRepo.InsertProject(project);
            }
        }

    In my Domain Layer, I have the Project.cs class and the IProjectRepository.cs interface:

        public class Project
        {
            public int ProjectID { get; private set; }
            public string Name { get; private set; }

            public Project(string name)
            {
                ValidateName(name);
                Name = name;
            }

            private void ValidateName(string name)
            {
                if (name == null || name.Equals(string.Empty))
                {
                    throw new Exception("Project name cannot be empty or null.");
                }
            }
        }

        public interface IProjectRepository
        {
            void InsertProject(Project project);
            IList<Project> GetProjectsByName(string projectName);
        }

    In my Infrastructure Layer, I have the implementation of IProjectRepository which does the actual querying (the code is irrelevant). I don't like two things about this design:

    1) I've read that the repository interfaces should be a part of the domain but the implementations should not. That makes no sense to me, since I think the domain shouldn't call the repository methods (persistence ignorance); that should be a responsibility of the services in the application layer. (Something tells me I'm terribly wrong.)

    2) The process of creating a new project involves two validations (not null and not duplicate). In my design above, those two validations are scattered in two different places, making it harder (IMHO) to see what's going on.

    So, my question is, from a DDD perspective, is this modelled correctly or would you do it in a different way?
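
    One common answer to point 2 (not the only correct one) is to express the uniqueness rule as a small domain service, so that both rules are stated in the domain and the application service is left with orchestration only. A hedged sketch reusing the Project and IProjectRepository types above; the ProjectUniquenessChecker name is made up:

        // Domain layer: the "no duplicate names" rule, expressed in domain terms.
        public class ProjectUniquenessChecker
        {
            private readonly IProjectRepository _projects;

            public ProjectUniquenessChecker(IProjectRepository projects)
            {
                _projects = projects;
            }

            public bool IsNameTaken(string name)
            {
                return _projects.GetProjectsByName(name).Count > 0;
            }
        }

        // Application layer: orchestration only.
        public class ProjectService
        {
            private readonly IProjectRepository _projects;
            private readonly ProjectUniquenessChecker _uniqueness;

            public ProjectService(IProjectRepository projects, ProjectUniquenessChecker uniqueness)
            {
                _projects = projects;
                _uniqueness = uniqueness;
            }

            public void CreateNewProject(string name)
            {
                if (_uniqueness.IsNameTaken(name))
                    throw new Exception("Project name already exists.");

                Project project = new Project(name);   // rule 2 is still enforced in the constructor
                _projects.InsertProject(project);
            }
        }

    This also illustrates why the repository interface usually sits in the domain: the domain service above needs to talk about "finding projects by name" without caring how the infrastructure layer implements it.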

    Read the article

  • Can you pass by reference in Java?

    - by dbones
    Hi. Sorry if this sounds like a newbie question, but the other day a Java developer mentioned passing a parameter by reference (by which they meant just passing a Reference object). From a C# perspective I can pass a reference type by value or by reference, and this is also true of value types. I have written a noddy console application to show what I mean... can I do this in Java?

        namespace ByRefByVal
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //Creating of the object
                    Person p1 = new Person();
                    p1.Name = "Dave";
                    PrintIfObjectIsNull(p1); //should not be null

                    //A copy of the Reference is made and sent to the method
                    PrintUserNameByValue(p1);
                    PrintIfObjectIsNull(p1);

                    //the actual reference is passed to the method
                    PrintUserNameByRef(ref p1); //<-- I know I'm passing the Reference
                    PrintIfObjectIsNull(p1);

                    Console.ReadLine();
                }

                private static void PrintIfObjectIsNull(Object o)
                {
                    if (o == null)
                    {
                        Console.WriteLine("object is null");
                    }
                    else
                    {
                        Console.WriteLine("object still references something");
                    }
                }

                /// <summary>
                /// this takes in a Reference type of Person, by value
                /// </summary>
                /// <param name="person"></param>
                private static void PrintUserNameByValue(Person person)
                {
                    Console.WriteLine(person.Name);
                    person = null; //<- this cannot affect the original reference, as it was passed in by value.
                }

                /// <summary>
                /// this takes in a Reference type of Person, by reference
                /// </summary>
                /// <param name="person"></param>
                private static void PrintUserNameByRef(ref Person person)
                {
                    Console.WriteLine(person.Name);
                    person = null; //this has access to the original reference, allowing us to alter it, either making it point to a different object or to nothing.
                }
            }

            class Person
            {
                public string Name { get; set; }
            }
        }

    If Java cannot do this, then is it just passing a reference type by value? (Is that fair to say?) Many thanks, Bones
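
    For completeness, here is a small Java counterpart (class names are mine, not from the question) illustrating the usual answer: Java always passes by value, and for objects the thing passed by value is the reference. So the PrintUserNameByValue behaviour can be reproduced, but there is no equivalent of C#'s ref/out, and the usual workarounds are returning a new value or passing a mutable holder object.

        public class ByValDemo {

            static class Person {
                String name;
                Person(String name) { this.name = name; }
            }

            // The parameter is a copy of the reference: reassigning it does not affect the caller.
            static void printAndNull(Person person) {
                System.out.println(person.name);
                person = null; // only the local copy of the reference changes
            }

            // Mutating the object the reference points to IS visible to the caller.
            static void rename(Person person) {
                person.name = "Changed";
            }

            public static void main(String[] args) {
                Person p = new Person("Dave");

                printAndNull(p);
                System.out.println(p == null ? "object is null"
                                             : "object still references something");
                // prints "object still references something": the caller's p cannot be nulled from inside

                rename(p);
                System.out.println(p.name); // prints "Changed": state changes on the shared object are visible
            }
        }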

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >