Search Results

Search found 1614 results on 65 pages for 'emps (expensive managemen'.


  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
    Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first. I made a couple of small games, but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like ActionScript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use were available for multiple platforms (so more people can play my game once finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason). I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. Even bigger is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: is there a cross-platform (free, or free to develop?) engine available that I could use for 2D development? I prefer C# or ActionScript. I don't mind using C++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So, in summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter; I want some real APIs that I can work with inside a proper IDE. Just for information, I looked at several alternatives; I've actually been looking for a long time already. You'd help me a lot to make a decision finally. Feature-wise the FlatRedBall engine would be ideal, but I tried their tools and, quite frankly, they are horrible. Absolutely unusable; I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further. Unity3D: this one is quite nice, but I really don't need 3D, and it is quite a lot of work to learn. I also don't like that it is so expensive to use for different platforms (you have to buy each platform separately) and that I can only code for it through scripting. The editor usability is average; the product overall is good enough for most purposes, but learning it myself would be overkill. Shiva 3D: it looks good enough, but again, I don't really need 3D. The editor usability is a little worse than Unity3D in my opinion, and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and C# is fun ;) SDL: quite frankly, I'd still need to port to all those different SDL implementations.
    And I don't like OpenGL-style programming; it's just plain ugly. And it needs C++. I know that there might be some wrappers available, but I don't like to use wrappers, because... Irrlicht: a lot of features, but support seems to be low and it is aimed at enthusiasts; C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht. Ogre3D: way too much work; it's just a graphics engine. Also no multi-platform support, and C++. Torque2D: costs something to use, and I didn't hear a lot of good things about support and documentation. It also costs extra for each platform.


  • Hiring New IT Employees versus Promoting Internally for IT Positions

    Recently I was asked my opinion on hiring IT professionals: specifically, hiring new IT employees versus promoting internally for IT positions. After thinking a little more about this staffing question, I think my answer is that it truly depends on the situation; however, in most cases I would side with promoting internally. The key factors in this decision should be based on a company/department's current values, culture, attitude, and existing priorities. For example, if a company values retaining all of its hard-earned business knowledge, then it would tend to promote existing employees internally over hiring a new employee. However, the company will have to pay to train an existing employee to learn a new technology, and the learning curve for some technologies can be very steep. Conversely, if a company values new technologies and technical proficiency over business knowledge, then it would tend to hire new employees, because they may already have experience with a technology that the company is planning on using. In this scenario, the company would have to take on the additional overhead of allowing a new employee to learn how the business operates prior to them being fully effective. To illustrate my points above, let us look at a contractor that builds in-ground pools. He has the option to hire employees that are very strong but use small shovels to dig, or employees weak in physical strength but who use large shovels to dig. Which employee should the contractor use to dig a hole for a new in-ground pool? If we compare the possible candidates for this job, we will find that they are very similar to promoting someone internally versus a new hire. The first example represents the existing workers, who are very strong regarding their understanding of how the business operates and why it operates in a specific manner. However, this employee could potentially be weaker than an outsider pertaining to specific technologies, and would need some time to build their technical prowess for a new position, much like the strong worker upgrading their shovels in order to remove more dirt at once when digging. The other employee is very similar to a new hire who may already have the large shovel but will need to increase their strength in order to use the shovel properly and efficiently, so that they can move a maximum amount of dirt in a minimal amount of time. This can be compared to a new employee learning how a business operates before they can be fully functional and integrated in the company/department. Another key factor in this dilemma pertains to existing employees and their passion for their work, their ability to accept new responsibility when given, and their willingness to take on responsibilities when they see a need in the business. As much as possible should be considered in this decision, down to the mood of the team, the quality of existing staff, the learning curve for both technology and business, and the potential side effects on the existing staff. In addition, there are many more considerations based on the current team/department/company's culture and mood. There are several factors that need to be considered when promoting an individual or hiring new blood for a team. They both can provide great benefits as well as create controversy within a group.
    Personally, I think staffing, especially in the IT world, is like building a large-scale system, in that all of the components and modules must fit together and perform as one cohesive system, in the same way a team must come together using their individually acquired skills so that they can work as one team. If a module is out of place or is nonexistent, then the rest of the team will suffer until all of its issues are addressed and resolved.

    Benefits of promoting internally:
      - Internal promotions give employees a reason to constantly upgrade their technology, business, and communication skills if they want to further their career.
      - Employees can control their own destiny based on personal desires.
      - The employee already knows how the business operates.
      - Companies can save money by promoting internally, because the initial overhead of allowing new hires to learn how a company operates is very expensive.
      - Newly promoted employees can assist in training their replacements while transitioning to their new role within a company.
      - Existing employees already have a proven track record in regard to fitting in with the business culture; this is always an unknown with new hires.

    Benefits of a new hire:
      - New employees can energize and excite existing employees.
      - New employees can bring new ideas and advancements in technology.
      - New employees can offer a different perspective on existing issues based on their past experience.

    As you can see, the decision to promote an existing employee from within a company versus hiring a new person should be based on several factors that ultimately place the business in the best possible situation for the immediate and long-term future. How would you handle this situation? Would you hire a new employee or promote from within?


  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully, I'm talking about "feelings" and not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with a MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, I can be called an "MS fanboy"; I don't care. I hope that people can appreciate my opinion, even if it doesn't match theirs. The "Surface" brand can be confusing for techies that knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. But marketing departments are here to confuse people... so I can understand this "recycling" of an existing name. So Microsoft is entering the hardware arena... for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new model) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market and bringing back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop in the huge mass of anonymous PCs on display... only Macs shine out there...). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition to a market that was ruled only by prices (the lower the better, even when that means low quality) and no innovative features at all (when was the last time that a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to be back in the innovation run against Apple and Google. MS can't afford to fail this time. I saw the new devices (the WinRT and Pro ones) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover... Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead in Win8, as reported here, and missed by people who hate to read text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays that brought very high definition screens to tablets. Also there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) or the expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets).
    Also nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive. There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model, where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope. To be honest, I really don't like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!) or the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected, you are dead). Not having compatibility with the desktop will require brand new applications, and honestly it makes all the CPU cycles spent in the past converting .NET IL into real machine code look like a huge waste of time... as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menu of VS2012 hasn't changed this situation. The new Metro UI got mixed reviews. On my side I should say that it is very pleasant to use on a touch screen; I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible), but I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves... Metro is also very interesting for embedded devices, where touch-screen usage is quite common and where having an application taking up the whole screen is the norm. For devices like kiosks, vending machines etc. this kind of UI can be a great selling point. I don't need a new tablet (to be honest I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!


  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud. It was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published. I met a number of Oracle platform administrators: DBAs, middleware admins, SOA admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with the kind of certainty propounded by Heisenberg. What happened to the age-old IT discipline around administration, compliance and configuration management? VMs are great for what they are. I personally think they have opened the doors to new approaches by which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision full-fledged applications as VM Assemblies. At this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel went down from weeks to hours was gratifying indeed). However, I do have an issue with a "one size fits all" approach to cloud. In a week's span, I met several personas:

      - Project owners requiring an EC2-like VM instance for their projects
      - Admins needing the same for SPARC/Solaris
      - DBAs requiring dedicated databases for new projects
      - APEX developers needing just a ready-to-consume schema as a service
      - Java developers looking for a runtime platform
      - QA engineers needing a fast clone of their production environment

    If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production. You shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases, it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed. When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one can possibly think that we are trying to be the "renaissance man"! Well, I would have possibly digested that had it not been for the various personas that I described above.
    Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again, and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL and run away with all the resources?), quota and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base! In the coming posts, I will drill down more into each of the services. In the meanwhile, here are some collateral and demos for starters with EM 12c. http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html Sudip Datta The views expressed here are my own and do not necessarily reflect the views of Oracle.


  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
    The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution, of course, is to mitigate each of those factors, but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
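    To make the extrapolation step described above concrete, here is a minimal sketch in Java of the linear resource model; all the measured figures are hypothetical, and a real sizing exercise would add headroom for the cluster-management overhead discussed above.

        public class CoherenceSizingSketch {
            public static void main(String[] args) {
                // hypothetical measurements taken during a load test
                double measuredRequestsPerSec = 10_000.0;
                double measuredCpuUtilization = 0.30;   // 30% CPU at that load
                double targetRequestsPerSec   = 40_000.0;

                // linear extrapolation, as suggested for reasonably designed apps
                double projectedCpu = measuredCpuUtilization
                        * (targetRequestsPerSec / measuredRequestsPerSec);

                // assumption: 30% safety margin for GC, rebalancing and membership changes
                double headroomFactor = 1.3;
                System.out.printf("Projected CPU: %.0f%% of one node's capacity%n",
                        projectedCpu * headroomFactor * 100);
            }
        }

    A projected figure over 100% of one node's capacity simply says the target load needs to be spread across more members, which is where the non-linear, cluster-size-dependent effects above start to matter.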


  • Thinking differently about BI delivery

    - by jamiet
    My day job involves implementing Business Intelligence (BI) solutions which, as I have said before, is simply about giving people the information they need to do their jobs. I'm always interested in learning about new ways of achieving that aim, and that is my motivation for writing blog entries that are not concerned with SQL or SQL Server per se. Implementing BI systems usually involves hacking together a bunch of third-party products with some in-house "glue" and delivering information using some shiny, expensive web-based front-end tool; the list of vendors that supply such tools is big and ever-growing. No doubt these tools have their place, and of late I have started to wonder whether they can be supplemented with different ways of delivering information. The problem I have with these separate web-based tools is exactly that: they are separate web-based tools. What's the problem with that, you might ask? I'll explain! They force the information worker to go somewhere unfamiliar in order to get the information they need to do their jobs. Would it not be better if we could deliver information into the tools that those information workers are already using, and not force them to go somewhere else? I look at the rise of blogging over recent years and I realise that what made blogs popular is that people can subscribe to RSS feeds and have information pushed to them in their tool of choice, rather than having to go and find the information for themselves in a tool that has been foisted upon them. Would it not be a good idea to adopt the principle of subscription for the benefit of delivering BI information as well? I think it would, and in the rest of this blog entry I'll outline such a scenario, where the power of subscription could be used to enhance the delivery of information to information workers. Typical questions that information workers ask might be: What are my year-on-year sales figures? What was my footfall yesterday? How many widgets have I sold so far today? Each of those questions includes a time element, and that shouldn't surprise us; any BI system that I have worked on includes the dimension of time. Now, what do people use to view and organise their time-oriented information? It's not a trick question: they use a calendar, and in the enterprise space, more often than not, that calendar is managed using Outlook. Given that information workers are already looking at their calendar in Outlook anyway, would it not make sense to deliver information into that same calendar? Of course it would. Calendars are a great way of visualising information such as sales figures. Observe: Just in this single screenshot I have managed to convey a multitude of information. The information worker can see, at a glance, information about hourly/daily/weekly/monthly sales and, moreover, he/she is viewing that information right inside the tool that they use every day. There is no effort on his/her part; the information just appears hour after hour, day after day. Taking the idea further, each one of those calendar items could be a mini-dashboard in its own right. Double-clicking on an item could show a plethora of other information about that time slot, such as breaking the sales down per region, or year-over-year comparisons. Perhaps the title could employ a sparkline? Loads of possibilities. The point is that calendars are a completely natural way to visualise information; we should make more use of them!
    The real beauty of delivering information using calendars for us BI developers is that it should be so easy. In the case of Outlook we don't need to write complicated VBA code that goes and manipulates a person's calendar; simply publishing data in a format that Outlook can understand is sufficient, and happily such formats already exist: iCalendar is the accepted format, and the even more flexible xCalendar is hopefully on its way as well. I'd like to make one last point, and this one is with my SQL Server hat on. Reporting Services 2008 R2 introduced the ability to publish data as subscribable Atom feeds, so it seems logical that it could also be a vehicle for delivering calendar feeds too. If you think this would be a good idea, go and vote for it at Publish data as iCalendar feeds, and please, please, please add some comments (especially if you vote it down). Work smarter, not harder! @Jamiet
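    As a rough illustration of just how little plumbing the iCalendar route needs, here is a minimal sketch in C# that emits a one-event .ics feed; the sales figure, dates and identifiers are made up, and a production feed would of course pull them from the warehouse.

        using System;
        using System.Text;

        class ICalendarSketch
        {
            static void Main()
            {
                // hypothetical daily sales figure published as an all-day event
                var sb = new StringBuilder();
                sb.AppendLine("BEGIN:VCALENDAR");
                sb.AppendLine("VERSION:2.0");
                sb.AppendLine("PRODID:-//ExampleCorp//SalesFeed//EN");
                sb.AppendLine("BEGIN:VEVENT");
                sb.AppendLine("UID:sales-20121001@example.com");
                sb.AppendLine("DTSTAMP:20121002T060000Z");
                sb.AppendLine("DTSTART;VALUE=DATE:20121001");
                sb.AppendLine("SUMMARY:Sales 2012-10-01: 52,340 units");
                sb.AppendLine("END:VEVENT");
                sb.AppendLine("END:VCALENDAR");

                // serve this string with MIME type text/calendar and Outlook
                // (or any other calendar client) can subscribe to it
                Console.WriteLine(sb.ToString());
            }
        }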


  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now: classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping, so even though the development team is long gone, there are often new requirements for this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries, and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious, but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts, which I handled without much problem, other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe partly a bit of resentment at my involvement. I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically; the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked (well, some of the results were incorrect), but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" and instantly thought of our domain expert. From the article:

      - Their code is large, messy, and bug laden.
      - They have very superficial knowledge of their problem domain and their tools.
      - Their code has a lot of copy/paste and they have very little interest in techniques that reduce it.
      - They fail to account for edge cases, while inefficiently dealing with the general case.
      - They never have time to comment their code or break it into smaller pieces.
      - Empirical evidence plays no role in their decisions.

    5.5 out of 6. My friend wants me to argue the case to their management; specifically, I got this email from their manager to respond to: "...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help." In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
    On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out, but I feel like I'm getting in the middle of a political battle. More importantly, if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!


  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly the cloud. Over the five action-packed days, I got to meet a large number of customers, and most of them had serious interest in all things cloud. Public cloud, particularly the Oracle Cloud, clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn't really call them revelations, since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While interest in enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud, with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization, and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even the handful who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn't something that they see an immediate need for. This is in stark contrast to the picture painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising, given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution, and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter in the blogosphere, and it will set even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers, but here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs than constrained by operational and procedural inefficiencies.
    It is the new way of delivering and consuming IT services, where IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower the end users via self-service access/control and to provide the business stakeholders a transparent view of how the resources are being used, what the cost of delivering a given service is, how well the customers are being served, etc. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn't be. Just remember how business users have been at the forefront of public cloud adoption within enterprises; private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand in hand with process re-engineering and organizational changes to unlock the true value of enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on Oracle's enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback and suggestions, and to having an engaging conversation with you on this blog.


  • Key Windows Phone Development Concepts

    - by Tim Murphy
    As I am doing more development in and out of the enterprise arena for Windows Phone, I decided I would study for the 70-599 test. I generally take certification tests as a way to force me to dig deeper into a technology. Between the development and the studying, I decided it would be good to put together a post on key development features of the Windows Phone 7 environment. Contrary to popular belief, the launch of Windows Phone 8 will not obsolete Windows Phone 7 development. With the launch of 7.8 coming shortly, and people who will remain on 7.x for the foreseeable future, there are still consumers needing these apps, so don't throw out the baby with the bath water.

    PhoneApplicationService
    This is a class that every Windows Phone developer needs to become familiar with. When it comes to application state, this is your go-to repository. It also contains events that help with management of your application's lifecycle. You can access it like the following code sample.

        PhoneApplicationService.Current.State["ValidUser"] = userResult;

    DeviceNetworkInformation
    This class allows you to determine the connectivity of the device and be notified when something changes with that connectivity. If you are making web service calls you will want to check here before firing off. I have found that this class doesn't actually work very well for determining if you have internet access. You are better off using the following code, where IsConnectedToInternet is an App-level property.

        private void Application_Launching(object sender, LaunchingEventArgs e)
        {
            // Validate user access
            if (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType
                != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None)
            {
                IsConnectedToInternet = true;
            }
            else
            {
                IsConnectedToInternet = false;
            }
            NetworkChange.NetworkAddressChanged +=
                new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged);
        }

        void NetworkChange_NetworkAddressChanged(object sender, EventArgs e)
        {
            IsConnectedToInternet =
                (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType
                 != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None);
        }

    Push Notification
    Push notification allows your application to receive notifications in a way that reduces the application's power needs. This MSDN article is a good place to get the basics of push notification, but you can see the essential concept in the diagram below. There are three types of push notification: toast, Tile and raw. The first two work regardless of the state of the application, whereas raw messages are discarded if your application is not running.

    Live Tiles
    Live Tiles are one of the main differentiators of the Windows Phone platform. They allow users to find information at a glance from their start screen without navigating into individual apps. Knowing how to implement them can be a great boost to the attractiveness of your application. The simplest step-by-step explanation for creating Live Tiles is here.

    Local Database
    While your application really only has isolated storage as a data store, there are some ways of getting database functionality to develop against. There are a number of open source ORM-style solutions. Probably the best and most native way I have found is to use LINQ to SQL. It does take a significant amount of setup, but the ease of use once it is configured is worth the cost. Rather than repeat the full concepts here, I will point you to a post that I wrote previously.
    Tasks (Bing, Email)
    Leveraging built-in features of the Windows Phone platform is an easy way to add functionality that would be expensive to develop on your own. The classes that you need to make yourself familiar with are BingMapsDirectionsTask and EmailComposeTask. These will allow your application to supply directions and give the user an email path to relay information to friends and associates. (A short sketch follows after the summary below.)

    Event model
    The ability for users to quickly switch to other apps or the home screen is just one reason why knowing the Windows Phone event model is important. You need to be able to save data so that if a user gets a phone call they can come back to exactly where they were in your application. This means that you will need to handle such events as Launching, Activated, Deactivated and Closing at an application level. You will probably also want to get familiar with the OnNavigatedTo and OnNavigatedFrom events at the page level. These will give you an opportunity to save data as a user navigates through your app.

    Summary
    This is just a small portion of the concepts that you will use while building Windows Phone apps, but these are some of the most critical. With the launch of Windows Phone 8 this list will probably expand. Take the time to investigate these topics further and try them out in your apps.
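    Here is a minimal sketch of the two tasks mentioned above; the address, label and coordinates are placeholders, and leaving Start unset on BingMapsDirectionsTask should make directions begin from the user's current location. The snippet is meant to run inside a page event handler, such as a button click.

        using Microsoft.Phone.Tasks;
        using System.Device.Location;

        // compose an email on the user's behalf (the user still presses Send)
        var email = new EmailComposeTask
        {
            To = "friend@example.com",       // placeholder address
            Subject = "Progress report",
            Body = "The figures you asked for."
        };
        email.Show();

        // launch turn-by-turn directions to a labeled destination
        var directions = new BingMapsDirectionsTask
        {
            End = new LabeledMapLocation("Office",
                      new GeoCoordinate(47.6097, -122.3331)) // placeholder coordinates
        };
        directions.Show();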


  • BeansBinding Across Modules in a NetBeans Platform Application

    - by Geertjan
    Here are two TopComponents, each in a different NetBeans module. Let's use BeansBinding to synchronize the JTextField in TC2TopComponent with the data published by TC1TopComponent and received in TC2TopComponent by listening to the Lookup. The key to getting to the solution is to have the following in TC2TopComponent, which implements LookupListener:

        private BindingGroup bindingGroup = null;
        private AutoBinding binding = null;

        @Override
        public void resultChanged(LookupEvent le) {
            if (bindingGroup != null && binding != null) {
                bindingGroup.getBinding("customerNameBinding").unbind();
            }
            if (!result.allInstances().isEmpty()) {
                Customer c = result.allInstances().iterator().next();
                // put the customer into the lookup of this topcomponent,
                // so that it will remain in the lookup when focus changes
                // to this topcomponent:
                ic.set(Collections.singleton(c), null);
                bindingGroup = new BindingGroup();
                binding = Bindings.createAutoBinding(
                        // a two-way binding, i.e., a change in
                        // one will cause a change in the other:
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        // source:
                        c, BeanProperty.create("name"),
                        // target:
                        jTextField1, BeanProperty.create("text"),
                        // binding name:
                        "customerNameBinding");
                bindingGroup.addBinding(binding);
                bindingGroup.bind();
            }
        }

    I must say that this solution is preferable to what I was doing prior to getting here: I would get the customer from resultChanged, set a class-level field to that customer, add a document listener (or action listener, which is invoked when Enter is pressed) on the text field and, when a change was detected, set the new value on the customer. All that is not needed with the above bit of code. Then, in the node, make sure to use canRename, setName, and getDisplayName, so that when the user presses F2 on a node, the display name can be changed. In other words, when the user types something different in the node display name after pressing F2, the underlying customer name is changed, which happens, in the first place, because the customer name is bound to the text field's value, so that the text field's value will also change once Enter is pressed on the changed node display name. Also set a PropertyChangeListener on the node (which implies you need to add property change support to the customer object), so that when the customer object changes (which happens, in the second place, via a change in the value of the text field, as defined in the binding defined above), the node display name is updated. In other words, there's still a bit of plumbing you need to include. But less than before, and the nasty class-level field for storing the customer in the TC2TopComponent is no longer needed. And a listener on the text field, with a property change listener implemented on the TC2TopComponent, isn't needed either. On the other hand, it's more code than I was using before, and I've had to include the BeansBinding JAR, which adds a bit of overhead to my application without much additional functionality over what I was doing originally. I'd lean towards not doing things this way. It seems quite expensive for essentially replacing a listener on a text field and a property change listener implemented on the TC2TopComponent for being notified of changes to the customer so that the text field can be updated. On the other other hand, it's kind of nice that all this listening-related code is centralized in one place now. So, here's a nice improvement over the above.
    Instead of listening for a customer, listen for a node, from which the customer can be obtained. Then, bind the node display name to the text field's value, so that when the user types in the text field, the node display name is updated. That saves you from having to listen in the node for changes to the customer's name. In addition to that binding, keep the previous binding, because the previous binding connects the customer name to the text field, so that when the customer display name is changed via F2 on the node, the text field will be updated.

        private BindingGroup bindingGroup = null;
        private AutoBinding nodeUpdateBinding;
        private AutoBinding textFieldUpdateBinding;

        @Override
        public void resultChanged(LookupEvent le) {
            if (bindingGroup != null && textFieldUpdateBinding != null) {
                bindingGroup.getBinding("textFieldUpdateBinding").unbind();
            }
            if (bindingGroup != null && nodeUpdateBinding != null) {
                bindingGroup.getBinding("nodeUpdateBinding").unbind();
            }
            if (!result.allInstances().isEmpty()) {
                Node n = result.allInstances().iterator().next();
                Customer c = n.getLookup().lookup(Customer.class);
                ic.set(Collections.singleton(n), null);
                bindingGroup = new BindingGroup();
                nodeUpdateBinding = Bindings.createAutoBinding(
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        n, BeanProperty.create("name"),
                        jTextField1, BeanProperty.create("text"),
                        "nodeUpdateBinding");
                bindingGroup.addBinding(nodeUpdateBinding);
                textFieldUpdateBinding = Bindings.createAutoBinding(
                        AutoBinding.UpdateStrategy.READ_WRITE,
                        c, BeanProperty.create("name"),
                        jTextField1, BeanProperty.create("text"),
                        "textFieldUpdateBinding");
                bindingGroup.addBinding(textFieldUpdateBinding);
                bindingGroup.bind();
            }
        }

    Now my node has no property change listener, while the customer has no property change support. As in the first bit of code, the text field doesn't have a listener either. All that listening is taken care of by the BeansBinding code. Thanks to Toni for help with this, though he can't be blamed for anything that is wrong with it, only thanked for anything that is right with it.


  • How to set text in Y-axis, instead of numbers, in a RadChart component from Telerik with Bar-type

    - by radbyx
    Hi, I have made a bar RadChart with SeriesOrientation="Horizontal". I have the text showing at the end of each bar, but instead I would like that text to be listed on the y-axis, instead of the 1, 2, 3... numbers. It seems like I'm not allowed to set any text on the y-axis; is there a property I can set? Here are my code snippets:

    === .ascx ===

        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <telerik:RadChart ID="RadChart1" runat="server" Skin="WebBlue" AutoLayout="true"
                    Height="350px" Width="680px" SeriesOrientation="Horizontal">
                    <Series>
                        <telerik:ChartSeries DataYColumn="UnitPrice" Name="Product Unit Price">
                        </telerik:ChartSeries>
                    </Series>
                    <PlotArea>
                        <YAxis>
                            <Appearance>
                                <TextAppearance TextProperties-Font="Verdana, 8.25pt, style=Bold" />
                            </Appearance>
                        </YAxis>
                        <XAxis DataLabelsColumn="TenMostExpensiveProducts">
                        </XAxis>
                    </PlotArea>
                    <ChartTitle>
                        <TextBlock Text="Ten Most Expensive Products" />
                    </ChartTitle>
                </telerik:RadChart>
            </ContentTemplate>
        </asp:UpdatePanel>

    === .ascx.cs ===

        protected void Page_Load(object sender, EventArgs e)
        {
            RadChart1.AutoLayout = false;
            RadChart1.Legend.Visible = false;

            // Create a ChartSeries and assign its name and chart type
            ChartSeries chartSeries = new ChartSeries();
            chartSeries.Name = "Name";
            chartSeries.Type = ChartSeriesType.Bar;

            // add new items to the series,
            // passing a value and a label string
            chartSeries.AddItem(98, "Product1");
            chartSeries.AddItem(95, "Product2");
            chartSeries.AddItem(100, "Product3");
            chartSeries.AddItem(75, "Product4");
            chartSeries.AddItem(1, "Product5");

            // add the series to the RadChart Series collection
            RadChart1.Series.Add(chartSeries);

            // add the RadChart to the page.
            // this.Page.Controls.Add(RadChart1);
            // RadChart1.Series[0].Appearance.LegendDisplayMode = ChartSeriesLegendDisplayMode.Nothing;
            // RadChart1.Series[0].DataYColumn = "Uptime";
            RadChart1.PlotArea.XAxis.DataLabelsColumn = "Name";
            RadChart1.PlotArea.XAxis.Appearance.TextAppearance.TextProperties.Font =
                new System.Drawing.Font("Verdana", 8);
            RadChart1.BackColor = System.Drawing.Color.White;
            RadChart1.Height = 350;
            RadChart1.Width = 570;
            RadChart1.DataBind();
        }

    I want to have the text "Product1", "Product2", etc. on the y-axis; can anyone help? Thx.


  • Compare NSArray with NSMutableArray adding delta objects to NSMutableArray

    - by Hooligancat
    I have an NSMutableArray that is populated with objects of strings. For simplicity's sake we'll say that the objects are a person and each person object contains information about that person. Thus I would have an NSMutableArray that is populated with person objects:

        person.firstName
        person.lastName
        person.age
        person.height

    And so on. The initial source of data comes from a web server and is populated when my application loads and completes its initialization with the server. Periodically my application polls the server for the latest list of names. Currently I am creating an NSArray of the result set, emptying the NSMutableArray and then re-populating the NSMutableArray with the NSArray results before destroying the NSArray object. This seems inefficient to me on a few levels, and also presents me with a problem losing table row references, which I can work around, but I might be creating more work for myself in doing so. The inefficiency seems to be that I should be able to compare the two arrays and end up with a filtered NSArray. I could then add the filtered set to the NSMutableArray. This would mean that I can simply append new data to the NSMutableArray instead of throwing everything out and re-populating. Conversely, I would need to do the same filter in reverse to see if there are records that need removing from the NSMutableArray. Is there any method to do this in a more efficient manner? Have I overlooked something in the docs some place that refers to a simpler technique? I have a problem when I empty the NSMutableArray and re-populate, in that any referencing tables lose their selected row state. I can track it and re-select it, but my theory is that using some form of compare, adding objects and removing objects instead of dealing with the whole array in one block, might mean I keep my row reference (assuming the item isn't deleted, of course). Any suggestions or help much appreciated. Update: Would it be just as fast to do a fast enumeration over each, comparing each line item as I go? It seems like an expensive operation, but with the last fast enumeration code it might be pretty efficient...
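    For what it's worth, here is a minimal sketch of the compare-and-delta idea from the question, using NSPredicate. It assumes each person object exposes a unique identifier property (called personID here, a hypothetical name), since matching on plain object equality across two separate downloads usually won't work.

        // 'people' is the existing NSMutableArray, 'incoming' the fresh NSArray
        NSArray *existingIDs = [people valueForKey:@"personID"];
        NSPredicate *isNew =
            [NSPredicate predicateWithFormat:@"NOT (personID IN %@)", existingIDs];
        NSArray *added = [incoming filteredArrayUsingPredicate:isNew];

        NSArray *incomingIDs = [incoming valueForKey:@"personID"];
        NSPredicate *isGone =
            [NSPredicate predicateWithFormat:@"NOT (personID IN %@)", incomingIDs];
        NSArray *removed = [people filteredArrayUsingPredicate:isGone];

        // append only the delta and drop only the vanished records,
        // leaving every surviving object (and its table row) untouched
        [people addObjectsFromArray:added];
        [people removeObjectsInArray:removed];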


  • How much abstraction is too much?

    - by Daniel Bingham
    In an object-oriented program: How much abstraction is too much? How much is just right? I have always been a nuts-and-bolts kind of guy. I understood the concept behind high levels of encapsulation and abstraction, but always felt instinctively that adding too much would just confuse the program. I always tried to shoot for an amount of abstraction that left no empty classes or layers. And where in doubt, instead of adding a new layer to the hierarchy, I would try and fit something into the existing layers. However, recently I've been encountering more highly abstracted systems. Systems where everything that could require a representation later in the hierarchy gets one up front. This leads to a lot of empty layers, which at first seems like bad design. However, on second thought I've come to realize that leaving those empty layers gives you more places to hook into in the future without much refactoring. It leaves you greater ability to add new functionality on top of the old without doing nearly as much work to adjust the old. The first of the two risks of this seems to be that you could get the layers you need wrong. In this case one would wind up still needing to do substantial refactoring to extend the code, and would still have a ton of never-used layers. But depending on how much time you spend coming up with the initial abstractions, the chance of screwing it up, and the time that could be saved later if you get it right, it may still be worth it to try. The other risk I can think of is the risk of overdoing it and never needing all the extra layers. But is that really so bad? Are extra class layers really so expensive that it is much of a loss if they are never used? The biggest expense and loss here would be time that is lost up front coming up with the layers. But much of that time still might be saved later when one can work with the abstracted code rather than more low-level code. So when is it too much? At what point do the empty layers and extra "might need" abstractions become overkill? How little is too little? Where's the sweet spot? Are there any dependable rules of thumb you've found in the course of your career that help you judge the amount of abstraction needed?

    Read the article

  • Good Email Notification Sending Service

    - by Philibert Perusse
    I need to send a few but important email notifications to individual users. For instance, when they register their software I send them a confirmation email. Right now I am using 'sendmail' from my Perl CGI script to do the job, and most of my automated emails are lost or marked as junk. Unfortunately I am on shared hosting and don't have very good control over the SPF and SenderID DNS records. Worse still, some other user of that shared server has been infected with some kind of spam bot, and the IP is now blacklisted until further notice! Anyway, I just don't want to deal with this kind of headache. I am looking for an online service I can subscribe to and pay something like $0.10 per email sent, with no monthly fees. I just need an API so I can send the email from the PHP or Perl code I will have to write. I have been looking around at all those "email sending services", and they are all wrapped around creating campaigns and managing lists for bulk email marketing and newsletters. But remember, I want to send an email notification to a single recipient. So far I have looked at MailChimp, SocketLabs, iContact, ConstantContact, StreamSend and so many others, to no avail. I have seen one comment on Hacker News saying that MailChimp has an API for transactional emails (i.e. ad hoc ones, to welcome a user for example), so you're not just restricted to using them for bulk email - but I cannot find this in the supplied API documentation; maybe it was removed. Any suggestions out there? Here is a summary of my requirements:
    - Allows ad hoc sending of email to a single recipient.
    - Throughput may well be throttled; I don't care, I am sending 2-5 emails a day.
    - API available in PHP or Perl to connect to the web service.
    - Ideally I can send HTML-formatted emails; otherwise I will live with text only.
    - Not too expensive: between $0.01 and $0.25 per email would be acceptable, with no recurring monthly fees.
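    For what it's worth, most transactional-email providers also expose an authenticated SMTP relay alongside their HTTP APIs, which keeps the client code tiny in any language. A sketch of that shape in Python - the relay hostname, port, and credentials below are placeholders, not any particular vendor's:

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "noreply@example.com"
      msg["To"] = "customer@example.org"
      msg["Subject"] = "Registration confirmed"
      msg.set_content("Thanks for registering!")             # plain-text part
      msg.add_alternative("<p>Thanks for registering!</p>",  # optional HTML part
                          subtype="html")

      with smtplib.SMTP("smtp.provider.example", 587) as server:  # placeholder relay
          server.starttls()                  # encrypt the session
          server.login("api-user", "api-key")  # provider-issued credentials
          server.send_message(msg)

    The same few calls exist in Perl (Net::SMTP) and PHP, so the choice of provider doesn't lock in the client code.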

    Read the article

  • Approaches to create a nested tree structure of NSDictionaries?

    - by d11wtq
    I'm parsing some input which produces a tree structure containing NSDictionary instances on the branches and NSString instances at the nodes. After parsing, the whole structure should be immutable. I feel like I'm jumping through hoops to create the structure and then make sure it's immutable when it's returned from my method. We can probably all relate to the input I'm parsing, since it's a query string from a URL. In a string like this:

      a=foo&b=bar&a=zip

    we expect a structure like this:

      NSDictionary {
        "a" => NSDictionary { 0 => "foo", 1 => "zip" },
        "b" => "bar"
      }

    I'm keeping it just two-dimensional in this example for brevity, though in the real world we sometimes see var[key1][key2]=value&var[key1][key3]=value2 type structures. The code hasn't evolved that far just yet. Currently I do this:

      - (NSDictionary *)parseQuery:(NSString *)queryString {
          NSMutableDictionary *params = [NSMutableDictionary dictionary];
          NSArray *pairs = [queryString componentsSeparatedByString:@"&"];
          for (NSString *pair in pairs) {
              NSRange eqRange = [pair rangeOfString:@"="];
              NSString *key;
              id value;
              // If the parameter is a key without a specified value
              if (eqRange.location == NSNotFound) {
                  key = [pair stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                  value = @"";
              } else { // Else determine both key and value
                  key = [[pair substringToIndex:eqRange.location]
                         stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                  if ([pair length] > eqRange.location + 1) {
                      value = [[pair substringFromIndex:eqRange.location + 1]
                               stringByReplacingPercentEscapesUsingEncoding:NSASCIIStringEncoding];
                  } else {
                      value = @"";
                  }
              }
              // Parameter already exists, it must be a dictionary
              if (nil != [params objectForKey:key]) {
                  id existingValue = [params objectForKey:key];
                  if (![existingValue isKindOfClass:[NSDictionary class]]) {
                      value = [NSDictionary dictionaryWithObjectsAndKeys:
                               existingValue, [NSNumber numberWithInt:0],
                               value, [NSNumber numberWithInt:1], nil];
                  } else {
                      // FIXME: There must be a more elegant way to build a nested
                      // dictionary where the end result is immutable?
                      NSMutableDictionary *newValue =
                          [NSMutableDictionary dictionaryWithDictionary:existingValue];
                      [newValue setObject:value forKey:[NSNumber numberWithInt:[newValue count]]];
                      value = [NSDictionary dictionaryWithDictionary:newValue];
                  }
              }
              [params setObject:value forKey:key];
          }
          return [NSDictionary dictionaryWithDictionary:params];
      }

    If you look at the bit where I've added FIXME, it feels awfully clumsy: pulling out the existing dictionary, creating an immutable version of it, adding the new value, then creating an immutable dictionary from that to set back in place. Expensive and unnecessary? I'm not sure if there are any Cocoa-specific design patterns I can follow here.
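    On the FIXME itself: the usual pattern is to keep every level mutable for the whole parse and perform a single freeze pass at the end, rather than round-tripping mutable/immutable on every insertion. An analogous sketch in Python (plain dicts standing in for NSMutableDictionary, types.MappingProxyType for the immutable view; the function name is illustrative):

      from types import MappingProxyType
      from urllib.parse import unquote_plus

      def parse_query(query_string):
          params = {}
          for pair in query_string.split("&"):
              key, _, value = pair.partition("=")      # "" value when '=' is absent
              key, value = unquote_plus(key), unquote_plus(value)
              if key in params:
                  existing = params[key]
                  if not isinstance(existing, dict):   # promote scalar on first duplicate
                      existing = {0: existing}
                  existing[len(existing)] = value      # stay mutable while parsing
                  params[key] = existing
              else:
                  params[key] = value
          # Freeze exactly once, on the way out.
          return MappingProxyType({k: MappingProxyType(v) if isinstance(v, dict) else v
                                   for k, v in params.items()})

      # parse_query("a=foo&b=bar&a=zip") -> {'a': {0: 'foo', 1: 'zip'}, 'b': 'bar'}

    In Cocoa terms that means building the whole tree out of NSMutableDictionary instances and copying each level to an immutable dictionary once, after the loop, instead of once per insertion.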

    Read the article

  • ASP.NET MVC hosting problem, routing, handlers and modules

    - by johnabs
    Hi, this is one of them again: hosting, this time with Fasthosts. I recently purchased a Windows developer package from them, and when I tried to deploy my ASP.NET 3.5 MVC project - which was working fine with the same host on a reseller account - it stopped working. (The reason for moving from the reseller account to a normal Windows developer package is that the MS SQL Server database is too expensive on a reseller account - strange!) I contacted them, and they say they have upgraded the control panel. Technical support said I need to disable the modules and handlers in the web.config file and deploy them as .dll files instead; I am not sure how to do that. The website is partially working now, and I think the remaining problem is with routing. I have not deployed any DLLs apart from System.Web.Mvc.dll, System.Web.Routing.dll and System.Web.Abstractions.dll, and I have not got a clue how to deploy the modules and handlers. The section I commented out in the web.config file is as follows:

      <!--
      <modules runAllManagedModulesForAllRequests="true">
        <remove name="ScriptModule" />
        <remove name="UrlRoutingModule" />
        <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      </modules>
      <handlers>
        <remove name="WebServiceHandlerFactory-Integrated"/>
        <remove name="ScriptHandlerFactory" />
        <remove name="ScriptHandlerFactoryAppServices" />
        <remove name="ScriptResource" />
        <remove name="MvcHttpHandler" />
        <remove name="UrlRoutingHandler" />
        <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <add name="MvcHttpHandler" preCondition="integratedMode" verb="*" path="*.mvc" type="System.Web.Mvc.MvcHttpHandler, System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add name="UrlRoutingHandler" preCondition="integratedMode" verb="*" path="UrlRouting.axd" type="System.Web.HttpForbiddenHandler, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
      </handlers>
      -->

    Anybody got any ideas, please? Thank you for your time.

    Read the article

  • Which third party website thumbnailing services do you use?

    - by Ben Delarre
    I've got a requirement for showing thumbnails of arbitrary websites. I need to be able to show small thumbnails (120px by 90px) and larger thumbnails around 480px wide. I'll need to specify the queued and invalid placeholder images, and would prefer a pingback when the queued images are processed so I can respond appropriately. I'd also need a simple API I can use either directly embedded in my HTML or from a simple web request to queue the images. I've been looking at various services, ranging from low-fi to large scale - here are some examples (apologies for the lack of proper linking - spam protection!):
    - www.bitpixels.com: Uses Google AppEngine; seems like a prototype or a toy. Free!
    - www.websnapr.com: Tried using this; made a free account and requested a thumbnail. Waited a few minutes and refreshed a couple of times, and ended up having the account banned. Free is tricky, yes, but if I can't try it out successfully I'm disinclined to pay.
    - www.shrinktheweb.com: The free account seems to be very quick. Lots of documentation on the site, even covering local caching of the images to your own server (documentation mostly in PHP). The quality of the thumbnails looks good, and there appear to be sufficient options for setting thumbnail placeholder images and parameters for altering how the thumbnailing is done. Also supports large 'screenshots' of URLs - very useful for me. I discovered the PRO pricing is an à la carte menu, allowing me to select just the features I want and keep the monthly cost low. Excellent stuff; I have decided to use this service.
    - www.thumbalizr.com: Good coverage of thumbnail sizes and control options - it even allows specifying the browser width for thumbnailing. No pingback, but I can live without that. Supports local caching of images with a PHP API; I'd prefer .NET but can port it if necessary. Looks like a fairly professional service, but seems fairly expensive for the number of thumbnails you get to generate.
    I'm not entirely convinced by any of them, and since this will be a long-term service I'd like some stability and support. I'm willing to pay for the service, but I'd want something that fulfills most if not all of my requirements for that price. I should also mention that we're hosted on Windows under IIS, so local solutions involving Xvfb and the like sadly can't be used for this project. So my question is: which services do you use? How have they panned out, and are you happy with them?

    Read the article

  • Why is Perl's Math::Complex taking up so much time when I try acos(1)?

    - by synapz
    While trying to profile my Perl program, I find that Math::Complex is taking up a lot of time with what looks like some kind of warning. Also, my code shouldn't have any complex numbers being generated or used, so I am not sure what it is doing in Math::Complex anyway. Here's the FastProf output for the most expensive lines:

      /usr/lib/perl5/5.8.8/Math/Complex.pm:182  1.55480 276232: _cannot_make("real part", $re) unless $re =~ /^$gre$/;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:310  1.01132 453641: sub cartesian {$_[0]->{c_dirty} ?
      /usr/lib/perl5/5.8.8/Math/Complex.pm:315  0.97497 562188: sub set_cartesian { $_[0]->{p_dirty}++; $_[0]->{c_dirty} = 0;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:189  0.86302 276232: return $self;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1332 0.85628 293660: $self->{display_format} = { %display_format };
      /usr/lib/perl5/5.8.8/Math/Complex.pm:185  0.81529 276232: _cannot_make("imaginary part", $im) unless $im =~ /^$gre$/;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1316 0.78749 293660: my %display_format = %DISPLAY_FORMAT;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1335 0.69534 293660: %{$self->{display_format}} :
      /usr/lib/perl5/5.8.8/Math/Complex.pm:186  0.66697 276232: $self->set_cartesian([$re, $im ]);
      /usr/lib/perl5/5.8.8/Math/Complex.pm:170  0.62790 276232: my $self = bless {}, shift;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:172  0.56733 276232: if (@_ == 0) {
      /usr/lib/perl5/5.8.8/Math/Complex.pm:316  0.53179 281094: $_[0]->{'cartesian'} = $_[1] }
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1324 0.48768 293660: if (@_ == 1) {
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1319 0.44835 293660: if (exists $self->{display_format}) {
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1318 0.40355 293660: if (ref $self) { # Called as an object method
      /usr/lib/perl5/5.8.8/Math/Complex.pm:187  0.39950 276232: $self->display_format('cartesian');
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1315 0.39312 293660: my $self = shift;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:1331 0.38087 293660: if (ref $self) { # Called as an object method
      /usr/lib/perl5/5.8.8/Math/Complex.pm:184  0.35171 276232: $im ||= 0;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:181  0.34145 276232: if (defined $re) {
      /usr/lib/perl5/5.8.8/Math/Complex.pm:171  0.33492 276232: my ($re, $im);
      /usr/lib/perl5/5.8.8/Math/Complex.pm:390  0.20658 128280: my ($z1, $z2, $regular) = @_;
      /usr/lib/perl5/5.8.8/Math/Complex.pm:391  0.20631 128280: if ($z1->{p_dirty} == 0 and ref $z2 and $z2->{p_dirty} == 0) {

    Thanks for any help!
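    One pattern worth checking, given the constructor counts above: Math::Trig's acos/asin come from Math::Complex, and while in-range real arguments take a real-only fast path, an argument that drifts even marginally outside [-1, 1] through floating-point rounding can force construction of a complex result - at 276,232 object makes, that would be consistent with this profile. If that is the cause, clamping into the real domain before the call sidesteps Math::Complex entirely; sketched here in Python (where math.acos raises ValueError rather than going complex, but the defensive clamp is the same):

      import math

      def safe_acos(x):
          # Guard against values like 1.0000000000000002 produced by rounding.
          return math.acos(max(-1.0, min(1.0, x)))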

    Read the article

  • Need help with Xpath methods in javascript (selectSingleNode, selectNodes)

    - by Andrija
    I want to optimize my JavaScript, but I ran into a bit of trouble. I'm using XSLT transformation, and the basic idea is to fetch a part of the XML once and subquery it, so the calls are faster and less expensive. This is a part of the XML:

      <suite>
        <table id="spis" runat="client">
          <rows>
            <row id="spis_1">
              <dispatch>'2008', '288627'</dispatch>
              <data col="urGod">
                <title>2008</title>
                <description>Ur. god.</description>
              </data>
              <data col="rbr">
                <title>288627</title>
                <description>Rbr.</description>
              </data>
              ...
            </row>
          </rows>
        </table>
      </suite>

    In the page, this is the JavaScript that works with it:

      // this is my global variable for getting the elements, so I just get the
      // most I can in one call
      elemCollection = iDom3.Table.all["spis"].XML.DOM.selectNodes("/suite/table/rows/row").context;

      // then I have the method that uses this by getting the sub-results from
      // elemCollection; the rest of the method isn't interesting, only the
      // selectNodes call
      _buildResults = function () {
          var _RowList = elemCollection.selectNodes("/data[@col = 'urGod']/title");
          var tmpResult = [''];
          var substringResult = "";
          for (i = 0; i < _RowList.length; i++) {
              tmpResult.push(_RowList[i].text, iDom3.Global.Delimiter);
          }
          ...

      // this variant works
      elemCollection = iDom3.Table.all["spis"].XML.DOM;
      _buildResults = function () {
          var _RowList = elemCollection.selectNodes("/suite/table/rows/row/data[@col = 'urGod']/title");
          var tmpResult = [''];
          var substringResult = "";
          for (i = 0; i < _RowList.length; i++) {
              tmpResult.push(_RowList[i].text, iDom3.Global.Delimiter);
          }
          ...

    The problem is, I can't find a way to use the sub-results to get what I need.
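    A likely reason the sub-query comes back empty: in XPath, a path beginning with "/" is absolute - it is evaluated from the document root no matter which node selectNodes was called on - so "/data[...]" looks for a top-level data element rather than searching under each cached row. A relative path, "data[@col = 'urGod']/title" or ".//data[...]", should do it. The same two-step shape, sketched with Python's ElementTree (the XML literal below is an abbreviated stand-in for the document above):

      import xml.etree.ElementTree as ET

      suite_xml = """<suite><table id="spis" runat="client"><rows>
        <row id="spis_1">
          <data col="urGod"><title>2008</title></data>
          <data col="rbr"><title>288627</title></data>
        </row>
      </rows></table></suite>"""

      suite = ET.fromstring(suite_xml)
      rows = suite.findall("table/rows/row")       # cache the row set once

      titles = [title.text
                for row in rows
                for title in row.findall("data[@col='urGod']/title")]  # relative path!

      # titles -> ['2008']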

    Read the article

  • What's the best way to do base36 arithmetic in perl?

    - by DVK
    What's the best way to do base-36 arithmetic in Perl? To be more specific, I need to be able to do the following:
    - Operate on positive N-digit numbers in base 36 (i.e. the digits are 0-9, A-Z), where N is finite, say 9.
    - Provide basic arithmetic, at the very least these three operations: addition (A+B), subtraction (A-B), and whole division, e.g. floor(A/B).
    Strictly speaking, I don't really need a base-10 conversion ability - the numbers will 100% of the time be in base 36 - so I'm quite OK if the solution does NOT implement conversion from base 36 back to base 10 and vice versa. I don't much care whether the solution is brute-force "convert to base 10 and back", converting to binary, or some more elegant approach "natively" performing base-N operations. My only three considerations are:
    - It fits the minimum specifications above.
    - It's "standard". Currently we're using an old homegrown module based on hand-rolled base-10 conversion that is buggy and sucks. I'd much rather replace it with some commonly used CPAN solution than rewrite my own bicycle from scratch, but I'm perfectly capable of building it if no better standard possibility exists.
    - It must be fast-ish (though not lightning fast). Something that takes 1 second to sum two 9-digit base-36 numbers is worse than anything I can roll on my own :)
    P.S. Just to provide some context, in case people decide to solve my XY problem for me in addition to answering the technical question above :) We have a fairly large tree (stored in the DB as a bunch of edges), and we need to superimpose order on a subset of that tree. The tree's dimensions are big both depth- and breadth-wise, and the tree is VERY actively updated (inserts, deletes and branch moves). This is currently done with a second table of 3 columns: parent_vertex, child_vertex, local_order, where local_order is a 9-character string built of A-Z0-9 (i.e. a base-36 number). Additional considerations:
    - The local order is required to be unique per child (and obviously unique per parent).
    - Any complete re-ordering of a parent is somewhat expensive, so the implementation tries to assign - for a parent with X children - orders distributed roughly evenly between 0 and 36**9-1, so that almost no tree insert results in a full re-ordering.
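    A minimal sketch of the convert-and-convert-back approach, in Python for brevity (on the CPAN side, modules such as Math::Base36 and Math::BaseCnv cover the digit conversions; the thin arithmetic wrapper below is the part you'd write either way). Nine base-36 digits fit comfortably in a native integer, so each operation is two O(N) conversions plus one integer op - nowhere near a second:

      DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

      def b36_to_int(s):
          return int(s, 36)                      # accepts digits 0-9, A-Z

      def int_to_b36(n, width=9):
          out = []
          while n:
              n, r = divmod(n, 36)
              out.append(DIGITS[r])
          return "".join(reversed(out)).rjust(width, "0")

      def b36_add(a, b): return int_to_b36(b36_to_int(a) + b36_to_int(b))
      def b36_sub(a, b): return int_to_b36(b36_to_int(a) - b36_to_int(b))
      def b36_div(a, b): return int_to_b36(b36_to_int(a) // b36_to_int(b))  # floor(A/B)

      # b36_add("0000000ZZ", "000000001") -> "000000100"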

    Read the article

  • Scala actors: receive vs react

    - by jqno
    Let me first say that I have quite a lot of Java experience, but have only recently become interested in functional languages. Recently I've started looking at Scala, which seems like a very nice language. However, I've been reading about Scala's Actor framework in Programming in Scala, and there's one thing I don't understand. In chapter 30.4 it says that using react instead of receive makes it possible to re-use threads, which is good for performance, since threads are expensive in the JVM. Does this mean that, as long as I remember to call react instead of receive, I can start as many Actors as I like? Before discovering Scala, I've been playing with Erlang, and the author of Programming Erlang boasts about spawning over 200,000 processes without breaking a sweat. I'd hate to do that with Java threads. What kind of limits am I looking at in Scala as compared to Erlang (and Java)? Also, how does this thread re-use work in Scala? Let's assume, for simplicity, that I have only one thread. Will all the actors that I start run sequentially in this thread, or will some sort of task-switching take place? For example, if I start two actors that ping-pong messages to each other, will I risk deadlock if they're started in the same thread? According to Programming in Scala, writing actors to use react is more difficult than with receive. This sounds plausible, since react doesn't return. However, the book goes on to show how you can put a react inside a loop using Actor.loop. As a result, you get loop { react { ... } } which, to me, seems pretty similar to while (true) { receive { ... } } which is used earlier in the book. Still, the book says that "in practice, programs will need at least a few receive's". So what am I missing here? What can receive do that react cannot, besides return? And why do I care? Finally, coming to the core of what I don't understand: the book keeps mentioning how using react makes it possible to discard the call stack to re-use the thread. How does that work? Why is it necessary to discard the call stack? And why can the call stack be discarded when a function terminates by throwing an exception (react), but not when it terminates by returning (receive)? I have the impression that Programming in Scala has been glossing over some of the key issues here, which is a shame, because otherwise it's a truly excellent book.

    Read the article

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to:

      case MyKeyword of
        'CHIL': (code for CHIL);
        'HUSB': (code for HUSB);
        'WIFE': (code for WIFE);
        'SEX':  (code for SEX);
      else
        (code for everything else);
      end;

    Unfortunately the case statement can't be used like that with strings. I could use a straight if/then/else chain, e.g.:

      if MyKeyword = 'CHIL' then (code for CHIL)
      else if MyKeyword = 'HUSB' then (code for HUSB)
      else if MyKeyword = 'WIFE' then (code for WIFE)
      else if MyKeyword = 'SEX' then (code for SEX)
      else (code for everything else);

    but I've heard this is relatively inefficient. What I had been doing instead is:

      P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ');
      case P of
        1:  (code for CHIL);
        6:  (code for HUSB);
        11: (code for WIFE);
        17: (code for SEX);
      else
        (code for everything else);
      end;

    This, of course, is not the best programming style, but it works fine for me, and up to now it didn't make a difference. So what is the best way to rewrite this in Delphi so that it is simple, understandable, but also fast? (For reference, I am using Delphi 2009 with Unicode strings.)

    Followup: Toby recommended I simply use the if/then/else construct. Looking back at my examples that used a case statement, I can see how that is a viable answer. Unfortunately, my inclusion of the case statement inadvertently hid my real question: I actually don't care which keyword it is - that is just a bonus if the particular method can identify it, as the pos method can. What I need to know is whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than:

      if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then

    The if/then/else equivalent does not seem better in this case, being:

      if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then

    In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections, and it looks like I should delve into them. My question here would be whether they are built for efficiency, and how using TDictionary would compare in looks and in speed to the above two lines. In later profiling, I found that the string concatenation in (' ' + MyKeyword + ' ') is VERY expensive time-wise and should be avoided whenever possible; almost any other solution is better than doing this.
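    The set-membership test the followup asks about is exactly what a hash container gives you: one hash of MyKeyword, no concatenation, O(1) regardless of how many keywords there are. Delphi 2009's TDictionary is such a hash table; the shape, sketched in Python (the handler bodies are placeholders):

      KEYWORDS = {"CHIL", "HUSB", "WIFE", "SEX"}

      def is_keyword(word):
          return word in KEYWORDS          # single hash probe, no ' '+word+' '

      # And the case-statement flavour, as a dispatch table:
      HANDLERS = {
          "CHIL": lambda: print("code for CHIL"),
          "HUSB": lambda: print("code for HUSB"),
          "WIFE": lambda: print("code for WIFE"),
          "SEX":  lambda: print("code for SEX"),
      }

      def dispatch(word):
          HANDLERS.get(word, lambda: print("code for everything else"))()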

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them), but I seem to be using the wrong keywords to search for it... :( A common task I face while programming is dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: for instance, the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might not always be feasible either. Another characteristic of the problem is that neither list can be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option. So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented it in various ways (see below for an example), but it always felt like there should be a cleaner and more efficient way. Here's an example approach:
    1. Iterate over the master list
    2. Look up each item in the "slave list"
    3. Add items that do not yet exist
    4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list)
    5. When done, iterate once more over the slave list
    6. Remove all objects that have not been tagged (see 4.)
    Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worth noting: in many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases, the only access I might have to the master list is a COM interface that exposes nothing but GetFirst-/GetNext-style methods. I mention this because of the suggestions to either sort the list or subclass it, both of which are unfortunately not practical here unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible; in particular, the elements in the master list might be available as interface references only.
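    For reference, a keyed version of the six steps above, sketched in Python; the only extra assumption is some stable key relating a master item to its slave counterpart (master_key and slave_key below are placeholders for whatever identity is available). Note that the master side is consumed in a single forward pass, so even a GetFirst-/GetNext-style enumerator is enough:

      def sync(master, slaves, master_key, slave_key, create, destroy):
          by_key = {slave_key(s): s for s in slaves}   # index the slaves once
          seen = set()

          for m in master:                  # steps 1-4: one pass over the master
              k = master_key(m)
              if k not in by_key:
                  by_key[k] = create(m)     # expensive, so only for genuinely new items
              seen.add(k)                   # "tag" via a side set, not on the objects

          for k in list(by_key):            # steps 5-6: sweep the untagged leftovers
              if k not in seen:
                  destroy(by_key.pop(k))

          return list(by_key.values())

    This is essentially a one-shot mark-and-sweep; keeping the key index between refreshes makes each poll O(n + m) with no per-item scans of either list.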

    Read the article
