Search Results

Search found 2670 results on 107 pages for 'review'.


  • To refund or not to refund this client?

    - by Mahalia Samuels
    I'd really appreciate your advice on an ongoing project. I presented my client with a proposal and design samples which he approved, and he paid in full instead of the 50% upfront deposit as I'd given him a generous discount. He was then slow in furnishing me with some of the content, but once he did, he expected the website to be finished immediately, which was not possible. Because he needed it done urgently, we agreed to try to get it done about 10 working days after the content was provided, but the developer who was helping me let me down. The next week, I completed the website myself and uploaded it to the server on a Friday afternoon. He then calls and texts me on the following Sunday while I'm at church to say it's not online (there was probably a problem with his browser). The next morning, I received an email from him demanding a full refund within two days because he couldn't see the website (even though it was live, and I tested it on multiple browsers, a different computer and my phone), and he called me shouting at me because he couldn't access it. Finally, when he was able to access it, he was unhappy with a certain detail regarding the slideshow which I began fixing and which was done the next day. He then referred me to another website and said he wanted it to look similar but not identical to it in terms of the layout. He also now wanted to add more features which were not in the original design. I got a designer to work on a new design which I sent to him for review, which if approved would be completed by 15 October, and he approved it last Thursday. He then called me yesterday to say that he wanted to change the design - he only approved it out of impatience. He now wants the website to be more similar to the other website he referred me to and he wants it done before the 15th! Then, he says to me that other people have done websites for him in three days - websites he's complained to me about for lacking dimension because they were just premium themes, whereas we'd designed and coded from scratch. I'm thinking of finishing the website but refunding him in full (or at least the refundable 50%) less domain registration and other non-refundable amounts, just to avoid further escalation of this matter and having him call me next week and say he wants to change it again. These are the applicable terms and conditions as laid out in the agreement: Total amount due for this project is Amount A. Client shall pay Consultant a deposit of Amount B (50% of total amount due for project) in advance before any work commences on the Project. The balance is due within 7 working days of completion of project. Deposit is non-refundable. Should client opt to host elsewhere, applicable transferral fee of Amount C will apply. Estimated project completion time frame is 14 to 30 days from the date Client furnishes Consultant with Brief and all other required media and data, provided that Client has made payment to secure the project. Consultant will make every effort to meet agreed upon due dates. The Client should be aware that failure to submit required information or materials, or last minute changes and excessive changes may cause subsequent delays. Client delays could result in significant delays in delivery of finished work. Major changes in client input or direction or brief will be charged at normal rates. Any work the Client wishes Consultant to create, which is not specified in the attached Proposal will be considered an additional service.
Client agrees to pay Consultant for any additional expenses or additional services not included in the attached quotation and proposal if requested by the Client. Web design credit in the name of the Consultant, and link to Consultant’s website shall be placed on the footer of the final Website. Either party may terminate this Agreement by giving 7 days written notice to the other of such termination. In the event that Work is postponed or terminated at the request of the Client, Consultant shall have the right to bill pro rata at full rates for work completed through the date of that request, while reserving all rights under this Agreement. If additional payment is due, this shall be payable within seven days of the Client's written notification to stop work. In the event of termination, the Client shall also pay any expenses incurred by Consultant and the Consultant shall own all rights to the Work. Advice please?

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
    One of the first places you can see Solaris 11.1 changes is in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new breakdown will make it easier to get straight to the sections you need when a task is at hand. Packaging System Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so it should be easier to find, and easier to share links to specific pages in the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged, and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages guide was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or viewing the lists before you've installed the system to pick which one you want to initially install. X Window System We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages, and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Administrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OS'es share (more about that another time). Security One of the things Oracle likes to do for its products is to publish security guides for administrators & developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security relevant configuration options were documented.
The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems, to avoid users driving the system into swap thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris API's such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard API's. Solaris-specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc. so that developers following our examples start out with safer code. The Writing Device Drivers guide even had the appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI. Little Things Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed as well in the nearly year-long gap between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful: The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there. A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide. The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1. The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (something I forgot until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12). The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.
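
    As a small illustration of the kind of change described above (my own sketch, not an example from the doc set; the function name, buffer size and format string are hypothetical), the bounded calls simply take the destination size so they cannot write past the end of the buffer:

        #include <stdio.h>
        #include <string.h>    /* strlcpy() is declared in <string.h> on Solaris */

        #define NAMELEN 64

        /* Hypothetical helper showing the bounded replacements. */
        void make_label(char label[NAMELEN], const char *base, int id)
        {
            char copy[NAMELEN];

            /*
             * Unbounded forms the doc set moved away from:
             *   strcpy(copy, base);
             *   sprintf(label, "%s-%d", copy, id);
             */
            strlcpy(copy, base, sizeof (copy));             /* truncates rather than overflowing */
            snprintf(label, NAMELEN, "%s-%d", copy, id);    /* never writes more than NAMELEN bytes */
        }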

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what it *actually* means for a feature to be done. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here are some examples: Code Reviews StyleCop FxCop User Manuals Updated Architecture Docs Updated Tested by QA Tested by UAT Installers Created Support Knowledge Base Updated Deployment Instructions (for Ops) written Automated Unit Tests Run Automated Integration Tests Run Then we start by arranging these activities into the place they occur today (e.g. Do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle, and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time. 
Sure there’s other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate): Release Create/Update deployment Instructions For Ops Instructional Videos Updated Run manual regression test suite UAT Testing In this case that meant deploying to an environment shared across the enterprise that mirrored production and asking other business groups to test their own apps to ensure we didn’t break anything outside our system Sprint Deploy to UAT Environment But not necessarily actually request UAT testing occur User Guides updated Sprint Features Video Created In this case we decided to create a video each sprint showing off the progress (video version of Sprint Demo) User Story Manual Test scripts developed and run Tested by BA Deployed in shared QA environment Using automated deployment process Peer Code Review Code Check-In Compiled (warning-free) Passes StyleCop Passes FxCop Create installer packages Run Automated Tests Run Automated Integration Tests PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company who built it. (out of business, support was too expensive, I'm not sure...) Development on this app started around 10+ years ago so the technologies being harnessed are pretty out of date now - classic asp using vbscript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements of this software. Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain. My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe a bit of resentment about my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position. So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages. I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article: Their code is large, messy, and bug-laden. They have very superficial knowledge of their problem domain and their tools. Their code has a lot of copy/paste and they have very little interest in techniques that reduce it. They fail to account for edge cases, while inefficiently dealing with the general case. They never have time to comment their code or break it into smaller pieces. Empirical evidence plays little role in their decisions. 5.5 out of 6. My friend is wanting me to argue the case to their management - specifically, I got this email from their manager to respond to: ...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help. In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight. 
On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other stackexchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!" Other options? Thanks a bunch!

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
    As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective.  This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes. Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud. A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider’s datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance. Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments. Public clouds may also offer cloud services such as a disaster recovery for a private cloud, or the ability to “cloudburst” a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud. These are most feasible when the private and public clouds are built with similar technologies. Usually people think of a public cloud in terms of a user role, e.g., “Which public cloud should I consider using?” But someone needs to own and manage that public cloud. The company who owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services. When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best-suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle’s portfolio of cloud products and services enables both deployment models. Oracle can manage either model. 
Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads – from test and development to the most mission-critical – in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Solaris 11.1 changes building of code past the point of __NORETURN

    - by alanc
    While Solaris 11.1 was under development, we started seeing some errors in the builds of the upstream X.Org git master sources, such as: "Display.c", line 65: Function has no return statement : x_io_error_handler "hostx.c", line 341: Function has no return statement : x_io_error_handler from functions that were defined to match a specific callback definition that declared them as returning an int if they did return, but these were calling exit() instead of returning so hadn't listed a return value. These had been generating warnings for years which we'd been ignoring, but X.Org has made enough progress in cleaning up code for compiler warnings and static analysis issues lately, that the community turned up the default error levels, including the gcc flag -Werror=return-type and the equivalent Solaris Studio cc flags -v -errwarn=E_FUNC_HAS_NO_RETURN_STMT, so now these became errors that stopped the build. Yet on Solaris, gcc built this code fine, while Studio errored out. Investigation showed this was due to the Solaris headers, which during Solaris 10 development added a number of annotations to the headers when gcc was being used for the amd64 kernel bringup before the Studio amd64 port was ready. Since Studio did not support the inline form of these annotations at the time, but instead used #pragma for them, the definitions were only present for gcc. To resolve this, I fixed both sides of the problem, so that it would work for building new X.Org sources on older Solaris releases or with older Studio compilers, as well as fixing the general problem before it broke more software building on Solaris. To the X.Org sources, I added the traditional Studio #pragma does_not_return to recognize that functions like exit() don't ever return, in patches such as this Xserver patch. Adding a dummy return statement was ruled out as that introduced unreachable code errors from compilers and analyzers that correctly realized you couldn't reach that code after a return statement. And on the Solaris 11.1 side, I updated the annotation definitions in <sys/ccompile.h> to enable for Studio 12.0 and later compilers the annotations already existing in a number of system headers for functions like exit() and abort(). If you look in that file you'll see the annotations we currently use, though the forms there haven't gone through review to become a Committed interface, so may change in the future. Actually getting this integrated into Solaris though took a bit more work than just editing one header file. Our ELF binary build comparison tool, wsdiff, actually showed a large number of differences in the resulting binaries due to the compiler using this information for branch prediction, code path analysis, and other possible optimizations, so after comparing enough of the disassembly output to be comfortable with the changes, we also made sure to get this in early enough in the release cycle so that it would get plenty of test exposure before the release. It also required updating quite a bit of code to avoid introducing new lint or compiler warnings or errors, and people building applications on top of Solaris 11.1 and later may need to make similar changes if they want to keep their build logs similarly clean. Previously, if you had a function that was declared with a non-void return type, lint and cc would warn if you didn't return a value, even if you called a function like exit() or panic() that ended execution. 
For instance: #include <stdlib.h> int callback(int status) { if (status == 0) return status; exit(status); } would previously require a never-executed return 0; after the exit() to avoid lint warning "function falls off bottom without returning value". Now the compiler & lint will both issue "statement not reached" warnings for a return 0; after the final exit(), allowing (or in some cases, requiring) it to be removed. However, if there is no return statement anywhere in the function, lint will warn that you've declared a function returning a value that never does so, suggesting you can declare it as void. Unfortunately, if your function signature is required to match a certain form, such as in a callback, you may not be able to do so, and will need to add a /* LINTED */ to the end of the function. If you need your code to build on both a newer and an older release, then you will either need to #ifdef these unreachable statements, or, to keep your sources common across releases, add to your sources the corresponding #pragma recognized by both current and older compiler versions, such as: #pragma does_not_return(exit) #pragma does_not_return(panic) Hopefully this little extra work is paid for by the compilers & code analyzers being able to better understand your code paths, giving you better optimizations and more accurate errors & warning messages.
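
    To make that concrete, here is a minimal sketch (mirroring the callback shape shown above, not code from the actual X.Org patches) of how the pragma can sit in a source file so the same code builds cleanly whether or not the system headers carry the annotation; compilers that don't recognize it will generally just ignore the unknown pragma, or at most warn about it:

        #include <stdlib.h>

        /* Studio-specific hint: tells the compiler that exit() never comes back. */
        #pragma does_not_return(exit)

        /* Callback required to match an int-returning signature. */
        int callback(int status)
        {
            if (status == 0)
                return status;
            exit(status);   /* no dummy "return 0;" needed once exit() is known not to return */
        }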

    Read the article

  • Revisiting the Generations

    - by Row Henson
    I was asked earlier this year to contribute an article to the IHRIM publication – Workforce Solutions Review.  My topic focused on the reality of the Gen Y population 10 years after their entry into the workforce.  Below is an excerpt from that article: It seems like yesterday that we were all talking about the entry of the Gen Y'ers into the workforce and what a radical change that would have on how we attract, retain, motivate, reward, and engage this new, younger segment of the workforce.  We all heard and read that these youngsters would be more entrepreneurial than their predecessors – the Gen X'ers – who were said to be more loyal to their profession than their employer. And, we heard that these “youngsters” would certainly be far less loyal to their employers than the Baby Boomers or even earlier Traditionalists. It was also predicted that – at least for the developed parts of the world – they would be more interested in work/life balance than financial reward; they would need constant and immediate reinforcement and recognition and we would be lucky to have them in our employment for two to three years. And, to keep them longer than that we would need to promote them often so they would be continuously learning since their long-term (10-year) goal would be to own their own business or be an independent consultant.  Well, it occurred to me recently that the first of the Gen Y'ers are now in their early 30s and it is time to look back on some of these predictions. Many really believed the Gen Y'ers would enter the workforce with an attitude – expect everything to be easy for them – have their employers meet their demands or move to the next employer, and I believe that we can now say that, generally, has not been the case. Speaking from personal experience, I have mentored a number of Gen Y'ers and initially felt that with a 40-year career in Human Resources and Human Resources Technology – I could share a lot with them. I found out very quickly that I was learning at least as much from them! Some of the amazing attributes I found from these under-30s was their fearlessness, ease of which they were able to multi-task, amazing energy and great technical savvy. They were very comfortable with collaborating with colleagues from both inside the company and peers outside their organization to problem-solve quickly. Most were eager to learn and willing to work hard.  This brings me to the generation that will follow the Gen Y'ers – the Generation Z'ers – those born after 1998. We have come full circle. If we look at the Silent Generation or Traditionalists, we find a workforce that preceded the television and even very early telephones. We Baby Boomers (as I fall right squarely in this category) remembered the invention of the television and telephone – but laptop computers and personal digital assistants (PDAs) were a thing of “StarTrek” and other science fiction movies and publications. Certainly, the Gen X'ers and Gen Y'ers grew up with the comfort of these devices just as we did with calculators. But, what of those under the age of 10 – how will the workplace look in 15 more years and what type of workforce will be required to operate in the mobile, global, virtual world. I spoke to a friend recently who had her four-year-old granddaughter for a visit. She said she found her in the den in front of the TV trying to use her hand to get the screen to move! So, you see – we have come full circle. 
The under-70 Traditionalist grew up in a world without TV and the Generation Z'er may never remember the TV we knew just a few years ago. As with every generation – we spend much time generalizing on their characteristics. The most important thing to remember is every generation – just like every individual – is different. The important thing for those of us in Human Resources to remember is that one size doesn’t fit all. What motivates one employee to come to work for you and stay there and be productive is very different than what the next employee is looking for and the organization that can provide this fluidity and flexibility will be the survivor for generations to come. And, finally, just when we think we have it figured out, a multitude of external factors such as the economy, world politics, industries, and technologies we haven’t even thought about will come along and change those predictions. As I reach retirement age – I do so believing that our organizations are in good hands with the generations to follow – energetic, collaborative and capable of working hard while still understanding the need for balance at work, at home and in the community!

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can’t share a credential map between data sources so they must be duplicated. Some applications prefer not to use the credential map.  Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map.  This is enabled by setting the “use-database-credentials” to true.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help. Use Database Credentials is not currently supported for Multi Data Source configurations.  When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes: 1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0). 2. oracle-proxy-session (this interaction is first available in 10.3.6.0). 3. set client identifier (this interaction is available by patch in 10.3.6.0).  Note that in the data source schema, the set client identifier feature is poorly named “credential-mapping-enabled”.  The documentation and the console refer to it as Set Client Identifier. To review the behavior of credential mapping and using database credentials: - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used.  If you always specify a user/password when getting a connection, you only need credential map entries for those specific users. - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used.  If you specify a user/password when getting a connection, that user will be used for the credentials.  WLS users are not involved at all in the data source connection process.
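
    For reference, here is a minimal sketch (not taken from the docs; the JNDI name and class name are placeholders) of the getConnection(user, password) call that all of the above applies to. By default the pair is validated as a WLS user and run through the credential map; with use-database-credentials set to true it is passed straight to the database:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class SalesLookup {
            public Connection connectAs(String user, String password) throws Exception {
                InitialContext ctx = new InitialContext();
                // "jdbc/SalesDS" is a placeholder JNDI name for the configured data source.
                DataSource ds = (DataSource) ctx.lookup("jdbc/SalesDS");
                // Default behavior: user/password are WLS credentials, mapped to DB credentials.
                // With use-database-credentials=true: used directly to authenticate to the database.
                return ds.getConnection(user, password);
            }
        }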

    Read the article

  • Getting Started with ADF Mobile Sample Apps

    - by Denis T
    Getting Started with ADF Mobile Sample Apps

    Installation Steps
    1. Install JDeveloper 11.1.2.3.0 from Oracle Technology Network.
    2. After installing JDeveloper, go to the Help menu, select "Check For Updates", find the ADF Mobile extension and install it. It will require you to restart JDeveloper.
    3. For iOS development, be on a Mac and have Xcode installed. (Currently only Xcode 4.4 is officially supported. Xcode 4.5 support is coming soon.)
    4. For Android development, have the Android SDK installed.
    5. In the JDeveloper Tools menu, select "Preferences". In the Preferences dialog, select ADF Mobile. You can expand it to configure your Platform preferences for things like the location of Xcode and the Android SDK.
    6. In your /jdeveloper/jdev/extensions/oracle.adf.mobile/Samples folder you will find a PublicSamples.zip. Unzip this into the Samples folder so you have all the projects ready to go.
    7. Open each sample application's .JWS file to open the corresponding workspace. Then from the "Application" menu, select "Deploy" and then select the deployment profile for the platform you wish to deploy to. Try deploying to the simulator/emulator on each platform first because it won't require signing. Note: If you wish to deploy to the Android emulator, it must be running before you start the deployment.

    Sample Application Details (in recommended order of use)
    1. HelloWorld - The "hello world" application for ADF Mobile, which demonstrates the basic structure of the framework. This basic application has a single application feature that is implemented with a local HTML file. Use this application to ascertain that the development environment is set up correctly to compile and deploy an application. See also Section 4.2.2, "What Happens When You Create an ADF Mobile Application."
    2. CompGallery - This application is meant to be a runtime application and not necessarily to review the code, though that is available. It serves as an introduction to the ADF Mobile AMX UI components by demonstrating all of these components. Using this application, you can change the attributes of these components at runtime and see the effects of those changes in real time without recompiling and redeploying the application after each change. See generally Chapter 8, "Creating ADF Mobile AMX User Interface."
    3. LayoutDemo - This application demonstrates the user interface layout and shows how to create the various list and button styles that are commonly used in mobile applications. It also demonstrates how to create the action sheet style of a popup component and how to use various chart and gauge components. See Section 8.3, "Creating and Using UI Components" and Section 8.5, "Providing Data Visualization." Note: This application must be opened from the Samples directory or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    4. JavaDemo - This application demonstrates how to bind the user interface to Java beans. It also demonstrates how to invoke EL bindings from the Java layer using the supplied utility classes. See also Section 8.10, "Using Event Listeners" and Section 9.2, "Understanding EL Support."
    5. Navigation - This application demonstrates the various navigation techniques in ADF Mobile, including bounded task flows and routers. It also demonstrates the various page transitions. See also Section 7.2, "Creating Task Flows." Note: This application must be opened from the Samples directory or the Default springboard option must be cleared in the Applications page of the adfmf-application.xml overview editor, then selected again.
    6. LifecycleEvents - This application implements lifecycle event handlers on the ADF Mobile application itself and its embedded application features. This application shows you where to insert code to enable the applications to perform their own logic at certain points in the lifecycle. See also Section 5.6, "About Lifecycle Event Listeners." Note: On iOS, the LifecycleEvents sample application logs data to the Console application, located at Applications-Utilities-Console application.
    7. DeviceDemo - This application shows you how to use the DeviceFeatures data control to expose such device features as geolocation, e-mail, SMS, and contacts, as well as how to query the device for its properties. See also Section 9.5, "Using the DeviceFeatures Data Control." Note: You must also run this application on an actual device because SMS and some of the device properties do not function on an iOS simulator or Android emulator.
    8. GestureDemo - This application demonstrates how gestures can be implemented and used in ADF Mobile applications. See also Section 8.4, "Enabling Gestures."
    9. StockTracker - This application demonstrates how data change events use Java to enable data changes to be reflected in the user interface. It also has a variety of layout use cases, gestures and basic mobile patterns. See also Section 9.7, "Data Change Events."

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include something like the following: Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is running. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case that the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer. But forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: Multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled. Similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested if there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process. Including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for. (That is, a UML equivalent for discussing threads across objects and classes) Educating your development team on the issues with multithreaded code. What would you do?
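
    As a rough illustration of that "dedicated component thread" idea (a sketch under C++11 assumptions, not code from the poster's code base), a small dispatcher like this lets I/O threads post completions onto one thread, so the component's state is only ever touched from a single-threaded world:

        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>

        // Runs every posted task on one dedicated thread, in FIFO order.
        class Dispatcher {
        public:
            Dispatcher() : done_(false), worker_([this] { Run(); }) {}
            ~Dispatcher() {
                { std::lock_guard<std::mutex> lock(m_); done_ = true; }
                cv_.notify_one();
                worker_.join();   // drains any queued tasks, then the thread exits
            }
            void Post(std::function<void()> task) {
                { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(task)); }
                cv_.notify_one();
            }
        private:
            void Run() {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return done_ || !q_.empty(); });
                        if (done_ && q_.empty()) return;
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    task();   // runs outside the lock, always on the dispatcher thread
                }
            }
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::function<void()>> q_;
            bool done_;
            std::thread worker_;
        };

        // Usage sketch: an I/O thread hands the completion back to Foo's own thread.
        // dispatcher.Post([foo, status] { foo->OnHttpRequestComplete(status); });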

    Read the article

  • ASP.Net MVC 2 DropDownListFor in EditorTemplate

    - by tschreck
    I have a view model that looks like this: namespace AutoForm.Models { public class ProductViewModel { [UIHint("DropDownList")] public String Category { get; set; } [ScaffoldColumn(false)] public IEnumerable<SelectListItem> CategoryList { get; set; } ... } } It has Category and CategoryList properties. The CategoryList is the source data for the Category dropdown UI element. I have an EditorTemplate that looks like this: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ProductViewModel>" %> <%@ Import Namespace="AutoForm.Models"%> <%=Html.DropDownListFor(m => m.Category , Model.CategoryList ) %> NOTE: this EditorTemplate is strongly typed to ProductViewModel My Controller is populating CategoryList property with data from a database. I cannot get the DropDownListFor template to render a drop down list with data from CategoryList. I know CategoryList is getting populated with data in the controller because I see the data when I debug and step through the controller. Here's my error message in the browser: Server Error in '/' Application. Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: Line 2: <%@ Import Namespace="AutoForm.Models"%> Line 3: Line 4: <%=Html.DropDownListFor(m => m.Category, Model.CategoryList) %> Source File: c:\ProjectStore\AutoForm\AutoForm\Views\Shared\EditorTemplates\DropDownList.ascx Line: 4 Any ideas? Thanks Tom As a followup, I noticed that ViewData.Model is null when I'm stepping through the code in the EditorTemplate. I have the EditorTemplate strongly typed to "ProductViewModel" which is also the type that's passed to the View in the controller. I'm perplexed as to why ViewData.Model is null even though it's getting populated in the controller before getting passed to the view.

    Read the article

  • my asp.net mvc 2.0 application fails with error "No parameterless constructor defined for this objec

    - by loviji
    Hello, I'm new to ASP.NET MVC 2. I'm trying to list all data from one table (an MS SQL Server table). As the ORM I use Entity Framework. Now, I've tried to write something to do this: Model: private uqsEntities _uqsEntity; public permissionRepository(uqsEntities uqsEntity) { _uqsEntity = uqsEntity; } public IEnumerable<userPermissions> getAllData() { return _uqsEntity.userPermissions; } controller: private DataManager _dataManager; public ManagePermissionsController(DataManager datamanager) { _dataManager = datamanager; } public ActionResult Index() { return RedirectToAction("List"); } [AcceptVerbs(HttpVerbs.Get)] public ActionResult List() { return List(null); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult List(int? userID) { return View(_dataManager.Permission.getAllData().ToList()); } Route: routes.MapRoute( "ManagePerm", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "ManagePermissions", action = "Index"}, new string[] { "uqs.Controllers" } // Parameter defaults ); and the View was automatically generated by Visual Studio (by right-clicking in the action). When I run the app, it fails. No parameterless constructor defined for this object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.MissingMethodException: No parameterless constructor defined for this object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [MissingMethodException: No parameterless constructor defined for this object.] System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) +0 System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) +86 System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) +230 System.Activator.CreateInstance(Type type, Boolean nonPublic) +67 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +80 [InvalidOperationException: An error occurred when trying to create a controller of type 'uqs.Controllers.ManagePermissionsController'. Make sure that the controller has a parameterless public constructor.] System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +190 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +68 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +118 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +46 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +63 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +13 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679186 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Please, can somebody help me find the problem?

    Read the article

  • Controller Action Methods with different signatures

    - by Narsil
    I am trying to get my URLs in files/id format. I am guessing I should have two Index methods in my controller, one with a parameter and one with not. But I get this error message in browser below. Anyway here is my controller methods: public ActionResult Index() { return Content("Index "); } // // GET: /Files/5 public ActionResult Index(int id) { File file = fileRepository.GetFile(id); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } Error: Server Error in '/' Application. The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Reflection.AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController] System.Web.Mvc.ActionMethodSelector.FindActionMethod(ControllerContext controllerContext, String actionName) +396292 System.Web.Mvc.ReflectedControllerDescriptor.FindAction(ControllerContext controllerContext, String actionName) +62 System.Web.Mvc.ControllerActionInvoker.FindAction(ControllerContext controllerContext, ControllerDescriptor controllerDescriptor, String actionName) +13 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +99 System.Web.Mvc.Controller.ExecuteCore() +105 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7 System.Web.Mvc.<c_DisplayClass8.b_4() +34 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21 System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult1.End() +59 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Updated Code: public ActionResult Index(int? id) { if (id.HasValue) { File file = fileRepository.GetFile(id.Value); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } else return Content("Index"); } It's still not the thing I want. 
The URLs still come out in the files?id=3 format, but I want files/3. Here are the routes from Global.asax: routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); //files/3 //it's the one I wrote routes.MapRoute("Files", "{controller}/{id}", new { controller = "Files", action = "Index", id = UrlParameter.Optional} ); I added the new route after reading Jeff's post, but I can't get it working. It still only works as files?id=2.
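
    The nullable-id action above already removes the AmbiguousMatchException, so what remains is routing: the custom "Files" route uses the generic {controller}/{id} pattern and sits after "Default", so it never wins. A minimal sketch of a more specific route registered first (the literal files segment and the numeric constraint are assumptions about the URLs wanted):

    ```csharp
    // In Global.asax.cs, registered *before* the generic "Default" route.
    routes.MapRoute(
        "Files",
        "files/{id}",                                   // matches /files/3
        new { controller = "Files", action = "Index" },
        new { id = @"\d+" }                             // constraint: numeric ids only
    );

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional }
    );
    ```

    With the constraint in place, /files/3 reaches Index(int? id) with id = 3, while requests for other controllers keep falling through to the Default route.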

    Read the article

  • System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host

    - by coffeeaddict
    We have 2 identical database tables on our dev server. In both is a table that holds an X509Certificate2 data in one of its fields. When I grab the cert along with the password that I've been using all along, it works over the first database just fine. I run the same code though over the 2nd database and get this error and I don't get why if the setup is exactly the same and the database is also on this server. So I'm not sure why when I switch my connection string to talk to Database2, even though it's setup the same and the code I'm running over it is the same, it complains. Here's the stack trace: An existing connection was forcibly closed by the remote host Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [SocketException (0x2746): An existing connection was forcibly closed by the remote host] System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +232 [IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.] System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) +7035903 System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) +58 System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) +116 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ProcessReceivedBlob(Byte[] buffer, Int32 count, AsyncProtocolRequest asyncRequest) +86 System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) +123 System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) +7184357 System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) +217 System.Threading.ExecutionContext.runTryCode(Object userData) +376 System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0 System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) +98 System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) +1134 System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) +88 System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) +20 System.Net.ConnectStream.WriteHeaders(Boolean async) +360 [WebException: The underlying connection was closed: An unexpected error occurred on a send.] 
System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) +857631 System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) +10 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +243 Pmall.PayPal.PayPalApi.PayPalAPIAASoapBinding.SetExpressCheckout(SetExpressCheckoutReq SetExpressCheckoutReq) in C:\www\ssss\ssss\ssss\ssss\Reference.cs:1304 ssss.PayPal.ExpressCheckout.PayPalCheckout.SetExpressCheckout(PaymentDetailsType[] paymentDetails, String returnURL, String cancelURL, PayPalPaymentFlowType paymentFlowType) in C:\www\ssss\ssss\ssss\PayPalCheckout.cs:96 ssss.Web.ssss.SetExpressCheckout() in C:\www\ssss\ssss\ssss.aspx.cs:83 ssss.Web.ssss.Page_Load(Object sender, EventArgs e) in C:\www\ssss\ssss\Register.aspx.cs:24 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42 System.Web.UI.Control.OnLoad(EventArgs e) +132 System.Web.UI.Control.LoadRecursive() +66 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
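
    One difference that commonly produces exactly this kind of SSL/handshake failure is how the certificate blob is re-materialised under the IIS worker identity, rather than the SOAP code itself. A hedged sketch of loading the stored bytes with explicit key-storage flags (the two helper names are placeholders for however the row is read from Database2):

    ```csharp
    using System;
    using System.Security.Cryptography.X509Certificates;

    // Placeholder helpers standing in for the actual data-access code.
    byte[] certBytes = LoadCertificateBytesFromDatabase2();
    string certPassword = LoadCertificatePassword();

    // MachineKeySet avoids depending on a loaded user profile under IIS;
    // PersistKeySet keeps the private key available for the TLS/SOAP call.
    X509Certificate2 cert = new X509Certificate2(
        certBytes,
        certPassword,
        X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

    if (!cert.HasPrivateKey)
    {
        // if this is false against Database2 but true against Database1,
        // the stored blobs differ even though the code is identical
        throw new InvalidOperationException("Certificate blob has no private key");
    }
    ```

    Comparing cert.Thumbprint and cert.HasPrivateKey across the two databases is a quick way to confirm whether the rows really are the same.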

    Read the article

  • System.UriFormatException: Invalid URI: The hostname could not be parsed.

    - by Shane
    All of a sudden I'm getting the following error on my website. It doesnt access a db. just a simple website using .net 2.0. I did recently apply the available windows server 2003 service packs. Could that have changed things? I should add the error randomly comes and goes and has been doing so for today and yesterday. I leave it for 5 minutes and the error is gone. Server Error in '/' Application. Invalid URI: The hostname could not be parsed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UriFormatException: Invalid URI: The hostname could not be parsed. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [UriFormatException: Invalid URI: The hostname could not be parsed.] System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind) +5367536 System.Uri.CreateUri(Uri baseUri, String relativeUri, Boolean dontEscape) +31 System.Uri..ctor(Uri baseUri, String relativeUri) +34 System.Net.HttpWebRequest.CheckResubmit(Exception& e) +5300867 [WebException: Cannot handle redirect from HTTP/HTTPS protocols to other dissimilar ones.] System.Net.HttpWebRequest.GetResponse() +5314029 System.Xml.XmlDownloadManager.GetNonFileStream(Uri uri, ICredentials credentials) +69 System.Xml.XmlDownloadManager.GetStream(Uri uri, ICredentials credentials) +3929371 System.Xml.XmlUrlResolver.GetEntity(Uri absoluteUri, String role, Type ofObjectToReturn) +54 System.Xml.XmlTextReaderImpl.OpenUrlDelegate(Object xmlResolver) +74 System.Threading.CompressedStack.runTryCode(Object userData) +70 System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) +0 System.Threading.CompressedStack.Run(CompressedStack compressedStack, ContextCallback callback, Object state) +108 System.Xml.XmlTextReaderImpl.OpenUrl() +186 System.Xml.XmlTextReaderImpl.Read() +208 System.Xml.XmlLoader.Load(XmlDocument doc, XmlReader reader, Boolean preserveWhitespace) +112 System.Xml.XmlDocument.Load(XmlReader reader) +108 System.Web.UI.WebControls.XmlDataSource.PopulateXmlDocument(XmlDocument document, CacheDependency& dataCacheDependency, CacheDependency& transformCacheDependency) +303 System.Web.UI.WebControls.XmlDataSource.GetXmlDocument() +153 System.Web.UI.WebControls.XmlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) +29 System.Web.UI.WebControls.BaseDataList.GetData() +39 System.Web.UI.WebControls.DataList.CreateControlHierarchy(Boolean useDataSource) +264 System.Web.UI.WebControls.BaseDataList.OnDataBinding(EventArgs e) +55 System.Web.UI.WebControls.BaseDataList.DataBind() +75 System.Web.UI.WebControls.BaseDataList.EnsureDataBound() +55 System.Web.UI.WebControls.BaseDataList.CreateChildControls() +65 System.Web.UI.Control.EnsureChildControls() +97 System.Web.UI.Control.PreRenderRecursiveInternal() +53 System.Web.UI.Control.PreRenderRecursiveInternal() +202 System.Web.UI.Control.PreRenderRecursiveInternal() +202 System.Web.UI.Control.PreRenderRecursiveInternal() +202 System.Web.UI.Control.PreRenderRecursiveInternal() +202 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4588
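
    The inner WebException ("Cannot handle redirect from HTTP/HTTPS protocols to other dissimilar ones") suggests the XmlDataSource control is fetching its XML over HTTP and being redirected somewhere it cannot follow, which then surfaces as the UriFormatException. A hedged sketch that downloads the document explicitly, so the failing address shows up in one obvious place (BindFeed, xmlSource and feedUrl are stand-ins, not names from the site):

    ```csharp
    using System;
    using System.Net;
    using System.Web.UI.WebControls;

    // Fetch the XML ourselves instead of letting XmlDataSource resolve the URL.
    private void BindFeed(XmlDataSource xmlSource, string feedUrl)
    {
        Uri parsed;
        if (!Uri.TryCreate(feedUrl, UriKind.Absolute, out parsed))
        {
            // log the bad value rather than letting the page throw UriFormatException
            return;
        }

        using (WebClient client = new WebClient())
        {
            string xml = client.DownloadString(parsed);  // a WebException here points straight at the redirect
            xmlSource.Data = xml;                        // XmlDataSource.Data accepts raw markup
        }
    }
    ```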

    Read the article

  • Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0

    - by Jonesy
    HI folks, I have a .net application (vb.net) and I'm using the ajax control toolkit. It works fine on my production machine but when I upload it to the host (fasthosts) i get this error: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Assembly Load Trace: The following information can be helpful to determine why the assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' could not be loaded. WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. Stack Trace: [BadImageFormatException: Could not load file or assembly 'System.Web.Ajax, Version=3.0.31106.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e' or one of its dependencies. The module was expected to contain an assembly manifest.] AjaxControlToolkit.ToolkitScriptManager.ApplyAssembly(ScriptReference script, Boolean isComposite) +0 AjaxControlToolkit.ToolkitScriptManager.OnResolveScriptReference(ScriptReferenceEventArgs e) +167 System.Web.UI.ScriptManager.RegisterScripts() +191 System.Web.UI.ScriptManager.OnPagePreRenderComplete(Object sender, EventArgs e) +113 System.Web.UI.Page.OnPreRenderComplete(EventArgs e) +8698462 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1029 Here is my web.conf file. Its very simple: <system.web> <customErrors mode="Off"/> <compilation debug="true"> <assemblies> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A"/> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/></assemblies></compilation></system.web> Does anyone know whats up? -- Billy

    Read the article

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server based component to periodically access POP3 email boxes and retrieve emails. The service then needs to process the emails which will involve: Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.) Analysing and saving any attachments to disk Take the email body and attachment details and create a new item in the database Or update an existing item where the reference matches the incoming email subject line What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc. ready for my processing... [ UPDATE: Reviews ] OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries so I thought I'd provide a short review of some of those mentioned below and a few others: Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straight forward - probably a good introduction Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs, I wouldn't bother with this C#Mail - free - works well, comes with Mime parser and SMTP client, however the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor after which it worked no problem OpenPOP - free - hasn't been updated for about 5 years so it's current state is .NET 1.0, doesn't support SSL but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with Mime parser. Of the free libraries, I'd go for C#Mail or OpenPOP. I looked at a few commercial libraries: Chillkat, Rebex, RemObjects, JMail.net. Based on features, price and impression of the company I would probably go for Rebex and may in the future if my requirements change or I run into production issues with either of C#Mail or OpenPOP. In case anyone's needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP: SslStream stream = new SslStream(clientSocket.GetStream(), false, delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) { return true; });
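
    For the processing side, one way to stay independent of whichever library wins the evaluation is to hide retrieval behind a small abstraction and keep the business rules on this side of it. A rough sketch (every type name here is hypothetical; none of them come from OpenPOP, C#Mail or the commercial libraries):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical abstraction over whichever POP3 library is chosen.
    public interface IPop3Client : IDisposable
    {
        int GetMessageCount();
        IncomingMail GetMessage(int messageNumber);
        void DeleteMessage(int messageNumber);
    }

    public class IncomingMail
    {
        public string Subject;
        public string From;
        public string Body;
        public List<byte[]> Attachments = new List<byte[]>();
    }

    public class MailboxProcessor
    {
        public void Run(IPop3Client client)
        {
            int count = client.GetMessageCount();
            for (int i = 1; i <= count; i++)            // POP3 message numbers are 1-based
            {
                IncomingMail mail = client.GetMessage(i);
                if (!HasValidReference(mail.Subject))   // business rule: reference in the subject line
                    continue;
                SaveAttachments(mail);                  // analyse and save attachments to disk
                CreateOrUpdateItem(mail);               // new item, or update where the reference matches
                client.DeleteMessage(i);
            }
        }

        // Placeholders for the validation, file and database logic described above.
        private bool HasValidReference(string subject) { return !string.IsNullOrEmpty(subject); }
        private void SaveAttachments(IncomingMail mail) { }
        private void CreateOrUpdateItem(IncomingMail mail) { }
    }
    ```

    Swapping OpenPOP for C#Mail, or for a commercial client later, then only means writing a new IPop3Client adapter.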

    Read the article

  • Impersonation - Access is denied

    - by krisg
    I am having trouble using impersonation to delete a PerformanceCounterCategory from an MVC website. I have a static class and when the application starts it checks whether or not a PerformanceCounterCategory exists, and if it contains the correct counters. If not, it deletes the category and creates it again with the required counters. It works fine when running under the built in webserver Cassini, but when i try run it through IIS7 (Vista) i get the following error: Access is denied Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ComponentModel.Win32Exception: Access is denied The code used is from an MS article, from memory... var username = "user"; var password = "password"; var domain = "tempuri.org"; WindowsImpersonationContext impersonationContext; // if impersonation fails - return if (!ImpersonateValidUser(username, password, domain, out impersonationContext)) { throw new AuthenticationException("Impersonation failed"); } PerformanceCounterCategory.Delete(PerfCategory); UndoImpersonation(impersonationContext); ... private static bool ImpersonateValidUser(string username, string password, string domain, out WindowsImpersonationContext impersonationContext) { const int LOGON32_LOGON_INTERACTIVE = 2; const int LOGON32_PROVIDER_DEFAULT = 0; WindowsIdentity tempWindowsIdentity; var token = IntPtr.Zero; var tokenDuplicate = IntPtr.Zero; if (RevertToSelf()) { if (LogonUserA(username, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) != 0) { if (DuplicateToken(token, 2, ref tokenDuplicate) != 0) { tempWindowsIdentity = new WindowsIdentity(tokenDuplicate); impersonationContext = tempWindowsIdentity.Impersonate(); if (impersonationContext != null) { CloseHandle(token); CloseHandle(tokenDuplicate); return true; } } } } if (token != IntPtr.Zero) CloseHandle(token); if (tokenDuplicate != IntPtr.Zero) CloseHandle(tokenDuplicate); impersonationContext = null; return false; } [DllImport("advapi32.dll")] public static extern int LogonUserA(String lpszUserName, String lpszDomain, String lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern int DuplicateToken(IntPtr hToken, int impersonationLevel, ref IntPtr hNewToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern bool RevertToSelf(); [DllImport("kernel32.dll", CharSet = CharSet.Auto)] public static extern bool CloseHandle(IntPtr handle); The error is thrown when processing tries to execute the PerformanceCounterCategory.Delete command. Suggestions?
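
    Two points worth separating from the IIS permission question: deleting or creating a PerformanceCounterCategory generally needs administrative rights, so the impersonated account (or the IIS7 application pool identity) has to have them on that box; and the impersonation context should be undone even when Delete throws. A small sketch of the latter, reusing the question's ImpersonateValidUser helper and variables:

    ```csharp
    // Sketch only: same impersonation helper as above, but the revert is guaranteed.
    WindowsImpersonationContext impersonationContext;
    if (!ImpersonateValidUser(username, password, domain, out impersonationContext))
        throw new AuthenticationException("Impersonation failed");

    try
    {
        if (PerformanceCounterCategory.Exists(PerfCategory))
            PerformanceCounterCategory.Delete(PerfCategory);
    }
    finally
    {
        impersonationContext.Undo();   // always revert, even if Delete throws "Access is denied"
    }
    ```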

    Read the article

  • JLabel animation in JPanel

    - by Trizicus
    After scratching around I found that it's best to implement a custom image component by extending a JLabel. So far that has worked great as I can add multiple "images" (jlabels without the layout breaking. I just have a question that I hope someone can answer for me. I noticed that in order to animate JLabels across the screen I need to setlayout(null); and setbounds of the component and then to animate eventually setlocation(x,y);. Is this a best practice or a terrible way to animate a component? I plan on eventually making an animation class but I don't want to do so and end up having to chuck it. I have included relevant code for a quick review check. import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.JPanel; import javax.swing.Timer; public class GraphicsPanel extends JPanel { private Timer timer; private long startTime = 0; private int numFrames = 0; private float fps = 0.0f; private int x = 0; GraphicsPanel() { final Entity ent1 = new Entity(); ent1.setBounds(x, 0, ent1.getWidth(), ent1.getHeight()); add(ent1); //ESSENTIAL setLayout(null); //GAMELOOP timer = new Timer(30, new ActionListener() { public void actionPerformed(ActionEvent e) { getFPS(); incX(); ent1.setLocation(x, 0); repaint(); } }); timer.start(); } public void incX() { x++; } @Override public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2 = (Graphics2D) g.create(); g2.setClip(0, 0, getWidth(), getHeight()); g2.setColor(Color.BLACK); g2.drawString("FPS: " + fps, 1, 15); } public void getFPS() { ++numFrames; if (startTime == 0) { startTime = System.currentTimeMillis(); } else { long currentTime = System.currentTimeMillis(); long delta = (currentTime - startTime); if (delta > 1000) { fps = (numFrames * 1000) / delta; numFrames = 0; startTime = currentTime; } } } } Thank you!

    Read the article

  • "Bad binary signature" in ASP.NET MVC application

    - by David M
    We are getting the error above on some pages of an ASP.NET MVC application when it is deployed to a 64 bit Windows 2008 server box. It works fine on our development machines, though these are 32 bit XP. Just wondered if anyone had encountered this before, and has any suggestions? Details as follows: Bad binary signature. (Exception from HRESULT: 0x80131192) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Runtime.InteropServices.COMException: Bad binary signature. (Exception from HRESULT: 0x80131192) All projects are set to compile for Any CPU, and are compiled in Release mode. The ASP.NET site is precompiled, and the precompiled build is on a 64 bit Windows 2008 TeamCity build agent. Thanks in advance. EDIT We're still plagued by this. I have looked at all the binaries in the website's bin directory using corflags.exe. None has the 32BIT flag set, and all have a CorFlags value of 9 except for Antlr3.Runtime.dll which has a value of 1. The problem only affects certain pages, and it seems to be those which use FluentValidation (including FluentValidation.Mvc and FluentValidation.xValIntegration assemblies). None of these shows anything out of the ordinary when inspected with corflags.exe, and there are no odd looking dependencies revealed by ildasm. When built locally (32 bit Windows XP) the site deploys and runs fine. When built on the build agents (64 bit Windows 2008 Server) the site displays these errors. The site runs in Integrated Pipeline mode, and is not set to 32 bit. The stack trace is: [COMException (0x80131192): Bad binary signature. (Exception from HRESULT: 0x80131192)] ASP.views_user_newinternal_aspx.__RenderContent2(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\User\NewInternal.aspx:53 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115 ASP.views_shared_site_master.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in e:\TeamCity\buildAgent\work\605ee6b4a5d1dd36\...Admin.Mvc\Views\Shared\Site.Master:26 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +115 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +240 System.Web.UI.Page.Render(HtmlTextWriter writer) +38 System.Web.Mvc.ViewPage.Render(HtmlTextWriter writer) +94 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +4240

    Read the article

  • asp.net MVC binding specific model results in error for post request

    - by Tomh
    Hi I'm having the following two actions defined in my controller [Authorize] [HttpGet] public ActionResult Edit() { ViewData.Model = HttpContext.User.Identity; return View(); } [Authorize] [HttpPost] public ActionResult Edit(User model) { return View(); } However if I post my editted data to the second action I get the following error: Server Error in '/' Application. An item with the same key has already been added. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: An item with the same key has already been added. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. I tried several things like renaming parameters and removing editable fields, but it seems the model type is the problem, what could be wrong? Stack Trace: [ArgumentException: An item with the same key has already been added.] System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +51 System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) +7464444 System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, Func`2 elementSelector, IEqualityComparer`1 comparer) +270 System.Linq.Enumerable.ToDictionary(IEnumerable`1 source, Func`2 keySelector, IEqualityComparer`1 comparer) +102 System.Web.Mvc.ModelBindingContext.get_PropertyMetadata() +157 System.Web.Mvc.DefaultModelBinder.BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor) +158 System.Web.Mvc.DefaultModelBinder.BindProperties(ControllerContext controllerContext, ModelBindingContext bindingContext) +90 System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model) +50 System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +1048 System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +280 System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +257 System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +109 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +314 System.Web.Mvc.Controller.ExecuteCore() +105 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7 System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +34 System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +21 System.Web.Mvc.Async.<>c__DisplayClass8`1.<BeginSynchronous>b__7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult`1.End() +59 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8679150 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& 
completedSynchronously) +155
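
    The duplicate-key exception is thrown while the DefaultModelBinder builds property metadata for the posted type, and binding the POST directly to an identity/domain type such as User is a common way to trigger it. A hedged sketch of the usual workaround, a flat edit view model (the property names are assumptions, not fields from the real User type):

    ```csharp
    // Hypothetical view model: bind the POST to a flat DTO rather than the domain/identity type.
    public class UserEditViewModel
    {
        public string UserName { get; set; }
        public string Email { get; set; }
        public string DisplayName { get; set; }
    }

    // Inside the controller:
    [Authorize]
    [HttpGet]
    public ActionResult Edit()
    {
        // map only the fields the form actually edits onto the view model
        return View(new UserEditViewModel { UserName = User.Identity.Name });
    }

    [Authorize]
    [HttpPost]
    public ActionResult Edit(UserEditViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model);

        // copy the posted values back onto the real user record here, then redirect
        return RedirectToAction("Edit");
    }
    ```

    Binding to a DTO like this also keeps fields you never intended to edit out of the binder's reach.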

    Read the article

  • Is this a good way to do a game loop for an iPhone game?

    - by Danny Tuppeny
    Hi all, I'm new to iPhone dev, but trying to build a 2D game. I was following a book, but the game loop it created basically said: function gameLoop update() render() sleep(1/30th second) gameLoop The reasoning was that this would run at 30fps. However, this seemed a little mental, because if my frame took 1/30th second, then it would run at 15fps (since it'll spend as much time sleeping as updating). So, I did some digging and found the CADisplayLink class which would sync calls to my gameLoop function to the refresh rate (or a fraction of it). I can't find many samples of it, so I'm posting here for a code review :-) It seems to work as expected, and it includes passing the elapsed (frame) time into the Update method so my logic can be framerate-independant (however I can't actually find in the docs what CADisplayLink would do if my frame took more than its allowed time to run - I'm hoping it just does its best to catch up, and doesn't crash!). // // GameAppDelegate.m // // Created by Danny Tuppeny on 10/03/2010. // Copyright Danny Tuppeny 2010. All rights reserved. // #import "GameAppDelegate.h" #import "GameViewController.h" #import "GameStates/gsSplash.h" @implementation GameAppDelegate @synthesize window; @synthesize viewController; - (void) applicationDidFinishLaunching:(UIApplication *)application { // Create an instance of the first GameState (Splash Screen) [self doStateChange:[gsSplash class]]; // Set up the game loop displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(gameLoop)]; [displayLink setFrameInterval:2]; [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode]; } - (void) gameLoop { // Calculate how long has passed since the previous frame CFTimeInterval currentFrameTime = [displayLink timestamp]; CFTimeInterval elapsed = 0; // For the first frame, we want to pass 0 (since we haven't elapsed any time), so only // calculate this in the case where we're not the first frame if (lastFrameTime != 0) { elapsed = currentFrameTime - lastFrameTime; } // Keep track of this frames time (so we can calculate this next time) lastFrameTime = currentFrameTime; NSLog([NSString stringWithFormat:@"%f", elapsed]); // Call update, passing the elapsed time in [((GameState*)viewController.view) Update:elapsed]; } - (void) doStateChange:(Class)state { // Remove the previous GameState if (viewController.view != nil) { [viewController.view removeFromSuperview]; [viewController.view release]; } // Create the new GameState viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, IPHONE_WIDTH, IPHONE_HEIGHT) andManager:self]; // Now set as visible [window addSubview:viewController.view]; [window makeKeyAndVisible]; } - (void) dealloc { [viewController release]; [window release]; [super dealloc]; } @end Any feedback would be appreciated :-) PS. Bonus points if you can tell me why all the books use "viewController.view" but for everything else seem to use "[object name]" format. Why not [viewController view]?

    Read the article

  • Is One Tool or a Suite of Tools Better for Scrum?

    - by Rob Wells
    G'day, Edit: We've been using Scrum very successfully for several years on several projects of varying sizes. In fact, our team developed the successful iPlayer project for the BBC using a classical Scrum approach. After using various combinations of tools, some high-tech, some low-tech, across these projects, we now wish to try adopting a suitable tool suite. Our manager is to some extent attempting to force the adoption of a single suite of tools for Scrum. I've looked at the SO question "Best Scrum tools" and most people seem to recommend either: a suite of low-tech solutions, e.g. whiteboards, post-its, index cards, etc., or a monolithic tool that tries to satisfy as much of the process as possible, e.g. Agilo, Mingle, ScrumWorks, Target Process, etc. Our team is currently evaluating several different Scrum tools. However, we are looking at selecting a single, monolithic tool, e.g. Agilo. All of the "one-stop" solutions have their strengths and weaknesses, with the serious enterprise-type solutions being the best sort of fit, but all have some shortcomings. After reading the paper "Peer Code Review: An Agile Process" over at SmartBear, I started wondering if we were trying to force adoption of a tool on a "best fit" basis. I think you can take a couple of reference artefacts of the Scrum development process, say user stories, epics and themes, and the code base, which must use a well-known SCM, e.g. SVN, Hg, etc. If we take those as the common reference points for the tools employed, then we would be able to use a group of tools to handle the different aspects of the Scrum process rather than trying to force the fit of a single tool, which would be a bit like forcing a square peg into a round hole. In this way, provided you've agreed on your common reference points, you can use several tools, each performing its role better than could be done by a single component in a monolithic tool suite. Is this a more sensible approach? Are the two reference points I mentioned above suitable, or is there a better choice of points where the tools would meet? cheers,

    Read the article

  • Is this (Lock-Free) Queue Implementation Thread-Safe?

    - by Hosam Aly
    I am trying to create a lock-free queue implementation in Java, mainly for personal learning. The queue should be a general one, allowing any number of readers and/or writers concurrently. Would you please review it, and suggest any improvements/issues you find? Thank you. import java.util.concurrent.atomic.AtomicReference; public class LockFreeQueue<T> { private static class Node<E> { E value; volatile Node<E> next; Node(E value) { this.value = value; } } private AtomicReference<Node<T>> head, tail; public LockFreeQueue() { // have both head and tail point to a dummy node Node<T> dummyNode = new Node<T>(null); head = new AtomicReference<Node<T>>(dummyNode); tail = new AtomicReference<Node<T>>(dummyNode); } /** * Puts an object at the end of the queue. */ public void putObject(T value) { Node<T> newNode = new Node<T>(value); Node<T> prevTailNode = tail.getAndSet(newNode); prevTailNode.next = newNode; } /** * Gets an object from the beginning of the queue. The object is removed * from the queue. If there are no objects in the queue, returns null. */ public T getObject() { Node<T> headNode, valueNode; // move head node to the next node using atomic semantics // as long as next node is not null do { headNode = head.get(); valueNode = headNode.next; // try until the whole loop executes pseudo-atomically // (i.e. unaffected by modifications done by other threads) } while (valueNode != null && !head.compareAndSet(headNode, valueNode)); T value = (valueNode != null ? valueNode.value : null); // release the value pointed to by head, keeping the head node dummy if (valueNode != null) valueNode.value = null; return value; }

    Read the article

  • Text to Speech in ASP.NET - Access is denied... what to do?

    - by Magnetic_dud
    On my personal website, i would like to make it "pronounce" something I solved the "concept" problem, as in here, and on my desktop it works smoothly when launched from visual web developer. Creates a file, and then an embedded player in the page will play it. Perfect. So, I uploaded it on the server... I get this error 500: Server Error in '/sapi' Application. Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6) that is used if the application is not impersonating. If the application is impersonating via , the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user. (...) Source Error: See it below Source File: c:\mypath\sapi\myfile.aspx.cs Line: 21 Stack Trace: [UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))] SpeechLib.SpVoiceClass.Speak(String Text, SpeechVoiceSpeakFlags Flags) +0 prova.Button1_Click(Object sender, EventArgs e) in c:\mypath\sapi\prova.aspx.cs:21 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +111 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +110 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565 Version Information: Microsoft .NET Framework Version:2.0.50727.3053; ASP.NET Version:2.0.50727.3053 This is the source Source Error: Line 19: myfile.Open(@"C:\mypath\sapi\gen\hi.wav",SpeechStreamFileMode.SSFMCreateForWrite,false); Line 20: voice.AudioOutputStream = myfile; Line 21: voice.Speak("Hi",SpeechVoiceSpeakFlags.SVSFDefault); I get error on line 21, Voice.speak That probably means that the aspnet worker user has not some right permission The generation folder has all the right permissions: an empty file is created. So, i have to give permission of execute to some system dll? Do you know which dll? It is not bin\Interop.SpeechLib.dll, on this one the aspnet user has full control Ps: i have full control on the (windows) server (i mean, access by RDC, is not a shared hosting)
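
    Whether it avoids the E_ACCESSDENIED on this particular host isn't guaranteed, but the managed System.Speech API (shipped with .NET 3.0, which runs on the same CLR as 2.0) drives SAPI without the SpeechLib COM interop and is worth a try. A minimal sketch writing the same file from the button handler named in the stack trace:

    ```csharp
    using System;
    using System.Speech.Synthesis;

    protected void Button1_Click(object sender, EventArgs e)
    {
        // Sketch only: System.Speech instead of the SpeechLib COM interop;
        // the output path is the one from the question, with the same folder permissions.
        using (SpeechSynthesizer synth = new SpeechSynthesizer())
        {
            synth.SetOutputToWaveFile(@"C:\mypath\sapi\gen\hi.wav");
            synth.Speak("Hi");
            synth.SetOutputToNull();   // close the wave file so the embedded player can read it
        }
    }
    ```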

    Read the article
