Search Results

Search found 467 results on 19 pages for 'weak'.


  • How to publish an ASP.NET MVC application to a free host

    - by Lirik
    Hi, I'm using a free web host (0000free) which supports ASP.NET MVC, but it runs on Mono. This is the first time I've deployed an MVC application, so I'm a little confused as to where I need to deploy it. I have Visual Studio 2010 and I used its Publish feature (i.e. right-click on the project name and click Publish) and I tried several things:

    Publish method: FTP to the root folder.
    Publish method: FTP to the public_html folder.
    Publish method: File System to the root folder.
    Publish method: File System to the public_html folder.
    Publish method: File System to a local directory on my computer, then FTP to root, and also to the public_html folder.

    I went into the cPanel (control panel) to see whether ASP.NET has to be added/enabled for my web site, but I didn't see anything there. I can't browse to Index.aspx, nor can I redirect to it from index.html (as suggested in other posts on the host forum); right now I have a link from index.html to Index.aspx but it's not working either (see http://www.mydevarmy.com). I've also tried renaming Index.aspx to Default.aspx, but that doesn't work either.

    The search utility of the host's forum is somewhat weak, so I use Google to search their forum: http://www.google.com/search?q=publish+asp.net+site%3A0000free.com%2Fforum%2F&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

    I've been reading Pro ASP.NET MVC Framework and it has a chapter about publishing, but it doesn't provide any specific information with respect to the location of publishing; this is all it says (and it's not very helpful in my case):

    Where Should I Put My Application? You can deploy your application to any folder on the server. When IIS first installs, it automatically creates a folder for a web site called Default Web Site at c:\Inetpub\wwwroot\, but you shouldn't feel any obligation to put your application files there. It's very common to host applications on a different physical drive from the operating system (e.g., in e:\websites\example.com). It's entirely up to you, and may be influenced by concerns such as how you plan to back up the server.

    Here is the exception I get when I try to view my Index.aspx page:

    Unrecognized attribute 'targetFramework'. (/home/devarmy/public_html/Web.config line 1)
    Description: HTTP 500. Error processing request.
    Stack Trace: System.Configuration.ConfigurationErrorsException: Unrecognized attribute 'targetFramework'.
    (/home/devarmy/public_html/Web.config line 1)
    at System.Configuration.ConfigurationElement.DeserializeElement (System.Xml.XmlReader reader, Boolean serializeCollectionKey) [0x00000] in <filename unknown>:0
    at System.Configuration.ConfigurationSection.DoDeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0
    at System.Configuration.ConfigurationSection.DeserializeSection (System.Xml.XmlReader reader) [0x00000] in <filename unknown>:0
    at System.Configuration.Configuration.GetSectionInstance (System.Configuration.SectionInfo config, Boolean createDefaultInstance) [0x00000] in <filename unknown>:0
    at System.Configuration.ConfigurationSectionCollection.get_Item (System.String name) [0x00000] in <filename unknown>:0
    at System.Configuration.Configuration.GetSection (System.String path) [0x00000] in <filename unknown>:0
    at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path, System.Web.HttpContext context) [0x00000] in <filename unknown>:0
    at System.Web.Configuration.WebConfigurationManager.GetSection (System.String sectionName, System.String path) [0x00000] in <filename unknown>:0
    at System.Web.Configuration.WebConfigurationManager.GetWebApplicationSection (System.String sectionName) [0x00000] in <filename unknown>:0
    at System.Web.Compilation.BuildManager.get_CompilationConfig () [0x00000] in <filename unknown>:0
    at System.Web.Compilation.BuildManager.Build (System.Web.VirtualPath vp) [0x00000] in <filename unknown>:0
    at System.Web.Compilation.BuildManager.GetCompiledType (System.Web.VirtualPath virtualPath) [0x00000] in <filename unknown>:0
    at System.Web.Compilation.BuildManager.GetCompiledType (System.String virtualPath) [0x00000] in <filename unknown>:0
    at System.Web.HttpApplicationFactory.InitType (System.Web.HttpContext context) [0x00000] in <filename unknown>:0
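    For what it's worth (this note is not from the original thread): Visual Studio 2010 writes targetFramework="4.0" into the first line of Web.config, and Mono releases that predate .NET 4.0 profile support (which arrived around Mono 2.8) do not recognize that attribute, which matches the exception above. A minimal sketch of the usual workaround, assuming the host cannot upgrade Mono, is to strip just that attribute:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <!-- Visual Studio 2010 emits:
                 <compilation debug="false" targetFramework="4.0" />
                 Older Mono rejects targetFramework, so drop only that attribute: -->
            <compilation debug="false" />
          </system.web>
        </configuration>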

    Read the article

  • Performing mechanical movements using computer

    - by Vi
    How can I make a computer (in particular, my laptop) perform some mechanical movements without buying anything over $5, soldering things inside the computer, or building big sophisticated circuits? Traditionally the CD-ROM tray is used to make the computer do some movement IRL, triggered by, for example, an SSH command, but in a laptop the tray is one-shot (unless manually reloaded) and also not a very comfortable [mis]usage. Some assisting circuits can be used too, but nothing complex; for example, a little motor that can run on USB power.

    Devices in my computer:
    DVD-ROM tray: one-time push.
    USB power: continuous power to a motor or LEDs, or a relay that turns on something powerful.
    Audio card: 3 outputs (modprobe alsa model=test can set Mic and Line-in as additional outputs). One controllable DC output (microphone) that can power up a LED and some electronic (maybe even mechanical?) relay. With sophisticated additional circuitry it could also control a lot of devices with good precision. Both input and output support. Probably the most useful component in a computer for a radio ham.
    Modem: I don't know much about this one; it doesn't work because hsfmodem crashes the kernel if memory is >= 1GB. Maybe its "pick up" and "hang up" could switch power taken from the USB port on and off?
    Video card: VGA port? S-Video port? Will they be useful?
    Backlight: tunable, but probably not useful.
    CardBus (or similar) slot: probably nothing interesting for the task (is there?).
    AC adapter and battery: probably nothing programmable here. /* My AC adapter already has additional jacks to connect extra devices */
    Keyboard: no use.
    Touchpad: good sensor (synclient -m 1), but no output.
    Various LEDs inside the laptop: probably too weak, and they require soldering.
    Fans inside the laptop: poor control over them, requires soldering, and dangerous to tinker with.
    HDD (internal and external): can be spun down and up (hdparm -Y, cat /dev/ubb). But connecting anything in series with its power line leaves the HDD underpowered... and it's too complex.

    Have I missed anything? Any ideas how to use the components described? Any other ideas? Maybe there are easily available /* in developing countries */ cheap devices like "enhanced multimeters" that can be controlled from a computer and provide configurable output and measure current and other things? Things to help push many physical buttons with a computer. Isn't this a simple idea to implement, with a lot of uses in good hands?
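    For reference, the two ready-made "actuators" mentioned above can be driven from a shell; a minimal sketch, assuming typical Linux device names (yours may differ):

        # push the DVD-ROM tray out remotely, e.g. over SSH (one-shot on most laptops)
        eject /dev/sr0
        # spin an external HDD down (a crude, slow actuator)...
        hdparm -y /dev/sdb
        # ...and wake it back up by forcing a read from the device
        dd if=/dev/sdb of=/dev/null bs=512 count=1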

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
    This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris.

    Weekly News Links
    Photo by The H Security.

    IPv6: Smartphones compromise users’ privacy
    Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users.

    Dumb phones can be attacked too
    Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack.

    SCADA exploit – the dragon awakes
    The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted.

    Sophos: Spam to get more malicious
    Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos.

    Global spam traffic rebounds as Rustock wakes
    Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec.

    Cracking WPA keys in the cloud
    At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service.

    Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution
    Provides a link to more details about the vulnerability and shows a work-around/fix for the problem.

    Adobe plans to make it easier to delete Flash cookies in web browsers
    The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings.

    Firefox beta getting new database standard
    The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer.

    MetroPCS accused of blocking certain Net content
    MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups.

    Server and Tools chief Muglia to leave Microsoft in summer 2011
    Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10.

    Report: DOJ nearing decision on Google-ITA
    The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report.

    South Korea says Google Street View broke law
    Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks.
    The backlash over Google’s HTML5 video bet
    Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant.

    Google answers critics on HTML5 Web video move
    Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage.

    Random TinyHacker Links

    A Special GiveAway: a Great Book & Great Security Software
    The team from 7 Tutorials has a special giveaway running during the month of January: signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011, and free 3-month trials for everyone willing to participate.

    One Click Rooting For Android Phones
    Here’s a nice tool that helps you root your Android phone effortlessly.

    New Angry Birds Free version 1.0
    Available in the App Store.

    Google Code University
    Learn programming at Google Code University.

    Capture and Share Your Favorite Part Of a YouTube Video
    SnipSnip.it lets you share only the part of the video that you like.

    Super User Questions
    More great questions and answers from this past week’s popular topics at Super User.
    What are the Windows A: and B: drives used for?
    Does OS X support linux-like features?
    What is the easiest way to make a backup of an entire hard disk?
    Will shifting from Wireless to Wired network result in better performance?
    Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine?

    How-To Geek Weekly Article Recap
    Enjoy reading through our hottest articles from this past week.
    The 50 Best Ways to Disable Built-in Windows Features You Don’t Want
    The Best of CES (Consumer Electronics Show) in 2011
    How to Upgrade Windows 7 Easily (And Understand Whether You Should)
    The Worst of CES (Consumer Electronics Show) in 2011
    The How-To Geek Guide to Audio Editing: Basic Noise Removal

    One Year Ago on How-To Geek
    More great articles from one year ago filled with helpful geeky goodness for you to enjoy.
    Share Text & Images the Easy Way with JustPaste.it
    Start Portable Firefox in Safe Mode
    Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions
    Protect Your Computer from “Little Hands” with KidSafe
    Lock Prying Eyes Out of Your Minimized Windows

    Custom Crocheted Cylon-Cthulhu Hybrid
    What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad…Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office.

    The Geek Note
    Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.

    Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation
    EBusiness is about maximizing operational efficiency.
    At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity.

    Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    · The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    · The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    · The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema
    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    · Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.
      o This enables model extensions that reflect business entities unique to your organization's operations.
    · The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    · It uses variable byte Unicode to support over 31 languages.
    · The schema encodes flexible date and flexible address formats for easy localizations.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for the various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
    [Note from Pinal]: This is the 29th episode of the Notes from the Field series. Security is a task we should hand to the experts: if there is a small oversight or misstep, there is a good chance the security of the organization is compromised. That is very true, but there are always devil's advocates who believe everyone should know security. As a DBA and administrator, I often see people taking no interest in Windows security, hiding behind the excuse that they are not Windows Server experts. We often miss the mission statement that matters most for the success of any organization: teamwork. In this blog post Brian tells the story in very interesting, lucid language. Read on!

    In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words.

    When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level, if we're speaking of SQL Server. One area I continually see weakness in is Windows file/folder (NTFS) and share permissions. The typical response is, "I'm a database developer and the Windows system administrator is responsible for that." That may very well be true: the system administrator may have the primary responsibility and accountability for file/folder and share security on the server. However, if you're involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn't, or you could be opening the door where human error puts bad data in your production system.

    File/Folder Permission Basics:
    I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here's what you must know as a minimum at the file/folder level:
    Read: allows you to read the contents of the file or folder. Having read permissions allows you to copy the file or folder.
    Write: as the name implies, allows you to write to the file or folder. This doesn't include the ability to delete; however, nothing stops a person with this access from writing an empty file.
    Delete: allows the file/folder to be deleted. If you overwrite files, you may need this permission.
    Modify: allows read, write, and delete.
    Full Control: same as Modify plus the ability to assign permissions.
    File/folder permissions aggregate, unless there is a DENY (which trumps, just like within SQL Server), meaning that if a person is in one group that gives Read and another group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control.

    Share Permission Basics:
    At the share level, here are the permissions:
    Read: allows you to read the contents on the share.
    Change: allows you to read, write, and delete contents on the share.
    Full Control: Change plus the ability to modify permissions.
    Like file/folder permissions, these permissions aggregate, and DENY trumps.

    So What Access Does a Person / Process Have?
    Figuring out what access someone or some process has depends on how the location is being accessed:
    Access comes through the share (\\ServerName\Share): a combination of permissions is considered.
    Access is through a drive letter (C:\, E:\, S:\, etc.): only the file/folder permissions are considered.
    The only complicated one here is access through the share. Here's what Windows does: it figures out the aggregated permissions at the file/folder level, figures out the aggregated permissions at the share level, and then takes the most restrictive of the two sets of permissions.

    You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, including to Administrators (if you're creating a share, you likely have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission set is the share-level permissions: it is set to allow only Read. Therefore, if you come in through the share, that is what restricts you. (See the command-prompt illustration at the end of this article.)

    Does This Knowledge Really Help Me?
    In my experience, it does. I've seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I've also seen cases where files to be imported as part of the nightly processing were overwritten by files intended for development. And I've seen cases where a process can't get to the files it needs because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals who don't understand these permissions, knowing them sets you apart. And if you're able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
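    As a concrete illustration of the "most restrictive wins" rule (this example is not from Brian's original post; the folder, share, server, and group names are hypothetical), run from an elevated command prompt:

        icacls C:\ImportFiles /grant "DOMAIN\DataLoaders":(OI)(CI)M
        rem NTFS now allows Modify on the folder and everything in it...
        net share ImportFiles=C:\ImportFiles /GRANT:"DOMAIN\DataLoaders",READ
        rem ...but the share grants only Read, so \\SERVER\ImportFiles is
        rem effectively read-only for that group, while local access via
        rem C:\ImportFiles (or a mapped drive letter) can still write and delete.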

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here. The exact summary sent to the EG is available here.

    We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very much appreciated, encouraging, and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results...

    APIs to Add to Java EE 7 Full/Web Profile
    The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. As the following graph shows, there was significant support for adding all the new APIs to the Full Profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on".

    The results for the Web Profile are not surprising. While there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).

    Enabling CDI by Default
    The second question asked was whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI; only 13.8% opposed. Comments such as these two reflect strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" and "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application.
    However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflicts with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters, such as this one, attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

    Consistent Usage of @Inject
    The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs. 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and of general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" and "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways".

    Expanding the Use of @Stereotype
    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed. A majority of commenters supported the idea, along with the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" and "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @....". Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible".

    Expanding Interceptor Use
    The final set of questions was about expanding interceptors further across Java EE... A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components.
    35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not also support interceptors. 32.8% thought any place that supports injection should also support interceptors. Only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, but were generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" and "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?" A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API"
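    To make the two recurring themes above concrete (nothing like this appears in the original post), here is a hedged Java sketch of standard JSR 330 @Inject alongside a CDI stereotype that bundles cross-cutting annotations; the names BusinessService, OrderRepository, and OrderProcessor are invented for illustration, and the CDI/JTA APIs are assumed to be on the classpath:

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.enterprise.context.RequestScoped;
        import javax.enterprise.inject.Stereotype;
        import javax.inject.Inject;
        import javax.transaction.Transactional; // JTA 1.2, part of Java EE 7

        // One stereotype bundles scope and transactionality for a class of beans.
        @Stereotype
        @RequestScoped
        @Transactional
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        @interface BusinessService {}

        class OrderRepository {}

        // The bean declares one stereotype instead of repeating three annotations,
        // and uses the standard JSR 330 @Inject rather than a JSR-specific one.
        @BusinessService
        class OrderProcessor {
            @Inject
            OrderRepository repository;
        }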

    Read the article

  • Manually adding a database instance to an 11.2 RAC database

    - by JaneZhang
    Normally we use dbca to add or remove database instances in a cluster, but in some cases dbca cannot be used, so this post walks through manually adding an instance to an existing 11.2 RAC database. In this example, the existing database has instance RACDB1 running on node rac1, and we add instance RACDB2 on node rac2. In 11.2, GI is installed under the grid user and the database software under the oracle user; unless stated otherwise, the steps below are executed as the oracle user.

    1. On the new node, create the directories the new instance needs, including those for audit_file_dest, background_dump_dest, user_dump_dest and core_dump_dest. For example, if audit_file_dest=/u01/app/oracle/admin/RACDB/adump does not exist, the instance fails to start with:
    ORA-09925: Unable to create audit trail file
    Linux-x86_64 Error: 2: No such file or directory
    Additional information: 9925

    2. Set the instance-specific initialization parameters for the new instance:
    SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';
    SQL> alter system set thread=2 scope=spfile sid='RACDB2';
    SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';
    SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.122)(PORT=1521))))' sid='RACDB2'; <===== 192.0.2.122 is the VIP of node 2

    3. Create the new instance's $ORACLE_HOME/dbs/init<sid>.ora from the existing instance's copy; it only needs to point to the shared spfile:
    SPFILE='+DATA/racdb/spfileracdb.ora'
    For example:
    [oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initRACDB1.ora rac2:$ORACLE_HOME/dbs/initRACDB2.ora <==== copy to node 2

    4. On the new node, add an entry for the new instance to /etc/oratab:
    RACDB2:/u01/app/oracle/product/11.2.0/dbhome_1:N

    5. Copy the password file ($ORACLE_HOME/dbs/orapw<sid>) from the existing node to the new node:
    [oracle@rac1 dbs]$ scp $ORACLE_HOME/dbs/orapwRACDB1 rac2:$ORACLE_HOME/dbs/orapwRACDB2 <== copy to node 2

    6. On an existing instance, create the UNDO tablespace for the new instance (when adding an instance with dbca, the undo tablespace is created automatically; here we must create it ourselves). For example:
    SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/….' SIZE 4096M;
    Or, if using ASM:
    SQL> CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '+DATA' SIZE 4096M;

    7. On an existing instance, add the redo thread and redo logs for the new instance. For example:
    SQL> alter database add logfile thread 2
         group 3 ('/dev/...', '/dev/...') size 1024M,
         group 4 ('/dev/...','dev/...') size 1024M;
    Or, if using ASM:
    SQL> alter database add logfile thread 2
         group 3 ('+DATA','+RECO') size 1024M,
         group 4 ('+DATA','+RECO') size 1024M;
    SQL> alter database enable thread 2; <== enable the new thread

    8. On the new node, start the new instance:
    [oracle@rac2 admin]$ su - oracle
    [oracle@rac2 admin]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    [oracle@rac2 admin]$ export ORACLE_SID=RACDB2
    [oracle@rac2 admin]$ sqlplus / as sysdba
    SQL> startup <== the instance starts with the instance-2 parameters set earlier

    9. Register the new instance in the OCR so that GI manages it:
    $ srvctl add instance -d <database name> -i <new instance name> -n <new node name>
    Example of the srvctl add instance command:
    [oracle@rac2 ~]$ srvctl add instance -d racdb -i RACDB2 -n rac2 <== this only registers the instance; verify it is running with ps -ef|grep smon
    [oracle@rac2 dbs]$ ps -ef|grep smon
    root      3453     1  1 Jun12 ?        04:03:05 /u01/app/11.2.0/grid/bin/osysmond.bin
    grid      3727     1  0 Jun12 ?        00:00:19 asm_smon_+ASM2
    oracle    5343  4543  0 14:06 pts/1    00:00:00 grep smon
    oracle   28736     1  0 Jun25 ?        00:00:03 ora_smon_RACDB2 <======== the new instance
    10. Check the resource status:
    $ su - grid
    [grid@rac2 ~]$ crsctl stat res -t
    ...
    ora.racdb.db
          1        ONLINE  ONLINE       rac1                     Open
          2        OFFLINE OFFLINE      rac2
    As shown above, the new instance is OFFLINE in the resource status because it was started with sqlplus rather than srvctl. Shut it down with sqlplus, then start it with srvctl:
    [grid@rac2 ~]$ su - oracle
    Password:
    [oracle@rac2 ~]$ sqlplus / as sysdba
    SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    SQL> exit
    [oracle@rac2 ~]$ srvctl start instance -d racdb -i RACDB2
    [oracle@rac2 ~]$ su - grid
    Password:
    [grid@rac2 ~]$ crsctl stat res -t
    ora.racdb.db
          1        ONLINE  ONLINE       rac1                     Open
          2        ONLINE  ONLINE       rac2                     Open

    11. Check the resource profile:
    [oracle@rac2 ~]$ crsctl stat res ora.racdb.db -p
    NAME=ora.racdb.db
    TYPE=ora.database.type
    ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
    ACTION_FAILURE_TEMPLATE=
    ACTION_SCRIPT=
    ACTIVE_PLACEMENT=1
    AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
    AUTO_START=restore
    CARDINALITY=2
    CHECK_INTERVAL=1
    CHECK_TIMEOUT=30
    CLUSTER_DATABASE=true
    DATABASE_TYPE=RAC
    DB_UNIQUE_NAME=RACDB
    DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
    DEGREE=1
    DESCRIPTION=Oracle Database resource
    ENABLED=1
    FAILOVER_DELAY=0
    FAILURE_INTERVAL=60
    FAILURE_THRESHOLD=1
    GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/RACDB/adump
    GEN_START_OPTIONS=
    GEN_START_OPTIONS@SERVERNAME(rac1)=open
    GEN_START_OPTIONS@SERVERNAME(rac2)=open
    GEN_USR_ORA_INST_NAME=
    GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
    HOSTING_MEMBERS=
    INSTANCE_FAILOVER=0
    LOAD=1
    LOGGING_LEVEL=1
    MANAGEMENT_POLICY=AUTOMATIC
    NLS_LANG=
    NOT_RESTARTING_TEMPLATE=
    OFFLINE_CHECK_INTERVAL=0
    ONLINE_RELOCATION_TIMEOUT=0
    ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    ORACLE_HOME_OLD=
    PLACEMENT=restricted
    PROFILE_CHANGE_TEMPLATE=
    RESTART_ATTEMPTS=2
    ROLE=PRIMARY
    SCRIPT_TIMEOUT=60
    SERVER_POOLS=ora.RACDB
    SPFILE=+DATA/RACDB/spfileRACDB.ora
    START_DEPENDENCIES=hard(ora.DATA.dg,ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.RECO.dg)
    START_TIMEOUT=600
    STATE_CHANGE_TEMPLATE=
    STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.RECO.dg)
    STOP_TIMEOUT=600
    TYPE_VERSION=3.2
    UPTIME_THRESHOLD=1h
    USR_ORA_DB_NAME=RACDB
    USR_ORA_DOMAIN=
    USR_ORA_ENV=
    USR_ORA_FLAGS=
    USR_ORA_INST_NAME=
    USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
    USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2
    USR_ORA_OPEN_MODE=open
    USR_ORA_OPI=false
    USR_ORA_STOP_MODE=immediate
    VERSION=11.2.0.3.0

    Note that in 11.2 the database is registered in the OCR as a single database resource covering all its instances; adding an instance with srvctl updates that resource rather than creating a new one, and dependencies such as those on the ASM diskgroups are maintained automatically.

    Note: for comparison, the dbca steps to add an instance would be: as the oracle user, run dbca -> RAC database -> Instance Management -> Add an instance -> select the active RAC database and follow the prompts; dbca creates the undo and redo automatically.
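    As a final sanity check (this step is not in the original list), gv$instance, queried from either node, should now show both instances open:

        SQL> select inst_id, instance_name, host_name, status from gv$instance;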

    Read the article

  • Un-failing over a Cisco PIX 515e

    - by ABrown
    We had a power outage at our data center last week, and when our dual PIX 515Es running IOS 7.0(8) (configured with a failover cable) came back, they were in a failed-over state where the Secondary unit is active and the Primary unit is standby. I have tried 'failover reset', 'failover active', and 'failover reload-standby', as well as executing reloads on both units in a variety of orders, and they don't come back Primary/Active, Secondary/Standby. The only thing in my arsenal that I haven't tried is driving to the data center and performing a hard reboot, which I hate to do. I have read How Failover Works on the Cisco Secure Firewall and it seems like this should be wicked straightforward.

    Output of show failover on Primary:
    Failover On
    Cable status: Normal
    Failover unit Primary
    Failover LAN Interface: N/A - Serial-based failover enabled
    Unit Poll frequency 15 seconds, holdtime 45 seconds
    Interface Poll frequency 15 seconds
    Interface Policy 1
    Monitored Interfaces 2 of 250 maximum
    Version: Ours 7.0(8), Mate 7.0(8)
    Last Failover at: 02:52:05 UTC Mar 10 2010
            This host: Primary - Standby Ready
                    Active time: 0 (sec)
                    Interface outside (x.x.x.165): Normal
                    Interface inside (y.y.y.3): Normal
            Other host: Secondary - Active
                    Active time: 897045 (sec)
                    Interface outside (x.x.x.164): Normal
                    Interface inside (y.y.y.4): Normal
    Stateful Failover Logical Update Statistics
            Link : Unconfigured.

    Output of show failover on Secondary:
    Failover On
    Cable status: Normal
    Failover unit Secondary
    Failover LAN Interface: N/A - Serial-based failover enabled
    Unit Poll frequency 15 seconds, holdtime 45 seconds
    Interface Poll frequency 15 seconds
    Interface Policy 1
    Monitored Interfaces 2 of 250 maximum
    Version: Ours 7.0(8), Mate 7.0(8)
    Last Failover at: 02:03:04 UTC Feb 28 2010
            This host: Secondary - Active
                    Active time: 896925 (sec)
                    Interface outside (x.x.x.164): Normal
                    Interface inside (y.y.y.4): Normal
            Other host: Primary - Standby Ready
                    Active time: 0 (sec)
                    Interface outside (x.x.x.165): Normal
                    Interface inside (y.y.y.3): Normal
    Stateful Failover Logical Update Statistics
            Link : Unconfigured.

    I'm seeing the following in my syslog:
    Mar 10 03:05:00 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command.
    Mar 10 03:05:09 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reload-standby' command.
    Mar 10 03:05:12 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed.
    Mar 10 03:05:12 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed.
    Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed.
    Mar 10 03:06:09 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down.
    Mar 10 03:06:09 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed.
    Mar 10 03:06:10 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up.
    Mar 10 03:06:10 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed.
    Mar 10 03:06:23 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready.
    Mar 10 03:06:23 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready.
    Mar 10 03:06:24 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready.
    Mar 10 03:07:05 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command.
    Mar 10 03:07:31 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover active' command.
    Mar 10 03:08:04 fw1 %PIX-5-611103: User logged out: Uname: enable_1
    Mar 10 03:08:04 fw1 %PIX-6-315011: SSH session from admin1_int on interface inside for user "pix" terminated normally
    Mar 10 03:08:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=20,my=Active,peer=Failed.
    Mar 10 03:08:39 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Failed.
    Mar 10 03:09:10 fw1 %PIX-6-605005: Login permitted from admin1_int/36891 to inside:192.168.4.4/ssh for user "pix"
    Mar 10 03:09:23 fw1 %PIX-5-111008: User 'enable_15' executed the 'failover reset' command.
    Mar 10 03:09:38 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=0,my=Active,peer=Failed.
    Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is down.
    Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=401,op=1,my=Active,peer=Failed.
    Mar 10 03:09:39 fw1 %PIX-6-720024: (VPN-Secondary) HA status callback: Control channel is up.
    Mar 10 03:09:39 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=411,op=2,my=Active,peer=Failed.
    Mar 10 03:09:52 fw1 %PIX-6-720032: (VPN-Secondary) HA status callback: id=3,seq=200,grp=0,event=406,op=80,my=Active,peer=Standby Ready.
    Mar 10 03:09:52 fw1 %PIX-6-720028: (VPN-Secondary) HA status callback: Peer state Standby Ready.
    Mar 10 03:09:53 fw2 %PIX-6-720027: (VPN-Primary) HA status callback: My state Standby Ready.

    I'm not exactly sure how to interpret that syslog data. The Primary doesn't even seem to try to become Active. When I reload the individual units separately, my connections are retained, so it doesn't seem like I have a real hardware failure. Is there something I can query (IOS or SNMP) to check for hardware issues? Any thoughts? My IOS-fu is weak. Thanks for any help you might provide, Aaron
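    One hedged reading of the syslog above, offered as a note rather than a confirmed answer: the 'failover active' at 03:07:31 was entered on fw1, which the output identifies as the Secondary unit that is already active, so it would have changed nothing. Assuming CLI access to each unit, the usual 7.x failback is issued from the appropriate side:

        ! on the currently active Secondary unit (fw1), force it to standby:
        fw1# no failover active
        ! ...or equivalently, on the standby Primary unit (fw2), force it active:
        fw2# failover active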

    Read the article

  • Create an Alias Directory inside a Virtual Host

    - by Praveen Kumar
    First, let me say I asked this question on StackOverflow and thought I could get more replies here. I checked here, here, here, here, here, and here before asking this question. I guess my search skills are weak. I am using WampServer version 2.2e. What I need is a virtual path inside a virtual host. Here are the two hosts I have.

    Primary Virtual Host (Localhost):
    NameVirtualHost *:80
    <VirtualHost *:80>
        ServerName localhost
        DocumentRoot "C:/Wamp/www"
    </VirtualHost>

    My Apps Virtual Host:
    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

    My Blog Virtual Host:
    <VirtualHost *:80>
        ServerName blog.praveen-kumar.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        ErrorLog "logs/praveen-kumar-ptrl-error.log"
        CustomLog "logs/praveen-kumar-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

    My requirement now is that http://apps.ptrl/blog/ and http://blog.praveen-kumar.ptrl/ should serve the same directory. One thing I thought of was moving the blog folder inside the apps folder, but it is connected with Git and other stuff is there, so moving the folder is not possible. So I thought of adding an alias to the VirtualHost this way:

    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
        # The alias to the blog!
        Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
    </VirtualHost>

    But when I tried to access http://apps.ptrl/blog, I got an Error 403 Forbidden page. Am I doing the right thing? If you need to look at the access and error logs, here they are:

    # Access Log
    127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /blog HTTP/1.1" 403 206
    127.0.0.1 - - [14/Oct/2012:09:53:11 +0530] "GET /favicon.ico HTTP/1.1" 404 209
    127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET / HTTP/1.1" 200 6935
    127.0.0.1 - - [14/Oct/2012:09:53:53 +0530] "GET /app/blog/thumb.png HTTP/1.1" 404 216

    # Error Log
    [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/Wamp/vhosts/ptrl/praveen-kumar/blog
    [Sun Oct 14 09:53:11 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/favicon.ico
    [Sun Oct 14 09:53:53 2012] [error] [client 127.0.0.1] File does not exist: C:/Wamp/vhosts/ptrl/apps/app/blog, referer: http://apps.ptrl/

    Waiting eagerly for some help. I am ready to provide more info if needed.

    Update #1: Changed VirtualHost:
    <VirtualHost *:80>
        ServerName apps.ptrl
        DocumentRoot "C:/Wamp/vhosts/ptrl/apps"
        ErrorLog "logs/apps-ptrl-error.log"
        CustomLog "logs/apps-ptrl-access.log" common
        # The alias to the blog!
        Alias /blog "C:/Wamp/vhosts/ptrl/praveen-kumar/blog"
        <Directory "C:/Wamp/vhosts/ptrl/praveen-kumar/blog">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        <Directory "C:/Wamp/vhosts/ptrl/apps">
            allow from all
            order allow,deny
            AllowOverride All
        </Directory>
        DirectoryIndex index.html index.htm index.php
    </VirtualHost>

    The issue now: I am able to access the site, and the physical links are working, i.e., I can open http://apps.ptrl/blog/index.php, but not http://apps.ptrl/blog/view-1.ptf, which gets translated to http://apps.ptrl/blog/index.php?page=view&id=1. Any solutions?
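    One hedged guess, since the pretty URLs work on the blog host but not under the alias: a per-directory mod_rewrite rule (presumably in the blog's .htaccess) rewrites relative to /, while under the alias the URL prefix is /blog. A sketch of what that .htaccess might look like, with the rewrite rule inferred from the view-1.ptf mapping described above:

        # Hypothetical .htaccess in C:/Wamp/vhosts/ptrl/praveen-kumar/blog
        <IfModule mod_rewrite.c>
            RewriteEngine On
            # Under apps.ptrl this directory is served from /blog, so the
            # per-directory rewrites need that base; on blog.praveen-kumar.ptrl
            # the base would be / instead.
            RewriteBase /blog
            RewriteRule ^view-([0-9]+)\.ptf$ index.php?page=view&id=$1 [L,QSA]
        </IfModule>

    Since a single RewriteBase cannot fit both hosts at once, testing this under the alias first would at least confirm whether the base is the actual problem.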

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. "The new phone books are here! The new phone books are here! I'm a somebody!" - Steve Martin in The Jerk

    It is a Dell Precision M4500 with an Intel Core i7 at 2.8 GHz running 64-bit Windows 7, with a 15.6" widescreen, 8 GB RAM, and a 256 GB SSD. For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I'm coming from, it's a really nice step forward. I won't even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago. Let's just say that things have improved. One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees. Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week. But that's a different story for a different time.

    The main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too. This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured. Keep in mind as you look through this list that I play many roles in our company. My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team. So, without further ado, here are the tools and some comments about why I use them:

    Virtual Clone Drive: Easily mount an ISO image as a DVD drive. This is particularly handy when you are downloading disk images from Microsoft for your tools.

    SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2. Developer Edition has all the features of Enterprise Edition, but is intended for development use.

    SQL Server 2005 Developer Edition (BIDS ONLY): The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server. For some reason, you can't use BIDS from 2008 to write reports for a 2005 server. There is some different format, and when you open 2005 reports in 2008 BIDS, it forces you to upgrade them, and they can no longer be uploaded to a 2005 server. Hopefully Microsoft will fix this soon, in some manner similar to the way Visual Studio now allows you to pick which version of the .NET Framework you are coding against.

    Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it. I've used a version of Visual Studio going all the way back to VB 6.0 and Visual InterDev.

    Vault Professional Client: Several years ago we replaced Visual SourceSafe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it. It is very reliable with low overhead - perfect for a small to medium size development team. And being a small ISV, their support is exceptional.
    Red-Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate's first release made me love it even more. SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else's code when their indenting was nonexistent, or worse, irrational. SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases. SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT. And the newest tool we are embracing: SQL Source Control. I blogged about it here (and here, and here) last December. This is really going to help us keep each developer's copy of the database in sync with one another.

    Fiddler: Helps you watch the whole traffic stream on web visits. Haven't used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs. Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.

    Funduc Search & Replace: Find any string anywhere in a mound of source code really, really fast. Does RegEx searches, if you understand that foreign language. Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files). Provides in-context preview of the search results. Fantastic tool, and a bargain price.

    SciTE: SciTE is a Scintilla-based text editor and it is a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files. It has language-specific syntax highlighting. I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems. Extremely handy are the options to View End of Line and View Whitespace. Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs? SciTE will quickly make that visible.

    Infragistics Controls: We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls. We will likely be implementing the Hierarchical Data Grid soon. Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery.

    WinZip - WITH Command-Line add-in: The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs. Our versioned Build packages are zip files.

    XML Notepad: Haven't used this a lot myself, but one of my team really likes it for examining large XML files.

    LINQPad: Again, haven't used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.

    SQL Sentry Plan Explorer: SQL Server Show Plan on steroids. Great for helping you focus on the parts of a large query that are of most importance. Also great for just compressing the graphical plan into a more readable layout.

    Araxis Merge: A great DIFF and Merge tool. SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge.
For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases. Idera SQL Admin Toolset A great collection of tools including a password checker to help analyze your SQL Server for weak user passwords, a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files. Idera SQL Job Manager This free tool provides a nice calendar view of SQL Server Job Schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view on potential resource conflicts across multiple instances of SQL Server. DBFViewer 2000 I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate them to SQL Server. Balsamiq Mockups We are still in evaluation-mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn which definitely has some psychological benefits when communicating to users, too. FeedDemon I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article. Synergy+ In my office, I run four monitors across two computers all with one mouse and keyboard.  Synergy is the magic software that makes this work. TweetDeck I’m not the most active Tweeter in the world, but when I want to check-in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns setup to make it easy to monitor #sqlpass, #PASSProfDev, and short term events like #sqlsat68.   Whew!  That’s a lot.  No wonder it took me a couple of days to get everything setup the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important word here is "knowledge". Nobody ever said "every piece of code". I will try to give some examples of misusing the DRY principle.

    Code Repetitions by Coincidence

    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just imagine some lines of code where the developer thinks, "I already wrote something similar." Then he takes the original code, puts it into a public method - even worse, into a base class where none had been before - adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer can't even give a meaningful name, because its contents aren't anything specific - it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically, this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. The fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say what will happen, because you can't understand what the code is actually doing. So better not touch it anymore. Maintainability just died. Of course, this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. A change in the single common place will have effects anywhere in the system - a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have a system with a weak design. Now you have a system which simply can't be maintained anymore.

    So what can you do about it? When making code reusable, always identify the genuinely reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague: if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you do identify the piece of knowledge, don't forget to carefully choose the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, move some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused. If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.

    Requirements Repetitions by Coincidence

    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own rules at any time. (Assume that this similarity is actually by coincidence and not by political membership - there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or - more likely - what happens if users in one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bug fix. It is hard to recognize requirements that are repeated by coincidence, and then there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. The independent knowledge - "Tax Rules in Timbuktu", or wherever - should be as pure as possible, without much overhead and without anything that does not belong to it. Then you can write every independent requirement short and clean.

    DRYing try-catch and using Blocks

    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it:

        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different, line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }

    The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods throughout the whole system. Examples of evil DRYing in this situation:

    Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the close coupling of the formerly independent implementations is the strongest, but the readability of the code suffers too, as does the use of a parameter to control the logic.

    Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as in any "Code Repetition by Coincidence" situation.

    Force a base class onto all the classes where this pattern appears and use the template method pattern. It has the same readability and maintainability problems as above, but is additionally complex and tightly coupled because of the base class. I would call this "Inheritance by Coincidence", which will not lead to great software design.

    What can you do about it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last, but not worst, option is to keep the repetitions. Some patterns of code simply must be maintained consistently; there is nothing we can do about it - and no reason to make the code unreadable.

    Conclusion

    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons; this requires quite a bit of flexibility and creativity to keep the code simple and maintainable. It's not the fault of the principle - it is the problem of blindly applying a principle without understanding the problem it is meant to solve. The result is usually much worse than ignoring the principle.
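    To make the "pass a delegate" option above concrete, here is a minimal Java sketch (the article's examples are C#; the names here are hypothetical, and this only illustrates the trade-off the author warns about, not a recommendation):

        import java.util.function.Supplier;

        // Sketch of the "method that takes a delegate" option: the repeated
        // try/finally structure lives in one place, and each call site passes
        // in its one differing line of code.
        public final class ServiceCallTemplate {

            public static <T> T run(Supplier<T> specificLineOfCode) {
                try {
                    return specificLineOfCode.get(); // the only part that varies
                } catch (RuntimeException e) {
                    // trivial common handling
                    throw e;
                } finally {
                    // trivial finally block
                }
            }

            public static void main(String[] args) {
                // A call site now reads indirectly - which is exactly the
                // readability cost described above:
                String result = ServiceCallTemplate.run(() -> "do the specific thing");
                System.out.println(result);
            }
        }

    Note how the call site no longer reads as a plain statement; every formerly independent service method now depends on ServiceCallTemplate, which is the coupling problem the article describes.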

    Read the article

  • Parallel Classloading Revisited: Fully Concurrent Loading

    - by davidholmes
    Java 7 introduced support for parallel classloading. A description of that project and its goals can be found here: http://openjdk.java.net/groups/core-libs/ClassLoaderProposal.html

    The solution for parallel classloading was to add to each class loader a ConcurrentHashMap, referenced through a new field, parallelLockMap. This contains a mapping from class names to Objects to use as a classloading lock for that class name. It was then used in the following way:

        protected Class loadClass(String name, boolean resolve) throws ClassNotFoundException {
            synchronized (getClassLoadingLock(name)) {
                // First, check if the class has already been loaded
                Class c = findLoadedClass(name);
                if (c == null) {
                    long t0 = System.nanoTime();
                    try {
                        if (parent != null) {
                            c = parent.loadClass(name, false);
                        } else {
                            c = findBootstrapClassOrNull(name);
                        }
                    } catch (ClassNotFoundException e) {
                        // ClassNotFoundException thrown if class not found
                        // from the non-null parent class loader
                    }
                    if (c == null) {
                        // If still not found, then invoke findClass in order
                        // to find the class.
                        long t1 = System.nanoTime();
                        c = findClass(name);
                        // this is the defining class loader; record the stats
                        sun.misc.PerfCounter.getParentDelegationTime().addTime(t1 - t0);
                        sun.misc.PerfCounter.getFindClassTime().addElapsedTimeFrom(t1);
                        sun.misc.PerfCounter.getFindClasses().increment();
                    }
                }
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            }
        }

    where getClassLoadingLock simply does:

        protected Object getClassLoadingLock(String className) {
            Object lock = this;
            if (parallelLockMap != null) {
                Object newLock = new Object();
                lock = parallelLockMap.putIfAbsent(className, newLock);
                if (lock == null) {
                    lock = newLock;
                }
            }
            return lock;
        }

    This approach is very inefficient in terms of the space used per map and the number of maps. First, there is a map per classloader. As per the code above, under normal delegation the current classloader creates and acquires a lock for the given class, checks if it is already loaded, then asks its parent to load it; the parent in turn creates another lock in its own map, checks if the class is already loaded, and then delegates to its parent, and so on until the boot loader is invoked, for which there is no map and no lock. So even in the simplest of applications, you will have two maps (in the system and extensions loaders) for every class that has to be loaded transitively from the application's main class. If you knew beforehand which loader would actually load the class, the locking would only need to be performed in that loader. As it stands, the locking is completely unnecessary for all classes loaded by the boot loader. Secondly, once loading has completed and findClass will return the class, the lock and the map entry are completely unnecessary. But as it stands, the lock objects and their associated entries are never removed from the map. It is worth understanding exactly what the locking is intended to achieve, as this will help us understand potential remedies to the above inefficiencies. Given this is the support for parallel classloading, the class loader itself is unlikely to need to guard against concurrent load attempts - and if that were not the case, it is likely that the classloader would need a different means to protect itself rather than a lock per class.
    Ultimately, when a class file is located and the class has to be loaded, defineClass is called, which calls into the VM - the VM does not require any locking at the Java level and uses its own mutexes to guard its internal data structures (such as the system dictionary). The classloader locking is primarily needed to address the following situation: if two threads attempt to load the same class, one will initiate the request through the appropriate loader and eventually cause defineClass to be invoked. Meanwhile, the second attempt will block trying to acquire the lock. Once the class is loaded, the first thread will release the lock, allowing the second to acquire it. The second thread then sees that the class has now been loaded and will return that class. Neither thread can tell which did the loading, and both continue successfully. Consider if no lock were acquired in the classloader. Both threads would eventually locate the file for the class, read in the bytecodes, and call defineClass to actually load the class. In this case the first to call defineClass will succeed, while the second will encounter an exception due to an attempted redefinition of an existing class. It is solely for this error condition that the lock has to be used. (Note that parallel capable classloaders should not need to be doing old deadlock-avoidance tricks like doing a wait() on the lock object!)

    There are a number of obvious things we can try to solve this problem, and they basically take three forms:

    1. Remove the need for locking. This might be achieved by having a new version of defineClass which acts like defineClassIfNotPresent - simply returning an existing Class rather than triggering an exception.
    2. Increase the coarseness of locking to reduce the number of lock objects and/or maps. For example, use a single shared lockMap instead of a per-loader lockMap.
    3. Reduce the lifetime of lock objects so that entries are removed from the map when no longer needed (e.g. remove after loading, use weak references to the lock objects and clean up the map periodically).

    There are pros and cons to each of these approaches. Unfortunately, a significant "con" is that the API introduced in Java 7 to support parallel classloading has essentially mandated that these locks do in fact exist, and that they are accessible to application code (indirectly through the classloader if it exposes them - which a custom loader might do - and regardless, they are accessible to custom classloaders). So while we can reason that we could do parallel classloading with no locking, we cannot implement this without breaking the specification for parallel classloading that was put in place for Java 7. Similarly, we might reason that we can remove a mapping (and the lock object) because the class is already loaded, but this would again violate the specification, because it can be reasoned that the following assertion should hold true:

        Object lock1 = loader.getClassLoadingLock(name);
        loader.loadClass(name);
        Object lock2 = loader.getClassLoadingLock(name);
        assert lock1 == lock2;

    Without modifying the specification, or at least doing some creative wordsmithing on it, options 1 and 3 are precluded.
    Even then there are caveats. For example, if findLoadedClass is not atomic with respect to defineClass, then you can have concurrent calls to findLoadedClass from different threads, and that could be expensive. (This is also an argument against moving findLoadedClass outside the locked region - it may speed up the common case where the class is already loaded, but the cost of re-executing after acquiring the lock could be prohibitive.) Even option 2 might need some wordsmithing on the specification, because the specification for getClassLoadingLock states that it "returns a dedicated object associated with the specified class name". The question is, what does "dedicated" mean here? Does it mean unique, in the sense that the returned object is only associated with the given class in the current loader? Or can the object actually guard the loading of multiple classes, possibly across different class loaders? So it seems that changing the specification will be inevitable if we wish to do something here. In that case, let's go for something that more cleanly defines what we want to be doing: fully concurrent class-loading. Note: defineClassIfNotPresent is already implemented in the VM as find_or_define_class. It is only used if the AllowParallelDefineClass flag is set. This gives us an easy hook into existing VM mechanics.

    Proposal: Fully Concurrent ClassLoaders

    The proposal is that we expand on the notion of a parallel capable class loader and define a "fully concurrent parallel capable class loader", or fully concurrent loader for short. A fully concurrent loader uses no synchronization in loadClass, and the VM uses the "parallel define class" mechanism. For a fully concurrent loader, getClassLoadingLock() can return null (or perhaps not - it doesn't matter, as we won't use the result anyway). At present we have not made any changes to this method. All the parallel capable JDK classloaders become fully concurrent loaders. This doesn't require any code redesign, as none of the mechanisms implemented rely on the per-name locking provided by the parallelLockMap. This seems to give us a path to remove all locking at the Java level during classloading, while retaining full compatibility with Java 7 parallel capable loaders. Fully concurrent loaders will still encounter the performance penalty associated with concurrent attempts to find and prepare a class's bytecode for definition by the VM. What this penalty is depends on the number of concurrent load attempts possible (a function of the number of threads and the application logic, and dependent on the number of processors), and the costs associated with finding and preparing the bytecodes. This obviously has to be measured across a range of applications.

    Preliminary webrevs:
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.hotspot/
    http://cr.openjdk.java.net/~dholmes/concurrent-loaders/webrev.jdk/

    Please direct all comments to the mailing list [email protected].
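    As an aside for readers who want to experiment, here is a purely illustrative Java sketch of option 2 above (a single shared lock map) - this is not the JDK's implementation or the proposal's code, and it uses Java 8's computeIfAbsent for brevity:

        import java.util.concurrent.ConcurrentHashMap;

        // Illustrative only: one shared lock map for all loaders, keyed by
        // loader identity plus class name, so independent loaders still get
        // independent locks for the same class name.
        final class SharedClassLoadingLocks {

            private static final ConcurrentHashMap<String, Object> LOCKS =
                    new ConcurrentHashMap<>();

            static Object lockFor(ClassLoader loader, String className) {
                String key = System.identityHashCode(loader) + ":" + className;
                return LOCKS.computeIfAbsent(key, k -> new Object());
            }
        }

    Note that this sketch still inherits the lifetime problem described earlier - entries are never removed - which is exactly what option 3 (weak references or periodic cleanup) is meant to address.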

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit

    Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role in delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.”

    Construction, Workers

    For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane.
    “But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units:

    - Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects.
    - Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges.
    - Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing.
    - Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients.

    (Photo caption: Skanska's international portfolio includes construction of the new Meadowlands Stadium.)

    Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane.

    Workforce Worldwide

    But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market; however, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management Cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well.
    We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that provides the opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.”

    Collaboration to Scale

    Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed.
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet of 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to

    - find out the password of any one user?
    - find out the password of a given user?
    - find out the passwords of all users?
    - find a way to log in as one of the users?
    - find a way to log in as a specific user?

    How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter?

    Here is my current understanding, after some googling. Please correct me in the answers if I misunderstood something.

    - If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones).
    - If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).
    - To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, about 60 possible characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, and up to 11 characters in a year. (It takes 60 times as long for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords.
    - To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157).
    - There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.

    In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
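    To make the salting mechanics in the question concrete, here is a minimal Java sketch using the standard java.security APIs. It is illustrative only: it shows prefix salting (hashing salt || password), not an endorsement of SHA-1 for password storage.

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;

        public class SaltedSha1Demo {

            // A random per-user salt defeats precomputed (rainbow) tables,
            // because the attacker can no longer share work across users.
            static byte[] newSalt() {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return salt;
            }

            // Prefix salting as discussed above: hash(salt || password).
            static byte[] saltedHash(byte[] salt, String password)
                    throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("SHA-1");
                md.update(salt);
                md.update(password.getBytes(StandardCharsets.UTF_8));
                return md.digest();
            }
        }

    The salt is stored alongside the hash; it is not a secret, its job is only to make each user's hash unique so that precomputation and cross-user attacks stop paying off.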

    Read the article

  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database, and the system needs to stay up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database. There is a JavaScript loop which fires every second, getting the values through ajax. This works fine, but I think it's very inefficient.

    The problem

    1. An ajax script loops every second, requesting data from PHP (on the server, this has to load the PHP interpreter).
    2. The PHP file has to get the data and format it correctly (PHP has to make a connection with the MySQL database).
    3. Work with the database (reads, never writes).
    4. Format the data so it can be sent.
    5. Send the data back to the browser (then close the database connection, and shut down the PHP interpreter).
    6. Finally, the browser has to read these values and update the various HTML parts.

    With this approach it has to load the interpreter and make a DB connection every second. I was thinking of a way to make this more efficient, maybe using a threaded approach.

    Threaded approach

    - Do a post to the PHP when you enter the page, and keep the connection alive.
    - In PHP, load the interpreter only once, and make a connection to the DB only once.
    - Every second, send an ajax response to the JavaScript listener.
    - The JavaScript listener then just changes values as the responses from PHP arrive.

    I think this approach would be a great optimization for the server load and overall performance. But I can spot some weak points in the system, and I need some help with these.

    Problems with the approach

    - PHP execution time limit: I don't think PHP is designed for such a setup. I know there is a time limit on PHP script execution. I don't know if an everlasting loop in PHP will cause any serious CPU/memory problems.
    - Sending an ajax request without breaking: I don't know if it is possible to have just one ajax post action that stays open and keeps accepting data.
    - User exits the page: What will happen when the user exits the page and the PHP script is still going? Will it go on forever?
    - Security issues: So far I can't think of any security issues. Almost every setup you use has some security issues. Maybe there are some with this solution I do not know of.

    Open to other solutions

    I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this, I definitely want to hear it. Maybe some other script is better suited for an ongoing runtime. I only know PHP and Java, so any suggestions are welcome and I am willing to dig through. I know there are things like Perl, Python, etcetera that are used for this type of threaded work, but I don't know which one is best suited.

    When using another script

    If the best way is to go with another type of script, like Perl or Python, I do have some criteria:

    - The script has to be accessible via ajax post.
    - If it accepts some kind of JSON encode/decode, that would be nice.
    - The script has to be able to access the session file. This is essential, because I need to know if the user is logged in.
    - The script has to be able to easily talk to MySQL.

    All comments are welcome, and I hope this question is helpful to others also. Cheers!
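    Since the asker mentions knowing Java, here is a minimal long-polling sketch with the standard Servlet API. It is a sketch under assumptions (the readCountersAsJson stub and the /counters path are hypothetical), not a drop-in fix: the browser sends one request, and the server holds it open until the data changes, instead of the client re-polling every second.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        @WebServlet("/counters")
        public class CounterLongPollServlet extends HttpServlet {

            // Stand-in for the real database read; an assumption for this sketch.
            private String readCountersAsJson() {
                return "{\"unread\": 3}";
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                String last = req.getParameter("last"); // value the client already has
                for (int i = 0; i < 30; i++) {          // hold the connection up to ~30s
                    String current = readCountersAsJson();
                    if (!current.equals(last)) {
                        resp.setContentType("application/json");
                        resp.getWriter().write(current);
                        return;
                    }
                    try { Thread.sleep(1000); } catch (InterruptedException e) { break; }
                }
                // Nothing changed within the window; the client simply reconnects.
                resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
            }
        }

    Note the trade-off: this ties up one server thread per connected client (the Servlet 3.0 async API avoids that), but it already moves the once-a-second polling from the browser/PHP boundary to the server side, so the interpreter startup and DB connection costs described above are paid far less often.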

    Read the article

  • Missing a constant on load.. how can i get around this? (Rails::Plugin::OpenID)

    - by Chris Kimpton
    I have a Rails 2 project that I am trying to upgrade to Rails 3, but I'm getting some issues with bundler. When I run "rake", it runs the tests just fine. But when I run "bundle exec rake" it fails to find a constant. The error is this:

    /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/dependencies.rb:131:in `const_missing': uninitialized constant Rails::Plugin::OpenID (NameError)
    from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/vendor/plugins/open_id_authentication/init.rb:16:in `evaluate_init_rb'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `call'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `evaluate_method'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:166:in `call'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `each'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:276:in `run_callbacks'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `run_prepare_callbacks'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:631:in `prepare_dispatcher'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:185:in `process'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `run'
    from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/config/environment.rb:9
    from ./test/test_helper.rb:2:in `require'
    from ./test/test_helper.rb:2

    I have these gems installed:

    $ gem list
    *** LOCAL GEMS ***
    actionmailer (2.3.9)
    actionpack (2.3.9)
    activerecord (2.3.9)
    activeresource (2.3.9)
    activesupport (2.3.9)
    authlogic (2.1.3)
    bundler (1.0.7)
    gravtastic (2.2.0)
    linecache (0.43)
    mocha (0.9.10)
    newrelic_rpm (2.13.4)
    parseexcel (0.5.2)
    rack (1.1.0)
    rack-openid (1.1.1)
    rails (2.3.9)
    rake (0.8.7)
    ruby-debug-base (0.10.5.jb2, 0.10.4)
    ruby-debug-ide (0.4.15)
    ruby-openid (2.1.8, 2.1.7, 2.0.4)
    sqlite3-ruby (1.3.2)

    The bundler Gemfile is as follows:

    source 'http://rubygems.org'
    #gem 'rails', '3.0.3'
    gem "rails", "2.3.9"
    gem "activesupport", "2.3.9"
    gem "ruby-openid", "2.1.7", :require => "openid"
    #gem "authlogic-oid", "1.0.4"
    # Bundle edge Rails instead:
    # gem 'rails', :git => 'git://github.com/rails/rails.git'
    gem 'sqlite3-ruby', :require => 'sqlite3'
    gem "authlogic", "= 2.1.3"
    gem "newrelic_rpm"
    # gem "facebooker"
    gem "parseexcel"
    gem 'gravtastic', '= 2.2.0'
    gem "rack-openid", '=1.1.1', :require => 'rack/openid' # not sure what this does...
    gem "mocha"

    I have these plugins installed:

    2dc_jqgrid
    authlogic_openid
    open_id_authentication
    squirrel

    I see these similar questions: "Missing a constant on load.. how can i get around this?" and "Requiring gem in Rails 3 Controller failing with 'Constant Missing'". But their solutions don't seem to work for my situation. I am guessing the issue is around the plugins, but my ruby-foo is too weak. Thanks in advance, Chris

    Read the article

  • One url to image, few results

    - by Misiur
    Hi! I'm trying to show programmers that some captchas are too weak, and I'm breaking them. Now I've got something like this:

    Function:

    <?php
    function cbreak($image)
    {
        $info = getimagesize($image);
        $width = $info[0];
        $height = $info[1];
        $img = imagecreatefromgif($image);
        $map = array();
        for($y=0; $y<$height; $y++) {
            for($x=0; $x<$width; $x++) {
                $color = imagecolorsforindex($img, imagecolorat($img, $x, $y));
                $map[$x][$y] = ($color['red'] + $color['blue'] + $color['green'] > 750) ? TRUE : FALSE;
            }
        }
        echo '<pre>';
        for($y=0; $y<$height; $y++) {
            for($x=0; $x<$width; $x++) {
                echo ($map[$x][$y] == TRUE) ? 'X' : '-';
            }
            echo '<br>';
        }
        echo '</pre>';
        $sum = '';
        for($x=0; $x<$width; $x++) {
            $count = 0;
            for($y=0; $y<$height; $y++) {
                if($map[$x][$y] == TRUE) $count++;
            }
            $sum .= ($count == 0) ? 'X' : $count;
        }
        $sum = preg_replace('#X+#', 'X', $sum);
        $sum = trim($sum, 'X');
        $letters = explode('X', $sum);
        $patterns = array( /* Still not here */ );
        $token = '';
        for($i=0; $i<count($letters); $i++) {
            $token .= $patterns[$letters[$i]];
        }
        echo $token;
    }
    ?>

    Action:

    <?php
    $cl = curl_init("http://www.takeagift.pl/rejestracja");
    curl_setopt($cl, CURLOPT_RETURNTRANSFER, 1);
    $r = curl_exec($cl);
    $pattern = "/src=[\"'].*[\"']?/i";
    preg_match_all($pattern, $r, $images);
    $c = array();
    for($i=0; $i<sizeof($images[0]); $i++) {
        if(strstr($images[0][$i], 'captcha') !== false) {
            $c = $images[0][$i];
        }
    }
    $s1 = substr($c, 0, -8);
    echo $s1."<br />";
    $s = substr($s1, 5, -1);
    echo $s."<br />";
    curl_close($cl);
    ?>
    <img src="http://www.takeagift.pl/includes/modules/captcha.php?1270900968" /><br />
    <img src="http://www.takeagift.pl/includes/modules/captcha.php?1270900968" /><br />
    <img src="http://www.takeagift.pl/includes/modules/captcha.php?1270900968" /><br />
    <img src="http://www.takeagift.pl/includes/modules/captcha.php?1270900968" /><br />
    <img src="http://www.takeagift.pl/includes/modules/captcha.php?1270900968" /><br />
    <?php
    include('cb.php');
    cbreak("http://www.takeagift.pl/includes/modules/captcha.php?1270900968");
    ?>

    Don't look at the preg_match - I still haven't learned regexp. So as you can see, the links are the same (captcha.php?1270900968), but the results are not. Help me, please (I'm not doing this to spam the portal).

    Read the article

  • Fluent Nhibernate - how do i specify table schemas when auto generating tables in SQL CE 4

    - by daffers
    I am using SQL CE as a database for running local and CI integration tests (normally our site runs on regular SQL Server). We are using Fluent NHibernate for our mapping, and have it create our schema from our map classes. There are only two classes, with a one-to-many relationship between them. In our real database we use a non-dbo schema. The code would not work against this real database at first, until I added schema names to the Table() methods. However, doing this broke the unit tests with the error:

    System.Data.SqlServerCe.SqlCeException : There was an error parsing the query. [ Token line number = 1,Token line offset = 26,Token in error = User ]

    These are the classes and associated map classes (simplified, of course):

    public class AffiliateApplicationRecord
    {
        public virtual int Id { get; private set; }
        public virtual string CompanyName { get; set; }
        public virtual UserRecord KeyContact { get; private set; }

        public AffiliateApplicationRecord()
        {
            DateReceived = DateTime.Now;
        }

        public virtual void AddKeyContact(UserRecord keyContactUser)
        {
            keyContactUser.Affilates.Add(this);
            KeyContact = keyContactUser;
        }
    }

    public class AffiliateApplicationRecordMap : ClassMap<AffiliateApplicationRecord>
    {
        public AffiliateApplicationRecordMap()
        {
            Schema("myschema");
            Table("Partner");
            Id(x => x.Id).GeneratedBy.Identity();
            Map(x => x.CompanyName, "Name");
            References(x => x.KeyContact)
                .Cascade.All()
                .LazyLoad(Laziness.False)
                .Column("UserID");
        }
    }

    public class UserRecord
    {
        public UserRecord()
        {
            Affilates = new List<AffiliateApplicationRecord>();
        }

        public virtual int Id { get; private set; }
        public virtual string Forename { get; set; }
        public virtual IList<AffiliateApplicationRecord> Affilates { get; set; }
    }

    public class UserRecordMap : ClassMap<UserRecord>
    {
        public UserRecordMap()
        {
            Schema("myschema");
            Table("[User]"); // square brackets required, as User is a reserved word
            Id(x => x.Id).GeneratedBy.Identity();
            Map(x => x.Forename);
            HasMany(x => x.Affilates);
        }
    }

    And here is the fluent configuration I am using:

    public static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(
                MsSqlCeConfiguration.Standard
                    .Dialect<MsSqlCe40Dialect>()
                    .ConnectionString(ConnectionString)
                    .DefaultSchema("myschema"))
            .Mappings(m => m.FluentMappings.AddFromAssembly(typeof(AffiliateApplicationRecord).Assembly))
            .ExposeConfiguration(config => new SchemaExport(config).Create(false, true))
            // This is included to deal with a SQL CE issue:
            // http://stackoverflow.com/questions/2361730/assertionfailure-null-identifier-fluentnh-sqlserverce
            .ExposeConfiguration(x => x.SetProperty("connection.release_mode", "on_close"))
            .BuildSessionFactory();
    }

    The documentation on this aspect of Fluent is pretty weak, so any help would be appreciated.

    Read the article

  • My xcode mapview wont work when combined with webview

    - by user1715702
    I have a small problem with my mapview. When combining the mapview code with the code for the webview, the app does not zoom in on my position correctly (it just gives me a world overview where I'm supposed to be somewhere in California - wish I was), and it doesn't show the pin that I have placed at a specific location. These things work perfectly fine as long as the code does not contain anything concerning the webview. Below you'll find the code. If someone can help me solve this, I would be so thankful!

    ViewController.h

    #import <UIKit/UIKit.h>
    #import <MapKit/MapKit.h>
    #define METERS_PER_MILE 1609.344

    @interface ViewController : UIViewController <MKMapViewDelegate> {
        BOOL _doneInitialZoom;
        IBOutlet UIWebView *webView;
    }
    @property (weak, nonatomic) IBOutlet MKMapView *_mapView;
    @property (nonatomic, retain) UIWebView *webView;
    @end

    ViewController.m

    #import "ViewController.h"
    #import "NewClass.h"

    @interface ViewController ()
    @end

    @implementation ViewController
    @synthesize _mapView;
    @synthesize webView;

    - (void)viewDidLoad
    {
        [super viewDidLoad];
        [_mapView setMapType:MKMapTypeStandard];
        [_mapView setZoomEnabled:YES];
        [_mapView setScrollEnabled:YES];
        MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
        region.center.latitude = 61.097557;
        region.center.longitude = 12.126545;
        region.span.latitudeDelta = 0.01f;
        region.span.longitudeDelta = 0.01f;
        [_mapView setRegion:region animated:YES];

        newClass *ann = [[newClass alloc] init];
        ann.title = @"Hjem";
        ann.subtitle = @"Her bor jeg";
        ann.coordinate = region.center;
        [_mapView addAnnotation:ann];

        NSString *urlAddress = @"http://google.no";
        // Create a URL object.
        NSURL *url = [NSURL URLWithString:urlAddress];
        // URL request object
        NSURLRequest *requestObj = [NSURLRequest requestWithURL:url];
        // Load the request in the UIWebView.
        [webView loadRequest:requestObj];
    }

    - (void)viewDidUnload
    {
        [self setWebView:nil];
        [self set_mapView:nil];
        [super viewDidUnload];
        // Release any retained subviews of the main view.
    }

    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
    {
        return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
    }

    - (void)viewWillAppear:(BOOL)animated
    {
        // 1
        CLLocationCoordinate2D zoomLocation;
        zoomLocation.latitude = 61.097557;
        zoomLocation.longitude = 12.126545;
        // 2
        MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(zoomLocation, 0.5*METERS_PER_MILE, 0.5*METERS_PER_MILE);
        // 3
        MKCoordinateRegion adjustedRegion = [_mapView regionThatFits:viewRegion];
        // 4
        [_mapView setRegion:adjustedRegion animated:YES];
    }
    @end

    NewClass.h

    #import <UIKit/UIKit.h>
    #import <MapKit/MKAnnotation.h>

    @interface newClass : NSObject {
        CLLocationCoordinate2D coordinate;
        NSString *title;
        NSString *subtitle;
    }
    @property (nonatomic, assign) CLLocationCoordinate2D coordinate;
    @property (nonatomic, copy) NSString *title;
    @property (nonatomic, copy) NSString *subtitle;
    @end

    NewClass.m

    #import "NewClass.h"

    @implementation newClass
    @synthesize coordinate, title, subtitle;
    @end

    Read the article

  • Java algorithm for normalizing audio

    - by Marty Pitt
    I'm trying to normalize an audio file of speech. Specifically, where the audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder and the peaks are quieter. I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak. I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code (full version here):

    @Override
    public void onAudioSamples(IAudioSamplesEvent event)
    {
        // get the raw audio bytes and adjust their values
        ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer();
        for (int i = 0; i < buffer.limit(); ++i)
            buffer.put(i, (short)(buffer.get(i) * mVolume));
        super.onAudioSamples(event);
    }

    Here, they modify the bytes in getAudioSamples() by a constant of mVolume. Building on this approach, I've attempted a normalisation that modifies the bytes in getAudioSamples() to a normalised value, considering the max/min in the file (see below for details). I have a simple filter to leave "silence" alone (i.e., anything below a threshold value). I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm or in the way I manipulate the bytes. However, I'm unsure of where to go next. Here's an abridged version of what I'm currently doing.

    Step 1: Find peaks in file. Reads the full audio file, and finds the highest and lowest values of buffer.get() for all AudioSamples:

    @Override
    public void onAudioSamples(IAudioSamplesEvent event)
    {
        IAudioSamples audioSamples = event.getAudioSamples();
        ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer();

        short min = Short.MAX_VALUE;
        short max = Short.MIN_VALUE;
        for (int i = 0; i < buffer.limit(); ++i)
        {
            short value = buffer.get(i);
            min = (short) Math.min(min, value);
            max = (short) Math.max(max, value);
        }
        // assignment of min/max omitted for brevity.
        super.onAudioSamples(event);
    }

    Step 2: Normalize all values. In a loop similar to step 1, replace the buffer with normalized values, calling:

    buffer.put(i, normalize(buffer.get(i)));

    public short normalize(short value)
    {
        if (isBackgroundNoise(value))
            return value;

        short rawMin = // min from step 1
        short rawMax = // max from step 1
        short targetRangeMin = 1000;
        short targetRangeMax = 8000;

        int abs = Math.abs(value);
        double a = (abs - rawMin) * (targetRangeMax - targetRangeMin);
        double b = (rawMax - rawMin);
        double result = targetRangeMin + (a / b);

        // Copy the sign of value to result.
        result = Math.copySign(result, value);
        return (short) result;
    }

    Questions:

    - Is this a valid approach for attempting to normalize an audio file?
    - Is my math in normalize() valid?
    - Why would this cause the file to become noisy, when a similar approach in the demo code doesn't?
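    For comparison, here is a minimal Java sketch of peak normalization with a single uniform gain factor - a sketch under my own assumptions (16-bit PCM samples in a ShortBuffer, as in the Xuggle snippets above), not a verified fix. Unlike the per-sample remapping in normalize(), one shared gain preserves the waveform shape, which is usually why piecewise remapping sounds noisy; note, though, that truly "leveling out" peaks, as described at the top, is dynamic range compression rather than normalization:

        import java.nio.ShortBuffer;

        public class PeakNormalizer {

            public static void normalize(ShortBuffer buffer, double targetPeak) {
                // Pass 1: find the largest absolute sample value.
                int peak = 1; // avoid divide-by-zero on silent buffers
                for (int i = 0; i < buffer.limit(); i++) {
                    peak = Math.max(peak, Math.abs(buffer.get(i)));
                }
                // Pass 2: scale every sample by the same factor, so the
                // relative dynamics (and waveform shape) are preserved.
                double gain = (targetPeak * Short.MAX_VALUE) / peak;
                for (int i = 0; i < buffer.limit(); i++) {
                    int scaled = (int) Math.round(buffer.get(i) * gain);
                    // clamp to the 16-bit range to avoid wrap-around distortion
                    scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
                    buffer.put(i, (short) scaled);
                }
            }
        }

    Called with, say, targetPeak = 0.9, this brings the loudest sample to 90% of full scale and scales everything else proportionally.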

    Read the article

  • HTML 5 <video> tag vs Flash video. What are the pros and cons?

    - by Vilx-
    Seems like the new <video> tag is all the hype these days, especially since Firefox now supports it. News of this is popping up in blogs all over the place, and everyone seems to be excited. But about what? As much as I searched, I could not find anything that would make it better than the good old Flash video. In fact, I see only problems with it: it will still be some time before all the browsers start supporting it, and much more time before most people upgrade; Flash is available already and everyone has it; and you can couple Flash with whatever fancy UI you want for controlling the playback. I gather that the tag will be controllable as well (via JavaScript, probably), but will it be able to go fullscreen?

    The only two pros for a <video> tag that I can see are:

    - It is more "semantic" - which probably holds no importance to a whole lot of people, including me.
    - It is not dependent on a single commercial 3rd-party entity (Adobe) - which I also don't see as a compelling reason to switch, because free players and video converters are already available, and Adobe is not hindering the whole process in any way (it's not even in their interest to).

    So... what's the big deal?

    Added: OK, so there is one more pro... maybe. Support for mobile devices. Hard to say, though. A number of thoughts race through my head about the subject: How many mobile devices are actually able to decode video at a decent speed anyway, Flash or otherwise? How long until mainstream mobile devices get <video> support? Even if it is available through updates, how many people actually install them? How many people watch videos on web pages on their mobile phones at all? As for the semantics part - I understand that search engines might be able to detect videos better now, but... what will they do with them anyway? OK, so they know that there is a video in the page. And? They can't index a video! I'd like some more arguments here.

    Added: Just thought of another con. This opens up a whole new area of cross-browser incompatibility. HTML and CSS are quite messy already in this respect. Flash, at least, is the same everywhere. But it's enough for at least one major browser vendor to decide against the <video> tag (can anyone say "Internet Explorer"?) and we have a nice new area of hell to explore.

    Added: A pro just came in. More competition = more innovation. That's true. Giving Adobe more competition will probably force them to improve Flash in areas it has been lacking so far. Linux seems to be a weak spot for it, cited by many.

    Read the article

  • JQuery Checkbox with Textbox Validation

    - by Volrath
    I am using Jorn's validation plugin. I have a group of checkboxes beside a group of textboxes. The textboxes are disabled by default, and each one is enabled when the matching checkbox is checked. At least one checkbox has to be checked, which is not a problem. However, when I check more than two checkboxes, only one textbox validates: the form still submits even when the second textbox is empty.

    <?php
    $count = 0;
    while ($row = mysql_fetch_array($rs)) {
    ?>
    <tr>
      <td>
        <label>
          <input type="checkbox" name="tDays[]" id="tDays<?php echo $count; ?>"
                 value="<?php echo $row['promoDayID']; ?>" onClick="enableTxt();"
                 <?php if ((isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays))
                           || (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours))) {
                     echo "checked='checked'";
                 } ?>
                 validate="required:true" />
          <?php echo $row['promoDay']; ?>:
        </label>
      </td>
      <td align="right">
        <input type="text" size="45" style="font-size:12px" name="tHours[]"
               id="tHours<?php echo $count; ?>"
               <?php if (isset($arrTDays) && in_array_THours($row['promoDayID'], $arrTDays)) {
                   echo "value='" . getHours($row['promoDayID'], $arrTDays) . "'";
               } elseif (!empty($arrSelectedTHours) && in_array_THours($row['promoDayID'], $arrSelectedTHours)) {
                   echo "value='" . getHours($row['promoDayID'], $arrSelectedTHours) . "'";
               } else {
                   echo "value='' disabled='disabled'";
               } ?>
               class="required" />
        <label for="tHours<?php echo $count; ?>" class="error" id="tHourserror<?php echo $count; ?>">Please enter the Trading Hour.</label>
      </td>
    </tr>
    <?php
      $count++;
    } // while
    ?>

    The enabling is done using JavaScript:

    function enableTxt() {
      for (var i = 0; i <= 7; i++) {
        if (document.getElementById("tDays" + i) != null && document.getElementById("tDays" + i).checked == true) {
          document.getElementById('tHours' + i).disabled = false;
          document.getElementById('tHourserror' + i).style.visibility = "visible";
        } else if (document.getElementById("tDays" + i) != null) {
          document.getElementById('tHours' + i).disabled = true; // was "disabled"; a boolean is correct here
          document.getElementById('tHours' + i).value = "";
          document.getElementById('tHourserror' + i).style.visibility = "hidden";
        }
      }
    }

    Please kindly advise in detail how this problem can be solved. I am fairly weak in jQuery.

    Read the article

  • CodePlex Daily Summary for Thursday, February 24, 2011

    CodePlex Daily Summary for Thursday, February 24, 2011

    Popular Releases

    Instant Feature Builder for Visual Studio 2010: Instant Feature Builder 1.2: This is the binary version of the Instant Feature Builder. Once downloaded, double click to install into Visual Studio. Version 1.2 fixes: rename FX to IFB to shorten path lengths; fix issue in ExecuteCommand; fix issue to work around problem with VS Template writer. The Instant Feature Builder is a tool which enables you, via drag-and-drop, to build a specific type of Visual Studio extension (VSIX) known as a Feature Extension. A Feature Extension packages project and/or item templates,...

    DirectQ: Release 1.8.7 (Beta): Beta release of 1.8.7 to get feedback on what works well, what doesn't work well, and what doesn't work at all. D3D9 hardware with ps2.0 a must. Faster, more streamlined and more integrated rendering capabilities with additional MP features and support.

    Smartkernel: Smartkernel: ????,??????

    Chiave File Encryption: Chiave 0.9.1: Application for file encryption and decryption using the 512-bit Rijndael encryption algorithm, with a simple-to-use UI. It's written in C# and compiled in .NET 3.5. It incorporates Windows 7 features like jump lists, taskbar progress and Aero Glass. Change log from 0.9 Beta to 0.9.1: added option for system shutdown, sleep or hibernate after the operation completes; minor changes to the UI; numerous bug fixes. Feedback is welcome!

    DotNetNuke® Store: 03.00.00: What's New in this release? IMPORTANT: this version requires DotNetNuke 04.06.02 or higher! DO NOT REPORT BUGS HERE IN THE ISSUE TRACKER; INSTEAD USE THE DotNetNuke Store Forum! This version is the same code base as the version 02.01.51 RC, just some cleaning and source code release before submission to the release tracker for "official" release.

    ClosedXML - The easy way to OpenXML: ClosedXML 0.45.2: New in this release: 1) Added data validation (see Data Validation). 2) Deleting or clearing cells deletes the hyperlinks too. New in v0.45.1: 1) Fixed issues 6237, 6240. New in v0.45.2: 1) Fixed issues 6257, 6266. New examples: Data Validation.

    OMEGA CMS: OMEGA CMA - Alpha 0.2: A few fixes for OMEGA Framework (DLL); a few tweaks for OMEGA CMS.

    Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.2: New control, Toast Prompt! Removed progress bar since Silverlight Toolkit Feb 2010 has it.

    Umbraco CMS: Umbraco 4.7: Service release fixing 31 issues. A full changelog will be available with the final stable release of 4.7. Important when upgrading: upgrade as if it was a patch release (update /bin, /umbraco and /umbraco_client). For general upgrade information follow the guide found at http://our.umbraco.org/wiki/install-and-setup/upgrading-an-umbraco-installation. 4.7 requires the .NET 4.0 framework. Web.config changes: update the web.config to include the 4 changes found in (they're clearly marked in...

    HubbleDotNet - Open source full-text search engine: V1.1.0.0: Add Sqlite3 DBAdapter. Add app report when query cache is collecting. Improve the performance of the index through synchronize. Add "top 0" feature so that we can get only the count of the result. Improve the score-calculating algorithm of match: let the score of a record that matches all items be larger than others. Add MySql DBAdapter. Improve performance for multi-field sort. Use a hash table to access the payload data (the version before used binary search). Use heap sort instead of qui...

    Xen: Graphics API for XNA: Xen 2.0: This is the final release of Xen; Xen 2.0. Xen 2.0 supports PC and Xbox 360 running XNA 4. The documentation download is coming soon. Due to restrictions in XNA 4, building Xen requires a DirectX 10 capable video card (Xen applications can still run on Windows XP and DirectX 9 video cards).

    Silverlight????[???]: silverlight????[???]2.0: ???????,?????,????????silverlight??????。

    DBSourceTools: DBSourceTools_1.3.0.0: Release 1.3.0.0. Changed editors from FireEdit to ICSharpCode.TextEditor. Complete revamp of IntelliSense (further testing needed). Highlight field and table names in SQL scripts. Added field dropdown on all tables and views in DBExplorer. Added data option for viewing data in tables. Fixed comment/uncomment bug as reported by tareq. Included synonyms in scripting engine (nickt_ch).

    IronPython: 2.7 Release Candidate 1: We are pleased to announce the first Release Candidate for IronPython 2.7. This release contains over two dozen bugs fixed in preparation for 2.7 Final. See the release notes for 60193 for details and what has already been fixed in the earlier 2.7 prereleases. - IronPython Team

    Caliburn Micro: A Micro-Framework for WPF, Silverlight and WP7: Caliburn.Micro 1.0 RC: This is the official Release Candidate for Caliburn.Micro 1.0. The download contains the binaries, samples and VS templates. VS templates: the templates included are designed for situations where the Caliburn.Micro source needs to be embedded within a single project solution. This was targeted at government and other organizations that expressed specific requirements around using an open source project like this. NuGet: this release does not have a corresponding NuGet package. The NuGet pack...

    Caliburn: A Client Framework for WPF and Silverlight: Caliburn 2.0 RC: This is the official Release Candidate for Caliburn 2.0. It contains all binaries, samples and generated code docs.

    Rawr: Rawr 4.0.20 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php. This is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262. As of the 4.0.16 release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the posting guidelines and put it into the Issue Trac...

    MiniTwitter: 1.66: MiniTwitter 1.66 ???? ?? ?????????? 2 ??????????????????? User Streams ?????????

    Windows Phone 7 Isolated Storage Explorer: WP7 Isolated Storage Explorer v1.0 Beta: Current release features: WPF desktop explorer client; Visual Studio integrated tool window explorer client (Visual Studio 2010 Professional and above). Supported operations: Refresh (isolated storage information), Add Folder, Add Existing Item, Download File, Delete Folder, Delete File. Explorer supports operations running on multiple remote applications at the same time. Explorer detects application disconnect (1-2 second delay). Explorer confirms operation completed status. Explorer d...

    Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit overview: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...

    New Projects

    Competition Management Platform: Paragliding competition management platform.

    CS424 B2 Group - Car Management: CS424 B2 Group - Car Management.

    EMS: The main aim of the system is to perform environment inventorization.

    EVVA: EVVA is a software project to support private employment agents.

    FileSocialVB: A library to work with the filesocial.com web site and API through VB.NET. This is actually an offshoot of the http://twittervb.codeplex.com project. Though smaller, this will lead to a more compact library.

    Hash Crack - IGProgram: Hash Crack is a software program for hash and password cracking. Hash Crack uses a dictionary or a set of symbols for hash cracking, and also supports the pwdump file format for Windows password cracking (NTHash MD4). MD2, MD4, MD5, SHA1, SHA256, SHA384, SHA512.

    Imail Spammer Killer: Bots seem to love picking off the weak passwords of IpSwitch IMail v10 users and using SMTP auth to spam the world with their newfound account access. This project is a Windows service to isolate and stop this activity by disabling the violated user account as it occurs.

    JVS2USB: A hardware setup that lets you connect an I/O board (Capcom / Sega) to a PC via USB!!

    Library Reminder: Library Reminder.

    MAVI: mobile application for the visually impaired: bill recognition, and tagging and recognizing objects based on a specific sticker.

    Message splitting without envelope in BizTalk 2009: Message splitting without envelope in BizTalk 2009. The project contains: source code; examples; an article describing how to do it:

    Microsoft translator: Language translator designed to test the Microsoft Translator web service API in Windows Phone 7, developed using Visual Studio 2010 in C#.

    mrmuffin: Mr Muffin WP7 game for children.

    Network Monitor: Simple application with a vu-meter style display of recent incoming network traffic. Requires .NET 4. Requires WinPcap (http://www.winpcap.org/). Only tested to run under Windows 7.

    NPhysics: NPhysics - Physical Data Types for .NET.

    Open SimRacing Results: Open SimRacing Results aims to provide an open standard for race results of PC sim-racing games (like iRacing, rFactor, NR2003, ...), allowing developers of league management systems to use a single interface to get the results from, regardless of the simulation used.

    Orchard Image Gallery: The Orchard Image Gallery project is intended to provide an Image Gallery content part and/or widget for the Orchard Project.

    Planning Poker for Windows Phone 7: Play planning poker on your Windows Phone 7.

    Publishing and consuming WCF in BizTalk 2009 and Visual Studio 2008: The file contains: a BizTalk 2009 project, a C# console project, and an example.

    Push Notification for Windows Phone 7 in PHP: WindowsPhonePushNotification enables you to use the Microsoft Push Notification Service in PHP.

    Quksace Agjke: For more information, please visit <http://students.cs.tamu.edu/abe/IS_and_R/HW3/quksace%20agjke.html>.

    Read: a "GNU Make"-like utility for viewing Readme files: Read is a simple program to easily load and read README files on a UNIX-compliant system. It works in a similar way to GNU Make by searching a directory for a compatible file, in this case a Readme file, and loading it for reading using a text editor or viewer.

    SQL CE 3.5 persistence mapper for ECO IV: May need some adjustments for ECO V/VI; definitely needs some rewrite to support SQL CE 4.0 fully.

    SQL Server Job Failure Notification System: A simple means of ensuring you know when SQL Agent jobs fail.

    SuperQuery: SuperQuery makes it easy to run the same batch of SQL across several databases on different SQL Servers. SuperQuery supports all editions and versions of SQL Server from 2000 onwards. It is developed in C# using .NET 4 Client Profile.

    System.Net.Mail Extended: An extension to the System.Net.Mail namespace, adding a POP3 client, enhanced SMTP client, IMAP client, POP3 server, SMTP server, and IMAP server, all written in Visual C#.

    wow-combatlogs: A PowerShell module for working with combat logs generated by World of Warcraft.

    WPF UndoManager: WPF UndoManager provides a simple undo manager on the basis of WPF's command pattern. It uses an implementation of the ICommand interface to manage a history of actions.

    XO / TicTacToe game: Dynamic-sized XO / TicTacToe game for Windows Phone 7. Built using Visual Studio 2010 and C#.

    Read the article

  • A Communication System for XAML Applications

    - by psheriff
    In any application, you want to keep the coupling between any two or more objects as loose as possible. Coupling happens when one class contains a property that is used in another class, or uses another class in one of its methods. If you have this situation, it is called strong or tight coupling. One popular design pattern to help keep objects loosely coupled is the Mediator design pattern. The basics of this pattern are very simple: avoid one object directly talking to another object, and instead use another class to mediate between the two. As with most of my blog posts, the purpose is to introduce you to a simple approach to using a message broker, not all of the fine details.

    IPDSAMessageBroker Interface

    As with most implementations of a design pattern, you typically start with an interface or an abstract base class. In this particular instance, an interface will work just fine. The interface for our message broker class contains just a single method, "SendMessage", and one event, "MessageReceived".

    public delegate void MessageReceivedEventHandler(
      object sender, PDSAMessageBrokerEventArgs e);

    public interface IPDSAMessageBroker
    {
      void SendMessage(PDSAMessageBrokerMessage msg);

      event MessageReceivedEventHandler MessageReceived;
    }

    PDSAMessageBrokerMessage Class

    As you can see in the interface, the SendMessage method requires a type of PDSAMessageBrokerMessage to be passed to it. This class simply has a MessageName property, which is a 'string' type, and a MessageBody property, which is of type 'object' so you can pass whatever you want in the body. You might pass a string in the body, or a complete Customer object. The MessageName property will help the receiver of the message know what is in the MessageBody property.

    public class PDSAMessageBrokerMessage
    {
      public PDSAMessageBrokerMessage()
      {
      }

      public PDSAMessageBrokerMessage(string name, object body)
      {
        MessageName = name;
        MessageBody = body;
      }

      public string MessageName { get; set; }

      public object MessageBody { get; set; }
    }

    PDSAMessageBrokerEventArgs Class

    As our message broker class will be raising an event that others can respond to, it is a good idea to create your own event argument class. This class will inherit from the System.EventArgs class and add a couple of additional properties: MessageName and Message. The MessageName property is simply a string value. The Message property is of the type PDSAMessageBrokerMessage.

    public class PDSAMessageBrokerEventArgs : EventArgs
    {
      public PDSAMessageBrokerEventArgs()
      {
      }

      public PDSAMessageBrokerEventArgs(string name,
        PDSAMessageBrokerMessage msg)
      {
        MessageName = name;
        Message = msg;
      }

      public string MessageName { get; set; }

      public PDSAMessageBrokerMessage Message { get; set; }
    }

    PDSAMessageBroker Class

    Now that you have an interface and a class to pass a message through an event, it is time to create your actual PDSAMessageBroker class. This class implements the SendMessage method and will also create the event handler for the delegate created in your interface.

    public class PDSAMessageBroker : IPDSAMessageBroker
    {
      public void SendMessage(PDSAMessageBrokerMessage msg)
      {
        PDSAMessageBrokerEventArgs args;

        args = new PDSAMessageBrokerEventArgs(
          msg.MessageName, msg);

        RaiseMessageReceived(args);
      }

      public event MessageReceivedEventHandler MessageReceived;

      protected void RaiseMessageReceived(
        PDSAMessageBrokerEventArgs e)
      {
        if (null != MessageReceived)
          MessageReceived(this, e);
      }
    }

    The SendMessage method takes a PDSAMessageBrokerMessage object as an argument. It then creates an instance of a PDSAMessageBrokerEventArgs class, passing two items to the constructor: the MessageName from the PDSAMessageBrokerMessage object, and the object itself. It may seem a little redundant to pass in the message name when that same message name is part of the message, but it does make consuming the event and checking for the message name a little cleaner - as you will see in the next section.

    Create a Global Message Broker

    In your WPF application, create an instance of this message broker class in the App class located in the App.xaml file. Create a public property in the App class and create a new instance of that class in the OnStartup event procedure, as shown in the following code:

    public partial class App : Application
    {
      public PDSAMessageBroker MessageBroker { get; set; }

      protected override void OnStartup(StartupEventArgs e)
      {
        base.OnStartup(e);

        MessageBroker = new PDSAMessageBroker();
      }
    }

    Sending and Receiving Messages

    Let's assume you have a user control that you load into a control on your main window, and you want to send a message from that user control to the main window. You might have the main window display a message box, or put a string into a status bar, as shown in Figure 1.

    Figure 1: The main window can receive and send messages

    The first thing you do in the main window is to hook up an event procedure to the MessageReceived event of the global message broker. This is done in the constructor of the main window:

    public MainWindow()
    {
      InitializeComponent();

      (Application.Current as App).MessageBroker.
        MessageReceived += new MessageReceivedEventHandler(
          MessageBroker_MessageReceived);
    }

    One piece of code you might not be familiar with is accessing a property defined in the App class of your XAML application. Within the App.xaml file is a class named App that inherits from the Application object. You access the global instance of this App class by using Application.Current. You cast Application.Current to 'App' prior to accessing any of the public properties or methods you defined in the App class. Thus, the code (Application.Current as App).MessageBroker allows you to get at the MessageBroker property defined in the App class.

    In the MessageReceived event procedure in the main window (shown below) you can now check to see if the MessageName property of the PDSAMessageBrokerEventArgs is equal to "StatusBar" and, if it is, display the message body in the status bar text block control.

    void MessageBroker_MessageReceived(object sender,
      PDSAMessageBrokerEventArgs e)
    {
      switch (e.MessageName)
      {
        case "StatusBar":
          tbStatus.Text = e.Message.MessageBody.ToString();
          break;
      }
    }

    In the Page 1 user control's Loaded event procedure you will send the message "StatusBar" through the global message broker to any listener using the following code:

    private void UserControl_Loaded(object sender,
      RoutedEventArgs e)
    {
      // Send Status Message
      (Application.Current as App).MessageBroker.
        SendMessage(new PDSAMessageBrokerMessage("StatusBar",
          "This is Page 1"));
    }

    Since the main window is listening for the message 'StatusBar', it will display the value "This is Page 1" in the status bar at the bottom of the main window.

    Sending a Message to a User Control

    The previous example sent a message from the user control to the main window. You can also send messages from the main window to any listener as well. Remember that the global message broker is really just a broadcaster to anyone who has hooked into the MessageReceived event. In the constructor of the user control named ucPage1 you can hook into the global message broker's MessageReceived event. You can then listen for any messages that are sent to this control by using a switch-case structure similar to that in the main window.

    public ucPage1()
    {
      InitializeComponent();

      // Hook to the Global Message Broker
      (Application.Current as App).MessageBroker.
        MessageReceived += new MessageReceivedEventHandler(
          MessageBroker_MessageReceived);
    }

    void MessageBroker_MessageReceived(object sender,
      PDSAMessageBrokerEventArgs e)
    {
      // Look for messages intended for Page 1
      switch (e.MessageName)
      {
        case "ForPage1":
          MessageBox.Show(e.Message.MessageBody.ToString());
          break;
      }
    }

    Once the ucPage1 user control has been loaded into the main window you can then send a message using the following code:

    private void btnSendToPage1_Click(object sender,
      RoutedEventArgs e)
    {
      PDSAMessageBrokerMessage arg =
        new PDSAMessageBrokerMessage();

      arg.MessageName = "ForPage1";
      arg.MessageBody = "Message For Page 1";

      // Send a message to Page 1
      (Application.Current as App).MessageBroker.SendMessage(arg);
    }

    Since the MessageName matches what is in the ucPage1 MessageReceived event procedure, ucPage1 can do anything in response to that event. It is important to note that when the message gets sent, it is sent to all MessageReceived event procedures, not just the one that is looking for a message called "ForPage1". If the user control ucPage1 is not loaded when this message is broadcast, and no other code is listening for it, then it is simply ignored.

    Remove Event Handler

    In each class where you add an event handler to the MessageReceived event, you need to make sure to remove those event handlers when you are done. Failure to do so can cause a strong reference to the class and thus not allow that object to be garbage collected. In each of your user controls, make sure to remove the event handler in the Unloaded event.

    private void UserControl_Unloaded(object sender, RoutedEventArgs e)
    {
      // _MessageBroker caches the global broker, i.e.
      // (Application.Current as App).MessageBroker
      if (_MessageBroker != null)
        _MessageBroker.MessageReceived -=
          _MessageBroker_MessageReceived;
    }

    Problems with Message Brokering

    As with most "global" classes, or classes that hook up events to other classes, garbage collection is something you need to consider. Just the simple act of hooking up an event procedure to a global event handler creates a reference between your user control and the message broker in the App class. This means that even when your user control is removed from your UI, the class will still be in memory because of the reference to the message broker. This can cause messages to still be handled even though the UI is not being displayed. It is up to you to make sure you remove those event handlers, as discussed in the previous section. If you don't, then the garbage collector cannot release those objects.

    Instead of using events to send messages from one object to another, you might consider registering your objects with a central message broker. This message broker then becomes a collection class into which you pass an object and the messages that object wishes to receive. You do end up with the same problem, however: you have to un-register your objects, otherwise they still stay in memory. To alleviate this problem you can look into using the WeakReference class as a way to store your objects so they can be garbage collected if need be. Discussing weak references in depth is beyond the scope of this post, but a sketch of the idea follows below.

    Summary

    In this blog post you learned how to create a simple message broker system that allows you to send messages from one object to another without having to reference objects directly. This reduces the coupling between objects in your application. You do need to remember to get rid of any event handlers before your objects go out of scope, or you run the risk of memory leaks and of events being called even though you can no longer access the object that is responding to that event.

    NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select "Tips & Tricks", then select "A Communication System for XAML Applications" from the drop-down list.
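    To make the WeakReference idea concrete, here is a minimal sketch of how such a registration-based broker might look. This is not part of the PDSA sample: the WeakMessageBroker and IMessageListener names are hypothetical, and the sketch reuses the PDSAMessageBrokerMessage class defined above.

    using System;
    using System.Collections.Generic;

    // Hypothetical listener contract: any object (e.g. a user control)
    // that wants messages implements this interface.
    public interface IMessageListener
    {
      void OnMessage(PDSAMessageBrokerMessage msg);
    }

    // Hypothetical broker that holds subscribers only through WeakReference,
    // so an unloaded control can be garbage collected even if it never
    // un-registers itself.
    public class WeakMessageBroker
    {
      private readonly List<WeakReference> _listeners =
        new List<WeakReference>();

      public void Register(IMessageListener listener)
      {
        _listeners.Add(new WeakReference(listener));
      }

      public void SendMessage(PDSAMessageBrokerMessage msg)
      {
        // Walk backwards so dead entries can be pruned while iterating.
        for (int i = _listeners.Count - 1; i >= 0; i--)
        {
          IMessageListener listener =
            _listeners[i].Target as IMessageListener;

          if (listener != null)
            listener.OnMessage(msg);  // still alive: deliver
          else
            _listeners.RemoveAt(i);   // collected: prune the entry
        }
      }
    }

    The key point is that the broker references each subscriber only weakly, so the broker never keeps a dead control alive, and pruning on each send keeps the list from growing without bound. Note that holding a WeakReference to a delegate instead of to the subscriber object would not work, because a delegate referenced only weakly can itself be collected immediately; that is why this sketch registers the object and calls it through an interface.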

    Read the article

  • custom tabbars and make them circulate

    - by pengwang
    I want to build a custom tab bar arrangement that circulates (wraps around) as you slide. The default tab bar shows only 5 items at a time, which doesn't meet my needs: I have 11 items, so I want to make 3 tab bars of up to 5 items each, for example A(0-4) - B(5-9) - C(10) - A - B - C - A. At present I have only finished A(0-4) - B(5-9) - C(10). How do I make them circulate? Here is my code.

    .h file:

    #import <UIKit/UIKit.h>

    @protocol InfiniTabBarDelegate;

    @interface InfiniTabBar : UIScrollView <UIScrollViewDelegate, UITabBarDelegate>
    {
      __weak id <InfiniTabBarDelegate> infiniTabBarDelegate;
      NSMutableArray *tabBars;
      UITabBar *aTabBar;
      UITabBar *bTabBar;
    }

    @property (nonatomic, weak) id infiniTabBarDelegate;
    @property (strong, nonatomic) NSMutableArray *tabBars;
    @property (strong, nonatomic) UITabBar *aTabBar;
    @property (strong, nonatomic) UITabBar *bTabBar;

    - (id)initWithItems:(NSArray *)items;
    - (void)setBounces:(BOOL)bounces;

    // Don't set more items than initially
    - (void)setItems:(NSArray *)items animated:(BOOL)animated;

    - (int)currentTabBarTag;
    - (int)selectedItemTag;
    - (BOOL)scrollToTabBarWithTag:(int)tag animated:(BOOL)animated;
    - (BOOL)selectItemWithTag:(int)tag;

    @end

    @protocol InfiniTabBarDelegate <NSObject>
    - (void)infiniTabBar:(InfiniTabBar *)tabBar didScrollToTabBarWithTag:(int)tag;
    - (void)infiniTabBar:(InfiniTabBar *)tabBar didSelectItemWithTag:(int)tag;
    @end

    .m file:

    @implementation InfiniTabBar

    @synthesize infiniTabBarDelegate;
    @synthesize tabBars;
    @synthesize aTabBar;
    @synthesize bTabBar;

    - (id)initWithItems:(NSArray *)items
    {
      self = [super initWithFrame:CGRectMake(0.0, 411.0, 320.0, 49.0)];
      // TODO:
      // self = [super initWithFrame:CGRectMake(self.superview.frame.origin.x + self.superview.frame.size.width - 320.0, self.superview.frame.origin.y + self.superview.frame.size.height - 49.0, 320.0, 49.0)];
      // Doesn't work. self is nil at this point.
      if (self) {
        self.pagingEnabled = YES;
        self.delegate = self;
        self.tabBars = [[NSMutableArray alloc] init];

        float x = 0.0;
        for (double d = 0; d < ceil(items.count / 5.0); d++) {
          UITabBar *tabBar = [[UITabBar alloc] initWithFrame:CGRectMake(x, 0.0, 320.0, 49.0)];
          tabBar.delegate = self;

          int len = 0;
          for (int i = d * 5; i < d * 5 + 5; i++)
            if (i < items.count)
              len++;

          tabBar.items = [items objectsAtIndexes:[NSIndexSet indexSetWithIndexesInRange:NSMakeRange(d * 5, len)]];
          // NSLog(@"####%@", NSStringFromRange(NSMakeRange(d * 5, len)));
          [self.tabBars addObject:tabBar];
          [self addSubview:tabBar];
          x += 320.0;
        }
        self.contentSize = CGSizeMake(x, 49.0);
      }
      return self;
    }

    - (void)setBounces:(BOOL)bounces
    {
      if (bounces) {
        int count = self.tabBars.count;
        if (count > 0) {
          if (self.aTabBar == nil)
            self.aTabBar = [[UITabBar alloc] initWithFrame:CGRectMake(-320.0, 0.0, 320.0, 49.0)];
          [self addSubview:self.aTabBar];
          if (self.bTabBar == nil)
            self.bTabBar = [[UITabBar alloc] initWithFrame:CGRectMake(count * 320.0, 0.0, 320.0, 49.0)];
          [self addSubview:self.bTabBar];
        }
      } else {
        [self.aTabBar removeFromSuperview];
        [self.bTabBar removeFromSuperview];
      }
      [super setBounces:bounces];
    }

    - (void)setItems:(NSArray *)items animated:(BOOL)animated
    {
      for (UITabBar *tabBar in self.tabBars) {
        int len = 0;
        for (int i = [self.tabBars indexOfObject:tabBar] * 5; i < [self.tabBars indexOfObject:tabBar] * 5 + 5; i++)
          if (i < items.count)
            len++;
        [tabBar setItems:[items objectsAtIndexes:[NSIndexSet indexSetWithIndexesInRange:NSMakeRange([self.tabBars indexOfObject:tabBar] * 5, len)]] animated:animated];
      }
      self.contentSize = CGSizeMake(ceil(items.count / 5.0) * 320.0, 49.0);
    }

    - (int)currentTabBarTag
    {
      return self.contentOffset.x / 320.0;
    }

    - (int)selectedItemTag
    {
      for (UITabBar *tabBar in self.tabBars)
        if (tabBar.selectedItem != nil)
          return tabBar.selectedItem.tag;
      // No item selected
      return 0;
    }

    - (BOOL)scrollToTabBarWithTag:(int)tag animated:(BOOL)animated
    {
      for (UITabBar *tabBar in self.tabBars)
        if ([self.tabBars indexOfObject:tabBar] == tag) {
          UITabBar *target = [self.tabBars objectAtIndex:tag];
          [self scrollRectToVisible:target.frame animated:animated];
          if (animated == NO)
            [self scrollViewDidEndDecelerating:self];
          return YES;
        }
      return NO;
    }

    - (BOOL)selectItemWithTag:(int)tag
    {
      for (UITabBar *tabBar in self.tabBars)
        for (UITabBarItem *item in tabBar.items)
          if (item.tag == tag) {
            tabBar.selectedItem = item;
            [self tabBar:tabBar didSelectItem:item];
            return YES;
          }
      return NO;
    }

    - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
    {
      [infiniTabBarDelegate infiniTabBar:self didScrollToTabBarWithTag:scrollView.contentOffset.x / 320.0];
    }

    - (void)scrollViewDidEndScrollingAnimation:(UIScrollView *)scrollView
    {
      [self scrollViewDidEndDecelerating:scrollView];
    }

    - (void)tabBar:(UITabBar *)cTabBar didSelectItem:(UITabBarItem *)item
    {
      // Act like a single tab bar
      for (UITabBar *tabBar in self.tabBars)
        if (tabBar != cTabBar)
          tabBar.selectedItem = nil;
      [infiniTabBarDelegate infiniTabBar:self didSelectItemWithTag:item.tag];
    }

    @end

    Read the article

< Previous Page | 14 15 16 17 18 19  | Next Page >