Search Results

Search found 12289 results on 492 pages for 'apache license'.


  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined
    By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager

    I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing, and what does that have to do with ERP? Well, if you grew up in the USA during the 50s, 60s, and maybe a bit in the early 70s, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on.

    Of course comic books, like most media, contained advertisements. The ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum, and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth: “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkeys? There was a disclaimer at the bottom in fine print: “Caricatures shown not intended to depict Artemia.” OK, what ten-year-old knows what the heck Artemia is? Well, you grow up fast once you’ve been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks.

    Unfortunately the technology industry is full of Sea Monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff?

    So here is the latest one: “Oracle’s JD Edwards is NOT tier one.” Really? Definition, please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world’s largest business software and hardware corporation? Check and check; JD Edwards has both. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries that support some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don’t mind, I am going to mark national and international recognition in the got-it column.

    So what else is there? Well, let me offer a few criteria:

    - Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here.
    - Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example.
    - Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables including major releases, point releases, new apps modules, tool releases, integrations…
    - Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals, all delivered on a well-defined independent tools layer that helps enable you to scale your business without an ERP reimplementation.
    - A continuation plan – Oracle’s JD Edwards offers our customers a 6-year roadmap as well as interoperability with Oracle’s next generation of applications.

    Oh, I almost forgot that the expert sources agree on one additional thing: tier one may be a preferred vendor that offers product and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is, the tier one solution with the lowest TCO. Oh, and if you get an offer to buy an ERP for no license charge, remember: the picture on the box might not be the actual size.

    Read the article

  • ArchBeat Facebook Friday: Top 10 Posts - August 8-14, 2014

    - by Bob Rhubart-Oracle
    5,307 people pay attention to the OTN ArchBeat Facebook Page. Here are the Top 10 posts from that page for the last seven days, August 8-14, 2014.

    1. Podcast: ODTUG Kscope 2014: Anatomy of a User Conference - Part 3. There is more to a great user conference than a shared interest in Oracle products. In the final segment of this 3-part OTN ArchBeat Podcast, panelists Danny Bryant, Chet "ORACLENERD" Justice, Cameron Lackpour, Debra Lilley, and Mike Riley discuss the nature and importance of community.
    2. Oracle SOA Suite 12c: The LDAP Adapter quick and easy | Maarten Smeets. Maarten Smeets' how-to post describes the installation and configuration of an LDAP server and browser (ApacheDS and Apache Directory Studio).
    3. Process level Exception Handling in BPM12c | Abhishek Mittal. When an exception occurs while running a process flow you have two choices: 1) retry the flow object that raised the exception, or 2) move the process instance to the next flow object in the main process flow. Abhishek Mittal shows you how to do both.
    4. Building a Responsive WebCenter Portal Application | JayJay Zheng. Oracle ACE JayJay Zheng's article addresses the essentials of responsive web design, shows you how to design and develop a responsive WebCenter Portal application, and reviews key development considerations.
    5. Cloud Control authorization with Active Directory | Jeroen Gouma. Jeroen Gouma takes you step-by-step through the user authorization process in Oracle Enterprise Manager Cloud Control 12c.
    6. Video: CIOs Guide to Oracle Products and Solutions | Jessica Keyes. The CIO's Guide to Oracle Products and Solutions author Jessica Keyes talks about why input from users and developers is essential to CIOs who want to avoid being escorted out of the building by security guards. Read A CIO's Guide to Oracle Cloud Computing, a sample chapter from the book.
    7. Twitter Tuesday - Top 10 @ArchBeat Tweets - August 5-11, 2014. @OTNArchBeat followers from across the galaxy have spoken! Here are the Top 10 tweets for the past seven days. Topics include: Hyperion, OBIEE, ODI, Oracle MAF, and SOA Suite.
    8. Recap: Fusion Middleware Summer Camps - Lisbon 2014 | Simon Haslam. Oracle ACE Director Simon Haslam's recap of his experience at the Oracle Fusion Middleware Summer Camp in Lisbon, Portugal will make you wish you had been there.
    9. WebLogic Data Source Connection Labeling | Steve Felts. The connection labeling feature was added in WLS release 10.3.6 and enhanced in WLS release 12.1.3. This post by Steve Felts describes two new connection properties that can be configured on the data source descriptor.
    10. Why Mobile Apps <3 REST/JSON | Martin Jarvis. Martin Jarvis explores the preference for REST and JSON over SOAP and XML for mobile web services.

    Read the article

  • Advice for moving from custom domain hosted on blogspot to new custom domain on hosting provider [closed]

    - by Chan
    Need some advice :)

    a) Facts:

    1. Currently, we have a blog www.thebigbigsky.net hosted on blogspot.com. The site has been up for around a year and Google has cached some of the posts.
    2. The idea for setting up the blog was to promote/share articles/write-ups about personal development and self-growth, and also to offer counseling services related to them.
    3. There is another subdomain, chinese.thebigbigsky.net, hosting similar blog posts but in Chinese. This is hosted on blogspot as well.

    b) What we want to do:

    1. We did some research on the internet and understood that, to make a website more popular, it is better to have a domain name that is related to the content of the site. Hence, in our case, a domain name containing "personal development" would be more ideal and easier for people to remember.
    2. Hence, we are thinking of moving away from blogspot to a meaningful new domain (example: personaldevelopmentwithxyz.com) and moving all contents there. The new domain will be hosted on a hosting provider, e.g. hostgator.com.

    c) Here are the steps we have in mind:

    1. Register the new domain (example: personaldevelopmentwithxyz.com).
    2. Get a hosting provider, e.g. hostgator.com.
    3. Set up a new site based on WordPress and import all contents from the existing blog to this new site, personaldevelopmentwithxyz.com.
    4. Switch the current blog back to thebigbigsky.blogspot.com.
    5. Switch the DNS of www.thebigbigsky.net to point to the new site on hostgator.com.
    6. On hostgator.com, set up a permanent 301 redirect of www.thebigbigsky.net to point to personaldevelopmentwithxyz.com by following the tutorial mentioned here - http://www.blogbloke.com/migrating-redirecting-blogger-wordpress-htaccess-apache-best-method/ (a minimal sketch of such a redirect appears after the questions below).

    Questions:

    1. Will it have a major impact on the current Google PageRank or SEO of thebigbigsky.net? As the site is not that popular, we assumed the impact would not be a major one.
    2. Is the domain personaldevelopmentwithxyz.com considered too long? (For SEO etc.)
    3. Steps c.4 and c.5 above - will this work for our case?
    4. How about the subdomain chinese.thebigbigsky.net? We would like to keep it as a separate blog with a domain such as chinese.personaldevelopmentwithxyz.com.
    5. We added "facebook comments" to the existing posts on www.thebigbigsky.net and would not want to lose them. Will the comments still be there after we move to the new domain?
    6. Any better idea to do it? :)

    Thank you very much!

    regards,
    -chan
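    For reference, here is a minimal sketch of the 301 redirect mentioned in step c.6. This is not from the original question; it assumes an Apache host with mod_rewrite enabled (as on most shared hosts such as HostGator) and uses the example domains from the question, placed in the .htaccess file of the document root serving the old domain:

        # .htaccess on the host serving www.thebigbigsky.net
        # Permanently (301) redirect every request to the same path on the new domain
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?thebigbigsky\.net$ [NC]
        RewriteRule ^(.*)$ http://personaldevelopmentwithxyz.com/$1 [R=301,L]

    A path-preserving 301 like this is what lets search engines transfer already-indexed URLs to the new domain instead of treating it as a brand-new site.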

    Read the article

  • AJAX event prevents other page actions

    - by cobaltduck
    Here's a fairly average scenario, using JSF as an example, but I have observed this same concept in ASP.NET, Apache Wicket, and other frameworks with ajax capabilities.

        <h:inputText id="text1" value="#{myBacker.myBean.myStringVar}" styleClass="goodCSS">
            <f:ajax event="change" listener="#{myBacker.text1ChangeEventMethod}" update="someOtherField" />
        </h:inputText>
        <h:selectBooleanCheckbox id="check1" value="#{myBacker.myBean.myBoolVar}" />

    Let's suppose that the 'text1ChangeEventMethod' listener is essential to 'someOtherField' and perhaps toggles its disabled attribute, or changes its available options, based on the value of 'myStringVar.' The particulars aren't important; let's just accept that for some reason we need an ajax call when the 'text1' value is changed.

    So Jane User is working her way down the form. She arrives at the 'text1' field and types some value. The cursor focus is still in the text field as she moves her mouse to the 'check1' box and clicks. It appears to her that nothing has happened. She clicks again, and this time the checkbox highlights and the icon indicating a selection appears in the box. Jane has to do several entries in the form today, sees this happen every time, and it becomes very frustrating for her.

    Likewise, Jeff Admin is also perusing this form, and begins to type in 'text1.' He then realizes he doesn't really want to enter this data, and so moves his mouse to the "cancel" button elsewhere on the page, and clicks. Nothing seems to happen. Jeff clicks again, and after confirming he really does want to cancel, is returned to the home page. Jeff scratches his head.

    The problem is simply that the first thing the system does after 'text1' loses focus is run the listener and perform the ajax operation. It may only take a fraction of a second, but still: you can click other buttons all you want, but until that ajax has finished, everything else is ignored.

    I've spent the morning searching and reading, and it seems no one else has even noticed this. I could find not one article, blog, or past question here or at SO, or anything that addresses this obvious and glaring deficiency in ajax. So first of all, am I truly alone in thinking this is a big problem? Second, does anyone have a solution?

    Read the article

  • Why does installing LAME fail?

    - by Rahul Mehta
    I want to install ffmpeg with mp3lame enabled, and for this I am following this tutorial: http://ubuntuforums.org/showpost.php?p=9868359&postcount=1289. In step 2 the error is that libfaac is not found, and in step 5 installing lame fails with the output below. Why is it failing, and what should I do? Please advise.

        reach121@youngib:~/lame-3.98.4$ sudo checkinstall --pkgname=lame-ffmpeg --pkgversion="3.98.4" --backup=no --default --deldoc=yes

        checkinstall 1.6.2, Copyright 2009 Felipe Eduardo Sanchez Diaz Duran
        This software is released under the GNU GPL.

        *****************************************
        **** Debian package creation selected ***
        *****************************************

        This package will be built according to these values:

        0 - Maintainer: [ root@youngib ]
        1 - Summary: [ Package created with checkinstall 1.6.2 ]
        2 - Name: [ lame-ffmpeg ]
        3 - Version: [ 3.98.4 ]
        4 - Release: [ 1 ]
        5 - License: [ GPL ]
        6 - Group: [ checkinstall ]
        7 - Architecture: [ amd64 ]
        8 - Source location: [ lame-3.98.4 ]
        9 - Alternate source location: [ ]
        10 - Requires: [ ]
        11 - Provides: [ lame-ffmpeg ]
        12 - Conflicts: [ ]
        13 - Replaces: [ ]

        Enter a number to change any of them or press ENTER to continue:

        Installing with make install...

        ========================= Installation results ===========================
        Making install in mpglib
        make[1]: Entering directory `/home/reach121/lame-3.98.4/mpglib'
        make[2]: Entering directory `/home/reach121/lame-3.98.4/mpglib'
        make[2]: Nothing to be done for `install-exec-am'.
        make[2]: Nothing to be done for `install-data-am'.
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/mpglib'
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/mpglib'
        Making install in libmp3lame
        make[1]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
        Making install in i386
        make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
        make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
        make[3]: Nothing to be done for `install-exec-am'.
        make[3]: Nothing to be done for `install-data-am'.
        make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/i386'
        Making install in vector
        make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
        make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
        make[3]: Nothing to be done for `install-exec-am'.
        make[3]: Nothing to be done for `install-data-am'.
        make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame/vector'
        make[2]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
        make[3]: Entering directory `/home/reach121/lame-3.98.4/libmp3lame'
        test -z "/usr/local/lib" || /bin/mkdir -p "/usr/local/lib"
        /bin/bash ../libtool --mode=install /usr/bin/install -c 'libmp3lame.la' '/usr/local/lib/libmp3lame.la'
        /usr/bin/install -c .libs/libmp3lame.lai /usr/local/lib/libmp3lame.la
        /usr/bin/install -c .libs/libmp3lame.a /usr/local/lib/libmp3lame.a
        chmod 644 /usr/local/lib/libmp3lame.a
        ranlib /usr/local/lib/libmp3lame.a
        PATH="$PATH:/sbin" ldconfig -n /usr/local/lib
        ----------------------------------------------------------------------
        Libraries have been installed in:
           /usr/local/lib

        If you ever happen to want to link against installed libraries
        in a given directory, LIBDIR, you must either use libtool, and
        specify the full pathname of the library, or use the `-LLIBDIR'
        flag during linking and do at least one of the following:
           - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
             during execution
           - add LIBDIR to the `LD_RUN_PATH' environment variable
             during linking
           - use the `-Wl,--rpath -Wl,LIBDIR' linker flag
           - have your system administrator add LIBDIR to `/etc/ld.so.conf'

        See any operating system documentation about shared libraries for
        more information, such as the ld(1) and ld.so(8) manual pages.
        ----------------------------------------------------------------------
        make[3]: Nothing to be done for `install-data-am'.
        make[3]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/libmp3lame'
        Making install in frontend
        make[1]: Entering directory `/home/reach121/lame-3.98.4/frontend'
        make[2]: Entering directory `/home/reach121/lame-3.98.4/frontend'
        test -z "/usr/local/bin" || /bin/mkdir -p "/usr/local/bin"
        /bin/bash ../libtool --mode=install /usr/bin/install -c 'lame' '/usr/local/bin/lame'
        /usr/bin/install -c lame /usr/local/bin/lame
        make[2]: Nothing to be done for `install-data-am'.
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/frontend'
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/frontend'
        Making install in Dll
        make[1]: Entering directory `/home/reach121/lame-3.98.4/Dll'
        make[2]: Entering directory `/home/reach121/lame-3.98.4/Dll'
        make[2]: Nothing to be done for `install-exec-am'.
        make[2]: Nothing to be done for `install-data-am'.
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/Dll'
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/Dll'
        Making install in debian
        make[1]: Entering directory `/home/reach121/lame-3.98.4/debian'
        make[2]: Entering directory `/home/reach121/lame-3.98.4/debian'
        make[2]: Nothing to be done for `install-exec-am'.
        make[2]: Nothing to be done for `install-data-am'.
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/debian'
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/debian'
        Making install in doc
        make[1]: Entering directory `/home/reach121/lame-3.98.4/doc'
        Making install in html
        make[2]: Entering directory `/home/reach121/lame-3.98.4/doc/html'
        make[3]: Entering directory `/home/reach121/lame-3.98.4/doc/html'
        make[3]: Nothing to be done for `install-exec-am'.
        test -z "/usr/local/share/doc/lame/html" || /bin/mkdir -p "/usr/local/share/doc/lame/html"
        /bin/mkdir: cannot create directory `/usr/local/share/doc': No such file or directory
        make[3]: *** [install-pkghtmlDATA] Error 1
        make[3]: Leaving directory `/home/reach121/lame-3.98.4/doc/html'
        make[2]: *** [install-am] Error 2
        make[2]: Leaving directory `/home/reach121/lame-3.98.4/doc/html'
        make[1]: *** [install-recursive] Error 1
        make[1]: Leaving directory `/home/reach121/lame-3.98.4/doc'
        make: *** [install-recursive] Error 1

        ****  Installation failed. Aborting package creation.

        Cleaning up...OK

        Bye.

        reach121@youngib:~/lame-3.98.4$
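    Editor's note, not part of the original question: the failing step in the log is the creation of /usr/local/share/doc during checkinstall's install phase, a known rough edge of checkinstall's filesystem translation. Two commonly suggested workarounds, offered here as a hedged sketch rather than a verified fix for this exact setup, are to pre-create the directory on the real filesystem, or to disable filesystem translation:

        # Workaround 1: create the missing parent directory, then re-run checkinstall
        sudo mkdir -p /usr/local/share/doc

        # Workaround 2: disable checkinstall's filesystem translation
        sudo checkinstall --fstrans=no --pkgname=lame-ffmpeg --pkgversion="3.98.4" \
            --backup=no --default --deldoc=yes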

    Read the article

  • Limiting Audit Exposure and Managing Risk – Q&A and Follow-Up Conversation

    - by Tanu Sood
    Thanks to all who attended the live ISACA webcast on Limiting Audit Exposure and Managing Risk with Metrics-Driven Identity Analytics. We were really fortunate to have Don Sparks from ISACA moderate the webcast featuring Stuart Lincoln, Vice President, IT P&L Client Services, BNP Paribas, North America, and Neil Gandhi, Principal Product Manager, Oracle Identity Analytics. Stuart's insights, given his team's role in providing IT for P&L Client Services and his tremendous experience in identity management and establishing sustainable compliance programs, were a true value-add at yesterday's webcast.

    And if you are a healthcare organization looking to solve your compliance and security challenges, we recommend you join us for a live webcast on Tuesday, November 29 at 10 a.m. PT. The webcast will feature experts from Kaiser Permanente, PricewaterhouseCoopers and Oracle, and the discussion will focus on the compliance challenges a healthcare organization faces and best practices for tackling them. Here are the details:

    Healthcare IT News Webcast: Managing Risk and Enforcing Compliance in Healthcare with Identity Analytics
    Tuesday, November 29, 2011
    10:00 a.m. PT / 1:00 p.m. ET
    Register Today

    The ISACA webcast replay is now available on-demand and the slides are also available for download. Since we didn't have time to address all the questions we received during the live Q&A portion of the webcast, we have captured responses to the remaining questions here. Please continue to provide us your feedback and insights from your experience in deploying identity compliance solutions.

    Q. Can you please clarify the mechanism utilized to populate the Identity Warehouse from each individual application's access management function / files?

    A. Oracle Identity Analytics (OIA) supports direct imports from applications. Data collection is based on Extract, Transform and Load (ETL), which eliminates the need to write connectors to different applications. Oracle Identity Analytics' import engine supports complex entitlement feeds saved as either text files or XML. The imports can be scheduled on a periodic basis or triggered as needed. If the applications are synchronized with a user provisioning solution like Oracle Identity Manager, Oracle Identity Analytics has a seamless integration to pull in data from Oracle Identity Manager.

    Q. Can you provide a short summary of the new features in your latest release of Oracle Identity Analytics?

    A. Oracle recently announced availability of enhanced Oracle Identity Analytics. This release focused on easing the certification process by offering risk-analytics-driven certification, advanced certification screens, business-centric views, and significant improvements in performance, including 3X faster data imports, 3X faster certification campaign generation, and advanced auto-certification features that will allow organizations to improve user productivity by up to 80%. Closed-loop risk feedback and IT policy monitoring with Oracle Identity Manager, a leading user provisioning solution, allow for more accurate certification reviews. And OIA's improved performance enables customers to scale compliance initiatives supporting millions of user entitlements across thousands of applications, whether on premise or in the cloud, without compromising speed or integrity.

    Q. Will ISACA grant a CPE credit for attending this ISACA-sponsored webinar today?

    A. From ISACA: "Hello and thank you for your interest in the 2011 ISACA Webinar Program! Unfortunately, there are no CPEs offered for this program, archived or live. We will be looking into the feasibility of offering them in the future."

    Q. Would you be able to use this to help manage licenses for software? That is to say, could it track software that is not used by a user, thus eliminating the software license?

    A. OIA's integration with Oracle Identity Manager, a leading user provisioning solution, allows organizations to detect ghost accounts or unused accounts via account reconciliation. Based on the company's policies, this could trigger an automated workflow for account deletion or ask for further investigation. Closed-loop feedback between the two solutions would then allow visibility into the complete audit trail of when the account was detected, the action taken, by whom, when, and the current status.

    Q. We have quarterly attestations and .xls mechanisms are not working. Once the identity data is correlated in Identity Analytics, do you then automate access certification?

    A. OIA's identity warehouse analyzes and correlates identity data across various resources, which allows OIA to determine a user's risk profile and who the access review request should go to, along with all the relevant access details of the user. The access certification manager gets a notification on what to review and when, and the relevant data is presented in a business-friendly screen. Based on the result of the access certification process, actions are triggered and results recorded and archived. Access review managers have visual risk indicators that also allow them to prioritize access certification tasks and efforts.

    Q. How does Oracle Identity Analytics work with Cloud Security?

    A. For enterprises looking to build their own cloud(s), Oracle offers a set of security services that cloud developers can leverage, including Oracle Identity Analytics. For enterprises looking to manage their compliance requirements without hosting those in-house, and instead having a hosting provider offer managed Identity Management services to the organization, Oracle Identity Analytics can be leveraged much the same way as in an on-premise (within the enterprise) environment. In fact, organizations today are leveraging Oracle Identity Analytics to manage identity compliance in both these ways.

    Q. Would you recommend this as a cost-effective solution for a smaller organization with around 2,500 users?

    A. The key return on investment (ROI) of Oracle Identity Analytics is derived from automating compliance processes, thereby eliminating administrative overhead, minimizing errors, maintaining cost- and time-effective sustainable compliance processes, and minimizing audit exposures and penalties. Of course, there are other tangible benefits that are derived from an Oracle Identity Analytics implementation, as outlined in the webcast. For a quantitative analysis of your requirements and a potential ROI calculation, we recommend you refer to the Forrester study on the Total Economic Impact of Oracle Identity Analytics. For an in-person discussion, please email Richard Caldwell.

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you’re using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account.

    Installation and Setup

    Just download and install the application with the defaults. When the application launches you’ll be prompted to enter your username and email to get a registration key, or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts, then double-click on the New Account icon. Next enter your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you’ll see the Connection Success screen; just close out of it.

    Browse and Manage Files

    Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It’s very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the explorer.

    Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multiple-tabbed view so you can easily keep track of your different accounts and local machine. It also lets you create new storage buckets directly in the Explorer, or delete buckets as well. Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3. It is faster and doesn’t cost money, compared to moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well.

    Not all features are available in the free version, and when one is not, you’ll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check, and you can send the information to the CloudBerry team for assistance.

    Delete Files from Amazon S3

    To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the context menu. Click Yes in the confirmation dialog box, then you can watch the progress in the bottom section of the explorer as your files are deleted.

    Conclusion

    CloudBerry Explorer’s free version has several neat features that will allow you easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of data stored. The price drops from $0.15 per GB to only $0.10 per GB.
    If you’re a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS.

    Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes: HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4.

    Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving HTML5 support, Cloud, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8.

    As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:

    - GlassFish Server Open Source Edition 4.1 is scheduled for 2014.
    - We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported.
    - As we head towards Java EE 8:
      - The trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation.
      - The Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
    - Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support; specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided from WebLogic Server. Oracle GlassFish Server will not be releasing a 4.x commercial version.

    Expanding on that last bullet, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:

    - Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server.
    - GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here).
    - GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are quite a few configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
    - Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, and Oracle Enterprise Manager, and are entitled to support for the underlying Oracle JDK.

    To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle.

    Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • JUDCon 2013 Trip Report

    - by reza_rahman
    JUDCon (JBoss Users and Developers Conference) 2013 was held in historic Boston on June 9-11 at the Hynes Convention Center. JUDCon is the largest get-together for the JBoss community; it has gone global in recent years but has its roots in Boston. The JBoss folks graciously accepted a Java EE 7 talk from me and actually referenced my talk in their own sessions. I am proud to say this is my third time speaking at JUDCon/the Red Hat Summit over the years (this was the first time on behalf of Oracle). I had great company, with many of the rock stars of the JBoss ecosystem speaking, such as Lincoln Baxter, Jay Balunas, Gavin King, Mark Proctor, Andrew Lee Rubinger, Emmanuel Bernard and Pete Muir. Notably missing from JUDCon were Bill Burke, Burr Sutter, Aslak Knutsen and Dan Allen. Topics included Java EE, Forge, Arquillian, AeroGear, OpenShift, WildFly, Errai/GWT, NoSQL, Drools, jBPM, OpenJDK, Apache Camel and JBoss Tools/Eclipse.

    My session, titled "JavaEE.Next(): Java EE 7, 8, and Beyond", went very well and it was a full house. This is our main talk covering the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1, Java EE Concurrency and the rest of the APIs in Java EE 7. I also briefly talked about the possibilities for Java EE 8. The slides for the talk are here: JavaEE.Next(): Java EE 7, 8, and Beyond from reza_rahman.

    Besides presenting my talk, it was great to catch up with the JBoss gang and attend a few interesting sessions. On Sunday night I went to one of my favorite hangouts in Boston - the exalted Middle East Club, as Rolling Stone refers to it (another cool spot in an otherwise pretty boring town is "the Church"). As contradictory as it might sound to the uninitiated, the Middle East Club is possibly the best place in Boston to simultaneously get great Middle Eastern (primarily Lebanese) food and great underground metal. For folks with a bit more exposure, this is probably not contradictory at all, given bands like Acrassicauda and documentaries like Heavy Metal in Baghdad. Luckily for me they were featuring a few local thrash metal bands from the greater Boston area. It wasn't too bad considering it was primarily amateur twenty-something guys (although I'm not sure I'm a qualified critic any more, since I all but stopped playing at about that age). It's great that Boston has the Middle East as an incubator to keep the rock, metal, folk, jazz, blues and indie scene alive. I definitely enjoyed JUDCon/Boston and hope to be part of the conference next year again.
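    As a point of reference for the JMS 2 changes the talk covered, here is a minimal sketch of the JMS 2.0 simplified API introduced with Java EE 7. It is not taken from the session slides; the bean name and queue JNDI name are hypothetical:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.inject.Inject;
        import javax.jms.JMSContext;
        import javax.jms.Queue;

        @Stateless
        public class OrderNotifier {

            // A single injected JMSContext replaces the JMS 1.1
            // Connection/Session/MessageProducer boilerplate.
            @Inject
            private JMSContext context;

            @Resource(lookup = "jms/ordersQueue") // hypothetical JNDI name
            private Queue ordersQueue;

            public void notifyShipped(String orderId) {
                // createProducer() returns a JMSProducer; send() builds and
                // sends a TextMessage in a single call.
                context.createProducer().send(ordersQueue, "Shipped: " + orderId);
            }
        }

    The fluent JMSProducer also allows per-send settings, e.g. context.createProducer().setTimeToLive(1000).send(...), which under JMS 1.1 required several separate calls.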

    Read the article

  • Xobni Plus for Outlook [Review]

    - by The Geek
    Overview

    Xobni Plus is an addin that brings a sidebar to Outlook which allows you to search through your inbox and contacts a lot more easily. It provides the ability to search and keep track of your favorite social networks. Searching with Xobni is a lot more powerful than the default search feature in Outlook. It lets you drill down your searches to conversations, email, links, and attachments. It now supports both the 32- and 64-bit versions of Outlook 2010.

    Installation & Setup

    Installation is easy following the wizard. After completing the wizard you can tell your friends on Facebook and Twitter that you are now using it. You can also decide to join their Product Improvement Program if you want. After installation, when you open Outlook, Xobni appears as a sidebar on the right side. Don’t worry about it always being in the way, as you can hide it if you need more room for other Outlook functions. After Xobni free is installed, you can upgrade to the Plus version at any time. A new window will open up and you can use your credit card, PayPal, or redeem a code if you have one.

    Features & Use

    Where to begin with the amount of features available in Xobni Plus? It really has an amazing amount of cool features. Of course you’ll have all of the features of the free version, which we previously covered… and a lot more. After Xobni is installed you’ll notice a section for it on the Ribbon. From here you can search Xobni, show or hide the sidebar, and change other options. It allows you to easily keep up with various social networks like Facebook, Twitter, and LinkedIn. Check out email analytics and contact ranks. Click on the Files Exchanged tab to search for specific attachments. Quickly search links exchanged with your contacts; hover over a link to get a preview of what it entails. It gives you the ability to index all of your Yahoo mail as well, without the need for purchasing Yahoo Plus! Then your Yahoo messages appear in the Xobni sidebar, and when you select a contact you can see related messages from your Yahoo account. Easily index all of your mail, including Yahoo mail, for better organization and faster search results. There are several options you can select to change the way Xobni works, from setting up your Yahoo email to indexing options and much more.

    Additional Features of Xobni Plus:

    - Advanced search capabilities: filter results, Boolean & phrase search, ability to search appointments & tasks, advanced search builder
    - Search unlimited PST data files
    - Xobni contacts in the compose screen
    - Find links exchanged with your contacts
    - View calendar appointments
    - One year premium tech support
    - No ads!

    Performance

    We ran Xobni Plus on Outlook 2010 32-bit on a dual-core AMD Athlon system with 4GB of RAM and found it to run quite smoothly. However, we did notice it would sometimes slow down launching Outlook, especially if other apps are running at the same time.

    Product Support

    When you buy a license for Xobni Plus you get a full year of premium tech support. They provide a Questions and Answers page on their site where you can run a search query and answers appear instantly. You can contact support directly as a Plus member through their web form, and they advise the turnaround time is 2 business days. However, when we tested it, we received a response within 24 hours. They also provide an FAQ and a community forum, and you can download the owner's manual in PDF format from the support page.
    Conclusion

    Xobni Plus is a very powerful addin for Outlook and includes a lot more features than we covered in this review. You can download the Xobni free edition, which includes an 8-day free trial of the Plus version; this provides a good way to start getting familiar with it. Then upgrade to Xobni Plus at any time for $29.95. Once you get started, you’ll find the sidebar is nicely laid out and intuitive to use. If you live out of Outlook during the day, Xobni Plus is a great addition for fast and powerful searches. It provides an easy way to keep all of your contacts and messages well organized and easy to find. Xobni Plus works with XP, Vista, and Windows 7 (32 & 64-bit editions), and with Outlook 2003, 2007, and both 32 & 64-bit editions of Outlook 2010.

    Download Xobni Plus
    Download Xobni Free Edition

    Rating
    Installation: 8
    Ease of Use: 8
    Features: 9
    Performance: 8
    Product Support: 8

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-11

    - by Bob Rhubart
    Whiteboards, not red carpets. OTN Architect Day Los Angeles. Oct 25. Free event.
    Yes, it's Tinseltown, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems, and lunch is on us. Register now. Thursday, October 25, 2012, 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.

    JDeveloper extensions where? | Peter Paul van de Beek
    "Where does the downloaded stuff go after you installed JDeveloper extensions, like SOA Composite Editor, Oracle BPM Studio, or AIA Service Constructor?" Peter Paul van de Beek has the answer.

    Using Apache Derby Database with WebLogic (the express way) | Frank Munz
    Another technical how-to video from Dr. Frank Munz.

    Compensation Hello World | Ronald van Luttikhuizen
    Oracle ACE Director Ronald van Luttikhuizen's post addresses several questions that came up during the "Effective Fault Handling in SOA Suite 11g" session that he and fellow Oracle ACE Guido Schmutz presented at Oracle OpenWorld.

    Oracle Fusion Middleware Security: OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes," the blog explains, "contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Maximum Availability Whitepaper for IDM 11gR2 | Oracle Fusion Middleware Security
    The Oracle Fusion Middleware A-Team shares an overview of and a link to a new white paper: "Identity Management 11.1.2 Enterprise Deployment Blueprint."

    Thought for the Day
    "The trouble with the world is that the stupid are sure and the intelligent are full of doubt." — Bertrand Russell (May 18, 1872 – February 2, 1970)
    Source: SoftwareQuotes.com

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me!

    Long ago and far away, I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah, I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of POP email addresses and even an Exchange address to my client's server... times have changed.

    What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have many more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept. And yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and the junk email would then stop, but that hasn't happened yet, any more than you'll find high-quality hearing aids at the checkout line of Safeway despite all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes, I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service, just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

    Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found:

    I installed it, and had to do a couple of things:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's Exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client's settings for Outlook, I told it to put all spam there.

    2) It didn't seem to be doing anything. There's a ribbon button where you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.

    3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering.

    4) Day 2 seemed to go about like Day 1... but I hung in there.

    5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :)

    Here's a screenshot of the stats after just about 48 hours of being onboard. Note that all the ones blocked by me were during Days 1 and 2... I've blocked none today, and everything is blocked.

    Stay in the 'Light!

    Read the article

  • Yes, I did it - Skydiving in Mauritius

    Finally, I did it, or better said, we did it. Back in November last year I saw the big billboard advertisement for Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this was going to be the perfect birthday gift for my wife. Simply out of curiosity, I would join her tandem jump with a second instructor. Due to her pregnancy with our son, I had to be patient...

    But then finally her birthday arrived, and at our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... recovery from her cesarean is fine, local weather conditions are gorgeous, and the children were under the surveillance of my mum, who is spending her annual holidays on the island. So, after a late wake-up in the morning, we packed our stuff and off we went. According to Google Maps we had to drive for (only) roughly 50km, but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart on the north-east coast. Anyway, we were not in a hurry and arrived there shortly after noon. The access roads to the airfield are just small, worn-down paths through sugar cane fields and, according to our daughter, "it's bumpy!" True, true, true...

    The facilities at Skydive Austral Mauritius are complete, except for food. Enough space for parking, easy handling at the reception, and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs and a small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearted! Having the kids with us was no issue at all. Quite the opposite: our daughter was allowed to discover more things than we adults did. Even visiting the small airplane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces were one of the best parts of the wait. Great mental preparation, as my wife was getting more anxious about her first jump...

    First, we got some information about the procedures on the plane: how to get seated, how to strap in tight with our instructors, and how to get ready for the jump off the plane as soon as we reached the height of 10,000 ft. All well explained and easy to understand, after all. Next, we met our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed at their job. From that point on, the DVD session / recording for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight with an Airbus or a Boeing is astronomic! The climb up to 10,000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagoons near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sun-bathed and looked out over "iced-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius to get a better idea.

    The moment of truth, or better said, the point of no return, came after approximately 25 minutes. The door opens, we move into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and Wooooohooooo! through the clouds... It is so amazing and breath-taking! So indescribable! You have to experience this yourself! Some seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could soon hear and see us, and the landing was easy going. We never had any doubts or fear about our instructors. They did a great job and we are looking forward to booking our next jump. I might even consider taking educational classes on skydiving and earning a license.

    By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.

    Read the article

  • Visual Tree Enumeration

    - by codingbloke
    I feel compelled to post this blog because I find I'm repeatedly posting this same code in silverlight and windows-phone-7 answers on Stackoverflow. One common task that we feel we need to do is burrow into the visual tree in a Silverlight or Windows Phone 7 application (actually, more recently I found myself doing this in WPF as well). This allows access to details that aren't exposed directly by some controls. A good example of this sort of requirement is found in the "Restoring exact scroll position of a listbox in Windows Phone 7" question on Stackoverflow. This required that the scroll position of the scroll viewer internal to a listbox be accessed.

    A caveat

    One caveat here is that we should seriously challenge the need for this burrowing, since it may indicate that there is a design problem. Burrowing into the visual tree, or indeed burrowing out to containing ancestors, could represent significant coupling between module boundaries, and that generally isn't a good idea. Why isn't this idea just cast aside as a no-no? Well, the whole concept of a "Templated Control", which is in extensive use in these applications, opens the coupling between the content of the visual tree and the internal code of a control. For example, I can completely change the appearance and positioning of elements that make up a ComboBox. The ComboBox control relies on specific template parts with set names of a specified type being present in my template. Rightly or wrongly, this does kind of give license to writing code that has similar coupling.

    Hasn't this been done already?

    Yes it has. There are a number of blogs already out there with similar solutions. In fact, if you are using the Silverlight Toolkit, the VisualTreeExtensions class already provides this feature. However, I prefer my specific code because of the simplicity principle I hold to: only write the minimum code necessary to give all the features needed. In this case I add just two extension methods, Ancestors and Descendents; note I don't bother with "Get" or "Visual" prefixes. Also, I haven't added Parent or Children methods, nor additional "AndSelf" methods, because all but Children is achievable with the addition of some other Linq methods. I decided to give Descendents an additional overload for depth, hence a depth of 1 is equivalent to Children, but this overload is a little more flexible than simply Children.
    So here is the code:-

    VisualTreeEnumeration

    using System;
    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Media;

    public static class VisualTreeEnumeration
    {
        public static IEnumerable<DependencyObject> Descendents(this DependencyObject root, int depth)
        {
            int count = VisualTreeHelper.GetChildrenCount(root);
            for (int i = 0; i < count; i++)
            {
                var child = VisualTreeHelper.GetChild(root, i);
                yield return child;
                if (depth > 0)
                {
                    // Recurse with depth - 1 (not --depth, which would erode the
                    // remaining depth across sibling iterations as well as levels).
                    foreach (var descendent in Descendents(child, depth - 1))
                        yield return descendent;
                }
            }
        }

        public static IEnumerable<DependencyObject> Descendents(this DependencyObject root)
        {
            return Descendents(root, Int32.MaxValue);
        }

        public static IEnumerable<DependencyObject> Ancestors(this DependencyObject root)
        {
            DependencyObject current = VisualTreeHelper.GetParent(root);
            while (current != null)
            {
                yield return current;
                current = VisualTreeHelper.GetParent(current);
            }
        }
    }

    Usage examples

    The following are some examples of how to combine the above extension methods with Linq to generate the other axis scenarios that tree traversal code might require.

    Missing Axis Scenarios

    var parent = control.Ancestors().Take(1).FirstOrDefault();
    var children = control.Descendents(0);
    var previousSiblings = control.Ancestors().Take(1)
        .SelectMany(p => p.Descendents(0).TakeWhile(c => c != control));
    var followingSiblings = control.Ancestors().Take(1)
        .SelectMany(p => p.Descendents(0).SkipWhile(c => c != control).Skip(1));
    var ancestorsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
        .Concat(control.Ancestors());
    var descendentsAndSelf = Enumerable.Repeat((DependencyObject)control, 1)
        .Concat(control.Descendents());

    You might ask why I don't just include these in VisualTreeEnumeration. I don't, on the principle of only including code that is actually needed. If you find that one or more of the above is needed in your code, then go ahead and create additional methods. One of the downsides of extension methods is that they can make finding the method you actually want in Intellisense harder. Here are some real-world usage scenarios for these methods:-

    Real World Scenarios

    // Gets the internal ScrollViewer of a ListBox
    ScrollViewer sv = someListBox.Descendents().OfType<ScrollViewer>().FirstOrDefault();

    // Get all text boxes in the current UserControl:-
    var textBoxes = this.Descendents().OfType<TextBox>();

    // All UIElement direct children of the layout root grid:-
    var topLevelElements = LayoutRoot.Descendents(0).OfType<UIElement>();

    // Find the containing ListBoxItem for a UIElement:-
    var container = elem.Ancestors().OfType<ListBoxItem>().FirstOrDefault();

    // Seek a button with the name "PinkElephants" even if outside of the current Namescope:-
    var pinkElephantsButton = this.Descendents()
        .OfType<Button>()
        .FirstOrDefault(b => b.Name == "PinkElephants");

    // Clear all checkboxes with the name "Selector" in a TreeView
    foreach (CheckBox checkBox in elem.Descendents()
        .OfType<CheckBox>().Where(c => c.Name == "Selector"))
    {
        checkBox.IsChecked = false;
    }

    The last couple of examples above demonstrate a common requirement: finding controls that have a specific name. FindName will often not find these controls because they exist in a different namescope. Hope you find this useful; if not, I'm just glad to be able to link to this blog in future Stack Overflow answers.

    Read the article

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it at someone else's website in order to damage their reputation. For instance, someone could buy the domain child-pornography.com and point it to the address 64.34.119.12, which is the address behind stackoverflow.com, and people navigating to the domain in question would end up viewing content from Stack Exchange, which would be detrimental to Stack Exchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested. Here is what I obtained: I personally found this user experience terrible, as someone could think that Stack Exchange is in favor of child pornography and awaiting support from the community to create a Q&A site about it. I tested with other websites and experienced other behaviors, which I would categorize as follows: 1 - Useful 404 page (happens with stackoverflow.com): For me this is the worst way of handling it, as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the stronger the impression that the targeted website would be willing to help with child pornography. 2 - Redirection (happens with microsoft.com): For instance, when accessing child-pornography.com you get redirected to www.microsoft.com. It isn't as bad as the above, since the offending domain name never appears alongside the targeted website's content, but it is still bad in my opinion, as it gives the impression that the targeted website bought the offending domain and redirected it to their website to get more traffic. 3 - Server error (happens with lemonde.fr): You get an error from the web server whose page doesn't contain any content that can be associated with the targeted website (e.g. a default Apache 404 page, or a completely blank page). I believe that is good, as the identity of the targeted website isn't revealed. Above are the various behaviors I experienced, but I also thought about a fourth way of dealing with this, described below. 4 - Disclaimer page (I haven't found any website implementing this technique): Display a message such as: "You ended up here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have this domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at the application layer (good if you don't have control over the web server, which happens with some hosting solutions), allows you to protect yourself from any liability, and offers to redirect the visitor to your own website. Which of the above options would you implement to deal with malicious domain linking (IMO only options 3 and 4 are worth considering)?
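    For what it's worth, option 4 could also be triggered at the web-server level by matching on the Host header. A minimal sketch for Apache with mod_rewrite (the domain and disclaimer path are hypothetical):

    # Any request whose Host header is not one of our own domains
    # gets routed to the disclaimer page instead of real content.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
    RewriteRule ^ /unknown-host-disclaimer.php [L]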

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'server'.

    Application server:
    - Should be able to run CentOS (or another light Linux distro)
    - Should be able to run Apache
    - Should be able to run PHP
    - Should be able to run GD (so it does rely on its CPU)
    - Should be extremely reliable and fast

    Database server:
    - Should be able to run MySQL
    - Should be able to... well, do nothing else :P
    - Should be extremely reliable and fast

    Storage server:
    - Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.)
    - Should be able to do nothing else
    - Should be extremely reliable and fast

    So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay for both "including CDN" and "excluding CDN" when you decide to enable the delivery network, or do you only pay for "including CDN"? Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications for a nameserver? Can I write some software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answer so far: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).
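    For a sense of scale, this is roughly all a BIND zone file contains (hypothetical names and RFC 5737 example addresses) - far less work than writing and hardening your own DNS daemon:

    ; Illustrative zone for example.com
    $TTL 86400
    @    IN  SOA ns1.example.com. admin.example.com. (
             2011052301 ; serial
             3600       ; refresh
             900        ; retry
             604800     ; expire
             86400 )    ; negative-caching TTL
         IN  NS  ns1.example.com.
    ns1  IN  A   192.0.2.10
    www  IN  A   192.0.2.20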

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) that we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of the Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I also ran some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% score on Google's Page Speed tool, which is quite cool for a site with some ads on it :) Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved our response time from 650 ms to about 500 ms, which is good (still not great, but definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period. Have you ever seen the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check whether it is what Google wants? Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have more information. Thanks again!
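    For anyone wanting to reproduce the server-side half of this monitoring, Apache can log per-request durations itself; a sketch (the log path and format nickname are arbitrary; %D is the request duration in microseconds):

    # Standard fields plus request duration in microseconds (%D)
    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timing
    CustomLog /var/log/apache2/access_timing.log timing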

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the previous part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like. Dependency Matrix: The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and color-coding that indicates its nature. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle; that is, type A refers to type B and type B also refers to type A. But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done, you can click the Back button on the toolbar. Dependency Graph: The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph was determining what is shown. This was not readily obvious; I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see. Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font. One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type locked the highlighting in place. I did find a perhaps-unintended work-around to the color-coding: you can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color-coding. CQL: NDepend uses a code query language (CQL) to work with your code just as if it were a database. CQL cannot be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report – I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further, other than to say that any queries you author will appear in the custom group. Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

    WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive; I don't see a really good reason to enforce that. CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined (see the sketch at the end of this post). Up Next: My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in. ** View Part 1 of the Evaluation ** ** View Part 2 of the Evaluation ** Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.
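    As promised above, here is the kind of architectural constraint I have in mind, sketched in the same WARN IF pattern (the namespace names are hypothetical and I haven't run this exact query, so treat the IsDirectlyUsing clause as indicative rather than verified):

    WARN IF Count > 0 IN SELECT TYPES WHERE
        NameLike "MyCompany.UI" AND
        IsDirectlyUsing "MyCompany.DataAccess"

    A rule like this would let an architect catch layering violations on every build instead of in code review.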

    Read the article

  • Unable to connect to mail server via IMAP and roundcube

    - by mrhatter
    I am having trouble getting the final parts of my mail server up and working. I followed this tutorial to get everything set up on the mail server side. I have installed Roundcube for webmail and configured it, but it says "error connecting, connection refused" when attempting to connect to the server via IMAP. This is through the "test imap" section of its installer. It is also giving me an error message about permissions for its log and temp folders, but that's not as important as actually getting mail to work. I have also tried connecting to the mail server using Thunderbird; however, it cannot establish a connection either, and I know my login information is correct. I know that the databases are working correctly, based on the Roundcube installer telling me that they have been "successfully initialized". Here are my firewall rules:

    -A INPUT -i lo -j ACCEPT
    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
    -A INPUT -p tcp -m tcp --dport 487 -j ACCEPT
    -A OUTPUT -p tcp -m tcp --dport 993 -j ACCEPT
    -A INPUT -j DROP

    which I set up in iptables. I have modified them from what I used in this tutorial. I'm not sure what to try next. Any help would be wonderful! I am using Ubuntu 14.04 server, Apache 2.4.7, Roundcube 1.0.1, and the latest versions of Dovecot and Postfix. The email databases are contained in MySQL. I am running this on a VPS server. UPDATE: I have changed from iptables to ufw. I ran the following commands to set up a basic firewall with ufw:

    ufw default deny
    ufw allow ssh
    ufw allow http
    ufw allow https
    ufw allow imap
    ufw allow imaps
    ufw allow smtp

    I then used telnet to check all of the mail ports. But port 993 isn't working, even though ufw says both 993 and 993/tcp are open. What am I missing?
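    For reference, here is what I plan to check next, to see whether Dovecot is even listening on 993 (hostname changed):

    # Is any process bound to port 993 locally?
    sudo ss -lnt | grep ':993'

    # Does the IMAPS port answer from outside?
    telnet mail.example.com 993

    If nothing is bound, I guess the firewall is a red herring and the imaps listener needs enabling in Dovecot's configuration.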

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with HostGator. GWT notified me about many 403 access-denied pages. I've checked with Firebug too, and even though the browser displays the full page correctly, the HTTP status returned is 403. I've checked the home page, and it's correctly returning a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was a primary domain, but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page.

    Fetch as Google
    URL: http://MYSITE.COM/-----------------REMOVED.html
    Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
    Googlebot Type: Web
    Download Time (in milliseconds): 3899

    HTTP/1.1 403 Forbidden
    Date: Fri, 21 Jun 2013 05:32:15 GMT
    Server: Apache
    P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
    Expires: Mon, 1 Jan 2001 00:00:00 GMT
    Cache-Control: post-check=0, pre-check=0
    Pragma: no-cache
    Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
    Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
    Keep-Alive: timeout=5, max=75
    Connection: Keep-Alive
    Transfer-Encoding: chunked
    Content-Type: text/html; charset=utf-8

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
    <head>
    <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
    <meta http-equiv="content-type" content="text/html; charset=utf-8" />
    <meta name="robots" content="index, follow" />
    <meta name="keywords" content="" />
    <<<<<<TRIMMED>>>>>>>>>>>>>>
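    One thing I still plan to rule out is file permissions, since add-on domains on suPHP-style shared hosts often return 403 for group-writable files; something along these lines (paths illustrative, not yet run):

    # Directories 755, files 644 - the layout most shared hosts expect
    find ~/public_html/addon -type d -exec chmod 755 {} \;
    find ~/public_html/addon -type f -exec chmod 644 {} \;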

    Read the article

  • iOS Support with Windows Azure Mobile Services – now with Push Notifications

    - by ScottGu
    A few weeks ago I posted about a number of improvements to Windows Azure Mobile Services. One of these was the addition of an Objective-C client SDK that allows iOS developers to easily use Mobile Services for data and authentication. Today I'm excited to announce a number of improvements to our iOS SDK and, most significantly, our new support for Push Notifications via APNS (Apple Push Notification Service). This makes it incredibly easy to fire push notifications to your iOS users from Windows Azure Mobile Service scripts. Push Notifications via APNS: We've provided two complete tutorials that take you step-by-step through the provisioning and setup process to enable your Windows Azure Mobile Service application with APNS, including all of the steps required to configure your application for push in the Apple iOS provisioning portal: Getting started with Push Notifications – iOS, and Push notifications to users by using Mobile Services – iOS. Once you've configured your application in the Apple iOS provisioning portal and exported your APNS push certificate, it's just a matter of uploading it to Mobile Services using the Windows Azure admin portal. Clicking "upload" within the "Push" tab of your Mobile Service allows you to browse your local file-system and locate/upload your exported certificate. As part of this you can also select whether you want to use the sandbox (dev) or production (prod) Apple service. Now, the code to send a push notification to your clients from within a Windows Azure Mobile Service is as easy as the code below:

    push.apns.send(deviceToken, {
        alert: 'Toast: A new Mobile Services task.',
        sound: 'default'
    });

    This will cause Windows Azure Mobile Services to connect to APNS and send a notification to the iOS device you specified via the deviceToken. Check out our reference documentation for full details on how to use the new Windows Azure Mobile Services apns object to send your push notifications. Feedback Scripts: An important part of working with any PNS (Push Notification Service) is handling feedback for expired device tokens and channels. This typically happens when your application is uninstalled from a particular device and can no longer receive your notifications. With Windows Notification Services you get an instant response from the HTTP server. Apple's Notification Service works in a slightly different way and provides an additional endpoint you can connect to and poll for a list of expired tokens. As with all of the capabilities we integrate with Mobile Services, our goal is to allow developers to focus more on building their app and less on building infrastructure to support their ideas. Therefore we knew we had to provide a simple way for developers to integrate feedback from APNS on a regular basis. This week's update includes a new screen in the portal that allows you to optionally provide a script to process your APNS feedback – and it will be executed by Mobile Services on an ongoing basis. This script is invoked periodically while your service is active. To poll the feedback endpoint you can simply call the apns object's getFeedback method from within this script:

    push.apns.getFeedback({
        success: function(results) {
            // results is an array of objects with deviceToken and time properties
        }
    });

    This returns you a list of invalid tokens that can now be removed from your database.
    iOS Client SDK improvements: Over the last month we've continued to work with a number of iOS advisors to make improvements to our Objective-C SDK. The SDK is being developed under an open source license (Apache 2.0) and is available on GitHub. Many of the improvements are behind the scenes to improve performance and memory usage. However, one of the biggest improvements to our iOS Client API is the addition of an even easier login method. Below is the Objective-C code you can now write to invoke it:

    [client loginWithProvider:@"twitter"
                 onController:self
                     animated:YES
                   completion:^(MSUser *user, NSError *error) {
        // if no error, you are now logged in via twitter
    }];

    This code will automatically present and dismiss our login view controller as a modal dialog on the specified controller. This does all the hard work for you and makes login via Twitter, Google, Facebook and Microsoft Account identities just a single line of code. My colleague Josh just posted a short video demonstrating these new features, which I'd recommend checking out. Summary: The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up. Requirements: The platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own). My opinion: We should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing an ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required. Their opinion: We should have a single HTTP API which is used by all projects (internal PHP ones or otherwise). My thoughts: To me, there are a number of issues with using the sole HTTP API approach. It will be very expensive performance-wise: one page request will result in several additional HTTP requests (which, although local, are still requests that Apache will need to handle). You'll lose all of the best features PHP has for OO development, from simple inheritance to the likes of an ORM, which can save you writing a lot of code. For internal projects, the actual process makes me cringe: to get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework. Their thoughts (and my responses): Having one method of doing things keeps things simple. – You'd only do things differently if you were using a different language anyway. It will become robust. – Seeing as the API will run off the library of models, I think my option would be just as robust. What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.
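    To make my side of the argument concrete, here's the rough shape of what I'm picturing (all names purely illustrative): internal PHP projects call the shared models directly, and a thin API layer exposes the same methods to the mobile app and toolbar.

    <?php
    // Illustrative shared model; real ones would be ORM-backed.
    class User
    {
        private $firstName;
        private $lastName;

        public function __construct($firstName, $lastName)
        {
            $this->firstName = $firstName;
            $this->lastName = $lastName;
        }

        public function fullName()
        {
            return $this->firstName . ' ' . $this->lastName;
        }
    }

    $user = new User('Ada', 'Lovelace');

    // Internal PHP caller: a direct object call, no extra HTTP round-trip.
    echo $user->fullName() . "\n";

    // The thin API layer would route an HTTP request to the same method
    // and JSON-encode the result for the non-PHP clients.
    echo json_encode(array('fullName' => $user->fullName())) . "\n";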

    Read the article

  • Are there negative impacts of open source on a commercial environment?

    - by Lostsoul
    I know this is not a good fit for Stack Overflow, but I wasn't sure if it was a good fit for this site either, so let me know if it's not and I'll delete it. I love programming for fun, but my role in my company is not technical. I have always loved the hacker culture and have been trying to drive that openness within my company from day one. My company has a very broad range of products, and there are a few that are not strategic to us, so I wanted to open-source them (so we can focus on what makes us unique and open-source the products that every firm has). Our industry does not open-source (we would be the first firm to try this), and the feedback I'm getting from my management team is that either 1) we'll destroy the industry or 2) all competitive commercial firms will unite against us and we'll be wiped out either way. I disagreed on both points because I think transparency will only grow our industry and our firm (think of McDonald's/KFC sharing their recipes openly: people may copy you, competitors may target you, but customers also may feel more comfortable buying your product. The value-add, I believe, is in the delivery and experience, not in hoarding the recipe). It's a big battle in my firm right now between the IT people, who have seen the positive effects of sharing, and the business people, who think we'll be giving up everything (they prefer we sell the parts we want to open-source, but in their defense this is standard when divesting something). Our industry is very secretive and I don't want to put anyone (even my competitors' employees) out of a job, yet I don't want to protect inefficient people by not being open with everyone. And I've seen so many amazing technologies created in interesting ways just by giving people the freedom to take code apart and put it back together. I'm interested in hearing people's thoughts (they don't have to be about my specific situation; I'm looking for the general lessons). It's a very stressful decision (but one I feel I must make), because if we go the open-source route then there will be no going back. So what are your thoughts? Does open-sourcing apply generally, or is it only really applicable to software? Is it overall good for people in the industry and outside? I'm actually more interested in the negative effects (although positive ones are welcome as well). Update: Long story short, although code is involved, this is not so much about code as it is about the idea of open-sourcing. We are a mid-sized quant hedge fund. We have some unique strategies, but we also have the standard long/short, arbitrage, global macro, etc. funds. We are keeping the unique funds we have, but the other products that everyone else has we are considering open-sourcing (we have put years of work and millions of dollars into them; our funds are pretty popular and our performance is either in the first or second quartile, so I suspect there will be interest, but I don't know to what extent). The goal is not to get a community to work for us or anything; the goal is to let anyone who wants to tinker with it do so and create anything they want (it will not be part of our product line, although I may unofficially allocate some of our staff's time to assist any community that grows). Although the code base is quite large, the value in this is the industry knowledge and approaches we have acquired (there are many books on artificial intelligence and quant trading, but they are often years behind what's really going on, as most firms forbid their staff from discussing what they are doing).
    We are also considering, after we move our clients out, letting the software keep running and outputting the resulting portfolios for free as well, so people can at least see the results (as long as we have available infrastructure). I think our main choices are: we can continue to fight for market share in products that are becoming commoditized, we can shut the funds/products down (and keep the code, but no one outside of our firm will ever learn from it), or we can open-source it and let people do what they want. By open-sourcing it, my idea is that the talent pool in the industry will grow, because right now most of our hires have the same background (CFA, MBA, similar school, same experience, etc., because we can't spend time training people, so the industry 'standardizes' most people and thus the firms themselves start to look/act similar), but this may allow us to identify talent that has never been in the industry before (if we put a GPL license on it, then as people learn from what we did, we can learn from what they do as well, and maybe apply it to other areas of our firm). I see a lot of benefits but not many negatives, while my peers at the company see the opposite.

    Read the article

  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new TeamCity build agent machine which, when started up, tries to connect to the build server and fails. It never shows up in the Connected, Disconnected or Unauthorized agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log:

    [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration - Registering on server http://10.0.10.16, AgentDetails{Name='my-local', AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14', AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424', PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins], AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
    [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Call http://10.0.10.16/RPC2 buildServer.registerAgent3: org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect status code: 504 Gateway Time-out
    [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Connection to TeamCity server is probably lost. Will be trying to restore it. Take a look at logs/teamcity-agent.log for details (unless you're using custom logging).

    (I have edited some identifying data out of this log excerpt.) But I can reach the build server. In fact, tracert shows that it is very nearby:

    Tracing route to TEAMCITYSERVER [10.0.10.16] over a maximum of 30 hops:
    1 <1 ms <1 ms <1 ms 10.0.2.1
    2 <1 ms <1 ms <1 ms TEAMCITYSERVER [10.0.10.16]
    Trace complete.

    I can see a TeamCity login page if I hit http://10.0.10.16 in the browser. The TeamCity service is logging in as the same (local administrator) account as I used to log in and test the network. The build agent is a Windows 2008 Server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled the firewalls on both the Windows and Ubuntu machines. Other VMs with similar configuration connect fine and do not report this error. What could possibly be preventing this connection?
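    For what it's worth, since 504 is a proxy status code, I wonder whether something between the agent and the server is answering instead of TeamCity itself; I intend to test from the agent box with something like this (not yet run):

    # Hit the XML-RPC endpoint directly, bypassing any configured proxy
    curl -v --noproxy '*' http://10.0.10.16/RPC2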

    Read the article

  • Invalid keystore format with SSL in Tomcat 6

    - by strauberry
    I'm trying to set up SSL in my local Tomcat 6 installation. For this, I followed the official How-To, doing the following:

    $JAVA_HOME/bin/keytool -genkey -v -keyalg RSA -alias tomcat -keypass changeit -storepass changeit
    $JAVA_HOME/bin/keytool -export -alias tomcat -storepass changeit -file /root/server.crt

    Then I changed $CATALINA_BASE/conf/server.xml, un-commenting this:

    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               maxThreads="150" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS"
               keystoreFile="/root/.keystore" keystorePass="changeit" />

    After starting Tomcat, I get this exception:

    INFO: Initializing Coyote HTTP/1.1 on http-8080
    30.06.2011 10:15:24 org.apache.tomcat.util.net.jsse.JSSESocketFactory getStore
    SCHWERWIEGEND: Failed to load keystore type JKS with path /root/.keystore due to Invalid keystore format
    java.io.IOException: Invalid keystore format
        at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:633)
        at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:38)
        at java.security.KeyStore.load(KeyStore.java:1185)

    When I look into the keystore with keytool -list, I get:

    root@host:~# $JAVA_HOME/bin/keytool -list
    Enter key store password: changeit
    Key store type: gkr
    Key store provider: GNU-CRYPTO
    Key store contains 1 entry(ies)
    Alias name: tomcat
    Creation timestamp: Donnerstag, 30. Juni 2011 - 10:13:40 MESZ
    Entry type: key-entry
    Certificate fingerprint (MD5): 6A:B9:...C:89:1C

    Obviously, the keystore types are different. How can I change the type, and will this fix my problem? Thank you!
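    One idea I want to try next (untested): the gkr type suggests my $JAVA_HOME points at a GNU Classpath keytool rather than the Sun JDK's, so either switching to the Sun JDK's keytool or forcing the store type when regenerating the key might produce a real JKS keystore:

    # Regenerate the key pair, explicitly requesting a JKS keystore
    $JAVA_HOME/bin/keytool -genkey -v -keyalg RSA -alias tomcat \
        -keypass changeit -storepass changeit -storetype JKS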

    Read the article

< Previous Page | 431 432 433 434 435 436 437 438 439 440 441 442  | Next Page >