Search Results

Search found 1982 results on 80 pages for 'tech kapil'.


  • JMaghreb 2012 Trip Report

    - by arungupta
    JMaghreb is the inaugural Java conference organized by Morocco JUG. It is the biggest Java conference in the Maghreb (five countries in northwest Africa). Oracle was the exclusive platinum sponsor, alongside several other sponsors. Registration for the free conference had to be closed at 1,412, and several folks were already on the waiting list. Rabat with 531 registrations and Casablanca with 426 were the top cities. Some statistics ... 850+ attendees over 2 days, 500+ every day; 30 sessions delivered by 18 speakers from 10 different countries; 10 sessions in French and 20 in English; 6 of the speakers spoke at JavaOne 2012 and 8 will be at Devoxx; attendees from 5 different countries and 57 cities in Morocco; 40.9% qualified themselves as professionals and the rest as students; topics ranged from HTML5, Java EE 7, ADF, JavaFX, MySQL, JCP, Vaadin, and Android to community; the Java EE 6 hands-on lab sold out within 7 minutes and the JavaFX lab in 12 minutes. I gave the keynote along with Simon Ritter, which was basically a recap of the Strategy and Technical keynotes presented at JavaOne 2012. An informal survey during the keynote showed the following numbers: 25% using NetBeans, 90% on Eclipse, 3 on JDeveloper, 1 on IntelliJ. About 10 subscribers to the free online Java Magazine. This digital magazine is a comprehensive source of information for everything Java - subscribe for free!! About 10-15% using Java SE 7. Download JDK 7 and get started today! Even JDK 8 builds have been available for a while now. My second talk explained the core concepts of WebSocket and how JSR 356 is providing a standard API to build WebSocket-driven applications in Java EE 7. TOTD #183 explains how you can easily get started with WebSocket in GlassFish 4. The complete slide deck is available. The next day started with a community keynote by Sonya Barry. Some of us live the life of JCP, JSR, EG, EC, RI, etc. every day, but not everybody does. To address that, Sonya prepared an excellent introductory presentation explaining these terms and how the java.net infrastructure supports Java development. The registration for the lab showed there is a definite demand for these technologies in this part of the world. I delivered the Java EE 6 hands-on lab to a packed room of about 120 attendees. Most of the attendees were able to progress and follow the lab instructions. Some of the attendees did not have a laptop but were taking extensive notes on paper notepads. Several attendees were already using Java EE 6 in their projects, and typically they are the ones asking deep-dive questions. I also gave out three copies of my recently released Java EE 6 Pocket Guide and new GlassFish t-shirts. It definitely feels good to help ~120 more Java developers learn standards-based enterprise Java programming. I also participated in a JCP BoF along with Werner, Sonya, and Badr. Adopt-a-JSR, the java.net infrastructure, how to file a JSR, what an RI is, and other similar topics were discussed in a candid manner. You can follow @JMaghrebConf or check out their Facebook page. java.net published a timely conversation with Badr El Houari - the fearless leader of the Morocco JUG team. Did you know that Morocco JUG stood for the JCP EC elections (ADD LINK)? Even though they did not get elected, they did fairly well. Now some sample tweets from #JMaghreb ... #JMaghreb is over. Impressive for a first edition! Thanks @badrelhouari and all the @MoroccoJUG team ! Since you @speakjava : System.out.println("Thank you so much dear Tech Evangelist ! The JavaFX was pretty amazing !!! 
"); #JMaghreb @YounesVendetta @arungupta @JMaghrebConf Right ! hope he will be back to morocco again and again .. :) @Alji_ @arungupta @JMaghrebConf That dude is a genius ;) Put it on your wall :p @arungupta rocking Java EE 6 at @JMaghrebConf #Java #JavaEE #JMaghreb http://t.co/isl0Iq5p @sonyabarry you are an awesome speaker ;-) #JMaghreb rich more than 550 attendees in day one. Expecting more tomorrow! ongratulations @badrelhouari the organisation was great! The talks were pretty interesting, and the turnout was surprising at #JMaghreb! #JMaghreb is truly awesome... The speakers are unbelievable ! #JavaFX... Just amazing #JMaghreb Charmed by the talk about #javaFX ( nodes architecture, MVC, Lazy loading, binding... ) gotta start using it intead of SWT. #JMaghreb JavaFX is killing JFreeChart. It supports Charts a lot of kind of them ... #JMaghreb The british man is back #JMaghreb I do like him!! #JMaghreb @arungupta rocking @JMaghrebConf. pic.twitter.com/CNohA3PE @arungupta Great talk about the future of Java EE (JEE 7 & JEE 8) Thank you. #JMaghreb JEE7 more mooore power , leeess less code !! #JMaghreb They are simplifying the existing API for Java Message Service 2.0 #JMaghreb good to know , the more the code is simplified the better ! The Glassdoor guy #arungupta is doing it RIGHT ! #JMaghreb Great presentation of The Future of the Java Platform: Java EE 7, Java SE 8 & Beyond #jMaghreb @arungupta is a great Guy apparently #JMaghreb On a personal front, the hotel (Soiftel Jardin des Roses) was pretty nice and the location was perfect. There was a 1.8 mile loop dirt trail right next to it so I managed to squeeze some runs before my upcoming marathon. Also enjoyed some great Moroccan cuisine - Couscous, Tajine, mint tea, and moroccan salad. Visit to Kasbah of the Udayas, Hassan II (one of the tallest mosque in the world), and eating in a restaurant in a kasbah are some of the exciting local experiences. Now some pictures from the event (and around the city) ... And the complete album: Many thanks to Badr, Faisal, and rest of the team for organizing a great conference. They are already thinking about how to improve the content, logisitics, and flow for the next year. I'm certainly looking forward to JMaghreb 2.0 :-)

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared. 1. Borders and Blockbuster bite the dust. I would have never predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September. 2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowe's Black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon. 3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of whom will choose iPhone, BlackBerry, and Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately, stealing market share from BlackBerry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year. 4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have it in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality. 5. Steve Jobs Is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores. 
    Initial impressions show that even though Microsoft is locating its stores near Apple Stores, they are not converting customers, with shoppers citing a lack of assortment and high prices. 6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments. 7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish ebooks (with or without DRM). Progress is slow but steady. 8. NFC makes inroads, RFID treads water. Near Field Communication starts to appear in mobile phones, and retailers beta-test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID. 9. Digital Signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR, but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early. 10. JDA flip-flops again. After announcing their embracing of the .Net architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective C. Everything will be ported to the iPhone and be available on the AppStore. After all, there's not much left to try. This was, of course, a joke, but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outgrowth of their i2 acquisition. Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011, so be on the lookout. Until then here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • How to deal with a poor team leader and a tester manager from hell? [closed]

    - by Google
    Let me begin by explaining my situation and giving a little context. My company has around 15 developers, but we're split up into two different areas. We have a fresh product team and the old product team. The old product team does mostly bug fixes/maintenance and a feature here and there. The fresh product had never been released and was new from the ground up. I am on the fresh product team. The team consists of three developers (myself, another developer, and a senior developer). The senior is also our team leader. Our roles are as follows: Myself: building the administration client as well as build/release stuff. Other dev: building the primary client. Team lead: building the server. In addition to the dev team, we interact with the test manager often. By "we" I mean me, since I do the build stuff and give him the builds to test. Trial 1: The other developer on my team and I have both tried to talk to our manager about our team leader. About two weeks before release we went into his office and had a closed-door meeting before our team lead got to work. We expressed our concerns about the product, its release date, and our team leader. We explained that our team leader had a "rosy" image of the product's state. Our manager seemed to listen to what we said and thanked us for taking the initiative to speak with him about it. He got us an extra two weeks before release. The situation with the leader didn't change. In fact, it got a little worse. While we were using the two weeks to fix issues, he was slacking off quite a bit. Just to name a few things, he installed Windows 8 on his dev machine during this time (he claimed his machine was broken), he wrote a plugin for our office messenger that turned messages into speech, and one time when I went into his office he was making a 3D model in Blender (for "fun"). He felt the product was "pretty good" and ready for release. During this time I dealt with the test manager on a daily basis. For every bug or issue that popped up, he would pretty much attack me personally (regardless of which component the bug was in). The test manager would often push his "views" of what needed to be done with the product. He virtually ordered me to change text on our installer and to add features to the installer and administration client. I tried to explain that his suggestions were "valid ideas" but that it was too close to release to do those kinds of things; to make matters worse, our technical writer had already finished documentation, and such a change would not only affect the dev team but would affect the technical writer and marketing as well. I said I wasn't going to make those changes without consent from marketing, the technical writer, and my manager. He pretty much said I didn't care about the product and didn't do my job. I would like to take a moment to say I take my job seriously and I do my best. I am the kind of person who goes to work 30-40 minutes early and usually leaves 30 minutes later than everyone else. Saying I don't care or don't do my job is just insulting. His "attacks" on me grew from day to day. Every bug that popped up he would usually comment on in some manner that jabbed me and the other developer. "Oh that bug! Yeah that should have been fixed by now, figures! If someone would do their job!" and other similar kinds of comments. Keep in mind 8 out of 10 bugs were in the server and had nothing to do with me and the other developer. That didn't seem to matter. 
    On one occasion things got pretty bad and we almost got into a yelling match, so I decided to stop talking to him altogether. I carried all communication through office email (with my manager cc'd). He never attacked me via email. He still attempted to get aggressive with me in person, but I completely ignored him, and my only response to any question was, "Ask my team leader." or "Ask a product manager." The product launched after our two-week extension. Trial 2: The day after the product launch our team leader went on vacation (thanks....). At this time we got a lot of questions from tech support... major issues with the product. All of these issues were bugs marked "resolved" by our lovely team leader (a typical situation that often popped up). This is where we currently are. The other developer has been with the company for about three years (I've been there only five months) and told me he was going to speak with our manager alone, hoping it would help get our concerns across a little better in a one-on-one. He spoke with the manager and directly addressed all of our concerns regarding our team leader and the test manager giving us (mostly me) hell. Our manager basically said he understood how hard we work, said he had noticed it, and that there was no doubt about it. He said he spoke with the test manager about his temper. Regarding the team leader, he didn't say a whole lot. He suggested we sit down with the team leader and address our concerns (isn't that the manager's job?). We're still waiting to see if anything has changed, but we doubt it. What can we do next? 1) Talk to the team leader (this may strain the relationship and make work awkward). I admit the team leader is generally a nice guy. He is just a horrible leader, and working closely with him is painful. I still don't believe bringing this directly to the team leader would help at all and it may negatively impact the situation. 2) I could quit. Other than this situation the job is pretty fantastic. I really like my other coworkers and we have quite a bit of freedom. 3) I could take the situation with the team leader to one of the owners. I would then be throwing my manager under the bus. 4) I could take the situation with the test manager to HR. Any suggestions? Comments?

    Read the article

  • B2B and B2C alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester Research, Inc.’s first Commerce Wave focused on B2B, “The Forrester Wave™: B2B Commerce Suites, Q4 2013,” released earlier this month. We believe that the report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, we feel that the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: front-office content, community, and commerce features that meet customer expectations for 24x7x365 ordering, real-time customer service, and expedited shipping — both online and on mobile devices: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). It seems that this market momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management. Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery. 
    We would like to expand on each of these three areas: As Forrester suggests, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system that enables business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end. On the topic of web content management, we were pleased to see Forrester cite Oracle's differentiated digital experience capability in this area and the "unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire)." Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one. So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform "has always had strong B2B commerce capabilities and Oracle certainly has an exhaustive list of B2B customers using the solution." What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we'll address in an exciting new release planned for the next 12 months. Oracle has one of the world's largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management and ERP. 
To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources: -       2013 B2B Commerce Trends Report -       B2B Commerce Whitepaper: Consumerization, Complexity, Change -       B2B Commerce Webcast: What Industry Trend Setters Do Right -       Internet Retailer, Web Drives Sales for B2B Companies -       Internet Retailer Article, The Web Means Business: B2B Companies Beef Up Their Websites,        borrowing from b2c retailers and breaking new ground -       Internet Retailer Article, B2B e-Commerce is poised for growth

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of listing every article, I have selected a few of my favorites and listed them here with additional notes. Let me know which one of the following is your favorite article from memory lane. (A few small T-SQL sketches of the 2007 and 2008 items follow at the end of this excerpt.) 2007: Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used. The above theory is true in most cases; however, SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset it can return fastest. In most cases, the resultset that can be returned fastest is the one returned using the clustered index. Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT. One of the junior developers asked me this question (what is the effect of a TRANSACTION on a local variable after ROLLBACK and after COMMIT?) while I was rushing to an important meeting. I was running late, so I asked him to talk with his application tech lead. When I came back from the meeting, both of them were looking for me and said they were confused. I quickly wrote down the following example for them. 2008: SQL SERVER – Guidelines and Coding Standards Complete List Download. Coding standards and guidelines are very important for any developer on the path to a successful career. A coding standard is a set of guidelines, rules, and regulations on how to write code. Coding standards should be flexible enough not to prevent coding best practices. They are basically the guidelines one should follow for better understanding. Download the complete Guidelines and Coding Standards list. Get Answer in Float When Dividing Two Integers. Many times we need calculations across different fields in tables. One of the software developers here was dividing fields holding integer values and getting an incorrect integer result where an accurate result including decimals was expected. Puzzle – Computed Columns Datatype Explanation. SQL Server automatically casts to the data type with the highest precedence, so the result of INT and INT will be INT, but INT and FLOAT will be FLOAT because FLOAT has higher precedence. If you want a different data type, you need to do an explicit cast. Renaming SP is Not a Good Idea – Renaming a Stored Procedure Does Not Update sys.procedures. I have written many articles about renaming tables, columns, and procedures (SQL SERVER – How to Rename a Column Name or Table Name); here I found something interesting about renaming stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using the sp_rename command, the stored procedure is successfully renamed, but when we inspect it using sp_helptext, the definition still shows the old name instead of the new one. 2009: Insert Values of Stored Procedure in Table – Use Table Valued Function. It is clear from the result set that the approach where I converted the stored procedure logic into a table-valued function is much better in terms of logic, as it saves a large number of operations. However, this option should be used carefully; the performance of a stored procedure is "usually" better than that of a function. Interesting Observation – Index on Indexed View Used in Similar Query. Recently, I was working on an optimization project for one of the largest organizations. 
    While working on one of the queries, we came across a very interesting observation: there was a query on the base table, and when the query was run it used an index that did not exist on the base table. On careful examination, we found that the query was using the index on an indexed view. This was very interesting, as I had personally never experienced a scenario like this. In simple words, "a query on the base table can use the index created on an indexed view of the same base table." Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries. Working with SQL Server has never seemed monotonous, no matter how long one has worked with it. Quite often I come across excellent comments that I feel like acknowledging as blog posts. Recently, I wrote an article, SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which was well received in the community. 2010: I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post. SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1; SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2; SQL SERVER – Index Created on View not Used Often – Limitation of the View 3; SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4; SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5; SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6; SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7; SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8; SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9; SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10; SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11. 2011: Startup Parameters Easy to Configure. If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach the screen where we can change the startup parameters: go to SQL Server Configuration Manager >> SQL Server Services >> right-click on the server >> Properties >> Startup Parameters. 2012: Validating Unique Column Name Across the Whole Database. I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples: "Our business requirement is that when we add a new column we want it unique across the current database." Read the solution to this strange request in the blog post. Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires only a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Basic Calculation and PEMDAS Order of Operation. Read this interesting blog post for a fantastic conversation about the subject. 
    Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video http://www.youtube.com/watch?v=x_-3tLqTRv0 Delete From Multiple Tables – Update Multiple Tables in a Single Statement. There are two questions I get multiple times every single day; in my Gmail, I have created a standard canned reply for them. The questions are: how do I delete from multiple tables in a single statement, and how do I update multiple tables in a single statement? Read the answer in the blog post. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
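    A quick T-SQL sketch of the integer-division item from 2008 above (variable names invented for illustration): dividing two INT values follows data type precedence and yields an INT, so one operand has to be cast to get the decimals back.

        DECLARE @a INT = 7, @b INT = 2;
        -- INT / INT stays INT, so the fraction is truncated: returns 3
        SELECT @a / @b AS int_division;
        -- Casting one operand to FLOAT (higher precedence) keeps the decimals: returns 3.5
        SELECT CAST(@a AS FLOAT) / @b AS float_division;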
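    And a small sketch of the 2007 TRANSACTION question (just a scratch variable, not code from the original post): ROLLBACK undoes data modifications, but local variable assignments are not transactional, so the variable keeps the value it was given inside the transaction.

        DECLARE @val INT = 0;
        BEGIN TRANSACTION;
            SET @val = 100;       -- assignment made inside the transaction
        ROLLBACK TRANSACTION;     -- data changes are undone, variable assignments are not
        SELECT @val AS value_after_rollback;   -- returns 100, not 0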
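    Finally, a hedged sketch of the sp_rename observation (procedure names are made up): the object is renamed in the catalog, but sp_helptext still returns the original definition text with the old name, which is why the post calls renaming a bad idea.

        CREATE PROCEDURE dbo.usp_OldName AS SELECT 1 AS x;
        GO
        -- Rename the procedure; SQL Server warns that renaming can break scripts and dependencies
        EXEC sp_rename 'dbo.usp_OldName', 'usp_NewName';
        GO
        -- The stored definition text still reads CREATE PROCEDURE dbo.usp_OldName
        EXEC sp_helptext 'dbo.usp_NewName';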

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I’m actually very excited about it. The term is Data Scientist, and since it’s new, it’s fairly undefined. I’ll explain what I think it means, and why I’m excited about it. In general, I’ve found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in it’s use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly. I’ve divided the knowledge base for someone that would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas. Persistence The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it’s just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it’s a foundation choice that deals not only with a lot of expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to. Processing Once the decision is made to store the data, the next set of decisions are based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. 
In fact, in many cases - most of the ones I’m dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one. Presentation This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it’s the data behind the graphics that are the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you’re on the right track - that are only solvable when you understanding why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a “yes” or “no” answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist. Why I’m excited I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I’ve been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn’t germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So  watch this space. No, I’m not leaving Azure or distributed computing or Microsoft. In fact, I think I’m perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop to allow me to explore. And I’m happy to share what I learn along the way.

    Read the article

  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
    Nakul Vachhrajani is a Technical Specialist and systems development professional with iGATE, with more than 7 years of total IT experience. Nakul is an active blogger with BeyondRelational.com (150+ blogs), and can also be found on forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. Nakul presented a webcast on the "Underappreciated Features of Microsoft SQL Server" at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, which was published in a nationally circulated CSI journal. In addition to his passion for SQL Server, Nakul also contributes to academia out of personal interest. He visits various colleges and universities as external faculty to judge project activities carried out by the students. Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer's view in any way. Blog | LinkedIn | Twitter | Google+ Let us hear Nakul's thoughts in the first person - Those who have been following my blogs would be aware that I have recently been running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response I have received, I was quite surprised to learn that most of the audience took these to be breaking changes, when in fact they were not! It was then that I decided to write a little piece on how to plan your database upgrade so that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers and are intended to help you think through the specific steps you would need to take to upgrade your database. Refer to the documentation – understand the terms. Change is the only constant in this world. Therefore, whenever customer requirements or newer architectures and designs require software vendors to make a change to keywords, functions, etc., they ensure that they give their end users sufficient time to migrate to the new standards before dropping the old ones. Microsoft does that too with its Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with a list of the following feature categories. Breaking changes: changes that would break your currently running applications, scripts, or functionality based on an earlier version of Microsoft SQL Server; these are mostly features whose behavior has been changed with the newer architectures and designs in mind. Lesson: these are the changes you need to be most worried about! Discontinued features: features that are no longer available in the associated version of Microsoft SQL Server; they were "deprecated" in the prior release. Lesson: without addressing these, your database would not be compliant with, or may not work on, the version of Microsoft SQL Server under consideration. Deprecated features: features that are still available in the current version of Microsoft SQL Server, but are scheduled for removal in a future version. 
    These may be removed in either the next version or any later version of Microsoft SQL Server; the features listed for deprecation will compose the list of discontinued features in the next version of SQL Server. Lesson: plan to make the changes required to remove or replace usage of the deprecated features with the latest recommended replacements. Once a feature appears on the list, it moves from the bottom to the top, i.e. it is first marked as "deprecated" and then "discontinued"; a "breaking change" comes later in the product life cycle. What this means is that if you want to know which features would not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer to the list of breaking changes and discontinued features in SQL Server 2012. Use the tools! There are a lot of tools and technologies around us, but I rarely find teams using these tools religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade. The SQL Server Upgrade Advisor: ever since SQL Server 2005 was announced, Microsoft has provided a small, very lightweight tool called the SQL Server Upgrade Advisor. The Upgrade Advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer to the links towards the end of the post to learn how to get the Upgrade Advisor. The SQL Server Profiler: another great tool you can use is one that most SQL Server developers and administrators already use often – the SQL Server Profiler. SQL Server Profiler can monitor the "Deprecation" event category, which contains: Deprecation announcement – equivalent to features to be deprecated in a future release of SQL Server; Deprecation final support – equivalent to features to be deprecated in the next release of SQL Server. You can learn more using the links towards the end of the post (a related query sketch also follows at the end of this excerpt). A basic checklist: there are a lot of finer points to take care of when upgrading your database, but it is worthwhile to identify a few basic steps to make your database compliant with the next version of SQL Server. Monitor the current application workload (on a test bed) via the Profiler to identify usage of features marked as deprecated; if none appear, you are all set! (This almost never happens.) Note down all the offending queries and feature usages. Run analysis sessions using the SQL Server Upgrade Advisor on your database. Based on the inputs from the analysis report and Profiler trace sessions, incorporate solutions for the breaking changes first, then incorporate solutions for the discontinued features. Revisit and document the upgrade strategy for your deployment scenarios. Revisit the fall-back, i.e. 
    rollback strategies in case the upgrade fails; because some programming changes depend on the SQL Server version, this may need to be done in consultation with the development teams. Before any other enhancements are incorporated by the development team, send the database changes to QA; the QA strategy should involve a comparison between an environment running the old version of SQL Server and one running the new version, which is possible because only minimal application changes have gone in (essential changes for SQL Server version compliance only). As an ongoing activity, keep incorporating the changes recommended in the deprecated features list. As a DBA, update your coding standards to ensure that developers are writing ANSI-compliant code – this code will require a change only if the ANSI standard changes. Remember this: change management is a continuous process. Keep revisiting the product release notes and incorporate recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you! Links referenced in this post: Breaking changes in SQL Server 2012: Link; Discontinued features in SQL Server 2012: Link; Get the Upgrade Advisor from the Microsoft Download Center: Link; Upgrade Advisor page on MSDN: Link; Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link; Upgrading to SQL Server 2012 by Vinod Kumar: Link. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade
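    As a lightweight complement to the Profiler approach described above, here is a hedged sketch (not from Nakul's post): SQL Server also keeps running usage counts for deprecated features in the sys.dm_os_performance_counters DMV, which can be queried to see what a workload has touched since the last instance restart.

        -- Deprecated-feature usage counters, highest usage first
        SELECT  instance_name AS deprecated_feature,
                cntr_value    AS usage_count
        FROM    sys.dm_os_performance_counters
        WHERE   object_name LIKE '%Deprecated Features%'
          AND   cntr_value > 0
        ORDER BY cntr_value DESC;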

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have recently been exploring the elasticity and scalability attributes of databases. You can see that in my earlier blog posts about NuoDB, where I wanted to look at elasticity and scalability concepts. The concepts are very interesting and intriguing as well. I have discussed these concepts with my friend Joyti M, and together we have come up with this interesting read. The goal of this article is to answer the following simple questions: What is elasticity? What is scalability? How do ACID properties differ from NoSQL concepts? What are the prevailing problems in current database system architectures? Why is NuoDB an innovative and welcome change in the database paradigm? Elasticity: this word's original form is used in many different ways, and honestly it does do a decent job of holding things together over the years as a person grows and contracts. Within the tech world, and specifically for software systems (databases, application servers), it has come to mean allowing resources to stretch on demand without reaching the breaking point. What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it comes to increasing resources, the simplest idea is the addition of another container, and another container means adding a brand new physical node. When adding a new node, two questions come to mind: 1) can we add another node to our software system? 2) if yes, does adding a new node cause downtime for the system? Let us assume we have added a new node and see what the system then needs: balancing incoming requests across multiple nodes; synchronization of shared state across multiple nodes; identification of a "down" state and resolution actions to bring it back to an "up" state. Adding a new node has its advantages as well. Here are a few of the positive points: throughput can increase nearly horizontally across nodes throughout the system; application response times will improve as interactions between the layers improve. Now, let us put the above concepts in the perspective of a database. When we mention the term "running out of resources" or "application is bound to resources," the resources can be CPU, memory, or bandwidth. The regular approach to "gain scalability" in the database is to look for bottlenecks and increase the bottlenecked resource. When memory is the bottleneck, we look at the data buffers, locks, query plans, or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a "single machine" across memory-bound and CPU-bound work (with the right kind of scheduling). We next move on to either read/write separation of the workload or functionality-based sharing so that we still have control of the individual pieces. But this requires lots of planning and changes in client systems in terms of knowing where to go to update or read, and for reporting applications to "aggregate the data" in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without getting into managing, monitoring, and distributing the workload ourselves. Scalability: in the context of databases/applications, scalability means three main things. Ability to handle normal loads without pressure, e.g. 
    X users at Y utilization of resources (CPU, memory, bandwidth) on Z kind of hardware (a 4-processor, 32 GB machine with 15,000 RPM SATA drives and a 1 GHz network switch) with T throughput. Ability to scale up to an expected peak load that is greater than the normal load, with acceptable response times. Ability to provide acceptable response times across the system, e.g. a response time of S milliseconds (or an agreed-upon unit of measure) 90% of the time. The issue – the need to scale: in normal cases one can plan load testing to cover normal, peak, and stress scenarios to ensure that specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and the requisite resources added to the system. Unfortunately, this vertical scaling is expensive and difficult to achieve, and most operations people need the ability to scale horizontally. This helps in getting better throughput, as there are physical limits to adding resources (memory, CPU, bandwidth, and storage) indefinitely. Today we have different options to achieve scalability. Read and write separation: the idea here is to direct actual writes to one store and configure slaves that receive the latest data with acceptable delays; the slaves can then be used to balance out reads. We can also explore functional separation or sharing as well: we can separate data operations by a specific identifier (e.g. region, year, month) and consolidate the data for reporting purposes (a small sketch of this pattern appears at the end of this excerpt). For functional separation, the major disadvantage arises when the schema or the workload pattern changes. As requirements grow, one still needs to deal with scaling needs manually by providing an abstraction in the middle-tier code. Using NoSQL solutions: the idea is to flatten out the structures so that all values retrieved together are kept in the same store, and to provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees), and one has to use a non-SQL dialect to work with the store. The other major issue is education around NoSQL solutions. Would one really want to give up the ability to connect and retrieve data in a simple SQL manner and have to learn other skill sets? Or, for that matter, give up ACID guarantees and start dealing with consistency issues? Hybrid deployment – Mac, Linux, cloud, and Windows: one of the challenges we see today across on-premise vs. cloud infrastructure is a difference in abilities. Take SQL Azure, for example – it is wonderful in its concepts of resource throttling (as it is a shared deployment) and its ability to scale using federation; however, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise around the sweet spot of workloads, customer requirements, and operational SLAs which can be supported by the team. In today's world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An ideal database ability list: a system which allows linear scaling (an increase in throughput with reasonable response times) with the addition of resources; a system which does not compromise on ACID guarantees or require developers to learn new paradigms; a system which does not force-fit a new way of interacting with the database by requiring a non-SQL dialect; a system which does not force-fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
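    The functional separation idea mentioned above (splitting data operations by an identifier such as region and consolidating them for reporting) can be sketched with a plain partitioned-view pattern; the table and view names here are invented for illustration, and this is only one manual way to approximate what an elastic system would do for you.

        -- One table per region: writes are routed to the regional table
        CREATE TABLE dbo.Orders_East (OrderID INT PRIMARY KEY, Region CHAR(4) CHECK (Region = 'EAST'), Amount DECIMAL(10,2));
        CREATE TABLE dbo.Orders_West (OrderID INT PRIMARY KEY, Region CHAR(4) CHECK (Region = 'WEST'), Amount DECIMAL(10,2));
        GO
        -- A view consolidates the separated data again for reporting
        CREATE VIEW dbo.Orders_All AS
            SELECT OrderID, Region, Amount FROM dbo.Orders_East
            UNION ALL
            SELECT OrderID, Region, Amount FROM dbo.Orders_West;
        GO
        -- Reporting queries read the consolidated view
        SELECT Region, SUM(Amount) AS TotalAmount FROM dbo.Orders_All GROUP BY Region;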

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
    Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (Or IS for short - we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training. 
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey; they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one - no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favorite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behavior” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one to one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
At the highest level, these 'operations' span all that you do as an organization.  The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place. ·            The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale. ·            The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement. ·            The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing. You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations, and extensibility.  The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a services (DaaS) to business processes and enable operations in heterogeneous IT environments. ·         Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.  o    This enables model extensions that reflect business entities unique to your organization's operations. ·         The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries. ·         It uses variable byte Unicode to support over 31 languages. ·         The schema encodes flexible date and flexible address formats for easy localizations. No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.   
Interconnected Business Entities Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.    
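To make the "extensible attribute groups" idea above a little more concrete, here is a minimal, purely illustrative sketch of the extension pattern in Python. It is not Oracle's GSS schema or API - the entity, group, and attribute names are invented - it just shows how a governed core record can pick up organization-specific attribute groups without changing the base model.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Hypothetical illustration of the "extensible attribute group" idea described above:
# a core master record keeps its fixed, governed attributes, while organization-specific
# extensions live in named attribute groups. This is a generic sketch, not Oracle's GSS.

@dataclass
class MasterRecord:
    entity_type: str                     # e.g. "CUSTOMER", "SUPPLIER", "PRODUCT"
    record_id: str                       # master identifier shared across applications
    core: Dict[str, Any] = field(default_factory=dict)                   # governed base attributes
    extensions: Dict[str, Dict[str, Any]] = field(default_factory=dict)  # named attribute groups

    def extend(self, group: str, attributes: Dict[str, Any]) -> None:
        """Attach or update an attribute group without touching the core model."""
        self.extensions.setdefault(group, {}).update(attributes)

# One customer mastered once, then extended per line of business:
acme = MasterRecord("CUSTOMER", "CUST-000042",
                    core={"name": "Acme Corp", "country": "US"})
acme.extend("TELCO_BILLING", {"billing_cycle": "MONTHLY", "credit_class": "A"})
acme.extend("RETAIL_LOYALTY", {"tier": "GOLD", "points": 12850})

print(acme.extensions["TELCO_BILLING"]["credit_class"])  # same master record, extended
```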

    Read the article

  • Get Information to Your Blog with Microsoft Broadcaster

    - by Matthew Guay
    Do you often have people ask you for advice about technology, or do you write tech-focused blog or newsletter?  Here’s how you can get information to share with your readers about Microsoft technology with Microsoft Broadcaster. Microsoft Broadcaster is a new service from Microsoft to help publishers, bloggers, developers, and other IT professionals find relevant information and resources from Microsoft.  You can use it to help discover things to write about, or simply discover new information about the technology you use.  Broadcaster will also notify you when new resources are available about the topics that interest you.  Let’s look at how you could use this to expand your blog and help your users. Getting Started Head over to the Microsoft Broadcaster site (link below), and click Join to get started. Sign in with your Windows Live ID, or create a new account if you don’t already have one. Near the bottom of the page, add information about your blog, newsletter, or group that you want to share Broadcaster information with.  Click Add when you’re done entering information.  You can enter as many sites or groups as you wish. When you’ve entered all of your information, click the Apply button at the bottom of the page.  Broadcaster will then let you know your information has been submitted, but you’ll need to wait several days to see if you are approved or not. Our application was approved about 2 days after applying, though this may vary.  When you’re approved, you’ll receive an email letting you know.  Return to the Broadcaster website (link below), but this time, click Sign in. Accept the terms of use by clicking I Accept at the bottom of the page. Confirm that your information entered previously is correct, and then click Configure my keywords at the bottom of the page. Now you can pick the topics you want to stay informed about.  Type keywords in the textbox, and it will bring up relevant topics with IntelliSense. Here we’ve added several topics to keep up with. Next select the Microsoft Products you want to keep track of.  If the product you want to keep track of is not listed, make sure to list it in the keywords section as above. Finally, select the types of content you wish to see, including articles, eBooks, webcasts, and more. Finally, when everything’s entered, click Configure My Alerts at the bottom of the page. Broadcaster can automatically email you when new content is found.  If you would like this, click Subscribe.  Otherwise, simply click Access Dashboard to go ahead and find your personalized content. If you choose to receive emails of new content, you’ll have to configure it with Windows Live Alerts.  Click Continue to set this up. Select if you want to receive Messenger alerts, emails, and/or text messages when new content is available.  Click Save when you’re finished. Finally, select how often you want to be notified, and then click Access Dashboard to view the content currently available. Finding Content For Your Blog, Site, or Group Now you can find content specified for your interests from the dashboard.  To access the dashboard in the future, simply go to the Broadcaster site and click Sign In. Here you can see available content, and can search for different topics or customize the topics shown. You’ll see snippets of information from various Microsoft videos, articles, whitepapers, eBooks, and more, depending on your settings.  
Click the link at the top of the snippet to view the content, or right-click and copy the link to use in emails or on social networks like Twitter. If you’d like to add this snippet to your website or blog, click the Download content link at the bottom. Now you can preview what the snippet will look like on your site, and change the width or height to fit your site. You can view and edit the source code of the snippet from the box at the bottom, and then copy it to use on your site. Copy the code, and paste it in the HTML of a blog post, email, webpage, or anywhere else you wish to share it. Here we’re pasting it into the HTML editor in Windows Live Writer so we can post it to a blog. After adding a title and an opening paragraph, we have a nice blog post that only took a few minutes to put together but should still be useful for our readers. You can check out the blog post we created at the link below. Readers can click on the links, which will direct them to the content on Microsoft’s websites. Conclusion If you frequently need to find educational and informative content about Microsoft products and services, Broadcaster can be a great service to keep you up to date. The service worked quite well in our tests, and generally found content relevant to our keywords. We had difficulty embedding links to eBooks that were listed by Broadcaster, but everything else worked for us. Now you can always have high-quality content to help your customers, coworkers, friends, and more, and you just might find something that will help you, too! Link Microsoft Broadcaster (registration required) Example Post at Techinch.com with Content from Microsoft Broadcaster

    Read the article

  • Emit Knowledge - social network for knowledge sharing

    - by hajan
    Emit Knowledge, as the name suggests, is a social network for emitting / sharing knowledge from users to users. Those who can benefit the most from this network are perhaps all of YOU who have something to share with others and contribute to the knowledge world. I've been closely communicating with the core team of this very, very interesting, brand new social network (with a specific purpose!) about the concept, the idea and the vision they have for their product, and I can say with a lot of confidence that this network has real potential to become something from which we will all benefit. I won't speak much about that and would prefer to give you the link and let you try it yourself - http://www.emitknowledge.com Mainly, over the past few months I've been testing this network and it is getting improved all the time. The user experience is great, you can easily find what you need, and it follows some known patterns that are common to all social networks. They have some really good ideas and plans that are already under development for the next updates of their product. You can do micro blogging or you can do regular blogging… it’s up to you, and the way it works, it is seamless. Here is a short Question and Answers (QA) interview I made with the lead of the team, Marijan Nikolovski: 1. Can you please explain to us briefly, what is Emit Knowledge? Emit Knowledge is a brand new knowledge-based social network, delivering quality content from users to users. We believe that people’s knowledge, experience and professional thoughts compose quality content, worth sharing among millions around the world. Therefore, we created the platform that matches people’s need to share and gain knowledge in the most suitable and comfortable way. Easy to work with, Emit Knowledge lets you smoothly craft and emit knowledge around the globe. 2. How 'old' is Emit Knowledge? In hamster years we are an almost five-year-old start-up :). Just kidding. We released our public beta about three months ago. Our official release date is 27 June 2012. 3. How did you come up with this idea? Everything started from a simple idea to solve a complex problem. We’ve seen that the social web has become polluted with data and is on the right track to lose its base principles – socialization and common cause. That was our starting point. We gathered the team, drew some sketches and started to mind map the idea. After several idea refactorings, Emit Knowledge was born. 4. Is there any competition out there in the market? Currently we don't have any competitors that share the same cause. What makes our platform different is the ideology that our product promotes and the functionalities that our platform offers for easy socialization based on interests and knowledge sharing. 5. What are the main technologies used to build Emit Knowledge? Emit Knowledge was built on a heterogeneous palette of technologies. Currently, we have four layers of separation: UI – built on ASP.NET MVC3 and Knockout.js; Messaging infrastructure – built on top of RabbitMQ; Background services – our in-house solution for job distribution, orchestration and processing; Data storage – built on top of MongoDB.
6. What are the main reasons for choosing ASP.NET MVC? Since all of our team members are .NET engineers, the decision was very natural. ASP.NET MVC is the only Microsoft web stack that sticks to the HTTP behavioral standards. It is easy to work with, has a tiny learning curve, and everyone who is familiar with HTTP will understand its architecture and conventions without any difficulties. 7. Did you use some of the latest Microsoft technologies? If yes, which ones? Yes, we like to rock the cutting-edge tech house. Currently we are using Microsoft’s latest technologies like ASP.NET MVC, Web API (work in progress) and, the best for last, we are utilizing Windows Azure IaaS to the bone. 8. Can you please tell us briefly, what would be the benefit for regular bloggers on other blogging platforms to join Emit Knowledge? Well, unless you are one of the smoking ace gurus whose blogs are followed by a large number of users, our platform offers a knowledge-based, segregated community equipped with tools that will enable both current and future users to expand their relations and to self-promote in the community based on their activity and knowledge sharing. 10. I see you are working very intensively and there is already integration with some third-party services to make the process of sharing and emitting knowledge easier; which services have you integrated so far, and what do you plan to do next? We have “reemit” functionality for internal sharing and we also support external services like Twitter, LinkedIn and Facebook. For the regular bloggers we have an extra cream: Windows Live Writer support for easy blog post emitting. 11. What should we expect next? Currently, we are working on a new fancy community feature. This means that we are going to support user groups to be formed. So for all existing communities and user groups out there, wait for us a little bit, we are coming to the rescue :). One of the top features they are developing next is the Community feature. It means that if you have your own User Group, Community Group or any other group in which you and your users are mostly blogging or sharing (emitting) knowledge in various ways, Emit Knowledge as a platform will help you have everything you need to promote your group, gain new followers and host all the necessary stuff that you need. I would invite you to try the network and start sharing knowledge in a way that will help you gather new followers and spread your knowledge faster, easier and in a more efficient way! Let’s Emit Knowledge!
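For readers curious what the "messaging infrastructure plus background services" split in the interview looks like in practice, here is a rough sketch of that pattern. Emit Knowledge's actual stack is .NET on RabbitMQ; this Python/pika version is only an illustration of the idea, and the queue name and event payload are made up.

```python
import json
import pika  # RabbitMQ client; queue name and payload below are hypothetical

# Sketch of the pattern described in the interview: the web tier publishes work
# (e.g. "a post was emitted") to RabbitMQ, and separate background services consume
# the queue to do indexing, notifications, etc. Not Emit Knowledge's actual code.

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="emit.post.published", durable=True)

# Web tier: fire-and-forget publish so the HTTP request returns quickly.
event = {"post_id": 12345, "author": "hajan", "action": "publish"}
channel.basic_publish(exchange="",
                      routing_key="emit.post.published",
                      body=json.dumps(event))

# Background service: consume and process at its own pace.
def handle(ch, method, properties, body):
    post = json.loads(body)
    print(f"indexing post {post['post_id']} and notifying followers")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="emit.post.published", on_message_callback=handle)
# channel.start_consuming()  # uncomment to run the worker loop
connection.close()
```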

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspxAnd so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I’ll do a separate blog post on my thoughts of the Build message for developers. The Good Moscone center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it’ll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I’m planning on trying to use the Surface as a dedicated 2nd device at work for meetings, I’ll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn’t justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn’t have had the hackathon on the Monday/Tuesday for those that wanted to participate so they didn’t miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I’ll be alright with that! Delays going into and out of SFO and both apparently were due to the airport itself. The Bad Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference’s mobile application). I get it…they’re going to make announcements and they don’t want to give anything away through the session titles. But honestly, there wasn’t anything in the session titles that I would have considered a surprise. Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard boiled eggs does not a conference breakfast make. I can’t believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact to a conference fee that was already around $2000. The vendor area was anemic. I don’t know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft’s own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API. Yet there was nobody in the vendor area set up to provide more information or have discussions with about the Bing API. The Ugly Our name badges were NFC enabled. The purpose of this, beyond the vendors being able to scan your info, wasn’t really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members cards and collect their contact info – which is a kewl idea; business cards are so 1990’s. 
But I was *shocked* at the amount of information that was on our name badges! Here’s what’s displayed on our name badge: - Name - Company - Twitter Handle I’m ok with that. But here’s what actually gets read: - Name - Company - Address Used for Registration - Phone Number Used for Registration So sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with corresponding records afterwards. My personal information should not be so readily available, and without my knowledge! Final Verdict Maybe its my older age, maybe its where I’m at in life with family, maybe its where I’m at in my career, but when I consider whether a conference experience was valuable I get to the core reason I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft. Opportunities to Learn:  Sessions I attended were generally OK, with some really stand out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull, lecture style sessions are great for information sharing. But a guided hands-on coding session (Read: Dojo) provides the best of both worlds. Being that all content is publically available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value though is in the discussions that take part in person afterwards, which leads to… Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they’d be available for more questions after their sessions, or attendees who wanted to discuss more in depth on a topic with other attendees could arrange space. Some people have no problems being outgoing and making these things happen, but others are not and a structured model is more attractive. Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn’t any defined way to connect with blue badges. And as I mentioned above, Microsoft didn’t have full representation in the vendor area (no Bing). All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I’d say Somewhat. Build is a great informational conference, but I wouldn’t call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we’re already seeing Build happening twice within a 12 month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of Tech Ed, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • Create and Backup Multiple Profiles in Google Chrome

    - by Asian Angel
    Other browsers such as Firefox and SeaMonkey allow you to have multiple profiles but not Chrome…at least not until now. If you want to use multiple profiles and create backups for them then join us as we look at Google Chrome Backup. Note: There is a paid version of this program available but we used the free version for our article. Google Chrome Backup in Action During the installation process you will run across this particular window. It will have a default user name filled in as shown here…you will not need to do anything except click on Next to continue installing the program. When you start the program for the first time this is what you will see. Your default Chrome Profile will already be visible in the window. A quick look at the Profile Menu… In the Tools Menu you can go ahead and disable the Start program at Windows Startup setting…the only time that you will need the program running is if you are creating or restoring a profile. When you create a new profile the process will start with this window. You can access an Advanced Options mode if desired but most likely you will not need it. Here is a look at the Advanced Options mode. It is mainly focused on adding Switches to the new Chrome Shortcut. The drop-down menu for the Switches available… To create your new profile you will need to choose: A profile location A profile name (as you type/create the profile name it will automatically be added to the Profile Path) Make certain that the Create a new shortcut to access new profile option is checked For our example we decided to try out the Disable plugins switch option… Click OK to create the new profile. Once you have created your new profile, you will find a new shortcut on the Desktop. Notice that the shortcut’s name will be Google Chrome + profile name that you chose. Note: On our system we were able to move the new shortcut to the “Start Menu” without problems. Clicking on our new profile’s shortcut opened up a fresh and clean looking instance of Chrome. Just out of curiosity we did decide to check the shortcut to see if the Switch set up correctly. Unfortunately it did not in this instance…so your mileage with the Switches may vary. This was just a minor quirk and nothing to get excited or upset over…especially considering that you can create multiple profiles so easily. After opening up our default profile of Chrome you can see the individual profile icons (New & Default in order) sitting in the Taskbar side-by-side. And our two profiles open at the same time on our Desktop… Backing Profiles Up For the next part of our tests we decided to create a backup for each of our profiles. Starting the wizard will allow you to choose between creating or restoring a profile. Note: To create or restore a backup click on Run Wizard. When you reach the second part of the process you can go with the Backup default profile option or choose a particular one from a drop-down list using the Select a profile to backup option. We chose to backup the Default Profile first… In the third part of the process you will need to select a location to save the profile to. Once you have selected the location you will see the Target Path as shown here. You can choose your own name for the backup file…we decided to go with the default name instead since it contained the backup’s calendar date. A very nice feature is the ability to have the cache cleared before creating the backup. We clicked on Yes…choose the option that best suits your needs. 
Once you have chosen either Yes or No the backup will then be created. Click Finish to complete the process. The backup file for our Default Profile at 14.0 MB in size. And the backup file for our Chrome Fresh Profile…2.81 MB. Restoring Profiles For the final part of our tests we decided to do a Restore. Select Restore and click Next to get the process started. In the second step you will need to browse for the Profile Backup File (and select the desired profile if you have created multiples). For our example we decided to overwrite the original Default Profile with the Chrome Fresh Profile. The third step lets you choose where to restore the chosen profile to…you can go with the Default Profile or choose one from the drop-down list using the Restore to a selected profile option. The final step will get you on your way to restoring the chosen profile. The program will conduct a check regarding the previous/old profile and ask if you would like to proceed with overwriting it. Definitely nice in case you change your mind at the last moment. Clicking Yes will finish the restoration. The only other odd quirk that we noticed while using the program was that the Next Button did not function after restoring the profile. You can easily get around the problem by clicking to close the window. Which one is which? After the restore process we had identical twins. Conclusion If you have been looking for a way to create multiple profiles in Google Chrome, then you might want to add this program to your system. Links Download Google Chrome Backup
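Under the hood, a separate Chrome profile is essentially a separate user data directory, which is what the shortcut switches in the walkthrough boil down to. Here is a small sketch that launches Chrome against its own profile folder using the --user-data-dir switch; the executable path is an assumption for a default Windows install, so adjust it for your system.

```python
import subprocess
from pathlib import Path

# Rough sketch of what those profile shortcuts do: Chrome treats each --user-data-dir
# as an independent profile with its own bookmarks, cache, and extensions.
# The executable path below is an assumed default Windows location.

chrome = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # adjust for your install
profile_dir = Path.home() / "ChromeProfiles" / "FreshProfile"
profile_dir.mkdir(parents=True, exist_ok=True)

subprocess.Popen([
    chrome,
    f"--user-data-dir={profile_dir}",  # separate profile directory
    "--no-first-run",                  # skip the first-run dialogs for the new profile
])
```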

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to do techtalks and update them about the technology boom going on around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10: I. StorageTek as a brand: StorageTek is one of the strongest names in the tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquired StorageTek and Oracle acquired Sun, the brand lives on: all Oracle tape libraries are officially branded StorageTek. See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html II. Disk information density limitations: Disk technology struggles with information density. You haven't seen disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m of tape. There you go, you have got your physical space and don't need to cram all that data together. You can write in a reliable pattern, and have space to grow too. III. Oracle has a market share of 62% worldwide in recording head manufacturing. That's right. If you are running LTO drives, there is a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.  IV. You can store 1 Exabyte of data in a single tape library. Yes, an Exabyte. That is 1,000 Petabytes. Or, a million Terabytes. A thousand million Gigabytes. You can store that in a stacked StorageTek SL8500 tape library. In one SL8500 you can put 10,000 T10000C cartridges, each storing 10TB of data (compressed). You can stack 10 of these SL8500s together. Boom. 1,000,000 TB. (n.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)  V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Data Domain slogan? Of course they had to put it that way, having only disk products. But here's a fun fact: at EMC World 2012 there was a major presence of a tape-tech company - EMC, in a sudden burst of sanity, is embracing tape again. VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tape drive and cartridge, the T10000C. With awesome numbers: The Cartridge: native 5TB capacity, 10TB with compression; over a kilometer of tape within the cartridge, and it's locked when unmounted, so no rattling of your data; the metal-particle data layer has been replaced with BaFe (barium ferrite) - metal particles lose around 7% of their magnetism within 30 days, BaFe does not (yes, we employ solid-state physicists doing R&D on demagnetisation in our labs); and it can be partitioned, giving storage tiering within the cartridge!  The Drive: 2GB cache; encryption implemented in hardware - no performance hit; 252 MB/s native sustained data rate, which beats disk technology by far, not to mention peak throughput; the tape is guided without ever touching the data side, protecting your data physically too; data integrity checking (CRC recalculation) on tape within the drive, without having to read it back to the server; reordering data from tape-order and delivering it back in application-order; writing 32 tracks at once, and reading them back for a CRC check at once. VII. 
You only use 20% of your data on a regular basis. The rest 80% is just lying around for years. On continuously spinning disks. Doubly consuming energy (power+cooling), blocking diskstorage capacity. There is a solution called SAM (Storage Archive Manager) that provides you a filesystem unifying disk and tape, moving data on-demand and for clients transparently between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB data, using no energy, producing no heat, automounted when a client accesses their data.See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and has been only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data carrying on tapecartridges - built for longevity. Oh, and the T10000C cartridge has 30-year archival life for long-term retention.  IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, analyze dataflow, health and performance of drives and libraries, see: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write on them even more data. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is a T10000C. (...I know I promised to enlist 10, but I got still two more I really want to mention. Allow me to work around the problem: ) X++. The TallBots, the robots moving around the cartridges in the StorageTek library from tapeslots to the drives are cableless. Cables, belts, chains running to moving parts in a library cause maintenance downtimes. So StorageTek eliminated them. The TallBots get power, commands, even firmwareupgrades through the rails they are running on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data. (X++)++: Tape beats SSDs and Disks. In terms of throughput (252 MB/s), in terms of TCO: disks cause around 290x more power and cooling, in terms of capacity: 10TB on a single media and soon more.  So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large mediachunks to be streamed at times? Have you got power and cooling issues in the growing datacenters? Do you find EMC's 180° turn of tape attitude interesting, but appreciate it at the same time? With all that, you aren't alone. The most data on this planet is stored on tape. Tape is coming. Big time.
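The exabyte claim in point IV is straightforward arithmetic on the figures quoted above; here is a quick back-of-the-envelope check (decimal units, compressed capacity).

```python
# Back-of-the-envelope check of the exabyte claim, using the figures quoted above.
cartridge_tb = 10          # T10000C, compressed
slots_per_sl8500 = 10_000  # cartridges per SL8500 library
libraries_stacked = 10     # SL8500s interconnected into one complex

total_tb = cartridge_tb * slots_per_sl8500 * libraries_stacked
print(f"{total_tb:,} TB = {total_tb / 1_000_000} EB")  # 1,000,000 TB = 1.0 EB
```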

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criteria for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time It doesn't get any simpler than that. If it doesn't meet that criteria, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone.If you need a quick answer, use IM. I once had a high-level manager that called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people that invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not, learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use)  ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially it's whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting.   Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants/leader is just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
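The meeting-cost ticker described in the Palm Pilot anecdote above is trivial to recreate today. Here is a minimal sketch; the salary figure and working-hours assumptions are made up, since the point is only to make the running cost of a large meeting visible.

```python
import time

# A modern take on the Palm Pilot meeting-cost ticker described above.
# The salary and working-time figures are assumptions, not real data.

attendees = 50
avg_salary_per_year = 90_000  # assumed fully-loaded cost per person
cost_per_second = attendees * avg_salary_per_year / (220 * 8 * 3600)  # ~220 days, 8h days

start = time.time()
try:
    while True:
        elapsed = time.time() - start
        print(f"\rThis meeting has cost roughly {elapsed * cost_per_second:,.2f} so far", end="")
        time.sleep(5)
except KeyboardInterrupt:
    print("\nMeeting adjourned.")
```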

    Read the article

  • Why Is Vertical Resolution Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.) But why exactly is this the case? Is it arbitrary or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on youtube multiples of 360. This is of course just arbitrary, there is no real reason this is the case. The reason is that resolution here is not the limiting factor for Youtube videos – bandwidth is. Youtube has to re-encode every video that is uploaded a couple of times, and tries to use as little re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates, they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440P is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that. 
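The "roughly 2x per step" claim is easy to check against the pixel counts the answer quotes. Here is a small illustrative Python sketch; the resolution list comes from the answer above, while the code itself is not part of the original discussion.

    # Verify that each common resolution step is roughly a 2x jump in pixel count.
    steps = [
        ("PAL SD",  720,  576),
        ("720p",   1280,  720),
        ("1080p",  1920, 1080),
        ("1440p",  2560, 1440),
        ("4K UHD", 3840, 2160),
    ]

    prev = None
    for name, w, h in steps:
        pixels = w * h
        ratio = f"{pixels / prev:.2f}x" if prev else "-"
        print(f"{name:8s} {w}x{h} = {pixels / 1e6:.2f} MP  (step: {ratio})")
        prev = pixels

Running it reproduces the figures in the answer: 0.41 MP, 0.92 MP, 2.07 MP, 3.69 MP and 8.29 MP, with each step between about 1.8x and 2.3x.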
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
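The multiple-of-120 (or 128) observation can be sanity-checked the same way. A quick illustrative sketch follows; the list of panel heights is my own assumption, picked from common display modes rather than taken from the answer.

    # Which common vertical resolutions divide evenly by 120 or 128?
    heights = [480, 600, 720, 768, 800, 900, 1024, 1080, 1200, 1440, 2160]
    for h in heights:
        factors = [d for d in (120, 128) if h % d == 0]
        note = ", ".join(f"{d} x {h // d}" for d in factors) or "neither"
        print(f"{h:5d}: {note}")

Most of the classic video heights (480, 600, 720, 1080, 1200, 1440, 2160) are multiples of 120, the PC-legacy ones (768, 1024) are multiples of 128, and a few odd ones (800, 900) fit neither pattern.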
There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • First PC Build (Part 1)

    - by Anthony Trudeau
    Originally posted on: http://geekswithblogs.net/tonyt/archive/2014/08/05/157959.aspx
A couple of months ago I made the decision to build myself a new computer. The intended use is gaming and for using the last real version of Photoshop. I was motivated by the poor state of console gaming and a simple desire to do something I haven’t done before – build a PC from the ground up. I’ve been using PCs for more than two decades. I’ve replaced a component here and there, but for the last 10 years or so I’ve only used laptops. Therefore, this article will be written from the perspective of someone familiar with PCs, but completely new at building. I’m not an expert and this is not a definitive guide for building a PC, but I do hope that it encourages you to try it yourself.
Component List
Research
There was a lot of research necessary, because building a PC is completely new to me, and I haven’t kept up with what’s out there. The first thing you want to do is nail down what your goals are. Your goals are going to be driven by what you want to do with your computer and personal choice. Don’t neglect the second one, because if you’re doing this for fun you want to get what you want. In my case, I focused on three things: performance, longevity, and aesthetics. The performance aspect is important for gaming and Photoshop. This will drive what components you get. For example, heavy gaming use is going to drive your choice of graphics card. Longevity is relevant to me, because I don’t want to be changing things out anytime soon for the next hot game. The consequence of performance and longevity is cost. Finally, aesthetics was my next consideration. I could have just built a box, but it wouldn’t have been nearly as fun for me. Aesthetics might not be important to you. They are for me. I also like gadgets and that played into at least one purchase for this build. I used PC Part Picker to put together my component list. I found it invaluable during the process and I’d recommend it to everyone. One caveat is that I wouldn’t trust the compatibility aspects. It does a pretty good job of not steering you wrong, but do your own research. The rest of it isn’t really sexy. I started out with what appealed to me and then I made changes and additions as I dived deep into researching each component and interaction I could find. The resources I used are innumerable. I used reviews, product descriptions, forum posts (praises and problems), et al. to assist me. I also asked friends into gaming what they thought about my component list. And when I got near the end I posted my list to the Reddit /r/buildapc forum. I cannot stress enough the value of extra sets of eyeballs and first-hand experiences. Some of the resources I used: PC Part Picker, Tom’s Hardware, bit-tech, Reddit.
Purchase
PC Part Picker favors certain vendors. You should look at others too. In my case I found their favorites to be the best. My priorities were out-the-door price and shipping time. I knew that once I started getting parts I’d want to start building. Luckily, I timed it well and everything arrived within the span of a few days. Here are my opinions on the vendors I ended up using, in alphabetical order. Amazon.com is a good, reliable choice. They have excellent customer service in my experience, and I knew I wouldn’t have trouble with them. However, shipping time is often a problem when you use their free shipping unless you order expensive items (I’ve found items over $100 ship quickly). 
Ultimately though, price wasn’t always the best and their collection of sales tax in my state turned me off them. I did purchase my case from them. I ordered the mouse as well, but I cancelled after it was stuck four days in a “shipping soon” state. I purchased the mouse locally. Best Buy is not my favorite place to do business. There’s a lot of history with poor, uninterested sales representatives and they used to have a lot of bad anti-consumer policies. That’s a lot better now, but the bad taste is still in my mouth. I ended up purchasing the accessories from them including mouse (locally) and headphones. NCIX is a company that I’ve never heard of before. It popped up as a recommendation for my CPU cooler on PC Part Picker. I didn’t do a lot of research on the company, because their policy on you buying insurance for your orders turned me off. That policy makes it clear to me that the company finds me responsible for the shipment once it leaves their dock. That’s not right, and may run afoul of state laws. Regardless they shipped my CPU cooler quickly and I didn’t have a problem. NewEgg.com is a well known company. I had never done business with them, but I’m glad I did. They shipped quickly and provided good visibility over everything. The prices were also the best in most cases. My main complaint is that they have a lot of exchange only return policies on components. To their credit those policies are listed in the cart underneath each item. The visibility tells me that they’re not playing any shenanigans and made me comfortable dealing with that risk. The vast majority of what I ordered came from them. Coming Next In the next part I’ll tackle my build experience.

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise, smart phone adoption is reaching the farthest corners of the globe; the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience. The net result of which is happier customers. The obvious benefits but the lesser realised impact Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no-longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality however is mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change is much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and ability to anticipate, staff satisfaction and retention is higher, and real time feedback constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to With real time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisations' evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, with mobile enabled automation and quality data, compliance is simpler. Their world has changed for the better. For the CIO, mobility also assists them to optimise performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them means employees don't need high-level technical expertise to train users. It reduces the training and engagement required from the IT team so they can focus on other things that deliver value to the bottom line; all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week the Victorian Minister for Technology Gordon Rich-Phillips unveiled his State government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include, ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • CodePlex Daily Summary for Wednesday, September 05, 2012

    CodePlex Daily Summary for Wednesday, September 05, 2012Popular ReleasesDesktop Google Reader: 1.4.6: Sorting feeds alphabetical is now optional (see preferences window)DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights Fixed issue where mailto: links were not working when sending bulk email Fixed issue where uses did not see friendship relationships Problem is in 6.2, which does not show in the Versions Affected list above. Fixed the issue with cascade deletes in comments in CoreMessaging_Notification Fixed UI issue when using a date fields as a required profile property during user registration Fixed error when running the product in debug mode Fixed visibility issue when...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.BLACK ORANGE: HPAD TEXT EDITOR 0.9 Beta: HOW TO RUN THE TEXT EDITOR Download the HPAD ARCHIVED FILES which is in .rar format Extract using Winrar Make sure that extracted files are in the same folder Double-Click on HPAD.exe application fileTelerikMvcGridCustomBindingHelper: Version 1.0.15.247-RC2: TelerikMvcGridCustomBindingHelper 1.0.15.247 RC2 Release notes: This is a RC version (hopefully the last one), please test and report any error or problem you encounter. This release is all about performance and fixes Support: "Or" and "Does Not contain" filter options Improved BooleanSubstitutes, Custom Aggregates and expressions-to-queryover Add EntityFramework examples in ExampleWebApplication Many other improvements and fixes Fix invalid cast on CustomAggregates Support for ...ServiceMon - Extensible Real-time, Service Monitoring Utility: ServiceMon Release 0.9.0.44: Auto-uploaded from build serverJavaScript Grid: Release 09-05-2012: Release 09-05-2012xUnit.net Contrib: xunitcontrib-dotCover 0.6.1 (dotCover 2.1 beta): xunitcontrib release 0.6.1 for dotCover 2.1 beta This release provides a test runner plugin for dotCover 2.1 beta, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release adds support for running xUnit.net tests to dotCover 2.1 beta's Visual Studio plugin. PLEASE NOTE: You do NOT need this if you also have ReSharper and the existing 0.6.1 release installed. DotCover will use ReSharper's test runners if available. This release includes th...B INI Sharp Library: B INI Sharp Library v1.0.0.0 Realsed: The frist realsedActive Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release Added fallback icon if unable to get the image/icon from the Cloud Service Removed some stale plugins that were either out dated or incomplete. Added handler for *.ab files for restoring backups Added plugin to create device backups Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\ Added custom folder icon for the android backups directory better error handling for installing an apk bug fixes for the Runn...BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators Note: SQL 2012 version coming soon.The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. 
New features include: Built-in search engine using Lucene.NET Flood control improvements Notifications improvements: sync option and mail body View Roadmap for more details webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...Iveely Search Engine: Iveely Search Engine (0.2.0): ????ISE?0.1.0??,?????,ISE?0.2.0?????????,???????,????????20???follow?ISE,????,??ISE??????????,??????????,?????????,?????????0.2.0??????,??????????。 Iveely Search Engine ?0.2.0?????????“??????????”,??????,?????????,???????,???????????????????,????、????????????。???0.1.0????????????: 1. ??“????” ??。??????????,?????????,???????????????????。??:????????,????????????,??????????????????。??????。 2. ??“????”??。?0.1.0??????,???????,???????????????,?????????????,????????,?0.2.0?,???????...GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail BugfixSmart Data Access layer: Smart Data access Layer Ver 3: In this version support executing inline query is added. Check Documentation section for detail.DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.New ProjectsA Simple Eng-Hindi CMS: A simple English- Hindi dual language content management system for small business/personal websites.Active Social Migrator: This project for managing the Active Social migration tool.ANSI Console User Control: Custom console control for .NET WinformsAutoSPInstallerGUI: GUI Configuration Tool for SPAutoInstaller Codeplex ProjectCode Documentation Checkin Policy: This checkin policy for Visual Studio 2012 checks if c# code is documented the way it's configured in the config of the policy. Code Dojo/Kata - Free Time Coding: Doing some katas of the Coding Dojo page. http://codingdojo.org/cgi-bin/wiki.pl?KataCataloguefjycUnifyShow: fjycUnifyShowHidden Capture (HC): HC is simple and easy utility to hidden and auto capture desktop or active windowHRC Integration Services: Fake SQL Server Integration Services. LOLKooboo CMS Sites Switcher: Kooboo CMS Sites SwitcherMod.CookieDetector: Orchard module for detecting whether cookies are enabledMyCodes: Created!MySQL Statement Monitor: MySQL Statement Monitor is a monitoring tool that monitors SQL statements transferred over the network.NeoModulusPIRandom: The idea with PI Random is to use easy string manipulation and simple math to generate a pseudo random number. 
Net Core Tech - Medical Record System: This is a Medical Record System ProjectOraPowerShell: PowerShell library for backup and maintenance of a Oracle Database environment under Microsoft Windows 2008PinDNN: PinDNN is a module that imparts Pinterest-like functionality to DotNetNuke sites. This module works with a MongoDB database and uses the built-in social relatioPyrogen Code Generator: PyroGen is a simple code generator accepting C# as the markup language.restMs: wil be deleted soonScript.NET: Script.NET is a script management utility for web forms and MVC, using ScriptJS-like features to link dependencies between scripts.SpringExample-Pagination: Simple Spring example with PaginationXNA and Component Based Design: This project includes code for XNA and Component Based Design

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. The last night. Today it has happened three times to far. The timeline is fairly predictable when it happens: Trans log backups. Login failure for "user2". Minidump Another minidump for the scheduler Repeated 17883 events. Server fails little by little until it won't accept any requests. Reboot is all that gets us going again (a band-aid) Interesting, though, is that the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching heads... some ideas we're kicking around: MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't rollback all the way. The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. Problem with this is that there isn't significant CPU usage. At any rate, we're not ruling SQL 2005 bug out at all. Maybe we added one too many import processes and have reached our limit on this box. (hard to believe). Looking at SQLDUMP0151.log at the time of one of the crashes. There are some "login failures" and then there are two stack dumps. 1st a normal stack dump, 2nd for a scheduler dump. Here's a snippet: (sorry for the lack of line breaks) 2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required. 2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required. 2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101] 2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16. 2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. 
[CLIENT: 50.36.172.101] 2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5' 2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt 2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process. 2009-11-10 12:18:39.02 spid111 * ***************************************************************************** 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP: 2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F) 2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION 2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000 2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes - 2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00 2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00 2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00 2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00 2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49 2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00 2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04 2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * 2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE 2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000 2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000 2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000
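For anyone chasing the same pattern, the timeline above (18456 login failures, 17883 scheduler warnings, stack dumps) can be pulled out of the SQL Server error log mechanically. The sketch below is illustrative and not from the original question; the log path mirrors the MSSQL.1 path shown in the dump, and SQL Server 2005 error logs are usually UTF-16, but adjust both for your instance.

    # Count the events the poster describes in a SQL Server 2005 ERRORLOG.
    import re
    from collections import Counter

    LOG = r"C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG"

    events = Counter()
    with open(LOG, encoding="utf-16", errors="replace") as fh:
        for line in fh:
            if "Error: 18456" in line:
                events["login failure (18456)"] += 1
            elif "Stack Dump being sent" in line:
                events["stack dump"] += 1
            elif re.search(r"\b17883\b", line):
                events["scheduler warning (17883)"] += 1

    for kind, count in sorted(events.items()):
        print(f"{kind}: {count}")

Correlating the timestamps of those three event types, rather than just counting them, is the obvious next step, but even the counts make it easier to see whether the login failures really precede every dump.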

    Read the article

  • If Nvidia Shield can stream a game via WiFi (~150-300Mbps), where is the 1-10Gbps wired streaming?

    - by Enigma
    Facts: It is surprising and uncharacteristic that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. 150-300mbps WiFi is in no way superior to a 1000mbps+ LAN connection aside from well wireless mobility. Throughout time, (since the internet was created) wired services have **always come first yet in this particular case, the opposite seems to be true. We had wired internet first, wired audio streaming, and wired video streaming all before their wireless counterparts. Why? Largely because the wireless bandwidth was and is inferior. Even today despite being significantly better and capable of a lot more, it is still inferior to a wired connection. Situation: Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable to me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. While this secondary project will provide a wired connection, it still shouldn't be necessary to purchase a Shield to have this benefit. Not only this but Intel's WiDi claims game streaming support as well - wirelessly. Where is the wired streaming? All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I should be able to stream to my second desktop or my laptop both of which by hardware comparison are superior to the Shield. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? 
Reiteration of questions: Is there a technical reason (non-marketing) why Nvidia opted to bottleneck the streaming service with a wireless connection, limiting the resolution to 720p and introducing intermittent video choppiness, when on a wired connection one could achieve, presumably, 1080p with significantly less or zero choppiness? Is there anything stopping developers from creating a PC/desktop application that provides the same H.264 decoding functionality and circumvents the need to get an Nvidia Shield altogether? (It is not a matter of being too cheap to support Nvidia - I have many Nvidia cards that aren't being used. One should not have to purchase specialty hardware when equivalent hardware already exists.) The same questions go for Intel WiDi as well. I am just utterly perplexed that there are wireless live-streaming solutions and yet no wired ones. How on earth can wireless be the go-to transmission medium? Is there another solution that takes advantage of H.264 video compression to allow live streaming over a wired connection? (*) - Perhaps this isn't the first, but as far as I know it is the first complete package. (**) - I can't back that up with hard evidence/links, but someone probably could. Edit: Maybe this will be the solution I am looking for, but I still find it hard to believe that they would be the first, and only after wireless solutions already exist:
In-home Streaming
You can play all your Windows and Mac games on your SteamOS machine, too. Just turn on your existing computer and run Steam as you always have - then your SteamOS machine can stream those games over your home network straight to your TV! - http://store.steampowered.com/livingroom/SteamOS/
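To put the bandwidth comparison in perspective, here is a back-of-the-envelope sketch. The link speeds are the ones quoted in the question; the per-stream H.264 bitrates are illustrative assumptions, not official NVIDIA figures.

    # Compare quoted link speeds against rough H.264 game-stream bitrates.
    links_mbps = {
        "802.11n WiFi (low end)": 150,
        "802.11n WiFi (high end)": 300,
        "Gigabit Ethernet": 1000,
    }
    streams_mbps = {
        "720p60 H.264 stream (assumed ~15 Mbps)": 15,
        "1080p60 H.264 stream (assumed ~30 Mbps)": 30,
    }

    for stream, need in streams_mbps.items():
        for link, cap in links_mbps.items():
            print(f"{stream} over {link}: {cap / need:.1f}x headroom")

Even with generous bitrate assumptions there is headroom on both kinds of link, which is consistent with the question's suspicion that raw bandwidth alone does not explain the 720p WiFi limit.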

    Read the article

  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of my company's customers has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN. I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I connected a PC directly to the network connection of our provider, bypassing our firewall completely, and still it didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach. So they said there's no problem. :( I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (I've tried using another random port and that gives me an error message, as it should). Still, from any browser of any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details. Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a webpage (it's the logon page for the customer's firewall admin interface: so there's the firewall brand's logo and there are fields to enter a user id and a password). Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did: Run a Kubuntu 12.04 live distro on a spare laptop; Updated all the packages I could and installed Wireshark; Attached it to my LAN and verified that I couldn't open https://<customer's IP>; Verified that the Wireshark trace for this attempt was the same as the one I've already posted; Verified that I could connect to another customer's host using rdesktop (it worked); Tried to rdesktop to <customer's IP>, here's the output: kubuntu@kubuntu:/etc$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: recv: Connection reset by peer Disconnected the laptop from the LAN; Disconnected the firewall from the Extranet connection, connected the laptop instead. 
Set its network configuration so that I could access the Internet; Verified that I could connect to other websites in http and https and in RDP to other customers' hosts - it all worked as expected; Verified that I could still traceroute to <customer's IP>: I could; Verified that I still couldn't open https://<customer's IP> (same exact result as before); Checked the WireShark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all; Tried to run rdesktop again, with a slightly different result: kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: <customer's IP>: unable to connect Finally gave up, put everything back as it was before, turned off the laptop and lost the WireShark traces I had saved. :( I still remember them very well though. :) Can you get anything out of it? Thank you very much. Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get: user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443 CONNECTED(00000003) If I now type GET / the output pauses for several seconds and then I get: write:errno=104 I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks. Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies, that's why I get a connection using telnet or the openSSL client. This is the wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all that took the time to read my question and post suggestions. I'll update this post again.
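For anyone wanting to repeat the openssl s_client test from another machine, the probe below does the same thing in Python: it attempts the TCP connect and then the TLS handshake and reports what comes back. If a proxy such as ISA Server is completing the handshake locally, the TCP connect can look fine even while a capture on the outside interface shows no replies, which matches the traces above. The IP address is a placeholder and the script is an illustrative sketch, not part of the original post.

    # Minimal HTTPS reachability probe: TCP connect, then TLS handshake.
    import socket
    import ssl

    HOST, PORT, TIMEOUT = "203.0.113.10", 443, 10   # placeholder IP; use the customer's IP

    try:
        raw = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
        print("TCP connect OK")
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE          # we only care who answers, not validity
        with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
            cert = tls.getpeercert(binary_form=True)
            print(f"TLS handshake OK, peer sent {len(cert or b'')} bytes of certificate")
    except OSError as exc:
        print(f"Failed: {exc}")

Running it from inside the LAN, from outside the firewall and from an unrelated network should make it obvious at which hop the answers stop being real.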

    Read the article

  • Heroku Problem During Database Pull of Rails App: Mysql::Error MySQL server has gone away

    - by Rich Apodaca
    Attempting to pull my database from Heroku gives an error partway through the process (below). Using: Snow Leopard; heroku-1.8.2; taps-0.2.26; rails-2.3.5; mysql-5.1.42. Database is smallish, as you can see from the error message. Heroku tech support says it's a problem on my system, but offers nothing in the way of how to solve it. I've seen the issue reported before - for example here. How can I get around this problem? The error: $ heroku db:pull Auto-detected local database: mysql://[...]@localhost/[...]?encoding=utf8 Receiving schema Receiving data 17 tables, 9,609 records [...] /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `query': Mysql::Error MySQL server has gone away (Sequel::DatabaseError) from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:166:in `_execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:101:in `hold' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:125:in `execute' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:296:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset.rb:276:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:365:in `execute_dui' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `each' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:144:in `transaction' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/connection_pool.rb:108:in `hold' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/database.rb:461:in `synchronize' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/adapters/mysql.rb:138:in `transaction' from /Library/Ruby/Gems/1.8/gems/sequel-3.0.0/lib/sequel/dataset/convenience.rb:126:in `import' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:211:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `loop' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:203:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `each' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:196:in `cmd_receive_data' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:175:in `cmd_receive' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:17:in `pull' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:119:in `taps_client' from /Library/Ruby/Gems/1.8/gems/taps-0.2.26/lib/taps/client_session.rb:21:in `start' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:115:in `taps_client' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/commands/db.rb:16:in `pull' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `send' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:45:in `run_internal' from /Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/../lib/heroku/command.rb:17:in `run' from 
/Library/Ruby/Gems/1.8/gems/heroku-1.8.2/bin/heroku:14 from /usr/bin/heroku:19:in `load' from /usr/bin/heroku:19
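A usual local-side culprit for "MySQL server has gone away" during a bulk import like taps' is a max_allowed_packet (or timeout) value too small for the batches being inserted. A quick way to see what the local server is configured with is sketched below; the mysql command-line client, the root user and the lack of a password are assumptions, so adjust the invocation for your setup.

    # Show the local MySQL settings most often implicated in "server has gone away".
    import subprocess

    VARS = ("max_allowed_packet", "wait_timeout", "net_read_timeout")
    query = "SHOW VARIABLES WHERE Variable_name IN ({})".format(
        ", ".join("'{}'".format(v) for v in VARS))

    result = subprocess.run(
        ["mysql", "-u", "root", "--batch", "-e", query],   # add -p / host options as needed
        capture_output=True, text=True)
    print(result.stdout or result.stderr)

If max_allowed_packet turns out to be small (the old MySQL default was 1 MB), raising it in my.cnf and restarting MySQL before re-running heroku db:pull is a common first thing to try.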

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80  | Next Page >