Search Results

Search found 2142 results on 86 pages for 'tech learner'.


  • Keep HTML format after replacing some text (using PHP and JS)

    - by Sadi
    I would like to modify HTML such as I am <b>Sadi, novice</b> programmer. into I am <b>Sadi, learner</b> programmer. To do this I will search using the string "novice programmer". How can I do it, please? Any ideas? Thank you, Sadi. More clarification: I got some nice replies with possible solutions, but please keep posting if you have any idea in mind. I would like to clarify the problem further just in case anyone missed it. The main post shows the problem as an example scenario. 1) The problem is to find and replace a string without considering the tags. The tags may show up within a single word. The string may contain multiple words. Tags appear only in the content string of the document; the search phrase never contains any tags. We could easily remove all tags and do a plain text operation, but then the other problem shows up: 2) the tags must be preserved, even after replacing the text. That is what the example shows. Thank you again for helping.
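
    One workable approach, sketched below as a minimal example: build a tag-free copy of the HTML while recording which original position each visible character came from, search the tag-free text, then splice the replacement back in, re-emitting any tags that fell inside the match. The sketch is in C# only because the idea is language-independent; the same index-mapping translates directly to PHP or JavaScript.

        using System;
        using System.Collections.Generic;
        using System.Text;

        class TagAwareReplace
        {
            // Sketch: replace `search` with `replacement` in `html`, matching the
            // visible text only. Assumes well-formed tags and a search phrase that
            // contains no tags (as the question states).
            static string Replace(string html, string search, string replacement)
            {
                var map = new List<int>();              // plain-text index -> html index
                var plain = new StringBuilder();
                bool inTag = false;
                for (int i = 0; i < html.Length; i++)
                {
                    if (html[i] == '<') inTag = true;
                    else if (html[i] == '>') inTag = false;
                    else if (!inTag) { map.Add(i); plain.Append(html[i]); }
                }

                int hit = plain.ToString().IndexOf(search, StringComparison.Ordinal);
                if (hit < 0) return html;               // no match: return unchanged

                int start = map[hit];                   // first matched char in the html
                int end = map[hit + search.Length - 1]; // last matched char in the html

                var result = new StringBuilder(html.Substring(0, start));
                result.Append(replacement);
                for (int i = start; i <= end; i++)      // re-emit tags found inside the match
                    if (html[i] == '<')
                    {
                        int close = html.IndexOf('>', i);
                        result.Append(html, i, close - i + 1);
                        i = close;
                    }
                result.Append(html.Substring(end + 1));
                return result.ToString();
            }

            static void Main()
            {
                Console.WriteLine(Replace(
                    "I am <b>Sadi, novice</b> programmer.", "novice", "learner"));
                // -> I am <b>Sadi, learner</b> programmer.
            }
        }

    Note that tags sitting inside the matched span are re-emitted after the replacement text, which is a simplification: if a tag splits the matched word itself, deciding where that tag should land inside the new word is ambiguous and needs a policy of its own.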

    Read the article

  • This is a great job opportunity!!! [closed]

    - by Stuart Gordon
    ASP.NET MVC Web Developer / London / £450pd / £25-£50,000pa / Interested? Contact [email protected]! As a web developer within the engineering department, you will work with a team of enthusiastic developers building a new ASP.NET MVC platform for online products utilising exciting cutting-edge technologies and methodologies (elements of Agile, Scrum, Lean, Kanban and XP) as well as developing new stand-alone web products that conform to W3C standards. Key Responsibilities and Objectives: Develop ASP.NET MVC websites utilising Frameworks and enterprise search technology. Develop and expand content management and delivery solutions. Help maintain and extend existing products. Formulate ideas and visions for new products and services. Be a proactive part of the development team and provide support and assistance to others when required. Qualification/Experience Required: The ideal candidate will have a web development background and be educated to degree level in a Computer Science/IT related course plus ASP.NET MVC experience. The successful candidate needs to be able to demonstrate commercial experience in all or most of the following skills: Essential: ASP.NET MVC with C# (Visual Studio), Castle, NHibernate, XHTML and JavaScript. Experience of Test Driven Development (TDD) using tools such as NUnit. Preferable: Experience of Continuous Integration (TeamCity and MSBuild), SQL Server (T-SQL), experience of source control such as Subversion (plus TortoiseSVN), jQuery. Learn: Fluent NHibernate, S#arp Architecture, Spark (View engine), Behaviour Driven Design (BDD) using MSpec. Furthermore, you will possess good working knowledge of W3C web standards, web usability, web accessibility and understand the basics of search engine optimisation (SEO). You will also be a quick learner, have good communication skills and be a self-motivated and organised individual.

    Read the article

  • What are the next steps for moving from appengine to full django?

    - by tomcritchlow
    Hey guys, I'm super new to programming and I've been using appengine to help me learn python and general coding. I'm getting better quickly and I'm loving it all the way :) Appengine was awesome for allowing me to just dive into writing my app and getting something live that works (see http://www.7bks.com/). But I'm realising that the longer I continue to learn on appengine, the more I'm constraining myself and locking myself into a single system. I'd like to move to developing on full django (since django looks super cool!). What are my next steps? To give you a feel for my level of knowledge: I'm not a unix user; I'm not familiar with command-line controls (I still use appengine/python completely via the appengine SDK); I've never programmed in anything other than python, anywhere other than appengine; and I know the word SQL, but don't really know what MySQL is or how to use it. So, specifically: What are the skills I need to learn to get up and running with full django/python? If I'm going to host somewhere else, I suppose I'll need to learn some sysadmin-type skills (maybe even unix?). Is there anywhere that offers easy hosting (like appengine) but that supports django? I hear such great things about heroku that I'm considering switching to RoR and going there. I appreciate that I'm likely not quite ready to move away from appengine just yet, but I'm a fiercely passionate learner (http://www.7bks.com/blog/179001) and would love it if I knew all the steps I needed to learn, so I could set about learning them. At the moment, I don't even know what the steps I need to learn are! Thank you very much. Sorry this isn't a specific programming question, but I've looked around and haven't found a good how-to for someone of my level of experience, and I think others would appreciate a good roadmap for the things we need to learn to get up and running. Thanks, Tom PS - if anyone is in London and fancies showing me the ropes in person that would be super awesome :)

    Read the article

  • How do I create a simple Windows form to access a SQL Server database?

    - by NoCatharsis
    I believe this is a very novice question, and if I'm using the wrong forum to ask, please advise. I have a basic understanding of databasing with MS SQL Server, and programming with C++ and C#. I'm trying to teach myself more by setting up my own database with MS SQL Server Express 2008 R2 and accessing it via Windows forms created in C# Express 2010. At this point, I just want to keep it to free or Express dev tools (not necessarily Microsoft though). Anyway, I created a database using the instructions provided here and I set the data types appropriately for each column (no errors in setup at least). Now I'm designing the GUI in C# Express but I've kind of hit a wall as far as the database connection. Is there a simple way to access the database I created locally using C# Express? Can anyone suggest a guide that has all this spelled out already? I am a self-learner so I look forward to teaching myself how to use these applications, but any pointers to start me off in the right direction would be greatly appreciated.
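
    For the connection itself, a minimal sketch of the glue involved looks like the following (the connection string, database and table names are illustrative; SQL Server Express normally installs as the local .\SQLEXPRESS instance):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class QuickQuery
        {
            static void Main()
            {
                // Default local SQL Server Express instance; database and table
                // names are placeholders -- substitute the ones you created.
                const string cs = @"Data Source=.\SQLEXPRESS;" +
                                  "Initial Catalog=MyDatabase;Integrated Security=True";

                using (var conn = new SqlConnection(cs))
                using (var adapter = new SqlDataAdapter("SELECT * FROM MyTable", conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);          // opens and closes the connection itself
                    foreach (DataRow row in table.Rows)
                        Console.WriteLine(row[0]);
                }
            }
        }

    In a Windows form, the same DataTable can be bound straight to a grid (dataGridView1.DataSource = table;), which is usually the quickest way to get a first "database viewer" on screen.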

    Read the article

  • Can JockerSoft.Media read/get a video file from a remote location?

    - by Lynx
    Here is the code for JockerSoft.Media: // Path of the video and frame storing path string _videopath = "http://www.test.com/Video/test.avi"; //"C:\\test.avi"; string _imagepath = "C:\\test.jpg"; Bitmap bmp = FrameGrabber.GetFrameFromVideo(_videopath, 0.1d); bmp.Save(_imagepath, System.Drawing.Imaging.ImageFormat.Gif); // Save frame directly at the specified location FrameGrabber.SaveFrameFromVideo(_videopath, 0.1d, _imagepath); It works perfectly if the video file is from my own computer, but when I try to get the video file from a remote location it does not grab the frame. Also, all the examples are for Windows Forms apps, and I am trying to use this in a web application. Is there perhaps some additional code that would enable me to use JockerSoft to grab a video frame from a remote location? Here is the error that I got: Attempted to access an unloaded appdomain. (Exception from HRESULT: 0x80131014) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.AppDomainUnloadedException: Attempted to access an unloaded appdomain. (Exception from HRESULT: 0x80131014) New learner, please guide me.
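
    One workaround worth trying, as a sketch: if the frame grabber needs a local, seekable file (which would explain why local paths work and URLs do not), download the remote video to a temporary path first and grab the frame from disk. The JockerSoft.Media namespace and FrameGrabber calls are taken from the question; everything else here is illustrative.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Net;
        using JockerSoft.Media;   // namespace assumed from the question

        class RemoteFrame
        {
            static void Main()
            {
                string url = "http://www.test.com/Video/test.avi";
                string local = Path.Combine(Path.GetTempPath(), "frame-source.avi");

                using (var wc = new WebClient())
                    wc.DownloadFile(url, local);   // copy the remote video locally first

                Bitmap bmp = FrameGrabber.GetFrameFromVideo(local, 0.1d);
                bmp.Save(@"C:\test.jpg", ImageFormat.Jpeg);

                File.Delete(local);                // clean up the temporary copy
            }
        }

    In a web application the worker process also needs write access to the temp folder, and the AppDomainUnloadedException suggests the grabber's work may be outliving the request, so grabbing the frame synchronously before the request ends is the safer pattern.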

    Read the article

  • C# Linq to SQL connection string (newbie)

    - by Chris'o
    I am a new LINQ to SQL learner and this is my very first attempt to create a data viewer program. The idea is simple: I'd like to create a program that can view the contents of a table in a database. That's it. I have hit an early problem, and although I have seen many tutorials and articles online, I still can't fix the bug. Here is my code: static void Main(string[] args) { string cs = "Data Source=localhost;Initial Catalog=somedb;Integrated Security=SSPI;"; var db = new DataClasses1DataContext(cs); db.Connection.Open(); foreach (var b in db.Mapping.GetTables()) Console.WriteLine(b.TableName); Console.ReadKey(true); } When I check db.Connection.Equals(null), it returns false, so I thought I had connected successfully to the database, since there is no error at all. But the code above doesn't print anything to the screen. I'm kind of lost and don't know what's going on here. Does anyone know what is going wrong?
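
    Two quick checks can separate a connection problem from a mapping problem (a sketch, assuming SQL Server behind the DataContext): Mapping.GetTables() only lists tables that have actually been added to the .dbml designer, so an empty loop usually means the mapping is empty rather than the connection being broken.

        // Reusing `cs` and the generated DataContext from the question.
        var db = new DataClasses1DataContext(cs);

        // 1. Can we reach the database at all?
        Console.WriteLine(db.DatabaseExists());

        // 2. List tables server-side. If this prints names while
        //    Mapping.GetTables() prints nothing, the .dbml simply has no
        //    tables dragged onto its designer surface yet.
        foreach (var name in db.ExecuteQuery<string>("SELECT name FROM sys.tables"))
            Console.WriteLine(name);

    A non-null Connection object only proves the context was constructed; it says nothing about whether any tables were mapped.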

    Read the article

  • PHP transfer files from server to server in LAN

    - by cheapez
    So, I have 5-6 pages of requirements. I'm trying to build this application in PHP based on the requirements. I want to transfer files from one server to the other server in the LAN, and then send a shell command to the other server to find out whether the file has been transferred successfully. In PHP, I can transfer files using FTP and send shell commands using SSH. Using the methods above, I will need to open a connection to the server first, but I don't know the FTP server name, domain name, IP address, or anything like that. I only know the server ID (I'm not sure what this ID is, but I guess it is like the computer's name). An example of the server ID is "c23bap234". How do I open a connection with just that server ID? These servers are in the same building, have a LAN connection, and don't have a connection to the outside world. These machines have PHP, Apache, ... installed. If my post doesn't make sense to you, it's because I'm a learner. I hope someone can help me with this. Thanks in advance.

    Read the article

  • What is the proper way to wait for a script to load completely before another

    - by FatDogMark
    I am making a website that is divided into several sections: <div id='section-1' class='section'>web content</div> <div id='section-2' class='section'>web content</div> I have about ten sections on my webpage, and each section's height is set to the user's window height when the document is ready, by JavaScript: $('.section').height($(window).height()); Some effects, like slideshows on my webpage, require the calculated height of the section in order to work properly. Therefore I always use something like this at document ready as a solution: setTimeout(startslideshow,1000); setTimeout(startanimations,1000); ...etc. This makes sure the section height is the user window height before the slideshow's code starts, because if the sections have not yet changed to the window height when the slideshow code runs, it causes serious problems, like wrongly calculated positions. The result is that for about a second after the page loads, everything is messed up before it starts working properly. How can I avoid that being seen by the user? I tried $(document).hide(), or $('html,body').hide(), then fading in after a second, but I get other weird problems: especially on iPad, my fixed-position top navigation bar always becomes 'not fixed' while the user is scrolling. As I am a self-learner, I'm afraid my method is not typical. I want to know what real web programmers usually do when they have to divide a page into sections, set their height to the window height, and then make sure the effects that depend on the section height work properly, without the one-second wait for the height change.

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, and Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts; I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you ever wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system for that matter) is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko! 
And Milek points out a long-awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact many customers have looked forward to: how to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you, in DTracing TCP Congestion Control, how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit, which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one less major entry point to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: first by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you use your Intel CPU's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: It comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore, to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof? 
Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch; I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here? A: I'm proudly serving as a principal analyst at Altimeter Group, a research-based advisory firm in Silicon Valley. My career path, well, let's just say it's a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As a technologist, my approach to marketing was different. I didn't believe in hype, fluff or buzz words. I believed in translating features into benefits, and specifications and capabilities into solutions for real-world problems and opportunities. In the mid '90s I experimented with direct-to-consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to doing so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people, in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who'd listen. It would later become much bigger than marketing; it would lead to a decade of work, which still continues, in business transformation. Then and now, I find myself always assuming the role of a student. Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days? A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? A: The line between work and life isn't blurred; it's been overtly crossed and erased. We live in an always-on society. The digital lifestyle keeps us connected to one another; it keeps us connected all the time. Whether you're sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That's absurd. It's a matter of survival. It's also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day. 
You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And, your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever wherever business dynamics. If you’re unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But, I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life. Q: How can people keep up with what you’re working on? A: The easy answer is that people can keep up with me at briansolis.com. But, I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), Youtube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing. Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt. Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peak of what to expect on Thursday’s webcast? A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism would ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us, it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top-rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and, most recently, Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, and the preferred approach for large-scale applications is to create separate databases for writes and reads. These databases are referred to as the transactional database and the reporting database. Though there are tools / techniques which can allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database. It's built on top of the same concepts: Tables, Rows, Columns, Primary Keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out-of-box features provided by some databases; e.g., SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that can run periodically, take these changes from the OLTP system, and dump them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in a relational database, and data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one), which is easy to understand / use, easy to query and easy to slice / dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to a set of tables that provide context around those facts. E.g., if you have recorded 10,000 USD in sales, that number would go in a sales fact table and could have foreign keys attached to it that refer to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, which provide context to the numbers stored in fact tables. This schema structure, with the fact at the center surrounded by dimensions, is called a Star schema. A similar structure, with the difference that the dimension tables are normalized, is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called a Cube. Though physically a Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content. Facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy. E.g., Product may be broken into categories, and categories in turn into individual items. Category and Items are often referred to as Levels, and their constituents as Members, with their overall structure called a Hierarchy. Measures are rolled up as per the dimensional hierarchy. These rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with Cubes and others. Let's see a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users who typically don't care how current the data is. Mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc. don't care for that one specific transaction. This is where an ODS can come in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short-lived, i.e. for a few months, and the ODS would sync with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called a Data mart that supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse. The data warehouse here acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
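
    To make the fact/dimension vocabulary concrete, here is a minimal, self-contained sketch (all names hypothetical, in-memory data standing in for the warehouse) that slices a sales fact by an attribute of the agent dimension, the same shape a star-schema query takes in SQL:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class StarSchemaDemo
        {
            // Hypothetical dimension and fact shapes (AgentKey plays the surrogate key).
            record DimAgent(int AgentKey, string Name, string Region);
            record FactSale(int AgentKey, DateTime Date, decimal Amount);

            static void Main()
            {
                var agents = new List<DimAgent> {
                    new(1, "Asha", "East"), new(2, "Bo", "West") };
                var sales = new List<FactSale> {
                    new(1, new DateTime(2010, 1, 5), 10000m),
                    new(2, new DateTime(2010, 2, 9),  4000m),
                    new(1, new DateTime(2010, 3, 2),  2500m) };

                // Slice the measure (Amount) by a dimension attribute (Region).
                var byRegion = from s in sales
                               join a in agents on s.AgentKey equals a.AgentKey
                               group s.Amount by a.Region into g
                               select new { Region = g.Key, Total = g.Sum() };

                foreach (var row in byRegion)
                    Console.WriteLine($"{row.Region}: {row.Total}");
                // East: 12500, West: 4000
            }
        }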

    Read the article

  • Rockmelt, the technology adoption model, and Facebook's spare internet

    - by Roger Hart
    Regardless of how good it is, you'd have to have a heart of stone not to make snide remarks about Rockmelt. After all, on the surface it looks a lot like some people spent two years building a browser instead of just bashing out a Chrome extension over a wet weekend. It probably does some more stuff. I don't know for sure because artificial scarcity is cool, apparently, so the "invitation" is still in the post*. I may in fact never know for sure, because I'm not wild about Facebook sign-in as a prerequisite for anything. From the video, and some initial reviews, my early reaction was: I have a browser, I have a Twitter client; what on earth is this for? The answer, of course, is "not me". Rockmelt is, in a way, quite audacious. Oh, sure, on launch day it's Bay Area bar-chat for the kids with no lenses in their retro specs and trousers that give you deep-vein thrombosis, but it's not really about them. Likewise,  Facebook just launched Google Wave, or something. And all the tech snobbery and scorn packed into describing it that way is irrelevant next to what they're doing with their platform. Here's something I drew in MS Paint** because I don't want to get sued: (see: The technology adoption lifecycle) A while ago in the Guardian, John Lanchester dusted off the idiom that "technology is stuff that doesn't work yet". The rest of the article would be quite interesting if it wasn't largely about MySpace, and he's sort of got a point. If you bolt on the sentiment that risk-averse businessmen like things that work, you've got the essence of Crossing the Chasm. Products for the mainstream market don't look much like technology. Think for  a second about early (1980s ish) hi-fi systems, with all the knobs and fiddly bits, their ostentatious technophile aesthetic. Then consider their sleeker and less (or at least less conspicuously) functional successors in the 1990s. The theory goes that innovators and early adopters like technology, it's a hobby in itself. The rest of the humans seem to like magic boxes with very few buttons that make stuff happen and never trouble them about why. Personally, I consider Apple's maddening insistence that iTunes is an acceptable way to move files around to be more or less morally unacceptable. Most people couldn't care less. Hence Rockmelt, and hence Facebook's continued growth. Rockmelt looks pointless to me, because I aggregate my social gubbins with Digsby, or use TweetDeck. But my use case is different and so are my enthusiasms. If I want to share photos, I'll use Flickr - but Facebook has photo sharing. If I want a short broadcast message, I'll use Twitter - Facebook has status updates. If I want to sell something with relatively little hassle, there's eBay - or Facebook marketplace. YouTube - check, FB Video. Email - messaging. Calendaring apps, yeah there are loads, or FB Events. What if I want to host a simple web page? Sure, they've got pages. Also Notes for blogging, and more games than I can count. This stuff is right there, where millions and millions of users are already, and for what they need it just works. It's not about me, because I'm not in the big juicy area under the curve. It's what 1990s portal sites could never have dreamed of achieving. Facebook is AOL on speed, crack, and some designer drugs it had specially imported from the future. It's a n00b-friendly gateway to the internet that just happens to serve up all the things you want to do online, right where you are. Oh, and everybody else is there too. 
The price of having all this and the social graph too is that you have all of this, and the social graph too. But plenty of folks have more incisive things to say than me about the whole privacy shebang, and it's not really what I'm talking about. Facebook is maintaining a vast, and fairly fully-featured training-wheels internet. And it makes up a large proportion of the online experience for a lot of people***. It's the entire web (2.0?) experience for the early and late majority. And sure, no individual bit of it is quite as slick or as fully-realised as something like Flickr (which wows me a bit every time I use it. Those guys are good at the web), but it doesn't have to be. It has to be unobtrusively good enough for the regular humans. It has to not feel like technology. This is what Rockmelt sort of is. You're online, you want something nebulously social, and you don't want to faff about with, say, Twitter clients. Wow! There it is on a really distracting sidebar, right in your browser. No effort! Yeah - fish nor fowl, much? It might work, I guess. There may be a demographic who want their social web experience more simply than tech tinkering, and who aren't just getting it from Facebook (or, for that matter, mobile devices). But I'd be surprised. Rockmelt feels like an attempt to grab a slice of Facebook-style "Look! It's right here, where you already are!", but it's still asking the mature market to install a new browser. Presumably this is where that Facebook sign-in predicate comes in handy, though it'll take some potent awareness marketing to make it fly. Meanwhile, Facebook quietly has the entire rest of the internet as a product management resource, and can continue to give most of the people most of what they want. Something that has not gone un-noticed in its potential to look a little sinister. But heck, they might even make Google Wave popular.     *This was true last week when I drafted this post. I got an invite subsequently, hence the screenshot. **MS Paint is no fun any more. It's actually good in Windows 7. Farewell ironically-shonky diagrams. *** It's also behind a single sign-in, lending a veneer of confidence, and partially solving the problem of usernames being crummy unique identifiers. I'll be blogging about that at some point.

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old Oracle SOA and OC4J 10G topic. In fact, this is not even a SOA topic per se. Questions of RMI load balancing arise when you develop custom web applications accessing human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there was still some confusion in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public. Here is the tech note: Overview A typical use case in Oracle SOA is that you are building a web-based, custom human tasks UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web-based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as an RMI client. Then immediately you must ask yourself the following questions: 1. How do I make sure that the web application, as an RMI client, evenly distributes its load against all the nodes in the remote OC4J cluster? 2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fails, my web application will continue to function? That is the topic of how to achieve load balancing with an OC4J RMI client. Solutions You need to configure and code RMI load balancing in two places: 1. The provider URL can be specified with a comma-separated list of URLs, so that the initial lookup will land at one of the available URLs. 2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup. (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI) More details below: About the PROVIDER_URL The job of the JNDI property java.naming.provider.url is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts. The value of the JNDI property java.naming.provider.url takes the format of a single URL, or a comma-separated list of URLs. A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1 A comma-separated list of multiple URLs. For example: opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName When the client looks up a new Context for the very first time in the client session, it sends a query against the OPMN referenced by the provider URL. The OPMN host and port specify the destination of such a query, and the OC4J instance name and appName are actually the "where clause" of the query. When the PROVIDER URL references a single OPMN server Let's consider the case where the provider URL references only a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is a single OPMN server in the provider URL. A context represents a particular starting point at a particular server for subsequent object lookups. 
For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts: appName on oc4j_instance1 on host1; appName on oc4j_instance1 on host2; appName on oc4j_instance1 on host3 (provided that host1, host2, host3 are all in the same cluster). Please note that one OPMN will be sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can do an experiment by shutting down appName on host1 and observing that OPMN on host1 will still be able to return you appName on host2 and appName on host3. When the PROVIDER URL references a comma-separated list of multiple OPMN servers When the JNDI property java.naming.provider.url references a comma-separated list of multiple URLs, the lookup will return exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster. The purpose of having multiple OPMN servers is to provide high availability in the initial context creation, such that if the OPMN at host1 is unavailable, the client will try the lookup via the OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from getting the EJB on the 3rd server. About the oracle.j2ee.rmi.loadBalance Property After the client acquires the list of contexts, it will cache it at the client side as the "list of available RMI contexts". This list includes all the servers in the destination cluster. This list will stay in the cache until the client session (JVM) ends. The RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of the list. Whether and how often the client will refresh the Context from the list of Contexts is based on the value of oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance: client - If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation. context - Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked. lookup - Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup(). Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the "refresh" only occurs at the client. The client can only choose from the "list of available contexts" that was returned and cached from the very first lookup. That is, the client will merely get a new Context object from the "list of available RMI contexts" in the cache at the client side. The client will NOT go to the OPMN server again to get the list. That also implies that if you are adding a node to the server cluster AFTER the client's initial lookup, the client will not know about it, because neither the server nor the client will initiate a refresh of the "list of available servers" to reflect the new node. About High Availability (i.e. 
Resilience Against Node Failure of the Remote OC4J Cluster) What we have discussed above is load balancing. Let's also discuss high availability. This is how high availability works in RMI: when the client uses the context but gets an exception, such as a closed socket, it knows that the server referenced by that Context is problematic and will try to get another unused Context from the "list of available contexts". Again, this list is the list that was returned and cached at the very first lookup in the entire client session.
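
    Putting the two pieces together, the JNDI environment for a load-balanced lookup might look like this (host, instance, and application names are illustrative; the initial-context-factory entry is omitted because it depends on the client type):

        java.naming.provider.url=opmn:ormi://host1:6003:oc4j_instance1/appName,opmn:ormi://host2:6003:oc4j_instance1/appName,opmn:ormi://host3:6003:oc4j_instance1/appName
        oracle.j2ee.rmi.loadBalance=context

    With oracle.j2ee.rmi.loadBalance=context, each new InitialContext() picks a random member from the cached list, which spreads a web client's calls across the cluster.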

    Read the article

  • It was a figure of speech!

    - by Ratman21
    Yesterday I posted the following as an attention getter / advertisement (as well as my feelings) in the groups I am in on the social networking site LinkedIn, and boy did I get responses.    I am fighting mad about (a figure of speech, really) not having a job! Look, just because I am over 55 and have gray hair, it does not mean my brain is dead or that I can no longer troubleshoot a router or circuit or LAN issue. Or that I cannot do "IT" work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at min. wage. I know you will end up keeping me (hopefully at normal pay) around. Is anyone hearing me…come on, take up the challenge!     These were the responses I got.   I hear you. We just need to retrain and get our skills up to speed is all. That is what I am doing. I have not given up. Just got to stay on top of the game. Experience is on our side if we have the credentials, and if we are reasonable about our salaries this should not be an issue.   Already on it: going back to school, and I have got three certifications (CompTIA A+, Security+ and Network+). I am now studying for my Cisco CCNA certification. As to my salary, I am willing to work at a very reasonable rate.   You need to re-brand yourself like a product, market and sell yourself. You need to smarten up, look and feel a million dollars, re-energize yourself, regain your confidence. Either start your own business, or re-write your CV so it stands out from the rest; get the template off the internet. Contact every recruitment agent in your town, state, country and overseas, and on the web. Apply to every job you think you could do; you may not get it, but you will make a contact for your network, which may lead to a job at the end of the tunnel. Get in touch with everyone you know from past jobs. Do charity work. I maintain the IT network, stage electrical and the telecom equipment in my church.   Again, already on it. I have emailed the world, it seems, with my resume and cover letters. So far, I have rewritten my resume and cover letters, or had them rewritten, over seven times. Re-energize? I never lost my energy level or my self-confidence in my work (now if I could get some HR personnel to see the same). I also volunteer at my church; I created and maintain the church web site.   I share your frustration. Sucks being over 50 and looking for work. Please don't sell yourself short at min wage because the employer will think that's your worth. Keep trying!!   I never stop trying, and min wage is only for 90 days, if someone takes up the challenge.   Some posts asked if I am keeping up with technology.   Do you keep up with the latest technology and can speak the language fluidly?   Yep to that, and as to speaking it, also a yep! I am a geek, you know. I heard from others over the 50-year mark and younger too.   I'm with you! I keep getting told that I don't have enough experience because I just recently completed a Masters-level course in Microsoft SQL Server, which gave me a project-intensive equivalent of between 2 and 3 years of experience. On top of that training, I have 19 years as an applications programmer and database administrator. I can normalize rings around experienced DBAs and churn out effective code with the best of them. But my 19 years is worthless as far as most recruiters and HR people are concerned because it is not the specific experience for which they're looking. HR AND RECRUITERS TAKE NOTE: Experience, whatever the language, translates across platforms and technology! 
By the way, I'm also over 55 and still have "got it"!   I never lost it, and I can also work rings around younger techs.   I'm 52 and female and seem to be having the same issues. I have over 10 years of experience in tech support (with a BS in CIS) and can't get hired either.   Oh, I only have an AS in computer science, along with my certifications.   Keep the faith, I have been unemployed since August of 2008. I agree with you... I am willing to return to the beginning of my retail career and work myself back through the ranks, if someone will look past the grey and realize the knowledge I would bring to the table.   I also would like someone to look past the gray.   Interesting approach, volunteering to work for minimum wage for 90 days. I'm in the same situation as you, being 55 & balding w/white hair, so I know where you're coming from. I've been out of work now for a year. I'm in Michigan, where the unemployment rate is estimated to be 15% (the worst in the nation) & even though I've got 30+ years of IT experience ranging from mainframe to PC desktop support, it's difficult to even get a face-to-face interview. I had one prospective employer tell me flat out that I "didn't have the energy required for this position". Mostly I never get any feedback. All I can say is good luck & try to remain optimistic.   He said WHAT! Yes, remaining optimistic is key. Along with faith in God. Then there was this (for lack of a better word) jerk.   Give it up already. You were too old to work in high tech 10 years ago. Scratch that, 20 years ago! Try selling hot dogs in front of Fry's Electronics. At least you would get a chance to eat lunch with your previous colleagues....   You know, the funny thing about this person is that I checked out his profile. He is older than I am.

    Read the article

  • Can Google Employees See My Saved Google Chrome Passwords?

    - by Jason Fitzpatrick
    Storing your passwords in your web browser seems like a great time saver, but are the passwords secure and inaccessible to others (even employees of the browser company) when squirreled away? Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader MMA is curious if Google employees have (or could have) access to the passwords he stores in Google Chrome: I understand that we are really tempted to save our passwords in Google Chrome. The likely benefit is twofold: You don't need to (memorize and) input those long and cryptic passwords. These are available wherever you are once you log in to your Google account. The last point sparked my doubt. Since the password is available anywhere, the storage must be in some central location, and this should be at Google. Now, my simple question is, can a Google employee see my passwords? Searching over the Internet revealed several articles/messages. Do you save passwords in Chrome? Maybe you should reconsider: Talks about your passwords being stolen by someone who has access to your computer account. Nothing mentioned about the central storage security and vulnerability. There is even a response from the Chrome browser security tech lead about the first issue. Chrome's insane password security strategy: Mostly along the same line. You can steal passwords from somebody if you have access to the computer account. How to Steal Passwords Saved in Google Chrome in 5 Simple Steps: Teaches you how to actually perform the act mentioned in the previous two when you have access to somebody else's account. There are many more (including this one at this site), mostly along the same lines: points, counter-points, huge debates. I refrain from mentioning them here; simply carry out a search if you want to find them. Coming back to my original query: can a Google employee see my password? Since I can view the password using a simple button, it can definitely be unhashed (decrypted), even if encrypted. This is very different from the passwords saved in Unix-like OS's, where the saved password can never be seen in plain text. They use a one-way encryption algorithm to encrypt your passwords. This encrypted password is then stored in the passwd or shadow file. When you attempt to log in, the password you type in is encrypted again and compared with the entry in the file that stores your passwords. If they match, it must be the same password, and you are allowed access. Thus, a superuser can change my password, can block my account, but he can never see my password. So are his concerns well founded or will a little insight dispel his worry? The Answer SuperUser contributor Zeel helps put his mind at ease: Short answer: No* Passwords stored on your local machine can be decrypted by Chrome, as long as your OS user account is logged in. And then you can view those in plain text. At first this seems horrible, but how did you think auto-fill worked? When that password field gets filled in, Chrome must insert the real password into the HTML form element – or else the page wouldn't work right, and you could not submit the form. And if the connection to the website is not over HTTPS, the plain text is then sent over the internet. In other words, if Chrome can't get the plain text passwords, then they are totally useless. A one-way hash is no good, because we need to use them. 
Now the passwords are in fact encrypted; the only way to get them back to plain text is to have the decryption key. That key is your Google password, or a secondary key you can set up. When you sign into Chrome and sync, the Google servers will transmit the encrypted passwords, settings, bookmarks, auto-fill, etc., to your local machine. Here Chrome will decrypt the information and be able to use it. On Google's end all that info is stored in its encrypted state, and they do not have the key to decrypt it. Your account password is checked against a hash to log in to Google, and even if you let Chrome remember it, that encrypted version is hidden in the same bundle as the other passwords, impossible to access. So an employee could probably grab a dump of the encrypted data, but it wouldn't do them any good, since they would have no way to use it.* So no, Google employees can not** access your passwords, since they are encrypted on their servers. * However, do not forget that any system that can be accessed by an authorized user can be accessed by an unauthorized user. Some systems are easier to break than others, but none are fail-proof... That being said, I think I will trust Google and the millions they spend on security systems, over any other password storage solution. And heck, I'm a wimpy nerd, it would be easier to beat the passwords out of me than break Google's encryption. ** I am also assuming that there isn't a person who just happens to work for Google gaining access to your local machine. In that case you are screwed, but employment at Google isn't actually a factor any more. Moral: Hit Win + L before leaving your machine. While we agree with zeel that it's a pretty safe bet (as long as your computer is not compromised) that your passwords are in fact safe while stored in Chrome, we prefer to encrypt all our logins and passwords in a LastPass vault. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
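
    The one-way verification scheme described in the question (and contrasted with Chrome's recoverable storage) can be sketched in a few lines of C#; the iteration count and key size here are illustrative:

        using System;
        using System.Linq;
        using System.Security.Cryptography;

        class PasswordHashDemo
        {
            // Derive a salted, one-way hash; store salt + hash, never the password.
            static byte[] Hash(string password, byte[] salt) =>
                new Rfc2898DeriveBytes(password, salt, 100000).GetBytes(32);

            static void Main()
            {
                var salt = new byte[16];
                RandomNumberGenerator.Fill(salt);
                byte[] stored = Hash("hunter2", salt);

                // At login: re-derive from the typed password and compare.
                // (A real system would use a constant-time comparison.)
                bool ok = Hash("hunter2", salt).SequenceEqual(stored);
                Console.WriteLine(ok);   // True, yet the original password is
                                         // never recoverable from `stored`.
            }
        }

    This is exactly why a superuser can reset a Unix password but never read it, and why Chrome cannot take the same route: the browser has to hand the real plain text back to the login form.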

    Read the article

  • Should I choose KVM/XEN over OpenVZ or use them together?

    - by Krystian
    I've got a dual Xeon E5504 server with [for now] only 8 GB of RAM. Storage isn't impressive either: 3x 146 GB SAS in RAID 5 + 500 GB SATA drives. Currently it works as a development server, but it's over-specced for our needs, and since our development methods have changed over the last 2 years, we decided it will work as a production system for some of our applications + we would like to have a separate system for testing/research. Our apps are mainly web apps deployed on Tomcats [plural, as some of the apps require older versions] and connected to Postgres. I would like to have a production system where only httpd + Tomcat + DB are set up and nothing else runs there. A sterile system. Apart from that, I would like a test system, where I can play with different JVM settings, deploy my test apps, play with Tomcat/httpd settings and restart them without interfering with the production system. Apart from that, I would like to be able to play with different Linux flavors, with newer kernels, to test how they work etc. I know this is not possible with OpenVZ and I would have to choose KVM for that. I am thinking about merging the two, and setting up KVM to be able to work with different systems [Linux only, to be frank] + using OpenVZ to set up separate machines for my development needs. I would simply go with that, but reading here and there about the performance impact full virtualization has compared with containers, and looking at the specs of my server, makes me think twice about it. I don't want to lose too much performance, especially because of the nature of my apps [a few JVMs running at the same time]. It will be my first time with virtualization, apart from using desktop VirtualBox/VMware Server. Although I am a fast learner, I don't want to mess with the main system so much that it will break the production apps or make them crawl. Although they are more or less internal apps and they don't produce much load, they need to be stable. I've read that a KVM host is a normal Linux installation and it allows running normal processes on it. If that is so, does it allow running OpenVZ as well? I mean... can I have KVM and OpenVZ running on the same system/kernel? Or do I have to set up another system to run OpenVZ containers? How much performance impact can this have for me? Will my hardware suffice? Oh, and one more thing... unfortunately I'm quite limited with funds... I'm looking for a free solution only :/
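    Before committing to KVM, it is worth confirming that the host CPU exposes hardware virtualization at all. A quick sketch in Python (Linux only; it checks for the Intel VT-x or AMD-V CPU flags that KVM requires):

        # Scan /proc/cpuinfo for the vmx (Intel) or svm (AMD) flag.
        flags = set()
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        print("KVM capable:", "vmx" in flags or "svm" in flags)

    The E5504 is a VT-x part, so vmx should appear; note, though, that a BIOS setting can still block it, in which case loading the kvm-intel module typically complains in dmesg that VT is disabled by the BIOS.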

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
    In yesterday's blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data. In this series we have covered many of the most essential details about Big Data. At the beginning of this series, I encouraged readers to send me questions. One of the most popular questions is: "I want to learn more about Big Data. Where can I learn it?" This is indeed a great question, as there are plenty of resources out there to learn about Big Data, and it is indeed difficult to select one resource to learn from. Hence I decided to list here a few of the very important resources related to Big Data. Learn from Pluralsight Pluralsight is a global leader in high-quality online training for hardcore developers. It has fantastic Big Data courses, and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data: Big Data: The Big Picture; Big Data Analytics with Tableau; NoSQL: The Big Picture; Understanding NoSQL; Data Analysis Fundamentals with Tableau. I encourage all of you to start with these video courses, as they are fantastic fundamentals for learning Big Data. Learn from Apache The resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various concepts in Apache's library. I am pretty impressed with the documentation, and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning. Hadoop: The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing. Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, which includes support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heat maps, and the ability to view MapReduce, Pig and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner. Avro: A data serialization system. Cassandra: A scalable multi-master database with no single points of failure. Chukwa: A data collection system for managing large distributed systems. HBase: A scalable, distributed database that supports structured data storage for large tables. Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying. Mahout: A scalable machine learning and data mining library. Pig: A high-level data-flow language and execution framework for parallel computation. ZooKeeper: A high-performance coordination service for distributed applications. Learn from Vendors One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements, and there are lots of things required to set up a Big Data framework. Many users do not start with Big Data because they are afraid of the resources required to set up the framework, as well as the time commitment. Here Hortonworks has created a fantastic learning environment. They have created a Sandbox with everything one needs to learn Big Data, and have also provided excellent tutorials along with it.
    The Sandbox comes with a dozen hands-on tutorials that will guide you through the basics of Hadoop, and it also contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and its tutorials. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there are any other such platforms to learn Big Data; I will include them over here as well. Learn from Books There are indeed a few good books out there which one can refer to in order to learn Big Data. Here are a few good books which I have read. I will update the list as I learn more. Ethics of Big Data: Balancing Risk and Innovation; Big Data for Dummies; Head First Data Analysis: A Learner's Guide to Big Numbers, Statistics, and Good Decisions. If you search on Amazon there are millions of books, but I think the above three are a great set and will give you great ideas about Big Data. Once you go through them, you will have a clear idea about the next step you should follow in this series, and you will be capable enough to make the right decision for yourself. Tomorrow In tomorrow's blog post we will wrap up this series on Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
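    To give a flavor of what the MapReduce model from the Apache list above looks like in practice, here is a sketch of the classic word count written for Hadoop Streaming, which lets you write jobs in any language, Python included. The file names and the invocation are illustrative only; the exact path of the streaming jar differs per distribution:

        #!/usr/bin/env python
        # mapper.py -- emit "word<TAB>1" for every word read from stdin
        import sys

        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

        #!/usr/bin/env python
        # reducer.py -- Hadoop sorts mapper output by key, so equal words
        # arrive together and can be summed in a single pass
        import sys

        current, count = None, 0
        for line in sys.stdin:
            word, _, n = line.partition("\t")
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = word, 0
            count += int(n)
        if current is not None:
            print("%s\t%d" % (current, count))

    A typical (illustrative) run would be: hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py. You can also dry-run the same pair locally with: cat input.txt | python mapper.py | sort | python reducer.py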

    Read the article

  • Remote Desktop fails without error message

    - by Adrian Grigore
    Hi, after rebooting my Windows 2008 R2 server, I can't log into the remote desktop anymore. When I try to connect, the remote desktop zooms through different status messages, the last of them being "Configuring remote session", and then reverts to the initial connection dialog again without giving me any error message. The server seems to be up, since it's still delivering web pages. Also, it does seem to be accepting my credentials. Is there any way to see why the connection fails? I've browsed through my system's event logs, but could not find anything related to remote desktop. Perhaps there's some hidden troubleshooting mode? Thanks, Adrian. Edit: In the meantime the server has come back online. I'm not sure if it did so on its own or if tech support intervened, because I have not heard from them so far, but the problem is solved for the moment. It's a bit disappointing not to know the cause of the problem, though.
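    One low-tech check that narrows this kind of failure down: if a plain TCP connection to the RDP port succeeds, the network path and listener are fine and the fault lies in the session setup itself (licensing, a broken listener configuration, a stale session). A minimal Python sketch, with a hypothetical hostname:

        import socket

        host = "myserver.example.com"  # placeholder -- substitute the real server
        try:
            with socket.create_connection((host, 3389), timeout=5):
                print("TCP 3389 reachable: failure is inside RDP session setup")
        except OSError as e:
            print("TCP 3389 unreachable:", e)

    On 2008 R2 it is also worth looking beyond the classic System log: the dedicated channels under Applications and Services Logs > Microsoft > Windows > TerminalServices-* usually record why a session was torn down.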

    Read the article

  • MSTSC crash on connect

    - by rotard
    We are supporting a client with primarily Windows XP machines. The users need to use Remote Desktop to connect to a terminal server. Unfortunately, after upgrading to SP3 on some machines, MSTSC.exe crashes when they try to connect to the terminal server (a Win 2008 machine). The resolution I have found has been to revert to an older version of MSTSC, as described here: http://it.tmod.pl/Blog/EntryId/115/Remote-Desktop-Connection-crashes.aspx . Another tech at my company independently arrived at a similar solution. Unfortunately, now some of the users' printers are missing (when connected to the terminal server). Has anyone else seen this issue? How did you resolve it?
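    Since the problem tracks the RDP client version, it can help to inventory which build each XP machine actually has before and after the revert. A rough sketch that shells out to wmic from Python (the WQL quoting is the fiddly part; adjust the path for your system directory):

        import subprocess

        # wmic's WQL needs the backslashes in the path doubled
        query = r'name="C:\\WINDOWS\\system32\\mstsc.exe"'
        out = subprocess.check_output(
            ["wmic", "datafile", "where", query, "get", "Version"])
        # SP3 ships a newer RDP client; the revert described above installs an older build
        print(out.decode("mbcs", "replace"))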

    Read the article

  • OpenVPN Error: TLS Error: local/remote TLS keys are out of sync: [AF_INET]

    - by Lucidity
    First off, thanks for reading this; I appreciate any and all suggestions. I am having some serious problems reconnecting to my OpenVPN client using Riseup.net's VPN. I have spent a few days banging my head against the wall in an attempt to set this up on my iOS devices... but that is a whole other issue. I was, however, able to set it up on my Mac, specifically on my Windows Vista 32-bit Boot Camp VM, with relatively little trouble. To originally connect I only had to modify the recommended config file very slightly (config file included at the end of this post): - I had to enter the code directly into my config file - And change "dev tap" to "dev tun" So I was connected. (Note - I did test to ensure the VPN was actually working after I originally connected; it was. I also verified the .pem file (inserted as the coding in my config file) for authenticity.) I left the VPN running. My computer went to sleep. Today I went to use the internet expecting (possibly incorrectly - I am now unsure if I was wrong to leave it running) to still be connected to the VPN. However, I saw immediately I was not. I went to reconnect. And was (am) unable to. My logs after attempting to connect (and getting a connection failed dialog box) show everything working as it should (as far as I can tell) until the end, where I get the following lines: Mon Sep 23 21:07:49 2013 us=276809 Initialization Sequence Completed Mon Sep 23 21:07:49 2013 us=276809 MANAGEMENT: >STATE:1379995669,CONNECTED,SUCCESS, OMITTED Mon Sep 23 21:22:50 2013 us=390350 Authenticate/Decrypt packet error: packet HMAC authentication failed Mon Sep 23 21:23:39 2013 us=862180 TLS Error: local/remote TLS keys are out of sync: [AF_INET] VPN IP OMITTED [2] Mon Sep 23 21:23:57 2013 us=395183 Authenticate/Decrypt packet error: packet HMAC authentication failed Mon Sep 23 22:07:41 2013 us=296898 TLS: soft reset sec=0 bytes=513834601/0 pkts=708032/0 Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=1, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=Riseup Networks, [email protected] Mon Sep 23 22:07:41 2013 us=671299 VERIFY OK: depth=0, C=US, O=Riseup Networks, L=Seattle, ST=WA, CN=vpn.riseup.net Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Mon Sep 23 22:07:46 2013 us=772508 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Mon Sep 23 22:07:46 2013 us=772508 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Mon Sep 23 22:07:46 2013 us=772508 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 2048 bit RSA So I have searched for a solution online, and I have included what I have attempted below; however, I fear (know) I am not knowledgeable enough in this area to fix this myself. I apologize in advance for my ignorance. I do tech support for a living, but not this kind of tech support, unfortunately. Other notes and troubleshooting done: - Windows Firewall is disabled completely, as well as other anti-virus programs - Tor is disabled completely - No proxies running - Time is correct in all locations - Router firmware is up to date - Able to connect to the internet, and as far as I can tell all necessary ports are open - No settings have been altered since I was able to connect successfully - Ethernet as well as wifi connections attempted; both resulted in the same error.
    I also tried adding the following lines to my config file (without success or any change in the error): persist-key persist-tun proto tcp (after reading that this error generally occurs on UDP connections and is extremely rare on TCP) resolv-retry infinite (thinking the connection may have timed out, since the issues occurred after leaving the VPN connected during roughly 10 hours with the computer in sleep mode). All attempts resulted in the exact same error included at the top of this post. The original suggestions I found online stated: (regarding the TLS error) this error should resolve itself within 60 seconds, or, if not, wait 120 seconds and try again (which isn't the case here...); (regarding the "out of sync" error) if you continue to get "out of sync" errors and the link does not come up, then it means that something is probably wrong with your config file. You must use either ping and ping-restart on both sides of the connection, or keepalive on the server side of a client/server connection, in order to gracefully recover from "local/remote TLS keys are out of sync" errors. I wouldn't be surprised if my config file is lacking or not correct; however, I can confirm I followed the instructions to a tee. And I was able to connect originally (and have not modified my settings or config file between when I was able to connect and when the error began occurring). I have a very simple config file: client dev tun tun-mtu 1500 remote vpn.riseup.net auth-user-pass ca RiseupCA.pem redirect-gateway verb 4 <ca> -----BEGIN CERTIFICATE----- [OMITTED] -----END CERTIFICATE----- </ca> I would really appreciate any help or suggestions. I am at a total loss here, and I know I'm asking a lot. Though I am a new user on this site, I help others on many forums, including Microsoft's support community and especially Apple's support communities, so I will definitely pass on anything I learn here to help others. Thanks so much in advance for reading this.
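    For reference, the keepalive directive the quoted advice mentions is just shorthand for a ping/ping-restart pair. A sketch of what the client-side half would look like (values are illustrative; on a hosted service like Riseup the server side is out of your control):

        # client side -- ping the peer every 10 s; if nothing is heard for
        # 120 s, restart the connection and renegotiate keys
        ping 10
        ping-restart 120
        persist-key
        persist-tun

    Per the OpenVPN manual, keepalive 10 120 on a server expands to the same pair pushed to clients, with the server itself using double the timeout. Whether Riseup's server pushes these is something their support would have to confirm.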

    Read the article

  • Upgrading a PERC H310 to a PERC H710 Mini RAID controller on a Dell R620

    - by Gregg Leventhal
    I have an ESXi 5.0 free-license host using an internal datastore (RAID 5, 5 disks) that was configured with a Dell PERC H310 RAID controller. The disk performance was very poor, so I upgraded to the PERC H710 Mini. The IT tech installed the controller and powered the host back on. I had to rescan the controller, and the datastore appeared. Should any settings be changed in the RAID BIOS, or are the default settings sufficient? Is there anything to be aware of when performing this type of upgrade in order to achieve maximum performance?
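    The headline difference between the two controllers is cache: the H310 has none and can only write through, while the H710 Mini carries battery-backed cache and can safely run write-back, which is where most of the RAID 5 performance gain comes from. The H710 is an LSI MegaRAID design, so the policy can be inspected with the MegaCli utility if it is installed on the host; a rough sketch (flag syntax from memory, so check your version's help output):

        MegaCli -LDInfo -LALL -aALL          # show current cache policy per logical disk
        MegaCli -LDSetProp WB -LALL -aALL    # enable write-back (requires a healthy battery)

    Read-ahead and disk-cache defaults are usually reasonable; write-back with a healthy BBU is the one setting worth verifying.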

    Read the article

  • LSI 9211-8i won't boot in KVM

    - by Paul McMillan
    I'm running Ubuntu 11.04 with an LSI 9211-8i in IT mode with the latest firmware (10). I'm using KVM with VT-d enabled to pass the entire PCIe device through to the guest OS, and I have the device disabled in the system BIOS. When I boot my virtual machine, the adapter quits during BIOS initialization with the following error: Unable to load LSI Corporation MPT BIOS MPT BIOS Fault 0Ch encountered at adapter PCI(00h,04h,00h) Press any key to continue... I know the virtualization is correctly enabled, and I've blacklisted the kernel modules in the host OS. Has anyone else encountered this error? I'm about out of ideas here. Edit: I contacted LSI tech support about this, and they suggested that I try Red Hat and Xen. Apparently they don't test or support anything else.
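    A sanity check that is easy to run on the host before blaming the card: confirm the kernel actually activated VT-d, since PCI passthrough fails in odd ways when it silently didn't. A small Python sketch that scans the kernel ring buffer for DMAR/IOMMU lines; run it as root on the Ubuntu host:

        import subprocess

        log = subprocess.check_output(["dmesg"]).decode("utf-8", "replace")
        hits = [l for l in log.splitlines() if "DMAR" in l or "IOMMU" in l]
        print("\n".join(hits) or "No DMAR/IOMMU messages -- VT-d may be off")

    If nothing shows up, check that intel_iommu=on is on the kernel command line and that VT-d (not just VT-x) is enabled in the host BIOS.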

    Read the article

  • Wireless AP Placement and Diagramming

    - by Matt Simmons
    I'm trying to research the best placement of wireless APs in a given space, and I'm running into problems gathering information. I found (what I thought was) a great source in this TechRepublic article: http://techrepublic.com.com/5206-6230-0.html?forumID=82&threadID=163120 While the diagram there seems detailed and overall very informative, there were a lot of comments about how it failed to account for things like "wire racks, microwaves, concrete walls, motors..." etc. Maybe I'm rash, but I just sort of looked around my office (which is, albeit, somewhat smaller than the one diagrammed), went "uhhh, there", and hooked up the AP. It seems to cover everywhere. I imagine if my office quadrupled in size, I'd logically divide it up and put in four APs, with a similar amount of thought devoted to each. So, suppose I had a much more complex office. What tools (both diagramming and surveying) do you use to plan your AP placement?
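    The commenters' objection (racks, microwaves, concrete) is really a link-budget argument, and you can at least bound the problem before buying survey tools. A back-of-the-envelope Python sketch using the standard free-space path loss formula; real offices attenuate far more than free space, so treat these figures as a best case:

        import math

        def fspl_db(distance_m, freq_mhz=2412.0):  # 2.4 GHz Wi-Fi, channel 1
            # FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
            d_km = distance_m / 1000.0
            return 20 * math.log10(d_km) + 20 * math.log10(freq_mhz) + 32.44

        for d in (5, 10, 20, 40):
            print("%3d m: %5.1f dB" % (d, fspl_db(d)))

    Add a few dB per drywall partition and much more for concrete or metal (rule-of-thumb figures, not measurements), and you get a crude radius per AP that a proper site survey can then confirm.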

    Read the article

  • Can a USB cable carry 12V?

    - by zm15
    Here's what I want to do. I have an Acer Iconia A500 tablet. I want to plug it in in the car, but it has a barrel plug and I don't want to buy an inverter; the dedicated car adapters are expensive for what they do. I already have a 2.1 A USB car charger meant for the iPad: http://www.amazon.com/Kensington-K33497US-PowerBolt-Charger-Compatible/dp/tech-data/B003PU01M4/ref=de_a_smtd And I want to use this USB cable from the 2.1 A port to plug into the A500: http://www.amazon.com/gp/product/B00304DZ7I/ref=ox_sc_act_title_2?ie=UTF8&m=A1HPBDJJIXKXS7 Here are the specs on the original wall charger, if that helps: http://www.phihong.com/assets/pdf/PSA18R.pdf The USB cable says it's 5 V, but the original charger says it outputs 12 V. But since it's just a cable, I wasn't sure if that really makes a whole lot of difference, given it's only 1.5 A from the wall charger. Is it possible to use that USB cable through the PowerBolt car charger to charge the A500?
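    The arithmetic behind the question makes the answer concrete: a passive USB cable only wires through the 5 V rail and cannot step it up to 12 V, and the power budget falls short as well. A small sketch (the 1.5 A figure is taken from the PSA18R spec linked above; the 2.1 A rating from the Kensington product page):

        usb_port  = 5.0 * 2.1    # 10.5 W available from the 2.1 A USB car charger
        stock_psu = 12.0 * 1.5   # 18.0 W delivered by the original 12 V adapter
        print("USB port: %.1f W, stock adapter: %.1f W" % (usb_port, stock_psu))

    So even if the connector fit, the tablet would see 5 V at roughly half the wattage it expects; charging a 12 V-input device from a USB port generally requires an active boost (step-up) adapter, not a passive cable.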

    Read the article
