Search Results

Search found 3444 results on 138 pages for 'research'.


  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization versus adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge. Some of the key points made in the sessions included:
    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.
    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-source tools and most of them focus on consuming XBRL-based data. The 5 finalists included:
    - Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data with peer comparison capabilities.
    - Arrelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.
    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge. XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org. So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company against its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff as well as individual investors. This could apply to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting – which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue as companies become able to leverage one set of XBRL-based financial data for multiple regulatory filings. For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links: http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • Career guidance/advice for Junior-level Software Engineer [closed]

    - by John Do
    I have quite a few questions on my mind, so please bear with me. Please don't feel obligated to answer all of them; any you choose will do. I'd appreciate it if you could share some insight on any of these. Before I begin, some context: I currently have almost two years of professional experience as a Software Engineer, mainly developing software in Java. At this point, I feel that I have reached the peak of my career growth at the company I am currently at, and therefore I am looking for a new job, ideally again as a Software Engineer. I have been interviewing casually for the past few months but have not had luck with companies I have a passion for. So, in no particular order -
    1) In general, what are your thoughts on having a graduate degree in CS / Software Engineering? How much does it influence a salary increase, and do you think it's beneficial when working on real-world problems? I get the sense that a graduate degree in the field is trivial unless you really have a passion for research.
    2) In general, in professional practice, how often have you had to write your own data structures and "complex" algorithms from scratch? In my own work, I have found myself relying mainly on third-party frameworks and the Java standard library to implement solutions as per business requirements. What are your thoughts on this?
    3) In terms of resume, I feel the most ambivalent here. I want to be able to embellish my resume to a certain extent so that it stands out from others', but at the same time I do not want to over-exaggerate my abilities. How do you strike a balance here? For example: I say that I am proficient in Java with data structures and algorithms. This is obviously a subjective and relative statement. I've taken the classes in my undergrad, and I've applied that knowledge in my work experience. What I consider "proficiency" can be seen as junior-level by others. How do you know what to say? Most of the time, recruiters (with no technical background) will be looking for keywords that stand out. This leads me to my next question (4).
    4) Just from interviewing for the past few months (and getting plenty of rejections), I've come to realize that I may not be as proficient in data structures and algorithms as I thought I was. Do you think it's a good idea to remove the "proficient in Java / data structures and algorithms" claim? I feel that being too honest on the resume will impede me from scoring opportunities to even have an interview with top-notch companies. What are your thoughts?
    5) What is the absolute "must-have" knowledge going into a technical interview? I've been practicing several algorithmic and data structure problems now, and I feel that my ability to solve arbitrary problems efficiently has not gotten significantly better. Do you think these abilities are something innate - either you have it in you, or you don't? How can you teach yourself to learn, if you will?
    6) How easy is it to move from one industry or function to the next? I work mainly with backend technologies and I'm now interested in working with the frontend, i.e. JavaScript, jQuery, PHP, or even mobile development. In your own experience, how did you avoid getting pigeonholed in your career? I feel that the choices you make now ultimately decide your future. As cliché as it sounds, I think it may be true. Here's what I mean: if you've worked mainly as a backend engineer, people are interested in you doing the same thing, since you've already accumulated experience in that function.
    How do you get experience in a new function if people won't accept you because you don't already have it? It's a catch-22, you see... Are side projects the only real way to help you move from one function to another that you're truly interested in? For example: I could start writing my own mobile applications, even though I've worked mainly on the backend. Thanks so much for the long read. As a relatively new engineer to the real world, I am very humble and would like those who are experienced to shed some light. Thank you so much.

    Read the article

  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much) but things get tricky when he is your manager. In general I am fine, but there are times when I feel I am not being treated fairly:
    He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The details of this specific case are as follows: I was working on this feature, I was already a couple of hours over my normal working hours, and I decided to stop and continue tomorrow. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all the changes to the central repo (in case my hard drive crashes). So I push the change, and the ticket is marked as "to be tested". Next day I come in, he sits next to me, starts complaining, and says the line I quoted above. I really didn't know what to say; I tried to explain to him that the ticket is still being worked on, but he didn't seem to listen.
    He interrupts me while I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I am just wasting his time. He asks me to accumulate all my questions and then ask him all at once, which is not always possible, as you need a clarification before you can continue on a feature implementation. And when I am coding, he talks on the phone with his customers next to me (when he could go to the meeting room with his laptop) and doesn't care.
    He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it as I am equally comfortable working with the new IDE, but I couldn't understand the reasoning behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name red in the places where it is used. I told him that I do not have a problem, since I always search for method usage in the project and make sure it's updated. IDEs even have refactoring features for exactly that, but...
    I recently implemented a feature for a project, and I was happy about it, and considering him a senior, I asked him for his comments on the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled" - he was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that and then he said "oh okay". BUT, the funny thing is, a few days back, he implemented something and I objected with "But that would not run if JS is disabled" and his response was "We don't have to care about people who disable JS" :-/
    Once he asked me to investigate if there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was "ah that's ugly, and hacky, not acceptable" - and two days later, I saw that feature implemented in the same way I had suggested.
    The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful and trust that I know what's hacky, and that if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with this situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face-to-face meeting, but I worry that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message, I really appreciate your help.

    Read the article

  • Silverlight Relay Commands

    - by George Evjen
    I am fairly new at Silverlight development and I usually have an issue that needs research every day, which I enjoy, since I like the idea of going into a day knowing that I am going to learn something new. The issue that I am currently working on centers around relay commands. I have a pretty good handle on relay commands and how we use them within our applications.

    <Button Command="{Binding ButtonCommand}" CommandParameter="NewRecruit" Content="New Recruit" />

    Here in our XAML we have a button. The button has a Command and a CommandParameter. The Command binds to the ButtonCommand that we have in our ViewModel.

    RelayCommand _buttonCommand;

    /// <summary>
    /// Gets the button command.
    /// </summary>
    /// <value>The button command.</value>
    public RelayCommand ButtonCommand
    {
        get
        {
            if (_buttonCommand == null)
            {
                _buttonCommand = new RelayCommand(
                    x => x != null && x.ToString().Length > 0 && CheckCommandAvailable(x.ToString()),
                    x => ExecuteCommand(x.ToString()));
            }
            return _buttonCommand;
        }
    }

    In our relay command we then do some checks with a lambda expression. We check whether the command parameter is null, whether its length is greater than 0, and we have a CheckCommandAvailable method that tells us whether the button is even enabled. After we check on these three items we then pass the command parameter to an action method. This is all pretty straightforward. The issue that we solved a few days ago centered around a control that needed to use a relay command; this control was nested and was using a different DataContext. The example below illustrates how we handled this scenario. In our XAML user control we had to set a name on the control.

    <Controls3:RadTileViewItem x:Class="RecruitStatusTileView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:Controls1="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
        xmlns:Controls2="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Input"
        xmlns:Controls3="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation"
        mc:Ignorable="d" d:DesignHeight="400" d:DesignWidth="800" Header="{Binding Title,Mode=TwoWay}" MinimizedHeight="100"
        x:Name="StatusView">

    Here we are using a Telerik RadTileViewItem.
    We set the name of this control to "StatusView". In our button control we set our command parameter and command differently than in the example above.

    <HyperlinkButton Content="{Binding BigBoardButtonText, Mode=TwoWay}"
                     CommandParameter="{Binding 'Position.PositionName'}"
                     Command="{Binding ElementName=StatusView, Path=DataContext.BigBoardCommand, Mode=TwoWay}" />

    This hyperlink button lives in a ListBox control, and this listbox has an ItemsSource of PositionSelectors. The CommandParameter is bound to the Position.PositionName property of that PositionSelectors object. This again is pretty straightforward. What gets a bit tricky is the Command property in the hyperlink. It is bound to the element name we created in the user control (StatusView). Because this hyperlink is in a listbox and is in the item template, it doesn't have a direct handle on the DataContext that the RadTileViewItem has, so we have to make sure it does. We do that by binding to the element name StatusView and then setting the path to DataContext.BigBoardCommand. BigBoardCommand is the name of the RelayCommand in the view model.

    private RelayCommand _bigBoardCommand = null;

    /// <summary>
    /// Gets the big board command.
    /// </summary>
    /// <value>The big board command.</value>
    public RelayCommand BigBoardCommand
    {
        get
        {
            if (_bigBoardCommand == null)
            {
                _bigBoardCommand = new RelayCommand(x => true, x => AddToBigBoard(x.ToString()));
            }
            return _bigBoardCommand;
        }
    }

    From there we check for true again and then call the action, passing in the parameter that we had as the command parameter. What we are working on now is a bit trickier than this second example. In the above example we are only creating this TileViewItem with the name "StatusView" once. In another part of our application we are generating multiple TileViewItems, so we cannot set the name in the control, as we can't have multiple controls with the same name. When we run the application we get an error that reads that the value is out of the expected range. My searching has led me to think we cannot have multiple controls with the same name. This is today's problem, and I'll post the solution once it is found.
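    For readers newer to the pattern, here is a minimal sketch of what a RelayCommand class of this kind typically looks like. This is an illustrative implementation assuming the common MVVM approach, not the exact class used in our project; the constructor takes the can-execute predicate first and the execute action second, to match the usage shown above.

    using System;
    using System.Windows.Input;

    // Illustrative RelayCommand: wraps a can-execute predicate and an execute action
    // so a view model can expose commands without code-behind.
    public class RelayCommand : ICommand
    {
        private readonly Func<object, bool> _canExecute;
        private readonly Action<object> _execute;

        public RelayCommand(Func<object, bool> canExecute, Action<object> execute)
        {
            if (execute == null)
            {
                throw new ArgumentNullException("execute");
            }
            _canExecute = canExecute;
            _execute = execute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            // A null predicate means the command is always available.
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

        // Call this when the conditions behind CanExecute change so that bound
        // controls (like the buttons above) re-evaluate their enabled state.
        public void RaiseCanExecuteChanged()
        {
            EventHandler handler = CanExecuteChanged;
            if (handler != null)
            {
                handler(this, EventArgs.Empty);
            }
        }
    }

    With something like this in place, the view model properties shown earlier simply construct the command once and hand it the two lambdas.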

    Read the article

  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term "science" in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – "Data Scientist". Since it's not a standard term, it has a lot of definitions, and in fact has been disputed as a correct term. After all, the reasoning goes, if there's no such thing as "Data Science" then how can there be a Data Scientist? This argument has been made before, albeit with a different term – "Computer Science". In Peter Denning's excellent article "Is Computer Science Science?" (Communications of the ACM, April 2005, Vol. 48, No. 4) there are many points that separate "science" from "engineering" and even "art". I won't repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there.
    Definition of Science
    To answer the question "is data science 'science'?" we need to start with a definition of terms. Various references put the definition into the same basic areas:
    - the study of the physical world
    - the systematic and/or disciplined study of a subject area
    ...and then they include the things studied, the bodies of knowledge and so on. The word itself comes from Latin, and means merely "to know" or "to study to know". Greek divides knowledge further into "truth" (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.
    Definition of Data Science
    And now a more controversial definition: Data Science. This term is so new and perhaps so niche that the major dictionaries haven't yet picked it up (my OED reference is older – can't afford to pop for the online registration at present). Researching the term's general use, I created an amalgam of the definitions this way: "Studying and applying mathematical and other techniques to derive information from complex data sets." Using this definition, data science certainly seems to be science - it's learning about and studying some object or area using systematic methods. But implicit within the definition is the word "application", which makes the process more akin to engineering or even technology than science. In fact, I find that using these techniques – and data itself – is part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics or computer science. That of course is a type of science, but one that does not seek practical applications. As part of the argument against calling it "Data Science", some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact makes heavy use of it) rather than starting with an assumption and following on with it. So, is Data Science "Science"? I'm uncertain – and I'm uncertain it matters. Even if we are facing rampant "title inflation" these days (does anyone introduce themselves as a secretary or supervisor anymore?) I can tolerate the term, at least with the intent that we use data to study problems across a wide spectrum, rather than restricting it to a single domain.
    And I also understand why those who have worked hard to achieve the very honorable title of "scientist" have issues with those who borrow the term without asking. What do you think? Science, or not? Does it matter?

    Read the article

  • Taking a Flying Leap

    - by Lance Shaw
    Yesterday, I went skydiving with three of my children. It was thrilling, scary, invigorating and exciting. While there is obvious risk involved, the reward and feeling of success was well worth it. You might already be wondering what skydiving has to do with WebCenter, so let me explain. Implementing a skydiving program and becoming an instructor does not happen overnight. It does not happen with the purchase of the needed technology. Not one of us could go out, buy a parachute, the harnesses, helmet and all the gear and be able to convince anyone that we are now ready to be a skydiving instructor. The fact is that obtaining the technology is merely a small piece of the overall process, and the same is the case with managing content in your company. You don't just buy the right software (Oracle WebCenter Content) and go to your boss and declare information management success. There is planning, research and effort that goes into deploying software of any kind, especially when it is as mission-critical to the success of your business as Enterprise Content Management. To become a certified skydiving instructor takes at least 3 years of commitment and often longer. In the United States, candidates must complete over 500 solo jumps of their own over a minimum of 36 months and then must complete additional rigorous training under observation. When you consider the amount of time and effort involved, it's not unlike getting a college degree, and anyone that has trusted their lives to one of these instructors will no doubt appreciate their dedication to the curriculum. Implementing an ECM system won't take that long, but it certainly requires commitment, analysis and consideration. But guess what? Humans are involved and that means that mistakes can happen and that rules change. This struck me while reading an excellent post on darkreading.com by Glenn S. Phillips entitled "Mission Impossible: 4 Reasons Compliance is Impossible". His over-arching point was that with information management and security, environments change and people are involved, meaning the work is never done. He stated that you can never claim your compliance efforts are complete, for the following reasons:
    - People are involved. And let's face it, some are more trustworthy than others.
    - Change is constant. There is always some new technology coming along that is disruptive. Consumer-grade cloud file sharing and sync tools come to mind here.
    - Compliance is interpreted, not defined. Laws and the judges that read them are always on the move.
    - Technology is a tool, not a complete solution. There is no magic pill.
    The skydiving analogy holds true here as well. Ultimately, a single person packs your parachute. For obvious reasons, you prefer that this person be trustworthy, but there are no absolute guarantees of a 100% error-free scenario. Weather and wind conditions are never a constant, and the best-laid plans for a great day of skydiving are easily disrupted by forces outside of your control. Rules and regulations vary by location and may be updated at any time, and as I mentioned early on, even the best technology on its own will only get you started. The good news is that, like skydiving, with the right technology, the right planning, the right team and a proper understanding of the rules and regulations that govern your industry, your ECM deployment can be a great success.
Failure to plan for any of the 4 factors that Glenn outlined in his article will certainly put your deployment and maybe even your company at risk, so consider them carefully. As a final aside, for those of you who consider skydiving an incredibly dangerous and risky pastime, consider this comparative statistic.  In 2012, the U.S. Parachute Association recorded 19 fatal skydiving accidents in the U.S. out of roughly 3.1 million jumps.  That’s 0.006 fatalities per 1,000 jumps. By comparison, the U.S. National Highway Traffic Safety Administration reports that there were 34,080 deaths due to car accidents in 2012.  Based on the percentages, one could argue that it is safer to jump out of a plane than to drive to the airport where the skydiving will take place. While the way you manage, secure, classify, control, retain and dispose of company files may not carry as much risk as driving or skydiving, it certainly carries risk for the organization when not planned and deployed appropriately.  Consider all the factors involved in your organization as you make your content management plans.  For additional areas of consideration, be sure to download our free whitepaper on the topic entitled "The Top 10 Criteria for Choosing an ECM System" which is available for download here.

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerization of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customers' mobile requirements. "The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful," said Arin Bhowmick, Director, CRM, for the Applications UX team. "We did that by talking to and analyzing the needs of a lot of people in different roles." The team studied real-life sales teams. "We wanted to study salespeople in context with their work," Bhowmick said. "We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go." Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog. Mobile devices are forcing a paradigm shift in the workplace – they're changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost-effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.
    Blurring the Lines between Work and Personal Life
    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven. In turn, enterprises will have to determine how to make employees' work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovation in that way are in the dinosaur age.
    Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.
    How to Survive and Thrive in Today's Marketplace
    The notion of Facebook isn't new. We lived it in pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT, with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. When you're sitting at your desk on a big desktop computer, you typically have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience.
    Hernan Capdevila
    Vice President, Oracle Applications Development

    Read the article

  • Most Innovative IDM Projects: Awards at OpenWorld

    - by Tanu Sood
    On Tuesday at Oracle OpenWorld 2012, Oracle recognized the winners of the Innovation Awards 2012 at a ceremony presided over by Hasan Rizvi, Executive Vice President at Oracle. Oracle Fusion Middleware Innovation Awards recognize customers for achieving significant business value through innovative uses of Oracle Fusion Middleware offerings. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year's Award honors customers for their cutting-edge solutions driving business innovation and IT modernization using Oracle Fusion Middleware. The program has grown over the past 6 years, receiving a record number of nominations from customers around the globe. The winners were selected by a panel of judges that ranked each nomination across multiple different scoring categories. Congratulations to both Avea and ETS for winning this year's Innovation Award for Identity Management.
    Identity Management Innovation Award 2012 Winner – Avea
    Company: Founded in 2004, Avea is the sole GSM 1800 mobile operator of Turkey and has reached a nationwide customer base of 12.8 million as of the end of 2011.
    Region: Turkey (EMEA)
    Products: Oracle Identity Manager, Oracle Identity Analytics, Oracle Access Management Suite
    Business Drivers:
    - Manage the agility and scale required for GSM operations and enable call center efficiency by enabling agents to change their identity profiles (accounts and entitlements) rapidly based on call load
    - Enhance user productivity and call center efficiency with self-service password resets
    - Enforce compliance and audit reporting
    - Seamless identity management between Avea and parent company Turk Telecom
    Innovation and Results:
    - One of the first Sun2Oracle identity management migrations, designed for high-performance provisioning and trusted reconciliation, built with connectors developed on the ICF architecture that provide custom user interfaces for dynamic and rapid management of roles and entitlements, along with entitlement-level attestation using closed-loop remediation between Oracle Identity Manager and Oracle Identity Analytics
    - Dramatic reduction in identity administration and call center password reset tasks, leading to a 20% reduction in administration costs and a 95% reduction in password-related calls
    - Enhanced user productivity by up to 25% to date
    - Enforced enterprise security and reduced risk
    - Cost-effective compliance management
    - Looking to seamlessly and securely integrate with parent and sister companies' infrastructure
    Identity Management Innovation Award 2012 Winner – Education Testing Service (ETS) (See last year's winners here.)
    Company: ETS is a private nonprofit organization devoted to educational measurement and research, primarily through testing.
    Region: U.S.A (North America)
    Products: Oracle Access Manager, Oracle Identity Federation, Oracle Identity Manager
    Business Drivers: ETS develops and administers more than 50 million achievement and admissions tests each year in more than 180 countries, at more than 9,000 locations worldwide. As the business becomes more globally based, having a robust solution to security and user management issues becomes paramount.
    The organization was looking for:
    - Simplified user experience for over 3,000 company users and a dynamic student and staff population of more than 6 million
    - Infrastructure and administration cost reduction
    - Managing security risk by controlling third-party access to ETS systems
    - Enforced compliance and managed audit reporting
    - Automated on-boarding and decommissioning of user accounts to improve security, reduce administration costs and enhance user productivity
    - Improved user experience with simplified sign-on and user self-service
    Innovation and Results:
    1. Manage risk
    - Centralized system to control user access
    - Provided a secure way of accessing service providers' applications using federated SSO
    - Provides reporting capability for auditing, governance and compliance
    2. Improve efficiency
    - Real-time provisioning to target systems
    - Centralized provisioning system for user management and access controls
    - Enabling user self-service
    3. Reduce cost
    - Re-using common shared services for provisioning, SSO and access by application, reducing development cost and time
    - Reducing infrastructure and maintenance cost by decommissioning legacy/redundant IDM services
    - Reducing time and effort to implement security functionality in business applications ("onboard" instead of new development)
    ETS was able to fold in new and evolving requirements in addition to the initially stated goals, realizing quick ROI and successfully meeting business objectives. Congratulations to the winners once again. We will be sure to bring you more from these Innovation Award winners over the next few months.

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It's an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or educating ourselves on broader topics so that we can solve problems in the future. Now, with dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where would you start? In fact, it seems so overwhelming that I often hear people saying that they can't find the training they need, or that vendor X or Y "doesn't help their users". On questioning these folks, however, I often find that they – and sometimes I – haven't put in the effort to learn what resources we have. That's where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed, you'll be able to spread out the search or create a mental filter for the information you need. But it's not enough just to read a blog or a web page. The creators need real feedback – what doesn't work, and what does. Yes, you're allowed to tell a vendor or writer "This helped me because…" so that you reinforce the positives. To be sure, bring up what doesn't work as well – that's fine. But be specific, and be constructive. You'd be surprised at how much it matters. I know for a fact at Microsoft we listen – there is a real live person that reads your comments. I'm sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think.
    In this blog entry I'd like to call your attention to three resources you have at your disposal, and how you can use them to help. I'll try to bring up things like this from time to time that I find useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to filter whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them.
    Data Driven Design Concepts
    http://msdn.microsoft.com/en-us/library/windowsazure/jj156154
    I'll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you'll realize that whenever you create a solution, you should start at the data end of the application. This resource helps you do that. Even if you don't use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I'd share an e-mail address with my readers so that you can comment on it. You're guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here's that address: [email protected]
    End-to-End Example of a Complete Hybrid Application – with Live Demo
    https://azurestocktrader.cloudapp.net/Default.aspx
    I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work.
    If you've ever wanted to see how a complex, complete, hybrid application that bridges on-premises systems with cloud-based databases, code, functions and more is put together, this is it. It's a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It's running on Windows Azure, the actual production servers we use for everything else.
    Using a Cloud-Based Service
    https://azureconfigweb.cloudapp.net/Default.aspx
    Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you're a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment. So check out these resources. I'll post more from time to time as I run across them. Hopefully they'll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: "Hello Seattle – I'm listening…"

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time?
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".
    Microsoft SQL Server 2008 R2 System Databases
    So what are system databases in MS SQL Server, and why should I know about them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.
    Name          Database File              Log File
    Master        master.mdf                 mastlog.ldf
    Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
    Model         model.mdf                  modellog.ldf
    MSDB          msdbdata.mdf               msdblog.ldf
    Distribution  distmdl.mdf                distmdl.ldf
    TempDB        tempdb.mdf                 templog.ldf
    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database include:
    - User logins
    - Linked servers
    - Configuration information
    - Information on user databases
    Resource Database
    Honestly, I never knew this database even existed until I started to research SQL Server system databases. The reason for this is due largely to the fact that the Resource database is hidden from users. In fact, the database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases. In short, this database contains all the system views and stored procedures that appear in every user database regarding system information. One of the many benefits of storing system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server database; not to mention that maintenance is decreased since only one code base has to be maintained for all of the system views and procedures.
    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an "Audit" table when it is created, then defining the "Audit" table in the model guarantees that the table will be present in every new database created after the model is altered.
    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. The SQL Server Agent uses this database to store job configurations and SQL job schedules along with SQL alerts and operators. In addition, this database also stores all SQL job parameters along with each job's execution history. Finally, this database is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.
    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.
    Tempdb Database
    The Tempdb database, as the name implies, is used to store temporary data and data objects. Examples of this include temp tables and temp stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. This database is also used by SQL Server when it is performing some internal operations. Typically, SQL Server uses this database for large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.
    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
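    To see most of these databases on your own instance, here is a small, illustrative C# (ADO.NET) sketch that lists them by querying sys.databases. The connection string is a placeholder you would replace with your own, and note that the Resource database will not appear in the results because it is hidden.

    using System;
    using System.Data.SqlClient;

    class ListDatabases
    {
        static void Main()
        {
            // Placeholder connection string - point this at your own instance.
            const string connectionString = "Server=localhost;Integrated Security=true;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // Databases with database_id 1 through 4 are always master, tempdb, model and msdb.
                // The hidden Resource database is not listed by sys.databases.
                const string sql =
                    "SELECT name, database_id, " +
                    "CASE WHEN database_id <= 4 THEN 'system' ELSE 'user' END AS db_type " +
                    "FROM sys.databases ORDER BY database_id;";

                using (var command = new SqlCommand(sql, connection))
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0,-25} id={1,-4} {2}",
                            reader.GetString(0), reader.GetInt32(1), reader.GetString(2));
                    }
                }
            }
        }
    }

    The Distribution database, when replication is configured, shows up here like any other database; the "system" label in this sketch simply covers the fixed first four database IDs.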

    Read the article

  • YouTube SEO: Video Optimization

    - by Mike Stiles
    Search engine optimization (SEO) is still regarded as one of the primary tools in the digital marketing kit. However and wherever a potential customer is conducting a search, brands want their content to surface in the top results. Makes sense. But without a regular flow of good, relevant content, your SEO opportunities run shallow. We know from several studies that video is one of the most engaging forms of content, so why not make sure that in addition to being cool, your videos are helping you win the SEO game?

Keywords:
- Decide what search phrases make the most sense for your video. Don’t dare use phrases that have nothing to do with the content. You’ll make people mad.
- Research those keywords to see how competitive they are. Adjust them so there are still lots of people searching for them, but not as many links showing up for them.
- Search your potential keywords and phrases to see what comes up. It’s amazing how many people forget to do that.

Video Title:
- Try to start and/or end with your keyword.
- When you search on YouTube, visual action words tend to come up as suggested searches. So try to use action words.

Video Description:
- Lead with a link to your site (include http://).
- Don’t stuff this with your keyword. It leads to bad writing and it won’t work anyway. This is where you convince people to watch, so write for humans. Use some showmanship.
- At the end, do a call to action (subscribe, see the whole playlist, visit our social channels, etc.).

Video Tags:
- Don’t over-tag. 5-10 tags per video is plenty.
- If you’re compelled to have more than 10, that means you should probably make more videos specifically targeting all those keywords.

Find Linking Pals:
- 45% of videos are discovered on video sites. But 44% are found through links on blogs and sites.
- Write a blog about your video’s content, then link to the video in it.
- A good site for finding places to guest blog is myblogguest.com.
- Once you find good linking partners, they’ll link to your future videos (as long as they’re good and you’re returning the favor).

Tap the Power of Similar Videos:
- Use Video Reply to associate your video with other topic-related videos. That’s when you make a video responding to or referencing a video made by someone else.

Content:
- Again, build up a portfolio of videos, not just one that goes after 30 keywords.
- Create shorter, sequential videos that pull viewers deeper into the content and closer to a desired final action.
- Organize your video topics separately using Playlists. Playlists show up as a whole in search results like individual videos, so optimize playlists the same as you would a video.

Meta Data:
- Too much importance is placed on it. It accounts for only 15% of search success.
- YouTube reads captions or transcripts to determine what a video is about. If you’re not using them, you’re missing out.
- You get the SEO benefit of captions and transcripts whether the viewer has them toggled on or not.

Promotion:
- This accounts for 25% of search success.
- Promote the daylights out of your videos using your social channels and digital assets. Don’t assume it’s going to magically get discovered.
- You can pay to promote your video. This could surface it on the YouTube home page, YouTube search results, YouTube related videos, and across the Google content network.

Community:
- Accounts for 10% of search success.
- Make sure your YouTube home page is a fun place to spend time. Carefully pick your featured video, and make sure your Playlists are featured.
- Participate in discussions so users will see you’re present.
The volume of ratings/comments is as important as the number of views when it comes to where you surface on search.

Video Sitemaps:
- As with a web site, a video sitemap helps Google quickly index your video.
- Google wants to know the title, description, play page URL, the URL of the thumbnail image you want, and the raw video file location.
- Sitemaps are XML files you host or dynamically generate on your site (a rough sketch follows at the end of this post). Once you’ve made your sitemap, sign in and submit it using Google Webmaster Tools.

Just as with the broadcast and cable TV channels, putting a video out there is only step one. You also have to make sure everybody knows it’s there so the largest audience possible can see it. Here’s hoping you get great ratings. @mikestiles
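Following up on the Video Sitemaps point above, here is a rough sketch of how such a file could be generated with the Python standard library. This is an illustration only: the element names (thumbnail_loc, title, description, content_loc) are recalled from Google's video sitemap schema and should be checked against the current documentation, and every URL shown is a made-up placeholder.

# Minimal sketch of a video sitemap generator (standard library only).
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("video", VIDEO_NS)

def video_sitemap(entries):
    """entries: list of dicts with page, title, description, thumb, video_file."""
    urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
    for e in entries:
        url = ET.SubElement(urlset, f"{{{SITEMAP_NS}}}url")
        ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = e["page"]
        vid = ET.SubElement(url, f"{{{VIDEO_NS}}}video")
        ET.SubElement(vid, f"{{{VIDEO_NS}}}thumbnail_loc").text = e["thumb"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}title").text = e["title"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}description").text = e["description"]
        ET.SubElement(vid, f"{{{VIDEO_NS}}}content_loc").text = e["video_file"]
    return ET.tostring(urlset, encoding="unicode")

print(video_sitemap([{
    "page": "http://www.example.com/videos/optimization-tips",
    "title": "Video Optimization Tips",
    "description": "How we tag, title, and promote our videos.",
    "thumb": "http://www.example.com/thumbs/optimization-tips.jpg",
    "video_file": "http://www.example.com/media/optimization-tips.mp4",
}]))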

    Read the article

  • Who could ask for more with LESS CSS? (Part 1 of 3 – Features)

    - by ToStringTheory
    It wasn’t very long ago that I first began to get into CSS precompilers such as SASS (Syntactically Awesome Stylesheets) and LESS (The Dynamic Stylesheet Language), and I have been hooked on the idea ever since. When I finally had a new project come up, I leapt at the opportunity to try out one of these languages.

Introduction
To be honest, I was hesitant at first to add either framework, as I didn’t really know much more than what I had read on their homepages, and I didn’t like the idea of adding too much complexity to a project - I couldn’t guarantee I would be the only person to support it in the future. Thankfully, both of these languages just add things on top of CSS. You don’t HAVE to know LESS or SASS to do anything; you can still write your old-school CSS, and your output will be the same. However, when you want to start doing more advanced things such as variables, mixins, and color functions, the functionality is all there for you to utilize.

From what I had read, SASS has a few more features than LESS, which is why I initially tried to figure out how to incorporate it into an MVC 4 project. However, through my research, I couldn’t find a way to accomplish this without including some bit of the Ruby on Rails framework on the computer running it, and I hated the fact that I had to do that. Besides SASS, there is little chance of me getting into the RoR framework, at least in the next couple of years. So in the end, I settled on using LESS.

Features
So, what can LESS (or SASS) do for you? There are several reasons I have come to love it in the past few weeks.

1 – Constants
Using LESS, you can finally declare a constant and use its value across an entire CSS file. The case most people would be familiar with is colors: declaring one or two color variables that comprise the theme of the site, and not having to retype their specific hex codes each time, but rather a variable name. What’s great about this is that if you end up having to change a color, you only have to change it in one place. An important thing to note is that you aren’t limited to creating constants just for colors, but for strings and measurements as well.

2 – Inheritance
This is a cool feature in my mind for simplicity and organization. Both LESS and SASS allow you to place selectors within other selectors, and when it is compiled, the languages will break the rules out as necessary and keep the inheritance chain you created in the selectors.

Example LESS Code:
#header {
  h1 {
    font-size: 26px;
    font-weight: bold;
  }
  p {
    font-size: 12px;
    a {
      text-decoration: none;
      &:hover {
        border-width: 1px;
      }
    }
  }
}

Example Compiled CSS:
#header h1 {
  font-size: 26px;
  font-weight: bold;
}
#header p {
  font-size: 12px;
}
#header p a {
  text-decoration: none;
}
#header p a:hover {
  border-width: 1px;
}

3 – Mixins
Mixins are where languages like this really shine: the ability to mix in other definitions, or to set up a parametric mixin that takes arguments. There is really a lot of content in this area, so I would suggest looking at http://lesscss.org for more information. One of the things I would suggest if you do begin to use LESS is to also grab the mixins.less file from the Twitter Bootstrap project. This file already has a bunch of predefined mixins for things like border-radius with all of the browser-specific prefixes. This alone is of great use! A small sketch of a parametric mixin follows below.
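As a quick, hypothetical sketch (the .rounded mixin name and its default radius are made up purely for illustration), a parametric mixin and a call to it look something like this:

Example LESS Code:
.rounded(@radius: 5px) {
  -webkit-border-radius: @radius;
  -moz-border-radius: @radius;
  border-radius: @radius;
}
#sidebar .panel {
  .rounded(3px);
}

Example Compiled CSS:
#sidebar .panel {
  -webkit-border-radius: 3px;
  -moz-border-radius: 3px;
  border-radius: 3px;
}

Note that the parametric mixin definition itself does not appear in the compiled output; only the rules that call it do.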
4 – Color Functions
This is the last thing I wanted to point out, as the final post in this series will use these functions in a more drawn-out manner. Both LESS and SASS provide functions for getting information from a color (R, G, B, H, S, L). Using these, it is easy to define a primary color and then darken or lighten it a little for your needs.

Example LESS Code:
@base-color: #111;
@red:        #842210;

#footer {
  color: (@base-color + #003300);
  border-left:  2px;
  border-right: 2px;
  border-color: desaturate(@red, 10%);
}

Example Compiled CSS:
#footer {
  color: #114411;
  border-left:  2px;
  border-right: 2px;
  border-color: #7d2717;
}

I have found that these can be very useful and powerful when constructing a site theme.

Conclusion
I came across LESS and SASS when looking for the best way to implement some type of CSS variable for colors, because I hated having to do a Find and Replace in all of the files using the colors, and in some instances you couldn’t just find/replace because the color values interfered with other colors (a color to replace of #000, yet some colors existed like #0002bc). So in many cases I would end up having to do a Find and manually check each one. In my next post, I am going to cover how I’ve come to set up these items and the structure for them in the project, as well as the conventions that I have come to start using. In the final post in the series, I will cover a neat little side project I built in LESS dealing with colors!

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For actual questions, skip to the lists down below.
I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if application is the correct word for it, but I will proceed with that for now) that one day might need to be everything mentioned in the title. I am bound by nothing. That means that every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you, StackOverflow, to give me directions. I know you all must be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the 12-factor app though; if you have any similar guidelines or the like to suggest, please go ahead. For the sake of keeping your answers as open as possible, I'm not going to provide my experience regarding anything written in this question.

^ Skippers, start here
First off, the weights of the requirements are probably something like this (on a scale of 10):
- Security - 10
- Speed - 5
- Reliability (concurrency) - 7.5
- Scalability - 10
Speed and concurrency are not a top priority, in the sense that the program can be CPU intensive, and therefore slow, and only accept a not-that-high number of concurrent users, but both of these factors must be improvable by scaling the system.

Anyway, here are my questions:

1. How many layers should the application have, so it would be future-proof and could best fulfill the aforementioned requirements? For now, what I have in mind is the most common version:
- A completely separated front end, that might be a web page or an MMI application or even both.
- Some middleware handling communication between the front and the back end. This is probably a server that communicates with the front end via HTTP. How the communication with the back end should be handled probably depends on the back end.
- The back end. Something that handles data through resources like a DB and does various computations with the data. This, as the highest priority part of the software, must be easily spread to multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back end processes takes this request, chops it up into smaller parts and puts these parts back onto the same queue as the initial request, after which the parts are handled by other back end processes. Something map-reduce-y, so to say. (A small sketch of this queue idea appears at the end of this question.)

2. What frameworks, languages and so on should these layers use? The technologies used here are not that important at this moment; you can ignore this part for now.

3. I've been pointed to node.js for the middleware part. Do you guys know any better alternatives, or have any reasons why I should (not) use node.js for this particular job? I actually have no good idea what to use for this job; there are too many options out there, so please direct me. This part (and the 2nd one also, I think) depends a lot on the OS, so suggest any OSs alongside the technologies/frameworks. Initially, all computers (or 1 for starters) hosting the back end are going to be virtual machines.

Please do give suggestions for any part of the question that you feel you have comprehensive knowledge and/or experience of. And also, point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer for how to achieve my goals, because there certainly isn't one, for I haven't provided you with all the required information; I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something that I have to get done quickly, to sell and let it be re-written by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn by doing something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Anyway, thanks in advance to anyone who takes the time to help me out here.

PS. When I do come up with a good architecture/design for this project, I will certainly make it an open project and keep you guys up to date with its development - as in, what you could have told me earlier, etc. For obvious reasons the very same question got closed on SO, but could you guys still help me?
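To make the queue idea in question 1 a bit more concrete, here is a minimal, in-process sketch using only the Python standard library. The task names and the "split on whitespace" rule are made up purely for illustration; a real deployment would use an external broker (e.g. RabbitMQ or Redis) so that back-end workers can run on separate machines.

# Sketch of the queue idea: the middleware drops a request onto a shared
# queue, and back-end workers either split it into smaller parts (which go
# back onto the same queue) or process a part directly.
import queue
import threading

work = queue.Queue()

def handle(task):
    kind, payload = task
    if kind == "request":
        # Chop the request into smaller parts and re-enqueue them.
        for part in payload.split():
            work.put(("part", part))
    else:
        print(f"worker {threading.current_thread().name} processed {payload!r}")

def worker():
    while True:
        task = work.get()
        if task is None:          # poison pill to shut the worker down
            work.task_done()
            break
        handle(task)
        work.task_done()

threads = [threading.Thread(target=worker, name=f"w{i}") for i in range(3)]
for t in threads:
    t.start()

# The "middleware" enqueues an incoming request.
work.put(("request", "resize-image compute-stats update-index"))

work.join()                       # wait until the request and all parts are done
for _ in threads:
    work.put(None)
for t in threads:
    t.join()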

    Read the article

  • Slow login to load-balanced Terminal Server 2008 behind Gateway Server

    - by Frans
    I have a small load-balanced (using Session Broker) Terminal Server 2008 farm behind a Gateway Server which is accessed from the Internet. The problem I have is that there is a delay of 20-30 seconds if the session broker switches the user to another server during login. I think this is related to the fact that I am forcing the security layer to be RDP rather than SSL.

The background
The Gateway server has a publicly routable IP address and DNS name so it can be accessed from the Internet, and all users come in via this route (the system is used to provide access to hosted applications to external customers). The actual terminal servers only have internal IP addresses. This works really well, except that with a Vista or Windows 7 client, the Remote Desktop client will negotiate with the server to use SSL for the security layer. This then exposes the auto-generated certificate that TS1 or TS2 has - but since they are internal, auto-generated certificates, the client will get a stern warning that the certificate is not valid. I can't give the servers a properly authorised certificate as the servers do not have a publicly routable IP address or DNS name. Instead, I am using Group Policy to force the connections to be over RDP instead of SSL:
\Computer Configuration\Policies\Administrative Templates\Windows Components\Terminal Services\Terminal Server\Security\Require use of specific security layer for remote (RDP) connections
The Windows 7 user now gets a much less stern warning that "the server's identity cannot be confirmed", which I can live with. I don't have enough control over the end-users' machines to ask them to install a new root certificate either.

TS1 and TS2 are also load-balanced using the Session Broker, which is installed on the Gateway Server. I am using round-robin DNS, so the user's initial connection will go via Gateway1 to either TS1 or TS2. TS1/TS2 will then talk to the session broker and may pass the user to the other server. I.e. the user may get connected to TS2, but after talking to the session broker the user may be passed to TS1, which is where they will run their session. When this switching of servers happens, in my setup, the screen sits with the word "Welcome" for 20-30 seconds, after which it flickers, Welcome is shown again, and then it flashes through the normal login screens (i.e. "wait for user profile manager" etc). Having done some research, I think what is happening is that the user is being fully logged on to TS2 (while "Welcome" is shown) before being passed to TS1, where they are then logged in again. It is interesting that normally when you see the "Welcome" word, the little circle to the left rotates. However, it does not rotate during this delay - the screen just looks frozen. This blog post leads me to think that this is because CredSSP is not being used, probably because I am disallowing SSL and forcing RDP.

What I have tried
- I enabled SSL again, which removes the "Welcome" delay. However, it seems to introduce a new delay much earlier in the process; specifically, when the RDP client is saying "initialising connection" - this is now much slower. Quite apart from the fact that my certificate problem precludes me using that solution without considerable difficulty.
- I tried disabling the load balancing (just removing the servers from the session broker farm) and the connections do not have any delay. The problem is also intermittent in the sense that it only happens when the user gets bumped from one server to another. I tested this by trying to connect directly to TS1 (via the Gateway, of course) and then checking which server I actually got connected to. Just to be sure, I also by-passed the round-robin DNS to see if it had any impact, and it doesn't. The setup is essentially in line with MS recommendations here: TS Session Broker Load Balancing Step-by-Step Guide.
- I tried changing to a dedicated redirector. Basically, rather than using round-robin DNS, I pointed my DNS to the Gateway server and configured it to be a dedicated redirector (disallow logons, add it to the farm). Same problem, alas.

Any ideas or suggestions gratefully received.

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS: 'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba

As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found solutions that have been suggested for various Toshiba Satellite models afflicted by this glitch:

1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there have been reports that this solution is prohibitively expensive, labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn’t find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the costs of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash).

2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55 (as well as many other models of similar age). -pwcrack

3. Disconnect the laptop battery for an extended period of time. Doesn't work; the laptop sat in a closet for several years without the battery connected and I forgot about the whole thing for a while. The poor thing.

4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks. Various sources for various Satellite models:
- Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
- Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
- Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
- Satellite A215: "Short the B500 solder pads on the system board." -fixya

Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper.
Here are pictures (click for full resolution):

Where is the jumper (or solder pads) to short circuit and wipe the CMOS on this board?

Possibly related questions: Remove Toshiba laptop BIOS password? Password Problem Toshiba Satellite.

    Read the article

  • Trying to use Digest Authentication for Folder Protection

    - by Jon Hazlett
    StackOverflow users suggested I try my question here. I'm using Server 2008 EE and IIS 7. I've got a site that I've migrated over from XP Pro using IIS 5. On the old system, I was using IIS Password to use simple .htaccess files to control a couple of folders that I didn't want to be publicly viewable. Now that I'm running a full-blown DC with a more powerful version of IIS, I decided it'd be a good idea to start using something slightly more sophisticated. After doing my research and trying to keep things as cheap as possible with a touch of extra security, I decided that Digest Authentication would be the best way to go.

My issue is this: with Anonymous access disabled and Digest enabled, I am never prompted for credentials (a small test script is sketched below).
- When on the server, viewing domain[dot]com/example simply shows my 401.htm page without prompting me for credentials.
- When on a different network/computer, viewing domain[dot]com/example again shows my 401.htm without prompting for credentials.
At the site level I only have Anonymous enabled. Every subfolder, unless I want it protected, has just Anonymous enabled. Only the folders I want protected have Anonymous disabled and Digest enabled.

I have tried editing the bindings to see if that would spark any kind of change... www.domain.com, domain.com, and localhost have all been tried. There was never a change in behavior with any permutation (aside from the page not being found when I un-bound localhost from the site). I might have screwed up when I deleted the default site from IIS. I didn't think I'd actually need it for anything, but some of what I have read online is telling me otherwise now.

As for the Digest settings, I have the realm pointed to local.domain.com, which is the name assigned to my AD domain. I'm guessing that's right, but I honestly have no clue what a realm actually is. Would it matter that I have an A record for local.domain.com pointing to my IP address? I had problems initially with an absolute link for 401.htm pages, but have since resolved that. Instead of D:\HTTP\401.htm I've used /401.htm and all is well. I used to get error 500's because it couldn't find the custom 401.htm file, but now it loads just fine.

As for some data, I was getting entries like this in the access logs:
2009-07-10 17:34:12 10.0.0.10 GET /example/ - 80 - [workip] Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+1.1.4322;+.NET+CLR+2.0.50727;+InfoPath.2) 401 2 5 132
But after correcting my 401.htm links I now get logs like this:
2009-07-10 18:56:25 10.0.0.10 GET /example - 80 - [workip] Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US;+rv:1.9.0.11)+Gecko/2009060215+Firefox/3.0.11 200 0 0 146
I don't know if that means anything or not. I still don't get any credential challenges, regardless of where I try to sign in from (my workstation, my server, even my cellphone). The only thing that seems to work is viewing localhost, and I don't know what could be preventing authentication from finding its way out of the server.

Thanks for any help!
Jon
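One way to take the browser out of the equation is to check the challenge directly. This is a minimal sketch, assuming Python with the third-party requests package; the URL and credentials are placeholders standing in for the real site.

# Check whether the server actually issues a Digest challenge, independent
# of any browser caching. URL and credentials are hypothetical placeholders.
import requests
from requests.auth import HTTPDigestAuth

url = "http://www.domain.com/example/"

# Anonymous request: a Digest-protected folder should answer 401 and
# advertise Digest in the WWW-Authenticate header.
r = requests.get(url, allow_redirects=False)
print(r.status_code, r.headers.get("WWW-Authenticate"))

# Same request with credentials: a 200 here means Digest auth itself works
# and the problem is on the browser/client side.
r = requests.get(url, auth=HTTPDigestAuth("someuser", "somepass"), allow_redirects=False)
print(r.status_code)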

    Read the article

  • httpd.conf configuration - for internal/external access

    - by tom smith
    Hey. After a lot of trial/error/research, I've decided to post here in the hopes that I can get clarification on what I've screwed up. I've got a situation where I have multiple servers behind a router/firewall. I want to be able to access the sites I have from an internal and an external URL/address and get the same site. I have to use port forwarding on the router, so I need to use a reverse proxy to redirect the user to the appropriate server running the Apache/web app.

My setup - the external URLs:
joomla.gotdns.com
forge.gotdns.com
Both of these point to my router's external IP address (67.168.2.2) (not really). The router forwards port 80 to my server lserver6 (192.168.1.56).
lserver6 - 192.168.1.56 - joomla app
lserver9 - 192.168.1.59 - forge app

I want the httpd process (httpd.conf) configured on lserver6 to allow external users accessing the system (foo.gotdns.com) to reach the joomla app on lserver6, and the same for the forge app running on lserver9. At the same time, I would also like to be able to access the apps from the internal servers, so I'd need to somehow configure the vhost setup/ProxyPassReverse setup to handle the internal access as well. I've tried setting up multiple vhosts with no luck. I've looked at the different examples online, so there must be something subtle that I'm missing. The section of my httpd.conf file that deals with the vhosts is below (a small test script for checking which vhost answers follows after it); if there's something else that's needed, let me know and I can post it as well.

Thanks,
-tom

##joomla - file /etc/httpd/conf.d/joomla.conf
Alias /joomla /var/www/html/joomla
<Directory /var/www/html/joomla>
</Directory>

# Use name-based virtual hosting.
#NameVirtualHost *:80
# NOTE: NameVirtualHost cannot be used without a port specifier
# (e.g. :80) if mod_ssl is being used, due to the nature of the
# SSL protocol.
#
# VirtualHost example:
# Almost any Apache directive may go into a VirtualHost container.
# The first VirtualHost section is used for requests without a known
# server name.
#<VirtualHost *:80>
#    ServerAdmin [email protected]
#    DocumentRoot /www/docs/dummy-host.example.com
#    ServerName dummy-host.example.com
#    ErrorLog logs/dummy-host.example.com-error_log
#    CustomLog logs/dummy-host.example.com-access_log common
#</VirtualHost>

NameVirtualHost 192.168.1.56:80

<VirtualHost 192.168.1.56:80>
    #ServerAdmin [email protected]
    #DocumentRoot /var/www/html
    #ServerName lserver6.tmesa.com
    #ServerName fforge.tmesa.com
    ServerName fforge.gotdns.com:80
    #ErrorLog logs/dummy-host.example.com-error_log
    #CustomLog logs/dummy-host.example.com-access_log common
    #ProxyRequests Off
    ProxyPass / http://192.168.1.81:80/
    ProxyPassReverse / http://192.168.1.81:80/
</VirtualHost>

<VirtualHost 192.168.1.56:80>
    #ServerAdmin [email protected]
    DocumentRoot /var/www/html/joomla
    #ServerName lserver6.tmesa.com
    #ServerName fforge.tmesa.com
    ServerName 192.168.1.56:80
    #ErrorLog logs/dummy-host.example.com-error_log
    #CustomLog logs/dummy-host.example.com-access_log common
    #ProxyRequests Off
</VirtualHost>
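As referenced above, a quick way to see which vhost actually answers for a given Host header, using only the Python standard library. The IP and hostnames are taken from the setup described and may need adjusting; this is a diagnostic sketch, not a fix.

# Probe the Apache server with different Host headers from inside the LAN.
import http.client

SERVER_IP = "192.168.1.56"

for host in ("fforge.gotdns.com", "joomla.gotdns.com", SERVER_IP):
    conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=5)
    conn.request("GET", "/", headers={"Host": host})
    resp = conn.getresponse()
    # The status line (and headers such as Server/Location) usually make it
    # obvious whether the request was proxied or served locally.
    print(f"{host:20} -> {resp.status} {resp.reason}")
    conn.close()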

    Read the article

  • Can't log to a file from tomcat6 with log4j

    - by Ivan Nakov
    I have one stupid problem which has been killing me for hours. I'm trying to configure logging for my project. I started with a simple Spring MVC project generated by STS, then added an org.apache.log4j.RollingFileAppender to the existing log4j.xml file:

<?xml version="1.0" encoding="UTF-8"?>

<!-- Appenders -->
<appender name="console" class="org.apache.log4j.ConsoleAppender">
    <param name="Target" value="System.out" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%-5p: %c - %m%n" />
    </layout>
</appender>
<appender name="FilleAppender" class="org.apache.log4j.RollingFileAppender">
    <param name="maxFileSize" value="100KB" />
    <param name="maxBackupIndex" value="2" />
    <param name="File" value="/home/ivan/Desktop/app.log" />
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d{ABSOLUTE} %5p %c{1}: %m%n " />
    </layout>
</appender>

<!-- Application Loggers -->
<logger name="org.elsys.logger">
    <level value="debug" />
</logger>

<!-- 3rdparty Loggers -->
<logger name="org.springframework.core">
    <level value="info" />
</logger>
<logger name="org.springframework.beans">
    <level value="info" />
</logger>
<logger name="org.springframework.context">
    <level value="info" />
</logger>
<logger name="org.springframework.web">
    <level value="info" />
</logger>

<!-- Root Logger -->
<root>
    <priority value="debug" />
    <appender-ref ref="FilleAppender" />
</root>

When I deploy the project to the tomcat6 server and open the URL, the logger doesn't generate a log file. I'm trying to log from this controller:

@Controller
public class HomeController {

    private static final Logger logger = LoggerFactory.getLogger(HomeController.class);

    /**
     * Simply selects the home view to render by returning its name.
     */
    @RequestMapping(value = "/", method = RequestMethod.GET)
    public String home(Locale locale, Model model) {
        logger.info("Welcome home! the client locale is " + locale.toString());
        Date date = new Date();
        DateFormat dateFormat = DateFormat.getDateTimeInstance(DateFormat.LONG, DateFormat.LONG, locale);
        String formattedDate = dateFormat.format(date);
        logger.debug("send view");
        model.addAttribute("serverTime", formattedDate);
        return "home";
    }
}

When I log from this simple Main class, it works correctly:

public class Main {
    public static void main(String[] args) {
        Logger log = LoggerFactory.getLogger(Main.class);
        log.debug("Test");
    }
}

I'm using tomcat6 and Ubuntu 11.10. I did some research on the net and found various options to fix this problem, but they didn't help me. Please, if someone has ideas on how to fix it, help me.

    Read the article

  • Yet again: "This device can perform faster" (Samsung Galaxy Tab 2)

    - by Mike C
    I've been doing a lot of research with no reasonable solution. Please excuse the length of my post. When I plug my Galaxy Tab 2 (7" / Wi-Fi only / Android ICS) into my Windows 7 64-bit machine, I (almost always) get this warning popup that "This device can perform faster." And in fact, transfers onto the Tab in this mode are slow. The two times I've been able to get a high-speed connection, the transfer has occurred at the expected speed. I just don't know what to do to get that high-speed transfer. (The first time I did, it was the first time I connected the Tab; the second time I did, I was fiddling around and unplugging/plugging in again.)

That popup is telling me that the device is USB2, but that it thinks I've connected to a USB1 port. In fact, every USB port (there are ten) on this system is USB2. It's an ASUS M3A78-EMH mobo from late 2008. I'm not sure what the chipset is; the CPU is an AMD Athlon 4850e, but I've seen this message reported for non-AMD systems. (Every mobo reference I've seen in reports on this has been for Asus, but of course most reporters aren't reporting that info at all.) The Windows 7 installation is just a couple weeks old (I had a disk crash) but I saw the same warning on the WinXP/64 that was installed previously.

In Device Manager, there are two "Standard Enhanced PCI to USB Host Controller" nodes which are the actual high-speed controllers. There are also five "Standard OpenHCD USB Host Controller" nodes, which I have determined are virtual USB1 controllers embedded in the "Enhanced" controllers. (In Device Manager, I'm using View|Devices by Connection.) My high-speed thumb drives, external disks, and iPod all show up as subnodes of the "Enhanced" controllers; the keyboard, mouse, and USB speakers show up under the "OpenHCD" ones -- and this is true no matter which ports these devices are plugged into. The Tab shows up under an OpenHCD node, unsurprisingly. It appears as a threesome: a top-level "Mobile USB Composite device" with two subs: "Galaxy Tab 2" and "Mobile USB Modem." (I have no idea what the modem device implies or how I might use it, but I don't care about it either: I just want the Tab to reliably connect at high speed.)

On the Tab, the USB support has a switch between PTP and MTP, the latter being the default, and the preferred mode for me (as I'm usually hooking it up for music synch). I have tried, however, connecting it as PTP, and it still connects as USB 1. (As PTP, only the "Galaxy Tab 2" device appears -- no Composite, no Modem.) If it's plugged in as MTP and I change the setting to PTP, Windows unloads and reloads the device, and voila: the Tab appears under an "Enhanced" node, but eventually re-loads again to show an exclamation icon on the device; Properties then shows "This device cannot start." Same response if I plug it in as PTP and then change to MTP; in this case, only the Tab itself shows the exclamation, not the other two devices.

One thing I have not tried, and really would prefer to avoid, is installing the "beta" chipset driver available on the Asus website, which is dated 2009. Windows tells me it has the most up-to-date drivers for the Tab, and for the chipset, and I'm inclined to believe that. I suspect the problem is with the Samsung drivers, or possibly the hardware. One suggestion I saw elsewhere which might, possibly, pertain is to ensure the USB cable is properly shielded; however, the Tab has one of those misbegotten 30-pin, not-quite-an-iPod connectors; I don't know if I could find a 3rd party one.
It seems unlikely that this cable is improperly shielded, though. (Is there a way to test that?) So, my question is: does anyone know how to get this working as one might reasonably expect it to?

    Read the article

  • BluRay audio/video stuttering with PowerDVD 11, WinDVD 11 Pro, etc? Xonar/Auzen HD audio option?

    - by jrista
    I recently upgraded my Windows 7 MediaCenter HTPC due to a motherboard failure (really old motherboard and CPU, it was on its last legs). I chose to upgrade to an i5 system with everything built into the motherboard. I did my due diligence, researched, and found some hardware that was within my budget. I ended up with:
- Core i5 2500K (3.3Ghz)
- Corsair XMS3 2x2Gb DDR3 (4Gb)
- ASUS P8H 61-M LE/CSM
- MicroCenter 64Gb SSD
- (Previous BluRay player, forget the brand)
The system is pretty awesome, and plays everything I have perfectly. I almost went with an Atom solution; however, there have been numerous notes that they do not play Netflix Instant Watch well... and I am a heavy Netflix IW user. High-definition BluRay rips work well, although they usually contain lower audio quality than the BluRays they were ripped from.

The real problem I am encountering is playing back BluRay video from discs. For some reason, I am encountering rather terrible stuttering problems with both the audio and video. The stuttering is synchronous in both, and occurs at seemingly random intervals. I've used PowerDVD 9, the PowerDVD 11 trial, and the WinDVD 11 Pro trial. All three have stuttering problems, although PowerDVD 11 seems to have the least. Watching system resource usage, CPU load is never above 20%, and memory usage tends to be a constant 1/3rd of the total available system memory. When playback is fine, it's superb... the video is crystal clear. The audio quality is OK, certainly not what I would expect from a BluRay disc. I did some research, and it seems that playing BluRay from a PC causes a downsampling of the audio? I am curious if the audio is my primary problem here, the cause of the stuttering I am encountering? When stuttering occurs, the audio gets REALLY bad, while the video just pauses momentarily every second until, for whatever reason, everything picks up and runs fine (usually after a few seconds to a couple minutes).

The audio chipset is a Realtek HD ALC887 8-channel, supposedly designed to support BluRay playback. Has anyone encountered any issues like this playing back BluRay discs on a PC (namely with PowerDVD... WinDVD was FAR worse, and seemed to have real trouble even reading the discs, and I have no interest in fiddling with it further)? Is there any reason to suspect the video decoding as the problem? (Given how bad the audio gets during a stutter, and how clean the video remains, I am inclined to think the issue boils down to audio.) Is it even remotely possible that the motherboard, CPU, or RAM are causing the stuttering (all three are pretty blazing fast... faster than the hardware that I replaced, which seemed to play BluRay fine with PowerDVD 9)?

I've read a bit about the Asus Xonar HDAV 1.3 and the Auzen X-Fi HomeTheater HD home theater hi-fi audio cards. It seems they are the only way to get true full-quality, uncompressed BluRay audio bitstreaming over HDMI on a PC. None of the usual suspects seem to have these cards in stock, however. Are these cards worth getting? Are they even still available, or have they been discontinued (if so, that would indeed be sad... they sound simply fantastic)?

    Read the article

  • Cheapest way to connect 20-24 Sata II HDDs in a budget storage server?

    - by Joe Hopfgartner
    I need to assemble a high-density storage server for as cheap as possible. It's been a while for me, and the last systems I integrated didn't even have SATA yet... During my research I of course stumbled upon the Nexsan SATA Beast and the BackBlaze storage pods, as well as some ridiculously overpriced HP ProLiant or Dell storage solutions. Finally I chose Norco cases as the way to go. My eye is set on the RPC-4020, which is a 4U 19" rackmount case with 20 hot-swap 3.5" SATA/SAS HDD trays (backplanes included) and room for two 2.5" OS drives as well as a slimline CD-ROM. The backplanes connect with a single SATA port for each drive, so there are 20 internal SATA ports to be connected. They also have redundant power ports, which I think is quite nice. The cheapest price I have found is $290 + $40 shipping. In Europe the cheapest unfortunately is 370 EUR (500 USD) + 40 EUR shipping... A nice alternative would be the RPC-4224, which has SFF-8087 Mini SAS connectors that bundle 4 SATA trays each. But it doesn't seem to be available in Europe (where I am) anywhere.

So here comes my problem: what mainboard/controller to choose to connect them for as cheap as possible while still having nice data rates? I have to say that the server is intended as a storage server with 1 Gbps connectivity, and the data transfer will be distributed very evenly across all drives. I also don't require any RAID functionality - this is all done at the application level; I just need JBOD. So for example, if I go for the RPC-4020 model, I need to connect 20 storage + 1 OS + 1 CD-ROM SATA ports. I searched a bit and stumbled across this very low-priced controller: http://www.intel.com/products/server/raid-controllers/SASWT4I/SASWT4I-overview.htm They sell it for 115 EUR here, and the specs say it can control up to 122 hard discs and has 4 Mini SAS connectors. So I would use 4 Mini SAS 36-pin to 4x SATA 7-pin cables to connect 4 SATA drives to each port, and choose a mainboard that has 6 SATA ports on board (for example this one) and hurray, I can connect my 22 SATA devices for as low as about ~220 EUR (CPU, RAM, PSU, case not counted).

Question: WOULD THAT WORK? And if not, why?

2nd Question: If I go for the 4220 or 4224 model, I have internal Mini SAS connectors. Am I right in assuming that the backplane then acts as a "SAS expander"? And can I just plug these SAS connectors into any SAS port I can find on my controller/mainboard, or are there certain requirements? I know that SATA port multipliers only work with controllers that are ready for that. But isn't this expansion already implemented in the SAS standard? I am sorry that this is a very broad question, but I really spent the last week reading up and it seems to be not so clear! Especially all the controller hardware specifications!

3rd Question: A lot of hardware specs feature "internal channels" and "internal connectors". The connectors are the physical number of places where I can plug a cable in. I got that. But are the "internal channels" always the maximum number of physical drives that can be used in the end? Or can I enhance this further with expanders/fanouts?

4th and last question: What do you think about the setup so far? Do you know any good alternatives? Maybe I am completely going the wrong way and some DAS would be way better? Are there any comparable chassis available in Europe? Please feel free to say whatever you think is relevant to the subject!

    Read the article

  • Alternative Methods of Sharing Folders in Windows?

    - by Blaenk
    Hey guys. I'm running Windows 7 and as of now I simply share folders as one usually does in Windows. I then have a MacBook with Leopard (now Snow Leopard) which I use to connect to my computer to mount the shares by going to Finder, then CMD + K and typing smb://BlaenkPC (the name of my PC) into the address box. This consequently connects to my computer and mounts all of the shares.

The problem is that sometimes, if for example I close my MacBook (which makes it go to sleep), or sometimes even without doing that, the connection somehow drops. Sometimes I close the MacBook and upon re-opening it everything still works; it's random. It still shows the computer as being connected, but it just shows 'loading' indefinitely. If I hit 'eject' with the intention of re-connecting to the computer, it disappears from the sidebar (the computer icon) in Finder, but I cannot re-connect. Activity Monitor (or ps aux, whichever) shows hung instances of umount - one for each share that was mounted. I cannot kill these processes with kill or killall (yes, even with sudo, and sending signal -9). This has happened to me before, and here is another person who has experienced this.

My question boils down to this: is there an alternative method of sharing folders in Windows, that my Mac can read/understand, that is possibly more reliable and preferably just as fast? I usually use the mounted shares to watch television episodes or movies off my computer (in other words, I open them in VLC and they automatically stream from my computer). As far as I can tell, this is a problem with the Samba protocol. I have heard of NFS, but I am not sure if I would have to re-format my drives, or what. I don't mind running a service or daemon to allow the sharing of the folders; I just want it to be done, and hopefully in a better way than typical Windows shares through Samba.

Usually when I encounter this problem, which is often (read: every day), I have no other option but to restart the MacBook. As I stated in the first question I linked to, shutting down and restarting don't work; I have to manually force the shutdown by holding the power button. I have not modified my installation of Mac OS X in any hackish way, so I doubt it's something with the operating system, but worst comes to worst, I might end up reformatting and doing a clean install to see if that fixes anything, as I am at a complete loss as to what may be causing the problem, and no one else seems to have any idea or care, despite there being quite a few people suffering from this problem, as my research has shown.

Any pieces of information that can help are extremely appreciated. You don't have to answer every question on here, but maybe even some insight as to why it might not be possible to kill those hung umount instances, for example, or why I may not be able to reconnect using Samba (is it something regarding the way the protocol works?). One thing to note is that I have another computer in the home network that doesn't seem to have this problem. However, it is also running Windows 7 (note though that I am not using the homegroup feature, but the typical Windows sharing feature). My only deduction is that the problem is being caused by the way the Mac (or the Samba implementation, whichever) is handling things. Perhaps it is a limitation.

    Read the article

  • Transparent proxy which preserves client mac address

    - by A G
    I have a customer that wants to intercept SSL traffic as it leaves their network. My proposed solution is to set up a proxy that is transparent at both layer 2 and layer 3, so it can simply be dropped into their network without any change in config required. The proxy has two NICs, one connected to the server, the other to the client. The client, proxy and gateway are under control of the customer; the server is not. For example:
client --- proxy --- gateway -|- server
I have my proxy program configured with the IP_TRANSPARENT socket option so it can respond to connections destined for a remote IP (a minimal sketch of such a listener appears at the end of this question). I am using the following setup:
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128 --tproxy-mark 1/1
iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1
ip rule add fwmark 1/1 table 1
ip route add local 0.0.0.0/0 dev lo table 1
The client in question is on its own subnet and has been configured so that the proxy is the default gateway. The result is:
1. Client sends a frame to the proxy; source IP is client, source MAC is client, destination IP is server, destination MAC is proxy.
2. Proxy forwards this frame to the gateway; source IP is proxy, source MAC is proxy, destination IP is server, destination MAC is gateway.
3. Gateway forwards this to the server and gets a response back.
4. Gateway sends the reply back to the proxy; source IP is server, source MAC is gateway, destination IP is proxy, destination MAC is proxy.
5. Proxy forwards this reply to the client; source IP is server, source MAC is proxy, destination IP is client, destination MAC is client.
The TPROXY and iptables configuration lets the proxy send packets with a non-local IP address. Is there a way to make something transparent at the MAC address level? That is, put the client on the same subnet as the gateway, so the gateway sees the source IP and MAC as those of the client, even though the frames originated from the proxy. Could this be done by configuring the proxy as a bridge and then using ebtables to escalate the traffic to be handled by iptables? When I use ebtables to push something up to iptables, it appears my proxy program doesn't respond to the packets, as they are destined for the gateway's MAC address, not the proxy's. What are some other potential avenues I could investigate?

EDIT: When the client and gateway are on different subnets (and the client has set the proxy as the gateway), it works as described in steps 1 to 5. But I want to know if it is possible to have the client and gateway on the same subnet and have the proxy fully transparent (i.e. the client is not aware of the proxy). Thanks!

EDIT 2: I can configure the proxy as a bridge using brctl, but cannot find a way to direct this traffic to my proxy program - asked here: Possible for linux bridge to intercept traffic?. Currently, with the flow described in steps 1 to 5, it operates at layer 3; it is transparent on the client side (the client thinks it is talking to the server's IP), but not on the gateway side (the gateway is talking to the proxy's IP). What I want to find out is: is it possible to make this operate at layer 2, so it is fully transparent? What are the available options I should research? Thanks
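For reference, here is a minimal Python sketch of the kind of IP_TRANSPARENT listener those iptables rules point at (matching --on-port 3128). It only shows how the original destination is recovered on the proxy side; it does not address the layer-2/MAC question, and it is not a full proxy.

# Bare-bones TPROXY-style listener. Must run as root (or with CAP_NET_ADMIN)
# for IP_TRANSPARENT to be allowed.
import socket

# Not all Python builds expose the constant; 19 is its value in <linux/in.h>.
IP_TRANSPARENT = getattr(socket, "IP_TRANSPARENT", 19)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1)
listener.bind(("0.0.0.0", 3128))
listener.listen(128)

while True:
    conn, client = listener.accept()
    # Under TPROXY the accepted socket's local address is the *original*
    # destination the client was trying to reach, not an address owned by
    # the proxy box.
    orig_dst = conn.getsockname()
    print(f"client {client} -> original destination {orig_dst}")
    conn.close()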

    Read the article

< Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >