Search Results

Search found 775 results on 31 pages for 'discussions'.

Page 23/31 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Get Fanatical About Your Followers

    - by Mike Stiles
    In the fourth of our series of discussions with Aberdeen’s Trip Kucera, we touch on what fans of your brand have come to expect in exchange for their fandom. Spotlight: Around the Oracle Social office, we live for football. So when we think of a true “fan” of a brand, something on the level of a football fan is what comes to mind. But are brands trying to invest in fans on that same level? Trip: Yeah, if you’re a football fan, this is definitely your time of year. And if you’ve been to any NFL games recently, especially if you hadn’t been for a few years previously, you may have noticed that from the cup holders to in-stadium Wi-Fi, there’s an increasing emphasis being placed on “fan-focused” accommodations. That’s what they’re known as in the stadium business. Spotlight: How are brands doing in that fan-focused arena? Trip: Remember fan is short for “fanatical.” Brands can definitely learn from the way teams have become fanatical about their fans, or in the social media world, their followers. Many companies consider a segment of their addressable social audience as true fans; I’ve even heard the term “super-fans” used. So just as fans know and can tell you nearly everything about their favorite team, our research shows that there’s a lot of value in getting to know your social audience—your followers—at a deeper level. Spotlight: So did your research show there’s a lot to be gained by making fandom a two-way street? Trip: Aberdeen’s new social relationship management research suggests that companies should develop capabilities to better analyze their social audience at a more granular level. Countless “ripped from the headlines” examples, from “United Breaks Guitars” to the most recent British Airways social fiasco we talked about a few weeks ago, show how social can magnify the impact of a single customer voice. Spotlight: So how do the companies who are executing social most successfully do that? Trip: Leaders, which are the top-performing companies in Aberdeen’s study, are showing the value of identifying and categorizing your social audience. You should certainly treat every customer as if they have 10,000 followers, because they just might, but you can also proactively engage with high-value customers and high-value influencers. Getting back to the football analogy, it’s like how teams strive to give every guest a great experience, but they really roll out the red carpet for those season ticket and luxury box holders. Spotlight: I’m not allowed in luxury boxes, so you’ll have to tell me what that’s like. But what is the brand equivalent of rolling out the red carpet? Trip: Leaders are nearly three times more likely than Followers to have a process in place that identifies key social influencers for engagement, and more than twice as likely to identify customer advocates for social outreach. This is the kind of knowledge that gives companies the ability to better target social messaging and promotions like we talked about in our last discussion, as well as a basis for understanding how to measure the impact of their social media programs. I’ll give you an example. I hosted an event at one of my favorite restaurants recently. I had mentioned them in a Tweet several weeks before the event, and on the day of the event, they Tweeted out that they were looking forward to seeing me that evening. It’s a small thing, but it had a big impact and I’d certainly go back as a result.
    Spotlight: So what specifically can brands use and look at to determine where their potential super-fans are? Trip: Social graph analysis, which looks at both the demographic/psychographic trends as well as the behavioral connections, can surface important brand value. Aberdeen’s PR and Brand Management research indicated that top-performing companies are more than three times more likely than Followers to both determine demographic trends through social listening (44% vs. 13%), and to identify meaningful customer segments through social (44% vs. 12%). This kind of brand-level insight can complement and enrich traditional market research. But perhaps even more importantly, it can serve as an early warning system for customer experience failures. @mikestiles  Photo: freedigitalphotos.net

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies. 1. Provide a modern user experience As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast. 2. Improve productivity and collaboration As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for its employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content. 3. Leverage business processes and applications Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration. 4. Enhance customer and supplier relationships Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to: Integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own Enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content Enable discussions, wikis, and blogs for more efficient collaboration 5. 
Increase business visibility and responsiveness By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond much more agilely. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department. Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology. We see BPM as a critical management discipline that, when executed against clear, cross-organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them, and we are already seeing BPM become an integral and key part of this proposition. Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot. Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by increased focus on customer-centricity and regulatory compliance. Other sectors, such as the public sector, utilities, telco, retail and manufacturing, are now not only catching up, but are starting to use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show, however, that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership, and internal politics.
    The objective of our investment in this research project was to shed some light on these elements with a view to assisting organizations to create strategies that avoid or at least mitigate some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Capgemini, bpm report, bpm market, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Separation of project responsibilities in new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so help make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on and some might filter off towards the end). Our team is going to be small, so this will help out a bit I believe. The team will essentially consist of: 3 x developers (1 slightly more experienced, who will be the lead); 1 x project manager / product owner / tester; an external company responsible for doing our design work. General project/development decisions so far have included: develop in an Agile way using SCRUM techniques (we are still very much learning this approach as a company); use MVVM architecture; use Ninject and DI where possible; attempt to use TDD as much as possible to drive development; keep our controllers as skinny as possible; keep our views as simple as possible. During our discussions two approaches have been broached as to how to separate the workload given our objectives outlined above. OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required. View prototypes (**Graphic designer**) | - Mockups | Views (Razor and view helpers etc) & Javascript (**Developer 1**) | - View models (Integration point) | Controllers and Application logic (**Developer 2**) | - Models (Integration point) | Domain model and persistence (**Developer 3**) PROS: Integration points are quite clear, so developers can work without dependencies on others fairly easily. Code practices such as naming conventions and style are more easily managed with regard to consistency, as primarily only one developer will be handling an area. CONS: Completion of an entire feature becomes a bit grey, as no single person is responsible for an entire feature (story?). A person might not have a full appreciation for all areas of the project, and so code overlap might be lacking if suddenly that person left. OPTION 2: A more task-orientated approach where each person is responsible for the completion of the entire task, from view to controller to model. PROS: A person is responsible for one entire feature, so its "complete" state can be clearly defined. Code overlap into different areas will occur, so each individual has good coverage over the entire application. CONS: Overlap of development will occur in all the modules, and developers can develop/extend without a true understanding of what the original code owner was intending. This could potentially lead more easily to code bloat? Following a convention might be harder, as developers are adding to all areas of the project. If a developer sets up a way of doing things, would it be harder to get the other developers to follow that convention, or even build on it (or even discuss it)? Dunno. Bugs could more easily be introduced into areas not thought about by the developer. It's possibly easier to carry a team member, in so far as one member just hacks code together to complete a task whilst another takes time to build a foundation that could be used by others and so help make future tasks easier, i.e. starts building a framework.
    QUESTION: As is probably apparent, I'm more in favor of option 1; however, I'm interested to see how others might have approached this, or what the standard, best, or preferred way of undertaking a project is. Or indeed any different approach to handling this?
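
    To make Option 1's integration points concrete, here is a minimal sketch (not part of the original question; all type and member names are illustrative) of how the hand-off might look in ASP.NET MVC with constructor injection: Developer 1 owns the Razor views bound to the view model, Developer 2 owns the skinny controller that maps domain objects to view models, and Developer 3 owns the domain service behind the interface.

        using System.Web.Mvc;

        // Domain types owned by Developer 3 (persistence lives behind the interface)
        public class BookProgress { public string Title; public int PercentComplete; }
        public interface IReadingProgressService
        {
            BookProgress GetProgress(int userId, int bookId);
        }

        // View model: the integration point between Developer 1 (views) and Developer 2 (controllers)
        public class BookProgressViewModel
        {
            public string Title { get; set; }
            public int PercentComplete { get; set; }
        }

        // Skinny controller owned by Developer 2: maps domain objects to view models and nothing more
        public class ProgressController : Controller
        {
            private readonly IReadingProgressService _service;

            public ProgressController(IReadingProgressService service) // supplied by Ninject
            {
                _service = service;
            }

            public ActionResult Detail(int userId, int bookId)
            {
                BookProgress progress = _service.GetProgress(userId, bookId);
                return View(new BookProgressViewModel
                {
                    Title = progress.Title,
                    PercentComplete = progress.PercentComplete
                });
            }
        }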

    Read the article

  • The World of SQL Database Deployment

    - by GGBlogger
    In my early development days, I used Microsoft Access for building databases. It made things easy since I only needed to package the database with the installation package so my clients would have access to it. When we began the development of a new package in Visual Studio .NET, I decided to use SQL Server Express. It was free and provided good tools - also free. I thought it was a tremendous idea until it came time to distribute our new software! What a surprise. The nightmare: Ah, the choices! Detach the database and have the client reattach it to a newly installed – oh wait. FIRST my new client needs to download and install SQL Server Express with SQL Server Management Studio. That’s not a great thing, but it is one more nightmare step for users who may have other versions of SQL installed. Then the question became – do we detach and reattach or do we do a backup. It was too late (bad planning) to revert to Microsoft Access but we badly needed a simple way to package and distribute both the database AND sample contents. Red Gate to the rescue: It took me a while to find an answer but I did find it in a package called SQL Packager sold by a relatively unpublicized company in England called Red Gate. They call their products “ingeniously simple” and I must agree with that description. With SQL Packager you point to the database (more in a minute) you want to distribute. A few mouse clicks and dialogs and you have an executable file that you can ship virtually anywhere and virtually any way which, when run, installs the database on your destination SQL Server instance! It really is that simple. Easier to show than tell: Let’s explore a hypothetical case. Let’s say you have a local SQL database of customers and you have decided you want to share it with your subsidiaries or partners. Here is the underlying screen you will see on starting SQL Packager. There are a bunch of possibilities here but I’m going to keep this relatively simple. At this point I simply want to illustrate the simplicity of generating an executable to deliver your database. You will notice that you can set up a new package, edit an existing package or change a bunch of options. Start SQL Packager, and the following is the default dialog you get on startup. In the next dialog, I’ve selected the Server and Database. I’ve also selected Windows Authentication. Pressing Next causes SQL Packager to run a number of checks and produce a report. Now you’re given a comprehensive list of what is going to be packaged and you’re allowed to change it if you desire. I’ve never made any changes here so I can’t really make any suggestions. That just illustrates the comprehensive nature of so many Red Gate products including this one. Clicking Next gives you still further options. SQL Packager then works its magic and shows you a dialog with the results. Packager then gives you a dialog of the scripts it has generated. The capture above only shows 1 of 4 tabs. Finally pressing Next gives you the option to generate a .NET executable or a C# project. I’ve only generated an executable so I’m not in a position to tell you what the C# project looks like. That may be the subject of further discussions. You can rename the package and tell SQL Packager where to save it. I’ve skipped a lot but this will serve to illustrate the comprehensive (and ingenious) things Red Gate does. All in all, it’s a superb way to distribute populated SQL databases. Oh – we’ll save running the resulting executable for later also but believe me it’s insanely simple.
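
    For comparison with the detach/reattach-versus-backup dilemma above, here is a rough sketch of the "ship a backup" route done by hand in C#: restore a .bak that was distributed with the installer onto the client's SQL Server Express instance. The instance name, paths, database name and logical file names are illustrative assumptions; SQL Packager, of course, hides all of this behind its wizard.

        using System.Data.SqlClient;

        class DatabaseInstaller
        {
            static void Main()
            {
                // Connect to the client's local SQL Server Express instance (assumed name),
                // against master, because that's where RESTORE is issued from.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = @".\SQLEXPRESS",
                    InitialCatalog = "master",
                    IntegratedSecurity = true
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();

                    // Restore the backup that shipped with the installer (hypothetical paths and
                    // logical file names -- check them with RESTORE FILELISTONLY first).
                    const string restoreSql = @"
                        RESTORE DATABASE [Customers]
                        FROM DISK = N'C:\Install\Customers.bak'
                        WITH MOVE 'Customers' TO N'C:\Data\Customers.mdf',
                             MOVE 'Customers_log' TO N'C:\Data\Customers_log.ldf',
                             REPLACE";

                    using (var command = new SqlCommand(restoreSql, connection))
                    {
                        command.CommandTimeout = 0; // restores can outlast the default 30 seconds
                        command.ExecuteNonQuery();
                    }
                }
            }
        }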

    Read the article

  • Two Weeks To Go, Still Time to Register

    - by speakjava
    Yes, it's now only two weeks to the start of the 17th JavaOne conference! This will be my ninth JavaOne, I came fairly late to this event, attending for the first time in 2002.  Since then I've missed two conferences, 2006 for the birth of my son (a reasonable excuse I think) and 2010 for reasons we'll not go into here.  I have quite the collection of show devices, I've still got the WoWee robot, the HTC phone for JavaFX, the programmable pen and the Sharp Zaurus.  The only one I didn't keep was the homePod music player (I wonder why?) JavaOne is a special conference for many reasons, some of which I list here: A great opportunity to catch up on the latest changes in the Java world.  This is not just in terms of the platform, but as much about what people are doing with Java to build new and cool applications. A chance to meet people.  We have these things called BoFs, which stands for "Birds of a Feather", as in "Birds of a feather, flock together".  The idea being to have sessions where people who are interested in the same topic don't just get to listen to a presentation, but get to talk about it.  These sessions are great, but I find that JavaOne is as much about the people I meet in the corridors and the discussions I have there as it is about the sessions I get to attend. Think outside the box.  There are a lot of sessions at JavaOne covering the full gamut of Java technologies and applications.  Clearly going to sessions that relate to your area of interest is great, but attending some of the more esoteric sessions can often spark thoughts and stimulate the imagination to go off and do new and exciting things once you get back. Get the lowdown from the Java community.  Java is as much about community as anything else and there are plenty of events where you can get involved.  The GlassFish party is always popular and for Java Champions and JUG leaders there's a couple of special events too. Not just all hard work.  Oracle knows how to throw a party and the appreciation event will be a great opportunity to mingle with peers in a more relaxed environment.  This year Pearl Jam and Kings of Leon will be playing live.  Add free beer and what more could you want? So there you have it.  Just a few reasons for why you want to attend JavaOne this year.  Oh, and of course I'll be presenting three sessions which is even more reason to go.  As usual I've gone for some mainstream ("Custom Charts" for JavaFX) and some more 'out there' ("Java and the Raspberry Pi" and "Gestural Interfaces for JavaFX").  Once again I'll be providing plenty of demos so more than half my luggage this year will consist of a Kinect, robot arm, Raspberry Pis, gamepad and even an EEG sensor. If you're a student there's one even more attractive reason for going to JavaOne: It's Free! Registration is here.  Hope to see you there!

    Read the article

  • Get More From Your Service Request

    - by Get Proactive Customer Adoption Team
    Leveraging Service Request Best Practices Use best practices to get there faster. In the daily conversations I have with customers, they sometimes express frustration over their Service Requests. They often feel powerless to make needed changes, so their sense of frustration grows. To help you avoid some of the frustration you might feel in dealing with your Service Requests (SR), here are a few pointers that come from our best practice discussions. Be proactive. If you can anticipate some of the questions that Support will ask, or the information they may need, try to provide this up front, when you log the SR. This could be output from the Remote Diagnostic Agent (RDA), if this is a database issue, or the output from another diagnostic tool, if you’re an EBS customer. Any information you can supply that helps us understand the situation better, helps us resolve the issue sooner. As you use some of these tools proactively, you might even find the solution to the problem before you log an SR! Be right. Make sure you have the correct severity level. Since you select the initial severity level, it’s easy to accept the default without considering how significant this may be. Business impact is the driving factor, so make sure you take a moment to select the severity level that is appropriate to the situation. Also, make sure you ask us to change the severity level, should the situation dictate. Be responsive! If this is an important issue to you, quickly follow up on any action plan submitted to you by Oracle Support. The support engineer assigned to your Service Request will be able to move the issue forward more aggressively when they have the needed information. This is crucial in resolving your issues in a timely manner. Be thorough. If there are five questions in the action plan, make sure you provide an answer for all five questions in one response, rather than trickling them in one at a time. This will allow the engineer to look at all of the information as a whole and to avoid multiple trips to your SR, saving valuable time and getting you a resolution sooner. Be your own advocate! You know your situation best; make sure Oracle Support understands both how and why this issue is important to you and your company. Use the escalation process if you're concerned that your SR isn't going the right direction, the right pace, or through the right person. Don't wait until you're frustrated and angry. An escalation is as simple as a quick conversation on the phone and can be amazingly effective in getting your issues back on track. The support manager you speak with is empowered to make any needed changes. Be our partner. You can make your support experience better. When your SR has been resolved, you may receive a survey request. This is intended to get your feedback about how your SR went and what we can do to improve your overall support experience. Oracle Support is here to help you. Our goal with any Service Request is to provide the best possible solution as quickly as possible. With your help, we’ll be able to do this with your Service Request too.  

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again: I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing. I'm in the technical realm and rarely have much to do with purchasing and licensing.  I say “I need”; budget realities will dictate whether I actually get.  However, I do keep my ear to the ground and due to my community involvement, I know, or at least have an understanding of, some licensing restrictions. Due to a misunderstanding, Microsoft Licensing stated that we needed licenses for our standby servers.  I knew that that was not the case, and a quick tweet confirmed this. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and posting several Technet articles to back this up, we saved 2 enterprise edition licences, a not inconsiderable cost. However during this discussion, I was made aware of another ‘legalese’ document that could completely override the referenced articles, and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this.  The “Product Use Rights” agreement would appear to be the volume licensing equivalent of the “End User License Agreement”, the click-throughs we all know and ignore.  Here is a direct quote from Microsoft licensing, when asked for clarification. “Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR.” So, exactly what does that mean? My take: this is a living document, “updated quarterly”, though presumably this could be done on a whim and a fancy.  It could state that you are only licensed if, during install, you stand in a corner juggling and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it? As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • PBCS Hyperion Planning in the Cloud PartnerLab 2-Day Training

    - by Mike.Hallett(at)Oracle-BI&EPM
    Objective of the PartnerLab: To help partners engage the interest and commitment of their clients for Oracle Planning and Budgeting Cloud Service projects. This is your unique opportunity to learn how to expand your business with the PBCS Application. This 2-day PartnerLab workshop will enable your team to understand the fundamental concepts of the PBCS Application, the implications of Oracle Public Cloud deployment, and to effectively present and demonstrate PBCS to prospective clients. Participants must already be competent with the on-premise Hyperion Planning application: this training will build on existing expertise to cover SaaS Cloud-specific deployment implications and how best to demonstrate this to clients and win services-led PBCS implementation engagements. Register here now and see the full Agenda for 07-08 July 2014 in Oracle Paris – Colombes, 15, bd Charles de Gaulle, 92715 Colombes Cedex, France. Register here now and see the full Agenda for 15-16 July 2014 in Oracle Italy, via Fulvio Testi 136, Cinisello Balsamo, Milan, Italy. This training is free of charge to OPN Member Partners. This PartnerLab is a 2-day in-class workshop event led by Oracle Pre-Sales subject matter experts. These 2 days consist of discussions, presentations, demonstrations and hands-on exercises. Note: the hands-on exercises are in an already installed environment that you can have access to after the event (see more @ Hyperion Demonstration Systems for Partners). The PartnerLab will be delivered in English or the local language. Mandatory prerequisites for a participant: Please view the material available and complete the assessments before you attend the PartnerLab event. Material and assessments cover foundational information about Oracle Hyperion Planning and Oracle Planning and Budgeting Cloud Service. View material prior to the live PartnerLab: Oracle Hyperion Planning 11 Sales Specialist guided learning path; Oracle Hyperion Planning 11 PreSales Specialist guided learning path; Oracle Hyperion Planning 11 Implementation Specialist guided learning path; Oracle Planning and Budgeting Cloud Service Specialist guided learning path; PBCS How-to Videos; Learn More at Oracle Planning and Budgeting Cloud Service. Take and pass these on-line assessments prior to the live PartnerLab training: Oracle Hyperion Planning 11 Sales Specialist on-line exam; Oracle Hyperion Planning 11 PreSales Specialist on-line exam.

    Read the article

  • Is hidden content (display: none;) indexed by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS? We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice when using content that is initially legitimately hidden for interactivity purposes is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users including users with screen readers or javascript disabled, then 9 times out of 10 good relevant search rankings will follow (though their specific advice seems more written for cases where javascript writes new content to the page). JavaScript: Place the same content from the JavaScript in a <noscript> tag. If you use this method, ensure the contents are exactly the same as what’s contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser. So, best practice seems pretty clear. What I can't find out is, however, the simple factual matter of whether hidden content is indexed by search engines (but with potential penalties if it looks 'spammy'), or, whether it is ignored, or, whether it is indexed but with a lower weighting (like <noscript> content is, apparently). (for bonus points it would be great to know if this varies or is consistent between display: none;, visibility: hidden;, etc, but that isn't crucial). This is different to the other questions on display:none; and SEO - those are about good and bad practice and the answers are discussions of good and bad practice; I'm interested simply in the factual 'Yes or no' question of whether search engines index, or ignore, content that is in display: none; - something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and ultimately concludes that in mid 2011, Google's policy on hidden text was evolving, and that they hadn't at that time started automatically penalising display:none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it...): but this doesn't clear up how, or if, it is indexed. What I will do will be to follow the guidelines and make sure that all the content that is initially hidden which regular users can explore using javascript-driven interactivity is also structured in a way that noscript/screenreader users can use. So I'm not interested in best practice, opinions etc because best practice seems to be really clear: accessibility best practices boost SEO. But I'd like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be ignored, or indexed again, or picked up to compare against the <noscript> content but not indexed... etc.
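
    For reference, the pattern Google's guideline describes (and the one the question intends to follow) looks roughly like this; the ids and copy are placeholders:

        <!-- Interactive panel, revealed by JavaScript; hidden by default -->
        <div id="book-notes" style="display: none;">
          <p>Full notes on I, Robot go here...</p>
        </div>

        <!-- The same content for visitors (and crawlers) without JavaScript -->
        <noscript>
          <p>Full notes on I, Robot go here...</p>
        </noscript>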

    Read the article

  • Embeddable forum software

    - by Rented
    I am in the planning stages of a specific subject matter community web site, and one feature I feel is required is that of member discussions. However, not in a typical forum style. For example, I don't want the members to have to navigate away from their own "user space" in order to discuss a topic. I think it is best described with an analogous example. Let's say the site is for literature buffs, and each member has a set of pages for keeping notes, progress, questions, etc. on books they are studying/reading. So Joe will have one page for Great Expectations, another for Hamlet, a third for I, Robot, and so forth. Likewise, Jane will have a page for Don Quixote, Lord of the Flies, and also I, Robot. Now, wouldn't it be nice if Joe and Jane could discuss I, Robot from within their own respective pages? Now, at first thought, roll your own seems like the way to go. However, once we start getting into issues such as spam blocking, banning, ratings, pruning, archiving, flooding and so on, well "roll your own" doesn't sound too appealing anymore. Also, I have next to zero experience with forum software. So I'm looking for forum software that has an extensive API or is generally very integration-friendly. I would like to be able to create user groups, topics, permissions, etc. programmatically, as well as the obvious user authentication (most seem open in that respect). The site will most probably be built with Java. Tangler seems like a decent option, but it seems less mature than what I'd prefer.

    Read the article

  • Building Cross Platform app - recommendation

    - by Ben
    Hi, I need to build a fairly simple app but it needs to work on both PC and Mac. It also needs to be redistributable on a disc or USB drive as a standalone desktop app. Initially I thought AIR would be perfect for this (it ticks all the API requirements), but the difficulty is making it distributable, as the app would require the AIR runtime to be installed to run. I came across Shu Player as an option as it seems to be able to package the AIR runtime with the app and do a (silent?) install. However, this seems to break the T&C from Adobe (as outlined here) so I'm not sure about the legality. Another option could be Zinc but I haven't tested it so I'm not sure how well it'll fit the bill. What would you recommend or suggest I check out? Any suggestions much appreciated. EDIT: There are a few more discussions on mono usage (though no real conclusion): Here and Here EDIT2: Titanium could also fit the bill maybe, will check it out. Any more comments from anyone? Ben

    Read the article

  • Fancybox "close" hangs/delays in IE7/8

    - by Kerri
    I'm having an issue with IE7/8 only on a development site: when closing the Fancybox, there is a major delay, sometimes about ten seconds or so, sometimes much longer (like a minute). Some things that might be relevant: • Using fancybox 1.3.1 • Works perfectly on all other browsers (FF, Safari, Opera, Chrome) • The loaded iframe contains a flash "virtual tour" • There are no errors reported in IE's Dev Toolbar or in DebugBar • The fancybox renders perfectly in IE, and after it has closed, there is not a problem loading it again. The closing seems to be the only issue. (That, and that it crashed the client's browser once, but I'm inclined to believe that had more to do with the content of the iframe than fancybox). I changed the link to the iFrame to something simpler (google), and it closed with no problem. So it does seem to have something to do with a conflict with the content of the iframe. The call: $("a#tourbox").fancybox({ "width": 750, "height" : 575, "autoScale": false, "type": "iframe" }); The HTML: <a id="tourbox" href="http://tour.circlepix.com/tour.htm?id=670335"><img src="sites/all/themes/removed/images/banquets-vtour.jpg" alt="Virtual Tour" /></a> Here's a link to the page with the problem: http://s93571.gridserver.com/banquets Click on the "Go to 360° Virtual Tour" image. Here's the page it loads: http://tour.circlepix.com/tour.htm?id=670335 I'd greatly appreciate any clues or ideas of what might be the issue. I couldn't find any other discussions with a similar problem. Thanks for any insights!
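
    One workaround worth trying, offered as a sketch rather than a confirmed fix: blank the iframe before Fancybox runs its close animation, so IE7/8 isn't left waiting on the embedded Flash tour to unload. This assumes Fancybox 1.3's onCleanup callback and its standard #fancybox-content wrapper around the iframe.

        $("a#tourbox").fancybox({
            "width": 750,
            "height": 575,
            "autoScale": false,
            "type": "iframe",
            "onCleanup": function() {
                // Release the Flash virtual tour before the close animation starts,
                // otherwise IE can hang while tearing the plugin instance down.
                $("#fancybox-content iframe").attr("src", "about:blank");
                return true; // let the close continue as normal
            }
        });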

    Read the article

  • Can't connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233) Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority order in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason? When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest).
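
    One way to narrow this down: SSMS lets you force a particular protocol by prefixing the server name in the Connect dialog, so each protocol can be tested on its own (the instance name below is an assumption):

        lpc:.\SQLEXPRESS    -- force Shared Memory
        tcp:.\SQLEXPRESS    -- force TCP/IP
        np:.\SQLEXPRESS     -- force Named Pipes

    If the lpc: form connects but the default connection still fails, the protocol changes probably just haven't taken effect yet; enabling or disabling protocols in Configuration Manager only applies after the SQL Server service is restarted.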

    Read the article

  • Apache Axis2 1.5.1 and NTLM Authentication

    - by arcticpenguin
    I've browsed all of the discussions here on StackOverflow regarding NTLM and Java, and I can't seem to find the answer. I'll try and be much more specific. Here's some code that returns a client stub that (I hope) is configured for NTLM authentication: ServiceStub getService() { try { ServiceStub stub = new ServiceStub( "http://myserver/some/path/to/webservices.asmx"); // this service is hosted on IIS List<String> ntlmPreferences = new ArrayList<String>(1); ntlmPreferences.add(HttpTransportProperties.Authenticator.NTLM); HttpTransportProperties.Authenticator ntlmAuthenticator = new HttpTransportProperties.Authenticator(); ntlmAuthenticator.setPreemptiveAuthentication(true); ntlmAuthenticator.setAuthSchemes(ntlmPreferences); ntlmAuthenticator.setUsername("me"); ntlmAuthenticator.setHost("localhost"); ntlmAuthenticator.setDomain("mydomain"); Options options = stub._getServiceClient().getOptions(); options.setProperty(HTTPConstants.AUTHENTICATE, ntlmAuthenticator); options.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, "true"); stub._getServiceClient().setOptions(options); return stub; } catch (AxisFault e) { e.printStackTrace(); } return null; } This returns a valid SerivceStub object. When I try to invoke a call on the stub, I see the following in my log: Jun 9, 2010 12:12:22 PM org.apache.commons.httpclient.auth.AuthChallengeProcessor selectAuthScheme INFO: NTLM authentication scheme selected Jun 9, 2010 12:12:22 PM org.apache.commons.httpclient.HttpMethodDirector authenticate SEVERE: Credentials cannot be used for NTLM authentication: org.apache.commons.httpclient.UsernamePasswordCredentials Does anyone have a solution to this issue?

    Read the article

  • Can I embed a custom font in an iPhone application?

    - by Airsource Ltd
    I would like to have an app include a custom font for rendering text, load it, and then use it with standard UIKit elements like UILabel. Is this possible? I found these links: http://discussions.apple.com/thread.jspa?messageID=8304744 http://forums.macrumors.com/showthread.php?t=569311 but these would require me to render each glyph myself, which is a bit too much like hard work, especially for multi-line text. I've also found posts that say straight out that it's not possible, but without justification, so I'm looking for a definitive answer. EDIT - failed -[UIFont fontWithName:size:] experiment: I downloaded Harrowprint.ttf (downloaded from here) and added it to my Resources directory and to the project. I then tried this code: UIFont* font = [UIFont fontWithName:@"Harrowprint" size:20]; which resulted in an exception being thrown. Looking at the TTF file in Finder confirmed that the font name was Harrowprint. EDIT - there have been a number of replies so far which tell me to read the documentation on X or Y. I've experimented extensively with all of these, and got nowhere. In one case, X turned out to be relevant only on OS X, not on iPhone. Consequently I am setting a bounty for this question, and I will award the bounty to the first person who provides an answer (using only documented APIs) who responds with sufficient information to get this working on the device. Working on the simulator too would be a bonus. EDIT - it appears that the bounty auto-awards to the answer with the highest number of votes. Interesting. No one actually provided an answer that solved the question as asked - the solution that involves coding your own UILabel subclass doesn't support word-wrap, which is an essential feature for me - though I guess I could extend it to do so.
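
    For readers on later SDKs: once the UIAppFonts Info.plist key became available (iOS 3.2 onward, so after this question was first asked), the documented route is to list the bundled font file in Info.plist and load it by name; a minimal sketch, assuming the .ttf ships in the app bundle:

        <!-- Info.plist -->
        <key>UIAppFonts</key>
        <array>
            <string>Harrowprint.ttf</string>
        </array>

    With that entry in place, [UIFont fontWithName:@"Harrowprint" size:20] should work, provided @"Harrowprint" is the font's PostScript name rather than the file name (the two do not always match).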

    Read the article

  • Error when instantiating .NET/COM interop class via classic ASP

    - by Lee D
    Hi all, I am having a problem when trying to instantiate a C# .NET class that's been exposed to COM in a classic ASP application. I've used tlbexp to generate a typelib and registered it in Component Services; now when trying to create the object as such: Server.CreateObject("The.Class.Name") I am getting the error: Server object error 'ASP 0177 : 80131534' Server.CreateObject Failed I've searched around online for information on this error, and found numerous discussions but no solution. The error code 0x80131534 apparently means "COR_E_TYPEINITIALIZATION, a type failed to initialize", which would suggest the problem would be in the constructor. The constructor of the class in question sets a private field to an instance of another class from the same assembly, and then reads some configuration settings from an XML file. This behaviour is unit tested and I've checked that the file exists; I can't see anything else that could be breaking in there. A few other points which may or may not be of use: A test .NET project referencing the DLL can instantiate the class just fine; however a test VB6 project referencing the TLB blows up with the same error. Both the DLL and the TLB are in the same location. This application is running locally, on Windows XP Professional SP3 and IIS 5.1. The .NET assembly is built with .NET Framework 2.0, although 3.5 is installed on the machine. I know other people who don't get this error on their builds, so I believe it may be something environmental. Any suggestions are welcome as I've been struggling to fix this for some time. Thanks in advance.
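
    Since 0x80131534 (TypeInitializationException / COR_E_TYPEINITIALIZATION) tends to swallow the real failure once COM is in the middle, one way to surface the underlying exception is to temporarily wrap the constructor (and any static initialisation) and write the full exception chain somewhere classic ASP can't hide it. A rough sketch; the log path and method names are placeholders:

        using System;
        using System.IO;

        public class MyComClass
        {
            static MyComClass()
            {
                // Static field initialisers and static constructors are the usual culprits
                // behind COR_E_TYPEINITIALIZATION, so wrap them the same way if you have any.
            }

            public MyComClass()
            {
                try
                {
                    InitialiseConfiguration(); // the existing XML/config reading done in the constructor
                }
                catch (Exception ex)
                {
                    // Log the whole chain of inner exceptions, then rethrow.
                    File.AppendAllText(@"C:\temp\com-init.log", ex + Environment.NewLine);
                    throw;
                }
            }

            private void InitialiseConfiguration()
            {
                // existing constructor logic goes here
            }
        }

    It is also worth remembering that the IIS worker process runs under a different identity than Visual Studio or a test VB6 project, so a configuration file that is readable in your unit tests may not be readable (or findable, if the path is relative) from the web application.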

    Read the article

  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "nosql" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables. Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier, what's harder, what can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML) or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc? I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done? FYI, there have been StackOverflow discussions on similar topics here: the next generation of databases changing schemas to work with Google App Engine choosing a document-oriented database

    Read the article

  • How do I detect whether the sample supplied by VideoSink.OnSample() is right-side up?

    - by Ken Smith
    We're currently using the Silverlight VideoSink to capture video from users' local webcams, kinda like so: protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData) { if (FrameShouldBeSubmitted()) { byte[] resampledData = ResizeFrame(sampleData); mediaController.SetVideoFrame(resampledData); } } Now, on most of the machines that we've tested, the video sample provided in the byte[] sampleData parameter is upside-down, i.e., if you try to take the RGBA data and turn it into, say, a WriteableBitmap, the bitmap will be upside-down. That's odd, but fairly easy to correct, of course -- you just have to reverse the array as you encode it. The problem is that at least on some machines (e.g., the single Macintosh in our test environment), the video sample provided is no longer upside-down, but right-side up, and hence, flipping the image actually results in an image that's received upside-down on the far side. I reported this to MS as a bug, but their (terse) response was that it was "As Designed". Further attempts at clarification have so far been ignored. Now, I'll grant that it's kinda entertaining to imagine the discussions behind this design decision: "OK, just to make it interesting, let's play the video rightside up on a Mac, but let's turn it upside down for Windows!" "Great idea!" "Yeah, that'll keep those developers guessing!" But beyond that, I can't find this, umm, "feature" documented anywhere, nor can I find any documentation on how one is supposed to be able to tell that a given video sample is upside down or rightside up. Any thoughts on how to tell this?
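
    Whatever the answer on detecting orientation turns out to be, "reverse the array" is worth pinning down: a correct vertical flip reverses whole rows, not individual bytes, otherwise the RGBA channels and the horizontal order get scrambled as well. A minimal sketch, assuming a tightly packed 32-bit RGBA frame with known width and height:

        // Flip a tightly packed RGBA frame vertically by swapping whole rows.
        private static byte[] FlipVertically(byte[] frame, int width, int height)
        {
            const int bytesPerPixel = 4;            // RGBA
            int stride = width * bytesPerPixel;     // bytes per row (no padding assumed)
            var flipped = new byte[frame.Length];

            for (int row = 0; row < height; row++)
            {
                Buffer.BlockCopy(
                    frame, row * stride,
                    flipped, (height - 1 - row) * stride,
                    stride);
            }
            return flipped;
        }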

    Read the article

  • InvalidCastException in DataGridView

    - by Max Yaffe
    (Using VS 2010 Beta 2 - .Net 4.0 B2 Rel) I have a class, MyTable, derived from BindingList<S>, where S is a struct. S is made up of several other structs, for example: public class MyTable<S>:BindingList<S> where S: struct { ... } public struct MyStruct { public MyReal r1; public MyReal r2; public MyReal R1 {get{...} set{...}} public MyReal R2 {get{...} set{...}} ... } public struct MyReal { private Double d; private void InitFromString(string) {this.d = ...;} public MyReal(Double d) { this.d = d;} public MyReal(string val) { this.d = default(Double); InitFromString(val);} public override string ToString() { return this.d.ToString();} public static explicit operator MyReal(string s) { return new MyReal(s);} public static implicit operator String(MyReal r) { return r.ToString();} ... } OK, I use the MyTable as a binding source for a DataGridView. I can load the data grid easily using InitFromString on individual fields in MyStruct. The problem comes when I try to edit a value in a cell of the DataGridView. Going to the first row, first column, I change the value of the existing number. It gives an exception blizzard, the first line of which says: System.FormatException: Invalid cast from 'System.String' to 'MyReal' I've looked at the casting discussions and reference books but don't see any obvious problems. Any ideas?
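
    A detail that may be relevant here: when a DataGridView cell is committed, the edited string is converted back to the cell's value type through a TypeConverter (falling back to Convert.ChangeType), not through user-defined cast operators, so MyReal's explicit operator from string never gets a chance to run. Below is a sketch of a converter that would give the grid a string-to-MyReal path; the converter itself is standard, but whether this is the whole story for your grid is the part to verify:

        using System;
        using System.ComponentModel;
        using System.Globalization;

        // Add this attribute to the existing MyReal declaration:
        // [TypeConverter(typeof(MyRealConverter))]
        // public struct MyReal { ... }

        public class MyRealConverter : TypeConverter
        {
            public override bool CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
            {
                return sourceType == typeof(string) || base.CanConvertFrom(context, sourceType);
            }

            public override object ConvertFrom(ITypeDescriptorContext context, CultureInfo culture, object value)
            {
                string s = value as string;
                if (s != null)
                    return new MyReal(s);   // reuse the string constructor from the question
                return base.ConvertFrom(context, culture, value);
            }
        }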

    Read the article

  • Implementing crossover in genetic programming

    - by Name
    Hi, I'm writing a genetic programming (GP) system (in C but that's a minor detail). I've read a lot of the literature (Koza, Poli, Langdon, Banzhaf, Brameier, et al) but there are some implementation details I've never seen explained. For example: I'm using a steady state population rather than a generational approach, primarily to use all of the computer's memory rather than reserve half for the interim population. Q1. In GP, as opposed to GA, when you perform crossover you select two parents but do you create one child or two, or is that a free choice you have? Q2. In steady state GP, as opposed to a generational system, what members of the population do the children created by crossover replace? This is what I haven't seen discussed. Is it the two parents, or is it two other, randomly-selected members? I can understand if it's the latter, and that you might use negative tournament selection to choose members to replace, but would that not create premature convergence? (After a crossover event the population contains the two original parents plus two children of those parents, and two other random members get removed. Elitism is inherent.) Q3. Is there a Web forum or mailing list focused on GP? Oddly I haven't found one. Yahoo's GP group is used almost exclusively for announcements, the Poli/Langdon Field Guide forum is almost silent, and GP discussions on general/game programming sites like gamedev.net are very basic. Thanks for any help you can provide!
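
    On Q1 and Q2, one common steady-state arrangement (offered as a sketch of a workable choice, not the canonical answer): pick two parents by tournament, produce one or two children by subtree crossover, and overwrite the losers of a separate negative tournament, so the parents themselves stay in the population. Roughly, in C, with the population type and the selection/crossover/evaluation helpers assumed to exist elsewhere in the system:

        /* One steady-state step. The replaced individuals are chosen by a
         * negative (worst-of-sample) tournament rather than being the parents.
         * Individual, TOURNAMENT_SIZE, tournament_best, tournament_worst,
         * crossover, evaluate and replace are assumed helpers, not library calls. */
        void steady_state_step(Individual *pop, int pop_size)
        {
            int mum = tournament_best(pop, pop_size, TOURNAMENT_SIZE);
            int dad = tournament_best(pop, pop_size, TOURNAMENT_SIZE);

            Individual child1, child2;
            crossover(&pop[mum], &pop[dad], &child1, &child2);  /* two offspring per event */

            evaluate(&child1);
            evaluate(&child2);

            int loser1 = tournament_worst(pop, pop_size, TOURNAMENT_SIZE);
            int loser2 = tournament_worst(pop, pop_size, TOURNAMENT_SIZE);
            if (loser2 == loser1)                 /* avoid clobbering the same slot twice */
                loser2 = (loser1 + 1) % pop_size;

            replace(&pop[loser1], &child1);
            replace(&pop[loser2], &child2);
        }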

    Read the article

  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello, Although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is: I have a main program written in FORTRAN. I have been given a set of python scripts that are very useful. My goal is to access these python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN as such: CALL SYSTEM ('python pyexample.py') Data is read from .dat files and written to .dat files. This is how the python scripts and the main FORTRAN program communicate to each other. I am currently running my code on my local machine. I have python installed with numpy, scipy, etc. My problem: The code needs to run on a remote server. For strictly FORTRAN code, I compile the code locally and send the executable to the server where it waits in a queue. However, the server does not have python installed. The server is being used as a number crunching station between universities and industry. Installing python along with the necessary modules on the server is not an option. This means that my “CALL SYSTEM ('python pyexample.py')” strategy no longer works. Solution?: I found some information on a couple of things in thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code Shedskin, Psyco, Cython, Pypy, Cpython API These “modules”(? Not sure if that's what to call them) seem to compile python script to C code or C++. Apparently not all python features can be translated to C. As well, some of these appear to be experimental. Is it possible to compile my python scripts with my FORTRAN code? There exists f2py which converts FORTRAN code to python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent PS: I'm using python 2.6 on Ubuntu
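
    One route that fits the constraint (no Python on the server) without translating the scripts to C: freeze them into self-contained programs on a build machine that matches the server, ship the frozen output alongside the FORTRAN executable, and point CALL SYSTEM at the frozen program instead of 'python pyexample.py'. A minimal cx_Freeze sketch, assuming cx_Freeze is available where you build; whether it picks up all of numpy/scipy cleanly, and whether the server's library versions cooperate, is something to test:

        # setup.py -- build with:  python setup.py build
        from cx_Freeze import setup, Executable

        setup(
            name="pyexample",
            version="1.0",
            description="Frozen helper script called from the FORTRAN program",
            executables=[Executable("pyexample.py")],
        )

    The FORTRAN side then becomes CALL SYSTEM ('./pyexample') with the frozen build directory deployed next to it, and the .dat-file interface stays exactly as it is now.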

    Read the article

  • mocking collection behavior with Moq

    - by Stephen Patten
    Hello, I've read through some of the discussions on the Moq user group and have failed to find an example and have been so far unable to find the scenario that I have. Here is my question and code: // 6 periods var schedule = new List<PaymentPlanPeriod>() { new PaymentPlanPeriod(1000m, args.MinDate.ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString()) }; // Now the proxy is correct with the schedule helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>(), schedule)); Then in my tests I use Periods but the Mocked _PaymentPlanHelper never populates the collection, see below for usage: public IEnumerable<PaymentPlanPeriod> Periods { get { if (CanCalculateExpression()) _PaymentPlanHelper.GetPlanPeriods(this.ToString(), _PaymentSchedule); return _PaymentSchedule; } } Now if I change the mocked object to use another overloaded method of GetPlanPeriods that returns a List like so: var schedule = new List<PaymentPlanPeriod>() { new PaymentPlanPeriod(1000m, args.MinDate.ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(1).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(2).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(3).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(4).ToString()), new PaymentPlanPeriod(1000m, args.MinDate.Value.AddMonths(5).ToString()) }; helper.Setup(h => h.GetPlanPeriods(It.IsAny<String>())).Returns(new List<PaymentPlanPeriod>(schedule)); List<PaymentPlanPeriod> result = _PaymentPlanHelper.GetPlanPeriods(this.ToString()); This works as expected. Any pointers would be awesome, as long as you don't bash my architecture... :) Thank you, Stephen
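
    If the intent of the two-argument overload is for GetPlanPeriods to fill the list that is passed in, the setup above only matches the call; it never touches the caller's collection. A sketch of one way to make the mock populate it, using Moq's Callback with the argument types from the question's signature:

        helper.Setup(h => h.GetPlanPeriods(It.IsAny<string>(), It.IsAny<List<PaymentPlanPeriod>>()))
              .Callback<string, List<PaymentPlanPeriod>>((expression, periods) =>
              {
                  // Simulate the real helper: push the expected schedule into the caller's list.
                  periods.Clear();
                  periods.AddRange(schedule);
              });

    Matching on It.IsAny<List<PaymentPlanPeriod>>() rather than on the schedule instance also avoids the setup silently failing to match when the class under test passes in its own _PaymentSchedule list.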

    Read the article

  • Does GetSystemInfo (on Windows) always return the number of logical processors?

    - by mhughes
    Reading up on this, and specifically reading the Microsoft docs, it looks like it should be returning the number of PHYSICAL processors, and that you should use GetLogicalProcessorInformation to figure out how many LOGICAL processors you have. Here's the doc I found on the SYSTEM_INFO structure: http://msdn.microsoft.com/en-us/library/ms724958(v=VS.85).aspx And here's the doc on GetLogicalProcessorInformation: http://msdn.microsoft.com/en-us/library/ms683194.aspx Reading up on it further though, in most of the discussions I've found on this topic, developers say that GetSystemInfo (and the SYSTEM_INFO structure) report the number of LOGICAL processors. When I search again, I find that MS did release some info on this (and a hot fix), here: http://support.microsoft.com/kb/936235 Reading that, it sounds like on XP, pre-Service Pack 3, GetSystemInfo reports the number of LOGICAL processors in the SYSTEM_INFO structure. It also reads to me that on Windows Vista and Windows 7, GetSystemInfo should be reporting the number of PHYSICAL processors (different to Windows XP pre-Service Pack 3). Does anyone know what it actually does? Does GetSystemInfo really report the number of physical processors (on the same computer) differently, depending on which OS it's running on?
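
    Whatever GetSystemInfo reports on a given OS and service pack, counting both kinds explicitly with GetLogicalProcessorInformation sidesteps the ambiguity. A rough C sketch (error handling trimmed) that tallies physical cores and logical processors:

        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            DWORD len = 0;
            GetLogicalProcessorInformation(NULL, &len);   /* first call just reports the needed size */

            SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
            GetLogicalProcessorInformation(info, &len);

            int cores = 0, logical = 0;
            DWORD count = len / sizeof(*info);
            for (DWORD i = 0; i < count; i++) {
                if (info[i].Relationship == RelationProcessorCore) {
                    cores++;
                    /* each set bit in ProcessorMask is one logical processor on this core */
                    ULONG_PTR mask = info[i].ProcessorMask;
                    while (mask) { logical += (int)(mask & 1); mask >>= 1; }
                }
            }
            printf("%d physical cores, %d logical processors\n", cores, logical);
            free(info);
            return 0;
        }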

    Read the article

  • How large a role does subjectiveness play in programming?

    - by Bob
    I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP. Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing. What role does subjectiveness play within the programming realm? Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, and hence the amount of languages in existence. And one person may find one language very readable, and another person may find their own language the most comforting. The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance. But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought. It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work. Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?

    Read the article
