Search Results

Search found 6523 results on 261 pages for 'route planning'.

Page 72 of 261

  • Can you recommend a good Idea Management System?

    - by Tedi
    I'm trying to find a good (and cheap) Idea Management System for a non-profit project. I've browsed a lot of good options which cost a lot of money. They are probably worth it, but we're planning to run this as a non-profit project, so unfortunately financial investment is not our key strength. Basically, what we want to run is a platform where users can propose ideas that are voted on, commented on and enriched by the rest of the community. Thanks.

    Read the article

  • How to become a better programmer in 2011?

    - by Anish Patel
    Not strictly a Stack Overflow thing, but I thought I'd get it out there and ask the question: what are you, as a programmer, going to do to improve in 2011? The things I am planning to do are as follows:
    - Learn Functional Programming
    - Write 100 blog posts
    - Take a bunch of Microsoft exams (70-433, 70-511, 70-513, 70-515, 70-516, 70-518, 70-519)
    - Contribute to an open source project
    Let's hope the motivation lasts all year!

    Read the article

  • B2B - OSB Action Series

    - by Ramesh Nittur
    What are we planning?
    1. Why there is a synergy between OSB and B2B integration
    2. Integrating OSB and B2B for a healthcare scenario
    3. Various integration patterns for OSB - B2B integration
    4. Correlation of messages from the OSB perspective
    5. Correlation of messages from the B2B perspective
    6. User experience in B2B, user experience in OSB

    Read the article

  • SQL Server Capacity Planner

    - by Colt
    Apart from the capacity planner tool for System Center and SharePoint Server, I was looking for a tool which can help me to estimate the capacity of SQL Server. I found an article on Microsoft.com about SQL Server 2000 sizing, but unfortunately the links are obsolete and dead: Dell PowerMatch Server Sizing Software and Compaq Active Answer Resources. Finally I found an article that is "close" to my interest: Hardware and Software Requirements for Installing SQL Server 2008. If any of you have heard of any tools for capacity planning or sizing for SQL Server, please drop me a message. Thanks, Colt
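    In the absence of a dedicated sizing tool, a rough first estimate can be worked out from expected row counts, average row sizes and growth. The sketch below is only a generic back-of-envelope calculation in Python; the table names, row sizes, overhead factors and growth rate are made-up examples, not figures from any real system or from Microsoft guidance:

    ```python
    # Back-of-envelope SQL Server sizing estimate (illustrative figures only).

    def estimate_table_mb(rows, avg_row_bytes, index_overhead=0.3, free_space=0.2):
        """Rough on-disk size of a table in MB, with a crude allowance
        for indexes and internal free space."""
        data_bytes = rows * avg_row_bytes
        total_bytes = data_bytes * (1 + index_overhead) * (1 + free_space)
        return total_bytes / (1024 * 1024)

    # Hypothetical workload: (table, expected rows, average row size in bytes)
    tables = [
        ("Orders",      5_000_000, 250),
        ("OrderLines", 20_000_000, 120),
        ("Customers",     500_000, 400),
    ]

    base_mb = sum(estimate_table_mb(rows, size) for _, rows, size in tables)
    annual_growth = 0.25   # assume 25% data growth per year
    years = 3
    projected_mb = base_mb * (1 + annual_growth) ** years

    print(f"Estimated size today: {base_mb:,.0f} MB")
    print(f"Estimated size in {years} years: {projected_mb:,.0f} MB")
    ```

    Any real estimate would also need to allow for tempdb, transaction logs, index sizes measured from the actual schema, and backup storage.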

    Read the article

  • APress Deal of the Day 17/Oct/2013 - Pro SharePoint 2013 Branding and Responsive Web Development

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/17/apress-deal-of-the-day-17oct2013---pro-sharepoint-2013.aspx
    Today's $10 deal of the day from APress at http://www.apress.com/9781430250289 is Pro SharePoint 2013 Branding and Responsive Web Development: "Pro SharePoint 2013 Branding and Responsive Web Development is a complete guide to planning, designing, and developing modern, responsive websites and applications using SharePoint 2013 and open standards like HTML5, CSS3 and JavaScript"

    Read the article

  • Migration from one domain to another - Transferring the social media stats

    - by Dipak Saraf
    I am planning to move my site from one domain to another, i.e. from domain a.com to b.com. The site also has a lot of content, and the migration of the content is not an issue. The 301 redirect will take care of all the backlinks as well. But my real worry is transferring the social media share links and stats from domain a.com to b.com. I need some insight into how these can be migrated seamlessly from domain a.com to b.com.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.

    Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    E-Business is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity.

    Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    - It uses variable byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localizations.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.

    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate disconnected models for various business entities. Higher level integrations are made available, but that is a weak architectural alternative to data level integration in this critically important aspect of Master Data Management.

    Read the article

  • Iterative and Incremental Principle Series 3: The Implementation Plan (a.k.a The Fitness Plan)

    - by llowitz
    Welcome back to the Iterative and Incremental Blog series. Yesterday, I demonstrated how shorter interval sets allowed me to focus on my fitness goals and achieve success. Likewise, in a project setting, shorter milestones allow the project team to maintain focus and experience a sense of accomplishment throughout the project lifecycle. Today, I will discuss project planning and how to effectively plan your iterations.

    Admittedly, there is more to applying the iterative and incremental principle than breaking long durations into multiple, shorter ones. In order to effectively apply the iterative and incremental approach, one should start by creating an implementation plan. In a project setting, the Implementation Plan is a high-level plan that focuses on milestones, objectives, and the number of iterations. It is the plan that is typically developed at the start of an engagement, identifying the project phases and milestones. When the iterative and incremental principle is applied, the Implementation Plan also identifies the number of iterations planned for each phase. The Implementation Plan does not include the detailed plan for the iterations, as this detail is determined prior to each iteration start during Iteration Planning. An individual iteration plan is created for each project iteration.

    For my fitness regime, I also created an “Implementation Plan” for my weekly exercise. My high-level plan included exercising 6 days a week and, since I cross train, trying not to repeat the same exercise two days in a row. Because running on the hills outside is the most difficult and, consequently, the most effective exercise, my implementation plan includes running outside at least 2 times a week. Regardless of the exercise selected, I always apply a series of 6-minute interval sets. I never plan what I will do each day in advance because there are too many changing factors that need to be considered before that level of detail is determined. If my Implementation Plan included details on the exercise I was to perform each day of the week, it is quite certain that I would be unable to follow my plan to that level. It is unrealistic to plan each day of the week without considering the unique circumstances at that time. For example, what is the weather? Are there conflicting schedule commitments? Are there injuries that need to be considered? Likewise, in a project setting, it is best to plan for the iteration details prior to its start.

    Join me for tomorrow’s blog where I will discuss when and how to plan the details of your iterations.

    Read the article

  • Can frequent state changes decrease rendering performance?

    - by Miro
    Can frequent texture and shader binding decrease rendering performance?

    "Frequent" binding example:
        for each object:
            for each material in the object:
                render the part of the object that uses that material

    "Low count" binding example:
        for each material:
            for each object that uses the material:
                render the part of the object that uses that material

    I'm planning to use an octree later, and with this "low count" method of rendering it can drastically increase memory consumption. So is it a good idea?
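    One common compromise is to keep the octree (or any scene traversal) purely for visibility, and then batch the resulting draw calls by material before issuing them, so that each texture/shader is bound once per frame. The sketch below is engine-agnostic Python pseudocode; the Material/parts attributes and the bind_material/draw calls are placeholder names, not a specific graphics API:

    ```python
    from collections import defaultdict

    def render_frame(visible_objects, renderer):
        """Group draw calls by material so each texture/shader is bound once
        per frame, regardless of the order objects come out of the octree."""
        batches = defaultdict(list)      # material -> list of (object, part)
        for obj in visible_objects:      # e.g. the result of an octree query
            for part in obj.parts:
                batches[part.material].append((obj, part))

        for material, draw_calls in batches.items():
            renderer.bind_material(material)   # bind texture + shader once
            for obj, part in draw_calls:
                renderer.draw(part.mesh, obj.transform)
    ```

    The extra memory cost here is only a per-frame list of references grouped by material, which is usually much cheaper than restructuring the octree itself around materials.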

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT

    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear.

    Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server.

    But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable.

    Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”

    There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app).

    So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready! If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912

    For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team

    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org.

    I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out.

    As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments

    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.

    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process

    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. The user (generally the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*

    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions.

    There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process keys in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”

    NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery

    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for.

    NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem.

    There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices

    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

    As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). A minimal sketch of this step-wise approach appears at the end of this post.

    There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

    But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

    Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary

    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to.

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
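    To make the “Branch by abstraction” idea above a little more concrete, here is a minimal sketch of a step-wise, backward-compatible schema change (moving a column into its own table). The table and column names and the run_sql helper are hypothetical, and the SQL is illustrative rather than taken from the patterns book; the point is only that each step is released on its own and leaves the previous structure working:

    ```python
    # Hypothetical helper: runs one migration script in its own deployment.
    def run_sql(connection, script):
        with connection.cursor() as cur:
            cur.execute(script)
        connection.commit()

    # Each step is released (and tested) separately; every step leaves the old
    # structure intact, so a failed increment can be recovered without losing
    # data written since the previous step.
    STEPS = [
        # 1. Add the new structure alongside the old one.
        """CREATE TABLE CustomerAddress (
               CustomerId INT NOT NULL,
               Address    NVARCHAR(400) NOT NULL
           );""",
        # 2. Backfill the new table from the old column.
        """INSERT INTO CustomerAddress (CustomerId, Address)
           SELECT CustomerId, Address FROM Customers;""",
        # 3. (Separate application releases switch reads, then writes,
        #     over to CustomerAddress.)
        # 4. Only when nothing references the old column any more, drop it.
        """ALTER TABLE Customers DROP COLUMN Address;""",
    ]

    # e.g. run_sql(conn, STEPS[0]) in one release, STEPS[1] in the next, etc.
    ```

    In practice the “switch reads, then writes” steps are application releases rather than database scripts, and a synchronisation trigger or dual-write period is often used between the backfill and the final drop.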

    Read the article

  • Facebook - Filter Page posts by #hashtag [on hold]

    - by beppe9000
    I'm trying to gather all the posts (official posts and people's posts) on my page which contain a specified #tag, to later show them on a website. But I've no clue on how to accomplish this. Is there any API capable of this, or anything else that could help? I basically need to get all those post IDs, looping/spidering through them for my hashtag. I'm planning to do this server-side, so PHP is my choice.
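    As far as I know the Graph API does not filter a page feed by hashtag server-side, so the usual approach is to page through the /{page-id}/feed edge and match the message text yourself. Below is a rough sketch of that loop. It is written in Python for brevity (the same requests translate directly to PHP), and the API version, access token and error handling are assumptions left to the reader:

    ```python
    import requests

    GRAPH_URL = "https://graph.facebook.com/v2.12"   # API version is an assumption

    def posts_with_hashtag(page_id, access_token, hashtag):
        """Page through a Facebook Page's feed and yield the IDs of posts
        whose message contains the given hashtag (client-side filtering)."""
        url = f"{GRAPH_URL}/{page_id}/feed"
        params = {"fields": "id,message", "access_token": access_token, "limit": 100}
        while url:
            data = requests.get(url, params=params).json()
            for post in data.get("data", []):
                if hashtag.lower() in post.get("message", "").lower():
                    yield post["id"]
            # paging.next already carries the query parameters
            url = data.get("paging", {}).get("next")
            params = None

    # Example usage:
    # for post_id in posts_with_hashtag("mypageid", "PAGE_ACCESS_TOKEN", "#myhashtag"):
    #     print(post_id)
    ```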

    Read the article

  • 7 Tips For Strong On Page SEO

    On page SEO is the first thing a webmaster should consider when planning the marketing of his website. Follow these simple yet effective methods to give your SEO campaign a kick start.

    Read the article

  • xTuple Announces Sponsorship of DrupalCon San Francisco

    LinuxPR: xTuple, the leader in Mac and open source enterprise resource planning (ERP) software (http://www.xtuple.com/) for small to medium-sized businesses (SMBs), today announced its sponsorship of DrupalCon San Francisco, which will take place April 19-21, 2010, at the Moscone Center.

    Read the article

  • Suggestion for an application to do our company car fleet management

    - by Pitto
    (As suggested, I've tried asking on webapps with no luck... so I'm trying on askubuntu: I'll be luckier :) ) Hello, everybody! Our company could be a little more open source and a little "ubuntier" if I can find an easy and user-friendly web application (we already have our LAMP server based on Ubuntu 10.04 LTS) to handle our company's car fleet expenses and repairs planning. Any hints? :) Thanks for any kind of help.

    Read the article

  • How do I install Cacti 8.7i with PIA 3.1 (Plug-In Architecture) for 12.04 Server LTS?

    - by Pininy
    I am planning to upgrade my current 9.04 server version to 12.04 LTS server. I've successfully backed up and restored my Cacti version 8.7d to 8.7i, which comes with the distribution. However, the plugin patch installation for PIA 3.1 is not working on Ubuntu. Can you assist? Or is there a way to include Cacti 8.8a, which is a stable version that comes with PIA 3.1 preinstalled? regards, thomas

    Read the article

  • Announcing the New Virtual Briefing Center

    - by Theresa Hickman
    Do you want to hear about real-world customer success stories? Or listen to Oracle Application leaders discuss the value in the latest releases of Oracle Application products? Do you want one place to download up-to-date content, including white papers, podcasts, webcasts and presentations? Did you miss the Virtual Trade Show at the beginning of 2011? If you answered yes to any of these questions, then the Virtual Briefing Center is the place to get up-to-date Oracle product information for Oracle E-Business Suite, PeopleSoft, JD Edwards, Fusion, Siebel and Hyperion across multiple product areas from financials, procurement, supply chain, CRM, Performance Management, and more. Every month we will have "Monthly Spotlights" to showcase new content. The following are the upcoming live webcasts in July 2011:

    - Weds. July 6, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear about Amway’s upgrade to Oracle E-Business Suite 12.1 and how they stabilized financial modules, especially the month-end close processes.
    - Thurs. July 14, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear West Corporation share their PeopleSoft 9.1 upgrade, resulting in improved self-service, more robust reporting capabilities and new workflow and processes.
    - Thurs. July 21, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Learn how MFlex improved their operations, saved manpower and reduced time to close with their upgrade to JD Edwards EnterpriseOne 9.0.
    - Thurs. July 28, 2011 at 9:00 a.m. PST/12:00 p.m. EST: IEEE discusses their upgrade to Siebel 8.1 using open web service architecture for faster SOA enablement, allowing them to scale their membership capacity by 250%.

    If you cannot attend any of the above live events, that's OK because each of the webcasts in this series will be recorded and available on demand. And for you Financials folks who may have missed the webcasts from the Virtual Trade Show earlier this year, you can view them on demand by visiting the Resource Library:

    - Planning Your Successful Upgrade to Oracle E-Business Suite Financials 12.1. In this session, Bryant and Stratton College talk about their upgrade.
    - Planning Your Successful Upgrade to PeopleSoft Financials 9.1. In this session, the University of Central Florida share their upgrade story.
    - Fusion Financials: The New Standard for Finance. In this session, Terrance Wampler, the VP of Financial Application Strategy, discusses the business value of Oracle's next generation financial applications and how customers can take advantage of Fusion Financials alongside their existing investments.

    What are you waiting for? Register now!

    Read the article

  • Customizing the Outlook Address Book

    It is possible to change the fields in the Outlook address book to make them a better fit for your organisation. Exchange provides templates for Users, Contacts, Groups, Public Folders, and Mailbox Agents that are downloaded to Outlook. Any changes will be visible in the address book. As usual with Exchange, careful planning before you make changes pays dividends, Ben Lye explains.

    Read the article


  • Replacement of Outlooksoft (SAP BPC) with Oracle EPM at Brady Corporation

    Nigel Youell, Product Marketing Director, Enterprise Performance Management Applications at Oracle, discusses with Joe Bittorf, Project Manager at Brady Corporation, why and how they embarked on this major project to replace SAP BPC with Oracle's Enterprise Performance Management Solution. Joe covers the outstanding improvements they have achieved in their financial close process, how they worked with Oracle Partner Emerging Solutions, and the current project to further improve their planning and budgeting processes.

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege to attend the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed and took a lot of useful information away from Greg Larsen’s Dynamic Management Views session, Kalen Delaney’s Compression Session – I am planning to implement 2008 Enterprise compression on my company’s data warehouse later this year – Remus Rusanu’s session on Service Broker to process NAP data, and Matt Masson’s presentation on high performance SSIS...(read more)

    Read the article

  • Search Engine Placement Optimization and Link Popularity

    The increased visibility of your website due to high link popularity and search engine placement optimization can mean so much, especially if you are promoting a product or service through your website. If you are new to the business of link building, you might be wondering how to get started with it and how search engine placement optimization can help you. Knowledge of link popularity basics is essential even if you are planning on hiring someone to do link building tasks for you.

    Read the article
