Search Results

Search found 2885 results on 116 pages for 'mike m lin'.

Page 23/116 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Upgrade Workshop in Switzerland

    - by Mike Dietrich
    Thanks for attending today's Upgrade Workshop in Baden-Dättwil - I had lots of fun - and thanks for the great discussions as well. I'll sort out the open questions later and publish them here too. You can find the current version of the slides at http://apex.oracle.com/folien - use the keyword (Schluesselwort): upgrade112

    Read the article

  • Regulating how much to draw based on how much was drawn last frame.

    - by Mike Howard
    I have a 3D game world on an iPhone (limited graphics speed), and I'm already regulating whether I draw each shape based on its size and distance from the camera. Something like:

        if (how_big_it_looks_from_the_camera > constant) then draw

    What I want to do now is also take into account how many shapes are being drawn, so that in busier areas of the game world I draw less than I otherwise would. I tried to do this by dividing how_big_it_looks by the number of shapes that were drawn last frame (well, by the square root of that number, but I'm simplifying - the problem is the same):

        if (how_big_it_looks / shapes_drawn > constant2) then draw

    But the check happens at the level of objects that each represent many drawn shapes, and if an object containing many shapes is switched on, it increases shapes_drawn a lot and switches itself back off the next frame. It flickers on and off. I tried keeping a kind of weighted average of previous values, each frame doing something like shapes_drawn_recently = 0.9 * shapes_drawn_recently + 0.1 * shapes_just_drawn, but that only slows the flickering down because of the nature of the feedback loop. Is there a good way of solving this? My project is in Objective-C, but a general algorithm or pseudo-code is good too. Thanks.
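    One standard way to stop a threshold toggle from flickering is hysteresis: require a stricter score to switch a shape on than to keep it on, so the extra load from newly drawn shapes can't immediately push it back under the cutoff. A minimal sketch in Python-flavored pseudo-code (names like visible, apparent_size and the two factors are illustrative, not from the question):

        # Hysteresis sketch: switching on requires clearing a higher bar than staying on.
        ON_FACTOR = 1.25   # must be this far above the cutoff to start drawing
        OFF_FACTOR = 0.80  # must fall this far below the cutoff to stop drawing

        def update_visibility(objects, shapes_drawn_last_frame, constant2):
            load = max(shapes_drawn_last_frame, 1)
            for obj in objects:
                score = obj.apparent_size / load   # same metric as in the question
                if not obj.visible and score > constant2 * ON_FACTOR:
                    obj.visible = True             # clearly above threshold: switch on
                elif obj.visible and score < constant2 * OFF_FACTOR:
                    obj.visible = False            # clearly below threshold: switch off
                # in between: keep last frame's decision, which kills the flicker

    The gap between the two factors absorbs the feedback the question describes: an object that switches on raises shapes_drawn, but now has to fall well below the cutoff before it switches off again.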

    Read the article

  • Social HCM: Is Your Team Listening?

    - by Mike Stiles
    Does integrating Social HCM into your enterprise make sense? Consider Sam and Christina. Sam is a new hire at a big company. On the job 3 weeks, a question has come up on how to properly file an expense report to get reimbursed. It was covered in the onboarding session, but shockingly enough, Sam didn't memorize or write down every word of the session. The answer is probably in a handout, in a stack of handouts 2 inches thick. It also might be on the employee web site…somewhere. Christina is a new hire at a different big company. She has the same question. She logs into her company's social network, goes to the "new hires" group, asks her question and gets an answer in seconds. Christina says, "Cool!" Sam says, "Grrrr."

    It's safe to say the qualified talent your company wants is accustomed to using social platforms to communicate and get quick answers. As such, Christina is comfortable at her new company, whereas Sam is wondering what he's gotten himself into. Companies that cling to talent communication and management systems that don't speak to talent's needs or expectations put themselves at risk. Right from the recruiting stage, prospects can determine if a company has embraced the communications tools of the 21st century. If they don't see it, alarm bells go off. With great talent more in demand than ever, enterprises should reconsider making "this is the way we do it, you adapt to us" their mantra. Other blogs have clearly outlined that apart from meeting top recruits' expectations, Social HCM benefits the organization itself in terms of efficiency, talent performance & measurement.

    Recruiting: Jobvite shows 64% of companies hired using social. 89% of job seekers are using social in their search. Social can give employers access to relevant communities of prospects and advance the brand. Nucleus Research found general hiring software can provide over 1,000% ROI by reducing churn and improving screening. Social talent acquisition should perform at least as well.

    Learning & Development: Employees, learning from the company or from peers, can be kept on top of the latest needed skillsets and engage in self-paced training so as to advance within the company.

    Performance Management: Just as gamers are egged on by levels and achievements, talent can reach for workplace kudos, be they shout-outs from peers & managers or formally established milestones. Plus employee reviews become consistent and fair as managers have access to the cumulative feedback social offers.

    Workflow and Collaboration: With workforces dispersing in terms of physical location, social provides a platform that helps eliminate the drawbacks distance would have brought just 10 years ago. Finding and connecting with just the right colleague to get the most relevant info at any given time has never been more possible…or expected.

    While yes, marketing has taken the social lead inside the enterprise, HCM (with the word "human" right there in its name) is the obvious locale for the next big integration of social in business. The technology is there. At Oracle, Fusion HCM apps are deeply embedded with Social HCM…just one example of systems taking social across the enterprise. Christina's company is communicating with her in ways she's used to. Sam's company may as well be trying to talk to him using signal flags.

    @mikestiles
    Photo via stock.xchng

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools accounts and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. What I'm ultimately looking for is one CSV file that has every URL with a 404, but also every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, and that there are still only 2 unique URLs (as before), but all of the "linked from" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
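    Since the Webmaster Tools CSV export doesn't include the "linked from" field, the merge step itself is easy to script once that data has been collected by other means (manually, or via an API/scraper). A sketch in Python; the linked_from dict and file names are assumptions, not anything GWT produces:

        import csv

        # Sketch: expand each 404 row into one row per linking page.
        # 'linked_from' is assumed data, gathered however you can get it.
        linked_from = {
            "http://www.abcdef.com/123.php": [
                "http://www.ghijkl.com/naughtypage1.php",
                "http://www.ghijkl.com/naughtypage2.php",
            ],
        }

        with open("crawl_errors.csv", newline="") as src, \
             open("crawl_errors_expanded.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)
            fields = ["URL", "Response Code", "Linked From",
                      "News Error", "Detected", "Category"]
            writer = csv.DictWriter(dst, fieldnames=fields)
            writer.writeheader()
            for row in reader:
                links = linked_from.get(row["URL"], [""])  # keep rows with no known links
                for link in links:
                    writer.writerow({**row, "Linked From": link})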

    Read the article

  • Wrong statistics in AUX_STATS$ might puzzle the optimizer

    - by Mike Dietrich
    We have recommended creating System Statistics for quite a long time. Since Oracle 9i the optimizer works with a CPU- and IO-cost-based model, and in order to give the optimizer some knowledge about the IO subsystem's performance and throughput, System Statistics - once collected - get stored in AUX_STATS$. For this purpose, back in the old Oracle 9i days, some default values were defined, and you'll still find those defaults in Oracle Database 11g Release 2 in AUX_STATS$. But these old values don't reflect the performance of modern IO systems, so it can be a good post-upgrade best practice to create fresh System Statistics if you haven't done so before.

    You can collect System Statistics with:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('start');

    and end the collection later by executing:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');

    You could also run DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N) instead, where N is the number of minutes after which statistics gathering is stopped automatically. Please make sure you do this during a real workload period; it won't make sense to gather these values while the database is idle. Ideally you should do this for several hours. It doesn't affect performance in a negative way, as the values are collected in V$SYSSTAT and V$SESSTAT anyway. And in case you'd like to delete the stats and revert to the old default values, you'd simply execute:

        exec DBMS_STATS.DELETE_SYSTEM_STATS;

    The tricky thing in Oracle Database 11.2 - and the reason I'm actually writing this blog post today - is bug 9842771. It leads to wrong values in AUX_STATS$ for SREADTIM and MREADTIM, off by a factor of 1000, sometimes guiding the optimizer in a totally wrong direction. The workaround is to overwrite these values manually, dividing them by 1000, using the DBMS_STATS.SET_SYSTEM_STATS procedure. See MOS Note 9842771.8 for further information on this bug. The issue is fixed in Oracle Database 11.2.0.3 and above.

    To get some background information about the statistics collected in AUX_STATS$, please read this section in the Oracle Database 11.2 Performance Tuning Guide. And note that gathering System Statistics might have implications if you have mixed workloads, as it interacts with DB_FILE_MULTIBLOCK_READ_COUNT; for more information please read section 13.4.1.2.
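    For illustration, a sketch of the manual workaround described above. The corrected numbers below are placeholders - check your own AUX_STATS$ first and divide your current values by 1000:

        -- Inspect the current system statistics (SREADTIM/MREADTIM are in milliseconds)
        SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

        -- If the bug has inflated them by factor 1000, set corrected values manually
        exec DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 4.2);
        exec DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 9.8);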

    Read the article

  • High resolution CLI?

    - by Mike Williamson
    I want the resolution of my console to match my screen resolution (1440x900). 1024x768 works fine, but for some reason when I set 1440x900 and switch to ttyX, the command prompt is almost right off the bottom of the screen! The Ubuntu splash screen goes off the edge of the screen during boot as well. Here is my /etc/default/grub:

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_GFXMODE=1440x900
        GRUB_GFXPAYLOAD_LINUX=keep

    How do I get my CLI resolution to be 1440x900?
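    One detail worth adding (not stated in the question): edits to /etc/default/grub only take effect after the GRUB configuration is regenerated, so each experiment with GRUB_GFXMODE needs:

        sudo update-grub

    followed by a reboot. Also, 1440x900 only works if the video BIOS actually offers that mode; the vbeinfo command (videoinfo on newer GRUB 2 releases) at the GRUB console lists the supported modes.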

    Read the article

  • Let Me Show You Something: Instagram, Vine and Snapchat for Brands

    - by Mike Stiles
    While brands are well aware of how much more impactful images are than text-only posts on social channels, today you're being presented with platform after platform for hosting, doctoring and sharing photos and videos. Can you play in every sandbox? And if you do, can you be brilliant on all of them? As has usually been the case, so far brands are sticking their toes into new platforms while not actually committing to them, or strategizing for them, or resourcing them. TrackMaven found that of the 123 F500 companies using Instagram, only 22% of them are active on it. Likewise, research from Simply Measured found brands are indeed jumping in, with the number establishing a presence on Instagram up 55% over the past year. Users want them there…brand engagement has exploded 350%, and over 1/3 of the top brands have at least 10,000 followers. BUT…the top 10 brands are generating 33% of all posts, reaping 83% of all engagement.

    Things are also growing on Twitter's Vine, the 6-second looping video app that hit 40 million users in August. The 7th Chamber says 5 tweets a second contain a Vine link. Other studies say branded Vines are 4 times more likely to be shared and seen than rank-and-file branded videos. Why? Users know that even if a video is pure junk, they won't get robbed of too much of their valuable time. Vine is always upgrading so you can make sure your videos are worth viewers' time. You can now edit videos, and save & work on several projects concurrently. What you can't do is upload a finely crafted video into Vine, but you can do that with Instagram. The key to success? Same as with all other content: make it of value. Deliver a laugh or a lesson or both. How-tos, behind-the-scenes peeks, contests, demos, all make sense in the short video format. Or follow Nash Grier's example, which is to just have fun with and connect to your viewers, earning their trust that your next Vine will be as good as the last. Nash is only 15, has over 1.4 million followers, and adds about 100,000 a week. He broke out when one of his videos was re-Vined by some other kid with 300,000 followers. Make good stuff, get it in front of influencers, and your brand Vines could break out as well.

    Then there's Snapchat, the "this photo will self-destruct" platform. How can that be of use to brands besides offering coupons that really expire? The jury is out. But with an audience of over 100 million and a valuation of $800 million, media-with-a-time-limit is compelling. Now there's "Snapchat Stories" that can last 24 hours and be shared to the public at large. You might be able to capitalize on how much more focus gets put on content when there's a time limit on its availability.

    The underlying truth to all of this is that these are all tools. Very cool, feature-rich tools, but tools. You can give the exact same art kit to 5 different people and you'd get back 5 very different works, ranging from worthless garbage to masterpiece. Brands are being called upon to be still and moving image artists. That's what your customers are used to seeing, from a variety of sources. Commit to communicating with them accordingly.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • INNOVATIONS IN PRODUCTS – Partner Briefing PROGRAM - October 1st

    - by Mike.Hallett(at)Oracle-BI&EPM
    Partners are invited to join the Innovations in Products webcast, October 1st, 4:00pm CET / 5:00pm UK. BI & EPM product breakout webcast sessions available on October 1st:

    Topic: Oracle Endeca Information Discovery, Product Overview - Speaker: Emma Palii, BI Sales Consultant - To register: CLICK HERE
    Topic: Hyperion Project Financial Planning, Measure the full financial impacts of your Projects - Speaker: Olivier Bernard, EPM Business Solutions Director - To register: CLICK HERE

    To see the full list of session topics, go to the overall registration page, Innovations in Products October 1st. To access the previously presented Applications and Public-Sector Value Proposition presentations, please click here.

    Delivery format: 1-hour webcast. The Innovations in Products program is a series of Oracle product presentations followed by live Q&A, delivered over the Web. Partner participants have the opportunity to submit questions during the webcast via chat, and subject matter experts will provide verbal answers live. For further information please contact Markku Rouhiainen.

    Read the article

  • Why is apt-get --auto-remove not removing all dependencies?

    - by Mike
    I just installed a package (dansguardian in this case) and apt told me that I had unmet dependencies.

        # sudo apt-get install dansguardian
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          clamav clamav-base clamav-freshclam libclamav6 libtommath0
        Suggested packages:
          clamav-docs squid libclamunrar6
        The following NEW packages will be installed:
          clamav clamav-base clamav-freshclam dansguardian libclamav6 libtommath0
        0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/4,956 kB of archives.
        After this operation, 14.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]?

    So I installed it and the dependencies. So far so good. Later on, I decide that this package just isn't the package for me, so I want to remove it and all of the other junk it installed with it, since I'm not going to be needing any of it:

        # sudo apt-get remove --auto-remove --purge dansguardian
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          dansguardian
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        After this operation, 1,816 kB disk space will be freed.
        Do you want to continue [Y/n]?

    However, it is only removing that one specific package. What about clamav, clamav-base, clamav-freshclam, libclamav6 and libtommath0? Not only did it not remove them, but clamav is actually running a daemon that loads every time the computer boots. I thought --auto-remove would remove not only the package but also the dependencies that were installed with it. So basically, without going through the apt history log file (if I even remember to do so, or if I even remember that a specific package I installed 3 months ago had dependencies along with it), is there a way to remove a package and all of the other dependencies that were installed with it, like in this case?
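    For what it's worth, a sketch of the usual diagnosis: apt only auto-removes packages whose state is "automatically installed", so if the dependencies somehow got marked manual, re-marking them lets autoremove reclaim them. Package names below are taken from the transcript above; whether they were in fact marked manual on this system is an assumption:

        # Check which packages apt considers automatically installed
        apt-mark showauto | grep clamav

        # If the clamav packages were marked manual, flip them back to auto...
        sudo apt-mark auto clamav clamav-base clamav-freshclam libclamav6 libtommath0

        # ...then let apt remove everything no longer needed
        sudo apt-get autoremove --purge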

    Read the article

  • New Time Zone Patch DST V18 is available

    - by Mike Dietrich
    Sorry for not updating the blog more often at the moment - more updates will come soon, as I'm playing around with Oracle Restart and single-instance databases in ASM with Oracle 11.2. Just on the side: a new time zone patch, DST V18, has been available since May 2012. You can download it via patch download from MOS with patch number 13417321.

    What do you think? Will Lufthansa operate a faster jet the other night? Will the jet stream be more powerful? Or a better type of fuel? Or is it just the travel portal which hasn't applied the correct time zone patches to catch the DST change that night in the US, whereas it happens two weeks later in Europe? Guess ...

    And please see the readme for how to apply the patch, and our slides about why time zone patching may be important even in your environment.

    RDBMS: Bug 13417321 - DST 18: HALF YEARLY DST PATCHES, MAY 2012
    OJVM: Bug 14112098 - DST changes for DSTv18 (tzdata2012c) - need OJVM fix

    Read the article

  • Shared hosting bandwidth limits

    - by mike
    I have a shared hosting account with a 20GB monthly bandwidth limit, and I have exceeded it. According to my host, my counter is never reset; they say they use a continuous 30-day counter. So, for example, I make payment on the 1st of each month; say I use 20GB in the last week of the month. My bandwidth counter is not reset on the 1st of the new month, and my bandwidth only becomes available again in the last week of the new month. Is this common practice among shared hosting companies? It sounds a bit shady to me. Surely my counter should be reset on the 1st of every month when I make payment, making 20GB of bandwidth available from the day payment is made?

    Read the article

  • Ubuntu will not start due to full partitions

    - by mike
    I left my computer downloading overnight and it pulled down 35 GB of movies (legal ...). When I restarted in the morning, I booted into my encrypted Windows partition for my work. When I later tried to boot Ubuntu, it failed, and in low-graphics mode it told me it won't boot because the partition is full. I tried rescue mode and it reported 0 MB free. I also cannot delete files with sudo rm, as the file system is mounted read-only. I can mount the partition in Windows, but there is a "write protection" there as well. Should I try a live USB?

    Read the article

  • Open Grid Engine or Akka/Something more fault tolerant?

    - by Mike Lyons
    My use case is that I have a pipeline of independent, stand-alone programs that I want to execute in a certain order on specific pieces of data that are output from previous pipeline stages. The pipeline is entirely linear and doesn't do anything in terms of alternate paths through the pipe. I'm currently using SGE to do this and it works OK; however, occasionally a job will overstep its memory bounds and fail, and then all jobs that require that output data fail. The pipe needs to be restarted in that case, and it seems that whatever is providing the fault tolerance in Akka might solve that for me?
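    As an aside, a minimal sketch of how such a linear pipe is often wired up in SGE, assuming per-stage scripts and that h_vmem is the memory resource your grid enforces (both are assumptions, not from the question):

        # Submit each stage held on the previous one; raise the memory
        # request so a stage is less likely to overstep its bounds.
        jid1=$(qsub -terse -l h_vmem=4G stage1.sh)
        jid2=$(qsub -terse -l h_vmem=4G -hold_jid "$jid1" stage2.sh)
        jid3=$(qsub -terse -l h_vmem=4G -hold_jid "$jid2" stage3.sh)

    This gives ordering but not retries; automatic restart of a failed stage is exactly the supervision behavior the question is hoping to get from something like Akka.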

    Read the article

  • Should I use JavaFX properties?

    - by Mike G
    I'm usually very careful to keep my Model, View, and Controller code separate. The thing is, JavaFX properties are so convenient for binding them all together. The issue is that this makes my entire code design dependent on JavaFX, which I feel I should not be doing: I should be able to change the view without changing too much of the model and controller. So should I ignore the convenience of JavaFX properties, or should I embrace them along with the fact that they reduce my code's flexibility?
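    For illustration, a sketch of one common compromise (not from the question): keep the model free of JavaFX types and adapt it to properties only at the controller boundary, so swapping the view layer never touches the model. Class and field names here are made up:

        import javafx.beans.property.SimpleStringProperty;
        import javafx.beans.property.StringProperty;

        class UserModel {                       // plain model: no JavaFX imports
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        class UserController {                  // controller owns the JavaFX property
            private final UserModel model = new UserModel();
            final StringProperty nameProperty = new SimpleStringProperty();

            UserController() {
                nameProperty.set(model.getName());   // seed from the model
                nameProperty.addListener((obs, oldV, newV) -> model.setName(newV));
            }
        }

    The cost is the adapter boilerplate; the benefit is that only the controller would need rewriting if JavaFX were replaced.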

    Read the article

  • How are these bullets done?

    - by Mike
    I really want to know how the bullets in Radiangames Inferno are done. The bullets seem like they are just billboard particles, but I am curious about how their tails are implemented. They can curve, so this means they are not just a billboard. Also, they appear continuous, which implies that the tails are not made of a bunch of smaller particles (I think). Can anyone shed some light on this for me?
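    For reference, curved, continuous-looking tails are commonly done as ribbon trails: keep a short history of the bullet's recent positions and stitch them into a camera-facing triangle strip whose width (and usually alpha) taper toward the end. A language-agnostic sketch in Python, for 2D positions; all names are illustrative, and whether Inferno actually does this is an assumption:

        from collections import deque

        TRAIL_LEN = 16                      # how many past positions to keep

        class BulletTrail:
            def __init__(self):
                self.history = deque(maxlen=TRAIL_LEN)

            def update(self, position):
                self.history.append(position)   # record this frame's position

            def strip_vertices(self, width):
                """Vertex pairs forming a tapering triangle strip along the path."""
                pts = list(self.history)
                verts = []
                for i in range(1, len(pts)):
                    # direction along the trail, then its perpendicular
                    dx, dy = pts[i][0] - pts[i-1][0], pts[i][1] - pts[i-1][1]
                    length = (dx*dx + dy*dy) ** 0.5 or 1.0
                    nx, ny = -dy / length, dx / length
                    w = width * i / len(pts)    # taper: thin at the tail, wide at the head
                    verts.append((pts[i][0] + nx*w, pts[i][1] + ny*w))
                    verts.append((pts[i][0] - nx*w, pts[i][1] - ny*w))
                return verts                    # draw as a triangle strip

    Because the strip follows the recorded path, the tail curves with the bullet, and with enough history points it reads as one continuous streak rather than discrete particles.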

    Read the article

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and I don't have "x.php" on my site anymore. As is usual with major public vulnerabilities, it seems script kiddies are running tools that hit my site looking for "x.php" in the entire structure of the site - constantly, 24/7. This is wasted bandwidth, traffic and load that I don't really need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks!

    EDIT: I should add that as part of hardening my site, I've started using ZBBlock: "This php security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It then will send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, then they will be served up a permanently recurring 503 OVERLOAD message with a 24 hour timeout." But ZBBlock doesn't do quite exactly what I want to do; it does help with other spam/script/hack blocking.
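    One way to get exactly the time-based IP ban described, outside PHP entirely, is fail2ban watching the web server's access log. A sketch, assuming Apache with logs at the usual Debian/Ubuntu path; the filter name and timings are made up:

        # /etc/fail2ban/filter.d/xphp-probe.conf  (hypothetical filter name)
        [Definition]
        failregex = ^<HOST> .* "(GET|POST) [^"]*x\.php

        # /etc/fail2ban/jail.local
        [xphp-probe]
        enabled  = true
        filter   = xphp-probe
        logpath  = /var/log/apache2/access.log
        maxretry = 1          # ban on the first probe
        bantime  = 3600       # seconds; raise for a near-permanent ban

    This bans at the firewall level, so the scanner never even reaches PHP on subsequent requests.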

    Read the article

  • Technical Integration Roadmap for OBI11g and Oracle Hyperion EPM System

    - by Mike.Hallett(at)Oracle-BI&EPM
    There is an excellent technical whitepaper on the integration roadmap for Oracle Business Intelligence Enterprise Edition and the Oracle Hyperion Enterprise Performance Management System (download at this link). This document lists the integration points between all current releases of Oracle BI EE and EPM System releases, with live links to other relevant documentation also provided. You may also be interested in the overall Hyperion EPM System Documentation Resources, which can be found from the Doc Portal.

    And there are two new tools for EPM @ MyOracleSupport {this needs your Oracle logon}:

    Cumulative Feature Overview Tool: This new tool offers a simple way to determine the features developed between releases to assist you in your upgrade implementations. The tool helps you plan your upgrades by providing concise descriptions of new and enhanced solutions and functionality added between your current and target releases. With the Cumulative Feature Overview Tool, you can quickly and easily find information about new features for each EPM System product.

    Defects Fixed Finder Tool: This new tool provides an efficient way to review the defects fixed in patch set updates, patch set exceptions, and patch sets for major releases, starting with Release 11.1.1. The tool helps you plan patch implementations by providing concise descriptions of defects fixed after your current release. The Defects Fixed Finder enables you to easily find information about defects fixed for each EPM System product.

    Read the article

  • Sprite Animation Toolkits for iPhone

    - by Mike Eggleston
    Does anyone know of any good (and preferably free) sprite animation toolkits/libraries for iOS development? The library should be able to handle collision detection and the movement of the sprites. Back in the '90s there was a Pascal library called Sprite Animation Toolkit by Ingemar Ragnemalm that handled a lot of the heavy lifting for creating animations and such. I am just wondering if there is anything like that in the iOS world.

    Read the article

  • HD Video Performance Unacceptable

    - by Mike Hasselbeck
    Was wondering if anyone could help me boost HD 1080p video performance on my machine? I've got an AMD Athlon X2 dual-core processor, 2 GB RAM and an ATI Radeon 5450 video card. I've installed the latest ATI Catalyst drivers and the hardware acceleration packages, and linked them (I believe) to VLC. Still, it's not running as well as I would like. Any thoughts or suggestions? Any help would be much appreciated. Thanks!

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
    On June 20, all who follow social business and how social is changing how we do business and internal business structures gathered in London for the Dachis Social Business Summit. Alongside Oracle SVP of Product Development Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and our thoughts that they provoked.

    Tweet: The winners will be those who use data to improve performance.
    Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

    Tweet: High performance hinges on integrated teams that interact with each other.
    Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.)

    Tweet: Performance improvements come from making the invisible visible.
    Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

    Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
    Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

    Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
    Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

    Tweet: The sale isn't the light at the end of the tunnel, it's the start of a new marketing cycle.
    Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begins where the sales "victory" ends.

    Tweet: A dead sale is one that's not shared. People must be incentivized to share.
    Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

    Tweet: Social user motivations are competition, affection, excellence and curiosity.
    Thought: Your followers will engage IF they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

    Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
    Thought: Oh well, so much for loving you or being impressed enough with your products & service that they want you to win. We've got a long way to go.

    Tweet: A complaint is a gift.
    Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

    Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
    Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he could care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

    Tweet: 86% of customers are willing to pay more for a better customer experience.
    Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

    @mikestiles
    Photo: stock.xchng

    Read the article

  • Become an Oracle BI or Hyperion Ace Director

    - by Mike.Hallett(at)Oracle-BI&EPM
    Now that you are a specialised Partner, how can you go even further to differentiate yourself as a real expert in the field and cement closer links with Oracle's R&D and strategy teams? Become an Oracle BI or Hyperion ACE Director, and you get more airtime to publish your ideas and stories throughout the Oracle network, thereby promoting yourself and your company. ACE Directors often get more involvement in product development advisory boards and beta-testing programmes.

    What is the Oracle ACE Program? The Oracle ACE Program is designed to recognize and reward members of the Oracle Technology and Applications communities for their contributions to those communities. These individuals are technically proficient and willingly share their knowledge and experiences. Read the FAQ for more details.

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup each partition resided on, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab all the information required.

    The table I'm working on contains 8.8 billion rows, and finding the distinct partition keys from it was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: distinct queries against 8.8 billion rows don't go quickly.

    A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together.

    The first table I needed was sys.partition_range_values. This contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists all of the dayid values defined in the function, which eliminated the need to query my source table for distinct dayid values - everything I needed was already built in for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had dayid values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values, so I created an "everything else" bucket in my SSIS package just in case any dayid values were unaccounted for.

    Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition, so it's enough to query for the object_id and an index value of 1 (the clustered index).

    The final piece of information was the filegroup name. There are two options available to get it, sys.data_spaces or sys.filegroups; for my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups, you need the container_id. This can be obtained by joining sys.allocation_units.container_id to sys.partitions.hobt_id; sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups.

    The end result is the query below, which typically executes for me in under 1 second. I've included the joins to both sys.filegroups and sys.data_spaces, with one of them commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
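    The query itself didn't survive this excerpt, but from the description above it can be reconstructed roughly as follows. This is a sketch, not the author's original: the table name is a placeholder, and the boundary_id-to-partition_number mapping shifts by one depending on whether the function is RANGE LEFT or RANGE RIGHT.

        SELECT prv.value           AS boundary_value,   -- the dayid boundaries from the function
               p.partition_number,
               p.rows,
               fg.name             AS filegroup_name
        FROM sys.partitions p
        JOIN sys.indexes i
          ON i.object_id = p.object_id AND i.index_id = p.index_id
        JOIN sys.partition_schemes ps
          ON ps.data_space_id = i.data_space_id
        JOIN sys.partition_functions pf
          ON pf.function_id = ps.function_id
        LEFT JOIN sys.partition_range_values prv
          ON prv.function_id = pf.function_id
         AND prv.boundary_id = p.partition_number       -- may be off by one: RANGE LEFT vs. RIGHT
        JOIN sys.allocation_units au
          ON au.container_id = p.hobt_id                -- the hobt_id bridge described above
         AND au.type = 1                                -- IN_ROW_DATA
        JOIN sys.filegroups fg
          ON fg.data_space_id = au.data_space_id
        -- JOIN sys.data_spaces ds ON ds.data_space_id = au.data_space_id  -- the alternative join
        WHERE p.object_id = OBJECT_ID('dbo.MyBigTable') -- placeholder table name
          AND p.index_id = 1;                           -- the clustered index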

    Read the article

  • Does purposely linking to an invalid URL and then using 301 affect SEO?

    - by Mike
    On a section of my site, I am currently using .htaccess rewrites to put the ID as part of the URL instead of in the query string, like so:

        RewriteRule ^([a-z_]+)?/?tours/([0-9]+)/(.*) /tours/tour_text.php?lang=$1&id=$2&urlstr=$3 [L]

    For example, if someone goes to /en/tours/12/some-text-here, it will rewrite it to /tours/tour_text.php?lang=en&id=12&urlstr=some-text-here. However, I don't want users to be able to put in just any text, so if they type the wrong some-text-here part, a 301 redirects them to the right page. This works perfectly, but I can see a potential problem arising when localizing the website, so I just wanted to make sure it's not actually a problem.

    As it is now, if someone goes to /en/tours/12/some-text-here, the anchor to the Spanish version of that page will be /es/tours/12/some-text-here (i.e. only changing the "en" to "es"), and the script will then 301 them to the correct Spanish text (something like /es/tours/12/algun-texto-aqui). The reverse is the same: the anchor on the Spanish version to the English version would be /en/tours/12/algun-texto-aqui, and they will be forwarded with a 301 back to /en/tours/12/some-text-here. Basically, the anchor changes the language and the 301 changes the string at the end.

    So I have two questions: Does purposely and permanently having invalid URLs on your site that get 301'ed to the correct ones have any effect on SEO? I could make it show the correct URL to begin with, but this is a significant amount of work due to how I am handling the translations, so I would prefer just to 301 them. And will the invalid URLs contained in the links be added to search engine indexes even if they get 301'ed to another page?

    Read the article

  • Merging Social Accounts: What We Learned This Weekend

    - by Mike Stiles
    Guest post by Erika Brookes.

    We learned that it's not always as easy as you think it's going to be. While it's widely accepted that merging multiple owned Facebook Pages that are duplicating communities and putting out the same type of content is a best practice, actually pulling it off without rattling fans is a trickier proposition. Facebook is nice and clear about how to merge Facebook Pages. Although content is not carried over, Likes from the pages you're merging are. So you can imagine the surprise when such fans start seeing posts in their News Feed from a page they don't believe they ever Liked. One community member accurately likened it to having your bank come under another bank's brand name. The Facebook Page changes to the new brand, just like your debit card, emails, signs and other communication.

    This weekend we did our merge. The Facebook communities of Vitrue, Involver and Collective Intellect were pulled into one community, Oracle Social. Could we have handled it better? Oh yeah. Our intent was to make sure, to the fullest extent possible, that the fans of the Vitrue, Involver, and Collective Intellect brand pages were well informed about the pending page merges in ADVANCE of the merge. While many were aware that Oracle acquired the three companies, many were not. We learned from fan feedback that we should have sent notifications MUCH earlier to make the brand Page merge crystal clear and to answer any questions. That was our bad, our responsibility, and we apologize for Oracle Social showing up in your News Feed if you were not aware that it was a result of your fandom of Vitrue, Involver or Collective Intellect. It was our job to make you aware well in advance.

    Some felt they had never Liked the fan Pages of Vitrue, Involver or Collective Intellect, so they were understandably upset (some cultures may call it "fit to be tied") when they found themselves fans of Oracle Social. One thing to consider is that since 2009, brands and developers have used and enjoyed free Involver tab apps like Twitter, RSS and YouTube (1.2 million of which are currently active), which included an opt-in Liking of the Involver Page. Often, when Liking happens in a manner outside the traditional clicking of a Like button on a brand Page, it's easy to forget a Page was indeed Liked. Lastly, a few felt that their Like of the Page had been "bought." It was not. No fans or Likes were separately purchased. Yes, the companies and the social properties of Vitrue, Involver and Collective Intellect were acquired by Oracle. Those brands are now being coordinated into the larger Oracle brand. In social media, that means those brands are being integrated into the Oracle Social community.

    So what now? We apologize and apply lessons learned. We learned that you not only have to communicate thoroughly and clearly, but you have to communicate well in advance of any actionable items that will affect fans. We're more than willing to walk straight to the woodshed when we deserve it. Going forward, the social team here is dedicated to facilitating content, discussion and sharing around social for marketers, agencies, IT stakeholders and social staffs, including community managers. We anticipate Oracle Social being the premier gathering place for true social innovators as we move into social's exciting next phase of development. Inevitably, some will still feel they are fans of the Page in error. While we hate to see you go, you may unlike the Page if it's not relevant or useful to you.
Let’s continue to contribute, participate, foster our desire to learn, and move forward together positively and constructively - both for current fans of the community and the many fans to come.

    Read the article
