Search Results

Search found 4279 results on 172 pages for 'crystal reports 8 5'.


  • Embedding BIP just got easier!

    - by Tim Dexter
    This makes up for yesterday's documentation faux pas, when I referenced Kan's blog as a source for the debug feature over Leslie's fine official documentation, which had come out some time before Kan's blog. Apologies again, Leslie! I noticed another new feature I was unaware of in the New Features guide for 10.1.3.4.1: the ability to remove the BI Publisher header from the user interface, so you can get this: instead of this. Useful? If you want to host BIP inside another application, such as a portal or custom app, you can make the BIP UI look much more integrated. By default you still get our 'blue' look and feel, but I have documented elsewhere how you can change that. For instance, you can now have BIP hosted inside a BIEE instance and, with very little effort, provide access to all the reports a user might wish to run, rather than picking and choosing what to bubble up to the dashboard. How do you do it? Get on over to the New Features guide and find out; there are some other goodies there too. Note to self: RTFM!

    Read the article

  • The Social Business Thought Leaders - Steve Denning

    - by kellsey.ruppel
    How is the average organization doing? Not very well, according to a number of recent books and reports. A few indicators paint quite a gloomy picture:

    Return on assets and on invested capital across the entire US market has dropped to 25% of its 1965 value (see The Shift Index by John Hagel)
    Firms are dying faster and faster, with the average lifespan of companies listed in the S&P 500 index falling from 67 years in the 1920s to 15 years today (see Creative Disruption by Richard Foster)
    The employee engagement ratio, a high-level indicator of an organization's health that has been shown to affect performance outcomes, does not exceed 20%-30% on average (see Employee Engagement, Gallup, or The Engagement Gap, Towers Perrin)

    In one of the most enjoyable keynotes of the Social Business Forum 2012, Steve Denning (author of Radical Management and independent management consultant) explained why this is happening and, above all, what leaders should do to reverse these worrying trends. For this Social Business Thought Leaders series, we asked Steve to condense his key suggestions into a two-minute video that we strongly recommend. Steve points to traditional management - that set of principles and practices born in the early 20th century and largely inspired by thinkers such as Frederick Taylor and Henry Ford - as the main culprit behind the declining performance of modern organizations. While so many things have changed in the last 100 or so years, most companies are in fact still primarily focused on maximizing profits and efficiency, cutting costs, and coordinating individuals top-down through command and control. The trouble is that in a knowledge-intensive, customer-centred, turbulent market like the one we are experiencing, such concepts not only stifle employees' passion but also destroy the last source of competitive differentiation left: creativity and innovative potential. According to Steve Denning, in a phase change from the old industrial economy to a creative, collaborative knowledge economy, the answer lies in a whole new business ecosystem that puts the individual (both the employee and the customer) at the center of the organization. He calls this new paradigm Radical Management, and in the video interview he articulates the huge challenges and amazing rewards our enterprises face during this inevitable transition.

    Read the article

  • Friday Fun: Wake Up the Box

    - by Mysticgeek
    Another Friday, and it's time to waste the rest of it playing a fun flash game online. Today we take a look at a relaxing physics-based puzzle game called Wake Up the Box. Wake Up the Box The goal of this game is to wake up the box character by attaching parts to the existing wood objects in each stage. You can start a new game or continue your progress from where you left off. At the beginning you get a tutorial showing what you need to do to wake the box. You get wood parts and can attach them to other wood pieces, but not to metal or brick. After successfully waking up Mr. Box, you can go on to the next level, or restart a level at any time if you're having problems figuring out the puzzle. Each level gets more difficult and the puzzles more challenging. Wake Up the Box is a relaxing yet challenging game that lets you have fun instead of working on TPS reports until the whistle blows. Play Wake Up the Box at FreeWebArcade

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary up front: I need to get Google to de-index URLs that have parameters with certain values appended. I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (the default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

    https://example.com (default)
    https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

    # robots.txt generated at http://www.mcanerin.com
    User-agent: *
    Disallow:
    Disallow: /cgi-bin/
    Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked to see what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

    https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. Yet the page loads when it should not! I played around and typed example.com?whatever123 - it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's the parameter itself that needs to be de-indexed.
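    One way to check what the server actually returns for those URLs is curl; a minimal sketch from any shell, using one of the parameter URLs above. Note that a robots.txt Disallow only stops crawling: for Google to drop an already-indexed URL, it has to be allowed to recrawl it and see a 404/410 (or a noindex), so the Disallow rule can actually keep stale URLs indexed longer.

        # Print only the HTTP status code returned for a removed translation URL
        curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/reports/view/884?l=vi_VN&l=hy_AM"

        # Compare with a URL that is known to exist
        curl -s -o /dev/null -w "%{http_code}\n" "https://example.com/"

    If both print 200, the CMS is ignoring the unknown parameter rather than returning a 404, which matches the behavior described above.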

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide, and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that 'Penetrating Emerging Markets' was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list and leveraging 'lower-cost labor arbitrage'. (Read '3 Billion Capitalists'.) Supply Chain Quarterly reports that India and Brazil received the highest rankings among the logistics markets in developing nations. India tops the list of emerging nations in a new index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the new Emerging Market Logistics Index rated 38 developing countries on 3 factors:

    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.

    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers.

    The top 10 emerging countries: India, Brazil, Indonesia, Mexico, Russia, Turkey, United Arab Emirates, Egypt, Saudi Arabia, Malaysia. Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 x 1TB drives) in size. When I use it, it reports only one of the drives, and I would like to be able to use both as if they were one drive. Here are the specs...

    sudo lshw -C disk

    *-disk
         description: ATA Disk
         product: TOSHIBA DT01ACA1
         vendor: Toshiba
         physical id: 0.0.0
         bus info: scsi@1:0.0.0
         logical name: /dev/sda
         version: MS2O
         serial: 13EJ81XPS
         size: 931GiB (1TB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 signature=0005b3dd
    *-disk
         description: ATA Disk
         product: TOSHIBA DT01ACA1
         vendor: Toshiba
         physical id: 0.0.0
         bus info: scsi@4:0.0.0
         logical name: /dev/sdb
         version: MS2O
         serial: 13OX3TKPS
         size: 931GiB (1TB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 signature=00030e86

    and fdisk -l:

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00030e86

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *        4096    41947135    20971520   fd  Linux raid autodetect
    /dev/sdb2        41947136  1952468991   955260928   fd  Linux raid autodetect
    /dev/sdb3      1952468992  1953519615      525312   82  Linux swap / Solaris

    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x0005b3dd

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        4096    41947135    20971520   fd  Linux raid autodetect
    /dev/sda2        41947136  1952468991   955260928   fd  Linux raid autodetect
    /dev/sda3      1952468992  1953519615      525312   82  Linux swap / Solaris

    Disk /dev/md2: 978.2 GB, 978187124736 bytes
    2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000

    Disk /dev/md2 doesn't contain a valid partition table

    Disk /dev/md1: 21.5 GB, 21474770944 bytes
    2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000

    Disk /dev/md1 doesn't contain a valid partition table

    Is it possible to mount both drives to, say, /Home/ so I would have 2TB of usable space?
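    Context note: the fdisk output above shows that /dev/sda2 and /dev/sdb2 already belong to a software RAID array (/dev/md2), which appears to be a mirror (RAID1), since its size matches a single data partition; that would explain why only about 931GiB is usable. One common way to present two data partitions as a single volume is LVM. The following is only a hedged sketch, assuming both data partitions can be reformatted from a rescue/live environment (the volume group and logical volume names are made up, and stopping /dev/md2 destroys the data on it, very likely including the installed system):

        # Run from a rescue/live environment; stopping the array destroys its data
        sudo mdadm --stop /dev/md2

        # Turn both data partitions into LVM physical volumes
        sudo pvcreate /dev/sda2 /dev/sdb2

        # Group them into one volume group spanning both disks (name is an example)
        sudo vgcreate vg_data /dev/sda2 /dev/sdb2

        # One logical volume using all free space, then format and mount it
        sudo lvcreate -l 100%FREE -n lv_home vg_data
        sudo mkfs.ext4 /dev/vg_data/lv_home
        sudo mount /dev/vg_data/lv_home /home

    Spanning both disks this way trades the RAID1 redundancy for capacity: if either disk fails, the whole volume is lost.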

    Read the article

  • INCLUDE ORACLE'S BUSINESS INTELLIGENCE SOFTWARE IN YOUR SOLUTION / 22 Feb 11

    - by Claudia Costa
    We invite you to attend the ISV Partner Embedded BI session taking place on 22 February at Oracle's offices in Porto Salvo. Don't miss this opportunity to discover how you can modernize your application by embedding Oracle Business Intelligence (OBI 11g). During this session you will learn how to make your reports and management information more competitive, while at the same time offering your customers management information with an appealing look. Why does this matter to you? By embedding the Oracle BI solution in your application, you can address market opportunities faster, adding value to your product. You can also lower the total cost of ownership (TCO) and deliver a greater return on investment. If you have any questions, please contact Claudia Costa - Tel: 21 4235027 or email: [email protected]. We look forward to your presence!

    Agenda

    09:15 Registration
    09:30 Welcome and Introduction - Paulo Costa, ISV Manager Oracle Portugal
    09:40 The BI&EPM Market and Oracle's Strategic Position - Mike Hallet, BI and EPM Director Oracle EMEA
    10:00 Oracle Business Intelligence 11g - Most Complete, Open, Integrated and Embeddable solution - Guy Ernoul, Master Principal Sales Consultant
    11:00 Coffee Break
    11:20 Introduction to the embedded BI program for ISV partners - Mike Hallet, BI and EPM Director Oracle EMEA
    12:00 Partner showcase of an Oracle Embedded BI solution
    13:00 Lunch
    14:00 Technical Presentation - Guy Ernoul, Master Principal Sales Consultant
          OBI Administration: Architecture; Creating & Managing the (Presentation, Model, Physical) Layers; Administration using FMW Control; Diagnostics and performance with Enterprise Manager; Demonstration
          OBI Utilization: Analyses & Dashboard Reports; Action Framework; Maps & Scorecards; APIs for Embedding OBI 11g (Go, XML, ADF); Demonstration
    16:00 Close

    22 February 2011, 9.30 a.m. - 4.00 p.m.
    Oracle Showroom, Lagoas Park - Edf 8, Porto Salvo
    Attend this exclusive event. Free registration, limited seats - register now!

    Read the article

  • New training on Power Pivot with recorded video courses

    - by Marco Russo (SQLBI)
    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In recent months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course. We split the entire Power Pivot training course into three editions, offering the more introductory modules at a lower price: Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with more complex and larger data structures than a single table. Intermediate: improves skills on Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions. Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot. There are also two bundles that include two or three editions at a lower price. Most important, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the available editions and their discounted prices. Regular prices start at $29, which means you can start a training with less than $18 using the special promotion. P.S.: we recently launched a new responsive version of the SQLBI web site, and we now also have a page dedicated to all available videos of our sessions at conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

    Read the article

  • Connecting the Dots (.NET Business Connector)

    - by ssmantha
    Recently, one of my colleagues was experimenting with Reporting Services on Dynamics AX (DAX) 2009; whenever he tried to view a report in SQL Server Reporting Manager, he was greeted with an error: "Error during processing Ax_CompanyName report parameter. (rsReportParameterProcessingError)" The Event Log had the following entry:

    Dynamics Adapter LogonAs failed. Microsoft.Dynamics.Framework.BusinessConnector.Session.Exceptions.FatalSessionException at Microsoft.Dynamics.Framework.BusinessConnector.Session.DynamicsSession.HandleException(String message, Exception exception, HandleExceptionCallback callback)

    We later found out that this was due to an incorrect Business Connector account; from past experience, I have noticed this is a very common mistake made during EP and Reporting installations. Remember that the reports need to connect to the Dynamics AX server to run the AX queries, which pass through the .NET Business Connector. To ensure everything works fine, please check the following settings: 1) Your Report Server service account should be the same as the .NET Business Connector proxy account. 2) Ensure that on the server where Reporting Services is installed, the client configuration utility for the Business Connector points to the correct proxy account. 3) And finally, make sure the AX instance you are connecting to has a service account specified for the .NET Business Connector (Administration -> Service accounts -> .NET Business Connector). These simple checkpoints can resolve most Business Connector related errors, which I believe are mostly due to incorrect configuration settings. Happy DAXing!!

    Read the article

  • Encrypting a non-Linux partition with LUKS

    - by linuxn00b
    I have a non-Linux partition I want to encrypt with LUKS. The goal is to be able to store it by itself on a device without Linux and access it from that device when needed with an Ubuntu Live CD. I know LUKS can't encrypt partitions in place, so I created another, unformatted partition of the EXACT same size (using GParted's "Round to MiB" option) and ran this command: sudo cryptsetup luksFormat /dev/xxx where xxx is the partition's device name. Then I typed in my new passphrase and confirmed it. Oddly, the command exited immediately after, so I guess it doesn't encrypt the entire partition right away? Anyway, then I ran this command: sudo cryptsetup luksOpen /dev/xxx xxx Then I tried copying the contents of the existing partition (call it yyy) to the encrypted one like this: sudo dd if=/dev/yyy of=/dev/mapper/xxx bs=1MB and it ran for a while, but exited with this: dd: writing `/dev/mapper/xxx': No space left on device just before writing the last MB. I take this to mean the contents of yyy were truncated when copied to xxx, because I have dd'd this partition before, and whenever I dd to a partition of the exact same size, I never get that error (and fdisk reports they are the same size in blocks). After a little Googling I discovered that all luksFormat'ted partitions have a custom header followed by the encrypted contents. So it appears I need to create a partition exactly the size of the old one plus however many bytes a LUKS header is. No. 1: what size should the destination partition be? And no. 2: am I even on the right track here? UPDATE I found this in the LUKS FAQ: I think this is overly complicated. Is there an alternative? Yes, you can use plain dm-crypt. It does not allow multiple passphrases, but on the plus side, it has zero on disk description and if you overwrite some part of a plain dm-crypt partition, exactly the overwritten parts are lost (rounded up to sector borders). So perhaps I shouldn't be using LUKS at all?
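    For question no. 1, the header size can be read off the container itself; a minimal sketch (LUKS1 reports its payload offset in 512-byte sectors):

        # Payload offset of the LUKS container, in 512-byte sectors
        sudo cryptsetup luksDump /dev/xxx | grep -i "payload offset"

        # Size of the source partition, also in 512-byte sectors
        sudo blockdev --getsz /dev/yyy

    The destination partition then needs at least (source sectors + payload offset) sectors for the dd copy not to be truncated.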

    Read the article

  • Laptop shuts down upon waking from suspend

    - by Bryan Head
    The computer enters suspend when I close the lid, choose suspend from the top-right drop-down, or hit the power button and press suspend; it doesn't matter which. I then attempt to wake the computer either by opening the lid (if it was closed) or hitting the power button. Again, it doesn't matter which. The computer will then immediately shut down about 50% of the time. It seems more likely to shut down the longer it has been in suspend. I took a snapshot of /var/log/pm-suspend.log after a successful resume and after a shutdown. The only difference (outside of timestamps, of course) was that on a successful resume, after reporting the success of various suspend hooks, it writes:

    Thu Jul 5 21:36:45 PDT 2012: performing suspend
    Thu Jul 5 21:37:10 PDT 2012: Awake.
    Thu Jul 5 21:37:10 PDT 2012: Running hooks for resume

    and then reports successful resume hooks. When it shuts down, the log ends at "performing suspend". I diffed the two files, so I know this is the only difference. Thus, it looks like it's not even trying to wake up. Would love some ideas on this one. I've scoured the web but can't seem to find anyone else running into the same issue (it seems more common that the computer shuts down upon entering suspend, or only when hitting the power button to wake; I haven't seen any that are random like mine). I'll update with any requested information.
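    One hedged way to gather more data on a failed resume (assuming the default Ubuntu 12.04 logging) is to look at the kernel log around the suspend cycle, since pm-suspend.log stops at "performing suspend" and the kernel may have recorded why the wake-up turned into a shutdown:

        # Kernel messages around recent suspend/resume cycles
        grep -i "pm:" /var/log/kern.log | tail -n 40

        # Diff the saved pm-suspend snapshots (paths are examples)
        diff /tmp/pm-suspend-good.log /tmp/pm-suspend-bad.log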

    Read the article

  • Hex Dump using LINQ (in 7 lines of code)

    - by Fabrice Marguerie
    Eric White has posted an interesting LINQ query on his blog that shows how to create a hex dump in something like 7 lines of code. Of course, this is not production-grade code, but it's another good example that demonstrates the expressiveness of LINQ. Here is the code:

    // Namespaces needed for Console/Environment, File, and the LINQ operators
    using System;
    using System.IO;
    using System.Linq;

    byte[] ba = File.ReadAllBytes("test.xml");
    int bytesPerLine = 16;
    string hexDump = ba.Select((c, i) => new { Char = c, Chunk = i / bytesPerLine })
        .GroupBy(c => c.Chunk)
        .Select(g => g.Select(c => String.Format("{0:X2} ", c.Char))
            .Aggregate((s, i) => s + i))
        .Select((s, i) => String.Format("{0:d6}: {1}", i * bytesPerLine, s))
        .Aggregate("", (s, i) => s + i + Environment.NewLine);
    Console.WriteLine(hexDump);

    Here is a sample output:

    000000: FF FE 3C 00 3F 00 78 00 6D 00 6C 00 20 00 76 00
    000016: 65 00 72 00 73 00 69 00 6F 00 6E 00 3D 00 22 00
    000032: 31 00 2E 00 30 00 22 00 20 00 65 00 6E 00 63 00
    000048: 6F 00 64 00 69 00 6E 00 67 00 3D 00 22 00 75 00
    000064: 3E 00

    Eric White reports that he typically notices that declarative code is only 20% as long as imperative code. Cross-posted from http://linqinaction.net

    Read the article

  • MCSE and MCSA make a return to the world of certification... but not as you know it.

    - by Testas
    Quick announcement: Microsoft Learning today announced the certification tracks for the upcoming SQL Server 2012 exams.

    You begin by achieving the MCSA - Microsoft Certified Solutions Associate (not to be confused with the old Microsoft Certified Systems Administrator). If you are starting out, this includes taking the following three exams:

    Exam 70-461: Querying Microsoft SQL Server 2012
    Exam 70-462: Administering Microsoft SQL Server 2012 Databases
    Exam 70-463: Implementing a Data Warehouse with Microsoft SQL Server 2012

    If you already have an MCTS in SQL Server 2008, you can take the following path:

    A pass in a SQL Server 2008 (MCTS) Microsoft Certified Technology Specialist exam
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2

    Once you have achieved your MCSA status, you can then start on your MCSE - Microsoft Certified Solutions Expert certification. You have a choice: you can do the MCSE: SQL Server 2012 Data Platform, the MCSE: SQL Server 2012 Business Intelligence, or both.

    MCSE: SQL Server 2012 Data Platform involves:

    Obtaining your SQL Server 2012 MCSA
    Exam 70-464: Developing Microsoft SQL Server 2012 Databases
    Exam 70-465: Designing Database Solutions for Microsoft SQL Server 2012

    There is also an upgrade path:

    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Database Administrator or Database Developer certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-459: Transitioning your MCITP on SQL Server 2008 Database Administrator or Database Developer to MCSE: Data Platform

    MCSE: SQL Server 2012 Business Intelligence involves:

    Obtaining your SQL Server 2012 MCSA
    Exam 70-466: Implementing Data Models and Reports with Microsoft SQL Server 2012
    Exam 70-467: Designing Business Intelligence Solutions with Microsoft SQL Server 2012

    The upgrade path involves:

    A pass in a SQL Server 2008 (MCITP) Microsoft Certified IT Professional Business Intelligence certification
    Exam 70-457: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 1
    Exam 70-458: Transitioning your MCTS on SQL Server 2008 to MCSA on SQL Server 2012, part 2
    Exam 70-460: Transitioning your MCITP on SQL Server 2008 Business Intelligence Developer to MCSE: Business Intelligence

    As a result, if you want to achieve the MCSE in either Data Platform or Business Intelligence and you are starting from scratch, there will be 5 exams to take. If you can upgrade your certification because you already have an MCITP, then it will be three exams.

    Full details and questions can be found at http://www.microsoft.com/learning/en/us/certification/cert-sql-server.aspx

    Thanks
    Chris

    Read the article

  • Important additions to the Silverlight MaskedTextBox

    RadMaskedTextBox is one of the major controls in the Telerik Silverlight suite. It enables you to filter user input and makes working with data much easier for the end user. That is why in the past quarter (Q1) we put in a great effort, went through all the scenarios and user reports we had so far, and made this control as stable as possible. Now, post Q1, we have added some small but important new features to the control. Here they are: Option to get the changed value when the control loses focus. Before this change, each keystroke caused the ValueChanged and ValueChanging events to be raised. Now you have the option to get these events on lost focus. This also gives you the option to enable regular-expression validation of the new value on the ValueChanging event. So if you want a complex ...

    Read the article

  • Is HTML5/WebGL performance bad on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 or so sprites runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I've found it generally has good gaming performance. All the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50 fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear whether any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.

    Read the article

  • Google Chrome for Mac, CSS colors and display profiles

    - by Trevor Burnham
    So, I'm aware that some browsers correct the colors in images in accordance with system settings, and that browsers differ in how they do this. But I'm very surprised when a color specified in a stylesheet appears different from one browser to another on the same system. With the latest Safari and Firefox, if I draw a div with background: #885500, I get a box with that color (as confirmed by the native DigitalColor Meter app): But when I load the same page in Chrome (Mac version 12.0.742.91) on the same system, a MacBook Pro with the default "Color LCD" display profile set, I get a noticeably different-colored box (DigitalColor Meter reports #a34d00—much more red, a bit less green): I tried a few different color profiles, and found that the color reported by DigitalColor Meter changed under Chrome. It stays constant in Safari. What's going on? Is it that Chrome is adjusting its colors depending on the system's display profile, or is it that Safari and Firefox are doing so? Does this happen under other operating systems, or is it purely a Mac phenomenon? And is there any way, from CSS/JavaScript, to detect/prevent this behavior so that colors are consistent across modern browsers on the same system?

    Read the article

  • Oracle Tutor: Learn Tutor in the comfort of your own home or office

    - by emily.chorba(at)oracle.com
    The primary challenge for companies faced with documenting policies and procedures is to realize that they can do this documentation in-house, with existing resources, using Oracle Tutor. Procedure documentation is a critical success component when supporting corporate governance or other regulatory compliance initiatives, and when implementing or upgrading to a new business application. There are over 1000 Oracle Tutor customers worldwide that have used Tutor to create, distribute, and maintain their business procedures. This is easily accomplished because of Tutor's:

    Ease of use by those who have to write procedures (Microsoft Word based authoring)
    Ease of company-wide implementation (complex document management activities are centralized)
    Ease of use by workers who have to follow the procedures (play script format)
    Ease of access by remote workers (web-enabled)

    Oracle University is offering live virtual Tutor classes! The class lasts four days, starting on Tuesday and finishing on Friday. This course is an introduction to the Oracle Tutor suite of products. It focuses on the policy and procedure writing feature set of the Tutor applications. Participants will learn about writing procedures and maintaining these particular process document types, all using the Tutor method. The next three classes are scheduled for:

    April 19 - 22
    May 31 - June 3
    July 5 - 8

    You will learn to:

    Write procedures
    Create procedure flowcharts
    Write support documents
    Create impact analysis reports
    Create role-based employee manuals
    Deploy online employee manuals on an intranet

    Enjoy learning Tutor in your local environment. Start the sign-up process from this link.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Crash after plugging in the Logitech H600 wireless transmitter

    - by SneezyDinosaur
    I recently bought the Logitech H600 Wireless Headset. When I plug in the USB wireless transmitter, Ubuntu 12.04 LTS crashes. Why does this happen, and what do you suggest I do? Same problem here: as soon as the H600 headset is plugged in, 12.04 crashes. The relevant snip from /var/log/Xorg.failsafe.log is below; the complete log can be found as a gist.

    [ 2676.496] (**) Logitech Logitech Wireless Headset: Applying InputClass "evdev keyboard catchall"
    [ 2676.496] (II) Using input driver 'evdev' for 'Logitech Logitech Wireless Headset'
    [ 2676.496] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so
    [ 2676.496] (**) Logitech Logitech Wireless Headset: always reports core events
    [ 2676.496] (**) evdev: Logitech Logitech Wireless Headset: Device: "/dev/input/event16"
    [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Vendor 0x46d Product 0xa29
    [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found absolute axes
    [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found absolute multitouch axes
    [ 2676.496] (--) evdev: Logitech Logitech Wireless Headset: Found keys
    [ 2676.496] (II) evdev: Logitech Logitech Wireless Headset: Configuring as mouse
    [ 2676.496] (II) evdev: Logitech Logitech Wireless Headset: Configuring as keyboard
    [ 2676.496] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2.2/2-1.2.2:1.3/input/input16/event16"
    [ 2676.496] (II) XINPUT: Adding extended input device "Logitech Logitech Wireless Headset" (type: KEYBOARD, id 10)
    [ 2676.496] (**) Option "xkb_rules" "evdev"
    [ 2676.496] (**) Option "xkb_model" "pc105"
    [ 2676.496] (**) Option "xkb_layout" "us"
    [ 2676.497] (II) evdev: Logitech Logitech Wireless Headset: initialized for absolute axes.
    [ 2676.497] Backtrace:
    [ 2676.497] 0: /usr/bin/X (xorg_backtrace+0x37) [0xb7698637]
    [ 2676.497] 1: /usr/bin/X (0xb7510000+0x18c3ba) [0xb769c3ba]
    [ 2676.497] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb74ed40c]
    [ 2676.497] 3: /lib/i386-linux-gnu/libc.so.6 (0xb7155000+0x135d32) [0xb728ad32]
    [ 2676.497] 4: /usr/bin/X (XIChangeDeviceProperty+0x16c) [0xb7630b0c]
    [ 2676.497] 5: /usr/lib/xorg/modules/input/evdev_drv.so (0xb3f4a000+0x634e) [0xb3f5034e]

    Read the article

  • displaying multi-section html documents - best practices

    - by ecpepper
    I work at a research organization and we publish a lot of large-ish documents, usually organized in sections. What I want to know is how best to present these multi-section documents on our website. Presently, I load the entire document as a single page, with each section as its own div, and then show and hide divs as needed via a table of contents and "next" and "prev" buttons. The advantages to this are mainly: 1) you can move between sections very quickly, and 2) it produces consistent analytics (when a page is loaded, I know a report is being read). The disadvantages, however, are real: Readers can't take advantage of browser back/forward buttons to move between sections. It's complicated to create direct links to individual sections (I can do it with JavaScript, but it's not easy for other people to grab and share). For long reports, you have to wait for the full report to load before you can move around (and that can include hordes of images and charts). Do other people have thoughts on better ways to organize this? Here's an example of the current system: http://massbudget.org/825

    Read the article

  • Oracle Virtualbox does not open since upgrade

    - by Langjan
    After upgrading to Ubuntu 12.10, I have been unable to restart my Oracle VirtualBox:

    jan@jan-System-Product-Name:~$ sudo /etc/init.d/vboxdrv setup
    sudo: /etc/init.d/vboxdrv: command not found

    Where do I go from here? Can anyone help, please? I tried to install VirtualBox via these commands:

    echo "deb http://download.virtualbox.org/virtualbox/debian $(lsb_release -sc) contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
    wget -q http://download.virtualbox.org/virtu...racle_vbox.asc -O- | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install virtualbox-4.2

    I also attempted to install via the package manager, but VirtualBox would not start. I received these error reports:

    Because the USB 2.0 controller state is part of the saved VM state, the VM cannot be started. To fix this problem, either install the 'Oracle VM VirtualBox Extension Pack' or disable USB 2.0 support in the VM settings (VERR_NOT_FOUND).
    Result Code: NS_ERROR_FAILURE (0x80004005)
    Component: Console
    Interface: IConsole {db7ab4ca-2a3f-4183-9243-c1208da92392}

    I installed the Extension Pack, with no change in the result. I have added myself as a user, but the error report says the user must be added; when I try to add the user again, it says the user is already added. The following outputs were also received:

    Failed to open a session for the virtual machine Nuwe skelm.
    Implementation of the USB 2.0 controller not found!

    I cannot access the USB 2.0 setting to disable it. Where do I go from here, please?
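    As a partial workaround for the saved-state error, a hedged sketch (the VM name "Nuwe skelm" is taken from the error message above; discarding the saved state loses the machine's suspended session, much like pulling the plug):

        # Discard the saved state so the USB 2.0 setting becomes editable again
        VBoxManage discardstate "Nuwe skelm"

        # If the kernel modules are missing after the upgrade, rebuild and load them.
        # This assumes the Ubuntu-packaged DKMS modules; Oracle's own virtualbox-4.2
        # package instead provides the /etc/init.d/vboxdrv script once installed.
        sudo dpkg-reconfigure virtualbox-dkms
        sudo modprobe vboxdrv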

    Read the article

  • Where did ULSTraceLog go to in the SharePoint 2010 Logging Database?

    - by Jan Tielens
    The Logging Database is one of the many new concepts that will make the life of many SharePoint administrators quite a bit more enjoyable. In SharePoint 2007, the Unified Logging System (ULS) logged all of its data to text files, typically found on your SharePoint server in 12\LOGS. We still have that in SharePoint 2010, but besides those text files, ULS can also write the data to a database! The advantages are obvious: easy to query, one central location for all servers in the farm, easy to build reports, etc. You can find this ULS data in the SharePoint 2010 logging database (typically called WSS_Logging), in the view ULSTraceLog. Quite recently, on one of my demo machines (a standalone installation on Windows 7), I noticed that the ULSTraceLog view was not available in the logging database. It turned out that there is a Timer Job responsible for writing the data to the database; when that Timer Job has never executed, the view is not there (the first time it executes, the view is created). What's more, the timer job was disabled, so the view would never be created, nor would any data be written to the database. If you encounter this situation as well, it's quite easy to solve:

    Open the SharePoint Central Administration site
    Navigate to the Monitoring section
    Select Review Job Definitions
    Click on the job with the name Diagnostic Data Provider: Trace Log
    Click on the Enable button to enable it
    Optionally click Run Now afterwards, to start it immediately

    There you go: the ULSTraceLog view will be created and the ULS messages will appear in the database!

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So times are changing towards cloud development/computing. I'm trying to work out the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts as the central (main) repo, just like yesterday's local filesystem. Every service will base its work on this main repo. Workflow: Github: I create a new Github repo that serves as the main repo for the project. Cloud9ide: I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready, I save (commit) it to the main repo on Github. X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed or not. If not, I make another commit until all tests pass. X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's server, and it will deploy my app automatically. What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9ide (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any predecessors, I really wonder what the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command-line tool). But how Cloud9ide will operate is still somewhat unclear to me. Someone on Cloud9ide mentioned that there will be buttons like deploy, so I can deploy with one click. But that, I guess, will depend on what services that deploy process hooks into, etc. Could someone shed light on this cloud workflow topic and fill in the gaps? Thanks.

    Read the article

  • Spinup time failure

    - by bioShark
    I am not sure whether this is a real question or a bug I should report to Ubuntu. Setup: Ubuntu 11.10, an Intel Q6600, and a Samsung Spinpoint F4 2TB. I had put my PC in suspend, and after I came back and pressed Enter, everything was back to normal once I logged in. However, I had a message from Disk Utility that one disk reports errors. I opened Disk Utility, and my Samsung 2TB disk, the one Ubuntu is installed on, had its SMART status turned red, with an error message on it. The error was: Spinup time failed. The value was 21, while the threshold value was 25 (so the error was reported because 21 < 25). I restarted and booted into Windows to see what HD Tune reported. Unfortunately it was exactly the same: 21/25. After reading up on Wikipedia about SMART and its errors, I learned that spinup time is the time required for the disk to reach full spinning speed, in milliseconds. Then it hit me that in Ubuntu I had suspended the system, essentially stopping all my hardware. And when I rebooted into Windows, the hardware doesn't really stop, so SMART's reading of the spinup time was still from Ubuntu's suspension. So I did a full PC shutdown and then booted up again, both in Ubuntu and Windows, to see if there were different readings. Both reported a successful spinup time of 68 (a little better than 21 :) ), although in Disk Utility I have a nice message: Failed in the Past. So now I am pretty sure that Ubuntu didn't handle the suspend correctly, but then again, should I worry about imminent hardware failure? Am I missing some drivers? Should I report this as a bug to Ubuntu? Sorry if this was a bad place to ask this question.
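    To cross-check Disk Utility's reading, smartmontools can query the attribute directly; a minimal sketch, assuming the disk is /dev/sda and the smartmontools package is installed:

        # Show the raw SMART attribute table; look for the Spin_Up_Time row
        sudo smartctl -A /dev/sda | grep -i spin

        # Run a short self-test and read the result a few minutes later
        sudo smartctl -t short /dev/sda
        sudo smartctl -l selftest /dev/sda

    A single failed spin-up reading after a suspend cycle is not necessarily a sign of imminent failure, but the self-test log gives a second data point.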

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to solving this problem, which I am introducing in my PASS Summit 2013 session. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will, at the end, make up the whole whitepaper.

    Abstract

    With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge to the customer, thus making it possible for the customer to continue learning independently. This paper is based on practical experience with different projects covering online banking and credit card usage.

    Introduction

    A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS.

    Dealing with fraud includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns it a weight, a probability between 0 and 1 that serves as a score for evaluating whether the transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling.

    There are two principal data mining techniques available, both in general data mining and in specific fraud detection techniques: supervised (directed) and unsupervised (undirected). Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers do report frauds at some point in time, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge than in a data set selected with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling.

    The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones, and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small: typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy, and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. SSIS is typically used for loading a DW, and in addition, it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) eases the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite, which includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part, which includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics there. Together with PowerPivot for Excel (also a free download, usable in Excel 2010 and already included in Excel 2013), which brings OLAP functionalities directly into Excel, they make it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
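    As a side note, the lift mentioned above can be made precise. One common formalization compares the fraud rate in the subset selected for checking with the overall fraud rate:

        \text{lift}(S) = \frac{\lvert F \cap S \rvert / \lvert S \rvert}{\lvert F \rvert / \lvert T \rvert}

    where T is the set of all transactions, F ⊆ T the fraudulent ones, and S ⊆ T the subset selected for manual checking. Random sampling gives an expected lift of 1, while a useful model concentrates frauds in S and yields a lift well above 1.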

    Read the article

  • Iva's internship story

    - by anca.rosu
    Hello, my name is Iva and I am a member of the internship program at Oracle Czech Republic. When I joined Oracle, I initially worked as an Alliances and Channel Marketing Assistant, but most recently I have been working in the Demand Generation Team. I am a student at the Economics and Management Faculty of the Czech University of Life Sciences in Prague, specializing in Marketing, Business and Administration, and I have recently passed my Bachelor exams. I heard about Oracle's internship opportunity from a friend. I joined Oracle in September 2008 and worked as an Alliances and Channel Marketing Assistant until May 2009. Here I was responsible for the Open Market Model (OMM), and at the same time I covered communication with partners, Oracle events and team buildings, as well as creating partner databases and reports. At the moment, I support our Demand Generation Team in executing direct marketing campaigns in the Czech Republic, Slovak Republic and Hungary. In addition to this, I help with reporting and contact data management for the whole of the European Enlargement (EE) and Commonwealth of Independent States (CIS) regions. I enjoy my job and I appreciate the experience. Every day is interesting, because every day I learn something new. I am very happy that I was presented with an opportunity to work at Oracle and cooperate with friendly people in a multicultural environment. Oracle gives me the chance to develop my skills and start building my career. I am able to attend interesting training classes, improve my language skills and enjoy sporting activities, such as squash, swimming and aerobics, at the same time. If you dream of working in an international company and would like to join a very dynamic industry, I can really recommend Oracle without a doubt, even if you have no IT background! If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com

    Read the article
