Search Results

Search found 1481 results on 60 pages for 'paper'.

Page 18 of 60

  • How can I copy this quote from a PDF?

    - by isme
    I'm reading a PDF copy of Jerome H. Friedman's paper "Data Mining and Statistics: What's the Connection?" using Google Chrome and the Adobe Reader plugin. It contains an amusing quote that I want to copy and paste to my blog. I used the mouse to select the text of the quote and pressed CTRL + C to copy it. When I paste the text into Notepad, Stack Overflow, or anywhere else, the result is Wingdings-like gibberish:

        ????????????????????????|?????????|????? ?????|??????????????????????????????????????????????????|?????????????? ??????????????P????? ?????????????????????P?|?????????|?????????????????????????????????????????????????????? ????????????Þ?????????????????????????????????|???|???????????????????????????

    The text should instead look like this:

        "A difference between statisticians and computer scientists in this field seems to be that when a statistician has an idea he or she writes a paper; a computer scientist starts a company."

    I had to type that text out manually. This is feasible for such a small quote, but how do I actually copy what I see? Is it something unusual about the PDF, the browser, the plugin, or some combination of the three?

    Read the article

  • Printing from Firefox on different printers and setting the page details beforehand

    - by user1162541
    I've got an odd problem that I have not been able to fix. I have a computer connected to two printers: a receipt printer (Epson TM-U220) and an impact printer (Epson LX-300+). From Firefox, I need to print on both printers at different moments: first on the receipt printer, then on the impact printer, and so on.

    However, whenever I first print on the receipt printer and then go back to the impact printer, the printout is only the width of the receipt printer's paper. The page does not come out properly: just the left part of the page is used for printing and the right part is completely empty, as if I were printing on the small receipt paper. There is no way I can tell Firefox that I am printing on the larger printer. The second print on the impact printer is fine; Firefox then knows it is printing on the impact printer, and the output uses the full page width. But every first print on the impact printer uses the wrong paper size. How can I fix this?

    When I go to PAGE PREVIEW, I cannot set the printer until I actually print the page. Under PRINT PREVIEW > CONFIGURE PAGE, I cannot set the printer I will be using. I can only do so under PRINT PREVIEW > PRINT (where the dropdown box to choose the printer is), and from there my only options are PRINT or CANCEL. If I click PRINT, the computer remembers the setting but that page comes out wrong; if I click CANCEL, it simply does not remember the printer I just set.

    Read the article

  • Logging communication between two VMs

    - by sYnfo
    Hi, I'm trying to set up the "malware lab" described in this paper. So far, I've set up a Windows guest system with one host-only network adapter, configured as follows (sorry if the names aren't exactly right, I don't have an English-language version):

    - IP address: 10.0.0.3
    - Subnet mask: 255.255.255.0
    - Default gateway: not set
    - Preferred DNS: 10.0.0.4
    - Alternate DNS: not set

    I also set up a Linux guest system (Ubuntu 9.04) with two network adapters, bridged (eth0) and host-only (eth1), setting the eth1 IP address to 10.0.0.4 and leaving eth0 to be configured by DHCP. Then I configured iptables as described in the paper, i.e.:

        iptables -F -t nat
        iptables -F -t mangle
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -A PREROUTING -i eth0 -j ACCEPT
        iptables -t mangle -A PREROUTING -p udp -i eth1 -d 10.0.0.3 --dport 53 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 --dport 80 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 -d 10.0.0.3 --dport 6000:7000 -j ACCEPT
        iptables -t mangle -A PREROUTING -i eth1 -j ULOG
        iptables -t mangle -A PREROUTING -i eth1 -j DROP

    Now, when I try to ping the Windows system from within the Linux system, it does not reply. I guess that's perfectly normal, because iptables is blocking the ping response; the same happens when I ping the Linux system from Windows. But when I try to access any web page from within the Windows system, I would expect the action to get logged by iptables. The thing is, I don't see any such lines in the log file (if I'm looking in the right place, that is. :) It is at /var/log/messages, isn't it?). What do you think might be the problem here?

    I should note that this is the first time I'm using Linux, so don't expect ANY working knowledge of Linux at all... :) Also, since English is not my mother tongue, feel free to point out any grammatical mistakes... :) Thanks for any advice.
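    A note for anyone debugging a similar setup (this is general iptables behaviour, not something from the paper): the ULOG target only produces output if the ulogd daemon is installed and running, and even then its messages go wherever /etc/ulogd.conf points (often /var/log/ulog/syslogemu.log), not to /var/log/messages. A minimal way to test the rule itself is to swap in the plain LOG target, which writes through the kernel log with no extra daemon; the "malware-lab:" prefix below is just an illustrative marker:

        # hypothetical test rule: log eth1 traffic via the kernel log instead of ULOG;
        # insert at the top (-I) so it fires before the final DROP rule
        iptables -t mangle -I PREROUTING -i eth1 -j LOG --log-prefix "malware-lab: "

        # generate some traffic from the Windows guest, then watch for the prefix
        tail -f /var/log/syslog | grep "malware-lab:"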

    Read the article

  • Is there a simple but good To Do Manager app for the Mac?

    - by Another Registered User
    Every morning I think about what I am going to do today. So I take a piece of paper and start to write things like:

        [ ] Call Mr. XYZ
        [ ] Answer Support E-Mails
        [ ] Reduce website header height by 20 px
        [ ] Create new navigation bar icons

    And every time I'm done with something, I draw a checkmark in the square. On paper. It would be fun to have something like this as an application, but I don't want a heavy project management tool or integration with e-mail. It should be download, install, use, without fat configuration and a steep learning curve. Usually I don't schedule my to-dos; I just write down every day what I want to accomplish that day. In my experience it doesn't make sense to plan what to do next week, because next week everything looks totally different.

    It would be cool if such a simple utility existed. At the moment I just use TextEdit and delete rows which are done. With a nice interface, this would be much more fun.

    Read the article

  • Image processing algorithm in MATLAB

    - by user261002
    I am trying to reconstruct an algorithm from this paper: "Decomposition of biospeckle images in temporary spectral bands". Here is an explanation of the algorithm:

    "We recorded a sequence of N successive speckle images with a sampling frequency fs. In this way it was possible to observe how a pixel evolves through the N images. That evolution can be treated as a time series and can be processed in the following way: each signal corresponding to the evolution of every pixel was used as input to a bank of filters. The intensity values were previously divided by their temporal mean value to minimize local differences in reflectivity or illumination of the object. The maximum frequency that can be adequately analyzed is determined by the sampling theorem and is half of the sampling frequency fs. The latter is set by the CCD camera, the size of the image, and the frame grabber. The bank of filters is outlined in Fig. 1. In our case, ten 5th-order Butterworth filters were used, but this number can be varied according to the required discrimination. The bank was implemented in a computer using MATLAB software. We chose the Butterworth filter because, in addition to its simplicity, it is maximally flat. Other filters, an infinite impulse response, or a finite impulse response could be used. By means of this bank of filters, ten corresponding signals of each filter of each temporary pixel evolution were obtained as output. Average energy Eb in each signal was then calculated:

        Eb = (1/N) * sum_{n=1..N} pb(n)^2

    where pb(n) is the intensity of the filtered pixel in the nth image for filter b, divided by its mean value, and N is the total number of images. In this way, ten values of energy for each pixel were obtained, each of them belonging to one of the frequency bands in Fig. 1. With these values it is possible to build ten images of the active object, each one of which shows how much energy of time-varying speckle there is in a certain frequency band. False color assignment to the gray levels in the results would help in discrimination."

    And here is my MATLAB code based on that:

        clear all

        % load the 40 recorded speckle images into D
        for i = 0:39
            str = num2str(i);
            str1 = strcat(str, '.mat');
            load(str1);
            D{i+1} = A;
        end

        new_max = max(max(A));
        new_min = min(min(A));

        for i = 20:180
            for j = 20:140
                % time series for pixel (i,j) across the 40 images
                ts = [];
                for k = 1:40
                    ts = [ts D{k}(i,j)];
                end
                ts = double(ts);

                % remove and normalise by the temporal mean
                temp = mean(ts);
                ts = ts - temp;
                ts = ts / temp;

                N = 5;  % filter order
                W = [0.00001 0.05; 0.05 0.10; 0.10 0.15; 0.15 0.20; 0.20 0.25;
                     0.25 0.30; 0.30 0.35; 0.35 0.40; 0.40 0.45; 0.45 0.50];
                N1 = 5;

                % bank of ten Butterworth band-pass filters
                for ind = 1:10
                    Wn = W(ind,:);
                    [B,A] = butter(N1, Wn);
                    ts_f(ind,:) = filter(B, A, ts);
                end

                % average energy per band for this pixel
                for ind = 1:10
                    imag_test1{ind}(i,j) = sum((ts_f(ind,:) ./ mean(ts_f(ind,:))).^2);
                end
            end
        end

        % replace NaNs, median-filter, and rescale each band image
        for i = 1:10
            temp_imag = imag_test1{i}(:,:);
            x = isnan(temp_imag);
            temp_imag(x) = 0;
            temp_imag = medfilt2(temp_imag);
            t_max = max(max(temp_imag));
            t_min = min(min(temp_imag));
            temp_imag = (temp_imag - t_min) .* (double(new_max - new_min) / double(t_max - t_min)) + double(new_min);
            imag_test2{i}(:,:) = temp_imag;
        end

        % display the ten band images with histogram equalisation
        for i = 1:10
            A = imag_test2{i}(:,:);
            B = A / max(max(A));
            B = histeq(B);
            figure, imshow(B)
            colorbar
        end

    But I am not getting the same result as the paper. Does anybody have any idea why, or where I have gone wrong?

    Reference: link to the paper
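    As a sanity check that isn't from the paper, just a standard way to inspect a filter bank: plotting the magnitude response of the ten band-pass filters should show them tiling 0 to fs/2 as in the paper's Fig. 1, and a mis-specified band is a common cause of mismatched results:

        % sketch: inspect the magnitude response of each band-pass filter
        W = [0.00001 0.05; 0.05 0.10; 0.10 0.15; 0.15 0.20; 0.20 0.25;
             0.25 0.30; 0.30 0.35; 0.35 0.40; 0.40 0.45; 0.45 0.50];
        figure; hold on
        for ind = 1:10
            [B,A] = butter(5, W(ind,:));   % 5th-order band-pass design
            [H,f] = freqz(B, A, 512);      % complex frequency response
            plot(f/pi, abs(H))             % normalised frequency (1 = fs/2)
        end
        xlabel('normalised frequency (1 = fs/2)')
        ylabel('|H|')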

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews at different stages of the process with candidates, because we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during a telephone interview. To help you prepare even better, we would like to introduce you to the basics of developing a cheat sheet.

    The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or check items off during the interview; only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:

    • Divide a piece of paper in two by drawing a line. On one side of the paper, list the requirements as mentioned in the job description. On the other side, list your qualities that fulfill the requirements of the employer. This will help you answer questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company's business and can ask more in-depth questions.
    • Be prepared for the most-used introduction question: "Tell me a bit about yourself". Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of five good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). Interviewers use these questions to see how you deal with situations similar to those you might encounter in the job, as past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry, to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades.

    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction: removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked-out pages, there are actually many uses for redaction today beyond military and government. Many companies need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or to less privileged users; insurance companies, banks and legal firms are a few examples.

    The process of digital redaction actually isn't that far from the old paper method:

    Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
    Step 2. Make a copy of that document, since some folks still need to access the original contents
    Step 3. Black out the text or pages you want to hide
    Step 4. Release or distribute this new 'redacted' copy

    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!

    1. With AutoVue's VueLink integration and iSDK, we can integrate with virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
    2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
    3. With AutoVue's conversion capabilities, you can 'burn in' the markups into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK!
    4. Again, leveraging AutoVue's integrations, we can define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that: no redactions. An 'unauthorized' user, when requesting to view that same document, can be redirected to open the redacted copy instead. Release or distribute the new 'redacted' copy: CHECK!

    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember, this is all in our flagship AutoVue product; no additional software required!

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    The complete and most recent version of this article can be viewed from the My Oracle Support Knowledge Section: Master Note for Generic Data Warehousing [ID 1269175.1].

    In this document: Purpose; Master Note for Generic Data Warehousing (Components covered; Oracle Database Data Warehousing specific documents for recent versions; Technology Network Product Homes; Master Notes available in My Oracle Support; White Papers; Technical Presentations).

    This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

    Applies to: Oracle Server - Enterprise Edition - Version 9.2.0.1 to 11.2.0.2 - Release 9.2 to 11.2. Information in this document applies to any platform.

    Purpose: provide a navigation path.

    Components covered:
    • Read Only Materialized Views
    • Query Rewrite
    • Database Object Partitioning
    • Parallel Execution and Parallel Query
    • Database Compression
    • Transportable Tablespaces
    • Oracle Online Analytical Processing (OLAP)
    • Oracle Data Mining

    Oracle Database Data Warehousing specific documents for recent versions: 11g Release 2 (11.2), 11g Release 1 (11.1), 10g Release 2 (10.2), 10g Release 1 (10.1), 9i Release 2 (9.2), 9i Release 1 (9.0).

    Technology Network Product Homes: Oracle Partitioning, Advanced Compression, Oracle Data Mining, Oracle OLAP.

    Master Notes available in My Oracle Support. These technical articles have been written by Oracle Support Engineers to provide proactive, top-level information and knowledge about the components of the database handled under "Database Data Warehousing":
    • Note 1166564.1 - Master Note: Transportable Tablespaces (TTS) -- Common Questions and Issues
    • Note 1087507.1 - Master Note for MVIEW 'ORA-' error diagnosis (for Materialized View CREATE or REFRESH)
    • Note 1102801.1 - Master Note: How to Get a 10046 trace for a Parallel Query
    • Note 1097154.1 - Master Note: Parallel Execution Wait Events
    • Note 1107593.1 - Master Note for the Oracle OLAP Option
    • Note 1087643.1 - Master Note for Oracle Data Mining
    • Note 1215173.1 - Master Note for Query Rewrite
    • Note 1223705.1 - Master Note for OLTP Compression
    • Note 1269175.1 - Master Note for Generic Data Warehousing

    White Papers:
    • Transportable Tablespaces: Database Upgrade Using Transportable Tablespaces: Oracle Database 11g Release 1 (February 2009); Platform Migration Using Transportable Database: Oracle Database 11g and 10g Release 2 (August 2008); Database Upgrade Using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007); Platform Migration Using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)
    • Parallel Execution and Parallel Query: Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010); Effective Resource Utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (February 2010); Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009); Parallel Execution with Oracle Database 10g Release 2 (June 2005)
    • Oracle Data Mining: Oracle Data Mining 11g Release 2 (March 2010)
    • Partitioning: Partitioning with Oracle Database 11g Release 2 (September 2009); Partitioning in Oracle Database 11g (June 2007)
    • Materialized Views and Query Rewrite: Oracle Materialized Views and Query Rewrite (May 2005); Improving Performance Using Query Rewrite in Oracle Database 10g (December 2003)
    • Database Compression: Advanced Compression with Oracle Database 11g Release 2 (September 2009); Table Compression in Oracle Database 10g Release 2 (May 2005)
    • Oracle OLAP: On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009); Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)
    • Generic: Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010); Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009); Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009); Best Practices for a Data Warehouse on Oracle Database 11g (September 2008)

    Technical Presentations. A selection of ObE (Oracle by Example) documents:
    • Generic: Using Basic Database Functionality for Data Warehousing (10g)
    • Partitioning: Manipulating Partitions in Oracle Database (11g Release 1); Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1); Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g)
    • Materialized View and Query Rewrite: Using Materialized Views and Query Rewrite Capabilities (10g); Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g)
    • Oracle OLAP: Using Microsoft Excel with Oracle 11g Cubes (how to analyze data in Oracle OLAP cubes using Excel's native capabilities); Using Oracle OLAP 11g with Oracle BI Enterprise Edition (creating OBIEE metadata for OLAP 11g cubes and querying them in BI Answers); Building OLAP 11g Cubes; Querying OLAP 11g Cubes; Creating Interactive APEX Reports over OLAP 11g Cubes

    A selection of presentations from the BIWA website:
    • Extreme Data Warehousing with Exadata, by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)
    • Data Mining Made Easy! Introducing Oracle Data Miner 11g Release 2 New "Workflow" GUI, by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB)
    • Best Practices for Deploying a Data Warehouse on Oracle Database 11g, by Maria Colgan (December 2009) (slides 3MB, recording 18MB, white paper 3MB)

    Read the article

  • Challenges and Opportunities to Drive Change in the Healthcare System Explored at America’s Health Insurance Plans Exchange Conference and Institute 2013

    - by elaine blog
    The program theme at the June America's Health Insurance Plans (AHIP) Exchange Conference and AHIP's Institute 2013 was "Transforming Our Health Care System: Navigating and Succeeding in the New Marketplace". Topics included care delivery transformation, innovation for a new healthcare ecosystem, Health Insurance Exchanges, the nexus of consumerism, retail and healthcare, driving value through improved operations, and leveraging technology, data and innovation to transform care.

    Oracle participated as a sponsor of both conferences, signaling the significant investment and activity Oracle continues to make in helping health plans, providers and government agencies become more efficient and more relevant in the healthcare marketplace. AHIP is a national trade association representing the health insurance industry. AHIP's members provide health and supplemental benefits to more than 200 million Americans through employer-sponsored coverage, the individual insurance market and public programs such as Medicare and Medicaid. AHIP advocates for public policies that expand access to affordable health care.

    Health plans are focusing on the Health Insurance Exchanges and the opportunities they offer to provide better access and higher-quality healthcare. With the opportunities come operational challenges to implementation, and innovative technology solutions to consider. At the Exchange Conference, Oracle hosted a breakfast symposium on "Strategies for Success: Driving Business Transformation in the Growing Health Insurance Exchange Market". With Health Insurance Exchanges as catalysts for change, attendees learned how to achieve integration within an Exchange and deploy new business strategies to support health reform initiatives. Discussion covered steps and processes to successfully establish and implement enrollment systems, quote-to-card activities, program pricing, claims billing, automated claims processing and new customer service tools.

    Piyush Pushkar, COO of Benefitalign, an Oracle partner that provides solutions for adopting innovative business models for retail, HIX, consumer-centric health plan and benefits administration, spoke on the state of the Exchanges in the U.S. and the activities health plans are engaged in to support individuals entering the healthcare system, including sales automation, member enrollment automation/portals and integration strategies with the Exchanges. The Oracle and Benefitalign partnership allows seamless integration between a health plan enrollment solution and the HIX individual market, and allows the health plan to customize and characterize the offerings available to the HIX that may or may not be available through other channels. This approach can benefit the health plan through separation of interests, but also because some state-run HIXs require such separation.

    Janice W. Young, Program Director, Payer IT Strategies, IDC Health Insights, reviewed a survey of health plans on their investment priorities for last year as well as this year. She also identified the 2013-2015 strategies of going to market with front-end and compliance investments; leveraging existing business processes and internal technologies; and establishing best practices. Of key interest to the audience was a reform-era payer solutions platform overview mapping technologies to support business operations.

    David Bonham of the Oracle Health Insurance organization moderated the panel and spoke on Oracle's presence in healthcare and its products for payers, which help them drive efficiencies and gain a competitive advantage in an ever-changing market. Oracle serves healthcare stakeholders with applications such as billing, rating and underwriting, analytics, CRM and enrollment, as well as products for processing health insurance claims, including pricing and benefits administration and payment of providers through alternative, non-fee-for-service reimbursement methods.

    Oracle in Healthcare... Did you know?
    • More than 80 healthcare payers run Oracle applications.
    • More than 300 leading healthcare providers run Oracle applications.
    • 10 of the top 12 Fortune Global 500 healthcare organizations run Oracle applications.

    For more information on Oracle solutions for healthcare payers, please visit oracle.com/insurance or these individual solution pages: Oracle Health Insurance Components, Oracle Insurance Insbridge Rating and Underwriting, Oracle Insurance Revenue Management and Billing, Oracle Documaker, Oracle Healthcare, Oracle CRM.

    Related resources:
    • Webcast on demand: Strategies for Success: Driving Business Transformation in the Growing Health Insurance Exchange Market
    • Strategy brief: Executing on the Individual Mandate: Opportunities and Challenges for Healthcare Payers
    • White paper: Navigating Alternative Provider Reimbursement Models of the Future
    • Strategy brief: Enterprise Rating Agility Improves Payer Response to Healthcare Reform
    • Podcast: Technology Implications of Healthcare Reform

    Don't forget to keep up with us year-round:
    • Facebook: www.facebook.com/oracleinsurance
    • Twitter: www.twitter.com/oracleinsurance
    • YouTube: www.youtube.com/oracleinsurance

    Read the article

  • Google, typography, and cognitive fluency for persuasion

    - by Roger Hart
    Cognitive fluency is, roughly, how easy it is to think about something. Mere exposure (or familiarity) effects are basically about reacting more favourably to things you see a lot, which is part of why marketers in generic spaces like insipid mass-market lager will spend quite so much money on getting their logo daubed about the place; or why that guy at the bus stop starts to look like a dating prospect after a month or two. Recent thinking suggests that exposure effects likely spin off cognitive fluency: we react favourably to things that are easier to think about.

    I had to give tech support to an older relative recently, and suggested they Google the problem. They were confused. They could not, apparently, Google the problem, because part of it was that their Google toolbar had mysteriously vanished. Once I'd finished trying not to laugh, I started thinking about typography. This is going somewhere, I promise.

    Google is a ubiquitous brand. Heck, it's a verb, and their recent, jaw-droppingly well constructed Paris advert is more or less about that ubiquity. It trades on Google's integration into any information-seeking behaviour. But, as my tech support encounter suggests, people settle into comfortable patterns of thinking about things. They build schemas, and altering them can take work. Maybe the ubiquity even works to cement that.

    Alongside their online effort, Google is running billboard campaigns to advertise Chrome, a free product in a crowded space. They are running these ads in some kind of kooky Calibri/Comic Sans hybrid. Now, at first it seems odd that one of the world's more ubiquitous brands needs to run a big print campaign in public places; surely they have all the fluency they need? Well, not so much. Chrome, after all, is not the same as their core product, so there's some basic awareness work to do, and maybe a whole new batch of exposure effect to try and grab.

    But why the typeface? It's heavily foregrounded, and the ads are extremely textual. Plus, don't we all know that jovial, off-beat fonts look unprofessional, or something? There's a whole bunch of people who want (often rightly) to ban Comic Sans. I wonder, though: are Google trying to subtly disrupt cognitive fluency? There's an interesting paper (pdf) about, among other things, the effects of typography on the way people answer survey questions. Participants given the slightly harder-to-read question gave more abstract answers. The paper references other work suggesting that, generally speaking, less fluent question framing elicits more considered answers.

    The Chrome ad typeface is less fluent for print. Reactions may therefore be more considered, abstract, and disruptive. Is that, in fact, what Google need? They have brand ubiquity, but here they want to change accustomed behaviour, to get people to think about changing their browser. Is this actually a very elegant piece of persuasive information design? If you think about their "what is a browser?" vox pop research video, there's certainly a perceptual barrier they're going to have to tackle somehow.

    Read the article

  • Three Key Tenets of Optimal Social Collaboration

    - by kellsey.ruppel
    Today's blog post comes to us from John Brunswick! This post is an abridged version of John's white paper, in which he discusses three principles for optimizing social collaboration within an enterprise.

    By [email protected], Oracle Principal Sales Consultant

    Effective social collaboration is actionable, deeply contextual, and inherently derives its value from business entities outside of itself. How does an organization begin the journey from traditional, siloed collaboration to natural, business-entity-based social collaboration? Successful enablement of enterprise social collaboration requires that organizations embrace the following tenets and understand that traditional collaborative functionality has inherent limits; it is innovation and integration in accordance with these tenets that will provide net-new efficiency benefits.

    Key tenets of optimal social collaboration:

    • Leverage a ubiquitous social fabric. Collaborative activities should be supported through a ubiquitous social fabric, providing a personalized experience, broadcasting key business events and connecting people and business processes. This supports education of participants working in and around a specific business entity, who will benefit from an implicit capture of tacit knowledge, and provides continuity between participants. In the absence of this ubiquitous platform, activities can still occur but are essentially siloed, causing frequent duplication of effort across similar tasks, with critical tacit knowledge eluding capture.

    • Supply continuous context to support decision making and problem solving. People generally engage in collaborative behavior to obtain a decision or the resolution of a specific issue. The time to achieve resolution is referred to as "solve time". Users have traditionally been forced to switch or "alt-tab" between business systems and synthesize their own context across disparate systems and processes. The constant loss of context forces end users to exert a large amount of effort that could be spent on higher-value problem solving.

    • Extend the collaborative lifecycle into the back office. Beyond the solve time of decision-making efforts, additional time is expended formalizing the resolution that was generated from collaboration in a system of record. Extending collaboration to result in the capture of an explicit decision maximizes efficiencies, creating a closed circuit for a particular thread. This type of structured action may exist today within your organization's customer support system around opening, solving and closing support issues, but generally does not extend to sales-focused collaborative activities.

    Excelling in the unstructured future. We will always have to deal with unstructured collaborative processes within our organizations. Regardless of the participants and the nature of the collaborative process, two things are certain: the origination and end points are generally known and relate to a business entity, perhaps a customer, opportunity, order, shipping location, product or otherwise. Imagine the benefits if an organization's key business systems supported a social fabric, provided continuous context and extended the lifecycle around collaborative decision making to include output into back-office systems of record. The technical hurdle to embracing optimal social collaboration would fall away, leaving the company with an opportunity to focus on and refine how processes were approached. Time and resources previously required could then be reallocated to innovation that supports competitive differentiation unique to your business.

    How can you achieve optimal social collaboration? Oracle Social Network enables business users to collaborate with each other using a broad range of collaboration styles and integrates data from a variety of sources and business applications, allowing you to achieve optimal social collaboration. Looking to learn more? Read John's white paper, where he discusses the three principles in further detail.

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community: to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good conversation starter.

    During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the health insurance industry in the recent past, and she wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise when she found out that I had spent the better part of a decade managing content within the health insurance industry, and I had discovered a great topic for my first blog post! I offer some insights from health insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What should solution providers be aware of when offering solutions to these industries?

    The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs.

    The first thing a solution provider must be aware of about the health insurance industry is that it tends to be transaction-intensive. These are the companies that manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the health insurance industry has benefited, and will continue to benefit, from the efficiency that technology brings to traditionally paper-driven processes.

    We are all aware of the significant impact the healthcare reform bill has had on the health insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, the administrative costs of health insurance include the insurer's cost to administer the health plan and the costs borne by employers, health-care providers, governments and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancement in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality, reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.

    The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process, and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations, and allow eBusiness and marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.

    At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • Impressions of my ASUS eee slate EP121 - Dual core 4GB, 64GB SSD

    - by tonyrogerson
    This thing is lovely. It comes with a very nice Bluetooth keyboard that gives good feedback on each keypress. There is no mouse, but you can use the stylus or get yourself a Bluetooth mouse; me, I've opted for a Microsoft Arc mouse, which is a delight to use. The USB doors are a pain to open for the first time if, like me, you don't have any fingernails.

    It came as a surprise that the slate shows four processors (dual core with multi-threading); I didn't really look at the processor, as I was more interested in the amount of memory and the SSD. You don't get the full 4GB even with the 64-bit version of Windows 7 installed (which I immediately upgraded to Ultimate through my MSDN subscription). The box is extremely responsive; it loads Winword in literally a second. I've got Office 2010 and OneNote 2010 on there now. One problem: after applying all 43 Windows updates since the upgrade, the machine was still sat on step 3 of 3 of the "configuring updates" start-up screen after about an hour. You can't turn this machine off without using a paper clip to reset it, and as I have just found, you do need a paper clip. :) Installing Windows 7 SP1 was effortless.

    One of the first things I did was reduce the font size; by default it's set at 125%, and my eyesight is OK :) so I've set that back down. Amazon Kindle for the PC works really well, with plenty of text on the screen when viewed in portrait. The case it comes with also allows the slate to stand up in various positions (portrait, horizontal) and seems stable enough. The wireless works well and seems to have a better signal than my other two laptops, which is good news. The gadget passed the pose test at work. :)

    I use offline files to keep a copy of all my work stuff locally. I'm not sure what it is (well, it's probably my server), but whenever I try to sync, it runs for a couple of minutes then fails with "network name no longer contactable"; funnily enough it's fine from my big laptop, so I can only guess this may be a driver-type issue on the EP121 itself. Very odd and very annoying.

    I do a lot of presenting and need to plug into a VGA projector, because at most sites that's all that is offered. The EP121 has a mini-HDMI output, which is great except for this scenario: HDMI is digital, VGA is analogue, and you will struggle to find a cost-effective converter. I found HDFury, and HP also do a device, but a better solution appears to be a USB graphics adapter; for instance, I've ordered the ClimaxDigital USB 2.0 to DVI, VGA or HDMI adaptor, which gives everything I need, VGA and DVI output with great resolution. OK, so fingers crossed, because I'm presenting next Wednesday in Edinburgh and not taking my 300kg Lenovo W700 (I'm sure my back just sighed in relief). It certainly works really well on my LED TV, and the install was simple: it just works!

    One of the several reasons for buying this piece of kit was to use it on my LED TV to remote into my main machine to check stuff whilst sat in my living room, and also to watch webcasts and lecture videos in comfort away from my office. Because of the wireless speed limitation, I'm opting for a USB network adapter from Belkin; that will also allow me to take advantage of my home gigabit network. There are only two USB ports on the slate, so I'm going to knock up a hub so connecting it is straightforward and simple. I'm also going to purchase a second power supply so I don't have to faff about with that either.

    I now have the x64 developer edition of SQL Server 2008 R2 on it (yes, everything :) ), with about 16GB left to play with on the machine, but that will be fine. I'll put AdventureWorks on there so I can play and demo stuff, which is all I'm after from this; my development machine is significantly more powerful and meets my storage needs too.

    Travel test this weekend and next week: I'm in Dundee for my final exam for the master's degree.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012.

    1. One-Stop Shop for Oracle Webcasts. Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content.

    2. OAM/OVD JVM Tuning. Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    3. White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads. This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology.

    4. Architected Systems: "If you don't develop an architecture, you will get one anyway..." "Can you build a system without taking care of architecture?" asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge."

    5. Backup and Recovery of an Exalogic vServer via rsync. "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thought about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer."

    6. This Week on the OTN Architect Community Home Page. Make time to check out this week's features on the OTN Solution Architect Homepage, including: SOA Practitioner Guide: Identifying and Discovering Services; a technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster; and the podcast Are You Future Proof?

    7. Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley. "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Mead's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    8. OIM 11g: Multi-thread approach for writing custom scheduled job | Saravanan V S. Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs."

    9. How to Create Virtual Directory in Weblogic Server | Zeeshan Baig. Oracle ACE Zeeshan Baig shows you how in six easy steps.

    10. SOA Galore: New Books for Technical Eyes Only. Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.
    Thought for the Day: "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" -- Anonymous (source: SoftwareQuotes.com)

    Read the article

  • The Rise of Project Intelligence and Why It Matters

    - by Melissa Centurio Lopes
    By Amy DeWolf

    Are you doing any of these in your organization? How are you leveraging historical data to forecast projects?

    There's a lot going on in government today. The economic pressures agencies feel from the uncertainty of budget cuts and sequestration affect every part of an organization, including the Project Management Office (PMO). The PMO is responsible for monitoring and administering government IT projects. As time goes on, priorities shift, technology advances, and new regulations are imposed, all of which make planning and executing projects more difficult. For example, think about your own projects: how many boxes do you need to check, and how many hoops do you need to jump through, to ensure you comply with new regulations? While new regulations and technology advancements can be a good thing, they add an additional layer of complexity to already complex projects.

    To overcome some of these pressures, particularly new regulations, many in the PMO world are adopting a new approach: Project Intelligence (PI). According to a new Oracle Primavera white paper, "The Rise of Project Intelligence: When Project Management is Just Not Enough", "PI uses Business Intelligence methods to leverage historical project data to make more informed decisions and greatly enhance project execution."

    Currently, project managers plan and forecast the possible phases in an execution cycle. However, most project managers don't have the proper tools to do this as effectively as they would like. As the white paper notes, "The underlying deficiencies in most forecasting approaches are that 1) the PM fails in most instances to leverage historical data and 2) the PM doesn't employ current Business Intelligence tools." PI seeks to overturn this by combining the modeling tools used in Business Intelligence for projects with the understanding of Emotional Intelligence for managing people.

    Simply put, Project Intelligence is built on four main pillars:

    1. Actively use historical data to forecast project cycles
    2. Understand the intricacies of complex projects
    3. Enhance social and emotional intelligence in projects
    4. Actively use Business Intelligence tools

    Read our complimentary white paper and discover the importance of emotional intelligence and best practices for improving projects, specifically in terms of communication.

    Read the article

  • Is This Your Idea of Disaster Recovery?

    - by rickramsey
    Don't just make do with less. Protect what you've got. By, for instance, deploying Oracle Solaris 10 inside a zone cluster.

    "Wait," you say, "what is a zone cluster?" It is a zone deployed across different physical servers. "Who would do that!" you ask in a mild panic. Why, an upstanding sysadmin citizen interested in protecting his or her employer's investment with appropriate high availability and disaster recovery. If one server gets wiped out by Hurricane Sandy along with pretty much the entire East Coast of the USA, your zone continues to run on the other server(s). Provided you set them up in Edinburgh. This white paper (pdf) explains what a zone cluster is and how to use it. If a white paper reminds you of having to read War and Peace in school, just use this Oracle RAC and Solaris Cluster Cheat Sheet instead.

    "But wait!" you exclaim. "I didn't realize Solaris 10 offered zone clusters!" I didn't, either! And in an earlier version of this blog post I said that zone clusters were only available with Oracle Solaris 11. But Karoly Vegh pointed me to the documentation for Oracle Solaris Cluster 3.3, which explains how to manage zone clusters in Oracle Solaris 10. Bite my fist! So, the point I was trying to make is not just that you can run Oracle Solaris 10 zone clusters, but that you can run them in an Oracle Solaris 11 environment. Now let's return to our conversation and pick up where we left off...

    "Oh no! Whatever shall I do?" Fear not. Remember how Oracle Solaris 11 lets you create a Solaris 10 branded zone inside a system running Oracle Solaris 11? Well, the Solaris Cluster engineers thought that was a bang-up idea, and decided to extend Oracle Solaris Cluster so that you could run your Solaris 10 applications inside the protective cocoon of an Oracle Solaris 11 zone cluster. Take advantage of the installation improvements and network virtualization capabilities of Oracle Solaris 11 while still running your application on Oracle Solaris 10. You Luddite, you. That capability is in the latest release of Oracle Solaris Cluster, version 4.1, which became available last Friday.

    "Last Friday! Is it too late to get a copy?" You can still get a free copy from our download center (see below). And if you'd like to know what other goodies the 4.1 release of Oracle Solaris Cluster provides, see:

    • What's New in Oracle Solaris Cluster 4.1 (pdf)
    • Free download: Oracle Solaris Cluster 4.1 (SPARC or x86)
    • Tech article: How to Upgrade to Oracle Solaris Cluster 4.0, by Tim Read

    As always, you can get the latest information about Oracle Solaris Cluster, plus technical how-to articles, documentation, and more, from the Oracle Solaris Cluster Resource Page for Sysadmins and Developers. And don't forget about the online launch of Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1, scheduled for Nov 7.

    "I feel so much better now!" Think nothing of it. That's what we're here for.

    - Rick

    Website | Newsletter | Facebook | Twitter
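    For the curious, the zone-cluster workflow is driven by the clzonecluster command. The sketch below is a rough, from-memory illustration rather than a copy-paste recipe: the zone name, paths and host names are placeholders, and the exact properties vary by release, so treat the Oracle Solaris Cluster documentation as the authoritative syntax:

        # sketch: configure, install and boot a zone cluster named zc1
        # across two physical cluster nodes (all names here are hypothetical)
        clzonecluster configure zc1
            create -b
            set zonepath=/zones/zc1
            add node
                set physical-host=phys-node1
                set hostname=zc1-host1
            end
            add node
                set physical-host=phys-node2
                set hostname=zc1-host2
            end
            commit
            exit

        clzonecluster install zc1    # install the zone on all configured nodes
        clzonecluster boot zc1       # boot the zone cluster on every node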

    Read the article

  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Today's guest post is provided by Oracle partner USDM.

    Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper)

    Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best-practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999, and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost-effectively validate this world-class solution.

    With the Part 11 certification, Oracle's WebCenter now gives regulated life science organizations the ability to manage regulatory content in WebCenter, as well as the ability to take advantage of all of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. The product's Part 11 functionality includes: E-Sign, E-Sign Render, Meta Data History, Audit Trail Report, and Access Reporting.

    Gone are the days when life science companies had to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality.

    Oracle has been #1 in life sciences because of its ability to develop cost-effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for its customers. Adding a world-class ECM solution to this product portfolio gives life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion technology stack, with their other leading enterprise applications.

    USDM provides:
    • Expertise in life science ECM business processes
    • Prebuilt life science configuration in WebCenter
    • Validation Accelerator Packs for WebCenter

    USDM is very proud to support Oracle's expanding commitment to Life Sciences.

    For more information please contact: [email protected]

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Know your Data Lineage

    - by Simon Elliston Ball
    An academic paper without the footnotes isn't an academic paper. Journalists wouldn't base a news article on facts that they can't verify. So why would anyone publish reports without being able to say where the data has come from and be confident of its quality; in other words, without knowing its lineage (sometimes referred to as 'provenance' or 'pedigree')?

    The number and variety of data sources, both traditional and new, increases inexorably. Data comes clean or dirty, processed or raw, unimpeachable or entirely fabricated. On its journey from its source to our report, the data can travel through a network of interconnected pipes, passing through numerous distinct systems, each managed by different people. At each point along the pipeline, it can be changed, filtered, aggregated and combined. When the data finally emerges, how can we be sure that it is right? How can we be certain that no part of the data collection was based on incorrect assumptions, that key data points haven't been left out, or that the sources are good? Even when we're using data science to give us an approximate or probable answer, we cannot have any confidence in the results without confidence in the data from which they came.

    You need to know what has been done to your data, where it came from, and who is responsible for each stage of the analysis. This information represents your data lineage; it is your stack-trace. If you're an analyst, suspicious of a number, it tells you why the number is there and how it got there. If you're a developer, working on a pipeline, it provides the context you need to track down the bug. If you're a manager, or an auditor, it lets you know the right things are being done.

    Lineage tracking is part of good data governance. Most audit and lineage systems require you to buy into their whole structure. If you are using Hadoop for your data storage and processing, then tools like Falcon allow you to track lineage, as long as you are using Falcon to write and run the pipeline. It can mean learning a new way of running your jobs (or using some sort of proxy), and even a distinct way of writing your queries. Other Hadoop tools provide a lot of operational and audit information, spread throughout the many logs produced by Hive, Sqoop, MapReduce and all the various moving parts that make up the ecosystem. To get a full picture of what's going on in your Hadoop system, you need to capture both the Falcon lineage and the data-exhaust of the other tools that Falcon can't orchestrate.

    However, the problem is bigger even than that. Often, Hadoop is just one piece in a larger processing workflow. The next challenge is how you bind together the lineage metadata describing what happened before and after Hadoop, where 'after' could be a data analysis environment like R, an application, or even an end-user tool such as Tableau or Excel. One possibility is to push as much as you can of your key analytics into Hadoop, but would you give up the power and familiarity of your existing tools in return for a reliable way of tracking lineage? Lineage and auditing should work consistently, automatically and quietly, allowing users to access their data with whatever tool they need.

    The real solution, therefore, is to create a consistent method by which to bring lineage data from these various disparate sources into the data analysis platform that you use, rather than being forced to use one tool to manage the pipeline for the sake of lineage and a different tool for the analysis. The key is to keep your logs and your audit data, from every source, bring them together, and use the data analysis tools to trace the paths from raw data to the answer that data analysis provides.
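    As a minimal sketch of what that consolidation might look like in Python: everything here is an assumption for illustration; the log file names, the JSON record shape, and the field names are hypothetical, not a real Falcon or Hive format.

        # Hedged sketch: consolidate lineage records from several hypothetical
        # log sources into one graph, then trace an output back to its inputs.
        # Assumes one JSON record per line, e.g.:
        #   {"output": "report_q3", "inputs": ["sales_raw"], "tool": "hive", "ts": "2014-01-07"}
        import json
        from collections import defaultdict

        def load_records(paths):
            records = []
            for path in paths:
                with open(path) as f:
                    records.extend(json.loads(line) for line in f)
            return records

        def build_graph(records):
            # Map each derived dataset to the datasets and tools it came from.
            parents = defaultdict(list)
            for r in records:
                for src in r["inputs"]:
                    parents[r["output"]].append((src, r["tool"], r["ts"]))
            return parents

        def trace(dataset, parents, depth=0):
            # Print the lineage stack-trace style; assumes the graph is acyclic.
            for src, tool, ts in parents.get(dataset, []):
                print("  " * depth + f"{dataset} <- {src} via {tool} at {ts}")
                trace(src, parents, depth + 1)

        recs = load_records(["falcon_lineage.jsonl", "hive_audit.jsonl", "r_session.jsonl"])
        trace("quarterly_report", build_graph(recs))

    Whatever the real formats turn out to be, the shape of the solution is the same: normalize every source into one record type, build one graph, and query it from the analysis environment.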

    Read the article

  • PrintableArea in C# - Bug?

    - by Brandi
    I am having an issue with PageSettings.PrintableArea's width and height values. The Width, Height, and Size properties claim to "get or set" the values. Also, the Inflate() function claims to change the size based on the values passed in. However, none of these attempts to change the value have worked. Inflate() is ignored (no error; it passes as if it worked, but the values remain unchanged). Attempting to set the Height, Width, or Size gives a compiler error: "Cannot modify the return value of 'System.Drawing.Printing.PageSettings.PrintableArea' because it is not a variable". I get the feeling that this means the "or set" part of the description is a lie. Why I want to know this: (Someone always asks...) I have a printing application (C#, WinForms) that for most things is working rather well. I can set the printer settings and page settings objects to control what displays in the print dialog's printer properties. However, with the Microsoft Office Document Image Writer, these settings are sometimes ignored, and the paper size comes back as 0, 0 even when it displayed something else. All I really want is for it to be WYSIWYG as far as the displayed values go, so I change the paper size back to what it should be, but the printable area, if it is wrong, makes the resulting image wonky. The resulting image is the size of the printable area instead of the value in PaperSize. Just wondering if there is a reason for this or a way to get it not to do that. Thanks in advance. :)

    Read the article

  • Rendering LaTeX on third-party websites

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? Some notes: It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone. This question is a repost of a question from MathOverflow that was closed for not being related to math. I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).

    Read the article

  • MS-Excel Negative times

    - by oxinabox.ucc.asn.au
    I'm writing a spreadsheet for a shop manager. What it does is keep track of the number of hours a worker has worked. So you enter times for Monday-Sunday, and then an adjustment; e.g., if they work 40/40/40/32 hours for the month, then you would have an adjustment of -2/-2/-2/+6 to bring the worker to the 38-hour week that he's being paid for. Some (most) weeks may be adjusted for overtime. The spreadsheet then totals the hours. This spreadsheet is supposed to be just a self-calculating version of a paper form. It needs to match the paper form, as it has to be substituted for the old form, which is given to some other member of the company (a pay clerk, I don't know; I'm not rebuilding their whole system, just replacing a form). I'm having trouble entering a negative time in the adj field; the field has [h]:mm formatting, and when I enter a negative time (e.g. -2:00) it displays an error saying "incorrectly formatted equation", with the suggestion that if I was entering a string then I should prefix it with an apostrophe. How do I overcome this?

    Read the article

  • Sweave/R - Automatically generating an appendix that contains all the model summaries/plots/data profiles

    - by John Horton
    I like the idea of making research available at multiple levels of detail, i.e., an abstract for the casually curious, the full text for the more interested, and finally the data and code for those working in the same area or trying to reproduce your results. In between the actual text and the data/code level, I'd like to insert another layer. Namely, I'd like to create a kind of automatically generated appendix that contains the full regression output, diagnostic plots, exploratory graphs, data profiles, etc. from the analysis, regardless of whether those plots/regressions made it into the final paper. One idea I had was to write a script that would examine the .Rnw file and automatically:
    1. Profile all data sets that are loaded (sort of like the Hmisc(?) package)
    2. Summarize all regressions, i.e., run summary(model) for all models
    3. Present all plots (regardless of whether they made it into the final version)
    The idea is to make this a low-effort, push-button sort of thing, as opposed to a formal appendix written like the rest of the paper. What I'm looking for is some ideas on how to do this in R in a relatively simple way. My hunch is that there is some way of going through the namespace, figuring out what something is, and then dumping it into a PDF. Thoughts? Does something like this already exist?

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. And the trick in both implementations seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai). It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is easily understood. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of attached objects is different in each thread. How could I implement that without the execution paths of the threads diverging? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
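    One common way to handle a variable number of attached objects, offered here as a hedged sketch rather than anything from the cited paper, is to flatten the per-object lists on the host into CSR-style offset and neighbor arrays before copying them to the device; each thread then walks a contiguous index range instead of chasing pointers. The Python below shows only that host-side preparation, with illustrative names:

        # Hedged sketch: flatten a jagged "objects attached to object i" structure
        # into CSR-style arrays. On the GPU, thread i would loop over
        # neighbors[offsets[i]:offsets[i+1]], a contiguous range.
        import numpy as np

        def flatten_to_csr(attached_lists):
            counts = np.array([len(lst) for lst in attached_lists], dtype=np.int32)
            offsets = np.zeros(len(attached_lists) + 1, dtype=np.int32)
            np.cumsum(counts, dtype=np.int32, out=offsets[1:])
            neighbors = (np.concatenate([np.asarray(lst, dtype=np.int32)
                                         for lst in attached_lists])
                         if attached_lists else np.empty(0, dtype=np.int32))
            return offsets, neighbors

        # Object 0 has two attachments, object 1 none, object 2 three.
        offsets, neighbors = flatten_to_csr([[3, 7], [], [1, 2, 5]])
        print(offsets)    # [0 2 2 5]
        print(neighbors)  # [3 7 1 2 5]

    Sorting objects by attachment count before the kernel launch also helps, since threads in the same warp then iterate over similarly sized ranges, which limits divergence.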

    Read the article

  • Rendering LaTeX on third-party websites?

    - by A. Rex
    There are some sites on the web that render LaTeX into some more readable form, such as Wikipedia, some Wordpress blogs, and MathOverflow. They may use images, MathML, jsMath, or something like that. There are other sites on the web where LaTeX appears inline and is not rendered, such as the arXiv, various math forums, or my email. In fact, it is quite common to see an arXiv paper's abstract with raw LaTeX in it, e.g. this paper. Is there a plugin available for Firefox, or would it be possible to write one, that renders LaTeX within pages that do not provide a rendering mechanism themselves? (The LaTeX would be enclosed within dollar signs, e.g. $\pi$. See the arXiv link above.) Some notes: It may be impossible to render some of the code, because authors often copy-paste code directly from their source TeX files, which may contain things like "\cite{foo}" or undefined commands. These should be left alone. This question is a repost of a question from MathOverflow that was closed for not being related to math. There is one answer there, which is helpful, but perhaps Stack Overflow can provide better answers. I program a lot, but Javascript is not my specialty, so comments along the lines of "look at this library" are not particularly helpful to me (but may be to others).
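    If one were to attempt such a plugin, the core text-scanning step is largely language-agnostic. Here is a hedged sketch in Python, rather than the JavaScript a Firefox extension would actually need; the whitelist is an illustrative assumption, standing in for whatever subset the chosen renderer can actually handle:

        # Hedged sketch: find $...$ spans, but skip any span containing a
        # command outside a known-renderable set, so \cite{foo} and undefined
        # commands are left alone, per the notes above.
        import re

        RENDERABLE = {r"\pi", r"\alpha", r"\frac", r"\sum", r"\int"}  # illustrative

        def find_renderable_spans(text):
            spans = []
            for m in re.finditer(r"\$([^$]+)\$", text):
                body = m.group(1)
                commands = set(re.findall(r"\\[A-Za-z]+", body))
                if commands <= RENDERABLE:  # every command is one we can render
                    spans.append((m.start(), m.end(), body))
            return spans

        sample = r"The constant $\pi$ renders, but $\cite{foo}$ is left alone."
        for start, end, body in find_renderable_spans(sample):
            print(f"render span [{start}:{end}]: {body}")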

    Read the article

  • algorithm || method to write prog

    - by fatai
    I am a computer science student. What I wonder is whether everyone solves problems with the same method or with different ones; actually, I don't know whether they use any method at all, or whether there are common methods for approaching a problem. Our teachers sometimes give us problems in simple form, but they don't introduce any approach or method(s), so that we could first choose a method, then apply it to the problem, find the solution, and then write the code. I found one method for myself after I failed the course. More precisely, when I encounter a problem in a language, I take some paper and then:
    First, the input/output step: my program will take this/these argument(s) and return X. E.g., in C, the input length is not known in advance and the items are of the same type, so I must use a pointer; the desired output is in the form of a package, so I use a structure.
    Second, the execution part: in this step, I write down all the steps that lead to the final output (see the sketch after this question). E.g., in Python:
    1.) [ + , [- , 4 , [ * , 1 , 2 ]], 5]
    2.) [ + , [- , 4 , 2 ], 5]
    3.) [ + , 2 , 5]
    4.) 7 ==> return 7
    Third, I write test code. E.g., in C++, for the input "append 3 4 5 6 to vector_x; remove indices 0 and 1", the desired output is that vector_x holds: 5 6.
    But now I wonder about other methods: methods used to construct classes (for C++, Python, Java), methods used to communicate between classes or computers, and methods used for solving embedded-system problems (for C). Why do I wonder? Because I have learned that if you don't construct the algorithm on paper, you may not achieve your goal. Like "no money, no lunch", I can say "no algorithm, no program". So feel free to share your own method, or one introduced by someone else that you use and find efficient.
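    For reference, the numbered Python steps above describe evaluating a nested prefix expression, which can be written directly. This is a minimal sketch, with the list encoding taken from the question and the helper names my own:

        # Hedged sketch: reduce a nested prefix expression such as
        # ['+', ['-', 4, ['*', 1, 2]], 5] to a value, mirroring steps 1-4 above.
        import operator

        OPS = {'+': operator.add, '-': operator.sub,
               '*': operator.mul, '/': operator.truediv}

        def evaluate(expr):
            # Numbers evaluate to themselves; [op, lhs, rhs] lists recurse.
            if not isinstance(expr, list):
                return expr
            op, lhs, rhs = expr
            return OPS[op](evaluate(lhs), evaluate(rhs))

        print(evaluate(['+', ['-', 4, ['*', 1, 2]], 5]))  # 7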

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >