Search Results

Search found 7183 results on 288 pages for 'export to excel'.

Page 127/288 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • Getting Error 91

    - by user1695788
    I have a general comprehension issue with classes and objects. What I'm trying to do is pretty simple, but I'm getting errors. In the code example below, the line "Call tables.MethodInCTables" sometimes runs fine and sometimes produces error 91, "Object variable or With block variable not set". In all cases I can "see" the method in the type-ahead, so I know that the code recognizes the "tables" instance and "sees" MethodInCTables. But then I get the run-time error.

        Sub MainSub()
            Dim tables As New CTables
            Call tables.MethodInCTables
        End Sub

        '---- Class Module = CTables
        Sub MethodInCTables()
            '...do something
        End Sub

    Read the article

  • Additional options in MDL

    - by Jane Zhang
    The Metadata Loader (MDL) enables you to populate a new repository as well as transfer, update, or restore a backup of existing repository metadata. It consists of two utilities: metadata export and metadata import. The export utility extracts metadata objects from a repository and writes the information into a file. The import utility reads the metadata information from an exported file and inserts the metadata objects into a repository.

    While the Design Client provides an intuitive UI that covers the most commonly used export and import tasks, OMBPlus scripting lets you specify some additional options and manage a control file for more specialized export and import tasks. Is it possible to use these options from the Design Client? This article tells you how.

    A property file named mdl.properties is used to configure the additional options. It stores options in name/value pairs. Create this file under the directory <owb installation path>/owb/bin/admin/. The options that can be specified in the mdl.properties file are introduced below.

    1. DEFAULTDIRECTORY
    When you open a Metadata Export/Import dialog in the Design Client, a default directory is suggested for the MDL file and log file. For MDL Export the default directory is <owb installation path>/owb/bin/; for MDL Import it is <owb installation path>/owb/mdl/. That may not be the default you want. Specify the DEFAULTDIRECTORY option in the mdl.properties file to set your own default directory for MDL Export/Import, for example:

        DEFAULTDIRECTORY=/tmp/

    Here the default directory is set to /tmp/. Be sure the value ends with a file separator, since it represents a directory. On Windows the file separator is "\"; on Linux it is "/".

    2. MDLTRACEFILE
    Sometimes you want to trace the whole MDL Export/Import process and collect detailed information about its operations to help developers or support troubleshoot. To turn on MDL tracing, set the MDLTRACEFILE option in the mdl.properties file:

        MDLTRACEFILE=/tmp/mdl.trc

    The value on the right side of the equals sign names the file the MDL trace information is written to. If no path is given, the file is placed under the directory <owb installation path>/owb/bin/admin/. The trace file can grow large if the MDL file contains many metadata objects, so use this option sparingly.

    3. CONTROLFILE
    A control file specifies how objects are imported or exported. Setting the CONTROLFILE option in the mdl.properties file makes the control file available from the Design Client too, for example:

        CONTROLFILE=/tmp/mdl_control_file.ctl

    The control file stores options in name/value pairs. Make sure the file exists, otherwise the exception "java.lang.Exception: CNV0002-0031(ERROR): Cannot find specified file" is thrown during MDL Export/Import.

    Next are some options that can be specified in the control file.

    ZIPFILEFORMAT
    By default, MDL exports objects into a zip-format file. This zip file has an .mdl extension and contains two files. For example, if you export the repository metadata into a file called projects.mdl and unzip it, you obtain two files: projects.mdx, which contains the repository objects, and mdlcatalog.xml, which contains internal information about the MDL XML file. The alternative is to combine these two files into a single, unzipped text-format file when exporting. The OMBPlus commands related to MDL have an option called FILE_FORMAT that specifies the format of the exported file; its acceptable values are ZIP or TEXT. When TEXT is selected, the exported file is in text format, for example:

        OMBEXPORT MDL_FILE '/tmp/options_file_format_test.mdl' FILE_FORMAT TEXT FROM PROJECT 'MY_PROJECT'

    How do you achieve this from the Design Client when exporting? The control file option ZIPFILEFORMAT has the same function as FILE_FORMAT; the difference is that its acceptable values are Y or N. When the value is set to N, the exported file is in text format; otherwise it is in zip format.

    LOGMESSAGELEVEL
    Whenever you export or import repository metadata, MDL writes diagnostic and statistical information to a log file. There are three types of status messages: Informational, Warning and Error. By default, the log file includes all of them. Sometimes you only care about one type, for example you want only error messages written to the log file. To achieve this, set the LOGMESSAGELEVEL option in the control file. Its acceptable values are ALL, WARNING and ERROR:
    ALL: all types of messages (Informational, Warning and Error) are written to the log file.
    WARNING: only warning messages are written to the log file.
    ERROR: only error messages are written to the log file.

    UPDATEPROJECTATTRIBUTES, UPDATEMODULEATTRIBUTES
    These two options decide whether the attributes of projects/modules are updated. They take effect when the projects/modules being imported already exist in the repository and the MDL import uses update metadata mode or replace metadata mode. The acceptable values are Y or N. If the value is set to Y, the attributes of the projects/modules are updated; otherwise they are not.

    Now let's walk through an example of how these options take effect in MDL.
    1. Create the property file mdl.properties under the directory <owb installation path>/owb/bin/admin/.
    2. Specify the options in the mdl.properties file (see the following screenshot).
    3. Create the control file mdl_control_file.ctl under the directory /tmp/ and set the following options in it.
    4. Log into the OWB Design Client.
    5. Create an Oracle module named ORA_MOD_1 under the project MY_PROJECT, then export the project MY_PROJECT into the file my_project.mdl.
    6. Check the trace file mdl.trc under the directory /tmp/. It contains very detailed information about the export task above.
    7. Check the exported MDL file. The file my_project.mdl is in text format; open it and you can read its content directly. It concatenates the files my_project.mdx and mdlcatalog.xml.
    8. Modify the project MY_PROJECT and the Oracle module ORA_MOD_1, adding a description to each. Delete the location created in step 5.
    9. Import the MDL file my_project.mdl. In the Metadata Import dialog, the default directory for the MDL file and log file has changed to /tmp/. Use update metadata mode, matching by names.
    10. After importing, check the description of the project MY_PROJECT: it is still there. But the description of the Oracle module ORA_MOD_1 is gone. That is because the option UPDATEPROJECTATTRIBUTES was set to N, while UPDATEMODULEATTRIBUTES was set to Y.
    11. Check the log file: it contains only warning messages, because the log message level was set to WARNING.

    For more details about the three types of status messages, see the Oracle Warehouse Builder Installation and Administration Guide, 11g Release 2.
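
    To make the walkthrough concrete, here is roughly what the two files could look like, assembled from the options described above (the original article shows them only as screenshots, which are not reproduced here):

        mdl.properties (under <owb installation path>/owb/bin/admin/):

            DEFAULTDIRECTORY=/tmp/
            MDLTRACEFILE=/tmp/mdl.trc
            CONTROLFILE=/tmp/mdl_control_file.ctl

        mdl_control_file.ctl (under /tmp/):

            ZIPFILEFORMAT=N
            LOGMESSAGELEVEL=WARNING
            UPDATEPROJECTATTRIBUTES=N
            UPDATEMODULEATTRIBUTES=Y

    With these values, an export from the Design Client defaults to /tmp/, writes a trace to /tmp/mdl.trc, and produces a text-format MDL file; an import in update mode leaves project attributes untouched, updates module attributes, and logs only warnings, which is exactly the behavior observed in steps 6 through 11.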

    Read the article

  • Can the Flash CS4 [embed] tag be made to export assets to frame 2 rather than frame 1?

    - by Tim Knauf
    We're working on a Flash CS4 project where the main .fla file has ballooned in size and 'Publish' is taking forever. I suspect a large amount of the size (and at least some of the compile time) is due to the quantity of audio symbols in the library. I would love to remove this unnecessary bloat from the .fla file. I've experimented with removing an audio symbol from the library and using the [embed] metadata tag instead, like so:

        [Embed(source="audio/music/EndOfLevelDitty.mp3")]
        public var EndOfLevelDitty:Class

    The resulting published file works perfectly, but there is a problem. Our game uses a preloader on the first frame of the timeline, so all other classes need to be exported in frame 2 (as set in Publish Settings, ActionScript 3.0 Settings). So a size report normally begins like this:

        Frame #  Frame Bytes  Total Bytes  Scene
        -------  -----------  -----------  -----------------------------
              1       284515       284515  Scene 1
              2      5485305      5769820  (AS 3.0 Classes Export Frame)

    However, if I use an [embed] tag on a small sound, my size report is now:

        Frame #  Frame Bytes  Total Bytes  Scene
        -------  -----------  -----------  -----------------------------
              1       363320       363320  Scene 1
              2      5407240      5770560  (AS 3.0 Classes Export Frame)

    As you can see, the embedded sound has been exported into frame 1 rather than frame 2. If I were to embed all sounds in this manner, the size of frame 1 would grow to be huge, and users would be looking at a white screen for ages before the preloader frame even loaded. So my question is this: can I use an [embed] tag but have the embedded asset export in frame 2 instead of frame 1?

    Project constraints: our team composition means we can't change to pure Flex at this stage; the compiled .swf needs to be 'all in one', so we can't split the preloader into a separate file; and we can't access external resources.

    Edit: I'd also settle for having the audio in an embedded library SWC, but there seems to be no way to make that embed in frame 2 either; it always ends up in frame 1.

    Read the article

  • EXPORT AS INSERT STATEMENTS: but in SQL*Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but the INSERT statements so generated exceed 2500 characters. I am obliged to execute them in SQL*Plus, so I receive an error message. This is my Oracle table:

        CREATE TABLE SAMPLE_TABLE
        (
          C01  VARCHAR2 (5 BYTE)    NOT NULL,
          C02  NUMBER (10)          NOT NULL,
          C03  NUMBER (5)           NOT NULL,
          C04  NUMBER (5)           NOT NULL,
          C05  VARCHAR2 (20 BYTE)   NOT NULL,
          C06  VARCHAR2 (200 BYTE)  NOT NULL,
          C07  VARCHAR2 (200 BYTE)  NOT NULL,
          C08  NUMBER (5)           NOT NULL,
          C09  NUMBER (10)          NOT NULL,
          C10  VARCHAR2 (80 BYTE),
          C11  VARCHAR2 (200 BYTE),
          C12  VARCHAR2 (200 BYTE),
          C13  VARCHAR2 (4000 BYTE),
          C14  VARCHAR2 (1 BYTE)    DEFAULT 'N' NOT NULL,
          C15  CHAR (1 BYTE),
          C16  CHAR (1 BYTE)
        );

    ASSUMPTIONS:
    a) I am OBLIGED to export the table data as INSERT statements; I am allowed to use UPDATE statements, in order to avoid the SQL*Plus error "SP2-0027: Input is too long (> 2499 characters)";
    b) I am OBLIGED to use SQL*Plus to execute the script so generated;
    c) Please assume that every record can contain special characters: CHR(10), CHR(13), and so on;
    d) I CAN'T use SQL*Loader;
    e) I CAN'T export and then import the table: I can only add the "delta" using INSERT/UPDATE statements through SQL*Plus.
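
    Since assumption (a) explicitly allows UPDATE statements, one common workaround is to INSERT each row with its long column empty and then append the long value in short chunks with UPDATEs, so every generated line stays well under the SQL*Plus limit. Here is a minimal Python sketch of that idea (the table and column names come from the question; the key columns, chunk size, and the fact that only C01, C02 and C13 are shown are assumptions of the sketch):

        # Sketch: emit SQL*Plus-safe statements for one row of SAMPLE_TABLE.
        # Only C01, C02 and the long C13 are shown; a real exporter would emit
        # every column the same way, and needs the row's key to target UPDATEs.

        def sql_chunks(text, size=80):
            """Yield SQL expressions for 'text' in pieces of at most 'size'
            characters, emitting CHR(10)/CHR(13) for line breaks (assumption c)."""
            for i in range(0, len(text), size):
                piece = text[i:i + size]
                expr, run = [], ""
                for ch in piece:
                    if ch in "\r\n":
                        if run:
                            expr.append("'" + run.replace("'", "''") + "'")
                            run = ""
                        expr.append("CHR(%d)" % ord(ch))
                    else:
                        run += ch
                if run:
                    expr.append("'" + run.replace("'", "''") + "'")
                yield " || ".join(expr)

        def row_to_sql(c01, c02, c13):
            yield ("INSERT INTO sample_table (c01, c02, c13) "
                   "VALUES ('%s', %d, NULL);" % (c01, c02))
            for expr in sql_chunks(c13):
                # In Oracle, NULL || 'x' = 'x', so plain concatenation works
                # even for the first appended chunk.
                yield ("UPDATE sample_table SET c13 = c13 || %s "
                       "WHERE c01 = '%s' AND c02 = %d;" % (expr, c01, c02))

        for line in row_to_sql('AB123', 42, "line one\nline two"):
            print(line)

    Because every emitted line is a complete, short SQL statement, the resulting script can be fed to SQL*Plus as-is, which satisfies assumptions (a) and (b).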

    Read the article

  • Google adds support for Excel and PowerPoint files to Google Docs, to ease migration to its cloud office suite

    Google adds support for Excel and PowerPoint files to Google Docs, to ease migration to its cloud office suite. Update of 22/02/11. Google Docs is more and more popular, including in the enterprise. That, at least, is what studies reveal and what the head of professional products at Google France confirmed to us. The only small problem: while Google can host any type of file (videos, text, images, PDF, etc.), it cannot read all of them. In other words, ...

    Read the article

  • Importing CSV file to SQL Server...

    - by sam
    Hi guys, I am trying to import a CSV file to a SQL Server database, with no success. I am still a newbie to SQL Server. Thanks.

        Operation stopped...
        Initializing Data Flow Task (Success)
        Initializing Connections (Success)
        Setting SQL Command (Success)
        Setting Source Connection (Success)
        Setting Destination Connection (Success)
        Validating (Success)
        Messages
        Warning 0x80049304: Data Flow Task 1: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. (SQL Server Import and Export Wizard)
        Prepare for Execute (Success)
        Pre-execute (Success)
        Messages
        Information 0x402090dc: Data Flow Task 1: The processing of file "D:\test.csv" has started. (SQL Server Import and Export Wizard)
        Executing (Error)
        Messages
        Error 0xc002f210: Drop table(s) SQL Task 1: Executing the query "drop table [dbo].[test]" failed with the following error: "Cannot drop the table 'dbo.test', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (SQL Server Import and Export Wizard)
        Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column ""Code"" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard)
        Error 0xc020902a: Data Flow Task 1: The "output column ""Code"" (38)" failed because truncation occurred, and the truncation row disposition on "output column ""Code"" (38)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard)
        Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "D:\test.csv" on data row 21. (SQL Server Import and Export Wizard)
        Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - test_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard)
        Copying to [dbo].[test] (Stopped)
        Post-execute (Success)
        Messages
        Information 0x402090dd: Data Flow Task 1: The processing of file "D:\test.csv" has ended. (SQL Server Import and Export Wizard)
        Information 0x402090df: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has started. (SQL Server Import and Export Wizard)
        Information 0x402090e0: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has ended. (SQL Server Import and Export Wizard)
        Information 0x4004300b: Data Flow Task 1: "component "Destination - test" (70)" wrote 0 rows. (SQL Server Import and Export Wizard)
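
    The wizard output points at a truncation on the "Code" column around data row 21. One way to find the offending values before re-running the wizard is a small pre-scan of the CSV; this is a hedged sketch (the column name comes from the log, the file path from the question, and the 50-character limit is an assumed example of whatever width the wizard assigned):

        # Sketch: report CSV rows whose "Code" field exceeds the column width
        # the Import Wizard assigned (50 here is an assumed example).
        import csv

        LIMIT = 50
        with open(r"D:\test.csv", newline="") as f:
            for row_number, row in enumerate(csv.DictReader(f), start=1):
                code = row.get("Code") or ""
                if len(code) > LIMIT:
                    print("data row %d: Code is %d chars: %r"
                          % (row_number, len(code), code[:60]))

    Raising the source column's width to cover the longest value found (on the flat file source's Advanced page in the wizard) is the usual fix for the 0xc02020a1/0xc020902a truncation pair.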

    Read the article

  • ssh-agent on ubuntu rapidly restarts

    - by Santa Claus
    I am attempting to use ssh-agent on Ubuntu 13.10 so that I will not have to enter my passphrase to unlock a key every time I want to use ssh or git. As you can see below, ssh-agent appears to be restarting for some reason. These commands were executed within a period of less than 5 seconds:

        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-pqm5J0s70NxG/agent.2820; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2821; export SSH_AGENT_PID;
        echo Agent pid 2821;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-VpkOH2WKjT1M/agent.2822; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2823; export SSH_AGENT_PID;
        echo Agent pid 2823;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-EQ6X9JHNiBOO/agent.2824; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2825; export SSH_AGENT_PID;
        echo Agent pid 2825;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-8Iij8kFkaapz/agent.2826; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2827; export SSH_AGENT_PID;
        echo Agent pid 2827;
        andrew@zaphod:~$

    My guess is that ssh-agent is crashing, but how would I know? What log file would it log to?

    Read the article

  • mount nfs subdirectory and still apply parent directory permissions

    - by Christophe Drevet
    An NFS server exports:

        /export/home   computers
        /export/cont1  computers

    On the filesystem, these are the permissions:

        $ ls -al /export/cont1
        drwxr-x---  6 root  group1 4096 2010-05-04 10:57 .
        drwxrwxrwx  5 root  root   4096 2010-05-07 14:52 ..
        drwxrwxrwx  2 root  root   4096 2010-05-06 20:33 .snapshot
        drwxr-xr-x  2 user1 group1 4096 2010-05-04 10:57 user1
        drwxr-xr-x  2 user2 group1 4096 2010-05-04 10:57 user2
        drwxr-xr-x  2 user3 group1 4096 2010-05-04 10:57 user3

    So user4, who is not in group1, can't access this directory or its subdirectories. But on their client machine, this user can do:

        $ sudo mount server:/export/cont1/user3 /mnt/temp

    and then access the directory without having permissions on /export/cont1:

        $ id
        uid=7943(user4) gid=7943(user4) groupes=1189(group4)
        $ ls -al /mnt/temp/
        drwxr-xr-x 3 user3 group1 4096 2010-05-04 10:57 .
        drwxr-xr-x 7 root  root   4096 2010-05-04 11:02 ..
        -rw-r--r-- 1 user3 group1    6 2010-05-04 10:56 README

    Is there a way to apply the /export/cont1 permissions even when it is not mounted? The goal is to let users mount /home/user3 and only access it if they can access /export/cont1 on the NFS server. Put another way: how can I allow a machine to mount /export/cont1/user3 and still not allow user4 to access it? Maybe NFSv4 and Kerberos can help?

    Read the article

  • mount.nfs: access denied by server while mounting (Kerberos authentication)

    - by Nick
    There are plenty of references to this error on Google, and even a question here with the same title, but it seems that "access denied by server while mounting" is a catch-all error. I've tried suggestions that others have used to fix this problem, but they did not work in my case. I'm trying to set up a Kerberos-based NFS file server with shared homes for a Linux network, using Ubuntu 11.04 for both servers and clients. When trying to mount a share using:

        mount 192.168.1.115:/export/home/ /media/tmp

    I get:

        mount.nfs: access denied by server while mounting 192.168.1.115:/export/home/

    This is the same whether I mount it from a client machine or from the server itself. On the server, in /var/log/syslog I get:

        Aug 25 06:22:37 nfs mountd[1580]: authenticated mount request from 192.168.1.115:835 for /export/home (/export/home)
        Aug 25 06:22:37 nfs mountd[1580]: authenticated unmount request from 192.168.1.115:766 for /export/home (/export/home)

    Which is odd, since it says it has authenticated the request, not denied it. /etc/exports:

        /export      *(rw,fsid=0,crossmnt,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)
        /export/home *(rw,insecure,async,no_subtree_check,sec=krb5p:krb5i:krb5)

    On the client:

        me@dt1:/$ rpcinfo -p 192.168.1.115
           program vers proto   port
            100000    2   tcp    111  portmapper
            100024    1   udp  37320  status
            100024    1   tcp  48460  status
            100003    2   tcp   2049  nfs
            100003    3   tcp   2049  nfs
            100003    4   tcp   2049  nfs
            100227    2   tcp   2049
            100227    3   tcp   2049
            100003    2   udp   2049  nfs
            100003    3   udp   2049  nfs
            100003    4   udp   2049  nfs
            100227    2   udp   2049
            100227    3   udp   2049
            100021    1   udp  58625  nlockmgr
            100021    3   udp  58625  nlockmgr
            100021    4   udp  58625  nlockmgr
            100021    1   tcp  49616  nlockmgr
            100021    3   tcp  49616  nlockmgr
            100021    4   tcp  49616  nlockmgr
            100005    1   udp  45627  mountd
            100005    1   tcp  60265  mountd
            100005    2   udp  45627  mountd
            100005    2   tcp  60265  mountd
            100005    3   udp  45627  mountd
            100005    3   tcp  60265  mountd

    Any suggestions I could try?

    Read the article

  • Processes spawned by taskset not respecting environment variables

    - by jonesy16
    I've run into an issue where an Intel-compiler-generated program that I'm running with taskset has been putting its temporary files into the working directory instead of /tmp (defined by the environment variable TMPDIR). Run by itself, it works correctly. Run with taskset, e.g.

        taskset -c 0 <program>

    it seems to completely ignore the TMPDIR environment variable. I verified this by writing a quick bash script. Contents of test.sh:

        #!/bin/bash
        echo $TMPDIR

    When run by itself:

        $ export TMPDIR=/tmp
        $ test.sh
        /tmp

    When run through taskset:

        $ export TMPDIR=/tmp
        $ taskset -c 1 test.sh
        ""

    Another test: if I export the TMPDIR variable inside my script and then use taskset to spawn a new process, the new process doesn't know about that variable:

        #!/bin/bash
        export TMPDIR=/tmp
        taskset -c 1 sh -c export

    When run, the list of exported variables does not include TMPDIR. It works correctly with any other exported environment variable. If I diff the output of:

        export

    and

        taskset -c 1 bash -c export

    I see four changes: the taskset-spawned export doesn't have LD_LIBRARY_PATH or NLSPATH (an Intel compiler variable), SHLVL is 3 instead of 1, and TMPDIR is missing. Can anyone tell me why?
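
    For anyone who wants to reproduce the same diff programmatically, here is a small sketch (it assumes Linux with taskset on the PATH; environment values containing newlines would need more careful parsing):

        # Sketch: compare the environment a taskset-spawned process sees with
        # our own, mirroring the asker's diff of the two 'export' listings.
        import os
        import subprocess

        out = subprocess.check_output(["taskset", "-c", "1", "env"], text=True)
        child = dict(line.split("=", 1) for line in out.splitlines() if "=" in line)

        missing = set(os.environ) - set(child)
        print("variables lost across taskset:", sorted(missing))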

    Read the article

  • Office 2010: It's not just DOC(X) and XLS(X)

    - by andrewbrust
    Office 2010 has released to manufacturing. The bits have left the (product team's) building. Will you upgrade?

    This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions, all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We've come a long way, baby. Or have we?

    As it does every three years or so, debate will now start to rage over whether we need a "14th" version of the PC platform's standard word processor, or a "13th" version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others.

    The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.

    Google Docs and other cloud productivity platforms like Zoho don't really do these things. And there is a growing chorus of voices who say that they shouldn't, because those ancillary capabilities are over-engineered, over-produced and "under-necessary." They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office's core capabilities, the ones people really need, have become commoditized.

    It's hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers? I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide.

    Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I'm in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.

    Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade. Perhaps it's not earth-shattering, but it offers a combination of incremental improvements and a few new major capabilities that I think are quite compelling. I provide a brief roundup of them here. It's admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.

    Across the Suite

    More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues' changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.

    The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook's main window). It's also customizable, allowing users to easily add buttons and options of their choosing, into new tabs, or into new groups within existing tabs.

    Microsoft has also taken the File menu (which was the "Office Button" menu in the 2007 release) and made it into a full-screen "Backstage" view where document-wide operations, like saving, printing and online publishing, are performed.

    And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That's much nicer than stripping it off, or adding it back, afterwards.

    And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that's useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect this feature will be more useful for capturing snapshots of Web pages, but we'll have to see whether it becomes popular.

    Excel

    At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It's important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it's also important to understand that Excel wasn't designed to handle such workloads past a certain scale. That all changes with this release.

    The first reason things change is that Excel has been tuned for performance. It's been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed; and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.

    On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates the filtered state of a given data member). But most important, Excel 2010 supports the new PowerPivot add-in, which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it. Rather than forcing users to build "spreadmarts" or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

    And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios, for which Excel is the natural home. Support for write-back is long overdue, but I'm still glad it's there, because I had almost given up on it.

    PowerPoint

    This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it's successful in this pursuit, and whether offering this is even a sensible goal, is another question.

    Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint's transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output.

    Congruent with that is PowerPoint's new ability to broadcast a slide presentation, using a quickly-generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft's own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

    Outlook

    Microsoft's ubiquitous email/calendar/contact/task management tool gains long-overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in.

    I don't know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there's very little not to like.

    OneNote

    To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I'll cover in a moment, but let's first catalog what new stuff OneNote 2010 brings. The best part of OneNote is the way each of its versions has managed hierarchy: notebooks have sections, sections have pages, pages have sub-pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and sub-pages easier, and also makes simple work of promoting and demoting pages from sub-page to full-page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page's name in double square brackets ("[[...]]") creates a link to it.

    OneNote is also great at integrating content outside of its notebooks. With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it's an Office document or a Web page, links the notes you're typing at the time to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote's Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.

    Collaboration gets better too. Real-time multi-author editing is better accommodated, and determining the author lineage of particular changes is easily carried out.

    My one pet peeve with OneNote is the difficulty of using it when I'm not on a Windows PC. OneNote's main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote's Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes), and that Evernote's Email function combined with Outlook's Send to OneNote button (in the Move group of the ribbon's Home tab) can achieve the reverse.

    Access

    To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint's Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself.

    To me this makes all kinds of sense, although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010's new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality.

    But if you do prefer Access, then you'll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel's Conditional Formatting feature, letting you add data bars and data-driven text formatting.

    Word

    As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite's flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who's editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.

    There's also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth-shattering, but nice.

    Other Apps and Summarized Findings

    What about InfoPath, Publisher, Visio and Project? I haven't looked at them yet. And for this post, I think that's fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we've looked at in this post serve the general-purpose needs of most users. And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • How do I export Outlook 2003 distribution lists to Gmail?

    - by Nick T
    I have several distribution lists in Outlook 2003 that I need to move to Gmail. While transferring contacts in the main folder is fairly easy—all one does is export to CSV and import—it's not as easy with the distribution lists. I can't (don't know how to) copy the contacts from the lists to the main contact folder so they can be "normal", and the "save as" options don't include a CSV. There are several of these to do, containing maybe ~100-200 contacts altogether, so I'd like something that's not very tedious.
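
    One possible approach (a sketch, not from the article): if Outlook is installed, the distribution list members can be pulled out through the Outlook object model with pywin32 and written to a CSV for Gmail to import. The folder and object-class constants below are standard Outlook OM; the output column names are an assumption about what Gmail's contact importer accepts:

        # Sketch: dump every Outlook distribution list to an importable CSV.
        # Requires pywin32 and a configured local Outlook profile.
        import csv
        import win32com.client

        OL_FOLDER_CONTACTS = 10    # olFolderContacts
        OL_DIST_LIST = 69          # olDistributionList

        outlook = win32com.client.Dispatch("Outlook.Application")
        contacts = outlook.GetNamespace("MAPI").GetDefaultFolder(OL_FOLDER_CONTACTS)

        with open("distribution_lists.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Name", "E-mail Address", "Categories"])
            for item in contacts.Items:
                if item.Class != OL_DIST_LIST:
                    continue
                # GetMember is 1-based; tagging each row with the list's name
                # keeps the group membership visible after import.
                for i in range(1, item.MemberCount + 1):
                    member = item.GetMember(i)
                    writer.writerow([member.Name, member.Address, item.DLName])

    This turns the list members into "normal" contact rows in one pass, so the ~100-200 contacts can be imported without any manual copying.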

    Read the article

  • Slideshow from excel file listing the caption, sound file and image file?

    - by Slabo
    Hello, I have Excel files with the following headers:

        Caption | Sound: location of sound file | Image: location of image file

    How can I make a slideshow from this? Each slide should show the image and caption, and play the sound automatically, according to the Excel list. I don't care what software I use, as long as I can get the job done. Total slides: ~10,000. In case you're interested, this is review material for English second language students. Any help appreciated, thanks.
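
    Since any software will do, here is one hedged sketch: save the sheet as CSV and generate one small HTML page per row with standard-library Python; the browser then shows the image and caption and autoplays the sound. The column names and file layout are assumptions based on the headers described:

        # Sketch: turn a CSV export (Caption, Sound, Image) into linked HTML slides.
        import csv

        with open("slides.csv", newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))   # columns: Caption, Sound, Image

        for n, row in enumerate(rows):
            nxt = "slide%05d.html" % (n + 1) if n + 1 < len(rows) else ""
            with open("slide%05d.html" % n, "w", encoding="utf-8") as out:
                out.write("""<!DOCTYPE html>
        <html><body>
          <h1>%s</h1>
          <img src="%s" alt="">
          <audio src="%s" autoplay></audio>
          %s
        </body></html>""" % (row["Caption"], row["Image"], row["Sound"],
                             '<a href="%s">Next</a>' % nxt if nxt else "End"))

    Ten thousand tiny HTML files is clunky, but it beats hand-building 10,000 slides; note that the <audio> tag needs a reasonably modern browser.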

    Read the article

  • How can a single-threaded application like Excel 2003 take more than 50% of a hyper-threaded or dual-core CPU?

    - by Lunatik
    I'm waiting for Excel to finish a recalculation and I notice that the CPU usage as reported by Task Manager occasionally spikes to 51% or 52% on a Pentium 4 with hyper-threading. How is a single-threaded application like Excel 2003 doing this? Is it just a rounding/estimation error on the part of Task Manager? Or is it something to do with HT allocation i.e. I wouldn't see this happening on a genuine dual-core or dual-CPU machine?

    Read the article

  • How do I export banshee library settings to another machine?

    - by nityabaddha
    I am trying to downgrade my distro. The thing is, I have spent a few hours organising my music library and I do not want to lose my database. I have checked 'write metadata to files', and so my understanding is that if I just import those same mp3s into a new library, those files will be arranged in the same order in the library. But this doesn't sound like a safe enough option. I'm moving from 11.10 to 10.04. Any help would be greatly appreciated. newbie

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh, as listed below.

    - It is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be collinear, giving the child rotation but not translation relative to the parent point.
    - It has a normal that is aligned with a 'target'. Classically this target is the enemy being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags (and smoke, which is a particle system, but the same principle applies)) or 'upwards' (e.g. so the bodies of riders bend properly when riding a horse up an incline, etc.).
    - The anchor and target alignments have a maximum, a minimum and a speed coefficient.
    - There is game logic for multiple turrets on a model, deciding which engages which enemy; 'primary' and 'secondary' or 'target0' ... 'targetN' or some such annotation will be there.

    So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the vehicle's angle of travel and velocity, or something fancy.

    This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can perhaps be modelled by zero-vertex connecting meshes? This is where the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine? (A data-structure sketch of the scheme described here follows below.)
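
    For reference, the scheme described above boils down to a small per-mesh attachment record. A hedged Python sketch of it (the field names are invented here for illustration, not taken from any real format):

        # Sketch of the attachment record the question describes: one rotational
        # degree of freedom per child mesh, constrained and target-driven.
        from dataclasses import dataclass
        from typing import Tuple

        Vec3 = Tuple[float, float, float]

        @dataclass
        class Attachment:
            parent_mesh: str        # mesh this one is anchored in
            parent_point: Vec3      # anchor point in the parent
            parent_normal: Vec3     # anchor axis in the parent
            child_point: Vec3       # matching point in the child
            child_normal: Vec3      # matching axis (kept collinear with parent's)
            target: str             # 'primary', 'wind', 'up', ... engine-resolved
            min_angle: float        # alignment limits, radians
            max_angle: float
            speed_coeff: float      # how fast the mesh may slew toward its target

        # The classic tank from the example: each child has one degree of freedom.
        turret = Attachment("body", (0, 0, 1), (0, 0, 1), (0, 0, 0), (0, 0, 1),
                            target="primary", min_angle=-3.14, max_angle=3.14,
                            speed_coeff=1.0)
        barrel = Attachment("turret", (0, 1, 0.2), (1, 0, 0), (0, 0, 0), (1, 0, 0),
                            target="primary", min_angle=-0.1, max_angle=0.6,
                            speed_coeff=0.5)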

    Read the article

  • What is the most compatible, widely used production language to export knowledge and skills gained from Haskell?

    - by World Engineer
    I like Haskell, plain and simple. While Haskell is used in production software, it's not especially widely deployed from what I've seen. What is the most similar and still widely used language in regards to production projects so that I might have a snowball's chance of using something similarly awesome in industry? Also is the same language from the first part available on large numbers of platforms? If not, what is the best alternative that has wide platform deployment? I'd like a single language to put on my to-do list rather than a massive swarm or family. Hard evidence would be a plus.

    Read the article

  • Can iTextSharp rasterize/export to JPEG or other image format?

    - by SkippyFire
    I need to be able to export the PDFs that I am creating to JPEG, so that users can have a screenshot/thumbnail of the end product, which is faster than opening the whole PDF. I am running this on an ASP.NET website running in Medium Trust on the Rackspace Mosso Cloud. I have yet to find a library that either works in Medium Trust or, in the case of ABC PDF (which works great locally), will load in Mosso. Maybe Mosso has a custom trust level? I know that iTextSharp works on Mosso, but I haven't been able to figure out how to "screenshot" a single page of a PDF, or export a page to JPEG. Is there anyone out there who has done this before?

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >