Search Results

Search found 14905 results on 597 pages for 'reporting tools'.

  • Error while installing VMware Tools v8.8.2 in Ubuntu 12.04 beta

    - by Dipen Patel
    I just upgraded to Ubuntu 12.04 from 11.10 using Update Manager. I run it as a virtual machine on VMware Player 4.x. As usual, I installed VMware Tools to enable full-screen mode and the shared-folder functionality, but the installer failed while building the modules for the shared-folder and fast-networking utilities. The error is:

        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c: In function ‘HgfsChangeFileAttributes’:
        /tmp/vmware-root/modules/vmhgfs-only/fsutil.c:610:4: error: assignment of read-only member ‘i_nlink’
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/fsutil.o] Error 1
        make[2]: *** Waiting for unfinished jobs....
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: initialization from incompatible pointer type [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/file.c:128:4: warning: (near initialization for ‘HgfsFileFileOperations.fsync’) [enabled by default]
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:53:30: error: expected ‘)’ before numeric constant
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:56:25: error: expected ‘)’ before ‘int’
        /tmp/vmware-root/modules/vmhgfs-only/tcp.c:59:33: error: expected ‘)’ before ‘int’
        make[2]: *** [/tmp/vmware-root/modules/vmhgfs-only/tcp.o] Error 1
        make[1]: *** [_module_/tmp/vmware-root/modules/vmhgfs-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-22-generic'
        make: *** [vmhgfs.ko] Error 2
        make: Leaving directory `/tmp/vmware-root/modules/vmhgfs-only'

    The filesystem driver (the vmhgfs module) is used only for the shared-folder feature; the rest of the software provided by VMware Tools is designed to work independently of it. Let me know if anyone has encountered and solved this problem. Regards, Dipen Patel

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small, tools that handle builds not of software but of data? Something that regenerates targets based not only on modification times but on other properties (e.g. completeness). (Or, alternatively, some paper that describes such a tool.) As an illustration, I'd like to automate the following process:

    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not already there (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there already, e.g. from a previous attempt; a custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide based on the existence of the file and its "freshness")
    - finally compute something (e.g. a word count for something identifiable) and store it in the database, but only if the DB does not have an entry for that exact ID yet

    Observations:

    - there are different stages
    - each stage is usually simple to compute or implement in isolation
    - each stage may be simple, but the data volume may be large
    - each stage may produce a few errors
    - each stage may have different signals for when (re)processing is needed

    Requirements:

    - builds should be interruptible and idempotent (== robust)
    - when interrupted, already-processed objects should be reused to speed up the next run
    - data paths should be easy to adjust (simple syntax, nothing new to learn; an internal DSL would be OK)
    - some form of dependency graph that describes the process would be nice for later visualizations
    - it should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. The system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and an OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
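    A minimal sketch of the kind of stage runner described above, assuming Python as the glue; the stage names, file paths, and freshness predicates are hypothetical stand-ins for whatever "done" means at each stage:

        import os
        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Stage:
            name: str
            is_current: Callable[[], bool]  # arbitrary freshness test, not just mtimes
            run: Callable[[], None]         # idempotent action, safe to interrupt and rerun

        def build(pipeline: List[Stage]) -> None:
            """Run each stage only if its freshness predicate reports it as stale."""
            for stage in pipeline:
                if stage.is_current():
                    print("[skip]", stage.name)  # already-processed work is reused
                else:
                    print("[run ]", stage.name)
                    stage.run()

        # Hypothetical wiring for the tarball example: fetch only when the dated
        # filename is absent; convert only when no success marker exists yet.
        pipeline = [
            Stage("fetch",
                  is_current=lambda: os.path.exists("data/dump-2012-05.tar.gz"),
                  run=lambda: print("downloading ...")),
            Stage("convert",
                  is_current=lambda: os.path.exists("data/converted.ok"),
                  run=lambda: print("converting ...")),
        ]

        if __name__ == "__main__":
            build(pipeline)

    Each predicate can be as cheap or as thorough as a stage demands (a file-existence check, a checksum, a database lookup), which is exactly the hook that make's mtime comparison lacks.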

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. "The new phone books are here! The new phone books are here! I'm a somebody!" - Steve Martin in The Jerk. It is a Dell Precision M4500 with a 2.8 GHz Intel Core i7 running 64-bit Windows 7, with a 15.6" widescreen, 8 GB RAM, and a 256 GB SSD. For some of you high fliers, this may be nothing to write home about, but compared to the 32-bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I'm coming from, it's a really nice step forward. I won't even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago. Let's just say that things have improved. One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees. Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on board a new employee and a new contract developer next week. But that's a different story for a different time.

    The main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too. This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured. Keep in mind as you look through this list that I play many roles in our company. My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team. So, without further ado, here are the tools and some comments about why I use them:

    - Virtual Clone Drive: Easily mounts an ISO image as a DVD drive. This is particularly handy when you are downloading disk images from Microsoft for your tools.
    - SQL Server 2008 R2 Developer Edition: We are migrating all of our active systems to SQL 2008 R2. Developer Edition has all the features of Enterprise Edition, but is intended for development use.
    - SQL Server 2005 Developer Edition (BIDS only): The migration to SSRS 2008 R2 is just getting started, and in the meantime maintenance work still has to be done on the reports on our SQL 2005 server. For some reason, you can't use BIDS from 2008 to write reports for a 2005 server. There is some different format, and when you open 2005 reports in 2008 BIDS it forces you to upgrade them, after which they can no longer be uploaded to a 2005 server. Hopefully Microsoft will fix this soon, in some manner similar to how Visual Studio now allows you to pick which version of the .NET Framework you are coding against.
    - Visual Studio 2010 Premium: All of our application development is in ASP.NET, and we might as well use the tool designed for it. I've used a version of Visual Studio going all the way back to VB 6.0 and Visual InterDev.
    - Vault Professional Client: Several years ago we replaced Visual SourceSafe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it. It is very reliable with low overhead, perfect for a small to medium size development team. And being a small ISV, their support is exceptional.
    - Red Gate Developer Bundle with the SQL Source Control update for Vault: I first used, and fell in love with, SQL Prompt shortly before Red Gate bought it, and then Red Gate's first release made me love it even more. SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraines trying to understand somebody else's code when their indenting was nonexistent or, worse, irrational. SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases. SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT. And the newest tool we are embracing: SQL Source Control. I blogged about it here (and here, and here) last December. This is really going to help us keep each developer's copy of the database in sync with the others.
    - Fiddler: Helps you watch the whole traffic stream on web visits. I haven't used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs. It has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8.
    - Funduc Search & Replace: Finds any string anywhere in a mound of source code really, really fast. Does RegEx searches, if you understand that foreign language. It has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files). Provides an in-context preview of the search results. A fantastic tool, at a bargain price.
    - SciTE: SciTE is a Scintilla-based text editor and a fantastic, lightweight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files. It has language-specific syntax highlighting. I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems. Extremely handy are the options to View End of Line and View Whitespace. Ever receive a file that is supposed to use CRLF as an end-of-line marker but really only has CRs? SciTE will quickly make that visible.
    - Infragistics Controls: We do a lot of ASP.NET development and frequently use the WebGrid, WebTab, and date picker controls. We will likely be implementing the Hierarchical Data Grid soon. Infragistics has control suites for WebForms, WinForms, Silverlight, and, coming soon, MVC/jQuery.
    - WinZip (with the Command-Line add-in): The classic compression program, with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs. Our versioned build packages are zip files.
    - XML Notepad: I haven't used this a lot myself, but one of my team really likes it for examining large XML files.
    - LINQPad: Again, I haven't used this one a lot, but it was recommended to me for learning and practicing my LINQ skills, which will come in handy as we implement Entity Framework.
    - SQL Sentry Plan Explorer: SQL Server Show Plan on steroids. Great for helping you focus on the parts of a large query that are of most importance. Also great for just compressing the graphical plan into a more readable layout.
    - Araxis Merge: A great diff and merge tool. SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally I like the cross-edit capabilities of Araxis Merge. For a while, we also produced diff reports in HTML that showed all the changes that occurred between two releases. This was most important when we were putting out very small, but very important, hot fixes on a very politically hot system. The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases.
    - Idera SQL Admin Toolset: A great collection of tools, including a password checker to help analyze your SQL Server for weak user passwords, and a Backup Status tool to quickly scan a large list of servers and databases and identify any that are overdue for backups, which is particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing. I also like Space Analyzer to keep an eye on disk space consumed by database files.
    - Idera SQL Job Manager: This free tool provides a nice calendar view of SQL Server job schedules, but to a degree you also get what you pay for. We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager. But in the meantime, this at least gives me a good view of potential resource conflicts across multiple instances of SQL Server.
    - DBFViewer 2000: I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate to SQL Server.
    - Balsamiq Mockups: We are still in evaluation mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design. The interface looks hand-drawn, which definitely has some psychological benefits when communicating with users, too.
    - FeedDemon: I have to stay on top of my WAY TOO MANY blog subscriptions somehow. I may read blogs on a couple of different computers, and FeedDemon's integration with Google Reader allows me to keep them all in sync. I don't particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them. FeedDemon solves this problem for me, and provides a multi-tabbed interface, which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article.
    - Synergy+: In my office I run four monitors across two computers, all with one mouse and keyboard. Synergy is the magic software that makes this work.
    - TweetDeck: I'm not the most active tweeter in the world, but when I want to check in with the Twitterverse, this really helps. I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns set up to make it easy to monitor #sqlpass, #PASSProfDev, and short-term events like #sqlsat68.

    Whew! That's a lot. No wonder it took me a couple of days to get everything set up the way I wanted it. Oh, that and actually getting some work accomplished at the same time. Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say: CONGRATULATIONS, you made it! I hope you'll find a new tool or two to make your work life a little easier.

  • Application Lifecycle Management Tools

    - by John K. Hines
    Leading a team comprised of three former teams means that we have three of everything.  Three places to gather requirements, three (actually eight or nine) places for customers to submit support requests, three places to plan and track work. We’ve been looking into tools that combine these features into a single product.  Not just Agile planning tools, but those that allow us to look in a single place for requirements, work items, and reports. One of the interesting choices is Software Planner by Automated QA (the makers of Test Complete).  It's a lovely tool with real end-to-end process support.  We’re probably not going to use it for one reason – cost.  I’m sure our company could get a discount, but it’s on a concurrent user license that isn’t cheap for a large number of users.  Some initial guesswork had us paying over $6,000 for 3 concurrent users just to get started with the Enterprise version.  Still, it’s intuitive, has great Agile capabilities, and has a reputation for excellent customer support. At the moment we’re digging deeper into Rational Team Concert by IBM.  Reading the docs on this product makes me want to submit my resume to Big Blue.  Not only does RTC integrate everything we need, but it’s free for up to 10 developers.  It has beautiful support for all phases of Scrum.  We’re going to bring the sales representative in for a demo. This marks one of the few times that we’re trying to resist the temptation to write our own tool.  And I think this is the first time that something so complex may actually be capably provided by an external source.   Hooray for less work!

  • sensors reporting weird temperatures

    - by Felix
    lm-sensors is reporting weird temperatures for me:

        $ sensors
        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:     +38.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:     +35.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:     +32.0°C  (high = +72.0°C, crit = +100.0°C)

        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:     +42.0°C  (high = +72.0°C, crit = +100.0°C)

        w83627dhg-isa-0290
        Adapter: ISA adapter
        Vcore:      +1.10 V  (min = +0.00 V, max = +1.74 V)
        in1:        +1.62 V  (min = +0.06 V, max = +0.17 V)  ALARM
        AVCC:       +3.34 V  (min = +2.98 V, max = +3.63 V)
        VCC:        +3.34 V  (min = +2.98 V, max = +3.63 V)
        in4:        +1.83 V  (min = +1.30 V, max = +1.15 V)  ALARM
        in5:        +1.26 V  (min = +0.83 V, max = +1.03 V)  ALARM
        in6:        +0.11 V  (min = +1.22 V, max = +0.56 V)  ALARM
        3VSB:       +3.30 V  (min = +2.98 V, max = +3.63 V)
        Vbat:       +3.18 V  (min = +2.70 V, max = +3.30 V)
        fan1:          0 RPM  (min = 0 RPM, div = 128)  ALARM
        fan2:       1117 RPM  (min = 860 RPM, div = 8)
        fan3:          0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan4:          0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan5:          0 RPM  (min = 10546 RPM, div = 128)  ALARM
        temp1:      +88.0°C  (high = +20.0°C, hyst = +4.0°C)  ALARM  sensor = diode
        temp2:      +25.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = diode
        temp3:     +121.5°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
        cpu0_vid:  +2.050 V

    Please note temp3. How can I find out what temp3 is, and why it is so high? The system is really stable (which I guess it wouldn't be at those temperatures). Also, note the really decent core temps, which suggest a healthy system as well. My guess is that the readout is wrong. On another computer it reported temperatures below 0 degrees centigrade, which was not possible considering the ambient temperature of ~22-24°C. Is this some known bug/issue? Should I try some Windows programs (like CPU-Z) and see if they give similar results?
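    One cross-check that doesn't require Windows tools is to read the raw values straight out of the kernel's hwmon sysfs interface, which is what lm-sensors itself consumes. A minimal sketch in Python (the sysfs layout is the standard hwmon one; which chip lands on which hwmon node varies per board):

        import glob
        import os

        # Each hwmon node exposes tempN_input files holding millidegrees Celsius.
        for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
            name_file = os.path.join(hwmon, "name")
            chip = open(name_file).read().strip() if os.path.exists(name_file) else "?"
            inputs = (glob.glob(os.path.join(hwmon, "temp*_input")) +
                      glob.glob(os.path.join(hwmon, "device", "temp*_input")))  # older kernels
            for temp_file in sorted(inputs):
                try:
                    millideg = int(open(temp_file).read())
                except (OSError, ValueError):
                    continue  # unreadable input, e.g. a sensor the chip doesn't drive
                print("%s %s: %.1f C" % (chip, os.path.basename(temp_file), millideg / 1000.0))

    If temp3 sits pinned at an implausible value while the coretemp readings move with load, the usual conclusion is that the thermistor input simply isn't wired up on that board; a common remedy is an "ignore temp3" entry for the w83627dhg chip in /etc/sensors3.conf.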

  • Which tools do you use for development in your company? Please be exact [closed]

    - by predrag.music
    If you are a professional PHP/(My/Postgre/?)SQL/? developer working in a professional team, I would like to know which tools you use for development in your company. I do not care which tool is better or worse, just which tools you use, if it is not TOP SECRET :) For example, these are just some of the tools I/we use (roughly in order of how much they are used):

    - Pen, paper
    - Lots of coffee, cola ... let me think ... mmmm ... yeah, more coffee :)
    - All kinds of books (lots of books)
    - OS: Win / Mac OS X
    - Server: hosted (CentOS) / at work, Mac OS X
    - Dev server: XAMPP / MAMP / LAMP
    - Editor: Notepad++
    - IDE: NetBeans / Zend Studio / Eclipse
    - Version control system: Mercurial / SVN
    - FTP: FileZilla mostly / ...
    - Passwords: KeePass
    - js / ajax: jQuery / pure js / jQuery UI
    - Framework: CI / Zend / pure PHP
    - Database: MySQL / other
    - ORM: framework db layer (not an ORM, I know, but...) / Doctrine (2) / no ORM
    - Debugging: Xdebug (PHP) / Firebug (ajax/js/html/css/...) / framework profiler (stuff) / ... (x)
    - Dreaming: about...
    - Thinking: not about chaos in ? direction ....
    - n: anything else that comes to mind
    - n+1: a zillion other things I know but can't remember ...
    - 8: some other stuff I (don't) remember: things I forgot, gave up on, deleted, lost, said "never again" to, never had time for, have on my computer but can't find (or don't even know I have, at least 2-3 or more times over), or said I'd check later and never checked again, for all sorts of "perfectly justified" reasons (time, memory, wife :), whatever, ...). What is the reason I'm asking this? :)
    - 8 and beyond: looking forward to seeing a lot of answers!

  • Report Books and Parameters with Telerik Reporting

    Telerik Reporting provides a simple yet powerful component called the ReportBook that allows multiple reports to be combined into one. Doing so makes displaying, printing, and exporting multiple reports a much simpler task for end users. Providing this type of functionality does leave one question, though: "How are report parameters handled?" This entry will focus on providing the answer to that simple question. Click here to download the sample code so you can follow along.

    Creating a ReportBook

    ReportBooks are supported in all three environments the ReportViewer supports: WinForms, Silverlight, and ASP.NET. Adding a ReportBook to each of these environments is very similar; this example focuses on using a ReportBook with the WinForms ReportViewer.

    1. Create a WinForms application.
    2. Add a reference to the project containing the reports.
    3. Drag a ReportViewer from the Toolbox into the designer.
    4. Drag a ReportBook from the Toolbox into the designer. A dialog will be displayed asking for the reports to be included in the ReportBook.

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics beyond just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good, whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than by the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool?" questions aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.
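    As a sketch of that dashboard idea, here is one plausible drift rule in Python; the metric names and numbers are hypothetical, and the baseline-versus-recent-window test is just one reasonable choice:

        from statistics import mean, stdev

        def drifting(history, window=5, sigmas=2.0):
            """Flag a metric whose recent window strays from its long-run baseline."""
            if len(history) < window * 2:
                return False  # not enough builds to establish a baseline
            baseline, recent = history[:-window], history[-window:]
            if stdev(baseline) == 0:
                return mean(recent) != mean(baseline)
            return abs(mean(recent) - mean(baseline)) > sigmas * stdev(baseline)

        # Per-build series recorded by the CI server (hypothetical numbers).
        metrics = {
            "coverage_pct":     [81, 82, 81, 83, 82, 80, 78, 75, 73, 70],
            "fxcop_violations": [14, 13, 14, 14, 13, 14, 13, 14, 14, 13],
        }

        for name, series in metrics.items():
            if drifting(series):
                print("warning:", name, "is trending away from its baseline")

    The point is less the statistics than the plumbing: each CI run appends one number per metric, and anything that wanders from its own history gets flagged before it becomes a trend.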

  • 2013 U.S. GAAP Financial Reporting Taxonomy Available for Public Review and Comment

    - by Theresa Hickman
    FASB recently released the proposed 2013 U.S. GAAP Reporting Taxonomy. Comments are due October 29, 2012, and the taxonomy is to be finalized and published in early 2013. The proposed 2013 U.S. GAAP taxonomy and instructions on how to submit comments are available at the FASB’s XBRL page. In previous blog entries, I talked about how Oracle Hyperion Disclosure Management supports the latest taxonomy, enabling financial managers to easily comply with the latest filing requirements. The taxonomy is a list of computer-readable tags in XBRL that allows companies to annotate the voluminous financial data that is included in typical long-form financial statements and related footnote disclosures. The tags allow computers to automatically search for, assemble, and process data so it can be readily accessed and analyzed by investors, analysts, journalists, and regulators. You do not have to have Oracle Hyperion Financial Management, used for consolidating financial results, to generate XBRL. You just need Oracle Hyperion Disclosure Management to generate XBRL instance documents from financial applications, such as Oracle E-Business Suite, Oracle PeopleSoft, Oracle JD Edwards EnterpriseOne, and Oracle Fusion General Ledger. To generate XBRL tags and complete SEC filings using your existing financial applications with Oracle Hyperion Disclosure Management, here are the steps:

    1. Download the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management to create a company taxonomy.
    2. Publish financial statements from the general ledger to Microsoft Excel or Microsoft Word.
    3. Create the SEC filing in the Microsoft programs and perform the XBRL tag mapping in Oracle Hyperion Disclosure Management.
    4. Ensure that the SEC filing meets XBRL and SEC EDGAR Filer Manual validation requirements.
    5. Validate and submit the company taxonomy and XBRL instance document to the SEC.

    Get more details about Oracle Hyperion Disclosure Management.

  • CUOA Workshop: “L’evoluzione dei modelli e dei sistemi di analisi e reporting direzionale” (The Evolution of Management Analysis and Reporting Models and Systems)

    - by Paolo Leveghi
    On June 26th, the workshop “L’evoluzione dei modelli e dei sistemi di analisi e reporting direzionale” (the evolution of management analysis and reporting models and systems) was held in the Aula Magna of the CUOA Foundation, promoted by the Club Finance CUOA in collaboration with Oracle. A recent Gartner analysis found that Finance and IT executives have identified Business Intelligence, Analytics, and Performance Management as the priority area for their "technology" investments in 2013, confirming, if anyone still needed it, companies' continuous and growing attention to their ability to define, produce, and manage information that helps them identify opportunities, assess risks and impacts, and simulate the effects of ordinary and extraordinary operations; in a single concept, information that serves to steer the business. This figure is even more interesting when crossed with the results of a survey conducted by CFO Magazine and KPMG International, in which 48% of the CFOs interviewed said they consider their systems dated and inflexible, and therefore a limit, if not an outright brake, on their desire to be agents of change. Against this backdrop, companies are showing a growing interest in understanding what to do, today and tomorrow, to improve their ability to exploit an extremely valuable corporate resource: information. The objective of the workshop was therefore to analyze the scenario in which companies in north-east Italy operate today, examining, thanks to the testimonials of ITAL TBS and the Carraro Group, the "state of health" of business analytics processes, the level of corporate culture and the degree of adoption of solutions among CFOs, management, and business decision-makers more generally, and the evolutionary and prospective paths for improving in these areas.

  • Visual Studio 2010 Modeling and Architecture Tools

    - by MikeParks
    Jennifer Marsman (Microsoft Evangelist) and Cameron Skinner (Microsoft Visual Studio Product Unit Manager) recently stopped by our office while they were passing through Louisville on their tour, to give us a presentation on the new Visual Studio 2010 modeling and architecture tools. I checked out these new features when the Visual Studio 2010 beta versions originally rolled out and have been really impressed with this stuff ever since. So it was pretty cool to actually learn some new techniques from Cameron himself, since he helped write the actual code behind some of those features. If you've upgraded to Visual Studio 2010 recently, I would highly recommend using the architecture tools. They're awesome. If you want to make improvements to them, there is even an SDK for it. There are plenty of blogs out there to show you how to use it. I've been waiting to find a tool like this, one that lets me really analyze the code in solutions and projects and see how everything ties together. It's really handy if you're asked to work on a new project and aren't familiar with how it works. Just run the tools, analyze the DLLs, learn how everything works, and then you'll be ready to implement new code! It's a great tool for learning new systems quickly and easily, and it's all housed within the Visual Studio IDE. I just wanted to write a blog post to brag about it a little bit, so I figured I'd throw this up here. It's a must-have tool for developers/architects. Here are some screenshots from when I was using it earlier. Thanks everyone! - Mike

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately they stopped working or the person that wrote them left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of déjà vu overtook me, and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures, and combined with the new features of Excel, it can produce some pretty impressive dashboards. Of course PowerPivot is not a magic bullet: along with potential scalability issues, there are the usual issues, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

  • Seeking a solution for printing/reporting in .NET

    - by Parhs
    I am developing an application that prints, in separate threads, in extreme cases about 20-25 pages per minute to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, making it impossible to change the codepage/emulation. Also, most computers running my software have low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device, and most of these don't do a good job extracting text from XPS, since they don't follow the standard but rather depend on how Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it is impossible to send a paper-cut command after the signature. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and do text replacement. I know there is StringTemplate, which could handle generating from the template, but the problem is that I would somehow have to parse the result and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?

  • YouTube: Chrome Dev Tools Integration with NetBeans IDE!

    - by Geertjan
    Some time ago my colleague David Konecny discussed the question "What works better for you? NetBeans IDE or Chrome Developer Tools?". It's a good read. David highlights the point that it's not a question of either/or but both, since the two tools are like the apple/pear dichotomy. However, good news! The two worlds are not divided in NetBeans IDE 7.4. Changes you make in Chrome Developer Tools (CDT) are automatically persisted to the related files in NetBeans IDE, as you can see in a new YouTube clip I made today. The new integration of CDT with NetBeans IDE has been mentioned in the NetBeans IDE 7.4 New & Noteworthy, while on Twitter this sighting appeared yesterday. Watch the movie above and within 5 minutes you too will see the simplicity and power of CDT integration with NetBeans IDE. In other news, I consider the above to be my favorite new feature (though it's a tough choice, since there are so many new features in NetBeans IDE 7.4), for the article "What is your favorite new NetBeans IDE 7.4 feature?"

  • What is the value of workflow tools?

    - by user16549
    I'm new to workflow development, and I don't think I'm really getting the "big picture". Or perhaps, to put it differently, these tools don't currently "click" in my head. So it seems that companies like to create business drawings to describe processes, and at some point someone decided that they could use a state-machine-like program to actually control processes from a lines-and-boxes diagram. Ten years later, these tools are huge and extremely complicated (my company is currently playing around with WebSphere, and I've attended some of the training; it's a monster). Even the so-called "minimalist" versions of these workflow tools, like Activiti, are huge and complicated, although not nearly as complicated as the beast that is WebSphere, afaict. What is the great benefit in doing it this way? I can kind of understand the simple lines-and-boxes diagrams being useful, but these things, as far as I can tell, are visual programming languages at this point, complete with conditionals and loops. Programmers here appear to be doing a significant amount of work in the lines-and-boxes layer, which to me just looks like a really crappy, really basic visual programming language. If you're going to go that far, why not just use some sort of scripting language? Have people thrown the baby out with the bathwater on this? Has the lines-and-boxes thing been taken to an absurd level, or am I just not understanding the value in all this? I'd really like to see arguments in defense of this by people who have worked with this technology and understand why it's useful. I don't see the value in it, but I recognize that I'm new to this as well and may not quite get it yet.
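    To make the asker's comparison concrete, here is the kind of conditionals-and-loops logic a boxes-and-lines diagram typically encodes, written as a plain script instead; a toy sketch in Python with entirely hypothetical process and activity names:

        import random

        # Stub activities standing in for the real tasks a workflow engine would call.
        def submit_for_review(doc):
            return {"approved": random.random() > 0.5, "comments": "tighten section 2"}

        def revise(doc, comments):
            return doc + " (revised: " + comments + ")"

        def publish(doc):
            print("published:", doc)

        # The whole "diagram": one loop node and one decision node.
        def approval_workflow(doc):
            while True:                  # loop back until the reviewer is happy
                review = submit_for_review(doc)
                if review["approved"]:   # the decision diamond
                    publish(doc)
                    return
                doc = revise(doc, review["comments"])

        approval_workflow("Q3 budget memo")

    What engines such as WebSphere or Activiti add on top of this is not the control flow itself but the operational wrapper: persisting state across restarts, waiting days for human tasks, auditing, and versioning. Whether that wrapper justifies the boxes is exactly the question being asked.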

  • Project compilation requires a class that is not used anywhere

    - by Susei
    When I build my project, which uses libgdx, with ant, I get a strange error. It says that a class, com.google.gwt.dom.client.ImageElement, is not found, but that class isn't used at all in the code. How can I find out what makes this class necessary? Even searching over the whole project doesn't give any results. It says that the error is at PixmapTextureAtlas.java:16 (class source), but there is no code there that uses the ImageElement class. Adding the library containing the com.google.gwt.dom.client.ImageElement class helps, of course, but I'd like to figure out why this class is needed. Here is the place in the ant log that shows the actual error:

        Compiling 3 source files to /home/suseika/Projects/tendiwa/client/bin
        /home/suseika/Projects/tendiwa/client/src/org/tendiwa/client/PixmapTextureAtlas.java:16: error: cannot access ImageElement
          class file for com.google.gwt.dom.client.ImageElement not found

    Here is the whole ant log:

        /usr/lib/jvm/java-7-oracle/bin/java -Xmx128m -Xss2m -Dant.home=/opt/intellijidea/lib/ant -Dant.library.dir=/opt/intellijidea/lib/ant/lib -Dfile.encoding=UTF-8 -classpath /opt/intellijidea/lib/ant/lib/ant-apache-regexp.jar:/opt/intellijidea/lib/ant/lib/ant-swing.jar:/opt/intellijidea/lib/ant/lib/ant-apache-xalan2.jar:/opt/intellijidea/lib/ant/lib/ant-jdepend.jar:/opt/intellijidea/lib/ant/lib/ant-apache-resolver.jar:/opt/intellijidea/lib/ant/lib/ant-jsch.jar:/opt/intellijidea/lib/ant/lib/ant.jar:/opt/intellijidea/lib/ant/lib/ant-testutil.jar:/opt/intellijidea/lib/ant/lib/ant-launcher.jar:/opt/intellijidea/lib/ant/lib/ant-apache-bsf.jar:/opt/intellijidea/lib/ant/lib/ant-commons-logging.jar:/opt/intellijidea/lib/ant/lib/ant-netrexx.jar:/opt/intellijidea/lib/ant/lib/ant-junit.jar:/opt/intellijidea/lib/ant/lib/ant-commons-net.jar:/opt/intellijidea/lib/ant/lib/ant-apache-bcel.jar:/opt/intellijidea/lib/ant/lib/ant-antlr.jar:/opt/intellijidea/lib/ant/lib/ant-apache-log4j.jar:/opt/intellijidea/lib/ant/lib/ant-jai.jar:/opt/intellijidea/lib/ant/lib/ant-apache-oro.jar:/opt/intellijidea/lib/ant/lib/ant-jmf.jar:/opt/intellijidea/lib/ant/lib/ant-javamail.jar:/usr/lib/jvm/java-7-oracle/lib/tools.jar:/opt/intellijidea/lib/idea_rt.jar com.intellij.rt.ant.execution.AntMain2 -logger com.intellij.rt.ant.execution.IdeaAntLogger2 -inputhandler com.intellij.rt.ant.execution.IdeaInputHandler -buildfile /home/suseika/Projects/tendiwa/client/build.xml jar
        build.xml
        property  path  description  compile  ant  property  property  property  description  compile  mkdir  javac  jar
        ant  property  property  description  _core_src_available  available  ontology  antcall  property  description  _core_src_available  available  _build_core  ant  property  property  compile  echo
        /home/suseika/Projects/tendiwa/client
        mkdir  javac  jar  jar
        Building jar: /home/suseika/Projects/tendiwa/MainModule.jar
        description  tempfile  mkdir
        Created dir: /tmp/tendiwa373148820
        unjar
        Expanding: /home/suseika/Projects/tendiwa/MainModule.jar into /tmp/tendiwa373148820
        Expanding: /home/suseika/Projects/tendiwa/tendiwa-backend.jar into /tmp/tendiwa373148820
        Expanding: /home/suseika/Projects/tendiwa/tendiwa-ontology.jar into /tmp/tendiwa373148820
        copy
        Copying 1 file to /tmp/tendiwa373148820
        java
        Created item short_sword
        Created item short_bow
        Created item bucket
        Created item boot
        Created item steel_morningstar
        Created item rifle_ammo
        Created item handAxe
        Created item iron_armor
        Created item steel_mace
        Created item jacket
        Created item fedora
        Created item wooden_arrow
        Saving sources to /tmp/tendiwa373148820/ontology/src
        tendiwa/resources/SoundTypes.java
        tendiwa/resources/CharacterTypes.java
        tendiwa/resources/ObjectTypes.java
        tendiwa/resources/FloorTypes.java
        tendiwa/resources/ItemTypes.java
        tendiwa/resources/MaterialTypes.java
        mkdir  mkdir  mkdir
        Created dir: /tmp/tendiwa373148820/ontology/bin
        javac  jar
        Building jar: /home/suseika/Projects/tendiwa/tendiwa-ontology.jar
        echo
        Resources source code generated
        ant  property  property  compile  echo
        /home/suseika/Projects/tendiwa/client
        mkdir  javac  jar  jar  jar
        Building jar: /home/suseika/Projects/tendiwa/MainModule.jar
        mkdir  javac
        /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details.
            at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150)
            at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
            at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:390)
            at org.apache.tools.ant.Target.performTasks(Target.java:411)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
            at org.apache.tools.ant.Main.runBuild(Main.java:809)
            at org.apache.tools.ant.Main.startAnt(Main.java:217)
            at org.apache.tools.ant.Main.start(Main.java:180)
            at org.apache.tools.ant.Main.main(Main.java:268)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30)
        /home/suseika/Projects/tendiwa/client/build.xml (25:46): 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
        Compiling 3 source files to /home/suseika/Projects/tendiwa/client/bin
        /home/suseika/Projects/tendiwa/client/src/org/tendiwa/client/PixmapTextureAtlas.java:16: error: cannot access ImageElement
          class file for com.google.gwt.dom.client.ImageElement not found
        1 error
        /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details.
            at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150)
            at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
            at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:390)
            at org.apache.tools.ant.Target.performTasks(Target.java:411)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
            at org.apache.tools.ant.Main.runBuild(Main.java:809)
            at org.apache.tools.ant.Main.startAnt(Main.java:217)
            at org.apache.tools.ant.Main.start(Main.java:180)
            at org.apache.tools.ant.Main.main(Main.java:268)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30)
        /home/suseika/Projects/tendiwa/client/build.xml:25: Compile failed; see the compiler error output for details.
            at org.apache.tools.ant.taskdefs.Javac.compile(Javac.java:1150)
            at org.apache.tools.ant.taskdefs.Javac.execute(Javac.java:912)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
            at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:390)
            at org.apache.tools.ant.Target.performTasks(Target.java:411)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
            at org.apache.tools.ant.Main.runBuild(Main.java:809)
            at org.apache.tools.ant.Main.startAnt(Main.java:217)
            at org.apache.tools.ant.Main.start(Main.java:180)
            at org.apache.tools.ant.Main.main(Main.java:268)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at com.intellij.rt.ant.execution.AntMain2.main(AntMain2.java:30)
        Ant build completed with 3 errors one warning in 4s at 10/30/13 3:09 AM

    Here is the part of the ant file where this error appears:

        <path id="tendiwa.jars">
            <fileset dir="../libs">
                <include name="**/*.jar"/>
            </fileset>
            <pathelement path="../tendiwa-backend.jar"/>
            <pathelement path="../tendiwa-ontology.jar"/>
            <!--<fileset dir="/usr/share/java" includes="gwt*.jar"/>-->
        </path>
        <target name="compile">
            <ant dir="../MainModule" target="jar"/>
            <mkdir dir="bin"/>
            <javac destdir="bin" failonerror="true">
                <classpath>
                    <path refid="tendiwa.jars"/>
                    <!--temporary-->
                    <pathelement path="../tendiwa-ontology.jar"/>
                    <!--temporary-->
                    <pathelement path="../MainModule.jar"/>
                    <fileset dir="../libs" includes="**/*.jar"/>
                </classpath>
                <src>
                    <pathelement path="Desktop/src"/>
                    <pathelement path="src"/>
                </src>
            </javac>
        </target>

  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.

    Download and Install

    If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 tooling support and the WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that). The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well. It also installs the WCF RIA Services application templates and libraries. Today’s release includes the English edition of the Silverlight 4 tooling; localized versions will be available next month for the other Visual Studio languages as well.

    Silverlight Tooling Support

    Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI, including the ability to take advantage of layout containers and apply styles and resources. The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire up bindings on controls. The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF services, WCF RIA Services client proxies, or SharePoint lists. For example, let’s assume we add a “Person” class like below to our project. We could then add it to the Data Sources window, which will cause it to show up like below in the IDE. We can optionally customize the default UI control types that are associated with each property on the object. For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control. And then when we drag/drop the Person type from the Data Sources window onto the design surface, it will automatically create UI controls that are bound to the properties of our Person class. VS 2010 allows you to optionally customize each UI binding further by selecting a control, then right-clicking on any of its properties within the property grid and pulling up the “Apply Bindings” dialog. This brings up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc. In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML IntelliSense and code editing support, enabling a rich source editing environment.

    Silverlight 4 Tool Enhancements

    Today’s Silverlight 4 tooling release for VS 2010 includes a bunch of nice new features. These include:

    Support for Silverlight Out of Browser Applications and Elevated Trust Applications

    You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out-of-browser version of your Silverlight 4 application. You can then customize a number of “out of browser” settings of your application within Visual Studio. Notice above how you can now indicate that you want to run with elevated trust and with hardware graphics acceleration, as well as customize things like the window style of the application (allowing you to build a nicely polished window style for consumer applications).

    Support for Implicit Styles and “Go to Value Definition” Support

    Silverlight 4 now allows you to define “implicit styles” for your applications. This allows you to style controls by type (for example: have a default look for all buttons) and avoids having to explicitly reference styles from each control. In addition to honoring implicit styles on the designer surface, VS 2010 also now allows you to right-click on any control (or on one of its properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources. This makes it much easier to figure out questions like “why is my button red?”.

    Style IntelliSense

    VS 2010 enables you to easily modify styles you already have in XAML, and now you get IntelliSense for properties and their values within a style, based on the TargetType of the specified control. For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property). Notice how IntelliSense now automatically shows us properties for the Button control (even within the <Setter> element).

    Great Video - Watch the Silverlight Designer Features in Action

    You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20-minute Silverlight.TV video on Channel 9.

    WCF RIA Services

    Today we also shipped the V1 release of WCF RIA Services. It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight. It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication. WCF RIA Services provides a pattern for writing application logic that runs on the mid-tier and controls access to data for queries, changes, and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication, and authorization based on roles, by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply, it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server. It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle tiers. WCF RIA Services uses WCF for communication between the client and the server. It supports both an optimized .NET-to-.NET binary serialization format and a set of open extensions to the Atom format known as OData, as well as an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13-minute Channel 9 video.

    Putting it all Together - the Silverlight 4 Training Kit

    Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010, and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walk you through building an end-to-end application with them. The training kit is available for free and is a great way to get started.

    Summary

    I’m really excited about today’s release, as it really completes the Silverlight development story and delivers a great end-to-end runtime + tooling story for building applications. All of the above features are available for use both in VS 2010 and in the free Visual Web Developer 2010 Express edition, making it really easy to get started building great solutions. Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer or server are the CPU, memory, and hard disk. This post talks about how to get more details on these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports generated for CPU, memory, and IO. You can use DMVs and Extended Events, as well as Perfmon, to trace the data. Keeping with the T-SQL Tuesday subject of reporting, this post was created as a visual tutorial for quickly configuring Data Collection and generating the reports. From Books Online: "The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace." Let us go over the visual tutorial on how quickly Data Collection can be configured: expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the reports. Now let us see some additional screenshots of the reports. The reports are very self-explanatory but can be drilled down into for further details. Click on an image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com)
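    For those who prefer to script it, the same collection sets can be driven through the msdb collector procedures. A small sketch in Python, assuming pyodbc is available; the connection string and the collection set id are placeholders to adapt to your own server:

        import pyodbc

        # Placeholder connection string; point it at your own instance.
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;"
        )
        cursor = conn.cursor()

        # List the collection sets shipped with SQL Server 2008 and their state.
        cursor.execute(
            "SELECT collection_set_id, name, is_running "
            "FROM msdb.dbo.syscollector_collection_sets"
        )
        for set_id, name, is_running in cursor.fetchall():
            print(set_id, name, "running" if is_running else "stopped")

        # Start one of them by its id (assumed id; check the listing above
        # for the real one on your server).
        cursor.execute(
            "EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = ?",
            3,
        )
        conn.commit()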

  • Can splitting an Access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log into their own machines first, using a roaming profile. They then click an RDP connection file on the desktop and log into the remote server, via RDP, where they use MS Access as the shell; they don't have access to any of the explorer.exe features, such as the Start menu. The database they are logging into is more of an application: it provides functionality for entering data, querying data, and running reports via form-based menus. It all worked pretty well until we split the database, as it was nearing 2 GB in size. We moved the payroll data out into a separate partition, a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition. Now, while everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display, and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printer (on their local machines, not on the server) to Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print reports, they are required to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when using the Microsoft XPS Document Writer as the default printer. It's easy to tell if a user is having the problem, because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot access their reports, and a tooltip of Microsoft XPS Document Writer when they can view them. If the user's default printer is set to anything other than Microsoft XPS Document Writer on their local machine, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server. Telling the users to do this to print has been more of a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports, which is why I'm here asking this question. Also of note: all the printers on the network now show up on the server, so when users click File->Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, not all the printers on the network. My co-worker seems to think this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this, or how it is related to splitting the database.

  • cpufreq-selector, cpufreq-info reporting wrong max speed

    - by dty
    Hi, I've just built a new machine with a Core i5-760 CPU. Max speed is 2.80 GHz (turbo mode notwithstanding). I've also done a vanilla install of Ubuntu 10.10. I've added the cpufreq-selector applet to the top panel, and its menus only allow me to select up to 2.39 GHz. If I select the "performance" governor, it also shows 2.39 GHz. cpufreq-info reports:

        $ cpufreq-info
        cpufrequtils 007: cpufreq-info (C) Dominik Brodowski 2004-2009
        Report errors and bugs to [email protected], please.
        analyzing CPU 0:
          driver: acpi-cpufreq
          CPUs which run at the same hardware frequency: 0 1 2 3
          CPUs which need to have their frequency coordinated by software: 0
          maximum transition latency: 10.0 us.
          hardware limits: 1.20 GHz - 2.39 GHz
          available frequency steps: 2.39 GHz, 2.26 GHz, 2.13 GHz, 2.00 GHz, 1.86 GHz, 1.73 GHz, 1.60 GHz, 1.46 GHz, 1.33 GHz, 1.20 GHz
          available cpufreq governors: conservative, ondemand, userspace, powersave, performance
          current policy: frequency should be within 1.20 GHz and 2.39 GHz.
                          The governor "ondemand" may decide which speed to use within this range.
          current CPU frequency is 1.20 GHz.
          cpufreq stats: 2.39 GHz:13.74%, 2.26 GHz:0.09%, 2.13 GHz:0.08%, 2.00 GHz:0.08%, 1.86 GHz:0.07%, 1.73 GHz:0.07%, 1.60 GHz:0.08%, 1.46 GHz:0.08%, 1.33 GHz:0.11%, 1.20 GHz:85.61%  (15560)
        [...CPUs 1-3 elided, but similar...]

    Any idea how to get Ubuntu and the various tools to recognise this as a 2.80 GHz processor? And ideally to report the actual speed when running in turbo mode too, but that's not critical. Edit: I should probably add that the BIOS (& Windows) are quite happy that it's a 2.80 GHz CPU. Thanks.

  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. I had done the same on my old SSD, so I created a new ext4 partition on the new drive and just dd'ed the data across (sorry, I no longer remember the exact command I ran), and after reinstalling GRUB, I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as it was on the other disk. A disk that was half its size. So it looks as if I've copied the files across incorrectly and it has copied some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures; things that look at the partition (and not the files) seem to think the drive is 120 GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. But two questions:

    1. Is there a way to fix my filesystem so it knows what it's really on about? fsck, e2fsck, and badblocks all seem to be able to scan it without finding a problem with it.
    2. If I do plug my old SSD back in, copy the data off the PCI-E drive onto it, and then copy it back onto a fresh filesystem (e.g. juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and softlinks where they are.
