Search Results

Search found 856 results on 35 pages for 'spreadsheet'.

Page 28 of 35

  • Preserving NULL values in a Double Variable

    - by Sam
    Hi, I'm working on a VB.NET application which imports from an Excel spreadsheet.

        If rdr.HasRows Then
            Do While rdr.Read()
                If rdr.GetValue(0).Equals(System.DBNull.Value) Then
                    Return Nothing
                Else
                    Return rdr.GetValue(0)
                End If
            Loop
        Else

    I was using a string value to store the double values, and when preparing the database statement I'd use this code:

        If (LastDayAverage = Nothing) Then
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", System.DBNull.Value)
        Else
            command.Parameters.AddWithValue("@WF_LAST_DAY_TAG", Convert.ToDecimal(LastDayAverage))
        End If

    I now have some data with quite a few decimal places, and the data was put into the string variable in scientific notation, so this seems to be the wrong approach. It didn't seem right using the string variable to begin with. If I use a Double or Decimal type variable, the blank Excel values come across as 0.0. How can I preserve the blank values? Thanks

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive.

    Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but seems to have deteriorated further still, even in the face of improved provision of equipment.

    In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly-changing field.

    In retrospect, the assumption that specific training was required rather than an education was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too!

    So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old Blackberry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100, if you want networking.

    With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform, then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:

    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged, and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like the following:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model => model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:

    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit: More context. The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business.
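
    The "established set of ideas" being reached for here is usually described as a dependency graph with change-driven recalculation, much like the model a spreadsheet engine uses internally. Purely as an illustration (not the poster's CalculationManager, and in Python rather than C# only to keep it short), a minimal sketch of the idea might look like this:

        # Hypothetical names throughout; just the dependency-map idea, not a production design.
        class DependencyModel:
            def __init__(self):
                self.values = {}
                self.calcs = {}        # derived name -> (input names, function)
                self.dependents = {}   # input name -> set of derived names that read it

            def define(self, name, inputs, func):
                self.calcs[name] = (inputs, func)
                for i in inputs:
                    self.dependents.setdefault(i, set()).add(name)

            def set(self, name, value):
                self.values[name] = value
                self._recalculate(name)

            def _recalculate(self, changed):
                # Recompute everything that reads the changed variable, then cascade.
                for derived in self.dependents.get(changed, ()):
                    inputs, func = self.calcs[derived]
                    args = [self.values.get(i) for i in inputs]
                    if all(a is not None for a in args):
                        self.values[derived] = func(*args)
                        self._recalculate(derived)

        m = DependencyModel()
        m.define("BMI", ["Weight", "Height"], lambda w, h: w / h ** 2)
        m.set("Height", 1.80)
        m.set("Weight", 80.0)
        print(m.values["BMI"])   # ~24.7

    A fuller treatment adds cycle detection and a topological ordering so nothing is recalculated twice, which is broadly what reactive and incremental-computation libraries (and the functional style the poster asks about) package up.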

    Read the article

  • Windows 8 and the future of Silverlight

    - by Laila
    After Steve Ballmer's indiscreet 'MisSpeak' about Windows 8, there has been a lot of speculation about the new operating system. We've now had a few glimpses, such as the demonstration of 'Mosh' at the D9 2011 conference, and the YouTube video, which showed a touch-centric new interface for apps built using HTML5 and JavaScript. This has caused acute anxiety to the programmers who have followed the recommended route of WPF, Silverlight and .NET, but it need not have caused quite so much panic since it was, in fact, just a thin layer to make Windows into an apparently mobile-friendly OS. More worryingly, the press-release from Microsoft was at pains to say that 'Windows 8 apps use the power of HTML5, tapping into the native capabilities of Windows using standard JavaScript and HTML', as if all thought of Silverlight, dominant in WP7, had been jettisoned.

    Ironically, this brave new 'happening' platform can all be done now in Windows 7 and an iPad, using Adobe Air, so it is hardly cutting-edge; in fact the tile interface had a sort of Retro-Zune Metro UI feel first seen in Media Centre, followed by Windows Phone 7, with any originality leached out of it by the corporate decision-making process. It was kinda weird seeing old Excel stodgily running away alongside all the extreme paragliding videos. The ability to snap and resize concurrent apps might be a novelty on a tablet, but it is hardly so on a PC. It was at that moment that it struck me that here was a spreadsheet application that hadn't even made the leap to the .NET platform. Windows was once again trying to be all things to all men, whereas Apple had carefully separated Mac OS X development from iOS. The acrobatic feat of straddling all mobile and desktop devices with one OS is looking increasingly implausible. There is a world of difference between an operating system that facilitates business procedures and one that drives a device for playing pop videos and your holiday photos.

    So where does this leave Silverlight? Pretty much where it was. Windows 8 will support it, and it will continue to be developed, but if these press-releases reflect the thinking within Microsoft, it is no longer seen as the strategic direction. However, Silverlight is still there and there will be a whole new set of developer APIs for building touch-centric apps. Jupiter, for example, is rumoured to involve an App store that provides new, Silverlight-based "immersive" applications that are deployed as AppX packages. When the smoke clears, one suspects that the JavaScript/HTML5 route is merely an alternative development environment for Windows 8 to attract the legions of independent developers outside the .NET culture who are unlikely to ever take a shine to a more serious development environment such as WPF or Silverlight. Cheers, Laila

    Read the article

  • Goodbye, Spreadsheets and Hello Modern ERP

    - by Christine Randle
    By: Steve Cox, Vice President, Oracle Accelerate for Midsize Companies

    Signs of the resurging economy continue to sprout, with green shoots rising across different sectors and industries. With the economy on the rebound, businesses are increasing their investment in technology to keep up with growth and evolving demands; as proof, Gartner recently increased its worldwide IT spending forecast for 2012 to $3.6 trillion, anticipating a 3 percent increase from 2011 spending.

    One of the segments most reliant on technology to catapult growth is midsize companies – established businesses leveraging every competitive efficiency and advantage to compete with much larger enterprises. We find that to compete against the big guys, they need to create an internal technology infrastructure to fuel that growth. Goodbye, spreadsheets and hello modern ERP.

    While many businesses postponed upgrading or replacing financial and HR management systems during the recession, now some have started dusting off RFPs and revisiting technology options. Years ago, midsize organizations used spreadsheet-based systems and processes to manage employees, customers, partners, products and revenue. We've found that as companies scale up, they are apt to avoid heavily customizing their existing systems, and instead are more prone to standardize on a modern, enterprise-class ERP system.

    Modern ERP platforms enable growing companies to immediately address the most pressing challenges – accounting, talent management, customer retention, et al. Midsize companies implement these systems and processes to help them earn more, go public or expand globally.

    And today, choice is a primary factor when selecting an ERP solution. Businesses have more deployment options now than ever before, depending on their unique structures and needs. Whether the preference is on demand, cloud, hosted or on premise, a modular, scalable deployment is available to meet the need.

    With modern ERP systems, businesses that once struggled to do more with fewer resources have access to the same quality tools as larger competitors. By adopting top tier ERP systems tailored to individual business needs, midsize companies can support business operations while creating an enterprise system that seamlessly scales up to fuel future growth. The ERP decision that your company makes today will have legs to serve your business for years to come.

    Read the article

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York – “SSIS Dataflow Performance Tuning” (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog, I am a big fan of collecting and tracking both personal and public data, and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – this means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it’s an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc., and there’s no reason that we as session speakers cannot do the same thing.

    To that end I have started to store my feedback in an Excel spreadsheet of my own which, in the interests of transparency, is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot:

    I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future, and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing so that you get an insight into whether or not you are improving over time. It is not difficult to set up, and maintaining it as you do more sessions takes very little effort.

    Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let’s imagine a utopia where there were a standard set of questions for capturing session feedback that were leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don’t know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it but I live in hope. @Jamiet

    Read the article

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year. So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted. It just sat there. Even worse, while the system was trying in vain to delete those files I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem. I have a spreadsheet of all the different folders on there ( exactly 818 ) and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I think that the drive may be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file I/O error. I've looked, and it's not a permissions issue. The drive and all folders that are in it all belong to me, and they are all read / write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?

    Read the article

  • Differences between a retail and wholesale website, and how difficult would it be to integrate them? [closed]

    - by kmy
    I was told to come here for some guidance. I know that my questions can be quite broad, but I just really want some kind of direction to go towards (links, articles, etc). So here it goes: For the past month I have been planning and started implementing a wholesale website, but plans got changed on me and I need to allow both retail and wholesale orders, which will also depend on whether the account is a regular customer or also has a seller account. There are various types of products, and it has been decided that I would use subcategories as tables to organize them and allow various specific item fields. So here are some of my specific questions:

    - How different is the database structure for each item to be available for both wholesale and retail? Will it just be adding a quantity column and having two different prices, or is it much more complicated? I am unaware if there are any price tables, but would that be more difficult?
    - They use QuickBooks POS software; how difficult/inefficient will it be to update inventory quantities if they have like 2000-4000 products? And what would be the best ways to extract the information and update the system? I know it exports an Excel spreadsheet, so maybe a suggested PHP plugin for it?
    - How difficult is this project in your professional opinion, and how big should a team usually be? (At the moment it is just me.)
    - Projected time for planning, implementation, and quality assurance in accordance with team size?
    - I am an entry level developer and I know that I do not have enough knowledge to direct myself on this website... What kind of web developer skills will I need to find to help me? (My company is willing to hire people to get this website done as fast as possible.)
    - Also, what would be some great questionnaires to ask the product stakeholder about what he wants from the website? (He has made it clear he is neither computer nor internet savvy...)

    Sorry for the amount of questions; I'm an entry level web developer and do not have a senior to look up to for guidance. I have knowledge in HTML, CSS, JavaScript, jQuery, PHP, MySQL, phpMyAdmin. I have never used any frameworks like Zend, Magento, etc, and am doing this all from scratch. So far the website is built in an object oriented way, MVC architecture to the best of my abilities, but I have many doubts because I would really want to have this done right. If I have been unclear on some parts please tell me and I'll add more detail. I'm sure I'll have more questions later on; if anyone is open to that, please tell me. Thanks in advance!

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, where I'm outputting to a .csv file data like CPU %time, memory bytes free, hard disk I/O metrics like s/write and writes/s. The ones graphing the SQL servers are also collecting SQL stats. The web servers are collecting .NET-relevant stuff. I am aware of PAL, and used it as a template of what data to capture based on server type actually. I just don't think the output it generates is detailed or flexible enough - but it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce some numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data and assemble a graph out of that and look for spikes and trends, but I'm convinced there is some technique to this that makes it more manageable than looking at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that it is 5 minute averages, so a server could have a problem but it's intermittent and washes out in a 5 min average. What do you do with perfmon data that I could learn from?
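
    Excel aside, one low-effort way to get repeatable numbers out of a perfmon .csv is to summarise every counter column in a single pass. A sketch with pandas, where the file name is invented and the first column is assumed to be the perfmon timestamp:

        import pandas as pd

        # Hypothetical capture file; the first column is the (PDH-CSV 4.0) timestamp.
        df = pd.read_csv("server01_perfmon.csv", index_col=0, parse_dates=True)
        df = df.apply(pd.to_numeric, errors="coerce")  # perfmon leaves blanks for missed samples

        # Summary stats (mean, min, 95th percentile, max, ...) for every counter column.
        summary = df.describe(percentiles=[0.95]).T
        print(summary.round(2))

    Because it is scripted, this works as a quick "take a glance at the servers" routine, and the resulting table can still be pasted back into Excel for graphing the counters that look suspicious.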

    Read the article

  • Non-functioning AutoFilter on Locked Cells in Office 2008 - works in Office 2007

    - by Sarcas
    I'm looking into a problem for someone, who works in a mixed OS environment. She has created an Excel spreadsheet in Office 2007 to act as a directory, with AutoFilter turned on for names, email addresses, departments etc. To make sure no one accidentally edits email addresses (for example), she has protected the work sheet. Accessing this worksheet on a PC running Excel 2007, everything runs as you'd expect. You can filter the sheet by any of the auto-filtered columns, and because the sheet is protected, the data integrity is guaranteed. However, if you access the sheet on a Mac running Excel 2008, you can't filter the columns. What's strange here is that the AutoFilter dropdown arrows do appear in each of the column headers as you would expect. It's just that nothing happens if you click on them. If you select one of the column header cells (say, 'First Name') and check the menu: Data-Filter, you can see that AutoFilter is ticked. As another datapoint, you also seem to be able to apply an Advanced filter to these rows on the protected sheets. Does anyone know why this might be? It seems to be a compatibility issue between Excel 2007/2008 (I know the codebase isn't the same), but I can't find any references to it in documentation or forums anywhere, and it would be good to know if there's a way around this. Thanks!

    Read the article

  • On a dual-GPU laptop, is using the discrete GPU ever more power efficient?

    - by Mahmoud Al-Qudsi
    Given a laptop with a dual integrated/discrete GPU configuration, is it ever more power efficient to use the discrete GPU instead of the integrated? Obviously when writing an email or working on a spreadsheet, the integrated GPU will always use less power. But let's say you're doing something graphics-medium but not graphics-intensive/heavy - is there a point where it actually makes sense to fire up the discrete GPU, not for performance but for power-saving reasons? Off the top of my head, I can think of a scenario where the external GPU supports hardware decoding of a particular video codec - I'd imagine there is a "price point" where using the GPU saves more energy than decoding that fully in software would. But I think most GPUs, integrated or discrete, pretty much decode just the plain-Jane h264. But maybe there is something more complicated, perhaps if you're doing something like desktop/windowing animations or a flash animation on a website (not an embedded flash video) - maybe the discrete GPU will use enough less power to make up for switching to it? I guess this question can be summed up as to whether or not you can say beyond doubt that if you don't care for performance on a laptop with two GPUs, always use the integrated GPU for maximum battery life.

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings. EDIT: Simplest Automated Analysis and Chart Generation Tool? The above answer references Spotfire and Tableau, each of which has a free 14 and 30 day trial. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
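
    On the SQLite idea specifically: the append-then-chart loop described above can be scripted end to end, which removes the manual pivot step entirely. A minimal sketch in Python, with invented file, table and column names:

        import csv
        import sqlite3
        import matplotlib.pyplot as plt

        conn = sqlite3.connect("measurements.db")
        conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, value REAL)")

        # Append the latest CSV batch into the running database.
        with open("latest_batch.csv", newline="") as f:
            rows = [(r["ts"], float(r["value"])) for r in csv.DictReader(f)]
        conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
        conn.commit()

        # Re-draw the standard chart from everything stored so far.
        ts, values = zip(*conn.execute("SELECT ts, value FROM readings ORDER BY ts"))
        plt.plot(ts, values)
        plt.savefig("readings.png")

    It is not the single do-everything desktop program being asked for, but it does collapse the import, storage and charting steps into one repeatable action.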

    Read the article

  • EXCEL generate a character in one column of a row, only if other columns in the same row are not blank

    - by Simon
    My spreadsheet keeps track of when patients leave a specific floor of the hospital. There are columns in which where each patient goes is documented (1 column for "home", another column for "rehab facility", another for "other floor", etc.). Only when the patient leaves the hospital altogether does it count as a discharge, in which case the “discharge” column needs to have something in it. What formula can I use to generate, say, an "x" in the "discharge" column if certain "where they went" columns in the same row contain something, but not if there is nothing in any of them? Currently, to accomplish this I am using =IF(OR(M30,N30,O30,P30,Q30,R30,S30),"x") in row 3 of the "discharge" column (rows 1 and 2 are headings), and I have used "fill down" in all subsequent rows. To suppress the "FALSE" this formula yields when the condition is not true, I have applied conditional formatting to the entire column so that if the value is FALSE then the font is white (same colour as background). Is there a more efficient, elegant and idiot-proof way of doing this? The conditional formatting of text colour could potentially confuse everyone but the person who built the spreadsheet (me).
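
    One possible simplification, assuming the "where they went" cells (columns M to S here, adjusted to whichever row the data starts on) are either blank or filled in by hand: =IF(COUNTA(M3:S3)>0,"x","") in the discharge column. COUNTA counts the non-blank cells in that row, and returning an empty string instead of FALSE removes the need for the white-font conditional formatting.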

    Read the article

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it. The first time this happened, I copied the content of each sheet into a new, blank workbook, and saved the new workbook; this brought it back down to about 6MB. Now, it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month. As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:

        <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?

    Read the article

  • CSV engine on MySQL server

    - by Jeff
    I don't think that this is a programming question so I am going to ask it here - Reading the book High Performance MySQL, I read about the CSV engine. The paragraph says:

        The CSV engine can treat comma-separated values (CSV) files as tables, but it does not support indexes on them. This engine lets you copy files in and out of the database while the server is running. If you export a CSV file from a spreadsheet and save it in the MySQL server's data directory, the server can read it immediately. Similarly, if you write data to a CSV table, an external program can read it right away. CSV tables are especially useful as a data interchange format and for certain kinds of logging.

    What I get from this paragraph is that I can copy a .CSV file into the data directory of the database, and it should show as a table that is able to be read from. However, whenever I copy a test .csv file into the directory, it does not appear as a table. I can't access it. Also, I am using MySQL 5.5. Does anyone know why this is not working, or what I am doing wrong? Thanks

    Read the article

  • Migrating 10-15 Websites Running Linux, LAMP, RoR, WordPress

    - by Michael
    The task is to move 10-15 websites running Linux to new servers hosted by Amazon. These boxes are currently on dedicated servers. Some sites are running WordPress, some have a custom CMS, and others might have RoR applications. Unfortunately, there is sparse documentation regarding each site and how services/files are dependent on each other, which means there is a lot of detective work that needs to take place. My goal is to properly document each site, what makes them work, etc., so future admins have at least something to work with. Currently my strategy is to download each site so I have a backup of the files, then scan through them looking for configuration files -- db connections, apache configs, etc. Then, create a nice spreadsheet with these findings and migrate these out to the new server. My question to ServerFault is this: are there things you would look out for? Easier ways to handle this task that I'm missing? Points will be awarded to answers that help with efficiency. Thanks in advance.

    Read the article

  • Excel not properly recalculating values

    - by gms8994
    I have an excel sheet with values in it (this sheet is generated by a custom perl script, but I don't think that's where the problem lies). In it, I have a formula: =sum(indirect(concatenate(address(6,column()),":",address(17,column())))) The purpose of this formula is to give me the SUM() of the cells in the current column, between rows 6 and 17. In Gnumeric Spreadsheet, as soon as I open the file, this works. But in Excel (both 2003 and 2007), opening the file gives #VALUE! errors in the fields with this formula, stating that the INDIRECT call with the values $B$6:$B$17 will result in an error. Here's the kink in the issue. If I edit the field (via F2), and make no changes, and hit enter, the values update. Also, it seems, if I save the file as .xlsx (Excel 2007 format), the values update upon opening. Unfortunately, I'm not sure that creating an xlsx is a possibility with the modules that I'm using, and many of our clients probably wouldn't be able to use it anyway. Any suggestions? Editing 200+ files every month for each client isn't going to be feasible, so if there's something I'm missing, please let me know.

    Read the article

  • How to read an Excel file, get and set the information using POI

    - by user1399713
    I'm using Java to read a form that is in an Excel spreadsheet which the user fills in with information about a geometric shape. Ex:

        Shape:     _________
        Color:     _________
        Area:      _________
        Perimeter: _________

    So far, with the code I have, I can read what I want from the form and print out the values of Shape, Color, Area and Perimeter.

        public class RangeSetter {

            /**
             * @param args
             * @throws IOException
             */
            public static void main(String[] args) throws IOException {
                FileInputStream file = new FileInputStream(new File("test2.xls")); // C:\Users\Yo\Documents

                // Setup code
                String cname = "Shape";
                HSSFWorkbook wb = new HSSFWorkbook(file); // retrieve workbook

                // Retrieve the named range
                // Will be something like "$C$10,$D$12:$D$14";
                int namedCellIdx = wb.getNameIndex(cname);
                Name aNamedCell = wb.getNameAt(namedCellIdx);

                // Retrieve the cell at the named range and test its contents
                // Will get back one AreaReference for C10, and
                // another for D12 to D14
                AreaReference[] arefs = AreaReference.generateContiguous(aNamedCell.getRefersToFormula());
                for (int i = 0; i < arefs.length; i++) {
                    // Only get the corners of the Area
                    // (use arefs[i].getAllReferencedCells() to get all cells)
                    CellReference[] crefs = arefs[i].getAllReferencedCells();
                    for (int j = 0; j < crefs.length; j++) {
                        // Check it turns into real stuff
                        Sheet s = wb.getSheet(crefs[j].getSheetName());
                        Row r = s.getRow(crefs[j].getRow());
                        Cell c = r.getCell(crefs[j].getCol());
                        if (c != null) {
                            switch (c.getCellType()) {
                                case Cell.CELL_TYPE_STRING:
                                    System.out.println(c.getStringCellValue());
                            }
                        }
                    }
                }
            }
        }

    What I want to do is to create a method that gets that information and another that sets it. So far I can only print to the console.

    Read the article

  • How to read cell data in Excel and output to the command prompt

    - by Max Ollerenshaw
    Hi All, I'm a sys admin and I am trying to learn how to use PowerShell... I have never done any type of scripting or coding before, and I have been teaching myself online by learning from the TechNet script centre and online forums. What I am trying to accomplish is to open an Excel spreadsheet, get information from it (usernames and passwords) and then output it into the command prompt in PowerShell. Whenever I try to do this I get an 'Exception calling "InvokeMember"' error. Anyway, here is the code I have so far:

        function Invoke([object]$m, [string]$method, $parameters)
        {
            $m.PSBase.GetType().InvokeMember(
                $method,
                [Reflection.BindingFlags]::InvokeMethod,
                $null, $m, $parameters, $ciUS )
        }

        $ciUS = [System.Globalization.CultureInfo]'en-US'

        $objExcel = New-Object -comobject Excel.Application
        $objExcel.Visible = $False
        $objExcel.DisplayAlerts = $False

        $objWorkbook = Invoke $objExcel.Workbooks.Open "C:\PS\User Data.xls"
        Write-Host "Numer of worksheets: " $objWorkbook.Sheets.Count

        $objWorksheet = $objWorkbook.Worksheets.Item(1)
        Write-Host "Worksheet: " $objWorksheet.Name

        $Forename = $objWorksheet.Cells.Item(2,1).Text
        $Surname = $objWorksheet.Cells.Item(2,2).Text
        Write-Host "Forename: " $Forename
        Write-Host "Surname: " $Surname

        $objExcel.Quit()
        If (ps excel) { kill -name excel}

    I have read many different posts on forums and articles on how to try and get around the en-US problem, but I cannot seem to get around it and hope that someone here can help! Here is the exception I mentioned:

        Exception calling "InvokeMember" with "6" argument(s): "Method 'System.Management.Automation.PSMethod.C:\PS\User Data.xls' not found."
        At C:\PS\excel.ps1:3 char:33
        + $m.PSBase.GetType().InvokeMember <<<< (
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException

        Numer of worksheets:
        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:18 char:45
        + $objWorksheet = $objWorkbook.Worksheets.Item <<<< (1)
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull

        Worksheet:
        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:21 char:37
        + $Forename = $objWorksheet.Cells.Item <<<< (2,1).Text
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull

        You cannot call a method on a null-valued expression.
        At C:\PS\excel.ps1:22 char:36
        + $Surname = $objWorksheet.Cells.Item <<<< (2,2).Text
            + CategoryInfo          : InvalidOperation: (Item:String) [], RuntimeException
            + FullyQualifiedErrorId : InvokeMethodOnNull

        Forename:
        Surname:

    This is the first question I have ever asked, try to be nice! :)) Many Thanks Max

    Read the article

  • Deciphering Seagate drive model numbers?

    - by Stefan Lasiewski
    I'm comparing Seagate's Enterprise and Desktop drives for a variety of old and new servers. These servers come from different generations, so options like size (73GB, 2TB) and interface (SATA vs SAS 3.0Gbps vs SAS 6Gbps vs SCSI Ultra320) are widely variable. I'm trying to compare the sizes, speeds and interfaces, but I'm getting thrown off by different models. Also, their website is not the best. Does anyone know of a documented explanation of the Seagate model numbers? And is there a single spreadsheet which compares the features for all drives (or all 'Enterprise' drives)? Seagate drives have model numbers like this:

    - Model ST3600057SS, 6-Gb/s SAS, 600 GB, None, at Cheetah® 15K Hard Drives
    - Model ST373455LW, Ultra320 SCSI, 73.4 GB, 68-Pin LW, at Cheetah® 15K Hard Drives
    - Model ST32000644NS, SATA 3Gb/s, 2 TB, None, at Constellation™ ES Hard Drives
    - Model ST973452SS, 6-Gb/s SAS, 73 GB, None, at Savvio® 15K Hard Drives
    - Model ST9200011FS, SATA 3Gb/s, 200 GB, at Pulsar™ Solid State Drives

    I understand the model numbers read something like this:

        ST - SOMETHING1 - SIZE - SOMETHING2 - INTERFACE

    Where the fields mean something like this:

    - ST : For 'Seagate'? 'Seagate Technologies'?
    - SOMETHING1 - This field has a number, but I'm not sure what that represents.
    - SIZE - Size in Gigabytes. This is a number like '73' or '300' or '2000'.
    - SOMETHING2 - This field also has a number, but I'm not sure what it means.
    - INTERFACE - This field seems to indicate the interface. 'SS' means SAS, 'FC' means Fibre Channel, but I don't see how to distinguish between 6Gbps SAS and 3Gbps SAS, or different SATA or FC speeds.

    I don't see a field which indicates the RPM (15K, 10K, 7.2K) etc. Is this part of the model number?

    Read the article

  • Numbering grouped data in Excel

    - by Jeff
    I have an Excel spreadsheet (2010) with data similar to this:

        Dogs  Brown  Nice
        Dogs  White  Nice
        Dogs  White  Moody
        Cats  Black  Nice
        Cats  Black  Mean
        Cats  White  Nice
        Cats  White  Mean

    I want to group these animals but I only care about species and color. I don't care about disposition. I want to assign group numbers to the set as shown here.

        1  Dogs  Brown  Nice
        2  Dogs  White  Nice
        2  Dogs  White  Moody
        3  Cats  Black  Nice
        3  Cats  Black  Mean
        4  Cats  White  Nice
        4  Cats  White  Mean

    I was able to select all the species and colors, then from the data tab select 'advanced', then 'unique records only'. This collapsed the data so that I could number the visible rows. Then when I 'cleared' the filter I could easily just fill the blank areas under the numbers with the number above. The problem is that my real data has far too many rows for this to be practical. Also, the trick about entering 1 in the first cell, 2 in the cell below, selecting both then dragging the corner down to 'auto-number' doesn't seem to work when you're viewing filtered rows. Any way to do this?
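
    One common pattern, assuming the rows are already sorted so identical species/colour pairs sit together (as in the example), with species in column A, colour in B and the group number going in C: put 1 in the first data row of column C, enter =IF(AND(A2=A1,B2=B1),C1,C1+1) in the next row, and fill down. The number only increments when the pair changes, and filling the formula down scales to any number of rows without the Advanced Filter step.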

    Read the article

  • How to Create an XML File from an Excel File

    - by nicorellius
    I have an Excel spreadsheet file that has 5 or so columns and hundreds of lines. I need to convert this (export these data) to an XML file. I'm interested in three of the columns and they correspond to these XML tags, where info1 can be followed by info2, info3, etc...

        <?xml version="1.0" encoding="UTF-8" ?>
        <list>
          <info1>
            <id>111</id>
            <value>222</value>
            <des>333</des>
          </info1>
        </list>

    If possible, I would like to avoid building this XML manually. It wouldn't be too much trouble to rearrange the Excel file such that the three columns I'm interested in were in their own file. But then I would need to export those data into an XML file of the above format. Any ideas?
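
    If scripting is an option, the conversion is only a few lines once the three columns of interest are saved out of Excel as a CSV. A sketch in Python, where the CSV path and the id/value/des column headers are assumptions:

        import csv
        import xml.etree.ElementTree as ET

        root = ET.Element("list")
        with open("export.csv", newline="") as f:
            for i, row in enumerate(csv.DictReader(f), start=1):
                item = ET.SubElement(root, "info%d" % i)      # <info1>, <info2>, ...
                ET.SubElement(item, "id").text = row["id"]
                ET.SubElement(item, "value").text = row["value"]
                ET.SubElement(item, "des").text = row["des"]

        ET.ElementTree(root).write("list.xml", encoding="UTF-8", xml_declaration=True)

    The same loop could read the .xls directly with a spreadsheet library instead of a CSV export; the CSV route just keeps the example dependency-free.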

    Read the article

  • How can I filter data based on items in a list?

    - by user2964366
    How can I filter entries containing any specific word in a list of words? For example, I have a list of road names in Singapore:

        Amoy Street, Singapore
        Ann Siang Hill
        Anson Road
        Arab Street
        Armenian Street, Singapore
        Baghdad Street (Singapore)
        Balestier Road
        Banda Street
        Bartley Road
        Beach Road, Singapore
        Bencoolen Street
        Bernam Street
        Boat Quay
        Boon Tat Street
        Boundary Road, Singapore
        Bras Basah Road
        Bugis Street
        Bukit Batok Road
        Bukit Pasoh Road
        Bukit Timah Road
        Cantonment Road, Singapore
        Choa Chu Kang Road
        Clarke Quay
        Clementi Road
        Club Street
        Collyer Quay
        Connaught Drive
        Craig Road (Singapore)
        Cross Street

    and many more. My spreadsheet has a large number of entries like the following, which may or may not contain road names mentioned in my list:

        Saw an accident at Thomson Road
        Found this by accident
        6 vehicles crashed at Balestier Road
        I wanna crash now. So tired.
        Bus collides with bicycle at Arab Street.
        Accident at City Road.
        You can crash my house later.

    How do I filter to return entries that contain any road name identified in the list of names? How do I introduce an array/list of road names into Microsoft Excel and then relate it to a filter function?
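
    Inside Excel, one workable pattern is a helper column such as =SUMPRODUCT(--ISNUMBER(SEARCH(RoadList,A2)))>0, where RoadList is a named range holding the road names; it returns TRUE when the entry mentions any listed road, and the sheet can then be filtered on that column. Outside Excel the same check is a few lines of script; a sketch in Python with illustrative data:

        # Illustrative only: the road names and entries would come from the spreadsheet.
        road_names = ["Thomson Road", "Balestier Road", "Arab Street", "Beach Road, Singapore"]

        def mentions_road(entry, roads=road_names):
            text = entry.lower()
            # Strip the ", Singapore" style disambiguation before matching.
            return any(road.split(",")[0].lower() in text for road in roads)

        entries = [
            "Saw an accident at Thomson Road",
            "I wanna crash now. So tired.",
            "Bus collides with bicycle at Arab Street.",
        ]
        print([e for e in entries if mentions_road(e)])   # first and third entries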

    Read the article
