Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 13/1257 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Count a row VS Save the Row count after each update

    - by SAFAD
    I want to know whether saving a row count in a table is better than counting the rows on every request. Quick example: a visitor opens a clan's group page, which displays the clan information and the members who have joined it. Should the page look up all the users who joined the clan and count them, or just display a member count that is already saved in a table? I think the first option cannot get out of sync (it cannot be manipulated), but it might cost performance. Your ideas?
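
    For illustration, here is a minimal SQL sketch of the two options (the table and column names are invented for the example):

        -- Option 1: count the members on every page view (always accurate, but does work on each request)
        SELECT COUNT(*) AS member_count FROM clan_members WHERE clan_id = 42;

        -- Option 2: keep a pre-computed counter on the clan row (fast to read, but must be kept in sync)
        SELECT member_count FROM clans WHERE id = 42;
        UPDATE clans SET member_count = member_count + 1 WHERE id = 42;  -- run when a member joins
        UPDATE clans SET member_count = member_count - 1 WHERE id = 42;  -- run when a member leaves

    With an index on clan_members.clan_id, Option 1 usually stays cheap until the member list grows very large.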

    Read the article

  • SQL SERVER – What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1

    - by Pinal Dave
    This is the first part of the series on Incremental Statistics. Here is the index of the complete series:
    - What is Incremental Statistics? – Performance improvements in SQL Server 2014 – Part 1
    - Simple Example of Incremental Statistics – Performance improvements in SQL Server 2014 – Part 2
    - DMV to Identify Incremental Statistics – Performance improvements in SQL Server 2014 – Part 3

    Statistics are considered one of the most important aspects of SQL Server performance tuning. You have probably often heard the phrase, in relation to performance tuning: "Update statistics before you take any other steps to tune performance." Honestly, I have said that many times, and many times I have personally updated statistics before starting any performance tuning exercise. You may agree or disagree with the point, but there is no denying that statistics play an extremely vital role in performance tuning. SQL Server 2014 has a new feature called Incremental Statistics. I have been playing with this feature for quite a while and I find it very interesting. After spending some time with it, I decided to write about the subject here.

    New in SQL Server 2014 – Incremental Statistics
    Well, it seems like lots of people want to start using SQL Server 2014's new Incremental Statistics feature. However, let us understand what this feature actually does and how it can help. I will try to simplify the feature before I start working on the demo code.

    Code for all versions of SQL Server
    Here is code which you can execute on all versions of SQL Server and which will update the statistics of your table. The keyword you should pay attention to is WITH FULLSCAN. It scans the entire table and builds brand new statistics which SQL Server can use for better estimation of your execution plan.

        UPDATE STATISTICS TableName(StatisticsName) WITH FULLSCAN

    Who should learn about this? Why?
    If you are using partitions in your database, you should consider implementing this feature. Otherwise, this feature is pretty much not applicable to you. If you are using a single partition and your table data is in a single place, you still have to update your statistics the same way you have been doing. If you are using multiple partitions, this may be a very useful feature for you. In most cases, users have multiple partitions because they have lots of data in their table. Each partition holds the data which belongs to it, and it is very common for each partition to be populated separately in SQL Server.

    Real World Example
    For example, if your table contains data related to sales, you will have plenty of entries in your table. It would be a good idea to divide the partitions over multiple filegroups; for example, you can divide this table into 3 semesters, 4 quarters, or even 12 months. Let us assume that we have divided our table into 12 different partitions. For the month of January our first partition is populated, and for the month of February our second partition is populated. Now assume that you have plenty of data in your first and second partitions, the month of March has just started, and your third partition has started to populate. If, for some reason, you want to update your statistics, what will you do?

    In SQL Server 2012 and earlier versions
    You will just use the WITH FULLSCAN code above and update the entire table. That means even though only the third partition has new data, you will still update the entire table. This is a VERY resource-intensive process, as you will be updating the statistics of partitions 1 and 2, where the data has not changed at all.

    In SQL Server 2014
    You will just update Partition 3. There is a special syntax where you can now specify which partition you want to update. The impact of this is that SQL Server smartly merges the new data with the old statistics and updates the whole statistics object without doing a FULLSCAN of your entire table. This has a huge impact on performance. Remember that the new feature in SQL Server 2014 does not change anything besides the capability to update a single partition. However, there is one more aspect which is indeed attractive. Previously, when 20% of the table data had changed, a statistics update was triggered. Now the same threshold is applicable to a single partition: if a partition sees a 20% data change, that also triggers a partition-level statistics update which, when merged into your final statistics, will give you better performance.

    In summary
    If you are not using partitions, this feature is not applicable to you. If you are using partitions, this feature can be very helpful to you. Tomorrow: we will see working code for SQL Server 2014 Incremental Statistics.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SQL Statistics, Statistics
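
    To make the partition-level syntax concrete, here is a minimal T-SQL sketch (the table, column and statistics names are invented for the example; the author's actual demo code follows in Part 2 of the series):

        -- Create a statistics object that tracks data per partition (SQL Server 2014 and later)
        CREATE STATISTICS Stat_Sales_OrderDate
        ON dbo.Sales (OrderDate)
        WITH INCREMENTAL = ON;

        -- Later, refresh only partition 3 instead of running a FULLSCAN over the whole table
        UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
        WITH RESAMPLE ON PARTITIONS (3);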

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, they utilize pretty much every bit of the information in the database. So currently, when I query the database, I retrieve everything and store the information in lists. Here's what I've got:

        using (var dataReader = _connection.Select(query))
        {
            if (dataReader.HasRows)
            {
                while (dataReader.Read())
                {
                    _date.Add(Convert.ToDateTime(dataReader["date"]));
                    _measured.Add(Convert.ToDouble(dataReader["measured_dist"]));
                    _bit.Add(Convert.ToDouble(dataReader["bit_loc"]));
                    _psi.Add(Convert.ToDouble(dataReader["pump_press"]));
                    _time.Add(Convert.ToDateTime(dataReader["timestamp"]));
                    _fob.Add(Convert.ToDouble(dataReader["force_on_bit"]));
                    _torque.Add(Convert.ToDouble(dataReader["torque"]));
                    _rpm.Add(Convert.ToDouble(dataReader["rpm"]));
                    _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"]));
                    _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"]));
                    _pullForce.Add(Convert.ToDouble(dataReader["pull_force"]));
                    _gpm.Add(Convert.ToDouble(dataReader["flow"]));
                }
            }
        }

    I then use these lists for the calculations. Obviously, the more information there is in the database, the longer the initial query will take. I'm curious whether there is a way to increase the performance of the query at all. Thanks for any and all help.

    EDIT: One of the report rows is called Daily Drilling Hours. For this calculation, I use this method:

        // Retrieves the timestamps where measured depth == bit depth and PSI >= 50
        public double CalculateDailyProjectDrillingHours(DateTime date)
        {
            var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList();
            return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero));
        }

        // Checks that the interval is less than 10 seconds, then adds the interval to the total time
        private static TimeSpan TimeCalculations(IList<DateTime> timeStamps)
        {
            var interval = new TimeSpan(0, 0, 10);
            var totalTime = new TimeSpan();
            TimeSpan timeDifference;
            for (var j = 0; j < timeStamps.Count - 1; j++)
            {
                if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval)
                {
                    timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]);
                    totalTime = totalTime.Add(timeDifference);
                }
            }
            return totalTime;
        }
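
    One way to reduce the amount of work done in C# is to let SQLite do the filtering for the drilling-hours calculation instead of loading every column of every row into lists first. A rough sketch, assuming the readings live in a single table (the table name, parameter name and date handling are illustrative; the column names are taken from the reader code above):

        SELECT timestamp
        FROM readings
        WHERE date = @reportDate
          AND measured_dist = bit_loc
          AND pump_press >= 50
        ORDER BY timestamp;

    The consecutive-timestamp comparison in TimeCalculations would still run in C#, but only over the rows that actually matter for that day.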

    Read the article

  • Best Web Site Copying Software

    - by GregH
    I just wanted to get some opinions on the best "web site copying" software out there (free or commercial is fine). I have a site that I've recently become responsible for managing, and the previous consultant has not provided operating system access. As such, the plan is to re-host the web site. I realize there are a lot of different issues to consider in doing this. However, I don't have much choice in the matter now. The plan is to use web site copying software (à la HTTrack) to "rip" the web site, and then modify what is downloaded back into a maintainable site. This, of course, involves HTML, CSS, JavaScript, etc. on the front end. I'd like to recover as much of the site as possible to make re-creating it as easy as possible.
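
    For what it's worth, HTTrack itself has a command-line mode; a minimal invocation looks roughly like the following (the URL and output directory are placeholders):

        # Mirror the site into the ./site-mirror directory
        httrack "http://www.example.com/" -O ./site-mirror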

    Read the article

  • Pricing personalized software?

    - by john ryan
    Currently I'm working on a Purchase Order System application project for a small-scale company. The software I am working on is personalized to their business requirements. The company asked me to create a proposal that includes the price of the application so they can process the check for me. The person who gave me this project is the company's supervisor, and also a former supply chain supervisor at my previous employer, where I worked on some of their applications back then, so I want to be fair. This is my first time creating an application as a sideline, so I have never priced software before, even though I work as a full-time web developer at a big company. Any tips and help?

    Read the article

  • Software Architecture - From design to successful implementation

    - by user20358
    As the subject goes: once a software architect puts down the high-level design and approach for software that is to be developed from scratch, how does the team ensure that it is implemented successfully? To my mind the following things will need to be done:
    - Proper understanding of requirements
    - Setting down coding practices and guidelines
    - Regular code reviews to ensure the guidelines are being adhered to
    - Revisiting the requirements phase and making necessary changes to the design based on client input if there are any changes to requirements
    - Proper documentation of what is being done in code
    - Proper documentation of requirements and changes to them
    - Last but not least, implementing the design via object-oriented code where appropriate
    Did I miss anything? I would love to hear about any mistakes that you have learned from in your project experiences: what went wrong, and what could have been done better. Thanks for taking the time.

    Read the article

  • My Rhythmbox plugin can't meet the Ubuntu Software Center "my-app" requirements

    - by allquixotic
    At http://developer.ubuntu.com/publish/my-apps-packages/ the following technical requirements are cited. In order for your application to be distributed in the Software Centre it must:
    - Be in one, self-contained directory when installed
    - Be able to be installed into the /opt/ directory (*)
    - Be executable by all users from the /opt/ directory (**)
    - Write all configuration settings to ~/.config/ (this can be one file or a directory containing multiple configuration files)
    A Rhythmbox plugin cannot satisfy any of these requirements: Rhythmbox has compiled-in locations where it looks for installed plugins. So, is there no way for me to publish my app in the Ubuntu Software Center? Would it have to go into the Universe repository (which would require tremendously more work and political maneuvering to get it accepted)? I already have all the Debian package infrastructure built for it, so I have made a PPA for it.

    Read the article

  • Quantifying the cost of a software project

    - by The Elite Gentleman
    Disclaimer: I didn't know exactly where to put this question. If you feel that this question is not suitable for Programmers @ StackExchange, feel free to migrate it. Background: Broadening my last question, there is an open request for tender for a software system and I have decided to take it on. I am a software developer & engineer by profession and, in this tender process, I have to put a price on my bid. I have been provided documentation consisting of functional and non-functional requirements only. I have to put a project manager's cap on and think of all aspects, e.g. the cost of implementing the project, the resources needed, etc. My question is: is there a project framework I can follow that breaks the project cycle into steps with a corresponding cost aspect, or how would I best go about calculating/approximating the cost of the project?

    Read the article

  • Developing OpenGLES2 apps for Ubuntu Software Center

    - by Bram
    I have a game for iOS and Android that I now want to port to Ubuntu. I plan to distribute it with the Ubuntu Software Center, preferably for free with an in-app purchase. My codebase is currently based on OpenGL ES2 and written in C++. I could rewrite it for desktop OpenGL, but having programmable shaders is a must; fixed-pipeline OpenGL will not suffice. Is there a feature in place that lets you specify OpenGL requirements in the Ubuntu Software Center? I want to make sure that only Ubuntu users with compatible hardware will be able to download my game. Are there any APIs I could use for getting a suitable OpenGL context, or am I expected to just use GLX for this? Or is the use of GTK mandatory?

    Read the article

  • Optical Character Recognition software recommendations?

    - by Tim
    I have seen some ebooks/papers that were apparently scanned from their paper versions, yet the text in them can amazingly be copied out. I suppose the scanned versions must have been processed by some Optical Character Recognition software. So I would like to know: what are the recommended Optical Character Recognition packages, especially ones that are either for Ubuntu or free? If the Windows ones are far superior, please let me know as well. I am particularly interested in OCR tools that can accept a scanned PDF file as input and produce as output another PDF file that looks the same as the input but with its text copyable. Thanks and regards! Please limit yourself to one program per answer.

    Read the article

  • How do I completely remove software? [closed]

    - by Dustin
    Possible Duplicate: Why are the hidden settings-folders not deleted when purging an application? OK, I have 8 GB on my desktop, dual booting with Windows 7. I installed a few games: AssaultCube, Tremulous, and Spring. I removed them using 'remove --purge', only to find that they are still lingering; purge does not truly purge them. I need them completely gone, not secretly hogging space after I purged them. How can I completely remove software, so that it is gone and not taking any more of my precious space? Shrinking Windows 7 and growing Ubuntu is not an answer I will accept; I want to know how to completely remove software. Thank you for your time and your answers.
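
    For context, a typical sequence looks like the sketch below (the package and folder names are only examples; apt never touches hidden per-user folders in your home directory, which is usually where the leftovers live, as the linked duplicate explains):

        # Remove the package together with its system-wide configuration files
        sudo apt-get purge assaultcube
        # Remove dependencies that nothing else needs any more
        sudo apt-get autoremove --purge
        # Per-user data is left behind by design; check for leftover hidden folders by hand
        ls -d ~/.assaultcube ~/.config/assaultcube 2>/dev/null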

    Read the article

  • Ubuntu Software Center (10.04LTS) Missing Packages

    - by Chris
    I recently downgraded from 11.10 to 10.04 LTS for compatibility and support of specific development tools. The Ubuntu Software Center is missing many packages that I had access to in 11.10, most notably LibreOffice (also development tools). Is there a way to update the software sources to find the missing packages, or are they incompatible with 10.04? Synaptic does not have LibreOffice either. I feel like I am working with a gimped version of USC.
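
    As a starting point for checking the software sources themselves, something like the following can confirm which repository components are enabled and whether a package exists in them at all (shown only as a sketch, to be adapted):

        # See which components (main, universe, multiverse, ...) are enabled
        grep -E '^deb ' /etc/apt/sources.list
        # Refresh the package index, then search for the package
        sudo apt-get update
        apt-cache search libreoffice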

    Read the article

  • Using cracked software and tools [closed]

    - by Lena Aslo
    I am seeing people complaining about expensive tools such as Dreamweaver or Photoshop. I am just wondering about that, because everyone knows that they can get this software running for free (if it is done illegally). Why don't they just use a cracked version? Is it so likely to get caught? I feel that nowadays a lot of people are using cracked software but whenever the topic is mentioned, they ALL say PSSST!!! or start criticizing it, even though they are doing it themselves...

    Read the article

  • Software patents [closed]

    - by user71622
    Not exactly sure where I should post this, but I have a question about filing for software patents. If I have an idea for a UI feature, how do I go about patenting that feature? I don't have any code written, and I'm afraid my coding skills aren't up to snuff to code the thing I imagined, but aside from that, can anyone give me any general guidance and info? I've gone through the CIPO site (Canada's patent office) but haven't come away with enough information on what I'm trying to patent. What I have understood is that I have to be very thorough in describing what I'm trying to patent and in showing that it could work. If anyone has gone through the process of patenting software, can you tell me about your experience? I would like to hear about US and Canadian experiences. Thank you!

    Read the article

  • Software Center crashes and terminal errors

    - by user97521
    *Note: I'm a new user of Ask Ubuntu and I've only recently switched to Ubuntu 12.04.* When I try to open the Software Center (Ubuntu 12.04 32-bit) it flashes open, loads for maybe 1-2 seconds, and then closes. When I try using:

        sudo apt-get purge software-center
        sudo apt-get install
        sudo apt-get update
        sudo apt-get upgrade

    I get this in the terminal:

        Reading package lists... Error!
        E: Problem parsing dependency Depends
        E: Error occurred while processing printer-driver-hpcups (NewVersion2)
        E: Problem with MergeList /var/lib/dpkg/status
        E: The package lists or status file could not be parsed or opened.

    *The problem fixed itself after I shut down my laptop for the night and turned it back on to check my e-mail this afternoon. If anyone could tell me how to fix this problem in the future, please do; I would like to learn about these kinds of things because I don't plan on putting Windows on my laptop again. :P*

    Read the article

  • Documenting software architectures that serve multiple markets

    - by wsb3383
    Hello, I'm the lead developer/architect wanna-be on a J2EE-based system/platform at work that serves both the real estate and automotive markets. The system consists of a set of database back ends, web services, and two web clients. The platform ends up serving 3 different products: an internal vehicle inventory system for use by company analysts, an external dealer management system (commercialized product), and a real estate inventory system (commercialized). In other words, it follows a software product lines approach. My question is: I'm having trouble communicating to other technical people, and to some business people, how this platform architecture is one system that serves multiple markets (by leveraging some existing assets combined with minor modifications). Is there a formal modeling language that can simplify communicating this intent? I should note that I haven't read much about software product lines, so I'm not sure if there is a standard modeling approach to SPL that I'm not aware of. I'm also interested in knowing whether there are special configuration management practices for such systems. Thanks.

    Read the article

  • How to use caching to increase render performance?

    - by Christian Ivicevic
    First of all I am going to cover the basic design of my 2D tile-based engine, written with SDL in C++, and then I will point out what I am up to and where I need some hints.

    Concept of my engine
    My engine uses the concept of GameScreens, which are stored on a stack in the main game class. The main methods of a screen are usually LoadContent, Render, Update and InitMultithreading (I use the last one because I am using v8 as a JavaScript bridge to the engine). The main game loop then renders the top screen on the stack (if there is one; otherwise, it exits the game); actually it calls the render methods, but stores all items to be rendered in a list. After gathering all this information, methods like SDL_BlitSurface are called by my GameUIRenderer, which draws the enqueued content and then draws some overlay. The code looks like this:

        while (Game is running)
        {
            Handle input
            if (Screens on stack == 0) exit
            Update timer etc.
            Clear the screen
            Peek the screen on the stack and collect information on what to render
            Actually render the enqueued screen stuff and some overlay etc.
            Flip the screen
        }

    The GameUIRenderer uses, as hinted, a std::vector<std::shared_ptr<ImageToRender>> to hold all the necessary information, described by this class:

        class ImageToRender
        {
        private:
            SDL_Surface* image;
            int x, y, w, h, xOffset, yOffset;
        };

    This bunch of attributes is usually needed when I have a texture atlas with all tiles in one SDL_Surface; the engine then crops one specific area and draws it to the screen. The GameUIRenderer::Render() method then just iterates over all elements and renders them, something like this:

        std::for_each(
            this->m_vImageVector.begin(),
            this->m_vImageVector.end(),
            [this](std::shared_ptr<ImageToRender> pCurrentImage)
            {
                SDL_Rect rc = { pCurrentImage->x, pCurrentImage->y, 0, 0 };
                // For the sake of simplicity, ignore offsets...
                SDL_Rect srcRect = { 0, 0, pCurrentImage->w, pCurrentImage->h };
                SDL_BlitSurface(pCurrentImage->pImage, &srcRect, g_pFramework->GetScreen(), &rc);
            }
        );
        this->m_vImageVector.clear();

    Current ideas which need to be reviewed
    The approach described above works really well and, IMHO, has a good structure; however, the performance could definitely be increased. I would like to know what you suggest and how to implement efficient caching of surfaces etc., so that there is no need to redraw the same scene over and over again. The map itself is almost static; only when the player moves do we need to move the map. Furthermore, animated entities would require either updates of the whole map or updates of only the specific areas the entities are currently moving in. My first approach was to include a flag, IsTainted, which the GameUIRenderer would use to decide whether to redraw everything or use the cached version (or to render nothing at all, so that we do not have to clear the screen and can let the last frame persist). However, this seems quite messy if I have to manually track, in the Render method of the screen class, whether something has changed or not.
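
    One common pattern for this, sketched below under the assumptions of the engine described above (SDL 1.2-style API; the class and member names are invented for the example), is to render the static map once into an off-screen SDL_Surface and blit that cached surface every frame, rebuilding it only when a "tainted" flag has been set:

        #include <SDL/SDL.h>

        class MapLayerCache {
        public:
            ~MapLayerCache() { if (m_cache) SDL_FreeSurface(m_cache); }

            // Call this whenever the static map content actually changes.
            void Invalidate() { m_tainted = true; }

            // Blit the cached map; rebuild it first only if it has been invalidated.
            void Render(SDL_Surface* screen) {
                if (m_tainted || !m_cache) {
                    Rebuild(screen);
                    m_tainted = false;
                }
                SDL_BlitSurface(m_cache, nullptr, screen, nullptr);
            }

        private:
            void Rebuild(SDL_Surface* screen) {
                if (!m_cache) {
                    m_cache = SDL_CreateRGBSurface(SDL_SWSURFACE, screen->w, screen->h,
                                                   screen->format->BitsPerPixel, 0, 0, 0, 0);
                }
                // Expensive part: blit every static tile into m_cache exactly once here,
                // e.g. by walking the tile map and calling SDL_BlitSurface per tile.
            }

            SDL_Surface* m_cache = nullptr;
            bool m_tainted = true;
        };

    Moving entities can then be drawn on top of the cached layer each frame, and only a genuine map change pays the full rebuild cost.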

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application's Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider.

    Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, "Measures" and "AllocationType", as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, and so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application.

    It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, such manual changes may not be preserved in all cases; please see below for the full explanation. Once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions.

    There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a "replace" deploy method and an "update" deploy method.
    - "Replace" will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost.
    - If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved.

    Notes
    If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator's Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase. However, calculations may perform better with a larger-than-recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application.

    Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator's Guide. For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • Can't install software

    - by user53209
    So I just installed Ubuntu 11.10. When I go to the Software Center and try to download any software (using the source), all I get is a window saying "Failed to download repository information" and "Check your internet connection", along with:

        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_universe_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_main_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/multiverse/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/restricted/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_restricted_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/universe/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_universe_i18n_Index
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_main_source_Sources Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_restricted_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_universe_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_multiverse_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_main_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/multiverse/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_multiverse_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/restricted/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_restricted_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-updates/universe/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-updates_universe_i18n_Index
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_main_source_Sources Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_restricted_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_universe_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_multiverse_binary-i386_Packages Hash Sum mismatch
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/main/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_main_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/multiverse/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_multiverse_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/restricted/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_restricted_i18n_Index
        W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric-backports/universe/i18n/Index No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-backports_universe_i18n_Index
        W:Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric-security_main_source_Sources Hash Sum mismatch
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    I tried making a file in the apt.conf.d folder and adding proxy entries, and I also set the system proxy and the "network proxy", but nothing works. And now I can't install any software! Help needed.

    Read the article

  • New Whitepaper: Oracle E-Business Suite on Exadata

    - by Steven Chan
    Our Maximum Availability Architecture (MAA) team has quietly been amassing a formidable set of whitepapers about the Oracle Exadata Database Machine. They're available here: MAA Best Practices - Exadata Database Machine. If you're one of the lucky ones with access to this hardware platform, you'll be pleased to hear that the MAA team has just published a new whitepaper with best practices for EBS environments: Oracle E-Business Suite on Exadata. This whitepaper covers the following topics:
    - Getting to Exadata -- a high-level overview of fresh installation on, and migration to, Exadata Database Machine, with pointers to more detailed documentation
    - High Availability and Disaster Recovery -- an overview of our MAA best practices, with pointers to our detailed MAA Best Practices documentation
    - Performance and Scalability -- best practices for running Oracle E-Business Suite on Exadata Database Machine, based on our internal testing

    Read the article

  • Software management for 2 programmers

    - by kajo
    Hi all, my very good friend and I run a small business. We have a company and we develop web apps using Scala. We started 3 months ago and we have a lot of work now. We cannot afford to employ another programmer because we can't pay him yet. Until now we have tried to manage the entire development process very simply: we use Excel sheets for simple bug tracking and we work on client requests on the fly. We have no plan for next week or anything similar. But now I find this very inefficient and useless, so I am trying to find some rules or a methodology for a small team, or for only two people. For example, Scrum seems, IMO, ill-suited to us: there are a lot of roles (ScrumMaster, Product Owner, Team...) and it seems overkill. Can you advise me? Do you have any experience with software management in small teams? Is any current agile development methodology a good fit for a pair of programmers? Is there any software-management tooling for simple bug tracking, maybe a wiki, or time management for two coders? Thanks a lot for sharing.

    Read the article

  • theoretical and practical matrix multiplication FLOP

    - by mjr
    I wrote a traditional matrix multiplication in C++ and tried to measure and compare its theoretical and practical FLOP counts. As far as I know, the inner loop of matrix multiplication has 2 operations, so the theoretical FLOP count of a simple matrix multiply is 2*n*n*n (2n^3), but in practice I measure something like 4n^3 plus the number of operations in the inner loop (which is 2), i.e. 6n^3. Also, if I just increment a single array element, a[i][j]++, the measured flops come out to roughly 3n^3 and not n^3; again it is 2n^3 + 1 operation, and not 1 operation * n^3. In contrast, if I use a 1D array in three nested loops as the matrix multiplication and compare flops, the practical flops are (nearly) the same as the theoretical flops and depend exactly on the number of operations in the inner loop. I could not find the reason for this behaviour. What is the reason in both cases? I know that the theoretical flop count is not the same as the practical one because of some operations like loads etc.

    System specification: Intel Core 2 Duo E4500, 3700g memory, 2M L2 cache, x64, Fedora 17.

    Sample results:

    Matrix-matrix multiplication, 512*512:
    Real_time: 1.718368  Proc_time: 1.227672  Total flpops: 807,107,072  MFLOPS: 657.429016
    Real_time: 3.608078  Proc_time: 3.042272  Total flpops: 807,024,448  MFLOPS: 265.270355
    theoretical flops: 2*512*512*512 = 268,435,456
    practical flops: 6*512^3 = 807,107,072

    Using a 1-dimensional array, float d[size][size], size = 512 (or any size):

        for (int j = 0; j < size; ++j) {
            for (int k = 0; k < size; ++k) {
                d[k] = d[k] + e[k] + f[k] + g[k] + r;
            }
        }

    Real_time: 0.002288  Proc_time: 0.002260  Total flpops: 1,048,578  MFLOPS: 464.027161
    theoretical flops: 4n^2 = 4*512^2 = 1,048,576
    practical flops: 4n^2 + overhead (other operations?) = 1,048,578

    3-loop version:
    Real_time: 1.282257  Proc_time: 1.155990  Total flpops: 536,872,000  MFLOPS: 464.426117
    theoretical flops: 4n^3 = 536,870,912
    practical flops: 4n^3 = 4*512^3 + overheads (other operations?) = 536,872,000

    Thank you.
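
    For reference, the textbook operation count the question compares against is one multiply and one add per innermost iteration of the triple loop, i.e.

        \mathrm{FLOPs} = 2n^3 = 2 \times 512^3 = 268{,}435{,}456

    so any measured total above that reflects how the tool attributes loads, stores and other instructions, not extra arithmetic in the algorithm itself.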

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >