Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 627/837

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
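
    As a rough illustration of the in-memory option, here is a minimal sketch that loads the address rows once and answers each lookup from a plain dictionary instead of making a network round trip. The driver choice, table name, column names and lookup key are all assumptions for the example, not the application's actual schema.

        import mysql.connector  # assumes MySQL Connector/Python; any DB-API driver works the same way

        def load_addresses(conn):
            """One-time load: index every address row by (postcode, street) - a hypothetical key."""
            cur = conn.cursor()
            cur.execute("SELECT postcode, street, house_number, city FROM addresses")  # hypothetical table/columns
            index = {}
            for postcode, street, house_number, city in cur:
                index.setdefault((postcode, street.lower()), []).append((house_number, city))
            return index

        conn = mysql.connector.connect(host="localhost", user="user", password="pw", database="addr")
        index = load_addresses(conn)
        # Each validation now becomes a dictionary lookup instead of up to 20 networked queries.
        candidates = index.get(("0150", "akersgata"), [])

    Whether this actually beats querying MySQL would still have to be measured, but it removes the per-query network latency entirely, and loading only the handful of columns needed for validation keeps the in-memory footprint far smaller than the full 43-column table.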

  • Caching by in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database every time. I am unhappy with the current situation but I don't have a better alternative. My only idea would be to use the NHibernate second-level cache, but I have nearly no experience with it. I know many companies use Redis or Memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know whether they can perform better than in-memory data structures and queries. Are there any alternative approaches I should look into?
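
    For comparison, the pattern described above is essentially a hand-rolled cache-aside. A minimal sketch of the same idea with the invalidation kept in one place (the data-access object and its methods are hypothetical) looks like this:

        import threading

        class CustomerCache:
            """Naive cache-aside: read through the cache, write to the DB first, then invalidate."""
            def __init__(self, db):
                self.db = db                    # hypothetical data-access object
                self._cache = {}
                self._lock = threading.Lock()

            def get(self, customer_id):
                with self._lock:
                    if customer_id in self._cache:
                        return self._cache[customer_id]
                data = self.db.load_customer(customer_id)    # hypothetical DB call
                with self._lock:
                    self._cache[customer_id] = data
                return data

            def update(self, customer_id, changes):
                self.db.save_customer(customer_id, changes)  # persist first
                with self._lock:
                    self._cache.pop(customer_id, None)       # invalidate rather than patch the cache

    The point of the sketch is that writes always invalidate instead of trying to patch the cached copy in step with every business rule, which is where out-of-sync bugs tend to come from; Redis or Memcached mainly add shared state across processes and eviction policies on top of the same pattern.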

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast-evolving market, speed is of the essence. mCentric and Globacom leveraged Oracle Big Data Appliance and Oracle NoSQL Database to save over 35,000 call-processing minutes daily and analyze network traffic 40x faster. Here are some highlights from the profile. On why Oracle: “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom. On the implementation process: it took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance and perform the software install and configuration, certification, and resiliency testing. The entire process, from site planning to phase-I go-live, was executed in just over ten weeks, well ahead of the four months allocated to complete the project. mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices. Read the entire profile here.

  • Cheap server stress testing

    - by acrosman
    The IT department of the nonprofit organization I work for recently got a new virtual server running CentOS (with Apache and PHP 5), which is supposed to host our website. During the process of setting up the server I discovered that the slightest use of the new machine caused major performance problems (I couldn't extract tarballs without bringing it to a halt). After several weeks of casting about in the dark by tech support, it now appears to be working fine, but I'm still nervous about moving the main site there. I have no budget to work with (so no software or services that require money), although due to recent cutbacks I have several older desktops that I could use if it helps. The site doesn't need to withstand massive amounts of traffic (it's a Drupal site with just a few thousand visitors a day), but I would like to put it through a bit of its paces before moving the main site over. What cheap tools can I use to get a sense of whether the server can withstand even low levels of traffic? I'm not looking to test the site itself yet, just the fundamental operation of the server.
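
    Free load-testing tools such as ApacheBench (ab) or siege cover exactly this case; as a zero-cost alternative runnable from one of those spare desktops, a short Python script like the hedged sketch below (the URL and request counts are placeholders) fires concurrent GET requests and reports response times, which is enough to see whether the server buckles under modest traffic.

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://test.example.org/"   # placeholder: point this at a staging copy, not production
        REQUESTS = 500
        CONCURRENCY = 10

        def fetch(_):
            start = time.time()
            try:
                with urllib.request.urlopen(URL, timeout=30) as resp:
                    resp.read()
                    return time.time() - start, resp.status
            except Exception as exc:
                return time.time() - start, repr(exc)

        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            results = list(pool.map(fetch, range(REQUESTS)))

        times = sorted(t for t, _ in results)
        print("median %.3fs, 95th percentile %.3fs, slowest %.3fs"
              % (times[len(times) // 2], times[int(len(times) * 0.95)], times[-1]))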

  • I'm a Subversion geek; why should I consider (or not consider) Mercurial, Git, or any other DRCS?

    - by Pierre 303
    I have tried to understand the benefits of DRCS, and I must admit I still don't get it. Here are my current beliefs; I'm ready to have them destroyed thanks to your expertise. I know I'm probably resisting change; I just want to evaluate how much that change will cost me. Merging hell can be solved simply by applying good practices such as continuous integration. Having a private branch for a few days is not such a good practice when you are in a self-managing team with real collaboration; I use branching only in very rare cases, and I keep a branch for every major version, in which I fix bugs merged from the trunk. I see the value of committing offline and then pushing online, but continuous integration can help with this too. I work on very large projects, and I have never noticed Subversion being slow, even when the server is 5,000 km away on the internet and my connection is small (less than 1024D/128U). Hard disk space is cheap, so having a local copy of the source code doesn't look like a problem to me; I already have a full copy of the latest version on my disk. I don't understand the distributed thing there (maybe THIS IS the key to my understanding?). I'm not new to the industry, and judging by my difficulty in understanding, I don't think DRCSs are easier to understand than Subversion-like systems. In fact, I don't understand... Doctor, give me your diagnosis.

  • Does Google Analytics exclude Campaign traffic from Facebook in the Social reports?

    - by user1612223
    For a while we have used campaign tags when putting posts on Facebook so that we can run campaign reports in Google Analytics on those links. However, it appears that traffic from those links is being excluded from Google's Social reports. For example, between 7/20 and 8/19 I'm seeing 123 Visits where Facebook is the source in my Campaigns report, but only 29 Visits where Facebook is the source in my Social Sources report. Main questions: Does Google exclude campaign traffic from its Social reports? If it does, is there any way to reconcile that so that the traffic shows up in both reports? If it doesn't, what could be causing the vast discrepancy? One observer noted that we are setting the Medium to "Post" when passing the campaign parameters, and that Google may only allow "Referral" traffic in its Social reports (just speculation). In that case we could potentially change the Medium to "Referral", but that would undermine some of our strategy of being able to set different mediums. I have also considered that maybe the campaign traffic came to the site several times, and the Social report may count the same user as fewer visits; however, over 70% of the Facebook campaign traffic is new traffic, so at a minimum there would need to be over 85 Visits on the Social side for that argument to be valid. I've done several searches for any information on this topic and haven't run across much of anything. I did post the same question on Google's Product Forum and have not gotten a response; the title of that question was 'Facebook Campaign Traffic Not Showing in Social Reports'. The inability to pass campaign data on Facebook posts would make evaluating the performance of those specific posts very difficult, so I'm hoping there is a solution to this.
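
    For reference, the campaign tagging in question is just a set of utm_* query parameters on the shared link; below is a small Python sketch of how the two variants under discussion differ (the landing URL and campaign name are placeholders):

        from urllib.parse import urlencode

        def tagged_url(base, source, medium, campaign):
            """Build a Google Analytics campaign-tagged URL."""
            return base + "?" + urlencode({
                "utm_source": source,
                "utm_medium": medium,
                "utm_campaign": campaign,
            })

        # What we post today, and the variant the observer suggested:
        print(tagged_url("http://www.example.org/landing", "facebook", "post", "august-promo"))
        print(tagged_url("http://www.example.org/landing", "facebook", "referral", "august-promo"))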

  • Series On Embedded Development (Part 3) - Runtime Optionality

    - by Darryl Mocek
    What is runtime optionality? Runtime optionality means writing and packaging your code in such a way that all of the features are available at runtime, but a feature isn't loaded if it isn't used. The code is separate, and you can even remove the code to save persistent storage if you know the feature will not be used. In native programming terms, it's splitting your application into separate shared libraries so you only have to load what you're using, which means it only impacts volatile memory when enabled at runtime. All the functionality is there, but if it's not used at runtime, it's not loaded. A good example of this in Java is JVMTI, the Java Virtual Machine Tool Interface. On smaller, embedded platforms, these libraries may not be there. If the libraries are not there, there's no effect on the runtime as long as you don't try to use the JVMTI features. There is a trade-off between size/performance and flexibility here. Putting code in separate libraries means loading that code will take longer and it will typically take up more persistent space. However, if the code is rarely used, you can save volatile memory by including it in a separate library. You can also use this method in Java by putting rarely-used code into one or more separate JARs. Loading a JAR and parsing it takes CPU cycles and volatile memory. Putting all of your application's code into a single JAR means more processing for that JAR. Consider putting rarely-used code in a separate library/JAR.
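
    The same idea carries over to other runtimes; purely as an illustration (the module name is hypothetical), here is how a Python application might defer loading a rarely-used feature until it is actually requested, so it never costs import time or memory otherwise:

        import importlib

        _reporting = None  # the optional feature stays unloaded until first use

        def generate_report(data):
            """Load the heavy reporting module lazily, only if reporting is ever requested."""
            global _reporting
            if _reporting is None:
                try:
                    _reporting = importlib.import_module("heavy_reporting")  # hypothetical optional module
                except ImportError:
                    raise RuntimeError("the reporting feature is not installed on this platform")
            return _reporting.build(data)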

  • SIMPLEST way to set up password protection for a static site, with basic admin UI?

    - by Joseph Turian
    I have a static site. I would like the simplest approach to password protecting a directory, with a basic admin UI for adding/removing users. I will have so few users that I don't care about performance. I don't care if it's PHP or Django or whatever, I just want a complete software package. Apache basic auth isn't good, because you can't log out. Nor is there a UI for adding users. I tried throwing everything behind Django auth and serving the files through Django. However, Chrome treats all my text/css headers as text/plain, so I don't get any stylesheets showing. I can't use mod_xsendfile on my server because I can't reconfigure Apache to add new modules. I think this approach is overkill anyway. I can try configuring Nginx's X-Accel-Redirect, however that requires implementing all the Django code for auth myself, and I'd prefer an existing solution. However, this is my backup plan. Is there a code package that implements authentication with basic admin for a static site?
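
    In case a self-hosted option is acceptable, here is roughly how small such a package can be in Python with Flask; this is only a sketch under stated assumptions - the user store, the static-site directory, and the routes are all made up, and a real deployment would want hashed passwords managed through a small admin view rather than a dict in the source.

        import hashlib
        import hmac
        import os
        from flask import Flask, request, session, redirect, send_from_directory

        app = Flask(__name__)
        app.secret_key = os.environ.get("SECRET_KEY", os.urandom(24))

        # Hypothetical user store; an "admin UI" would just add or remove entries here or in a tiny DB.
        USERS = {"alice": hashlib.sha256(b"changeme").hexdigest()}

        LOGIN_FORM = ('<form method="post">'
                      '<input name="username"> <input name="password" type="password">'
                      '<button>Log in</button></form>')

        @app.route("/login", methods=["GET", "POST"])
        def login():
            if request.method == "POST":
                given = hashlib.sha256(request.form.get("password", "").encode()).hexdigest()
                if hmac.compare_digest(USERS.get(request.form.get("username", ""), ""), given):
                    session["user"] = request.form["username"]
                    return redirect("/")
            return LOGIN_FORM

        @app.route("/logout")
        def logout():
            session.clear()
            return redirect("/login")

        @app.route("/", defaults={"path": "index.html"})
        @app.route("/<path:path>")
        def serve(path):
            if "user" not in session:
                return redirect("/login")
            return send_from_directory("static_site", path)   # hypothetical directory holding the static site

    Because the files are served through the application, send_from_directory guesses the usual Content-Type headers (so stylesheets stay text/css), and logging out is just clearing the session, which basic auth couldn't offer.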

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, though I get annoyed), and I do not want to learn MS SQL / T-SQL (a unique column there allows only one NULL row). I am thinking about learning a new database system. My requirements: it must allow multiple connections at once (read and write); all data (or the data I choose) must be ACID compliant; performance should be good - I have a 17 GB table in one project, and it should perform well on reads and transactional writes (with MySQL it took hours to restore, with no foreign keys on that specific table, and it only finished within a workday because I found a suggestion to adjust a setting, which I think was the key buffer - and it still took hours); unique columns that allow more than one row to be NULL (I shouldn't have to say it, but dammit MS); the ability to make ongoing backups, something like binary logs - relatively small amounts of data I can grab and apply to my local DB to keep it in sync with the one on the server; and table joins - I'd rather not write a bunch of queries to simulate a join. What I would like but don't require: foreign keys (this may be a requirement later); open source; fair tool support, so I can measure queries, easily back up/restore, etc.; a .NET and C (or C++) interface (I've seen one that uses raw TCP with JSON, which was OK-ish); and good subquery support - once I was working with an older version of MySQL (I believe <5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries, or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset. What DB system should I learn?

  • creative & complex vs simple and readable

    - by Shirish11
    Which is the better option? It's not always the case that creative code looks ugly, but at times it does get a bit ugly. For example,

        if ((object1(0) == object2(0)) && (object1(1) == object2(1)) && (object1(2) == object2(2)) && (object1(3) == object2(3))) { retval = true; } else { retval = false; }

    is simple and readable, whereas

        bool retValue = (object1(0) == object2(0)) && (object1(1) == object2(1)) && (object1(2) == object2(2)) && (object1(3) == object2(3));

    will make some newbies scratch their heads. So which do I go for? Including simple code everywhere might sometimes hamper my performance. What I could think of was commenting wherever necessary, but at times you get too curious about what is actually happening. Any suggestions are welcome.

  • New Certification Exam: "Oracle Database 12c: SQL Fundamentals" Released (1Z0-061)

    - by Brandye Barrington
    Oracle Certification begins testing this week for the new Oracle Database 12c Administrator Certified Associate (OCA) certification. Testing for the Oracle Database 12c: SQL Fundamentals (1Z0-061) exam is now underway. Visit pearsonvue.com/oracle and register for exam 1Z0-061. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification website. Earning the Oracle Database 12c Administrator Certified Associate (OCA) credential demonstrates that you carry the foundational knowledge and skills needed to administer the Oracle Database, and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional (OCP). With Oracle Database 12c, you will experience the benefits of an Oracle Database that is re-engineered for Cloud computing. Multitenant architecture brings enterprises unprecedented hardware and software efficiencies, performance and manageability benefits, and fast and efficient Cloud provisioning. Oracle Database 12c certifications emphasize the full set of skills that DBAs need in today's competitive marketplace. Be among the first to obtain this groundbreaking new Oracle Certified Associate (OCA) certification by registering for this exam today. Quick links: Certification Path - Oracle Database 12c Administrator Certified Associate (OCA); Certification Exam - Oracle Database 12c: SQL Fundamentals (1Z0-061); Registration - pearsonvue.com/oracle

  • So my employer wants me to do less programming and focus on IT support

    - by Rich
    I was hired into a non tech company's IT department as a programmer a few years back, and after several rounds of lay offs, we're down to a skeleton crew. I've saved the company hundreds of thousands of dollars with my projects and management has been happy with them (although most of the stakeholders have since left the company). Management now wants me to limit the programming that I do and spend most of my time on IT support: putting out fires, dealing with vendors, outsourced contractors, supporting company systems, managing projects, etc. I am a little burnt out on programming since I've been pushed pretty hard for the past several years. However, I'm not sure if this is a good career move in the long run. I'm a decent programmer (and also good with databases) but not obsessed with it to the point of coding outside of work. I'm approaching my mid 30s and there's potential ageism to deal with down the line. While I'm fortunate to have survived the lay offs, it sorta feels like my job is being "dumbed down". I have both good technical skills and people skills...but it doesn't take a genius to do what I'm doing now. And my success is being increasingly linked to others' performance rather than my own... Just looking for some advice. Is it time to move on? That's not really an easy thing to do since I'd likely have to move to another area to find another comparable tech job. Should I go after another pure technical role? Or should I stay and try to make this work? People say do what you "enjoy" but it doesn't really matter to me as long as I'm getting paid. Also the ageism thing is on the horizon and could be an issue eventually. I'm making a decent (but not great) salary. Should I chase money and maximize my income while I still have a chance? Or be happy with a moderate salary and 40 hour work week?

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, a copy of any textures data supplied has been copied elsewhere, likely VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine a ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM for two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

  • How can state changes be batched while adhering to opaque-front-to-back/alpha-blended-back-to-front?

    - by Sion Sheevok
    This is a question I've never been able to find the answer to. Batching objects with similar states is a major performance gain when rendering many objects. However, I've learned various rules for drawing objects in the game world: draw all opaque objects front-to-back; draw all alpha-blended objects back-to-front. Some of the major parameters to batch by, as I understand it, are textures, vertex buffers, and index buffers. It seems that, as long as you are adhering to the above two rules, there's little to be done with regard to batching. I see one possibility to batch while still adhering to the above two rules. Opaque objects can still be drawn out of depth order, because drawing them front-to-back is merely a fillrate optimization, while state changes may very well be far more expensive than the overdraw caused by drawing out of depth order. However, non-opaque objects, those that require alpha blending at least, must be drawn back-to-front in order to avoid rendering artifacts. Is the loss of the fillrate optimization for opaques worth the state-batching optimization?
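
    As a small illustration of the trade-off being described (pure sketch code; the draw-call fields and the state key are invented for the example), the two passes simply end up sorted by different keys:

        # Each draw call is assumed to be a dict such as
        # {"state_key": (texture_id, shader_id, vb_id, ib_id), "depth": view_space_z, ...}
        def build_draw_order(opaque_calls, blended_calls):
            # Opaque pass: give up strict front-to-back ordering and group by state key to
            # minimize state changes; the depth buffer still rejects hidden fragments, so
            # only the fillrate hint is lost.
            opaque_sorted = sorted(opaque_calls, key=lambda call: call["state_key"])
            # Alpha-blended pass: correctness requires back-to-front, so sort by depth
            # (farthest first) and accept whatever state changes that ordering causes.
            blended_sorted = sorted(blended_calls, key=lambda call: call["depth"], reverse=True)
            return opaque_sorted + blended_sorted

    Whether the lost fillrate optimization is worth it is exactly the measurement the question asks for; a common compromise is to sort opaque calls by state key first and by depth only as a secondary key.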

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html. MySQL Workbench 5.2 GA comprises Data Modeling, Query (replaces the old MySQL Query Browser), and Administration (replaces the old MySQL Administrator). Please get your copy from our Download site; sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/. Workbench documentation can be found at http://dev.mysql.com/doc/workbench/en/index.html and Utilities documentation at http://dev.mysql.com/doc/workbench/en/mysql-utilities.html. In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor; you can read more about it at http://wb.mysql.com/workbench/doc/. For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html. If you need any additional info or help, please get in touch with us: post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we came up with so far is the following: there is a World that holds all objects; you can add objects to this world, such as Particle, Dipole and Quadrupole; time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electromagnetism is linear); each Particle moves accordingly (using a simple estimation approach to solve the differential equations of motion); the Particle positions are saved; repeat. This seems a good approach, but it is hard, for instance, to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take into account a symmetry such as that of the Quadrupole field, it would be much easier to (also) make space discrete and store the form of the Quadrupole field somewhere. (Since 2,532 or so Quadrupoles are stored, this should lead to a massive gain in performance, not having to recalculate each Quadrupole field.) So, are there any design patterns? Is the World approach feasible, or is it old-fashioned, bad programming? What about symmetry - how is that generally taken into account?
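
    The actual project is in C++, but purely as an illustration of the World/time-step structure described above (the class layout and the simple Euler-style integrator are invented for the sketch, not taken from the project), the loop might look something like this:

        from dataclasses import dataclass

        @dataclass
        class Particle:
            x: float
            y: float
            vx: float
            vy: float
            mass: float

        class World:
            """Holds every object and advances the whole beam in discrete time steps."""
            def __init__(self, dt):
                self.dt = dt
                self.particles = []
                self.elements = []   # dipoles, quadrupoles, ... anything exposing force_on(particle)

            def step(self):
                for p in self.particles:
                    # Electromagnetism is linear, so the total force is the sum of every element's contribution.
                    fx = sum(e.force_on(p)[0] for e in self.elements)
                    fy = sum(e.force_on(p)[1] for e in self.elements)
                    # Simple explicit integration of the equations of motion.
                    p.vx += fx / p.mass * self.dt
                    p.vy += fy / p.mass * self.dt
                    p.x += p.vx * self.dt
                    p.y += p.vy * self.dt

            def run(self, n_steps, record):
                for _ in range(n_steps):
                    self.step()
                    record([(p.x, p.y) for p in self.particles])

    A precomputed or cached field lookup per element type (exploiting the Quadrupole symmetry) can then hide behind force_on without the stepping loop itself having to change.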

  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or a must-have, if I plan to build a large website. "Large website" could mean a lot of things; in this case it means multiple dynamic web pages (40-50 of them, with MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages, but I just feel that I don't need that. I think that learning how it works, installing it and managing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another cache for PHP. As I said, the site will be hosted in a dedicated server environment but will be entirely managed by the hosting company; I am pretty sure the control panel for the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, a framework seems too complicated. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

  • System in low graphics, deleted Linux, grub rescue, can't access Windows

    - by First timer
    So I'm pretty new to Ubuntu, but I managed to install it with no big problems on both my desktop and my netbook. When I installed it on my brother's netbook everything went horribly wrong, and now I fear the system is close to beyond repair. The first problem was that it said there was no space left (which seemed ridiculous, since there was a lot). Then Ubuntu began booting into a "System is running in low graphics mode" error, which I tried to fix using all the tips I could find on here, but nothing helped. I think the graphics error and the lack of space might have been related, but I can't be sure. Finally I gave up on repairing Ubuntu and went for a reinstall. I shouldn't have done that! I read that I should simply boot Ubuntu from a live USB and use GParted to delete the Linux partitions, so I did, and rebooted accordingly. Next I was going to install Ubuntu, but now I am only given the option to wipe the whole disk for Ubuntu, not to install alongside Windows 7. If I open GParted I can still see the NTFS partitions that hold Windows 7 (there are two: one labeled RECOVERY and another labeled OS and boot), so why can't I access them? By the way, the OS and boot partition has a little red mark with a warning that one cluster is referenced multiple times; I don't know what that means. If I boot without the live USB I am sent directly into a grub rescue black screen where the computer will follow no orders. Please, I know that the easiest option might be to simply wipe the whole thing clean, but there are important files and programs on Windows 7. Is there a way to just access Windows? It is a Dell Inspiron 1018 mini netbook, so I have no CD drive and no Windows 7 installation CD.

  • Restoring backup with Deja Dup from external HD

    - by widgg
    So, here's the problem that I have with restoring. First, I backed up pretty much everything, so I only had room on the external HD for one copy. I had to reinstall everything, but I didn't worry because I had a backup. But now restoring doesn't work, and it is starting to get annoying. I right-click "Restore missing files..." and get the popup window from Deja Dup asking where the backup is. I select the external HD and either put nothing in "folder" or just put ".", thinking that it should be the base to look for backups. In both cases, after a while scanning, I get: The Volume "Filesystem root" has only 139.7 MB disk space remaining. But my "/home" partition has 799.6 GB free. Also, I just want to restore some files; I don't need all of them. On my external HD there's a file called duplicity-full.20120514T220834Z.manifest, a text file. In it, I can see that everything is partitioned into files of 52 MB: small files are in one archive, while very large files are split across multiple ones. I can also see the exact list of files that I have, so I'm guessing that my backup is intact. What am I doing wrong? Is it possible that it fails, even just to list all the files, because I don't have enough space left on my external HD? Why can't duplicity use another location to do that?

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?
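
    If the synced-subset route is explored, the sync job itself is conceptually small; the sketch below (schema, column names, and connection details are entirely made up for the example) copies rows changed since the last run from a master catalog into one site's database and records a watermark, which is also where most of the practical headaches - clock skew, deletes, conflicting edits - would surface.

        import mysql.connector  # assumes MySQL Connector/Python on both ends

        def sync_site(master_cfg, site_cfg, site_id):
            master = mysql.connector.connect(**master_cfg)
            site = mysql.connector.connect(**site_cfg)

            cur = site.cursor()
            cur.execute("SELECT last_synced FROM sync_state WHERE id = 1")   # hypothetical watermark table
            (last_synced,) = cur.fetchone()

            mcur = master.cursor()
            mcur.execute(
                "SELECT product_id, name, image_url, updated_at FROM products "
                "WHERE site_id = %s AND updated_at > %s", (site_id, last_synced))

            for product_id, name, image_url, updated_at in mcur.fetchall():
                cur.execute(
                    "REPLACE INTO products (product_id, name, image_url, updated_at) "
                    "VALUES (%s, %s, %s, %s)", (product_id, name, image_url, updated_at))

            cur.execute("UPDATE sync_state SET last_synced = NOW() WHERE id = 1")
            site.commit()
            site.close()
            master.close()

    With the single-central-server setup none of this bookkeeping exists, which is the main argument in its favor; the open question from the post is whether the extra latency and bandwidth of remote queries outweighs it.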

  • Problem in my terminal related to clamav

    - by Hejar Hejar
    Recently I decided to use Avast Antivirus. I uninstalled ClamAV and installed Avast. Now whenever I use my terminal I get the following errors:

        5 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B/1,250 kB of archives.
        After this operation, 5,120 B of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ...
        dpkg: warning: files list file for package `clamav-daemon' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `python-pyclamav' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `clamav-base' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `clamav-freshclam' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `libclamav6' missing, assuming package has no files currently installed.
        dpkg: warning: files list file for package `python-clamav' missing, assuming package has no files currently installed.
        (Reading database ... 554940 files and directories currently installed.)

    Please help. Thank you.

  • Run a VirtualBox VM in a second X server with graphics support

    - by Scindix
    I'm starting a VirtualBox VM (Windows 7) in a second X server (Ubuntu 14.04), and I'm using the following xinit script (/path/.vboximage):

        optirun VBoxManage startvm <vm name> & exec tinywm

    I noticed that while running VirtualBox normally under GNOME (Unity, to be precise ;-) ) I get full graphics support, but when I run it in a second X server there seem to be some problems. For example, Windows Aero doesn't seem to work, and Chrome WebGL demos run with poorer performance. I'm not a big Windows expert, so I don't know how to check which graphics card is being used, but it is very obvious that something has changed when running the VM in the extra X server. Also, when I try to replace tinywm with compiz I get the Unity frame around the VM, which also seems to have no graphics acceleration (no transparency effects). So it seems that the X server doesn't have graphics acceleration at all. I have an NVIDIA 525M and an Intel HD 3000, which are both capable of advanced graphics. I'm starting the above script with startx /path/.vboximage - :1

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is much more than optimizing project performance and eliminating project failure on new projects, capital programs, etc. A very common use case that we see is small-scale frequent and recurring projects based on on-going operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed-up the Primavera and eBusiness Suite teams to provide direct integration for work orders from Oracle's Enterprise Asset Management (eAM) system to populate into Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in Primavera tools to assign work orders, maximize resource utilization and to reuse templates for typical O&M operations in Primavera and share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations related to large assets in the asset lifecycle will include phase-outs, shutdowns and turn-arounds which are classic maintenance projects, as opposed to building something new, that Oracle Primavera with Oracle e-Business Suite provides full coverage to optimize your ALM processes in your business. Read more about these new capabilities from Oracle in the ERP space from the Oracle eAM data sheet.

  • What is the better way to retrieve data in an application that needs to handle a limited amount of data?

    - by Milanix
    I just moved this question from Stack Overflow, since adding my code snippets would make it really long. Instead, I am interested in better ways of retrieving data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update check runs automatically or manually on a daily basis, depending on user preference, the actual version update happens only once every few months or so, since it is made by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) x (days in a week) x (n number of schedules)"; the number of groups is usually 6 and the number of schedules is usually 2, so there would usually be only around 100 strings. Now, although I am using SQLite at the moment, I want to know how best to perform the database update. Should I show a progress dialog saying that the application is updating and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but are there better ways to do it? I don't want the update to be made while the user is searching, which is done using the database; this will cause a "database already open" exception - at least I have faced this problem before. Is it better to parse the XML every time the user wants to view certain things, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade the performance?
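
    For what it's worth, the version-check-then-update flow described above is only a few lines in sketch form; the XML attributes, table names, and file path below are invented for the example, and the real application may well be on a different platform.

        import sqlite3
        import xml.etree.ElementTree as ET

        def update_if_newer(xml_path, db_path="schedule.db"):
            """Replace the stored schedule only when the feed's version is newer than the local one."""
            root = ET.parse(xml_path).getroot()
            remote_version = int(root.get("version", "0"))           # hypothetical version attribute

            conn = sqlite3.connect(db_path)
            conn.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")
            conn.execute("CREATE TABLE IF NOT EXISTS schedule (grp TEXT, day INTEGER, slot TEXT)")
            row = conn.execute("SELECT value FROM meta WHERE key='version'").fetchone()
            local_version = int(row[0]) if row else -1

            if remote_version > local_version:
                with conn:   # one short transaction: readers never see a half-written schedule
                    conn.execute("DELETE FROM schedule")
                    conn.executemany(
                        "INSERT INTO schedule VALUES (?, ?, ?)",
                        [(g.get("name"), int(d.get("index")), s.text)   # hypothetical element layout
                         for g in root.iter("group")
                         for d in g.iter("day")
                         for s in d.iter("slot")])
                    conn.execute("INSERT OR REPLACE INTO meta VALUES ('version', ?)",
                                 (str(remote_version),))
            conn.close()

    With only about a hundred strings, the whole swap is cheap enough to run during the daily check, and keeping it in a single short transaction narrows the window in which a concurrent search could run into the "database already open" problem mentioned above.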

  • Computer suddenly won't boot - stops at a flashing prompt

    - by Dave M G
    I have been running Ubuntu on my computer for a long time, and I have been using 11.10 since it became available in October. Suddenly, this morning, when I rebooted, the computer would not reach the log in screen. I go through the standard POST boot sequence, and I also get a splash screen for my Nvidia graphics card, so at least most of the hardware seems to be working. After that, all I get is a flashing text prompt - one blinking white underline character on a screen that is otherwise completely blank. I don't think it is even reaching GRUB. No key input is possible. I have tried various key combinations to try and initiate some kind of interface, be it command line or anything else. The only key combination that works is [CTRL]+[ALT]+[Delete] to reboot. I realize this is likely to be a hardware problem, but it could be an Ubuntu problem(?), so I'm hoping for a specific set of troubleshooting steps so I can diagnose and repair this issue. My current suspicion is that one of the drives in my 2 disk software RAID has failed (even though they should be too new for that). However, this computer is critical to my work, so I'd like to invite advice on any possibilities so as to waste as little time as possible in fixing this machine.
