Search Results


  • Compressed disk image on Linux

    - by Aaron Digulla
    I just got my new computer with a much bigger hard disk. I think I copied all the important files over, but just to be sure, I'd like to keep a disk image of my old disk. To save space, I'd like to compress it, but I didn't find an option to mount a compressed image. My goals:
      - The result must be easy to access
      - No need to decompress the whole thing before I can access anything
      - Files should be quick to locate - no TAR/CPIO archive
      - The necessary space should be less than just copying the files over
    So ideally, I'm looking for a read-only, compressed file system which I can create in a file and which grows automatically.
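
    One option that meets most of these goals is SquashFS: a compressed, read-only file system that can be mounted and browsed file by file (though it cannot grow after creation). A minimal sketch, assuming the old disk is /dev/sdb1 and squashfs-tools is installed:

        mkdir -p /mnt/olddisk /mnt/image
        mount -o ro /dev/sdb1 /mnt/olddisk
        # Pack the mounted tree into a compressed, read-only image
        mksquashfs /mnt/olddisk /backup/old-disk.sqsh
        # Mount the image later to access individual files directly
        mount -o loop -t squashfs /backup/old-disk.sqsh /mnt/image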

    Read the article

  • How can I throttle the bandwidth consumed by Windows Automatic Updates?

    - by eleven81
    We have many Windows XP computers sharing one connection to the internet. These machines are set to download all available automatic updates and then prompt the user to install them. Whenever Patch Tuesday rolls around, our internet usage pegs out and remains that way for most of the day, sometimes into the following Wednesday. This hurts! I still want the machines to start downloading the updates as soon as they are available; even if it takes until Thursday or Friday before the last updates are downloaded, that's still better than the latency and dropped connections we are seeing now as a result of the internet connection bottleneck. What can I do to throttle back how rapidly each machine downloads the updates, while still having them all start the download process as soon as the updates are available? I have no desire to run a WSUS server. Also, the internet connection is more than adequate whenever there are no updates to download.
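
    Since Automatic Updates fetches its downloads through BITS, one lever is the BITS bandwidth policy, which can also be set per machine in the registry. A sketch, assuming the standard policy value names (verify them against the "Maximum network bandwidth for BITS background transfers" Group Policy template before rolling out); the rate is in Kbps and the schedule hours use a 24-hour clock:

        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v EnableBITSMaxBandwidth /t REG_DWORD /d 1 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxTransferRateOnSchedule /t REG_DWORD /d 50 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxBandwidthValidFrom /t REG_DWORD /d 8 /f
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\BITS" /v MaxBandwidthValidTo /t REG_DWORD /d 17 /f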

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations - one-to-one, one-to-many, many-to-one or many-to-many - NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. There are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:
      - None: ignore; this is the default
      - Save-Update: if the entity is being saved or updated, also save any related entities that are either not saved or have been modified, and associate these related entities with the root entity. Generally safe.
      - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
      - Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association - orphaned - it is also deleted. Also only for parent-child relations.
      - All: a combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
      - All-Delete-Orphan: same as All, plus deleting any related entities that lose their relationship.
    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity and not its related entities would result in a constraint violation in the database.
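
    For illustration, this is how a cascade setting looks in an hbm.xml mapping - a minimal sketch with hypothetical class and column names:

        <!-- Customer.hbm.xml: deleting the Customer, or removing an Order
             from the collection, also deletes the Order (parent-child) -->
        <set name="Orders" inverse="true" cascade="all-delete-orphan">
          <key column="CustomerId" />
          <one-to-many class="Order" />
        </set>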

    Read the article

  • Coordinate and positioning problem on iOS with cocos2d-x

    - by Vexille
    I'm using cocos2d-x alongside Marmalade, running some tests and tutorials before starting an actual project with them. So far things are working reasonably well on the Windows simulator, Android and even on BlackBerry's PlayBook, but on iOS devices (iPhone and iPad) the positioning seems to be off. To make things clearer, I put together a scene that just draws an image in the middle of the screen. It worked as expected on everything else, but not on an iPhone. To get the coordinates for the center of the screen I'm using the VisibleRect class from the TestCpp sample. It just uses sharedOpenGLView to get the visible size and visible origin, and calculates the center from that.

        CCSprite* test = CCSprite::create("Ball.png", CCRectMake(0, 0, 80, 80));
        test->setPosition(ccp(VisibleRect::center().x, VisibleRect::center().y));
        this->addChild(test);

    Also, I have a noBorder policy set in AppDelegate:

        CCEGLView::sharedOpenGLView()->setDesignResolutionSize(designSize.width, designSize.height, kResolutionNoBorder);

    One funny thing is that I tried to deploy the TestCpp sample project to some iOS devices and it worked reasonably well on the iPhone, but on the iPad the application was only drawn on a small portion of the screen - just like what happened on the iPhone when I tried using the ShowAll policy.

    Read the article

  • Scanning php uploads in tmp directory with clamdscan fails

    - by Nikola
    I can't seem to get this thing to work - some permission problem, maybe - but I can't even run clamdscan normally from the console as root; the result is always "Permission denied". For example, I create a file test.txt (an EICAR test file) in /tmp and execute "clamdscan /tmp/test.txt" in a console logged in as root, and I get "/tmp/test.txt: Access denied. ERROR". The clamd daemon is running as user clamav - could that be the reason? Now I want to scan the same file (/tmp/test.txt) via PHP, so I run (I have chowned the file to apache:apache):

        $cmd = "clamdscan /tmp/test.txt";
        exec($cmd, $a, $b);

    I get error 127. If I try with the full path of the command, /usr/bin/clamdscan, I get error 126 (the command is found but is not executable). Does this mean that Apache doesn't have permission to execute /usr/bin/clamdscan? What could be the problem?
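
    The daemon's user is indeed the likely culprit: clamdscan asks clamd to open the file itself, so it is the clamav user - not root or apache - that needs read access to the target. A sketch of two common workarounds (paths are from the question):

        # Hand clamd an already-open file descriptor over the local socket,
        # so the daemon never opens the file with its own credentials
        clamdscan --fdpass /tmp/test.txt

        # Or simply make the file readable by the clamav user
        chmod 644 /tmp/test.txt

    As for the PHP exit codes, 127 usually means the shell could not find the command on Apache's PATH, and 126 that it was found but not executable, so the permissions on /usr/bin/clamdscan (and any SELinux policy) are worth checking separately.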

    Read the article

  • Is it possible to get xRandR to see two separate outputs with the nvidia driver?

    - by rumtscho
    I have two monitors, which I have set up with nvidia-settings in TwinView. The result: when I want to do something in xRandR, it does not work. It doesn't report one output per video card head, but a single output mapped to the combined area of both monitors:

        rumtscho@bradbury:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 3840 x 1440, current 3840 x 1440, maximum 3840 x 1440
        default connected 3840x1440+0+0 0mm x 0mm
           3840x1440       50.0*

    Now I promised somebody to help test a driver. The developer is using an open source driver for Intel video cards, and his driver assumes that there is more than one xRandR output, each mapped to a monitor. So I tried rewriting my xorg.conf to somehow get two outputs to show up, but failed. Googling showed that people faced with the xRandR-nvidia problem either stopped using xRandR and achieved what they needed with nvidia-settings, or changed their driver to nouveau. The first is not going to help in my situation, and I am not willing to give up the proprietary driver, because Compiz won't work without it. So does anybody know a way to get nvidia to actually pass information about its outputs on to xRandR?

    Read the article

  • C++ Succinctly now available!

    - by Michael B. McLaughlin
    Over the summer I worked with SyncFusion to create an eBook based on my C# to C++ guide for their free Succinctly series of eBooks. Today the result, C++ Succinctly, was published for download. It is free (registration required; they make tools and libraries for .NET development, so you might get an occasional email from them - I've been signed up for a few months and have had maybe 3 emails total, so it's not horrible super spam or anything), and you can download it as a PDF or a Kindle .MOBI file (or both). I'm excited with how it turned out and enjoyed working with the people at SyncFusion. The book contains a total of 20 code samples, which you can download from BitBucket (there's a link very early in the book). Almost all of the code is also inline in the book itself, so that you don't need to worry about flipping back and forth between your dev machine and your eReader (but if you want to try to understand a concept better, you can easily download the code, open it up in VS 2012, and play around with it to see what happens when you tinker with things). The code does require Visual Studio 2012 because of its expanded support for C++11 features, and since I wrote all of the samples as console programs for clarity and compactness, you will need a version that supports C++ desktop development (currently VS 2012 Pro, Premium, or Ultimate). Sometime this fall, Microsoft will be releasing Visual Studio 2012 Express for Windows Desktop, which should provide a free way to use the samples. That said, I tested all of the samples with MinGW, and only the StorageDurationSample will not compile with it, due to the thread-local storage code. If you comment that out, you can compile and run all the samples with MinGW (or using a recent version of GCC in a GNU/Linux environment, or any other C++ compiler that provides the same level of C++11 support that Visual Studio 2012 does). I hope it proves helpful to those of you who choose to check it out!

    Read the article

  • How to Reap Anticipated ROI in Large-Scale Capital Projects

    - by Sylvie MacKenzie, PMP
    Only a small fraction of companies in asset-intensive industries reliably achieve expected ROI for major capital projects 90 percent of the time, according to a new industry study. In addition, 12 percent of companies see expected ROIs in less than half of their capital projects. The problem: no matter how sophisticated and far-reaching the planning processes are, many organizations struggle to manage risks or reap the expected value from major capital investments. The data is part of a larger survey of companies in the oil and gas, mining and metals, chemicals, and utilities industries. The results appear in Prepare for the Unexpected: Investment Planning in Asset-Intensive Industries, a comprehensive new report sponsored by Oracle and developed by the Economist Intelligence Unit. Analysts say the shortcomings in large-scale, long-duration capital-investment projects often stem from immature capital-planning processes. The poor decisions that result can lead to significant financial losses and disappointing project benefits, which are particularly harmful to organizations during economic downturns. The report highlights three other important findings:
      - Teaming the right data and people doesn't guarantee that ROI goals will be achieved. Despite involving cross-functional teams and looking at all the pertinent data, executives are still failing to identify risks and deliver bottom-line results on capital projects.
      - Effective processes are the missing link. Project-planning processes are weakest when it comes to risk management and predicting costs and ROI. Organizations participating in the study said they fail to achieve expected ROI because they regularly experience unexpected events that derail schedules and inflate budgets. But executives believe that using more robust risk management and project planning strategies will help avoid delays, improve ROI, and more accurately predict the long-term cost of initiatives.
      - Planning for unexpected events is a key to success. External factors, such as changing market conditions and evolving government policies, are difficult to forecast precisely, so organizations need to build flexibility into project plans to make it easier to adapt to changes.
    The report outlines a series of steps executives can take to address these shortcomings and improve their capital-planning processes. Read the full report or take the benchmarking survey and find out how your organization compares.

    Read the article

  • A (slight) Change of Focus

    - by StuartBrierley
    When I started this blog in September 2009 I was working as a BizTalk developer for a financial institution based in the South West of England. At the time I was developing using BizTalk Server 2004 and intended to use my blog to collate and share any useful information and experiences that I had using this version of BizTalk (and occasionally other technologies), in an effort to bring together as many useful details as I could in one place. Since then my circumstances have changed and I am no longer working in the financial industry using BizTalk 2004. Instead I have recently started a new post in the logistics industry, in the North of England, as "IT Integration Manager". The company I now work for has identified a need to boost their middleware/integration platform and has chosen BizTalk Server 2009 as their platform of choice; this is where I come in. To start with, my role is to provide the expertise with BizTalk that they currently lack, design and direct the initial BizTalk 2009 implementation, and act as lead developer on all pending BizTalk projects. Following this, it is my hope that we will be able to build on the initial BizTalk "proof of concept" and eventually implement a fully robust enterprise-level BizTalk 2009 environment. As such, this blog is going to see a shift in focus from BizTalk 2004 to BizTalk 2009 and, at least initially, is likely to include posts on the design and installation of our BizTalk environment - assuming of course that I have the time to write them! The last post I made was the start of a chapter-by-chapter look at the book SOA Patterns with BizTalk Server 2009. Due to my change of job I am currently "paused" half way through this book, and my lack of posts on the subject is a direct result of the job move and the pending relocation of my family. I am hoping to write about my overall opinion of this book sometime soon; so far it certainly looks like it will be a positive one. Thanks for reading; I'm off to manage some integration.

    Read the article

  • Dual Monitor 'How To' for 12.04

    - by Kim Prince
    I recently built my own PC and was delighted with the result, except for a problem with dual monitors. After having tried a few different combinations of hardware, I think what I really need is a 'how to' explanation. My motherboard is an MSI Z77MA-G45, which has an analogue port, a DVI port, and an HDMI port. Initially I hooked monitors up to the DVI and analogue ports and it seemed to work fine. Both screens worked independently of each other; it was great. After a few days I started turning my PC off at night, and when I tried to turn it back on it would boot into terminal mode. I would have to turn one of the monitors off, and after rebooting a few times it would eventually boot into an X Window session. Occasionally I would see an error relating to Xorg. I upgraded the motherboard BIOS but that made no difference. Eventually, I installed a graphics card - an NVIDIA GeForce GT 520. Now it seems that my onboard graphics have been disabled completely, and I am reliant on the graphics card. Furthermore, the graphics card seems to recognise only one screen at a time. (The first time I rebooted with both plugged in, it flashed up a message saying that it was auto-selecting DVI.) Anyhow, I think I need some 'how to' (or perhaps 'where to') from here. For example, is X Window configuration the next place to look? And how do I go about configuring X Windows? (Note that in System Settings it says my graphics driver is 'unknown', and when I ask it to detect monitors, it sees only the one!)
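
    For reference, once a driver exposes each head as a separate RandR output (the proprietary NVIDIA driver of this era often does not, in which case nvidia-settings is the tool to use), a two-monitor layout can be set from the command line - a sketch with hypothetical output names:

        xrandr      # list the output names the driver actually reports
        xrandr --output DVI-I-1 --auto --output HDMI-1 --auto --right-of DVI-I-1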

    Read the article

  • Laptop runs HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10, and now my computer gets too hot to hold on my lap and the fan is constantly running at full blast. This is the output of sensors:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:          +84.0°C  (crit = +99.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0:  +84.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 0:         +74.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 1:         +72.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 2:         +75.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 3:         +84.0°C  (high = +86.0°C, crit = +100.0°C)

        radeon-pci-0100
        Adapter: PCI adapter
        temp1:          +76.0°C

    I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu versions that caused such a drastic change? Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem. Here is the result! This is about 10 minutes after boot-up; htop, Yakuake, and a Chrome page with one tab opened to this question are all that I have manually opened. The most taxing program for the CPU is htop itself. I think the problem must lie elsewhere; my temps are already up to ~65°C for the CPU and ~69°C for the GPU, with nearly 0% CPU usage.

    Read the article

  • USB webcam detected in KVM, but doesn't work

    - by Gene Vincent
    I have installed XP in a virtual machine running on Linux with QEMU/KVM (qemu-kvm-0.11.0-4.5.2). I export my Linux webcam to KVM using the switches "-usb -usbdevice host:046d:0929". The XP guest sees the webcam and the drivers install, but the camera only shows a black image. When I open the camera in Windows Explorer, it says "0 images" and shows a black image, while on a real XP machine it says "1 image" and shows the video from the camera. I tried the same with a different webcam, but the result is the same. Any ideas what might be wrong or how I could debug this?
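
    A few hedged debugging steps, assuming a reasonably standard QEMU/KVM setup:

        # On the host: confirm the camera is visible and check the device node
        # permissions - the QEMU process needs read/write access to pass it through
        lsusb -d 046d:0929
        ls -l /dev/bus/usb/*/*

        # In the QEMU monitor: list what the guest actually received
        info usb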

    Read the article

  • What happens when a HTTP request is terminated prematurely?

    - by Gowtham
    Suppose I enter a URL in my browser and the browser submits the HTTP request. The remote HTTP server accepts the request and initiates a long task to serve it. If I terminate the request before it is complete (for example, by pressing Esc or the stop button in Firefox), how is the request closed? Will the browser communicate the abort to the server (I think it doesn't)? Presuming not, what will the server do with the result upon completion of the long task? Does it send it back anyway? If it does, what will happen? Does it reach my PC, or get lost on the way? This is just for my curiosity. Thanks for your time :)

    Read the article

  • LDAP ACI Debugging

    - by user13332755
    If you've ever wondered which ACI in LDAP is used for a particular ADD/DELETE/MODIFY/SEARCH request, you need to enable ACI debugging to get details about it. Edit/modify dse.ldif:

        nsslapd-infolog-area: 128
        nsslapd-infolog-level: 1

    The ACI logging will be placed in the 'errors' file and looks like this:

        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 -  Num of ALLOW Handles:15, DENY handles:0
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 -  Processed attr:nswmExtendedUserPrefs for entry:uid=mparis,ou=people,o=vmdomain.tld,o=isp
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 -  Evaluating ALLOW aci index:33
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 -  ALLOW:Found READ ALLOW in cache
        [22/Jun/2011:15:25:08 +0200] - INFORMATION - NSACLPlugin - conn=-1 op=-1 msgId=-1 -  acl_summary(main): access_allowed(read) on entry/attr(uid=mparis,ou=people,o=vmdomain.tld,o=isp, nswmExtendedUserPrefs) to (uid=msg-admin-redzone.vmdomain.tld-20100927093314,ou=people,o=vmdomain.tld,o=isp) (not proxied) (reason: result cached allow , deciding_aci  "DA anonymous access rights", index 33)

    Read the article

  • Set ReturnPath globally in Postfix

    - by Gaia
    I have Magento using sendmail and WordPress using PHPMailer to send webapp-generated mail. Occasionally, someone will enter their email address incorrectly and the mail (let's say, a purchase receipt) will bounce back to the return path specified by the script. I don't want to set the return path for each vhost, especially because it is not easily done. Ideally, WP would use the address of the blog admin and Magento would use one of the numerous email fields specified, but they default to using username@machinename (in my case, username is the system user and machinename is a FQDN, but it is not the same as the actual vhost FQDN). The result is that bounced mail returns to the server and, since the server is used only for outbound SMTP, the messages sit there, undelivered and, worse, unread. I'm running Postfix 2.6.6 on CentOS 6.3; is it possible to globally force a specific return path for all messages sent via PHP on the server?
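
    One possibility is Postfix's sender canonical mapping, which rewrites the envelope sender (and therefore the Return-Path) of relayed mail. A sketch, where bounces@example.com and the matched pattern are placeholders:

        # /etc/postfix/main.cf
        sender_canonical_classes = envelope_sender
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        /^username@machinename\.example\.com$/    bounces@example.com

        # regexp tables need no postmap; just reload Postfix
        postfix reload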

    Read the article

  • How to print out information about Task Scheduler in powershell script?

    - by Jimboy
    I am trying to print out information from the Task Scheduler on the local computer in a PowerShell script, so other users can print out this information as well without having to open the Task Scheduler. I need the script to print out the name, status, triggers, next run time, last run time, last run result, author and created date. I can print out the name, next run time, and last run time, but the rest won't print out when I run the script. I have already made a start on my script and got the fields down:

        $schedule = New-Object -ComObject "Schedule.Service"
        $schedule.Connect()
        $tasks = $schedule.GetFolder("\").GetTasks(0)

        # Status, Triggers, Author and Created are not direct properties of the
        # registered-task object; they live on State, Definition.Triggers and
        # Definition.RegistrationInfo, so they have to be mapped explicitly:
        $tasks | Select-Object Name,
            @{n='Status';        e={$_.State}},
            @{n='Triggers';      e={($_.Definition.Triggers | ForEach-Object { $_.Type }) -join ','}},
            NextRunTime, LastRunTime,
            @{n='LastRunResult'; e={$_.LastTaskResult}},
            @{n='Author';        e={$_.Definition.RegistrationInfo.Author}},
            @{n='Created';       e={$_.Definition.RegistrationInfo.Date}} |
            Format-Table

        foreach ($t in $tasks) {
            foreach ($a in $t.Definition.Actions) {
                $a.Path
            }
        }

    Any help or suggestions would be appreciated.

    Read the article

  • Bridging two sockets

    - by Itehnological
    I wondered if it is possible to bridge two incoming TCP sockets. For example:

        Client A -----> Server <----- Client B

    The server sends its magic to both clients, and then they connect to each other, bypassing the server:

                       Server
        Client A ----------><---------- Client B

    UPDATE: The idea is that when those clients can't bind to ports to listen on, they can still create a connection to each other with the help of the server. For example, Client A and Client B have TCP sockets open with the server. User A decides to chat with User B and creates a new TCP connection with the server, with the request to bridge it with User B. The server sends that request to Client B, and it also opens up a new TCP connection with the server for that chat line. Now that the server has both chat connections, from A and B, it bridges them so they can work without the server, and as a result the server won't have to process all the messages and files the two users share. That's the idea.

    Read the article

  • startx error no desktop manager

    - by WikiWitz
    I have BackTrack 5 R2 KDE. I started recovery mode and did a failsafe Xorg configuration. After that, I cannot load the KDE desktop when I enter the startx command after logging in. Whenever I run startx (as root), the result resembles the following (this is not the actual output - I just drew this with MS Paint because I cannot take a screenshot): the screen is just black with the icon in the upper left corner, and a pop-up menu appears when left-clicking the mouse. I tried the cp xorg.conf.failsafe xorg.conf advice from other websites with no luck. I have also tried the 'reconfigure option(s)' from recovery mode, with no success.

    Read the article

  • Install Base Transaction Error Troubleshooting

    - by LuciaC
    Oracle Installed Base is an item instance life cycle tracking application that facilitates enterprise-wide life cycle item management and tracking capability. In a typical process flow a sales order is created and shipped; this updates Inventory and creates a new item instance in Install Base (IB). The Inventory update results in a record being placed in the SFM Event Queue. If the record is successfully processed, the IB tables are updated; if there is an error, the record is placed in the csi_txn_errors table and the error needs to be resolved so that the IB instance can be created. It's extremely important to be proactive and monitor IB Transaction Errors regularly. Errors cascade and can build up exponentially if not resolved. Due to this cascade effect, error records need to be considered as a whole and not individually; the root cause of any error needs to be resolved first, and this may result in the subsequent errors resolving themselves.

    Install Base Transaction Error Diagnostic Program

    In the past the IBtxnerr.sql script was used to diagnose transaction errors; this is now replaced by an enhanced concurrent program version of the script. See the following note for details of how to download, install and run the concurrent program, as well as details of how to interpret the results: Doc ID 1501025.1 - Install Base Transaction Error Diagnostic Program. The program provides comprehensive information about the errors found, as well as links to known knowledge articles which can help to resolve the specific error.

    Troubleshooting

    Watch the replay of the 'EBS CRM: 11i and R12 Transaction Error Troubleshooting - an Overview' webcast or download the presentation PDF (go to Doc ID 1455786.1 and click on the 'Archived 2011' tab). The webcast and PDF include more information, including SQL statements that you can use to identify errors and their sources, as well as recommended setup and troubleshooting tips. Refer to these notes for comprehensive information:
      - Doc ID 1275326.1: E-Business Oracle Install Base Product Information Center
      - Doc ID 1289858.1: Install Base Transaction Errors Master Repository
      - Doc ID 577978.1: Troubleshooting Install Base Errors in the Transaction Errors Processing Form
    Don't forget your Install Base Community, where you can ask questions to help you resolve your IB transaction errors.

    Read the article

  • reverse proxy http to tomcat

    - by John Q
    I've configured an Apache server with SSL and a reverse proxy to a Tomcat:

        <VirtualHost domain.com:1443>
            [...]
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://local.com:8080/
            ProxyPassReverse / http://local.com:8080
            SSLEngine on
            [...]
        </VirtualHost>

    Tomcat is listening on 8080. The issue is that the app on Tomcat redirects the request (HTTP 302 Moved Temporarily). For example, if I use the URL https://domain.com:1443/folder, the reverse proxy issues the request http://local.com:8080/folder; then the app redirects to "/subfolder", so the final request is http://domain.com:1443/folder/subfolder. The result is a 400 Bad Request error code, as the request is HTTP on my SSL port. Do you know how I can fix this issue? Thanks in advance.
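
    Because Tomcat only ever sees plain HTTP on port 8080, the redirects it generates use the http scheme and the proxied host. One common fix is telling the Tomcat connector how clients actually reach the site - a sketch of the server.xml change, assuming an otherwise standard HTTP connector:

        <!-- Redirects built by Tomcat will now use https://domain.com:1443/... -->
        <Connector port="8080" protocol="HTTP/1.1"
                   proxyName="domain.com" proxyPort="1443"
                   scheme="https" secure="true" />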

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
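
    If the goal is to avoid the full re-fetches altogether, one lever is giving the resources explicit freshness lifetimes so the browser can reuse them without asking the server at all. A sketch using Apache's mod_expires, assuming Apache serves the static assets with that module enabled (types and lifetimes are illustrative):

        ExpiresActive On
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
        ExpiresByType image/png "access plus 1 month"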

    Read the article

  • Is it safe to use a single switch for multiple subnets?

    - by George Bailey
    For a moment, forget about whether the following is typical or easy to explain: is it safe and sound?

        Internet
           |
        ISP supplied router x.x.x.1 (public subnet)
           |
        switch-------------------------------------+
           | (public subnet)                       | (public subnet)
        BVI router (switch with an access list)    NAT router
           | (public subnet)                       | (private subnet 192.168.50.1)
           +-----------------switch----------------+  (both subnets)
                    |                    |
        computer with IP x.x.x.2    computer with IP 192.168.50.2

    I don't plan to implement this setup, but I am curious about it. The 50.2 computer may send a packet to the x.2 computer, but it will use 50.1 as the router, since 50.2 knows that the subnet is different. Would this result in the packet being received twice by the x.2 machine: first directly through the switch, and second by way of the two routers? Do you see any problems with this, aside from how confusing it is, and that it would put one switch doing the work of two subnets?

    Read the article

  • Excel chart won't update, based on calculated cells

    - by samJL
    I have an Excel document (2007) with a chart (Clustered Column) that gets its data series from cells containing calculated values. The calculated values never change directly, but only as a result of other cells in the sheet changing. When I change other cells in the sheet, the data series cells are recalculated and show new values, but the chart based on this data series refuses to update automatically. I can get the chart to update by saving/closing, toggling one of the settings (such as reversing the x/y axis and then putting it back), or re-selecting the data series. Every solution I have found online doesn't work:
      - I have Calculation set to Automatic
      - Ctrl+Alt+F9 updates everything fine, EXCEPT the chart
      - I have recreated the chart several times, and on different computers
      - I have tried VBA scripts like:

            Application.Calculate
            Application.CalculateFull
            Application.CalculateFullRebuild
            ActiveWorkbook.RefreshAll
            DoEvents

    None of these update or refresh the chart. I do notice that if I type actual numbers over my data series instead of the calculations, the chart will update; it's as if Excel doesn't want to recognize changes in the calculations. Has anyone experienced this before, or does anyone know what I might do to fix the problem? Thank you

    Read the article

  • Optimal way to make MySQL backups for fairly large databases (MyISAM / InnoDB)

    - by WinkyWolly
    Currently we have one beefy MySQL database that runs a couple of high-traffic Django-based websites as well as some e-commerce websites of decent size. As a result we have a fair number of large databases using both InnoDB and MyISAM tables. Unfortunately we've recently hit a wall due to the amount of traffic, so I've set up another master server to help alleviate reads / backups. At the moment I simply use mysqldump with a few arguments, and it's proven to be fine... until now. Obviously mysqldump is a slow, quick-and-dirty method, and I believe we've outgrown its use. I now need a good alternative and have been looking into utilizing Maatkit's mk-parallel-dump utility or an LVM snapshot solution. Succinct short version:
      - I have fairly large MySQL databases I need to back up
      - The current method using mysqldump is inefficient and slow (causing issues)
      - I'm looking into something such as mk-parallel-dump or LVM snapshots
    Any recommendations or ideas would be appreciated - since I have to redo how we're doing things, I'd rather have it done properly / most efficiently :).
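
    For what it's worth, short of switching tools, mysqldump can be made less intrusive for the InnoDB tables - a sketch, assuming binary logging is enabled (for --master-data) and that the MyISAM tables can tolerate being dumped without a global lock:

        # --single-transaction gives a consistent InnoDB snapshot without
        # locking tables; --quick streams rows instead of buffering them
        mysqldump --single-transaction --quick --routines \
                  --master-data=2 --all-databases | gzip > /backup/all.sql.gz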

    Read the article

  • Using C++ but not using the language's specific features, should switch to C?

    - by Petruza
    I'm developing a NES emulator as a hobby, in my free time. I use C++ because it is the language I use most, know best and like most. But now that I have made some progress on the project, I realize I'm using almost no features specific to C++, and could have done it in plain C and gotten the same result. I don't use templates, operator overloading, polymorphism, or inheritance. So what would you say: should I stay with C++ or rewrite it in C? I won't do this to gain performance - it could come as a side effect - but the idea is: why should I use C++ if I don't need it? The only features of C++ I'm using are classes to encapsulate data and methods, but that can be done as well with structs and functions; I'm using new and delete, but could as well use malloc and free; and I'm using inheritance just for callbacks, which could be achieved with pointers to functions. Remember, it's a hobby project and I have no deadlines, so the overhead in time and work that a rewrite would require is not a problem - it might even be fun. So, the question is: C or C++?

    Read the article
