Search Results

Search found 7618 results on 305 pages for 'backup exec'.


  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and it is the only one on the host that I'm trying to back up. I proceeded to start the actual backup:

        ./ghettoVCB.sh -m vmname -g ghettoVCB.conf

    It goes through the config and looks like it's taking off:

        2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf
        2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0
        2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18
        2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
        2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4
        2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info
        2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log
        2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0
        2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER =
        2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0
        2013-10-24 11:43:19 -- info:
        2013-10-24 11:43:22 -- info: Initiate backup for vmname
        2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2
        Destination disk format: VMFS thin-provisioned
        Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'...
        Clone: 10% done.

    and it has been that way for over an hour now, stuck at "Clone: 10% done.". Thing is: I can see the vmdk on the NAS, and it looks like almost the whole thing is there. On the NAS it's showing ~430GB, but the vSphere Client summary shows it as 507GB. I don't see the vmdk on the NAS growing any more. The log file mimics some of the above and is sitting at "Creating Snapshot...", with nothing else coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of data actually copied? Is there a reason it might be "stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks. Edit: as soon as I hit the Submit button, I glanced over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.
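
    One way to see whether the clone is still moving data: on the ESXi host, a thin vmdk's apparent size (ls) is the full provisioned size, while du reports the blocks actually written, so watching du on the destination shows real progress. A quick sketch - the exact backup directory under the configured VM_BACKUP_VOLUME is an assumption:

        # apparent (provisioned) size vs. blocks actually allocated for the clone target
        ls -lh /vmfs/volumes/nas2tb-001/esxi4/vmname/*.vmdk
        du -h  /vmfs/volumes/nas2tb-001/esxi4/vmname/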

    Read the article

  • I have an error building a .vdproj on msbuild with nant

    - by Luís Custódio
    I'm getting used to using NAnt for build releases. But I have started to use ASP.NET MVC, and I chose to make the setup for installation with a .vdproj. But when I call

        <exec program="${dotnet.dir}/msbuild.exe" commandline='"./Wum.sln" /v:q /nologo /p:Configuration=Release' />

    in NAnt, my result is:

        [exec] D:\My Documents\Visual Studio 2008\Projects\Wum\Wum.sln : warning MSB4078: The project file "Wum.Setup\Wum.Setup.vdproj" is not supported by MSBuild and cannot be built.

    Does someone have a clue, or a solution? If I use devenv, will I have a problem? Thanks in advance.
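
    Setup (.vdproj) projects can only be built by Visual Studio itself, not by MSBuild, so one workaround is a second task that shells out to devenv.com. A rough sketch, assuming a default Visual Studio 2008 install (the environment variable and project path would need checking):

        rem build the solution's Release configuration, then the setup project explicitly
        "%VS90COMNTOOLS%..\IDE\devenv.com" Wum.sln /Build Release /Project Wum.Setup\Wum.Setup.vdproj

    The same command line can be wrapped in a NAnt <exec> task in place of the msbuild.exe call for the setup project.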

    Read the article

  • MSSQL2008: DTC Transaction - Internal abort

    - by Teutales
    Hi all, I have written a small replication of my own - a trigger which fires a DTC INSERT on another server (one reason for rolling my own "replication": the trigger calculates some data while it runs; another: it works from one Express edition to another Express edition). When I do the initial insert from the same host with Windows authentication, it works fine. But there is a webserver on another host which uses a SQL Server login (sa, for testing). When that host does the initial insert, I get an "Internal abort" after the enlisting and creating phases in the DTCTransaction event class (Profiler). The odd part: once I have fired it from the same host with Windows authentication, I can fire it from the webserver and it works fine - but after waiting a few minutes it stops working again. Where is my error in reasoning? Thanks! Greetz Teutales. Here is my initial server script:

        EXEC master.dbo.sp_addlinkedserver @server = @Servername, @srvproduct = N'SQL Server'
        EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = @Servername, @locallogin = NULL, @useself = N'False', @rmtuser = @Serverlogin, @rmtpassword = @Serverpwd

    Read the article

  • DataTable identity column not set after DataAdapter.Update/Refresh on table with "instead of"-trigger

    - by Arno
    Within our unit tests we use plain ADO.NET (DataTable, DataAdapter) for preparing the database and checking the results, while the tested components themselves run under NHibernate 2.1. The .NET version is 3.5, the SQL Server version is 2005. The database tables have identity columns as primary keys. Some tables apply instead-of-insert/update triggers (this is due to backward compatibility, nothing I can change). The triggers generally work like this:

        create trigger dbo.emp_insert on dbo.emp instead of insert as
        begin
            set nocount on
            insert into emp ...
            select @@identity
        end

    The insert statement issued by the ADO.NET DataAdapter (generated on the fly by a thin ADO.NET wrapper) tries to retrieve the identity value back into the DataRow:

        exec sp_executesql N'insert into emp (...) values (...);
            select id, ... from emp where id = @@identity'

    But the DataRow's id column is still 0. When I remove the trigger temporarily, it works fine - the id column then holds the identity value set by the database. NHibernate, on the other hand, uses this kind of insert statement:

        exec sp_executesql N'insert into emp (...) values (...);
            select scope_identity()'

    This works: the NHibernate POCO has its id property correctly set right after flushing. That seems a little counter-intuitive to me, as I expected the trigger to run in a different scope, hence @@identity should be a better fit than scope_identity(). So I thought: no problem, I will apply scope_identity() instead of @@identity under ADO.NET as well. But this has no effect; the DataRow value is still not updated accordingly. And now for the best part: when I copy and paste those two statements from SQL Server Profiler into a Management Studio query (that is, including "exec sp_executesql") and run them there, the results seem to be inverse! There the ADO.NET version works, and the NHibernate version doesn't (select scope_identity() returns null). I tried several times to verify, but to no avail. Of course this just shows the result set coming from the database; whatever happens inside NHibernate and ADO.NET is another topic. Also, several session properties defined by T-SQL SET are different in the two scenarios (Management Studio query vs. application at runtime). This is a real puzzle to me. I would be happy about any insights. Thank you!

    Read the article

  • Cleaning Up Unused Users and Groups (Ubuntu 10.10 Server)

    - by PhpMyCoder
    Hello experts, I'm very much a beginner when it comes to Ubuntu, and I've been learning the ropes by diving in and writing a (backend-language independent) web app framework that relies on Apache, some clever mod_rewrites, and Ubuntu permissions, groups, and users. One thing that really annoys my inner clean-freak is that there are loads of users and groups created when Ubuntu is installed that are never used (or so I think). Since I'm just running a simple web app server, I would like to know: what users/groups can I remove? Since you'll probably ask for it, here's a list of all the users on my box (excluding the ones I know that I need):

        root daemon bin sys sync man lp mail uucp proxy backup list irc gnats nobody libuuid syslog

    And a list of all of the groups:

        root daemon bin sys adm tty disk lp mail uucp man proxy kmem dialout fax voice cdrom floppy tape sudo audio dip backup operator list irc src gnats shadow utmp video sasl plugdev users nogroup libuuid crontab syslog fuse mlocate ssl-cert lpadmin sambashare admin
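
    Before deleting any of the stock accounts it's worth checking that nothing on disk belongs to them and that no service still references them; a rough sketch, using uucp purely as an example:

        # anything owned by the account or its group? (run as root; -xdev stays on the root fs)
        find / -xdev \( -user uucp -o -group uucp \) -ls 2>/dev/null
        # any cron entry or init script still mentioning it?
        grep -rl uucp /etc/cron* /etc/init.d 2>/dev/null
        # only if both come back empty
        deluser uucp
        delgroup uucp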

    Read the article

  • Force initial Google Drive sync with a non-empty folder?

    - by Terrance Shaw
    I upgraded my iMac with an SSD last night and restored from a Time Capsule backup. Everything is now working substantially zippier and overall better, with the exception of one thing: Google Drive refuses to continue to sync with the Google Drive folder that it'd been using before the upgrade, and I ultimately ended up having to just delete the folder and let it resync from scratch to get past its stubborn error (alternatively, I suppose I could've simply moved the contents, set the path to the now-empty folder, then moved them back). Is there any way to get past this particular issue (for future reference), or is it something that Google put in place to ensure that a new user doesn't go and specify their root drive as the backup destination?

    Read the article

  • Ant build.xml requires user input, but Eclipse has no tty

    - by carneades
    I'm trying to better integrate Eclipse with my build.xml. My build file calls GNU Make for the native portion of the program, and the Makefile uses sudo to move the compiled libs into a system path. Unfortunately that requires entering a password, and Eclipse's terminal doesn't accept user input. So the result of running the build in Eclipse is:

        [exec] sudo: no tty present and no askpass program specified
        [exec] make: *** [install] Error 1

    Any way around this problem? Can the Ant build be elevated to root some other way?
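
    One way around the missing tty is to let the specific commands the Makefile runs under sudo go password-less for the build account. The snippet below is only a sketch (the username and command paths are assumptions - they have to match whatever the Makefile actually invokes via sudo) and belongs in a file created with visudo:

        # /etc/sudoers.d/eclipse-build  (user and command list are assumptions)
        builduser ALL=(root) NOPASSWD: /bin/cp, /usr/bin/install

    Alternatively, sudo -A together with an askpass helper (for example ssh-askpass exported in SUDO_ASKPASS) gives sudo a way to prompt even when no terminal is available.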

    Read the article

  • Recovering OS X Mail Accounts Lost in Crash

    - by Tim
    I had a hard crash on my Mac PowerBook, and when I restarted, Mail came up with only my MobileMe account still available; I cannot figure out how to restore the other eight email accounts I have. The directories in ~/Library/Mail all seem to be there. I even did an rsync of the modified .plist files from a Time Machine backup of the directory from before the crash (unfortunately, I was travelling, so the backup is more than a week old, and I'd like to try to recover from this point without restoring entirely from Time Machine). I also ran a permissions repair. So my questions are: where exactly is the account information for Mac Mail kept? Any thoughts on what might have caused the failure? Why does only MobileMe come up? Any other thoughts on how to fix things?

    Read the article

  • Switching BIOS SATA RAID/AHCI setting causes BSOD at Windows Start - Why?

    - by thephatp
    I just changed my disk setup from:

        1 SATA HDD primary OS disk
        2x SATA HDD backup disks in RAID 1

    to:

        1 SATA SSD primary OS disk
        1 SATA HDD backup disk [no RAID]

    Everything worked great, no problem. So, since I don't have a RAID array anymore, I decided that I could change my BIOS setting to AHCI instead of RAID. I have a Gigabyte GA-P35-DS3R v1.0 mobo. These are my steps:

        1. Settings > Integrated Peripherals > "SATA RAID/AHCI Mode" = RAID -- changed this setting to AHCI
        2. Reboot
        3. The Windows Start screen shows up, but as the color orbs are spinning into focus: BSOD and immediate restart
        4. Repeated the reboot several times, same outcome

    Next step:

        1. Launch BIOS settings
        2. Integrated Peripherals > "Onboard SATA/IDE Ctrl Mode" = RAID -- changed this setting to AHCI
        3. Reboot
        4. The Windows Start screen shows up, but as the color orbs are spinning into focus: BSOD and immediate restart
        5. Repeated the reboot several times, same outcome

    Switch both settings back to RAID, reboot, and Windows starts up just fine, no issues. What am I missing? Why can't I set it to AHCI mode without BSODs?
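
    If this is Windows Vista/7 (the boot orbs suggest it), the usual cause of the immediate BSOD is that the in-box AHCI driver is disabled while the controller has been running in RAID mode, so the boot volume disappears the moment the BIOS switches. Enabling the driver before flipping the setting usually lets the switch stick; a sketch, run from an elevated prompt while still booted in RAID mode (registry path as on Windows 7):

        rem let the Microsoft AHCI driver (msahci) load at boot, then reboot and change the BIOS to AHCI
        reg add HKLM\SYSTEM\CurrentControlSet\services\msahci /v Start /t REG_DWORD /d 0 /f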

    Read the article

  • Can't write to file - 'Operation not permitted' WITH sudo

    - by charliehorse55
    I am having trouble writing to a few files on an external HD. I am using it to store media files as well as my Time Machine backup. The drive is formatted as HFS+ Journaled, and other files on the drive can be written successfully. Additionally, the Time Machine backup is working perfectly. Permissions for the file:

        $ ls -le -@ Parks\ and\ Recreation\ -\ S01E01.avi
        -rw-rw-rw-@ 1 evantandersen  staff  182950496 22 May  2009 Parks and Recreation - S01E01.avi
            com.apple.FinderInfo   32

    Things I have already tried:

        sudo chflags -N
        sudo chown myusername
        sudo chown 666
        sudo chgrp staff

    I also checked that the file is not locked (Get Info in Finder). Why can't I modify that file? Even with sudo I can't modify it at all.
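
    A couple of further checks that the listing above doesn't rule out; a quick sketch using the same file name:

        # is the volume itself mounted read-only?
        mount | grep Volumes
        # do BSD flags (uchg/uappnd) block writes even for root?
        ls -lO "Parks and Recreation - S01E01.avi"
        # clear them if they show up
        sudo chflags nouchg,nouappnd "Parks and Recreation - S01E01.avi"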

    Read the article

  • Failover tmpfs mirroring. Am I doing it right?

    - by user45286
    My goal is to have a certain directory available as tmpfs. There will be some modifications in this dir during server uptime, and those modifications must be synced to a non-tmpfs persistent dir on the HDD via rsync. After a server boot, the latest version from the non-tmpfs persistent dir must be copied back into tmpfs and the rsync syncing started again. I'm afraid that rsync will erase the non-tmpfs backup if the tmpfs dir happens to be empty. I'm doing it this way right now:

        1. create the tmpfs partition in /etc/fstab
        2. in /etc/rc.local (pseudocode):
           - delete the "tmpfs rsync" cronjob from /var/spool/cron/crontabs if there is one
           - cp -r /path/to/non-tmpfs-backup /path/to/tmpfs/dir
           - append the "tmpfs rsync" cronjob to /var/spool/cron/crontabs

    What do you think?
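
    One way to address the worry about rsync wiping the persistent copy when tmpfs comes up empty: have the cron job refuse to run until the boot-time seeding has happened. A minimal sketch, with the paths and sentinel file name as placeholders; rc.local would touch the sentinel only after the cp has finished:

        #!/bin/sh
        # sync-tmpfs.sh - called from cron; bails out if the tmpfs dir was never seeded,
        # so --delete can never empty the persistent backup
        if [ -e /path/to/tmpfs/dir/.seeded ]; then
            rsync -a --delete /path/to/tmpfs/dir/ /path/to/non-tmpfs-backup/
        fi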

    Read the article

  • trouble backing up large mysql database

    - by Patrick
    I have a WordPress MU database with something like 10,000+ tables for the various users' blogs. I need to upgrade WordPress MU to the newest version, but I want to back up the DB beforehand. phpMyAdmin fails to even load the page when I click export. I've tried going onto the server (Windows) and using the DOS command line:

        mysqldump -u USERNAME -p PASSWORD > BACKUP.sql

    but it hangs for a minute and gives me the error:

        error 23: out of resources when opening file '.\USERNAME\wp_1037_links.MYD' (Errorcode: 24) when using LOCK Tables

    What am I doing wrong, or what should I be doing? Is phpMyAdmin right for something this size? Is there a better way of doing this than the two methods I tried? **Note that this is not my site, so any suggestions as to the setup of the DB I'll have to run by the owner. I'm just here for WP-related crap; this is kind of out of scope for what I was brought on to do.
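
    Errorcode 24 is the OS telling mysqld it has run out of open file handles, which happens here because LOCK TABLES tries to open all 10,000+ MyISAM tables at once. A hedged sketch of a dump that skips the locking (the database name is a placeholder; note also that with a space after -p, mysqldump prompts for the password and treats PASSWORD as the database name):

        rem --skip-lock-tables avoids opening every table up front; --quick streams rows instead of buffering
        mysqldump -u USERNAME -p --skip-lock-tables --quick wordpress_mu > BACKUP.sql

    Raising open_files_limit in my.ini would be the other route, but that needs the owner's sign-off.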

    Read the article

  • vbscript calling svnadmin dump

    - by Dexton
    Hi, running the following VBScript to call svnadmin dump fails (i.e. no dump is created):

        Set objShell = CreateObject("WScript.Shell")
        Set objShellExec = objShell.Exec("svnadmin dump C:\svn_repos > C:\fullbackup")

    I discovered from another post, http://stackoverflow.com/questions/445121/svn-dump-fails-with-wscript-shell/2400011#2400011, that I had to create a new command interpreter using cmd, as follows:

        Set objShellExec = objShell.Exec("%comspec% /c " & "svnadmin dump C:\svn_repos > C:\fullbackup")

    This successfully created the dump, but I could never read the output information (i.e. "* Dumped revision 100.", "* Dumped revision 101.", etc.). I tried:

        Do While objShellExec.Status = 0
            Wscript.Echo objShellExec.StdOut.Readline
            Wscript.Echo objShellExec.StdErr.Readline
            WScript.Sleep 100
        Loop

    but nothing ever gets displayed. May I know how I can read the output information, and also why I needed to create a new command interpreter with "%comspec% /c" before the svnadmin dump command would execute correctly? Thanks. Regards, Dexton
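
    To the two questions: the > redirection is a feature of cmd.exe rather than of svnadmin, which is why nothing works until a command interpreter (%comspec% /c) is put in front of the command; and svnadmin dump writes the dump data to stdout while the "* Dumped revision" feedback goes to stderr. With stdout redirected to the dump file, StdOut.ReadLine in the loop blocks waiting for output that never comes, so the StdErr lines are never echoed either - reading only StdErr should show the progress. Run from a plain command prompt, the two streams can be separated like this (the progress file name is an assumption):

        rem dump data -> C:\fullbackup (stdout), "* Dumped revision" notes -> C:\dump_progress.txt (stderr)
        svnadmin dump C:\svn_repos > C:\fullbackup 2> C:\dump_progress.txt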

    Read the article

  • Disk Redundancy across different servers

    - by Mascarpone
    I have 3 servers, all with the same specs:

        Intel CPU
        8 GB RAM
        Linux or BSD
        Single 2TB desktop SATA drive with more than 10K hours of operation, and less than 300 GB used

    My provider cannot install a second hard drive, but can guarantee me that the drive will be replaced immediately in case of failure - with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to get some sort of redundancy; bandwidth inside the DC is free, so it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
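
    A minimal version of the hourly cross-copy as crontab entries (host names, paths and the ssh key setup are placeholders; each box would carry one peer's copy in addition to its own data):

        # on server A: pull the other two boxes' /home every hour, staggered
        0  * * * * rsync -a --delete serverB:/home/ /backups/serverB/home/
        20 * * * * rsync -a --delete serverC:/home/ /backups/serverC/home/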

    Read the article

  • How many hardlinks in a drive?

    - by acidzombie24
    I made a backup of one of my external drives to another; they are both NTFS filesystems. I moved ALL of the disk contents into a folder called a and right-clicked to get the file/folder/size count. They are exactly the same. However, Windows reports J: (the backup) as having 1.33 GB and Q: as 521 MB. Now I think maybe it's because of hardlinks - I must have more on J than on Q. How might I figure out how many hardlinks I have on a drive?
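
    On Windows Vista/7 and later, fsutil can list every directory entry that shares a file's data, which makes it possible to spot-check whether hardlinks explain the difference (the path is a placeholder; a small script could loop this over a whole drive):

        rem prints all paths pointing at the same file data; more than one line means it is hardlinked
        fsutil hardlink list "Q:\a\some\file.ext"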

    Read the article

  • IIS 6 SSL Restore from PFX without Deleting Pending Request

    - by Sev
    I requested a new SSL certificate from a certificate authority, but until they process it my site is losing business. Before doing so, I had backed up the original certificate to a PFX file. Now when I try to restore that backup, IIS forces me to either delete the pending request or process it. Since the new certificate isn't ready yet, is there any way to restore the backup without deleting the request? Or will it cause any issues if I delete the request now and install the new certificate when it comes in? The server is IIS 6.

    Read the article

  • Running lame from php

    - by gok
    I am trying to run lame from a PHP script. I have tried these, but no luck - I don't get anything returned. Any ideas?

        system('lame', $returnarr);
        system('lame --help', $returnarr);
        exec('lame', $returnarr);
        passthru('lame', $returnarr);

    Even this one returns nothing:

        exec('which lame', $returnarr);

    I am on OS X and the final deployment will be on Linux. Do you have better suggestions for an automated WAV-to-MP3 conversion? From PHP, should I execute a bash script that executes lame?
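
    Two common culprits here are PATH (the web server's environment rarely includes the directory lame was installed into) and output going to stderr, which exec() does not capture. A quick check to run from a shell as the same user PHP runs under; the install path is an assumption:

        # where does lame actually live for this environment?
        which lame
        # lame prints its banner and progress on stderr, so merge it into stdout when capturing
        /usr/local/bin/lame --version 2>&1

    In the PHP calls, that translates to using the absolute path and appending 2>&1 to the command string.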

    Read the article

  • How to Set Linux Bonding Interface to Gigabit

    - by Kyle Brandt
    I have enabled Linux active-backup mode bonding. Each interface is a gigabit interface, but the bond interface seems to end up at 100 Megabit:

        bonding: bond0: Warning: failed to get speed and duplex from eth1, assumed to be 100Mb/sec and Full.
        ...
        bnx2: eth0 NIC Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
        ...
        bonding: bond0: backup interface eth1 is now up

    ethtool apparently can't provide info on the bond:

        sudo ethtool bond0
        Settings for bond0:
        No data available

    So does this mean I am operating at 100 or 1000 Megabit (my guess is 1000)? If it is only 100, what options in the ifcfg scripts or the modprobe bonding options do I need to set to make it 1000?
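
    The bond device itself does not report a speed; what matters is what the bonding driver sees on each slave. A quick check, plus the option that lets the driver keep polling the slaves so speed/duplex are picked up once the link has negotiated (RHEL-style ifcfg syntax shown as an assumption):

        # per-slave speed, duplex and link state as the bonding driver sees them
        cat /proc/net/bonding/bond0
        ethtool eth0
        ethtool eth1

        # in ifcfg-bond0: miimon makes the driver monitor the slaves' link state
        BONDING_OPTS="mode=active-backup miimon=100"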

    Read the article

  • Backing up VM data to host drive on Windows 7

    - by malcolms
    Hi, I have created a VM in Virtual PC on Windows 7. I am writing a batch file to back up data in the VM to a USB drive on the host. I have shared the host drives, and I have a USB drive that I want to back up to. But how do I refer to the USB drive in the batch file? I cannot seem to map a drive to it; it shows up as "H on Malcolm-Desktop" in Windows Explorer. This is what I have tried:

        XCOPY C:\Inetpub\wwwroot "\\H on Malcolm-Desktop\HALII_VHD_Backup\DataBackup\Inetpub\wwwroot" /S /E /Y /D

    How do I write this command? Malcolm
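
    Windows Virtual PC exposes the host drives listed as "H on Malcolm-Desktop" through its RDP-based integration features, so - as an assumption worth testing - the drive should be reachable inside the guest as \\tsclient\H and mappable from the batch file:

        rem map the redirected host drive, copy, then release the mapping (share name is an assumption)
        net use Z: \\tsclient\H
        XCOPY C:\Inetpub\wwwroot "Z:\HALII_VHD_Backup\DataBackup\Inetpub\wwwroot" /S /E /Y /D
        net use Z: /delete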

    Read the article

  • SQL Server 2000 msdb database loading/suspect

    - by Blake Parcell
    My SQL Server recently suffered a RAID controller/hard drive crash. After getting the hard drive problem corrected, I soon found that some of my databases were suspect, namely msdb. I am not a DBA by any means, but I am somewhat familiar with the daily SQL activities that happen on my server. So I restored from backup and tried to bring my msdb database online. It is now forever stuck in (Loading/Suspect), and I am unable to script backups for my important databases. I can recreate all of the backup plans etc. if I can somehow get a working msdb. Any help would be greatly appreciated. I am currently using Microsoft SQL Server 2000, version 8.00.194.

    Read the article

  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use an rsync command to back up files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that runs the rsync command. Here is the command I use:

        rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is ssh-able without a password, so that part works. Now the problem is with the --delete-after option: after all files have been synced, the deletion step is skipped at the end. The log file shows the error:

        IO error encountered -- skipping file deletion

    When I looked through the log there were also some errors during the file sync itself:

        rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)

    So my understanding is that because rsync could not read all of the files from the source, it is skipping the file deletion for safety reasons. Is there any way to make --delete-after work even if there is a permission error? I do not want to force deletion blindly, as that would be dangerous in some situations.
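
    rsync has a switch for exactly this case: --ignore-errors tells the delete pass to go ahead even when I/O errors occurred during the transfer. It does remove the safety net (an unreadable source tree could cause its copies to be deleted on the backup side), so it is worth fixing the underlying permission problem too. A sketch of the same command with the flag added:

        rsync -rltvh --partial --stats --ignore-errors --delete-after \
            --exclude=.beagle/ --exclude=.* \
            root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1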

    Read the article

  • Inner Join on more than one field

    - by Leandro
    I need to do a select with an inner join where the relationship between the tables spans more than one field. Example:

        DataSet dt = new Select().From(SubConta.Schema)
            .InnerJoin(PlanoContabilSubConta.EmpSubContaColumn, SubConta.CodEmpColumn)
            .InnerJoin(PlanoContabilSubConta.FilSubContaColumn, SubConta.CodFilColumn)
            .InnerJoin(PlanoContabilSubConta.SubContaColumn, SubConta.TradutorColumn)
            .Where(PlanoContabilSubConta.Columns.EmpContabil).IsEqualTo(cEmp)
            .And(PlanoContabilSubConta.Columns.FilContabil).IsEqualTo(cFil)
            .And(PlanoContabilSubConta.Columns.Conta).IsEqualTo(cTrad)
            .ExecuteDataSet();

    But the generated SQL is wrong:

        exec sp_executesql N'/* GetDataSet() */
        SELECT [dbo].[SubContas].[CodEmp], [dbo].[SubContas].[CodFil], [dbo].[SubContas].[Tradutor],
               [dbo].[SubContas].[Descricao], [dbo].[SubContas].[Inativa], [dbo].[SubContas].[DataImplantacao]
        FROM [dbo].[SubContas]
        INNER JOIN [dbo].[PlanoContabilSubContas] ON [dbo].[SubContas].[CodEmp] = [dbo].[PlanoContabilSubContas].[EmpSubConta]
        INNER JOIN [dbo].[PlanoContabilSubContas] ON [dbo].[SubContas].[CodFil] = [dbo].[PlanoContabilSubContas].[FilSubConta]
        INNER JOIN [dbo].[PlanoContabilSubContas] ON [dbo].[SubContas].[Tradutor] = [dbo].[PlanoContabilSubContas].[SubConta]
        WHERE EmpContabil = @EmpContabil0 AND FilContabil = @FilContabil1 AND Conta = @Conta2
        ', N'@EmpContabil0 varchar(1),@FilContabil1 varchar(1),@Conta2 varchar(1)', @EmpContabil0='1', @FilContabil1='1', @Conta2='1'

    What do I need to change to generate this SQL instead?

        exec sp_executesql N'/* GetDataSet() */
        SELECT [dbo].[SubContas].[CodEmp], [dbo].[SubContas].[CodFil], [dbo].[SubContas].[Tradutor],
               [dbo].[SubContas].[Descricao], [dbo].[SubContas].[Inativa], [dbo].[SubContas].[DataImplantacao]
        FROM [dbo].[SubContas]
        INNER JOIN [dbo].[PlanoContabilSubContas] ON [dbo].[SubContas].[CodEmp] = [dbo].[PlanoContabilSubContas].[EmpSubConta]
            AND [dbo].[SubContas].[CodFil] = [dbo].[PlanoContabilSubContas].[FilSubConta]
            AND [dbo].[SubContas].[Tradutor] = [dbo].[PlanoContabilSubContas].[SubConta]
        WHERE EmpContabil = @EmpContabil0 AND FilContabil = @FilContabil1 AND Conta = @Conta2
        ', N'@EmpContabil0 varchar(1),@FilContabil1 varchar(1),@Conta2 varchar(1)', @EmpContabil0='1', @FilContabil1='1', @Conta2='1'

    Read the article

  • Debugging in Maven?

    - by aduric
    Is it possible to launch a debugger such as jdb from Maven? I have a pom.xml file that compiles the project successfully. However, the program hangs somewhere and I would really like to launch jdb or an equivalent debugger to see what's happening. I compile using mvn compile and launch using:

        mvn exec:java -Dexec.mainClass="com.mycompany.app.App"

    I was expecting something like:

        mvn exec:jdb -Dexec.mainClass="com.mycompany.app.App"

    to launch the debugger but, as usual, my expectations are incongruent with Maven's philosophy. Also, I couldn't find any documentation (on Maven's website or Google) describing how debugging works. I suspect that I have to use some plugin.
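
    Maven ships a second launcher for this: mvnDebug (in the same bin directory as mvn, available since Maven 2.0.8) starts the build suspended with a JDWP socket listening on port 8000, and any debugger - jdb included - can then attach. A sketch:

        # terminal 1: run the goal under the debug launcher; it waits for a debugger on port 8000
        mvnDebug exec:java -Dexec.mainClass="com.mycompany.app.App"

        # terminal 2: attach jdb to the JDWP socket
        jdb -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8000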

    Read the article

  • Qtestlib: QNetworkRequest not executed

    - by dzen
    I would like to test an asynchronous request to a webserver. For that purpose I'm creating a simple unit test to quickly try a few lines of code:

        void AsynchronousCall::testGet() {
            QNetworkAccessManager *nam = new QNetworkAccessManager(this);
            QUrl url("http://myownhttpserver.org");
            QNetworkRequest req(url);
            this->connect(nam, SIGNAL(finished(QNetworkReply*)),
                          this, SLOT(reqFinished(QNetworkReply*)));
            QNetworkReply *rep = nam->get(req);
        }

        void AsynchronousCall::reqFinished(QNetworkReply *rep) {
            qDebug() << rep->readAll();
            qDebug() << "finished";
        }

    The problem is that reqFinished() is never reached. If I add a simple QEventLoop and a loop.exec() just after the nam->get(req), the request is executed. Any hint? Do I have to use a loop.exec() in every one of my unit tests?

    Read the article
