Search Results

Search found 15087 results on 604 pages for 'copy constructor'.


  • Why does nginx rewrite a POST request from /login to //login?

    - by jiangchengwu
    There is an if statement that rewrites the URL when the client is Android. Everything was OK, but then something strange happened: nginx rewrites a POST request to /login as //login even when the body of the if block is blank, so I get a 404 page, because the Jetty server only accepts /login requests. Server conf:

        location / {
            proxy_pass http://localhost:8785/;
            proxy_set_header Host $http_host;
            proxy_set_header Remote-Addr $http_remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            if ( $http_user_agent ~ Android ){
                # rewrite something, been commented
            }
        }

    Debug info (original log: https://gist.github.com/3799021):

        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script regex: "Android"
        2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
        "POST //login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...
        2012/09/28 16:29:49 [debug] 26416#0: *1 HTTP/1.1 404 Not Found
        Server: nginx/1.2.1
        Date: Fri, 28 Sep 2012 08:29:49 GMT
        Content-Type: text/html;charset=ISO-8859-1
        Transfer-Encoding: chunked
        Connection: keep-alive
        Cache-Control: must-revalidate,no-cache,no-store
        Content-Encoding: gzip
        ...

    Only when I comment the block out in the configuration file:

        location / {
            proxy_pass http://localhost:8785/;
            proxy_set_header Host $http_host;
            proxy_set_header Remote-Addr $http_remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            #if ( $http_user_agent ~ Android ){
            #
            #}
        }

    does the client get a 200 response. Debug info (original log: https://gist.github.com/3799023):

        ...
        "POST /login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...
        2012/09/28 16:27:19 [debug] 26319#0: *1 HTTP/1.1 200 OK
        Server: nginx/1.2.1
        Date: Fri, 28 Sep 2012 08:27:19 GMT
        Content-Type: application/json;charset=UTF-8
        Content-Length: 17
        Connection: keep-alive
        ...

    From the first log:

        2012/09/28 16:29:49 [notice] 26416#0: *1 "Android" matches "Android/1.0", client: 106.187.97.22, server: ireedr.com, request: "POST /login HTTP/1.1", host: "ireedr.com"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script if
        2012/09/28 16:29:49 [debug] 26416#0: *1 post rewrite phase: 4
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 5
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 6
        2012/09/28 16:29:49 [debug] 26416#0: *1 generic phase: 7
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 8
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 9
        2012/09/28 16:29:49 [debug] 26416#0: *1 access phase: 10
        2012/09/28 16:29:49 [debug] 26416#0: *1 post access phase: 11
        2012/09/28 16:29:49 [debug] 26416#0: *1 try files phase: 12
        2012/09/28 16:29:49 [debug] 26416#0: *1 posix_memalign: 0000000001E798F0:4096 @16
        2012/09/28 16:29:49 [debug] 26416#0: *1 http init upstream, client timer: 0
        2012/09/28 16:29:49 [debug] 26416#0: *1 epoll add event: fd:13 op:3 ev:80000005
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Host: "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "ireedr.com"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: ""
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "X-Real-IP: "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script var: "106.187.97.22"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: " "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http script copy: "Connection: close "
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept-Encoding: identity, deflate, compress, gzip"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "Accept: */*"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header: "User-Agent: Android/1.0"
        2012/09/28 16:29:49 [debug] 26416#0: *1 http proxy header:
        "POST //login HTTP/1.0
        Host: ireedr.com
        X-Real-IP: 106.187.97.22
        Connection: close
        Accept-Encoding: identity, deflate, compress, gzip
        Accept: */*
        User-Agent: Android/1.0
        "
        ...

    Maybe the post rewrite phase rewrote the request. Can anybody help me solve this problem, or explain why nginx does this? Much appreciated.
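
    A workaround often suggested for this nginx behavior: when an if block sits inside a location whose proxy_pass carries a URI part (here the trailing "/"), nginx cannot determine which part of the URI to replace, and the request URI ends up appended to the URI part, producing "/" + "/login" = "//login". Dropping the URI part from proxy_pass is the usual fix; a sketch, assuming the same upstream:

        location / {
            # no trailing slash: nginx passes the request URI through unchanged,
            # so POST /login stays /login even with an if block in the location
            proxy_pass http://localhost:8785;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            if ( $http_user_agent ~ Android ){
                # rewrite something
            }
        }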


  • Bacula not backing up all the files it should

    - by Nigel Ellis
    I have Bacula (5.2) running on a Fedora 14 system, backing up several different computers, including Windows 7, Windows 2003, and Windows 2008. When backing up the Windows 2008 server, the backup stops after a relatively small amount has been copied and reports that the backup was okay. The fileset I am trying to back up should be around 323 GB, but it manages a mere 27 GB before stopping - without erring. I tried creating a mount on the Fedora computer to the server I am trying to back up, and Bacula managed to copy 58 GB. When I used the mount to copy the files manually, I was able to copy them all - there are no problems with permissions etc. on the mount. Can anyone give a reason why Bacula would just stop? I have heard there is a 260-character path limit, but some of the files that should have been copied resolve to shorter filenames than some that have been backed up.


  • Get the exact size in bytes of a disk and its partitions in Windows

    - by Antonius Bloch
    I'm using dd (under Cygwin) to copy a shadow image of a disk in Windows. Shadow copy will only give me a partition, so what I am doing is:
    1. Using dd to grab the disk header (32 KB on Win2003).
    2. Using dd to copy the shadow partition.
    3. Using dd to copy the end of the disk (8 MB reserved on Win2003).
    4. Stitching them all together and booting on KVM.
    I need the exact size of all the partitions and the non-partitioned space on this Windows drive. Unfortunately, most Windows disk tools seem to fudge the numbers a bit, or at least give me a different size than Linux does. I could guess (32 KB + partition size + 8 MB), but I want to double-check; if I make a mistake I could lose data. This is a remote, live Windows 2003 server, so offline solutions won't help. The latest Cygwin is installed.
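
    One way to read exact byte figures on the live server is WMI, which is already present on Windows 2003. A sketch using wmic (note a likely source of the "fudging": Win32_DiskDrive's Size is derived from CHS geometry and commonly understates the real disk size, whereas the partition offsets and sizes are exact byte values):

        rem Exact partition layout, in bytes:
        wmic partition get Name,Index,StartingOffset,Size

        rem Disk size as Windows computes it (geometry-based, may read low):
        wmic diskdrive get Name,Size,TotalSectors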


  • How to back up data from a machine that keeps hanging

    - by Amit Phatarphekar
    I have a storage server running OpenSolaris, but lately it has been acting up - it hangs at random times with SCSI/ATA-related error messages. I've tried to fix it without any progress, so I'm giving up now. The machine hangs every 30 minutes or every hour, sometimes only after 4 hours; it's very unpredictable. So I've decided to reformat the storage server and start from scratch - maybe I'll not use Solaris and will install something else, since the errors are related to Solaris running on ATA HDDs or something similar.

    Question: before I reformat it, I want to back up some of the important data on it - a VM with 200 GB of disk files, a whole bunch of ISOs stored on it, etc. I'm using a simple scp to copy the files over to a different machine. My issue is that, because the machine hangs, my file copy is sometimes incomplete and I have to start all over again. Say I'm copying a 200 GB file, which takes about 4 hours: if the machine hangs before the whole file is copied over, I have to recopy it from scratch. Is there a solution for copying the files such that if the machine hangs or the network goes down, the copy can resume from where it left off - so that if 50 GB of a 200 GB file was copied when the machine hung, the next run just copies the rest instead of starting all over again? Thanks, Amit
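
    A sketch of the usual remedy, assuming rsync is available on both machines (host and path names are hypothetical): --partial keeps a partially transferred file instead of deleting it, so re-running the same command after a hang resumes roughly where it stopped rather than resending everything.

        # rerun this same command after every hang; already-transferred data is kept
        rsync --partial --progress --archive \
            /tank/vms/bigdisk.img user@backuphost:/backup/vms/

    Newer rsync versions also offer --append-verify, which extends a partial file in place, but plain --partial is the portable option on an older OpenSolaris rsync.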


  • Excel 2007 - Adding line breaks in a cell and no line over 50 characters

    - by Richard Drew
    I have notes stored in an Excel cell, and I add line breaks and dates every time I add a new note. I need to copy this into another program, but that program has a line limit of 50 characters. I want a line break for each new date, and also within each date's comment whenever it goes over 50 characters. I'm able to do one or the other, but I can't figure out how to do both. I'd prefer words not to be split up, but at this point I don't care. Below is some sample input. If needed for a SUBSTITUTE or REPLACE function, I could add a ~ before each date in my input as a delimiter.

    Sample input:

        07/03 - FU on query. Copies and history included. CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong PO on query form. Responded with inv sent dates and locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        05/14 - Copy sent to John Public and [email protected]

    Ideal output:

        07/03 - FU on query. Copies and history included.
        CC to Jane Doe and John Public
        06/29 - Cust claiming not to have these and wrong
        PO on query form. Responded with inv sent dates an
        d locations, correct PO values, and copies.
        06/27 - New ticket opened using query form
        06/12 - Opened ticket with helpdesk asking status
        05/21 - Copy submitted to [email protected]
        om
        05/14 - Copy sent to John Public and email@custome
        r.com
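
    Since worksheet functions alone struggle with the double condition, a short VBA user-defined function is one way to do both splits at once. A minimal sketch (the function name and default limit are assumptions to adapt): it keeps each dated entry on its own line and wraps at word boundaries, so a single word longer than the limit would still exceed it.

        Function WrapNotes(notes As String, Optional maxLen As Long = 50) As String
            Dim para As Variant, word As Variant
            Dim curLine As String, result As String
            For Each para In Split(notes, vbLf)            ' one pass per dated entry
                curLine = ""
                For Each word In Split(para, " ")
                    If Len(curLine) = 0 Then
                        curLine = word
                    ElseIf Len(curLine) + 1 + Len(word) <= maxLen Then
                        curLine = curLine & " " & word     ' word still fits on this line
                    Else
                        result = result & curLine & vbLf   ' emit the full line
                        curLine = word                     ' start a new line with this word
                    End If
                Next word
                result = result & curLine & vbLf
            Next para
            If Len(result) > 0 Then result = Left$(result, Len(result) - 1)
            WrapNotes = result
        End Function

    Used as =WrapNotes(A1) in a helper cell (with wrap text enabled), the result can then be pasted into the other program.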


  • Decrypting EFS w/o altering timestamp - possible?

    - by grojo
    I'd like to decrypt some EFS-encrypted files without altering their timestamps. When encrypting or decrypting files, the modified time is set to the current time; I'd like to preserve the original timestamp, as the files have not really changed. Is this possible?

    Solutions I don't think will work:
    - copy to/from FAT (timestamp resolution differs)
    - copy to/from a Samba share (same problem)
    - programmatically save the original timestamp and reapply it after decryption (possible, but needs to handle the decryption time, which may vary)
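
    The third idea is straightforward to script, and the variable decryption time does not actually matter, because the original timestamp is captured before decryption starts and reapplied afterwards regardless of how long it took. A PowerShell sketch, assuming cipher.exe does the decryption (the path is hypothetical):

        $file = Get-Item 'C:\secure\report.doc'
        $modified = $file.LastWriteTime        # remember the original timestamp
        cipher /d /a $file.FullName            # decrypt the file in place
        $file.Refresh()                        # re-read attributes from disk
        $file.LastWriteTime = $modified        # reapply the original timestamp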


  • Switchover in PostgreSQL

    - by user1010280
    I am using PostgreSQL 9.0 with streaming replication. During switchover I follow these steps:
    1. Get the server timestamp on the primary.
    2. Get the current log position on the primary.
    3. Set and verify the log location.
    4. Verify the transaction received location.
    5. Shut down the database on production.
    6. Synchronize the transaction logs from the primary to DR.
    7. Trigger a failover on the DR database by creating the trigger file specified in recovery.conf.
    8. Verify the DB mode on DR.
    9. Copy the control file from DR to the primary.
    10. Copy the temporary stats file from DR to the primary.
    11. Copy the history file from DR to the primary.
    12. Create the recovery.conf file.
    13. Start the database in standby mode on the primary.
    14. Verify the DB mode on the primary.

    At step 6, I have to copy the last WAL generated on the primary to the standby and sync the two, but this takes time because the copy is remote. Postgres therefore keeps searching for the WAL for a long time and then stops the server. Is there any way to tell Postgres to stop searching for the WAL after shutdown? Postgres retries locating this WAL every 5 seconds. Please reply as soon as possible - it's urgent.
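
    For reference, a minimal recovery.conf sketch of the kind used at steps 7 and 12 on PostgreSQL 9.0 (host names and paths are hypothetical); creating the file named by trigger_file is what ends recovery and promotes the standby:

        standby_mode = 'on'
        primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
        restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
        trigger_file = '/var/lib/pgsql/failover.trigger'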


  • Copying between 2 network devices

    - by Dave Rook
    My network has three devices: one PC and two other network devices (which could be PCs, NAS boxes, external hard drives, etc.). If I copy data from one network device to the other from my PC, using the copy-and-paste method (right-click a file, select Copy, then right-click in the destination folder and click Paste), and therefore without using any transfer tools built into the network devices themselves, does the data get routed via my PC and use my system's resources?


  • Use Windows 7 inside VirtualBox (as guest) to create a Windows 7 USB with the "Windows 7 USB/DVD Download Tool"? (Linux as host)

    - by Abel Coto
    I want to download the Windows 7 Professional (x32) ISO from Microsoft, and then I can do one of two things: buy a new burner, as mine doesn't work (I am trying to decide which DVD writer to buy), or copy the ISO to a USB stick and install from that. I want to install Windows 7 on a netbook that currently runs Debian, and on my PC. I think I only have to buy a license for the PC, as the netbook came with Windows 7 preinstalled, so I suppose I can use that serial to activate Windows, although I don't know how to install Windows 7 Starter instead of Professional (I think if you remove a file from the ISO, Windows lets you choose the edition to install). The problem is that neither PC has Windows on it, only Debian.

    My father has a netbook with Windows 7 Starter, but I think it has no antivirus (at least until I buy Kaspersky Internet Security for 3 PCs), and I don't trust making the USB there without knowing it is free of viruses and malware. So I am trying to find a way to create a Windows 7 USB installer, to at least be able to install Windows 7 on the netbook without an external DVD writer.

    I know that with dd on Linux you can copy a debian.iso to a USB stick and then install Debian from it (I've done it), using something like dd if=win7.iso of=/dev/sdb, but I don't know whether that works for the Windows 7 ISO and whether dd will copy the ISO to the USB correctly. I suppose that if you can boot and install Windows 7 from the USB, the method works, and you can stop worrying about problems with the installation later (problems because some files could not be copied, or the like).

    Then I remembered that Microsoft created a tool to copy the ISO to a USB stick from within Windows. So I thought I could install VirtualBox on my PC, as I have VT and 8 GB of RAM, download the ISO from Microsoft, install Windows 7 in the virtual machine, copy the ISO inside the machine, download the ISO tool, attach a USB stick to the PC, connect it to the guest, and use the tool to copy the ISO to the USB. But I don't know whether it is possible to use a virtual machine for this, or whether the virtualization would cause problems with the USB, or something else.

    I found this a few minutes ago: How to make a windows 7 usb flash install media, from linux? The first method (dd) is the one I like and trust most (I don't know whether the second method, using ms-sys, works well and whether I can trust it; I understand that an ISO is like an uncompressed .rar containing the files, so mounting the ISO and cp-ing the data across is perhaps OK). The method I like most, though, is the Microsoft one (mostly because it is from Microsoft, and I suppose they know what they are doing, at least with this USB-related thing). Perhaps it's worth more to buy an external DVD writer, haha... Should the virtual machine method work?
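
    For the record, a sketch of the mount-and-copy route from Linux mentioned in that link, assuming the ms-sys tool is installed and the stick is /dev/sdb (the device name is hypothetical - verify it with lsblk before writing anything). A straight dd of a pre-UEFI Windows 7 ISO is usually reported not to boot, because the ISO layout lacks a USB-bootable MBR, which is why this route exists:

        sudo parted /dev/sdb mklabel msdos mkpart primary ntfs 1MiB 100% set 1 boot on
        sudo mkfs.ntfs -f /dev/sdb1                 # quick-format the partition as NTFS
        sudo mount -o loop win7.iso /mnt/iso        # the ISO is just a file tree
        sudo mount /dev/sdb1 /mnt/usb
        sudo cp -r /mnt/iso/* /mnt/usb/             # copy the installer files across
        sudo umount /mnt/usb /mnt/iso
        sudo ms-sys -7 /dev/sdb                     # write a Windows 7 compatible MBR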


  • Using ffmpeg to cut up video

    - by Neil
    I am using ffmpeg like this, e.g.: ffmpeg -i input.wmv -ss 60 -t 60 -acodec copy -vcodec copy output.wmv to cut out a section of a large file. The -ss part works fine, but the -t is ignored: it correctly removes the first -ss seconds, but then just keeps copying to the end of the input. Is there a way to use ffmpeg to cut off the end of a video without re-encoding it?
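
    With older ffmpeg builds, option order matters: placing -ss before -i seeks the input, and -t then limits the output duration, which is the usual fix when -t appears to be ignored during stream copy. A sketch (since nothing is re-encoded, the cut points snap to the nearest keyframes, so the cut is only approximately frame-accurate):

        ffmpeg -ss 60 -i input.wmv -t 60 -acodec copy -vcodec copy output.wmv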


  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Windows 2003 VM (VMware) on a blade cluster of VMs, from which I'm moving a considerable number of files to our new DFS array. There are two main folders, with about 1.7 million and half a million smaller files (letters, memos, and other small documents) respectively; total size is ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that it took around 4 hours for the larger folder. Now that I'm actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side, and nothing has changed in the settings of the copy (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to pick up the files that were skipped because users had them locked, but I can't take a 20-hour outage to do that. Does anyone have any theories about what could be causing such a delay for robocopy compared to the previously shorter runs?


  • Merge several mp4 videos using ffmpeg and PHP [on hold]

    - by rihab
    I would like to merge several videos using ffmpeg and PHP. I retrieve the names of the videos dynamically, but I can't merge all the videos together - I only get i-1 merged videos. This is the code I use:

        <?php
        $checkBox = $_POST['language'];
        $output = rand();

        function conv($checkBox) {
            $tab = array();
            for ($i = 0; $i < sizeof($checkBox); $i++) {
                $intermediate = rand();
                $tab[$i] = $intermediate;
                exec("C:\\ffmpeg\\bin\\ffmpeg -i C:\\wamp\\www\\video_qnb\\model\\input\\$checkBox[$i].mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts C:\\wamp\\www\\video_qnb\\model\\output\\$intermediate.ts");
            }
            return $tab;
        }

        $t = conv($checkBox);
        for ($i = 0; $i < sizeof($t); $i++) {
            if ($i != 0) {
                if (sizeof($t) <= 2) {
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\' . $t[$i-1] . '.ts|C:\\wamp\\www\\video_qnb\\model\\output\\' . $t[$i] . '.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\' . $output . '.mp4');
                } else {
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\' . $t[$i-1] . '.ts|C:\\wamp\\www\\video_qnb\\model\\output\\' . $t[$i] . '.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\' . $output . '.mp4');
                    exec("C:\\ffmpeg\\bin\\ffmpeg -i C:\\wamp\\www\\video_qnb\\model\\output\\" . $output . ".mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts C:\\wamp\\www\\video_qnb\\model\\output\\i.ts");
                    exec('C:\\ffmpeg\\bin\\ffmpeg -i "concat:C:\\wamp\\www\\video_qnb\\model\\output\\i.ts|C:\\wamp\\www\\video_qnb\\model\\output\\' . $t[$i+1] . '.ts" -c copy -bsf:a aac_adtstoasc C:\\wamp\\www\\video_qnb\\model\\output\\final.mp4');
                    $i++;
                }
            }
        }
        ?>

    Can anyone help me?
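
    For what it's worth, a simplified sketch of the same pipeline (paths kept from the question, variable names hypothetical): convert every input to an MPEG-TS segment first, then hand all segments to a single concat invocation rather than merging pairwise, which avoids the off-by-one bookkeeping entirely.

        <?php
        $base = 'C:\\wamp\\www\\video_qnb\\model';
        $segments = array();
        foreach ($_POST['language'] as $name) {
            $ts = $base . '\\output\\' . rand() . '.ts';
            // remux each mp4 to a transport stream, no re-encoding
            exec("C:\\ffmpeg\\bin\\ffmpeg -i $base\\input\\$name.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts $ts");
            $segments[] = $ts;
        }
        // concatenate every segment in one pass
        $concat = 'concat:' . implode('|', $segments);
        $out = $base . '\\output\\' . rand() . '.mp4';
        exec('C:\\ffmpeg\\bin\\ffmpeg -i "' . $concat . '" -c copy -bsf:a aac_adtstoasc ' . $out);
        ?>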


  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder, that is mapped on server Alpha as "X:" On server Alpha there is a .vbs script that runs and take some files on local drive and copy them on X: drive. My issue is if no user log on server Alpha for a long time, it seems like the tcp connection underneath the mapped drive has a timeout, and the vbs script fails on the copy of file. As soon I log with remote desktop on Alpha server, the .vbs is successfull on the copy of the files. I have made many tests, using file logs to check what was happening and I found no way to refresh the connection and let the .vbs be able to copy files unattended.. I have always to log with remote desktop on Alpha server to refresh the connection and let the .vbs copy the files without issues. What can I do to avoid to log every time? The .vbs script runs 3 times a day and is very annoying. I do not have control over server Beta so I cannot change anything there, and I am very limited on changes I can do on server Alpha ( I cannot change registry and that sort of things)


  • Google Chrome crashes when you paste from a remote session

    - by oo
    I often have a Remote Desktop session or a Java remoting tool open, and whenever I copy from within a remote session and paste into the Chrome browser, the browser freezes and I have to kill it before anything works again. Has anyone seen this, or found a resolution? I have to remember to paste into Notepad first, then copy from Notepad into Chrome, which is a pain. I am using Google Chrome 5.0.


  • Samsung 830 SSD

    - by anru
    I have a 128 GB Samsung SSD (830) installed in my Windows 7 Ultimate 64-bit machine. I tried to copy a folder from my C: drive to the SSD and found that the copy speed is very slow (screenshot of the copy dialog omitted). I just want to know whether this is because I was copying so many small files. By the way, the SSD is SATA 3, but my motherboard only has a SATA 2 interface; I don't know whether connecting a SATA 3 device to a SATA 2 interface contributes to the slow copy speed.


  • Default profile for large

    - by user63434
    I am setting up a master image to clone to clients of the same machine type running Windows 7. I log in as administrator, install all the programs, and change the desktop settings etc., but my local administrator profile is 244 MB in size, and it will become the default profile of the local machine when sysprepped. We have a 2003 server, and I want to use a mandatory profile for all login users, which means I need to copy this profile to the server so that any user logging on to the domain uses it. Loading a 244 MB profile is going to be very slow, since it is removed from the client at logoff, so the next login takes a long time again. Is there anything I can do? Can I copy just the bare minimum files from the default profile to the server? I am not sure which parts I need; I read that I must copy My Documents and My Documents/Pictures so that folder redirection will work. What else do I need to copy to the server? I also have Firefox Xmarks sync, MS Word, etc. Thanks.


  • Why can't I set a Windows 7 folder to writeable?

    - by Clay Nichols
    I moved a SATA HD from one PC to another and copied almost all the files from the old drive to the new one, except for one folder ("OldFolder"), its subfolders, and its files. To simplify things, I tried to copy just a single file:
    - I can't copy that file into the same directory (I get the error above).
    - I CAN copy the file to the Desktop.
    - I cannot copy the file's containing folder to the Desktop.
    Under Properties for OldFolder: Read Only is checked; under Security, Everyone is set to Allow for everything except "special permissions", and all users are set to allow WRITE.
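
    When a folder has been moved from another machine, a common culprit is ownership by a SID from the old installation, which the Security tab does not show clearly. A sketch of the usual fix, run from an elevated command prompt (the path is hypothetical):

        rem Take ownership of the folder tree, then reset its ACLs to inherited defaults:
        takeown /f D:\OldFolder /r /d y
        icacls D:\OldFolder /reset /t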


  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    For the past couple of weeks I have been creating a generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, copying only the specified content; the exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly, and in the end we are copying only about 3%-20% of databases that are as large as 500 GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO, and it took a while to create a generic way to copy the schema objects, filegroups, etc. (it actually helped find some bad stored procedures). Overall I have a working script that is lacking in performance (and at times times out), and I was hoping you would be able to help. When executing the WriteToServer command to copy a large amount of data (> 6 GB), it reaches my timeout period of 1 hr. Here is the core code for copying table data; the script is written in PowerShell:

        $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim()
        Write-LogOutput "Copying $selectedTable : '$query'"
        $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source
        $cmd.CommandTimeout = 120
        $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600
        $reader = $cmd.ExecuteReader()
        $bulkData.WriteToServer($reader) # Takes forever here on large tables

    The source and target databases are located on different servers, so I kept track of the network speed as well. The network utilization never went over 1%, which was quite surprising to me; when I just transfer some large files between the servers, the utilization spikes up to 10%. I have tried setting $bulkData.BatchSize to 5000, but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only mask the timeout. I really would like to know why the network is not being used fully. Has anyone else had this problem? Any suggestions on networking or bulk copy would be appreciated, and please let me know if you need more information. Thanks.

    UPDATE

    I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and giving SqlBulkCopy a table lock instead of the default row lock. Some tables are also better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%, and we will execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. There is a table for which I consistently get the following exception:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It is thrown about 16 after it starts copying the table, which is nowhere near my BulkCopyTimeout. Even though I get the exception, the table is fully copied in the end. Also, if I truncate that table and restart the process for that table only, the table is copied over without any issues; but going through the process of copying the entire database always fails for that one table. I have tried executing the entire process and resetting the connection before copying that faulty table, but it still errors out. My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at the same point each time?
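
    For reference, a sketch of how those tweaks can be expressed in the script (the connection-string variable is hypothetical): passing SqlBulkCopyOptions explicitly replaces the cast of the connection used above.

        # take a table lock instead of per-row locks for the duration of the bulk load
        $options = [Data.SqlClient.SqlBulkCopyOptions]::TableLock
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destConnectionString, $options)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout = 0          # 0 means no timeout at all
        $bulkData.BatchSize = 5000             # tune per table
        $bulkData.WriteToServer($reader)
        $bulkData.Close()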


  • Problem with a PHP class function

    - by lusey
    Hello, this is my first question here, and I hope you can help me. I am trying to solve the Towers of Hanoi problem with three search strategies (BFS, DFS, and IDS), so I use a state class defined by five variables, as here:

        class state {
            var $tower1 = array();
            var $tower2 = array();
            var $tower3 = array();
            var $depth;
            var $neighbors = array();

    It also has many functions; one of them is getneighbors(), which is supposed to fill the array $neighbors with the state's neighbors, which are of type state as well. Here is the function:

        function getneighbors() {
            $temp = $this->copy();
            $neighbor1 = $this->copy();
            $neighbor2 = $this->copy();
            $neighbor3 = $this->copy();
            $neighbor4 = $this->copy();
            $neighbor5 = $this->copy();
            $neighbor6 = $this->copy();
            if (!Empty($temp->tower1)) {
                if (!Empty($neighbor1->tower2)) {
                    if (end($neighbor1->tower1) < end($neighbor1->tower2)) {
                        array_unshift($neighbor1->tower2, array_pop($neighbor1->tower1));
                        array_push($neighbors, $neighbor1);
                    }
                } else {
                    array_unshift($neighbor1->tower2, array_pop($neighbor1->tower1));
                    array_push($neighbors, $neighbor1);
                }
                if (!Empty($neighbor2->tower3)) {
                    if (end($neighbor2->tower1) < end($neighbor2->tower3)) {
                        array_unshift($neighbor2->tower3, array_pop($neighbor2->tower1));
                        array_push($neighbors, $neighbor2);
                    }
                } else {
                    array_unshift($neighbor2->tower3, array_shift($neighbor2->tower1));
                    array_push($neighbors, $neighbor2);
                }
            }
            if (!Empty($temp->tower2)) {
                if (!Empty($neighbor3->tower1)) {
                    if (end($neighbor3->tower2) < end($neighbor3->tower1)) {
                        array_unshift($neighbor3->tower1, array_shift($neighbor3->tower2));
                        array_push($neighbors, $neighbor3);
                    }
                } else {
                    array_unshift($neighbor3->tower1, array_shift($neighbor3->tower2));
                    array_push($neighbors, $neighbor3);
                }
                if (!Empty($neighbor4->tower3)) {
                    if (end($neighbor4->tower2) < end($neighbor4->tower3)) {
                        array_unshift($neighbor4->tower1, array_shift($neighbor4->tower2));
                        array_push($neighbors, $neighbor4);
                    }
                } else {
                    array_unshift($neighbor4->tower3, array_shift($neighbor4->tower2));
                    array_push($neighbors, $neighbor4);
                }
            }
            if (!Empty($temp->tower3)) {
                if (!Empty($neighbor5->tower1)) {
                    if (end($neighbor5->tower3) < end($neighbor5->tower1)) {
                        array_unshift($neighbor5->tower1, array_shift($neighbor5->tower3));
                        array_push($neighbors, $neighbor5);
                    }
                } else {
                    array_unshift($neighbor5->tower1, array_shift($neighbor5->tower3));
                    array_push($neighbors, $neighbor5);
                }
                if (!Empty($neighbor6->tower2)) {
                    if (end($neighbor6->tower3) < end($neighbor6->tower2)) {
                        array_unshift($neighbor6->tower2, array_shift($neighbor6->tower3));
                        array_push($neighbors, $neighbor6);
                    }
                } else {
                    array_unshift($neighbor6->tower2, array_shift($neighbor6->tower3));
                    array_push($neighbors, $neighbor6);
                }
            }
            return $neighbors;
        }

    Note that toString, equals, and copy are defined too. The problem is that when I call getneighbors() it returns an empty $neighbors array. Can you please tell me what the problem is?
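
    A guess at the cause, visible in the code itself: $neighbors inside getneighbors() is an undeclared local, not the same variable as the class member $this->neighbors, so each array_push() fails (PHP warns that parameter 1 is not an array) and the final return yields an empty value. A sketch of the minimal change:

        function getneighbors() {
            $neighbors = array();   // initialize the local before pushing,
                                    // or use $this->neighbors throughout
            // ... build and push $neighbor1 .. $neighbor6 as before ...
            return $neighbors;
        }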


  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find on the web simple PowerShell scripts to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), and I also wanted to fire the commands in parallel as soon as possible, without losing the results in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique reduces execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot truly execute in parallel, but only one part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done immediately after the VM removal, because you have to wait until the removal operation completes on Azure, so I wrote a script to run a few minutes after VM removal that deletes disks (and VHDs) no longer attached to a VM. I check that the disks were created from the original image name used to provision the VMs, so I don't remove other disks deployed by other batches that I might want to preserve. These examples are specific to my scenario; if you need more complex configurations you have to change and adapt the code, but if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:
    - ProvisionVMs: provisions many VMs in parallel starting from the same image; it creates one service for each VM.
    - RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    - StartVMs: starts all the VMs in parallel.
    - StopVMs: stops all the VMs in parallel.
    - RemoveOrphanDisks: removes all the disks no longer used by any VM; run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        # Name of storage account (where VMs will be deployed)
        $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

        function ProvisionVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                $Location = "Copy the Location property you get from Get-AzureStorageAccount"
                $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
                $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
                $Password = "Password"      # Write the password of the administrator account in the new VM
                $Image = "Copy the ImageName property you get from Get-AzureVMImage"
                # You can list your own images using the following command:
                # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
                New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                    Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                    New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
            }
        }

        # Set the proper storage - you might remove this line if you have only one storage in the subscription
        Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list provisions one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used by other VMs already deployed
        ProvisionVM "test10"
        ProvisionVM "test11"
        ProvisionVM "test12"
        ProvisionVM "test13"
        ProvisionVM "test14"
        ProvisionVM "test15"
        ProvisionVM "test16"
        ProvisionVM "test17"
        ProvisionVM "test18"
        ProvisionVM "test19"
        ProvisionVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Displays batch completed
        echo "Provisioning VM Completed"

    RemoveVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function RemoveVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Remove-AzureService -ServiceName $VmName -Force -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list removes one VM using the name specified in the argument
        RemoveVM "test10"
        RemoveVM "test11"
        RemoveVM "test12"
        RemoveVM "test13"
        RemoveVM "test14"
        RemoveVM "test15"
        RemoveVM "test16"
        RemoveVM "test17"
        RemoveVM "test18"
        RemoveVM "test19"
        RemoveVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Remove VM Completed"

    StartVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StartVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list starts one VM using the name specified in the argument
        StartVM "test10"
        StartVM "test11"
        StartVM "test12"
        StartVM "test13"
        StartVM "test14"
        StartVM "test15"
        StartVM "test16"
        StartVM "test17"
        StartVM "test18"
        StartVM "test19"
        StartVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Start VM Completed"

    StopVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StopVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list stops one VM using the name specified in the argument
        StopVM "test10"
        StopVM "test11"
        StopVM "test12"
        StopVM "test13"
        StopVM "test14"
        StopVM "test15"
        StopVM "test16"
        StopVM "test17"
        StopVM "test18"
        StopVM "test19"
        StopVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup
        Remove-Job *

        # Displays batch completed
        echo "Stop VM Completed"

    RemoveOrphanDisks

        $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        # Remove all orphan disks coming from the image specified in $ImageName
        Get-AzureDisk |
            Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
            Remove-AzureDisk -DeleteVHD -Verbose


  • The Best Tools for Enhancing and Expanding the Features of the Windows Clipboard

    - by Lori Kaufman
    The Windows clipboard is like a scratch pad used by the operating system and all running applications. When you copy or cut some text or a graphic, it is temporarily stored in the clipboard and then retrieved later when you paste the data. We've previously shown you how to store multiple items to the clipboard (using Clipboard Manager) in Windows, how to copy a file path to the clipboard, how to create a shortcut to clear the clipboard, and how to copy a list of files to the clipboard.

    There are some limitations of the Windows clipboard. Only one item can be stored at a time: each time you copy something, the current item in the clipboard is replaced. The data on the clipboard also cannot be viewed without pasting it into an application. In addition, the data on the clipboard is cleared when you log out of your Windows session.

    NOTE: The above image shows the clipboard viewer from Windows XP (clipbrd.exe), which is not available in Windows 7 or Vista. However, you can download the file from deviantART and run it to view the current entry in the clipboard in Windows 7.

    Here are some additional useful tools that help enhance or expand the features of the Windows clipboard and make it more useful.


  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for the protection of the business. Yes, I said business - not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That's just a simple fact (outlined with a few examples here). So you want to get backups right. That's a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you'd be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let's talk about them.

    Guidance

    If you're a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But if you're not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type, you'll see that only those databases that are in Full Recovery are listed. This makes it very easy to be sure you have a log backup set up for all the databases you should, and none of the databases where you won't be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you'll notice little green question marks; clicking on these opens a window with additional information about the topic in question, which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs.

    Backup Copies

    As part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine: it copies your backup to the network location you define after the backup completes, so it doesn't cause any additional blocking or resource use within the backup process itself. Creating a copy acts as a mechanism of protection for your backups; you can then back up that copy or do other things with it, all without affecting the original backup file. With the native Microsoft backup engine, this requires either an additional backup or additional scripting.

    Offsite Storage

    Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It's a service we provide and expose through the Backup wizard. Your backup completes first, just like with the network backup copy; then an asynchronous process copies that backup to cloud storage. Again, this is built right into the wizard, and even into the command line calls to SQL Backup, so it's part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you'll need to do a little setup, but it's built right into the product; you'll be directed to the web site for our hosted storage, where you can set up an account.
    Compression

    If you have SQL Server 2008 Enterprise, or you're on SQL Server 2008 R2 or greater with a Standard or Enterprise license, then you have backup compression. It's built right in and works well. But if you need even more compression, you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product, which means you can get a little compression faster, or sacrifice some CPU time and get even more compression. You decide. As a simple example, I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53 MB; our file was 33 MB. That's a file that is smaller by 38% - not a small number when we start talking gigabytes. We even provide guidance to help you determine which level of compression would be right for you and your system. For this test, if you wanted maximum compression with minimum CPU use, you'd probably want to go with Level 2, which gets you almost as much compression as Level 3 but uses fewer resources; that compression is still better than the native one by 10%.

    Restore Testing

    Backups are vital, but a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you'll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You'll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But this doesn't completely test the backup; the only complete test is a restore. So what you really need is a process that tests your backups. This is something you'll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule (all done through our wizard, which gives you as much guidance as you get when running backups), you get the option of creating a reminder to create a job to test your restores. You can enable or disable this as you choose when creating your scheduled backups. Once you're ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide whether you're going to restore to a specified copy or to the original database. If you're doing your tests on a new server (probably the best choice), you can just overwrite the original database if it's there; if not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks, so we have DBCC built right into the process. You can even decide how you want DBCC run: which error messages to include, and which checks to limit or add. With this you could offload some DBCC checks from your production system, so that you only run the physical checks on your production box but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well. Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don't just have backups, but that you have tested backups.
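
    For comparison, the bare-bones native T-SQL version of the verification chain described above looks like this (database, path, and logical file names are hypothetical); it is exactly this multi-step scripting that the wizard automates:

        BACKUP DATABASE AdventureWorks2012
            TO DISK = N'D:\Backups\AdventureWorks2012.bak'
            WITH COMPRESSION, CHECKSUM;

        RESTORE VERIFYONLY
            FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
            WITH CHECKSUM;

        -- the only complete test is an actual restore, followed by DBCC
        RESTORE DATABASE AdventureWorks2012_Test
            FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
            WITH MOVE N'AdventureWorks2012_Data' TO N'D:\Test\AW_Test.mdf',
                 MOVE N'AdventureWorks2012_Log'  TO N'D:\Test\AW_Test.ldf';
        DBCC CHECKDB (AdventureWorks2012_Test) WITH NO_INFOMSGS;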
    Single Point of Management

    If you have more than one server to maintain, getting backups set up can be a tedious process. But with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases' and servers' backups from a single location. You'll be able to see what is scheduled, what has run successfully, and what has failed, all from a single interface, without having to connect to different servers.

    Log Shipping Wizard

    If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to configure correctly. We supply a wizard that walks you through every step of the process, including setting up alerts so you'll know should your log shipping fail.

    Summary

    You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints, and reminders, you will get your backups set up in a way that protects your business.

