Search Results

Search found 5416 results on 217 pages for 'storage'.


  • CD Drive not discovered

    - by user1009073
    I have a self-built computer. It uses a P6T Deluxe motherboard, which has both SATA and IDE ports. It was built several years ago with an IDE CD/DVD drive. That drive started going bad (it would not burn CDs correctly), so I decided to replace it. I had difficulty finding an IDE DVD drive, so I bought a SATA DVD drive. I opened the computer and took out the old DVD drive. I left the IDE cable in place, connected to the motherboard, but it is not connected to any drives. I hooked up the new DVD drive with both power and a SATA data cable (SATA port 3, if I recall). (Sony Optiarc 24x, Newegg URL: http://www.newegg.com/Product/Product.aspx?Item=N82E16827118067 ) When I power on my computer, the drive does NOT show up in Explorer. I can hit the DVD eject button and the drive will open up, so I know it is at least getting power. I thought maybe it was something in the BIOS. When I go to BIOS boot devices, it shows (1) floppy, (2) my hard drive, (3) ATAPI CD drive. The only other possible BIOS option I could find was under 'Storage Configuration'. Configure Storage as: my setting is RAID, since I am using two drives in a RAID configuration. Other options were IDE and AHCI. Other than trying to find an IDE DVD drive, is there anything else I can try? The drive does not show up at all in Windows Explorer. I did put in a CD thinking that might help, but nothing happened. Thanks, GS
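
    A quick sanity check from within Windows (a sketch, not a guaranteed diagnosis): if the command below lists no optical drive at all, Windows never enumerated the device, which points at the BIOS/controller setting rather than at Explorer.

        wmic cdrom get name, drive, status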

    Read the article

  • VMWare vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs

    - by gravyface
    New SAN for me, never used before: it's an IBM DS3512, dual controller with a quad 1GbE NIC per controller, that a client bought and needs help setting up. Hosts (x2) have 8 pNICs, and while I usually reserve 2 pNICs for iSCSI per host (and 2 for VM, 2 for management, 2 for vMotion, staggered across adapters), these extra ports on the SAN have me wondering if storage I/O would be significantly improved with 2 additional NICs per host, or if the limitations of the vmkernel/initiator would prevent the additional multipaths from ever being realized. I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 is the de facto standard from what I've read/seen online. I could and probably will do some I/O testing, but I'm wondering if there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC iSCSI setup per host somewhat pointless. Just to clarify: I'm not looking for a how-to, but an explanation (link to paper, VMWare recommendation, benchmark, etc.) as to why 2-NIC configurations are the norm vs. 4-NIC iSCSI configurations, i.e. storage vendor limitations, VMKernel/initiator limitations, etc.
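
    For whatever it's worth, the usual multipath building blocks in vSphere 5 look roughly like this (a sketch; the adapter, vmkernel port, and device names are hypothetical). Each additional pNIC only helps if it is bound as a separate iSCSI path and the path selection policy actually spreads I/O across the resulting paths:

        # bind each iSCSI vmkernel port to the software iSCSI adapter
        esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
        esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
        # use round robin across the resulting paths for a given LUN
        esxcli storage nmp device set --device naa.600a0b80xxxxxxxx --psp VMW_PSP_RR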

    Read the article

  • Trying to retrieve data with a thermaltake blacX enclosure: Windows 7 believes the drive to be "uninitialized"

    - by Peeter Joot
    I have a laptop that won't boot. It appears to be a power problem: the laptop auto-turns-off within about 10 seconds of pressing the power button (the power button lights up temporarily, and there is no display with or without an external monitor). I've followed the Dell troubleshooting guide, which suggested reseating the memory modules and the hard drives, but that didn't help. Before trying to have the laptop serviced, I wanted to get some data off the hard drive. I bought a Thermaltake BlacX enclosure, intending to use it both to retrieve the drive data and, later, as external storage. Following the instructions (insert cables, insert drive, power on) goes fine, and Windows 7 on another laptop installs the device driver software. However, no drive letter shows up in 'Computer'. Under Computer > Manage > Storage I see the drive is there, and there's an option to "initialize" the drive. The Windows "initialize" dialog gives me the option to pick between "MBR" and "GPT" partitioning, which sounds like a good way to destroy the data on the drive. I'm thinking that I've purchased the wrong device for the job (or that my old drive is damaged). The old drive to recover info from is a Western Digital 500GB/7200rpm SATA drive, if that is relevant. Both the original laptop and the one I'm using for recovery are running Windows 7. Does anybody have experience with using a BlacX enclosure to recover data off an already formatted drive?
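
    Before clicking anything destructive, one low-risk check (a sketch, assuming the enclosure shows up as /dev/sdb when booted from a Linux live CD) is to see whether the partition table is still readable at all; if it is, the enclosure and drive are fine and the problem is on the Windows side:

        # list the partition table without writing anything
        sudo fdisk -l /dev/sdb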

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hello, everyone. I am learning to set up a dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove it from my ISP's servers, while making it accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with dovecot version 1, and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style new folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions: 1) I had to set up the new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my dovecot password file), or is it expected that I write a bash script to automate that task? 2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations with this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administer a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
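
    On question 1: with Dovecot 2.x (the version shipped with Ubuntu 12.04), the Maildir tree is normally created on first delivery or first IMAP login, so a per-user bash script shouldn't be needed. A minimal sketch, assuming virtual users and a vmail system user (paths and names are hypothetical):

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = maildir:/var/vmail/%d/%n/Maildir

    If delivery happens through Dovecot's own deliver/LMTP, the new/cur/tmp subdirectories should be created automatically, as long as the vmail user has permission to write to the parent directory.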

    Read the article

  • Mysqld shutting down by itself

    - by AJ Naidas
    I'm running a WordPress blog that gets medium-high traffic. It is hosted on an Ubuntu server (2GB memory, 2-core processor, 40GB SSD disk, 3TB transfer). The problem is that MySQL shuts down by itself after an hour or two, and I have to restart it each time this happens. I checked the logs and this is what I found:

        140612  6:48:14 [Warning] Using unique option prefix myisam-recover instead of myisam-recover-options is deprecated and will be removed in a future release. Please use the full name instead.
        140612  6:48:14 [Note] Plugin 'FEDERATED' is disabled.
        140612  6:48:14 InnoDB: The InnoDB memory heap is disabled
        140612  6:48:14 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        140612  6:48:14 InnoDB: Compressed tables use zlib 1.2.3.4
        140612  6:48:14 InnoDB: Initializing buffer pool, size = 1.4G
        InnoDB: mmap(1502412800 bytes) failed; errno 12
        140612  6:48:14 InnoDB: Completed initialization of buffer pool
        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        140612  6:48:14 [ERROR] Plugin 'InnoDB' init function returned error.
        140612  6:48:14 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        140612  6:48:14 [ERROR] Unknown/unsupported storage engine: InnoDB
        140612  6:48:14 [ERROR] Aborting
        140612  6:48:14 [Note] /usr/sbin/mysqld: Shutdown complete

    Judging by this line:

        140612  6:48:14 InnoDB: Fatal error: cannot allocate memory for the buffer pool

    I suspect that this is a memory problem, but I would like to hear from the experts here before I conclude. Is this a lack-of-memory problem? Do you think the value of max_connections in my.cnf (currently 100) is a potential cause and needs increasing? TIA.
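
    For what it's worth, errno 12 (ENOMEM) on the mmap of a 1.4G buffer pool on a 2GB box suggests the buffer pool plus everything else simply doesn't fit, so mysqld dies whenever memory gets tight. A hedged first step is shrinking the pool in my.cnf (the exact value is a starting point to tune, not a recommendation):

        [mysqld]
        # leave headroom for the OS, PHP, and the web server on a 2GB box
        innodb_buffer_pool_size = 512M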

    Read the article

  • Can a non-redundant RAID5 cause any serious problems (compared to RAID0)?

    - by leemes
    I used to have a three-disc RAID5 (mdadm) in my computer for personal media storage (music, videos, photos, programs, games, ...). It had three discs of 750 GB each, resulting in an array capacity of 1.5 TB. One day (one year ago), I needed one of those discs to install another operating system. I thought I didn't need the redundancy anymore, since I back up the most important stuff (personal photos, e.g.) on an external disc anyway. So I decided to remove one of the three discs without converting the RAID to RAID0 or even two separate discs, because I had no temporary storage (and one cannot simply convert a RAID5 to RAID0, AFAIK). So now, for about one year, I have had a non-redundant RAID5 with 2 of 3 discs running. Sometimes one of the discs has a defective contact at the power cable or something similar, causing the drive to stop working temporarily (I don't know exactly what it is). Since it still works after rebooting the computer, and in most cases after calling some mdadm commands, it wasn't that problematic. Note that the data is not very critical, since I still have a backup of the most important stuff. But in the last few weeks, one of the drives has been failing very frequently (every few hours), so it is getting really annoying to manage. My questions are: 1) Is there any disadvantage (apart from the annoying management) of a non-redundant RAID5 (with one drive less than typical) over a RAID0? If I understand it correctly, both have no redundancy and the same capacity, and on a temporary drive failure I can restart the array in both cases, assuming that the drive itself still works after the failure. 2) Can the drive contents change during a drive failure, making the array inconsistent? If so, can I tell mdadm to check the array for failures (without a file-system-level checking tool)? 3) Since the drive most probably just has a defective contact causing it to fail for only a second, can I tell mdadm to restart the array automatically, so I would not even notice the failure if no application tried to access the file system during it?
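
    On the mdadm side, the usual commands for a transient dropout look roughly like this (device names are hypothetical). One caveat worth hedging: a scrub compares data against parity, so on a 2-of-3 RAID5 there is no redundancy left to check against - the check pass is only meaningful on a complete array:

        # kick a dropped member out and bring it back after a transient fault
        mdadm /dev/md0 --remove /dev/sdc1
        mdadm /dev/md0 --re-add /dev/sdc1
        # on a complete array: scrub and report parity mismatches
        echo check > /sys/block/md0/md/sync_action
        cat /sys/block/md0/md/mismatch_cnt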

    Read the article

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange from our source SBS2003 server, I'm getting these warnings: Event 5020 "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]" Event 5006 "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]", for Public folder, First storage group and Recovery storage group. I followed the technet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions. If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails gives an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so there isn't really anything lost. Can I just remove these folders and let them be created anew?
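
    If the folders really hold nothing of value, removing the stale replicas from the Exchange 2010 side is usually done from the Management Shell; a hedged sketch (folder name copied from above; worth double-checking the cmdlet's behaviour against your update level before running it, since \NON_IPM_SUBTREE entries are system folders):

        Remove-PublicFolder -Identity "\Archived mails" -Recurse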

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g., it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres and transfer data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems to be expensive, and the delay in moving the data back to the hoster leads to lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log file storage to run Urchin analysis, and/or port the log file data into a BigTable for exhaustive analysis there. After working with the data: how to bring it back to the hoster and use the data with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.
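
    For the log-shipping part specifically, the usual low-effort pattern is compress-then-sync to S3 from a cron job on the hoster's side; a sketch using s3cmd (the bucket name is hypothetical):

        # compress rotated logs, then sync only new files to S3
        gzip /var/log/apache2/access.log.1
        s3cmd sync /var/log/apache2/ s3://example-log-archive/$(hostname)/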

    Read the article

  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:

        pfiles `fuser -c / 2>/dev/null`

    indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but that is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?

        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files

    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (with immediate appearance) even with the file limit raised to 65535, and always only on hosts.allow/deny.
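
    One hedged thing to rule out on a Solaris-kernel box: the limit mysqld actually runs with can differ from the shell's ulimit, and 32-bit Solaris stdio historically could not fopen() onto descriptors above 255 regardless of the rlimit - which would match a daemon holding ~485 descriptors failing only on libwrap's fopen of hosts.allow/deny. The process's effective limits and descriptor count can be checked directly (assuming a single mysqld PID):

        plimit `pgrep mysqld`                   # nofiles(descriptors) soft/hard limit
        pfiles `pgrep mysqld` | grep -c S_IF    # rough count of open descriptors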

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set (around 200 GB) of data that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - the VMs then run on differencing discs off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there once gets read protection set, and stays that way until it is retired. Obviously, I need as much uptime as possible. SO FAR we have run this by keeping the directory local on every Hyper-V server. Now I am thinking of moving it into our storage fabric. Due to the "it HAS to be there" requirement, I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all hosts would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move into a scenario where the storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disc behind it. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSD only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is an edge case, obviously - as the files are read only once in use.
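
    For reference, the Server 2012 side of "always on" is the Continuously Available flag on the share, and it is indeed rejected outside a failover cluster; a sketch of the two variants (share name and path are hypothetical):

        # plain share on a standalone server - works, but no transparent failover
        New-SmbShare -Name Masters -Path D:\Masters -ReadAccess Everyone
        # transparent-failover variant - requires a clustered file server role
        New-SmbShare -Name Masters -Path D:\Masters -ContinuouslyAvailable:$true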

    Read the article

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small- to medium-size website on and more or less learn the ins and outs of running my own web server, before I bite the bullet and spend a couple grand on a server. The specs are: Dual (2) Intel Xeon 2.4GHz, 400MHz FSB, 512KB cache; 4GB PC2100 ECC Registered memory; 6 x 72GB 10K U320 SCSI hard drives; Smart Array 5i RAID controller; redundant power supplies; DVD/floppy, dual Intel GB NICs, USB. Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so would the hardware be likely to support Linux on a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small- to medium-size website.

    Read the article

  • How to fix Windows 7 device removal notification loop

    - by Barry Kelly
    Bit of an odd one, this. One of our PCs is getting caught in a loop some time after being turned on, usually after a USB storage device has been attached - sometimes an iPod, sometimes a GPS. Specifically, Windows Explorer starts showing a drive icon and letter (E:, as of right now) for the System partition (the small hidden one at the start of the boot drive). Then the icon disappears. Then it reappears again. And disappears. It does this very quickly, at what looks like maybe 50 times a second. CPU usage in this loop is also very high; it averages about 66%. This machine has an i7 920 CPU, which is quad-core with hyperthreading, so this usage rate works out to about five fully busy threads, along with whatever the normal idle load is (particularly Task Manager itself). Inspecting with Process Explorer shows that the device removal notification infrastructure has gone berserk. The threads in system service processes (i.e. apart from Windows Explorer) which are using all the CPU power relate to device notification. The Disk Management MMC snap-in also fails to run when the loop starts. The only way to break the loop, it seems, is to reboot the machine. Anyone seen anything similar to this, and know of a way to fix it? Machine details:

        Windows 7 x64, fully patched
        i7 920, 12GB RAM
        Intel SSD 80GB (X25-M, I believe; not G2)
        2TB 5.2K disk for bulk storage
        AMD HD 5870

    Further hardware details await. I'm going to go through and update all drivers I can find.
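
    As a stop-gap while hunting the cause, the phantom letter on the System partition can be dropped so Explorer stops flapping; a sketch in diskpart (the volume number will differ on your machine):

        diskpart
        list volume
        select volume 2
        remove letter=E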

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: 1) to reduce storage utilization, and 2) to reduce the bandwidth needed to sync the library with cloud storage. Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports 1.00 dedup. If I copy all of the files (making exact duplicates of the three), dedup climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job, is appreciated.
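
    Two hedged things worth checking before giving up: ZFS dedup only matches whole, identically-aligned records, so audio shifted by even a byte (different tag lengths) will never line up on recordsize boundaries; and the expected ratio can be simulated offline before committing RAM to the dedup table. A sketch (pool and dataset names are hypothetical; recordsize only affects newly written files):

        # smaller records raise the chance of aligned matches (at a RAM cost)
        zfs set recordsize=16K tank/flac
        zfs set dedup=on tank/flac
        # simulate dedup on existing data and print the expected ratio
        zdb -S tank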

    Read the article

  • Which would be more reliable for data archival - SD card or a generic USB thumbdrive?

    - by Visitor
    I've been thinking lately about what I should preferably use for data storage and archival. I will say in advance that I do not use flash memory as the only storage media - I also keep my data on hard drives and optical disks - flash memory is but one of several backup solutions that duplicate each other. For the flash memory, however, I do have a choice: to use a generic USB thumbdrive or an SD card. Are there any indications that SD cards may be better and more reliable? From browsing people's reviews on the web, I see that many complaints about USB sticks have to do with them completely failing, losing the file system, and no longer being recognized by the OS. At the same time, most of the complaints about SD cards deal with write speeds not holding up to the promise - failure reports are but a portion of those for the USB sticks. Are SD cards indeed more reliable? Am I also correct in my assumption that SD cards use higher-grade NAND chips than USB thumbdrives? At least for class 10 cards, because the specification dictates the minimum performance, and the manufacturers have to preselect better chips. It is common for USB sticks to promise high speeds "up to XX MB/sec", but in reality they very often deliver speeds 2-3 times less than promised. Do SD cards get the better NAND chips while USB thumbdrives receive the discarded chips? Any thoughts would be appreciated.

    Read the article

  • xutility file???

    - by user574290
    Hi all. I'm trying to use c code with opencv in face detection and counting, but I cannot build the source. I am trying to compile my project and I am having a lot of problems with a line in the xutility file. the error message show that it error with xutility file. Please help me, how to solve this problem? this is my code // Include header files #include "stdafx.h" #include "cv.h" #include "highgui.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <assert.h> #include <math.h> #include <float.h> #include <limits.h> #include <time.h> #include <ctype.h> #include <iostream> #include <fstream> #include <vector> using namespace std; #ifdef _EiC #define WIN32 #endif int countfaces=0; int numFaces = 0; int k=0 ; int list=0; char filelist[512][512]; int timeCount = 0; static CvMemStorage* storage = 0; static CvHaarClassifierCascade* cascade = 0; void detect_and_draw( IplImage* image ); void WriteInDB(); int found_face(IplImage* img,CvPoint pt1,CvPoint pt2); int load_DB(char * filename); const char* cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; // BEGIN NEW CODE #define WRITEVIDEO char* outputVideo = "c:\\face_counting1_tracked.avi"; //int faceCount = 0; int posBuffer = 100; int persistDuration = 10; //faces can drop out for 10 frames int timestamp = 0; float sameFaceDistThreshold = 30; //pixel distance CvPoint facePositions[100]; int facePositionsTimestamp[100]; float distance( CvPoint a, CvPoint b ) { float dist = sqrt(float ( (a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) ) ); return dist; } void expirePositions() { for (int i = 0; i < posBuffer; i++) { if (facePositionsTimestamp[i] <= (timestamp - persistDuration)) //if a tracked pos is older than three frames { facePositions[i] = cvPoint(999,999); } } } void updateCounter(CvPoint center) { bool newFace = true; for(int i = 0; i < posBuffer; i++) { if (distance(center, facePositions[i]) < sameFaceDistThreshold) { facePositions[i] = center; facePositionsTimestamp[i] = timestamp; newFace = false; break; } } if(newFace) { //push out oldest tracker for(int i = 1; i < posBuffer; i++) { facePositions[i] = facePositions[i - 1]; } //put new tracked position on top of stack facePositions[0] = center; facePositionsTimestamp[0] = timestamp; countfaces++; } } void drawCounter(IplImage* image) { // Create Font char buffer[5]; CvFont font; cvInitFont(&font, CV_FONT_HERSHEY_SIMPLEX, .5, .5, 0, 1); cvPutText(image, "Faces:", cvPoint(20, 20), &font, CV_RGB(0,255,0)); cvPutText(image, itoa(countfaces, buffer, 10), cvPoint(80, 20), &font, CV_RGB(0,255,0)); } #ifdef WRITEVIDEO CvVideoWriter* videoWriter = cvCreateVideoWriter(outputVideo, -1, 30, cvSize(240, 180)); #endif //END NEW CODE int main( int argc, char** argv ) { CvCapture* capture = 0; IplImage *frame, *frame_copy = 0; int optlen = strlen("--cascade="); const char* input_name; if( argc > 1 && strncmp( argv[1], "--cascade=", optlen ) == 0 ) { cascade_name = argv[1] + optlen; input_name = argc > 2 ? argv[2] : 0; } else { cascade_name = "C:\\Program Files\\OpenCV\\OpenCV2.1\\data\\haarcascades\\haarcascade_frontalface_alt_tree.xml"; input_name = argc > 1 ? 
argv[1] : 0; } cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); if( !cascade ) { fprintf( stderr, "ERROR: Could not load classifier cascade\n" ); fprintf( stderr, "Usage: facedetect --cascade=\"<cascade_path>\" [filename|camera_index]\n" ); return -1; } storage = cvCreateMemStorage(0); //if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0') ) // capture = cvCaptureFromCAM( !input_name ? 0 : input_name[0] - '0' ); //else capture = cvCaptureFromAVI( "c:\\face_counting1.avi" ); cvNamedWindow( "result", 1 ); if( capture ) { for(;;) { if( !cvGrabFrame( capture )) break; frame = cvRetrieveFrame( capture ); if( !frame ) break; if( !frame_copy ) frame_copy = cvCreateImage( cvSize(frame->width,frame->height), IPL_DEPTH_8U, frame->nChannels ); if( frame->origin == IPL_ORIGIN_TL ) cvCopy( frame, frame_copy, 0 ); else cvFlip( frame, frame_copy, 0 ); detect_and_draw( frame_copy ); if( cvWaitKey( 30 ) >= 0 ) break; } cvReleaseImage( &frame_copy ); cvReleaseCapture( &capture ); } else { if( !input_name || (isdigit(input_name[0]) && input_name[1] == '\0')) cvNamedWindow( "result", 1 ); const char* filename = input_name ? input_name : (char*)"lena.jpg"; IplImage* image = cvLoadImage( filename, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } else { /* assume it is a text file containing the list of the image filenames to be processed - one per line */ FILE* f = fopen( filename, "rt" ); if( f ) { char buf[1000+1]; while( fgets( buf, 1000, f ) ) { int len = (int)strlen(buf); while( len > 0 && isspace(buf[len-1]) ) len--; buf[len] = '\0'; image = cvLoadImage( buf, 1 ); if( image ) { detect_and_draw( image ); cvWaitKey(0); cvReleaseImage( &image ); } } fclose(f); } } } cvDestroyWindow("result"); #ifdef WRITEVIDEO cvReleaseVideoWriter(&videoWriter); #endif return 0; } void detect_and_draw( IplImage* img ) { static CvScalar colors[] = { {{0,0,255}}, {{0,128,255}}, {{0,255,255}}, {{0,255,0}}, {{255,128,0}}, {{255,255,0}}, {{255,0,0}}, {{255,0,255}} }; double scale = 1.3; IplImage* gray = cvCreateImage( cvSize(img->width,img->height), 8, 1 ); IplImage* small_img = cvCreateImage( cvSize( cvRound (img->width/scale), cvRound (img->height/scale)), 8, 1 ); CvPoint pt1, pt2; int i; cvCvtColor( img, gray, CV_BGR2GRAY ); cvResize( gray, small_img, CV_INTER_LINEAR ); cvEqualizeHist( small_img, small_img ); cvClearMemStorage( storage ); if( cascade ) { double t = (double)cvGetTickCount(); CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0/*CV_HAAR_DO_CANNY_PRUNING*/, cvSize(30, 30) ); t = (double)cvGetTickCount() - t; printf( "detection time = %gms\n", t/((double)cvGetTickFrequency()*1000.) ); if (faces) { //To save the detected faces into separate images, here's a quick and dirty code: char filename[6]; for( i = 0; i < (faces ? 
faces->total : 0); i++ ) { /* CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, colors[i%8], 3, 8, 0 );*/ // Create a new rectangle for drawing the face CvRect* r = (CvRect*)cvGetSeqElem( faces, i ); // Find the dimensions of the face,and scale it if necessary pt1.x = r->x*scale; pt2.x = (r->x+r->width)*scale; pt1.y = r->y*scale; pt2.y = (r->y+r->height)*scale; // Draw the rectangle in the input image cvRectangle( img, pt1, pt2, CV_RGB(255,0,0), 3, 8, 0 ); CvPoint center; int radius; center.x = cvRound((r->x + r->width*0.5)*scale); center.y = cvRound((r->y + r->height*0.5)*scale); radius = cvRound((r->width + r->height)*0.25*scale); cvCircle( img, center, radius, CV_RGB(255,0,0), 3, 8, 0 ); //update counter updateCounter(center); int y=found_face(img,pt1,pt2); if(y==0) countfaces++; }//end for printf("Number of detected faces: %d\t",countfaces); }//end if //delete old track positions from facePositions array expirePositions(); timestamp++; //draw counter drawCounter(img); #ifdef WRITEVIDEO cvWriteFrame(videoWriter, img); #endif cvShowImage( "result", img ); cvDestroyWindow("Result"); cvReleaseImage( &gray ); cvReleaseImage( &small_img ); }//end if } //end void int found_face(IplImage* img,CvPoint pt1,CvPoint pt2) { /*if (faces) {*/ CvSeq* faces = cvHaarDetectObjects( img, cascade, storage, 1.1, 2, CV_HAAR_DO_CANNY_PRUNING, cvSize(40, 40) ); int i=0; char filename[512]; for( i = 0; i < (faces ? faces->total : 0); i++ ) {//int scale = 1, i=0; //i=iface; //char filename[512]; /* extract the rectanlges only */ // CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); CvRect face_rect = *(CvRect*)cvGetSeqElem( faces, i); //IplImage* gray_img = cvCreateImage( cvGetSize(img), IPL_DEPTH_8U, 1 ); IplImage* clone = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, img->nChannels ); IplImage* gray = cvCreateImage (cvSize(img->width, img->height), IPL_DEPTH_8U, 1 ); cvCopy (img, clone, 0); cvNamedWindow ("ROI", CV_WINDOW_AUTOSIZE); cvCvtColor( clone, gray, CV_RGB2GRAY ); face_rect.x = pt1.x; face_rect.y = pt1.y; face_rect.width = abs(pt1.x - pt2.x); face_rect.height = abs(pt1.y - pt2.y); cvSetImageROI ( gray, face_rect); //// * rectangle = cvGetImageROI ( clone ); face_rect = cvGetImageROI ( gray ); cvShowImage ("ROI", gray); k++; char *name=0; name=(char*) calloc(512, 1); sprintf(name, "Image%d.pgm", k); cvSaveImage(name, gray); //////////////// for(int j=0;j<512;j++) filelist[list][j]=name[j]; list++; WriteInDB(); //int found=SIFT("result.txt",name); cvResetImageROI( gray ); //return found; return 0; // }//end if }//end for }//end void void WriteInDB() { ofstream myfile; myfile.open ("result.txt"); for(int i=0;i<512;i++) { if(strcmp(filelist[i],"")!=0) myfile << filelist[i]<<"\n"; } myfile.close(); } Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 8 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Error 13 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 18 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 23 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 10 error C2868: 'std::iterator_traits<_Iter>::value_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 25 error C2868: 'std::iterator_traits<_Iter>::reference' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 20 error C2868: 'std::iterator_traits<_Iter>::pointer' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 5 error C2868: 'std::iterator_traits<_Iter>::iterator_category' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 15 error C2868: 'std::iterator_traits<_Iter>::difference_type' : illegal syntax for using-declaration; expected qualified-name c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 9 error C2602: 'std::iterator_traits<_Iter>::value_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 24 error C2602: 'std::iterator_traits<_Iter>::reference' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 19 error C2602: 'std::iterator_traits<_Iter>::pointer' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 4 error C2602: 'std::iterator_traits<_Iter>::iterator_category' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 14 error C2602: 'std::iterator_traits<_Iter>::difference_type' is not a member of a base class of 'std::iterator_traits<_Iter>' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 7 error C2146: syntax error : missing ';' before identifier 'value_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 22 error C2146: syntax error : missing ';' before identifier 'reference' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 17 error C2146: syntax error : missing ';' before identifier 'pointer' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 2 error C2146: syntax error : missing ';' before identifier 'iterator_category' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 12 error C2146: syntax error : missing ';' before identifier 'difference_type' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766 Error 6 error C2039: 'value_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 765 Error 21 error C2039: 'reference' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 769 Error 16 error C2039: 'pointer' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 768 Error 1 error C2039: 'iterator_category' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 764 Error 11 error C2039: 'difference_type' : is not a member of 'CvPoint' c:\program files\microsoft visual studio 9.0\vc\include\xutility 766
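
    A hedged reading of those errors: every one of them is the compiler trying to instantiate std::iterator_traits<CvPoint>, which happens when the call distance(center, facePositions[i]) resolves to the std::distance template (pulled in by "using namespace std;") instead of the file's own float distance(CvPoint, CvPoint). Renaming the helper, as sketched below, is the usual fix; the new name is of course arbitrary:

        // renamed so it can no longer collide with std::distance
        static float pointDistance(CvPoint a, CvPoint b)
        {
            float dx = (float)(a.x - b.x);
            float dy = (float)(a.y - b.y);
            return sqrt(dx * dx + dy * dy);
        }
        // ...and call pointDistance(center, facePositions[i]) in updateCounter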

    Read the article

  • XSL Template outputting massive chunk of text, rather than HTML. But only on one section

    - by Throlkim
    I'm having a slightly odd situation with an XSL template. Most of it outputs fine, but a certain for-each loop is causing me problems. Here's the XML: <area> <feature type="Hall"> <Heading><![CDATA[Hall]]></Heading> <Para><![CDATA[Communal gardens, pathway leading to PVCu double glazed communal front door to]]></Para> </feature> <feature type="Entrance Hall"> <Heading><![CDATA[Communal Entrance Hall]]></Heading> <Para><![CDATA[Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to]]></Para> </feature> <feature type="Inner Hall"> <Heading><![CDATA[Inner Hall]]></Heading> <Para><![CDATA[Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms.]]></Para> </feature> <feature type="Lounge (Reception)" width="3.05" length="4.57" units="metre"> <Heading><![CDATA[Lounge (Reception)]]></Heading> <Para><![CDATA[15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point.]]></Para> </feature> <feature type="Kitchen" width="3.05" length="3.66" units="metre"> <Heading><![CDATA[Kitchen]]></Heading> <Para><![CDATA[12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points.]]></Para> </feature> <feature type="Entrance Porch"> <Heading><![CDATA[Balcony]]></Heading> <Para><![CDATA[Views across the communal South facing garden, wrought iron balustrade.]]></Para> </feature> <feature type="Bedroom" width="3.35" length="3.96" units="metre"> <Heading><![CDATA[Bedroom One]]></Heading> <Para><![CDATA[13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone.]]></Para> </feature> <feature type="Bedroom" width="3.05" length="3.35" units="metre"> <Heading><![CDATA[Bedroom Two]]></Heading> <Para><![CDATA[11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points.]]></Para> </feature> <feature type="bathroom"> <Heading><![CDATA[Bathroom]]></Heading> <Para><![CDATA[Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks.]]></Para> </feature> </area> And here's the section of my template that processes it: <xsl:for-each select="area"> <li> <xsl:for-each select="feature"> <li> <h5> <xsl:value-of select="Heading"/> </h5> <xsl:value-of select="Para"/> </li> </xsl:for-each> </li> </xsl:for-each> And here's the output: Hall Communal gardens, pathway leading to PVCu double glazed communal front door to Communal Entrance Hall Plain ceiling, centre light fitting, fire door through to inner hallway, wood and glazed panelled front door to 
Inner Hall Plain ceiling with pendant light fitting and covings, security telephone, airing cupboard housing gas boiler serving domestic hot water and central heating, telephone point, storage cupboard housing gas and electric meters, wooden panelled doors off to all rooms. Lounge (Reception) 15' 6" x 10' 7" (4.72m x 3.23m) Window to the side and rear elevation, papered ceiling with pendant light fitting and covings, two double panelled radiators, power points, wall mounted security entry phone, TV aerial point. Kitchen 12' x 10' (3.66m x 3.05m) Double glazed window to the rear elevation, textured ceiling with strip lighting, range of base and wall units in Beech with brushed aluminium handles, co-ordinated working surfaces with inset stainless steel sink with mixer taps over, co-ordinated tiled splashbacks, gas and electric cooker points, large storage cupboard with shelving, power points. Balcony Views across the communal South facing garden, wrought iron balustrade. Bedroom One 13' 6" x 11' 5" (4.11m x 3.48m) Double glazed windows to the front and side elevations, papered ceiling with pendant light fittings and covings, single panelled radiator, power points, telephone point, security entry phone. Bedroom Two 11' 4" x 10' 1" (3.45m x 3.07m) Double glazed window to the front elevation, plain ceiling with centre light fitting and covings, power points. Bathroom Obscure double glazed window to the rear elevation, textured ceiling with centre light fitting and extractor fan, suite in white comprising of low level WC, wall mounted wash hand basin and walk in shower housing 'Triton T80' electric shower, co-ordinated tiled splashbacks. For reference, here's the entire XSLT: http://pastie.org/private/eq4gjvqoc1amg9ynyf6wzg The rest of it all outputs fine - what am I missing from the above section?
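
    One hedged possibility that matches "text instead of HTML": when an XPath like select="area" matches nothing (for example, because the source document declares a default namespace), the XSLT built-in template rules take over and copy every text node straight through, which looks exactly like the headings and paragraphs dumped as one blob. A sketch of the namespace-qualified form (the prefix and URI are hypothetical):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:p="http://example.com/property-schema">
          <xsl:template match="/">
            <ul>
              <!-- an unprefixed "area" matches nothing if a default namespace is in force -->
              <xsl:for-each select="//p:area/p:feature">
                <li>
                  <h5><xsl:value-of select="p:Heading"/></h5>
                  <xsl:value-of select="p:Para"/>
                </li>
              </xsl:for-each>
            </ul>
          </xsl:template>
        </xsl:stylesheet>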

    Read the article

  • Xen DomU on DRBD device: barrier errors

    - by Halfgaar
    I'm testing setting up a Xen DomU with DRBD storage for easy failover. Most of the time, immediately after booting the DomU, I get an IO error:

        [    3.153370] EXT3-fs (xvda2): using internal journal
        [    3.277115] ip_tables: (C) 2000-2006 Netfilter Core Team
        [    3.336014] nf_conntrack version 0.5.0 (3899 buckets, 15596 max)
        [    3.515604] init: failsafe main process (397) killed by TERM signal
        [    3.801589] blkfront: barrier: write xvda2 op failed
        [    3.801597] blkfront: xvda2: barrier or flush: disabled
        [    3.801611] end_request: I/O error, dev xvda2, sector 52171168
        [    3.801630] end_request: I/O error, dev xvda2, sector 52171168
        [    3.801642] Buffer I/O error on device xvda2, logical block 6521396
        [    3.801652] lost page write due to I/O error on xvda2
        [    3.801755] Aborting journal on device xvda2.
        [    3.804415] EXT3-fs (xvda2): error: ext3_journal_start_sb: Detected aborted journal
        [    3.804434] EXT3-fs (xvda2): error: remounting filesystem read-only
        [    3.814754] journal commit I/O error
        [    6.973831] init: udev-fallback-graphics main process (538) terminated with status 1
        [    6.992267] init: plymouth-splash main process (546) terminated with status 1

    The manpage of drbdsetup says that LVM (which I use) doesn't support barriers (better known as tagged command queuing or native command queuing), so I configured the drbd device not to use barriers. This can be seen in /proc/drbd ("wo:f" meaning flush, the next method drbd chooses after barrier):

        3: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r---- ns:2160152 nr:520204 dw:2680344 dr:2678107 al:3549 bm:9183 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    And on the other host:

        3: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r---- ns:0 nr:2160152 dw:2160152 dr:0 al:0 bm:8052 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

    I also enabled the option disable_sendpage, as per the drbd docs:

        cat /sys/module/drbd/parameters/disable_sendpage
        Y

    I also tried adding barriers=0 to fstab as a mount option. Still it sometimes says:

        [   58.603896] blkfront: barrier: write xvda2 op failed
        [   58.603903] blkfront: xvda2: barrier or flush: disabled

    I don't even know if ext3 has a nobarrier option. And because only one of my storage systems is battery-backed, it would not be smart anyway. Why does it still complain about barriers when I disabled them? Both hosts are:

        Debian: 6.0.4
        uname -a: Linux 2.6.32-5-xen-amd64
        drbd: 8.3.7
        Xen: 4.0.1

    Guest: Ubuntu 12.04 LTS

        uname -a: Linux 3.2.0-24-generic pvops

    drbd resource:

        resource drbdvm {
          meta-disk internal;
          device /dev/drbd3;
          startup {
            # The timeout value when the last known state of the other side was available. 0 means infinite.
            wfc-timeout 0;
            # Timeout value when the last known state was disconnected. 0 means infinite.
            degr-wfc-timeout 180;
          }
          syncer {
            # This is recommended only for low-bandwidth lines, to only send those
            # blocks which really have changed.
            #csums-alg md5;
            # Set to about half your net speed
            rate 60M;
            # It seems that this option moved to the 'net' section in drbd 8.4. (later release than Debian has currently)
            verify-alg md5;
          }
          net {
            # The manpage says this is recommended only in pre-production (because of its performance), to determine
            # if your LAN card has a TCP checksum offloading bug.
            #data-integrity-alg md5;
          }
          disk {
            # Detach causes the device to work over-the-network-only after the
            # underlying disk fails. Detach is not default for historical reasons, but is
            # recommended by the docs.
            # However, the Debian defaults in drbd.conf suggest the machine will reboot in that event...
            on-io-error detach;
            # LVM doesn't support barriers, so disabling it. It will revert to flush.
            # Check wo: in /proc/drbd. If you don't disable it, you get IO errors.
            no-disk-barrier;
          }
          on host1 {
            # universe is a VG
            disk /dev/universe/drbdvm-disk;
            address 10.0.0.1:7792;
          }
          on host2 {
            # universe is a VG
            disk /dev/universe/drbdvm-disk;
            address 10.0.0.2:7792;
          }
        }

    DomU cfg:

        bootloader = '/usr/lib/xen-default/bin/pygrub'
        vcpus = '2'
        memory = '512'

        # Disk device(s).
        root = '/dev/xvda2 ro'
        disk = [
            'phy:/dev/drbd3,xvda2,w',
            'phy:/dev/universe/drbdvm-swap,xvda1,w',
        ]

        # Hostname
        name = 'drbdvm'

        # Networking (fake IP for posting)
        vif = [ 'ip=1.2.3.4,mac=00:16:3E:22:A8:A7' ]

        # Behaviour
        on_poweroff = 'destroy'
        on_reboot = 'restart'
        on_crash = 'restart'

    In my test setup, the primary host's storage is a 9650SE SATA-II RAID PCIe with battery; the secondary is software RAID1. Isn't DRBD+Xen widely used? With these problems, it's not going to work.
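
    On the "does ext3 have nobarrier" point: ext3 takes barrier=0 / barrier=1 rather than nobarrier, so the fstab line inside the guest would look roughly like this (a sketch; whether disabling barriers is actually safe on the non-battery-backed host is a separate question):

        /dev/xvda2  /  ext3  defaults,barrier=0  0  1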

    Read the article

  • Tridion New UI Preview Site is not reflecting the changes unless published

    - by Ram G
    I have the new UI set up and am noticing that whenever I update a page, it does not refresh with the updated changes. I do not see the page_{sessionId/GUID}.aspx being created either. I checked the session preview DB and I see the changes in the PAGE_CONTENT table with newly rendered content, so it seems session preview is working fine, but the preview site is not able to pick up the changes and refresh the UI. I have checked all the preview handlers and mappings for .aspx and made sure they are correct in web.config. Any thoughts on why the preview site is not showing the changes? I have the session preview DB set up in cd_storage_conf.xml:

        <StorageBindings>
          <Bundle src="preview_dao_bundle.xml"/>
        </StorageBindings>
        <Wrappers>
          <Wrapper Name="SessionWrapper">
            <Timeout>120000</Timeout>
            <Storage Type="persistence" Id="db-session-webservice" dialect="MSSQL" Class="com.tridion.storage.persistence.JPADAOFactory">
              <Pool Type="jdbc" Size="5" MonitorInterval="60" IdleTimeout="120" CheckoutTimeout="120" />
              <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
                <Property Name="serverName" Value="localhost" />
                <Property Name="portNumber" Value="1433" />
                <Property Name="databaseName" Value="Tridion_Broker_SessionPreview" />
                <Property Name="user" Value="usr" />
                <Property Name="password" Value="pwd" />
              </DataSource>
            </Storage>
          </Wrapper>
        </Wrappers>

    web.config (handlers):

        <add verb="GET" path="*.htm" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" />
        <add verb="GET" path="*.jpg" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" />
        <add verb="GET" path="*.png" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" />
        <add verb="GET" path="*.aspx" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" />
        <add verb="GET" path="*.html" type="Tridion.ContentDelivery.Preview.Web.StaticFileHandler" />
        <add name="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" type="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" />

    Log (timestamp and DEBUG prefix removed):

        ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e
        ClaimStore - put: uri=taf:session:id, value=tridion_db59279b-7d37-4b2e-ad98-eaaa6af7038e
        ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314
        ClaimStore - put: uri=taf:tracking:id, value=tridion_d1fa1017-a28d-4f48-a790-b74f78c69314
        SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx
        SearchClaimProcessor - No match found for referrer string http://uidemo.practice.com/en/Product/musk.aspx
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:devicetype, value=Desktop
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:mobiledevice, value=NotMobile
        ClaimStore - put: uri=taf:claim:ambientdata:footprintcartridge:acceptlanguage, value=en-US
        PageHandler - The session wrappers are correctly installed.

    Any thoughts/pointers on what might be going wrong...? (sorry for the long post)

    Read the article

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves - things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFS. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucketloads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline       Completed: read failure       90%      6576         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline       Completed: read failure       90%      6087         3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline       Completed: read failure       10%      5901         656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline       Completed: read failure       90%      5818         651637856
        Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that:

    1) Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
    2) Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array. The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
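
    For anyone reproducing the diagnosis, the per-disk SMART data behind a 3ware controller is reachable through smartctl's 3ware pass-through (the port number is hypothetical - it maps to the controller's port, not to an sdX device):

        # query the disk on 3ware port 8 behind /dev/twa0
        smartctl -a -d 3ware,8 /dev/twa0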

    Read the article

  • Custom Online Backup Solution Advice

    - by Martín Marconcini
    I have to implement a way for our customers to back up their SQL 2000/2005/2008 databases online. The application they use is a C#/.NET35 WinForms application that connects to a SQL Server (can be 2000/2005/2008, sometimes Express editions) on the same LAN. Our application has a very specific UI and we must code each form following those guidelines. There's lots of GDI+ to give it the look and feel we want. For that reason, using a 3rd-party application is not a very good idea. We need to charge the customer on a monthly/annual basis for the service. Preferably, the customer doesn't need to care about bandwidth and storage space. It must be transparent. Given the above requirements, my first thoughts are:

    Solution 1: Code some basic FTP functionality with a behind-the-scenes SQL backup mechanism, then hire a hosting service and compress-transfer the .BAK there. Maintain a series of folders (one for each customer). Customers won't see what's happening; they will just see a list of their files and a big "Backup now" button that performs the SQL backup, compresses it, and uploads it (and updates the file list). Pros: not very complicated to implement, simple to use, fairly simple to configure (could have a dedicated FTP user/pass). Cons: finding an FTP-only hosting plan is probably not going to be easy (they usually come bundled with a lot of extras), and FTP is not always the best protocol.

    Solution 2: Similar to 1, but instead of FTP, use a cloud storage service like Amazon S3, Mosso or similar. Pros: cloud storage is fast, reliable, etc., and it's fairly easy to implement (especially where there are APIs like AWS or Mosso). Cons: I have been unable to find a service optimized for resellers where I can create multiple sub-accounts (one for each customer). Billing is going to be a nightmare, because these services bill per GB, and with one account it's impossible to differentiate each customer.

    Solution 3: Similar to 2, but letting the users create their own accounts on Amazon S3 (for example). Pros: you forget about billing and such. Cons: a mess for the customer, who has to open the Amazon (or whatever) account and will be charged by Amazon rather than by you. You can't really charge the customer (since you're just not doing anything).

    Solution 4: Use one of the many online backup solutions built on cloud storage. Pros: many of these include SQL Server backup and a lot of features that we'd otherwise have to implement, plus web access and the like come included. Cons: still the billing problem described in Solution 2. Few of these companies (if any) offer "reseller" accounts, and you eventually have to use their software (some offer certain branding).

    Any better approach? Summary: you have a .NET WinForms application. You want your users to be able to back up their SQL Server databases online (and be able to retrieve the backups if needed). You would ideally like to charge the customer for this service (i.e. XX € a year).
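
    Whichever transport wins, the capture side is the same; a minimal C# sketch of triggering the backup over the existing LAN connection (the connection string, database name and path are hypothetical, and WITH INIT overwrites the previous backup set):

        using System.Data.SqlClient;

        static void BackupDatabase()
        {
            const string connStr = "Server=SQLHOST;Database=master;Integrated Security=true";
            const string sql = "BACKUP DATABASE [CustomerDb] TO DISK = @path WITH INIT";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                // the path is local to the SQL Server machine, not the client
                cmd.Parameters.AddWithValue("@path", @"C:\Backups\CustomerDb.bak");
                cmd.CommandTimeout = 0; // backups can run long
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }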

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in Bacula I am getting Input/output errors. I am just getting started in trying to set this up:

    Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
    Sending label command for Volume "ACJ332" Slot 1 ...
    3307 Issuing autochanger "unload slot 8, drive 0" command.
    3304 Issuing autochanger "load slot 1, drive 0" command.
    3305 Autochanger "load slot 1, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ332", Slot 1 successfully created.
    Sending label command for Volume "ACJ331" Slot 2 ...
    3307 Issuing autochanger "unload slot 1, drive 0" command.
    3304 Issuing autochanger "load slot 2, drive 0" command.
    3305 Autochanger "load slot 2, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ331", Slot 2 successfully created.
    Sending label command for Volume "ACJ328" Slot 3 ...
    3307 Issuing autochanger "unload slot 2, drive 0" command.
    3304 Issuing autochanger "load slot 3, drive 0" command.
    3305 Autochanger "load slot 3, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ328", Slot 3 successfully created.
    Sending label command for Volume "ACJ329" Slot 4 ...
    3307 Issuing autochanger "unload slot 3, drive 0" command.
    3304 Issuing autochanger "load slot 4, drive 0" command.
    3305 Autochanger "load slot 4, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ329", Slot 4 successfully created.
    Sending label command for Volume "ACJ335" Slot 5 ...
    3307 Issuing autochanger "unload slot 4, drive 0" command.
    3304 Issuing autochanger "load slot 5, drive 0" command.
    3305 Autochanger "load slot 5, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ335", Slot 5 successfully created.
    Sending label command for Volume "ACJ334" Slot 6 ...
    3307 Issuing autochanger "unload slot 5, drive 0" command.
    3304 Issuing autochanger "load slot 6, drive 0" command.
    3305 Autochanger "load slot 6, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ334", Slot 6 successfully created.
    Sending label command for Volume "ACJ333" Slot 7 ...
    3307 Issuing autochanger "unload slot 6, drive 0" command.
    3304 Issuing autochanger "load slot 7, drive 0" command.
    3305 Autochanger "load slot 7, drive 0", status is OK.
    block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
    3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0)
    Catalog record for Volume "ACJ333", Slot 7 successfully created.
    Sending label command for Volume "ACJ330" Slot 8 ...
    3307 Issuing autochanger "unload slot 7, drive 0" command.

    Bacula-dir:

    # Definition of file storage device
    Storage {
      Name = TapeDevice
      # Do not use "localhost" here
      Address = ny-back01...   # N.B. Use a fully qualified name here
      SDPort = 9103
      Password = "..."
      Device = ULTRIUM-HH4
      Media Type = LTO-4
      Media Type = File
      Autochanger = Yes
    }

    Bacula-sd:

    Autochanger {
      Name = StorageLoader1U
      Device = ULTRIUM-HH4
      Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
      Changer Device = /dev/sg5
    }

    Device {
      Name = ULTRIUM-HH4
      Media Type = LTO-4
      Archive Device = /dev/st0
      AutomaticMount = yes;
      AlwaysOpen = yes;
      RemovableMedia = yes;
      RandomAccess = no;
      AutoChanger = yes;
      RandomAccess = no;
    }

    Does anyone know what this means / why I am getting it?
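    Two details stand out in this configuration: the Director's Storage resource declares Media Type twice (LTO-4 and then File), while the Storage daemon's Device resource says LTO-4 only, and the Device resource repeats RandomAccess = no. Bacula matches volumes to devices by Media Type, so the Director and SD should agree on a single type. Also worth noting: a read error at file:blk 0:0 while labeling a blank tape is frequently harmless, and the "3000 OK label" lines suggest the labels are in fact being written. A cleaned-up Director stanza might look like the sketch below; this is an assumption about the intended setup, not a verified fix for the I/O messages themselves:

    # bacula-dir.conf (sketch, assuming the autochanger is the only device
    # behind this Storage resource; the stray "Media Type = File" is dropped
    # so the Director and SD agree on one media type)
    Storage {
      Name = TapeDevice
      Address = ny-back01...        # fully qualified name, as before
      SDPort = 9103
      Password = "..."
      Device = ULTRIUM-HH4
      Media Type = LTO-4            # must match the Device resource in bacula-sd.conf
      Autochanger = Yes
    }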

    Read the article

  • (This is for a project, so yes it is homework) How would I finish this java code?

    - by user2924318
    The task is to create arrays using user input (which I was able to do), then, for the second part, use a separate method to sort the array in ascending order and output it. I have gotten it to do everything I need except I don't know how I would get it to sort. The directions say to use a while loop from 0 to the length to find the minimum value, then swap that with the 1st, but I don't know how to do this. This is what I have so far:

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int storage = getNumDigits(in);
        if (storage == 0) {
            System.out.print("No digits to store? OK, goodbye!");
            System.exit(0);
        }
        int[] a = new int[storage];
        a = getDigits(a, in);
        displayDigits(a);
        selectionSort(a);
    }

    private static int getNumDigits(Scanner inScanner) {
        System.out.print("Please enter the number of digits to be stored: ");
        int stored = inScanner.nextInt();
        while (stored < 0) {
            System.out.println("ERROR! You must enter a non-negative number of digits!");
            System.out.println();
            System.out.print("Please enter the number of digits to be stored: ");
            stored = inScanner.nextInt();
        }
        return stored;
    }

    private static int[] getDigits(int[] digits, Scanner inScanner) {
        int length = digits.length;
        int count = 0;
        int toBeStored = 0;
        while (count < length) {
            System.out.print("Enter integer " + count + ": ");
            toBeStored = inScanner.nextInt();
            digits[count] = toBeStored;
            count++;
        }
        return digits;
    }

    private static void displayDigits(int[] digits) {
        int len = digits.length;
        System.out.println();
        System.out.println("Array before sorting:");
        System.out.println("Number of digits in array: " + len);
        System.out.print("Digits in array: ");
        for (int cnt = 0; cnt < len - 1; cnt++) {
            System.out.print(digits[cnt] + ", ");
        }
        System.out.println(digits[len - 1]);
    }

    private static void selectionSort(int[] digits) {
        int l = digits.length;
        System.out.println();
        System.out.println("Array after sorting:");
        System.out.println("Number of digits in array: " + l);
        System.out.print("Digits in array: ");
        int index = 0;
        int value = digits[0];
        int indVal = digits[index];
        while (index < l) {
            indVal = digits[index];
            if (indVal <= value) {
                indVal = value;
                digits[index] = value;
                index++;
            } else if (value < indVal) {
                index++;
            }
            System.out.print(value); // This is where I don't know what to do.
        }
    }
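    Since the sticking point is the find-minimum-and-swap step the directions describe, here is a minimal sketch of a selectionSort body that does exactly that: an outer position index, an inner while loop scanning for the minimum, then a swap. The variable names are illustrative, not from the assignment:

    private static void selectionSort(int[] digits) {
        // For each position, find the smallest remaining value and swap it in.
        for (int pos = 0; pos < digits.length - 1; pos++) {
            int minIndex = pos;
            int scan = pos + 1;
            while (scan < digits.length) {      // the while loop the directions ask for
                if (digits[scan] < digits[minIndex]) {
                    minIndex = scan;            // remember where the current minimum lives
                }
                scan++;
            }
            int temp = digits[pos];             // classic three-step swap
            digits[pos] = digits[minIndex];
            digits[minIndex] = temp;
        }
    }

    Once the array is sorted in place, the "Array after sorting" output can reuse the same printing loop that displayDigits already uses.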

    Read the article

  • PowerShell Script to Deploy Multiple VM on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop and remove many virtual machines created from a common image I built (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE), so I also wanted to fire the commands in parallel as soon as possible, without losing the resulting output in the console. In order to execute multiple commands in parallel, I used the Start-Job cmdlet; using Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique allows me to reduce execution time when I have to deploy, start, stop or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really be executed in parallel, but only part of their execution time is subject to this lock, so you still obtain a better response time in these scenarios (this is the case for the provisioning of a new VM). Finally, when you remove the VMs you still have the disks containing the virtual machines to remove. This cannot be done just after the VM removal, because you have to wait until the removal operation has completed on Azure. So I wrote a script that you run a few minutes after removing the VMs; it deletes the disks (and VHDs) no longer attached to any VM. I check that each disk was created from the original image name used to provision the VMs, so I don't remove other disks deployed by other batches that I might want to preserve. These examples are specific to my scenario; if you need a more complex configuration you will have to adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts (a minimal sketch of the parallel pattern they all share follows this list):

    - ProvisionVMs: provisions many VMs in parallel starting from the same image; it creates one service for each VM.
    - RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    - StartVMs: starts all the VMs in parallel.
    - StopVMs: stops all the VMs in parallel.
    - RemoveOrphanDisks: removes all the disks no longer used by any VM. Run this script a few minutes after the RemoveVMs script.
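    Here is that shared skeleton in isolation — a minimal sketch, where Invoke-ForVM is a made-up name and a Start-Sleep stands in for the Azure cmdlet each real script runs:

    # Fire one background job per VM, then wait and collect output.
    function Invoke-ForVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-Sleep -Seconds 2      # placeholder for e.g. Start-AzureVM
            "Done with $VmName"
        }
    }

    Invoke-ForVM "test10"
    Invoke-ForVM "test11"

    # Echo output from each job as it completes
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }
    Get-Job | Receive-Job    # flush any remaining output
    Remove-Job *             # cleanup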
    ProvisionVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    # Name of storage account (where VMs will be deployed)
    $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

    function ProvisionVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            $Location = "Copy the Location property you get from Get-AzureStorageAccount"
            $InstanceSize = "A5"        # You can use any other instance size, such as Large, A6, and so on
            $AdminUsername = "UserName" # Name of the administrator account in the new VM
            $Password = "Password"      # Password of the administrator account in the new VM
            $Image = "Copy the ImageName property you get from Get-AzureVMImage"
            # You can list your own images using the following command:
            # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }
            New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
        }
    }

    # Set the proper storage - you might remove this line if you have only one storage account in the subscription
    Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list provisions one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    ProvisionVM "test10"
    ProvisionVM "test11"
    ProvisionVM "test12"
    ProvisionVM "test13"
    ProvisionVM "test14"
    ProvisionVM "test15"
    ProvisionVM "test16"
    ProvisionVM "test17"
    ProvisionVM "test18"
    ProvisionVM "test19"
    ProvisionVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup of jobs
    Remove-Job *

    # Display batch completed
    echo "Provisioning VM Completed"

    RemoveVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function RemoveVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Remove-AzureService -ServiceName $VmName -Force -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list removes one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    RemoveVM "test10"
    RemoveVM "test11"
    RemoveVM "test12"
    RemoveVM "test13"
    RemoveVM "test14"
    RemoveVM "test15"
    RemoveVM "test16"
    RemoveVM "test17"
    RemoveVM "test18"
    RemoveVM "test19"
    RemoveVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Display batch completed
    echo "Remove VM Completed"

    StartVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StartVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list starts one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StartVM "test10"
    StartVM "test11"
    StartVM "test12"
    StartVM "test13"
    StartVM "test14"
    StartVM "test15"
    StartVM "test16"
    StartVM "test17"
    StartVM "test18"
    StartVM "test19"
    StartVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Display batch completed
    echo "Start VM Completed"

    StopVMs

    # Name of subscription
    $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

    function StopVM( [string]$VmName ) {
        Start-Job -ArgumentList $VmName {
            param($VmName)
            Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
        }
    }

    # Select the subscription - this line is fundamental if you have access to multiple subscriptions
    # You might remove this line if you have only one subscription
    Select-AzureSubscription -SubscriptionName $SubscriptionName

    # Every line in the following list stops one VM using the name specified in the argument
    # You can change the number of lines - use a unique name for every VM - don't reuse names
    # already used in other VMs already deployed
    StopVM "test10"
    StopVM "test11"
    StopVM "test12"
    StopVM "test13"
    StopVM "test14"
    StopVM "test15"
    StopVM "test16"
    StopVM "test17"
    StopVM "test18"
    StopVM "test19"
    StopVM "test20"

    # Wait for all to complete
    While (Get-Job -State "Running") {
        Get-Job -State "Completed" | Receive-Job
        Start-Sleep 1
    }

    # Display output from all jobs
    Get-Job | Receive-Job

    # Cleanup
    Remove-Job *

    # Display batch completed
    echo "Stop VM Completed"

    RemoveOrphanDisks

    $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
    # You can list your own images using the following command:
    # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

    # Remove all orphan disks coming from the image specified in $ImageName
    Get-AzureDisk |
        Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
        Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • Error in running script [closed]

    - by SWEngineer
    I'm trying to run heathusf_v1.1.0.tar.gz found here. I installed tcsh so that build_heathusf would work. But when I run ./build_heathusf, I get the following (I'm running this on a Fedora Linux system from a terminal):

    $ ./build_heathusf
    Compiling programs to build a library of image processing functions.
    convexpolyscan.c: In function ‘cdelete’:
    convexpolyscan.c:346:5: warning: incompatible implicit declaration of built-in function ‘bcopy’ [enabled by default]
    myalloc.c: In function ‘mycalloc’:
    myalloc.c:68:16: error: invalid storage class for function ‘store_link’
    myalloc.c: In function ‘mymalloc’:
    myalloc.c:101:16: error: invalid storage class for function ‘store_link’
    myalloc.c: In function ‘myfree’:
    myalloc.c:129:27: error: invalid storage class for function ‘find_link’
    myalloc.c:131:12: warning: assignment makes pointer from integer without a cast [enabled by default]
    myalloc.c: At top level:
    myalloc.c:150:13: warning: conflicting types for ‘store_link’ [enabled by default]
    myalloc.c:150:13: error: static declaration of ‘store_link’ follows non-static declaration
    myalloc.c:91:4: note: previous implicit declaration of ‘store_link’ was here
    myalloc.c:164:24: error: conflicting types for ‘find_link’
    myalloc.c:131:14: note: previous implicit declaration of ‘find_link’ was here
    Building the mammogram resizing program.
    gcc -O2 -I. -I../common mkimage.o -o mkimage -L../common -lmammo -lm
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
    aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
    aggregate.c:(.text+0x868): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
    aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
    aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x9b5): undefined reference to `myfree'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xd85): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
    optical_density.c:(.text+0x342): undefined reference to `mycalloc'
    optical_density.c:(.text+0x383): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x693): undefined reference to `mymalloc'
    optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
    optical_density.c:(.text+0x790): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
    optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
    optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x4d9): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x8f1): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xd0d): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
    virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
    virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
    ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
    virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
    virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
    collect2: error: ld returned 1 exit status
    make: *** [mkimage] Error 1
    Building the breast segmentation program.
    gcc -O2 -I. -I../common breastsegment.o segment.o -o breastsegment -L../common -lmammo -lm
    breastsegment.o: In function `render_segmentation_sketch':
    breastsegment.c:(.text+0x43): undefined reference to `mycalloc'
    breastsegment.c:(.text+0x58): undefined reference to `mycalloc'
    breastsegment.c:(.text+0x12f): undefined reference to `mycalloc'
    breastsegment.c:(.text+0x1b9): undefined reference to `myfree'
    breastsegment.c:(.text+0x1c6): undefined reference to `myfree'
    breastsegment.c:(.text+0x1e1): undefined reference to `myfree'
    segment.o: In function `find_center':
    segment.c:(.text+0x53): undefined reference to `mycalloc'
    segment.c:(.text+0x71): undefined reference to `mycalloc'
    segment.c:(.text+0x387): undefined reference to `myfree'
    segment.o: In function `bordercode':
    segment.c:(.text+0x4ac): undefined reference to `mycalloc'
    segment.c:(.text+0x546): undefined reference to `mycalloc'
    segment.c:(.text+0x651): undefined reference to `mycalloc'
    segment.c:(.text+0x691): undefined reference to `myfree'
    segment.o: In function `estimate_tissue_image':
    segment.c:(.text+0x10d4): undefined reference to `mycalloc'
    segment.c:(.text+0x14da): undefined reference to `mycalloc'
    segment.c:(.text+0x1698): undefined reference to `mycalloc'
    segment.c:(.text+0x1834): undefined reference to `mycalloc'
    segment.c:(.text+0x1850): undefined reference to `mycalloc'
    segment.o:segment.c:(.text+0x186a): more undefined references to `mycalloc' follow
    segment.o: In function `estimate_tissue_image':
    segment.c:(.text+0x1bbc): undefined reference to `myfree'
    segment.c:(.text+0x1c4a): undefined reference to `mycalloc'
    segment.c:(.text+0x1c7c): undefined reference to `mycalloc'
    segment.c:(.text+0x1d8e): undefined reference to `myfree'
    segment.c:(.text+0x1d9b): undefined reference to `myfree'
    segment.c:(.text+0x1da8): undefined reference to `myfree'
    segment.c:(.text+0x1dba): undefined reference to `myfree'
    segment.c:(.text+0x1dc9): undefined reference to `myfree'
    segment.o:segment.c:(.text+0x1dd8): more undefined references to `myfree' follow
    segment.o: In function `estimate_tissue_image':
    segment.c:(.text+0x20bf): undefined reference to `mycalloc'
    segment.o: In function `segment_breast':
    segment.c:(.text+0x24cd): undefined reference to `mycalloc'
    segment.o: In function `find_center':
    segment.c:(.text+0x3a4): undefined reference to `myfree'
    segment.o: In function `bordercode':
    segment.c:(.text+0x6ac): undefined reference to `myfree'
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
    aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
    aggregate.c:(.text+0x868): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
    aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
    aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x9b5): undefined reference to `myfree'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xd85): undefined reference to `myfree'
    ../common/libmammo.a(cc_label.o): In function `cc_label':
    cc_label.c:(.text+0x20c): undefined reference to `mycalloc'
    cc_label.c:(.text+0x6c2): undefined reference to `mycalloc'
    cc_label.c:(.text+0xbaa): undefined reference to `myfree'
    ../common/libmammo.a(cc_label.o): In function `cc_label_0bkgd':
    cc_label.c:(.text+0xe17): undefined reference to `mycalloc'
    cc_label.c:(.text+0x12d7): undefined reference to `mycalloc'
    cc_label.c:(.text+0x17e7): undefined reference to `myfree'
    ../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity':
    cc_label.c:(.text+0x18c5): undefined reference to `mycalloc'
    ../common/libmammo.a(cc_label.o): In function `cc_label_4connect':
    cc_label.c:(.text+0x1cf0): undefined reference to `mycalloc'
    cc_label.c:(.text+0x2195): undefined reference to `mycalloc'
    cc_label.c:(.text+0x26a4): undefined reference to `myfree'
    ../common/libmammo.a(cc_label.o): In function `cc_relabel_by_intensity':
    cc_label.c:(.text+0x1b06): undefined reference to `myfree'
    ../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords':
    convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree'
    convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree'
    ../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim':
    convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x894): undefined reference to `myfree'
    ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
    mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
    mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
    optical_density.c:(.text+0x342): undefined reference to `mycalloc'
    optical_density.c:(.text+0x383): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x693): undefined reference to `mymalloc'
    optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
    optical_density.c:(.text+0x790): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
    optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
    optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x4d9): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x8f1): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xd0d): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
    virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
    virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
    ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
    virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
    virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
    collect2: error: ld returned 1 exit status
    make: *** [breastsegment] Error 1
    Building the mass feature generation program.
    gcc -O2 -I. -I../common afumfeature.o -o afumfeature -L../common -lmammo -lm
    afumfeature.o: In function `afum_process':
    afumfeature.c:(.text+0xd80): undefined reference to `mycalloc'
    afumfeature.c:(.text+0xd9c): undefined reference to `mycalloc'
    afumfeature.c:(.text+0xe80): undefined reference to `mycalloc'
    afumfeature.c:(.text+0x11f8): undefined reference to `myfree'
    afumfeature.c:(.text+0x1207): undefined reference to `myfree'
    afumfeature.c:(.text+0x1214): undefined reference to `myfree'
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x7fa): undefined reference to `mycalloc'
    aggregate.c:(.text+0x81c): undefined reference to `mycalloc'
    aggregate.c:(.text+0x868): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xbc5): undefined reference to `mymalloc'
    aggregate.c:(.text+0xbfb): undefined reference to `mycalloc'
    aggregate.c:(.text+0xc3c): undefined reference to `mycalloc'
    ../common/libmammo.a(aggregate.o): In function `aggregate':
    aggregate.c:(.text+0x9b5): undefined reference to `myfree'
    ../common/libmammo.a(aggregate.o): In function `aggregate_median':
    aggregate.c:(.text+0xd85): undefined reference to `myfree'
    ../common/libmammo.a(convexpolyscan.o): In function `polyscan_coords':
    convexpolyscan.c:(.text+0x6f0): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x75f): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x7ab): undefined reference to `myfree'
    convexpolyscan.c:(.text+0x7b8): undefined reference to `myfree'
    ../common/libmammo.a(convexpolyscan.o): In function `polyscan_poly_cacheim':
    convexpolyscan.c:(.text+0x805): undefined reference to `mycalloc'
    convexpolyscan.c:(.text+0x894): undefined reference to `myfree'
    ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
    mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
    mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x29e): undefined reference to `mymalloc'
    optical_density.c:(.text+0x342): undefined reference to `mycalloc'
    optical_density.c:(.text+0x383): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x693): undefined reference to `mymalloc'
    optical_density.c:(.text+0x74f): undefined reference to `mycalloc'
    optical_density.c:(.text+0x790): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xb2e): undefined reference to `mymalloc'
    optical_density.c:(.text+0xb87): undefined reference to `mycalloc'
    optical_density.c:(.text+0xbc6): undefined reference to `mycalloc'
    ../common/libmammo.a(optical_density.o): In function `linear_optical_density':
    optical_density.c:(.text+0x4d9): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `log10_optical_density':
    optical_density.c:(.text+0x8f1): undefined reference to `myfree'
    ../common/libmammo.a(optical_density.o): In function `map_with_ushort_lut':
    optical_density.c:(.text+0xd0d): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o): In function `deallocate_cached_image':
    virtual_image.c:(.text+0x3dc6): undefined reference to `myfree'
    virtual_image.c:(.text+0x3dd7): undefined reference to `myfree'
    ../common/libmammo.a(virtual_image.o):virtual_image.c:(.text+0x3de5): more undefined references to `myfree' follow
    ../common/libmammo.a(virtual_image.o): In function `allocate_cached_image':
    virtual_image.c:(.text+0x4233): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4253): undefined reference to `mymalloc'
    virtual_image.c:(.text+0x4275): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x42e7): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x44f9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x47a9): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4a45): undefined reference to `mycalloc'
    virtual_image.c:(.text+0x4af4): undefined reference to `myfree'
    collect2: error: ld returned 1 exit status
    make: *** [afumfeature] Error 1
    Building the mass detection program.
    make: Nothing to be done for `all'.
    Building the performance evaluation program.
    gcc -O2 -I. -I../common DDSMeval.o polyscan.o -o DDSMeval -L../common -lmammo -lm
    ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
    mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
    mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
    collect2: error: ld returned 1 exit status
    make: *** [DDSMeval] Error 1
    Building the template creation program.
    gcc -O2 -I. -I../common mktemplate.o polyscan.o -o mktemplate -L../common -lmammo -lm
    Building the drawimage program.
    gcc -O2 -I. -I../common drawimage.o -o drawimage -L../common -lmammo -lm
    ../common/libmammo.a(mikesfileio.o): In function `read_segmentation_file':
    mikesfileio.c:(.text+0x1e9): undefined reference to `mycalloc'
    mikesfileio.c:(.text+0x205): undefined reference to `mycalloc'
    collect2: error: ld returned 1 exit status
    make: *** [drawimage] Error 1
    Building the compression/decompression program jpeg.
    gcc -O2 -DSYSV -DNOTRUNCATE -c lexer.c
    lexer.c:41:1: error: initializer element is not constant
    lexer.c:41:1: error: (near initialization for ‘yyin’)
    lexer.c:41:1: error: initializer element is not constant
    lexer.c:41:1: error: (near initialization for ‘yyout’)
    lexer.c: In function ‘initparser’:
    lexer.c:387:21: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default]
    lexer.c: In function ‘MakeLink’:
    lexer.c:443:16: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default]
    lexer.c:447:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:452:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:455:34: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:458:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:460:3: warning: incompatible implicit declaration of built-in function ‘strcpy’ [enabled by default]
    lexer.c: In function ‘getstr’:
    lexer.c:548:26: warning: incompatible implicit declaration of built-in function ‘malloc’ [enabled by default]
    lexer.c:552:4: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:557:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:557:28: warning: incompatible implicit declaration of built-in function ‘strlen’ [enabled by default]
    lexer.c:561:7: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c: In function ‘parser’:
    lexer.c:794:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:798:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1074:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:1078:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1116:21: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:1120:8: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1154:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:1158:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1190:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1247:25: warning: incompatible implicit declaration of built-in function ‘calloc’ [enabled by default]
    lexer.c:1251:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c:1283:5: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default]
    lexer.c: In function ‘yylook’:
    lexer.c:1867:9: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    lexer.c:1867:20: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    lexer.c:1877:12: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    lexer.c:1877:23: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
    make: *** [lexer.o] Error 1
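    A side observation that may help whoever triages this: the later floods of "undefined reference to `mycalloc'/`myfree'" follow directly from myalloc.c failing to compile (myalloc.o never makes it into libmammo.a), so the first errors are the ones to fix. "Invalid storage class for function" is what modern gcc emits when a function is declared static at block scope, i.e. inside another function — a pattern common in old K&R-era sources. A minimal reconstruction of that pattern and its usual fix, as an illustration of the error class rather than the actual heathusf source:

    #include <stdlib.h>

    /* File-scope prototype, placed before any use and matching the static
       definition below -- the usual fix for this class of error. */
    static void store_link(void *p);

    void *mycalloc(size_t n, size_t size)
    {
        /* Old sources often declared the helper here, at block scope:
         *     static void store_link();
         * Modern gcc rejects that with "invalid storage class for function
         * 'store_link'", then implicitly declares it at the first call,
         * which later collides with the real static definition -- producing
         * the "static declaration ... follows non-static declaration" error. */
        void *p = calloc(n, size);
        store_link(p);
        return p;
    }

    static void store_link(void *p)
    {
        (void)p;  /* allocation bookkeeping would go here */
    }

    int main(void)
    {
        void *p = mycalloc(4, sizeof(int));
        free(p);
        return 0;
    }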

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
    Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed.

    I have a bunch of files stored on Microsoft's SkyDrive platform, which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative.

    What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an ASP.NET web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure and, since the site has low bandwidth/space requirements, the cost would be competitive with the existing host provider: Azure Pricing Calculator. And Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free.

    I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings.

    I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command:

    RESTORE DATABASE MyDatabase FROM DISK = 'C:\temp\MyBackup.bak'

    Msg 40510, Level 16, State 1, Line 1
    Statement 'RESTORE DATABASE' is not supported in this version of SQL Server.

    Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right-clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, so I deployed to a new name, deleted the existing database, and renamed the deployed database to what I needed. It was ridiculously easy.

    Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive.
    Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/

    Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page…and voila, I was in like Flynn.

    There are other options for deploying directly from Visual Studio, TFS, etc., but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it.

    I uploaded the .aspx, .cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0, so I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow.

    The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything in particular to make it work or if it just needed time to sort things out.

    However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception:

    "An error occurred during local report processing. Parameter is not valid."

    Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017

    Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person:

    Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted.

    If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess.

    Another problem. While trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory with some test code, to see if there was a permissions problem:

    String MyPath = MapPath(@"~\PDFs\Test.txt");
    File.WriteAllText(MyPath, "Hello Azure");

    I got this message: Access to the path <my path> is denied.

    After some research, I understood that since Azure is a cloud-based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options:

    - Use the Azure APIs to get a path. That way the location of the storage is separated from the application. However, the web site is then tied to Azure and can't be moved to another hosting platform.
    - Use the ApplicationData folder (not recommended).
    - Write to BLOB storage (a rough sketch of this option follows the post).
    - Or, I could try to stream the PDF output directly to the email and not save a file.

    I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note: the author of the BLOG added a comment saying he has updated his entry.)

    Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or, much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it.)

    *Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not included in Visual Studio 2010 Express.

    I hope someone finds this useful.

    Steve Wellens

    CodeProject
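    The BLOB-storage option mentioned in the list above, sketched roughly — this is an illustration written against the Microsoft.WindowsAzure.Storage client library, and the connection string, container name and blob name are all invented for the example:

    using System.IO;
    using System.Text;
    using Microsoft.WindowsAzure.Storage;        // NuGet package: WindowsAzure.Storage
    using Microsoft.WindowsAzure.Storage.Blob;

    class BlobSketch
    {
        static void Main()
        {
            // Placeholder connection string; the real one comes from the portal.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
            CloudBlobClient client = account.CreateCloudBlobClient();

            // A container plays the role the ~/PDFs directory played locally.
            CloudBlobContainer container = client.GetContainerReference("pdfs");
            container.CreateIfNotExists();

            // Upload the same "Hello Azure" payload File.WriteAllText produced.
            CloudBlockBlob blob = container.GetBlockBlobReference("Test.txt");
            using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("Hello Azure")))
            {
                blob.UploadFromStream(stream);
            }
        }
    }

    The appeal of this option is exactly the trade-off the post notes: the storage survives the site being moved or replicated as scaling occurs, at the price of coupling the code to Azure's API.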

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >