Search Results


  • c# .net MVC4 Model to represent table or form

    - by Matthew Chambers
    Hello, I am a little confused about models in MVC 4 and thought someone might be able to point me in the right direction. That would be most appreciated. For example, I have a table with the following fields:

        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 5)]
        public string UserName { get; set; }

        [Required(ErrorMessage = "Email Address is Required")]
        [StringLength(15, ErrorMessage = "Email Address must be between {0} and {1} in size", MinimumLength = 5)]
        [DataType(DataType.EmailAddress)]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [MaxLength(25)]
        [Display(Name = "Mobile Telephone Number")]
        public string Mobile { get; set; }

        [MaxLength(500)]
        [Display(Name = "Headline")]
        public string Headline { get; set; }

        [Required]
        [StringLength(200)]
        [Display(Name = "First Name")]
        public string FirstName { get; set; }

        [Required]
        [StringLength(200)]
        [Display(Name = "Surname")]
        public string Surname { get; set; }

        public virtual int? DayOfBirthId { get; set; }
        public virtual DayOfBirth DayOfBirth { get; set; }
        public virtual int? MonthOfBirthId { get; set; }
        public virtual MonthOfBirth MonthOfBirth { get; set; }
        public virtual int? YearOfBirthId { get; set; }
        public virtual YearOfBirth YearOfBirth { get; set; }

    This is my user profile table in the database. However, I would like a form that the user registers to the site with. When they first register I do not need all the details such as telephone; all I really need is their username, email address and password. Do I create another model for this? Or do I have one model and, in the controller, set the fields that are not required on registration to null or an empty string? I also have validation, so would this fire for data that has not been entered on the form?

    My question is ultimately: should all forms represent models, and should the database be redesigned to meet this requirement? Or should the controller set the values that are not required? Or should there be another model, representing the form, that maps to this table? I am a little confused on this, and clarification from anyone would be most appreciated.
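    For what it's worth, the separate registration model I have in mind would be a minimal sketch along these lines (the class name and the Password property are my own placeholders, not columns from the existing table):

        public class RegisterModel
        {
            [Required]
            [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 5)]
            public string UserName { get; set; }

            [Required(ErrorMessage = "Email Address is Required")]
            [DataType(DataType.EmailAddress)]
            [Display(Name = "Email")]
            public string Email { get; set; }

            [Required]
            [DataType(DataType.Password)]
            [Display(Name = "Password")]
            public string Password { get; set; }
        }

    The controller action would then copy these three values onto the full profile entity and leave the remaining columns null until the user fills in the rest of the profile later.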


  • program received signal SIGABRT (xcode)

    - by manish1990
    #import <UIKit/UIKit.h>

    @interface tableview : UIViewController <UITableViewDataSource>
    {
        NSArray *listOfItems;
    }
    @property (nonatomic, retain) NSArray *listOfItems;
    @end

    #import "tableview.h"

    @implementation tableview
    @synthesize listOfItems;

    - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        static NSString *CellIdentifier = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                           reuseIdentifier:CellIdentifier] autorelease];
        }
        //NSString *cellValue = [listOfItems objectAtIndex:indexPath.row];
        cell.textLabel.text = [listOfItems objectAtIndex:indexPath.row];
        return cell;
    }

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
    {
        return 3;
    }

    - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
    {
        self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
        if (self) {
            // Custom initialization
        }
        return self;
    }

    - (void)didReceiveMemoryWarning
    {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Release any cached data, images, etc that aren't in use.
    }

    #pragma mark - View lifecycle

    - (void)viewDidLoad
    {
        listOfItems = [[NSArray alloc] initWithObjects:@"first", @"second", @"third", nil];
        //listOfItems = [[NSMutableArray alloc] init];
        //[listOfItems addObject:@"first"];
        //[listOfItems addObject:@"second"];
        [super viewDidLoad];
        // Do any additional setup after loading the view from its nib.
    }

    - (void)dealloc
    {
        [listOfItems release];
        [super dealloc];
    }
    @end

    And the debugger output:

        GNU gdb 6.3.50-20050815 (Apple version gdb-1708) (Mon Aug 15 16:03:10 UTC 2011)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "x86_64-apple-darwin".sharedlibrary apply-load-rules all
        Attaching to process 438.
        2012-04-27 13:33:23.276 tableview test[438:207] -[UIView tableView:numberOfRowsInSection:]: unrecognized selector sent to instance 0x6855500
        2012-04-27 13:33:23.362 tableview test[438:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[UIView tableView:numberOfRowsInSection:]: unrecognized selector sent to instance 0x6855500'
        *** First throw call stack: (0x13bb052 0x154cd0a 0x13bcced 0x1321f00 0x1321ce2 0x1ecf2b 0x1ef722 0x9f7c7 0x9f2c1 0xa228c 0xa6783 0x51322 0x13bce72 0x1d6592d 0x1d6f827 0x1cf5fa7 0x1cf7ea6 0x1d8330c 0x23530 0x138f9ce 0x1326670 0x12f24f6 0x12f1db4 0x12f1ccb 0x12a4879 0x12a493e 0x12a9b 0x2282 0x21f5)
        terminate called throwing an exception
        Current language: auto; currently objective-c
        (gdb)
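    The exception says the dataSource messages are going to a plain UIView, which makes me suspect the nib's dataSource outlet is wired to the view rather than to this controller. For reference, wiring it in code instead would look roughly like this (untested sketch; the myTableView outlet is a hypothetical name for an outlet connected to the UITableView in the nib):

        @interface tableview : UIViewController <UITableViewDataSource>
        {
            NSArray *listOfItems;
            IBOutlet UITableView *myTableView; // hypothetical outlet to the table in the nib
        }
        @end

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            listOfItems = [[NSArray alloc] initWithObjects:@"first", @"second", @"third", nil];
            // point the table at this controller, not at whatever the nib has wired up
            myTableView.dataSource = self;
        }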


  • Why is the page still caching even after the no-cache headers have been sent?

    - by Matthew Grasinger
    I've done a ton of research on this and have asked many people for help, and still no success. Here are the details...

    I'm involved in developing a website that pulls data from various data files, combines them in a temp .csv file, and then graphs them using a popular graphing library: dygraphs. The bulk of the website is written in PHP. The parameters that determine the data that is graphed are stored in the user's session, the .csv is named after the user's session and is available for download, and the .csv file is then written in a script that passes it to the dygraphs object. And we've found, even with the no-cache headers sent:

        header("Cache-Control: no-cache, must-revalidate");
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");

    many users experience, in the middle of a session (if enough different graphs are generated), the page displaying an older, static rendering of the page (data they had graphed earlier in the session), as if it were cached and loaded instead of a new request being made. It only gets weirder, though.

    I've checked using developer tools in both Firefox and Chrome, and both browsers are receiving the no-cache headers just fine. Even when the problem occurs, if you view the page source, the source is the correct content (a table/legend is also dynamically created using PHP; the source shows the correct table, but what is rendered is older content). The page begins to render correctly until the graph is about to be displayed, and then shows the older content. The older content displays as if it were a completely static overlay: the cached graph does not have the same dynamic features (roll-over data point display, zoom and pan, etc.), and it is as if the correct page were somewhere beneath it. (The download button for the csv file moves depending on how large the table is. The older, static page does nothing if you click the download .csv button, but if you can manage to find the one in the page beneath it, you can click it and still download the .csv. The data in the .csv is correct.)

    It is one of the strangest things I've seen in development thus far. Some other relevant facts: all the problems I've personally experienced occurred while I was using Chrome. None of these symptoms have been reported by Firefox users. IE users have had the same problems (IE users are forced to use Chrome Frame).

    I'm at my wits' end at this point. We've sent the PHP headers; we've tried setting the cache profile for PHP on IIS to "DisableCache" (or whatever); we've tried sending a random query string to the results page; we've tried all the appropriate meta tags; all with no success.
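    For completeness, this is the general shape of the cache-busting variant we tried (simplified; the element id and paths here are placeholders, not our real ones):

        <?php
        // sent before any output, as in the snippet above
        header("Cache-Control: no-cache, no-store, must-revalidate");
        header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");

        // version the CSV URL so every render should be a distinct request
        $csvUrl = "/csv/" . session_id() . ".csv?v=" . time();
        ?>
        <div id="graph"></div>
        <script>
          var g = new Dygraph(document.getElementById("graph"), "<?php echo $csvUrl; ?>");
        </script>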


  • Certificate Trust Lists in IIS7

    - by BrettRobi
    I am trying to enable mutual authentication for my WebService hosted in IIS7. I have the server-side cert set up and working, but cannot figure out how to get a Certificate Trust List created and set up in IIS7 so that I can require and validate client-side certificates. All of my client-side certs are signed by my own root cert, so I need to create a CTL that contains just my root cert and then have IIS validate client-provided certs against the CTL. Can anyone shed some light on how to do this? IIS6 had a UI for assigning a CTL, but I can find nothing similar in IIS7.

    Update: I have now successfully used MakeCTL in wizard mode to create a CTL with a Friendly Name. However, I don't have adsutil support on my IIS7 box, so via other posts elsewhere I am trying to use the 'netsh http add sslcert' command to assign the CTL to my site. Before I could use this command I had to remove the existing SSL cert that was assigned to my site for server authentication. Then in my netsh command I specify the thumbprint of that very same SSL cert I removed, plus a made-up appid, plus 'sslctlidentifier=MyCTL sslctlstorename=CA'. The resulting command is:

        netsh http add sslcert ipport=10.10.10.10:443 certhash=adfdffa988bb50736b8e58a54c1eac26ed005050 appid={ffc3e181-e14b-4a21-b022-59fc669b09ff} sslctlidentifier=MyCTL sslctlstorename=CA

    (the IP address is munged), but I am getting this error:

        SSL Certificate add failed, Error: 1312
        A specified logon session does not exist. It may already have been terminated.

    I am sure the error is related to the CTL options, because if I remove them it works (though no CTL is assigned, of course). Can anyone help me take this last step and make this work?

    Update 01-07-2010: I never resolved this with IIS 7.0 and have since migrated our app to IIS 7.5 and am giving this another try. Per the response from Taras Chuhay, I installed IIS6 Compatibility on my test server and tried the steps he documented using adsutil.vbs (which can also be found here). I immediately ran into this error:

        ErrNumber: -2147023584
        Error trying to SET the Property: SslCtlIdentifier

    when running this command:

        adsutil.vbs set w3svc/1/SslCtlIdentifier MyFriendlyName

    I then went on to try the next adsutil.vbs command documented, and it failed with the same error. I have verified that the CTL I created has a Friendly Name of MyFriendlyName and that it exists in the 'Intermediate Certification Authorities\Certificate Trust List' store of LocalComputer. So once again I am at a dead standstill. I don't know what else to try. Has anyone ever gotten CTLs to work with IIS7 or 7.5? Ever? Am I beating a dead horse? Google turns up nothing but my own posts and other similar stories.

    Update 2/23/10: I've confirmed with Microsoft that this is a bug with IIS 7.5, but it does work with IIS 7. Check out this link for details: http://viisual.net/configuration/IIS7-CTLs.htm

    Update 6/08/10: I can now confirm that KB981506 resolves this issue. There is a patch associated with this KB that must be applied to Server 2008 R2 machines to enable this functionality. Once that is installed, all works flawlessly for me.
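    For anyone comparing notes: the binding that actually took (including any CTL identifier) can be inspected afterwards with

        netsh http show sslcert ipport=10.10.10.10:443

    which at least confirms whether the sslctlidentifier made it into the HTTP.SYS binding.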


  • Ubuntu 10.10 Ad-Hoc Setup (from Wireless Router, to Ubuntu Server/Desktop to Wireless Router)

    - by user60375
    Okay, so I know there are different approaches for this, but I will explain my story briefly before getting to the technical stuff.

    My fiancée and I are going through some financial issues (as I assume a lot of us are). We ended up having to move from our house and stay with some friends/family for 6 months, just to get ourselves caught up (medical bills, among other issues, etc.). So this is where it gets fun. At our friends' house, we are staying in the loft, which is not near the cable modem and wireless router. I have a "hand-crafted" media center running XBMC, an Ubuntu 10.10 Server/Desktop (multi-purpose, very powerful, and tons of drive space), two working laptops, and between the two of us we have multiple wireless devices/phones. Now our friends' wireless router doesn't have any options for assigning IP addresses, but my router does.

    My current setup is: friend's cable modem -- friend's wireless router -- Ubuntu 10.10 Server -- my wireless router (local link from friend's wireless (incoming) to the shared connection on ETH0 (outgoing)) -- to all devices. (That is, the Ubuntu server shares its incoming wireless connection out its ethernet port, which my wireless router then shares with the rest of the devices.) I set up my router to use default settings from my friend's router, using Google's DNS on my router (DNS setup disabled on the Ubuntu server); everything is assigned nicely and runs smoothly. My Ubuntu server was given the address 10.42.43.1 (assuming standard from Network Manager). (On the Ubuntu machine that shares to my wireless router, I have some server apps installed, but mainly just use Samba/NFS/Tangerine.)

    My problem/goal: every device has no problem accessing the internet through my router, the media center has an assigned IP address, and all services from all devices (ZeroConf, Avahi, Bonjour, GIT, SSH, FTP, Apache2, etc.) work correctly, except from my Ubuntu server (which serves the wireless connection via ETH0 to my wireless router). The Ubuntu 10.10 Server/Desktop is not broadcasting anything (the ZeroConf Service Discovery 0.4 GNOME applet shows the services from the Ubuntu server, but no other computers can see them). I can access it from my media center (running Xubuntu 10.04) if I point it at 10.42.43.1, no problem. But I cannot access Tangerine (daapd), and the Samba shares do not show up on any computers for 10.42.43.1 (not in the WORKGROUP; Samba is set up simple and default, but I can point computers to that address and the shares will add, except on a damn Windows 7 partition).

    Is this an issue with how I have my router set up, and possibly the gateway? An issue with Network Manager? An issue with my Ubuntu Server/Desktop? I know there is a lot to that, but it's simpler than I have probably explained. Any help would be appreciated. If you need more details, I can provide them. If there is a better way of attempting this home network, please let me know (a sketch of what I have been experimenting with follows below). Thanks in advance for the help.
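    The Samba-side change I have been experimenting with is along these lines; this is a guess on my part and untested, and the interface names and the friend's subnet broadcast address are placeholders for my real ones. The idea is that NetBIOS browse broadcasts won't cross the NAT on their own, so the server announces itself into the upstream subnet explicitly:

        # /etc/samba/smb.conf on the Ubuntu 10.10 server (sketch)
        [global]
           workgroup = WORKGROUP
           # listen on both the upstream wireless link and the shared ETH0 side
           interfaces = wlan0 eth0 10.42.43.1/24
           bind interfaces only = yes
           # push browse announcements into the friend's subnet across the NAT
           remote announce = 192.168.0.255/WORKGROUP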


  • Configuring Cisco 877W router from scratch for DHCP, WiFi, ADSL2+, NAT

    - by David M Williams
    Hi all, I apologise if this is a BIG question, but I am quite lost with the Cisco IOS. I know what I want to achieve, just not how to do it :( I have a Cisco 877W router with 4 FastEthernet interfaces, 1 ATM interface and 1 802.11 radio. I want to set it up for a small network and am trying to construct a configuration below. I was using Google to try to flesh it out, but I think I need help and guidance from actual experts!

    If it helps, output from show ver says:

        Cisco IOS software, C870 software (C870-ADVSECURITYK9-M), version 12.4(4)T7, release software (fc1)
        ROM: System bootstrap, version 12.3(8r)YI4, release software

    Here's what I have so far, which hopefully outlines clearly enough what I am wanting to do. The bits in angle brackets are placeholders (e.g. the secret password).

        !
        ! Set router hostname
        !
        hostname Shazam
        !
        ! Set usernames and passwords
        !
        username david privilege 15 secret 0 <PASSWORD>
        enable secret <SECRETPASSWORD>
        !
        ! Configure SSH and telnet access
        !
        line vty 0 4
         privilege level 15
         login local
         transport input telnet ssh
        !
        ! Local logging
        !
        logging buffered 51200 warning
        !
        ! Set date and time for NSW, Australia (GMT +10h)
        !
        !
        ! Set router IP address to 192.168.1.1 on FastEthernet0 port
        !
        interface FastEthernet0
         ip address 192.168.1.1 255.255.255.0
         no shut
         ip nat inside
        !
        ! Forward any unknown DNS requests to Google
        !
        ip dns server
        ip name-server 8.8.8.8
        ip name-server 8.8.4.4
        !
        ! Set up DHCP
        ! DHCP pool covers 192.168.1.100 - .199
        ! Set gateway and DNS server to be the router, ie 192.168.1.1
        !
        service dhcp
        ip routing
        ip dhcp excluded-address 192.168.1.1 192.168.1.99
        ip dhcp excluded-address 192.168.1.200 192.168.1.255
        ip dhcp pool <DHCPPOOLNAME>
         network 192.168.1.0 255.255.255.0
         default-router 192.168.1.1
         dns-server 192.168.1.1
         lease 7
        !
        ! DHCP reservations
        !
        ! Assign IP address 192.168.1.105 to MAC address 00-21-5D-2F-58-04
        !
        ! Configure ADSL2 connection details
        !
        interface atm
         dsl operating-mode adsl2+
        !
        ! Set up NAT rules
        !
        ! Forward port 35394 to 192.168.1.105
        !
        ! Set up WiFi
        !
        ! SSID visible, WPA2 security, Pre-shared key

    I'm hoping most of this is boiler-plate stuff to you guys. I'm keen not just to get a working script but to actually understand it. Unfortunately, I'm finding the Cisco reference material online very complex. Thank you!
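    To make a few of those placeholder comments concrete, the pieces I believe I still need look roughly like this. This is untested, and the Dialer0 interface name and pool name are guesses on my part; corrections welcome:

        ! Timezone for NSW, Australia (GMT +10h)
        clock timezone AEST 10
        !
        ! Static DHCP binding for 192.168.1.105 (IOS wants 01 + the MAC as client-identifier)
        ip dhcp pool RESERVED-105
         host 192.168.1.105 255.255.255.0
         client-identifier 0100.215d.2f58.04
        !
        ! Forward TCP port 35394 to 192.168.1.105
        ip nat inside source static tcp 192.168.1.105 35394 interface Dialer0 35394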


  • how to install ffmpeg in cpanel

    - by Ajay Chthri
    I'm using a dedicated server (Linux), so I need to install ffmpeg in cPanel. The only ffmpeg I found is under Main >> Software >> Install a Perl Module, but I am writing my script in PHP, so how can I install ffmpeg for use from PHP rather than Perl? When I try to install ffmpeg as a Perl module, I get this response:

        Checking C compiler....C compiler (/usr/bin/cc) OK (cached Tue Jan 17 19:16:31 2012)....Done
        CPAN fallback is disabled since /var/cpanel/conserve_memory exists, and cpanm is available.
        Method: Using Perl Expect, Installer: cpanm
        You have make /usr/bin/make
        Falling back to HTTP::Tiny 0.009
        You have /bin/tar: tar (GNU tar) 1.15.1
        You have /usr/bin/unzip
        You have Cpanel::HttpRequest 2.1
        Testing connection speed...(using fast method)...Done
        Ping:2 (ticks) Testing connection speed to cpan.knowledgematters.net using pureperl...(28800.00 bytes/s)...Done
        Ping:2 (ticks) Testing connection speed to cpan.develooper.com using pureperl...(22233.33 bytes/s)...Done
        Ping:2 (ticks) Testing connection speed to cpan.schatt.com using pureperl...(32750.00 bytes/s)...Done
        Ping:3 (ticks) Testing connection speed to cpan.mirror.facebook.net using pureperl...(14050.00 bytes/s)...Done
        Ping:2 (ticks) Testing connection speed to cpan.mirrors.hoobly.com using pureperl...(5150.00 bytes/s)...Done
        Five usable mirrors located
        Ping:0 (ticks) Testing connection speed to 208.109.109.239 using pureperl...(28950.00 bytes/s)...Done
        Ping:2 (ticks) Testing connection speed to 208.82.118.100 using pureperl...(19300.00 bytes/s)...Done
        Ping:1 (ticks) Testing connection speed to 69.50.192.73 using pureperl...(19300.00 bytes/s)...Done
        Three usable fallback mirrors located
        Mirror Check passed for cpan.schatt.com (/index.html)
        Searching on cpanmetadb ...
        Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:0).......(request attempt 1/12)...Using dns cache file /root/.HttpRequest/cpanmetadb.cpanel.net......searching for mirrors (mirror search attempt 1/3)......5 usable mirrors located. (less then expected)......mirror search success......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done
        Searching Video::FFmpeg on cpanmetadb (http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release) ...
        Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:1).......(request attempt 1/12)[email protected]%......request success......Done
        Source: fastest CPAN mirror ...
        --> Working on Video::FFmpeg
        Fetching http://cpan.schatt.com//authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz ...
        Fetching http://cpan.schatt.com/authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz (connected:1).......(request attempt 1/12)...Resolving cpan.schatt.com...(resolve attempt 1/65)......connecting to 66.249.128.125...@66.249.128.125......connected......receiving...25%...50%...75%...100%......request success......Done
        OK
        Unpacking Video-FFmpeg-0.47.tar.gz
        Video-FFmpeg-0.47/
        Video-FFmpeg-0.47/Changes
        Video-FFmpeg-0.47/FFmpeg.xs
        Video-FFmpeg-0.47/MANIFEST
        Video-FFmpeg-0.47/META.yml
        Video-FFmpeg-0.47/Makefile.PL
        Video-FFmpeg-0.47/README
        Video-FFmpeg-0.47/lib/
        Video-FFmpeg-0.47/lib/Video/
        Video-FFmpeg-0.47/lib/Video/FFmpeg/
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVFormat.pm
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Audio.pm
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Subtitle.pm
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Video.pm
        Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream.pm
        Video-FFmpeg-0.47/lib/Video/FFmpeg.pm
        Video-FFmpeg-0.47/ppport.h
        Video-FFmpeg-0.47/t/
        Video-FFmpeg-0.47/t/Video-FFmpeg.t
        Video-FFmpeg-0.47/test
        Video-FFmpeg-0.47/test.mp4
        Video-FFmpeg-0.47/typemap
        Entering Video-FFmpeg-0.47
        Checking configure dependencies from META.yml
        META.yml not found or unparsable. Fetching META.yml from search.cpan.org
        Fetching http://search.cpan.org/meta/Video-FFmpeg-0.47/META.yml (connected:1).......(request attempt 1/12)...Resolving search.cpan.org...(resolve attempt 1/65)......connecting to 199.15.176.161...@199.15.176.161......connected......receiving...100%......request success......Done
        Configuring Video-FFmpeg-0.47 ... Running Makefile.PL
        Perl v5.10.0 required--this is only v5.8.8, stopped at Makefile.PL line 1.
        BEGIN failed--compilation aborted at Makefile.PL line 1. N/A
        ! Configure failed for Video-FFmpeg-0.47. See /home/.cpanm/build.log for details.
        Perl Expect failed with non-zero exit status: 256
        All available perl module install methods have failed

    Please guide me on how to install ffmpeg in cPanel. Thanks in advance.
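    For context, the Makefile.PL failure above is the stock system Perl being too old for this module, but the Perl module was never really what I wanted anyway. What I want to end up with on the PHP side is something like this (a sketch, assuming an ffmpeg binary is installed at /usr/bin/ffmpeg; the file paths are placeholders):

        <?php
        // shell out to the ffmpeg binary from PHP
        $input  = escapeshellarg('/home/user/uploads/video.avi');
        $output = escapeshellarg('/home/user/uploads/video.mp4');
        exec("/usr/bin/ffmpeg -i $input $output 2>&1", $lines, $status);
        if ($status !== 0) {
            error_log("ffmpeg failed: " . implode("\n", $lines));
        }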


  • SSL certificates and types for securing your websites and applications

    - by Mit Naik
    I need to share some information regarding SSL certificates and their types: which SSL certificates are widely used, etc. There are several SSL certificates available in the market today for securing your domains, multiple subdomains, your applications, and your code too. A few of the details are mentioned below.

    Cheap SSL certificates available today include the standard RapidSSL certificate, Thawte SSL 123, etc., which are basic-level certificates. Most of these cheap SSL certificates are domain-validated only and don't provide the greatest trust for your customers. This means you shouldn't use cheap SSL certificates on e-commerce stores or other public-facing sites that require people to trust the site.

    EV certificates: I found the GeoTrust True BusinessID with EV certificate, which is one of the cheapest EV certificates available in the market today; you can also find Thawte and VeriSign EV versions of certificates. EV is designed to prevent phishing attacks better than normal SSL certificates. What makes an EV certificate so special? An SSL certificate provider has to do some extensive validation to give you one, including: verifying that your organization is legally registered and active; verifying the address and phone number of your organization; verifying that your organization has the exclusive right to use the domain specified in the EV certificate; verifying that the person ordering the certificate has been authorized by the organization; and verifying that your organization is not on any government blacklists.

    SSL wildcard certificates: SSL wildcard certificates are big money-savers. An SSL wildcard certificate allows you to secure an unlimited number of first-level sub-domains on a single domain name. For example, if you need to secure the following websites:

        * www.yourdomain.com
        * secure.yourdomain.com
        * product.yourdomain.com
        * info.yourdomain.com
        * download.yourdomain.com
        * anything.yourdomain.com

    and all of these websites are hosted on multiple server boxes, you can purchase and install one wildcard certificate issued to *.yourdomain.com to secure all these sites.

    SAN certificates are interesting certificates and are helpful if you want to secure multiple domains by generating a single CSR; you can install the same certificate on your additional sites without generating new CSRs for all the additional domains.

    Code signing certificates: a code signing certificate is a file containing a digital signature that can be used to sign executables and scripts in order to verify your identity and ensure that your code has not been tampered with since it was signed. This helps your users determine whether your software can be trusted. A code signing certificate allows you to sign code using a private and public key system, similar to how an SSL certificate secures a website. When you request a code signing certificate, a public/private key pair is generated. The certificate authority will then issue a code signing certificate that contains the public key. A certificate for code signing needs to be signed by a trusted certificate authority so that the operating system knows that your identity has been validated. You could still use the code signing certificate to sign and distribute malicious software, but you will be held legally accountable for it.

    You can sign many different types of code. The most common types include Windows applications such as .exe, .cab, .dll, .ocx, and .xpi files (using an Authenticode certificate); Apple applications (using an Apple code signing certificate); Microsoft Office VBA objects and macros (using a VBA code signing certificate); .jar files (using a Java code signing certificate); .air or .airi files (using an Adobe AIR certificate); and Windows Vista drivers and other kernel-mode software (using a Vista code certificate). In reality, a code signing certificate can sign almost all types of code, as long as you convert the certificate to the correct format first.

    I also found the below URL, which provides good suggestions on purchasing the best SSL certificates for securing your site, according to whether you are a financial institution, bank, hosting provider, ISP, retail merchant, etc. Please vote and provide comments or any additional suggestions regarding SSL certificates.
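    As a concrete example of the CSR step mentioned for SAN certificates, a request covering several hostnames can be generated with OpenSSL roughly like this (the domain names and organization details are placeholders):

        # openssl.cnf fragment adding SAN entries to the request
        [ req ]
        distinguished_name = req_distinguished_name
        req_extensions     = v3_req
        [ req_distinguished_name ]
        [ v3_req ]
        subjectAltName = DNS:yourdomain.com, DNS:www.yourdomain.com, DNS:secure.yourdomain.com

        # generate the key and the CSR in one step
        openssl req -new -newkey rsa:2048 -nodes \
            -keyout yourdomain.key -out yourdomain.csr \
            -subj "/C=US/O=Your Org/CN=yourdomain.com" \
            -config openssl.cnf

    The resulting yourdomain.csr is what gets submitted to the certificate authority; the same single certificate then covers every DNS name listed.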


  • How to use Bonjour?

    - by Roman
    First, what exactly does Bonjour do (please read my guesses written below)? I found out that Bonjour enables automatic discovery of computers, devices, and services on IP networks. But I thought that it not only "discovers devices on an IP network"; it also creates an IP network by assigning IP addresses to devices on which Bonjour is running. Am I right?

    And I still miss the essence. Does it work in the following way? First I connect devices (for example laptops) physically, so that they can potentially communicate with each other. Then, let's say, on some laptops I have Bonjour running, and as a consequence these laptops assign IP addresses to themselves automatically. So the laptops (where Bonjour is running) build an IP network. Does it work that way? Or maybe a computer running Bonjour is not considered a service, and it does not broadcast itself just because Bonjour is running on it. I mean that the applications running on the computers need to use Bonjour to broadcast themselves. So it is applications that broadcast themselves (not computers), and it is not done automatically (an application needs to broadcast itself explicitly). Is that right? How exactly can my application broadcast itself? Can I use the command line to register a service (so that all applications using Bonjour know that a new service has appeared)?

    Further, I would like to have an application which uses the IP network created by Bonjour. For that, my application needs to know which devices/services are present in the network. In more detail, my application needs to have a list of services. Each service in the list should have a name, the IP address where it is running, and the port which is used by the application. Can Bonjour provide this information in some way? If so, how exactly does it work? How can my program get this information from Bonjour? Can my program read some file created by Bonjour containing the above-mentioned information? Can I use some commands in the command line to retrieve this information?

    I have a special interest in accessing the information about services from files, environment variables, or commands in the command line. These options seem to me to be the simplest, since in those cases I do not need to use any additional libraries to communicate with Bonjour from a particular programming language.

    P.S. Please ask questions if something is not clear in my question. I will try to formulate my question in a clearer way.

    P.P.S. I use Windows 7.

    ADDED: I plan to write my applications in PHP. Every computer should run an Apache web server. And I want to use Bonjour to help computers discover each other (the computers are working in a local network).
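    On the command-line angle: from what I can tell, Bonjour for Windows ships a dns-sd tool that does both halves of what I am describing, registering a service and browsing for services (the service name and port here are made up for illustration):

        :: register a service called "My Web App" of type _http._tcp on port 8080
        dns-sd -R "My Web App" _http._tcp . 8080

        :: browse for all _http._tcp services on the local network
        dns-sd -B _http._tcp

        :: resolve one advertised service to its host and port
        dns-sd -L "My Web App" _http._tcp local

    If this is right, a PHP script could shell out to dns-sd and parse its output to build the list of services (name, host, port) without any extra libraries.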


  • sudo in Debian squeeze inside linux-vserver always wants password

    - by mark
    Ever since I upgraded all my linux-vserver Debian guests from Lenny to Squeeze, I've had the apparent problem that whenever I want to use sudo, it asks me for my password. Every time. I've configured sudo to have a timeout of 30 minutes:

        Defaults timestamp_timeout=30

    This was configured when it was still Lenny. (Note: as suggested by EightBitTony, I've also tried without this setting; no change.)

    I'm having a hard time figuring out what the problem is, since I think my configuration is right. I thought it might be a problem with the file used to record the timestamp, maybe a permission issue, but I had no luck finding any hard evidence. I've compared the contents of /var/lib/sudo/ between a working and a non-working system but couldn't spot any difference. The version of sudo used in both environments is 1.7.4p4-2.squeeze.3.

    My non-working system(s):

        find /var/lib/sudo/ -ls
        17319289 4 drwx------ 4 root root 4096 Jan 1 1985 /var/lib/sudo/
        17319286 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark
        17319312 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6
        17319361 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9
        17319490 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10
        17319326 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/4
        17319491 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/2

    A working system:

        find /var/lib/sudo -ls
        2598921 4 drwx------ 5 root root 4096 Jan 1 1985 /var/lib/sudo
        1999522 4 drwx------ 2 root mark 4096 Jan 1 1985 /var/lib/sudo/mark
        2000781 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/8
        1998998 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/17
        1999459 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/26
        1998930 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/24
        2000771 4 -rw------- 1 root mark 40 Jun 25 11:39 /var/lib/sudo/mark/4
        2000773 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/5
        1999223 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/0
        1998908 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/14
        2000769 4 -rw------- 1 root mark 40 Jul 9 13:30 /var/lib/sudo/mark/2
        2000770 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/3
        2000782 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/9
        2000778 4 -rw------- 1 root mark 40 Jul 8 00:11 /var/lib/sudo/mark/7
        1998892 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/19
        1999264 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/23
        2000789 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/12
        1999093 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/25
        1998880 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/18
        1998853 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/20
        2000790 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/15
        1998878 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/16
        1998874 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/13
        2000774 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/6
        2000786 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/11
        1998893 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/22
        2000783 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/10
        1998949 4 -rw------- 1 root mark 40 Jan 1 1985 /var/lib/sudo/mark/1

    Despite the obvious (some up-to-date timestamps on the working system), I don't see anything wrong here, so it could just as well be a wrong track.

    Here's my current /etc/sudoers:

        # /etc/sudoers
        #
        # This file MUST be edited with the 'visudo' command as root.
        #
        # See the man page for details on how to write a sudoers file.
        #
        Defaults env_reset

        # Host alias specification

        # User alias specification
        User_Alias FULLADMIN = user1, user2, user3

        # Cmnd alias specification

        # User privilege specification
        root ALL=(ALL) ALL
        FULLADMIN ALL = (ALL) ALL

        # Allow members of group sudo to execute any command
        # (Note that later entries override this, so you might need to move
        # it further down)
        %sudo ALL=(ALL) ALL
        #
        #includedir /etc/sudoers.d
        #Defaults always_set_home,timestamp_timeout=30
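    For completeness, these are the checks I have been running on the broken guest, nothing conclusive yet (all standard sudo tooling, as far as I know):

        # validate that the sudoers file parses cleanly
        visudo -c

        # show sudo's effective settings, including the timestamp timeout
        sudo sudo -V | grep -i timestamp

        # throw away my timestamp entries entirely and start fresh
        sudo -K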


  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password-protected at the database level. User-level security has not been implemented. I cannot convert the Access database to a newer version; it is being used by a 3rd-party application, so modifying it in any way is not allowed.

    I've tried using the Jet 4.0 OLE DB provider and the ODBC OLE DB provider. The 3rd-party application creates a System DSN (with the proper database password), but I've not had any luck with either method. If I were using a standard connection string, I think it would look something like this:

        Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####;

    I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed; just ask. Thanks!

    Method #1 - Using the Jet 4.0 provider. When I try to run these statements to create the linked server:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver @server = N'Test',
            @provider = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc = N'C:\Test.mdb'
        GO
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'Test',
            @useself = N'False', @locallogin = NULL,
            @rmtuser = N'Admin', @rmtpassword = '####'
        GO

    I get this error when testing the connection:

        TITLE: Microsoft SQL Server Management Studio
        "The test connection to the linked server failed."
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed.
        Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test".
        OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399)

    Method #2 - Using the ODBC provider:

        sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver @server = N'Test',
            @provider = N'MSDASQL',
            @srvproduct = N'ODBC',
            @datasrc = N'Test:DSN'
        GO
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'Test',
            @useself = N'False', @locallogin = NULL,
            @rmtuser = N'Admin', @rmtpassword = '####'
        GO

    I get this error:

        TITLE: Microsoft SQL Server Management Studio
        "The test connection to the linked server failed."
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed".
        OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
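    The variant I have been meaning to try folds the database password into the provider string via sp_addlinkedserver's @provstr parameter, mirroring the connection string above (untested; the password placeholder is deliberate):

        EXEC sp_addlinkedserver
            @server     = N'Test',
            @provider   = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc    = N'C:\Test.mdb',
            @provstr    = N'Jet OLEDB:Database Password=####;';
        GO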


  • Pig_Cassandra integration caused - ERROR 1070: Could not resolve CassandraStorage using imports:

    - by Le Dude
    I'm following a basic Pig, Cassandra, Hadoop installation. Everything works just fine standalone; no errors. However, when I tried to run the example file provided with the Pig_cassandra example, I got this error:

        [root@localhost pig]# /opt/cassandra/apache-cassandra-1.1.6/examples/pig/bin/pig_cassandra -x local /opt/cassandra/apache-cassandra-1.1.6/examples/pig/example-script.pig
        Using /opt/pig/pig-0.10.0/pig-0.10.0-withouthadoop.jar.
        2012-10-24 21:14:58,551 [main] INFO org.apache.pig.Main - Apache Pig version 0.10.0 (r1328203) compiled Apr 19 2012, 22:54:12
        2012-10-24 21:14:58,552 [main] INFO org.apache.pig.Main - Logging error messages to: /opt/pig/pig_1351138498539.log
        2012-10-24 21:14:59,004 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
        2012-10-24 21:14:59,472 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
        Details at logfile: /opt/pig/pig_1351138498539.log

    Here is the log file:

        Pig Stack Trace
        ---------------
        ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]

        org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
            at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1597)
            at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
            at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
            at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
            at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
            at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
            at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
            at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
            at org.apache.pig.Main.run(Main.java:555)
            at org.apache.pig.Main.main(Main.java:111)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
        Caused by: Failed to parse: Cannot instantiate: CassandraStorage
            at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
            at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
            ... 14 more
        Caused by: java.lang.RuntimeException: Cannot instantiate: CassandraStorage
            at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:510)
            at org.apache.pig.parser.LogicalPlanBuilder.validateFuncSpec(LogicalPlanBuilder.java:791)
            at org.apache.pig.parser.LogicalPlanBuilder.buildFuncSpec(LogicalPlanBuilder.java:780)
            at org.apache.pig.parser.LogicalPlanGenerator.func_clause(LogicalPlanGenerator.java:4583)
            at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3115)
            at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1291)
            at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
            at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
            at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
            at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
            ... 15 more
        Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 1070: Could not resolve CassandraStorage using imports: [, org.apache.pig.builtin., org.apache.pig.impl.builtin.]
            at org.apache.pig.impl.PigContext.resolveClassName(PigContext.java:495)
            at org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:507)
            ... 24 more
        ================================================================================

    I googled around and found another Stack Overflow user who identified the potential problem but not the solution: "Cassandra and pig integration cause error during startup". I believe my configuration is correct and the path has been defined properly. I didn't change anything in the pig_cassandra file. I'm not quite sure how to proceed from here. Please help?
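    My current theory is that the Cassandra jars are not on Pig's classpath when the script runs, so the CassandraStorage class can't be resolved. The fix I am about to test registers them explicitly and uses the fully-qualified class name (the jar paths are from my install and may differ; untested):

        -- register the jars that contain CassandraStorage and its thrift dependency
        REGISTER /opt/cassandra/apache-cassandra-1.1.6/lib/apache-cassandra-1.1.6.jar;
        REGISTER /opt/cassandra/apache-cassandra-1.1.6/lib/libthrift-0.7.0.jar;

        -- use the fully-qualified name instead of the bare CassandraStorage
        rows = LOAD 'cassandra://Keyspace1/Standard1'
               USING org.apache.cassandra.hadoop.pig.CassandraStorage();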


  • How can I automatically update Flash Player whenever a new version is released? [closed]

    - by user219950
    Summary: the Flash Player Update Service doesn't run on a reliable schedule, and doesn't automatically download and apply updates when it does run. Given the importance of having an up-to-date version of Flash Player installed (for those of us who don't use Chrome with its built-in player), I would like to find a way to ensure that new updates are promptly detected and installed. What follows are the details of my efforts to solve this problem on my own...

    Appendix A: Flash Player Update Service

    OK, way back in Flash Player 11.2 (or so?) Adobe added the Flash Player Update Service (FlashPlayerUpdateService.exe); it was supposed to keep Flash Player updated. Upon installation, FPUS is configured to run as a Windows service, with Start Type set to Manual. A scheduled task (Adobe Flash Player Updater.job) is added to start this service every hour. So far, so good; this set-up avoids a constantly-running service but makes sure that the checks run often enough to catch any updates quickly. Google's software updater is configured in a similar fashion, and that works just fine...

    ...And yet, when I checked the version of my installed Flash Player, I found it was 11.6.602.180, which, based on the timestamps of the files in C:\Windows\System32\Macromed\Flash, was last updated (or installed) on Tue, Mar 12, 2013, 5:00:08 pm. I made this observation on Thu, Apr 25, 2013, 7:00:00 pm, and upon checking Adobe's website found that the current version of Flash Player was 11.7.700.169. That's over a month since the last update, with a new one clearly available on the website but no indication that the hourly check running on my machine has noticed it or has any intention of downloading it.

    Appendix B: running the Flash Player updater manually

    Once upon a time, running FlashUtil32_<version>_Plugin.exe -update plugin would give you a window with an Install button; pressing it would download the installer for the current version (automatically, without opening a browser) and run it, then you'd click through that installer and be done. It was manual, but it worked!

    Finding my current installation out of date (see Appendix A), I first tried this manual update process. However...

    Running FlashUtil32_<version>_ActiveX.exe -update activex (in my case, that's FlashUtil32_11_6_602_180_ActiveX.exe -update activex) only presents a window with a Download button, and clicking that Download button opens my browser to the URL https://get3.adobe.com/flashplayer/update/activex.

    Running FlashUtil32_<version>_Plugin.exe -update plugin (in my case, that's FlashUtil32_11_6_602_180_Plugin.exe -update plugin) only presents a window with a Download button, and clicking that Download button opens my browser to the URL https://get3.adobe.com/flashplayer/update/plugin.

    I could continue with the download page it sent me to, uncheck the foistware box ("Free! McAfee Security Scan Plus"), download that installer (ActiveX, no foistware: install_flashplayer11x32axau_mssd_aih.exe; plugin, no foistware: install_flashplayer11x32au_mssd_aih.exe) and probably have an updated Flash... but then, what is the point of the Flash Player Update Service if I have to manually download and run another exe?

    Epilogue

    I've since come to suspect that the update service is intentionally hobbled to drive early adopters to the manual download page. If this is true, there's probably no solution short of writing my own updater; hopefully I am wrong.
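    One knob I have not yet mentioned: Flash Player reads a plain-text mms.cfg, and my understanding (unverified) is that silent background updating must be opted into there before the hourly check will actually install anything:

        # C:\Windows\SysWOW64\Macromed\Flash\mms.cfg (32-bit player on 64-bit Windows)
        AutoUpdateDisable=0
        SilentAutoUpdateEnable=1

    The scheduled task's last run time can also be checked with schtasks /query /tn "Adobe Flash Player Updater" /v, which would at least confirm whether the hourly trigger is firing at all.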


  • Rsync: how to mount truecrypt on-the-fly on the receiving side?

    - by deepc
    The short version: how can I keep an rsync backup on a TrueCrypt volume? The hard part is to mount/unmount this volume on the fly when it is needed for rsync.

    Details. This is my current backup configuration (which works fairly well for the most part):

        * backup source is on Win7 64 bit, destination is a remote Linux box (Debian)
        * actual data transfer is done by rsync via ssh (cwRsync with cygwin)
        * rsync daemon is started on demand via ssh

    On the Linux box the backup is protected by file permissions only. I want to increase security here and put the backup into a TrueCrypt volume. I can fuse-mount that volume manually in the shell. The question is now: how can I make rsync not only open an ssh connection and start the rsync daemon, but also mount the TrueCrypt volume before (and unmount it after)?

    My money is on the option --rsync-path, which can be used to pass a command line to ssh, provided that stdin and stdout still work the same. I guess that command would have to be a shell script. Is this possible, and what would the script look like? For reference, here's a quote of that option:

        --rsync-path=PROGRAM
            Use this to specify what program is to be run on the remote machine to start-up rsync. Often used when rsync is not in the default remote-shell's path (e.g. --rsync-path=/usr/local/bin/rsync). Note that PROGRAM is run with the help of a shell, so it can be any program, script, or command sequence you'd care to run, so long as it does not corrupt the standard-in & standard-out that rsync is using to communicate. One tricky example is to set a different default directory on the remote machine for use with the --relative option. For instance: rsync -avR --rsync-path="cd /a/b && rsync" host:c/d /e/

    This is the full rsync man page.

    TrueCrypt volume auto-mount: solved! It turns out this option is actually the key to auto-mounting the TrueCrypt volume on the remote side. The following command line does the trick (one line!):

        rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" --rsync-path="/usr/local/bin/truecrypt -d && /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" $localSourceDir $user:$remoteMountMountDir

    TrueCrypt volume auto-dismount: still open. How can I unmount the volume when rsync is done? Not sure if the following makes sense to anyone, but I'll give it a try... Right now I am unmounting (truecrypt -d), then mounting again, then continuing with rsync. At that point rsync needs to do its thing, but I don't know when it's done. Adding ... rsync && truecrypt -d to the command line does not work, because then the rsync daemon does not start. This is because rsync starts the daemon with the parameter --server on the remote side, and that parameter would go to the final truecrypt -d.
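    The idea I want to test next (completely unverified) is to let the remote login shell handle the dismount with an EXIT trap, so it fires only after the rsync server process finishes, without anything being appended to rsync's own argument list:

        rsync $options -e "ssh -p $port -i ../.ssh/id_dsa" \
          --rsync-path="trap '/usr/local/bin/truecrypt -d' EXIT; \
            /usr/local/bin/truecrypt --fs-options=rw,sync,utf8,uid=$UID,umask=0007 \
            --non-interactive -p $password $pathToVolume $remoteMountDir && rsync" \
          $localSourceDir $user:$remoteMountDir

    If --rsync-path really does run the string through a shell, the trap should dismount the volume when that shell exits, but I have not confirmed this works in practice.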


  • Error installing pkgconfig via macports

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to mount some directories in my OS X file system (like you can do with mount --bind in Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to just install pkgconfig. Here's the debug output from sudo port install pkgconfig:

        $ sudo port -d install pkgconfig
        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port pkgconfig.
        DEBUG: Requested variant i386 is not provided by port pkgconfig.
        DEBUG: Requested variant macosx is not provided by port pkgconfig.
        ---> Computing dependencies for pkgconfig
        DEBUG: Executing org.macports.main (pkgconfig)
        DEBUG: Skipping completed org.macports.fetch (pkgconfig)
        DEBUG: Skipping completed org.macports.checksum (pkgconfig)
        DEBUG: Skipping completed org.macports.extract (pkgconfig)
        DEBUG: Skipping completed org.macports.patch (pkgconfig)
        ---> Configuring pkgconfig
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (pkgconfig)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking build system type... i386-apple-darwin10.3.0
        checking host system type... i386-apple-darwin10.3.0
        checking for style of include used by make... none
        checking for gcc... /usr/bin/gcc-4.2
        checking for C compiler default output file name... configure: error: C compiler cannot create executables
        See `config.log' for more details.
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname"
        Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking the issue here is this:

        configure: error: C compiler cannot create executables
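    Before reinstalling anything, a quick sanity check of the compiler outside of MacPorts should tell me whether Xcode itself is broken (a trivial test program):

        # does gcc-4.2 work at all outside of MacPorts?
        echo 'int main(void) { return 0; }' > /tmp/t.c
        /usr/bin/gcc-4.2 /tmp/t.c -o /tmp/t && echo "compiler OK"

        # and the 64-bit variant MacPorts is actually invoking (per the CFLAGS above):
        /usr/bin/gcc-4.2 -O2 -arch x86_64 /tmp/t.c -o /tmp/t64 && echo "x86_64 OK"

    If the second command fails, that would point at a missing 64-bit SDK or an incomplete Xcode install rather than at MacPorts.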


  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below. I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager; it takes a long time to fail, then fails with:

        The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors, in this order (all within the same 1-second time frame):

        The SQL Server Browser service port is unavailable for listening, or invalid.
        The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
        The SQL Server Browser is enabling SQL instance and connectivity discovery support.
        The SQL Server Browser service was unable to establish Analysis Services discovery.
        The SQL Server Browser service has started.
        The SQL Server Browser service has shutdown.

    I checked the firewall rules, and both port 1433 (TCP) and 1434 (UDP) are wide open; just as well, the program and service binary had been "allowed through Windows firewall". I started the "Analysis Services" service by hand and it works fine. The Browser still won't start.

    Some history:

        * Installed SQL 2008 R2 Express Advanced
        * Installed SQL 2012 Express Advanced
        * Uninstalled SQL 2008 R2 Express Advanced
        * Installed 2012 SSDT and lots of features with the Express install
        * Installed a unique instance of SQL 2012 Enterprise with all features
        * Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
        * Uninstalled SQL 2012 Express
        * Uninstalled SQL 2012 Enterprise
        * Removed anything with "SQL" in the name from Control Panel "Programs and Features"
        * Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service failing to start, even during the install)
        * Added the Analysis Services feature (and everything else) via the installer (the Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

        C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
        SQLBrowser: starting up in console mode
        SQLBrowser: starting up SSRP redirection service
        SQLBrowser: failed starting SSRP redirection services -- shutting down.
        SQLBrowser: starting up OLAP redirection service
        SQLBrowser: Stopping the OLAP redirector

    As I try to repair the install, it errors out saying:

        The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking Retry fails every time. When clicking Cancel I get:

        The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

        Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this, short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how would I do that?
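    Since the very first event log entry complains about the port, the next diagnostic I plan to run checks whether something else already owns UDP 1434 (the PID in the second command is a placeholder for whatever the first one reports):

        :: is anything already bound to the Browser's UDP port?
        netstat -ano -p UDP | findstr :1434

        :: map the owning PID (hypothetical 1234 here) to a process name
        tasklist /fi "PID eq 1234"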


  • OpenSwan IPSec phase #2 complications

    - by XXL
    Phase #1 (IKE) succeeds without any problems (verified at the target host). Phase #2 (IPsec), however, fails at some point, apparently due to a misconfiguration on the local host. This should be an IPsec-only connection. I am using OpenSwan on Debian. The error log reads the following (the actual IP address of the remote endpoint has been modified):

    pluto[30868]: "x" #2: initiating Quick Mode PSK+ENCRYPT+PFS+UP+IKEv2ALLOW+SAREFTRACK {using isakmp#1 msgid:5ece82ee proposal=AES(12)_256-SHA1(2)_160 pfsgroup=OAKLEY_GROUP_DH22}
    pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
    pluto[30868]: "x" #1: received and ignored informational message
    pluto[30868]: "x" #1: the peer proposed: 0.0.0.0/0:0/0 -> 0.0.0.0/0:0/0
    pluto[30868]: "x" #3: responding to Quick Mode proposal {msgid:a4f5a81c}
    pluto[30868]: "x" #3: us: 192.168.1.76<192.168.1.76>[+S=C]
    pluto[30868]: "x" #3: them: 222.222.222.222<222.222.222.222>[+S=C]===10.196.0.0/17
    pluto[30868]: "x" #3: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
    pluto[30868]: "x" #3: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
    pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
    pluto[30868]: "x" #1: received and ignored informational message
    pluto[30868]: "x" #3: next payload type of ISAKMP Hash Payload has an unknown value: 97
    pluto[30868]: "x" #3: malformed payload in packet
    pluto[30868]: | payload malformed after IV

    I am behind NAT and this is all coming from wlan2. Here are the routing details:

    default via 192.168.1.254 dev wlan2 proto static
    169.254.0.0/16 dev wlan2 scope link metric 1000
    192.168.1.0/24 dev wlan2 proto kernel scope link src 192.168.1.76 metric 2

    Output of ipsec verify:

    Checking your system to see if IPsec got installed and started correctly:
    Version check and ipsec on-path                  [OK]
    Linux Openswan U2.6.37/K3.2.0-24-generic (netkey)
    Checking for IPsec support in kernel             [OK]
     SAref kernel support                            [N/A]
     NETKEY: Testing XFRM related proc values        [OK] [OK] [OK]
    Checking that pluto is running                   [OK]
     Pluto listening for IKE on udp 500              [OK]
     Pluto listening for NAT-T on udp 4500           [OK]
    Two or more interfaces found, checking IP forwarding [OK]
    Checking NAT and MASQUERADEing                   [OK]
    Checking for 'ip' command                        [OK]
    Checking /bin/sh is not /bin/dash                [WARNING]
    Checking for 'iptables' command                  [OK]
    Opportunistic Encryption Support                 [DISABLED]

    This is what happens when I run ipsec auto --up x:

    104 "x" #1: STATE_MAIN_I1: initiate
    003 "x" #1: received Vendor ID payload [RFC 3947] method set to=109
    106 "x" #1: STATE_MAIN_I2: sent MI2, expecting MR2
    003 "x" #1: received Vendor ID payload [Cisco-Unity]
    003 "x" #1: received Vendor ID payload [Dead Peer Detection]
    003 "x" #1: ignoring unknown Vendor ID payload [502099ff84bd4373039074cf56649aad]
    003 "x" #1: received Vendor ID payload [XAUTH]
    003 "x" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
    108 "x" #1: STATE_MAIN_I3: sent MI3, expecting MR3
    004 "x" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_128 prf=oakley_sha group=modp1024}
    117 "x" #2: STATE_QUICK_I1: initiate
    010 "x" #2: STATE_QUICK_I1: retransmission; will wait 20s for response
    010 "x" #2: STATE_QUICK_I1: retransmission; will wait 40s for response
    031 "x" #2: max number of retransmissions (2) reached STATE_QUICK_I1. No acceptable response to our first Quick Mode message: perhaps peer likes no proposal
    000 "x" #2: starting keying attempt 2 of at most 3, but releasing whack

    I have enabled NAT traversal in ipsec.conf accordingly. Here are the settings relevant to the connection in question:

    version 2.0
    config setup
        plutoopts="--perpeerlog"
        plutoopts="--interface=wlan2"
        dumpdir=/var/run/pluto/
        nat_traversal=yes
        virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
        oe=off
        protostack=netkey

    conn x
        authby=secret
        pfs=yes
        auto=add
        phase2alg=aes256-sha1;dh22
        keyingtries=3
        ikelifetime=8h
        type=transport
        left=192.168.1.76
        leftsubnet=192.168.1.0/24
        leftprotoport=0/0
        right=222.222.222.222
        rightsubnet=10.196.0.0/17
        rightprotoport=0/0

    Here are the specs provided by the other end that must be met for Phase #2:

    encryption algorithm: AES (128 or 256 bit)
    hash algorithm: SHA
    local ident1 (addr/mask/prot/port): (10.196.0.0/255.255.128.0/0/0)
    local ident2 (addr/mask/prot/port): (10.241.0.0/255.255.0.0/0/0)
    remote ident (addr/mask/prot/port): (x.x.x.x/x.x.x.x/0/0) (internal network or localhost)
    Security association lifetime: 4608000 kilobytes/3600 seconds
    PFS: DH group2

    So, finally, what might be the cause of the issue that I am experiencing? Thank you.
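    Two things in the config stand out against the peer's stated Phase 2 requirements: phase2alg requests dh22 for PFS while the peer expects "DH group2" (modp1024), and type=transport cannot carry subnet-to-subnet traffic. A hedged sketch of what a closer-matching conn block might look like - an assumption based only on the specs quoted above, not a verified configuration:

    conn x
        authby=secret
        pfs=yes
        auto=add
        # peer allows AES 128/256 with SHA and PFS DH group 2 (modp1024)
        phase2alg=aes256-sha1;modp1024
        # tunnel mode is required when protecting traffic between subnets
        type=tunnel
        left=192.168.1.76
        leftsubnet=192.168.1.0/24
        right=222.222.222.222
        rightsubnet=10.196.0.0/17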

    Read the article

  • DHCPDISCOVER requests from an off-by-one MAC address

    - by Aleksandr Levchuk
    On a Linux DHCP server I'm getting a bunch of these log lines:

    dhcpd: DHCPDISCOVER from 00:30:48:fe:5c:9c via eth1: network 192.168.2.0/24: no free leases

    I don't have any machines with 00:30:48:fe:5c:9c, and I don't intend to give out an IP to 00:30:48:fe:5c:9c (whatever that could be). I tracked down the server this is coming from and killed all the DHCP clients that were running, but the DHCPDISCOVER requests do not stop. I can prove that this is the sending server by pulling the Ethernet cable: the requests stop. The strange thing is that the sending server only has two interfaces, which are:

    00:30:48:fe:5c:9a
    00:30:48:fe:5c:9b

    What can be the cause of the off-by-one address? Who could be sending the requests?

    Details. On the DHCP client:

    root@n34:~# ip link
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 100
        link/ether 00:30:48:fe:5c:9a brd ff:ff:ff:ff:ff:ff
    3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
        link/ether 00:30:48:fe:5c:9b brd ff:ff:ff:ff:ff:ff
    4: ib0: <BROADCAST,MULTICAST> mtu 2044 qdisc noop state DOWN qlen 256
        link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:9f brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    5: ib1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc pfifo_fast state UP qlen 256
        link/infiniband 80:00:00:49:fe:80:00:00:00:00:00:00:00:02:c9:03:00:08:81:a0 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

    Same info from ifconfig:

    root@n34:~# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9a
              inet addr:192.168.2.234  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::230:48ff:fefe:5c9a/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:72544 errors:0 dropped:0 overruns:0 frame:0
              TX packets:152773 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:4908592 (4.6 MiB)  TX bytes:89815782 (85.6 MiB)
              Memory:dfd60000-dfd80000

    eth1      Link encap:Ethernet  HWaddr 00:30:48:fe:5c:9b
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Memory:dfde0000-dfe00000

    ib0       Link encap:UNSPEC  HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00
              BROADCAST MULTICAST  MTU:2044  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:256
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    ib1       Link encap:UNSPEC  HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00
              inet addr:192.168.3.234  Bcast:192.168.3.255  Mask:255.255.255.0
              inet6 addr: fe80::202:c903:8:81a0/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
              RX packets:1330 errors:0 dropped:0 overruns:0 frame:0
              TX packets:255 errors:0 dropped:5 overruns:0 carrier:0
              collisions:0 txqueuelen:256
              RX bytes:716415 (699.6 KiB)  TX bytes:17584 (17.1 KiB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:8 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:560 (560.0 B)  TX bytes:560 (560.0 B)

    The nodes were imaged with Perseus, which uses kexec instead of rebooting.
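    One way to see who is actually emitting those frames is to capture the raw DHCP traffic on the server's eth1 and look at the link-level source address; a diagnostic sketch (the interface name is taken from the log line above):

    tcpdump -e -n -vvv -i eth1 'udp port 67 or udp port 68'

    The -e flag prints the Ethernet header, so the true source MAC is visible alongside whatever the DHCP payload claims. As an aside, 00:30:48 is a Supermicro OUI, and on many Supermicro boards the onboard IPMI/BMC controller uses a MAC address adjacent to those of the LAN ports and runs its own DHCP client independently of the host OS - a plausible, though unconfirmed here, explanation for an off-by-one address that survives killing every DHCP client the OS can see.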

    Read the article

  • Gmail and Live are marking all messages from my server as spam.

    - by Ryan Kearney
    I'm getting very weird results here. When my server sends an email to my @hotmail or @gmail account, it's marked as spam. When I send email through my server from Outlook to @hotmail, it doesn't get marked as spam, but it still gets marked as spam in Gmail. The messages seem to get through fine on Yahoo, though. My server's hostname A record points to an IP address whose PTR record points back to the same domain name. The TXT record contains an SPF record allowing email to be sent from that server's IP. I moved from a VPS to a dedicated server when this started to happen. From what I can see, the email headers are identical. Here's one of my email headers that Gmail marks as spam. Some fields were replaced:

    MYGMAILACCOUNT is the email address of the account the email was addressed to
    USER is the name of the account on the system it was sent from
    HOSTNAME is the server's FQDN
    IPADDR is the IP address of the hostname
    MYDOMAIN is my domain name

    Delivered-To: MYGMAILACCOUNT
    Received: by 10.220.77.82 with SMTP id f18cs263483vck; Sat, 27 Feb 2010 23:58:02 -0800 (PST)
    Received: by 10.150.16.4 with SMTP id 4mr3886702ybp.110.1267343881628; Sat, 27 Feb 2010 23:58:01 -0800 (PST)
    Return-Path: <USER@HOSTNAME>
    Received: from HOSTNAME (HOSTNAME [IPADDR]) by mx.google.com with ESMTP id 17si4604419yxe.134.2010.02.27.23.58.01; Sat, 27 Feb 2010 23:58:01 -0800 (PST)
    Received-SPF: pass (google.com: best guess record for domain of USER@HOSTNAME designates IPADDR as permitted sender) client-ip=IPADDR;
    Authentication-Results: mx.google.com; spf=pass (google.com: best guess record for domain of USER@HOSTNAME designates IPADDR as permitted sender) smtp.mail=USER@HOSTNAME
    Received: from USER by HOSTNAME with local (Exim 4.69) (envelope-from <USER@HOSTNAME>) id 1Nle2K-0000t8-Bd for MYGMAILACCOUNT; Sun, 28 Feb 2010 02:57:36 -0500
    To: Ryan Kearney <MYGMAILACCOUNT>
    Subject: [Email Subject]
    MIME-Version: 1.0
    Content-type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: 8bit
    From: webmaster@MYDOMAIN
    Message-Id: <E1Nle2K-0000t8-Bd@HOSTNAME>
    Sender: <USER@HOSTNAME>
    Date: Sun, 28 Feb 2010 02:57:36 -0500
    X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
    X-AntiAbuse: Primary Hostname - HOSTNAME
    X-AntiAbuse: Original Domain - gmail.com
    X-AntiAbuse: Originator/Caller UID/GID - [503 500] / [47 12]
    X-AntiAbuse: Sender Address Domain - HOSTNAME

    Anyone have any ideas as to why all mail leaving my server gets marked as spam?

    EDIT: I already used http://www.mxtoolbox.com/SuperTool.aspx to check whether my server's IPs are blacklisted, and they are in fact not. That's what I thought at first, but it isn't the case.

    Update Mar 1, 2010: I received the following email from Microsoft:

    Thank you for writing to Windows Live Hotmail Domain Support. My name is * and I will be assisting you today. We have identified that messages from your IP are being filtered based on the recommendations of the SmartScreen filter. This is the spam filtering technology developed and operated by Microsoft and is built around the technology of machine learning. It learns to recognize what is and isn't spam. In short, we filter incoming emails that look like spam. I am not able to go into any specific details about what these filters specifically entail, as this would render them useless. E-mails from IPs are filtered based upon a combination of IP reputation and the content of individual emails. The reputation of an IP is influenced by a number of factors. Among these factors, which you as a sender can control, are:

    - The IP's Junk Mail Reporting complaint rate
    - The frequency and volume in which email is sent
    - The number of spam trap account hits
    - The RCPT success rate

    So I'm guessing it has to do with the fact that I got an IP address with little or no history of sending email. I've confirmed that I'm not on any blacklists. I'm guessing it's one of those things that will work itself out in a month or so. I'll post when I hear more.
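    For reference, a minimal SPF TXT record that authorizes a single sending IP looks like the following - a sketch using the same placeholders as the headers above; the existing record may already be equivalent:

    MYDOMAIN.    IN    TXT    "v=spf1 a mx ip4:IPADDR ~all"

    Gmail's headers above already show spf=pass, so the remaining levers are the reputation factors Microsoft lists: complaint rate, sending volume patterns, and sending history.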

    Read the article

  • Windows Backup fails with 0x80070002: "The system cannot find the file specified"

    - by James Johnston
    Windows 7 Backup is failing. When backing up even a single insignificant directory (e.g. I chose only the empty "Contacts" directory, leaving all other directories unchecked), I get this error within a few seconds and the backup fails. If I uncheck all files/directories and just do the system image, then the system image is backed up without issue. The backup destination is an external USB hard drive.

    Steps to reproduce and the subsequent failure:

    1. Set up the backup to go to the external hard drive. Don't back up the system image. Back up the "Contacts" directory only, for my profile.
    2. Start the backup.
    3. Immediately view the status of the backup; it stays on "Creating a shadow copy..." for a few seconds, and then the backup fails.
    4. Click the Options button, and it says "Check your backup / The system cannot find the file specified." - with options to "Try to run backup again" or "Change backup settings".

    If I click "Show Details", it says:

    Backup time: 4/12/2012 04:38
    Backup location: My Book (D:)
    Error code: 0x80070002

    An examination of the Event Log shows nothing useful beyond the following:

    Log Name: Application
    Source: Windows Backup
    Date: 4/12/2012 04:38:44
    Event ID: 4104
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Computer: JTJLaptop
    Description: The backup was not successful. The error is: The system cannot find the file specified. (0x80070002).
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="Windows Backup" />
        <EventID Qualifiers="0">4104</EventID>
        <Level>2</Level>
        <Task>0</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2012-04-12T04:38:44.000000000Z" />
        <EventRecordID>23979</EventRecordID>
        <Channel>Application</Channel>
        <Computer>JTJLaptop</Computer>
        <Security />
      </System>
      <EventData>
        <Data>The system cannot find the file specified. (0x80070002)</Data>
        <Binary>02000780E30500003F0900005B090000420ED1665C2BEE174B64529CB14610EA71000000</Binary>
      </EventData>
    </Event>

    What I have tried:

    - ChkDsk on both C: (main drive) and D: (backup drive); it doesn't find any errors.
    - Running SFC /SCANNOW to run the System File Checker.
    - Checked the list of profiles at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and ensured that each profile directory exists.

    I'm stumped: WHAT file can't be found, and why is my backup failing? This is on a Lenovo T420 laptop.
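    Since the failure happens while "Creating a shadow copy...", the state of the Volume Shadow Copy writers is worth checking from an elevated command prompt; a diagnostic sketch:

    vssadmin list writers
    vssadmin list shadowstorage

    A writer stuck in a failed state, or missing shadow-copy storage on either drive, could produce this kind of early abort before any file is copied.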

    Read the article

  • Apache outputs all URLs of a second domain as a subfolder of the primary domain name

    - by s_rathbone
    Hi all, would anyone be able to give me some guidance? Basically, I have a 'shared hosting' account with a large internet hosting provider, and my account lets me have multiple separate domains within its folder structure (note: not aliased domains and not subdomains). My goal is to have two domains set up, and I have already purchased the two domain names I need: the first domain is the 'primary' domain name for the root folder (e.g. www.example1.com), and the second domain name is set to one of its subfolders (e.g. www.example2.com is set to the folder www.example1.com/sites/music). The problem is that when Apache returns a page of the second domain to the browser, Apache writes the hyperlinks as if it were a subfolder of the first domain (e.g. www.example2.com/index.html comes out as http://www.example1.com/sites/music/index.html). Now, I have done some reading on this, looking through "Apache: The Definitive Guide" (O'Reilly), and although it was useful, I couldn't really find the answer. I'm guessing this is most likely an Apache setup issue in httpd.conf rather than an issue with the hosting company itself (which is why I'm posting it here), and I have also been to the official Apache documentation; I am guessing I might need to use something like the RewriteBase directive in .htaccess files... but I'm really not sure. I'm more of a Java programmer and have been struggling with this for a couple of days. Any guidance would be REALLY appreciated. If it helps, my hosting company is GoDaddy, and my sites are hosted on Linux. My problem was originally with WordPress, which I reinstalled a number of times in various ways to correct the problem, but I've just done a test with a very simple static HTML page, and it still has the same issue with relative URLs like this:

    <html>
    <head></head>
    <body><a href="images/dog.html">Pictures of Dogs</a></body>
    </html>

    However, it is fine if I hardcode the URLs like this:

    <html>
    <head></head>
    <body><a href="http://www.example2.com/images/dog.html">Pictures of Dogs</a></body>
    </html>

    Thanks heaps, Steve R

    NOW FIXED: The problem has now been fixed, and I didn't need to modify any .conf or .htaccess files. The problem was that when I went to install the second application into a second domain from the GoDaddy site, one of the setup questions asks which site you want it installed to, and after that it asks for the desired folder path. However, the second domain name was already pointing to the correct subfolder of the primary domain. So when I started installing WordPress again and came to the menu to select which site it was for, and it listed only the primary domain as an option, I assumed that this was like a label of "which hosting account?" or "which primary domain will your application be installed under?", because I already knew that in the next step I was specifying the folder. To correct this, you must make sure that your second domain is added to your domain list so that it is listed as an option during the installation process. For further details please read tystips.com/archives/52/how2-save-money-host-multiple-wordpress-blogs-on-a-single-godaddy-hosting-account/
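    For comparison, on a server where you control the Apache configuration yourself, the standard way to give two domains separate roots is name-based virtual hosts rather than nesting one inside the other; a minimal Apache 2.2-era sketch (paths hypothetical - on GoDaddy shared hosting this mapping is generated for you by the control panel):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName www.example1.com
        DocumentRoot /home/user/html
    </VirtualHost>

    <VirtualHost *:80>
        ServerName www.example2.com
        DocumentRoot /home/user/html/sites/music
    </VirtualHost>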

    Read the article

  • Why do we get a sudden spike in response times?

    - by Christian Hagelid
    We have an API implemented using ServiceStack and hosted in IIS. While performing load testing of the API we discovered that the response times are good, but they deteriorate rapidly as soon as we hit about 3,500 concurrent users per server. We have two servers, and when hitting them with 7,000 users the average response times sit below 500 ms for all endpoints. The boxes are behind a load balancer, so we get 3,500 concurrent users per server. However, as soon as we increase the number of total concurrent users, we see a significant increase in response times. Increasing the concurrent users to 5,000 per server gives us an average response time per endpoint of around 7 seconds.

    The memory and CPU on the servers are quite low, both while the response times are good and after they deteriorate. At peak, with 10,000 concurrent users, the CPU averages just below 50% and the RAM sits around 3-4 GB out of 16. This leaves us thinking that we are hitting some kind of limit somewhere. A perfmon screenshot captured some key counters during a load test with a total of 10,000 concurrent users; the highlighted counter was requests/second, and toward the right of the graph requests per second becomes really erratic. This is the main indicator for slow response times: as soon as we see this pattern, we notice slow response times in the load test.

    How do we go about troubleshooting this performance issue? We are trying to identify whether this is a coding issue or a configuration issue. Are there any settings in web.config or IIS that could explain this behaviour? The application pool is running .NET v4.0 and the IIS version is 7.5. The only change we have made from the default settings is to update the application pool Queue Length value from 1,000 to 5,000. We have also added the following config settings to the Aspnet.config file:

    <system.web>
        <applicationPool
            maxConcurrentRequestsPerCPU="5000"
            maxConcurrentThreadsPerCPU="0"
            requestQueueLimit="5000" />
    </system.web>

    More details: the purpose of the API is to combine data from various external sources and return the result as JSON. It currently uses an in-memory cache implementation to cache individual external calls at the data layer. The first request to a resource fetches all the data required, and any subsequent requests for the same resource get results from the cache. We have a 'cache runner' implemented as a background process that updates the information in the cache at certain set intervals. We have added locking around the code that fetches data from the external resources. We have also implemented the services that fetch the data from the external sources in an asynchronous fashion, so that an endpoint should only be as slow as the slowest external call (unless we have data in the cache, of course). This is done using the System.Threading.Tasks.Task class. Could we be hitting a limitation in terms of the number of threads available to the process?
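    If thread starvation is the suspicion, one cheap experiment is to raise the thread pool's floor at application start-up, since by default the CLR injects threads above the minimum only gradually (roughly one per 500 ms), which can make a burst of concurrent requests queue even while CPU sits idle. A hedged C# sketch - the numbers are illustrative, not a recommendation:

    using System.Threading;

    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Pre-grow the pool so a burst does not wait on gradual thread injection.
            // Tune by measurement; 500/500 is only a starting point.
            ThreadPool.SetMinThreads(workerThreads: 500, completionPortThreads: 500);
        }
    }

    If response times stay flat after this change, the bottleneck is likely elsewhere; the lock around the external fetches would be the next candidate to examine.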

    Read the article

  • Unauthorized Access Exception using Web Deploy to Site when the site root is a UNC path

    - by Peter LaComb Jr.
    I am trying to use Web Deploy to deploy a site where the site is rooted on a UNC path instead of a local drive. This is because I want to have a shared configuration and have all servers point to the same UNC for content; that would allow me to deploy to one server and have all servers updated at the same time. I've created a share with Everyone and Users read/write. The NTFS permissions have the ID of the appDomain account as Full Control, and that is the same account that is configured as the specific user in Management Service Delegation. I can log on to the destination server as that ID, access the share, and create/delete files. However, I'm getting the following exception in my Microsoft Web Deploy log on the destination server:

    User:
    Client IP: 192.168.62.174
    Content-Type: application/msdeploy
    Version: 9.0.0.0
    MSDeploy.VersionMin: 7.1.600.0
    MSDeploy.VersionMax: 9.0.1631.0
    MSDeploy.Method: Sync
    MSDeploy.RequestId: c060c823-cdb4-4abe-8294-5ffbdc327d2e
    MSDeploy.RequestCulture: en-US
    MSDeploy.RequestUICulture: en-US
    ServerVersion: 9.0.1631.0
    Skip: objectName="^configProtectedData$"
    Provider: auto, Path:

    A tracing deployment agent exception occurred that was propagated to the client. Request ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e'. Request Timestamp: '8/23/2012 11:01:56 AM'. Error Details:
    ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
    Microsoft.Web.Deployment.DeploymentDetailedUnauthorizedAccessException: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER. ---> Microsoft.Web.Deployment.DeploymentException: The error code was 0x80070005. ---> System.UnauthorizedAccessException: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
       at Microsoft.Web.Deployment.NativeMethods.RaiseIOExceptionFromErrorCode(Win32ErrorCode errorCode, String maybeFullPath)
       at Microsoft.Web.Deployment.DirectoryEx.CreateDirectory(String path)
       at Microsoft.Web.Deployment.DirPathProviderBase.CreateDirectory(String fullPath, DeploymentObject source)
       at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
       --- End of inner exception stack trace ---
       --- End of inner exception stack trace ---
       at Microsoft.Web.Deployment.FilePathProviderBase.HandleKnownRetryableExceptions(DeploymentBaseContext baseContext, Int32[] errorsToIgnore, Exception e, String path, String operation)
       at Microsoft.Web.Deployment.DirPathProviderBase.Add(DeploymentObject source, Boolean whatIf)
       at Microsoft.Web.Deployment.DeploymentObject.Add(DeploymentObject source, DeploymentSyncContext syncContext)
       at Microsoft.Web.Deployment.DeploymentSyncContext.HandleAdd(DeploymentObject destObject, DeploymentObject sourceObject)
       at Microsoft.Web.Deployment.DeploymentSyncContext.HandleUpdate(DeploymentObject destObject, DeploymentObject sourceObject)
       at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
       at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenNoOrder(DeploymentObject dest, DeploymentObject source)
       at Microsoft.Web.Deployment.DeploymentSyncContext.SyncChildrenOrder(DeploymentObject dest, DeploymentObject source)
       at Microsoft.Web.Deployment.DeploymentSyncContext.ProcessSync(DeploymentObject destinationObject, DeploymentObject sourceObject)
       at Microsoft.Web.Deployment.DeploymentObject.SyncToInternal(DeploymentObject destObject, DeploymentSyncOptions syncOptions, PayloadTable payloadTable, ContentRootTable contentRootTable, Nullable`1 syncPassId)
       at Microsoft.Web.Deployment.DeploymentAgent.HandleSync(DeploymentAgentAsyncData asyncData, Nullable`1 passId)
       at Microsoft.Web.Deployment.DeploymentAgent.HandleRequestWorker(DeploymentAgentAsyncData asyncData)
       at Microsoft.Web.Deployment.DeploymentAgent.HandleRequest(DeploymentAgentAsyncData asyncData)

    This is shown as the following on the console of the machine where I run the deployment:

    C:\Users\PLaComb>"C:\Program Files (x86)\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='C:\Packages\Deployments\applicationName.zip' -dest:auto,computerName='https://SERVERNAME:8172/msdeploy.axd',includeAcls='True' -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"C:\Packages\Deployments\applicationName.SetParameters.xml" -allowUntrusted

    Info: Using ID 'c060c823-cdb4-4abe-8294-5ffbdc327d2e' for connections to the remote server.
    Info: Adding sitemanifest (sitemanifest).
    Info: Adding virtual path (JMS/admin)
    Info: Adding directory (JMS/admin).
    Error Code: ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER
    More Information: Unable to perform the operation ("Create Directory") for the specified directory ("\\someserver.mydomain.local\sharename\sitename\applicationName"). This can occur if the server administrator has not authorized this operation for the user credentials you are using. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_INSUFFICIENT_ACCESS_TO_SITE_FOLDER.
    Error: The error code was 0x80070005.
    Error: Access to the path '\\someserver.mydomain.local\sharename\sitename\applicationName' is denied.
    Error count: 1.
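    It may also be worth verifying the effective grant on the UNC target for the exact delegated identity, run from the destination server; a diagnostic sketch (the account name is hypothetical - substitute the appDomain account from the delegation rule):

    icacls \\someserver.mydomain.local\sharename\sitename
    icacls \\someserver.mydomain.local\sharename\sitename /grant "MYDOMAIN\msdeployUser:(OI)(CI)F"

    Keep in mind that the handler performs the write under the delegated identity with a network logon, so a manual interactive test as that account does not necessarily exercise the same access path.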

    Read the article

  • I want to change DPI with ImageMagick without changing the actual byte-size of the image data

    - by user1694803
    I feel horribly sorry that I have to ask this question here, but after hours of researching how to do an actually very simple task I'm still failing... In GIMP there is a very simple way to do what I want. I only have the German dialog installed, but I'll try to translate it. I'm talking about going to "Image -> Print Size" and adjusting the values "X resolution" and "Y resolution", which are known to me as so-called DPI values. You can also choose the unit, which by default is "pixels/in". (In German the dialog is "Bild -> Druckgröße", with "X-Auflösung" and "Y-Auflösung".) OK, the values there are often 72 by default. When I change them to e.g. 300, the image stays the same on the computer, but if I print it, it comes out smaller, with all the details still there - it has a higher resolution on the printed paper (but a smaller size, which is fine for me). I often do this when working with LaTeX, or to be exact with the command pdflatex on a recent Ubuntu machine. When I do the above process in GIMP manually, everything works just fine: the images appear smaller in the resulting PDF, but with high printing quality. What I am trying to do is automate the process of going into GIMP and adjusting the DPI values. Since ImageMagick is known to be superb, and I have used it for many other tasks, I tried to achieve my goal with this tool. But it just does not do what I want. After trying a lot of things, I think this should be the command I need:

    convert input.png -density 300 output.png

    This should set the DPI to 300, as I can read everywhere on the web. It seems to work: when I check the file, it stays the same.

    file input.png output.png
    input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced
    output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced

    When I use this command, it seems like it did what I wanted:

    identify -verbose output.png | grep 300
    Resolution: 300x300
    PNG:pHYs : x_res=300, y_res=300, units=0

    (Funnily enough, the same output comes for input.png, which confuses me... so these might be the wrong parameters to watch?)

    But when I now render my TeX with pdflatex, the image is still big and blurry. Also, when I open the image with GIMP again, the DPI values are set to 72 instead of 300. So there actually was no effect at all. Now, what is the problem here? Am I getting something completely wrong? I can't be that wrong, since everything works just fine with GIMP... Thanks for any help with this. I am also open to other automated solutions which are easily done on a Linux system...
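    One detail in the identify output above points at the likely culprit: units=0 means the PNG pHYs chunk carries no unit at all, so the density of 300 is dimensionless and consumers such as pdflatex and GIMP fall back to their defaults. Setting the unit explicitly may be all that is missing - a hedged sketch:

    convert input.png -units PixelsPerInch -density 300 output.png
    identify -format "%x x %y (%U)\n" output.png

    The -units option should accompany (and is safest placed before) -density, so that the 300 is recorded as pixels per inch rather than as an undefined quantity.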

    Read the article

  • Prevent auto mounting Android sdcard under Linux Mint

    - by BullShark
    I recently obtained an older Android phone so that I could test Android apps on it. I've needed it because I have a Nexus 7, but no older Android versions or hardware to test on. I'm having a problem with it under Linux Mint with Cinnamon. When I plug the phone in, or remove and re-insert the sdcard while the phone is plugged in, Linux automatically mounts the sdcard. This is a problem because once it is mounted under Linux, it dismounts from the phone running Android 2.3.5, and I can no longer test Android apps I write that require the sdcard to be present and writable.

    I went to Menu > System Tools > System Settings > System Details > Removable Media, which brings up the removable-media settings window. I have changed the settings to always "Ask what to do" on "Select how media should be handled". However, the sdcard still gets mounted, and then I am asked how I want to open these files (media players, photo importers, file browser, etc.). If I click the checkbox for "Never prompt or start programs on media insertion", then the sdcard is mounted and I am not asked how to open these files. Eject is just a noob word for Ubuntu users that means umount (unmount), like "Administrator" is another Ubuntu noob word for the root user. And if I unmount the sdcard, the phone doesn't recognize it again until I take the sdcard out and plug it back in. The phone sees it for a brief moment until Linux Mint takes it over.

    There are 2 possible solutions, and maybe more:

    1) Prevent Linux from automounting sdcards somehow
    2) Tell Android not to allow the computer it is plugged into to take over the sdcard - how?

    Edit: I found out how to prevent the sdcard from being automatically mounted. Now it gets recognized by Linux:

    bullshark@beastlinux ~ $ dmesg | tail -n 25
    [597212.218323] sd 21:0:0:0: [sde] Attached SCSI removable disk
    [597212.218639] sr 21:0:0:1: Attached scsi CD-ROM sr2
    [597212.218910] sr 21:0:0:1: Attached scsi generic sg7 type 5
    [597217.139373] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
    [597217.140726] sd 21:0:0:0: [sde] No Caching mode page present
    [597217.140735] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597217.143595] sd 21:0:0:0: [sde] No Caching mode page present
    [597217.143602] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597217.152240] sde: sde1
    [597389.751008] 4:2:1: cannot get freq at ep 0x84
    [597390.238742] 4:2:1: cannot get freq at ep 0x84
    [597624.903132] sde: detected capacity change from 1977614336 to 0
    [597637.677763] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
    [597637.679616] sd 21:0:0:0: [sde] No Caching mode page present
    [597637.679626] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597637.682508] sd 21:0:0:0: [sde] No Caching mode page present
    [597637.682515] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597637.692758] sde: sde1
    [597661.857979] sde: detected capacity change from 1977614336 to 0
    [597688.775455] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
    [597688.776814] sd 21:0:0:0: [sde] No Caching mode page present
    [597688.776823] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597688.780055] sd 21:0:0:0: [sde] No Caching mode page present
    [597688.780062] sd 21:0:0:0: [sde] Assuming drive cache: write through
    [597688.788639] sde: sde1
    bullshark@beastlinux ~ $

    However, the phone still unmounts the sdcard upon being detected by Linux. Linux detects but does not mount, and a few seconds later: (screenshot)

    Edit #2 (Solution): I solved this one by changing the USB connection type on the phone (it was set to USB Mass Storage).
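    For anyone who prefers to suppress the automount on the Linux side instead of on the phone, tagging the device so the desktop's disk service ignores it is one option; a hedged sketch for udisks2-based desktops - the match value is a placeholder you would take from udevadm info for your own phone:

    # /etc/udev/rules.d/99-hide-phone.rules
    # Get the real ID_VENDOR_ID value via:
    #   udevadm info --query=property --name=/dev/sde
    SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="0bb4", ENV{UDISKS_IGNORE}="1"

    After adding the rule, reload with "udevadm control --reload-rules" and replug the phone.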

    Read the article

< Previous Page | 356 357 358 359 360 361 362 363 364 365 366 367  | Next Page >