Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome on Windows 8 consuming cross-origin resources via Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today, though. I don't claim to fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We referenced many sources during our trial-and-error troubleshooting. These are the links, in order of applicability to the solution:

    * Zoiner Tejada, JavaScript and other material: http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    * WebDAV, where I learned about "Accept": http://www-jo.se/f.pfleger/cors-and-iis
    * IT Hit, which tells about NOT using '*': http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    * Carlos Figueira, sample back-end code (newer): http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version: http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a)

    Background

    As a measure of protection, Web standards designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) block you, showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix: it seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not; in fact, Chrome is quite restrictive. In our tests, IE showed the data (a tabular view with one row for each day of a week) while Chrome did not (trust me, neither did Firefox), and the Chrome developer console showed an XmlHttpRequest (XHR) error. (Screen captures from IE and Chrome omitted here.)

    Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so IE is probably the only one that's truly different. Chrome, however, has a specific process of performing a "pre-flight" check to make sure the service can respond to a Cross-Origin Resource Sharing (CORS) request. So basically the sequence is, if I understand correctly:

    1) Page loads
    2) JavaScript request is processed by the browser
    3) Browser prepares to submit the request
    4) [Chrome] Browser submits a pre-flight request
    5) Server responds with HTTP 200
    6) Browser submits the actual request
    7) Server responds with data
    8) Page shows data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query-string parameters, so there is no data posted; the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data, so more configuration is required (you'll see the configuration below). AJAX POSTs are not friendly with this either, because they don't post in a form.
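    To make the pre-flight step concrete, here is a sketch of what that exchange looks like on the wire. The host, path, and origin below are illustrative placeholders, not values captured from our environment:

        OPTIONS /api/foo HTTP/1.1
        Host: api.example.com
        Origin: http://localhost:51234
        Access-Control-Request-Method: POST
        Access-Control-Request-Headers: content-type

        HTTP/1.1 200 OK
        Access-Control-Allow-Origin: http://localhost:51234
        Access-Control-Allow-Methods: GET, POST, OPTIONS
        Access-Control-Allow-Headers: Content-Type, Accept
        Access-Control-Max-Age: 3600

    Only after a response like this does the browser send the real GET or POST.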
    How to fix it

    The team went through many iterations of self-hair removal, and we think we finally have a working solution. The trial-and-error approach eventually worked, and the sources we referenced along the way are listed above. There are basically three (3) tasks needed to make this work.

    Assumptions: you are using Visual Studio, Web API, and JavaScript; you need Cross-Origin Resource Sharing; and you must support several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET:

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The POST CORS JavaScript function (credit to Zoiner Tejada)

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada)

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except... you're not done yet.

    2. Change Web API controllers to allow CORS

    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server whether it is allowed to ask it for cross-origin resource sharing access, so you need to let the server know it's okay. This is a two-part activity: a) add the appropriate Access-Control-Allow-Origin response header, and b) permit the API functions to respond to various methods, including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if they may ask about permissions. An example of a Web API controller decorated this way follows. NOTE: you'll see a lot of references to using '*' as the header value; for security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Wireless with WEP extremely slow on an Acer Timeline 4810T with a Centrino Wireless-N 1000

    - by noq38
    I've upgraded an Acer Timeline 4810T to Ubuntu 11.10. Everything works fine except for the darn wireless interface (network manager). I just tested the wireless interface over a non-encrypted signal and it works beautifully. The issue is definitely related to WEP. Unfortunately, some of the networks I need to connect to are WEP encrypted, so this is a serious issue that is preventing me from using Ubuntu on my laptop. This was no problem in 11.04 and prior. Is there a simple solution for this? Any suggestions?

    Here's more hardware information; hopefully this helps to debug the network issue:

        sudo lshw -class network
          *-network
               description: Wireless interface
               product: Centrino Wireless-N 1000
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:02:00.0
               logical name: wlan0
               version: 00
               serial: 00:1e:64:3c:5e:e0
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-13-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
               resources: irq:43 memory:d2400000-d2401fff

        lspci
          02:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000

        rfkill list
          0: phy0: Wireless LAN
               Soft blocked: no
               Hard blocked: no
          1: acer-wireless: Wireless LAN
               Soft blocked: no
               Hard blocked: no

    Many thanks for your help!
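    One way to narrow this down (a hedged suggestion, not from the original post) is to take NetworkManager out of the picture and associate with the WEP network manually; the ESSID and key below are placeholders:

        sudo service network-manager stop
        sudo iwconfig wlan0 essid "MyWepNetwork" key s:mywepkey   # s: prefix = ASCII key
        sudo dhclient wlan0

    If throughput is normal this way, the problem is in NetworkManager's WEP handling rather than in the iwlagn driver itself.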

    Read the article

  • Setting lusca and dansguardian iptables on Ubuntu 12.04 to prevent loop

    - by Heri YT
    I have a server with the Ubuntu 12.04 operating system, which runs Lusca as a proxy cache server and DansGuardian as an internet content filter, with the following composition: client browser -> Lusca -> DansGuardian -> internet. All of this runs on one machine only. The following is a partial configuration of my Lusca server:

        http_port 3128 transparent
        cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default

    And these are the (otherwise default) DansGuardian settings:

        filterip =            # blank
        filterport = 8080
        proxyip = 192.168.0.1
        proxyport = 3128

    The questions are: Can all of this go well by relying on just one machine? What causes the "WARNING: Forwarding loop detected for:" message found in /var/log/lusca/cache.log? Is it a problem if we leave it alone, and how do we solve it? Thank you.
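    For what it's worth, the configuration above appears circular: Lusca forwards to its cache_peer on port 8080 (DansGuardian), and DansGuardian forwards back to the proxy on port 3128 (Lusca), which is exactly the kind of cycle that produces the forwarding-loop warning. A hedged sketch of the more usual single-box chain (clients point at DansGuardian, DansGuardian points at Lusca, Lusca fetches directly); the directive names are standard squid/Lusca and DansGuardian options, but treat the values as illustrative:

        # dansguardian.conf: listen on 8080, hand off to Lusca on the same box
        filterport = 8080
        proxyip = 127.0.0.1
        proxyport = 3128

        # Lusca (squid.conf style): no cache_peer pointing back at DansGuardian
        http_port 3128
        # remove the "cache_peer 192.168.0.1 parent 8080 ..." line so Lusca
        # goes to the internet directly instead of looping back

    With the loop removed, the cache.log warnings should stop appearing.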

    Read the article

  • I am unable to find login page for phpmyadmin

    - by Awan
    I was using phpMyAdmin (in WAMP) without a password for root. I decided to set a password for root, went to the Privileges page, and set one. Now, whenever I go to the localhost/phpmyadmin page, it gives me the following error:

        MySQL said: #1045 - Access denied for user 'root'@'localhost' (using password: NO)

        phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    I don't know what the problem is. It is not showing me a login-type page where I can enter a username and password. Any idea what the problem is? Thanks
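    The behaviour described usually means phpMyAdmin is using 'config' authentication with an empty password baked into its config file, so it never shows a login page. A hedged sketch of the change that makes one appear (the file lives under the phpMyAdmin folder of the WAMP install; the exact path varies by version):

        <?php
        // config.inc.php: switch from 'config' auth (no login page) to 'cookie' auth
        $cfg['Servers'][$i]['auth_type'] = 'cookie';   // was 'config'
        // Remove or comment out any hardcoded credentials:
        // $cfg['Servers'][$i]['user'] = 'root';
        // $cfg['Servers'][$i]['password'] = '';
        ?>

    After that change, phpMyAdmin prompts for the username and the new root password instead of silently connecting with the old empty one.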

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books, SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 error handling as well. My book's co-author, Rick Morelan, is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is a brief abstract of the webinar:

        People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN...COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with & without structured error handling.

    Register for the webinar now to learn:

    * How to predict the Error Action and control it
    * Nuances between successful and failing SQL statements
    * Essential SQL Server 2012 configuration options

    Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one winner) during the session; if you are present, you are eligible to win the book. The webinar is scheduled at two different times to accommodate various time zones: 1) 10am ET / 7am PT and 2) 1pm ET / 11am PT. Each webinar will have its own winner, so you can increase your chances by attending both. Do not miss this opportunity; register for the webinar right now. The recordings of the webinar may not be available.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Multiple gcc on Mac OS X

    - by snihalani
    I installed gcc 4.7.1 via MacPorts (port gcc47, 4.7.1_2). I named the executable g+ and placed it in one of my $PATH directories. I use gcc 4.7.1 when I need the C++11 standard; I haven't changed the original g++, so as not to mess up Xcode. I am using Eclipse CDT and running "make all" from its window. It's giving me:

        20:12:40 **** Build of configuration Default for project 2804-hw2 ****
        make all
        g+ -c -Wall -std=c++11 main.cpp -o main.o
        make: g+: No such file or directory
        make: *** [main.o] Error 1
        20:12:40 Build Finished (took 89ms)

    Here is my makefile:

        CC=g+
        CFLAGS=-c -Wall -std=c++11
        LDFLAGS=
        SOURCES=main.cpp Vector3D.cpp
        OBJECTS=$(SOURCES:.cpp=.o)
        EXECUTABLE=exec

        all: $(SOURCES) $(EXECUTABLE)

        $(EXECUTABLE): $(OBJECTS)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        .cpp.o:
        	$(CC) $(CFLAGS) $< -o $@

        clean:
        	rm $(EXECUTABLE) $(OBJECTS)

    How do I make Eclipse detect my g+?
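    A hedged workaround: GUI-launched applications on OS X do not read your shell profile, so the make that Eclipse spawns may see a different $PATH than your terminal does. Using the compiler's absolute path in the makefile sidesteps that entirely; the MacPorts locations below are the usual defaults but may differ on your machine:

        # MacPorts installs binaries under /opt/local/bin; it also creates the
        # versioned name g++-mp-4.7 alongside whatever symlink you made.
        CC=/opt/local/bin/g+
        # or: CC=/opt/local/bin/g++-mp-4.7

    Alternatively, launching Eclipse from a terminal (rather than the Dock) makes it inherit the terminal's PATH, including the directory where g+ lives.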

    Read the article

  • Is there a way to import email from the raw email files?

    - by Chris Schmitz
    I have a client who recently switched hosts. When they switched, they didn't back up their email before updating their configuration settings, so they lost everything. However, I was able to log in to their old hosting control panel and download their mail folder. I am wondering if there is a way to extract their emails and/or contacts from the files. I'm not sure what type of files they are (there is no extension), but the directory is structured like this:

        mail/
            .Drafts/
            .Sent/
            .Trash/
            cur/
            new/
            theirdomain.com/
            tmp/
            [email protected]
            maildir

    Inside the theirdomain.com folder there is a folder for each account, and inside that is a folder called "cur" that holds a whole bunch of files with names like:

        1292945327.H169813P25958.uscentral21.myserverhosts.com,S=10117/2,S

    If I preview those files I can see the actual email messages inside them, but I have no idea how to get that information from those files into an email client. Anyone know of a way to work with these files? Thanks in advance for any insight you can share!
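    That layout (cur/new/tmp plus one file per message) is the standard Maildir format, and each file is a plain RFC 822 message. As a hedged sketch (the account path below is an assumption based on the directory listing), Python's standard mailbox module can walk a Maildir and repack it as a single mbox file, which most desktop clients such as Thunderbird can import:

        import mailbox

        # Hypothetical path: the per-account folder that contains cur/, new/, tmp/
        maildir = mailbox.Maildir("mail/theirdomain.com/someuser", factory=None)

        # mbox is a single-file format most email clients can import
        mbox = mailbox.mbox("recovered.mbox")
        for key, msg in maildir.items():
            # Print a quick inventory while copying each message across
            print(msg.get("Date"), "|", msg.get("From"), "|", msg.get("Subject"))
            mbox.add(msg)
        mbox.flush()
        mbox.close()

    Contacts are a different story: Maildir only holds messages, so any address book would have lived elsewhere on the old host.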

    Read the article

  • Ubuntu server failing daily

    - by deanvz
    Symptoms:

    * Server becomes unresponsive - increase in load, all services stop
    * Loss of connectivity - no ping or SSH
    * Must flush MySQL hosts after reboot, as MySQL refuses new connections
    * Intermittent Apache crashes
    * Generally happens in the early-morning hours; two days of the week are, however, excluded

    Changes made:

    * Updated the OS to Ubuntu 10.04.4 LTS (not sure if the MySQL server was also updated in the process; current MySQL version: mysql Ver 14.14 Distrib 5.1.63, for debian-linux-gnu (x86_64) using readline 6.1)
    * Updated Plesk from 10.4.4 Update #47 to 11.0.9 Update #23
    * Rebooted on an almost daily basis
    * Stopped all crons for the times corresponding to the server crashes
    * Created a MySQL log to monitor the lock times on queries

    Possible causes:

    * Failing hardware
    * Incorrect software configuration (MySQL, Apache, etc.)

    Responsibilities:

    * Small web server
    * Runs our billing system (WHMCS)
    * Responsible for crons
    * Bulk-email solution - no delivery times coincide with the server crashes

    Proposed solutions:

    * Move the machine over to a VM
    * Format, restore the Plesk server backup, and take it from there?

    Side note: there seems to be a general, intermittent Apache failure across all our Linux servers. Are we doing something fundamentally wrong in the Apache config? (I understand that this is a secondary question; just making sure it isn't holding any relevance.)
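    One pattern worth ruling out (a hedged suggestion, not from the original post): MySQL refusing connections after an overnight load spike is a classic symptom of the kernel's OOM killer taking processes down. The standard Ubuntu logs usually show it:

        # Look for out-of-memory kills and segfaults around the crash times
        grep -iE "out of memory|oom-killer" /var/log/syslog /var/log/kern.log
        grep -i "segfault" /var/log/apache2/error.log /var/log/syslog

    If oom-killer entries line up with the crash times, the question becomes what eats the memory at that hour (Apache MaxClients set too high, a runaway job, backups), rather than which package is broken.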

    Read the article

  • DSL PPPoE connection not working?

    - by Mussnoon
    I use a wired PPPoE connection to connect to the Internet. On Windows, what I need to do is put in a static IP address, gateway, subnet mask, and DNS servers for my LAN card; then I create a dialer for a PPPoE connection, put in my user name, the service name, and the password, and "dial" this connection. It works fine.

    On Ubuntu 10.04, I have tried setting things up in a similar fashion: I put in all the static addresses for the "automatic" wired connection, then the user name, service name, and password for a "DSL" connection. It worked for a while, then stopped. I tried putting all the details within the DSL configuration dialog; same thing, it worked for a while, then stopped. I tried deleting the ethernet connection and keeping only the DSL one with all the numbers put in place; again, it worked for a while, then stopped. Each time it connected, it connected randomly, after trying a few times, and either stopped working within a few minutes or stopped after I rebooted. I have deleted and remade the connection dozens of times, even with different names, but nothing seems to work. I have also tried pppoeconf from the terminal; that didn't work either.

    I have checked /var/log/kern.log, but nothing changes in the file when I try to connect. I have also checked /sbin/route, but gedit can't even open it (it says it can't figure out the character encoding; /sbin/route is a binary, so that is expected). The "connection established" notification pops up from the top right corner the same way as when the computer is actually connected to a network. Can anyone figure out what's wrong and how it can be solved?
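    A hedged debugging path, using the standard ppp package tools that pppoeconf configures: pppd logs the reason a session drops to the system log rather than to kern.log, so that is where to look:

        sudo pon dsl-provider        # bring up the pppoeconf-created connection
        plog                         # show pppd's recent log lines
        grep pppd /var/log/syslog | tail -n 50
        /sbin/route -n               # print the routing table (run it; don't open it in an editor)

    Messages along the lines of "LCP: timeout" point at the link itself, while authentication failures point at the credentials or service name; either way the pppd log narrows it down far better than the NetworkManager notification does.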

    Read the article

  • How to route traffic from a subnet 10.0.0.x to a network 200.208.88.17

    - by Guilherme Longo
    I have the following configuration:

        Router:
            IP: 200.208.88.17 (Internet)
            MASK: 255.255.255.40

        Server 2003:
            IP: 10.0.0.1 (with DHCP server activated)
            DHCP scope: 10.0.0.11 - 10.0.0.254
            MASK: 255.255.255.0

        Clients:
            IP: 10.0.0.11 - 10.0.0.254
            MASK: 255.255.255.0

    At this point I have all computers set up on one switch, and all clients are receiving IPs from the DHCP server. I need to enable internet access on every client, but I am not sure how to route the traffic from the clients to the router that provides internet access. Could you please point me in the right direction?
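    Part of the answer, sketched under the assumption that the Server 2003 box (running RRAS/NAT) is meant to be the gateway between the two networks: the clients never learn a default gateway unless the DHCP scope hands one out via option 003 (Router). That can be set in the DHCP console (Scope Options -> 003 Router) or, from memory and worth verifying against "netsh dhcp server help", from a command prompt on the server:

        netsh dhcp server scope 10.0.0.0 set optionvalue 003 IPADDRESS 10.0.0.1

    That points the clients at 10.0.0.1; the server itself then needs routing/NAT configured (e.g., RRAS) between the 10.0.0.x subnet and the 200.208.88.17 uplink, since the two networks are on different subnets.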

    Read the article

  • Problem with Lenovo x200s Wifi under Ubuntu Karmic

    - by oneself
    Hi, I have just gotten my Lenovo X200s laptop, and I am installing Ubuntu 9.10 Karmic on it. The installation went through without a hitch, but I can't get my wifi to work.

        lspci | grep Network

    produces the following results:

        00:19.0 Ethernet controller: Intel Corporation 82567LM Gigabit Network Connection (rev 03)
        03:00.0 Network controller: Realtek Semiconductor Co., Ltd. Device 8172 (rev 10)

    The weird part is that when I turn the wifi hardware switch on the side of the laptop on and off, I get the following printed in /var/log/messages:

        Dec 30 23:24:48 temp-laptop kernel: [  213.432302] usb 4-2: USB disconnect, address 2
        Dec 30 23:24:52 temp-laptop kernel: [  217.276310] usb 4-2: new full speed USB device using uhci_hcd and address 3
        Dec 30 23:24:52 temp-laptop kernel: [  217.441759] usb 4-2: configuration #1 chosen from 1 choice

    Does Ubuntu think my wifi card is a USB device? Am I missing some driver? What can I do to fix this? Please, help!
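    A hedged check with standard tools, nothing specific to this laptop: the wifi card here is the PCI Realtek device, not a USB one; the USB messages on switch toggling are more plausibly the built-in Bluetooth module being powered on and off by the same switch. Whether a driver has actually bound to the Realtek chip is visible with:

        lspci -k -s 03:00.0    # prints "Kernel driver in use:" if one is bound
        rfkill list            # shows soft/hard blocks on the radios

    If no kernel driver is listed, that would fit the era: the Realtek 8172 generation relied on the rtl8192se driver, which was not yet shipped in Karmic-era kernels and had to be built from Realtek's out-of-tree source.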

    Read the article

  • Google Chrome Enterprise - Any Gotchas?

    - by AJ
    Has anyone rolled out Google Chrome to a medium/large organisation? I would like to suggest it to our management (because I think it would work very nicely with some of our intranet applications), and I would like to find out what problems (if any) the rest of the world has been experiencing with it. Have you found any problems? I'm thinking of enterprise-level problems; we can probably solve anything that just requires a specific configuration, proxy setting, etc. I don't really know what might be a problem, but I wonder if there are any usability problems that occur when non-geeks use it, or problems which only rear their ugly heads when you've got 50 users all doing something unexpected. Any helpful information or suggestions would be appreciated. Thanks.

    UPDATE: We tend to use Microsoft stuff, so SharePoint, IIS, and SQL Server are typical building blocks of internal sites. (Thanks, @Jim, for reminding me to mention that.)

    Read the article

  • Can't log in to just-installed View Administrator

    - by matarvai81
    Hi, we are starting to test View for our purposes. I have created a new test environment: a new vdidemo Active Directory, a new Virtual Center, etc. I just installed the View Connection Server component on a new server and am trying to do the initial configuration, but when trying to log in I get the following error:

        Error accessing the View Administrator. Contact the system administrator

    The log file shows the following:

        08:14:08,925 INFO  LoginBean  User administrator has failed to authenticate to View Administrator

    What is causing this problem? How can I log in and start to test VDI?

    Read the article

  • How to enter BusyBox when booting?

    - by ???
    I accidentally installed the cloud-init package in Ubuntu, and it now blocks me from booting. Recovery mode doesn't work either, because cloud-init installed some upstart job configurations. So I want to get into a BusyBox shell to remove /etc/init/cloud-init*.conf, but there seems to be no way to do it. I can press Ctrl+Alt+SysRq, which brings up the rough magic-SysRq actions, but there is no BusyBox option there. So, is it possible? My CD-ROM drive is broken, so I can't use a live CD either.
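    A hedged recovery path that needs neither BusyBox nor a live CD: boot a minimal shell by editing the kernel command line from the GRUB menu (hold Shift during boot if the menu is hidden):

        # At the GRUB menu: highlight the kernel entry, press 'e',
        # and append this to the line starting with "linux":
        init=/bin/bash

        # After booting to the bash prompt, the root fs is read-only; remount it:
        mount -o remount,rw /
        rm /etc/init/cloud-init*.conf
        sync
        # then reboot, e.g. with: reboot -f

    Because init=/bin/bash bypasses upstart entirely, the broken cloud-init jobs never get a chance to run.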

    Read the article

  • Exchange 2010 SP2 Not Allowing Logon for Users with Expired Passwords

    - by JJ.
    When we provision users, we set the "User must change password at next logon" flag and instruct them to go to OWA to log in for the first time and change their password. Using the ChangeExpiredPasswordEnabled registry setting, as explained here: http://technet.microsoft.com/en-us/library/bb684904.aspx, worked well prior to the SP2 installation. It allows users with 'expired' passwords to log on and forces a password change before they can access OWA. We just installed Exchange 2010 Service Pack 2, and now it's no longer working: users with this flag set ('expired' passwords) can't log in at all unless we clear the flag. (The original post included a screenshot of the registry key configuration as set with SP2 installed.) Any suggestions as to how I might fix this? Or did MS break this feature in Service Pack 2?
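    For reference, a hedged sketch of the setting the TechNet article describes; this is from memory rather than from the post, so double-check the path against the article:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchange OWA]
        "ChangeExpiredPasswordEnabled"=dword:00000001

    An IIS reset is typically needed for the change to take effect. If the value is present and correct after SP2, the behaviour change in the service pack itself becomes the prime suspect.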

    Read the article

  • anyone doing php-fpm and APC? cache files missing in /tmp

    - by Vangel
    I have been using XCache for a long time. Recently I put together php-fpm and nginx, and I see APC is installed and enabled in the configuration. I was assuming that APC would automatically opcode-cache the files and store them somewhere. According to the config it should be under /tmp/apc.XXXX, but there is no such thing there. What am I missing? Any clues to investigate would be of much help. Please note I am using PHP 5.3 FPM. Thanks, mates.

    UPDATE: I looked at apc.php and it says things are fine, so I will just have to take its word for it.
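    A possibly relevant detail, hedged since the post doesn't show the actual config: APC only creates files under /tmp when it is built with file-backed mmap support and apc.mmap_file_mask is set; with anonymous mmap or SysV shm it keeps the cache purely in memory, and an empty /tmp is normal. The relevant php.ini/conf.d settings look like:

        ; apc.ini - illustrative values
        apc.enabled = 1
        apc.shm_size = 64M
        ; Only needed if file-backed mmap is desired; otherwise no /tmp files is expected:
        ; apc.mmap_file_mask = /tmp/apc.XXXXXX

    The bundled apc.php status page remains the most direct way to confirm hits and misses, as the update notes.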

    Read the article

  • Team Foundation Server 2012 Build Global List Problems

    - by Bob Hardister
    My experience with the upgrade and use of TFS 2012 has been very positive. I did come across a couple of issues recently that tripped things up for a while.

    ISSUE 1

    The first issue is that 2012, prior to Update 1, published an invalid build list item value to the collection global list. In 2010, the build global list's item-value syntax is an underscore between the build definition and the build number. In the 2012 RTM, this underscore was replaced with a backslash, which is invalid. Specifically, an upload of the global list fails when the backslash is followed at some point by a period. The error when using the API is:

        <detail ExceptionMessage="TF26204: The account you entered is not recognized. Contact your Team Foundation Server administrator to add your account." BaseExceptionName="Microsoft.TeamFoundation.WorkItemTracking.Server.ValidationException"><details id="600019" "http://schemas.microsoft.com/TeamFoundation/2005/06/WorkItemTracking/faultdetail/03" /></detail>

    When uploading the global list via the process editor, an error dialog is shown instead (screenshot omitted). This issue is corrected in Update 1, where the backslash is changed to a forward slash.

    ISSUE 2

    The second issue is that when upgrading from 2010 to 2012, the 2010 builds are not published to the 2012 global list. After the upgrade, the 2012 global list doesn't contain any builds; only builds run in 2012 are published to it. This was reported to the MSDN forums and Connect.

    To correct this, I wrote a utility to pull all the builds and recreate the builds global list for each project in each collection. It is a console application with a Program.cs, a GlobalLists.cs, and an App.config (not published here). The utility connects to TFS 2012 and loops through the collections, or through a target collection as specified in the App.config. It then loops through the projects, the build definitions, and the builds; creates a global list for each project that has at least one build; and imports the new list to TFS. Here's the code for the Program and GlobalLists classes.

    Program.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.TeamFoundation.Framework.Client;
        using Microsoft.TeamFoundation.Framework.Common;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Server;
        using System.IO;
        using System.Xml;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using System.Diagnostics;
        using Utilities;
        using System.Configuration;

        namespace TFSProjectUpdater_CLC
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Build a timestamped log file name.
                    DateTime temp_d = System.DateTime.Now;
                    string logName = temp_d.ToShortDateString();
                    logName = logName.Replace("/", "_");
                    logName = logName + "_" + temp_d.TimeOfDay;
                    logName = logName.Replace(":", ".");
                    logName = "TFSGlobalListBuildsUpdater_" + logName + ".log";
                    Trace.Listeners.Add(new TextWriterTraceListener(Path.Combine(ConfigurationManager.AppSettings["logLocation"], logName)));
                    Trace.AutoFlush = true;
                    Trace.WriteLine("Start:" + DateTime.Now.ToString());
                    Console.WriteLine("Start:" + DateTime.Now.ToString());

                    string tfsServer = ConfigurationManager.AppSettings["TargetTFS"].ToString();
                    GlobalLists gl = new GlobalLists();

                    // Replace this with the URL to your TFS instance.
                    Uri tfsUri = new Uri("https://" + tfsServer + "/tfs");
                    //bool foundLite = false;
                    TfsConfigurationServer config = new TfsConfigurationServer(tfsUri, new UICredentialsProvider());
                    config.EnsureAuthenticated();

                    ITeamProjectCollectionService collectionService = config.GetService<ITeamProjectCollectionService>();
                    IList<TeamProjectCollection> collections = collectionService.GetCollections().OrderBy(collection => collection.Name.ToString()).ToList();

                    // Target collection (empty means all collections except the exclusions below).
                    string targetCollection = ConfigurationManager.AppSettings["targetCollection"];
                    foreach (TeamProjectCollection coll in collections)
                    {
                        if (targetCollection.Equals(string.Empty))
                        {
                            if (!coll.Name.Equals("TFS Archive") && !coll.Name.Equals("DefaultCol") && !coll.Name.Equals("Team Project Template Gallery"))
                            {
                                doWork(coll, tfsServer);
                            }
                        }
                        else
                        {
                            if (coll.Name.Equals(targetCollection))
                            {
                                doWork(coll, tfsServer);
                            }
                        }
                    }

                    Trace.WriteLine("Finished:" + DateTime.Now.ToString());
                    Console.WriteLine("Finished:" + DateTime.Now.ToString());
                    if (System.Diagnostics.Debugger.IsAttached)
                    {
                        Console.WriteLine("\nHit any key to exit...");
                        Console.ReadKey();
                    }
                    Trace.Close();
                }

                static void doWork(TeamProjectCollection coll, string tfsServer)
                {
                    GlobalLists gl = new GlobalLists();

                    // Target project.
                    string targetProject = ConfigurationManager.AppSettings["targetProject"];
                    Trace.WriteLine("Collection: " + coll.Name);
                    Uri u = new Uri("https://" + tfsServer + "/tfs/" + coll.Name.ToString());
                    TfsTeamProjectCollection c = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(u);
                    ICommonStructureService icss = c.GetService<ICommonStructureService>();
                    try
                    {
                        Trace.WriteLine("\tChecking Collection Global Lists.");
                        gl.RebuildBuildGlobalLists(c);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Exception! :" + coll.Name);
                    }
                }
            }
        }

    GlobalLists.cs

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;
        using Microsoft.TeamFoundation.Framework.Common;
        using Microsoft.TeamFoundation.Server;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using Microsoft.TeamFoundation.Build.Client;
        using System.Configuration;
        using System.Xml;
        using System.Xml.Linq;
        using System.Diagnostics;

        namespace Utilities
        {
            public class GlobalLists
            {
                string GL_NewList = @"<gl:GLOBALLISTS xmlns:gl=""http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists"">
          <GLOBALLIST>
          </GLOBALLIST>
        </gl:GLOBALLISTS>";

                public void RebuildBuildGlobalLists(TfsTeamProjectCollection _tfs)
                {
                    WorkItemStore wis = new WorkItemStore(_tfs);

                    // Export the current global lists file for the collection to save as a backup.
                    XmlDocument globalListsFile = wis.ExportGlobalLists();
                    globalListsFile.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_backupGlobalList.xml");
                    LogExportCurrentCollectionGlobalListsAsBackup(_tfs);

                    // Build a new global build list from each build definition within each team project.
                    IBuildServer buildServer = _tfs.GetService<IBuildServer>();
                    foreach (Project p in wis.Projects)
                    {
                        XmlDocument newProjectGlobalList = new XmlDocument();
                        newProjectGlobalList.LoadXml(GL_NewList);
                        LogInstanciateNewProjectBuildGlobalList(_tfs, p);
                        BuildNewProjectBuildGlobalList(_tfs, wis, newProjectGlobalList, buildServer, p);
                        LogEndOfProject(_tfs, p);
                    }
                }

                // Private Methods

                private static void BuildNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, WorkItemStore wis, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p)
                {
                    // Locate the template node.
                    XmlNamespaceManager nsmgr = new XmlNamespaceManager(newProjectGlobalList.NameTable);
                    nsmgr.AddNamespace("gl", "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
                    XmlNode node = newProjectGlobalList.SelectSingleNode("//gl:GLOBALLISTS/GLOBALLIST", nsmgr);
                    LogLocatedGlobalListNode(_tfs, p);

                    // Add the name attribute for the project build global list.
                    XmlElement buildListNode = (XmlElement)node;
                    buildListNode.SetAttribute("name", "Builds - " + p.Name);
                    LogAddedBuildNodeName(_tfs, p);

                    // Add new builds to the team project build global list.
                    bool buildsExist = false;
                    if (AddNewBuilds(_tfs, newProjectGlobalList, buildServer, p, node, buildsExist))
                    {
                        // Import the new build global list for each project that has builds.
                        // Write out a temp copy of the global list file to be imported.
                        newProjectGlobalList.Save(@"c:\temp\" + _tfs.Name.Replace("\\", "_") + "_" + p.Name + "_" + "newGlobalList.xml");
                        LogImportReady(_tfs, p);
                        wis.ImportGlobalLists(newProjectGlobalList.InnerXml);
                        LogImportComplete(_tfs, p);
                    }
                }

                private static bool AddNewBuilds(TfsTeamProjectCollection _tfs, XmlDocument newProjectGlobalList, IBuildServer buildServer, Project p, XmlNode node, bool buildsExist)
                {
                    var buildDefinitions = buildServer.QueryBuildDefinitions(p.Name);
                    foreach (var buildDefinition in buildDefinitions)
                    {
                        var builds = buildDefinition.QueryBuilds();
                        foreach (var build in builds)
                        {
                            // Insert the builds into the current build list node in the correct 2012 format.
                            buildsExist = true;
                            XmlElement listItem = newProjectGlobalList.CreateElement("LISTITEM");
                            listItem.SetAttribute("value", buildDefinition.Name + "/" + build.BuildNumber.ToString().Replace(buildDefinition.Name + "_", ""));
                            node.AppendChild(listItem);
                        }
                    }
                    if (buildsExist)
                        LogBuildListCreated(_tfs, p);
                    else
                        LogNoBuildsInProject(_tfs, p);
                    return buildsExist;
                }

                // Logging Methods

                private static void LogExportCurrentCollectionGlobalListsAsBackup(TfsTeamProjectCollection _tfs)
                {
                    Trace.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
                    Console.WriteLine("\tExported Global List for " + _tfs.Name + " collection.");
                }

                private void LogInstanciateNewProjectBuildGlobalList(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tInstanciated the new build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tInstanciated the new build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogLocatedGlobalListNode(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tLocated the build global list node for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tLocated the build global list node for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogAddedBuildNodeName(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tAdded the name attribute to the build global list for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tAdded the name attribute to the build global list for project \n\t\t\t" + p.Name + " in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogBuildListCreated(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tAdded all builds into the " + "Builds - " + p.Name + " list in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tAdded all builds into the " + "Builds - \n\t\t\t" + p.Name + " list in the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static
                void LogNoBuildsInProject(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tNo builds found for project " + p.Name + " in the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tNo builds found for project " + p.Name + " \n\t\t\tin the " + _tfs.Name + " collection.");
                }

                private void LogEndOfProject(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tEND OF PROJECT " + p.Name);
                    Trace.WriteLine(" ");
                    Console.WriteLine("\t\tEND OF PROJECT " + p.Name);
                    Console.WriteLine();
                }

                private static void LogImportReady(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tReady to import the build global list for project " + p.Name + " to the " + _tfs.Name + " collection.");
                    Console.WriteLine("\t\tReady to import the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection.");
                }

                private static void LogImportComplete(TfsTeamProjectCollection _tfs, Project p)
                {
                    Trace.WriteLine("\t\tImport of the build global list for project " + p.Name + " to the " + _tfs.Name + " collection completed.");
                    Console.WriteLine("\t\tImport of the build global list for project \n\t\t\t" + p.Name + " to the \n\t\t\t" + _tfs.Name + " collection completed.");
                }
            }
        }

    Read the article

  • Internet Explorer 10 Crashing With BEX Error (Cannot reset to default)

    - by Abdul Wajid
    I am facing an issue with IE 10 on my Windows Server 2012 machine. Every time I access a local intranet web page that opens a new window automatically, or open any page in a new tab or window, IE crashes with the error below:

        Problem signature:
          Problem Event Name:       BEX
          Application Name:         IEXPLORE.EXE
          Application Version:      10.0.9200.16384
          Application Timestamp:    50107ee0
          Fault Module Name:        StackHash_1903
          Fault Module Version:     0.0.0.0
          Fault Module Timestamp:   00000000
          Exception Offset:         PCH_03_FROM_IEFRAME+0x0026B982
          Exception Code:           c0000005
          Exception Data:           00000008
          OS Version:               6.2.9200.2.0.0.272.7
          Locale ID:                1033
          Additional Information 1: 1903
          Additional Information 2: 1903bfd460d0d45dac22ad6eb30cc258
          Additional Information 3: 6536
          Additional Information 4: 6536faeff1b2d044aae2c2dcb49895a2

    I also tried to reset the configuration to its defaults, but that fails as well. Any idea how I can resolve the issue? I also ran ediagcmd.exe and uploaded the resulting CAB file; please see this link to download that CAB file. Thank you.

    Read the article

  • All ESX4 hosts not responding after rebooting vCenter server

    - by hojyokinmo
    Hello, I'm building a setup of five ESX hosts plus a vCenter server, and I'm testing FT (Fault Tolerance) on the system. Today the vCenter OS (Windows 2008) demanded a system reboot. After rebooting the vCenter server, all of the ESX hosts show an alert icon and are marked "not responding". I tried to reconnect the ESX (version 4) hosts to vCenter again; they came back normally at first, but after a few seconds they turned red again. All of them have ping connectivity to the vCenter server. There are no red messages in Tasks and Events. Two messages are shown in the cluster's Summary:

    * There aren't sufficient resources for HA failover.
    * Can't contact primary HA agent.

    I'm also not able to reset the FT configuration, because none of the VMs can be accessed from vCenter. What should I do?

    Read the article

  • Problem installing Apache 2.4.2 on Ubuntu 12.04

    - by Michael
    I followed the steps on this site (http://www.discusswire.com/apache-2-4-installation-ubuntu/) to install Apache 2.4.2 on Ubuntu 12.04, but it seems Apache is not installed. Here's what I did:

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2
        wget http://apache.mirrors.pair.com/httpd/httpd-2.4.2.tar.gz
        tar -xzvf httpd-2.4.2.tar.gz && cd httpd-2.4.2
        sudo ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    When I tried to start Apache with sudo /usr/local/apache2/bin/apachectl start, I got the following warning:

        AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    When I type top at the terminal, Apache is not there, and going to http://localhost/ (or 127.0.0.1, or even 127.0.1.1) shows a "Can't establish connection to the server" message.

    PS: I checked the error log and it showed:

        [Fri Jul 27 15:49:00.703901 2012] [proxy_balancer:emerg] [pid 20781] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded??
        [Fri Jul 27 15:49:00.704083 2012] [:emerg] [pid 20781] AH00020: Configuration Failed, exiting

    What am I missing? Thanks, Michael
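    The error log actually pinpoints the failure: in Apache 2.4, mod_proxy_balancer depends on mod_slotmem_shm, and with --enable-mods-shared=all the module gets built but is not necessarily loaded. A hedged sketch of the httpd.conf additions (paths assume the --prefix used above):

        # /usr/local/apache2/conf/httpd.conf
        LoadModule slotmem_shm_module modules/mod_slotmem_shm.so

        # Quiets the AH00558 warning (cosmetic; not the cause of the crash):
        ServerName localhost

    With slotmem_shm loaded, the 'shm' provider lookup should succeed and httpd should stay running after apachectl start.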

    Read the article

  • Should I consolidate multiple identical VMs into BSD jails?

    - by Josh
    We run a number of Openfire XMPP/Jabber servers. Due to the way Openfire works, we cannot easily run multiple Openfire instances on one server, so I have five identical VMware ESXi VMs, each with CentOS, MySQL, Java, and Openfire. They're exactly the same except for their IP addresses, the actual Openfire MySQL database, and its config file. I am wondering if this is the optimal configuration, or if it would be better to move these VMs to a single FreeBSD machine and put each one inside a FreeBSD jail. Specifically, I am wondering if the benefit of VMware's Transparent Page Sharing (TPS) would outweigh the cost of running five identical OSes. Would I end up using less memory with one large FreeBSD machine and Java running in BSD jails?

    Read the article

  • 4096 and 8192 block-size reads slower than writes using LSI 9361-8i RAID10

    - by Min Hong Tan
    Is it possible that 1024 and 2048 block-size reads are faster than 4096 and 8192? I'm using an LSI 9361-8i in RAID 10 with 8 x Kingston E50 250 GB drives. Results:

        1024 = Write: 2,251 MB/s   Read: 2,625 MB/s
        2048 = Write: 2,141 MB/s   Read: 3,672 MB/s
        4096 = Write: 2,147 MB/s   Read:   231 MB/s
        8192 = Write: 2,147 MB/s   Read:   442 MB/s

    How is this possible? And below are the readings from when I simply wanted to test the RAID 10 function with a disaster test, by taking out one of the 250 GB hard disks. The results are different:

        1024 = Write: 825 MB/s   Read: 1,139 MB/s
        2048 = Write: 797 MB/s   Read: 1,312 MB/s
        4096 = Write: 911 MB/s   Read: 1,342 MB/s
        8192 = Write: 786 MB/s   Read: 1,204 MB/s

    Why are the 4096 and 8192 block results so different? Can anyone explain whether this is normal, or whether I need to do some tuning/configuration? Will it affect my Linux host's performance?
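    One way to check whether the anomaly is real or a benchmarking artifact (a hedged suggestion; the device path and parameters are placeholders) is to rerun the pattern with fio using direct I/O, so neither the controller cache nor the page cache skews the small-block numbers:

        fio --name=seqread --filename=/dev/sdX --rw=read --bs=4k \
            --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --group_reporting

    Repeating with --bs=1k, 2k, and 8k gives directly comparable numbers; if 4k reads still collapse under direct I/O, suspicion shifts to the controller's read-ahead and stripe-size settings rather than to the test tool.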

    Read the article

  • How to skip pushing the default gateway via DHCP in OpenWRT?

    - by francadaval
    I have a router running OpenWRT, and I want it to handle IP addressing with DHCP without setting a default gateway. I added a DHCP-Option parameter with the value 3,0.0.0.0, which is supposed to set the default gateway pushed by DHCP. Instead, the router's IP is still handed out as the default gateway to DHCP clients. How can I push a null default gateway (0.0.0.0), or none at all, to connections configured by DHCP? As said in a comment: I want this router to serve a VirtualBox network that shouldn't receive a default gateway via DHCP.
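    A hedged sketch for OpenWRT's dnsmasq-based DHCP: dnsmasq treats an option listed with no value as "do not send this option at all", which is usually what's wanted here rather than literally sending 0.0.0.0. In /etc/config/dhcp that looks like (values illustrative):

        config dhcp 'lan'
            option interface 'lan'
            option start '100'
            option limit '150'
            option leasetime '12h'
            # Empty option 3 (router) => clients receive no default gateway
            list dhcp_option '3'

    followed by /etc/init.d/dnsmasq restart. The equivalent raw dnsmasq.conf line is simply dhcp-option=3.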

    Read the article

  • How to re-configure graphics from Intel integrated to Intel / ATI switchable?

    - by Bucic
    There are lots of 'how to get switchable graphics to work' guides, but I found none on how to configure a system for switchable-graphics operation on Ubuntu from the ground up, nor any explaining the current driver situation for particular computer models (integrated + discrete combinations). Example: https://help.ubuntu.com/community/HybridGraphics

    My system being mature and running on the Intel integrated card also makes things complicated.

    System information:

    * Ubuntu 12.04 amd64, installed clean, with the system configured to use only the integrated Intel card
    * Lenovo ThinkPad T500
    * Intel GMA 4500MHD / ATI Mobility Radeon HD 3650

    Current situation: a mature and up-to-date system with no configuration changes beyond what's given above. I've made a backup image of the system (Clonezilla), so regardless of what's written below, let's assume that's our starting point. If something in "What I have already tried" is not clear, you may as well disregard it.

    What I have already tried: configuring the BIOS for switchable graphics and then:

    * Installing Additional Hardware drivers - returned an error.
    * Installing the proprietary amd-driver-installer-12.6-legacy-x86.x86_64.run automatically - the system starts in 'low-graphics mode'.
    * Fixing that as per https://help.ubuntu.com/community/BinaryDriverHowto/ATI#Manually_installing_Catalyst_12.6.2C_special_case_for_Intel.2BAC8-ATI_hybrid_graphics - got lost, gave up.

    Please note that while configuring the BIOS for integrated graphics only is pretty straightforward, configuring it for switchable graphics is not.
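    For the open-source driver path (a hedged aside, not from the question): with the radeon driver instead of Catalyst, the kernel's vga_switcheroo interface can verify which GPU is active and power the unused one off. It requires a kernel built with VGA_SWITCHEROO, debugfs mounted, and the BIOS set to switchable mode:

        sudo mount -t debugfs none /sys/kernel/debug    # if not already mounted
        cat /sys/kernel/debug/vgaswitcheroo/switch      # lists IGD (Intel) and DIS (ATI) with power state
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # power down the inactive GPU

    Whether this works on a given T500 still depends on the BIOS mode, which matches the observation that the BIOS setting is the non-straightforward part.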

    Read the article

  • Need help setting up mail DNS records

    - by Dave
    Hi, we are hosting our web site on HostMonster, but we want our email to continue to be hosted at the old site. Our domain points to the HostMonster DNS servers, but I can't figure out the right configuration for the remote email servers. We have one MX entry:

        priority: 0
        domain: ourdomain.com

    And then we have these DNS entries:

        name: mail.ourdomain.com    ttl: 14400   class: IN   type: A   record: old.host.ip.address
        name: mail1.ourdomain.com   ttl: 14400   class: IN   type: A   record: old.host.secondip.address

    Can someone tell me what I need to add or edit to get mail to route correctly to our old host? Thanks, Dave
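    A hedged sketch of the likely fix, in zone-file notation (the A-record addresses keep the placeholders from the question): an MX record must point at a mail host name, not at the bare domain. As configured, MX 0 ourdomain.com delivers mail to whatever ourdomain.com's A record resolves to, i.e., the new web host. Pointing the MX at the existing mail host name sends mail to the old server instead:

        ourdomain.com.        IN  MX  0  mail.ourdomain.com.
        mail.ourdomain.com.   IN  A      old.host.ip.address
        mail1.ourdomain.com.  IN  A      old.host.secondip.address

    In HostMonster's control panel this is typically the MX entry's "points to" / destination field rather than a raw zone file.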

    Read the article
