Search Results

Search found 9816 results on 393 pages for 'blade servers'.


  • How does MSN filter spam?

    - by Marius
    I am trying to create a newsletter for our business. The last few days have been spent testing, and one of the things I have noticed is that MSN seemingly randomly filters out some of my test messages. This is super-frustrating. I like the PEAR Mail_Mime package and have been using that. I may send one email from one of our servers and the message gets through, and a minute later the same message sent from our other server ends up in the junk folder. Then, if I add an attachment, the same message passes through the filter from the server that was previously blocked. I think. What the ####? Is this like rolling dice, without me having any control over what is treated as junk and what isn't? I have sent email from several servers, all of which are shared hosts, but I am not sure that is the problem. The problem is that it is seemingly random how MSN filters email: some messages get through and others don't, for seemingly irrational reasons. I am running out of ideas, but I am not giving up. Therefore I am writing to you for HARDCORE technical info on how MSN filters spam.
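    For context, a minimal sketch of the sending path described above using PEAR's Mail and Mail_mime packages (the addresses, subject and SMTP settings are placeholders, not values from the post); generating both a text and an HTML part and letting Mail_mime build the MIME headers at least removes malformed MIME as a variable between the two servers.

        require_once 'Mail.php';
        require_once 'Mail/mime.php';

        $mime = new Mail_mime("\r\n");
        $mime->setTXTBody("Plain-text version of the newsletter.");
        $mime->setHTMLBody("<h1>Newsletter</h1><p>HTML version of the newsletter.</p>");

        // get() is called before headers() so the MIME boundaries are in place
        $body    = $mime->get();
        $headers = $mime->headers(array(
            'From'    => 'Newsletter <[email protected]>',   // placeholder sender
            'Subject' => 'Test newsletter',
        ));

        $mail = Mail::factory('smtp', array('host' => 'localhost'));
        $mail->send('[email protected]', $headers, $body);    // placeholder recipient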

    Read the article

  • How to handle media kept on a separate server (PHP)

    - by Sandman
    So, I have three servers, and the idea was to keep all media (images, files, movies) on a media server. I never got around to doing it, but I think I probably should. These are the three servers: a WWW server, a DB server and a media server. Visitors obviously connect to the WWW server, and currently image resizing and caching is done on the WWW server, as the original files are kept there. The idea is that the image functions I have, which do all the image compositing, resizing and caching, would just pipe the command over to the media server, which would return the path to the finished file. What I don't know is how to handle functions such as file_exists() and figuring out image dimensions when needed, before any image manipulation even comes into play. Do I pipe all these commands to the other server via HTTP? I was thinking of doing it this way:

        function image(##ARGS##) {
            if ($GLOBALS["media_host"] != "localhost") {
                list($src, $width, $height) = file("http://{$GLOBALS['media_host']}/imgfunc.php?args=##ARGS##");
                return "<img src='$src' width='$width' height='$height'>";
            }
            // ... do other stuff here
        }

    Am I approaching this the wrong way? Is there a better way to do this?
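    As a rough illustration of the HTTP approach asked about above, the imgfunc.php endpoint on the media host could answer the file_exists()/getimagesize() questions as a small JSON service (the script name comes from the question; the media root and parameter name are assumptions):

        <?php
        // imgfunc.php on the media host: report whether a file exists and its dimensions.
        $base = '/var/media';                                         // assumed media root
        $file = isset($_GET['file']) ? basename($_GET['file']) : ''; // strip any path components
        $path = $base . '/' . $file;

        $info = array('exists' => file_exists($path));
        if ($info['exists'] && ($dim = @getimagesize($path)) !== false) {
            $info['width']  = $dim[0];
            $info['height'] = $dim[1];
        }
        header('Content-Type: application/json');
        echo json_encode($info);

    The WWW server would then call something like json_decode(file_get_contents("http://{$GLOBALS['media_host']}/imgfunc.php?file=photo.jpg"), true) instead of touching the local filesystem.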

    Read the article

  • Which persistent & lightweight queue messaging for cross-domain (> 2) data exchange with Rails integration

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me? For now there won't be a huge amount of data to process, but I don't want to be limited later. The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data changes on a server, all servers should get the information and process it locally (should I create one channel per server on each of them?). The frontend is written in Rails, so to simplify development it is important that there is a gem or plugin to manage the communication and the data sent. At this time RabbitMQ + Workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me nervous because of Java (I don't know Java very well, but it seems to be a big CPU consumer), and the others don't seem to be as mature. There will be a lot of development built on this kind of technology, so I can't afford to go down the wrong path! Thank you for your help.
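    To make the "every server gets the change" requirement concrete, here is a minimal fan-out sketch using the bunny AMQP gem against RabbitMQ (the broker host, exchange name and payload are illustrative, and this sidesteps Workling entirely): each node binds its own exclusive queue to one fanout exchange, so a change published by any node is delivered to all of them.

        require "bunny"
        require "json"

        conn = Bunny.new("amqp://rabbit.example.com")   # assumed broker host
        conn.start

        channel  = conn.create_channel
        exchange = channel.fanout("data.changes")

        # each server declares its own exclusive queue and binds it to the fanout exchange
        queue = channel.queue("", exclusive: true).bind(exchange)
        queue.subscribe do |_delivery, _props, body|
          puts "apply change locally: #{body}"
        end

        # any node announces a change like this
        exchange.publish({ table: "users", id: 42 }.to_json)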

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We use Apache with JBoss to host our application, but we have found some issues related to mod_jk's thread handling. Our website is a low-traffic site with at most 200-300 concurrent users during peak activity. As traffic grows (not in terms of concurrent users, but in terms of cumulative requests reaching the server), the server stops serving requests for long stretches; it doesn't crash, but it can fail to serve requests for up to 20 minutes. The JBoss server console showed 350 busy threads on both servers, even though there was plenty of free memory, more than 1-1.5 GB (two 64-bit JBoss servers, each with 4 GB RAM allocated to JBoss). While investigating with the JBoss and Apache web consoles, we saw threads sitting in the S state for minutes at a time, although our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in WAITING state, meaning they were waiting indefinitely. These threads did not belong to our application classes but to the AJP port 8009 connector. Could somebody help me with this? Someone else has probably hit this issue and solved it somehow; let me know if more information is required. Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy that could be fatal for me if I switch? The versions used are: Apache 2.0.52, JBoss 4.2.2, mod_jk 1.2.20, JDK 1.6, operating system RHEL 4. Thanks for the help.
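    Idle AJP connections that never time out are a common cause of this kind of thread build-up, so one thing worth checking is the keep-alive/timeout tuning on both sides of the AJP link. A sketch with illustrative values (the host, worker name and numbers are assumptions, and the two timeouts should be kept in sync):

        # workers.properties (Apache/mod_jk side)
        worker.node1.type=ajp13
        worker.node1.host=10.0.0.11
        worker.node1.port=8009
        worker.node1.connection_pool_timeout=600     # drop idle pooled connections after 10 min

        # server.xml (JBoss side) -- connectionTimeout is in milliseconds
        <Connector port="8009" protocol="AJP/1.3" maxThreads="400"
                   connectionTimeout="600000" />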

    Read the article

  • SEO: A whois server that works for .SE domains?

    - by Niels Bosma
    I'm developing a small domain checker and I can't get .SE lookups to work:

        public string Lookup(string domain, RecordType recordType, SeoToolsSettings.Tld tld)
        {
            TcpClient tcp = new TcpClient();
            tcp.Connect(tld.WhoIsServer, 43);

            string strDomain = recordType.ToString() + " " + domain + "\r\n";
            byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain.ToCharArray());

            Stream s = tcp.GetStream();
            s.Write(bytDomain, 0, strDomain.Length);

            StreamReader sr = new StreamReader(tcp.GetStream(), Encoding.ASCII);
            string strLine = "";
            StringBuilder builder = new StringBuilder();
            while (null != (strLine = sr.ReadLine()))
            {
                builder.AppendLine(strLine);
            }
            tcp.Close();

            if (tld.WhoIsDelayMs > 0)
                System.Threading.Thread.Sleep(tld.WhoIsDelayMs);

            return builder.ToString();
        }

    I've tried the whois servers whois.nic-se.se and whois.iis.se, but I keep getting:

        # Copyright (c) 1997- .SE (The Internet Infrastructure Foundation).
        # All rights reserved.
        # The information obtained through searches, or otherwise, is protected
        # by the Swedish Copyright Act (1960:729) and international conventions.
        # It is also subject to database protection according to the Swedish
        # Copyright Act.
        # Any use of this material to target advertising or
        # similar activities is forbidden and will be prosecuted.
        # If any of the information below is transferred to a third
        # party, it must be done in its entirety. This server must
        # not be used as a backend for a search engine.
        # Result of search for registered domain names under
        # the .SE top level domain.
        # The data is in the UTF-8 character set and the result is
        # printed with eight bits.
        "domain google.se" not found.

    Edit: I've tried changing to UTF8 with no other result. When I try using whois from Sysinternals I get the correct result, but not with my code, not even using SE.whois-servers.net. /Niels
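    One detail worth testing (a guess, not a confirmed fix): the response quotes the whole line, "domain google.se" not found, which suggests the .SE server treats the recordType prefix as part of the name. A plain RFC 3912 query sends just the domain followed by CRLF, roughly like this:

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        class BareWhoisQuery
        {
            static void Main()
            {
                using (var tcp = new TcpClient("whois.iis.se", 43))
                using (var stream = tcp.GetStream())
                {
                    byte[] query = Encoding.ASCII.GetBytes("google.se\r\n");   // no "domain " prefix
                    stream.Write(query, 0, query.Length);

                    using (var reader = new StreamReader(stream, Encoding.UTF8))
                        Console.WriteLine(reader.ReadToEnd());
                }
            }
        }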

    Read the article

  • Yet Another crossdomain.xml question or: "How to interpret documentation correctly"

    - by cboese
    Hi! I have read a lot about Flash Player's new policy-file rules and I also know about the master policy file. Now imagine the following situation: there are two servers with HTTP services running on custom ports, servera.com:2222/websiteA and serverb.com:3333/websiteB. I open a SWF from server A (e.g. servera.com:2222/websiteA/A.swf) that wants to access the service on server B. Of course I need a crossdomain.xml in the right place, and there are multiple possible variations. I don't want to use a master policy file, as I might not have control over the root of both servers. One solution I found works with the following crossdomain.xml served at serverb.com:3333/websiteB/crossdomain.xml:

        <?xml version="1.0"?>
        <cross-domain-policy>
            <allow-access-from domain="*"/>
        </cross-domain-policy>

    So now for my question: is it possible to get rid of the "*" and use a proper (less general) domain name in the allow-access-from rule? All my attempts failed, and from what I understand it should be possible.
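    For reference, a host-specific policy would look like the sketch below; the domain attribute takes only a host name or wildcard pattern, with no scheme, port or path, so a value like http://servera.com:2222 would not match (whether that is what made the narrower attempts fail here is an assumption, not a confirmed diagnosis):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
            "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <!-- host name only: no scheme, no port, no path -->
            <allow-access-from domain="servera.com"/>
        </cross-domain-policy>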

    Read the article

  • Thin permissions in etc folder (Ubuntu)

    - by Apollo
    I am working on a RoR server setup that uses Thin and Nginx. It works fine, but only if I manually create the folder /etc/thin and set its permissions to 777 in order to use the command below:

        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production

    If I don't set it to 777, I get this error:

        me@UbuntuRails:/etc$ thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production
        /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `initialize': Permission denied - /etc/thin/testapp.yml (Errno::EACCES)
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `open'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `config'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/bin/thin:6:in `<top (required)>'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `load'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `<main>'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `eval'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `<main>'

    I don't like setting this folder to 777; it feels like a rubbish workaround. I run everything from an admin user account, not root. RVM runs under my admin user and gem only works as my admin user as well. If I sudo the command, nothing happens because my root doesn't "know" thin. What is the correct way to handle this? Thanks!
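    One common alternative to 777 (sketched below, assuming the admin account is the "me" user shown in the prompt) is to create /etc/thin as root once and hand ownership to the deploy user, or to a shared group with mode 775, so thin config can write there without sudo:

        sudo mkdir -p /etc/thin
        sudo chown me:me /etc/thin     # or: sudo chown root:deploy /etc/thin && sudo chmod 775 /etc/thin
        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production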

    Read the article

  • Why can't perfmon see instances of my custom performance counter?

    - by spoulson
    I'm creating some custom performance counters for an application. I wrote a simple C# tool to create the categories and counters; the snippet below is basically what I'm running. Then I run a separate app that endlessly refreshes the raw value of the counter. While that runs, the counter and a dummy instance are visible locally in perfmon. The problem is that the monitoring system we use can't see the instances of the multi-instance counter I've created when viewing remotely from another server. When I browse the counters with perfmon, I can see the category and the counters, but the instances box is grayed out: I can't select "All instances", nor can I click "Add". Other access methods, like typeperf, exhibit similar issues. I'm not sure whether this is a server issue or a code issue. It is only reproducible in the production environment where I need it; on my desktop and development servers it works great. I'm a local admin on all servers.

        CounterCreationDataCollection collection = new CounterCreationDataCollection();
        var category_name = "My Application";
        var counter_name = "My counter name";

        CounterCreationData ccd = new CounterCreationData();
        ccd.CounterType = PerformanceCounterType.RateOfCountsPerSecond64;
        ccd.CounterName = counter_name;
        ccd.CounterHelp = counter_name;
        collection.Add(ccd);

        PerformanceCounterCategory.Create(category_name, category_name,
            PerformanceCounterCategoryType.MultiInstance, collection);

    Then, in a separate app, I run this to generate dummy instance data:

        var pc = new PerformanceCounter(category_name, counter_name, instance_name, false);
        while (true)
        {
            pc.RawValue = 0;
            Thread.Sleep(1000);
        }
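    As a quick way to narrow down whether the counter data is published at all or only the remote enumeration fails, the instance list can be read remotely through the same .NET API (the machine name below is a placeholder, and this is only a diagnostic sketch):

        // run this from the monitoring server; "PRODSERVER01" is a placeholder
        var remote = new PerformanceCounterCategory("My Application", "PRODSERVER01");
        foreach (string instance in remote.GetInstanceNames())
        {
            Console.WriteLine(instance);
        }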

    Read the article

  • Code Own Socket Server or Use Red5/ElectroServer on Amazon EC2?

    - by Travis
    I've been thinking for a long time about working on a multiplayer game in Flash. I need updates frequently enough that AJAX requests won't work, so I need a socket server. The system will eventually have enough objects/players that I would consider it an MMO. I would like to set up a scalable system on Amazon's EC2 (which probably affects my choice of server). This architecture would hopefully allow the game to grow over time without many changes, using a domain decomposition technique or something similar. Here's my internal debate: should I a) code my own socket server in C++ or Java, b) use the free and open-source Red5 socket server for Flash, or c) pay the licensing fees and go for ElectroServer? I consider myself a decent developer, but I am at an impasse as to which road to go down. I'm not sure whether I could develop, or would even need, the features of one of the prepackaged socket servers. I'm also not sure whether the prepackaged servers would work well in an Amazon EC2 environment and take full advantage of its features. Any help or guidance would be greatly appreciated.

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users in a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, let me use a trivial scenario: say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. Once traffic increases, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal is to use IPC initially and, when we need to scale out, internally switch to TCP-based RPC calls. For example, I am thinking of starting with System.IO.Pipes.NamedPipeServerStream and moving to a TcpListener later on. If we have a proper design that encapsulates this approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we already have an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
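    A minimal sketch of the encapsulation being proposed (the names are illustrative, not from the project): page controllers depend only on an interface, the first implementation is a plain in-process call, and a transport-backed implementation can be dropped in later without touching the callers.

        public interface IUserService
        {
            bool AuthenticateUser(string login, string password);
        }

        // phase 1: in-process, no IPC/RPC overhead at 40K users
        public class LocalUserService : IUserService
        {
            public bool AuthenticateUser(string login, string password)
            {
                // hit the user store directly
                return true;
            }
        }

        // phase 2: same contract, work done on another box (named pipe or TCP underneath)
        public class RemoteUserService : IUserService
        {
            private readonly string _host;
            private readonly int _port;

            public RemoteUserService(string host, int port) { _host = host; _port = port; }

            public bool AuthenticateUser(string login, string password)
            {
                // serialize the request and send it over the chosen transport here
                throw new System.NotImplementedException("wire protocol goes here");
            }
        }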

    Read the article

  • How to deploy RSWebParts.cab manually?

    - by denni
    I'm using the SSRS 2005 Web parts to display my reports in a MOSS 2007 SP1 portal. I have successfully installed the Web parts on my development, testing, and UAT servers using the following command:

        stsadm -o addwppack -filename path/to/RSWebParts.cab

    But when I run the same command on the production server, it gives me the following error: "This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application." I know I usually get this kind of error when I try to deploy a custom solution that has no Web-application-scoped resources (such as web.config entries) to a specific Web application. But this is not a custom solution; it is the out-of-the-box SSRS Web part, and it does have resources scoped to a Web application. I even tried different combinations of the command with the -url, -globalinstall, and -force switches, but it still gives the same error. The configuration of the four servers is exactly the same, from both a software and a hardware perspective, and all other features work properly on the production server. I even tried extracting the cab file manually into the bin folder of my Web application and then editing the web.config by hand to include the SafeControl element (copied from the manifest.xml inside the cab file), but that gave me an error saying it couldn't find the resources file, even though I extracted the whole cab, including the resource files, into the bin folder. Is there anyone who can help me resolve the problem? Thanks a lot.

    Read the article

  • Deploying ASP.NET MVC to IIS6: pages are just blank

    - by BryanGrimes
    I have an MVC app that is already running on a couple of other servers, but I didn't do those deploys. For this deploy I have added the wildcard mapping to aspnet_isapi.dll, which got rid of the 404 error. But the pages are not rendering; everything just comes back blank. I can't find any IIS configuration differences. The Global.asax.cs file does have routing defined, and as I've seen on a working server, that file isn't just hanging out in the root or anything so obvious. What could I be missing here? All of the servers run IIS6, and I have compared the setups and they look the same to me at this point. Thanks... Bryan

    EDIT for the comments thus far: I've looked in the event logs with no luck, and scoured various IIS logs per David Wang: blogs.msdn.com. Below is the Global.asax.cs file...

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.IgnoreRoute("error.axd"); // for Elmah

                // For deployment to IIS6
                routes.Add(new Route(
                    "{controller}.mvc/{action}/{id}",
                    new RouteValueDictionary(new { action = "Index", id = (string)null }),
                    new MvcRouteHandler()
                ));

                routes.MapRoute("WeeklyTimeSave", "Time/Save",
                    new { controller = "Time", action = "Save" });
                routes.MapRoute("WeeklyTimeAdd", "Time/Add",
                    new { controller = "Time", action = "Add" });
                routes.MapRoute("WeeklyTimeEdit", "Time/Edit/{id}",
                    new { controller = "Time", action = "Edit", id = "" });
                routes.MapRoute("FromSalesforce", "Home/{id}",
                    new { controller = "Home", action = "Index", id = "" });
                routes.MapRoute("Default2", "{controller}/{id}",
                    new { controller = "Home", action = "Index", id = "" });
                routes.MapRoute("Default", "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = "" });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }

    Maybe this is as stupid as the asax file not being somewhere it needs to be, but heck if I know at this point.

    Read the article

  • Loading a class file immediately AFTER startup

    - by Striker
    We have a few WAR files deployed inside an EAR file. Some of the WARs have a class that caches static data from our PLM system in singletons. Since some of these classes take several minutes to load, we use load-on-startup in web.xml to load them ahead of time. This all works fine until we attempt to redeploy the application on our production servers (WebLogic 10.3): we get an exception from our PLM API about a DLL already being loaded. Our PLM vendor has confirmed that this is a problem and stated that they don't support using load-on-startup. It is also a huge problem on our development boxes, where we redeploy the app all the time; most of us, when we're not working on one of the apps that uses a cache, keep the load-on-startup entries commented out. Obviously we can't do that on the production servers. Right now we transfer the EAR to the production server, deploy it in the console, wait for it to crash, shut the app server instance down, and then start it up again. We need to find a way around this. One suggestion was to create a servlet that we can call after the server boots to load the various caches. While that would work, I'm looking for something a bit cleaner. Is there any way to detect that the server has started and then fire off the methods? Thanks.
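    One pattern worth considering (a sketch, not WebLogic-specific advice; the cache class and the delay are placeholders): a ServletContextListener that kicks the cache load off on a background thread when the context comes up, so nothing needs to be called from outside after boot. Whether this also avoids the vendor's DLL-reload issue on redeploy would still need testing.

        // register in web.xml with <listener><listener-class>...CacheWarmupListener</listener-class></listener>
        public class CacheWarmupListener implements javax.servlet.ServletContextListener {

            private Thread warmer;

            public void contextInitialized(javax.servlet.ServletContextEvent sce) {
                warmer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(60 * 1000L);       // give the container time to finish booting
                            PlmCache.getInstance().load();  // hypothetical singleton from the description above
                        } catch (InterruptedException ignored) {
                            // shutting down before the warm-up finished
                        }
                    }
                }, "plm-cache-warmup");
                warmer.setDaemon(true);
                warmer.start();
            }

            public void contextDestroyed(javax.servlet.ServletContextEvent sce) {
                if (warmer != null) warmer.interrupt();
            }
        }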

    Read the article

  • Migrating MOSS 2007 from SQL 2000 to SQL 2005 - Addendum

    - by lunacrescens
    This is a continuation of an earlier question I had about moving the databases for a MOSS 2007 installation from SQL 2000 to SQL 2005. Here's the URL for the original question: http://stackoverflow.com/questions/254517/migrating-moss-2007-from-sql-2000-to-sql-2005. In my test environment I've successfully moved the databases to the SQL 2005 test machine, and things appear to be working fine. But on the "Servers in Farm" page of Central Admin | Operations, it still shows the old (i.e. SQL 2000) server as the Configuration Database Server, and it shows the old config database as the Configuration Database. I know that the SQL 2000 server and old config database shown on this page are NOT being used, because we've deactivated the SQL instance on the SQL 2000 machine. I've tried "removing" the server, and get a message that "Uninstalling SharePoint products and technologies" would be the better route. So I disconnected from the test databases, uninstalled SharePoint from the test WFE server, and reinstalled it. That didn't do anything. Before uninstalling/reinstalling I also tried simply rerunning the SharePoint Configuration Wizard, and that didn't do anything either. Does anyone know how to update the Config Server and Config Database on the "Servers in Farm" page after having moved the Config and Content DBs? Is there something I'm missing or overlooking? Thanks.

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I depend on around 30 jars right now and am midway through bnd-ing them up. Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other bnd-wrapped libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any one of a number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database access components (via JDBC), and so on. I want my code to still be usable without OSGi, as a single jar with all my dependencies (via JarJar), and also to be modular via the OSGi metadata and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically use any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect whether I am running in an OSGi container that is portable across containers (Felix/Equinox/etc.)? Do I need a different class-loading mechanism when I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
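    On the "am I inside OSGi?" question specifically, a commonly used check (sketched below) is that FrameworkUtil.getBundle() returns null for a class that was not loaded by an OSGi framework; it does add a compile-time dependency on org.osgi.framework, so the NoClassDefFoundError guard covers the plain-classpath case where those classes are absent:

        public final class OsgiDetector {
            private OsgiDetector() {}

            public static boolean insideOsgi(Class<?> anchor) {
                try {
                    return org.osgi.framework.FrameworkUtil.getBundle(anchor) != null;
                } catch (NoClassDefFoundError e) {
                    return false;   // OSGi classes are not on the classpath at all
                }
            }
        }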

    Read the article

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to a remote server that has the repo, web, app, and MySQL servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning. I get to the command cap deploy:start and then I get this error:

        *** [err :: example.com] sudo: unknown user: app
        command finished
        failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com

    Am I supposed to add an 'app' user, or is there a way of changing which user the command runs as? This is my deploy.rb:

        set :application, "example"
        set :repository, "[email protected]:example.git"
        set :user, "trobrock"
        set :branch, 'master'
        set :deploy_to, "/var/www/example"
        set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        role :web, "example.com"                    # Your HTTP server, Apache/etc
        role :app, "example.com"                    # This may be the same as your `Web` server
        role :db,  "example.com", :primary => true  # This is where Rails migrations will run

    Obviously, everywhere it says example.com that is my server's hostname, and wherever it just says example, that is the app name.
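    For what it's worth (a hedged guess based on Capistrano 2's defaults rather than anything in the walkthrough): the sudo -u app comes from Capistrano's :runner setting, which defaults to "app", so overriding it in deploy.rb, or disabling sudo entirely, is worth trying before creating a new system user:

        set :runner, "trobrock"   # run the start/stop/spin commands as this user instead of "app"
        # or
        set :use_sudo, false      # don't wrap the commands in sudo at all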

    Read the article

  • jQuery: add table row after the row that calls the script

    - by marharépa
    Hello! I've got a table:

        <table id="servers" ...>
        ...
        {section name=i loop=$ownsites}
            <tr id="site_id_{$ownsites[i].id}">
                ...
                <td>{$ownsites[i].phone}</td>
                <td class="icon"><a id="{$ownsites[i].id}" onClick="return makedeleterow(this.getAttribute('id'));" ...></a></td>
            </tr>
        {/section}
        <tbody>
        </table>

    And this JavaScript:

        <script type="text/javascript">
        function makedeleterow(id) {
            $('#delete').remove();
            $('#servers').append($(document.createElement("tr")).attr({id: "delete"}));
            $('#delete').append($(document.createElement("td")).attr({colspan: "9", id: "deleter"}));
            // "Biztosan törölni szeretnéd ezt a weblapod?" = "Are you sure you want to delete this website?"
            $('#deleter').text('Biztosan törölni szeretnéd ezt a weblapod?');
            $('#deleter').append($(document.createElement("input")).attr({type: "submit", id: id, onClick: "return truedeleterow(this.getAttribute('id'))"}));
            $('#deleter').append($(document.createElement("input")).attr({type: "hidden", name: "website_del", value: id}));
        }
        </script>

    It works fine: it appends a tr after the table's last row, puts the info into it, and the delete function also works. But I'd like this row to be inserted AFTER the tr (the one with td class="icon") that calls the script. How can I do this?
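    A sketch of one way to do it (assuming jQuery 1.3+ for closest(); the changed onclick markup and function signature are illustrative): pass the clicked element into the handler, then insert the new row with .after() on that element's parent tr instead of appending to the table.

        // markup: onClick="return makedeleterow(this, this.getAttribute('id'));"
        function makedeleterow(link, id) {
            $('#delete').remove();

            var row = $('<tr id="delete"><td colspan="9" id="deleter"></td></tr>');
            row.find('#deleter')
               .text('Biztosan törölni szeretnéd ezt a weblapod?')
               .append($('<input type="submit">').attr('id', id).click(function () {
                   return truedeleterow(id);
               }))
               .append($('<input type="hidden" name="website_del">').val(id));

            $(link).closest('tr').after(row);   // insert right after the row that was clicked
            return false;
        }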

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. The application is executed through a scheduled task using a local service account; this account is local to the application server only and is not present on any of the SQL Servers to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers with a user that only has dbReader rights on the system tables. If we make the user sysadmin on the server, it all works fine. My question: what are the minimum privileges a local SQL Server user needs in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API? I connect to each SQL Server like this:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs via smoServer.JobServer.Jobs and then read the JobSteps property on each of those jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server, and user/password belong to the login that exists on each SQL Server to be inspected. If that user is sysadmin on the inspected SQL Server everything works, as it also does if we set ConnectAsUser to true and run the scheduled task under my own credentials, which grant me sysadmin on SQL Server through my Active Directory membership. Thanks!
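    As a starting point for the least-privilege experiment (a sketch, with an illustrative login name): on SQL Server 2005 and later, Agent job metadata and history in msdb are gated by the SQLAgentUserRole / SQLAgentReaderRole / SQLAgentOperatorRole roles, and SQLAgentReaderRole is typically the one that lets a non-sysadmin enumerate all jobs and their outcomes, so granting it and re-running the SMO code is a cheap test:

        USE msdb;
        CREATE USER [sql_inspector] FOR LOGIN [sql_inspector];   -- login name is illustrative
        EXEC sp_addrolemember N'SQLAgentReaderRole', N'sql_inspector';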

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update one server from the other. Here's what I'm looking for:

        - As automated as possible; ideally, without any involvement on my part once it's set up.
        - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
        - Freely allows changes on the source server without breaking my process. For this reason I don't want to use replication, as I'd have to break it every time there's an update on the source and then recreate the publication and subscription.

    One database is about 4 GB in size and contains binary data. I'm not sure if there's a way to export that to a script, but it would be a mammoth file if I did. Originally I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the destination server picks them up and restores them. The only downside I can see is that there's no way to know the backups are finished before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so some downtime while moving the data is fine. Does anybody have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs completely automatically and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
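    On the "can these backups be done synchronously?" point: a T-SQL BACKUP DATABASE statement does not return until the backup file is complete, so a scheduled script can safely chain the compression and FTP steps right after it (the database name and path below are illustrative):

        BACKUP DATABASE [MyAppDb]
        TO DISK = N'D:\Backups\MyAppDb.bak'
        WITH INIT, STATS = 10;
        -- anything after this line in the batch runs only once the .bak is fully written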

    Read the article

  • .NET Remoting Connecting to Wrong Host

    - by Dark Falcon
    I have an application I wrote that has been running well for four years. Yesterday they moved all their servers around and installed about 60 pending Windows updates, and now it is broken. The application uses .NET Remoting to update some information on another server (10.0.5.230), but when I try to create my remote object I get an exception showing that it is trying to connect to 127.0.0.1, not the proper server. The server (10.0.5.230) is listening on port 9091 as it should. The same error happens on all three terminal servers where this application is installed. Here is the code that registers the remoted object:

        public static void RegisterClient()
        {
            string lServer;
            RegistryKey lKey = Registry.CurrentUser.OpenSubKey("SOFTWARE\\Shoreline Teleworks\\ShoreWare Client");
            if (lKey == null)
                throw new InvalidOperationException("Could not find Shoretel Call Manager");
            object lVal = lKey.GetValue("Server");
            if (lVal == null)
                throw new InvalidOperationException("Shoretel Call Manager did not specify a server name");
            lServer = lVal.ToString();

            IDictionary props = new Hashtable();
            props["port"] = 0;
            string s = System.Guid.NewGuid().ToString();
            props["name"] = s;
            ChannelServices.RegisterChannel(new TcpClientChannel(props, null), false);

            RemotingConfiguration.RegisterActivatedClientType(typeof(UpdateClient), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
            RemotingConfiguration.RegisterActivatedClientType(typeof(Playback), "tcp://" + lServer + ":" + S_REMOTING_PORT + "/");
        }

    Here is the code that calls the remoted object:

        UpdateClient lUpdater = new UpdateClient(Settings.CurrentSettings.Extension.ToString());
        lUpdater.SetAgentState(false);

    I have verified that the following URI is passed to RegisterActivatedClientType: "tcp://10.0.5.230:9091/". Why does this application try to connect to the wrong server?
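    One direction worth checking (a guess, not a confirmed diagnosis): with client-activated types, the server answers the activation request with an ObjRef whose channel URL comes from the server's own channel configuration, so if the rebuilt server now advertises a loopback or unresolvable name, the client can end up connecting to 127.0.0.1 even though it asked for 10.0.5.230. Pinning the public address on the server-side channel would look roughly like this (the server-side code is assumed, since it isn't shown in the post):

        // on the server (10.0.5.230), when registering the TcpServerChannel
        IDictionary props = new Hashtable();
        props["port"] = 9091;
        props["machineName"] = "10.0.5.230";   // force the address placed into outgoing ObjRefs
        ChannelServices.RegisterChannel(new TcpServerChannel(props, new BinaryServerFormatterSinkProvider()), false);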

    Read the article

  • TeamCity output artifacts not published to IIS7 folder

    - by clausas
    I am trying to set up TeamCity to build and deploy an ASP.NET MVC application. I have this running successfully on other servers using TeamCity 4.5, but the new server runs TeamCity 6 and I am having trouble getting it to work as expected. TeamCity gets the files from source control, and the project (a Visual Studio 2008 solution set to "Build") builds and outputs the necessary files as expected. The problem seems to be with my artifact paths, as the output files are not copied to the website folder. My solution consists of a dozen projects, of which the "Web" project is the interesting one in this case. The build checkout directory is C:\TeamCity\buildAgent\work\7da320cebf0ee541, and the "Web" project is found in C:\TeamCity\buildAgent\work\7da320cebf0ee541\Web. I have set up my build configuration with the following artifact paths (relative from the checkout directory to the folder containing the website):

        Web/bin=>../../../../inetpub/wwwroot/staging/bin
        Web/Content=>../../../../inetpub/wwwroot/staging/Content
        Web/Views=>../../../../inetpub/wwwroot/staging/Views
        Web/Media=>../../../../inetpub/wwwroot/staging/Media
        Web/*.aspx=>../../../../inetpub/wwwroot/staging
        Web/*.asax=>../../../../inetpub/wwwroot/staging

    (I've tried with more ../ just in case, but it didn't make a difference.) This is the output I get from the log:

        [19:35:29]: Publishing artifacts (1s)
        [19:35:29]: [Publishing artifacts] Paths to publish: [Web/bin=>../../../../inetpub/wwwroot/staging/bin, Web/Content=>../../../../inetpub/wwwroot/staging/Content, Web/obj=>../../../../inetpub/wwwroot/staging/obj, Web/Views=>../../../../inetpub/wwwroot/staging/Views, Web/Media=>../../../../inetpub/wwwroot/staging/Media, Web/*.aspx=>../../../../inetpub/wwwroot/staging, Web/*.asax=>../../../../inetpub/wwwroot/staging, teamcity-info.xml]
        [19:35:30]: [Publishing artifacts] Sending files
        [19:35:32]: Build finished

    Logs from some of the other servers running TeamCity 4.5 use a different format, with a line for each artifact being published; I'm not sure if that is relevant or just a different logging format. Everything seems to be working, but no files end up in my website folder after a build. Am I missing something here? Any help will be much appreciated :)

    Read the article

  • Wireshark Dissector: How to Identify Missing UDP Frames?

    - by John Dibling
    How do you identify missing UDP frames in a custom Wireshark dissector? I have written a custom dissector for the CQS feed (reference page). One of our servers gaps when receiving this feed: according to Wireshark, some UDP frames are never received. I know the frames were sent, because all of our other servers are gap-free. A CQS frame consists of multiple messages, each with its own sequence number. My custom dissector provides the following data to Wireshark:

        cqs.frame_gaps         - the number of gaps within a UDP frame (always zero)
        cqs.frame_first_seq    - the first sequence number in a UDP frame
        cqs.frame_expected_seq - the first sequence number expected in the next UDP frame
        cqs.frame_msg_count    - the number of messages in this UDP frame

    I display each of these values in custom columns, as shown in the screenshot. I tried adding code to my dissector that simply saves the last-processed sequence number (as a local static) and flags a gap when the dissector processes a frame where current_sequence != (previous_sequence + 1). This did not work, because the dissector can be called in random-access order depending on where you click in the GUI; you could process frame 10, then frame 15, then frame 11, and so on. Is there any way for my dissector to know whether the frame that came before it (or the frame that follows) is missing? The dissector is written in C. (See also a companion post on serverfault.com.)
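    A sketch of the usual pattern for this (the field offsets are assumptions, proto_cqs stands for the dissector's registered protocol handle, and the exact epan signatures vary between Wireshark releases): do the sequential bookkeeping only on the first pass, when frames are guaranteed to be dissected in capture order, store a per-frame verdict with p_add_proto_data(), and simply read it back on later random-access passes.

        static guint32 next_expected_seq = 0;   /* meaningful only during the first pass */

        static void dissect_cqs(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
        {
            guint32 *gapped;

            if (!pinfo->fd->flags.visited) {
                /* first pass: frames arrive here in capture order */
                guint32 first_seq = tvb_get_ntohl(tvb, 0);   /* assumed offset of first sequence number */
                guint32 msg_count = tvb_get_ntohl(tvb, 4);   /* assumed offset of message count */

                gapped  = (guint32 *)se_alloc(sizeof(guint32));
                *gapped = (next_expected_seq != 0 && first_seq != next_expected_seq);
                next_expected_seq = first_seq + msg_count;

                p_add_proto_data(pinfo->fd, proto_cqs, gapped);   /* remember the verdict for this frame */
            }

            /* any pass, including random clicks in the GUI: read back the stored verdict */
            gapped = (guint32 *)p_get_proto_data(pinfo->fd, proto_cqs);
            if (gapped != NULL && *gapped)
                proto_tree_add_text(tree, tvb, 0, 0, "Gap detected before this frame");
        }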

    Read the article

  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and am trying to figure out whether it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I became very interested after reading about its merge and branch capability, but I have a few reservations. We are currently on SVN and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. NOTE: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems I would have to create 60 separate central repositories on the server, one per project. QUESTION #1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the “C:...\MyRepository.hg\hgrc” file (Windows install). It also seems as if I have to run 60 servers (hg serve), presumably on different ports. QUESTION #2: If the answer to question 1 is yes, there should be a single central repository for each project, then how do people manage that many repositories? Finally, I haven’t looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but I would appreciate any comments from someone who has done this (or whether it is even possible).
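    On the "60 hg serve processes on 60 ports" worry specifically: Mercurial's hgweb/hgwebdir front end can publish a whole directory tree of repositories from a single process (or a single Apache/IIS virtual directory), driven by one config file. A sketch with illustrative paths:

        ; hgweb.config -- one file describing all published repositories
        [collections]
        C:\Repositories = C:\Repositories

        [web]
        allow_push = *
        push_ssl = false

        ; serve everything under C:\Repositories from one port:
        ;   hg serve --webdir-conf hgweb.config --port 8000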

    Read the article

  • Data synchronization using XMPP

    - by Jason
    Hi: I'm looking for some insight/advice on synchronizing data over XMPP. I've never developed anything for XMPP before, so excuse me if some of my questions seem ridiculous. Basically, what I have is a decentralized social network. Each person has their own Web site (or server) with a unique URI (one domain could host many servers). Each of these servers can have many clients, e.g. a desktop application, a mobile application, etc. What I would like to accomplish is near real-time synchronization/communication between client and server: I update something in my desktop application and I see it change on my Web site. My server and client code is Python, so I would like to make use of SleekXMPP if possible (its license seems to have changed to MIT). I was thinking, and here is where I need advice, that each server would register an account at a dedicated XMPP server, e.g. [email protected], and then I could use different resources for clients: [email protected]/client1, [email protected]/client2, etc. If anyone can register any username, then maybe I also need some intermediate service (since it's decentralized, I'm not sure how to control registrations). Another option, I guess, is that each server runs its own XMPP server. Assuming that were all worked out, if I want to broadcast messages to all my resources (except the sending one), how do I do that? Do I have to subscribe to myself? This also seems like a good candidate for publish-subscribe; let me know if you think that could work and what the design/flow of that process would be. Thanks :)
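    A rough sketch of the account-plus-resources idea, assuming SleekXMPP 1.x (the JID, password and handler body are placeholders): each client logs in as a different resource of the server's account and reacts to incoming messages. Note that a message sent to the bare JID is normally routed by resource priority rather than copied to every resource, which is one reason the publish-subscribe (XEP-0060) route mentioned above may end up being the cleaner fit for fan-out to all clients.

        import sleekxmpp

        class SyncClient(sleekxmpp.ClientXMPP):
            def __init__(self, jid, password):
                super(SyncClient, self).__init__(jid, password)
                self.add_event_handler("session_start", self.on_start)
                self.add_event_handler("message", self.on_message)

            def on_start(self, event):
                self.send_presence()
                self.get_roster()

            def on_message(self, msg):
                if msg['type'] in ('chat', 'normal'):
                    # apply the incoming change to the local copy here
                    print("sync payload: %s" % msg['body'])

        if __name__ == '__main__':
            client = SyncClient('[email protected]/client1', 'secret')  # placeholder JID/password
            if client.connect():
                client.process(block=True)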

    Read the article

  • Web CMS That Outputs to Flat Static Pages (.html) via FTP to Remote Server?

    - by Sootah
    I have a web app project that I will be starting work on shortly. One of its features is going to be a content management system where users can add content, that content is combined with a template and output as a regular .html file, and the .html file is then FTPed to their own web host. As I've always believed in not reinventing the wheel, I figured I'd see if there are any quality, customizable CMSes out there that already do this. For instance, Blogger.com lets you post all of your content to your account there but offers the option of using your own hosting: any time you publish a new article, a new .html page is generated (along with an updated index page linking to the new article) and the updated content is FTPed to your own server. What I would like is something like this that I can modify to more closely suit my needs.

    Required features:
        - Able to host on my own server
        - Written in PHP
        - Users add content through their account; when posted, it is FTPed as .html to their server
        - Any appropriate pages are also updated to link to the new content (like the index page or whatnot)
        - Templateable
        - Customizable

    Optional (but very much desired) features:
        - Written in CodeIgniter or a similar PHP framework

    While CodeIgniter isn't strictly required, I would very much prefer it; it speeds up development time and makes things much easier to implement. So - any suggestions? I've stumbled across a few CMSes that push static pages to remote servers, but the ones I've found are all hosted on the developers' servers, which means I cannot modify them at all. Thanks again, fellow StackOverflowians! -Sootah
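    If it comes to rolling it rather than reusing, the publish step being described is small; a minimal PHP sketch (template markers, paths, host and credentials are all placeholders) that merges content into a template, writes the static .html and pushes it over FTP:

        <?php
        $article  = array('title' => 'Hello world', 'body' => 'First post.');
        $template = file_get_contents('templates/article.html');   // contains {title} and {body} markers

        $html = str_replace(array('{title}', '{body}'),
                            array($article['title'], $article['body']),
                            $template);

        $local = sys_get_temp_dir() . '/hello-world.html';
        file_put_contents($local, $html);

        // push the generated page to the user's own host
        $ftp = ftp_connect('ftp.example.com');
        ftp_login($ftp, 'username', 'password');
        ftp_pasv($ftp, true);
        ftp_put($ftp, '/public_html/hello-world.html', $local, FTP_ASCII);
        ftp_close($ftp);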

    Read the article
