Search Results

Search found 9816 results on 393 pages for 'blade servers'.

Page 91/393 | < Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >

  • Printer Redirection from 2003 Terminal Server to 2008 Terminal Server

    - by xmaveric
    Our environment is a terminal server cluster (Win2003 servers) that everyone connects to to do their work. I have set up a new Win2008 R2 machine with the intention of using it to publish our main application to the TS farm. The idea was to keep this server dedicated to one application to avoid driver/DLL conflicts with other software. I created a RemoteApp on the new server, made an .rdp file, and placed it on the desktop of our TS farm servers.

    The problem I am running into is that when I connect to the RemoteApp, it doesn't show the printers that are installed on the TS server I am connecting from. We have over 20 printers installed on our TS servers, each with different drivers and permissions. I really do not want to reinstall all of these on the RemoteApp server, so I was hoping Printer Redirection would handle this. It would appear that because the RDP client for Server 2003 x64 is 6.0, that version doesn't support the Easy Print feature (which requires 6.1), and I can't find any newer version on the MS site to download for Win2003 x64.

    How can I get the printers on the TS farm machine to redirect so they are visible to the RemoteApp machine?

    Read the article

  • Web-based clients vs thick/rich clients?

    - by rudolfv
    My company is a software solutions provider to a major telecommunications company. The environment is currently IBM WebSphere-based, with front-end IBM Portal servers talking to a cluster of back-end WebSphere Application Servers providing EJB services. Some of the portlets use our own home-grown MVC pattern and some are written in JSF. Recently we did a proof-of-concept rich/thick-client application that communicates directly with the EJBs on the back-end servers. It was written on the NetBeans Platform and uses the WebSphere application client library to establish communication with the EJBs. The really painful bit was getting the client to use secure JAAS/SSL communications. But after that was resolved, we found that the rich client has a number of advantages over the web-based portal client applications we've become accustomed to:

    - Enormous performance advantage (CORBA vs. HTTP, cutting out the Portal Server middle man)
    - Development is simplified and faster due to NetBeans' visual designer and Swing's generally robust architecture
    - The debug cycle is shortened by not having to deploy your client application to a test server
    - No mishmash of technologies as with web-based development (Struts, JSF, jQuery, HTML, JSTL etc.)

    After enduring the pain of web-based development (even JSF) for a while now, I've come to the following conclusion: rich clients aren't right for every situation, but when you're developing an in-house intranet-based solution, you'd be crazy not to consider NetBeans Platform or Eclipse RCP. Any comments/experiences with rich clients vs. web clients?

    Read the article

  • How do you host multiple public facing websites on a VPS?

    - by Petras
    We host about 30 websites on typical shared hosting plans using ASP.NET and SQL 2000/2005/2008. I am now wondering about hosting all of these websites on our own virtual private server, such as http://www.crystaltech.com/vps.aspx. This is clearly cheaper, but comes with a lot of questions I need answers to:

    - Is the risk of having to keep this VPS up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn.
    - When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS, and I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS?
    - The control panel we have for shared hosting comes with DNS management. What software would I need to install to provide this for each site we host on a VPS?
    - The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily. Is this something that can be easily set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?

    Read the article

  • Rails.cache throws "marshal dump" error when changed from memory store to memcached store

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache used the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to:

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc
        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

    Read the article

  • Mocking WebResponse's from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this.

    Update (following acceptance): Will's answer was the slap in the face I needed; I knew I was missing a fundamental point! Create an interface that returns a proxy object which represents the XML. Implement the interface twice: one implementation that uses WebRequest, the other that returns static "responses". The implementation then either instantiates the return type based on the response, or based on the static XML. You can then pass the required class to the service layer when testing or in production, as in the sketch below. Once I have the code knocked up, I'll paste some samples. Thanks Will :)
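    (The question above is about .NET's WebRequest/WebResponse; as a purely illustrative sketch of the same swap-the-implementation pattern, here is a minimal Python version with hypothetical names: a live provider for production and a canned provider loaded from saved XML for tests.)

        from abc import ABC, abstractmethod
        from urllib.request import urlopen

        class ResponseProvider(ABC):
            """Abstraction the parsing/service layer depends on instead of a concrete HTTP client."""
            @abstractmethod
            def fetch(self, url: str) -> str:
                ...

        class LiveResponseProvider(ResponseProvider):
            """Production implementation: performs a real HTTP request."""
            def fetch(self, url: str) -> str:
                with urlopen(url) as resp:
                    return resp.read().decode("utf-8")

        class CannedResponseProvider(ResponseProvider):
            """Test implementation: returns XML saved earlier, so tests never touch the real servers."""
            def __init__(self, responses: dict):
                self.responses = responses
            def fetch(self, url: str) -> str:
                return self.responses[url]

        def parse_feed(provider: ResponseProvider, url: str):
            xml = provider.fetch(url)   # the parser only sees the abstraction
            return xml                  # ...real parsing of the XML would go here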

    Read the article

  • Visual Studio Debugging is not attaching to WebDev.WebServer.EXE

    - by Aaron Daniels
    I have a solution with many projects. On Debug, I have three web projects that I want to start up on their own Cassini ASP.NET Web Development servers. In Solution Properties - Common Properties - Startup Project, I have "Multiple startup projects" chosen, with the three web applications' Action set to Start. All three web development servers start, and all three web pages load. However, Visual Studio is only attaching to two of the WebDev.WebServer.EXE processes. I have to manually attach to the third process in order to debug it with the debugger. This behavior just started happening, and I'm at a loss as to how to troubleshoot it. Any help is appreciated.

    EDIT: Also to note, I have stopped and restarted the development servers several times with no change in behavior. Also, when attaching to the process manually, I see that the Type property of the two automatically attached WebDev.WebServer.EXE processes is "Managed", while the Type property of the unattached WebDev.WebServer.EXE process is "TSQL, Managed, x86". Looking at the project's properties, however, I am targeting AnyCPU and do NOT have SQL Server debugging enabled.

    EDIT: Also to note, the two projects that attach correctly are C# web applications:

        <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

    The project that is not attaching correctly is a VB.NET web application:

        <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{F184B08F-C81C-45F6-A57F-5ABD9991F28F}</ProjectTypeGuids>

    EDIT: Also to note, the behavior is the same on another workstation, so odds are that it's not a machine-specific problem.

    Read the article

  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache used the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to:

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc
        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

    Read the article

  • How to know about MySQL 'refused connections'

    - by celalo
    Hello, I am using MONyog to monitor my two MySQL servers, and I get alert emails from MONyog when something goes wrong. There is an error whose cause I cannot find. It says:

        Connection History: Percentage of refused connections - 66.67%

    The percentage is not important; this is just about having refused connections at all. I get this email every half an hour, so it is a more or less constant situation. This must be my own mistake, because I just set up those servers and there is no chance somebody else could be interfering with them. MONyog advises me: "Try to isolate users/applications that are using an incorrect password or trying to connect from unauthorized hosts. A client will be disallowed to connect if it takes more than connect_timeout seconds to connect. Set the value of the log_warnings system variable to 2. This will force the MySQL server to log further information about the error." I added log_warnings=2 to my.cnf and enabled logging like this:

        [mysqld_safe]
        ...
        log_warnings=2
        log-error = /var/log/mysql/error.log
        ...

        [mysqld_safe]
        ...
        log-error=/var/log/mysqld.log
        ...

    I cannot see any warnings in /var/log/mysql/error.log. I can see some warnings in /var/log/mysqld.log, but they are about something else. In sum, my question is: how can I detect refused connections? Please let me know if any more info is required. Thanks in advance.
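    (A tangential illustration: besides the log_warnings route, one crude way to watch for failed connection attempts is to poll the server's Aborted_connects status counter and alert when it grows. A minimal Python sketch, assuming the PyMySQL client and placeholder monitoring credentials:)

        import time
        import pymysql   # assumed available: pip install pymysql

        def aborted_connects(conn):
            """Return MySQL's cumulative count of failed connection attempts."""
            with conn.cursor() as cur:
                cur.execute("SHOW GLOBAL STATUS LIKE 'Aborted_connects'")
                _name, value = cur.fetchone()
                return int(value)

        conn = pymysql.connect(host="127.0.0.1", port=3700, user="monitor", password="secret")
        previous = aborted_connects(conn)
        while True:
            time.sleep(60)
            current = aborted_connects(conn)
            if current > previous:
                print(f"{current - previous} connection attempts were refused/aborted in the last minute")
            previous = current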

    Read the article

  • How can I disable keep-alive on ASP.NET Web Service client requests?

    - by Matthew Brindley
    I have a few web servers behind an Amazon EC2 load balancer. I'm using TCP balancing on port 80 (rather than HTTP balancing). I have a client polling a Web Service (running on all web servers) for new items every few seconds. However, the client seems to stay connected to one server and polls that same server each time. I've tried using ServicePointManager to disable KeepAlive, but that didn't change anything. The outgoing connection still had its "connection: keep-alive" HTTP header, and the server kept the TCP connection open. I've also tried adding an override of GetWebRequest to the proxy class created by VS, which inherits from SoapHttpClientProtocol, but I still see the keep-alive header. If I kill the client's process and restart, it'll connect to a new server via the load balancer, but it'll continue polling that new server forever. Is there a way to force it to connect to a random server each time? I want the load from the one client to be spread across all of the web servers. The client is written in C# (as is the server) and uses a Web Reference (not a Service Reference), which points to the load balancer.

    Read the article

  • Flow control in a batch file

    - by dboarman-FissureStudios
    Reference: Iterating arrays in a batch file. I have the following:

        for /f "tokens=1" %%Q in ('query termserver') do (
            if not ERRORLEVEL (
                echo Checking %%Q
                for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
            )
        )

    When running query termserver from the command line, the first two lines are:

        Known
        -------------------------

    ...followed by the list of terminal servers. However, I do not want to include these as part of the query user command. Also, there are about 4 servers I do not wish to include. When I supply UserID with this code, the program promptly exits. I know it has something to do with the if statement. Is it not possible to nest flow control inside the for-loop? I had tried setting a variable to exactly the names of the servers I wanted to check, but the iteration would end on the first server:

        set TermServers=Server1.Server2.Server3.Server7.Server8.Server10
        for /f "tokens=2 delims=.=" %%Q in ('set TermServers') do (
            echo Checking %%Q
            for /f "tokens=1" %%U in ('query user %UserID% /server:%%Q') do (echo %%Q)
        )

    I would prefer this second example over the first, if nothing else for cleanliness. Any help regarding either of these issues would be greatly appreciated.

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc., and I'm wondering which, if any, is appropriate for my use. First, I'm not building web applications (most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated: they need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:

    - A safe repository for the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    - This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly.
    - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs high throughput and low latency. By sending my data outside the process I am necessarily slowing performance, but I am trying to balance absolute raw speed against absolute protection of data.
    - This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk).

    I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at ServerFault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.]

    Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head against the wall if I can avoid it.

    Goals:
    - Create a 'road-warrior' capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
    - Enable CIFS access to a cloud-server based installation of Alfresco.
    - Allow eventual implementation of some form of single-sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.

    Given:
    - All servers will live in the public internet cloud (Rackspace Cloud Servers).
    - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
    - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops.

    Based on my research thus far, to accomplish this I'll need to:
    - Use OpenVPN with bridging enabled to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
    - Set up a Road Warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
    - Configure bridge addressing (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
    - Configure CIFS / Samba to accept the VPN IP address: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
    - Set up client software, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html

    Read the article

  • HELP!!! Upgrading to Windows 2008 R2 server has caused major issue with screen scraping remote server

    - by bobsov534
    I have three servers: one is Windows 2003 (A), another is Windows 2008 (B), and the third is also Windows 2008 (C). All of them are web servers. A and B contain classic ASP pages and are 32-bit servers; C contains ASP.NET pages and is a 64-bit server. The ASP pages on A and B use screen scraping to render the ASP.NET pages from C. When A is run, the ASP.NET page renders fine, meaning there are no broken images or file-not-found errors. When B is run, the images appear to be broken because it is looking for those images on server B instead of on server C itself. I believe this issue is caused by IIS 7 or 7.5, since IIS 6 has no problem scraping the remote server's pages. Can you please help me with a solution to this problem? This is sort of urgent, since upgrading to Windows Server 2008 R2 has become a major show stopper for us at the moment. Thanks in advance.

    Read the article

  • Automatic Deployment to Multiple Production Environments

    - by Brandon Montgomery
    I want to update an ASP .NET web application (including web.config file changes and database scripts) to multiple production environments - ideally with the click of a button. I do not have direct network connectivity to any of them. I think this means the application servers will have to "pull" the information required for updating the application, and run a script to update the application that resides on the server. Basically, I need a way to "publish" an update, and the servers see that update and automatically download and run it. I've thought about possibly setting up an SFTP server for publishing updates, and developing a custom tool which is installed on production environments which looks at the SFTP server every day and downloads application files if they are available. That would at least get the required files onto the servers, and I could use xcopy/robocopy and Migrator.NET to deploy the updates. Still not sure about config file changes, but that at least gets me somewhere. Is there any good solution for this scenario? Are there any tools that do this for you?
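    (To make the 'pull' idea concrete, here is a rough Python sketch of a poller that an application server could run on a schedule; the manifest URL, paths and the migration step are hypothetical placeholders, not a recommendation of any particular tool.)

        import json
        import shutil
        import urllib.request
        from pathlib import Path

        MANIFEST_URL = "https://updates.example.com/manifest.json"   # hypothetical publish location
        INSTALL_DIR = Path(r"C:\inetpub\wwwroot\myapp")              # hypothetical app path
        STATE_FILE = Path(r"C:\updater\current_version.txt")

        def current_version() -> str:
            return STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""

        def check_and_update() -> None:
            with urllib.request.urlopen(MANIFEST_URL) as resp:
                manifest = json.load(resp)   # e.g. {"version": "1.4.2", "package": "https://.../1.4.2.zip"}
            if manifest["version"] == current_version():
                return                       # nothing new has been published
            package_path, _headers = urllib.request.urlretrieve(manifest["package"])
            shutil.unpack_archive(package_path, str(INSTALL_DIR), format="zip")
            # Config transforms and database migration scripts (e.g. a Migrator.NET runner) would be
            # invoked here before the new version is recorded as current.
            STATE_FILE.write_text(manifest["version"])

        if __name__ == "__main__":
            check_and_update()   # schedule with Task Scheduler/cron to run daily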

    Read the article

  • Commercial Website architecture question

    - by Maxime ARNSTAMM
    Hello everyone, I have to write an architecture case study, but there are some things that I don't know, so I'd like some pointers on the following. The website must handle 5k simultaneous users. The backend is composed of a commercial software package, some web services, some message queues, and a database. I want to recommend Spring for the backend, to deal with the different elements and to expose some REST services. I also want to recommend Wicket for the front end (not the point here). What I don't know is: should I install the front end and the back end on the same Tomcat server or on two different ones? I am tempted to put two servers on the front, with a load balancer (no need for session replication in this case). But if I have two front servers, must I have two back servers? I don't want to create some kind of bottleneck. Based on what I read on this blog, a really huge load is handled by only one Tomcat for the first website mentioned, but I cannot find any more info on this, so I can't tell whether it is plausible. If you can enlighten me so I can go on with my case study, that would be really helpful. Thanks :)

    Read the article

  • How to store a file on a server(web container) through a Java EE web application?

    - by Yatendra Goel
    I have developed a Java EE web application. This application allows a user to upload a file through a browser. Once the user has uploaded the file, the application first stores it on the server (on which the application is running) and then processes it. At present, I am storing the file on the server as follows:

        try {
            FormFile formFile = programForm.getTheFile(); // formFile represents the uploaded file
            String path = getServlet().getServletContext().getRealPath("") + "/" + formFile.getFileName();
            System.out.println(path);
            file = new File(path);
            outputStream = new FileOutputStream(file);
            outputStream.write(formFile.getFileData());
        }

    Now, the problem is that this runs fine on some servers, but on others getServlet().getServletContext().getRealPath("") returns null, so the final path I get is null/filename and the file is not stored on the server. When I checked the API for the ServletContext.getRealPath() method, I found the following:

        public java.lang.String getRealPath(java.lang.String path)

        Returns a String containing the real path for a given virtual path. For example, the path "/index.html" returns the absolute file path on the server's filesystem that would be served by a request for "http://host/contextPath/index.html", where contextPath is the context path of this ServletContext. The real path returned will be in a form appropriate to the computer and operating system on which the servlet container is running, including the proper path separators. This method returns null if the servlet container cannot translate the virtual path to a real path for any reason (such as when the content is being made available from a .war archive).

    So, is there any other way I can store files on those servers that return null for getServlet().getServletContext().getRealPath("")?

    Read the article

  • How to display my server's current response time to an average user

    - by Jason
    Sorry, I'm not really sure of the right way to ask this one, so bear with me... We have a web application that runs on a set of servers at a data center (not in our offices). We want to be able to somehow 'advertise' to our clients/users that the availability or response time of our servers has met a standard throughout the day. I am being asked to come up with a standard metric that we can easily display on our login screen, showing the current "standard response time" checked every x minutes.

    My thinking is that I need to capture something like the results of a traceroute from a server (either in our office, Amazon, etc.) to one of the data center servers and come up with a Red/Yellow/Green type of notifier for the login screen, to let the user know that our tests are responding normally and that, if they are having delay issues, it could be their network or connection to the internet. We have lots of clients in rural areas with poor connectivity, and we are trying to let them know any slowness might be on their end, not ours. I've got the LAMP stack to work with, but this could also be some other system altogether, as long as it can update the main server with the results. I already have Pingdom reports available, but that's a bit more than people want to read sometimes. Any ideas on what I can do?
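    (As an illustration of the kind of check described above: a small script run from an outside host every x minutes could time a request to the application and write a status file the login screen reads. A minimal sketch in Python, with made-up thresholds and a placeholder URL; the production stack is LAMP, so this only shows the shape of the idea.)

        import json
        import time
        import urllib.request

        CHECK_URL = "https://app.example.com/health"   # placeholder endpoint to time
        GREEN_BELOW, YELLOW_BELOW = 0.5, 2.0           # seconds; thresholds are illustrative

        def measure() -> dict:
            start = time.monotonic()
            try:
                urllib.request.urlopen(CHECK_URL, timeout=10).read()
                elapsed = time.monotonic() - start
                status = "green" if elapsed < GREEN_BELOW else "yellow" if elapsed < YELLOW_BELOW else "red"
            except Exception:
                elapsed, status = None, "red"          # unreachable counts as red
            return {"checked_at": int(time.time()), "seconds": elapsed, "status": status}

        if __name__ == "__main__":
            # Run from cron every x minutes; the login page reads this JSON and shows the colored indicator.
            with open("/var/www/html/status.json", "w") as fh:
                json.dump(measure(), fh)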

    Read the article

  • Analyze log files from many languages using a single tool. And recommendations of logging frameworks

    - by Binary255
    We have a system built in lots of languages. The ones we are interested in logging, in order of priority, are:

    - C/C++
    - PHP
    - C#
    - Bash
    - Java

    Wish list:

    - If possible, we would like logging to be done from the above languages in such a way that we can use a single log-viewing tool for all of them (see the sketch below for one way a shared format might look). Ideally they would all be in the same format; failing that, in as few formats as possible and readable by as many log file viewers as possible.
    - If possible, logging to a single log file or a set of log files would be nice, with the possibility to filter based on the source language being logged.
    - We would like to copy the log files (or should we log to a database and copy that instead?) from multiple servers to a single location, so that we can analyze the log files from many servers at the same time (to see whether any of our servers execute a certain piece of legacy code, for example).
    - Being able to change the logging level at runtime would be nice.

    Thank you for reading! It's quite a complex problem; I hope someone has wrestled with it before and has some valuable information!
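    (A purely illustrative sketch of the shared-format idea: every language writes one JSON object per line with a common set of fields, so a single viewer can read and filter them. Field names here are made up, and the snippet is Python only for brevity; each of the listed languages would emit the same shape.)

        import json
        import socket
        import time

        def log_line(level: str, language: str, message: str, **extra) -> str:
            """Build one newline-delimited JSON record that any of the listed languages could also emit."""
            record = {
                "ts": time.strftime("%Y-%m-%dT%H:%M:%S%z"),
                "host": socket.gethostname(),
                "lang": language,      # lets the viewer filter by source language
                "level": level,
                "msg": message,
            }
            record.update(extra)       # free-form context, e.g. module or request id
            return json.dumps(record)

        # Example: flag that a legacy code path ran, so it can be found once logs are centralized.
        print(log_line("WARN", "python", "legacy code path executed", module="billing"))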

    Read the article

  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text etc. It's made with Python and wxPython. Each tool mentioned above is a class, and they all have polymorphic methods such as left_down(), mouse_motion(), hit_test() etc. The program manages a list of all drawn shapes -- when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too. So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner -- the user who drew it -- and to only allow delete/move/rescale operations on shapes owned by that person.

    I'm just wondering the best way to develop this. One person in the "session" will have to act as the server, as I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made in the application (see the rough sketch below)? Drawing in real time and broadcasting a message on each mouse-motion event would be costly in terms of performance, and things get worse the more users there are at a given time. Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).
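    (A rough sketch of the broadcast side, assuming the hosting peer relays one message per completed shape rather than per mouse-motion event; class and field names are hypothetical, and peer discovery, the receiving side and wx event-loop integration are all left out.)

        import json
        import socket
        import threading

        class ShapeBroadcaster:
            """Hosting peer: accepts client connections and relays finished-shape messages."""

            def __init__(self, port: int = 9876):
                self.clients = []
                self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                self.server.bind(("", port))
                self.server.listen()
                threading.Thread(target=self._accept_loop, daemon=True).start()

            def _accept_loop(self):
                while True:
                    client, _addr = self.server.accept()
                    self.clients.append(client)

            def broadcast(self, shape_dict: dict, owner: str):
                # Called once per *completed* shape (e.g. on mouse-up), so traffic scales with
                # shapes drawn rather than with every mouse-motion event.
                payload = (json.dumps({"owner": owner, "shape": shape_dict}) + "\n").encode()
                for client in list(self.clients):
                    try:
                        client.sendall(payload)
                    except OSError:
                        self.clients.remove(client)   # drop peers that have disconnected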

    Read the article

  • Random problem connecting to MySQL

    - by CharlesLeaf
    Environment: RHEL 5 servers, MySQL 5.1.43, PHP 5.1.6 (using MySQLi). Currently only available within our internal VPN network.

    Servers:
    - ServerA: web server
    - ServerB/C/D: database servers (1 master, 2 slaves)

    The error (on ServerA):

        [Tue May 25 11:12:17 2010] [error] [client CLIENTIP] PHP Warning: mysqli::real_connect() [function.mysqli-real-connect]: (HY000/2003): Can't connect to MySQL server on 'ServerB' (4) in /home/**/Database.php on line 67, referer: [website]

    Problem description: it appears that at completely random times, our website is unable to connect to one of the MySQL servers - usually the master. Apart from the aforementioned error message there is nothing to be found in any of the logs as far as I can see, and most of the time the connection is successful and everything works as it should; it's just that at completely random times this error pops up. There's no firewall blocking any internal traffic, and the timeout value is 3 but it doesn't take 3 seconds before it fails to connect. With the default mysql client I can connect from ServerA to ServerB, C and D and haven't encountered a problem yet. Does anyone have a clue what I might be overlooking or what could be the problem? I've run out of ideas myself.

    Read the article

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency?

    In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.

    Added: After considering the comment below, the second portion of the question is probably better off on Server Fault.
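    (For context on why the one-way question is hard: the usual trick is the NTP-style exchange of four timestamps, which yields round-trip delay and clock offset together, but it can only split the delay into two one-way latencies under a symmetry assumption - exactly the assumption the question wants to drop. A small sketch of the arithmetic:)

        def ntp_style_estimate(t0: float, t1: float, t2: float, t3: float):
            """
            t0: request sent      (local clock)
            t1: request received  (remote clock)
            t2: reply sent        (remote clock)
            t3: reply received    (local clock)
            Returns (round_trip_delay, estimated_clock_offset).
            """
            delay = (t3 - t0) - (t2 - t1)            # total time spent on the wire, both directions
            offset = ((t1 - t0) + (t2 - t3)) / 2.0   # remote minus local clock, assuming symmetric paths
            return delay, offset

        # With asymmetric paths, the asymmetry is absorbed into the offset estimate, which is why
        # true one-way measurement needs independently synchronized clocks (e.g. GPS or PTP).
        delay, offset = ntp_style_estimate(100.000, 100.040, 100.041, 100.070)
        print(delay, offset)   # -> roughly 0.069 and 0.0055 seconds for these made-up timestamps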

    Read the article

  • Migrating MachineKey from iis6 on old server to iis7 on new server

    - by MaseBase
    I am migrating our hosting environment to a totally new data center with new boxes and hardware and software... the whole deal. Our website cookies are encrypted using the machineKey, so when I make a request to my domain and point it to the new web server (by overriding the local hosts file), I get an error because the cookie cannot be decrypted, since the machine key is different. I'd like to avoid any problems a frequent user might have when they arrive at the new server for the first time. To the best of my knowledge, at this point I think I need to set the same MachineKey from our current servers on our new servers. This way, when past visitors with a cookie arrive at our website served by the new server, the cookie will be decrypted properly with the MachineKey it was encrypted with, and they will be logged in properly. My question is: where do I find my MachineKey value (on an IIS 6 / Win2k3 server) so I can use that value to set it statically on my new servers? I've pulled up my machine.config file, but it doesn't specify the key; it only specifies a configSection where the key can be defined. It's not in my web.config for the app or elsewhere. I did find this great article on some MachineKey and Web Garden woes (which could explain some other bugs I've been experiencing with regard to the machineKey).

    Update: I am back to this issue and am still faced with a similar problem. I have the MachineKey auto-generated on the IIS 6 server, but I need to get that exact key so I can set it explicitly and not have it auto-generated anymore. Any help is appreciated...

    Read the article

  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, which is partly why I have some test code that interacts with some test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite. My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable 'BACKEND_TEST' and a conditional statement which checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test. The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.

    Read the article

  • how to seamlessly integrate subversion and git?

    - by mattv
    I'm looking for tips on how to seamlessly integrate Subversion and Git, for deploying web sites by a small team of web developers. We each have our own development versions of our sites on our local machines. We also have dev, staging, and live servers. As our team has grown, we haven't updated our revision control and deployment strategies accordingly.

    We had all been checking into the trunk of a shared Subversion repository. Both the dev & staging servers ran from a checkout of the trunk, so updating them involved running "svn update", while the live server ran as an export from trunk, which required an "svn export" to get the latest code. In either case, we would often update just certain files by updating or exporting just those files or directories. That worked okay when there were just one or two developers. However, a big downside was that we couldn't point to an individual tag that represented what was currently on live at any given time.

    In keeping with corporate policy, we'd like to continue to use Subversion to store what we're now calling our "production branch," which will be what goes onto staging and live. However, we would like to use Git on our local and development sites. We especially like the idea of easier merges and being able to "cherry pick" updates that need to go live. We had initially planned on using git-svn, but it doesn't seem to work well in a shared environment such as our dev or staging servers. Anyone else doing something like this? What's the best way to make it work? Or are we making it more difficult than it should be?

    Read the article

  • mysql: Cannot load from mysql.proc. The table is probably corrupted

    - by Alex
    MySQL was started with:

        /usr/bin/mysqld_safe --datadir=/srv/mysql/myDB --log-error=/srv/mysql/logs/mysqld-myDB.log --pid-file=/srv/mysql/pids/mysqld-myDB.pid --user=mysql --socket=/srv/mysql/sockets/mysql-myDB.sock --port=3700

    but when I try to do anything I get:

        ERROR 1548 (HY000) at line 1: Cannot load from mysql.proc. The table is probably corrupted

    How do I fix it?

        $ mysql -V
        mysql Ver 14.14 Distrib 5.1.58, for debian-linux-gnu (x86_64) using readline 6.2
        $ lsb_release -a
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.10
        Release:        11.10
        Codename:       oneiric
        $ sudo mysql_upgrade -uroot -p<password> --force
        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        mysql.columns_priv                OK
        mysql.db                          OK
        mysql.event                       OK
        mysql.func                        OK
        mysql.general_log
        Error    : You can't use locks with log tables.
        status   : OK
        mysql.help_category               OK
        mysql.help_keyword                OK
        mysql.help_relation               OK
        mysql.help_topic                  OK
        mysql.host                        OK
        mysql.ndb_binlog_index            OK
        mysql.plugin                      OK
        mysql.proc                        OK
        mysql.procs_priv                  OK
        mysql.servers                     OK
        mysql.slow_log
        Error    : You can't use locks with log tables.
        status   : OK
        mysql.tables_priv                 OK
        mysql.time_zone                   OK
        mysql.time_zone_leap_second       OK
        mysql.time_zone_name              OK
        mysql.time_zone_transition        OK
        mysql.time_zone_transition_type   OK
        mysql.user                        OK
        Running 'mysql_fix_privilege_tables'...
        OK
        $ mysqlcheck --port=3700 --socket=/srv/mysql/sockets/mysql-my-env.sock -A -udata_owner -pdata_owner
        <all tables> OK

    UPD1: For example, I'm trying to remove a procedure:

        mysql> DROP PROCEDURE IF EXISTS mysql.myproc;
        ERROR 1548 (HY000): Cannot load from mysql.proc. The table is probably corrupted
        mysql>

    UPD2:

        mysql> REPAIR TABLE mysql.proc;
        +------------+--------+----------+-------------------------------------------------------------------------------------+
        | Table      | Op     | Msg_type | Msg_text                                                                            |
        +------------+--------+----------+-------------------------------------------------------------------------------------+
        | mysql.proc | repair | error    | 1 when fixing table                                                                 |
        | mysql.proc | repair | Error    | Can't change permissions of the file '/srv/mysql/myDB/mysql/proc.MYD' (Errcode: 1) |
        | mysql.proc | repair | status   | Operation failed                                                                    |
        +------------+--------+----------+-------------------------------------------------------------------------------------+
        3 rows in set (0.04 sec)

    This is strange, because:

        $ ls -l /srv/mysql/myDB/mysql/proc.MYD
        -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 /srv/mysql/myDB/mysql/proc.MYD

    UPD3:

        $ ls -la /srv/mysql/myDB/mysql
        total 8930
        drwxrwxrwx  2 mysql root 2480 2012-02-21 13:13 .
        drwxrwxrwx 13 mysql root  504 2012-02-21 19:01 ..
-rwxrwxrwx 1 mysql root 8820 2012-02-20 15:50 columns_priv.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 columns_priv.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 columns_priv.MYI -rwxrwxrwx 1 mysql root 9582 2012-02-20 15:50 db.frm -rwxrwxrwx 1 mysql root 8360 2011-12-08 02:14 db.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 db.MYI -rwxrwxrwx 1 mysql root 54 2011-11-12 15:42 db.opt -rwxrwxrwx 1 mysql root 10223 2012-02-20 15:50 event.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 event.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 event.MYI -rwxrwxrwx 1 mysql root 8665 2012-02-20 15:50 func.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 func.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 func.MYI -rwxrwxrwx 1 mysql root 8700 2012-02-20 15:50 help_category.frm -rwxrwxrwx 1 mysql root 21497 2011-11-12 15:42 help_category.MYD -rwxrwxrwx 1 mysql root 3072 2012-02-20 15:50 help_category.MYI -rwxrwxrwx 1 mysql root 8612 2012-02-20 15:50 help_keyword.frm -rwxrwxrwx 1 mysql root 88650 2011-11-12 15:42 help_keyword.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_keyword.MYI -rwxrwxrwx 1 mysql root 8630 2012-02-20 15:50 help_relation.frm -rwxrwxrwx 1 mysql root 8874 2011-11-12 15:42 help_relation.MYD -rwxrwxrwx 1 mysql root 16384 2012-02-20 15:50 help_relation.MYI -rwxrwxrwx 1 mysql root 8770 2012-02-20 15:50 help_topic.frm -rwxrwxrwx 1 mysql root 414320 2011-11-12 15:42 help_topic.MYD -rwxrwxrwx 1 mysql root 20480 2012-02-20 15:50 help_topic.MYI -rwxrwxrwx 1 mysql root 9510 2012-02-20 15:50 host.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 host.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 host.MYI -rwxrwxrwx 1 mysql root 8554 2011-11-12 15:42 innodb_monitor.frm -rwxrwxrwx 1 mysql root 98304 2011-11-12 15:55 innodb_monitor.ibd -rwxrwxrwx 1 mysql root 8592 2012-02-20 15:50 inventory.frm -rwxrwxrwx 1 mysql root 76 2011-11-12 15:42 inventory.MYD -rwxrwxrwx 1 mysql root 2048 2012-02-20 15:50 inventory.MYI -rwxrwxrwx 1 mysql root 8778 2012-02-20 15:50 ndb_binlog_index.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 ndb_binlog_index.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 ndb_binlog_index.MYI -rwxrwxrwx 1 mysql root 8586 2012-02-20 15:50 plugin.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 plugin.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 plugin.MYI -rwxrwxrwx 1 mysql root 9996 2012-02-20 15:50 proc.frm -rwxrwxrwx 1 mysql root 3983252 2012-02-03 22:51 proc.MYD -rwxrwxrwx 1 mysql root 36864 2012-02-21 13:23 proc.MYI -rwxrwxrwx 1 mysql root 8875 2012-02-20 15:50 procs_priv.frm -rwxrwxrwx 1 mysql root 1700 2011-11-12 15:42 procs_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 procs_priv.MYI -rwxrwxrwx 1 mysql root 3977704 2012-02-21 13:23 proc.TMD -rwxrwxrwx 1 mysql root 8800 2012-02-20 15:50 proxies_priv.frm -rwxrwxrwx 1 mysql root 693 2011-11-12 15:42 proxies_priv.MYD -rwxrwxrwx 1 mysql root 5120 2012-02-20 15:50 proxies_priv.MYI -rwxrwxrwx 1 mysql root 8838 2012-02-20 15:50 servers.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 servers.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 servers.MYI -rwxrwxrwx 1 mysql root 8955 2012-02-20 15:50 tables_priv.frm -rwxrwxrwx 1 mysql root 5957 2011-11-12 15:42 tables_priv.MYD -rwxrwxrwx 1 mysql root 8192 2012-02-20 15:50 tables_priv.MYI -rwxrwxrwx 1 mysql root 8636 2012-02-20 15:50 time_zone.frm -rwxrwxrwx 1 mysql root 8624 2012-02-20 15:50 time_zone_leap_second.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_leap_second.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_leap_second.MYI 
-rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone.MYI -rwxrwxrwx 1 mysql root 8606 2012-02-20 15:50 time_zone_name.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_name.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_name.MYI -rwxrwxrwx 1 mysql root 8686 2012-02-20 15:50 time_zone_transition.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition.MYI -rwxrwxrwx 1 mysql root 8748 2012-02-20 15:50 time_zone_transition_type.frm -rwxrwxrwx 1 mysql root 0 2011-11-12 15:42 time_zone_transition_type.MYD -rwxrwxrwx 1 mysql root 1024 2012-02-20 15:50 time_zone_transition_type.MYI -rwxrwxrwx 1 mysql root 10630 2012-02-20 15:50 user.frm -rwxrwxrwx 1 mysql root 5456 2011-11-12 21:01 user.MYD -rwxrwxrwx 1 mysql root 4096 2012-02-20 15:50 user.MYI

    Read the article
