Search Results

Search found 12497 results on 500 pages for 'linked servers'.

Page 452/500

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server. The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on, so I'm familiar with everything that may need to be done to perform an update:

      - Copy website files
      - Replace the service binary
      - Install updated components in the GAC
      - Configure IIS
      - Update database schema

    After some research it seems that, to reduce deployment time and to let the other sysadmins handle deployment, I want to deploy all of these as an MSI - except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates. It would also be nice to modularize it so that, for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask of a single package?
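
    For orientation, a sketch of what the MSI route can look like with the WiX toolset (an assumption - any MSI authoring tool works, and every name and GUID below is a placeholder). Each deployable piece becomes a feature the person running the installer can tick on or off, while the product still shows up once in Add/Remove Programs:

        <?xml version="1.0" encoding="UTF-8"?>
        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
          <Product Id="*" Name="MyProduct" Language="1033" Version="1.0.0.0"
                   Manufacturer="MyCompany" UpgradeCode="PUT-GUID-HERE">
            <Package InstallerVersion="200" Compressed="yes" />
            <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />

            <!-- One feature per deployable piece; features appear as the
                 selectable "components" in the installer UI. -->
            <Feature Id="Websites" Title="Web Sites" Level="1">
              <ComponentGroupRef Id="WebsiteFiles" />
            </Feature>
            <Feature Id="Service" Title="Background Service" Level="1">
              <ComponentGroupRef Id="ServiceFiles" />
            </Feature>
          </Product>
        </Wix>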

    Read the article

  • How could application installations/configurations be easier in Linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and a lot of reading of manuals and tutorials before you have things running the way you want. I know it gets much easier with time, and apt-get installation on Ubuntu/Debian is heading in the right direction. But how could Linux become more user-friendly in the future? I thought more could be automated, as in an IDE: for example, typing svn could present all the subcommands, with a description of each as you move between them with the keyboard. That's just one example. Another is navigating between folders in the terminal: right now you have to type a lot just to jump from one directory to another, and more automation would be welcome there too. I know these extra features would add some load to the server, but it's 2010 now; features like these are not heavy on the CPU, and they make a server friendlier to maintain instead of frightening people off. What do you think - should or could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done the Unix way, but maybe we should reinvent the wheel in some areas, because as it stands, doing easy tasks is so repetitive and needlessly difficult. It should be easier, I think.

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a fail-over cluster with our current R805 that is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other a spare desktop that was lying around), with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on a spare external computer, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something we can use to work out how to set up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • ASP.NET Webforms site using HTTPCookie with 100 year timeout times out after 20 minutes

    - by Rob
    I have a site that is using Forms Auth. The client does not want the site session to expire at all for users. In the login page codebehind, the following code is used:

        // user passed validation
        FormsAuthentication.Initialize();

        // grab the user's roles out of the database
        String strRole = AssignRoles(UserName.Text);

        // creates forms auth ticket with expiration date of 100 years from now and makes it persistent
        FormsAuthenticationTicket fat = new FormsAuthenticationTicket(1, UserName.Text,
            DateTime.Now, DateTime.Now.AddYears(100), true, strRole,
            FormsAuthentication.FormsCookiePath);

        // create a cookie and throw the ticket in there, set expiration date to 100 years from now
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(fat)) { Expires = DateTime.Now.AddYears(100) };

        // add the cookie to the response queue
        Response.Cookies.Add(cookie);

        Response.Redirect(FormsAuthentication.GetRedirectUrl(UserName.Text, false));

    The web.config auth section looks like this:

        <authentication mode="Forms">
          <forms name="APLOnlineCompliance" loginUrl="~/Login.aspx"
                 defaultUrl="~/Course/CourseViewer.aspx" />
        </authentication>

    When I log into the site I do see the cookie correctly being sent to the browser and passed back up. However, when I walk away for 20 minutes or so, come back, and try to do anything on the site, the login window reappears. This solution was working for a while on our servers - now the problem is back. It doesn't occur on my local dev box running Cassini in VS2008. Any ideas on how to fix this?
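
    One pattern that matches these symptoms - an assumption about the cause, not a confirmed diagnosis: with no explicit machineKey in web.config, the keys used to encrypt the ticket are auto-generated, and the IIS application pool's default idle timeout of 20 minutes recycles the app and can produce new keys. The old cookie then fails to decrypt, so the user lands back at the login page regardless of the ticket's 100-year lifetime. Pinning the key rules this out; the values below are placeholders you generate yourself:

        <system.web>
          <machineKey validation="SHA1" decryption="AES"
                      validationKey="[128 hex chars you generate]"
                      decryptionKey="[48 hex chars you generate]" />
        </system.web>

    Cassini doesn't recycle mid-session the way an idle IIS app pool does, which could also explain why the dev box never shows the problem.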

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let you download software and provide serial keys to use with it. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file, and during installation there was no serial number field (whereas Visual Studio 2008 had this field at the beginning of the installation process). It is the same with SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that products as expensive as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose the serial is embedded into the product itself, either in the installation binaries or in a separate config file - already in the ISO I download, so I don't have to enter it. So my question is: how is this done technically? Is each 2 GB ISO generated on demand on the server to embed a serial each time the ISO is requested? If so, that would have a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by not asking the user to enter a serial number), but I really don't see how to do it with low impact on server performance.

    Read the article

  • What's a way for a client to automatically resolve the IP address of a server?

    - by zooropa
    The project I am working on is a client/server architecture. In a LAN environment, I want the clients to be able to automatically determine the server's address; I want to avoid having to manually configure each client with the IP address of the server. What is the best way to do this? Some alternatives I have thought about:

      - The server could listen for broadcast packets from the clients. The message from the client would be a request for the IP address of the server, and the server would respond with its address.
      - The machine running my project's server could also run a BIND server, and the LAN's router could be configured to use it as one of its DNS servers. I think I saw that there is a BIND library - does that mean I can build the BIND service into my server so that BIND doesn't have to be installed separately?

    Any other ideas? What have you done in the past? What are the pros/cons of these approaches and others that might be suggested? Thanks for your help!
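
    For the broadcast option, a minimal sketch of the server side in C# (the port number and message strings are placeholder choices):

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;

        class DiscoveryResponder
        {
            static void Main()
            {
                UdpClient listener = new UdpClient(15000);   // discovery port, by convention only
                while (true)
                {
                    IPEndPoint client = new IPEndPoint(IPAddress.Any, 0);
                    byte[] request = listener.Receive(ref client);   // blocks until a datagram arrives
                    if (Encoding.ASCII.GetString(request) == "WHERE_IS_SERVER")
                    {
                        // Reply straight to the asker; the payload could also
                        // carry the service's real TCP port.
                        byte[] reply = Encoding.ASCII.GetBytes("SERVER_HERE");
                        listener.Send(reply, reply.Length, client);
                    }
                }
            }
        }

    The client sends "WHERE_IS_SERVER" to IPAddress.Broadcast on the same port; whichever endpoint Receive reports back is the server's address.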

    Read the article

  • Google Analytics Script inside XML File

    - by mnml
    I would like to know if I can add my ecommerce analytics tags inside an XML-formatted file:

        <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXX-X']);
          _gaq.push(['_trackPageview']);
          _gaq.push(['_addTrans',
            '1234',           // order ID - required
            'Acme Clothing',  // affiliation or store name
            '11.99',          // total - required
            '1.29',           // tax
            '5',              // shipping
            'San Jose',       // city
            'California',     // state or province
            'USA'             // country
          ]);
          // _addItem might be called for every item in the shopping cart,
          // where your ecommerce engine loops through each item in the cart
          // and prints out _addItem for each
          _gaq.push(['_addItem',
            '1234',           // order ID - required
            'DD44',           // SKU/code
            'T-Shirt',        // product name
            'Green Medium',   // category or variation
            '11.99',          // unit price - required
            '1'               // quantity - required
          ]);
          _gaq.push(['_trackTrans']); // submits transaction to the Analytics servers
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga);
          })();
        </script>
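
    One detail worth checking if this file really is parsed as XML: characters such as & and < inside an inline script are not legal XML character data, so the script body is usually wrapped in a CDATA section to survive an XML parser, along these lines:

        <script type="text/javascript">
        //<![CDATA[
          var _gaq = _gaq || [];
          // ... rest of the tracking code unchanged ...
        //]]>
        </script>

    The // before the CDATA markers keeps the wrapper itself from breaking the script when a browser executes it as plain JavaScript.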

    Read the article

  • How do I test if storage-conf is being loaded in Cassandra 0.7.3?

    - by user657253
    I have installed Cassandra and gotten it working on two machines, and have followed the instructions to hook them up to each other by configuring the storage-conf.xml files. Both machines respond well to thrift and to command-line cassandra. This is the tutorial I used to set up the storage-conf.xml files. The tutorial says that if I run netstat, I should NOT see Cassandra bound to 127.0.0.1 on my seed node; I should see it bound to my internal IP, which I have configured in the storage-conf.xml file. I have rebooted the servers and relaunched Cassandra. Still, I see the localhost address instead of the correct internal IP address. Is my .yaml file overriding the storage-conf.xml file? If so, how do I delete the appropriate things in the .yaml? Or how do I tell Cassandra to look for my storage-conf.xml file? A few things I have tried: renaming the cassandra.yaml file - what happens is that Cassandra will not load. If I rename the storage-conf.xml, Cassandra does load. When I installed Cassandra, it did not come with a storage-conf.xml file; I had to grab it off the Apache wiki.
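
    For what it's worth, 0.7 moved configuration from storage-conf.xml to conf/cassandra.yaml, which would explain both observations: the node refuses to start without the .yaml, and it never reads the XML file at all (storage-conf.xml is the 0.6-era format). Under that assumption, the binding settings belong in the .yaml; a sketch with 0.7-style key names and placeholder addresses:

        # conf/cassandra.yaml
        cluster_name: 'Test Cluster'
        listen_address: 192.168.1.10    # gossip binds here instead of 127.0.0.1
        rpc_address: 192.168.1.10       # thrift clients connect here
        seeds:
            - 192.168.1.10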

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code-base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day, run a command that takes a list of files whose commits over the day get merged with their local prod - and only those files - even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, then commit to prod. My problem with that is losing the history of their commits over the day.
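
    For reference, that "simplest thing" spelled out as commands (branch and file names are placeholders):

        git checkout prod
        # take the end-of-day content of just these files from master,
        # no matter how many mixed commits touched them
        git checkout master -- lib/foo.c lib/bar.c
        git commit -m "promote foo.c and bar.c to prod"

    As noted, this records a single new commit on prod rather than replaying the day's individual commits, so the per-commit history survives only on master.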

    Read the article

  • Free solution for automatic updates with .NET/C#?

    - by a2h
    Yes, from searching I can see this has been asked time and time again. Here's the backstory: I'm an individual hobbyist developer with zero budget. A program I've been developing has needed constant bugfixes, and my users and I are getting tired of having to update manually. Me, because my current solution sucks:

      - Manually FTP to my website
      - Update a file "newest.txt" with the newest version
      - Update index.html with a link to the newest version
      - Hope for people to see the "there's an update" message
      - Have them manually download the update

    and whenever I screw up an update, I get pitchforks. Users, because, well: "Are you ever going to implement auto-update?" "Will there ever be an auto-update feature?" Over the past few hours I have looked into:

      - http://windowsclient.net/articles/appupdater.aspx - I can't comprehend the documentation.
      - http://www.codeproject.com/KB/vb/Auto_Update_Revisited.aspx - Doesn't appear to support anything other than working with files that aren't in use.
      - http://wyday.com/wyupdate/ - wyBuild isn't free, and the file specification is simply too complex. Maybe if I was under a company paying me I could spend the time, but then I may as well pay for wyBuild.
      - http://www.kineticjump.com/update/default.aspx - Ditto.
      - ClickOnce - Workarounds for implementing launching on startup are massive, horrendous, and not worth it for such a simple feature. Publishing is a pain; manual FTP and replacement of all files is required for servers without FrontPage Extensions.

    I'm pretty much ready to throw in the towel right now and strangle myself. And then I think about Sparkle...
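
    The check itself is the easy part; a minimal sketch in C# that reuses the existing "newest.txt" scheme (the URLs and the Updater.exe hand-off helper are placeholders, not an existing tool):

        using System;
        using System.Diagnostics;
        using System.Net;
        using System.Reflection;

        class UpdateCheck
        {
            static void Main()
            {
                using (WebClient web = new WebClient())
                {
                    // newest.txt holds a bare version string such as "1.2.0"
                    Version latest = new Version(web.DownloadString("http://example.com/newest.txt").Trim());
                    Version current = Assembly.GetExecutingAssembly().GetName().Version;
                    if (latest > current)
                    {
                        // Download beside the running exe, then hand off to a tiny
                        // helper that waits for this process to exit, swaps the
                        // files, and relaunches - an exe can't overwrite itself.
                        web.DownloadFile("http://example.com/MyApp-" + latest + ".exe", "MyApp.new.exe");
                        Process.Start("Updater.exe", Process.GetCurrentProcess().Id.ToString());
                    }
                }
            }
        }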

    Read the article

  • Identify server that made call to web service

    - by sleepybobos
    I am working within an intranet environment. We have both a production and a development SharePoint server (WSS 3), and a 3rd-party workflow product which runs on top of SharePoint, installed on both servers. The workflow product can call web services I have written which are hosted on our web server. How would I have the web services determine which SharePoint server made the call - the production or the development server? I would then use this information to serve environment-specific settings from web.config or a database, etc. Currently the site hosting the web services is set up to allow anonymous access, so code such as System.Web.HttpContext.Current.User.Identity.Name returns an empty string. If Windows authentication is used, it returns the identity of the currently logged-in user, which is of no use in identifying the server the call was made from. I need a push in the right direction to address what I believe is probably a common scenario, please.
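
    One direction worth considering: even with anonymous access, the HTTP request still carries the caller's address, so the service can branch on that. A minimal sketch (the IP literals are placeholders for the two SharePoint boxes):

        using System.Web;
        using System.Web.Services;

        public class ConfigService : WebService
        {
            [WebMethod]
            public string WhichEnvironment()
            {
                string callerIp = HttpContext.Current.Request.UserHostAddress;
                if (callerIp == "10.0.0.10")        // production SharePoint server
                    return "production";
                if (callerIp == "10.0.0.20")        // development SharePoint server
                    return "development";
                return "unknown";
            }
        }

    Keeping the IP-to-environment map in web.config appSettings rather than hard-coded would make this less brittle when servers move.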

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read, from a database on our network, to load a gridview inside an update panel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large db reads, but it takes on the order of seconds, not minutes. This all works great until I put it up on the server - it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the gridview. I'm baffled as to why it doesn't perform the same as it does from Visual Studio. (It is in release mode and I have taken off the debug flag.) I have since tried things like eliminating unneeded update panels and dropping the Ajax tool, but nothing has made it any faster in production. As far as I know it is not the database, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. I am wondering: where do I look next? Has anyone else had this problem before? Could it be caused by update panels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • Pros and cons of each JEE server for developing within Eclipse

    - by Thorbjørn Ravn Andersen
    Eclipse JEE has a lot of server adapters allowing development against many different application servers, like JBoss, Glassfish and WebSphere. Frequently you can benefit from using a different server for developing new features than for production, simply because it may be able to deploy changes much faster; when the functionality is in place, you can iron out bugs on the production platform. Unfortunately, finding that server is a time-consuming process, where others' experience is invaluable. If you have experience with any server with an Eclipse server adapter, please add your findings and your recommendation. I believe the following is of interest:

      - Does saving a file trigger an update in the server, giving edit, save, reload-browser functionality?
      - How fast is a deployment? (Saved a JSP? A Java class? A static file?)
      - Can the actual server be downloaded by the Server Adapter wizard, allowing for easy installation?
      - Are there known bugs and issues, with suitable work-arounds?
      - Is debugging fully supported? Is profiling?
      - Would you recommend this server?

    Note: Eclipse can also work with Tomcat, but that is a web container, which cannot deploy EAR files.

    Read the article

  • Stale connection with Pheanstalk

    - by token47
    I'm using beanstalkd to offload some work to other machines. The setup is a bit unusual: the server is on the internet (public IP), but the consumers are behind ADSL lines in some people's homes. So a Linux client box goes out through a dynamic IP and connects to the server to get a job. It's all PHP and I'm using the pheanstalk library. Everything runs smoothly for some time, but then the ADSL changes the IP (every 24 hours the provider forces a disconnect/reconnect) and the client just hangs, never coming out of "reserve". I thought that putting a timeout on the reserve would help, but it didn't. As it turns out, the client issues a command and blocks; it never checks the timeout. It just issues a reserve-with-timeout (instead of a simple reserve), and it is the server's responsibility to return a TIMED_OUT when the timeout occurs. The problem is, the connection is broken (but TCP/IP doesn't know about it yet, until either side tries to talk to the other), and if the client is blocked reading, it will never return. The library seems to have support for some kinds of timeouts locally (for example when trying to connect to the server), but it does not seem to contemplate this scenario. How can I detect the stale connection and force a reconnect? Is there some kind of keepalive in the protocol (and in pheanstalk itself)? Thanks!

    Read the article

  • Can't create new "enterprise" project in NetBeans

    - by Danny
    I'm running NetBeans 6.7.1 on Ubuntu Karmic. On the Services tab I added a new GlassFish v3 Prelude server, installing it to my home directory using the download button. I started the server and opened localhost:4848 to verify I can get into the admin panel. Then I did File > New Project and created a new Java Web > Web Application. At the configuration step of the wizard it preselected GlassFish v3 Prelude and Java EE 5. I accepted and did a test run, and the project ran just fine. So now I did File > New Project and attempted to create a Java EE > EJB Module. When I arrive at the server configuration stage of the wizard, it doesn't show any servers in the server dropdown list (it's empty), and it also doesn't see any version of Java in the "Java EE version" dropdown list. This also happens for the other Java EE project types. I can't get my head around why I can make a new web application but not an EJB module. Can anyone provide any insight into why it might not be seeing that I have Java or GlassFish installed when I try to make a new Java EE project, but sees them when I make a Java Web project?

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize it and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if you smart people can think of any intuitive ways of going at it - and there may be multiple ways - please let me know :)
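
    For reference, one standard lever for tracker-style workloads is batching many rows into a single multi-row INSERT, which cuts network round trips and per-statement overhead dramatically. A sketch against a hypothetical hits table:

        -- one round trip and one statement instead of three
        INSERT INTO hits (site_id, url, hit_time) VALUES
            (1, '/index.html', NOW()),
            (1, '/about.html', NOW()),
            (2, '/index.html', NOW());

    The flat-file-plus-cron approach mentioned above amounts to the same idea: accumulate rows cheaply, then load them in bulk (e.g. with LOAD DATA INFILE) instead of paying for 20k individual statements per second.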

    Read the article

  • Multi-account sync with Dropbox API

    - by Dan
    I'm trying to create a web app that lets users share files with each other through Dropbox. At the moment, Dropbox handles all the sharing, and there's one central Dropbox account running on the web server that shares the folder with the people who want it. I'm trying to change it so people don't have to accept a new folder invitation each time. I'd like to have them authorize my app to access an app folder in their Dropbox account, and all their shared folders would go inside there. Any changes they make would get noticed by the app on the server and synced to everyone else's folders. There are a couple of things I'm having trouble figuring out to make this work:

      - Do I need to make repeated calls to /delta for every account? I can't think how else I'd do this, but that sounds like it would quickly turn into thousands of requests a minute just polling for updates.
      - When someone adds a file, do I have to upload it once for each account? That seems like a huge waste of bandwidth. I've looked into using /copy_ref, which I think would add a file to another user's account without my app ever touching it, but my app's web interface also allows users to upload files directly to my server, which would then need to be synced with everyone else's folders. That file isn't on Dropbox's servers yet, so /copy_ref obviously wouldn't work.

    For a little extra context, my app is written in node.js, and I've been playing with this library to interface with Dropbox, which uses their REST API.

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day, and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET, with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL Server over the network - large inserts, searches, updates, etc. - along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

      - Security
      - Concurrent access
      - Performance with large amounts of data
      - Amount of time to do such a massive rewrite/switch
      - Lack of transactions
      - PITA to map relational data to flat files

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.
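
    A minimal sketch of such a timing harness, for shape only - the share path, connection string, and TestRows table are all placeholders, and a real demo would also time concurrent readers and searches:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSql
        {
            const int Rows = 100000;

            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();

                // Flat file: appends are cheap, but there is no index, no
                // locking guarantee, and every later search is a full scan.
                using (StreamWriter w = File.AppendText(@"\\someserver\share\test.dat"))
                    for (int i = 0; i < Rows; i++)
                        w.WriteLine(i + ",somevalue," + DateTime.UtcNow.Ticks);
                Console.WriteLine("Flat file: " + sw.ElapsedMilliseconds + " ms");

                sw.Reset();
                sw.Start();
                using (SqlConnection conn = new SqlConnection(
                    "Server=testbox;Database=Demo;Integrated Security=SSPI"))
                {
                    conn.Open();
                    SqlTransaction tx = conn.BeginTransaction();
                    SqlCommand cmd = new SqlCommand(
                        "INSERT INTO TestRows (Id, Val, Ts) VALUES (@i, @v, @t)", conn, tx);
                    cmd.Parameters.AddWithValue("@i", 0);
                    cmd.Parameters.AddWithValue("@v", "somevalue");
                    cmd.Parameters.AddWithValue("@t", DateTime.UtcNow);
                    for (int i = 0; i < Rows; i++)
                    {
                        cmd.Parameters["@i"].Value = i;
                        cmd.Parameters["@t"].Value = DateTime.UtcNow;
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
                Console.WriteLine("SQL Server: " + sw.ElapsedMilliseconds + " ms");
            }
        }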

    Read the article

  • Getting registry information using Python

    - by Willy
    I am trying to pull registry info from many servers and put it all into one txt file. I got the code working fine in a .bat file, but I hear there is a way simpler way to do this in Python. I am intrigued and delighted to hear this. Can anyone help finish my code? My working bat file:

        echo rfsqlcl01app >> foo.txt
        reg query "\\rfsqlcl01app\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGSQL01 >> foo.txt
        reg query "\\GLADGSQL01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo GLADGWEB01 >> foo.txt
        reg query "\\GLADGWEB01\HKEY_LOCAL_MACHINE\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\McShield\Configuration\Default" >> foo.txt
        echo PAPERVISION >> foo.txt

    My Python code structure:

        >>> server_list = open('server_test.txt', 'r')
        >>> for line in server_list:
        ...     print r'reg query \\%s\blah\blah\blah' % line.strip()

        reg query \\foo\blah\blah\blah
        reg query \\moo\blah\blah\blah
        reg query \\boo\blah\blah\blah
        >>> server_list.close()

    Read the article

  • Checking whether images loaded after page load

    - by johkar
    Determining whether an image has loaded reliably seems to be one of the great JavaScript mysteries. I have tried various scripts/script libraries which check the onload and onerror events, but I have had mixed and unreliable results. Can I reliably just check the complete property (IE 6-8 and Firefox), as I have done in the script below? I simply have a page which lists out servers, and I link to an on.gif on each server. If it doesn't load, I just want to load an off.gif instead. This is just for internal use... I just need it to be reliable in showing the status!!!

        <script type="text/javascript">
        var allimgs = document.getElementsByTagName('img');

        function checkImages() {
            for (i = 0; i < allimgs.length; i++) {
                var result = Math.random();
                allimgs[i].src = allimgs[i].src + '?' + result;
            }
            serverDown();
            setInterval('serverDown()', 5000);
        }
        window.onload = checkImages;

        function serverDown() {
            for (i = 0; i < allimgs.length; i++) {
                var imgholder = new Image();
                imgholder.src = allimgs[i].src;
                if (!allimgs[i].complete) {
                    allimgs[i].src = 'off.gif';
                }
            }
        }
        </script>

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site, and we are experiencing UTF-8 corruption issues we would like to resolve. The system runs on symfony 1.0 with Propel. On our development server, we are running PHP 5.2.0 and MySQL 5.0.32; we do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 are running, and certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption:

      - Uncorrupted: ô´  Corrupted: ô?
      - Uncorrupted: S   Corrupted: ?

    We are currently using the following techniques on both development and live servers:

      - Executing the following queries prior to any other queries:

            SET NAMES 'utf8' COLLATE 'utf8_unicode_ci'
            SET CHARSET 'utf8'

      - Setting the <meta> Content-Type value to:

            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />

      - Adding the following to our .htaccess file:

            AddDefaultCharset utf-8

      - Using mb_* (multibyte) PHP functions where necessary.
      - Being sure to set database columns to use utf8_unicode_ci collation.

    These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection), but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.

    Read the article

  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
    I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so they cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves, and the slaves will periodically respond with notifications, so there would be bi-directional communication. I was originally thinking of RPC-based systems where each side would be a server, but that could get complicated, so I'd prefer a mechanism with an open socket over which they talk back and forth. I'm looking for a low-latency communication mechanism where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:

      - RMI
      - JMS: Built into Java; the "slave" clients would connect to the existing ConnectionFactory in the application server.
      - JAX-WS/RS: Both master and slave would be servers exposing an RPC interface for bi-directional communication.
      - JGroups/Hazelcast: Use shared distributed data structures to facilitate communication.
      - Memcached/MongoDB: Use these as "queues" to facilitate communication, though the clients would have to poll, so there would be some latency.
      - Thrift: This does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
      - WebSocket/Raw Socket: This would work, but would require a lot more custom code than I'd like.

    Is there any technology I'm missing?

    Edit: I've also looked at JMX: have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.

    Read the article

  • Why is JavaMail Transport.send() a static method?

    - by skiphoppy
    I'm revising code I did not write that uses JavaMail, and I'm having a little trouble understanding why the JavaMail API is designed the way it is. I have the feeling that if I understood, I could be doing a better job. We call:

        transport = session.getTransport("smtp");
        transport.connect(hostName, port, user, password);

    So why is Eclipse warning me that this:

        transport.send(message, message.getAllRecipients());

    is a call to a static method? Why am I getting a Transport object and providing settings that are specific to it if I can't use that object to send the message? How does the Transport class even know what server and other settings to use to send the message? It's working fine, which is hard to believe. What if I had instantiated Transport objects for two different servers; how would it know which one to use? In the course of writing this question, I've discovered that I should really be calling:

        transport.sendMessage(message, message.getAllRecipients());

    So what is the purpose of the static Transport.send() method? Is this just poor design, or is there a reason it is this way?

    Read the article

  • Flex: providing data with a PHP class

    - by Tristan
    Hello, I'm very new to Flex (never used Flex, Flash Builder, or ActionScript before), but I want to learn it because of the beautiful RIAs and charts it can produce. I watched the video on Adobe, "1 hour to build your first program", but I'm stuck. The video says we have to provide a PHP class for accessing data, and I used the example that Flash Builder gave (with Zend Framework and mysqli). I have never used those, and it makes a lot to learn if I count Zend + mysqli. My question is: can I use a PHP class like this one? What does Flash Builder expect in return? I heard that part was automatic. The example below may be wrong; I'm not very familiar with classes when accessing a database:

        <?php
        class DBConnection
        {
            protected $server   = "localhost";
            protected $username = "root";
            protected $password = "root";
            protected $dbname   = "something";
            protected $connection;

            function __construct()
            {
                $this->connection = mysql_connect($this->server, $this->username, $this->password);
                mysql_select_db($this->dbname, $this->connection);
                mysql_query("SET NAMES 'utf8'", $this->connection);
            }

            function query($query)
            {
                $result = mysql_query($query, $this->connection);
                if (!$result) {
                    echo 'request error ' . mysql_error($this->connection);
                    exit;
                }
                return $result;
            }

            function getAll()
            {
                $req = "select * from servers";
                return $this->query($req);
            }

            function num_rows($result)
            {
                return mysql_num_rows($result);
            }

            function end()
            {
                mysql_close($this->connection);
            }
        }
        ?>

    Thank you,

    Read the article

  • What image processing library should I use?

    - by Swippen
    I have been reading "What is the best image manipulation library?" and have tried a few libraries, and I am now looking for input on what best fits our needs. I will start by describing our current setup and problems. We have a system that needs to resize and crop a large number of images from big originals - 50,000+ images every day on 2 powerful servers. Today we use ImageGlue from WebSupergoo, but we don't like it at all: it is slow and hangs the service now and then (that's in another unanswered Stack Overflow question of mine). We have a threaded Windows service that uses the Microsoft ThreadPool to resize as much as possible on the 8-core machines. I have tried AForge, and it went very well; it was much faster and never crashed or hung. But I had quality problems with a few images - that comes down to which algorithms I used, of course, so it can be tweaked. Still, I want to widen our view to see if that's the right way to go. So:

      - It needs to be C# .NET and run in a Windows service (since we won't change the rest of the service, only the image handling).
      - It needs to handle a threaded environment well.
      - It needs to be fast, since today's setup is too slow.
      - We also want good quality and small file sizes, since the images are later displayed on a webpage with many visitors.

    So we have strong demands on getting good quality at a fast pace, and secondarily on keeping file sizes low, even if that can be adjusted with compression a bit. Any comments or suggestions on which library to use?

    Read the article
