Search Results

Search found 89681 results on 3588 pages for 'cross server'.


  • Application/Server dependency mapping

    - by David Stratton
    I'm just curious as to whether such a tool exists (free, open source, or commercial at a reasonable price) before I build it myself. We're looking for a simple way to take web apps online and offline when a server is undergoing maintenance. The idea is that we would be able to mark a server as unavailable, and then mark all dependent applications (direct and indirect) as offline. Our first proof of concept is running: we created an aspx page that lists, in a GridView, the applications that have an App_Offline.html file with a friendly "Down for Maintenance" message. In the GridView, each app has a LinkButton that, when clicked, renames App_Offline.htm to App_Offline.html or vice versa to take the app online or offline. The next step is to set up all of our dependencies. For example, our store locator depends on our web services, which in turn depend on our SQL Server. (That's a simple example; we can easily have several layers, or one app dependent on multiple servers, etc.) In this example, if the SQL Server goes down, we would need to drill down recursively to find all apps that depend on it, and then turn them off and on by renaming the App_Offline file appropriately. I realize this will be relatively simple to build, but it could be complex to manage. I'm sure we're not the first team to think of this, and I'm wondering if there are any open source tools, or if any of you have done something similar and can help us avoid pitfalls. Edit/Update: I found the category of software I'm looking for. It's called a CMDB (Configuration Management Database), and it's generally more of a network admin tool than a developer tool. I found some open source products in this category, but none written in .NET. I had considered moving this question to ServerFault.com once I realized I was looking for a network admin tool, but since I'm looking for code and a modifiable solution I'll keep the question here.
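
    A minimal C# sketch of the recursive part is below. It is only an illustration: the class name, the hard-coded dependency graph, and the folder bookkeeping are all hypothetical, not taken from the original post.

        using System;
        using System.Collections.Generic;
        using System.IO;

        // Hypothetical dependency map: "X depends on Y" edges plus the physical
        // folder of each app so its App_Offline file can be renamed.
        class DependencyMap
        {
            private readonly Dictionary<string, List<string>> dependents =
                new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);
            private readonly Dictionary<string, string> appFolders =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

            public void AddDependency(string dependsOn, string dependent)
            {
                List<string> list;
                if (!dependents.TryGetValue(dependsOn, out list))
                    dependents[dependsOn] = list = new List<string>();
                list.Add(dependent);
            }

            public void RegisterApp(string appName, string folder)
            {
                appFolders[appName] = folder;
            }

            // Collect every app that depends, directly or indirectly, on 'name'.
            public IEnumerable<string> FindDependents(string name)
            {
                var visited = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                var stack = new Stack<string>();
                stack.Push(name);
                while (stack.Count > 0)
                {
                    List<string> children;
                    if (!dependents.TryGetValue(stack.Pop(), out children)) continue;
                    foreach (string child in children)
                        if (visited.Add(child)) stack.Push(child);
                }
                return visited;
            }

            // Rename App_Offline.htm <-> App_Offline.html, as in the proof of concept.
            public void SetOffline(string appName, bool offline)
            {
                string folder;
                if (!appFolders.TryGetValue(appName, out folder)) return;
                string htm = Path.Combine(folder, "App_Offline.htm");
                string html = Path.Combine(folder, "App_Offline.html");
                if (offline && File.Exists(htm)) File.Move(htm, html);
                else if (!offline && File.Exists(html)) File.Move(html, htm);
            }
        }

    Taking a server down would then be a call to FindDependents followed by SetOffline(app, true) for each result; the same list in reverse brings everything back.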

    Read the article

  • passing timezone from client (GWT) to server (Joda Time)

    - by Caffeine Coma
    I'm using GWT on the client (browser) and Joda Time on the server. I'd like to perform some DB lookups bounded by the day (i.e. 00:00:00 until 23:59:59) that a request comes in, with the time boundaries based on the user's (i.e. browser) timezone. So I have the GWT code do a new java.util.Date() to get the time of the request, and send that to the server. Then I use Joda Time like so: new DateTime(clientDate).toDateMidnight().toDateTime() The trouble of course is that toDateMidnight(), in the absence of a specified TimeZone, will use the system's (i.e. the server's) TimeZone. I've been trying to find a simple way to pass the TimeZone from the browser to the server without much luck. In GWT I can get the GMT offset with: DateTimeFormat.getFormat("Z").fmt(new Date()) which results in something like "-0400". But Joda Time's DateTimeZone.forID() wants strings formatted like "America/New_York", or an integer argument of hours and minutes. Of course I can parse "-0400" into -4 hours and 0 minutes, but I'm wondering if there is not a more straightforward way of doing this.

    Read the article

  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP: Apache 2.2.11, MySQL 5.1.36 (InnoDB engine), PHP 5.3.0. I observe that my WAMP server crashes in the following scenarios: 1) if I use a low-end PC (low processor speed and low RAM); 2) after making some changes to the httpd.conf file, for example changing the Allow from IP address (but here it crashes only once and then starts working fine again); 3) random crashes. CRASH LOG: szAppName : httpd.exe szAppVer : 2.2.11.0 szModName : php5ts.dll szModVer : 5.3.0.0 offset : 0000c309 C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt My questions: 1) Can high CPU utilization or low RAM also cause the HTTP server to crash? 2) Can excessive file reading, as in every 10 seconds? 3) Can unlimited script execution time? I have set the maximum execution time in the PHP script to 0, as my script sometimes has to run for 2-3 days; is there any way to avoid this? 4) Access to the database: should we take a lock before reading and writing? Can these be the reasons for the random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun

    Read the article

  • How to check for server side update in iPhone application Web service request

    - by Nirmal
    Hello all. I have a kind of listing application for the iPhone, which always calls a web service on my PHP server, fetches the data, and displays it on the iPhone screen. So at the moment my iPhone application requests the server and fetches the data every time. My requirement now is to replace that with the following set of actions: every time my application is launched on the iPhone, it should check for new data at the server; only if the server replies "true" will my iPhone application make a request to fetch the data; in the case of "false", my iPhone application will display the data that is already cached in local phone memory. To implement this scenario on the server side (which has PHP and MySQL), I am planning the following solution: a table tblNewerData with columns id and newDataFlag, seeded with one row (1, true), and a trigger tgrUpdateNewData that updates the tblNewerData.newDataFlag field on insert into my main table. Every time, my iPhone app will request the tblNewerData.newDataFlag field; only if it finds true will it create a new request, and if it finds false the cached version of the data will be displayed. So I want to know: is this the correct way to do it, or is there any other, smarter option available? Thanks in advance.

    Read the article

  • Nonblocking TCP server

    - by hoodoos
    It's not really a question; I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as few threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the client sockets. So my problem is in building an efficient worker process, and I have come to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show the place where I have my problem): List<Socket> readSockets = new List<Socket>(); List<Socket> writeSockets = new List<Socket>(); List<Socket> errorSockets = new List<Socket>(); while( true ){ Socket.Select( readSockets, writeSockets, errorSockets, 10 ); foreach( readSocket in readSockets ){ // do reading here } foreach( writeSocket in writeSockets ){ // do writing here } // POINT2 and here's the problem I will describe below } It all works smoothly except for the 100% CPU utilization caused by the while loop cycling over and over again. If my clients do a send-receive-disconnect routine it's not that painful, but if I try to keep them alive doing send-receive-send-receive over and over it really eats up all the CPU. So my first idea was to put a sleep there: I check whether all sockets have had their data sent and then put a Thread.Sleep at POINT2, just for 10 ms, but that 10 ms later produces a huge 10 ms delay when I want to receive the next command from the client socket. For example, if I don't try to keep connections alive, commands are executed within 10-15 ms, and with keep-alive it becomes worse by at least 10 ms :( Maybe it's just poor architecture? What can be done so that my processor doesn't sit at 100% utilization and my server reacts to something appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
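
    One detail worth calling out, as a hedged aside rather than a definitive fix: the last argument to Socket.Select is a timeout in microseconds, and the call blocks until a socket is ready or the timeout expires, but it also strips the idle sockets out of the lists you pass in, so the lists have to be rebuilt on every iteration. A worker loop along those lines, with hypothetical connection bookkeeping, might look like this:

        using System;
        using System.Collections.Generic;
        using System.Net.Sockets;

        class Worker
        {
            // All sockets this worker is responsible for (hypothetical bookkeeping).
            private readonly List<Socket> clients = new List<Socket>();

            public void Run()
            {
                while (true)
                {
                    if (clients.Count == 0) { System.Threading.Thread.Sleep(50); continue; }

                    // Select() removes the sockets that are not ready from these lists,
                    // so they must be repopulated before every call.
                    List<Socket> readList = new List<Socket>(clients);
                    List<Socket> errorList = new List<Socket>(clients);

                    // Block for up to one second (1,000,000 microseconds) instead of
                    // spinning; the call returns as soon as any socket becomes ready.
                    Socket.Select(readList, null, errorList, 1000000);

                    foreach (Socket s in readList)
                    {
                        byte[] buffer = new byte[4096];
                        int read = s.Receive(buffer);
                        if (read == 0) { clients.Remove(s); s.Close(); continue; } // peer closed
                        // ... process buffer[0..read) and queue a reply here ...
                    }
                    foreach (Socket s in errorList)
                    {
                        clients.Remove(s);
                        s.Close();
                    }
                }
            }
        }

    With a blocking timeout the loop no longer spins, and a socket that receives data wakes the worker immediately rather than after a fixed sleep. The fully asynchronous BeginReceive/EndReceive pattern is the other common route, since it hands the waiting to the I/O thread pool and needs no polling loop at all.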

    Read the article

  • php.ini file creation on the server

    - by tibin mathew
    Hi, I am developing a PHP website. I can't see a php.ini file on my server, and my host will not provide one, so I'm now going to create a copy of that php.ini file. I have tried system(); I searched on Google and found this link: http://drupal.org/node/290592 There they use something like this: system("cp /usr/local/php5/lib/php.ini /home/YOURUSERNAME/php.ini"); On my server, when I looked at phpinfo, my php.ini location is /ms/svc/php5/conf/php.ini, and when I looked up the current directory of my server using $dir_path = str_replace( $_SERVER['DOCUMENT_ROOT'], "", dirname(realpath(__FILE__)) ) . DIRECTORY_SEPARATOR; I got my current directory as /netapp/whnas-swamp/s11/s11/01712/www.sample.com/webdocs/ so my system command now looks like this: system("/ms/svc/php5/conf/php.ini /netapp/whnas-swamp/s11/s11/01712/www.sample.com/webdocs/php.ini"); I have named this page getthephpini.php, and when I browse to it I get a blank page, but no new php.ini file is created on my server. Is there any mistake in that code? Can anyone show me the correct way? Please help me. Thanks

    Read the article

  • C#, ASP.NET: Uploading files to a file server

    - by Imcl
    Using the link below, I wrote some code for my application. I am not able to get it right though; please refer to the link and help me out with it: http://stackoverflow.com/questions/263518/c-uploading-files-to-file-server The following is my code: protected void Button1_Click(object sender, EventArgs e) { filePath = FileUpload1.FileName; try { WebClient client = new WebClient(); NetworkCredential nc = new NetworkCredential(uName, password); Uri addy = new Uri("\\\\192.168.1.3\\upload\\"); client.Credentials = nc; byte[] arrReturn = client.UploadFile(addy, filePath); arrReturn = client.UploadFile(addy, filePath); Console.WriteLine(arrReturn.ToString()); } catch (Exception ex) { Console.WriteLine(ex.Message); } } I also used: File.Copy(filePath, "\\192.168.1.3\upload\"); The following line doesn't execute: byte[] arrReturn = client.UploadFile(addy, filePath); I tried changing it to: byte[] arrReturn = client.UploadFile("\\192.168.1.3\upload\", filePath); It still doesn't work. Any solution to it? I basically want to transfer a file from the client to the file storage server without actually logging into the server, so that the client cannot access the storage location on the server directly.
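
    For comparison, a hedged sketch of one common variation: WebClient.UploadFile expects an HTTP or FTP endpoint that accepts the upload, and both of the failing calls above also point at a folder rather than a file, so a plain copy of the posted file to a full UNC path is often simpler. The sketch assumes the web application's identity already has write permission on the share, which is itself a real deployment decision.

        using System;
        using System.IO;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Hypothetical code-behind sketch, not the original page class.
        public class UploadPage : Page
        {
            protected FileUpload FileUpload1;

            protected void Button1_Click(object sender, EventArgs e)
            {
                if (!FileUpload1.HasFile) return;

                // The destination must name the file itself, not just the folder,
                // and the bytes come from the posted file, not from a local path.
                string destination = Path.Combine(@"\\192.168.1.3\upload",
                                                  Path.GetFileName(FileUpload1.FileName));
                FileUpload1.SaveAs(destination);
            }
        }

    Keeping the share permissions limited to the application pool account preserves the goal of not letting end users reach the storage location directly.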

    Read the article

  • Rails apps blew up on mediatemple's (dv) server

    - by BandsOnABudget
    I managed to fix this issue, but I wanted to document it here for any others who might have similar problems. I'm running a MediaTemple (dv) rage server. Monit started sending me alerts that I was hitting resource limits on the server. I logged into Plesk and the CPU was pinned at 99.9%. Rebooted the server, catastrophe avoided... not quite: none of my Rails apps would load. My setup: Ruby 1.8.6, Rails 2.3.5 with Passenger installed as an Apache module. I noticed a defunct ruby process, so I killed it and rebooted the server, but ruby continued to come back as defunct. I started trawling through the Apache log and saw errors related to updating RubyGems. I updated to the latest version but then continued to get gem errors. Basically, I had to go through all my apps, update any gems manually, and restart Apache, and everything was restored. I'm not really sure of the cause of the issue, but I wanted to note it for posterity. Has anybody else in the community ever had similar issues?

    Read the article

  • Using IsolatedStorage on an IIS server

    - by JoeBilly
    I'm a bit confused about the use of Isolated Storage on an IIS server. I understand the goal of Isolated Storage: it provides a safe place to store data without worrying about how and where that place is. Since Isolated Storage has a by-user and by-assembly approach, I'm not too wild about using it on an IIS server, where applications essentially have their own identity. I haven't really seen the point of impersonating a web application, and I have almost never seen impersonated web applications myself, but that is just my point of view. Using Isolated Storage on a server means: using isolated stores in \Documents and Settings\<user>\; which means \Documents and Settings\Default User\ when the application pool runs as Local System or Network Service, I guess; which also means write rights on that folder for Local System or Network Service; and using impersonation. From the point of view of a web application, these ideas are confusing me... Documents and Settings? Default User? Enable impersonation just for storage? No control over storage on the server? Uh? And so I am facing a dilemma: use System.IO.Packaging (with Isolated Storage inside) in web applications, or find an alternative? Am I wrong in my approach? Did I miss something? Any point of view is appreciated, and an explanation of the philosophy of Isolated Storage with IIS could be an answer. Thanks!
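
    For what it's worth, a hedged illustration of two alternatives that avoid the per-user store entirely: the machine-scoped isolated store, which is keyed to the assembly rather than to a Windows account, and a plain folder under App_Data that the application owns. Neither is presented here as the "right" answer, just as the sort of thing that sidesteps the Documents and Settings questions above.

        using System.IO;
        using System.IO.IsolatedStorage;

        class StorageSketch
        {
            // Machine-scoped isolated storage: scoped per assembly, not per user.
            public static void WriteWithIsolatedStorage(string name, string content)
            {
                using (IsolatedStorageFile store = IsolatedStorageFile.GetMachineStoreForAssembly())
                using (IsolatedStorageFileStream stream =
                           new IsolatedStorageFileStream(name, FileMode.Create, store))
                using (StreamWriter writer = new StreamWriter(stream))
                {
                    writer.Write(content);
                }
            }

            // The plainer alternative: a folder the web application owns.
            public static void WriteUnderAppData(string name, string content)
            {
                string folder = System.Web.Hosting.HostingEnvironment.MapPath("~/App_Data");
                File.WriteAllText(Path.Combine(folder, name), content);
            }
        }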

    Read the article

  • Calling a server-side method when a calendar extender fills a value into a textbox

    - by Dharmendra Singh
    I have a text box which I am filling with a date from the calendar extender; the code is as below: <label for="input-one" class="float"><strong>Date</strong></label><br /> <asp:TextBox ID="txtDate" runat="server" CssClass="inp-text" Enabled="false" AutoPostBack="true" Width="300px" ontextchanged="txtDate_TextChanged"></asp:TextBox> <asp:ImageButton ID="btnDate2" runat="server" AlternateText="cal2" ImageUrl="~/App_Themes/Images/icon_calendar.jpg" style="margin-top:auto;" CausesValidation="false" onclick="btnDate2_Click" /> <ajaxToolkit:CalendarExtender ID="calExtender2" runat="server" Format="dddd, MMMM dd, yyyy" OnClientDateSelectionChanged="CheckDateEalier" PopupButtonID="btnDate2" TargetControlID="txtDate" /> <asp:RequiredFieldValidator ID="RequiredFieldValidator7" runat="server" ControlToValidate="txtDate" ErrorMessage="Select a Date" Font-Bold="True" Font-Size="X-Small" ForeColor="Red"></asp:RequiredFieldValidator><br /> The JavaScript code is: function CheckDateEalier(sender, args) { sender._textbox.set_Value(sender._selectedDate.format(sender._format)) } My requirement is that, as the date is entered into the textbox, I want to call this method: public void TimeSpentDisplay() { string date = txtDate.Text.ToString(); DateTime dateparsed = DateTime.ParseExact(date, "dddd, MMMM dd, yyyy", null); DateTime currentDate = System.DateTime.Now; if (dateparsed.Date > currentDate.Date) { divtimeSpent.Visible = true; } if (dateparsed.Date < currentDate.Date) { divtimeSpent.Visible = true; } if (dateparsed.Date == currentDate.Date) { divtimeSpent.Visible = false; } } Please help me work out how to achieve this: I am calling this method inside the txtDate_TextChanged handler, but the event does not fire when the text changes inside the textbox. Please suggest how I can achieve this, or give me an alternative idea to fulfil my requirement.
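
    One hedged observation and a possible workaround, sketched in code-behind only: a TextBox rendered with Enabled="false" is never submitted by the browser, so its TextChanged event has no chance to fire, and whether the extender's client-side set_Value raises the change notification AutoPostBack relies on can vary. The class below is hypothetical; it keeps the textbox enabled, blocks manual typing with a client attribute, and re-evaluates the date on every postback as a fallback.

        using System;
        using System.Web.UI;

        // Hypothetical code-behind sketch reusing txtDate and TimeSpentDisplay
        // from the page above.
        public partial class TimeEntryPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                txtDate.Enabled = true;                              // let the value post back
                txtDate.Attributes["onkeypress"] = "return false;";  // but block manual edits

                // Fallback: re-evaluate on any postback in case the extender's
                // set_Value call does not trigger the AutoPostBack change event.
                if (IsPostBack && txtDate.Text.Length > 0)
                {
                    TimeSpentDisplay();
                }
            }

            protected void txtDate_TextChanged(object sender, EventArgs e)
            {
                TimeSpentDisplay();
            }
        }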

    Read the article

  • Smack API giving error while logging into Tigase Server setup locally

    - by Ameya Phadke
    Hi, I am currently developing android XMPP client to communicate with the Tigase server setup locally.Before starting development on Android I am writing a simple java code on PC to test connectivity with XMPP server.My XMPP domain is my pc name "mwbn43-1" and administrator username and passwords are admin and tigase respectively. Following is the snippet of the code I am using class Test { public static void main(String args[])throws Exception { System.setProperty("smack.debugEnabled", "true"); XMPPConnection.DEBUG_ENABLED = true; ConnectionConfiguration config = new ConnectionConfiguration("mwbn43-1", 5222); config.setCompressionEnabled(true); config.setSASLAuthenticationEnabled(true); XMPPConnection con = new XMPPConnection(config); // Connect to the server con.connect(); con.login("admin", "tigase"); Chat chat = con.getChatManager().createChat("aaphadke@mwbn43-1", new MessageListener() { public void processMessage(Chat chat, Message message) { // Print out any messages we get back to standard out. System.out.println("Received message: " + message); } }); try { chat.sendMessage("Hi!"); } catch (XMPPException e) { System.out.println("Error Delivering block"); } String host = con.getHost(); String user = con.getUser(); String id = con.getConnectionID(); int port = con.getPort(); boolean i = false; i = con.isConnected(); if (i) System.out.println("Connected to host " + host + " via port " + port + " connection id is " + id); System.out.println("User is " + user); con.disconnect(); } } When I run this code I get following error Exception in thread "main" Resource binding not offered by server: at org.jivesoftware.smack.SASLAuthentication.bindResourceAndEstablishSession(SASLAuthenticatio n.java:416) at org.jivesoftware.smack.SASLAuthentication.authenticate(SASLAuthentication.java:331) at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:395) at org.jivesoftware.smack.XMPPConnection.login(XMPPConnection.java:349) at Test.main(Test.java:26) I found this articles on the same problem but no concrete solution here Could anyone please tell me the solution for this problem.I checked the XMPPConnection.java file in the Smack API and it looks the same as given in the link solution. Thanks, Ameya

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending POST requests to a Pylons server (served by paster serve), and if I send them with any frequency many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send with no POST data, or with GET, it works fine, but putting just one character of data in the post fields causes massive losses. For example, sending 200, 2 will come back; sending 100 more slowly, 10 will come back. I'm making the requests from inside a Qt application. This will work OK (no data): QString postFields = "" QNetworkRequest request(QUrl("http://server.com/endpoint")); QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii()); And this will result in only a fraction of the requests being dealt with: QString postFields = "" QNetworkRequest request(QUrl("http://server.com/endpoint")); QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii()); I've played around with turning on use_threadpool and other options (threadpool_workers, threadpool_max_requests = 300), and some combinations alter the results slightly (best case 10 responses in 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip

    Read the article

  • Best Practice for Uploading Many (2000+) Images to A Server

    - by bob
    Hello, I have a general question about this. When you have a gallery, sometimes people need to upload thousands of images at once. Most likely, it would be done through a .zip file. What is the best way to go about uploading this sort of thing to a server? Often servers have timeouts etc. that need to be accounted for. I am wondering what kinds of things I should be looking out for, and what the best way is to handle a large number of images being uploaded. I'm guessing that you would allow a user to upload a zip file (assuming the timeout does not affect you), and this zip file is uploaded to a specific directory; let's assume in this case that a directory is created for each user in the system. You would then unzip the archive on the server and scan the user's folder for any directories containing .jpg or .png or .gif files (etc.) and then import them into a table accordingly, I'm guessing labelled by folder name. What kinds of server-side troubles could I run into? I'm aware that there may be many issues. Even general ideas would be good so I can then research further. Thanks! Also, I would be programming in Ruby on Rails, but I think this question applies across any language.

    Read the article

  • Question about registering a COM server and adding a reference to it in a C# project

    - by smwikipedia
    I build a COM server in raw C++, here is the procedure: (1) write an IDL file to define the interface and library. (2) use msidl.exe to compile the IDL file to necessary .h, .c, .tlb files. (3) implement the COM server in C++ and build a .dll file. (4) add the following registry entris: [HKEY_CLASSES_ROOT\RawComCarLib.ComCar.1\CurVer] @="RawComCarLib.ComCar.1" ;CLSID [HKEY_CLASSES_ROOT\CLSID{6CC26343-167B-4CF2-9EDF-99368A62E91C}] @="RawComCarLib.ComCar.1" [HKEY_CLASSES_ROOT\CLSID{6CC26343-167B-4CF2-9EDF-99368A62E91C}\InprocServer32] @="D:\com\Project01.dll" [HKEY_CLASSES_ROOT\CLSID{6CC26343-167B-4CF2-9EDF-99368A62E91C}\ProgID] @="RawComCarLib.ComCar.1" [HKEY_CLASSES_ROOT\CLSID{6CC26343-167B-4CF2-9EDF-99368A62E91C}\TypeLib] @="{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}" ;TypeLib [HKEY_CLASSES_ROOT\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}] [HKEY_CLASSES_ROOT\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0] @="Car Server Type Lib" [HKEY_CLASSES_ROOT\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0] [HKEY_CLASSES_ROOT\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32] @="D:\com\Project01.tlb" [HKEY_CLASSES_ROOT\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\FLAGS] @="0" [HKEY_LOCAL_MACHINE\SOFTWARE\Classes\TypeLib{E5C0EE8F-8806-4FE3-BC0E-3A56CFB38BEE}\1.0\0\win32] @="C:\Windows\System32\msdatsrc.tlb" (5) I try to add reference to the COM by click the Add Reference in the C# project. (6) In the COM tab, I saw my "Car Server Type Lib", it's ok until now. I try to use the Object Browser to browse my COM lib, but the Visual Studio said "the following components could not be browsed", and I noticed that there's no new reference added to the list in the C# project Reference. I can use the tlbimp.exe to generate a interop.CarCom.dll, and then use the COM through this interop dll, but I want this interop assembly to be generated automatically when I just add reference to the COM. Could someone tell me what's wrong? Many thanks.
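
    While the type library import issue is being tracked down, one hedged workaround (not a fix for the registration itself) is to late-bind against the registered ProgID, which needs only the CLSID and InprocServer32 entries and no interop assembly. The method name used below is purely hypothetical; substitute whatever the IDL actually defines.

        using System;
        using System.Reflection;

        class LateBindingSketch
        {
            static void Main()
            {
                // Resolve the coclass through the registry by its ProgID.
                Type carType = Type.GetTypeFromProgID("RawComCarLib.ComCar.1", true);
                object car = Activator.CreateInstance(carType);

                // "SpeedUp" is an assumed example method, not from the original IDL.
                carType.InvokeMember("SpeedUp", BindingFlags.InvokeMethod,
                                     null, car, new object[] { 10 });
            }
        }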

    Read the article

  • Rails 2.3.11 Server Crashing After 4 Requests

    - by Taka
    I have a Rails 2.3.11 application running on my local Windows machine using InstantRails. I cd to my application directory, run ruby script/server to start the server running, and point my browser to localhost:3000. I get the page I expect, and am able to click a few links to other pages (all of them are static). The problem starts when I load the 4th page or so. My server crashes, with this message: Processing HomeController#index (for 127.0.0.1 at 2012-06-23 15:48:40) [GET] Rendering template within layouts/application Rendering home/index Completed in 11ms (View: 9, DB: 1) | 200 OK [http://localhost/index] C:/rails/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.11/lib/active_support/memoizable.rb:46: [BUG] Segmentation fault ruby 1.8.7 (2012-02-08 patchlevel 358) [i386-mingw32] This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. I've uninstalled this gem and reinstalled it, which didn't help. It doesn't seem to be the gem though, because the segmentation fault sometimes occurs in C:/rails/ruby/lib/ruby/gems/1.8/gems/mongrel-1.1.2-x86-mswin32/lib/mongrel.rb:114 or C:/rails/ruby/lib/ruby/1.8/benchmark.rb:306 Versions: >ruby -v ruby 1.8.7 (2012-02-08 patchlevel 358) [i386-mingw32] >rails -v Rails 2.3.11 I'd like to get this fixed so while I'm developing I don't have to keep restarting my server. Any suggestions?

    Read the article

  • Best way to pass image to server?

    - by Chris
    I have an SL3 application that needs to be able to pass an image to the server, and then the server will generate a PDF file with the image in it, and display it to the user. What I already have in place are the following: (1) Code to convert image to byte array (2) Code to generate PDF file with image The main problem that I am running into is the following: In order to bypass the pop-up blocker, which is a requirement for my application, I am using the following code: var button = new NavigationButton(); button.NavigateUri = new Uri("http://localhost:3616/PrintReport.aspx?ReportIndex=11&ActionType=Get&ReportIdentifier=" + reportIdentifier.ToString() + ""); button.TargetName = "_blank"; button.PerformClick(); Initially, I would pass the image to a WCF web service (as a byte array), and then "navigate" to the ASP.NET page that would display the report. However, if I do this, then I can not use my custom HyperlinkButton class, and, certain browsers, including Safari, will block a new window from opening up. Therefore, it appears that the only option is to use the HyperlinkButton class. What I need to be able to do is to somehow pass the image, as a byte array or some other data type, to the server, such that it can temporarily store the image, even if it is in a server variable, and then immediately retrieve it when I navigate to the PrintReport.aspx page. If I upload the image to an ASP.NET form and then use the HyperlinkButton class to navigate to the PrintReport page, it doesn't work, as the app navigates to the PrintReport page before the system has finished uploading the image. I can't pass it to a web service, as that would require that I navigate to the PrintReport.aspx page in the callback code of the web method that I would be passing the image to, and the HyperlinkButton will not allow that, based on security rules. Any help or ideas would be appreciated. Thanks. Chris
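
    To give the "upload first, then navigate" ordering something concrete to hang off, here is a hedged server-side sketch: a small handler that accepts the posted bytes and parks them in the ASP.NET cache under the report identifier, so PrintReport.aspx can pick them up when the HyperlinkButton navigation arrives. The handler name, cache key, and five-minute lifetime are all assumptions, and on a web farm the cache would need to be replaced with shared storage.

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical endpoint (e.g. UploadReportImage.ashx) that the Silverlight
        // client POSTs the image bytes to, together with a reportIdentifier.
        public class UploadReportImage : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string id = context.Request.QueryString["reportIdentifier"];

                byte[] imageBytes;
                using (MemoryStream ms = new MemoryStream())
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = context.Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
                        ms.Write(buffer, 0, read);
                    imageBytes = ms.ToArray();
                }

                // Keep the bytes around briefly; PrintReport.aspx reads and removes them.
                context.Cache.Insert("reportImage:" + id, imageBytes, null,
                                     DateTime.UtcNow.AddMinutes(5),
                                     System.Web.Caching.Cache.NoSlidingExpiration);
            }

            public bool IsReusable { get { return true; } }
        }

    The Silverlight side would still have to wait for this request to complete before the print page is opened, which is the sequencing problem described above rather than something this sketch solves.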

    Read the article

  • Execute JavaScript on an IIS server

    - by James Westgate
    I have the following situation. A customer uses JavaScript with jQuery to create a complex website. We would like to use JavaScript and jQuery on the server (IIS) for the following reasons. Skills transfer: we would like to use JavaScript and jQuery on the server and not have to use e.g. VBScript / classic ASP; the .NET framework, Java, etc. are ruled out because of this. Improved options for search/accessibility: we would like to be able to use jQuery as a templating system, but this isn't viable for search engines and users with JS turned off, unless we can selectively run this code on the server. There is significant investment in IIS and Windows Server, so changing that is not an option. I know you can run JScript on IIS using Windows Script Host, but I am unsure of the scalability and the process surrounding this. I am also unsure whether this would have access to the DOM. Here is a diagram that hopefully explains the situation. I was wondering if anyone has done anything similar?

    Read the article

  • ruby on rails: undefined method 'version_requirements' when attempting to start the server after a new install

    - by ezabak
    Hi there, I had to newly install ruby on rails recently. When I attempted to start the server for a project I had already been working on previous to this new install, I received the following error: $ ruby script/server => Booting WEBrick... ./script/../config/../vendor/rails/railties/lib/rails/gem_dependency.rb:107:in `requirement': undefined method `version_requirements' for #<Gem::Dependency:0xb74bf764> (NoMethodError) from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `map' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:292:in `check_gem_dependencies' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:165:in `process' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `send' from ./script/../config/../vendor/rails/railties/lib/initializer.rb:112:in `run' from /media/78C0-455B/bidmc/schedule/config/environment.rb:13 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/servers/webrick.rb:59 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in' from /media/78C0-455B/bidmc/schedule/vendor/rails/activesupport/lib/active_support/dependencies.rb:153:in `require' from /media/78C0-455B/bidmc/schedule/vendor/rails/railties/lib/commands/server.rb:49 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:36:in `require' from script/server:3 I have the latest versions of ruby, rubygems, and rails. Any suggestions? Thanks.

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
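
    On the open question about related tables such as Product and ProductName, one hedged pattern is to run the two SqlBulkCopy loads inside a single SqlTransaction so the parent and child rows commit or roll back together. The table names below are borrowed from the example in the question, and the sketch assumes the key values linking the two DataTables are already known on the client, since SqlBulkCopy never returns generated identity values.

        using System.Data;
        using System.Data.SqlClient;

        class RelatedBulkCopy
        {
            // Sketch: both DataTables already carry matching key values.
            public static void LoadParentAndChild(string connectionString,
                                                  DataTable products, DataTable productNames)
            {
                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (SqlTransaction transaction = connection.BeginTransaction())
                    {
                        try
                        {
                            using (SqlBulkCopy parentCopy = new SqlBulkCopy(
                                       connection, SqlBulkCopyOptions.KeepIdentity, transaction))
                            {
                                parentCopy.DestinationTableName = "Product";
                                parentCopy.WriteToServer(products);
                            }
                            using (SqlBulkCopy childCopy = new SqlBulkCopy(
                                       connection, SqlBulkCopyOptions.KeepIdentity, transaction))
                            {
                                childCopy.DestinationTableName = "ProductName";
                                childCopy.WriteToServer(productNames);
                            }
                            transaction.Commit();
                        }
                        catch
                        {
                            transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
        }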

    Read the article

  • production vs dev server content-disposition filename encoding

    - by rgripper
    I am using ASP.NET MVC 3 and downloading the file in the same browser (Chrome 22). Here is the controller code: [HttpPost] public ActionResult Uploadfile(HttpPostedFileBase file)//HttpPostedFileBase file, string excelSumInfoId) { ... return File( result.Output, "application/vnd.ms-excel", String.Format("{0}_{1:yyyy.MM.dd-HH.mm.ss}.xls", "Суммирование", DateTime.Now)); } On my dev machine I download the programmatically created file with the correct name "Суммирование_2012.10.18-13.36.06.xls". Response: Content-Disposition:attachment; filename*=UTF-8''%D0%A1%D1%83%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_2012.10.18-13.36.06.xls Content-Length:203776 Content-Type:application/vnd.ms-excel Date:Thu, 18 Oct 2012 09:36:06 GMT Server:ASP.NET Development Server/10.0.0.0 X-AspNet-Version:4.0.30319 X-AspNetMvc-Version:3.0 From the production server I download a file named after the controller's action plus the correct extension, "Uploadfile.xls", which is wrong. Response: Content-Disposition:attachment; filename="=?utf-8?B?0KHRg9C80LzQuNGA0L7QstCw0L3QuNC1XzIwMTIuMTAuMTgtMTMuMzYu?=%0d%0a =?utf-8?B?NTUueGxz?=" Content-Length:203776 Content-Type:application/vnd.ms-excel Date:Thu, 18 Oct 2012 09:36:55 GMT Server:Microsoft-IIS/7.5 X-AspNet-Version:4.0.30319 X-AspNetMvc-Version:3.0 X-Powered-By:ASP.NET The Web.config files are the same on both machines. Why does the filename get encoded differently for the same browser? Are there any kinds of default settings, in web.config or elsewhere, that differ between the machines and that I am missing?
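
    For anyone hitting the same mismatch, one hedged workaround is to stop relying on the framework's file name encoding and emit the RFC 5987 form of the header yourself, which is the same form the development server produced and Chrome 22 accepted; older browsers that do not understand filename* would still need a fallback. BuildReport below is just a placeholder for whatever produces result.Output.

        using System;
        using System.Web;
        using System.Web.Mvc;

        public class ReportController : Controller
        {
            [HttpPost]
            public ActionResult Uploadfile(HttpPostedFileBase file)
            {
                byte[] output = BuildReport();   // placeholder for result.Output
                string fileName = String.Format("{0}_{1:yyyy.MM.dd-HH.mm.ss}.xls",
                                                "Суммирование", DateTime.Now);

                // Emit the RFC 5987 header explicitly so dev and production agree.
                Response.AppendHeader("Content-Disposition",
                    "attachment; filename*=UTF-8''" + Uri.EscapeDataString(fileName));

                // No fileDownloadName argument, so MVC does not add its own header.
                return File(output, "application/vnd.ms-excel");
            }

            private byte[] BuildReport() { return new byte[0]; }
        }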

    Read the article

  • Using authsmtp from a Grails server

    - by Simon
    This is quite a specific question, and I have had no luck on the grails nabble forum, so I thought I would post here. I am using the grails mail plug-in, but I think my question is a general one about using authsmtp as an email gateway from my server. I am having trouble sending mail from my app using authsmtp. I have installed and configured the mail plugin and was originally using my ISP's SMTP server to send mails. However when I deployed to AWS EC2 this failed because my elastic IP was blocked by the SMTP host. So I bought myself an authsmtp account and set up my server email address as an accepted one at authsmtp. I then changed my configuration in SecurityConfig.groovy to point to the authsmtp server that I had been designated... mailHost = "mail.authsmtp.com" mailUsername = "myusername" mailPassword = "mypassword" mailProtocol = "smtp" mailFrom = "[email protected]" mailPort = 2525 ...and I'm just trying to get this to work locally before I deploy back up to AWS. Sending mail fails and in my log I have this exception: 2010-02-13 10:59:44,218 [http-8080-1] ERROR service.EmailerService - Failed to send emails: Failed messages: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail. org.springframework.mail.MailSendException; nested exception details (1) are: Failed message 1: com.sun.mail.smtp.SMTPSendFailedException: 513 5.0.0 Your email system must authenticate before sending mail. at com.sun.mail.smtp.SMTPTransport.issueSendCommand(SMTPTransport.java:1388) at com.sun.mail.smtp.SMTPTransport.mailFrom(SMTPTransport.java:959) at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:583) I'm a bit lost since the username and password I provide in the configuration are definitely correct. A terse and not very helpful conversation with authsmtp support suggests that I need to MD5 and/or base64 encode my credentials before sending, so my question is in three parts... 1) any idea what's going on with the failure and why that message is appearing? 2) how would I encode the credentials to pass to authsmtp and how would I configure that for the mail plugin 3) has anyone successfully connected and sent mail through authsmtp from the mail plugin and specifically from AWS EC2?

    Read the article

  • Running migrations on the server when deploying with Capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a git repository. What is the easiest way to do this? Update: here's my deploy script: set :application, "example.com" set :domain, "example.com" set :scm, :git set :repository, "[email protected]:project.git" set :use_sudo, false set :deploy_to, "/var/www/example.com" role :web, domain role :app, domain role :db, "localhost", :primary => true after "deploy", "deploy:migrate" When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting: ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2)) connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))) This is why I need to run the migration from the server and not from my local machine. Any ideas?

    Read the article

  • Apache attack on compromised server, iframe injected by string replace

    - by Quang-Tuan Luong
    My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I found out that the way he does this is by getting Apache (?) to replace every instance of </body> with <iframe link to malware></iframe></body> For example, if I browse a file residing on the server consisting of: </body> </body> then my browser sees a file consisting of: <iframe link to malware></iframe></body> <iframe link to malware></iframe></body> I immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder changed on the server to perform the attack. I presume he has modified an Apache config file, but I have no idea which one. In particular, I have looked for recently modified files by timestamp, but did not find anything noteworthy. Thanks for any help. Tuan. PS: I am in the process of rebuilding a new server from scratch, but in the meanwhile I would like to keep the old one running, since this is a business site.

    Read the article

  • Windows Server TCP client application in C# stops working after a while

    - by user1692494
    I am developing an application in C# on the .NET Framework 2.0. It is basically a service application with a TCP client class. Class for the TCP client part: public TelnetConnection(string Hostname, int Port) { host = Hostname; prt = Port; try { tcpSocket = new TcpClient(Hostname, Port); } catch (Exception e) { //Console.WriteLine(e.Message); } } The application connects to the server and receives updates and other information. It works for about 20 minutes and then stops receiving information from the server. On the server side the client is still connected, and on the client side it is still connected, but it stops receiving information from the server. I've been searching Stack Overflow and Google but no luck. try { if (!tcpSocket.Connected) return null; StringBuilder sb = new StringBuilder(); do { Parse(sb); System.Threading.Thread.Sleep(TimeOutMs); } while (tcpSocket.Available > 0); return sb.ToString(); } The application works perfectly when it runs as a console application, but when running as a service it just stops.
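
    Two hedged observations rather than a definitive diagnosis: the read loop above exits as soon as tcpSocket.Available momentarily drops to zero, so a quiet connection is indistinguishable from a dead one, and a NAT device or firewall silently dropping an idle connection after some minutes is a classic cause of a client that "stops receiving" while both ends still think they are connected. A dedicated blocking read loop makes the difference visible, because NetworkStream.Read waits for data and returns 0 only when the server has actually closed the connection; the class below is a sketch, not the original code.

        using System;
        using System.Net.Sockets;
        using System.Text;

        class TelnetReader
        {
            private readonly TcpClient tcpSocket;

            public TelnetReader(TcpClient client)
            {
                tcpSocket = client;
            }

            // Blocking read loop: Read() waits until data arrives and returns 0
            // only when the remote side has closed the connection.
            public void ReadLoop(Action<string> onText)
            {
                NetworkStream stream = tcpSocket.GetStream();
                byte[] buffer = new byte[4096];
                while (true)
                {
                    int read = stream.Read(buffer, 0, buffer.Length);
                    if (read == 0)
                    {
                        // Server closed the socket; surface it so the service can reconnect.
                        throw new SocketException((int)SocketError.ConnectionReset);
                    }
                    onText(Encoding.ASCII.GetString(buffer, 0, read));
                }
            }
        }

    Sending a small application-level heartbeat every minute or so, or enabling TCP keep-alives, is the usual way to rule the idle-timeout theory in or out.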

    Read the article

  • Delphi Client-Server Application using Firebird 2.5 embedded connection error

    - by Japster
    I have got a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming, and my experience has mostly been developing small single-user database applications using ADO and an Access database. I need to make the transition now to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, it can be used with the InterBase components in Delphi, and multiple clients can access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both are running on my PC), but when I tried to move the client to another PC, keeping the server on mine, and ran it to see if I could connect to the server, it gave me the following error: Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused. I understand that this might be because the host is defined as localhost in the client. But here is my first question. In the TSQLConnection you can set the hostname under Driver-Hostname. The thing I want to know is how you do this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example: SQLConnection1.Driver.Hostname := edtHost.text; This cannot be done this way, and the only way I see to set the hostname is with the Object Inspector, but that is not available at runtime, and I need to set the hostname on the client when the program runs for the first time. So how do you set the hostname/IP address at runtime? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.

    Read the article
