Search Results

Search found 30252 results on 1211 pages for 'network programming'.


  • Dot Net Nuke app_offline randomly being generated

    - by chelfers
    We have had multiple DNN sites running for quite a few months now without any issues. Twice in the last 3 days our sites have been taken offline by the appearance of an app_offline.htm file in the root directory. Only one developer has access to the sites at the code / directory level, and the file is generated at odd times when he is NOT accessing our network. We are not publishing anything to the server (and have not published any .NET code in days), upgrading, changing code, or even modifying content. Has anyone run into this issue?


  • Will there be IQueryable-like additions to IObservable? (.NET Rx)

    - by Jason
    The new IObservable/IObserver frameworks in the System.Reactive library coming in .NET 4.0 are very exciting (see this and this link). It may be too early to speculate, but will there also be a (for lack of a better term) IQueryable-like framework built for these new interfaces as well? One particular use case would be to assist in pre-processing events at the source, rather than in the chain of the receiving calls. For example, with a very 'chatty' event interface, Subscribe().Where(...) still pushes every event through the pipeline and leaves the filtering to the client. What I am wondering is whether there will be something akin to an IQueryableObservable, whereby these LINQ methods would be 'compiled' into some 'smart' Subscribe implementation in a source. I can imagine certain network server architectures that could use such a framework. Or how about an add-on to SQL Server (or any RDBMS for that matter) that would allow .NET code to receive new data notifications (triggers in code) and would need those notifications filtered server-side?
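
    To make the source-versus-client distinction concrete, here is a minimal sketch (plain Python, not Rx; the Source class is hypothetical) contrasting a subscriber that filters for itself with a source that drops events before they ever enter the pipeline:

      class Source:
          def __init__(self):
              self._subscribers = []  # (callback, predicate) pairs

          def subscribe(self, callback, predicate=None):
              # predicate=None means "send me everything": the client filters.
              self._subscribers.append((callback, predicate))

          def publish(self, event):
              for callback, predicate in self._subscribers:
                  # With a predicate, unwanted events are dropped here, at
                  # the source, and never travel down the pipeline.
                  if predicate is None or predicate(event):
                      callback(event)

      source = Source()
      # Client-side filtering: every event reaches the lambda.
      source.subscribe(lambda e: e % 2 == 0 and print("client:", e))
      # Source-side filtering: odd events never leave the source.
      source.subscribe(lambda e: print("source:", e),
                       predicate=lambda e: e % 2 == 0)
      for i in range(4):
          source.publish(i)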


  • Fastest way to compress a database or .bak file and transfer it

    - by Nai
    As per the question title. I wonder if there are special programmes or commands that make zipping up a .bak file and transferring it super quick. I read about xp_cmdshell here but I'm not sure about the speed. My .bak file is about 12 gigs at the moment. Related to this is the possibility of using Red Gate's SQL Data Compare to just transfer the differential data across the network pipeline, but I have never used SQL Data Compare before and I'm not sure how it goes about doing INSERTs on tables with primary keys and such. Also, not sure about the speed. Does anyone have any experience with this programme or similar programmes? Cheers!


  • Measuring daemon CPU utilization over a portion of its wall clock run time

    - by WhirlWind
    I am dealing with a network-related daemon: it takes data in, processes it, and spits it out. I would like to increase the performance of this daemon by profiling it and reducing its CPU utilization. I can do this easily on Linux with gprof. However, I would also like to use something like "time" to measure its total CPU utilization over a period of time. If possible, I would like to time it over a period that is less than its total run time: thus, I would like to start the daemon, wait awhile, start collecting CPU statistics, stop collecting them, then stop the daemon at some later time. The "time" command would work well for me, but it seems to require that I start and stop the daemon as a child of time. Is there a way to measure CPU utilization for only a portion of the daemon's wall clock time?
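
    One way to do this on Linux without wrapping the daemon in "time" is to sample the process's accumulated CPU time from /proc at the start and end of the window you care about. A minimal sketch (assumes Linux and that you know the daemon's PID):

      import os
      import time

      def cpu_seconds(pid):
          # utime and stime (fields 14 and 15 of /proc/<pid>/stat) hold the
          # CPU time consumed so far, in clock ticks. Parsing after the
          # closing paren avoids tripping on spaces in the process name.
          with open("/proc/%d/stat" % pid) as f:
              data = f.read()
          fields = data[data.rindex(")") + 2:].split()
          ticks = int(fields[11]) + int(fields[12])
          return ticks / float(os.sysconf("SC_CLK_TCK"))

      def utilization(pid, window=10.0):
          """Fraction of one core used by `pid` over `window` wall seconds."""
          start = cpu_seconds(pid)
          time.sleep(window)
          return (cpu_seconds(pid) - start) / window

      # e.g. utilization(1234, window=30.0) -> 0.42 means 42% of one core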


  • WCF - (Custom) binary serialisation.

    - by Barguast
    I want to be able to query my database over the web, and I want to use a WCF service to handle the requests and results. The problem is that, due to the amount of data that can potentially be returned from these queries, I'm worried about how the results will be serialised over the network. For example, I can imagine the XML serialisation looking like:

      <Results>
        <Person Name="Adam" DateOfBirth="01/02/1985" />
        <Person Name="Bob" DateOfBirth="04/07/1986" />
      </Results>

    And the binary serialisation containing type names and other (unnecessary) metadata. Perhaps even the type name for each element in a collection? o_o Ideally, I'd like to perform the serialisation of certain 'DataContract's myself so I can make it super-compact. Does anyone know if this is possible, or of any articles which explain how to do custom serialisation with WCF? Thanks in advance
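
    Not WCF-specific, but to give a feel for how compact a hand-rolled record encoding can be next to XML, here is a sketch using Python's struct module (the field layout is invented for the example):

      import struct
      from datetime import date

      # Invented layout: 1-byte name length, UTF-8 name bytes, then
      # year/month/day packed as an unsigned short and two bytes.
      RECORD_TAIL = struct.Struct("!HBB")

      def pack_person(name, dob):
          encoded = name.encode("utf-8")
          return (bytes([len(encoded)]) + encoded
                  + RECORD_TAIL.pack(dob.year, dob.month, dob.day))

      payload = b"".join([pack_person("Adam", date(1985, 2, 1)),
                          pack_person("Bob", date(1986, 7, 4))])
      print(len(payload))  # 17 bytes for both records vs. ~120 for the XML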


  • How to implement blocking request-reply using Java concurrency primitives?

    - by Uri
    My system consists of a "proxy" class that receives "request" packets, marshals them, and sends them over the network to a server, which unmarshals them, processes them, and returns some "response" packet. My "submit" method on the proxy side should block until a reply is received to the request (packets have ids for identification and referencing purposes) or until a timeout is reached. If I were building this in early versions of Java, I would likely implement in my proxy a collection of "pending message ids", where I would submit a message and wait() on the corresponding id (with a timeout). When a reply was received, the handling thread would notify() on the corresponding id. Is there a better way to achieve this using an existing library class, perhaps in java.util.concurrent? If I went with the solution described above, what is the correct way to deal with the potential race condition where a reply arrives before wait() is invoked?
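
    Here is a minimal sketch of that pending-reply pattern (in Python for brevity; the java.util.concurrent analogue keeps a ConcurrentHashMap of futures, e.g. CompletableFuture, keyed by message id). A future also sidesteps the wait()/notify() race: completing it before anyone asks for the result simply makes the later blocking call return immediately.

      import concurrent.futures
      import threading

      class Proxy:
          def __init__(self, send_fn):  # send_fn puts a packet on the wire
              self._send = send_fn
              self._pending = {}        # message id -> future awaiting the reply
              self._lock = threading.Lock()

          def submit(self, msg_id, packet, timeout=5.0):
              future = concurrent.futures.Future()
              with self._lock:
                  self._pending[msg_id] = future
              self._send(packet)
              try:
                  return future.result(timeout=timeout)  # blocks for the reply
              finally:
                  with self._lock:
                      self._pending.pop(msg_id, None)

          def on_reply(self, msg_id, response):
              # Called from the network-handling thread. A reply that races
              # ahead of result() just completes the future early.
              with self._lock:
                  future = self._pending.get(msg_id)
              if future is not None:
                  future.set_result(response)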


  • Multiple Servers with identical services

    - by Jerry Bailey
    I have a dozen servers in different locations, all running the same web service application but each going against its own SQL Server DB. I am writing a desktop application that consumes the web services. I want to present the user with a drop-down of all servers in the network that are running the same web service application. Do I have to add a ServiceReference for each of the servers running the web service app, thereby having as many proxies as there are servers? Or can I define a single instance of the service and dynamically build a list of endpoints to select from a drop-down?


  • Buffer management for socket application best practice

    - by Poni
    I have a Windows IOCP app. I understand that for an async I/O operation (on the network) the buffer must remain valid for the duration of the send/read operation. So for each connection I have one buffer for reading. For sending I use buffers to which I copy the data to be sent, and when the send operation completes I release the buffer so it can be reused. So far that's manageable. What remains unclear is how you all do this. Another thing is that even with multiple buffers, the receiver side might be flooded (talking from experience) with data. Even setting SO_RCVBUF to 25MB didn't help in my testing. So what should I do? Have a to-be-sent queue?
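
    A to-be-sent queue is the usual discipline: allow at most one outstanding send per connection and queue everything else. A minimal sketch of the bookkeeping (in Python, transport-agnostic; the begin_send callback is a stand-in for however the send is actually posted, e.g. WSASend on IOCP):

      from collections import deque

      class Connection:
          def __init__(self, begin_send):
              self._begin_send = begin_send  # posts one async send
              self._queue = deque()          # buffers waiting their turn
              self._send_in_flight = False

          def send(self, buf):
              if self._send_in_flight:
                  self._queue.append(buf)    # buffer stays alive in the queue
              else:
                  self._send_in_flight = True
                  self._begin_send(buf)

          def on_send_complete(self):
              # Completion handler: the finished buffer may now be reused.
              # Kick off the next queued send, if any.
              if self._queue:
                  self._begin_send(self._queue.popleft())
              else:
                  self._send_in_flight = False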


  • How small is *too small* for an opensource project?

    - by Adam Lewis
    I have a fair number of smaller projects / libraries that I have been using over the past 2 years. I am thinking about moving them to Google Code to make it easier to share them with co-workers and easier to import them into new projects in my own environments. They are things like simple FSMs, CAN (Controller Area Network) drivers, and GPIB drivers. Most of them are small (less than 500 lines), which makes me wonder: are these types of things too small for a stand-alone open-source project? Note that I would like to make them open source because they do not give me, or my company, any real advantage.


  • Record locking problem between Linux and Windows

    - by PabloG
    I need to run a bunch of old DOS FoxPro / Clipper applications in Linux under DOSEMU. The programs access their "databases" located on a network server (could be a Windows or Linux server). The programs actually run fine, but I cannot manage to make the record locking work as it should: I can run a program in two terminals (or on the server and any terminal, for instance) and lock the same record in both. Right now I'm using Tiny Core Linux as the terminal and Windows XP as the server, accessing the shared files via CIFS and the latest DOSEMU (1.4.0), but I have tried various combinations of server (Ubuntu 7 to 9, Damn Small Linux, XP), protocol (CIFS, Samba, various versions of smbclient), and client (same options as the server) with no luck. I tried to configure the server side to work without oplocks, both in Samba (after reading the entire locking chapter of the O'Reilly Samba book at http://oreilly.com/catalog/samba/chapter/book/ch05_05.html) and in XP (\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\UseOpportunisticLocking = 0), but the problem persists. Any ideas? TIA, Pablo


  • Why does Perl's Crypt::SSLeay timeout on Intel Mac OS X machines?

    - by Joe
    I have a Perl cron job that recently started having its HTTPS connections fail with an error of "500 SSL read timeout". I've tracked the error to an alarm being thrown in Crypt::SSLeay, but I don't know if this is simply something taking too long to respond. So far, I've adjusted the timeout from the default 30 seconds to 10 minutes and it still times out. I've moved the script to other machines, and those on Intel Mac OS X systems all time out, while those under Linux, or on PPC Mac OS X systems, run fine, so I don't think it's a change on the network or the remote server. The point when the process started having problems does not coincide with any software updates or reboots on the machine, and I've contacted the people running the server I'm connecting to, and everyone claims they haven't changed anything. Does anyone have recommendations for debugging HTTPS, or have you ever seen this behavior and can you recommend something I might have overlooked that could have caused this problem?


  • PowerShell 4 compatibility with Windows 2008 R2

    - by Acerbity
    In my environment I have a single server that has access to pretty much my entire network. That server is running Windows 2008 R2, and I have upgraded PowerShell to version 4.0. The question I have is this: can I run cmdlets from that machine on other machines when the cmdlets are version-4 specific? For instance, when I am using PowerShell, even though it is version 4, it doesn't give me IntelliSense autocompletion for "Get-Volume" like it would on a 2012 R2 machine. I understand that it won't run on that machine because the infrastructure won't allow for it, but what about running against a 2012 R2 machine remotely? I am looking to run batch scripts from there for various purposes.


  • Using Ruby tests and Selenium Grid, how can I keep the same browser window for multiple tests?

    - by George Horlacher
    Each of my tests starts a new Selenium client browser and tears it down, so they can run stand-alone, with this code:

      def setup
        if $selenium
          @selenium = $selenium
        else
          @selenium = Selenium::SeleniumDriver.new("#$sell_server", 4444,
            "#$browser", "http://#$network.#$host:2086", 10000)
          @selenium.start
        end
        @selenium.set_context("test_login")
      end

      def teardown
        @selenium.stop unless $selenium
        assert_equal [], @verification_errors
      end

    What I'd like is to run a suite of tests that all use the same browser and don't keep opening and closing new browsers for every test. I've tried using $selenium as a global object / browser, but each test still opens up a new browser and closes it. How should this be done?


  • Why should I reuse XmlHttpRequest objects?

    - by Xavi
    From what I understand, it's a best practice to reuse XmlHttpRequest objects whenever possible. Unfortunately, I'm having a hard time understanding why. It seems like trying to reuse XHR objects can increase code complexity, introduce possible browser incompatibilities, and lead to other subtle bugs. After researching this question for a while, I did come up with a list of possible explanations:

      - Fewer objects created means less garbage collecting
      - Reusing XHR objects reduces the chance of memory leaks
      - The overhead of creating a new XHR object is high
      - The browser is able to perform some sort of network optimization under the hood

    But I'm not sure if any of these reasons are actually valid. Any light you can shed on this question would be much appreciated.


  • PHP Download Script

    - by KA_lin
    [edited] I am trying to make a script that downloads a file. The problem is that I am accessing a page (from a server on the network) that generates that file and opens a download window. Is there a way I can get that file without the pop-up in PHP? Note: I cannot modify the generating page. It is an Excel file, and the application is called Cognos. Using Opera I managed to see the page variables being passed, so I can get to the download page, but I need to save the download into a folder without the download pop-up.
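
    For what it's worth, the pop-up is browser behaviour; a script that requests the generating URL itself just receives bytes it can write to a folder. A minimal sketch (Python here for illustration, with an invented URL, and ignoring any authentication the page may require; the PHP equivalent is cURL with CURLOPT_RETURNTRANSFER plus file_put_contents):

      import urllib.request

      # Hypothetical URL of the page that generates the Excel file.
      url = "http://cognos-server/cognos/generate-report"

      with urllib.request.urlopen(url) as resp:
          with open("reports/report.xls", "wb") as out:
              out.write(resp.read())  # saved directly, no download dialog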


  • Connecting to a web service from Android - AsyncTask or Service?

    - by football
    I'm writing an android app that will connect to a REST/JSON webservice. Users will be retrieving information, uploading comments, downloading and uploading images etc. I know that I shouldn't keep all this network communication in the Activity/UI thread, as it will cause ANRs. What I'm confused about is whether I should use an AsyncTask or a Service with "manual" threading to accomplish this; With a Service, I'd simply have a public method for each method in the webservice's API. I'd then implement threading within each of these methods. If I used an AsyncTask, I would create a helper class that defined AsyncTasks for each method in the webservice's API. Which method is preferred? Interaction with the webservice will only happen while the user is in the Activity. Once they switch to another application, or exit the program, there is no need for communication with the webservice.


  • cURL PHP Proper SSL between private servers with self-signed certificate

    - by PolishHurricane
    I originally had the connection between my 2 servers running with CURLOPT_SSL_VERIFYPEER set to false, with no Common Name in the SSL cert, to avoid errors. The following is the client code that connected to the server with the certificate:

      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
      curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);

    However, I recently changed this code (set it to true) and specified the computer's certificate in PEM format:

      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, TRUE);
      curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
      curl_setopt($ch, CURLOPT_CAINFO, getcwd().'/includes/hostcert/Hostname.crt');

    This worked great on the local network from a test machine, as the certificate is signed with its hostname as the CN. How can I set up the PHP code so it only trusts that host and maintains a secure connection? I'm well aware you can just set CURLOPT_SSL_VERIFYHOST to 0 or 1 and CURLOPT_SSL_VERIFYPEER to false, but those are not valid solutions as they break the SSL security.


  • CPU/GPU monitoring (temperature, current speed, etc.) in C#

    - by Tommy
    I'm looking for a way to monitor system statistics. Here are my main points of interest:

      - CPU temperature
      - CPU speed (cycles per second)
      - CPU load (idle percentage)
      - GPU temperature

    Some other points of interest:

      - Memory usage
      - Network load (traffic up/down)

    My ultimate goal is to write an application that can easily run in the background and allows setting events for certain conditions, for instance: when processor temperature gets to 56C, do _blank_, etc. So this leaves me two main points:

      1. Is there a framework already out there for this sort of thing?
      2. If no to #1, how can I go about doing this?
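
    Not C#, but as a sketch of the polling-plus-threshold loop this describes, here is what it looks like with Python's psutil library (temperature sensors are only exposed on some platforms, and GPU temperature needs a vendor-specific API that psutil does not cover):

      import time
      import psutil

      THRESHOLD_C = 56.0

      def cpu_temperature():
          # sensors_temperatures() only exists on some platforms (e.g. Linux).
          read = getattr(psutil, "sensors_temperatures", None)
          if read is None:
              return None
          for entries in read().values():
              for entry in entries:
                  if entry.current:
                      return entry.current
          return None

      while True:
          freq = psutil.cpu_freq()  # may be None on some platforms
          stats = {
              "cpu_load_pct": psutil.cpu_percent(interval=1),
              "cpu_freq_mhz": freq.current if freq else None,
              "mem_used_pct": psutil.virtual_memory().percent,
              "net": psutil.net_io_counters(),  # bytes_sent / bytes_recv
              "cpu_temp_c": cpu_temperature(),
          }
          temp = stats["cpu_temp_c"]
          if temp is not None and temp >= THRESHOLD_C:
              print("temperature event:", temp)  # do _blank_ here
          time.sleep(4)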


  • Is there any "standard" htonl-like function for 64-bit integers in C++?

    - by ereOn
    Hi, I'm working on an implementation of the memcache protocol which, at some points, uses 64-bit integer values. These values must be stored in "network byte order". I wish there were some uint64_t htonll(uint64_t value) function to do the conversion but, unfortunately, if it exists, I couldn't find it. So I have 1 or 2 questions: Is there any portable (Windows, Linux, AIX) standard function to do this? If there is no such function, how would you implement it? I have in mind a basic implementation, but I don't know how to check the endianness at compile time to make the code portable. So your help is more than welcome here ;) Thank you.
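
    There is no htonll in the standard library; the common recipe is to detect endianness and, on little-endian hosts, swap the two 32-bit halves with htonl. As a quick illustration of what "network byte order" means for a 64-bit value, here it is in Python, whose struct module writes big-endian (network) order with the ! prefix:

      import struct
      import sys

      value = 0x0102030405060708

      # '!Q' = unsigned 64-bit integer in network (big-endian) byte order.
      wire = struct.pack("!Q", value)
      print(wire.hex())  # "0102030405060708" regardless of the host CPU

      # The byte swap that htonll has to perform on a little-endian host:
      if sys.byteorder == "little":
          swapped = int.from_bytes(value.to_bytes(8, "little"), "big")
          print(hex(swapped))  # 0x807060504030201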


  • Unable to connect to SQL database only from ASP

    - by Brian
    In a VB6 program:

      Dim conn As Object
      Set conn = CreateObject("ADODB.Connection")
      conn.Open "DRIVER={SQL Server}; Server=(local)\aaa; Database=bbb; UID=ccc; PWD=ddd"

    In an ASP program:

      Sub ProcessSqlServer(conn)
          Set conn = Server.CreateObject("ADODB.Connection")
          conn.Open "DRIVER={SQL Server}; Server=(local)\aaa; Database=bbb; UID=ccc; PWD=ddd"
      End Sub

    The VB6 program works; the ASP program does not (see the error below). I tried checking the event log for errors but found nothing. Or, more precisely, I did find a local activation permission error, but this was fixed once I added local launch/activation permission for Network Service to the Machine Debug Manager via the Component Services tool. Error:

      Microsoft OLE DB Provider for ODBC Drivers error '80004005'
      [Microsoft][ODBC SQL Server Driver]Timeout expired


  • Limitation of Attachment size when using SMTP

    - by Gas Gemba
    Hi, I wrote a C++ program to send mail using SMTP. But when I attach any file, I notice that a single file's size is always limited to 808 bytes. For example, if I send a text file of 10 KB, the downloaded attachment has only 808 bytes' worth of text. If the large file is a zip file, it gets corrupted on unzipping, obviously due to a CRC failure. I have used a MAPI library to send larger files without a problem. Is this a network limitation of SMTP? Can someone please explain why this is happening? Thank you!
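
    SMTP itself imposes line-length limits but no 808-byte cap on attachments, so the truncation is more likely in the program's own buffering or MIME/base64 encoding than on the network. For comparison, here is a minimal correctly encoded attachment using Python's standard email and smtplib modules (hostnames, addresses, and the file name are placeholders):

      import smtplib
      from email.message import EmailMessage

      msg = EmailMessage()
      msg["From"] = "sender@example.com"
      msg["To"] = "recipient@example.com"
      msg["Subject"] = "File attached"
      msg.set_content("See attachment.")

      with open("report.zip", "rb") as f:
          data = f.read()
      # add_attachment base64-encodes the payload and wraps the lines at
      # the length SMTP requires, whatever the size of the file.
      msg.add_attachment(data, maintype="application", subtype="zip",
                         filename="report.zip")

      with smtplib.SMTP("smtp.example.com") as server:
          server.send_message(msg)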


  • Remote DocumentRoot in Apache gives a 404

    - by kshouler
    I have the following specified in my httpd.conf, but I get a 404 when attempting to connect to the server from another machine. If I set the docroot to the default htdocs directory, everything works fine. (Note: I've also tried replacing the "//storage/data1" part of the path with the network drive letter "U:".)

      ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
      DocumentRoot "//storage/data1/Engineering/Product Development"

      <Directory "//storage/data1/Engineering/Product Development">
          Options Indexes FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
      </Directory>


  • Rewrite content served by Apache

    - by Andy
    Hi guys, I have an internal app (Jira) that I want to use internally and externally. There might be another way of doing this, in which case I'm open to it, but this is what I have so far:

      URL one: https://domainname.com/jira - the external domain name for it
      URL two: https://domainname.local/jira - the internal network name for it

    I am running Apache as a reverse proxy, and I have this:

      <Location /jira>
          ProxyPass http://127.0.0.1:8080/jira
          ProxyPassReverse http://127.0.0.1:8080/jira
      </Location>

    Jira creates all of its links using a base URL, which in this case is set to 'https://domainname.local/jira', so obviously when pages get served to the outside world they have .local on them. The question is: is there a way to have the content rewritten as it is served, in order to change the .local addresses within the HTML to the .com ones? Or am I missing an easier way to solve this? Cheers for any help.... Andy


  • Numpy modify array in place?

    - by User
    I have the following code, which attempts to normalize the values of an m x n array (it will be used as input to a neural network, where m is the number of training examples and n is the number of features). However, when I inspect the array in the interpreter after the script runs, I see that the values are not normalized; that is, they still have their original values. I guess this is because the assignment to the array variable inside the function is only seen within the function. How can I do this normalization in place? Or do I have to return a new array from the normalize function?

      import numpy

      def normalize(array, imin=-1, imax=1):
          """I = Imin + (Imax-Imin)*(D-Dmin)/(Dmax-Dmin)"""
          dmin = array.min()
          dmax = array.max()
          array = imin + (imax - imin)*(array - dmin)/(dmax - dmin)
          print array[0]

      def main():
          array = numpy.loadtxt('test.csv', delimiter=',', skiprows=1)
          for column in array.T:
              normalize(column)
          return array

      if __name__ == "__main__":
          a = main()
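
    The diagnosis is right: the line array = ... rebinds the local name to a brand-new array and leaves the caller's data untouched. Assigning through a slice writes into the existing buffer instead, which also works on the column views produced by array.T. A minimal sketch of the fixed function:

      def normalize(array, imin=-1, imax=1):
          """Scale `array` to [imin, imax] in place."""
          dmin = array.min()
          dmax = array.max()
          # array[:] assigns into the existing buffer rather than
          # rebinding the local name to a new array.
          array[:] = imin + (imax - imin)*(array - dmin)/(dmax - dmin)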


  • Find directory on different server C#

    - by rainhider
    I have a website on Server A, and it needs to find a directory on Server B. On my aspx page, I have:

      <mb:FileSystemDataSource ID="fileSystemDataSource" runat="server"
          RootPath="\\servername\Folder\Subfolder" FoldersOnly="true" />

    (mb is the assembly name alias.) On my aspx.cs page, I have:

      protected void Page_Load(object sender, EventArgs e)
      {
          DataTable gridviewSource = DisplayFilesInGridView();
          DataRow gridviewRow;

          // sets the server path so it does not default to the current server
          string ServerPath = System.IO.Path.Combine(
              "//", this.fileSystemDataSource.RootPath);

          // get all folders or directories and add them to the table
          DirectoryInfo directory = new DirectoryInfo(Server.MapPath(ServerPath));
          DirectoryInfo[] subDirectories = directory.GetDirectories();
      }

    Well, it throws an error on Server.MapPath because it cannot find the server. I am connected to the network. I looked at IIS, and I am pretty sure that is not the problem. If it is, I would really need specifics. Any help would be appreciated.

