Search Results

Search found 19446 results on 778 pages for 'network printer'.


  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether they make sense - and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

      #!/bin/sh
      # Clear all existing rules.
      iptables -F
      # ACCEPT connections for the loopback network connection, 127.0.0.1.
      iptables -A INPUT -i lo -j ACCEPT
      # ALLOW established traffic.
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # DROP packets that are NEW but do not have the SYN bit set.
      iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
      # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
      iptables -A INPUT -f -j DROP
      # DROP packets with all TCP flags set (XMAS packets).
      iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
      # DROP packets with no TCP flags set (NULL packets).
      iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
      # ALLOW SSH traffic (and rate-limit it against DoS attacks).
      iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT
      # ALLOW HTTP traffic (and rate-limit it against DoS attacks).
      iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT
      # ALLOW MySQL traffic (and rate-limit it against DoS attacks).
      iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT
      # DROP any other traffic.
      iptables -A INPUT -j DROP

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implementing this using the Executor framework, and I want to know which one is better and why:

    1. Create a few, say 5 to 10, tasks, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send & receive and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want to use an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
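
    A minimal sketch of the second approach, assuming a hypothetical sendAndCheck() helper standing in for one request/response round trip:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.Future;

      public class SmallTasksDemo {
          // Hypothetical stand-in for sending one light request and judging the reply.
          static boolean sendAndCheck(int requestId) {
              return requestId % 2 == 0;
          }

          public static void main(String[] args) throws Exception {
              ExecutorService pool = Executors.newFixedThreadPool(10);
              List<Future<Boolean>> results = new ArrayList<>();
              for (int i = 0; i < 200; i++) {
                  final int id = i;
                  // One small Callable per send/receive; the pool bounds concurrency.
                  results.add(pool.submit(() -> sendAndCheck(id)));
              }
              int ok = 0;
              for (Future<Boolean> f : results) {
                  if (f.get()) ok++; // blocks until that task has finished
              }
              pool.shutdown();
              System.out.println(ok + " of " + results.size() + " succeeded");
          }
      }

    With one task per request, the pool size (rather than the task grouping) controls how many requests are in flight at once, which tends to make tuning and retrying individual failures easier.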

    Read the article

  • How do I secure a .NET Web Service for use by an iPhone application?

    - by David A Gibson
    Hello, the title says it all: I have a Web Service written in .NET that provides data for an iPhone application. It will also allow the application to make a "reservation." Currently it's all internal to the corporate network, but obviously when the iPhone application is published I will need to ensure the Web Service is available externally. How would I go about securing the Web Service? There are two aspects I'm looking into: authentication for accessing the web service, and protection for the data being transferred. I'm not so bothered about the data being passed back and forth, as it will be viewable in the application anyway (which will be free). The key issue for me is preventing users from accessing the Web Service and making reservations themselves. At the moment I am considering encrypting any strings in the XML data passed back and forth so that only the client can effectively use the web service, sidestepping the need for authentication and providing protection for the data. This is the only model I have seen, but I think the overhead on the iPhone, and even on the web service, would make for a poor user experience. Any solutions at all would be most welcome. Thanks
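
    One common alternative to encrypting the payload is to have the client sign each request with a shared secret (HMAC). A language-agnostic sketch, shown in Java purely for illustration - the class name and secret are hypothetical:

      import javax.crypto.Mac;
      import javax.crypto.spec.SecretKeySpec;
      import java.nio.charset.StandardCharsets;
      import java.util.Base64;

      public class RequestSigner {
          // Hypothetical shared secret embedded in the client and known to the service.
          private static final String SECRET = "change-me";

          // Sign a timestamp plus the request body; the service recomputes and compares.
          public static String sign(String body, long timestamp) throws Exception {
              Mac mac = Mac.getInstance("HmacSHA256");
              mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
              byte[] digest = mac.doFinal((timestamp + "\n" + body).getBytes(StandardCharsets.UTF_8));
              return Base64.getEncoder().encodeToString(digest);
          }

          public static void main(String[] args) throws Exception {
              long now = System.currentTimeMillis();
              // The client sends body, timestamp and signature; the server rejects
              // stale timestamps and mismatched signatures.
              System.out.println(sign("<reservation id=\"42\"/>", now));
          }
      }

    This only addresses who may call the service; TLS is still needed on the wire, and a secret embedded in a shipped client can ultimately be extracted, so signing raises the bar rather than making access impossible.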

    Read the article

  • ColdFusion Session issue - multiple users behind one proxy IP -- cftoken and cfid seems to be shared

    - by smoothoperator
    Hi everyone, I have an application that uses ColdFusion's session management (instead of J2EE session management). We have one client who has recently switched their company's traffic to us to come via a proxy server in their network, so to our ColdFusion server it appears that all traffic is coming from this one IP address for all of the accounts of this one company. Of the session variables, part 1 is kept in a cflock, and part 2 is kept in editable session variables. I may be misunderstanding, but we have done it this way as we modify some values as needed throughout the application's usage. We are now running into an issue of this client having their session variables mixed up (?). We have one case where we set a timestamp, and when it comes time to look it up, it's empty. From the looks of it, this is happening because of another user on the same token. My initial thoughts are to look into modifying our existing session management to somehow generate a unique cftoken/cfid, or to start using jsessionid, if this solves the problem at all. I have done some basic research on this issue and couldn't find anything similar, so I thought I'd ask here. Thanks!

    Read the article

  • select from multiple tables but ordering by a datetime field

    - by Chris Mccabe
    I have 3 tables that are unrelated (each contains data for a different social network). Each has a datetime field named dated. I'm already grouping by hour, as you can see below (this one is for the linked_in table):

      SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
      FROM upd8r_linked_in_accts
      WHERE CAST(dated AS DATE) = '".$start_date."'
      GROUP BY hour

    I would like to know how to get a total across all 3 networks. The tables for the three are:

      CREATE TABLE IF NOT EXISTS `upd8r_facebook_accts` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `owner_id` varchar(50) NOT NULL,
        `user_id` int(11) NOT NULL,
        `fb_id` bigint(30) NOT NULL,
        `dated` datetime NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=80 ;

      CREATE TABLE IF NOT EXISTS `upd8r_linked_in_accts` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `owner_id` varchar(50) NOT NULL,
        `user_id` int(11) NOT NULL,
        `linked_in` varchar(200) NOT NULL,
        `oauth_secret` varchar(100) NOT NULL,
        `first_count` int(11) NOT NULL,
        `second_count` int(11) NOT NULL,
        `dated` datetime NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=200 ;

      CREATE TABLE IF NOT EXISTS `upd8r_twitter_accts` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `owner_id` varchar(50) NOT NULL,
        `user_id` int(11) NOT NULL,
        `twitter` varchar(200) NOT NULL,
        `twitter_secret` varchar(100) NOT NULL,
        `dated` datetime NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=9 ;

    Something like this?

      (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
       FROM upd8r_linked_in_accts
       WHERE CAST(dated AS DATE) = '".$start_date."')
      UNION ALL
      (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
       FROM upd8r_facebook_accts
       WHERE CAST(dated AS DATE) = '".$start_date."')
      UNION ALL
      (SELECT count(*), date_format(dated, '%Y:%m:%d %H') as hour
       FROM upd8r_twitter_accts
       WHERE CAST(dated AS DATE) = '".$start_date."')
      UNION ALL
      GROUP BY hour

    Read the article

  • Suggestions on writing a TCP/IP messaging system (Client/Server) using Delphi 2010

    - by Shane
    I would like to write a messaging system using TCP/IP in Delphi 2010, and I would like to hear what my best options are for using the standard Delphi 2010 components / Indy components for doing this. I would like to write a server which does the listening and forwards messages to all machines on the network running a client.

    1.) a) Clients can send a message to the server to be forwarded to all other clients.
        b) Clients listen for messages from other senders (via the server) and display them.
    2.) a) The server can send a message to all clients.
        b) The server forwards any messages from clients to all other clients.

    Thanks for any suggestions. NOTE: I am not writing an instant messaging or chat program. This is merely a system where users can send alerts/messages to other users - they cannot reply to each other! No commercial, shareware, etc. links, please! I would like to hear about how you would go about writing this type of system, what approaches you would take, and possibly the TCP/IP messaging architecture you would use - whether it be the straight Windows API, Indy components, etc.
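
    As an illustration of the forwarding architecture only (a minimal sketch in Java, not Delphi; the same accept-and-forward pattern maps onto an Indy TIdTCPServer holding a client list):

      import java.io.*;
      import java.net.*;
      import java.util.*;

      // Minimal one-way broadcast server: every line received from any client
      // is forwarded to all other connected clients. Error handling is trimmed.
      public class AlertServer {
          private static final List<PrintWriter> clients =
                  Collections.synchronizedList(new ArrayList<>());

          public static void main(String[] args) throws IOException {
              try (ServerSocket server = new ServerSocket(9090)) {
                  while (true) {
                      Socket socket = server.accept();
                      new Thread(() -> handle(socket)).start();
                  }
              }
          }

          private static void handle(Socket socket) {
              PrintWriter out = null;
              try {
                  BufferedReader in = new BufferedReader(
                          new InputStreamReader(socket.getInputStream()));
                  out = new PrintWriter(socket.getOutputStream(), true);
                  clients.add(out);
                  String line;
                  while ((line = in.readLine()) != null) {
                      synchronized (clients) {
                          for (PrintWriter w : clients) {
                              if (w != out) w.println(line); // one-way forward, no replies
                          }
                      }
                  }
              } catch (IOException ignored) {
              } finally {
                  if (out != null) clients.remove(out);
                  try { socket.close(); } catch (IOException e) { /* already closing */ }
              }
          }
      }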

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git, so please feel free to RTFM me... I have multiple development sites (none of which can communicate with each other via a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, while development would occur in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question.

    I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?), as we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository), but I can't get 'git push' to work - it seems I need a checked-out repo.

    Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing?

    A solution that I thought about that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media with remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?

    Read the article

  • Cross-platform general purpose C++ RPC library

    - by iUm
    Here's the task: imagine we have an application and a plug-in for it (a dynamic library). The interface between the application and the plug-in is completely defined. Now I need to run the application and the plug-in on different computers. I wrote a stub for the plug-in on the computer where the real application is running; the application loads it and calls its functions as if it were the native plug-in. On the other computer there's a stub instead of the real application, which loads the native plug-in. Now I need to organize RPCs between my stubs over the network, regardless of the underlying transport. Usually this is not difficult, but there are some restrictions:

    - Application/plug-in interaction can be reentrant (e.g. the application calls f1() from the plug-in, in f1() the plug-in calls g1() from the application, in g1() the application calls f2() from the plug-in, and so on...)
    - Any such reentrant sequence should be executed by exactly the same thread that started it.

    Where can I find a cross-platform C++ RPC library with such features?

    Read the article

  • Copy folder contents using VBScript

    - by LukeR
    I am trying to copy the contents of certain folders to another folder using VBScript. The goal is to enumerate a user's AD groups and then copy specific folder content based on those groups. I have code, which is currently not working:

      Dim Group, User, objFSO, objFolder, source, target, StrDomain, FolderBase, net, Struser
      StrDomain = "domain.local"
      FolderBase = "\\domain.local\netlogon\workgrps\icons"
      Set net = CreateObject("wscript.network")
      Struser = net.username
      target = "\\fs1\users\" & net.username & "\Desktop\AppIcons\"
      DispUserInWhichGroup()

      Function DispUserInWhichGroup()
          On Error Resume Next   ' Note: this hides errors; consider removing it while debugging.
          Set objFSO = CreateObject("Scripting.FileSystemObject")
          Set User = GetObject("WinNT://" & StrDomain & "/" & Struser & ",user")
          For Each Group In User.Groups
              ' A backslash is needed between the base folder and the group name.
              source = FolderBase & "\" & Group.Name
              ' GetFolder is a method of the FileSystemObject.
              Set objFolder = objFSO.GetFolder(source)
              For Each file In objFolder.Files
                  objFSO.CopyFile source & "\" & file.Name, target & file.Name
              Next
          Next
      End Function

    This has been cobbled together from various sources, and I'm sure most of it is right; I just can't get it working completely. Any assistance would be great. Cheers.

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution for choosing unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a highly disputed resource, so any freed index needs to become immediately available to the end user. To tackle this requirement I used a BitSet: I allocate an RT index with set() and deallocate a suffix with clear(), and the method nextClearBit() can find the next available index. I handle synchronization / concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a Blob field. The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, and BitSet is in any case limited to the int range). Even with this massive range available, the client still wants freed Route Targets to become available to the end user, mainly because the lowest ones (up to 65535) - which are compatible with old routers - are heavily disputed. This is before I tell the customer that it is impossible and that he will have to make do with my reusable index for the lower RTs (up to 65550) and use a database sequence for the others (which means that when a user frees a Route Target, it will not become available again). Would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of Oracle Database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
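
    One direction that keeps freed values reusable without a four-billion-bit bitmap is to track free ranges rather than individual indices. A minimal sketch (the class name is hypothetical, and it omits double-release checks and persistence):

      import java.util.Map;
      import java.util.TreeMap;

      // Stores only the free *ranges*, so memory scales with fragmentation,
      // not with the size of the range (here 1 .. 4294967295, held in longs).
      public class RangePool {
          private final TreeMap<Long, Long> free = new TreeMap<>(); // start -> end, inclusive

          public RangePool(long min, long max) {
              free.put(min, max); // initially one big free range
          }

          // Returns the lowest free value, so the disputed low suffixes go out first.
          public synchronized long allocate() {
              Map.Entry<Long, Long> first = free.firstEntry();
              if (first == null) throw new IllegalStateException("pool exhausted");
              long start = first.getKey(), end = first.getValue();
              free.remove(start);
              if (start < end) free.put(start + 1, end); // shrink the range
              return start;
          }

          public synchronized void release(long value) {
              long start = value, end = value;
              Map.Entry<Long, Long> left = free.floorEntry(value - 1);
              if (left != null && left.getValue() == value - 1) { // merge with left neighbour
                  start = left.getKey();
                  free.remove(left.getKey());
              }
              Long rightEnd = free.remove(value + 1); // merge with right neighbour
              if (rightEnd != null) end = rightEnd;
              free.put(start, end);
          }

          public static void main(String[] args) {
              RangePool pool = new RangePool(1L, 4294967295L);
              long a = pool.allocate();            // 1
              long b = pool.allocate();            // 2
              pool.release(a);
              System.out.println(pool.allocate()); // 1 again: freed values are reused
              System.out.println(b);               // 2
          }
      }

    Since the map holds one entry per contiguous free range, a lightly fragmented pool stays tiny and could still be serialized into a Blob much like the BitSet.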

    Read the article

  • Can Win32_NetworkAdapterConfiguration.EnableStatic() be used to set more than one IP address?

    - by Andrew J. Brehm
    I ran into this problem in a Visual Basic program that uses WMI, but I could confirm it in PowerShell. Apparently the EnableStatic() method can only be used to set one IP address, despite taking two parameters - IP address(es) and subnet mask(s) - that are arrays. I.e.

      $a = get-wmiobject win32_networkadapterconfiguration -computername myserver

    This gets me an array of all network adapters on "myserver". After selecting a specific one ($a=$a[14] in this case), I can run $a.EnableStatic(), which has this signature:

      System.Management.ManagementBaseObject EnableStatic(System.String[] IPAddress, System.String[] SubnetMask)

    I thought this implied that I could set several IP addresses like this:

      $ips = "192.168.1.42","192.168.1.43"
      $a.EnableStatic($ips, "255.255.255.0")

    But this call fails. However, this call works:

      $a.EnableStatic($ips[0], "255.255.255.0")

    It looks to me as if EnableStatic() really takes two strings rather than two arrays of strings as parameters. In Visual Basic it's more complicated and arrays must be passed, but the method appears to take into account only the first element of each array. Am I confused again, or is there some logic here?

    Read the article

  • Silverlight Business Application template with WCF is throwing a warning

    - by Manoj
    Hi, I am using the Silverlight Business Application template. I wrote a function which uses the Membership.getUserList function to return the user list, and I tried exposing it as a service using WCF. But when I try to compile the client-side code, it throws a warning saying "Client proxy generation for user_authentication.Web.Service1 failed". Why does this happen? The complete warning message is:

      Warning 4: Client proxy generation for service 'user_authentication.Web.Service1' failed:
      Generating metadata files...
      Warning: Unable to load a service with configName 'user_authentication.Web.Service1'. To export a service provide both the assembly containing the service type and an executable with configuration for this service.
      Details: Either none of the assemblies passed were executables with configuration files or none of the configuration files contained services with the config name 'user_authentication.Web.Service1'.
      Warning: No metadata files were generated. No service contracts were exported. To export a service, use the /serviceName option. To export data contracts, specify the /dataContractOnly option. This can sometimes occur in certain security contexts, such as when the assembly is loaded over a UNC network file share. If this is the case, try copying the assembly into a trusted environment and running it.

    Read the article

  • ASP.NET 3.5 app - cannot load assemblies, "Strong name signature could not be verified", only when deployed

    - by hitsolutions
    Have developed an ASP.NET 3.5 application which consists of a website, some developed assemblies and some 3rd-party assemblies such as Telerik, Jayrock etc., all very much standard 3rd-party libraries. Created and built this app, tested it on a Win 2008 Eval running on a VM - all fine. Imagine my frustration when, after installing on the client's production Win 2008 server, the app could not run and the error message was the "Strong name signature could not be verified. The assembly may have been tampered with, or it was delay signed ..." one. This was for all assemblies in the app (removed one and this kept popping up for a different assembly). Attempted to install on a machine on the network and received the same error. I am fairly baffled and a little freaked, as I cannot figure this out and time is rapidly running out. Have inspected all parts of the server I know about (.NET, IIS7) but all seems fine. What could cause this? It sounds like there is a stricter security manifest on the production server - but where would I look, and for what? It must be a group policy. The only other item is that the machines are running Symantec anti-virus. The IT head is on hols so I can't quiz him, which is also frustrating - but as they say, time waits for no man!

    Read the article

  • C# Type conversion between two similar Datatable objects

    - by Ali
    I have a .NET project with the Sync Framework and two separate DataSets for MS SQL and Compact SQL. In my base class I have a generic DataTable object; in my derived classes I assign a typed DataTable to the generic object based on whether the application is operating online or offline. Example:

      if (online)
          _dataTable = new MSSQLDataSet.Customer;
      else
          _dataTable = new CompactSQLDataSet.Customer;

    Now everywhere in my code I have to check and cast based on the current network mode, like this:

      public void changeCustomerID(int ID)
      {
          if (online)
              ((MSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
          else
              ((CompactMSSQLDataSet.CustomerDataTable)_dataTable)[i].CustomerID = value;
      }

    But I don't think this is very efficient, and I believe it can be done in a smarter way, using only one line of code and getting the type of _dataTable dynamically at run time. My problem is at design time: in order to access DataTable properties such as "CustomerID", it has to be cast to either MSSQLDataSet.CustomerDataTable or CompactMSSQLDataSet.CustomerDataTable. Is there a way to have a function or an operator that converts _dataTable to its runtime type but still lets me use its design-time properties, which are the same between the two types? Something like:

      ((aType)_dataTable)[i].CustomerID = value;
      // or
      GetRuntimeType(_dataTable)[i].CustomerID = value;

    Read the article

  • Creating Binary Block from struct

    - by MOnsDaR
    I hope the title describes the problem; I'll change it if anyone has a better idea. I'm storing information in a struct like this:

      struct AnyStruct
      {
          AnyStruct() :
              testInt(20),
              testDouble(100.01),
              testBool1(true),
              testBool2(false),
              testBool3(true),
              testChar('x')
          {}

          int testInt;
          double testDouble;
          bool testBool1;
          bool testBool2;
          bool testBool3;
          char testChar;

          std::vector<char> getBinaryBlock()
          {
              // how to build that?
          }
      };

    The struct should be sent over the network in a binary byte buffer with the following structure:

      Bit 00- 31: testInt
      Bit 32- 61: testDouble, most significant portion
      Bit 62- 93: testDouble, least significant portion
      Bit 94:     testBool1
      Bit 95:     testBool2
      Bit 96:     testBool3
      Bit 97-104: testChar

    According to this definition, the resulting std::vector should have a size of 13 bytes (char == byte). My question now is how I can form such a packet out of the different datatypes I've got. I've already read through a lot of pages and found datatypes like std::bitset or boost::dynamic_bitset, but neither seems to solve my problem. I think it is easy to see that the above code is just an example; the original standard is far more complex and contains more different datatypes. Solving the above example should solve my problems with the complex structures too, I think. One last point: the problem should be solved using only standard, portable C++ language features, such as the STL or Boost.
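
    Since three of the fields occupy single bits, the packing has to be done with shift-and-mask arithmetic rather than a plain memcpy of the struct. Below is a sketch of that bookkeeping, shown in Java only to illustrate the arithmetic (Double.doubleToLongBits stands in for reinterpreting the double's bytes; in C++ the same shifts apply after memcpy-ing the double into a 64-bit integer). Note that with a full 64-bit double the fields need 107 bits (14 bytes), slightly more than the layout above, which gives the double 62 bits:

      import java.util.Arrays;

      // Packs mixed-width fields at consecutive bit offsets into a byte buffer.
      public class BitPacker {
          private final byte[] buf;
          private int bitPos = 0; // next free bit, counted from the start of the buffer

          BitPacker(int bytes) { buf = new byte[bytes]; }

          // Append the lowest 'bits' bits of 'value', most significant bit first.
          void put(long value, int bits) {
              for (int i = bits - 1; i >= 0; i--) {
                  if (((value >>> i) & 1L) != 0) {
                      buf[bitPos / 8] |= (byte) (0x80 >>> (bitPos % 8));
                  }
                  bitPos++;
              }
          }

          public static void main(String[] args) {
              BitPacker p = new BitPacker(14);
              p.put(20, 32);                              // testInt
              p.put(Double.doubleToLongBits(100.01), 64); // testDouble
              p.put(1, 1);                                // testBool1
              p.put(0, 1);                                // testBool2
              p.put(1, 1);                                // testBool3
              p.put('x', 8);                              // testChar
              System.out.println(Arrays.toString(p.buf));
          }
      }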

    Read the article

  • HTML relative links on various domains

    - by Adam Kiss
    I have a quickie: when you code/develop themes, how do you link to the various files in your HTML/CSS code? Example: at our firm we mostly use <base target="http://whatever"> in our main template and then just <img src="./images/file.png"> in our HTML, "/category/page" as links, and something alike in our CSS. However, when testing on different machines, we use an IP address rather than localhost to reach the coder's main dev station, so all the base links break (because localhost goes to the viewing machine, not the coder's, on our network). The same thing happens when updating pages: on the dev server, we have to edit the base target so that browsing the site won't take us to the live site. This part is actually rather simple PHP (if ... echo, else echo something else), but it still doesn't solve the wider coding-testing problems. So, my question is: how do YOU solve it? How do you use relative links that basically don't care what domain the page is on and don't care about URL rewriting? (Because ../images/ is different for / and different for /something/somethingElse/page.)

    Read the article

  • Why is search functionality not working on this page?

    - by DaveDev
    We deliver micro-site content for our client. Our content is injected into a wrapper that is supplied by another developer. To deliver our content, we host the wrapper as well as the content; the user can access this at http://fundcentre.newireland.ie/ (try a search for 'bloxham'). For the other content that is not ours, the other developer hosts a similar (though slightly different) wrapper and delivers the content; the user accesses this here: http://www.newireland.ie/ (try a search for 'bloxham'). The wrapper contains a search box, which does not work for us but works for the other developer. I took a look at the network traffic with Firebug, and it appears that when I do the search from the wrapper that we're hosting, I'm getting a "407 Proxy Access Denied" error. My guess is that their proxy has a problem with the fact that the search is being conducted from a page hosted outside the scope of their proxy. It was also suggested that there were JavaScript errors on the page that were preventing the search from executing, but I can't see any - and I don't think I'd get as far as the proxy error if that were the case. I don't really understand this stuff too well, though, so could somebody with a bit more experience please take a look and maybe shed some light on this for me? Thanks.

    Read the article

  • How to check if new version of Chrome is available?

    - by serg
    I am trying to build an extension that would notify a user when a new version of Chrome is available. I tried to inspect network traffic while Chrome was checking for an update: it sends a request to http://74.125.95.113/service/update2?w=3:{long_encoded_string}, a page that returns XML with the information I need:

      <?xml version="1.0" encoding="UTF-8"?>
      <gupdate xmlns="http://www.google.com/update2/response" protocol="2.0" server="prod">
        <daystart elapsed_seconds="31272"/>
        <app appid="{8A69D345-D564-463C-AFF1-A69D9E530F96}" status="ok">
          <updatecheck status="noupdate"/>
          <ping status="ok"/>
        </app>
      </gupdate>

    Besides sending {long_encoded_string} as a URL parameter, it is also sending some encoded cookie. Maybe someone familiar with the Chrome build process can shed some light on those encoded strings and how to build them? Maybe there is another, easier way? (I have a feeling that string encoding is a dead end for me.)

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they're all using RAID. To date, I've therefore been backing the boxes up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes)
    - long-term, the process isn't sustainable: each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5 GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example; there's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception:

      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed, and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?

    Read the article

  • UnauthorizedAccessException when running desktop application from shared folder

    - by Atara
    I created a desktop application using VS 2008. When I run it locally, all works well. I shared my output folder (WITHOUT allowing network users to change my files) and ran my exe from another Vista computer on our intranet. When running the shared exe, I receive a "System.UnauthorizedAccessException" when trying to read a file. How can I give permission to allow reading the file? Should I change the code? Should I grant permission to the application\folder on the Vista computer? How? Notes:

    - I do not use ClickOnce; the application should be distributed using xcopy.
    - My application's target framework is ".Net Framework 2.0".
    - On the Vista computer, "controlPanel | UninstallOrChangePrograms" says it has "Microsoft .Net Framework 3.5 SP1".
    - I also tried to map the folder to a drive, but got the same errors, only now the fileName is "T:\my.ocx".

      ' ----------------------------------------------------------------------
      ' My code:
      Dim src As String = mcGlobals.cmcFiles.mcGetFileNameOcx()
      Dim ioStream As New System.IO.FileStream(src, IO.FileMode.Open)

      ' ----------------------------------------------------------------------
      Public Shared Function mcGetFileNameOcx() As String
          Dim dirName As String = Application.StartupPath & "\"
          Dim sFiles() As String = System.IO.Directory.GetFiles(dirName, "*.ocx")
          Dim i As Integer
          For i = 0 To UBound(sFiles)
              Debug.WriteLine(System.IO.Path.GetFullPath(sFiles(i)))
              ' If any are found, return the first:
              Return System.IO.Path.GetFullPath(sFiles(i))
          Next
          Return ""
      End Function

      ' ----------------------------------------------------------------------
      ' The exception I receive:
      System.UnauthorizedAccessException: Access to the path '\\computerName\sharedFolderName\my.ocx' is denied.
         at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
         at System.IO.FileStream.Init(...)
         at System.IO.FileStream..ctor(...)
         at System.IO.FileStream..ctor(String path, FileMode mode)

    Read the article

  • Delphi: Transporting Objects to remote computers

    - by pr0wl
    Hello. I am writing a two-tier ordering application for network usage, so we have a client and a server. On the client I create objects of TBest, in which the product ID, the amount and the user who orders it are saved (so this is an item of an order). An order can have multiple items, and those are saved in an array so the created order can later be sent to the server. The class that holds the array is called TBestellung. So I created both:

      TBest.toString: string;
      TBest.fromString(source: string): TBest;

    Now, I send the toString result to the server via a socket, and on the server I create the object using fromString (it parses the attributes received). This works as intended. Question: is there a better and more elegant way to do this? Serialisation is a keyword, yes, but isn't that awful / difficult when you serialise an object (TBestellung in this case) that contains an array of other objects (TBest in this case)?

    Small amendment: before it gets asked - yes, I should create an extra (static) class for toString and fromString, because otherwise the server needs to create an "empty" TBest in order to be able to use fromString.

    Read the article

  • .NET Remoting: Getting underlying socket?

    - by Alan
    Hi, I'm writing a light remoting app to assist in debugging a problem with remoting communication. This app mimics much of what a larger application does: it periodically sends a heartbeat to a peer application, and periodically verifies that a heartbeat has been received within some time threshold. What we're seeing is that in our big application, the heartbeats seem to get dropped. One peer will go for long periods of time without seeing heartbeats from another peer, until the peer that is "dead" is restarted. The big application is responsive in all other ways. We believe it has something to do with the network setup. We were able to reproduce the problem locally, and fixed it by making some configuration changes to our test environment. To help our customer diagnose the issue, the mini remoting app needs to log as much information as possible. So, is there a way to get the underlying socket for a remoting connection? I'm aware that I could write a custom sink for this, but I'd like to keep the actual remoting process as close as possible to what is implemented in the big app. Also, as an aside: any ideas why the big app might be "dropping" heartbeats?

    Read the article

  • Accurately and securely measure the time spent viewing a web page

    - by balpha
    Suppose the following: you have a web page that presents a simple game to a user (e.g. a quiz, a puzzle, etc.). The user solves the puzzle, submits the result, and you want to measure as precisely as possible how long they took to solve it. Assume it's quite simple, so we're talking seconds, not hours. Also assume JavaScript is required anyway, so there's no need to think of JS-disabled browsers. Finally, assume we don't want to use anything like Flash, Silverlight, or the like. I can think of several techniques:

    1. Simply take the time between the point when the data was sent from the server and the point when the submission arrives. Since this is exclusively server-side, there's no chance for cheating. However, issues like network latency and page rendering time might make this unfair for users with slow computers / browsers / internet connections.
    2. On the first request, just send the page without the actual game data. When everything is loaded so far, retrieve the game data through an AJAX call and populate the page with it. This is similar to 1, but reduces some of the caveats introduced through time spent on overhead.
    3. Have the time measured on the client side using JavaScript and submitted alongside the solution. This would theoretically be the most accurate, but it introduces the possibility of cheating, because you're relying on client data.
    4. Use the request time headers of a "ready to play" AJAX call and the result submission request. Same caveat as 3, as it is still client data.
    5. A combination of server-side and client-side measuring with some kind of plausibility analysis. I can't think of a good way, but maybe you can (see the sketch below).

    Thoughts? Other ideas?
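
    A minimal sketch of technique 5, with hypothetical names: the server brackets the game with its own timestamps and accepts the client-measured figure only when it fits inside that server-side window:

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Combines techniques 1 and 3: trust the client's fine-grained timing
      // only when it is consistent with the server's own coarse window
      // (dispatch of the game data -> arrival of the solution).
      public class SolveTimer {
          private final Map<String, Long> started = new ConcurrentHashMap<>();

          // Call when the AJAX request hands the game data to the client.
          public void gameStarted(String sessionId) {
              started.put(sessionId, System.currentTimeMillis());
          }

          // Call when the solution arrives; clientMillis is the JS-measured time.
          // Returns the accepted duration in ms, or -1 if the claim is implausible.
          public long gameFinished(String sessionId, long clientMillis) {
              Long t0 = started.remove(sessionId);
              if (t0 == null) return -1; // no matching start
              long serverMillis = System.currentTimeMillis() - t0;
              // The client cannot plausibly have taken longer than the server
              // observed; allow some slack below for latency and rendering.
              boolean plausible = clientMillis <= serverMillis
                      && clientMillis >= serverMillis - 2000;
              return plausible ? clientMillis : -1;
          }

          public static void main(String[] args) throws InterruptedException {
              SolveTimer timer = new SolveTimer();
              timer.gameStarted("abc");
              Thread.sleep(1500); // the user "solves the puzzle"
              System.out.println(timer.gameFinished("abc", 1400)); // plausible -> 1400
          }
      }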

    Read the article

  • C# & SQL Server Authentication

    - by Peter
    Hello, I'm currently developing a C# app with a SQL Server DB back-end. I'm approaching the point of deployment and hitting a problem. The application will be deployed within an Active Directory network. As far as SQL authentication goes, I understand that I have two options: Windows Authentication or Server Authentication. If I use Server Authentication, I'm concerned that the username and password for the account will be stored in plain text in the app.config file, and therefore leave the database vulnerable. Using Windows Authentication will avoid this issue; however, it would mean giving every member of staff within our organisation read/write access to the database in order to run the app correctly. Whilst this is OK, it also means that they can easily connect to the database themselves via other means and directly alter the data outside of the app. I'm guessing there is something really obvious I'm missing here, but I've been googling all evening to no avail. Any advice/guidance would be much appreciated! Peter

    Addition: my project is Windows Forms based, not ASP.NET - is encrypting the app.config file still the right answer? If it is, does anyone have any examples that are not ASP.NET based?

    Read the article
