Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.

Page 28/307 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Can generated OpenVPN keys be used on multiple clients?

    - by Jakobud
    We are experimenting with running an OpenVPN server for our business. One question I can't seem to find the answer to is this: when we generate keys for one of our users to use at home, can they use the same keys on their home laptop as well as their home desktop? Or do we need to generate separate keys for each of the user's client machines?

    Read the article

  • Have to run auto-negotiate between clients and switch - "old" switch works fine - "new" switch results in "port flapping"?

    - by ConfusedAboutSwitching
    I need some help understanding a problem we're having at work. We run Altiris/Deployment Solution and have to use auto-negotiate between client systems and our switches (Altiris apparently requires this for imaging, PXE boot and other functions). We have several areas with old wiring (Cat 3 & Cat 5) that have old 10/100 Cisco switches in them, and we can set these systems up to "auto/auto" (auto-negotiate on both the NIC and the switch port), and everything has been working fine.

    But our networking crew changed out a couple of old switches for 10/100/1000 Cisco switches, and now they are claiming that "auto/auto" won't work because the switches can't auto-negotiate the way the old 10/100 switches did, and that if we try to set the new gig switches to auto-negotiate, the switch port starts "port flapping" and shuts the port down. But if we put the old switch back in, they work using "auto/auto" just fine - no port flapping. The networking crew is telling me that the problem is that we're putting "new switches" on "old wire", and that the old cabling can't/won't support auto-negotiation with these new switches...?

    There's something about this that doesn't make sense to me - can someone explain this to me? Or is our networking crew just doing something wrong in the configuration of these new switches? Why will the old switches work "auto/auto" when the new switches won't? HELP!! ...and Thanks!! M

    Read the article

  • How can we implement network search for Windows AND OS X clients?

    - by michielvoo
    We have a network with Windows 7 and OS X (10.5 and 10.6) computers. Our servers run on Windows Server 2003 (1 Small Business Server, 2 Standard). We need to be able to search through about 15,000 - 30,000 documents in our archives. The best solution would be if users can search directly from the Windows menu (on Windows 7) or the Spotlight menu (on OS X 10.5 and 10.6). Also good would be if users can search directly from the search bar in their browsers, or by first visiting a site with the search form. In case the users search through the browser, it's important that they are able to open a file in the search results just by clicking on it. I have tested Microsoft Search Server Express, but it doesn't meet the requirements (no OS X support, and results in the browser can't be opened by clicking on them in anything but Internet Explorer). I have looked at Spotlight Server, but that only supports OS X. Thanks!

    Read the article

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

        [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because:

    - There is no reference in the code/URL to a folder/file "cache".
    - The folder/file "cache" does not exist.
    - The client is randomly trying to access a "cache" folder everywhere on the website.
    - It is always trying to access the folder/file "cache" following this pattern:

        /level1/.../levelwhatever/filename (referer)
        /level1/.../levelwhatever/cache

    We run LAMP (Debian stable: PHP 5.3.3-7+squeeze9; we also use APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP above for privacy.

    Read the article

  • Does Exchange Cache Mode affect email markers refresh time in other Outlook clients?

    - by David
    We have users who share a single email account by using the Additional Email option under their accounts. Now they want to assign emails to one another using the markers alongside the emails. We noticed that when changing the color of a marker, one Outlook client updated immediately, but another Outlook client did not. It looked like they were both set to "Cached Mode". Is it likely that caching affected the refresh of the client? Would it be better to turn off cached mode if we are using Outlook this way?

    Read the article

  • Internal Outlook clients prompted for OWA login when only accessing local internal Exchange server?

    - by TallGuy
    Hope someone can help with this one. The scenario: an internal Exchange 2003 server, with an OWA front-end server in the DMZ. OWA logins work fine, with SSL configured. Over the last week (3 times so far), when an internal person opens their Outlook and then tries to open an email with JPG attachments, they are prompted for the webmail login. Why? Even if they enter their valid webmail OWA login it fails and re-prompts once for each attachment. Once they get through the multiple login prompts, they can double-click to open the attachments, but they are all blank. Any ideas on what could cause this? Why would someone accessing an email from an internal Outlook client get prompted for the OWA/webmail server login?

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month, I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website update cycle: it starts from the moment the users visit the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while, which could break the rendering of the website if, for instance, the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of is that if website updates can be scheduled, the max-age returned by the server could be decreased every day accordingly, so that no matter when people visit the website, the end of the caching period would coincide with the update of the website - but changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
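    To make that last idea concrete - a max-age that runs out at the next scheduled update instead of a fixed 30 days - here is a minimal Java sketch. The class name, the hard-coded release date and the idea of computing the value per request are illustrative assumptions, not part of the original setup; the same calculation could live in whatever serves the static files.

        import java.time.Duration;
        import java.time.ZonedDateTime;

        public class MaxAgeSketch {

            // Hypothetical: the next scheduled monthly release, known in advance.
            static final ZonedDateTime NEXT_RELEASE = ZonedDateTime.parse("2012-06-01T00:00:00Z");

            // A max-age that expires exactly at the next release, so every visitor's
            // cached CSS/JS goes stale when the site changes, no matter when the
            // files were first fetched.
            static long maxAgeSeconds(ZonedDateTime now) {
                long untilRelease = Duration.between(now, NEXT_RELEASE).getSeconds();
                return Math.max(0, untilRelease);
            }

            public static void main(String[] args) {
                ZonedDateTime now = ZonedDateTime.parse("2012-05-17T12:00:00Z");
                System.out.println("Cache-Control: max-age=" + maxAgeSeconds(now));
            }
        }

    The trade-off mentioned at the end of the question still applies, but it shifts from the sysadmin to the application: nothing in the server configuration has to be touched daily, the value is simply recomputed whenever the header is emitted.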

    Read the article

  • Why can't email clients create rules for moving dates like "yesterday"?

    - by Morgan
    I've never seen an email client in which I could easily create a rule to do something like "Move messages from yesterday to a folder." Is there some esoteric reason why this would be difficult? I know I can easily create rules around specific dates, but that isn't the same thing by a long shot; am I missing something? In Outlook 2010 I can create search folders that do sort of this type of thing, but you can't create rules around a search folder... It seems like either I am missing something major, or this is terribly short-sighted.

    Read the article

  • Router behind Router - second router (and its clients) cannot be "seen" even after both routers are DMZ'd

    - by Trioke
    A couple of terms I should get out of the way for consistency's sake throughout the post:

    - External Router/Modem - SMC 8014WG - External IP 173.32.144.134 - Internal IP 192.168.0.1
    - Internal Router - LinkSys WRT120N - "External" IP of 192.168.0.175 - Internal IP 192.168.1.1 - connected via Ethernet cable (a really long one, from the basement to the second floor)
    - PC - IP 192.168.1.200 - connected wirelessly via WPA2 Personal
    - Laptop - used to try and diagnose the problem; a 4th machine that won't be part of the final setup once everything works

    The actual problem: I've tried setting the LinkSys router as a DMZ'd client on the SMC router, and then DMZ'd the actual PC on the LinkSys. So the DMZ looks like this: on the SMC, the client with IP 192.168.0.175 is DMZ'd; on the LinkSys, the client with IP 192.168.1.200 is DMZ'd. No dice. I then tried forwarding the necessary port on the SMC to the LinkSys (let's just say port 80), then forwarding port 80 on the LinkSys to the PC. Same as the DMZ scenario above, but with port forwarding instead. No dice, still :(. Now here's where I went stupid - and tell me if one should never do this - I enabled both DMZ and port forwarding at the same time. I fired up Opera - my browser of choice ;) - typed in 173.32.144.134:6333 and... Third time is the charm, they say? Well, clearly not, otherwise I wouldn't be here ;).

    To diagnose the problem, I enabled "Allow remote access to the Admin panel" on the LinkSys router and specified port 6333 as the port to use. I forwarded port 6333 on the SMC to 192.168.0.175 and accessed my external IP of 173.32.144.134:6333 in hopes of seeing the Admin panel... No dice (I think I've run out of dice by now ;)). So to see where the problem was, I connected a laptop to the SMC via LAN cable and typed in 192.168.0.175:6333, and voilà, Admin panel access! So the problem looks like it lies with the SMC - but that's as far as I've got. I've done the port forwarding, the DMZ'ing, and I've even disabled the built-in firewall for good measure, but nothing worked. So here I am, unable to connect to the PC behind the internal router externally, and without anything to go on other than to come here and ask for the wisdom of the superuser folks :). If any more detail is required, just ask. (Apologies in advance if questions should never be this long winded!)

    Read the article

  • How do I make stunnel verify a client's certificate?

    - by unixman83
    NOTE: The title is misleading. Please correct it if you know a better title. What I want to know is how do I create the SSL keys / certificates needed for this. Hi. I am using stunnel to authenticate RDP (Remote Desktop), and I need to verify that a client possesses the proper credentials so people cannot brute-force their way into the machine. I am also using a bad (outdated) version of RDP that has security vulnerabilities, so stunnel is a must. I will pre-share the necessary .pem files between machines. What are the openssl commands I need to create the right .pem files on both the client and on the server? What files need to be shared?

    Read the article

  • Daylight saving time not changing on some Active Directory domain clients.

    - by Nick Gorbikoff
    We just had the daylight saving time change in the US, and PCs on my network are behaving strangely: some of them changed time and some didn't.

    My network: 2 locations, both in the Midwest, in the same time zone.

    Location 1: 120 PCs (Windows XP & Windows 2000), with 1 Active Directory domain controller on Windows 2003 Standard and a couple of Windows 2000 servers (they are up to date); the rest of the servers are Xen or Debian machines (all up to date). The second location is connected through an OpenVPN link; all its PCs are running fine, but they are all connecting to our AD domain controller.

    Location 2: 10 PCs and a shared LAN NAS.

    Both of the routers/firewalls in both locations are pfSense boxes with the NTP service running, and they are up to date.

    Tried all the usual suspects:

    - I have all the latest updates installed
    - restarted the machines
    - the domain controller is running fine
    - most computers are running fine
    - I have only one domain controller on my network
    - my firewall (pfSense) also serves as the NTP server, but it's up to date
    - all of the Linux machines are fine since they query the firewall/router for the time
    - about 1/3 of my PCs are 1 hour behind; if I change them manually they just change back (the way domain PCs are supposed to)

    I've tried everything but I can't think of anything else to try.

    Read the article

  • What PowerShell/WSMan clients or queries are consuming more than 1000 requests per 2 seconds?

    - by makerofthings7
    Exchange 2010 remote administration tools are complaining with the following error:

        [txexmb02.ibm.com] Connecting to remote server failed with the following error message : The WS-Management service cannot process the request. The system load quota of 1000 requests per 2 seconds has been exceeded. Send future requests at a slower rate or raise the system quota. The next request from this user will not be approved for at least 558475776 milliseconds. For more information, see the about_Remote_Troubleshooting Help topic.
            + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [], PSRemotingTransportException
            + FullyQualifiedErrorId : PSSessionOpenFailed
        VERBOSE: Connecting to TXEXHC02.ibm.com

    The help document this error refers to says this is a WS-Man error. We're running SCOM 2007 R2 and I am thinking that is increasing the query count, but I need to prove it.

    Read the article

  • Recommendation for hardware upgrade: thin clients? Or...?

    - by Alex C.
    I work for an animal shelter in Upstate New York. We have about 50 machines running XP Pro. They're connected to a Windows network with a domain. About half of these computers are used for nothing more than using two web-based apps -- one to keep track of our animals, the other to process credit cards. Having a full-blown desktop PC seems like overkill for this purpose. The PCs are three-to-five years old, and I'd like to come up with a plan to upgrade the hardware. Our donations are down (not surprising, given the economy), so cost is a big factor. Can people recommend some options? Some sort of thin client, maybe?

    Read the article

  • Running Magento for multiple clients - single installation vs. multiple installations

    - by Chris Hopkins
    Hi there. I am looking to set up a Magento (Community Edition) installation for multiple clients and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need in it, but surprisingly I am not willing to shell out the $12,000-odd yearly subscription. It seems there are a few options available to me, but I am worried about the performance I will get out of each of them.

    Option 1) Single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update my core files all at the same time and also manage all my store users from one place. The problems here are that I don't know anything about the reliability of this extra product and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might all slow down so much that it keels over, as I have heard a lot about Magento's slowness. Module link: http://www.aitoc.com/en/magentomods_advanced_permissions.html

    Option 2) Multiple installations of Magento on one server, one for each shop. Here I have 10 Magento installations on one server, all running happily away without costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find a whole lot of other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server, as it will have more going on, but if the server could take it, each Magento installation would be simpler and less likely to slow down, since none of them has to run 10 shops on its own.

    Option 3) Use lots of servers and lots of Magento installations. I just so do not want to do this.

    Option 4) Buy Magento Enterprise. I do not have the money to do this.

    So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading and thanks in advance for any help - Chris Hopkins

    Read the article

  • How to refactor this database to allow sync. with smart clients?

    - by Anton Zvgny
    Howdy, I have inherited the following database: http://i46.tinypic.com/einvjr.png (EDIT: I cannot post images, so here goes the link)

    This currently runs 100% online through an ASP.NET web app, but some employees will need to have offline access to this database to either get documents or to insert new documents for later synchronization when back at the office. What changes need to be made to this database schema to support such a scenario (syncing smart clients with a central web database)? Thanks!

    ps. I'm not a developer/DBA, I'm just a mechanical engineer tasked with changing this app, so take it easy on me :D
    ps1. We're using MS SQL Server for the online central database and MS SQL Compact for the smart client.
    ps2. The ImageGuid field of the images table is used to save the images to the file system. The web app takes the GUID and creates folders according to its first 3 letters to persist the image to the file system.
    ps3. The users of the smart client do not always get server data first and then modify it to sync the changes later. Most of the time they start working completely offline (with a blank local database) and later sync the data to the server.
    ps4. All the users of the smart client app have an account on the web app.

    Read the article

  • How to distinguish between two different UDP clients on the same IP address?

    - by Ricket
    I'm writing a UDP server, which is a first for me; I've only done a bit of TCP communications. And I'm having trouble figuring out exactly how to distinguish which user is which, since UDP deals only with packets rather than connections and I therefore cannot tell exactly who I'm communicating with. Here is pseudocode of my current server loop:

        DatagramPacket p;
        socket.receive(p);
        // now p contains the user's IP and port, and the data
        int key = getKey(p);
        if (key == 0) { // connection request
            key = makeKey(p);
            clients.add(key, p.ip);
            send(p.ip, p.port, key); // give the user his key
        } else { // user has a key
            // verify key belongs to that IP address
            // lookup the user's session data based on the key
            // react to the packet in the context of the session
        }

    When designing this, I kept in mind these points:

    - Multiple users may exist on the same IP address, due to the presence of routers, therefore users must have a separate identification key.
    - Packets can be spoofed, so the key should be checked against its original IP address and ignored if a different IP tries to use the key.
    - The outbound port on the client side might change among packets.

    Is that third assumption correct, or can I simply assume that one user = one IP+port combination? Is this commonly done, or should I continue to create a special key like I am currently doing? I'm not completely clear on how TCP negotiates a connection, so if you think I should model it off of TCP then please link me to a good tutorial or something on TCP's SYN/SYNACK/ACK mess. Also note, I do have a provision to resend a key if an IP sends a 0 and that IP already has a pending key; I omitted it to keep the snippet simple. I understand that UDP is not guaranteed to arrive, and I plan to add reliability to the main packet handling code later as well.
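    For reference on the third point above: in Java's own datagram API, every received packet carries the sender's source IP and source port, and a reply can be addressed back to exactly that pair. A minimal sketch (the port number and the echo behaviour are illustrative, not taken from the question):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.SocketAddress;

        public class UdpEchoSketch {
            public static void main(String[] args) throws Exception {
                try (DatagramSocket socket = new DatagramSocket(4445)) { // illustrative port
                    byte[] buf = new byte[1024];
                    while (true) {
                        DatagramPacket packet = new DatagramPacket(buf, buf.length);
                        socket.receive(packet);

                        // The sender is identified by source IP *and* source port; two
                        // clients behind the same NAT router typically show up with the
                        // same IP but different ports.
                        SocketAddress sender = packet.getSocketAddress();
                        System.out.println("datagram from " + sender);

                        // Reply to exactly that IP+port pair.
                        socket.send(new DatagramPacket(packet.getData(), packet.getLength(), sender));
                    }
                }
            }
        }

    Whether that IP+port pair stays stable for a whole session depends on the client keeping its socket open and on the NAT in between, which is exactly the uncertainty behind the third assumption.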

    Read the article

  • I'm confused: how do I control caching so my clients can see website edits?

    - by Jared Christensen
    I host about 10 websites for clients. Every so often a client will ask for an update to their website. It may be a simple image change, a new PDF or a simple text change. I make the change and then send them a link to the web page with the update. About an hour later I will get an email back from the client telling me they still see the old page. I will then explain to them how to empty their browser's cache. What I'm trying to figure out is whether there is a way I can tell their browser that I made an update to the website and that it should reload the page and update the cache. I thought about trying a meta tag, but I read that they are not very reliable. Also, I would still like the page to cache; I just want to be able to clear it when I make an update. Is this possible? I'm an advanced front end web developer (HTML, CSS, Javascript) and know some PHP. Cache is just one of those things I don't really understand that well.

    Read the article

  • Why are certain folders in my XP network share really, really slow?

    - by bikefixxer
    I have a workgroup set up with Windows XP. My file "server" is running XP Pro and the clients are running XP Home. I've turned simple file sharing off on the server because certain clients need access to certain folders and not to others, and I want to keep it that way. Therefore, I've used the granular sharing/security settings to give certain clients access to certain folders. I'm using the net use command in a batch file on the clients to add the share when they log on, so it's always available via a mapped drive or a shortcut. On some clients "My Documents" points to the mapped drive, but all of the local and application settings stay local.

    Everything works well except for accessing a certain folder on the network. It contains a lot of random batch files and self-executable programs I use for diagnostics and whatnot, and nearly every time I open the folder the computer hangs for 15-60 seconds. This happens on every machine, including the server (but not nearly as often as the clients). I've searched high and low and cannot figure it out, and it's driving me crazy. Here are all the things I've tried, to no avail:

    - Disabled firewall (XP) and anti-virus (ESET NOD32)
    - Deleted any desktop.ini file I can find in the share
    - Disabled "automatically search for network folders and printers"
    - Disabled "remember each folder's view settings"
    - Set HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer NoRecentDocsNetHood = 1
    - Tried with mapped drives and with UNC shortcuts
    - Ran CHKDSK
    - Removed the Read-Only attribute from all folders (well, tried to remove; it always came back on with a half check)
    - Added the server's static IP to the hosts file on the clients

    I've tried monitoring the server's performance to see if anything makes sense. Occasionally the issue coincides with a spike in pages/sec (memory), but not always. Other than that, everything else seems normal. The anti-virus would seem to be the most likely cause to me, considering the batch files and whatnot, but it still hangs when it is completely disabled. I'm at a loss and if anyone can help me with this I'd greatly appreciate it!

    Read the article

  • Can/should one disable namespace validation in Axis2 clients using ADB databinding?

    - by RJCantrell
    I have a document/literal Axis 1 service which I'm consuming with an Axis 2 client (ADB databinding, generated from WSDL2Java). It receives a valid XML response, but in parsing that XML, I get the error "Unexpected Subelement" for any type which doesn't have a namespace defined in the response. I can resolve the error by manually changing the (auto-generated by Axis2) type-validation line to only check the type name and not the namespace as well, but is there a non-manual way to skip this? This service is consumed properly by other Axis 1 clients.

    Here's the response, with bits in ALL CAPS having been anonymized. Some types have namespaces, some have empty-string namespaces, and some have none at all. Can I control this via the service's WSDL, or in Axis2's WSDL2Java handling of the WSDL? Is there a fundamental mismatch between Axis 1 and Axis 2? The response below looks correct, except perhaps for where that type (anonymized as TWO below) is nested within itself.

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Body>
            <SERVICENAMEResponse xmlns="http://service.PROJECT.COMPANY.com">
              <SERVICENAMEReturn xmlns="">
                <ONE></ONE>
                <TWO><TWO>
                  <THREE>2</THREE>
                  <FOUR>-10</FOUR>
                  <FIVE>6</FIVE>
                  <SIX>1</SIX>
                </TWO></TWO>
                <fileName></fileName>
              </SERVICENAMEReturn>
            </SERVICENAMEResponse>
          </soapenv:Body>
        </soapenv:Envelope>

    I did not have to modify the validation of the "SERVICENAMEResponse" object because it has the correct namespace specified, but I had to do so for all its subelements. The manual modification that I can make (one per type) to fix this is to change:

        if (reader.isStartElement() && new javax.xml.namespace.QName("http://dto.PROJECT.COMPANY.com","ONE").equals(reader.getName())){

    to:

        if (reader.isStartElement() && new javax.xml.namespace.QName("ONE").equals(reader.getName())){

    Read the article

  • Does the ulkJSON library have limitations when dealing with base64 in Delphi 7?

    - by Da Gopherboy
    I'm working on a project that is using Delphi 7 to consume RESTful services. We are creating and decoding JSON with the ulkJSON library. Up to this point I've been able to successfully build and send JSON containing a base64 string that exceeds 5,160KB. I can verify that the base64 is being received by the services and verify the integrity of the base64 once it's there. In addition to sending, I can also receive and successfully decode JSON with a smaller (~256KB or less) base64. However, I am experiencing some issues on the return trip when larger (~1,024KB+) base64 is involved for some reason. Specifically when attempting to use the following JSON format and function combination:

    JSON:

        { "message" : "/9j/4AAQSkZJRgABAQEAYABgAAD...." }

    Function:

        function checkResults(JSONFormattedString: String): String;
        var
          jsonObject : TlkJSONObject;
          iteration : Integer;
          i : Integer;
          x : Integer;
        begin
          jsonObject := TlkJSONobject.Create;

          // Validate that the JSONFormatted string is not empty.
          // If it is empty, inform the user/programmer, and exit from this routine.
          if JSONFormattedString = '' then
          begin
            result := 'Error: JSON returned is Null';
            jsonObject.Free;
            exit;
          end;

          // Now that we can validate that this string is not empty, we are going to
          // assume that the string is a JSONFormatted string and attempt to parse it.
          //
          // If the string is not a valid JSON object (such as an http status code)
          // throw an exception informing the user/programmer that an unexpected value
          // has been passed. And exit from this routine.
          try
            jsonObject := TlkJSON.ParseText(JSONFormattedString) as TlkJSONobject;
          except
            on e:Exception do
            begin
              result := 'Error: No JSON was received from web services';
              jsonObject.Free;
              exit;
            end;
          end;

          // Now that the object has been parsed, lets check the contents.
          try
            result := jsonObject.Field['message'].value;
            jsonObject.Free;
            exit;
          except
            on e:Exception do
            begin
              result := 'Error: No Message received from Web Services '+e.message;
              jsonObject.Free;
              exit;
            end;
          end;
        end;

    As mentioned above, when using this function I am able to get small (256KB and less) base64 strings out of the 'message' field of a JSON object. But for some reason, if the received JSON is larger than say 1,024KB, the following line seems to just stop in its tracks:

        jsonObject := TlkJSON.ParseText(JSONFormattedString) as TlkJSONobject;

    No errors, no results. Following the debugger, I can go into the library and see that the JSON string being passed is not considered to be JSON despite being in the format listed above. The only difference I can find between calls that work as expected and calls that do not work as expected appears to be the size of the base64 being transmitted. Am I missing something completely obvious and should be shot for my code implementation (very possible)? Have I missed some notation regarding the limitations of the ulkJSON library? Any input would be extremely helpful. Thanks in advance stack!

    Read the article

  • How to fix basicHttpBinding in WCF when using multiple proxy clients?

    - by Hemant
    [Question seems a little long but please have patience. It has sample source to explain the problem.]

    Consider the following code, which is essentially a WCF host:

        [ServiceContract (Namespace = "http://www.mightycalc.com")]
        interface ICalculator
        {
            [OperationContract]
            int Add (int aNum1, int aNum2);
        }

        [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)]
        class Calculator: ICalculator
        {
            public int Add (int aNum1, int aNum2)
            {
                Thread.Sleep (2000); //Simulate a lengthy operation
                return aNum1 + aNum2;
            }
        }

        class Program
        {
            static void Main (string[] args)
            {
                try
                {
                    using (var serviceHost = new ServiceHost (typeof (Calculator)))
                    {
                        var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None);
                        serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc");
                        serviceHost.Open ();

                        Console.WriteLine ("Service is running. ENJOY!!!");
                        Console.WriteLine ("Type 'stop' and hit enter to stop the service.");
                        Console.ReadLine ();

                        if (serviceHost.State == CommunicationState.Opened)
                            serviceHost.Close ();
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine (e);
                    Console.ReadLine ();
                }
            }
        }

    Also, the WCF client program is:

        class Program
        {
            static int COUNT = 0;
            static Timer timer = null;

            static void Main (string[] args)
            {
                var threads = new Thread[10];
                for (int i = 0; i < threads.Length; i++)
                {
                    threads[i] = new Thread (Calculate);
                    threads[i].Start (null);
                }

                timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000);
                Console.ReadLine ();
                timer.Dispose ();
            }

            static void Calculate (object state)
            {
                var c = new CalculatorClient ("BasicHttpBinding_ICalculator");
                c.Open ();
                while (true)
                {
                    try
                    {
                        var sum = c.Add (2, 3);
                        Interlocked.Increment (ref COUNT);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ());
                        break;
                    }
                }
                c.Close ();
            }
        }

    Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method on separate threads. Now if I run both applications and observe opened TCP connections using netstat, I find that:

    - If both client and server are running on the same machine, the number of TCP connections is equal to the number of proxy objects. It means all requests are being served in parallel. Which is good.
    - If I run the server on a separate machine, I observe that a maximum of 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel. It hurts the processing speed badly.
    - If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object even if they are running on different machines).

    I am very confused and unable to make the basicHttpBinding use more TCP connections. I know it is a long question, but please help!

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD based system that needs to access both Active Directory and a SQL database as persistence. Initially this wasn't a problem because our design was set up where we had a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.

    Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem with Entity Framework, but does not solve the problem with our Active Directory integration. Before, it was in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have just one data store using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?

    Read the article

  • What are the basics of dealing with user input events in Android?

    - by user279112
    Hello. I thought I had understood this question, but something is quite wrong here. When the user (me, so far) tries to press keys, nothing really happens, and I am having a lot of trouble understanding what it is that I've missed. Consider this before I present some code to help clarify my problem: I am using Android's Lunar Lander example to make my first "real" Android program. In that example, of course, there exist a class LunarView and a class nested therein, LunarThread. In my code the equivalents of these classes are Graphics and GraphicsThread, respectively. Also, I can make sprite animations in 2D just fine on Android.

    I have a Player class, and let's say GraphicsThread has a Player member referred to as "player". This class has four coordinates - x1, y1, x2, and y2 - and they define a rectangle in which the sprite is to be drawn. I've worked it out so that I can handle that perfectly. Whenever the doDraw(Canvas canvas) method is invoked, it'll just look at the values of those coordinates and draw the sprite accordingly.

    Now let's say - and this isn't really what I'm trying to do with the program - I'm trying to make the program where all it does is display the Player sprite at one location of the screen UNTIL the FIRST time the user presses the Dpad's left button. Then the location will be changed to another set position on the screen, and the sprite will be drawn at that position for the rest of the program invariably. Also note that the GraphicsThread member in Graphics is called "thread", and that the SurfaceHolder member in GraphicsThread is called "mSurfaceHolder".

    So consider this method in class Graphics:

        @Override
        public boolean onKeyDown(int keyCode, KeyEvent msg) {
            return thread.keyDownHandler(keyCode, msg);
        }

    Also please consider this method in class GraphicsThread:

        boolean keyDownHandler(int keyCode, KeyEvent msg) {
            synchronized (mSurfaceHolder) {
                if (keyCode == KeyEvent.KEYCODE_DPAD_LEFT) {
                    player.x1 = 100;
                    player.y1 = 100;
                    player.x2 = 120;
                    player.y2 = 150;
                }
            }
            return true;
        }

    Now then, assuming that player's coordinates start off as (200, 200, 220, 250), why won't he do anything different when I press Dpad: Left? Thanks!
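    One detail worth noting from the Lunar Lander sample this is modelled on: its custom SurfaceView marks itself focusable in its constructor, because key events are only delivered to the view that currently has focus; without focus, onKeyDown never fires for the view. A minimal sketch of that pattern using the question's Graphics class name (the constructor signature and the requestFocus() call are illustrative, not from the question):

        import android.content.Context;
        import android.util.AttributeSet;
        import android.view.SurfaceView;

        public class Graphics extends SurfaceView {

            public Graphics(Context context, AttributeSet attrs) {
                super(context, attrs);

                // Without focus, KEYCODE_DPAD_LEFT and other key events go to
                // whichever view currently has focus instead of this one.
                setFocusable(true);
                requestFocus();
            }
        }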

    Read the article

  • Dealing with ArrayLists in java...all members of the arraylist updating themselves?

    - by Charlie
    I have a function that shrinks the size of a "food bank" (represented by a rectangle in my GUI) once some of the food has been taken. I have the following function check for this:

        public boolean tryToPickUpFood(Ant a)
        {
            int xCoord = a.getLocation().x;
            int yCoord = a.getLocation().y;
            for (int i = 0; i < foodPiles.size(); i++)
            {
                if (foodPiles.get(i).containsPoint(xCoord, yCoord))
                {
                    foodPiles.get(i).decreaseFood();
                    return true;
                }
            }
            return false;
        }

    Where decreaseFood shrinks the rectangle:

        public void decreaseFood()
        {
            foodAmount -= 1;
            shrinkPile();
        }

        private void shrinkPile()
        {
            WIDTH -= 1;
            HEIGHT = WIDTH;
        }

    However, whenever one rectangle shrinks, ALL of the rectangles shrink. Why would this be?
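    One pattern that produces exactly this symptom, and that the all-caps WIDTH/HEIGHT names hint at, is declaring those fields static: a static field belongs to the class rather than to an individual pile, so every pile shares it. This is only a guess at the cause; a minimal self-contained sketch of the effect (the FoodPile class below is hypothetical, not the poster's code):

        import java.util.ArrayList;
        import java.util.List;

        public class StaticFieldDemo {

            static class FoodPile {
                // static: one shared value for the whole class, not one per pile
                static int WIDTH = 50;

                void shrinkPile() {
                    WIDTH -= 1; // every pile "sees" this change
                }

                int width() {
                    return WIDTH;
                }
            }

            public static void main(String[] args) {
                List<FoodPile> foodPiles = new ArrayList<FoodPile>();
                foodPiles.add(new FoodPile());
                foodPiles.add(new FoodPile());

                foodPiles.get(0).shrinkPile();

                // Prints "49 49": shrinking pile 0 also "shrank" pile 1.
                System.out.println(foodPiles.get(0).width() + " " + foodPiles.get(1).width());
            }
        }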

    Read the article

  • Dealing with Try/Catch Exceptions in Java bytecode? ("stack height inconsistent")

    - by Cogwirrel
    I am trying to do some error handling in Java bytecode. I first tried to implement some catch-like subroutines, where I would check for the error condition and jump to the appropriate subroutine, a little like:

        iconst_1
        iconst_0
        dup
        ifeq calldiverr
        goto enddivtest
        calldiverr:
        jsr divError
        enddivtest:
        idiv

        ...More instructions...

        divError:
        getstatic java/lang/System/out Ljava/io/PrintStream;
        ldc "Oh dear you divided by 0!"
        invokevirtual java/io/PrintStream/print(Ljava/lang/String;)V

    The problem with the above is that when I have multiple instructions that jump to this subroutine, I get an error message when running the bytecode, saying that the stack height is inconsistent. Perhaps using exceptions is the best way to get around this? From some googling I have found that you can create instances of Exception classes and initialise them with something like:

        new java/lang/Exception
        dup
        ldc "exception message!"
        invokespecial java/lang/Exception/<init>(Ljava/lang/String;)V

    I have also found that you can throw them with athrow, and this seems ok. What is confusing me however is exactly how exceptions are caught. There seems to be a magical "exception table" which glues the throwing and catching of exceptions together, but I do not know how to define one of these when writing bytecode from scratch (and assembling using Jasmin). Can somebody tell me the secret of creating an exception table? And possibly give me an example of exception handling that will assemble with Jasmin?

    Read the article
