Search Results

Search found 217 results on 9 pages for 'faults'.

Page 4 of 9

  • ADSIEdit freezes getting properties of a group with hundreds of thousands of members

    - by ixe013
    Doing performance testing on an AD-LDS instance (Server 2008 R2, 64-bit), we created a million users in a single OU. We also created a single group object and made those million users members of that group. When we try to list the million users, ADSIEdit times out with an error message saying it cannot display that many users. Fine. But if we open the properties of the group, ADSIEdit freezes, eating up all available memory and thrashing the CPU (nearly 60M page faults in under an hour). AD-LDS (running on another computer) is barely hitting the 1% CPU mark, servicing other LDAP requests as if nothing were happening. We can throw more memory at the problem, but more users will have to be managed one day and we will be back at square one. Is there a way to set a limit in ADSIEdit so that it will not hang the computer when retrieving a very large multi-valued attribute? (A ranged-retrieval sketch follows this entry.)

    Read the article
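
    When a multi-valued attribute such as member holds that many values, LDAP clients can fetch it in slices using attribute range retrieval instead of pulling the whole list at once. Below is a minimal C# sketch of that technique with System.DirectoryServices; the host name, port and group DN are placeholders, the 1500 slice size is only the typical server-side limit, and this illustrates the approach rather than offering a fix for ADSIEdit itself.

      using System;
      using System.DirectoryServices;

      class RangedMemberRetrieval
      {
          static void Main()
          {
              // Placeholder connection details for the AD-LDS instance.
              var group = new DirectoryEntry("LDAP://adlds-server:389/CN=BigGroup,OU=Load,DC=test");

              const int pageSize = 1500;   // typical server-side range limit
              int start = 0;
              bool more = true;

              while (more)
              {
                  // Ask for one slice of the member attribute, e.g. member;range=0-1499
                  string range = string.Format("member;range={0}-{1}", start, start + pageSize - 1);
                  var searcher = new DirectorySearcher(group, "(objectClass=*)", new[] { range })
                  {
                      SearchScope = SearchScope.Base
                  };

                  SearchResult result = searcher.FindOne();
                  more = false;

                  foreach (string attr in result.Properties.PropertyNames)
                  {
                      if (!attr.StartsWith("member;range=", StringComparison.OrdinalIgnoreCase))
                          continue;

                      foreach (object member in result.Properties[attr])
                          Console.WriteLine(member);

                      // The server marks the final slice with an attribute name ending in "-*".
                      more = !attr.EndsWith("-*");
                  }
                  start += pageSize;
              }
          }
      }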

  • Improving terminal server performance for a specific app

    - by Matt
    We have a Windows 2003 terminal server running 2X application load balancing that is hosting a client's application accessed by around 50 users. Each user has their own database. The database is file-based. The application is developed in Delphi, so I think the database may be BDE-based. As you can imagine, there is probably quite a lot of disk I/O. Here are some of the perfmon figures: logged-in users (average): 20-25; CPU utilization (average): 80-100%; disk queue length (average): 1.6; % disk time (average): 111; page faults/sec (average): 1400. The application takes on average about a minute to load up. As usual, the budget is tight. Are there basic Windows performance tuning tips people can recommend to improve things before we fork out for more RAM, etc.? The server is a 2.8 GHz Xeon with 3 GB of RAM.

    Read the article

  • Online backup solution

    - by Petah
    I am looking for a backup solution to back up all my data (about 3-4 TB). I have looked at many services out there, such as: http://www.backblaze.com/ http://www.crashplan.com/ Those services look very good and are reasonably priced. But I am worried about them because of incidents like this: http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/ I am wondering if there is any online backup solution that offers a service level agreement (SLA) with compensation for data loss at a reasonable price (under $30 per month). Or is there a good solution that offers a high enough level of redundancy to mitigate the risk? Required: offsite backup to prevent data loss in the event of fire/theft; redundancy to protect the backup from corruption; a reasonable cost (< $30 per month); an SLA in case the service provider faults on its agreements.

    Read the article

  • Awstats messaging a non-existent user, causing exim4 to go nuts

    - by Chris
    I've taken over managing a server set up by someone else who is now uncontactable. While I've managed to work out most faults and changes needed, this one is stumping me. Awstats is running on the machine and sending messages via exim4 to a user every time it runs an update. The user account has been deleted, so the exim4 main log files are filling up with message delivery errors, which firstly hinders meaningful log analysis for anything else and secondly uses up quite a lot of space (it grew to 22 GB unattended, panic!). I've been through all the conf files in /etc/awstats and can't seem to find any mention of this user account. Google just turns up results about how to use awstats to parse exim4 log files. So the question is: where is this setting (on Debian) likely to be? Cheers in advance.

    Read the article

  • Windows 7 System process reading/writing like crazy

    - by Mats Ekberg
    I have a problem where my Windows 7 computer sometimes starts accessing the disk like crazy for maybe 10 minutes at a time. The process in question is the "System" process. I have disabled SuperFetch and hibernation on my computer, if that makes any difference; I disabled them to see if they were the cause of the problem, but there was no change. I have 6 GB of RAM and only the web browser was running when I took the screenshot, so I don't think it was thrashing due to page faults. Any ideas on how to find the cause of this?

    Read the article

  • best practice with memcache/php - multi memcache nodes

    - by user62835
    So I am working on a web app that has to be built for scalability. It stores the results of frequent MySQL queries in the cache. I have pretty much everything built and ready to go, but I am concerned about best practices for where to cache the data. I've talked to a few people, and one of them suggested splitting each key/value across all the memcache nodes, meaning that if I store, for example, 'somekey','this is the value', it will be split across, let's say, 3 memcache servers. Is that a better way, or is memcache built more on a one-to-one relationship? For example: store the value on server A until it faults out, then go to server B and store it there. That is my current understanding from the research I have done and past experience working with memcache. Could someone please point me in the right direction and let me know which way is best, or whether I have this completely mixed up? Thanks. (A sketch of how keys map to nodes follows this entry.)

    Read the article
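
    For what it's worth on the distribution question above: standard memcached clients do not split a single key/value across nodes. Each key is hashed to exactly one server, and only the keyspace as a whole is spread over the pool. A small sketch of that client-side mapping follows (in C#, to match the other code on this page); the node list is illustrative and the plain FNV-1a hash is a stand-in, since real clients typically use consistent hashing (ketama) so that adding or removing a node remaps as few keys as possible.

      using System;
      using System.Collections.Generic;

      class MemcacheNodeSelection
      {
          // Illustrative pool; real deployments read this from configuration.
          static readonly List<string> Nodes = new List<string>
          {
              "10.0.0.1:11211",
              "10.0.0.2:11211",
              "10.0.0.3:11211"
          };

          // Each key maps to exactly one node. A simple modulo scheme is shown here;
          // production clients usually use consistent hashing instead.
          static string NodeForKey(string key)
          {
              uint hash = 2166136261;          // FNV-1a offset basis
              foreach (char c in key)
              {
                  hash ^= c;
                  hash *= 16777619;            // FNV-1a prime
              }
              return Nodes[(int)(hash % (uint)Nodes.Count)];
          }

          static void Main()
          {
              Console.WriteLine(NodeForKey("somekey"));          // whole value lives on this node
              Console.WriteLine(NodeForKey("user:42:profile"));  // a different key may land elsewhere
          }
      }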

  • How to avoid Memory "Hard Fault/sec"

    - by Flavio Oliveira
    I have a problem on my Windows 2008 Server x64 and I cannot understand how to solve it. Looking at Resource Monitor I see about 100 to 200 hard faults/sec, and generally the machine is slow. From what I have read, a hard fault is caused by a memory page that is no longer available in physical memory and has to be fetched with a disk I/O operation, which is the problem. The current hardware is an Intel Core 2 Duo E8400 (3.0 GHz) with 6 GB RAM on Windows Server 2008 Web edition, 64-bit. The machine currently has about 2 GB of RAM in use, leaving 4 GB available, so why does it require that many disk operations? What can I do to increase performance? Am I experiencing a memory issue? What should be my starting point?

    Read the article

  • Monitoring remote laptops

    - by kaerast
    We're looking for something to monitor around 30 remote laptops that are constantly out on the road, never returning to base except for when there are serious hardware faults that need repairing. These laptops won't always be connected to the internet, they'll have mobile broadband and may work offline most of the time. They will be running a mixture of Windows XP, Vista and 7 and there is currently no server setup. We're primarily interested in making sure that Windows Updates and antivirus updates are happening, and I guess we should also be monitoring remaining disk space, what software is installed and ideally hardware health. It might also be nice if we could gain remote access to perform work on them. My main reason for wanting to monitor them is that it's going to be a real pain to get them back to base if anything goes wrong, so I want to be proactive in ensuring they last as long as possible. Can you recommend what I should be monitoring to ensure a long life? What tools would you use to monitor and maintain these computers?

    Read the article

  • Reasonable Location to Install Web Service on Server

    - by Mr. Disappointment
    Firstly, I'm a software developer and not qualified as any kind of system or server expert, so I'm looking for advice to help me prevent faults on our server. I've written a modular system to carry out certain tasks for us autonomously, to prevent us from writing the same old code over and over again. This consists of a Windows Service (.NET), a Web Service (WCF), a shared class library, and a database, which will run on Windows Server 2003. The problem, for me, comes in deployment, specifically the web service. Naturally the local service (and required shared library) are persisted (by default and by convention) in the Program Files folder, but storing the web service there just seems absurd to me (even though we'd lock it down to appropriate use only). Should the files be stored someplace else altogether? Or should I split them up and store only the web service elsewhere?

    Read the article

  • How do you debug why Windows is slow?

    - by aaron
    I've got Vista Business, and when my machine chugs I think it is because of paging, but I never know how to verify this. Procexp doesn't seem to provide useful information, because it appears that nothing is going on when the chugs happen. Perfmon seems like it has the counters I need, but I'm never sure which counters I should add to cover the information I want. For perfmon I prefer numbers that are percentages, so I can gauge load. Here are the counters I have up, but they don't always seem to correlate with the chugs: % Disk Time (logical), Page Faults/sec (an indicator of heavy paging activity), and Processor\% Privileged Time. (A sketch of sampling these counters programmatically follows this entry.)

    Read the article
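
    As a follow-up to the counter list above, the same values can be sampled programmatically and logged once a second, which makes it easier to line spikes up with the chugs after the fact. A minimal C# sketch using System.Diagnostics.PerformanceCounter follows; the category and counter names are the standard English ones, and the "_Total" instances are an assumption for a single-disk, multi-core machine.

      using System;
      using System.Diagnostics;
      using System.Threading;

      class ChugLogger
      {
          static void Main()
          {
              // Standard counters; "_Total" aggregates all logical disks / processors.
              var diskTime   = new PerformanceCounter("LogicalDisk", "% Disk Time", "_Total");
              var pageFaults = new PerformanceCounter("Memory", "Page Faults/sec");
              var hardFaults = new PerformanceCounter("Memory", "Pages/sec");   // faults that hit the disk
              var privileged = new PerformanceCounter("Processor", "% Privileged Time", "_Total");

              // The first NextValue() of a rate counter returns 0, so prime them once.
              diskTime.NextValue(); pageFaults.NextValue(); hardFaults.NextValue(); privileged.NextValue();

              while (true)
              {
                  Thread.Sleep(1000);
                  Console.WriteLine("{0:T}  disk {1,6:F1}%  faults/s {2,8:F0}  pages/s {3,6:F0}  priv {4,5:F1}%",
                      DateTime.Now, diskTime.NextValue(), pageFaults.NextValue(),
                      hardFaults.NextValue(), privileged.NextValue());
              }
          }
      }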

  • Virtual Windows Servers and Pagefile location [closed]

    - by Luke Puplett
    Considering that Windows makes heavy use of the pagefile even with huge amounts of RAM available, is it not best to have this pagefile on the fastest disk possible, as close to the virtual systems as possible? I'm thinking RAM disk. Where I work, storage for VMs is out on a NAS/SAN. I'm worried that so much memory access is having to go across the network! As an aside, I think it's about time MS got rid of paging and told us to buy more DIMMs. UPDATE: So this question has been downvoted?! Accessing a local spindle is c. 40,000 times slower than a DIMM, so going over the network will be even slower for hard faults. I don't know why I got the downvote; I'm certain that this is an issue unless there's some other mechanism in ESX/Hyper-V that manages it.

    Read the article

  • Is a memory upgrade a viable option to fix performance issues? [closed]

    - by ratchet freak
    I'm currently seeing my PC getting bogged down by Firefox 11.0 alone with only one hundred tabs open, resulting in memory use of over 530 MB, a VM size of over 800 MB, and an insane number of page faults (easily reaching 100 million over the course of the day). The PF delta during normal operation easily reaches 7k, with peaks to 15k and sometimes over 20k. This leads to a (real) deterioration in response time when switching, opening and closing tabs, opening menus, typing, ... My question is: am I right in assuming that plugging in more RAM (either adding 2x1 GB or replacing the existing RAM with 2x2 GB or 4x1 GB) will solve this problem? My specs: Windows XP Home Edition SP3 (32-bit), Intel Core Duo 2.4 GHz, 2x512 MB 800 MHz DDR2 RAM (dual channel), 4 MB unified cache, 320 GB HDD, Intel G33 (X3100) onboard graphics (no graphics card, but a PCI Express x16 slot is available).

    Read the article

  • links for 2010-06-03

    - by Bob Rhubart
    @rluttikhuizen: Fault handling in Oracle SOA Suite 11g "When it comes to technical faults," says Oracle ACE Ronald van Luttikhuizen, "you probably do not want to design error handling in the process itself." (tags: soa oracleace oracle otn) Adrian Campbell: Enterprise Architecture and Zombies EA blogger Adrian Campbell invokes Harry Potter, the Lord of the Rings, Black Adder, and "Pride and Prejudice and Zombies" in this interpretation of Gartner's 10 EA pitfalls. (tags: entarch zombies gartner) Nathalie Roman: Oracle Forms -- alive and kicking Oracle ACE Director Nathalie Roman offers details on a recent Oracle Forms Modernization seminar. (tags: oracle otn oracleace fusionmiddleware soa) Trond-Arne Undheim: Is Openness at the heart of the EU Digital Agenda? Trond-Arne Undheim shares some insight into the upcoming OpenForum Europe Summit 2010, to be held in Brussels. (tags: oracle otn entarch architect) Chris Raby: Oracle Financial Analytics Presentations and Photos Chris Raby shares details on Rittman Mead's series of seminars that combine the company's in-depth technical knowledge with a greater focus on the business perspective. (tags: entarch bi architect oracle otn) June Oracle Technology Network NEW Member Benefits - books books and more books!!! Details on how OTN members can get discounts on books from APress, CRC, Pearson, and Packt Publishing. (tags: oracle otn community books discounts) Manoj Neelapu: Oracle Service Bus + SOA in same server Manoj Neelapu's tutorial covers how to create a domain in which SOA and Oracle Service Bus run in a single JVM. (tags: oracle otn soa architect)

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint. Developer Webinar: How to Test and Deploy Applications Faster - April 10 Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications. How ZFS and Zones create the perfect developer sandbox. What's the best way for a developer to use DTrace. How Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show." White Paper: What's New For Application Developers Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more. - Rick

    Read the article

  • apt-get update very slow, stuck at "Waiting for headers"

    - by Liam
    I have looked at similar questions: "Stuck at 0% [waiting for headers] (apt)" and "apt-get update stuck on 'Waiting for Headers'". However, neither one of them answers my problem. I am running 12.04 AMD64 and have recently started getting an issue where, when I update my repos from my connection at home through a terminal using sudo apt-get update, it takes forever (literally, after 2 hours it was at 28%); however, when I run it from a different location it takes less than 5 minutes to complete. I have attempted changing which mirror I use, but that does not solve the issue. I have also cut down what is in my sources list, but this also makes no difference. There are no faults on my ADSL line, as I have already contacted my ISP to check this. It also makes no difference whether I use a WiFi or network cable connection. What could be my issue? A speed test (www.speedtest.net) comes out at about 0.9 Mbps down and 0.42 Mbps up (which is a shade under the advertised line speed). I reside in South Africa and use the UCT LEG server, but I have also tried the other mirrors available in SA; none of them makes a difference.

    Read the article

  • Failed to download repository information (Maverick)

    - by Rhiannon
    I have been through most of the duplicates for this question, and still can't find an answer. I may have missed one, but hopefully this isn't a duplicate! Having a problem with updates. I get the "failed to download..." message followed by "Check your internet connection", which is clearly working fine as I am on it now. I click Details and get the following: W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , E:Some index files failed to download. They have been ignored, or old ones used instead. All the faults have "maverick" somewhere in them, so I have gone to Settings and unticked all the Maverick entries I can find, but this problem is still happening. Any ideas? Many thanks.

    Read the article

  • Do you store MySQL exports in your version control tool for reverting to in the event of error?

    - by Rob
    We run an internal web server with in-house software to run a manufacturing line. When new product features are to be added, either or both of the following occur: changes to the in-house server software may be required to support them - these are for significant changes in functionality, being code-driven; changes to the MySQL database for new entries for the part numbers - these are for smaller changes, configurations, and changes to already existing values and parameters, and such changes don't require code changes (ideally we'd want our changes to be here rather than in item 1). Item 1 is version controlled in Subversion, so previous revisions can be referred to for rolling back in the event of problems introduced in the latest revision. But what about changes to the MySQL database? We have quality processes to ensure that such changes are error-free, but there is always a chance that errors can pass through, e.g. a mistake in data entry, or faults in the code that uses MySQL corrupting the database, etc. We have an automated backup every 6 hours, but what if we want more manually defined checkpoints in between those intervals? We could use the same backup system, but I wondered if folks here used other methods to store previous states of databases, e.g. exporting the database as a plain-text SQL dump - at least with this method it would be possible to see diffs, e.g. in Beyond Compare, for troubleshooting (a sketch of scripting such a dump follows this entry). Thoughts?

    Read the article
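
    On the plain-text dump idea mentioned above: mysqldump can produce a diff-friendly snapshot that can be committed to Subversion between the six-hourly backups. Below is a hedged sketch (in C#, to match the other code on this page) that simply shells out to mysqldump; the credentials, database name and output path are placeholders, and --skip-extended-insert keeps one row per INSERT line so tools like Beyond Compare can show row-level diffs.

      using System;
      using System.Diagnostics;
      using System.IO;

      class DbCheckpoint
      {
          static void Main()
          {
              // Placeholder output location and file name pattern.
              string dumpFile = Path.Combine(@"C:\db-checkpoints",
                  "factoryline-" + DateTime.Now.ToString("yyyyMMdd-HHmm") + ".sql");

              var psi = new ProcessStartInfo
              {
                  FileName = "mysqldump",
                  // --skip-extended-insert: one INSERT per row, so text diffs stay readable.
                  Arguments = "--user=backup --password=secret --skip-extended-insert factoryline",
                  RedirectStandardOutput = true,
                  UseShellExecute = false,
                  CreateNoWindow = true
              };

              using (Process p = Process.Start(psi))
              using (FileStream outFile = File.Create(dumpFile))
              {
                  p.StandardOutput.BaseStream.CopyTo(outFile);   // stream the dump straight to disk
                  p.WaitForExit();
              }

              Console.WriteLine("Checkpoint written to " + dumpFile);
              // Commit the file to Subversion from a scheduled task, or diff two dumps directly.
          }
      }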

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema Another detailed technical post from the always prolific Lucas Jellema. Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event. Podcast: Interview with authors of "Hudson Continuous Integration in Practice" For your listening pleasure... Here's an Oracle Author Podcast Interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash. Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts." Event: Harnessing Oracle Weblogic and Oracle Coherence This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in Weblogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT Podcast: IoT Challenges and Opportunities - Part 2 Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen! Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level. Thought for the Day "Can't nothing make your life work if you ain't the architect." — Terry McMillan, American author (Born October 18, 1951) Source: brainyquote.com

    Read the article

  • Displaying Datamatrix in application error screen

    - by DaveNay
    Quite often we will get a report from a user in the field saying there was an error in our application. Frequently this leads to the typical round of "What was the error?" "I don't know, it was just an error." We of course log these faults to the log files, and we can even enable detailed debug logging, but this involves the end user changing a setting in the configuration file, finding the correct files, and then emailing them to us. As I'm sure you can all imagine, there are plenty of pitfalls and alligators in this methodology. Recently a couple of people have used their cell phones to email me a "screen capture" of the fault, and while this helps, we still have to scrutinize the image to find the exact fault and, if enabled, the stack trace. So this evening I had the brilliant idea (IMHO) to encode the fault into a Data Matrix barcode image and then encourage users to send me a picture from their cell phone. I can then decode the Data Matrix and get a parseable error message! Our core technology is machine vision, so decoding the Data Matrix image would be trivial; I just need to find a method of generating the actual image to display in the fault handler (a sketch follows this entry). Thoughts?

    Read the article
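
    On the generation side referenced above, one option (an assumption on my part, not something from the original post) is the open-source ZXing.Net library, which can encode Data Matrix symbols to a System.Drawing bitmap. A short sketch follows; the payload format and image size are illustrative, and an in-house machine-vision stack may already ship its own encoder.

      using System.Drawing;
      using ZXing;                 // ZXing.Net package (assumed available)
      using ZXing.Common;

      class FaultBarcode
      {
          // Renders the fault text as a Data Matrix bitmap for the error dialog.
          public static Bitmap Encode(string faultText)
          {
              var writer = new BarcodeWriter
              {
                  Format = BarcodeFormat.DATA_MATRIX,
                  Options = new EncodingOptions { Width = 240, Height = 240, Margin = 4 }
              };
              return writer.Write(faultText);
          }

          static void Main()
          {
              // Illustrative payload: keep it short enough to photograph reliably.
              string payload = "v2.3.1|2013-10-18T14:02:11|NullReferenceException|MainForm.LoadRecipe:412";
              using (Bitmap barcode = Encode(payload))
              {
                  barcode.Save(@"C:\temp\fault.png");   // or assign to a PictureBox in the fault dialog
              }
          }
      }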

  • error while trying to resize the partition

    - by speedox
    I'm running out of space and I tried to resize the partition using GParted, but I got an error: Checking for bad sectors ... Bad cluster: 0x2904636 - 0x2904636 (1) Bad cluster: 0x290526d - 0x290526e (2) Bad cluster: 0x29052fd - 0x2905300 (4) Bad cluster: 0x2905392 - 0x2905392 (1) Bad cluster: 0x2905425 - 0x2905428 (4) Bad cluster: 0x290555d - 0x2905560 (4) Bad cluster: 0x29055f1 - 0x29055f8 (8) Bad cluster: 0x2905681 - 0x2905688 (8) Bad cluster: 0x29057ac - 0x29057ac (1) Bad cluster: 0x29887dd - 0x29887dd (1) Bad cluster: 0x299a086 - 0x299a086 (1) Bad cluster: 0x348ec05 - 0x348ec05 (1) Bad cluster: 0x353dabb - 0x353dabb (1) Bad cluster: 0x353dba4 - 0x353dba4 (1) Bad cluster: 0x354a162 - 0x354a162 (1) Bad cluster: 0x354a1ce - 0x354a1ce (1) ERROR: This software has detected that the disk has at least 40 bad sectors. **************************************************************************** * WARNING: The disk has bad sector. This means physical damage on the disk * * surface caused by deterioration, manufacturing faults or other reason. * * The reliability of the disk may stay stable or degrade fast. We suggest * * making a full backup urgently by running 'ntfsclone --rescue ...' then * * run 'chkdsk /f /r' on Windows and rebooot it TWICE! Then you can resize * * NTFS safely by additionally using the --bad-sectors option of ntfsresize.* **************************************************************************** I opened Disk Utility, clicked the "SMART Data" button, and got this image:

    Read the article

  • http request terminating early

    - by spiderplant0
    I noticed that on some of my sites, images were occasionally not getting downloaded fully. After a bit of investigation it appears that it is not restricted to images - .css, .js etc. were also occasionally terminating early. The faults appear to be random. When I use the debugging proxy Fiddler2, it reports that fewer bytes have been received than were requested. Firebug reports "Image corrupt or truncated". Obviously this is mainly a concern between me and my hosting company, but despite many emails they have not been able to get to the bottom of it. Transferring to another hosting company is obviously an option, but is really a last resort. Has anyone seen this kind of thing before, or can anyone suggest what might be causing it, or any Apache setting or something that I can ask them to check out? Will Apache log this kind of error? They haven't been able to provide me with any logs, but if I know exactly where such things get logged, maybe I can prompt them into action.

    Read the article

  • How to set MinWorkingSet and MaxWorkingSet in a 64-bit .NET process?

    - by Gravitas
    How do I set MinWorkingSet and MaxWorkingSet for a 64-bit .NET process? P.S. I can set MinWorkingSet and MaxWorkingSet for a 32-bit process as follows: [DllImport("KERNEL32.DLL", EntryPoint = "SetProcessWorkingSetSize", SetLastError = true, CallingConvention = CallingConvention.StdCall)] internal static extern bool SetProcessWorkingSetSize(IntPtr pProcess, int dwMinimumWorkingSetSize, int dwMaximumWorkingSetSize); [DllImport("KERNEL32.DLL", EntryPoint = "GetCurrentProcess", SetLastError = true, CallingConvention = CallingConvention.StdCall)] internal static extern IntPtr MyGetCurrentProcess(); // In main(): SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, int.MaxValue, int.MaxValue); Update: Unfortunately, even if we make this call, garbage collection trims the working set down anyway, bypassing MinWorkingSet (see "Automatic GC.Collect()" in the diagram below). Question: Is there a way to lock the working set (the green line) to 1 GB, to avoid the spikes in page faults (the red lines) that occur when allocating new memory in the process? (A 64-bit-safe sketch follows this entry.) P.S. Every time a page fault occurs, it blocks the thread for 250 µs, which hurts application performance badly.

    Read the article
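
    A sketch for the question above: on x64 the two size parameters of SetProcessWorkingSetSize are SIZE_T, i.e. pointer-sized, so the int declarations in the 32-bit version truncate; declaring them as IntPtr works for both bitnesses. The SetProcessWorkingSetSizeEx variant additionally takes a flag asking the OS to treat the minimum as a hard limit, which is the closest documented knob to "lock the working set"; whether the GC still trims it is a separate question, the hard minimum may require extra privileges, and the 1 GB figure below is simply the value from the post.

      using System;
      using System.Diagnostics;
      using System.Runtime.InteropServices;

      static class WorkingSet
      {
          // SIZE_T parameters are pointer-sized, so marshal them as IntPtr (valid on x86 and x64).
          [DllImport("kernel32.dll", SetLastError = true)]
          static extern bool SetProcessWorkingSetSizeEx(
              IntPtr hProcess,
              IntPtr dwMinimumWorkingSetSize,
              IntPtr dwMaximumWorkingSetSize,
              uint flags);

          const uint QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x1;   // treat the minimum as a hard limit
          const uint QUOTA_LIMITS_HARDWS_MAX_DISABLE = 0x8;  // leave the maximum as a soft target

          static void Main()
          {
              long oneGiB = 1L << 30;
              bool ok = SetProcessWorkingSetSizeEx(
                  Process.GetCurrentProcess().Handle,
                  new IntPtr(oneGiB),       // MinWorkingSet = 1 GB
                  new IntPtr(2 * oneGiB),   // MaxWorkingSet = 2 GB
                  QUOTA_LIMITS_HARDWS_MIN_ENABLE | QUOTA_LIMITS_HARDWS_MAX_DISABLE);

              if (!ok)
                  Console.WriteLine("SetProcessWorkingSetSizeEx failed, error " + Marshal.GetLastWin32Error());
          }
      }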

  • The remote server returned an unexpected response: (400) Bad Request while streaming

    - by phenevo
    Hi, I have problem with streaming. When I send small file like 1kb txt everything is ok, but when I send larger file like 100 kb jpg or 2gb psd I get: The remote server returned an unexpected response: (400) Bad Request. I'm using windows 7, VS 2010 and .net 3.5 and WCF Service library I lost all my weekend on this and I still see this error :/ Please help me Client: var client = new WpfApplication1.ServiceReference1.Service1Client("WSHttpBinding_IService1"); client.GetString("test"); string filename = @"d:\test.jpg"; FileStream fs = new FileStream(filename, FileMode.Open); try { client.ProcessStreamFromClient(fs); } catch (Exception exception) { Console.WriteLine(exception); } app.config: <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="StreamedHttp" closeTimeout="10:01:00" openTimeout="10:01:00" receiveTimeout="10:10:00" sendTimeout="10:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536000" maxBufferPoolSize="524288000" maxReceivedMessageSize="65536000" messageEncoding="Text" textEncoding="utf-8" transferMode="Streamed" useDefaultWebProxy="true"> <readerQuotas maxDepth="0" maxStringContentLength="0" maxArrayLength="0" maxBytesPerRead="0" maxNameTableCharCount="0" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary2/Service1/" binding="basicHttpBinding" bindingConfiguration="StreamedHttp" contract="ServiceReference1.IService1" name="WSHttpBinding_IService1" /> </client> </system.serviceModel> </configuration> And Wcf ServiceLibrary: public void ProcessStreamFromClient(Stream str) { using (var outStream = new FileStream(@"e:\test.jpg", FileMode.Create)) { var buffer = new byte[4096]; int count; while ((count = str.Read(buffer, 0, buffer.Length)) > 0) { outStream.Write(buffer, 0, count); } } } App.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="Binding1" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536000" transferMode="Streamed" bypassProxyOnLocal="false" closeTimeout="10:01:00" openTimeout="10:01:00" receiveTimeout="10:10:00" sendTimeout="10:01:00" maxBufferPoolSize="524288000" maxReceivedMessageSize="65536000" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <client /> <services> <service name="WcfServiceLibrary2.Service1"> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary2/Service1/" /> </baseAddresses> </host> <!-- Service Endpoints --> <!-- Unless fully qualified, address is relative to base address supplied above --> <endpoint address="" binding="basicHttpBinding" contract="WcfServiceLibrary2.IService1"> <!-- Upon deployment, the following identity element should be removed or replaced to reflect the identity under which the deployed service runs. 
If removed, WCF will infer an appropriate identity automatically. --> <identity> <dns value="localhost"/> </identity> </endpoint> <!-- Metadata Endpoints --> <!-- The Metadata Exchange endpoint is used by the service to describe itself to clients. --> <!-- This endpoint does not use a secure binding and should be secured or removed before deployment --> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/> </service> </services> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="True"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <dataContractSerializer maxItemsInObjectGraph="2147483647"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration>

    Read the article
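
    Two things stand out in the configuration quoted above, offered as things to check rather than a confirmed fix: the service-side endpoint does not reference bindingConfiguration="Binding1", so the default buffered basicHttpBinding with its 64 KB limit is probably what the service is actually using, and the streamed transfer mode and message size must match on both ends. Below is a sketch of the same client call with the binding built in code, using the contract shown in the post (only the streaming operation is included).

      using System;
      using System.IO;
      using System.ServiceModel;

      [ServiceContract]
      public interface IService1
      {
          [OperationContract]
          void ProcessStreamFromClient(Stream str);
      }

      class StreamingClient
      {
          static void Main()
          {
              // Mirrors the "StreamedHttp" binding from the post, built in code so the
              // endpoint cannot silently fall back to the default 64 KB buffered binding.
              var binding = new BasicHttpBinding
              {
                  TransferMode = TransferMode.Streamed,
                  MaxReceivedMessageSize = 65536000,
                  SendTimeout = TimeSpan.FromMinutes(10),
                  ReceiveTimeout = TimeSpan.FromMinutes(10)
              };

              var address = new EndpointAddress(
                  "http://localhost:8732/Design_Time_Addresses/WcfServiceLibrary2/Service1/");

              var factory = new ChannelFactory<IService1>(binding, address);
              IService1 client = factory.CreateChannel();

              using (FileStream fs = File.OpenRead(@"d:\test.jpg"))
              {
                  client.ProcessStreamFromClient(fs);   // the matching settings must also be applied server-side
              }
              ((IClientChannel)client).Close();
              factory.Close();
          }
      }

    The same settings can also be passed to the generated Service1Client through its (Binding, EndpointAddress) constructor; either way, the key point to verify is that the service's own endpoint really uses the streamed, large-message binding rather than the defaults.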

  • Trying to run WCF web service on non-domain VM, Security Errors

    - by NealWalters
    Am I in a Catch-22 situation here? My goal is to take a WCF service that I inherited, and run it on a VM and test it by calling it from my desktop PC. The VM is in a workgroup, and not in the company's domain. Basically, we need more test environments, ideally one per developer (we may have 2 to 4 people that need this). Thus the idea of the VM was that each developer could have his own web server that somewhat matches or real environment (where we actually have two websites, an external/exposed and internal). [Using VS2010 .NET 4.0] In the internal service, each method was decorated with this attribute: [OperationBehavior(Impersonation = ImpersonationOption.Required)] I'm still researching why this was needed. I think it's because a webapp calls the "internal" service, and either a) we need the credentials of the user, or b) we may doing some PrinciplePermission.Demands to see if the user is in a group. My interest is creating some ConsoleTest programs or UnitTest programs. I changed to allowed like this: [OperationBehavior(Impersonation = ImpersonationOption.Allowed)] because I was getting this error in trying to view the .svc in the browser: The contract operation 'EditAccountFamily' requires Windows identity for automatic impersonation. A Windows identity that represents the caller is not provided by binding ('WSHttpBinding','http://tempuri.org/') for contract ('IAdminService','http://tempuri.org/'. I don't get that error with the original bindings look like this: However, I believe I need to turn off this security since the web service is not on the domain. I tend to get these errors in the client: 1) The request for security token could not be satisfied because authentication failed - as an InnerException of "SecurityNegotiation was unhandled". or 2) The caller was not authenticated by the service as an InnerException of "SecurityNegotiation was unhandled". So can I create some configuration of code and web.config that will allow each developer to work on his own VM? Or must I join the VM to the domain? The number of permutations seems near endless. I've started to create a Word.doc that says what to do with each error, but now I'm in the catch-22 where I'm stuck. Thanks, Neal Server Bindings: <bindings> <wsHttpBinding> <binding name="wsHttpEndpointBinding" maxBufferPoolSize="2147483647" maxReceivedMessageSize="500000000"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <!-- <security mode="None" /> This is one thing I tried --> <security> <message clientCredentialType="Windows" /> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="ABC.AdminService.AdminServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> <serviceCredentials> </serviceCredentials> <!--<serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="AspNetWindowsTokenRoleProvider"/>--> <serviceAuthorization principalPermissionMode="UseWindowsGroups" impersonateCallerForAllOperations="true" /> </behavior> <behavior name="ABC.AdminService.IAdminServiceTransportBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false" /> <serviceCredentials> <clientCertificate> <authentication certificateValidationMode="PeerTrust" /> </clientCertificate> <serviceCertificate findValue="WCfServer" storeLocation="LocalMachine" storeName="My" x509FindType="FindBySubjectName" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> CLIENT: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IAdminService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://192.168.159.132/EC_AdminService/AdminService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IAdminService" contract="svcRef.IAdminService" name="WSHttpBinding_IAdminService"> <identity> <dns value="localhost" /> </identity> </endpoint> </client> </system.serviceModel>

    Read the article
