Search Results

Search found 6912 results on 277 pages for 'stay logged in'.

  • Ubuntu Server, 2 Ethernet Devices, Same Gateway - Want to force internet traffic through 1 device (or at least allow it to work!)

    - by Chris Drumgoole
    I have an Ubuntu 10.04 Server with 2 ethernet devices, eth0 and eth1. eth0 has a static IP of 192.168.1.210 and eth1 has a static IP of 192.168.1.211. The DHCP server (which also serves as the internet gateway) sits at 192.168.1.1. The issue I have right now is that when I have both plugged in, I can connect to both IPs over SSH internally, but I can't connect to the internet from the server. If I unplug one of the devices (e.g. eth1), then it works, no problem. (I also get the same result when I run sudo ifconfig eth1 down.) Question: how can I configure it so that both devices eth0 and eth1 play nice on the same network, but allow internet access as well? (I am open to either forcing all inet traffic through a single device, or through both; I'm flexible.) From my google searching, it seems I could have a unique (or not popular) problem, so I haven't been able to find a solution. Is this something that people generally don't do? The reason I want to make use of both ethernet devices is that I want to run different local traffic services on both to split the load, so to speak. Thanks in advance.
    UPDATE
    Contents of /etc/network/interfaces:
      # The loopback network interface
      auto lo
      iface lo inet loopback
      # The primary network interface
      auto eth0
      iface eth0 inet dhcp
      # The secondary network interface
      #auto eth1
      #iface eth1 inet dhcp
    (Note: above, I commented out the last 2 lines because I thought that was causing issues... but it didn't solve it)
    netstat -rn
      Kernel IP routing table
      Destination     Gateway        Genmask         Flags  MSS Window  irtt Iface
      192.168.1.0     192.168.1.1    255.255.255.0   UG       0 0          0 eth0
      192.168.1.0     0.0.0.0        255.255.255.0   U        0 0          0 eth0
      0.0.0.0         192.168.1.1    0.0.0.0         UG       0 0          0 eth0
    UPDATE 2
    I made a change to the /etc/network/interfaces file as suggested by Kevin. Before I display the file contents and the route table: when I am logged into the server (through SSH), I cannot ping an external server, so this is the same issue I was experiencing that led to me posting this question. I ran /etc/init.d/networking restart after making the file changes.
    Contents of /etc/network/interfaces:
      # The loopback network interface
      auto lo
      iface lo inet loopback
      # The primary network interface
      auto eth0
      iface eth0 inet dhcp
      address 192.168.1.210
      netmask 255.255.255.0
      gateway 192.168.1.1
      # The secondary network interface
      auto eth1
      iface eth1 inet dhcp
      address 192.168.1.211
      netmask 255.255.255.0
    ifconfig output
      eth0  Link encap:Ethernet  HWaddr 78:2b:cb:4c:02:7f
            inet addr:192.168.1.210  Bcast:192.168.1.255  Mask:255.255.255.0
            inet6 addr: fe80::7a2b:cbff:fe4c:27f/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:6397 errors:0 dropped:0 overruns:0 frame:0
            TX packets:683 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:538881 (538.8 KB)  TX bytes:85597 (85.5 KB)
            Interrupt:36 Memory:da000000-da012800
      eth1  Link encap:Ethernet  HWaddr 78:2b:cb:4c:02:80
            inet addr:192.168.1.211  Bcast:192.168.1.255  Mask:255.255.255.0
            inet6 addr: fe80::7a2b:cbff:fe4c:280/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:5799 errors:0 dropped:0 overruns:0 frame:0
            TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:484436 (484.4 KB)  TX bytes:1184 (1.1 KB)
            Interrupt:48 Memory:dc000000-dc012800
      lo    Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:635 errors:0 dropped:0 overruns:0 frame:0
            TX packets:635 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:38154 (38.1 KB)  TX bytes:38154 (38.1 KB)
    netstat -rn
      Kernel IP routing table
      Destination     Gateway        Genmask         Flags  MSS Window  irtt Iface
      192.168.1.0     0.0.0.0        255.255.255.0   U        0 0          0 eth0
      192.168.1.0     0.0.0.0        255.255.255.0   U        0 0          0 eth1
      0.0.0.0         192.168.1.1    0.0.0.0         UG       0 0          0 eth1
      0.0.0.0         192.168.1.1    0.0.0.0         UG       0 0          0 eth0

    Read the article

  • Windows Media Player won't launch on Vista - how to repair or reinstall it?

    - by rpm1200
    My friend asked me to look at her Acer Aspire laptop with Vista Home Premium, as it is no longer playing DVDs. I found that Windows Media Player would not launch. I found this thread, which contained a number of suggestions, none of which solved the problem. Here is what I tried:
    - Ran WMP via the desktop shortcut, the QuickLaunch bar, and Program Files\Windows Media Player\wmplayer.exe directly. In all cases, wmplayer would launch then terminate immediately (verified through the Processes tab in Task Manager).
    - Ran wmplayer.exe as Administrator. The UAC dialog would come up, I'd approve, then wmplayer would launch and terminate immediately.
    - Uninstalled all non-Microsoft media programs except RealPlayer, iTunes, QuickTime and Acer Arcade (the laptop owner uses all of those apps).
    - Ran Program Files\Windows Media Player\setup_wm.exe as Administrator; it launched but said that a newer version of WMP was already installed.
    - Deleted the "Windows Media" folder located under %userprofile%\appdata\local\Microsoft, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Registered wmp.dll by typing "regsvr32 wmp.dll" in an Administrator cmd window, then tried starting WMP - wmplayer would launch and terminate immediately.
    - Ran "SFC /SCANFILE" in an Administrator cmd window - got an error message that it found invalid system files and could not fix them, so I looked at the log file cbs.log. The log file shows that there are broken files associated with Windows Sidebar (which the user does not use) but none relating to WMP.
    - Logged off to safe mode and ran "SFC /SCANFILE" in an Administrator cmd window again - same results.
    - Tried to download and install XP WMP - the microsoft.com site recognizes the OS as Genuine and allows the download, but when I launch the installer it says the system is not Genuine. Clicking the link directs me back to IE, where I can authenticate the system as Genuine. The installer still fails to recognize the system as Genuine. It is a Genuine Vista installation.
    - Tried to run this update (KB931621). The installer said it did not apply to the system.
    - Set Windows Media Player as the default in Program Access and Defaults. Same results.
    - Ran "for %a in (%systemroot%\system32\wm*.dll) do regsvr32 /s %a" in an Administrator cmd window - same results.
    - Went to this Knowledge Base article (947541) and ran the Microsoft Fix It. The Fix It ran successfully, but WMP would still launch and terminate immediately.
    - Rebooted multiple times in the process of doing all of these steps.
    After all this, I looked in the Application and Security logs. No events pertaining to WMP were logged. The computer was preinstalled with Vista Home Premium and I have the Acer backup DVDs, which will reimage the drive. I do not have Vista install DVDs. Reimaging the system is not an option. I'd also rather not restore the system to an earlier point unless it's absolutely necessary. What else can I do to repair or reinstall WMP?

    Read the article

  • KVM with one host IP and a different subnet for machines

    - by Jguy
    I've already set up a KVM host with a proper IP configuration, but my host had me create DHCP and use that to assign the IPs to the machines. I want to see if there's an easier (or better) way to do it. When I first set out on this, I didn't find anything that pointed me in the right direction. I'm coming off a fresh install of Debian 6.0 x64, so I have nothing installed. I've logged in, queried for the information below and changed the password from my host-set one. I have a Debian 6.0 x64 system with the following initial network configuration (substituted 255 in place of my real first octet):
      # tail /etc/network/interfaces
      auto eth0
      iface eth0 inet static
      address 255.9.24.80
      broadcast 255.9.24.95
      netmask 255.255.255.224
      gateway 255.9.24.65
      # default route to access subnet
      up route add -net 255.9.24.64 netmask 255.255.255.224 gw 255.9.24.65 eth0
    I have a /29 subnet that I want the virtual machines to use from my host:
      IP: 255.46.187.152 /29
      Mask: 255.255.255.248
      Broadcast: 255.46.187.159
      Usable IP addresses: 255.46.187.153 to 255.46.187.158
    I like the interface of Cloudmin, so I want to try to use that, if I can, to administer my guests. So, my questions: How do I set this up on the host system so that I can use the additional subnet IPs on the guests and have them accessible from the internet? I also need to host a DNS server, which means one of these VMs has to have two IPs assigned to it and be accessible from the outside world. How can I do that using Cloudmin? I had a question about this here: Multiple IP addresses assigned to one KVM VM. But I just reformatted the entire server and am trying to figure out a better way of doing this. Machine information:
      # ip route show
      255.9.24.64/27 via 255.9.24.65 dev eth0
      255.9.24.64/27 dev eth0 proto kernel scope link src 255.9.24.80
      default via 255.9.24.65 dev eth0
    brctl is empty
      # ip addr list
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
         inet6 ::1/128 scope host
            valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
         link/ether c8:60:00:54:b5:d8 brd ff:ff:ff:ff:ff:ff
         inet 255.9.24.80/27 brd 255.9.24.95 scope global eth0
         inet6 fe80::ca60:ff:fe54:b5d8/64 scope link
            valid_lft forever preferred_lft forever
    Thank you for any help you can provide me.
    EDIT: I've installed KVM and Cloudmin:
      aptitude install qemu-kvm libvirt-bin
      wget http://cloudmin.virtualmin.com/gpl/scripts/cloudmin-kvm-debian-install.sh
      ./cloudmin-kvm-debian-install.sh
    I rebooted, and now my network configuration looks like this:
      # device: eth0
      iface eth0 inet manual
      # default route to access subnet
      iface br0 inet static
      address 255.9.24.80
      netmask 255.255.255.224
      broadcast 255.9.24.95
      network 255.9.24.64
      bridge_ports eth0
      gateway 255.9.24.65
    In Cloudmin I set the Start IP to 255.46.187.153 and the End IP to 255.46.187.158. The CIDR is 29 and the gateway is 255.46.187.152. I've installed a guest with ubuntuserver 12.04 x64, which was able to reach and retrieve internet resources during installation, but now it cannot reach anything, nor can it be reached from anything. Its network configuration is:
      iface eth0 inet static
      address 255.46.187.153
      netmask 255.255.255.224
      broadcast 255.46.187.159
      gateway 255.46.187.152
      dns-nameservers <host provided nameservers>
    It is not able to ping google.com by DNS name or by direct IP, and I can't ping the VM from the outside or from the host. Any ideas now?

    Read the article

  • Lockdown users on Windows Server 2012

    - by el.severo
    I set up an Active Directory on a server machine with Windows Server 2012 and I'd like to create some users with limitations like Windows SteadyState does in Windows XP (locally). I have already seen the Windows SteadyState Handbook (with Windows Server 2008), but I'd like to know if anyone has tried this before. The limitations are the following:
    1. Prevent locked or roaming user profiles that cannot be found on the computer from logging on
    2. Do not cache copies of locked or roaming user profiles for users who have previously logged on to this computer
    3. Do not allow Windows to compute and store passwords using LAN Manager Hash values
    4. Do not store usernames or passwords used to log on to the Windows Live ID or the domain
    5. Prevent users from creating folders and files on drive C:\
    6. Lock profile to prevent the user from making permanent changes
    7. Remove the Control Panel, Printer and Network Settings from the Classic Start menu
    8. Remove the Favorites icon
    9. Remove the My Network Places icon
    10. Remove the Frequently Used Program list
    11. Remove the Shared documents folder from My Computer
    12. Remove the Control Panel icon
    13. Remove the Set Program Access and Defaults icon
    14. Remove the Network Connection (Connect To) icon
    15. Remove the Printers and Faxes icon
    16. Remove the Run icon
    17. Prevent access to Windows Explorer features: Folder Options, Customize Toolbar, and the Notification Area
    18. Prevent access to the taskbar
    19. Prevent access to the command prompt
    20. Prevent access to the registry editor
    21. Prevent access to the Task Manager
    22. Prevent access to Microsoft Management Console utilities
    23. Prevent users from adding or removing printers
    24. Prevent users from locking the computer
    25. Prevent password changes (also requires the Control Panel icon to be removed)
    26. Disable System Tools and other management programs
    27. Prevent users from saving files to the desktop
    28. Hide A Drive
    29. Hide B Drive
    30. Hide C Drive
    31. Prevent changes to Internet Explorer registry settings
    32. Empty the Temporary Internet Files folder when Internet Explorer is closed
    33. Remove Internet Options
    34. Remove the General tab in Internet Options
    35. Remove the Security tab in Internet Options
    36. Remove the Privacy tab in Internet Options
    37. Remove the Content tab in Internet Options
    38. Remove the Connections tab in Internet Options
    39. Remove the Programs tab in Internet Options
    40. Remove the Advanced tab in Internet Options
    41. Set a home page (Internet Explorer)
    42. Restrict the ability to change the desktop image
    43. Restrict the ability to change the wallpaper
    44. Restrict USB flash drives
    Any suggestions for this?
    UPDATE: As @Dan suggested, I'd like to specify that this would be applied in an educational scenario where students log in from a computer and I want to add some restrictions to them.

    Read the article

  • a friendly elaboratation on how iceweasel misses as recent request was discovered in post tut

    - by v3x3n
    I would like to spell out what you asked about - where iceweasel misses the mark - in the hope that you will consider the answer valid, since a default package should run efficiently for a novice user, especially one forming a first impression of new software, a new browser, or a package they are given by default. First, let me say that I am very grateful for everything Debian has done and continues to do for its user base, and I am not complaining - only hoping that mentioning this will eventually help improve the disappointing experience iceweasel gave me in several ways right from the start. I am sure a tremendous amount of hard work went into making it work as well as it does, and that effort continues to go into iceweasel like the rest of the package and distros such as this. Having said that, I will keep my examples as simple as possible while giving enough detail to convey a full idea of my issue. (Thanks for understanding if I sound like a novice to Debian - I am, but I continue to learn and advance by trial and error each day, with, of course, wonderful help from all the great minds available here and there.)
    My bone to pick (and I disdain being a complainer when so much has been done for us end users) is mainly three problems I noticed immediately during a completely normal test drive - average browsing conduct, I should point out:
    1. I went straight to Google Images, searched for a few web-sized images, and saved a few (no more than 800 to 900 pixels in their dimensions - a normal procedure, I would imagine). I experienced a horrendous, recurring freeze and lag that almost forced me to kill the process on each attempt, and it continued to compound, so that a few attempts later a single image save locked up the entire browser. I ultimately gave up on this simple task, with a bit of aggravation, wondering whether I had done something wrong on my end before the attempt or during installation.
    2. Visiting video player sites - YouTube in this case - will not allow any video feed to play whatsoever, and I again experienced a slight freezing that eventually dissipated as long as I didn't press play or load another video. Very annoying, and possibly related to my first issue (which could come down to my own lack of knowledge in setup configuration, but that troubles me if so).
    3. Lastly, as another user has already stated (which is what led you to ask them what the problem was in the first place), iceweasel is apparently built on Firefox 17.x, which leaves the user out of luck when installing their favourite or most useful add-ons and plugins. As that user clearly stated, there are almost zero options and barely any compatible plugins for such an outdated version. I can only imagine that if, as a novice or standard Linux user and a beginner to Debian (which I am all for, by the way), I am discovering major usability issues that hinder normal conduct right off the bat, there is probably a bundle or more of security vulnerabilities that would unravel if I continued to give Mr Hommey's package any further attention or interest.
    This is a deal breaker on any account, unless all three of these issues are due to a very newbie-ish setting (or lack thereof) - and if that is the case, it was not properly stated for beginners anywhere in the installation tutorials I followed, such as the LVM installers, which I felt lacked a clear explanation of the why and how rather than just a fix for a potentially broken out-of-the-box installation. That is probably irrelevant in any case, as there is plenty of step-by-step support for that kind of thing; it is just very challenging alongside a job with deadlines and the need to learn a better, more customizable system than the one that was popular when I became a dev via Windows (OK, I know that's a huge mistake and now I am way off the original topic).
    Thanks very much for taking the time to read this, and I hope it honestly gives you a few good, valid reasons why users want and seek a way around iceweasel altogether. Finally, if I am missing something vital that renders all I've stated invalid or useless, please feel free to shove that elephant in the room my way! Thanks for all you do, as I am sure it is very useful and dedicated work, being a Debian maintainer and super user! Much love for intelligence, freedom and sharing the secrets of the web - may it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • SignalR Server Error "The ConnectionId is in the incorrect format." with SignalR-ObjC Library

    - by ozzotto
    Before asking a separate question I've done lots of googling about it and added a comment in the already existing stackoverflow question. I have a SignalR Hub (tried both v. 1.1.3 and 2.0.0-rc) in my server with the below code: [HubName("TestHub")] public class TestHub : Hub { [Authorize] public void TestMethod(string test) { //some stuff here Clients.Caller.NotifyOnTestCompleted(); } } The problem persists if I remove the Authorize attribute. And in my iOS client I try to call it with the below code: SRHubConnection *hubConnection = [SRHubConnection connectionWithURL:_baseURL]; SRHubProxy *hubProxy = [hubConnection createHubProxy:@"TestHub"]; [hubProxy on:@"NotifyOnTestCompleted" perform:self selector:@selector(stopConnection)]; hubConnection.started = ^{ [hubProxy invoke:@"TestMethod" withArgs:@[@"test"]]; }; //received, error handling [hubConnection start]; When the app starts the user is not logged in and there is no open SignalR connection. The users logs in by calling a Login service in the server which makes use of WebSecurity.Login method. If the login service returns success I then make the above call to SignalR Hub and I get the server error 500 with description "The ConnectionId is in the incorrect format.". The full server stacktrace is the following: Exception information: Exception type: InvalidOperationException Exception message: The ConnectionId is in the incorrect format. at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) Request information: Request URL: http://myserverip/signalr/signalr/connect?transport=webSockets&connectionToken=axJs EQMZxpmUopL36owSUkdhNs85E0fyB2XvV5R5znZfXYI/CiPbTRQ3kASc3 mq60cLkZU7coYo1P fbC0U1LR2rI6WIvCNIMOmv/mHut/Unt9mX3XFkQb053DmWgCan5zHA==&connectionData=[{"Name":"testhub"}] Request path: /signalr/signalr/connect User host address: User: Is authenticated: False Authentication Type: Thread account name: IIS APPPOOL\DefaultAppPool Thread information: Thread ID: 14 Thread account name: IIS APPPOOL\DefaultAppPool Is impersonating: True Stack trace: at Microsoft.AspNet.SignalR.PersistentConnection.GetConnectionId(HostContext context, String connectionToken) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.Hubs.HubDispatcher.ProcessRequest(HostContext context) at Microsoft.AspNet.SignalR.PersistentConnection.ProcessRequest(IDictionary`2 environment) at Microsoft.Owin.Mapping.MapMiddleware.<Invoke>d__0.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at Microsoft.Owin.Host.SystemWeb.IntegratedPipeline.IntegratedPipelineContext.EndFinalWork(IAsyncResult ar) at 
System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I understand this is some kind of authentication and user identity mismatching but up to now I have found no way of solving it. All other questions suggest stoping the opened connection when the user identity changes but as I mentioned above I have no open connection before the user logs in successfully. Any help would be much appreciated. Thank you.
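
    One detail visible in the trace above is that the /signalr/connect request arrives with "Is authenticated: False" even though the login call succeeded, so the connection token was negotiated without the new authentication cookie. The question's client is Objective-C, but purely as an illustration of the ordering that avoids this, here is a hedged C# sketch using the SignalR .NET client (Microsoft.AspNet.SignalR.Client): log in first, capture the cookie, and only then create and start the hub connection. The login URL, credential field names and server address are assumptions, not values from the post.
      using System;
      using System.Collections.Generic;
      using System.Net;
      using System.Net.Http;
      using System.Threading.Tasks;
      using Microsoft.AspNet.SignalR.Client;

      class SignalRAfterLogin
      {
          static async Task Main()   // C# 7.1+; call .Wait() on the tasks in older compilers
          {
              var cookies = new CookieContainer();

              // 1. Authenticate first (hypothetical forms-auth endpoint) and keep the cookie.
              var handler = new HttpClientHandler { CookieContainer = cookies };
              using (var http = new HttpClient(handler))
              {
                  await http.PostAsync("https://myserver/account/login",
                      new FormUrlEncodedContent(new[]
                      {
                          new KeyValuePair<string, string>("UserName", "user"),
                          new KeyValuePair<string, string>("Password", "pass")
                      }));
              }

              // 2. Only now build the hub connection, sharing the cookie container so the
              //    negotiate/connect requests run as the authenticated user.
              var connection = new HubConnection("https://myserver/") { CookieContainer = cookies };
              IHubProxy proxy = connection.CreateHubProxy("TestHub");
              proxy.On("NotifyOnTestCompleted", () => Console.WriteLine("done"));

              await connection.Start();                  // token issued for the logged-in identity
              await proxy.Invoke("TestMethod", "test");
          }
      }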

    Read the article

  • Facebook graph api photo upload to a fan page album

    - by kielie
    Hi guys, I have gotten the photo upload function to work with this code, <?php include_once 'facebook-php-sdk/src/facebook.php'; include_once 'config.php';//this file contains the secret key and app id etc... $facebook = new Facebook(array( 'appId' => FACEBOOK_APP_ID, 'secret' => FACEBOOK_SECRET_KEY, 'cookie' => true, 'domain' => 'your callback url goes here' )); $session = $facebook->getSession(); if (!$session) { $url = $facebook->getLoginUrl(array( 'canvas' => 1, 'fbconnect' => 0, 'req_perms'=>'user_photos,publish_stream,offline_access'//here I am requesting the required permissions, it should work with publish_stream alone, but I added the others just to be safe )); echo 'You are not logged in, please <a href="' . $facebook->getLoginUrl() . '">Login</a> to access this application'; } else{ try { $uid = $facebook->getUser(); $me = $facebook->api('/me'); $token = $session['access_token'];//here I get the token from the $session array $album_id = 'the id of the album you wish to upload to eg: 1122'; //upload your photo $file= 'test.jpg'; $args = array( 'message' => 'Photo from application', ); $args[basename($file)] = '@' . realpath($file); $ch = curl_init(); $url = 'https://graph.facebook.com/'.$album_id.'/photos?access_token='.$token; curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, $args); $data = curl_exec($ch); //returns the id of the photo you just uploaded print_r(json_decode($data,true)); } catch(FacebookApiException $e){ echo "Error:" . print_r($e, true); } } ?> I hope this helps, a friend and I smashed our heads against a wall for quite some time to get this working! Anyways, here is my question, how can I upload a image to a fan page? I am struggling to get this working, when I upload the image all I get is the photo id but no photo in the album. So basically, when the user clicks the upload button on our application, I need it to upload the image they created to our fan page's album with them tagged on it. Anyone know how I can accomplish this?

    Read the article

  • WCF net.tcp windows service - call duration and calls outstanding increases over time

    - by Brook
    I have a windows service which uses the ServiceHost class to host a WCF Service using the net.tcp binding. I have done some tweaking to the config to throttle sessions as well as number of connections, but it seems that every once in a while my "Calls outstanding" and "Call duration" shoot up and stay up in perfmon. It seems to me I have a leak somewhere, but the code I have is all fairly minimal, I'm relying on ServiceHost to handle the details. Here's how I start my service ServiceHost host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); My Faulted event just does the following (more or less, logging etc removed) if (host.State == CommunicationState.Faulted) { host.Abort(); } else { host.Close(); } host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); Here's some snippets from my app.config to show some of the things I've tried <runtime> <gcConcurrent enabled="true" /> <generatePublisherEvidence enabled="false" /> </runtime> ......... <behaviors> <serviceBehaviors> <behavior name="Throttled"> <serviceThrottling maxConcurrentCalls="300" maxConcurrentSessions="300" maxConcurrentInstances="300" /> .......... <services> <service name="MyService" behaviorConfiguration="Throttled"> <endpoint address="net.tcp://localhost:49001/MyService" binding="netTcpBinding" bindingConfiguration="Tcp" contract="IMyService"> </endpoint> </service> </services> .......... <netTcpBinding> <binding name="Tcp" openTimeout="00:00:10" closeTimeout="00:00:10" portSharingEnabled="true" receiveTimeout="00:5:00" sendTimeout="00:5:00" hostNameComparisonMode="WeakWildcard" listenBacklog="1000" maxConnections="1000"> <reliableSession enabled="false"/> <security mode="None"/> </binding> </netTcpBinding> .......... <!--for my diagnostics--> <diagnostics performanceCounters="ServiceOnly" wmiProviderEnabled="true" /> There's obviously some resource getting tied up, but I thought I covered everything with my config. I'm only getting about ~150 clients so I don't think I'm coming up against my "300" limit. "Calls per second" stays constant at anywhere from 2-5 calls per second. The service will run for hours and hours with 0-2 "calls outstanding" and very low "call duration" and then eventually it will shoot up to 30 calls oustanding and 20s call duration. Any tips on what might be causing my "calls outstanding" and "call duration" to spike? Where am I leaking? Point me in the right direction?
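
    This is not a diagnosis of the leak (a climbing call duration usually points at operations that block and never complete rather than at the host itself), but for comparison, here is a minimal self-hosted sketch of the same pattern with the throttling limits applied in code and a Faulted handler that detaches itself before aborting and rebuilding the host, so the dead instance and its channels can be released. The contract, service class and port are placeholders echoing the config above.
      using System;
      using System.ServiceModel;
      using System.ServiceModel.Description;

      [ServiceContract]
      public interface IMyService { [OperationContract] void Ping(); }

      public class MyService : IMyService { public void Ping() { } }

      public class HostRunner
      {
          private ServiceHost host;

          public void StartHost()
          {
              host = new ServiceHost(typeof(MyService));
              host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(),
                  "net.tcp://localhost:49001/MyService");

              // Same limits as the serviceThrottling element in the config above.
              var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
              if (throttle == null)
              {
                  throttle = new ServiceThrottlingBehavior();
                  host.Description.Behaviors.Add(throttle);
              }
              throttle.MaxConcurrentCalls = 300;
              throttle.MaxConcurrentSessions = 300;
              throttle.MaxConcurrentInstances = 300;

              host.Faulted += OnFaulted;
              host.Open();
          }

          private void OnFaulted(object sender, EventArgs e)
          {
              // Unhook and abort the dead host before replacing it, so the old
              // instance, its channels and its perf counters can be collected.
              host.Faulted -= OnFaulted;
              host.Abort();
              StartHost();
          }
      }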

    Read the article

  • Invalid or expired security context token in WCF web service

    - by Damian
    All, I have a WCF web service (let's called service "B") hosted under IIS using a service account (VM, Windows 2003 SP2). The service exposes an endpoint that use WSHttpBinding with the default values except for maxReceivedMessageSize, maxBufferPoolSize, maxBufferSize and some of the time outs that have been increased. The web service has been load tested using Visual Studio Load Test framework with around 800 concurrent users and successfully passed all tests with no exceptions being thrown. The proxy in the unit test has been created from configuration. There is a sharepoint application that use the Office Sharepoint Server Search service to call web services "A" and "B". The application will get data from service "A" to create a request that will be sent to service "B". The response coming from service "B" is indexed for search. The proxy is created programmatically using the ChannelFactory. When service "A" takes less than 10 minutes, the calls to service "B" are successfull. But when service "A" takes more time (~20 minutes) the calls to service "B" throw the following exception: Exception Message: An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail Inner Exception Message: The message could not be processed. This is most likely because the action 'namespace/OperationName' is incorrect or because the message contains an invalid or expired security context token or because there is a mismatch between bindings. The security context token would be invalid if the service aborted the channel due to inactivity. To prevent the service from aborting idle sessions prematurely increase the Receive timeout on the service endpoint's binding. The binding settings are the same, the time in both client server and web service server are synchronize with the Windows Time service, same time zone. When i look at the server where web service "B" is hosted i can see the following security errors being logged: Source: Security Category: Logon/Logoff Event ID: 537 User NT AUTHORITY\SYSTEM Logon Failure: Reason: An error occurred during logon Logon Type: 3 Logon Process: Kerberos Authentication Package: Kerberos Status code: 0xC000006D Substatus code: 0xC0000133 After reading some of the blogs online, the Status code means STATUS_LOGON_FAILURE and the substatus code means STATUS_TIME_DIFFERENCE_AT_DC. but i already checked both server and client clocks and they are syncronized. I also noticed that the security token seems to be cached somewhere in the client server because they have another process that calls the web service "B" using the same service account and successfully gets data the first time is called. Then they start the proccess to update the office sharepoint server search service indexes and it fails. Then if they called the first proccess again it will fail too. Has anyone experienced this type of problems or have any ideas? Regards, --Damian
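
    The fault text itself points at one avenue: the secure-conversation (security context) token established when the channel to service "B" is first created can be aborted after a long idle period, which lines up with the ~20-minute runs of service "A". A hedged sketch of one way around that is below: build the channel immediately before the call instead of reusing an idle one, and disable the security context so there is no session token left to expire. IServiceB and the address are placeholders, and the same EstablishSecurityContext setting would have to be applied on the service side of the binding or the two ends will not match.
      using System;
      using System.ServiceModel;

      [ServiceContract]
      public interface IServiceB
      {
          [OperationContract]
          string GetSearch(string request);
      }

      public static class ServiceBCaller
      {
          public static string CallServiceB(string request)
          {
              var binding = new WSHttpBinding(SecurityMode.Message)
              {
                  ReceiveTimeout = TimeSpan.FromMinutes(30),  // outlive service "A"'s long runs
                  SendTimeout = TimeSpan.FromMinutes(5)
              };
              // No secure-conversation session means no context token that idling can invalidate.
              binding.Security.Message.EstablishSecurityContext = false;

              var factory = new ChannelFactory<IServiceB>(
                  binding, new EndpointAddress("https://serverB/ServiceB.svc"));
              IServiceB channel = factory.CreateChannel();
              try
              {
                  return channel.GetSearch(request);
              }
              finally
              {
                  var client = (IClientChannel)channel;
                  if (client.State == CommunicationState.Faulted) client.Abort();
                  else client.Close();
                  factory.Close();
              }
          }
      }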

    Read the article

  • Core Data migration failing with error: Failed to save new store after first pass of migration

    - by unforgiven
    In the past I had already implemented successfully automatic migration from version 1 of my data model to version 2. Now, using SDK 3.1.3, migrating from version 2 to version 3 fails with the following error: Unresolved error Error Domain=NSCocoaErrorDomain Code=134110 UserInfo=0x5363360 "Operation could not be completed. (Cocoa error 134110.)", { NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=256 UserInfo=0x53622b0 "Operation could not be completed. (Cocoa error 256.)"; reason = "Failed to save new store after first pass of migration."; } I have tried automatic migration using NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption and also migration using only NSMigratePersistentStoresAutomaticallyOption, providing a mapping model from v2 to v3. I see the above error logged, and no object is available in the application. However, if I quit the application and reopen it, everything is in place and working. The Core Data methods I am using are the following ones - (NSManagedObjectModel *)managedObjectModel { if (managedObjectModel != nil) { return managedObjectModel; } NSString *path = [[NSBundle mainBundle] pathForResource:@"MYAPP" ofType:@"momd"]; NSURL *momURL = [NSURL fileURLWithPath:path]; managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL]; return managedObjectModel; } - (NSManagedObjectContext *) managedObjectContext { if (managedObjectContext != nil) { return managedObjectContext; } NSPersistentStoreCoordinator *coordinator = [self persistentStoreCoordinator]; if (coordinator != nil) { managedObjectContext = [[NSManagedObjectContext alloc] init]; [managedObjectContext setPersistentStoreCoordinator: coordinator]; } return managedObjectContext; } - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (persistentStoreCoordinator != nil) { return persistentStoreCoordinator; } NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"MYAPP.sqlite"]]; NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption, [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil]; NSError *error = nil; persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel: [self managedObjectModel]]; if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:options error:&error]) { // Handle error NSLog(@"Unresolved error %@, %@", error, [error userInfo]); } return persistentStoreCoordinator; } In the simulator, I see that this generates a MYAPP~.sqlite files and a MYAPP.sqlite file. I tried to remove the MYAPP~.sqlite file, but BOOL oldExists = [[NSFileManager defaultManager] fileExistsAtPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"MYAPP~.sqlite"]]; always returns NO. Any clue? Am I doing something wrong? Thank you in advance.

    Read the article

  • Using DefaultCredentials and DefaultNetworkCredentials

    - by Fred
    Hi, We're having a hard time figuring how these credentials objects work. In fact, they may not work how we expected them to work. Here's an explanation of the current issue. We got 2 servers that needs to talk with each other through webservices. The first one (let's call it Server01) has a Windows Service running as the NetworkService account. The other one (Server02) has ReportingServices running with IIS 6.0. The Windows Service on Server01 is trying to use the Server02's ReportingServices' WebService to generate reports and send them by email. So, here's what we tried so far. Setting the credentials at runtime (This works perfectly fine): rs.Credentials = new NetworkCredentials("user", "pass", "domain"); Now, if we could use a generic user all would be fine, however... we are not allowed to. So, we are trying to use the DefaultCredetials or DefaultNetworkCredentials and pass it to the RS Webservice: `rs.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials OR `rs.Credentials = System.Net.CredentialCache.DefaultCredentials Either way won't work. We're always getting 401 Unauthrorized from IIS. Now, what we know is that if we want to give access to a resource logged as NetworkService, we need to grant it to "DOMAIN\MachineName$" (http://msdn.microsoft.com/en-us/library/ms998320.aspx): Granting Access to a Remote SQL Server If you are accessing a database on another server in the same domain (or in a trusted domain), the Network Service account's network credentials are used to authenticate to the database. The Network Service account's credentials are of the form DomainName\AspNetServer$, where DomainName is the domain of the ASP.NET server and AspNetServer is your Web server name. For example, if your ASP.NET application runs on a server named SVR1 in the domain CONTOSO, the SQL Server sees a database access request from CONTOSO\SVR1$. We assumed that granting access the same way with IIS would work. However, it does not. Or at least, something is not set properly for it to authenticate correctly. So, here are some questions: We've read about "Impersonating Users" somewhere, do we need to set this somewhere in the Windows Service ? Is it possible to grant access to the NetworkService built-in account to a remote IIS server ? Thanks for reading!
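
    For reference, a minimal sketch of the two credential options on the generated Reporting Services proxy (a SoapHttpClientProtocol subclass, usually named ReportingService2005 when created from the 2005 WSDL). As the quoted MSDN text says, a process running as NetworkService authenticates to a remote box as DOMAIN\MachineName$, so option 1 only helps once that machine account (or a group containing it) has been granted access on Server02 and the IIS virtual directory accepts Integrated Windows authentication; the URL below is an example, not your server.
      using System.Net;

      class ReportClientSetup
      {
          static ReportingService2005 CreateProxy()
          {
              var rs = new ReportingService2005
              {
                  Url = "http://server02/ReportServer/ReportService2005.asmx"
              };

              // Option 1: flow the Windows Service's own identity. Under NetworkService this
              // appears on the wire as DOMAIN\Server01$, which must be granted a role on Server02.
              rs.Credentials = CredentialCache.DefaultCredentials;
              rs.PreAuthenticate = true;

              // Option 2 (known to work, but requires storing an account somewhere):
              // rs.Credentials = new NetworkCredential("user", "pass", "domain");

              return rs;
          }
      }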

    Read the article

  • Facebook Connect icon isn't showing up in Internet Explorer

    - by John Duff
    I'm working on a site that is using Facebook Connect and recently made some changes so that the main pages are cached and if you are not logged in (checked with an ajax call) it loads the Facebook Connect javascript and renders the connect button into the page. This works perfectly everywhere except Internet Explorer 7 and 8. The weird part is I render the buttons into a hidden Sign Up / Sign In form and when you show either of those the Connect buttons appear. You can take a look here and you will see the button in Firefox and not Internet Explorer. If you click Sign In the button will show up. This is a Rails app so on the server-side we're responding to an ajax call with rjs like this: page['signin-status'].replace(:partial => "common/layout/signin_menu") page.select('.facebook-connect').each do |value, index| value.replace(render(:partial => '/facebook/signin')) end page << <<-eos LazyLoader.load('http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php', function(){ FB.init('#{Facebooker.api_key}','/xd_receiver.html'); }); eos The first line is replacing the header, the second is the Connect buttons in the Modal dialogs. The partial being rendered into the header looks like this: <span id='signin-status'> <%= fb_login_button(remote_function(:url => "/facebook/connect"))%> | <%= link_to_function "Sign In", "showSignInForm();", :id => "signin" %> | <%= link_to_function "Sign Up", "showSignUpForm();", :id => "signup" %> </span> The Partial being rendered into the Modal dialogs looks like this: <div class='facebook-connect'> <div id="FB_HiddenContainer" style="position:absolute; top:-10000px; width:0px; height:0px;" ></div> <label>Or sign in with your Facebook account</label> <%= fb_login_button(remote_function(:url => "/facebook/connect"))%> </div> I find it very strange that showing the Modal dialog causes all the icons to show. Does anyone have any ideas or suggestions about what's going on?

    Read the article

  • Using multiple distinct TCP security binding configurations in a single WCF IIS-hosted WCF service a

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?
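
    Two things stand out in the posted config: nothing ties any endpoint to the named "public" binding (there is no bindingConfiguration attribute), and endpoints whose clients negotiate different transport upgrades on the same net.tcp address can produce exactly this "requested upgrade is not supported" / "application/negotiate" mismatch. Purely as an illustration, here is a self-hosted C# sketch of the intended layout, with each contract on its own address and its own NetTcpBinding instance; under IIS the equivalent is two <endpoint> elements, each pointing at its own named binding via bindingConfiguration and listening on a distinct address. Contract names and ports are placeholders.
      using System;
      using System.ServiceModel;

      [ServiceContract] public interface IAccountService { [OperationContract] void Noop(); }
      [ServiceContract] public interface ILogonService  { [OperationContract] void Noop(); }

      public class MyServices : IAccountService, ILogonService
      {
          void IAccountService.Noop() { }
          void ILogonService.Noop() { }
      }

      class DualBindingHost
      {
          static void Main()
          {
              var host = new ServiceHost(typeof(MyServices));

              // Authenticated endpoint: message-level username credentials.
              var secure = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
              secure.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
              host.AddServiceEndpoint(typeof(IAccountService), secure,
                  "net.tcp://localhost:8081/AccountService");

              // "Public" endpoint used before logon: transport security, Windows credentials.
              var open = new NetTcpBinding(SecurityMode.Transport);
              open.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;
              host.AddServiceEndpoint(typeof(ILogonService), open,
                  "net.tcp://localhost:8082/LogonService");

              host.Open();
              Console.WriteLine("Endpoints open; press Enter to exit.");
              Console.ReadLine();
              host.Close();
          }
      }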

    Read the article

  • Configuring multiple distinct WCF binding configurations causes an exception to be thrown

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?

    Read the article

  • Snow Leopard sqlite3-ruby install problem

    - by JZ
    UPDATE 3/20/10 I'm running Mac OSX Snow Leopard, this problem is caused by a recent train wreck in which I updated ruby without RVM. I've attempted to properly install/run RVM, however I can't get it to work. I am unable to install the sqlite3-ruby gem. I get the following ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. How do I fix this? justin-zollarss-mac-pro:~ justinz$ rails -v Rails 2.3.5 justin-zollarss-mac-pro:~ justinz$ ruby -v ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin10.2.0] justin-zollarss-mac-pro:~ justinz$ gem -v 1.3.5 justin-zollarss-mac-pro:~ justinz$ which gem /usr/local/bin/gem justin-zollarss-mac-pro:~ justinz$ whereis gem /usr/bin/gem justin-zollarss-mac-pro:~ justinz$ which ruby /usr/local/bin/ruby justin-zollarss-mac-pro:~ justinz$ whereis ruby /usr/bin/ruby justin-zollarss-mac-pro:~ justinz$ which rails /usr/local/bin/rails justin-zollarss-mac-pro:~ justinz$ whereis rails /usr/bin/rails justin-zollarss-mac-pro:~ justinz$ gem list *** LOCAL GEMS *** actionmailer (2.3.5) actionpack (2.3.5) activerecord (2.3.5) activeresource (2.3.5) activesupport (2.3.5) builder (2.1.2) bundler (0.9.11) columnize (0.3.1) erubis (2.6.5) fastercsv (1.5.1) ffi (0.6.3) gbarcode (0.98.16) i18n (0.3.5) linecache (0.43) mail (2.1.3) memcache-client (1.8.0) prawn (0.8.4) prawn-core (0.8.4) prawn-layout (0.8.4) prawn-security (0.8.4) rack (1.1.0, 1.0.1) rack-mount (0.6.1) rack-test (0.5.3) rails (2.3.5) rake (0.8.7) ruby-debug (0.10.3) ruby-debug-base (0.10.3) rubygems-update (1.3.6) sqlite3 (0.0.8) text-format (1.0.0) thor (0.13.4) tzinfo (0.3.17) justin-zollarss-mac-pro:~ justinz$ sudo gem install sqlite3-ruby Password: Building native extensions. This could take a while... ERROR: Error installing sqlite3-ruby: ERROR: Failed to build gem native extension. /usr/local/bin/ruby extconf.rb checking for fdatasync() in -lrt... no checking for sqlite3.h... yes checking for sqlite3_open() in -lsqlite3... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/local/bin/ruby --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib --with-rtlib --without-rtlib --with-sqlite3lib --without-sqlite3lib Gem files will remain installed in /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5 for inspection. Results logged to /usr/local/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.5/ext/sqlite3_api/gem_make.out

    Read the article

  • Rendering a WPF Network Map/Graph layout - Manual? PathListBox? Something Else?

    - by Ben Von Handorf
    I'm writing code to present the user with a simplified network map. At any given time, the map is focused on a specific item... say a router or a server. Based on the focused item, other network entities are grouped into sets (i.e. subnets or domains) and then rendered around the focused item. Lines would represent connections and groups would be visually grouped inside a rectangle or ellipse. Panning and zooming are required features. An item can be selected to display more information in a "properties" style window. An item could also be double-clicked to re-focus the entire network map on that item. At that point, the entire map would be re-calculated. I am using MVVM without any framework, as of yet. Assume the logic for grouping items and determining what should be shown or not is all in place. I'm looking for the best way to approach the UI layout. So far, I'm aware of the following options: Use a canvas for layout (inside a ScrollViewer to handle the panning). Have my ViewModel make use of a Layout Manager type of class, which would handle assigning all the layout properties (Top, Left, etc.). Bind my set of display items to an ItemsControl and use Data Templates to handle the actual rendering. The drawbacks with this approach: Highly manual layout on my part. Lots of calculation. I have to handle item selection manually. Computation of connecting lines is manual. The Pros of this approach: I can draw additional lines between child subnets as appropriate (manually). Additional LayoutManagers could be added later to render the display differently. This could probably be wrapped up into some sort of a GraphLayout control to be re-used. Present the focused item at the center of the display and then use a PathListBox for layout of the additional items. Have my ViewModel expose a simple list of things to be drawn and bind them to the PathListBox. Override the ListBoxItem Template to also create a line geometry from the borders of the focused item (tricky) to the bound item. Use DataTemplates to handle the case where the item being bound is a subnet, in which case we would use another PathListBox in the template to display items inside the subnet. The drawbacks with this approach: Selected Item synchronization across multiple `PathListBox`es. Only one item on the whole graph can be selected at a time, but each child PathListBox maintains its own selection. Also, subnets cannot be selected, but would be selectable without additional work. Drawing the connecting lines is going to be a bit of trickery in the ListBoxItem template, since I need to know the correct side of the focused item to connect to. The pros of this approach: I get to stay out of the layout business, more. I'm looking for any advice or thoughts from others who have encountered similar issues or who have more WPF experience than I. I'm using WPF 4, so any new tricks are legal and encouraged.
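
    Building on the LayoutManager idea in option 1, here is a minimal sketch (the names are mine, not an existing API) of a radial layout pass that writes X/Y back onto the item view-models; an ItemsControl whose ItemsPanel is a Canvas can then bind Canvas.Left/Canvas.Top to those properties in its ItemContainerStyle, which keeps the view-models free of WPF types and leaves room to swap in other layout managers later. Connector lines can be derived from the same X/Y pairs.
      using System;
      using System.Collections.Generic;

      public class NodeViewModel
      {
          public string Name { get; set; }
          public double X { get; set; }   // bound to Canvas.Left in the ItemContainerStyle
          public double Y { get; set; }   // bound to Canvas.Top
      }

      public class RadialLayoutManager
      {
          public void Arrange(NodeViewModel focused, IList<NodeViewModel> others,
                              double centerX, double centerY, double radius)
          {
              // Focused item sits at the centre of the map.
              focused.X = centerX;
              focused.Y = centerY;

              // Spread the remaining nodes (or group containers) evenly around it.
              for (int i = 0; i < others.Count; i++)
              {
                  double angle = 2 * Math.PI * i / others.Count;
                  others[i].X = centerX + radius * Math.Cos(angle);
                  others[i].Y = centerY + radius * Math.Sin(angle);
              }
          }
      }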

    Read the article

  • .Net Intermittent System.Web.Services.Protocols.SoapHeaderException

    - by ScottE
    We have a .net 3.5 web app that consumes third party web services. The proxy was created by adding a web reference to their wsdl. This proxy is not compiled. Our error logging is picking up frequent but intermittent exceptions: An exception of type 'System.Web.Services.Protocols.SoapHeaderException' occurred and was caught If I follow the url to the page that generated the exception, I can't recreate it. Edit: Here is most of the exception - where it bubbled up from Message : Internal Error Type : System.Web.Services.Protocols.SoapHeaderException, System.Web.Services, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a Source : System.Web.Services Help link : Actor : Code : http://schemas.xmlsoap.org/soap/envelope/:Client Detail : Lang : Node : Role : SubCode : Data : System.Collections.ListDictionaryInternal TargetSite : System.Object[] ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean) Stack Trace : at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Vendor.getSearch(getSearchRequest getSearchRequest) in c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\be43c34e\b09edc7e\App_WebReferences.pww-cf-q.0.cs:line 73 Edit 2: Inner exceptions: I sometimes get the following inner exceptions logged: Message : Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. Type : System.IO.IOException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 Source : System Help link : Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Read(Byte[], Int32, Int32) Stack Trace : at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) And/Or: Message : An existing connection was forcibly closed by the remote host Type : System.Net.Sockets.SocketException, System, Version=2.0.0.0, Culture=neutral, 
PublicKeyToken=b77a5c561934e089 Source : System Help link : ErrorCode : 10054 SocketErrorCode : ConnectionReset NativeErrorCode : 10054 Data : System.Collections.ListDictionaryInternal TargetSite : Int32 Receive(Byte[], Int32, Int32, System.Net.Sockets.SocketFlags) Stack Trace : at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) Update We're still working on it. Originally there was a route issue, which was resolved. We're still getting the inner exception with socket errors. We had MS support involved today, and they looked at some traces and network captures. The web service host does round-robin DNS, and they may be responding on a different IP address for the syn syn/ack from one ip, and the next from a different ip. This is not good. This is likely quite specific to our situation, but perhaps it applies to others as well. Microsoft Network Monitor and an application trace got us the information we needed.
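
    Since the inner exceptions are transport-level (the connection reset arrives during the SSL handshake underneath the SOAP call), one stopgap while the network cause is tracked down is to retry only on those specific failures and let genuine SOAP faults surface immediately. A hedged sketch, written to compile on .NET 3.5; the proxy.getSearch(request) call in the usage comment just mirrors the name in the stack trace above and is not a claim about the real proxy's signature.
      using System;
      using System.IO;
      using System.Net.Sockets;
      using System.Threading;

      public static class TransientRetry
      {
          public static T Invoke<T>(Func<T> call, int attempts)
          {
              for (int attempt = 1; ; attempt++)
              {
                  try
                  {
                      return call();
                  }
                  catch (Exception ex)
                  {
                      if (attempt >= attempts || !IsTransient(ex)) throw;
                      Thread.Sleep(TimeSpan.FromSeconds(attempt));  // simple linear backoff
                  }
              }
          }

          static bool IsTransient(Exception ex)
          {
              // Walk the inner exceptions: the SoapHeaderException wraps an IOException,
              // which in turn wraps the SocketException (10054, ConnectionReset).
              for (var e = ex; e != null; e = e.InnerException)
              {
                  if (e is IOException) return true;
                  var socketEx = e as SocketException;
                  if (socketEx != null && socketEx.SocketErrorCode == SocketError.ConnectionReset)
                      return true;
              }
              return false;
          }
      }

      // Usage (hypothetical proxy/request types):
      // var response = TransientRetry.Invoke(() => proxy.getSearch(request), 3);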

    Read the article

  • Facebook Connect: Error when clicking the Facebook Connect button

    - by Garrett
    I am getting this error when I click on the facebook connect button: API Error Code: 100 API Error Description: Invalid parameter Error Message: next is not owned by the application. I am not too sure how to do this, but I've read all the documentation for facebook connect and came up with this: <?php date_default_timezone_set("America/Toronto"); define('FACEBOOK_APP_ID', '##################'); define('FACEBOOK_SECRET', 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'); function get_facebook_cookie($app_id, $application_secret) { if(!isset($_COOKIE['fbs_' . $app_id])) return null; $args = array(); parse_str(trim($_COOKIE['fbs_' . $app_id], '\\"'), $args); ksort($args); $payload = ''; foreach ($args as $key => $value) { if ($key != 'sig') { $payload .= $key . '=' . $value; } } if (md5($payload . $application_secret) != $args['sig']) { return null; } return $args; } $cookie = get_facebook_cookie(FACEBOOK_APP_ID, FACEBOOK_SECRET); ?> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <!-- JQUERY INCLUDE --> <script src="js/jquery-1.4.2.min.js" type="text/javascript"></script> <!-- FACEBOOK CONNECT INCLUDE --> <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US" type="text/javascript"></script> <script type="text/javascript"> <!-- $(document).load(function() { }); FB.init("XXXXXXXXXXXXXXXXXXXXXXx ", "xd_receiver.htm"); FB.Event.subscribe('auth.login', function(response) { window.location.reload(); }); function facebook_onlogin() { FB.getLoginStatus(function(response) { if (response.session) { // logged in and connected user, someone you know fb_login.hide(); } else { // no user session available, someone you dont know } }); } // --> </script> </head> <body> <div id="fb_login"> <fb:login-button onlogin="facebook_onlogin();" v="2">Log In with Facebook</fb:login-button> </div> <?php $user = json_decode(file_get_contents( 'https://graph.facebook.com/me?access_token=' . $cookie['access_token']))->id; ?> </body> </html> how on earth can i get this to work? thanks!

    Read the article

  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC

    - by Brad B.
    Normally, when a site requires that you are logged in before you can access a certain page, you are taken to the login screen and after successfully authenticating yourself, you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2: [HttpPost] public ActionResult LogOn(LogOnModel model, string returnUrl) { if (ModelState.IsValid) { if (MembershipService.ValidateUser(model.UserName, model.Password)) { FormsService.SignIn(model.UserName, model.RememberMe); if (!String.IsNullOrEmpty(returnUrl)) { return Redirect(returnUrl); // open redirect vulnerability HERE } else { return RedirectToAction("Index", "Home"); } } else { ModelState.AddModelError("", "User name or password incorrect..."); } } return View(model); } If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability: Attacker, pretending to be victim's bank, sends an email to victim containing a link, like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on. The victim clicks the link, sees their actual familiar banking website and is asked to logon Victim logs on and is subsequently redirected to http://www.badsite.com which is made to look exactly like victim's bank's website, so victim doesn't know he is now on a different site. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc." Victim, still thinking he is on his banking website, falls for the ploy and provides attacker with the information Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward the option of splitting the "returnUrl" parameter into controller/action parts and use "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open-redirect may not seem to be a big deal compared to the likes of XSS and CSRF, but us developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
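
    For reference, the usual fix is to redirect only when returnUrl is a path inside the application itself. ASP.NET MVC 3 and later ship Url.IsLocalUrl for this; on MVC 2 an equivalent manual check is easy to write. A sketch follows - the helper names are mine, not part of the MVC 2 template - intended to replace the bare "return Redirect(returnUrl);" line inside LogOn.
      // Inside AccountController: send the user to returnUrl only if it is local.
      private ActionResult RedirectToLocal(string returnUrl)
      {
          if (IsLocalUrl(returnUrl))
          {
              return Redirect(returnUrl);
          }
          return RedirectToAction("Index", "Home");
      }

      // Accept only app-relative paths; reject "//host" and "/\host", which
      // browsers treat as protocol-relative absolute URLs.
      private static bool IsLocalUrl(string url)
      {
          if (string.IsNullOrEmpty(url)) return false;
          return (url[0] == '/' && (url.Length == 1 || (url[1] != '/' && url[1] != '\\')))
              || (url.Length > 1 && url[0] == '~' && url[1] == '/');
      }
    With this in place, the crafted link in the example attack still logs the victim in, but the off-site returnUrl fails the check and the user simply lands on the application's home page.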

    Read the article

  • SSRS 2005 giving me "Invalid URI: The format of the URI could not be determined" when trying to cust

    - by Brian
    Hello, I'm getting the error "Invalid URI: The format of the URI could not be determined" when customizing it. I've made several changes to the configuration files and UI, but I keep getting this error. It isn't logging it too in the event log nor the log files, which makes it very annoying to debug. So how do I figure out where the error is coming from? Is it with the URL that's pointing to the ReportServer2005.asmx file, or something else? Updated: The specific error being logged is: aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpOnExceptions to default value of 'Microsoft.ReportingServices.Diagnostics.Utilities.InternalCatalogException,Microsoft.ReportingServices.Modeling.InternalModelingException' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WatsonDumpExcludeIfContainsExceptions to default value of 'System.Data.SqlClient.SqlException,System.Threading.ThreadAbortException' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing SecureConnectionLevel to default value of '1' because it was not specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing DisplayErrorLink to 'True' as specified in Configuration file. aspnet_wp!library!9!3/11/2010-15:52:49:: i INFO: Initializing WebServiceUseFileShareStorage to default value of 'False' because it was not specified in Configuration file. aspnet_wp!ui!9!3/11/2010-15:52:52:: e ERROR: Invalid URI: The format of the URI could not be determined. aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: HTTP status code -- 500 -------Details-------- System.UriFormatException: Invalid URI: The format of the URI could not be determined. at Microsoft.SqlServer.ReportingServices2005.RSConnection.GetSecureMethods() at Microsoft.ReportingServices.UI.Global.RSWebServiceWrapper.GetSecureMethods() at Microsoft.SqlServer.ReportingServices2005.RSConnection.IsSecureMethod(String methodname) at Microsoft.SqlServer.ReportingServices2005.RSConnection.ValidateConnection() at Microsoft.ReportingServices.UI.Global.SecureAllAPI() at Microsoft.ReportingServices.UI.ReportingPage.EnsureHttpsLevel(HttpsLevel level) at Microsoft.ReportingServices.UI.ReportingPage.ReportingPage_Init(Object sender, EventArgs args) at System.EventHandler.Invoke(Object sender, EventArgs e) at System.Web.UI.Control.OnInit(EventArgs e) at System.Web.UI.Page.OnInit(EventArgs e) at System.Web.UI.Control.InitRecursive(Control namingContainer) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) aspnet_wp!ui!9!3/11/2010-15:52:53:: e ERROR: Exception in ShowErrorPage: System.Threading.ThreadAbortException: Thread was being aborted. at System.Threading.Thread.AbortInternal() at System.Threading.Thread.Abort(Object stateInfo) at System.Web.HttpResponse.End() at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm) at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg) at at System.Threading.Thread.AbortInternal() at System.Threading.Thread.Abort(Object stateInfo) at System.Web.HttpResponse.End() at System.Web.HttpServerUtility.Transfer(String path, Boolean preserveForm) at Microsoft.ReportingServices.UI.ReportingPage.ShowErrorPage(String errMsg) Thanks.
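
    For what it's worth, "Invalid URI: The format of the URI could not be determined" is the UriFormatException that System.Uri throws when it is given a string that is not an absolute URI, so the usual suspect is a Report Server URL in one of the Reporting Services configuration files (for example the <ReportServerUrl> value used by Report Manager) that lost its http:// or https:// scheme or picked up a stray character during the customization. A quick way to vet a candidate value (the string below is a hypothetical example):

    using System;

    class UriCheck
    {
        static void Main()
        {
            // Hypothetical value copied out of the edited configuration file.
            string candidate = "myserver/ReportServer";

            Uri parsed;
            if (Uri.TryCreate(candidate, UriKind.Absolute, out parsed))
                Console.WriteLine("OK: " + parsed);
            else
                Console.WriteLine("Not an absolute URI - it needs a scheme, e.g. http://myserver/ReportServer");
        }
    }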

    Read the article

  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an OpenGL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows: Renderer Utilization: 95% to 99% Tiler Utilization: ~27% I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are: How can I determine with more granularity where the bottleneck is? That is my biggest problem, I just don't know what is taking all the time. If it is fill rate, is there anything I can do to improve it besides just drawing less? I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (drawing order, not everything fits on one 1024x1024 texture, etc.). Every frame I do 10 image binds. This seems pretty reasonable, but I could be mistaken. I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry. I can try to be more precise if needed. Each image is 2 triangles and I try to batch things where possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons? Are there certain things to watch out for that could reduce performance? E.g., should I avoid glTranslate, glColor4f, etc.? I'm using glScissor in 3 places per frame. Each use consists of 2 glScissor calls, one to set it up, and one to reset it to what it was. I don't know if there is much of a performance impact here. If I used PVRTC would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues. Here is a rough idea of what I'm drawing, in this order: 1) Switch to perspective matrix. 2) Draw a full screen background image. 3) Draw a full screen image with translucency (this one has a scrolling texture). 4) Draw a few sprites. 5) Switch to ortho matrix. 6) Draw a few sprites. 7) Switch to perspective matrix. 8) Draw sprites and some other textured geometry. 9) Switch to ortho matrix. 10) Draw a few sprites (e.g., game HUD). Steps 1-6 draw a bunch of background stuff. 8 draws most of the game content. 10 draws the HUD. As you can see, there are many layers, some of them full screen, and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if requested. Thanks in advance for any performance tips or general advice on my problem!
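
    If the renderer really is fill-rate bound, the cheapest win is usually to stop blending layers that are fully opaque; a full-screen blended quad costs fill rate whether or not anything shows through it. A sketch of the draw loop described above, with the existing drawing code behind placeholder helpers:

    #include <OpenGLES/ES1/gl.h>

    /* Placeholders for the drawing steps listed above. */
    extern void drawFullScreenBackground(void);   /* step 2 */
    extern void drawRemainingLayers(void);        /* steps 3-10 */

    static void renderFrame(void)
    {
        /* The opaque full-screen background does not need blending. */
        glDisable(GL_BLEND);
        drawFullScreenBackground();

        /* Re-enable blending only for the layers that actually need it. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawRemainingLayers();
    }

    PVRTC (e.g. GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG) mainly cuts texture bandwidth rather than raw fill, but on a 3G that often still buys a few frames; glTranslate, glColor4f and a handful of glScissor calls are unlikely to matter at this scale, and VBOs rarely help with only a few hundred vertices.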

    Read the article

  • SQL 2008 R2 login/network issue

    - by martinjd
    I have a Windows Server 2008 R2 new clean install , not a VM, that I have added to a Windows Server 2003 based domain using my account which has domain admin rights. The domain functional level is 2003. I performed a clean install of SQL Server 2008 R2 using my account which has domain admin rights. The installation completed without any errors. I logged into SSMS locally and attempted to add another domain account by clicking Search, Advanced and finding the user in the domain. When I return to the "Dialog - New" window and click OK I receive the following error: Create failed for Login 'Domain\User'. (Microsoft.SqlServer.Smo) An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Windows NT user or group 'Domain\User' not found. Check the name again. (Microsoft SQL Server, Error: 15401) I have verified that the firewall is off, tried adding a different domain user, tried using SA to add a user, installed the hotfix for KB 976494 and verified that the Local Security Policy for Domain Member: Digitally encrypt or sign secure channel Domain Member: Digitally encrypt secure channel Domain Member: Digitally sign secure channel are disabled none of which have made a difference. I can RDP to a Server 2003 server running SQL 2008 and add the same domain user without issue. Also if I try to connect with SSMS to the sql server from another system on the domain using my account I get the following error: Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) and on the database server I see the following in the security event log: An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: myUserName Account Domain: MYDOMAIN Failure Information: Failure Reason: An Error occured during Logon. Status: 0xc000018d Sub Status: 0x0 Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: MYWKS Source Network Address: - Source Port: - Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 I am sure that the "NULL SID" has some significant meaning but have no idea at this point what the issue could be.
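
    Error 15401 means the instance could not resolve the Windows account at all, and the NTLM failure with status 0xc000018d in the security log generally maps to a broken trust/secure channel between the new server and the domain, so re-joining the machine to the domain (or resetting its computer account) is worth trying before touching SQL Server itself. A quick check to run on the problem server (the account name is a placeholder):

    -- Returns the account's SID if the server can resolve it against the domain;
    -- NULL here confirms the lookup is failing, not the CREATE LOGIN step.
    SELECT SUSER_SID('MYDOMAIN\someuser');

    -- Once the lookup works, the same add can be scripted instead of using the dialog:
    CREATE LOGIN [MYDOMAIN\someuser] FROM WINDOWS;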

    Read the article

  • How can I bind the nested viewmodels to properties of a control

    - by Robert
    I used Microsoft's Chart Control of the WPF toolkit to write my own chart control. I blogged about it here. My Chart control stacks the yaxes in the chart on top of each other. As you can read in the article this all works quite well. Now I want to create a viewmodel that controls the data and axes in the chart. So far I'm able to add axes to the chart and show them in the chart. But I have a problem when I try to add the lineseries because it has one DependentAxis and one InDependentAxis property. I don't know how to assign the proper xAxis and yAxis controls to it. Below you see part of the LineSeriesViewModel. It has a nested XAxisViewModel and YAxisViewModel property. public class LineSeriesViewModel : ViewModelBase, IChartComponent { XAxisViewModel _xAxis; public XAxisViewModel XAxis { get { return _xAxis; } set { _xAxis = value; RaisePropertyChanged(() => XAxis); } } //The YAxis Property look the same } The viewmodels all have their own datatemplate. The xaml code looks like this: <UserControl.Resources> <DataTemplate x:Key="xAxisTemplate" DataType="{x:Type l:YAxisViewModel}"> <chart:LinearAxis x:Name="yAxis" Orientation="Y" Location="Left" Minimum="0" Maximum="10" IsHitTestVisible="False" Width="50" /> </DataTemplate> <DataTemplate x:Key="yAxisTemplate" DataType="{x:Type l:XAxisViewModel}"> <chart:LinearAxis x:Name="xAxis" Orientation="X" Location="Bottom" Minimum="0" Maximum="100" IsHitTestVisible="False" Height="50" /> </DataTemplate> <DataTemplate DataType="{x:Type l:LineSeriesViewModel}"> <!--Binding doesn't work on the Dependent and IndependentAxis! --> <!--YAxis XAxis and Series are properties of the LineSeriesViewModel --> <l:FastLineSeries DependentAxis="{Binding Path=YAxis}" IndependentAxis="{Binding Path=XAxis}" ItemsSource="{Binding Path=Series}"/> </DataTemplate> <Style TargetType="ItemsControl"> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <!--My stacked chart control --> <l:StackedPanel x:Name="stackedPanel" Width="Auto" Height="Auto" Background="LightBlue"> </l:StackedPanel> </ItemsPanelTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" ClipToBounds="True"> <!-- View is an ObservableCollection of all axes and series--> <ItemsControl x:Name="chartItems" ItemsSource="{Binding Path=View}" Focusable="False"> </ItemsControl> </Grid> This code works quite well. When I add axes they get drawn. But the DependentAxis and InDependentAxis of the lineseries control stay null, so the series doesn't get drawn. How can I bind the nested viewmodels to the properties of a control?
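
    The DependentAxis/IndependentAxis bindings stay null because the series presumably expects actual axis controls rather than viewmodels: XAxis and YAxis on LineSeriesViewModel hold other viewmodels, while the LinearAxis instances created by the axis DataTemplates are separate objects the series never sees. One pragmatic workaround (not MVVM-pure; a sketch with illustrative names, assuming the toolkit's LinearAxis) is to let each axis viewmodel own the single LinearAxis it represents and bind both the template and the series to that instance:

    using System.Windows.Controls.DataVisualization.Charting;

    public class XAxisViewModel : ViewModelBase
    {
        private LinearAxis _axisControl;

        // The one LinearAxis this viewmodel represents; the DataTemplate presents it
        // (e.g. via a ContentPresenter) and the series binds to it as well.
        public LinearAxis AxisControl
        {
            get
            {
                if (_axisControl == null)
                {
                    _axisControl = new LinearAxis
                    {
                        Orientation = AxisOrientation.X,
                        Location = AxisLocation.Bottom,
                        Minimum = 0,
                        Maximum = 100
                    };
                }
                return _axisControl;
            }
        }
    }

    The series template then binds DependentAxis="{Binding YAxis.AxisControl}" and IndependentAxis="{Binding XAxis.AxisControl}", and the axis DataTemplates display AxisControl instead of instantiating a fresh LinearAxis of their own.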

    Read the article

  • Memory leak on CollectionView.View.Refresh

    - by Dabblernl
    I have defined my binding thus: <TreeView ItemsSource="{Binding UsersView.View}" ItemTemplate="{StaticResource MyDataTemplate}" /> The CollectionViewSource is defined thus: private ObservableCollection<UserData> users; public CollectionViewSource UsersView { get; set; } UsersView = new CollectionViewSource { Source = users }; UsersView.SortDescriptions.Add(new SortDescription("IsLoggedOn", ListSortDirection.Descending)); UsersView.SortDescriptions.Add(new SortDescription("Username", ListSortDirection.Ascending)); So far, so good, this works as expected: the view shows first the users that are logged on, in alphabetical order, then the ones that are not. However, the IsLoggedOn property of the UserData is updated every few seconds by a BackgroundWorker thread, and then the code calls UsersView.View.Refresh(); on the UI thread. Again this works as expected: users that log on are moved from the bottom of the view to the top and vice versa. However: every time I call the Refresh method on the view, the application hoards 3.5 MB of extra memory, which is only released after application shutdown (or after an OutOfMemoryException...). I did some research and below is a list of fixes that did NOT work: The UserData class implements INotifyPropertyChanged. Changing the underlying users collection does not make any difference at all: any IEnumerable<UserData> as a source for the CollectionViewSource causes the problem. Changing the CollectionViewSource source to a List<UserData> (and refreshing the binding), or inheriting from ObservableCollection to get access to the underlying Items collection and sorting that in place, does not work either. I am out of ideas! Help? EDIT: I found it: the resource MyDataTemplate contains a Label that is bound to a UserData object to show one of its properties, the UserData objects being handed down by the TreeView's ItemsSource. The Label has a ContextMenu defined thus: <ContextMenu Background="Transparent" Width="325" Opacity=".8" HasDropShadow="True"> <PrivateMessengerUI:MyUserData IsReadOnly="True" > <PrivateMessengerUI:MyUserData.DataContext> <Binding Path="."/> </PrivateMessengerUI:MyUserData.DataContext> </PrivateMessengerUI:MyUserData> </ContextMenu> The MyUserData object is a UserControl that shows all properties of the UserData object. In this way the user first only sees one piece of data of a user and on a right click sees all of it. When I remove the MyUserData UserControl from the DataTemplate the memory leak disappears! How can I still implement the behaviour as specified above?
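
    Since the leak comes from every generated item carrying its own inline ContextMenu with a MyUserData instance inside (and Refresh() regenerating all of those containers), one sketch of a fix is to declare a single shared ContextMenu as a resource and let it pick up the right UserData from whichever item opens it; "Username" below is a stand-in for whatever property the Label currently shows:

    <!-- One ContextMenu instance for the whole tree. PlacementTarget is the element
         the menu was opened on, so its DataContext is that item's UserData. -->
    <ContextMenu x:Key="userDetailsMenu" Background="Transparent" Width="325" Opacity=".8" HasDropShadow="True"
                 DataContext="{Binding Path=PlacementTarget.DataContext, RelativeSource={RelativeSource Self}}">
        <PrivateMessengerUI:MyUserData IsReadOnly="True" />
    </ContextMenu>

    <!-- In MyDataTemplate, reference the shared menu instead of inlining one: -->
    <Label Content="{Binding Username}" ContextMenu="{StaticResource userDetailsMenu}" />

    Because resources are shared by default, only one MyUserData visual tree ever exists, so repeated Refresh() calls have nothing extra to hold on to.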

    Read the article

  • javax.servlet.ServletException - how could i get to the cause?

    - by Michal
    Hi, i'm getting a very strange error while opening one of the pages in my web app. The application is built on Seam 2.2 and is using JSF (RichFaces) as a view technology. I run it on Tomcat 6. The error i'm describing doesn't occur on my machine (Mac OS X), but it does on my client's dev machines (Windows) and on the server (Linux Debian). I'm sure i'm running the same version on each system and i have tried connecting to the same database... In logs everything looks fine - each next JSF Phase executes normally, and after the last one, there is this moment when the request starts processing for the SEAM debug page... And this is the stack trace i see on the debug page (nothing is logged): Exception during request processing: Caused by javax.servlet.ServletException with message: "Servlet execution threw an exception" org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:313) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:83) org.jboss.seam.web.IdentityFilter.doFilter(IdentityFilter.java:40) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.MultipartFilter.doFilter(MultipartFilter.java:90) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.ExceptionFilter.doFilter(ExceptionFilter.java:64) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.RedirectFilter.doFilter(RedirectFilter.java:45) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.ajax4jsf.webapp.BaseXMLFilter.doXmlFilter(BaseXMLFilter.java:178) org.ajax4jsf.webapp.BaseFilter.handleRequest(BaseFilter.java:290) org.ajax4jsf.webapp.BaseFilter.processUploadsAndHandleRequest(BaseFilter.java:388) org.ajax4jsf.webapp.BaseFilter.doFilter(BaseFilter.java:515) org.jboss.seam.web.Ajax4jsfFilter.doFilter(Ajax4jsfFilter.java:56) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.LoggingFilter.doFilter(LoggingFilter.java:60) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.web.HotDeployFilter.doFilter(HotDeployFilter.java:53) org.jboss.seam.servlet.SeamFilter$FilterChainImpl.doFilter(SeamFilter.java:69) org.jboss.seam.servlet.SeamFilter.doFilter(SeamFilter.java:158) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) pl.mgibowski.alterium.util.LoggingFilter.doFilter(LoggingFilter.java:18) org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:465) org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) Exception without any cause... I was trying to catch the exception with my custom Filter (LoggingFilter.java that you can see on the strack trace), using this code: try { filterChain.doFilter(servletRequest, servletResponse); } catch (Throwable e) { e.printStackTrace(); System.out.println("Stack trace:"); System.out.println(e.getStackTrace()); System.out.println("Cause:"); System.out.println(e.getCause()); } But it doesn't catch anything, the line 18 from the stack trace is this one: filterChain.doFilter(servletRequest, servletResponse); nothing gets caught by the try block. Anybody has any ideas about how could i get closer to the real cause?
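
    Two details stand out in the catch block above: javax.servlet.ServletException historically stores the wrapped exception in rootCause rather than in getCause() (which is why nothing useful shows up), and System.out.println(e.getStackTrace()) only prints the array reference. A sketch of a filter that unwraps the root cause (the class name is illustrative):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class RootCauseLoggingFilter implements Filter {

        public void init(FilterConfig config) { }

        public void destroy() { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            try {
                chain.doFilter(request, response);
            } catch (ServletException e) {
                // getRootCause() is often populated even when getCause() is null.
                Throwable root = e.getRootCause();
                while (root instanceof ServletException
                        && ((ServletException) root).getRootCause() != null) {
                    root = ((ServletException) root).getRootCause();
                }
                System.out.println("Root cause of request failure:");
                (root != null ? root : e).printStackTrace();
                throw e;
            }
        }
    }

    That said, judging by the stack trace Seam's ExceptionFilter sits inside this filter in the chain and handles the exception itself (that is what renders the debug page), so the exception may never propagate out to an outer filter at all; raising Seam's log level around exception handling is another way to surface the original cause.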

    Read the article
