Search Results


  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
    One of the customers of the company I work for has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should put https://<customer's IP> in my browser and get a web page. Well, it works from any network I've tried (even from my smartphone), just not from my company's LAN. I thought it may be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I just connected a PC directly to the network connection of our provider, bypassing our firewall completely, and still it didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to our customer's IP. The tracert is successful, as the final hop shown in the output is the host I want to reach. So they said there's no problem. :( I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (I've tried using another random port and that gives me an error message, as it should). Still, from any browser of any PC in my LAN I can't open that URL. I tried checking the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than they are if I successfully connect to another HTTPS website. See the attached screenshot: I had to blur the IPs; anyway, the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, let me know if you need further details. Edit: when I say "it doesn't work" I mean: the page doesn't open, the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a webpage (it's the logon page for the customer's firewall admin interface: so there's the firewall brand's logo and there are fields to enter a user id and a password). Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did: Run a Kubuntu 12.04 live distro on a spare laptop; Updated all the packages I could and installed Wireshark; Attached it to my LAN and verified that I couldn't open https://<customer's IP>. Verified that the Wireshark trace for this attempt was the same as the one I've already posted; Verified that I could connect to another customer's host using rdesktop (it worked); Tried to rdesktop to <customer's IP>, here's the output: kubuntu@kubuntu:/etc$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: recv: Connection reset by peer Disconnected the laptop from the LAN; Disconnected the firewall from the Extranet connection, connected the laptop instead. 
Set its network configuration so that I could access the Internet; Verified that I could connect to other websites in http and https and in RDP to other customers' hosts - it all worked as expected; Verified that I could still traceroute to <customer's IP>: I could; Verified that I still couldn't open https://<customer's IP> (same exact result as before); Checked the WireShark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all; Tried to run rdesktop again, with a slightly different result: kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP> Autoselected keyboard map en-us ERROR: <customer's IP>: unable to connect Finally gave up, put everything back as it was before, turned off the laptop and lost the WireShark traces I had saved. :( I still remember them very well though. :) Can you get anything out of it? Thank you very much. Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get: user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443 CONNECTED(00000003) If I now type GET / the output pauses for several seconds and then I get: write:errno=104 I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks. Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies, that's why I get a connection using telnet or the openSSL client. This is the wireshark trace from inside our LAN: But this is the trace on the Extranet network card: This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all that took the time to read my question and post suggestions. I'll update this post again.
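
    For anyone reproducing this, the checks above boil down to watching the wire while retrying the connection; a sketch of the two commands used on the live CD, with the interface name as a placeholder and <customer's IP> left elided as above:

        # capture on the outward-facing interface while retrying the connection
        sudo tcpdump -ni eth0 host <customer's IP> and tcp port 443

        # retry HTTPS with verbose output, ignoring certificate errors
        curl -vk https://<customer's IP>/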

    Read the article

  • Apache on Win32: Slow Transfers of single, static files in HTTP, fast in HTTPS

    - by Michael Lackner
    I have a weird problem with Apache 2.2.15 on Windows 2000 Server SP4. Basically, I am trying to serve larger static files, images, videos etc. The download seems to be capped at around 550kB/s even over 100Mbit LAN. I tried other protocols (FTP/FTPS/FTP+ES/SCP/SMB), and they are all in the multi-megabyte range. The strangest thing is that, when using Apache with HTTPS instead of HTTP, it serves very fast, around 2.7MByte/s! I also tried the AnalogX SimpleWWW server just to test the plain HTTP speed of it, and it gave me a healthy 3.3Mbyte/s. I am at a total loss here. I searched the web, and tried to change the following Apache configuration directives in httpd.conf, one at a time, mostly to no avail at all: SendBufferSize 1048576 #(tried multiples of that too, up to 100Mbytes) EnableSendfile Off #(minor performance boost) EnableMMAP Off Win32DisableAcceptEx HostnameLookups Off #(default) I also tried to tune the following registry parameters, setting their values to 4194304 in decimal (they are REG_DWORD), and rebooting afterwards: HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultReceiveWindow HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters\DefaultSendWindow Additionally, I tried to install mod_bw, which sets the event timer precision to 1ms, and allows for bandwidth throttling. According to some people it boosts static file serving performance when set to unlimited bandwidth for everybody. Unfortunately, it did nothing for me. So: AnalogX HTTP: 3300kB/s Gene6 FTPD, plain: 3500kB/s Gene6 FTPD, Implicit and Explicit SSL, AES256 Cipher: 1800-2000kB/s freeSSHD: 1100kB/s SMB shared folder: about 3000kB/s Apache HTTP, plain: 550kB/s Apache HTTPS: 2700kB/s Clients that were used in the bandwidth testing: Internet Explorer 8 (HTTP, HTTPS) Firefox 8 (HTTP, HTTPS) Chrome 13 (HTTP, HTTPS) Opera 11.60 (HTTP, HTTPS) wget under CygWin (HTTP, HTTPS) FileZilla (FTP, FTPS, FTP+ES, SFTP) Windows Explorer (SMB) Generally, transfer speeds are not too high, but that's because the server machine is an old quad Pentium Pro 200MHz machine with 2GB RAM. However, I would like Apache to serve at least 2MByte/s instead of 550kB/s, and that already works with HTTPS easily, so I fail to see why plain HTTP is so crippled. I am using a Kerio Winroute Firewall, but no throttling and no special filters peeking into HTTP traffic, just the plain firewall functionality for blocking/allowing connections. The Apache error.log (LogLevel info) shows no warnings, no errors. Also nothing strange to be seen in access.log. I have already stripped down my httpd.conf to the bare minimum just to make sure nothing is interfering, but that didn't help either. If you have any idea, help would be greatly appreciated, since I am totally out of ideas! Thanks! Edit: I have now tried a newer Apache 2.2.21 to see if it makes any difference. However, the behaviour is exactly the same. Edit 2: KM01 has requested a sniff on the HTTP headers, so here comes the LiveHTTPHeaders output (a Firefox extension). The output is generated on downloading a single file called "elephantsdream_source.264", which is an H.264/AVC elementary video stream under an Open Source license. I have taken the liberty of editing the URL, removing folders and changing the actual server's domain name to www.mydomain.com. 
Here it is: LiveHTTPHeaders, Plain HTTP: http://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:55:16 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain LiveHTTPHeaders, HTTPS: https://www.mydomain.com/elephantsdream_source.264 GET /elephantsdream_source.264 HTTP/1.1 Host: www.mydomain.com User-Agent: Mozilla/5.0 (Windows NT 5.2; WOW64; rv:6.0.2) Gecko/20100101 Firefox/6.0.2 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3 Accept-Encoding: gzip, deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Connection: keep-alive HTTP/1.1 200 OK Date: Wed, 21 Dec 2011 20:56:57 GMT Server: Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8r PHP/5.2.17 Last-Modified: Thu, 28 Oct 2010 20:20:09 GMT Etag: "c000000013fa5-29cf10e9-493b311889d3c" Accept-Ranges: bytes Content-Length: 701436137 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/plain
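
    For readability, the httpd.conf directives tried above, collected in one place (values are the ones from the attempts described; each was changed one at a time):

        SendBufferSize 1048576    # also tried multiples of this, up to 100Mbytes
        EnableSendfile Off        # minor performance boost
        EnableMMAP Off
        Win32DisableAcceptEx
        HostnameLookups Off       # the default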

    Read the article

  • Need help identifying a nasty rootkit in Windows

    - by goofrider
    I have a nasty rootkit that no tools seem to be able to identify. I know for sure it's a rootkit, but I can't figure out which rootkit it is. Here's what I've gathered so far: It creates multiple copies of itself in %HOME%\Local Settings\Temp with names like Q.EXE, IAJARZ.exe, etc., and installs them as hidden services. These EXEs have SysInternals identifiers in them so they're definitely rootkits. It hooked very deep into the system, including file read/write, security policies, registry read/write, and possibly WinSock/TCP/IP. When going to Sophos.com to download their software, the rootkit injects something called Microsoft Ajax Toolkit into the page, which injects code into the email submission form in order to redirect it. (EDIT: I might have panicked. Looks like Sophos does use an AJAX email form; their form is just broken on Chrome, so it looked like a mail form injection attack. The link is http://www.sophos.com/en-us/products/free-tools/virus-removal-tool/download.aspx ) Super-Antispyware found a lot of spyware cookies, in the name of .kaspersky.2o7.net, etc. (just check 2o7.net, looks like it's a legit ad company). I tried comparing DNS lookups from the infected system and from systems in other physical locations; no DNS redirections, it seems. I used dd to copy the MBR and compared it with the MBR provided by the ms-sys package; no differences, so it's not infecting the MBR. No antivirus or rootkit scanner has been able to identify it. Most of them can't even find it. I tried scanning in situ (normal mode), in safe mode, and booting to a Linux live CD. Scanners used: Avast, Sophos Anti-Rootkit, Kaspersky TDSSKiller, GMER, RootkitRevealer, and many others. Kaspersky reported some unsigned system files that ought to be signed (e.g. tcpip.sys), and reported a number of MD5 mismatches. But otherwise it couldn't identify anything based on signature. When running Sysinternals RootkitRevealer and Sophos Anti-Rootkit, CPU usage goes up to 100% and they get stuck. The rootkit is blocking them. When trying to run or install HiJackThis, RootkitRevealer and some other scanners, it tells me the system security policy prevents running/installing them. The list of malicious activities goes on and on. Here's a sample of logs from all my scans. In particular, aswSnx.SYS, apnenfno.sys and PROCMON20.SYS have a huge number of hooks. It's hard to tell if the rootkit replaced legit program files like aswSnx.SYS (from Avast) and PROCMON20.SYS (from Sysinternals Process Monitor). I can't find whether apnenfno.sys is from a legit program. Help identifying it is appreciated. Trend Micro RootkitBuster ------ [HIDDEN_REGISTRY][Hidden Reg Value]: KeyPath : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sptd\Cfg Root : 586bfc0 SubKey : Cfg ValueName : g0 Data : 38 23 E8 D0 BF F2 2D 6F ... 
ValueType : 3 AccessType: 0 FullLength: 61 DataSize : 32 [HOOKED_SERVICE_API]: Service API : ZwCreateMutant Image Path : C:\WINDOWS\System32\Drivers\aswSnx.SYS OriginalHandler : 0x8061758e CurrentHandler : 0xaa66cce8 ServiceNumber : 0x2b ModuleName : aswSnx.SYS SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwCreateThread Image Path : c:\windows\system32\drivers\apnenfno.sys OriginalHandler : 0x805d1038 CurrentHandler : 0xaa5f118c ServiceNumber : 0x35 ModuleName : apnenfno.sys SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwDeleteKey Image Path : C:\WINDOWS\system32\Drivers\PROCMON20.SYS OriginalHandler : 0x80624472 CurrentHandler : 0xa709b0f8 ServiceNumber : 0x3f ModuleName : PROCMON20.SYS SDTType : 0x0 HiJackThis ------ O23 - Service: JWAHQAGZ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\JWAHQAGZ.exe O23 - Service: LHIJ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\LHIJ.exe Kaspersky TDSSKiller ------ 21:05:58.0375 3936 C:\WINDOWS\system32\ati2sgag.exe - copied to quarantine 21:05:59.0217 3936 ATI Smart ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0342 3936 C:\WINDOWS\system32\BUFADPT.SYS - copied to quarantine 21:05:59.0856 3936 BUFADPT ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0965 3936 C:\Program Files\CrashPlan\CrashPlanService.exe - copied to quarantine 21:06:00.0152 3936 CrashPlanService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0246 3936 C:\WINDOWS\system32\epmntdrv.sys - copied to quarantine 21:06:00.0433 3936 epmntdrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0464 3936 C:\WINDOWS\system32\EuGdiDrv.sys - copied to quarantine 21:06:00.0526 3936 EuGdiDrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0604 3936 C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe - copied to quarantine 21:06:01.0181 3936 FLEXnet Licensing Service ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0321 3936 C:\Program Files\AddinForUNCFAT\UNCFATDMS.exe - copied to quarantine 21:06:01.0430 3936 OTFSDMS ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0492 3936 C:\WINDOWS\system32\DRIVERS\tcpip.sys - copied to quarantine 21:06:01.0539 3936 Tcpip ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0601 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - copied to quarantine 21:06:01.0664 3936 HKLM\SYSTEM\ControlSet003\services\TULPUWOX - will be deleted on reboot 21:06:01.0664 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - will be deleted on reboot 21:06:01.0664 3936 TULPUWOX ( UnsignedFile.Multi.Generic ) - User select action: Delete 21:06:01.0757 3936 C:\WINDOWS\system32\Drivers\usbaapl.sys - copied to quarantine 21:06:01.0866 3936 USBAAPL ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0913 3936 C:\Program Files\VMware\VMware Player\vmware-authd.exe - copied to quarantine 21:06:02.0443 3936 VMAuthdService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - User select action: Skip 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - User select action: Skip
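
    A hedged aside for anyone repeating the unsigned-file check: Sysinternals sigcheck should be able to list unsigned driver files directly; a sketch, assuming its -u/-e/-s switches (unsigned only, executables only, recurse) behave as documented:

        sigcheck -u -e -s C:\WINDOWS\system32\Drivers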

    Read the article

  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 * After monkeying with additional debug logging I see some log entries of interest. 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more 27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors. ^^^ If I can remedy the above messages I think I'll be good to go ^^^ * EDIT 2 * Grasping at straws I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically and what I have seen before... dunno why, but that did the trick. Also re-checked perms on the dir the files live in. As certain as I was, they were correct, with named having rw. CentOS 6 (final) dhcpd 4.1.1-P1 named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6 Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created. chroot: /var/named/chroot I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic - no matter which directory, with named owning it and wide-open perms, I now get nowhere). Along the way I, at one point, got a permission denied when named tried to create the journal. Resolved the issue by: chown --recursive named:named /var/named chmod --recursive 777 /var/named The journal was then created and here's where things fell apart. I attempted to tame permissions to something more sane and broke it. Once changed, and having restarted named, it threw an error indicating the journal was out of sync (or something to that effect)... didn't matter since this is a new setup, so I deleted it and now it is not recreated. Now though I see no errors in /var/log/messages, my chrooted /var/log/named.log, or chrooted /var/log/named.debug. I increased the debug level with 'rndc trace' - no love. Increased trace to 10, still nothing. SELinux is disabled... [root@server temp]# sestatus SELinux status: disabled dhcpd.conf... allow client-updates; ddns-update-style interim; subnet 192.168.111.0 netmask 255.255.255.224 { ... key dhcpudpate { algorithm hmac-md5; secret LDJMdPdEZED+/nN/AGO9ZA==; } zone example.lan. { primary 192.168.111.2; key dhcpudpate; } } named.conf... key dhcpudpate { algorithm hmac-md5; secret "LDJMdPdEZED+/nN/AGO9ZA=="; }; zone "example.lan" { type master; file "/var/named/dynamic/example.lan.db"; allow-transfer { none; }; allow-update { key dhcpudpate; }; notify false; check-names ignore; }; The following shows /var/log/named.log output of named starting up - no errors. 
27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601 27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501 27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0 27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501 27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0 27-Jul-2012 21:33:39.353 general: notice: running 27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10' 27-Jul-2012 21:34:03.825 general: info: debug level is now 10 ...and /var/log/messages for a named start... Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium, Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support Jul 27 23:02:04 server named[9124]: ---------------------------------------------------- Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576 Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads Jul 27 23:02:04 server named[9124]: using up to 4096 sockets Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf' Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535] Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53 Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys' Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys' Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953 What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot here and, if so, how? Many thanks.
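
    For completeness, the EDIT 2 workaround spelled out as commands; a sketch, assuming the zone file path from named.conf above (prefix the paths with /var/named/chroot when running chrooted; the reverse zone journal is handled the same way):

        # BIND names the journal <zonefile>.jnl, next to the zone file
        touch /var/named/dynamic/example.lan.db.jnl
        chown named:named /var/named/dynamic/example.lan.db.jnl
        chmod 660 /var/named/dynamic/example.lan.db.jnl
        service named restart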

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~ 50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. d) Gigabit Ethernet everywhere. The organization only has one domain, and it did not change during the migration. e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (eg, no TS, no mail, no IIS, etc.) I have two major problems I'm hoping you can help me with: 1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX to finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX? 2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc? Things I've thought about trying: For problem 1: a) Reenable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by reenabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? 
    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? For problem 2: a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this. In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
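
    On 'breaking into' the CSC folder from an attached drive, the built-in takeown and icacls tools are the usual brute-force route; a sketch from an elevated prompt, with E: as a placeholder for whatever letter the attached disk gets:

        takeown /f E:\Windows\CSC /r /d y
        icacls E:\Windows\CSC /grant Administrators:F /t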

    Read the article

  • dual boot install--no GRUB

    - by Jim Syyap
    My computer recently had a hardware upgrade and now runs on Windows 7. I decided to install Ubuntu 11.04 as dual boot using the ISO I got from ubuntu.com downloaded onto my USB stick. Restarting with the USB stick, I was able to install Ubuntu 11.04 choosing the option: Install Ubuntu 11.04 side by side with Windows 7 (or something like that). No errors were encountered on installation. However on restarting, there was no GRUB; the system went straight into Windows 7. Looking for answers, I found these: http://essayboard.com/2011/07/12/how-to-dual-boot-ubuntu-11-04-and-windows-7-the-traditional-way-through-grub-2/ http://ubuntuforums.org/showthread.php?t=1774523 Following their instructions, I got: Boot Info Script 0.60 from 17 May 2011 ============================= Boot Info Summary: =============================== => Windows is installed in the MBR of /dev/sda. => Syslinux MBR (3.61-4.03) is installed in the MBR of /dev/sdb. => Grub2 (v1.99) is installed in the MBR of /dev/sdc and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos7)/boot/grub on this drive. sda1: __________________________________________________ ________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /grldr /bootmgr /Boot/BCD /grldr sda2: __________________________________________________ ________________________ File system: ntfs Boot sector type: Windows Vista/7 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /Windows/System32/winload.exe sdb1: __________________________________________________ ________________________ File system: vfat Boot sector type: SYSLINUX 4.02 debian-20101016 ...........>...r>....... ......0...~.k...~...f...M.f.f....f..8~....>2} Boot sector info: Syslinux looks at sector 1437504 of /dev/sdb1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. According to the info in the boot sector, sdb1 starts at sector 0. But according to the info from fdisk, sdb1 starts at sector 62. Operating System: Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys sdc1: __________________________________________________ ________________________ File system: ntfs Boot sector type: Windows XP Boot sector info: No errors found in the Boot Parameter Block. 
    Operating System: Boot files: sdc2: __________________________________________________ ________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdc5: __________________________________________________ ________________________ File system: swap Boot sector type: - Boot sector info: sdc6: __________________________________________________ ________________________ File system: swap Boot sector type: - Boot sector info: sdc7: __________________________________________________ ________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 11.04 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdc8: __________________________________________________ ________________________ File system: swap Boot sector type: - Boot sector info: Going back into Ubuntu and running sudo fdisk -l, I got this: ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0002f393 Device Boot Start End Blocks Id System /dev/sda1 * 1 13 102400 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 13 19458 156185600 7 HPFS/NTFS Disk /dev/sdb: 2011 MB, 2011168768 bytes 62 heads, 62 sectors/track, 1021 cylinders Units = cylinders of 3844 * 512 = 1968128 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f2ab9 Device Boot Start End Blocks Id System /dev/sdb1 * 1 1021 1962331 c W95 FAT32 (LBA) Disk /dev/sdc: 1000.2 GB, 1000202043392 bytes 255 heads, 63 sectors/track, 121600 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00261ddd Device Boot Start End Blocks Id System /dev/sdc1 * 1 60657 487222656+ 7 HPFS/NTFS /dev/sdc2 60657 121600 489527681 5 Extended /dev/sdc5 120563 121600 8337703+ 82 Linux swap / Solaris /dev/sdc6 120073 120562 3930112 82 Linux swap / Solaris /dev/sdc7 60657 119584 473328640 83 Linux /dev/sdc8 119584 120072 3923968 82 Linux swap / Solaris Should I proceed and do the following? Assuming Ubuntu 11.04 was installed on device sdb1, do this: sudo mount /dev/sdb1 /mnt Then do this: sudo grub-install --root-directory=/mnt /dev/sdb Notice there are two dashes in front of the root directory, and I'm not using sdb1 but sdb. Since the command in step 15 had reinstalled Grub 2, now we need to unmount the /mnt (i.e. sdb1) to clean up. Do this: sudo umount /mnt Reboot and remove Ubuntu 11.04 CD/DVD from disk tray. Log into Ubuntu 11.04 (you have no choice but it will make you log into Ubuntu 11.04 at this point). Open up a terminal in Ubuntu 11.04 (using real installation, not live CD/DVD). Execute this command: sudo update-grub Reboot the machine.
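
    For readability, the quoted recovery sequence as one block; a sketch that keeps the guide's assumption that Ubuntu lives on sdb1, even though the fdisk output above (ext4 on sdc7) suggests otherwise:

        sudo mount /dev/sdb1 /mnt
        sudo grub-install --root-directory=/mnt /dev/sdb
        sudo umount /mnt
        # reboot into Ubuntu, then rebuild the menu
        sudo update-grub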

    Read the article

  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good website hosting company that can offer me all the services that I need for hosting my PHP & MySQL based website. Now this is a community-based website and users will be able to upload pictures, etc. The hosting company that I have in mind currently lets me do everything... lets me use mail(), supports CRON jobs, etc. Of course they are charging about $6/month. Now the only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time. This kind of contradicts their frontpage ad of "UNLIMITED SPACE" on their website. Apart from this, I know of no other reason why I should not go with this hosting company. But my issue is that the 50,000-file limit is something I cannot live with, once the users increase significantly in number and the files they upload exceed 50,000. Now since this is a dynamic website and also includes sensitive issues like payments, etc., I am not sure if I should go ahead with this company as I am just starting out, and then later switch over to a better hosting company which does not limit me to 50,000 files. If I need to switch over once I host with this company, I will need to take backups of all the files located in my account (jpg, zip, etc.), then upload them to the new host. I am not aware of any tools that can help me in this process. Can you please mention if you know any? I can go ahead with the other companies right now, but their cost is double or triple the current price and they all offer fewer features than my current choice. If I pay more, then they are ready to accommodate my higher demands. Unfortunately, the company that I am willing to go with now does NOT have any other higher/better plans that I can switch to. So that's the really really bad part. So my question(s): Since I am starting out with my website and since the scope of users initially is going to be less/small, should I go ahead with the current choice and then once the demand increases, switch over to a better provider? If yes, how can I transfer my database, and especially the jpg files, etc., to the new provider? I don't even know the tools required to back up and restore to another host. (I don't like this idea but still..) Should I go ahead and pay more right now and go with better providers (without knowing if the website is going to do really that well) just to save myself the trouble of having to take a backup of the 50,000 files, upload them to a new host from the old host, and start paying double/triple the price without even knowing if I would see the returns I expected? Backup and restore in such bulky numbers is something that I have never done before and hence I am stuck here trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: it is the customer's responsibility to back up and restore data and they are not liable for any loss. So no matter which hosting company I go with, they ask me to take backups via FTP so that I can restore them whenever I want (& it seems to be safer to have the files locally with me). Some provide tools for backup and some do not, and I am not sure how much their backup tools can be trusted, considering the disclaimers they have. 
    I have never backed up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent 2 days fighting with myself trying to decide what to do and finally concluded that this is a double-edged sword and I can't arrive at a satisfactory final decision without involving others' suggestions. I believe that someone must be out there who has had such a troublesome decision to make. So all your suggestions to help me make my decision are appreciated. Thank you all.
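
    On tooling for the bulk copy itself, one common approach is a database dump plus an FTP mirror; a sketch, with host names, credentials and paths as placeholders:

        # dump the database on (or from) the old host
        mysqldump -u dbuser -p communitydb > communitydb.sql

        # mirror the uploaded files to a local folder over FTP
        lftp -u ftpuser ftp.oldhost.example -e "mirror --verbose /public_html ./backup; quit"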

    Read the article

  • Mail Server on Google Apps, Hotmail Blocks

    - by detay
    My company provides private research tools (sites) for companies. Each site has its own domain name and users. These sites need to be able to send outbound mail for notifications such as password reminders. We started using Google Apps for this purpose, but lately we are having tremendous problems with mailing, especially to Hotmail. I added the SPF record as required. Mxtoolbox says my domains are not in any blacklists. I was told to contact https://postmaster.live.com/snds/addnetwork.aspx and add my network. Given that I am using Google's mail servers for sending mail, should I add their IP address when registering my domain? Or should I add my server's IP even though I am not hosting the mail server on it? Here's an example of a returned message: > Delivered-To: [email protected] Received: by 10.112.1.195 with SMTP > id 3csp62757lbo; > Mon, 10 Dec 2012 10:42:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; > d=gmail.com; s=20120113; > h=mime-version:from:to:x-failed-recipients:subject:message-id:date > :content-type; > bh=x41JiCH75hsonh+5c/kzwIs7R8u4Hum2u396lkV3g2w=; > b=v+bBPvufqfVkNz2UrnhyyoGj1Cuwf6x/yMj0IXJgH27uJU7TqBOvbwnNHf33sQ7B9T > uGzxwryC2fmxBh72cGz1spwWK348nUp6KK73MXKnF8mpc3nPQ8Ke+EQpSPeJdq/7oZvd > scZTGpy//IiGEUDU5bJ7YPqYXQRycY5N6AF5iI4mWWwbS4opybp3IKpDDktu11p/YEEO > Fj9wnSCx3nLMXB/XZgSjjmnaluGhYNdh3JFtz83Vmr50qXTG/TuIXlkirP73GfGvIt2S > 7sy8/YdEbZR92mYe9wucditYr7MOuyjyYrZYCg+weXeTZiJX1PfBVieD+kD9MnCairnP > rQeQ== Received: by 10.14.215.197 with SMTP id e45mr52766237eep.0.1355164974784; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) MIME-Version: 1.0 Return-Path: <> Received: by 10.14.215.197 with SMTP id > e45mr65950764eep.0; Mon, 10 Dec 2012 10:42:54 -0800 (PST) From: Mail > Delivery Subsystem <[email protected]> To: > [email protected] X-Failed-Recipients: ********@hotmail.fr > Subject: Delivery Status Notification (Failure) Message-ID: > <[email protected]> Date: Mon, 10 Dec 2012 > 18:42:54 +0000 Content-Type: text/plain; charset=ISO-8859-1 > > Delivery to the following recipient failed permanently: > > *********@hotmail.fr > > Technical details of permanent failure: Message rejected. See > http://support.google.com/mail/bin/answer.py?answer=69585 for more > information. > > ----- Original message ----- > > Received: by 10.14.215.197 with SMTP id > e45mr52766228eep.0.1355164974700; > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Return-Path: <[email protected]> Received: from WIN-1BBHFMJGUGS ([91.93.102.12]) > by mx.google.com with ESMTPS id l3sm26787704eem.14.2012.12.10.10.42.53 > (version=TLSv1/SSLv3 cipher=OTHER); > Mon, 10 Dec 2012 10:42:54 -0800 (PST) Message-ID: <[email protected]> MIME-Version: 1.0 > From: [email protected] To: *********@hotmail.fr Date: Mon, 10 Dec > 2012 10:42:54 -0800 (PST) Subject: Rejoignez-nous sur achbanlek.com > Content-Type: text/html; charset=utf-8 Content-Transfer-Encoding: > base64 > > ----- End of message -----
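
    For reference, the SPF record Google documents for mail sent through Google Apps is a TXT record on the sending domain along these lines (~all can be tightened to -all once delivery is stable):

        example.com.  IN  TXT  "v=spf1 include:_spf.google.com ~all"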

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking in the right direction or is there a better way to solve this?
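
    One detail worth noting in the configuration above: a named <binding> element is only applied to an endpoint that references it through a bindingConfiguration attribute; without it, the endpoint falls back to the unnamed default binding. A sketch of the public endpoint wired up that way:

        <endpoint contract="Server.IService2"
                  binding="netTcpBinding"
                  bindingConfiguration="public"
                  address="net.tcp://localhost:8081/Service2.svc"/>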

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered: How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question which leads you to this blog. What database, and how will it be structured? Here's an excellent talk by Guy Naor, and good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ in the data they hold. So, how can you run migrations for all schemas? Here's an answer. Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way. Finally, to my question When a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea. Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable.. which also brings me to the next question. Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema. Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas). Thanks, and I hope that wasn't too long! UPDATE May 11, 2010 11:26 GMT+8 Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know. module SchemaUtils def self.add_schema_to_path(schema) conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}" end def self.reset_search_path conn = ActiveRecord::Base.connection conn.execute "SET search_path TO #{conn.schema_search_path}" end def self.create_and_migrate_schema(schema_name) conn = ActiveRecord::Base.connection schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'") if schemas.include?(schema_name) tables = conn.tables Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}" else Rails.logger.info "About to create #{schema_name}" conn.execute "create schema #{schema_name}" end # Save the old search path so we can set it back at the end of this method old_search_path = conn.schema_search_path # Tried to set the search path like in the methods above (from Guy Naor) # conn.execute "SET search_path TO #{schema_name}" # But the connection itself seems to remember the old search path. # If set this way, it works. 
conn.schema_search_path = schema_name # Directly from databases.rake. # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake file = "#{Rails.root}/db/schema.rb" if File.exists?(file) Rails.logger.info "About to load the schema #{file}" load(file) else abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!} end Rails.logger.info "About to set search path back to #{old_search_path}." conn.schema_search_path = old_search_path end end
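
    Usage would presumably look something like this, called from the sign-up flow; a sketch, where the account model and its subdomain accessor are assumptions:

        # after the new account record is saved
        SchemaUtils.create_and_migrate_schema("tenant_#{account.subdomain}")

        # later, scope a request's queries to that tenant
        SchemaUtils.add_schema_to_path("tenant_#{account.subdomain}")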

    Read the article

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc., which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel=Model. My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an interface, but that also had no effect. I'm beginning to think that the reason is that I am encapsulating my model within a ViewModel. My Controller (Create action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the Validation annotations not being valid (Forenames being a key example). Am I missing something obvious - considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I'm aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal; I have no idea what that is and suspect it is for MVC 1.)
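
    One mechanism that may be at play: the view renders fields with a "ConsultantRegistration." prefix (via model => model.ConsultantRegistration.Forenames), while the action binds an unprefixed Data.ConsultantRegistration parameter, so the DefaultModelBinder may never match the posted values against it. A hypothetical sketch of aligning the two with a bind prefix:

        [HttpPost]
        public ActionResult Create([Bind(Prefix = "ConsultantRegistration")] Data.ConsultantRegistration consultantRegistration)
        {
            // ModelState is now populated (and validated) from the
            // "ConsultantRegistration.*" form fields
            if (!ModelState.IsValid)
                return View(new ConsultantRegistrationFormViewModel(consultantRegistration));
            // ... save and redirect as in the original action
            return RedirectToAction("Edit");
        }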

    Read the article

  • NLP with greatly constrained input and abilities

    - by Mike F
    Hat in hand here. I'm a seasoned developer and I would be grateful for a bit of help. I don't have time to read or digest long intricate discussions on theoretical concepts around NLP (or go get my PhD). That said, I have read a few and it's a damn interesting field. The problem is I need real-world solutions, for real-world products, in real-world time frames. The problem I'm having is that right now I'm not sure what the right questions are to ask to get started implementing. I believe this is mostly related to vocabulary. I'll read somewhere, a blog post, a forum post, a whitepaper, and it says, "I'm doing flooping with the blargy blarg method," and I go google flooping and blargy blarg, and I get references to more obscurity. It seemingly never ends.

    So, my question is multi-phased. First, more generally, how do I become passingly educated on this quickly? Just-in-time educated. I only need to know what I need to know to take the next step. I've spent 20 years writing code. Explain quick. I'll get it. (I mean provide a reference to something that explains quickly, of course.) I'm happy to read the right book, but I don't want to read a book where I read the chapter introduction that explains what floopy floop is and then skip over the rest of the chapter with examples of floopy flooping (because now I get what it is). I also don't want to read a book that goes into too much detail on theoretical underpinnings or history. For example, the Jurafsky book seems like way more than I need: http://www.amazon.com/gp/product/0131873210. But I will read it if this is the right book to read. (It's also dang expensive!) I need the root node of the expedited learning tree here, if you will. Point me in the right direction and I'll be quite grateful. I'm expecting quite a lot of firehose drinking - I just need the right firehose.

    Second, what I need to do is take a single sentence, with a very reduced vocabulary, and get a grammar tree (sorry if this is the wrong terminology) that I can do something with. I know I could easily write this command-line-input style in C in a more conventional manner, but I need it to be way better than that. But I don't need a chatterbot either. What I'm doing needs to live in a constrained environment. I can't use Python (unfortunately). I can't ship with gigabytes of corpora. I need any libraries I use to be in C/C++. If I have to write this myself, I will. Hopefully, it will be achievable considering the reduced problem set. Maybe, probably, that's just naive. If so, let me know. :-) Thanks in advance - Mike

    Read the article

  • ASP.NET Wizard control with Dynamic UserControls

    - by wjat777
    Hello everyone. I'm trying to use the Wizard control as follows: 1 - The first step (first screen) has a checklistbox with several options. 2 - For each option selected, an extra step will be created. 3 - In some steps, intermediate stages may be set up. My problem is that each step is a usercontrol. So if in the first step I select option1 and option3, I should create two more steps with usercontrol1 and UserControl3... Has someone already done something similar? I will try to explain the problem better. I'll paste it here, but I also put a copy of the project on SkyDrive: http://cid-3d949f1661d00819.skydrive.live.com/self.aspx/C%5E3/WizardControl2.7z It is a very basic example of what I'm trying to do. The project has 1 page (.aspx) and 3 usercontrols (.ascx: UC1, UC2 and UC3). Default.aspx:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="WebApplication2._Default" % Option1 Option2 Option3

    Code-behind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        namespace WebApplication2
        {
            public partial class _Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                }

                protected void Page_PreInit(object sender, EventArgs e)
                {
                    LoadSteps();
                }

                private void LoadSteps()
                {
                    int count = Wizard1.WizardSteps.Count;
                    for (int i = count - 1; i > 0; i--)
                    {
                        WizardStepBase step = Wizard1.WizardSteps[i];
                        if (step.StepType != WizardStepType.Start)
                            Wizard1.WizardSteps.Remove(step);
                    }
                    string Activities = "";
                    foreach (ListItem item in CheckBoxList1.Items)
                    {
                        if (item.Selected)
                        {
                            WizardStep step = new WizardStep { ID = "step_" + item.Value, Title = "step_" + item.Value };
                            UserControl uc = null;
                            switch (item.Value)
                            {
                                case "1":
                                    uc = (UserControl)LoadControl("~/UC1.ascx");
                                    break;
                                case "2":
                                    uc = (UserControl)LoadControl("~/UC2.ascx");
                                    break;
                                case "3":
                                    uc = (UserControl)LoadControl("~/UC3.ascx");
                                    break;
                            }
                            step.Controls.Add(uc);
                            Wizard1.WizardSteps.Add(step);
                        }
                    }
                }

                protected void CheckBoxList1_SelectedIndexChanged(object sender, EventArgs e)
                {
                    LoadSteps();
                }
            }
        }

    Controls UC1.ascx to UC3.ascx have the same code (as an example). usercontrol.ascx:

        <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="UC1.ascx.cs" Inherits="WebApplication2.UC1" % AutoPostBack="True" AutoPostBack="True"

    Code-behind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        namespace WebApplication2
        {
            public partial class UC1 : System.Web.UI.UserControl
            {
                protected void Page_Init(object sender, EventArgs e)
                {
                    List<string> list1 = new List<string> { "Alice", "Bob", "Chris" };
                    List<string> list2 = new List<string> { "UN", "DEUX", "TROIS" };
                    DropDownList1.DataSource = list1;
                    DropDownList2.DataSource = list2;
                    DropDownList1.DataBind();
                    DropDownList2.DataBind();
                }

                protected void Page_Load(object sender, EventArgs e)
                {
                    if (IsPostBack) return;
                }

                protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
                {
                    Label1.Text = DropDownList1.SelectedValue;
                }

                protected void DropDownList2_SelectedIndexChanged(object sender, EventArgs e)
                {
                    Label2.Text = DropDownList2.SelectedValue;
                }
            }
        }

    You can see that the UserControls trigger events (they cause postbacks). The behavior of this project is a little different than I described in the first thread, but the problem is the same. After the second click of the "next" button, you will get an error. Thanks in advance, William
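    One hedged observation about the code above: at Page_PreInit, view state and posted data have not been loaded yet, so CheckBoxList1's selections are stale, and the dynamically loaded usercontrols have no explicit IDs, so their view state and events cannot reconnect after a postback. A sketch of one common restructuring, assuming the selection can be stashed in Session (the "WizardPlan" key and the ID scheme are hypothetical):

        protected void Page_Init(object sender, EventArgs e)
        {
            // Rebuild the same steps on every request, before view state loads.
            var selected = Session["WizardPlan"] as List<string> ?? new List<string>();
            foreach (string value in selected)
            {
                var step = new WizardStep { ID = "step_" + value, Title = "step_" + value };
                var uc = (UserControl)LoadControl("~/UC" + value + ".ascx");
                uc.ID = "uc_" + value; // stable ID so view state and events reconnect
                step.Controls.Add(uc);
                Wizard1.WizardSteps.Add(step);
            }
        }

        protected void CheckBoxList1_SelectedIndexChanged(object sender, EventArgs e)
        {
            Session["WizardPlan"] = CheckBoxList1.Items.Cast<ListItem>()
                .Where(i => i.Selected)
                .Select(i => i.Value)
                .ToList();
            // The steps themselves are rebuilt in Page_Init on the next postback.
        }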

    Read the article

  • Rails HABTM accepts_nested_attributes_for mapping table

    - by Rabbott
    Currently I have a HABTM relationship between a datastream (stream) and a chart. I want to be able to use the 'typical' jQuery "add new stream" method to add a new entry into a mapping table that holds the chart_id and stream_id. But I think most examples out there are made to handle a one-to-many relationship. The functionality provided here by Ryan Bates is what I'm looking for, but I don't want to use checkboxes; I want to use a select (drop-down) to add new items to the mapping table. I need THIS and THIS combined.

    chart.rb:

        class Chart < ActiveRecord::Base
          has_and_belongs_to_many :streams, :readonly => false,
                                  :join_table => 'charts_streams'
          accepts_nested_attributes_for :streams,
                                        :reject_if => lambda { |a| a.values.all?(&:blank?) },
                                        :allow_destroy => true
        end

    stream.rb:

        class Stream < ActiveRecord::Base
          has_and_belongs_to_many :charts, :readonly => false,
                                  :join_table => 'charts_streams'
        end

    application.js:

        /*
         * Method used to add new child form partials
         */
        $('form a.add_child').click(function() {
          var assoc = $(this).attr('data-association');
          var content = $('#' + assoc + '_fields_template').html();
          var regexp = new RegExp('new_' + assoc, 'g');
          var new_id = new Date().getTime();
          var newElements = jQuery(content.replace(regexp, new_id)).hide();
          $(this).parent().before(newElements).prev().slideFadeToggle();
          return false;
        });

        /*
         * Method used to remove child form partials
         */
        $('form a.remove_child').live('click', function() {
          if (confirm('Are you sure?')) {
            var hidden_field = $(this).prev('input[type=hidden]')[0];
            if (hidden_field) {
              hidden_field.value = '1';
            }
            $(this).parents('.fields').slideFadeToggle();
          }
          return false;
        });

    _form.html.erb:

        <div id="streams">
          <h4>Streams</h4>
          <% form.fields_for :streams do |stream_form| %>
            <%= render :partial => "stream", :locals => {:f => stream_form} %>
          <% end %>
        </div>
        <p><%= add_child_link 'Add a stream', :streams %></p>
        <%= new_child_fields_template(form, :streams) %>

    _stream.html.erb:

        <div class="fields">
          <p>
            <%= f.label :stream_id %><br />
            <%= select_tag "chart[stream_ids][]", options_for_select(@streams.map {|s| [s.label, s.id]}, f.object.id) %>
          </p>
          <p><%= remove_child_link 'Remove stream', f %></p>
        </div>

    This may be overkill, and what I am looking to do could be much easier. Basically, when I create a chart I want to be able to add as many streams to it as I want, using the javascript for adding a new entry and removing an old one... Thanks for the help! This is 'working' but gives me an ActiveRecord::ReadOnlyRecord error when it hits the chart.update_attributes method. Am I adding the :readonly => false in the wrong spot? Or am I just doing it completely wrong?

    Read the article

  • Scaling MKMapView Annotations relative to the zoom level

    - by Jonathan
    The Problem: I'm trying to create a visual radius circle around an annotation that remains at a fixed size in real terms. E.g., if I set the radius to 100m, as you zoom out of the map view the radius circle gets progressively smaller. I've been able to achieve the scaling; however, the radius rect/circle seems to "jitter" away from the pin placemark as the user manipulates the view.

    The Manifestation: Here is a video of the behaviour.

    The Implementation: The annotations are added to the map view in the usual fashion, and I've used the delegate method on my UIViewController subclass (MapViewController) to see when the region changes.

        -(void)mapView:(MKMapView *)pMapView regionDidChangeAnimated:(BOOL)animated{
            //Get the map view
            MKCoordinateRegion region;
            CGRect rect;
            //Scale the annotations
            for( id<MKAnnotation> annotation in [[self mapView] annotations] ){
                if( [annotation isKindOfClass: [Location class]] && [annotation conformsToProtocol:@protocol(MKAnnotation)] ){
                    //Approximately 200 m radius
                    region.span.latitudeDelta = 0.002f;
                    region.span.longitudeDelta = 0.002f;
                    region.center = [annotation coordinate];
                    rect = [[self mapView] convertRegion:region toRectToView: self.mapView];
                    if( [[[self mapView] viewForAnnotation: annotation] respondsToSelector:@selector(setRadiusFrame:)] ){
                        [[[self mapView] viewForAnnotation: annotation] setRadiusFrame:rect];
                    }
                }
            }
        }

    The annotation object (LocationAnnotationView) is a subclass of MKAnnotationView, and its setRadiusFrame looks like this:

        -(void) setRadiusFrame:(CGRect) rect{
            CGPoint centerPoint;
            //Invert
            centerPoint.x = (rect.size.width/2) * -1;
            centerPoint.y = 0 + 55 + ((rect.size.height/2) * -1);
            rect.origin = centerPoint;
            [self.radiusView setFrame:rect];
        }

    And finally, the radiusView object is a subclass of UIView that overrides the drawRect method to draw the translucent circles. setFrame is also overridden in this UIView subclass, but it only serves to call [UIView setNeedsDisplay] in addition to [UIView setFrame:], to ensure that the view is redrawn after the frame has been updated. The radiusView object's (CircleView) drawRect method looks like this:

        -(void) drawRect:(CGRect)rect{
            //NSLog(@"[CircleView drawRect]");
            [self setBackgroundColor:[UIColor clearColor]];
            //Declarations
            CGContextRef context;
            CGMutablePathRef path;
            //Assignments
            context = UIGraphicsGetCurrentContext();
            path = CGPathCreateMutable();
            //Alter the rect so the circle isn't clipped
            //Calculate the biggest size circle
            if( rect.size.height > rect.size.width ){
                rect.size.height = rect.size.width;
            }
            else if( rect.size.height < rect.size.width ){
                rect.size.width = rect.size.height;
            }
            rect.size.height -= 4;
            rect.size.width -= 4;
            rect.origin.x += 2;
            rect.origin.y += 2;
            //Create paths
            CGPathAddEllipseInRect(path, NULL, rect );
            //Create colors
            [[self areaColor] setFill];
            CGContextAddPath( context, path);
            CGContextFillPath( context );
            [[self borderColor] setStroke];
            CGContextSetLineWidth( context, 2.0f );
            CGContextSetLineCap(context, kCGLineCapSquare);
            CGContextAddPath(context, path );
            CGContextStrokePath( context );
            CGPathRelease( path );
            //CGContextRestoreGState( context );
        }

    Thanks for bearing with me; any help is appreciated. Jonathan

    Read the article

  • How to get the child elements' values when the parent element contains a different number of child elements

    - by Subhen
    Hi, I have the following XML structure:

        <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0/">
          <item id="1268" parentID="20" restricted="1">
            <upnp:class>object.item.audioItem.musicTrack</upnp:class>
            <dc:title>Hey</dc:title>
            <dc:date>2010-03-17T12:12:26</dc:date>
            <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/22.png</upnp:albumArtURI>
            <upnp:icon>URL/albumart/22.png</upnp:icon>
            <dc:creator>land</dc:creator>
            <upnp:artist>sland</upnp:artist>
            <upnp:album>Change</upnp:album>
            <upnp:genre>Rock</upnp:genre>
            <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="9527987" duration="0:03:58">URL/1268.mp3</res>
            <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/22.png</res>
          </item>
          <item id="1269" parentID="20" restricted="1">
            <upnp:class>object.item.audioItem.musicTrack</upnp:class>
            <dc:title>Indian </dc:title>
            <dc:date>2010-03-17T12:06:32</dc:date>
            <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/13.png</upnp:albumArtURI>
            <upnp:icon>URL/albumart/13.png</upnp:icon>
            <dc:creator>MC</dc:creator>
            <upnp:artist>MC</upnp:artist>
            <upnp:album>manimal</upnp:album>
            <upnp:genre>Rap</upnp:genre>
            <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="8166707" duration="0:03:24">URL/1269.mp3</res>
            <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/13.png</res>
          </item>
          <item id="1277" parentID="20" restricted="1">
            <upnp:class>object.item.videoItem.movie</upnp:class>
            <dc:title>IronMan_TeaserTrailer-full_NEW.mpg</dc:title>
            <dc:date>2010-03-17T12:50:24</dc:date>
            <upnp:genre>Unknown</upnp:genre>
            <res protocolInfo="http-get:*:video/mpeg:DLNA.ORG_PN=(NULL);DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="98926592" resolution="1920x1080" duration="0:02:30">URL/1277.mpg</res>
          </item>
        </DIDL-Lite>

    Here, in the last item element, there are a few missing elements like icon, albumArtURI, etc. Now, when I try to access the values with the following LINQ to XML query,

        var vAudioData = from xAudioinfo in xResponse.Descendants(ns + "DIDL-Lite").Elements(ns + "item").Where(x=>!x.Elements(upnp+"album").l
                         orderby ((string)xAudioinfo.Element(upnp + "artist")).Trim()
                         select new RMSMedia
                         {
                             strAudioTitle = ((string)xAudioinfo.Element(dc + "title")).Trim(), //((string)xAudioinfo.Attribute("audioAlbumcoverImage")).Trim()=="" ? ((string)xAudioinfo.Element("Song")).Trim():""
                         };

    it shows "Object reference not set to an instance of an object." I do understand this is because of the missing elements; is there any way I can pre-check whether the elements exist, or check the number of elements inside item, or any other way? Any help is deeply appreciated. Thanks, Subhen
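    A sketch of one way around the NullReferenceException, reusing the names from the snippet above (RMSMedia, ns, upnp, dc come from the original; the where clause on upnp:class is an assumption about the intent of filtering to audio items). The key point is that casting an XElement to string yields null when the element is missing, whereas calling .Value throws:

        // Descendants(ns + "item") reaches the same nodes as
        // Descendants(ns + "DIDL-Lite").Elements(ns + "item").
        var vAudioData =
            from item in xResponse.Descendants(ns + "item")
            where (string)item.Element(upnp + "class") == "object.item.audioItem.musicTrack"
            orderby ((string)item.Element(upnp + "artist") ?? "").Trim()
            select new RMSMedia
            {
                strAudioTitle = ((string)item.Element(dc + "title") ?? "").Trim()
            };

    For an explicit pre-check, item.Element(upnp + "album") != null or item.Elements(upnp + "album").Any() both work.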

    Read the article

  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. Now, with the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts. So I've done the easy bit: SVN works with these "maintainers," so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions.

    Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites. So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with both read and write access for this group, and it works great. The post-commit trigger updates the site and then sets 777 on each file within the webfront.

    I then changed this to try to factor in group permission updates, instead of straight 777. Each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxrwxr-x  site1 site1 my_test_folder
        -rw-rw-r--  site1 site1 my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When I add a file or folder to /home/site1, say Bobs_file, it then looks like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxr-xr-x  Bob   dev   bobs_folder
        drwxrwxr-x  site1 site1 my_test_folder
        -rw-rw-r--  Bob   dev   bobs_file
        -rw-rw-r--  site1 site1 my_test_file

    How can I, with the set of user permissions Bob has available, change the owner and group owner of that file to reflect "site1" "site1"? As Bob belongs to dev, I can set the permissions correctly with chmod, but it appears chgrp is throwing back operation errors. Now, this was long-winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - 2 people to update multiple user accounts, given the structure of WHM
    - Maintain the master user/group permissions of files and folders as the original user account, not the account of the updater
    - I like the security of SVN+SSH over just SVN
    - Don't want to run all this as root

    I hope this made sense, and thanks in advance :)

    Read the article

  • Emulating old-school sprite flickering (theory and concept)

    - by Jeffrey Kern
    I'm trying to develop an old-school NES-style video game, with sprite flickering and graphical slowdown. I've been thinking about what type of logic I should use to enable such effects. I have to consider the following restrictions if I want to go old-school NES style:

    - No more than 64 sprites on the screen at a time
    - No more than 8 sprites per scanline, i.e. for each line on the Y axis
    - If there is too much action going on the screen, the system freezes the image for a frame to let the processor catch up with the action

    From what I've read, if there were more than 64 sprites on the screen, the developer would only draw high-priority sprites while ignoring low-priority ones. They could also alternate, drawing each even-numbered sprite on opposite frames from odd-numbered ones. The scanline issue is interesting. From my testing, it is impossible to get good speed on the Xbox 360 XNA framework by drawing sprites pixel-by-pixel, like the NES did. This is why in old-school games, if there were too many sprites on a single line, some would appear as if they were cut in half. For all purposes of this project, I'm making scanlines 8 pixels tall and grouping the sprites together per scanline by their Y positioning. So, dumbed down, I need to come up with a solution that supports:

    - 64 sprites on screen at once
    - 8 sprites per 'scanline'
    - Drawing sprites based on priority
    - Alternating between sprites per frame
    - Emulating slowdown

    Here is my current theory. First and foremost, a fundamental idea I came up with is addressing sprite priority. Assuming values between 0-255 (0 being low), I can assign sprites priority levels, for instance: 0 to 63 being low, 64 to 127 being medium, 128 to 191 being high, 192 to 255 being maximum. Within my data files, I can assign each sprite a certain priority level. When the parent object is created, the sprite would randomly get assigned a number within its designated range. I would then draw sprites in order from high to low, with the end goal of drawing every sprite. Now, when a sprite gets drawn in a frame, I would randomly generate it a new priority value within its initial priority level. However, if a sprite doesn't get drawn in a frame, I could add 32 to its current priority. For example, if the system can only draw sprites down to a priority level of 135, a sprite with an initial priority of 45 could then be drawn after 3 frames of not being drawn (45+32+32+32=141). This would, in theory, allow sprites to alternate frames, allow priority levels, and limit sprites to 64 per screen.

    Now, the interesting question: how do I limit sprites to only 8 per scanline? I'm thinking that if I'm sorting the sprites high-priority to low-priority, I iterate through the list until I've hit 64 sprites drawn. However, I shouldn't just take the first 64 sprites in the list. Before drawing each sprite, I could check how many sprites were drawn in its respective scanline via counter variables. For example: Y-values between 0 and 7 belong to scanline 0, scanlineCount[0] = 0; Y-values between 8 and 15 belong to scanline 1, scanlineCount[1] = 0; etc. I could reset the values per scanline for every frame drawn. While going down the sprite list, I add 1 to the scanline's respective counter if a sprite gets drawn in that scanline. If it equals 8, don't draw that sprite and go to the sprite with the next lowest priority.

    SLOWDOWN. The last thing I need to do is emulate slowdown. My initial idea was that if I'm drawing 64 sprites per frame and there are still more sprites that need to be drawn, I could pause the rendering by 16ms or so. However, in the NES games I've played, sometimes there's slowdown without any sprite flickering going on, whereas the game moves beautifully even when there is some sprite flickering. Perhaps give a value to each object that uses sprites on the screen (like the priority values above), and if the combined values of all objects with sprites surpass a threshold, introduce the slowdown?

    IN CONCLUSION... Does everything I wrote actually sound legitimate and workable, or is it a pipe dream? What improvements can you all possibly think of for this game programming theory of mine?
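    For what it's worth, here is a C# sketch of the per-frame selection logic described above, to make the priority-aging and scanline-bucket ideas concrete. The Sprite fields and all constants are hypothetical, and the actual drawing (SpriteBatch, textures) is deliberately left out:

        // using System; using System.Collections.Generic; using System.Linq;
        class Sprite
        {
            public float X, Y;
            public int Priority;     // current value, re-rolled or aged per frame
            public int BasePriority; // bottom of this sprite's priority band
        }

        static class SpriteSelector
        {
            const int MaxSprites = 64, MaxPerScanline = 8, ScanlineHeight = 8, ScreenHeight = 240;
            static readonly Random Rng = new Random();

            // Returns the sprites to draw this frame; skipped sprites gain
            // priority so they reappear on a later frame (the "flicker").
            public static List<Sprite> Select(IEnumerable<Sprite> sprites)
            {
                var perScanline = new int[ScreenHeight / ScanlineHeight];
                var drawn = new List<Sprite>(MaxSprites);

                foreach (var s in sprites.OrderByDescending(p => p.Priority))
                {
                    int line = Math.Min(perScanline.Length - 1, Math.Max(0, (int)s.Y / ScanlineHeight));
                    if (drawn.Count < MaxSprites && perScanline[line] < MaxPerScanline)
                    {
                        perScanline[line]++;
                        drawn.Add(s);
                        s.Priority = s.BasePriority + Rng.Next(64); // re-roll within the band
                    }
                    else
                    {
                        s.Priority = Math.Min(255, s.Priority + 32); // age skipped sprites
                    }
                }
                return drawn;
            }
        }

    Nothing here settles the slowdown question, but a simple variant of the threshold idea would be to count how many sprites were skipped in Select and, above some cutoff, render the same frame twice.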

    Read the article

  • Get Asynchronous HttpResponse through Silverlight (F#)

    - by jack2010
    I am a newbie with F# and SL and am playing with getting an asynchronous HttpResponse through Silverlight. The following are the F# code pieces, which are tested on VS2010 and Windows 7 and work well, but improvement is necessary. Any advice and discussion, especially about the callback part, is welcome; many thanks.

        module JSONExample

        open System
        open System.IO
        open System.Net
        open System.Text
        open System.Web
        open System.Security.Authentication
        open System.Runtime.Serialization

        [<DataContract>]
        type Result<'TResult> = {
            [<field: DataMember(Name="code")>]    Code    : string
            [<field: DataMember(Name="result")>]  Result  : 'TResult array
            [<field: DataMember(Name="message")>] Message : string
        }

        // The elements in the list
        [<DataContract>]
        type ChemicalElement = {
            [<field: DataMember(Name="name")>]          Name         : string
            [<field: DataMember(Name="boiling_point")>] BoilingPoint : string
            [<field: DataMember(Name="atomic_mass")>]   AtomicMass   : string
        }

        //http://blogs.msdn.com/b/dsyme/archive/2007/10/11/introducing-f-asynchronous-workflows.aspx
        //http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!194.entry
        type System.Net.HttpWebRequest with
            member x.GetResponseAsync() =
                Async.FromBeginEnd(x.BeginGetResponse, x.EndGetResponse)

        type RequestState () =
            let mutable request : WebRequest = null
            let mutable response : WebResponse = null
            let mutable responseStream : Stream = null
            member this.Request with get() = request and set v = request <- v
            member this.Response with get() = response and set v = response <- v
            member this.ResponseStream with get() = responseStream and set v = responseStream <- v

        let allDone = new System.Threading.ManualResetEvent(false)

        let getHttpWebRequest (query:string) =
            let query = query.Replace("'","\"")
            let queryUrl = sprintf "http://api.freebase.com/api/service/mqlread?query=%s" "{\"query\":"+query+"}"
            let request : HttpWebRequest = downcast WebRequest.Create(queryUrl)
            request.Method <- "GET"
            request.ContentType <- "application/x-www-form-urlencoded"
            request

        let GetAsynResp (request : HttpWebRequest) (callback: AsyncCallback) =
            let myRequestState = new RequestState()
            myRequestState.Request <- request
            let asyncResult = request.BeginGetResponse(callback, myRequestState)
            ()

        // easy way to get it to run synchronously w/ the async methods
        let GetSynResp (request : HttpWebRequest) : HttpWebResponse =
            let response = request.GetResponseAsync() |> Async.RunSynchronously
            downcast response

        let RespCallback (finish: Stream -> _) (asynchronousResult : IAsyncResult) =
            try
                let myRequestState : RequestState = downcast asynchronousResult.AsyncState
                let myWebRequest1 : WebRequest = myRequestState.Request
                myRequestState.Response <- myWebRequest1.EndGetResponse(asynchronousResult)
                let responseStream = myRequestState.Response.GetResponseStream()
                myRequestState.ResponseStream <- responseStream
                finish responseStream
                myRequestState.Response.Close()
                ()
            with
            | :? WebException as e ->
                printfn "WebException raised!"
                printfn "\n%s" e.Message
                printfn "\n%s" (e.Status.ToString())
                ()
            | _ as e ->
                printfn "Exception raised!"
                printfn "Source : %s" e.Source
                printfn "Message : %s" e.Message
                ()

        let printResults (stream: Stream) =
            let result =
                try
                    use reader = new StreamReader(stream)
                    reader.ReadToEnd();
                finally
                    ()
            let data = Encoding.Unicode.GetBytes(result);
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length);
            stream.Position <- 0L
            let JsonSerializer = Json.DataContractJsonSerializer(typeof<Result<ChemicalElement>>)
            let result = JsonSerializer.ReadObject(stream) :?> Result<ChemicalElement>
            if result.Code <> "/api/status/ok" then
                raise (InvalidOperationException(result.Message))
            else
                result.Result |> Array.iter (fun element -> printfn "%A" element)

        let test =
            // Call Query (w/ generics telling it you want an array of ChemicalElement back; the query string is wacky JSON too - I didn't build it, don't ask me!)
            let request = getHttpWebRequest "[{'type':'/chemistry/chemical_element','name':null,'boiling_point':null,'atomic_mass':null}]"
            //let response = GetSynResp request
            let response = GetAsynResp request (AsyncCallback (RespCallback printResults))
            ()

        ignore(test)
        System.Console.ReadLine() |> ignore

    Read the article

  • How to Access a descendant object's internal method in C#

    - by Giovanni Galbo
    I'm trying to access a method that is marked as internal in the parent class (in its own assembly) from an object that inherits from the same parent. Let me explain what I'm trying to do... I want to create Service classes that return IEnumerable with an underlying List to non-Service classes (e.g. the UI), and optionally return an IEnumerable with an underlying IQueryable to other services. I wrote some sample code to demonstrate what I'm trying to accomplish, shown below. The example is not real life, so please remember that when commenting. All services would inherit from something like this (only relevant code shown):

        public class ServiceBase<T>
        {
            protected readonly ObjectContext _context;
            protected string _setName = String.Empty;

            public ServiceBase(ObjectContext context)
            {
                _context = context;
            }

            public IEnumerable<T> GetAll()
            {
                return GetAll(false);
            }

            //These are not the correct access modifiers.. I want something
            //that is accessible to children classes AND between descendant classes
            internal protected IEnumerable<T> GetAll(bool returnQueryable)
            {
                var query = _context.CreateQuery<T>(GetSetName());
                if (returnQueryable)
                {
                    return query;
                }
                else
                {
                    return query.ToList();
                }
            }

            private string GetSetName()
            {
                //Some code...
                return _setName;
            }
        }

    Inherited services would look like this:

        public class EmployeeService : ServiceBase<Employees>
        {
            public EmployeeService(ObjectContext context) : base(context) { }
        }

        public class DepartmentService : ServiceBase<Departments>
        {
            private readonly EmployeeService _employeeService;

            public DepartmentService(ObjectContext context, EmployeeService employeeService) : base(context)
            {
                _employeeService = employeeService;
            }

            public IList<Departments> DoSomethingWithEmployees(string lastName)
            {
                //won't work because a method with this signature is not visible to this class
                var emps = _employeeService.GetAll(true);
                //more code...
            }
        }

    Because the parent class is reusable, it would live in a different assembly than the child services. With GetAll(bool returnQueryable) marked internal, the child services would not be able to see each other's GetAll(bool) method, just the public GetAll() method. I know that I can add a new internal GetAll method to each service (or perhaps an intermediary parent class within the same assembly) so that each child service within the assembly can see the others' methods; but it seems unnecessary since the functionality is already available in the parent class. For example:

        internal IEnumerable<Employees> GetAll(bool returnIQueryable)
        {
            return base.GetAll(returnIQueryable);
        }

    Essentially what I want is for services to be able to access other services' methods as IQueryable, so that they can further refine the uncommitted results, while everyone else gets plain old lists. Any ideas? EDIT: You know what, I had some fun playing a little code golf with this... but ultimately I wouldn't be able to use this scheme anyway because I pass interfaces around, not classes. So in my example, GetAll(bool returnIQueryable) would not be in the interface, meaning I'd have to do casting, which goes against what I'm trying to accomplish. I'm not sure if I had a brain fart or if I was just too excited trying to get something that I thought was neat to work. Either way, thanks for the responses.
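    One approach worth knowing about here, sketched under stated assumptions: keep GetAll(bool) internal (or protected internal) in the base assembly and declare the services assembly a friend via InternalsVisibleTo. The assembly name below is a placeholder, and strong-named assemblies would need the full public key appended to it:

        // In the assembly that contains ServiceBase<T> (e.g. AssemblyInfo.cs):
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyCompany.Services")]

    With that attribute in place, the internal half of "internal protected" becomes visible throughout the friend assembly, so DepartmentService can call _employeeService.GetAll(true) directly, without the per-service wrapper shown above. It does not help with the interface problem raised in the EDIT, though; that would need a second, internal interface exposing the IQueryable-returning member.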

    Read the article

  • Problem Registering a Generic Repository with Windsor IoC

    - by Robin
    I’m fairly new to IoC and perhaps my understanding of generics and inheritance is not strong enough for what I’m trying to do. You might find this to be a mess. I have a generic Repository base class: public class Repository<TEntity> where TEntity : class, IEntity { private Table<TEntity> EntityTable; private string _connectionString; private string _userName; public string UserName { get { return _userName; } set { _userName = value; } } public Repository() {} public Repository(string connectionString) { _connectionString = connectionString; EntityTable = (new DataContext(connectionString)).GetTable<TEntity>(); } public Repository(string connectionString, string userName) { _connectionString = connectionString; _userName = userName; EntityTable = (new DataContext(connectionString)).GetTable<TEntity>(); } // Data access methods ... ... } and a SqlClientRepository that inherits Repository: public class SqlClientRepository : Repository<Client> { private Table<Client> ClientTable; private string _connectionString; private string _userName; public SqlClientRepository() {} public SqlClientRepository(string connectionString) : base(connectionString) { _connectionString = connectionString; ClientTable = (new DataContext(connectionString)).GetTable<Client>(); } public SqlClientRepository(string connectionString, string userName) : base(connectionString, userName) { _connectionString = connectionString; _userName = userName; ClientTable = (new DataContext(connectionString)).GetTable<Client>(); } // data access methods unique to Client repository ... } The Repository class provides some generics methods like Save, Delete, etc, that I want all my repository derived classes to share. The TEntity parameter is constrained to the IEntity interface: public interface IEntity { int Id { get; set; } NameValueCollection GetSaveRuleViolations(); NameValueCollection GetDeleteRuleViolations(); } This allows the Repository class to reference these methods within its Save and Delete methods. Unit tests work fine on mock SqlClientRepository instances as well as live unit tests on the real database. However, in the MVC context: public class ClientController : Controller { private SqlClientRepository _clientRepository; public ClientController(SqlClientRepository clientRepository) { this._clientRepository = clientRepository; } public ClientController() { } // ViewResult methods ... ... } ... _clientRepository is always null. I’m using Windor Castle as an IoC container. Here is the configuration: <component id="ClientRepository" service="DomainModel.Concrete.Repository`1[[DomainModel.Entities.Client, DomainModel]], DomainModel" type="DomainModel.Concrete.SqlClientRepository, DomainModel" lifestyle="PerWebRequest"> <parameters> <connectionString>#{myConnStr}</connectionString> </parameters> </component> I’ve tried many variations in the Windsor configuration file. I suspect it’s more of a design flaw in the above code. As I'm looking over my code, it occurs to me that when registering components with an IoC container, perhaps service must always be an interface. Could this be it? Does anybody have a suggestion? Thanks in advance.

    Read the article

  • jQuery loaded? Colorbox available? Are js files called via $.getScript always re-downloaded from the server?

    - by uzay95
    I am dynamically loading js files with _tumIsler.js (_allStuff.js):

        <script src="../js/_tumJsler.js" type="text/javascript"></script>

    It contains:

        // url=> http: + // + localhost:4399 + /yabant/
        // ---- ---- -------------- --------
        // protocol + "//" + host + '/virtualDirectory/'
        var baseUrl = document.location.protocol + "//" + document.location.host + '/yabant/';

        // If there is "~/" at the beginning of the url, replace it with baseUrl
        function ResolveUrl(url) {
            if (url.indexOf("~/") == 0) {
                url = baseUrl + url.substring(2);
            }
            return url;
        }

        // Attaching scripts to any tag
        function addJavascript(jsname, pos) {
            var th = document.getElementsByTagName(pos)[0];
            var s = document.createElement('script');
            s.setAttribute('type', 'text/javascript');
            s.setAttribute('src', jsname);
            th.appendChild(s);
        }

        // I want to make sure jQuery is loaded
        addJavascript(ResolveUrl('~/js/1_jquery-1.4.2.min.js'), 'head');

        var loaded = false; // assume it didn't load at first, and change it to true if it did
        function fControl() {
            // alert("Is jQuery loaded?");
            if (typeof jQuery == 'undefined') {
                loaded = false;
                fTry2LoadJquery();
            } else {
                loaded = true;
                fGetOtherScripts();
            }
        }

        // Check whether jQuery is loaded
        fControl();

        function fTry2LoadJquery() {
            // alert("jQuery didn't load! Trying to reload...");
            if (loaded == false) {
                setTimeout("fControl()", 1000);
            } else {
                return;
            }
        }

        function getJavascript(jsname, pos) {
            // I want to retrieve every script one by one
            $.ajaxSetup({
                async: false,
                beforeSend: function() { $.ajaxSetup({ async: false, cache: true }); },
                complete: function() { $.ajaxSetup({ async: false, cache: true }); },
                success: function() { // }
            });
            $.getScript(ResolveUrl(jsname), function() { /* ok! */ });
        }

        function fGetOtherScripts() {
            // alert("Other js files will be loaded in this function");
            getJavascript(ResolveUrl('~/js/5_json_parse.js'), 'head');
            getJavascript(ResolveUrl('~/js/3_jquery.colorbox-min.js'), 'head');
            getJavascript(ResolveUrl('~/js/4_AjaxErrorHandling.js'), 'head');
            getJavascript(ResolveUrl('~/js/6_jsSiniflar.js'), 'head');
            getJavascript(ResolveUrl('~/js/yabanYeni.js'), 'head');
            getJavascript(ResolveUrl('~/js/7_ResimBul.js'), 'head');
            getJavascript(ResolveUrl('~/js/8_HaberEkle.js'), 'head');
            getJavascript(ResolveUrl('~/js/9_etiketIslemleri.js'), 'head');
            getJavascript(ResolveUrl('~/js/bugun.js'), 'head');
            getJavascript(ResolveUrl('~/js/yaban.js'), 'head');
            getJavascript(ResolveUrl('~/embed/bitgravity/functions.js'), 'head');
        }

    After all these js files are loaded, this line executes, to show the UploadFile page inside the page when the button with id "btnResimYukle" is clicked:

        <script type="text/javascript">
            if (jQuery().colorbox) {
                alert("colorbox exists");
            } else {
                alert("colorbox doesn't exist");
                $.ajaxSetup({ cache: true, async: false });
                $.getScript(ResolveUrl('~/js/3_jquery.colorbox-min.js'), function() {
                    alert("Loaded ! ");
                    $('#btnResimYukle').live('click', function() {
                        $.fn.colorbox({ iframe: true, width: 700, height: 600, href: ResolveUrl('~/Yonetim/DosyaYukle.aspx') });
                        return false;
                    });
                });
            }
        </script>

    First, I want to ask you very good people: I am always calling js files with the $.getScript function. Are they re-downloaded on every $.getScript request? And if so, how can I prevent that? Does this work:

        $.ajaxSetup({ cache: true, async: false });

    Second, I am always getting an error when I press F5 or Ctrl+F5, but when I press the Enter key in the URL bar, there is no error :s

    Read the article

  • jQuery fadeOut looking different in FF than in Chrome/Safari

    - by StealthRT
    Hey all, I have a page where, when a user decides to delete a company, it fades out using the jQuery fadeOut() command. This works great and does as it should in Firefox, but for some reason it doesn't do the same in the Chrome/Safari browsers. The problem is that I have a png image overlay on top of the company logo image that also fades out with the company logo. But if there is more than one company logo on the page, it moves the others into the place where the deleted one was. Like I said, it works fine (and as it should) in Firefox and even IE (SHOCK!), but not in Chrome or Safari. This is the code I use to delete (fade out) the section:

        $('#moreInfoM' + theNum).fadeOut("slow");
        $('#moreInfo' + theNum).fadeOut("slow");
        $('#liNum' + theNum).fadeOut("slow");

    where moreInfoM, moreInfo and liNum are all within the code for each logo image, and theNum is just a unique number for each logo, for reference. And the ASP code:

        For iLoop = LBound(compNames) to UBound(compNames)
            theWidth = (iwidth / 2)
            ImgDimension(Server.MapPath("img/company/" & compNames(iLoop) & ".jpg"))
            theWidth = (iwidth / 2)
            response.write "<li onClick=""$('#moreInfo" & y & "').slideToggle('slow');"" id=""liNum" & y & """>" & vbcr
            if theBrowser = "chrome" or theBrowser = "safari" then
                response.write "<img src=""img/clickMI.png"" width=""97"" height=""33"" style=""position:absolute;width:97px;height:33px;z-index:12;padding-top: 54px; padding-left:" & theWidth & "px;"">" & vbcr
            else
                response.write "<img src=""img/clickMI.png"" width=""97"" height=""33"" style=""position:absolute;width:97px;height:33px;z-index:12;padding-top: 50px; padding-left:" & theWidth & "px;"">" & vbcr
            end if
            response.write "<img src=""img/company/" & compNames(iLoop) & ".jpg"" height=""60"" style=""border: 1px solid #4C3C1B;padding: 2px;background-color: #CCC; margin-left: 10px; margin-right: 10px; margin-bottom: 5px; margin-top: 5px;"" id=""moreInfoM" & y & """ />" & vbcr
            response.write "</li>" & vbcr
            y = y + 1
        Next

    I have made a video of FF and Chrome to give you a better understanding: http://www.youtube.com/watch?v=ZQaRvosvRKE Thanks! David

    Read the article

< Previous Page | 649 650 651 652 653 654 655 656 657 658 659 660  | Next Page >