Search Results

Search found 4275 results on 171 pages for 'accept'.


  • How do I resolve "conflicting accounts" in google apps without breaking links to online photos on picasa?

    - by lee
    I have been using Google Apps for some time, and only recently learned I have what Google calls "conflicting accounts", which is creating a problem I haven't been able to resolve. Turns out that the Apps account really only covers email, Google Docs, and the calendar, and not other features like Picasa, Blogger, YouTube etc., and at some point they gave me a non-Apps Google account with my same (proprietary, non-gmail) email address for the additional apps. This is the "conflicting account."

    I had noticed that I sometimes had to come in through another door when I went back and forth between Docs, Picasa, and mail, let's say, but never understood why, since it was the same username and password, and I didn't get any communication about it at the time.

    Google is now in the process of giving Google Apps users access to the additional apps and providing instructions for consolidating the two accounts. But if I want to move my Picasa site into the new Apps structure, I have to download my albums and re-synch them. This would be disastrous for me, as I have hundreds of photos embedded in my websites, and new web addresses would break all the connections.

    The alternative seems to be to rename my "personal" (non-Apps) accounts, as described at http://www.google.com/support/a/bin/answer.py?answer=185186:

        Users with conflicting Google Accounts can easily resolve their conflicts by renaming their personal Google Accounts, and the data in their personal accounts will remain safe and accessible to them. Here's how a user can rename their personal Google Account:
        * Step 1: Visit www.google.com/accounts and sign in with your personal Google Account
        * Step 2: Click 'Change email' under 'Personal Settings'
        * Step 3: Enter a different email address where you can receive mail, enter your password, and click 'Save email address'
        * Step 4: Check your other email
        If your users don't have different email addresses where they can receive mail, they can resolve the conflict by renaming their personal Google Accounts to @gmail.com addresses instead.

    Sounds easy enough, right? I gave them a gmail address. The wizard said "sorry you can't use a gmail account for this", which contradicts the last paragraph above, but OK: I switched to a new email address I had just created for one of my domains. I can send email back and forth between this account and my Google Apps account with no problem. But when I try to use it as a replacement on the "personal" side, I always get "The password you gave is incorrect." I have tried it over and over and know the password is correct.

    Since I like to get all my emails through one web interface, I initially had the new email set up as an add-on to my Google Apps email account. But noting that the instructions said the "personal account" email could not be associated with any other Gmail account, I took it off and went back to accessing it via Horde so there would be no conflict there, which seemed to make no difference.

    I can't figure out why it won't accept the password. Does anyone have any thoughts about that? Or suggestions for another way to resolve my Picasa problem? Any help at all is greatly appreciated. Lee

    Read the article

  • Squid not caching files (Randomly)

    - by Heinrich
    I want to use an intercepting squid server to cache specific large zip files that users in my network download frequently. I have configured squid on a gateway machine, and caching is working for "static" zip files that are served from an Apache web server outside our network. The files that I want to have cached by squid are zip files of roughly 100 MB which are served from a Heroku-hosted Rails application. I set an ETag header (SHA hash of the zip file on the server) and a Cache-Control: public header. However, these files are not cached by squid. This, for example, is a request that is not cached:

        $ curl --no-keepalive -v -o test.zip --header "X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a" --header "X-Customer: 5" "http://MY_APP.herokuapp.com/api/device/v1/media/download?version=latest"
        * Adding handle: conn: 0x7ffd4a804400
        * Adding handle: send: 0
        * Adding handle: recv: 0
        ...
        > GET /api/device/v1/media/download?version=latest HTTP/1.1
        > User-Agent: curl/7.30.0
        > Host: MY_APP.herokuapp.com
        > Accept: */*
        > X-Access-Key: 20767ed397afdea90601fda4513ceb042fe6ab4e51578da63d3bc9b024ed538a
        > X-Customer: 5
        >
          0  0  0  0  0  0  0  0 --:--:--  0:00:09 --:--:--  0
        < HTTP/1.1 200 OK
        * Server Cowboy is not blacklisted
        < Server: Cowboy
        < Date: Mon, 18 Aug 2014 14:13:27 GMT
        < Status: 200 OK
        < X-Frame-Options: SAMEORIGIN
        < X-Xss-Protection: 1; mode=block
        < X-Content-Type-Options: nosniff
        < ETag: "95e888938c0d539b8dd74139beace67f"
        < Content-Disposition: attachment; filename="e7cce850ae728b81fe3f315d21a560af.zip"
        < Content-Transfer-Encoding: binary
        < Content-Length: 125727431
        < Content-Type: application/zip
        < Cache-Control: public
        < X-Request-Id: 7ce6edb0-013a-4003-a331-94d2b8fae8ad
        < X-Runtime: 1.244251
        < X-Cache: MISS from AAA.fritz.box
        < Via: 1.1 vegur, 1.1 AAA.fritz.box (squid/3.3.11)
        < Connection: keep-alive

    In the logs squid is reporting a TCP_MISS. This is the relevant excerpt from my squid file:

        # Squid normally listens to port 3128
        http_port 3128
        http_port 3129 intercept

        # Uncomment and adjust the following to add a disk cache directory.
        maximum_object_size 1000 MB
        maximum_object_size_in_memory 1000 MB
        cache_dir ufs /usr/local/var/cache/squid 10000 16 256
        cache_mem 2000 MB

        # Leave coredumps in the first cache dir
        coredump_dir /usr/local/var/cache/squid
        cache_store_log daemon:/usr/local/var/logs/cache_store.log

        #refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern -i .(zip) 525600 100% 525600 override-expire ignore-no-cache ignore-no-store
        refresh_pattern . 0 20% 4320

        ## DNS Configuration
        dns_nameservers 8.8.8.8 8.8.4.4

    After trying around for some time, I realized that squid sometimes decides that my file is cacheable and sometimes not, depending on whether and when I enable/disable the dns_nameservers directive. What could be wrong here?
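    One observation, offered as a guess rather than a confirmed diagnosis: refresh_pattern takes a regular expression, and the download URL above contains no ".zip" anywhere, so the ".(zip)" rule can never match it and the catch-all "refresh_pattern . 0 20% 4320" applies instead. A sketch of patterns that would at least be reachable for this traffic (options copied from the original line):

        # Match the API download path itself, since the URL has no file extension.
        refresh_pattern -i /api/device/v1/media/download 525600 100% 525600 override-expire ignore-no-cache ignore-no-store
        # A correctly escaped and anchored rule for URLs that really end in .zip.
        refresh_pattern -i \.zip$ 525600 100% 525600 override-expire ignore-no-cache ignore-no-store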

    Read the article

  • How can I check if PHP was compiled with the UNICODE version of the Win32 API?

    - by Wesley Murch
    This is related to this Stack Overflow post: glob() can't find file names with multibyte characters on Windows? I'm having issues with PHP and files that have multibyte characters on Windows. Here's my test case:

        print_r(scandir('./uploads/'));
        print_r(glob('./uploads/*'));

    Correct output on a remote UNIX server:

        Array
        (
            [0] => .
            [1] => ..
            [2] => filename-äöü.jpg
            [3] => filename.jpg
            [4] => test?test.jpg
            [5] => ??? ?????.jpg
            [6] => ?????????.jpg
            [7] => ???.jpg
        )
        Array
        (
            [0] => ./uploads/filename-äöü.jpg
            [1] => ./uploads/filename.jpg
            [2] => ./uploads/test?test.jpg
            [3] => ./uploads/??? ?????.jpg
            [4] => ./uploads/?????????.jpg
            [5] => ./uploads/???.jpg
        )

    Incorrect output locally on Windows:

        Array
        (
            [0] => .
            [1] => ..
            [2] => ??? ?????.jpg
            [3] => ???.jpg
            [4] => ?????????.jpg
            [5] => filename-äöü.jpg
            [6] => filename.jpg
            [7] => test?test.jpg
        )
        Array
        (
            [0] => ./uploads/filename-äöü.jpg
            [1] => ./uploads/filename.jpg
        )

    Here's a relevant excerpt from the answer I chose to accept (which is actually a quote from an article posted online over two years ago), from the comments on this article: http://www.rooftopsolutions.nl/blog/filesystem-encoding-and-php

        The output from your PHP installation on Windows is easy to explain: you installed the wrong version of PHP, and used a version not compiled to use the Unicode version of the Win32 API. For this reason, the filesystem calls used by PHP will use the legacy "ANSI" API, and so the C/C++ libraries linked with this version of PHP will first try to convert your UTF-8-encoded PHP string into the local "ANSI" codepage selected in the running environment (see the CHCP command before starting PHP from a command-line window).

        Your version of Windows is MOST PROBABLY NOT responsible for this weird thing. Actually, it is YOUR version of PHP which is not compiled correctly, and which uses the legacy ANSI version of the Win32 API (for compatibility with the legacy 16-bit versions of Windows 95/98, whose filesystem support in the kernel had no direct support for Unicode, but used an internal conversion layer to convert Unicode to the local ANSI codepage before using the actual ANSI version of the API).

        Recompile PHP using the compiler option to use the UNICODE version of the Win32 API (which should be the default today, and anyway always the default for PHP installed on a server that will NEVER be Windows 95 or Windows 98...).

    I can't confirm whether this is my problem or not. I used phpinfo() and did not find anything interesting, but I wasn't sure what to look for. I've been using XAMPP for easy installations, so I'm really not sure exactly how it was installed. I'm using Windows 7, 64-bit, so forgive my ignorance, but I'm not even sure if "Win32" is relevant here. How can I check if my current version of PHP was compiled with the configuration mentioned above?

        PHP Version: 5.3.8
        System: Windows NT WES-PC 6.1 build 7601 (Windows 7 Home Premium Edition Service Pack 1) i586
        Build Date: Aug 23 2011 11:47:20
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--disable-isapi" "--enable-debug-pack" "--disable-isapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=D:\php-sdk\oracle\instantclient11\sdk,shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze"
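    For a concrete way to test the "ANSI vs. Unicode build" hypothesis without recompiling anything, here is a small empirical probe. It is an illustration rather than an official diagnostic, and it assumes the script file itself is saved as UTF-8:

        <?php
        // Write a file whose name contains multibyte characters, then check
        // whether glob() can see it again. If the round-trip fails, this
        // build's filesystem calls are going through the legacy ANSI code page.
        $name = './uploads/probe-äöü.txt';
        file_put_contents($name, 'test');
        $listed = array_map('basename', (array) glob('./uploads/*'));
        var_dump(in_array(basename($name), $listed));
        unlink($name);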

    Read the article

  • Do hard drive enclosures fail/is it the HDD or enclosure?

    - by x0a
    I'm having a whole host of problems with an external hard drive that was working just fine a couple of hours ago. I've had this problem once before, about 3 months ago; here's what I documented then:

        So a couple of hours ago I turned off all my computers and shut off the power to all my devices in my room, then went and turned the power off at the main switch so I could change an outlet. A couple hours later, after I've already slowly turned everything back on, I go to my Xbox to try and watch a movie and it can't seem to list any of the movies I've got. So I go to my desktop to find that my external hard drive isn't there, even though it's on and connected. It's also stationary and hidden behind something, so there's not a whole lot of tampering/physical wear on that external.

        I plug it into my laptop to try and see what's going on. It starts making this endless loud screeching noise; none of the clicking that's usually associated with HD damage. It's not listed in My Computer, and it shows up in Disk Management as "uninitialized", asking me to choose between two different partition types. After carefully disconnecting it and connecting it back, it asks me to format it, which I cancel.

        I start googling about my issue, starting to accept the situation, torn as hell and helpless and just about ready to toss the thing. Suddenly the screeching stops, after almost 45 minutes of it going, and Disk Management lists the drive as "Online" and "Healthy". Explorer pops up with all my files!

        I'm still being really careful with it, wary, and treating it as though it's in fragile shape. I've downloaded some S.M.A.R.T. software to read the values, and everything is listed as "OK": no reallocated sectors, no read errors, no seek errors. I also ran a quick self-test, which completed without error. Everything seems fine. It looks to be a perfectly healthy external hard drive. So what the hell was that about? Was it doing some sort of maintenance or self-test? How am I supposed to tell the difference? I would've undoubtedly killed the drive for sure had it gone on a bit longer.

    I've got the same problem now, with one exception: it doesn't magically reappear after the screeching stops. Occasionally I manage to get some S.M.A.R.T. diagnostics information, which basically reads everything as fine. The only problem is that my HD isn't initializing, so I can't access anything on it. I'm able to successfully run a quick SMART test but not an extended one (I've only tried it once, but I got conflicting indications as to whether it was actually making any progress or not; it was stuck on the random-read test).

    So, final question (if all else fails): could the hard drive enclosure be failing rather than the HDD? Is this a likely possibility at all? How would I know?

    Read the article

  • Pushing DNSSEC updates with offline keys

    - by eggyal
    In a non-professional capacity, I look after the DNS of some 18 domains: mostly personal/vanity domains for immediate family. I outsource the whole shebang to an inexpensive managed hosting provider with a web interface through which I manage the zones; since the provider also offers DNSSEC, I have successfully deployed that too.

    These domains are so unimportant that an attack targetted against them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could protect against such an attack, but only if the zone's private keys are not held by the hosting provider.

    So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host?

    The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zonefiles to the master as required. The problem is that the only machine I (want to*) control is my personal laptop, which usually connects from a typical home ADSL (behind NAT over a dynamically-assigned IP address). Having them slave from that (e.g. with a very long Expiry time on the zone for periods when my laptop is offline/unavailable) would not only require a Dynamic DNS record from which they can slave (if indeed they can slave from a named host rather than a static IP address), but would also involve me running a DNS server on my laptop and opening both it and my home network up to the incoming zone transfer requests: not ideal.

    I would prefer a much more push-oriented design, whereby my laptop initiates transfer of offline-signed zonefiles/updates to the provider's servers. I looked into whether nsupdate could fit the bill: documentation is a little sketchy, but my testing (with BIND 9.7) suggests it can indeed update DNSSEC zones, but only where the server holds the keys to perform the zone signing; I have not found a way to have it take an update including the relevant RRSIG/NSEC/etc. records and have the server accept them. Is this a supported use-case?

    If not, I suspect the only solutions which could fit the bill will involve non-DNS-based transfer of the zone updates, and I would welcome recommendations that are supported by (hopefully inexpensive) hosting providers: SFTP/SCP? rsync? RDBMS replication? Proprietary API?

    Finally, what would be the practical implications of such a setup? Key rotation is jumping out at me as being an obvious difficulty, especially if my laptop is offline for extended periods. But the zones are extremely stable, so perhaps I could get away with long-lived ZSKs**...?

    * Whilst I could run a shadow/hidden master on e.g. an outsourced VPS, I dislike the overhead of having to secure / manage / monitor / maintain yet another system; not to mention the additional financial costs of so doing.

    ** Okay, this would enable a concerted attacker to replay outdated records, but the risk and impact of such are both tolerable in the case of these domains.
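    For reference, the offline-signing half of this is straightforward with BIND's tooling; a minimal sketch (the key file names and upload target are hypothetical, and it assumes the provider accepts whole signed zonefiles over SSH):

        # Sign the zone offline; the KSK follows -k, the ZSK is listed after
        # the zonefile. Output lands in db.example.com.signed.
        dnssec-signzone -o example.com -k Kexample.com.+008+11111.key \
            db.example.com Kexample.com.+008+22222.key

        # Push the signed zone to the provider.
        scp db.example.com.signed user@dns-provider.example:/zones/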

    Read the article

  • Problem installing build-essential and upgrading g++ on Ubuntu 8.04

    - by ehsanul
    I'm having some trouble with dependencies, it seems, but I don't really know how to resolve the issue. Here's the output:

        ~:) sudo apt-get install build-essential
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          build-essential: Depends: g++ (>= 4:4.3.1) but 4:4.2.3-1ubuntu6 is to be installed
        E: Broken packages

        ~:) sudo apt-get install g++
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.

        Since you only requested a single operation it is extremely likely that
        the package is simply not installable and a bug report against that
        package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed
               Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed
               Depends: g++-4.3 (>= 4.3.1-1) but it is not going to be installed
               Depends: gcc-4.3 (>= 4.3.1-1) but it is not installable
        E: Broken packages
        ~:)

    Edit: I just tried aptitude instead of apt-get, as suggested. That doesn't work either; it had other problems:

        ~:) sudo aptitude install build-essential
        [sudo] password for ehsanul:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        Building tag database... Done
        The following packages are BROKEN:
          g++ g++-4.3 libstdc++6-4.3-dev
        The following packages have been automatically kept back:
          dpkg-dev fakeroot libdns35 libisc35 linux-libc-dev patch
        The following NEW packages will be automatically installed:
          libgmp3c2 libmpfr1ldbl
        The following packages have been kept back:
          adobe-flashplugin bind9-host dnsutils gvfs gvfs-backends gvfs-fuse libatm1
          libbind9-30 libgvfscommon0 libisccc30 libisccfg30 liblwres30
          libnautilus-extension1 linux-headers-2.6.24-24 linux-headers-2.6.24-24-generic
          linux-image-2.6.24-24-generic nautilus nautilus-data
        The following NEW packages will be installed:
          libgmp3c2 libmpfr1ldbl
        The following packages will be upgraded:
          build-essential
        The following partially installed packages will be configured:
          timidity
        2 packages upgraded, 4 newly installed, 0 to remove and 24 not upgraded.
        Need to get 775kB/6265kB of archives. After unpacking 20.3MB will be used.
        The following packages have unmet dependencies:
          libstdc++6-4.3-dev: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package.
                              Depends: libstdc++6 (>= 4.3.2-1ubuntu11) but 4.2.4-1ubuntu4 is installed.
          g++-4.3: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package.
                   Depends: gcc-4.3 (= 4.3.2-1ubuntu11) which is a virtual package.
                   Depends: libc6 (>= 2.8~20080505) but 2.7-10ubuntu4 is installed.
          g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed.
               Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed.
               Depends: gcc-4.3 (>= 4.3.1-1) which is a virtual package.
        Resolving dependencies...
        The following actions will resolve these dependencies:
        Keep the following packages at their current version:
          build-essential [11.3ubuntu1 (hardy, now)]
          g++ [4:4.2.3-1ubuntu6 (hardy-updates, now)]
          g++-4.3 [Not Installed]
          libstdc++6-4.3-dev [Not Installed]
        Score is -9852
        Accept this solution? [Y/n/q/?]
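    For what it's worth, a sketch of the first things worth checking: the versions being demanded (4.3.2-1ubuntu11, libc6 2.8) come from a release newer than Hardy's, which suggests the package lists mix repositories from two Ubuntu releases:

        # Look for any non-hardy entries (intrepid, PPAs, backports, etc.).
        grep -rn '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

        # Refresh the lists and let apt attempt to repair broken installs.
        sudo apt-get update
        sudo apt-get install -f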

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues, and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients, though:

        130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok
        130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized
        localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok

    This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all, only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, and which looks quite similar to the example cupsd.conf. I could be wrong, but it looks like the final <Limit All> block ought to allow the actions the logs complain about.

        MaxLogSize 2000000000

        # Log general information in error_log - change "info" to "debug" for
        # troubleshooting...
        LogLevel info
        #AccessLog syslog
        #ErrorLog syslog
        #PageLog syslog

        # Administrator user group...
        SystemGroup sys root lp

        # Only listen for connections from the local machine.
        Listen 0.0.0.0:631
        Listen :::631
        Listen /var/run/cups/cups.sock
        ServerName <snipped>

        # Show shared printers on the local network.
        Browsing Off
        BrowseOrder allow,deny
        # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.)
        BrowseAllow @LOCAL

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        <Location />
          Order allow,deny
          Allow all
        </Location>

        # Restrict access to the admin pages...
        <Location /admin>
          AuthType Default
          Require user @SYSTEM
          Encryption Required
          Order allow,deny
          Allow all
        </Location>

        # Restrict access to configuration files...
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Encryption Required
          Order allow,deny
          Allow all
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job-related operations must be done by the owner or an administrator...
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order allow,deny
          </Limit>
        </Policy>
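    A guess at where to poke, not a verified fix: Create-Job is not named in any of the <Limit> blocks above, so it falls through to <Limit All>, and an "Order allow,deny" block with no Allow line denies by default. A sketch of the change worth testing:

        <Limit All>
          Order allow,deny
          Allow @LOCAL
        </Limit>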

    Read the article

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
    OK, in an attempt to get some response, a TL;DR version. I know that the following command:

        iptables -A PREROUTING -t nat -i eth0 --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080

    ...will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if someone decides to enable SSH, they can connect directly without worrying about iptables. What is needed to reliably forward all traffic from an external IP to an internal IP, and vice versa?

    Extended version

    I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST) with several virtual machines (VM) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VMs run under VirtualBox and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it is lagging greatly. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) are running Linux. What I did was the following:

        1. Configure the VM to have a static IP in the network of the vboxnet0 adapter.
        2. Add an IP alias to the host, registering to the dedicated (outside) IP.
        3. Set up iptables to allow traffic to come through (via sysctl).
        4. Configure iptables to DNAT and SNAT data from a given IP address to the internal address.

    iptables commands:

        sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        sudo iptables -A POSTROUTING -t nat -j MASQUERADE
        iptables -t nat -I PREROUTING -d $OUT_IP -I eth0 -j DNAT --to-destination $IN_IP
        iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

    Now the site works, but is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now.

    Some background info: before this, the site was working with basic port forwarding. E.g. port 80 was mapped to port 8080 using iptables. In VirtualBox (with the network adapter configured as NAT), a port forwarding rule the other way around made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc.). Second, it only allowed one IP address to use port 80. To resolve things, multiple external IP addresses are used for different (sub)domains. Likewise, the "VirtualBox" network will contain the virtual machines:

        DNS            Ext. IP    Adapter    VM            "VirtualBox" IP
        ------------------------------------------------------------------
        a.example.com  1.1.1.1    eth0:1     vm_guest_1    192.168.56.1
        b.example.com  2.2.2.2    eth0:2     vm_guest_2    192.168.56.2
        c.example.com  3.3.3.3    eth0:3     vm_guest_3    192.168.56.3

    And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (or, put differently, from 1.1.1.1 to 192.168.56.1), and to achieve this with an acceptable speed :).
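    Since the TL;DR asks for the whole-IP version, here is a minimal sketch of a full forward for one guest (addresses as in the table above; note the interface match is lowercase -i, and the FORWARD chain has to admit the traffic in both directions):

        OUT_IP=1.1.1.1
        IN_IP=192.168.56.1

        # Rewrite the destination of anything arriving for the public IP...
        iptables -t nat -A PREROUTING -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
        # ...and the source of everything the guest sends back out.
        iptables -t nat -A POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

        # Let the forwarded packets through the filter table.
        iptables -A FORWARD -d $IN_IP -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -s $IN_IP -j ACCEPT

        # Routing must be enabled for any of this to matter.
        sysctl -w net.ipv4.ip_forward=1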

    Read the article

  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece, and I'm ready for roll-out!

    I need to set up Exchange so that it RECEIVES mail from our existing mail server as a forward (aliases on the existing mail server forward mail from [email protected] to [email protected]; not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected], or at the absolute least with the reply-to set to this address). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without routing through the existing mail server.

        External Domain: domain.com
        Internal Domain: domain.local
        Internet Domain Name Set in SBS Console: domain.net

    When I go to http://remote.domain.net I get the Remote Web Workspace and can log in to both Sharepoint and OWA. I can send an email from OWA to a GMail account. I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email addresses. I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and > with { and }):

        Delivery has failed to these recipients or distribution lists:
        [email protected]
        The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.

        Generating server: IFEXCHANGE.domain.local
        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:
        Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by
         IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi;
         Tue, 17 Aug 2010 14:14:14 -0400
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: John Doe {[email protected]}
        To: "[email protected]" {[email protected]}
        Date: Tue, 17 Aug 2010 14:14:12 -0400
        Subject: asdf
        Thread-Topic: asdf
        Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA==
        Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        MIME-Version: 1.0

    I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty, but unfortunately I've got only 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why it's not working; my question is how to resolve this. The email account exists and receives mail just fine, yet when I send to it from the Exchange server it fails.

    EDIT: In Org Config > Hub Transport, the Email Address Policies list has 2 entries. "Windows SBS Email Address Policy" is set up to: Include All Recipient Types, no conditions, and SMTP %[email protected]. "Default Policy" is set to: Include All Recipient Types, no conditions, and SMTP @domain.net.

    Three Authoritative Accepted domains:

        domain.com
        domain.local (Default)
        domain.net

    The Remote Domains tab has two entries:

        Default with domain *
        Windows SBS Company Web Domain with domain companyweb.

    Read the article

  • Android: Trusting all Certificates using HttpClient over HTTPS

    - by psuguitarplayer
    Hi all. I recently posted a question regarding the HttpClient over Https (found here). I've made some headway, but I've run into new issues. As with my last problem, I can't seem to find an example anywhere that works for me. Basically, I want my client to accept any certificate (because I'm only ever pointing to one server), but I keep getting a javax.net.ssl.SSLException: Not trusted server certificate exception. So this is what I have:

        public void connect() throws A_WHOLE_BUNCH_OF_EXCEPTIONS {
            HttpPost post = new HttpPost(new URI(PROD_URL));
            post.setEntity(new StringEntity(BODY));

            KeyStore trusted = KeyStore.getInstance("BKS");
            trusted.load(null, "".toCharArray());
            SSLSocketFactory sslf = new SSLSocketFactory(trusted);
            sslf.setHostnameVerifier(SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

            SchemeRegistry schemeRegistry = new SchemeRegistry();
            schemeRegistry.register(new Scheme("https", sslf, 443));
            SingleClientConnManager cm = new SingleClientConnManager(post.getParams(), schemeRegistry);
            HttpClient client = new DefaultHttpClient(cm, post.getParams());
            HttpResponse result = client.execute(post);
        }

    And here's the error I'm getting:

        W/System.err( 901): javax.net.ssl.SSLException: Not trusted server certificate
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:360)
        W/System.err( 901):   at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:92)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:321)
        W/System.err( 901):   at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:129)
        W/System.err( 901):   at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)
        W/System.err( 901):   at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)
        W/System.err( 901):   at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:348)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:129)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.access$0(MainActivity.java:77)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity$2.run(MainActivity.java:49)
        W/System.err( 901): Caused by: java.security.cert.CertificateException: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.checkServerTrusted(TrustManagerImpl.java:157)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:355)
        W/System.err( 901):   ... 12 more
        W/System.err( 901): Caused by: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
        W/System.err( 901):   at java.security.cert.PKIXParameters.checkTrustAnchors(PKIXParameters.java:645)
        W/System.err( 901):   at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:89)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.<init>(TrustManagerImpl.java:89)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerFactoryImpl.engineGetTrustManagers(TrustManagerFactoryImpl.java:134)
        W/System.err( 901):   at javax.net.ssl.TrustManagerFactory.getTrustManagers(TrustManagerFactory.java:226)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.createTrustManagers(SSLSocketFactory.java:263)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:190)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:216)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:107)
        W/System.err( 901):   ... 2 more
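    The "trust anchors set is empty" cause suggests the BKS keystore loaded with load(null, ...) contains nothing for the default TrustManager to work with. A sketch of the commonly used workaround, a socket factory whose TrustManager accepts every chain (not production-safe, since it disables server authentication entirely):

        import java.io.IOException;
        import java.net.Socket;
        import java.security.KeyStore;
        import java.security.cert.X509Certificate;
        import javax.net.ssl.SSLContext;
        import javax.net.ssl.TrustManager;
        import javax.net.ssl.X509TrustManager;
        import org.apache.http.conn.ssl.SSLSocketFactory;

        public class TrustAllSSLSocketFactory extends SSLSocketFactory {
            private final SSLContext sslContext = SSLContext.getInstance("TLS");

            public TrustAllSSLSocketFactory(KeyStore truststore) throws Exception {
                super(truststore);
                // A TrustManager that never rejects a certificate chain.
                TrustManager tm = new X509TrustManager() {
                    public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                    public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                    public X509Certificate[] getAcceptedIssuers() { return null; }
                };
                sslContext.init(null, new TrustManager[] { tm }, null);
            }

            @Override
            public Socket createSocket(Socket socket, String host, int port, boolean autoClose) throws IOException {
                return sslContext.getSocketFactory().createSocket(socket, host, port, autoClose);
            }

            @Override
            public Socket createSocket() throws IOException {
                return sslContext.getSocketFactory().createSocket();
            }
        }

    The connect() code above would then use "SSLSocketFactory sslf = new TrustAllSSLSocketFactory(trusted);" in place of the current factory construction.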

    Read the article

  • Creating Entity Framework objects with Unity for Unit of Work/Repository pattern

    - by TobyEvans
    Hi there. I'm trying to implement the Unit of Work/Repository pattern, as described here: http://blogs.msdn.com/adonet/archive/2009/06/16/using-repository-and-unit-of-work-patterns-with-entity-framework-4-0.aspx

    This requires each Repository to accept an IUnitOfWork implementation, e.g. an EF data context extended with a partial class to add an IUnitOfWork interface. I'm actually using .NET 3.5, not 4.0. My basic data access constructor looks like this:

        public DataAccessLayer(IUnitOfWork unitOfWork, IRealtimeRepository realTimeRepository)
        {
            this.unitOfWork = unitOfWork;
            this.realTimeRepository = realTimeRepository;
        }

    So far, so good. What I'm trying to do is add dependency injection using the Unity framework. Getting the EF object to be created with Unity was an adventure, as it had trouble resolving the constructor. What I did in the end was to create another, overloaded constructor in my partial class and mark it with [InjectionConstructor]:

        [InjectionConstructor]
        public communergyEntities(string connectionString, string containerName)
            : this()
        {

    (I know I need to pass the connection string to the base object; that can wait until I've got all the objects initialising correctly.)

    So, using this technique, I can happily resolve my Entity Framework object as an IUnitOfWork instance thus:

        using (IUnityContainer container = new UnityContainer())
        {
            container.RegisterType<IUnitOfWork, communergyEntities>();
            container.Configure<InjectedMembers>()
                .ConfigureInjectionFor<communergyEntities>(
                    new InjectionConstructor("a", "b"));
            DataAccessLayer target = container.Resolve<DataAccessLayer>();

    Great. What I need to do now is create the reference to the repository object for the DataAccessLayer. The DAL only needs to know the interface, so I'm guessing that I need to instantiate it as part of the Unity Resolve statement, passing it the appropriate IUnitOfWork interface.

    In the past, I would have just passed the Repository constructor the db connection string, and it would have gone away, created a local Entity Framework object, and used that just for the lifetime of the Repository method. This is different, in that I create an Entity Framework instance as an IUnitOfWork implementation during the Unity Resolve statement, and it's that instance I need to pass into the constructor of the Repository. Is that possible, and if so, how?

    I'm wondering if I could make the Repository a property and mark it as a Dependency, but that still wouldn't solve the problem of how to create the Repository with the IUnitOfWork object that the DAL is being resolved with.

    I'm not sure if I've understood this pattern correctly and will happily take advice on the best way to implement it. Entity Framework is staying, but Unity can be swapped out if it's not the best approach. If I've got the whole thing upside down, please tell me. Thanks
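    A sketch of one direction to explore (names hypothetical): if both registrations share a lifetime manager that caches the instance, Unity hands the same communergyEntities object to everything in the graph that asks for IUnitOfWork, so the repository and the DAL end up sharing one unit of work:

        // Hypothetical registration: one cached IUnitOfWork per container,
        // plus a concrete repository that takes IUnitOfWork in its constructor.
        container.RegisterType<IUnitOfWork, communergyEntities>(
            new ContainerControlledLifetimeManager(),
            new InjectionConstructor("connectionString", "containerName"));
        container.RegisterType<IRealtimeRepository, RealtimeRepository>();

        // Unity now builds the whole graph; the repository and the DAL both
        // receive the same IUnitOfWork instance.
        DataAccessLayer target = container.Resolve<DataAccessLayer>();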

    Read the article

  • IdHttp Post Method Delphi 2010

    - by Dusten S
    Like others before me, I'm having trouble using the IdHttp (Indy 10.5.5) component in Delphi 2010. The code works fine in Delphi 7:

        var
          XMLString : AnsiString;
          lService : AnsiString;
          ResponseStream : TMemoryStream;
          InputStringList : TStringList;
        begin
          ResponseStream := TMemoryStream.Create;
          InputStringList := TStringList.Create;
          XMLString := '<?xml version="1.0" encoding="ISO-8859-1"?> '+
            '<!DOCTYPE pnet_imessage_send PUBLIC "-//PeopleNet//pnet_imessage_send" "http://open.peoplenetonline.com/dtd/pnet_imessage_send.dtd"> '+
            '<pnet_imessage_send> '+
            '  <cid>username</cid> '+
            '  <pw>password</pw> '+
            '  <vehicle_number>tr01</vehicle_number> '+
            '  <deliver>now</deliver> '+
            '  <action> '+
            '    <action_type>reply_with_freeform</action_type> '+
            '    <urgent_reply>yes</urgent_reply> '+
            '  </action> '+
            '  <freeform_message>Test Message Version 2</freeform_message> '+
            '</pnet_imessage_send> ';
          lService := 'imessage_send';
          InputStringList.Values['service'] := lService;
          InputStringList.Values['xml'] := XMLString;
          try
            IdHttp1.Request.Accept := '*/*';
            IdHttp1.Request.ContentType := 'text/XML';
            IdHTTP1.Post('http://open.peoplenetonline.com/scripts/open.dll', InputStringList, ResponseStream);
            ...
          finally
            ResponseStream.Free;
            InputStringList.Free;
          end;

    The only differences so far between this and the D7 code are that I've changed the string types to AnsiString and added the HTTP Request properties. The response I get back from the server is 'XML failed to parse. Whitespace expected at Line:1 Position: 19'. I'm assuming the XML got garbled somewhere in the process, but I can't figure out where I'm going wrong. Any ideas?

    Read the article

  • ASP.NET DataList - defining "columns/rows" when repeating horizontal and using flow layout

    - by Ian Robinson
    Here is my DataList:

        <asp:DataList id="DataList" Visible="false" RepeatDirection="Horizontal"
            Width="100%" HorizontalAlign="Justify" RepeatLayout="Flow" runat="server">
            [Contents Removed]
        </asp:DataList>

    This generates markup that has each item wrapped in a span. From there, I'd like to break each of these spans out into rows of three columns. Ideally I would like something like this:

        <div>
            <span>Item 1</span>
            <span>Item 2</span>
            <span>Item 3</span>
        </div>
        <div>
            <span>Item 4</span>
            <span>Item 5</span>
            <span>Item 6</span>
        </div>
        [etc]

    The closest I can get to this is to set RepeatColumns to "3", and then a <br> is inserted after every three items in the DataList:

        <span>Item 1</span>
        <span>Item 2</span>
        <span>Item 3</span>
        <br>
        <span>Item 4</span>
        <span>Item 5</span>
        <span>Item 6</span>
        <br>

    This gets me kind of close, but really doesn't do the trick; I still can't control the layout the way I'd like to be able to. Can anyone suggest a way to make this better? If I could implement the above example, that would be perfect. However, I'd accept a less elegant solution as well, as long as it's more flexible than <br> (such as inserting a <span class="clear"></span> instead of <br>).
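    If moving off DataList is an option, ASP.NET 3.5's ListView can emit exactly that div-of-three-spans structure through its group templates; a sketch (the bound field name is assumed):

        <asp:ListView ID="ItemsList" runat="server" GroupItemCount="3">
            <LayoutTemplate>
                <asp:PlaceHolder ID="groupPlaceholder" runat="server" />
            </LayoutTemplate>
            <GroupTemplate>
                <!-- One div per group of three items. -->
                <div>
                    <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
                </div>
            </GroupTemplate>
            <ItemTemplate>
                <span><%# Eval("Title") %></span>
            </ItemTemplate>
        </asp:ListView>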

    Read the article

  • Using JQuery to open a popup window and print

    - by TJ Kirchner
    Hello. A while back I created a lightbox plugin using jQuery that loads a URL specified in a link into a lightbox. The code is really simple:

        $('.readmore').each(function(i){
            $(this).popup();
        });

    and the link would look like this:

        <a class='readmore' href='view-details.php?Id=11'>TJ Kirchner</a>

    The plugin can also accept arguments for width, height, a different URL, and more data to pass through. The problem I'm facing right now is printing the lightbox. I set it up so that the lightbox has a print button at the top of the box. That link opens up a new window and prints that window. This is all being controlled by the lightbox plugin. Here's what that code looks like:

        $('.printBtn').bind('click', function() {
            var url = options.url + ( ( options.url.indexOf('?') < 0 && options.data != "" ) ? '?' : '&' ) + options.data;
            var thePopup = window.open( url, "Member Listing", "menubar=0,location=0,height=700,width=700" );
            thePopup.print();
        });

    The problem is the script doesn't seem to wait until the window loads: it wants to print the moment the window appears. As a result, if I click "cancel" in the print dialog box, it pops up again and again until the window loads. The first time I tried printing I got a blank page; that might be because the window didn't finish loading. I need to find a way to alter the previous code block to wait until the window loads and then print. I feel like there should be an easy way to do this, but I haven't found it yet. Either that, or I need to find a better way to open a popup window and print from the lightbox script in the parent window, without altering the webpage code in the popup window.
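    A sketch of the load-then-print variant (note that window.open's second argument should not contain spaces in some browsers, hence the renamed window):

        $('.printBtn').bind('click', function() {
            var url = options.url + ( ( options.url.indexOf('?') < 0 && options.data != "" ) ? '?' : '&' ) + options.data;
            var thePopup = window.open( url, "MemberListing", "menubar=0,location=0,height=700,width=700" );
            // Defer printing until the popup document has actually loaded.
            thePopup.onload = function() {
                thePopup.print();
            };
        });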

    Read the article

  • Java RMI (Server: TCP Connection Idle/Client: Unmarshalexception (EOFException))

    - by Perry Dahl Christensen
    I'm trying to implement Sun's tutorial RMI application that calculates Pi. I'm having some serious problems and I can't find the solution, even though I've been searching the entire web and have asked several Java-skilled people. I'm hoping you can put an end to my frustrations. The crazy thing is that I can run the application from the cmd on my desktop computer. Trying the exact same thing, with the exact same code in the exact same directories, on my laptop produces the following errors. The problem occurs when I try to connect the client to the server. I don't believe that the error is due to my policy file, as I can run it on the desktop; it must be elsewhere. Has anyone tried the same, and can you give me a hint as to where my problem is, please?

    POLICYFILE SERVER:

        grant {
          permission java.security.AllPermissions;
          permission java.net.SocketPermission "*", "connect, resolve";
        };

    POLICYFILE CLIENT:

        grant {
          permission java.security.AllPermissions;
          permission java.net.SocketPermission "*", "connect, resolve";
        };

    SERVERSIDE ERRORS:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\STUDENT>cd\
        C:\>start rmiregistry
        C:\>java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:/c:/java/compute.jar -Djava.rmi.server.hostname=localhost -Djava.security.policy=c:/java/servertest.policy engine.ComputeEngine
        ComputeEngine bound
        Exception in thread "RMI TCP Connection(idle)" java.security.AccessControlException: access denied (java.net.SocketPermission 127.0.0.1:1440 accept,resolve)
                at java.security.AccessControlContext.checkPermission(Unknown Source)
                at java.security.AccessController.checkPermission(Unknown Source)
                at java.lang.SecurityManager.checkPermission(Unknown Source)
                at java.lang.SecurityManager.checkAccept(Unknown Source)
                at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.checkAcceptPermission(Unknown Source)
                at sun.rmi.transport.tcp.TCPTransport.checkAcceptPermission(Unknown Source)
                at sun.rmi.transport.Transport$1.run(Unknown Source)
                at java.security.AccessController.doPrivileged(Native Method)
                at sun.rmi.transport.Transport.serviceCall(Unknown Source)
                at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
                at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
                at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
                at java.lang.Thread.run(Unknown Source)

    CLIENTSIDE ERRORS:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\STUDENT>cd\
        C:\>java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:\C:\java\files\ -Djava.security.policy=c:/java/clienttest.policy client.ComputePi localhost 45
        ComputePi exception: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
                java.io.EOFException
                at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source)
                at sun.rmi.server.UnicastRef.invoke(Unknown Source)
                at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unknown Source)
                at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source)
                at $Proxy0.executeTask(Unknown Source)
                at client.ComputePi.main(ComputePi.java:18)
        Caused by: java.io.EOFException
                at java.io.DataInputStream.readByte(Unknown Source)
                ... 6 more
        C:\>

    Thanks in advance, Perry
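    One thing that jumps out in both policy files, offered as a guess rather than a verified fix: the permission class is java.security.AllPermission (singular). A misspelled permission class is never actually granted (it is kept as an unresolved permission), which would leave only the connect,resolve SocketPermission in effect and could produce exactly this accept,resolve denial. A corrected file would read:

        grant {
            // Note: AllPermission, not AllPermissions.
            permission java.security.AllPermission;
            permission java.net.SocketPermission "*", "connect, resolve";
        };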

    Read the article

  • Error using httlib's HTTPSConnection with PKCS#12 certificate

    - by Remi Despres-Smyth
    Hello. I'm trying to use httplib's HTTPSConnection for client validation, using a PKCS#12 certificate. I know the certificate is good, as I can connect to the server using it in MSIE and Firefox. Here's my connect function (the certificate includes the private key). I've pared it down to just the basics:

        def connect(self, cert_file, host, usrname, passwd):
            self.cert_file = cert_file
            self.host = host
            self.conn = httplib.HTTPSConnection(host=self.host, port=self.port,
                                                key_file=cert_file, cert_file=cert_file)
            self.conn.putrequest('GET', 'pathnet/,DanaInfo=200.222.1.1+')
            self.conn.endheaders()
            retCreateCon = self.conn.getresponse()
            if is_verbose:
                print "Create HTTPS connection, " + retCreateCon.read()

    (Note: no comments on the hard-coded path, please; I'm trying to get this to work first and will make it pretty afterwards. The hard-coded path is correct, as I connect to it in MSIE and Firefox. I changed the IP address for the post.)

    When I try to run this using a PKCS#12 certificate (a .pfx file), I get back what appears to be an OpenSSL error. Here is the entire error traceback:

        File "Usinghttplib_Test.py", line 175, in
          t.connect(cert_file=opts["-keys"], host=host_name, usrname=opts["-username"], passwd=opts["-password"])
        File "Usinghttplib_Test.py", line 40, in connect
          self.conn.endheaders()
        File "c:\python26\lib\httplib.py", line 904, in endheaders
          self._send_output()
        File "c:\python26\lib\httplib.py", line 776, in _send_output
          self.send(msg)
        File "c:\python26\lib\httplib.py", line 735, in send
          self.connect()
        File "c:\python26\lib\httplib.py", line 1112, in connect
          self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file)
        File "c:\python26\lib\ssl.py", line 350, in wrap_socket
          suppress_ragged_eofs=suppress_ragged_eofs)
        File "c:\python26\lib\ssl.py", line 113, in __init__
          cert_reqs, ssl_version, ca_certs)
        ssl.SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib

    Notice the OpenSSL error (the last entry in the list) mentions "PEM lib", which I found odd, since I'm not trying to use a PEM certificate. For kicks, I converted the PKCS#12 cert to a PEM cert and ran the same code using that. In that case, I received no error, I was prompted to enter the PEM pass phrase, and the code did attempt to reach the server. (I received the response "The service is not available. Please try again later.", but I believe that would be because the server does not accept the PEM cert. I can't connect to the server in Firefox using the PEM cert either.)

    Is httplib's HTTPSConnection supposed to support PKCS#12 certificates (that is, .pfx files)? If so, why does it look like OpenSSL is trying to load it inside the PEM lib? Am I doing this all wrong? Any advice is welcome.

    EDIT: The certificate file contains both the certificate and the private key, which is why I'm providing the same file name for both the HTTPSConnection's key_file and cert_file parameters.
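    For what it's worth, Python 2.x's ssl module (which httplib uses under the hood) only reads PEM-encoded files, which is why the PKCS#12 bundle has to be converted first; a sketch of the conversion:

        # Convert the PKCS#12 bundle to PEM (certificate + key in one file).
        # -nodes leaves the private key unencrypted so no pass-phrase prompt
        # blocks the script; protect the resulting file accordingly.
        openssl pkcs12 -in client.pfx -out client.pem -nodes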

    Read the article

  • Need help implementing simple socket server using GIOService (GLib, Glib-GIO)

    - by Mark Renouf
    I'm learning the basics of writing a simple, efficient socket server using GLib. I'm experimenting with GSocketService. So far I can only seem to accept connections, but then they are immediately closed. From the docs I can't figure out what step I am missing. I'm hoping someone can shed some light on this for me. When running the following:

        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Output from the server:

        # ./server
        New Connection from 127.0.0.1:36962
        New Connection from 127.0.0.1:36963
        New Connection from 127.0.0.1:36965

    Current code:

        /*
         * server.c
         *
         * Created on: Mar 10, 2010
         * Author: mark
         */
        #include <glib.h>
        #include <gio/gio.h>

        gchar *buffer;

        gboolean network_read(GIOChannel *source, GIOCondition cond, gpointer data)
        {
            GString *s = g_string_new(NULL);
            GError *error;
            GIOStatus ret = g_io_channel_read_line_string(source, s, NULL, &error);
            if (ret == G_IO_STATUS_ERROR)
                g_error("Error reading: %s\n", error->message);
            else
                g_print("Got: %s\n", s->str);
        }

        gboolean new_connection(GSocketService *service, GSocketConnection *connection,
                                GObject *source_object, gpointer user_data)
        {
            GSocketAddress *sockaddr = g_socket_connection_get_remote_address(connection, NULL);
            GInetAddress *addr = g_inet_socket_address_get_address(G_INET_SOCKET_ADDRESS(sockaddr));
            guint16 port = g_inet_socket_address_get_port(G_INET_SOCKET_ADDRESS(sockaddr));

            g_print("New Connection from %s:%d\n", g_inet_address_to_string(addr), port);

            GSocket *socket = g_socket_connection_get_socket(connection);
            gint fd = g_socket_get_fd(socket);
            GIOChannel *channel = g_io_channel_unix_new(fd);
            g_io_add_watch(channel, G_IO_IN, (GIOFunc) network_read, NULL);
            return TRUE;
        }

        int main(int argc, char **argv)
        {
            g_type_init();

            GSocketService *service = g_socket_service_new();
            GInetAddress *address = g_inet_address_new_from_string("127.0.0.1");
            GSocketAddress *socket_address = g_inet_socket_address_new(address, 4000);
            g_socket_listener_add_address(G_SOCKET_LISTENER(service), socket_address,
                                          G_SOCKET_TYPE_STREAM, G_SOCKET_PROTOCOL_TCP,
                                          NULL, NULL, NULL);
            g_object_unref(socket_address);
            g_object_unref(address);
            g_socket_service_start(service);
            g_signal_connect(service, "incoming", G_CALLBACK(new_connection), NULL);

            GMainLoop *loop = g_main_loop_new(NULL, FALSE);
            g_main_loop_run(loop);
        }
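    A guess at the missing step, based on the symptom: GSocketService owns the GSocketConnection only for the duration of the "incoming" handler, so when new_connection returns, the connection is unreffed and the socket closes. A sketch of keeping it alive by taking a reference of our own:

        /* In new_connection(): keep the connection alive after the handler
           returns. Release it later (g_object_unref) when tearing the channel
           down, e.g. from network_read() on EOF or error. */
        g_object_ref(connection);
        g_io_add_watch(channel, G_IO_IN, (GIOFunc) network_read, connection);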

    Read the article

  • How to improve the builder pattern?

    - by tangens
    Motivation

    Recently I searched for a way to initialize a complex object without passing a lot of parameters to the constructor. I tried it with the builder pattern, but I don't like the fact that I'm not able to check at compile time if I really set all needed values.

    Traditional builder pattern

    When I use the builder pattern to create my Complex object, the creation is more "typesafe", because it's easier to see what an argument is used for:

        new ComplexBuilder()
            .setFirst( "first" )
            .setSecond( "second" )
            .setThird( "third" )
            ...
            .build();

    But now I have the problem that I can easily miss an important parameter. I can check for it inside the build() method, but that is only at runtime. At compile time there is nothing that warns me if I missed something.

    Enhanced builder pattern

    Now my idea was to create a builder that "reminds" me if I missed a needed parameter. My first try looks like this:

        public class Complex {
            private String m_first;
            private String m_second;
            private String m_third;

            private Complex() {}

            public static class ComplexBuilder {
                private Complex m_complex;

                public ComplexBuilder() {
                    m_complex = new Complex();
                }

                public Builder2 setFirst( String first ) {
                    m_complex.m_first = first;
                    return new Builder2();
                }

                public class Builder2 {
                    private Builder2() {}
                    Builder3 setSecond( String second ) {
                        m_complex.m_second = second;
                        return new Builder3();
                    }
                }

                public class Builder3 {
                    private Builder3() {}
                    Builder4 setThird( String third ) {
                        m_complex.m_third = third;
                        return new Builder4();
                    }
                }

                public class Builder4 {
                    private Builder4() {}
                    Complex build() {
                        return m_complex;
                    }
                }
            }
        }

    As you can see, each setter of the builder class returns a different internal builder class. Each internal builder class provides exactly one setter method, and the last one provides only a build() method. Now the construction of an object again looks like this:

        new ComplexBuilder()
            .setFirst( "first" )
            .setSecond( "second" )
            .setThird( "third" )
            .build();

    ...but there is no way to forget a needed parameter. The compiler wouldn't accept it.

    Optional parameters

    If I had optional parameters, I would use the last internal builder class, Builder4, to set them like a "traditional" builder does, returning itself.

    Questions

        * Is this a well-known pattern? Does it have a special name?
        * Do you see any pitfalls?
        * Do you have any ideas to improve the implementation - in the sense of fewer lines of code?
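    On the naming question: this construction is commonly called a step builder (sometimes a staged builder). A sketch of the same compile-time guarantee with one-method interfaces instead of nested classes, which trims the line count because a single builder can implement every step and return this:

        // Each step's return type exposes only the next legal call.
        public interface FirstStep  { SecondStep setFirst(String first); }
        public interface SecondStep { ThirdStep setSecond(String second); }
        public interface ThirdStep  { BuildStep setThird(String third); }
        public interface BuildStep  { Complex build(); }

    One builder class can then declare "implements FirstStep, SecondStep, ThirdStep, BuildStep", mutate its fields in each setter, and return itself, while callers still can't skip a step.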

    Read the article

  • SubSonic 2.x now supports TVP's - SqlDbType.Structure / DataTables for SQL Server 2008

    - by ElHaix
    For those interested, I have now modified the SubSonic 2.x code to recognize and support DataTable parameter types. You can read more about SQL Server 2008 features here: http://download.microsoft.com/download/4/9/0/4906f81b-eb1a-49c3-bb05-ff3bcbb5d5ae/SQL%20SERVER%202008-RDBMS/T-SQL%20Enhancements%20with%20SQL%20Server%202008%20-%20Praveen%20Srivatsav.pdf

    What this enhancement will now allow you to do is to create a partial StoredProcedures.cs class, with a method that overrides the stored procedure wrapper method.

    A bit about good form: my DAL has no direct table access, and my DB only has execute permissions for that user to my sprocs. As such, SubSonic only generates the AllStructs and StoredProcedures classes.

    The SPROC:

        ALTER PROCEDURE [dbo].[testInsertToTestTVP]
            @UserDetails TestTVP READONLY,
            @Result INT OUT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET @Result = -1

            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] ON
            INSERT INTO [dbo].[tbl_TestTVP] ( [GroupInsertID], [FirstName], [LastName] )
            SELECT [GroupInsertID], [FirstName], [LastName]
            FROM @UserDetails

            IF @@ROWCOUNT > 0
            BEGIN
                SET @Result = 1
                SELECT @Result
                RETURN @Result
            END
            --SET IDENTITY_INSERT [dbo].[tbl_TestTVP] OFF
        END

    The TVP:

        CREATE TYPE [dbo].[TestTVP] AS TABLE(
            [GroupInsertID] [varchar](50) NOT NULL,
            [FirstName] [varchar](50) NOT NULL,
            [LastName] [varchar](50) NOT NULL
        )
        GO

    When the auto-gen tool runs, it creates the following erroneous method:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(string UserDetails, int? Result)
        {
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");
            sp.Command.AddParameter("@UserDetails", UserDetails, DbType.AnsiString, null, null);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    It sets UserDetails as type string. As it's good form to have two folders for a SubSonic DAL, Custom and Generated, I created a StoredProcedures.cs partial class in Custom that looks like this:

        /// <summary>
        /// Creates an object wrapper for the testInsertToTestTVP Procedure
        /// </summary>
        public static StoredProcedure TestInsertToTestTVP(DataTable dt, int? Result)
        {
            DataSet ds = new DataSet();
            SubSonic.StoredProcedure sp = new SubSonic.StoredProcedure("testInsertToTestTVP", DataService.GetInstance("MyDAL"), "dbo");

            // TODO: Modify the SubSonic code base in sp.Command.AddParameter to accept
            // a parameter type of System.Data.SqlDbType.Structured, as it currently only accepts
            // System.Data.DbType.
            //sp.Command.AddParameter("@UserDetails", dt, System.Data.SqlDbType.Structured, null, null);
            sp.Command.AddParameter("@UserDetails", dt, SqlDbType.Structured);
            sp.Command.AddOutputParameter("@Result", DbType.Int32, 0, 10);
            return sp;
        }

    As you can see, the method signature now contains a DataTable, and with my modification to the SubSonic framework this now works perfectly. I'm wondering if the SubSonic guys can modify the auto-gen to recognize a TVP in a sproc signature, so as to avoid having to re-write the wrapper? Does SubSonic 3.x support structured data types? Also, I'm sure many will be interested in using this code, so where can I upload the new code? Thanks.

    Read the article

  • exception occurred in Java compiler

    - by user2892977
    I am a beginner in Java. I have JDK 1.7.0 installed on a Windows 7 OS. I wrote a sample Java file that would not compile and throws the error below:

        Sam.java:5: ';' expected
                Sample p = New Sample();
        An exception has occurred in the compiler (1.7.0-ea). Please file a bug at the
        Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking
        the Bug Parade for duplicates. Include your program and the following diagnostic
        in your report. Thank you.
        java.lang.StringIndexOutOfBoundsException: String index out of range: 26
            at java.lang.String.charAt(String.java:694)
            at com.sun.tools.javac.util.Log.printErrLine(Log.java:251)
            at com.sun.tools.javac.util.Log.writeDiagnostic(Log.java:343)
            at com.sun.tools.javac.util.Log.report(Log.java:315)
            at com.sun.tools.javac.util.AbstractLog.error(AbstractLog.java:96)
            at com.sun.tools.javac.parser.Parser.reportSyntaxError(Parser.java:295)
            at com.sun.tools.javac.parser.Parser.accept(Parser.java:326)
            at com.sun.tools.javac.parser.Parser.blockStatements(Parser.java:1599)
            at com.sun.tools.javac.parser.Parser.block(Parser.java:1500)
            at com.sun.tools.javac.parser.Parser.block(Parser.java:1514)
            at com.sun.tools.javac.parser.Parser.methodDeclaratorRest(Parser.java:2569)
            at com.sun.tools.javac.parser.Parser.classOrInterfaceBodyDeclaration(Parser.java:2518)
            at com.sun.tools.javac.parser.Parser.classOrInterfaceBody(Parser.java:2445)
            at com.sun.tools.javac.parser.Parser.classDeclaration(Parser.java:2290)
            at com.sun.tools.javac.parser.Parser.classOrInterfaceOrEnumDeclaration(Parser.java:2228)
            at com.sun.tools.javac.parser.Parser.typeDeclaration(Parser.java:2217)
            at com.sun.tools.javac.parser.Parser.compilationUnit(Parser.java:2163)
            at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:530)
            at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:571)
            at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:822)
            at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:748)
            at com.sun.tools.javac.main.Main.compile(Main.java:386)
            at com.sun.tools.javac.main.Main.compile(Main.java:312)
            at com.sun.tools.javac.main.Main.compile(Main.java:303)
            at com.sun.tools.javac.Main.compile(Main.java:82)
            at com.sun.tools.javac.Main.main(Main.java:67)

    Below is the code for Sam.java:

        class sam
        {
            public static void main(String args[])
            {
                Sample p = New Sample();
                p.show();
                p.display();
            }
        }

    I searched Google for the various compiler options, but that did not help. I would like to understand the two errors below:

    1 - Sam.java:5: ';' expected
    2 - An exception has occurred in the compiler (1.7.0-ea)
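    The first error comes from line 5 itself: Java's keyword is lowercase new, so New is parsed as an ordinary identifier and the parser then expects a ';'. A corrected version of the file, assuming a Sample class with show() and display() methods exists elsewhere on the classpath, would be:

        class sam
        {
            public static void main(String args[])
            {
                // 'new' must be lowercase; 'New' is read as an identifier,
                // which is why javac reports ';' expected on this line.
                Sample p = new Sample();
                p.show();
                p.display();
            }
        }

    The second message is the compiler itself crashing while trying to print that error: the StringIndexOutOfBoundsException originates inside javac's own Log class, and the -ea suffix in 1.7.0-ea marks an early-access build. Compiling with a released JDK build should report the plain syntax error without the compiler backtrace.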

    Read the article

  • Using will_paginate with AJAX live search via jQuery in Rails

    - by Mark Richman
    I am using will_paginate to successfully page through records. I am also using AJAX live search via jQuery to update my results div. No problem so far. The issue I have is when trying to paginate through those live search results: I simply get "Page is loading..." with no div update. Am I missing something fundamental?

        # index.html.erb
        <form id="searchform" accept-charset="utf-8" method="get" action="/search">
          Search: <input id="search" name="search" type="text" autocomplete="off" title="Search location, company, description..." />
          <%= image_tag("spinner.gif", :id => "spinner", :style => "display: none;" ) %>
        </form>

        # JobsController#search
        def search
          if params[:search].nil?
            @jobs = Job.paginate :page => params[:page], :order => "created_at desc"
          elsif params[:search] and request.xhr?
            @jobs = Job.search params[:search], params[:page]
          end
          render :partial => "jobs", :layout => false, :locals => { :jobs => @jobs }
        end

        # Job#search
        def self.search(search, page)
          logger.debug "Job.paginate #{search}, #{page}"
          paginate :per_page => @@per_page, :page => page,
                   :conditions => ["description LIKE ? or title LIKE ? or company LIKE ?",
                                   "%#{search}%", "%#{search}%", "%#{search}%"],
                   :order => 'created_at DESC'
        end

        # search.js
        $(document).ready(function(){
          $("#search").keyup(function() {
            $("#spinner").show();                 // show the spinner
            var form = $(this).parents("form");   // grab the form wrapping the search bar
            var url = form.attr("action");        // grab the URL from the form's action value
            var formData = form.serialize();      // grab the data in the form
            $.get(url, formData, function(html) { // perform an AJAX get; the trailing function runs on successful get
              $("#spinner").hide();               // hide the spinner
              $("#jobs").html(html);              // replace the "results" div with the result of action taken
            });
          });
        });

    Read the article

  • iOS - Performance Issues with SVProgressHUD

    - by ScottOBot
    I have a section of code that is producing poor performance and not working as expected. It utilizes SVProgressHUD from the following GitHub repository: https://github.com/samvermette/SVProgressHUD. I have an area of code where I need to use [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]. I want a progress HUD displayed before the synchronous request and then dismissed after the request has finished. Here is the code in question:

        // Display the progress HUD
        [SVProgressHUD showWithStatus:@"Signing Up..." maskType:SVProgressHUDMaskTypeClear];

        NSError* error;
        NSURLResponse *response = nil;
        [self setUserCredentials];

        // create json object for a users session
        NSDictionary* session = [NSDictionary dictionaryWithObjectsAndKeys:
                                 firstName, @"first_name",
                                 lastName, @"last_name",
                                 email, @"email",
                                 password, @"password", nil];
        NSData *jsonSession = [NSJSONSerialization dataWithJSONObject:session options:NSJSONWritingPrettyPrinted error:&error];

        NSString *url = [NSString stringWithFormat:@"%@api/v1/users.json", CoffeeURL];
        NSURL *URL = [NSURL URLWithString:url];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:URL
                                                               cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                           timeoutInterval:30.0];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        [request setValue:[NSString stringWithFormat:@"%d", [jsonSession length]] forHTTPHeaderField:@"Content-Length"];
        [request setHTTPBody:jsonSession];

        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSString *dataString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSDictionary *JSONResponse = [NSJSONSerialization JSONObjectWithData:[dataString dataUsingEncoding:NSUTF8StringEncoding]
                                                                     options:NSJSONReadingMutableContainers
                                                                       error:&error];
        NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
        NSInteger statusCode = httpResponse.statusCode;
        NSLog(@"Status: %d", statusCode);

        if (JSONResponse != nil && statusCode == 200) {
            // Dismiss progress HUD here
            [SVProgressHUD dismiss];
            return YES;
        } else {
            [SVProgressHUD dismiss];
            return NO;
        }

    For some reason the synchronous request blocks the HUD from being displayed. It displays immediately after the synchronous request happens, which unfortunately is also when it is dismissed. The behaviour shown to the user is that the app hangs, waits for the synchronous request to finish, quickly flashes the HUD, and then becomes responsive again. How can I fix this issue? I want the SVProgressHUD to be displayed during this hang time; how can I do this?

    Read the article

  • asp.net with jquery validation issue

    - by Eyla
    Greetings,

    I have two textboxes, two checkboxes and two labels. The first textbox, checkbox, and label are related to each other, and the same goes for the second set. Each textbox should accept a valid phone number based on the jQuery validation plugin; when the checkbox is checked, the validation rule should change, and in both cases the error message should display inside the label. I have no problem implementing this for one textbox, but when I add the second one, validation only happens for the second one. Please look at my code and advise.

        <script src="js/jquery-1.4.1.js" type="text/javascript"></script>
        <script src="js/jquery.validate.js" type="text/javascript"></script>
        <script type="text/javascript">
            var RegularExpression;
            var USAPhone = /^[01]?[- .]?(\([2-9]\d{2}\)|[2-9]\d{2})[- .]?\d{3}[- .]?\d{4}$/;
            var InterPhone = /^\d{9,12}$/;
            var errmsg;

            function ValidPhoneHome(sender) {
                if (sender.checked == true) {
                    RegularExpression = InterPhone;
                    errmsg = "Enter 9 to 12 numbers as international number";
                }
                else {
                    RegularExpression = USAPhone;
                    errmsg = "Enter a valid number";
                }
                jQuery.validator.addMethod("phonehome", function(value, element) {
                    return this.optional(element) || RegularExpression.test(value);
                }, errmsg);
            }

            function ValidMobileHome(sender) {
                if (sender.checked == true) {
                    RegularExpression = InterPhone;
                    errmsg = "Enter 9 to 12 numbers as international number";
                }
                else {
                    RegularExpression = USAPhone;
                    errmsg = "Enter a valid number";
                }
                jQuery.validator.addMethod("mobilephone", function(value, element) {
                    return this.optional(element) || RegularExpression.test(value);
                }, errmsg);
            }

            $(document).ready(function() {
                ValidPhoneHome("#<%= chkIntphoneHome%>");
                ValidMobileHome("#<%= chkIntMobileHome%>");

                $("#aspnetForm").validate({
                    rules: {
                        "<%=txtHomePhone.UniqueID %>": { phonehome: true }
                    },
                    errorLabelContainer: "#lblHomePhone",
                    rules: {
                        "<%=txtMobileHome.UniqueID %>": { mobilephone: true }
                    },
                    errorLabelContainer: "#lblMobileHome"
                })
            });
        </script>

        <asp:CheckBox ID="chkIntphoneHome" runat="server" Text="Internation Code" onclick="ValidPhoneHome(this)" >
        <asp:TextBox ID="txtHomePhone" runat="server" ></asp:TextBox>
        <label id="lblHomePhone"></label>

        <asp:CheckBox ID="chkIntMobileHome" runat="server" Text="Internation Code" onclick="ValidMobileHome(this)" />
        <asp:TextBox ID="txtMobileHome" runat="server"></asp:TextBox>
        <label id="lblMobileHome"></label>

    Read the article

  • HTTP request, strange socket behavior

    - by hoodoos
    I experience strange behavior when doing HTTP requests through sockets. Here is the request:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 10926
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    After the headers comes the body of my request, with the given content length. After that I receive something like:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 06:13:52 GMT

    So everything seems to be fine here. I read all contents from the network stream and successfully receive the response. But the socket I'm polling on switches its modes like this:

        write ( i write headers and request here )
        read  ( after headers sent i begin to receive response )
        write ( STRANGE BEHAVIOR HERE. WHY? here i send nothing really )
        read  ( here it switches to read back again )

    The last two steps can repeat several times. So I want to ask: what leads to the socket's mode change? In this case it's not a big problem, but when I use gzip compression in my request ( no idea how it's related ) and ask the server to send a gzipped response like this:

        POST https://test.com:443/service/XMLSelect HTTP/1.1
        Content-Length: 1076
        Accept-Encoding: gzip
        Content-Encoding: gzip
        Host: test.com
        User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)
        Authorization: Basic XXX
        SOAPAction: http://test.com/SubmitXml

    ...I receive a response like this:

        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Encoding: gzip
        Content-Type: text/xml;charset=utf-8
        Transfer-Encoding: chunked
        Date: Tue, 30 Mar 2010 07:26:33 GMT

        2000
        ?

    I receive a chunk size and the gzip header; that's all okay. And here's what happens to my poor little socket meanwhile:

        write ( i write headers and request here )
        read  ( after headers sent i begin to receive response )
        write ( STRANGE BEHAVIOR HERE. And it finally sits here forever waiting for me to send something! But according to HTTP I don't have to send anything more! )

    What can it be related to? What does it want me to send? Is it the remote web server's problem, or am I missing something?

    PS All actual service references and logins/passwords are replaced with fake ones :)
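    On the polling behavior itself: a socket reporting write-readiness is normal and harmless; it only means the send buffer has room, not that the server expects more data, so a client that has finished sending should simply stop registering interest in write events. What actually ends a chunked response is the zero-length chunk. A rough Java sketch of that read loop (my own illustration, not tied to the code in the post; a gzipped body comes out still compressed, byte for byte):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Sketch: drain the body of a "Transfer-Encoding: chunked" response.
        // The response is complete once the zero-length chunk arrives; the
        // client has nothing left to write at any point during this loop.
        public class ChunkedBody {

            // Reads one CRLF-terminated ASCII line (used for chunk-size lines).
            private static String readLine(InputStream in) throws IOException {
                StringBuilder sb = new StringBuilder();
                int c;
                while ((c = in.read()) != -1 && c != '\n') {
                    if (c != '\r') sb.append((char) c);
                }
                return sb.toString();
            }

            public static byte[] readChunkedBody(InputStream in) throws IOException {
                ByteArrayOutputStream body = new ByteArrayOutputStream();
                while (true) {
                    String sizeLine = readLine(in);
                    if (sizeLine.isEmpty()) continue;   // tolerate stray CRLFs
                    // chunk size is hexadecimal, e.g. "2000" = 8192 bytes;
                    // anything after ';' is an optional chunk extension
                    int size = Integer.parseInt(sizeLine.split(";")[0].trim(), 16);
                    if (size == 0) {                    // terminating chunk: response is over
                        readLine(in);                   // trailing CRLF (ignoring optional trailers)
                        break;
                    }
                    byte[] chunk = new byte[size];
                    int read = 0;
                    while (read < size) {               // a chunk may arrive in several TCP segments
                        int n = in.read(chunk, read, size - read);
                        if (n < 0) throw new IOException("stream ended mid-chunk");
                        read += n;
                    }
                    body.write(chunk, 0, read);
                    readLine(in);                       // CRLF that terminates each chunk
                }
                return body.toByteArray();              // still gzipped if Content-Encoding: gzip
            }
        }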

    Read the article

  • NoHostAvailableException With Cassandra & DataStax Java Driver If Large ResultSet

    - by hughj
    The setup:

    - 2-node Cassandra 1.2.6 cluster
    - replicas=2
    - very large CQL3 table with no secondary index
    - rowkey is a UUID.randomUUID().toString()
    - read consistency set to ONE
    - using DataStax Java driver 1.0

    The request: attempting to do a table scan with "SELECT some-col from schema.table LIMIT nnn;"

    The fail: once I go beyond a certain nnn LIMIT, I start to get NoHostAvailableExceptions from the driver. It reads like this:

        com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:64)
            at com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:214)
            at com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:169)
            at com.jpmc.es.rtm.storage.impl.EventExtract.main(EventExtract.java:36)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)
            at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
        Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.181.13.239 ([/10.181.13.239] Unexpected exception triggered))
            at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:98)
            at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:165)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)

    Given: this is probably not the most enlightened thing to do to a large table with millions of rows, but this is how I learn what not to do, so I would really appreciate someone who could volunteer how this kind of error can be debugged.

    For example, when this happens, there is no indication that the nodes in the cluster ever had an issue with the request (there is nothing in the logs on either node that indicates any timeout or failure). Also, I enabled tracing on the driver, which gives you some nice autotrace (a la Oracle) info as long as the query succeeds. But in this case the driver throws a NoHostAvailableException and no ExecutionInfo is available, so tracing has provided no benefit in this case. I also find it interesting that this does not seem to be recorded as a timeout (my JMX consoles tell me no timeouts have occurred). So I am left not understanding WHERE the failure is actually occurring. I am left with the idea that it is the driver that is having a problem, but I don't know how to debug it (and I would really like to).

    I have read several posts from folks stating that querying for result sets of more than 10000 rows is probably not a good idea, and I am willing to accept this, but I would like to understand what is causing the exception and where it is happening.

    FWIW, I also tried bumping the timeout properties in cassandra.yaml, but this made no difference whatsoever.

    I welcome any suggestions, anecdotes, insults, or monetary contributions for my registration in the house of moron-developers. Regards!!
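    For readers hitting the same wall: DataStax Java driver 1.0 predates automatic result paging (which arrived with driver 2.0 and the v2 native protocol in Cassandra 2.0), so a single large LIMIT forces the coordinator to build the entire result into one response. A common workaround on 1.2.x is to page manually on the partitioner token. The sketch below is an illustration under assumptions: the keyspace/table/column names (schema.table, id, some_col) are placeholders from the post, and it assumes the default Murmur3Partitioner, whose tokens are 64-bit longs:

        import com.datastax.driver.core.Cluster;
        import com.datastax.driver.core.ResultSet;
        import com.datastax.driver.core.Row;
        import com.datastax.driver.core.Session;

        // Sketch only: scan a large table in modest slices by paging on the
        // partitioner token instead of issuing one giant LIMIT.
        public class TokenScan {
            private static final int PAGE_SIZE = 1000; // small enough to keep each response modest

            public static void main(String[] args) {
                Cluster cluster = Cluster.builder().addContactPoint("10.181.13.239").build();
                Session session = cluster.connect();

                ResultSet rs = session.execute(
                    "SELECT id, some_col, token(id) FROM schema.table LIMIT " + PAGE_SIZE);
                while (true) {
                    long lastToken = 0;
                    int rows = 0;
                    for (Row row : rs) {
                        // process row.getString("some_col") here
                        lastToken = row.getLong(2); // token(id) is a bigint under Murmur3
                        rows++;
                    }
                    if (rows < PAGE_SIZE) break;    // ran off the end of the table
                    rs = session.execute(
                        "SELECT id, some_col, token(id) FROM schema.table WHERE token(id) > "
                        + lastToken + " LIMIT " + PAGE_SIZE);
                }
                cluster.shutdown();
            }
        }

    This keeps each request far below whatever response size was triggering the "Unexpected exception triggered" failure, and it scans the table in token order, which is the only order a full scan can honor anyway.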

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >