Search Results

Search found 13586 results on 544 pages for 'trusted domain'.

Page 505/544 | < Previous Page | 501 502 503 504 505 506 507 508 509 510 511 512  | Next Page >

  • Wordpress inserting comments via wp_insert_comment()

    - by Cyber Junkie
    Hello all happy holidays! :) I'm trying to insert comments in my wordpress blog via the wp_insert_comment() function. It's for a plugin I'm trying to make. I have this code in my header for testing. It works every time I refresh the page. $agent = $_SERVER['HTTP_USER_AGENT']; $data = array( 'comment_post_ID' => 256, 'comment_author' => 'Dave', 'comment_author_email' => '[email protected]', 'comment_author_url' => 'http://www.someiste.com', 'comment_content' => 'Lorem ipsum dolor sit amet...', 'comment_author_IP' => '127.3.1.1', 'comment_agent' => $agent, 'comment_date' => date('Y-m-d H:i:s'), 'comment_date_gmt' => date('Y-m-d H:i:s'), 'comment_approved' => 1, ); $comment_id = wp_insert_comment($data); It successfully inserts comments into the database. The problem: Comments don't show via the Disqus comment system. I compared table rows and I noticed that user_agent differs. Normal comments use for example, Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv... and Disqus comments use Disqus/1.1(2.61):119598902 numbers are different for each comment. Does anyone know how to insert comments with wp_insert_comment() when Disqus is enabled?
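
    A hedged aside rather than a confirmed fix: Disqus keeps its own record of which comments it has synced, so a locally inserted row probably has to look like one Disqus wrote itself, and even then the plugin may not pick it up. A minimal sketch that at least mimics the agent string (the version and trailing thread id are placeholders copied from the question, not real Disqus values):

    ```php
    <?php
    // Hedged sketch: insert a comment whose comment_agent mimics the Disqus format.
    // The trailing id is normally written by the Disqus plugin when it syncs, so
    // matching the string alone may not be enough to make the comment show up.
    $data = array(
        'comment_post_ID'      => 256,
        'comment_author'       => 'Dave',
        'comment_author_email' => 'dave@example.com',          // placeholder address
        'comment_content'      => 'Lorem ipsum dolor sit amet...',
        'comment_author_IP'    => '127.3.1.1',
        'comment_agent'        => 'Disqus/1.1(2.61):119598902', // placeholder thread id
        'comment_date'         => current_time('mysql'),
        'comment_date_gmt'     => current_time('mysql', 1),
        'comment_approved'     => 1,
    );
    $comment_id = wp_insert_comment($data);
    ```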

    Read the article

  • Spring + iBatis + Hessian caching

    - by ILya
    Hi. I have a Hessian service on Spring + iBatis running on Tomcat. I'm wondering how to cache results... I've made the following config in my sqlmap file: <sqlMap namespace="Account"> <cacheModel id="accountCache" type="MEMORY" readOnly="true" serialize="false"> <flushInterval hours="24"/> <flushOnExecute statement="Account.addAccount"/> <flushOnExecute statement="Account.deleteAccount"/> <property name="reference-type" value="STRONG" /> </cacheModel> <typeAlias alias="Account" type="domain.Account" /> <select id="getAccounts" resultClass="Account" cacheModel="accountCache"> fix all; select id, name, pin from accounts; </select> <select id="getAccount" parameterClass="Long" resultClass="Account" cacheModel="accountCache"> fix all; select id, name, pin from accounts where id=#id#; </select> <insert id="addAccount" parameterClass="Account"> fix all; insert into accounts (id, name, pin) values (#id#, #name#, #pin#); </insert> <delete id="deleteAccount" parameterClass="Long"> fix all; delete from accounts where id = #id#; </delete> </sqlMap> Then I've done some tests... I have a Hessian client application. I'm calling getAccounts several times, and each call results in a query to the DBMS. How can I make my service query the DBMS only the first time getAccounts is called (after a server restart), and use the cache for the following calls?
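
    One thing worth ruling out (a hedged guess, not a confirmed diagnosis): in iBatis 2.x, cache models are silently ignored unless caching is switched on globally in the sqlMapConfig, and the cache lives inside the SqlMapClient, so Spring should be wiring one shared client rather than one per request. A minimal sketch of the relevant settings:

    ```xml
    <!-- Hedged sketch of sqlMapConfig (iBatis 2.x): cacheModelsEnabled must be on,
         otherwise every <cacheModel> is ignored and each call hits the database. -->
    <sqlMapConfig>
      <settings cacheModelsEnabled="true"
                enhancementEnabled="true"
                lazyLoadingEnabled="true"/>
      <sqlMap resource="domain/Account.xml"/>
    </sqlMapConfig>
    ```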

    Read the article

  • How to avoid hard-coded credentials in Sharepoint webpart?

    - by Bryan
    I am building a SharePoint web part that will be used by all users, but can only be modified by admins. The web part connects to a web service which needs credentials. I hard-coded credentials in the web part's code. query.Credentials = new System.Net.NetworkCredential("username", "password", "domain"); query is an instance of the web service class. This may not be a good approach. With regard to security, the source code of the web part is available to people who are not allowed to see the credentials. In normal ASP.NET applications, credentials can be written into web.config and encrypted. A web part doesn't have a .config file associated with it. There is an application-level .config file for the whole SharePoint site, but I don't want to modify it for a single web part. I wonder if there is a webpart-specific way to solve the credential problem? Say we provide a WebBrowsable property on that web part so that privileged users can modify the credentials. If this is desirable, how should I make the property display as a password ("*") rather than in plain text? Thanks.
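
    A hedged sketch of the web-part-specific route: expose the values as shared-scope personalizable properties so only privileged users can change them (names below are illustrative). The stock tool pane renders such properties as plain text, so masking with "*" would need a custom EditorPart hosting a TextBox with TextMode="Password"; note that the password also sits unencrypted in the personalization store, which may or may not be acceptable.

    ```csharp
    using System.Web.UI.WebControls.WebParts;

    public class MISQueryWebPart : WebPart
    {
        // Hedged sketch: shared-scope personalizable properties, editable only
        // by users allowed to modify the shared web part.
        [WebBrowsable(true)]
        [Personalizable(PersonalizationScope.Shared)]
        [WebDisplayName("Service user name")]
        public string ServiceUserName { get; set; }

        // Kept out of the default tool pane; a custom EditorPart with a
        // password-mode TextBox would be the place to edit this value.
        [WebBrowsable(false)]
        [Personalizable(PersonalizationScope.Shared)]
        public string ServicePassword { get; set; }
    }
    ```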

    Read the article

  • GUI Agent accepts statuses from Daemon and shows it using progress indicator

    - by Pavel
    Hi to all! My application is a GUI agent which communicates with a daemon through a unix domain socket, wrapped in CFSocket.... So there is a main loop with an added CFRunLoop source. The daemon sends statuses and the agent shows them with a progress indicator. When there is any data on the socket, the callback function starts to work, and at that moment I have to immediately show the new window with the progress indicator and increase the counter. //this function initiate the runloop for listening socket - (int) AcceptDaemonConnection:(ConnectionRef)conn { int err = 0; conn->fSockCF = CFSocketCreateWithNative(NULL, (CFSocketNativeHandle) conn->fSockFD, kCFSocketAcceptCallBack, ConnectionGotData, NULL); if (conn->fSockCF == NULL) err = EINVAL; if (err == 0) { conn->fRunLoopSource = CFSocketCreateRunLoopSource(NULL, conn->fSockCF, 0); if (conn->fRunLoopSource == NULL) err = EINVAL; else CFRunLoopAddSource(CFRunLoopGetCurrent(), conn->fRunLoopSource, kCFRunLoopDefaultMode); CFRelease(conn->fRunLoopSource); } return err; } // callback function void ConnectionGotData(CFSocketRef s, CFSocketCallBackType type, CFDataRef address, const void * data, void * info) { #pragma unused(s) #pragma unused(address) #pragma unused(info) assert(type == kCFSocketAcceptCallBack); assert( (int *) data != NULL ); assert( (*(int *) data) != -1 ); TStatusUpdate status; int nativeSocket = *(int *) data; status = [agg AcceptPacket:nativeSocket]; // [stWindow InitNewWindow] inside [agg SendUpdateStatus:status.percent]; } The AcceptPacket function receives a packet from the socket and tries to show a new window with a progress indicator. The corresponding function is called, but nothing happens... I think that I have to make the main application loop do the work, interrupting the CFSocket loop... Or send a notification? No idea....
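
    A hedged guess at why nothing appears: if AcceptPacket blocks inside the socket callback, the run loop never gets a chance to actually draw the new window, so deferring the UI work (or moving the heavy work off the main thread entirely) is worth trying. A minimal sketch of the callback body using GCD; InitNewWindow and SendUpdateStatus are the question's own method names:

    ```objc
    // Hedged sketch: read from the socket in the callback, but hand the UI work
    // back to the main queue so the run loop can actually display the window.
    TStatusUpdate status = [agg AcceptPacket:nativeSocket];
    dispatch_async(dispatch_get_main_queue(), ^{
        [stWindow InitNewWindow];                 // method names from the question
        [agg SendUpdateStatus:status.percent];
    });
    ```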

    Read the article

  • ASPX page renders differently when reached on intranet vs. internet?

    - by MattSlay
    This is so odd to me.. I have IIS 5 running on XP and it's hosting a small ASP.NET app for our LAN that we can access by using the computer name, virtual directory, and page name (http://matt/smallapp/customers.aspx), but you can also hit that IIS server and page from the internet because I have a public IP that my firewall routes to the "Matt" computer (like http://213.202.3.88/smallapp/customers.aspx [just a made-up IP]). Don't worry, I have Windows domain authentication in place to protect the app from anonymous users. So all the above parts work fine. But what's weird is that the borders of the divs on the page are rendered much thicker when you access the page from the intranet versus the internet (I'm using IE8), and also some of the div layout (stretching and such) acts differently. Why would it render differently in the same browser based on whether it was reached from the LAN vs. the internet? It does NOT do this in Firefox. So it must be just an IE8 thing. All the CSS for the divs is right in the HTML page, so I do not think it is a caching issue with a CSS file. Notice how the borders are different in these two images: Internet: http://twitpic.com/hxx91 . Lan: http://twitpic.com/hxxtv
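
    A hedged explanation: by default IE8 renders intranet-zone sites in Compatibility View (essentially the IE7 engine), and it decides the zone from the host name, so http://matt/... and http://213.202.3.88/... can genuinely get different rendering engines for identical markup. A minimal sketch of the opt-out:

    ```html
    <!-- Hedged sketch: ask IE8 to use its latest engine even for intranet-zone
         hosts; place it early in <head>. The same value can also be sent from
         IIS as a custom HTTP response header instead. -->
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    ```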

    Read the article

  • Where to store site settings: DB? XML? CONFIG? CLASS FILES?

    - by Emin
    I am re-building a news portal which already has a large number of visits every day. One of the major concerns when re-building this site was to maximize performance and speed. Having said this, we have done many things, from caching to all sorts of other measures, to ensure speed. Now, towards the end of the project, I am having a dilemma about where to store my site settings so that performance is least affected. The site settings will include things such as: Domain, DefaultImgPath, Google Analytics code, default emails of editors, as well as more dynamic design/display feature settings such as the background color of specific DIVs and the default color for links, etc. As far as I know, I have 4 choices for storing all this info. Database: Storing general settings in the DB and caching them may be a solution; however, I want to limit access to the database to only the necessary and essential functions of the project, which generally are insert/update/delete of news items, author articles, etc. XML: I can store these settings in an XML file, but I have not done this sort of thing before so I don't know what kind of problems -if any- I might face in the future. CONFIG: I can also store these settings in web.config. CLASS FILE: I can hard-code all these settings in a SiteSettings class, but since the site admin himself will be able to edit these settings, it may not be the best solution. Currently, I am closest to choosing web.config, but letting people fiddle with it too often is something I do not want. E.g. if I somehow miss a validation for something and it breaks the web.config, the whole site will go down. My concern basically is that I cannot foresee the possible consequences of using any of the methods above (or is there another?), so I was hoping to put this question to the more experienced people out here who can hopefully help me make my decision.
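
    One hedged middle ground between the web.config and XML options: keep the admin-editable values in a separate file that web.config merely points at, so an editing slip never touches web.config itself (and, as far as I recall, changes to the external file don't recycle the application the way web.config edits do). A minimal sketch, assuming ASP.NET's appSettings file attribute; the file name is illustrative:

    ```xml
    <!-- web.config: only the pointer lives here. -->
    <appSettings file="SiteSettings.config" />

    <!-- SiteSettings.config (hypothetical file, same folder as web.config). -->
    <appSettings>
      <add key="DefaultImgPath"      value="/img/" />
      <add key="GoogleAnalyticsCode" value="UA-0000000-1" />
      <add key="EditorEmail"         value="editor@example.com" />
    </appSettings>
    ```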

    Read the article

  • c# Sending emails with authentication. standard approach not working

    - by Ready Cent
    I am trying to send an email using the following very standard code. However, I get the error that follows... MailMessage message = new MailMessage(); message.Sender = new MailAddress("[email protected]"); message.To.Add("[email protected]"); message.Subject = "test subject"; message.Body = "test body"; SmtpClient client = new SmtpClient(); client.Host = "mail.myhost.com"; //client.Port = 587; NetworkCredential cred = new NetworkCredential(); cred.UserName = "[email protected]"; cred.Password = "correct password"; cred.Domain = "mail.myhost.com"; client.Credentials = cred; client.UseDefaultCredentials = false; client.Send(message); Mailbox unavailable. The server response was: No such user here. This recipient email address definitely works. To make this account work I had to do some special steps in Outlook. Specifically, I had to go to change account settings - more settings - outgoing server - my outgoing server requires authentication & use same settings. I am wondering if there is some other strategy. I think the key here is that my host is Server Intellect and I know that some people on here use them, so hopefully someone else has been able to get through this. I did talk to support but they said with coding issues I am on my own :o
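
    One hedged thing to check before blaming the host: SmtpClient's UseDefaultCredentials setter overwrites whatever is already in Credentials, so assigning it after client.Credentials = cred (as in the snippet above) quietly throws the credentials away and the message goes out unauthenticated. A minimal sketch with the ordering flipped and From set explicitly; the addresses are placeholders:

    ```csharp
    using System.Net;
    using System.Net.Mail;

    var message = new MailMessage();
    message.From = new MailAddress("user@example.com");   // From, not only Sender
    message.To.Add("recipient@example.com");
    message.Subject = "test subject";
    message.Body = "test body";

    var client = new SmtpClient("mail.myhost.com");
    client.UseDefaultCredentials = false;                 // set this BEFORE Credentials;
    client.Credentials = new NetworkCredential(           // the setter resets Credentials
        "user@example.com", "correct password");          // (Domain usually not needed)
    client.Send(message);
    ```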

    Read the article

  • Is it possible to disable/bypass the login popup caused by mod_auth_ntlm_winbind (Single Sign On) an

    - by cvack
    I have an intranet on a remote web server. This will be integrated with Active Directory on our local server. The web server is running Apache / Linux and the AD server is running Windows 2003. This is all done over VPN. Login to the intranet is handled in two ways: 1. Users who are logged in to AD should be logged in automatically with SSO. 2. Users who are NOT logged in to AD should log in using a common login form. In order to auto-login (SSO) I use mod_auth_ntlm_winbind. The problem here is that the users not logged in to AD will get a popup box where they must enter their DOMAIN/username + AD password. If I disable this popup, there is no way to get $_SERVER['REMOTE_USER']. So my question: Is it possible to turn off this popup box and still get REMOTE_USER? Or, if possible, can I use AJAX to check whether http://my-intranet/auth returns a 401 error (non-AD users), and if so, not go to the /auth folder?
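
    The AJAX probe idea can work and keeps the popup away from most non-AD visitors: leave only a small /auth location NTLM-protected, probe it in the background from the public login page, and fall back to the form when the probe fails. A hedged sketch assuming jQuery and a hypothetical script under /auth that simply echoes $_SERVER['REMOTE_USER']; whether a failed background XHR still triggers an auth dialog should be verified in the browsers in use.

    ```javascript
    // Hedged sketch: background probe of the NTLM-protected /auth area.
    // AD users get their REMOTE_USER back; everyone else gets a 401 and
    // simply stays on the normal login form.
    $.ajax({
        url: '/auth/whoami.php',        // hypothetical echo script
        timeout: 3000,
        success: function (user) {
            window.location = '/?sso=' + encodeURIComponent(user);  // hypothetical entry point
        },
        error: function () {
            $('#login-form').show();    // non-AD user: show the common form
        }
    });
    ```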

    Read the article

  • MVC2 Modelbinder for List of derived objects

    - by user250773
    I want a list of different (derived) object types working with the default model binder in ASP.NET MVC 2. I have the following ViewModel: public class ItemFormModel { [Required(ErrorMessage = "Required Field")] public string Name { get; set; } public string Description { get; set; } [ScaffoldColumn(true)] //public List<Core.Object> Objects { get; set; } public ArrayList Objects { get; set; } } And the list contains objects of different derived types, e.g. public class TextObject : Core.Object { public string Text { get; set; } } public class BoolObject : Core.Object { public bool Value { get; set; } } It doesn't matter whether I use the List or the ArrayList implementation, everything gets nicely scaffolded in the form, but the model binder doesn't resolve the derived object type properties for me when posting back to the ActionResult. What could be a good solution for the view model structure to get a list of different object types handled? Having an extra list for every object type (e.g. List<TextObject>, List<BoolObject>, etc.) doesn't seem to be a good solution for me, since this is a lot of overhead both in building the view model and in mapping it back to the domain model. Thinking about the other approach of binding all properties in a custom model binder, how can I make use of the data annotations approach here (validating required attributes etc.) without a lot of overhead?
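
    One hedged pattern that avoids a list per concrete type: post a type discriminator with every item and let a custom binder create the right derived class, leaving property binding and data-annotation validation to DefaultModelBinder. A sketch against MVC 2; the "ObjectType" field name and the registration line are illustrative:

    ```csharp
    using System;
    using System.Web.Mvc;

    // Hedged sketch: each item in the form carries a hidden "ObjectType" field
    // holding the assembly-qualified name of its concrete type.
    public class CoreObjectModelBinder : DefaultModelBinder
    {
        protected override object CreateModel(ControllerContext controllerContext,
                                              ModelBindingContext bindingContext,
                                              Type modelType)
        {
            var typeValue = bindingContext.ValueProvider
                .GetValue(bindingContext.ModelName + ".ObjectType");
            if (typeValue != null)
            {
                var concreteType = Type.GetType(typeValue.AttemptedValue, true);
                // Rebind the metadata so properties of the derived type are bound
                // and [Required] etc. keep working through DefaultModelBinder.
                bindingContext.ModelMetadata = ModelMetadataProviders.Current
                    .GetMetadataForType(null, concreteType);
                return Activator.CreateInstance(concreteType);
            }
            return base.CreateModel(controllerContext, bindingContext, modelType);
        }
    }

    // Global.asax.cs, Application_Start (illustrative):
    // ModelBinders.Binders.Add(typeof(Core.Object), new CoreObjectModelBinder());
    ```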

    Read the article

  • Parsing XML in Javascript getElementsByTagName not working

    - by Probocop
    Hi, I am trying to parse the following XML with javascript: <?xml version='1.0' encoding='UTF-8'?> <ResultSet> <Result> <URL>www.asd.com</URL> <Value>10500</Value> </Result> </ResultSet> The XML is generated by a PHP script to get how many pages are indexed in Bing. My javascript function is as follows: function bingIndexedPages() { ws_url = "http://archreport.epiphanydev2.co.uk/worker.php?query=bingindexed&domain="+$('#hidden_the_domain').val(); $.ajax({ type: "GET", url: ws_url, dataType: "xml", success: function(xmlIn){ alert('success'); result = xmlIn.getElementsByTagName("Result"); $('#tb_actualvsindexedbing_indexed').val($(result.getElementsByTagName("Value")).text()); $('#img_actualvsindexedbing_worked').attr("src","/images/worked.jpg"); }, error: function() {$('#img_actualvsindexedbing_worked').attr("src","/images/failed.jpg");} }); } The problem I'm having is firebug is saying: 'result.getElementsByTagName is not a function' Can you see what is going wrong? Thanks
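
    The immediate error is that getElementsByTagName returns a node list, not a single element, so the second call has to go through result[0] (or through jQuery, which will happily traverse the XML document for you). A hedged sketch of the same request with jQuery doing the traversal:

    ```javascript
    // Hedged sketch: same request, but let jQuery walk the returned XML.
    $.ajax({
        type: "GET",
        url: ws_url,                                  // built as in the question
        dataType: "xml",
        success: function (xmlIn) {
            var value = $(xmlIn).find("Result Value").text();   // "10500"
            $('#tb_actualvsindexedbing_indexed').val(value);
            $('#img_actualvsindexedbing_worked').attr("src", "/images/worked.jpg");
        },
        error: function () {
            $('#img_actualvsindexedbing_worked').attr("src", "/images/failed.jpg");
        }
    });
    ```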

    Read the article

  • Jquery Sorting by Letter

    - by Batfan
    I am using jquery to sort through a group of paragraph tags (kudos to Aaron Harun). It pulls the value "letter" (a letter) from the url string and displays only paragraphs that start with that letter. It hides all others and also consolidates the list so that there are no duplicates showing. See the code: var letter = '<?php echo(strlen($_GET['letter']) == 1) ? $_GET['letter'] : ''; ?>' function finish(){ var found_first = []; jQuery('p').each(function(){ if(jQuery(this).text().substr(0,1).toUpperCase() == letter){ if(found_first[jQuery(this).text()] != true){ jQuery(this).addClass('current-series'); found_first[jQuery(this).text()] = true; }else{ jQuery(this).hide(); } } else{ jQuery(this).hide();} }) } Been working with this all day and I have 2 Questions on this: Is there a way to get it to ignore the word 'The', if it's first? For example, if a paragraph starts with 'The Amazing', I would like it to show up on the 'A' page, not the 'T' page, like it currently is. Is there a way to have a single page for (all) numbers? For example, the url to the page would be something similar to domain.com/index.php?letter=0 and this would show only the paragraph tags that start with a number, any number. I can currently do this with single numbers but, I would like 1 page for all numbers.
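
    A hedged sketch covering both asks: strip a leading "The " before taking the first character, and let letter=0 in the URL stand for "starts with any digit". It keeps the rest of the original logic (first match highlighted, duplicates hidden):

    ```javascript
    // Hedged sketch: "The Amazing..." files under A, and letter '0' matches 0-9.
    function finish() {
        var found_first = {};
        jQuery('p').each(function () {
            var text  = jQuery(this).text().replace(/^The\s+/i, '');
            var first = text.substr(0, 1).toUpperCase();
            var match = (letter === '0') ? /[0-9]/.test(first) : (first === letter);
            if (match && found_first[text] !== true) {
                jQuery(this).addClass('current-series');
                found_first[text] = true;
            } else {
                jQuery(this).hide();
            }
        });
    }
    ```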

    Read the article

  • Tfs 2010: how to set up a corporate source server?

    - by bwerks
    Hi all, I'm looking for guidance on setting up a corporate source server, but when I google this topic the best I can come up with is articles and walkthroughs concerned with configuring VS to use Microsoft's public symbol servers for debugging .NET assemblies. For background info, the environment I'm concerned with is VS2010/TFS2010. Basically, the workflow I'm looking to facilitate is this: 1) customer reports a problem with the application 2) the appropriate version of the application is installed on a virtual machine 3) developer repros the bug by attaching to the process on the virtual machine and leveraging a source server (symbol server?) on the corporate domain. This is the step I'm concerned with. 4) developer pinpoints the problem and fixes the bug in the workspace. 5) developer performs a dll swap on the VM to test the changes? (side topic, not sure on this) 6) normal development/source control workflows. Any advice is welcome! Edit: since writing this, I have stumbled on this article, which is a nice writeup on the configuration of a source server for TFS 2008. Has anyone adapted this for TFS 2010?

    Read the article

  • In which domains are message oriented middleware like AMQP useful?

    - by cocotwo
    What problems does MOM (Message Oriented Middleware) solve? Scalability? Integration? In which domains is it typically used and in which domains is it typically not used? For example, say, is Google using such a solution for its main search engine or to power Gmail? What about big websites like Walmart, eBay, FedEx (pretty much a Java shop) and buy.com (pretty much an MS shop)? Does MOM solve a need there? Does it make any sense when you're writing a web app where you control the server side and have a homogeneous environment (say tens of Amazon EC2 instances all running Linux + Java JVMs) and where the clients are, well, web browsers? Does it make sense for desktop apps that need to communicate with a server? Or is it 'only' for big enterprise stuff where you typically have a happy mix of countless different systems that need to communicate in one way or another? I'm a bit confused as to what it's useful for, and I think that with examples of where it's appropriate and where it's not, I could better understand its use.

    Read the article

  • SQLBrowser will not start

    - by Oliver
    SQL Server 2005 x64 on Windows Server 2003 x64, with multiple instances (default + 2 named). Engineers moved server to a different domain. Since then, cannot get SQLBrowser to start. Still able to query the default instance, and can access named instances by port (TCP:hostname,port#). When on server, can use SSMS to connect to the instances, all is well from that perspective. No errors in the SQL Server logs. As SQLBrowser is starting, an entry in EventViewer.Application says that one of the named instances has an invalid configuration, but I haven't been able to figure out what is invalid. Startup continues, and next message says "The SQLBrowser service was unable to establish SQL instance and connectivity discovery." Next, it enables instance and connectivity discovery support; next, another message about that same named instance having an invalid configuration; then an event says that SQLBrowser has started; last, an event shows the SQLBrowser service has shutdown. I got SQLBrowser to get past the issue with the first named instance by temporarily renaming a registry entry, and now the second named instance can be accessed by name rather than port. Still, cannot access the first named instance by name. Advice?

    Read the article

  • Why doesn't the Java Collections API include a Graph implementation?

    - by dvanaria
    I’m currently learning the Java Collections API and feel I have a good understanding of the basics, but I’ve never understood why this standard API doesn’t include a Graph implementation. The three base classes are easily understandable (List, Set, and Map) and all their implementations in the API are mostly straightforward and consistent. Considering how often graphs come up as a potential way to model a given problem, this just doesn’t make sense to me (it’s possible it does exist in the API and I’m not looking in the right place of course). Steve Yegge suggests in one of his blog posts that a programmer should consider graphs first when attacking a problem, and if the problem domain doesn’t fit naturally into this data structure, only then consider the alternative structures. My first guess is that there is no universal way to represent graphs, or that their interfaces may not be generic enough for an API implementation to be useful? But if you strip down a graph to its basic components (vertices and a set of edges that connect some or all of the vertices) and consider the ways that graphs are commonly constructed (methods like addVertex(v) and insertEdge(v1, v2)) it seems that a generic Graph implementation would be possible and useful. Thanks for helping me understand this better.
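
    For what it's worth, the interface the question gestures at is easy to sketch; the hard part is that directedness, weights, multi-edges and storage trade-offs all pull a "standard" Graph in different directions. A minimal, hedged sketch of an undirected adjacency-set graph along the lines of addVertex/insertEdge:

    ```java
    import java.util.*;

    // Hedged sketch: a minimal undirected, unweighted graph over adjacency sets.
    public class SimpleGraph<V> {
        private final Map<V, Set<V>> adjacency = new HashMap<V, Set<V>>();

        public void addVertex(V v) {
            if (!adjacency.containsKey(v)) {
                adjacency.put(v, new HashSet<V>());
            }
        }

        public void insertEdge(V v1, V v2) {
            addVertex(v1);
            addVertex(v2);
            adjacency.get(v1).add(v2);
            adjacency.get(v2).add(v1);   // drop this line for a directed graph
        }

        public Set<V> neighbors(V v) {
            Set<V> s = adjacency.get(v);
            return (s == null) ? Collections.<V>emptySet() : Collections.unmodifiableSet(s);
        }
    }
    ```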

    Read the article

  • Is .NET 4.0 just a show?

    - by Will Marcouiller
    I went to a presentation about the .NET Framework and Visual Studio 2010 last night. The topics were: ASP.NET 4 - Some of the new features of ASP.NET 4 More control over ClientIDs in WebForms; Output Caching; ... // Some other stuff I don't really remember, being more in the framework and WinForms world. Entity Framework 2.0 (.NET 4.0) T4 Templates; Domain driven development; Data driven development; Contexts (edmx files); Some of the real-world limitations of EF4 (projects with over 70 to 75 tables); Better POCO support, although there are still these hidden EntityObject and StructuralObject classes, but they are used differently in comparison to EF 1.0 so they don't take away your inheritance; Allows you to easily choose how to persist the hierarchy into the underlying database; Code only (start working with EF4 directly from your code!); Design by Contract (DbC). The most interesting feature, and the only one as far as I'm concerned, is everything related to parallelism being made easier. Which really works! No additional assembly references to add. In conclusion, I'm far from impressed with .NET Framework 4.0, apart from the fact that it makes some things easier to do. But when you're used to doing it a certain way, it doesn't really change much, in my opinion. Is it me who cannot foresee what .NET 4.0 has to offer? What would you guys base your decision to migrate to .NET 4.0 on, in a practical way?
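
    To make the "parallelism made easier" point concrete, the bit that genuinely ships in the box with .NET 4.0 is the Task Parallel Library and PLINQ; a small hedged sketch of what that looks like, with a placeholder workload:

    ```csharp
    using System.Linq;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            // Task Parallel Library: a parallel loop, no extra assembly references.
            Parallel.For(0, 100, i => DoWork(i));

            // PLINQ: the same LINQ query, fanned out across cores.
            int evens = Enumerable.Range(0, 1000000)
                                  .AsParallel()
                                  .Count(n => n % 2 == 0);
            System.Console.WriteLine(evens);
        }

        static void DoWork(int i) { /* placeholder workload */ }
    }
    ```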

    Read the article

  • URL development and mod_rewrite

    - by iRector
    My site is made up of the main page and multiple sub-directories, all under the same domain. My URLs currently look like the left-hand side below; the right-hand side is the ideal clean version:
    mysite.com -> mysite.com
    mysite.com/?content=content1 -> mysite.com/content1/
    mysite.com/?content=content2&page=4 -> mysite.com/content2/4/
    mysite.com/?content=content3 -> mysite.com/content3/
    mysite.com/?content=content4 -> mysite.com/content4/
    mysite.com/?content=article&id=34 -> mysite.com/article/34/
    Then the sub-directories (mysite.com/subdir, mysite.com/subdir2, mysite.com/subdir3, etc.) are essentially the same:
    mysite.com/subdir/?content=content1 -> mysite.com/subdir/content1/
    mysite.com/subdir/?content=content2&page=4 -> mysite.com/subdir/content2/4/
    mysite.com/subdir/?content=content3 -> mysite.com/subdir/content3/
    mysite.com/subdir/?content=content4 -> mysite.com/subdir/content4/
    mysite.com/subdir/?content=article&id=34 -> mysite.com/subdir/article/34/
    I've used mod_rewrite briefly, but I'm not sure how to approach these multiple variables. Also, how would I differentiate between the actual subfolders and the content variable, so as to prevent 'subdir' or 'subdir2' from being plugged in as the content variable for the root site? I've played around with plenty of code snippets, but I've wiped my .htaccess slate clean and approach you all in an attempt to help me repopulate it. Your input would be thoroughly appreciated. Note: The only time the page query string will be needed is when 'content' == 'content2' (?content=content2&page=4). **The same rule is shared by the article/id relationship; all other 'content' values are expected to be dynamic.
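
    A hedged starting point for the root .htaccess, assuming index.php is the front controller (content names are illustrative). The !-d condition is what keeps real directories such as /subdir and /subdir2 out of the content rule, and each of those directories can carry its own copy of the same rules:

    ```apacheconf
    RewriteEngine On
    RewriteBase /

    # /article/34/  ->  index.php?content=article&id=34
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^article/([0-9]+)/?$ index.php?content=article&id=$1 [L,QSA]

    # /content2/4/  ->  index.php?content=content2&page=4 (page only exists here)
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^content2/([0-9]+)/?$ index.php?content=content2&page=$1 [L,QSA]

    # /content1/, /content3/, ...  ->  index.php?content=contentN
    # The conditions are repeated above each rule because RewriteCond only
    # applies to the RewriteRule that immediately follows it.
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?content=$1 [L,QSA]
    ```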

    Read the article

  • Why is alert not run even though $.getJSON runs fine? (Callback not executed, even though the reques

    - by Emre Sevinç
    I have a snippet of code such as: $.getJSON("http://mysite.org/polls/saveLanguageTest?url=" + escape(window.location.href) + "&callback=?", function (data) { var serverResponse = data.result; console.log(serverResponse); alert(serverResponse); }); It works fine in the sense that it makes a cross-domain request to my server and the server saves the data as I expect. Unfortunately, even though the server saves the data and sends back a response, I just can't get the alert or the console.log to run. Why might that be? The server-side code is (if that is relevant): def saveLanguageTest(request): callback = request.GET.get('callback', '') person = Person(firstName = 'Anonymous', ipAddress = request.META['REMOTE_ADDR']) person.save() webPage = WebPage(url = request.GET.get('url')) webPage.save() langTest = LanguageTest(type = 'prepositionTest') langTest.person = person langTest.webPage = webPage langTest.save() req ['result'] = 'Your test is saved.' response = json.dumps(req) response = callback + '(' + response + ');' return HttpResponse(response, mimetype = "application/json") What am I missing? (I tried the same code both within my web pages and inside Firebug, and I always have the problem stated above.)
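
    A hedged sketch of the view with the two most likely culprits tidied up: the payload dict is actually created (req is assigned to above without ever being defined, and jQuery's JSONP path in 1.x swallows server errors silently), and the response goes out as JavaScript rather than JSON. Whether either is the real cause here is a guess:

    ```python
    # Hedged sketch: same view, with the payload dict defined and the response
    # returned as the JavaScript that a JSONP call actually executes.
    import json
    from django.http import HttpResponse

    def saveLanguageTest(request):
        callback = request.GET.get('callback', '')
        # ... Person / WebPage / LanguageTest saving as in the question ...
        payload = {'result': 'Your test is saved.'}
        body = '%s(%s);' % (callback, json.dumps(payload))
        return HttpResponse(body, mimetype='text/javascript')
    ```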

    Read the article

  • Locating SSL certificate, key and CA on server

    - by jovan
    Disclaimer: you don't need to know Node to answer this question but it would help. I have a Node server and I need to make it work with HTTPS. As I researched around the internet, I found that I have to do something like this: var fs = require('fs'); var credentials = { key: fs.readFileSync('path/to/ssl/private-key'), cert: fs.readFileSync('path/to/ssl/cert'), ca: fs.readFileSync('path/to/something/called/CA') }; var app = require('https').createServer(credentials, handler); I have several problems with this. First off, all the examples I found use completely different approaches. Some link to .pem files for both the certificate and key. I don't know what pem files are but I know my certificate is .crt and my key is .key. Some start off at the root folder and some seem to just have these .pem files in the application directory. I don't. Some use the ca thing too and some don't. This CA is supposed to be my domain's CA bundle according to some articles - but none explain where to find this file. In the ssl directory on my server I have one .crt file in the certs directory and one .key file in the keys directory, in addition to an empty csrs directory and an ssl.db file. So, where do I find these 3 files (key, cert, ca) and how do I link to them correctly?
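
    A hedged mapping: the .key file is the private key, the .crt file is the certificate (both are normally already PEM-encoded text, which is all the .pem examples mean), and ca is the optional intermediate/CA bundle the certificate vendor supplies; whether one exists in that ssl directory depends on the host. A sketch with placeholder paths:

    ```javascript
    // Hedged sketch -- the paths are placeholders for wherever the host keeps
    // ssl/keys and ssl/certs; drop the ca line if there is no chain file.
    var fs    = require('fs');
    var https = require('https');

    function handler(req, res) {
      res.writeHead(200);
      res.end('ok\n');
    }

    var credentials = {
      key:  fs.readFileSync('/path/to/ssl/keys/example.com.key'),
      cert: fs.readFileSync('/path/to/ssl/certs/example.com.crt'),
      ca:   fs.readFileSync('/path/to/ssl/certs/example.com.ca-bundle')  // optional
    };

    https.createServer(credentials, handler).listen(443);  // 443 needs privileges
    ```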

    Read the article

  • Cannot install windows service

    - by Matthew Dalton
    I have created a very simple windows service using Visual Studio 2010 and .NET 4.0. This service has no functionality added beyond the default windows service project, other than an installer that has been added. If I run installutil.exe appName.exe on my dev box or on other Windows 2008 R2 machines in our domain, the windows service installs without issue. When I try to do the same thing at our customer's site, it fails to install with the following error. Microsoft (R) .NET Framework Installation utility Version 4.0.30319.1 Copyright (c) Microsoft Corporation. All rights reserved. Exception occurred while initializing the installation: System.IO.FileLoadException: Could not load file or assembly 'file:///C:\TestService\WindowsService1.exe' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515). This solution has only 1 project and no added dependencies. I have tried it on multiple machines in our environment and on two at our customer's. The machines are all Windows 2008 R2, both fresh installs. One machine has just .NET 2.0 and .NET 4.0, the other .NET 2, 3, 3.5 and 4. I am a local admin on each of the machines. I have also tried the 64-bit installer but get the following error, so I think the 32-bit one is the one to use. System.BadImageFormatException Any guidance would be appreciated. Thanks.
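
    A hedged reading of HRESULT 0x80131515 on .NET 4: the exe (or a dependency) is flagged as coming from a remote source, typically because it was downloaded or copied from a network share and still carries the NTFS zone identifier, and .NET 4 refuses to load it. Unblocking the file (right-click, Properties, Unblock) is the lightweight fix; the alternative is opting InstallUtil in via its config file:

    ```xml
    <!-- Hedged sketch: InstallUtil.exe.config, placed next to InstallUtil.exe in
         %WINDIR%\Microsoft.NET\Framework64\v4.0.30319 (or Framework for 32-bit). -->
    <configuration>
      <runtime>
        <loadFromRemoteSources enabled="true" />
      </runtime>
    </configuration>
    ```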

    Read the article

  • Dynamic WCF base addresses in SharePoint

    - by Paul Bevis
    I'm attempting to host a WCF service in SharePoint. I have configured the service to be compatible with ASP.NET to allow me access to HttpContext and session information [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)] public class MISDataService : IMISDataService { ... } And my configuration looks like this <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <services> <service name="MISDataService"> <endpoint address="" binding="webHttpBinding" contract="MISDataViews.IMISDataService" /> </service> </services> </system.serviceModel> Whilst this gives me access to the current HTTP context, the service is always hosted under the root domain, i.e. http://www.mydomain.com/_layouts/MISDataService.svc. In SharePoint the URL being accessed gives you specific context information about the current site via the SPContext class. So with the service hosted in a virtual directory, I would like it to be available on multiple addresses, e.g. http://www.mydomain.com/_layouts/MISDataService.svc http://www.mydomain.com/sites/site1/_layouts/MISDataService.svc http://www.mydomain.com/sites/site2/_layouts/MISDataService.svc so that the service can figure out what data to return based upon the current context. Is it possible to configure the endpoint address dynamically? Or is the only alternative to host the service in one location and then pass the "context" to it somehow?

    Read the article

  • HTML not appearing in Emails

    - by John
    My website sends out HTML emails but most of my recipients are receiving them as HTML marked-up source pages instead of the nice table layouts. The problem doesn't appear to be an email client issue, since the emails display properly in webmail clients like Gmail, Yahoo, Hotmail, etc. They also display properly when viewed through Outlook or Thunderbird connected to Gmail, Yahoo, Hotmail, etc. However, I have one domain name that I registered with a shared hosting provider called 1and1.com. I tried viewing my emails through their webmail client, Thunderbird and Outlook, but in all three cases only the HTML markup showed up. Also, I assume most of my recipients use MS Outlook with MS Exchange Server because they are business/finance people. Unfortunately, I don't know how to get an email account that's managed by an MS Exchange Server. I made sure I'm sending my emails with the following headers: MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 Does anyone know what might be wrong? Can anyone recommend a solution?
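
    A hedged detail worth checking: when only some receiving systems show the raw markup, the header block itself is often malformed (headers not separated by CRLF, or the Content-type line ending up in the body). Purely as an illustration, and assuming the site sends through PHP's mail() (the question doesn't say what it uses):

    ```php
    <?php
    // Hedged sketch: CRLF-separated headers so strict MTAs keep Content-Type
    // as a header instead of letting it fall into the body as plain text.
    $headers  = "MIME-Version: 1.0\r\n";
    $headers .= "Content-Type: text/html; charset=iso-8859-1\r\n";
    $headers .= "From: News Desk <news@example.com>\r\n";   // placeholder sender

    $body = '<html><body><table><tr><td>Hello</td></tr></table></body></html>';

    mail('recipient@example.com', 'Newsletter', $body, $headers);
    ```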

    Read the article

  • HQL query problem

    - by yigit
    Hi all, I'm using this HQL query for my filters. The query works perfectly except for the width (string) part. Here is the query, public IList<ColorGroup> GetDistinctColorGroups(int typeID, int finishID, string width) { string queryStr = "Select distinct c from ColorGroup c inner join c.Products p " + "where p.ShowOnline = 1 "; if (typeID > 0) queryStr += " and p.ProductType.ID = " + typeID; if (finishID > 0) queryStr += " and p.FinishGroup.ID = " + finishID; if (width != "") queryStr += " and p.Size.Width = " + width; IList<ColorGroup> colors = NHibernateSession.CreateQuery(queryStr).List<ColorGroup>(); return colors; } ProductType and Size have the same mappings and relations. This is the error: NHibernate.QueryException: illegal syntax near collection: Size [Select distinct c from .Domain.ColorGroup c inner join c.Products p where p.ShowOnline = 1 and p.ProductType.ID = 1 and p.FinishGroup.ID = 5 and p.Size.Width = 4] Any ideas?
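
    A hedged reading of the error: "illegal syntax near collection: Size" suggests Size is mapped as a collection on Product, so it has to be joined rather than reached through a dotted path; and binding width as a named parameter sidesteps the quoting problem that concatenating a string into the HQL causes. A sketch along those lines (whether the join is right depends on how Size is actually mapped):

    ```csharp
    // Hedged sketch: join the Size collection explicitly and bind width as a
    // named parameter instead of concatenating it into the query string.
    string queryStr =
        "select distinct c from ColorGroup c inner join c.Products p " +
        "inner join p.Size s where p.ShowOnline = 1 ";
    if (typeID > 0)   queryStr += " and p.ProductType.ID = :typeId";
    if (finishID > 0) queryStr += " and p.FinishGroup.ID = :finishId";
    if (width != "")  queryStr += " and s.Width = :width";

    IQuery query = NHibernateSession.CreateQuery(queryStr);
    if (typeID > 0)   query.SetParameter("typeId", typeID);
    if (finishID > 0) query.SetParameter("finishId", finishID);
    if (width != "")  query.SetParameter("width", width);

    IList<ColorGroup> colors = query.List<ColorGroup>();
    ```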

    Read the article

  • PHP: json_decode dumping NULL, BOM not found

    - by SerEnder
    I've been trying to find out why this 'json_encode'd string isn't parsing out correctly, and came across previously answered questions that had the UTF BOM sequence that was throwing the error, but didn't help me here. Here's the code that isn't currently working: //Decode the notes attached to the sig $aNotes = json_decode($rule->getNotes(),true); $bom = pack("CCC",0xef,0xbb,0xbf); if(0 == strncmp($rule->getNotes(),$bom,3)) { print('BOM detected - json encoding in UTF-8<br/>'); } else { print('BOM NOT detected - json encoding correctly<br/>'); } print('rule->getNotes:<br/>' . $rule->getNotes() .'<br/>'); var_dump($aNotes); Which generates this result: BOM NOT detected - json encoding correctly rule->getNotes: [{"lDate":"Unknown","sAuthor":"Unknown","sNote":"This is a general purpose Russian spam rule that matches anything starting with 2, 3 or 4 hex digits followed by a domain name ending with .ru -RSK 2010-05-10"},{"lDate":"1295031463082","sAuthor":"Drew Thorstenson","sNote":"this is Ryan's ru rule"}] NULL I've run it through JSON Lint, which said it was valid, and An Online JSON Parser which parsed it correctly too. Any insight would be greatly appreciated.
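
    Two hedged diagnostics that usually narrow this down: ask PHP why the decode failed (json_last_error, PHP 5.3+), and try the string with backslash-escaping stripped, since quotes escaped on the way into or out of the database are a common reason a string that validates elsewhere returns NULL locally:

    ```php
    <?php
    // Hedged sketch: find out why json_decode gave up, then try the usual repair.
    $raw = $rule->getNotes();

    $aNotes = json_decode($raw, true);
    if ($aNotes === null) {
        var_dump(json_last_error());                       // e.g. JSON_ERROR_SYNTAX, JSON_ERROR_UTF8
        $aNotes = json_decode(stripslashes($raw), true);   // escaped-quote culprit
    }
    var_dump($aNotes);
    ```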

    Read the article

  • How to add a WHERE clause on the second table of a 1-to-1 join in Fluent NHibernate?

    - by daddywoodland
    I'm using a legacy database that was 'future proofed' to keep track of historical changes. It turns out this feature is never used so I want to map the tables into a single entity. My tables are: CodesHistory (CodesHistoryID (pk), CodeID (fk), Text) Codes (CodeID (pk), CodeName) To add an additional level of complexity, these tables hold the content for the drop down lists throughout the application. So, I'm trying to map a Title entity (Mr, Mrs etc.) as follows: Title ClassMap - Public Sub New() Table("CodesHistory") Id(Function(x) x.TitleID, "CodesHistoryID") Map(Function(x) x.Text) 'Call into the other half of the 1-2-1 join in order to merge them in 'this single domain object Join("Codes", AddressOf AddTitleDetailData) Where("CodeName like 'C.Title.%'") End Sub ' Method to merge two tables with a 1-2-1 join into a single entity in VB.Net Public Sub AddTitleDetailData(ByVal m As JoinPart(Of Title)) m.KeyColumn("CodeID") m.Map(Function(x) x.CodeName) End Sub From the above, you can see that my 'CodeName' field represents the select list in question (C.Title, C.Age etc). The problem is that the WHERE clause only applies to the 'CodesHistory' table but the 'CodeName' field is in the 'Codes' table. As I'm sure you can guess there's no scope to change the database. Is it possible to apply the WHERE clause to the Codes table?

    Read the article
