Search Results

Search found 30367 results on 1215 pages for 'service reference'.


  • What is the disadvantage of using an abstract class for database connectivity in Zend Framework 2 instead of the service locator?

    - by arslaan ejaz
    Suppose I use the database by creating an adapter with drivers, initializing it in an abstract class, and extending that abstract class in the models that need it, then issuing simple query statements. Like this: namespace My-Model\Model\DB; abstract class MysqliDB { protected $adapter; public function __construct(){ $this->adapter = new \Zend\Db\Adapter\Adapter(array( 'driver' => 'Mysqli', 'database' => 'my-database', 'username' => 'root', 'password' => '' )); } } And use the abstract database class like this in my models: class States extends DB\MysqliDB{ public function __construct(){ parent::__construct(); } protected $states = array(); public function select_all_states(){ $data = $this->adapter->query('select * from states'); foreach ($data->execute() as $row){ $this->states[] = $row; } return $this->states; } } I am new to Zend Framework; before this I have experience working in Yii and CodeIgniter. I like the object-oriented style in Zend, so I want to use it like this and don't want to go through the service locator, something like this in the module: public function getServiceConfig(){ return array( 'factories' => array( 'addserver-mysqli' => new Model\MyAdapterFactory('addserver-mysqli'), 'loginDB' => function ($sm){ $adapter = $sm->get('addserver-mysqli'); return new LoginDB($adapter); } ) ); } Am I OK with this approach?

    Read the article

  • Case studies for successful service (project) based software development businesses without constant overtime from their employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set new expectations that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches. This is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable: more hours worked = more billable hours = larger profit. I understand the need for profitability and the occasional crunch time and have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods to increase profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies for a successful service-based business model (for software development companies) that does not require consistent overtime from its employees?

    Read the article

  • [Get Proactive!] Oracle Service Tools Bundle (STB)

    - by aiyoku
    The Oracle Service Tools Bundle (STB) packages the proactive diagnostic data-collection tools for Oracle Solaris servers. The bundle includes: Oracle Explorer Data Collector, which gathers Solaris system configuration data for support; Oracle Remote Diagnostic Agent (RDA), a Perl-based collector that also runs on MacOS, UNIX, VMS and Windows; and Oracle Autonomous Crashdump Tool (ACT), which works with kernel core dumps. Related My Oracle Support documents: Oracle Services Tools Bundle (STB) - RDA/Explorer, SNEEP, ACT (Doc ID 1496381.1); Oracle Explorer Data Collector (Doc ID 1571154.1); Remote Diagnostic Agent (RDA) - Getting Started (Doc ID 314422.1); Solaris kernel core dump configuration for x86/x64 (Doc ID 1515734.1); Oracle Autonomous Crashdump Tool (ACT) (Doc ID 1596529.1).

    Read the article

  • Publishing an ASP.NET website gives an "Object reference not set to an instance of an object." error

    - by Johan
    Good day. I am using VS 2008 and I am getting fed up with this error. I have searched all over the web and tried every possible suggestion for this error I could find. 1. delete app_code, build, add files back, publish (did not work) 2. delete temporary asp.net files (did not work) In the end I even tried the command line and get the following stack trace. error ASPRUNTIME: Object reference not set to an instance of an object. [NullReferenceException]: Object reference not set to an instance of an object. at System.Web.Compilation.BuildManager.CopyPrecompiledFile(VirtualFile vfile, String destPhysicalPath) at System.Web.Compilation.BuildManager.CopyStaticFilesRecursive(VirtualDirectory sourceVdir, String destPhysicalDir, Boolean topLevel) at System.Web.Compilation.BuildManager.CopyStaticFilesRecursive(VirtualDirectory sourceVdir, String destPhysicalDir, Boolean topLevel) at System.Web.Compilation.BuildManager.CopyStaticFilesRecursive(VirtualDirectory sourceVdir, String destPhysicalDir, Boolean topLevel) at System.Web.Compilation.BuildManager.PrecompileAppInternal(VirtualPath startingVirtualDir) at System.Web.Compilation.BuildManager.PrecompileApp(VirtualPath startingVirtualDir) at System.Web.Compilation.BuildManager.PrecompileApp(ClientBuildManagerCallback callback) at System.Web.Compilation.BuildManagerHost.PrecompileApp(ClientBuildManagerCallback callback) at System.Web.Compilation.BuildManagerHost.PrecompileApp(ClientBuildManagerCallback callback) at System.Web.Compilation.ClientBuildManager.PrecompileApplication(ClientBuildManagerCallback callback, Boolean forceCleanBuild) at System.Web.Compilation.ClientBuildManager.PrecompileApplication(ClientBuildManagerCallback callback) at System.Web.Compilation.Precompiler.Main(String[] args) I used the following command line: aspnet_compiler.exe -p d:\code\websites\brokerweb -v / d:\code\websites\published -f -c -errorstack -u Please help, as I cannot publish this site at all at present, and it worked fine for quite a long time before this stupid error. Regards Johan

    Read the article

  • WPF binding to ComboBox SelectedItem when reference not in ItemsSource.

    - by juharr
    I'm binding the PageMediaSize collection of a PrintQueue to the ItemsSource of a ComboBox (this works fine). Then I'm binding the SelectedItem of the ComboBox to the DefaultPrintTicket.PageMediaSize of the PrintQueue. While this will set the selected value to the DefaultPrintTicket.PageMediaSize just fine, it does not set the initially selected value of the ComboBox to the initial value of DefaultPrintTicket.PageMediaSize. This is because the DefaultPrintTicket.PageMediaSize reference does not match any of the references in the collection. However, I don't want it to compare the objects by reference, but instead by value, and PageMediaSize does not override Equals (and I have no control over it). What I'd really like to do is set up an IComparable for the ComboBox to use, but I don't see any way to do that. I've tried to use a Converter, but I would need more than the value, and I couldn't figure out how to pass the collection to the ConverterProperty. Any ideas on how to handle this problem? Here's my XAML: <ComboBox x:Name="PaperSizeComboBox" ItemsSource="{Binding ElementName=PrintersComboBox, Path=SelectedItem, Converter={StaticResource printQueueToPageSizesConverter}}" SelectedItem="{Binding ElementName=PrintersComboBox, Path=SelectedItem.DefaultPrintTicket.PageMediaSize}" DisplayMemberPath="PageMediaSizeName" Height="22" Margin="120,76,15,0" VerticalAlignment="Top"/> And the code for the converter that gets the PageMediaSize collection: public class PrintQueueToPageSizesConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { return value == null ? null : ((PrintQueue)value).GetPrintCapabilities().PageMediaSizeCapability; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } }

    Read the article

  • Using Directives, Namespace and Assembly Reference - all jumbled up with StyleCop!

    - by Jack
    I like to adhere to StyleCop's formatting rules to make code nice and clear, but I've recently had a problem with one of its warnings: All using directives must be placed inside of the namespace. My problem is that I have using directives, an assembly reference (for mocking file deletion), and a namespace to juggle in one of my test classes: using System; using System.IO; using Microsoft.Moles.Framework; using Microsoft.VisualStudio.TestTools.UnitTesting; [assembly: MoledType(typeof(System.IO.File))] namespace MyNamespace { //Some Code } The above allows tests to be run fine - but StyleCop complains about the using directives not being inside the namespace. Putting the usings inside the namespace gives the error that "MoledType" is not recognised. Putting both the usings and the assembly reference inside the namespace gives the error 'assembly' is not a valid attribute location for this declaration. Valid attribute locations for this declaration are 'type'. All attributes in this block will be ignored. It seems I've tried every layout I can but to no avail - either the solution won't build, the mocking won't work or StyleCop complains! Does anyone know a way to set these out so that everything's happy? Or am I going to have to ignore the StyleCop warning in this case?

    Read the article

  • Error while deploying a web application in an OSGi container using Pax Web

    - by RaulDM
    Hello, I am trying to deploy a web application in a Felix container. I have done all the required configuration for my web app, such as setting up the manifest headers: Webapp-Context: Bundle-ClassPath: Bundle-Activator: Import-Package: Bundle-SymbolicName: etc. The Pax bundles that I have dropped into the same container are: pax-web-service-0.6.0.jar pax-web-jsp-0.7.1.jar pax-web-extender-war-0.7.1.jar pax-logging-service-1.5.0.jar pax-logging-api-1.5.0.jar Although the Pax website says that pax-web-service is included in pax-web-extender-war, it seems that without the pax-web-service bundle all the other bundles are handicapped. I removed the other Pax bundles, pax-web-extender-whiteboard-0.7.1.jar and pax-web-jetty-0.7.1.jar, as I have not seen any use for them. The pax-web-jetty-0.7.1.jar does not even start up; it has dependencies that it cannot resolve from any of the bundles provided by Pax. My browser is displaying: HTTP ERROR 403 Problem accessing /adminmodule/. Reason: FORBIDDEN Powered by Jetty:// while the console log says: [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - REQUEST /adminmodule/ on org.mortbay.jetty.HttpConnection@1e94001 [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.model.ServerModel - Matching [/adminmodule/]... [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.model.ServerModel - Path [/adminmodule/] matched to {pattern=/adminmodule/.*,model=ResourceModel{id=org.ops4j.pax.web.service.internal.model.ResourceModel-2,name=,urlPatterns=[/],alias=/,servlet=ResourceServlet{context=/adminmodule,alias=/,name=},initParams={},context=ContextModel{id=org.ops4j.pax.web.service.internal.model.ContextModel-1,name=adminmodule,httpContext=org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext@11710be,contextParams={webapp.context=adminmodule}}}} [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.HttpServiceContext - Handling request for [/adminmodule/] using http context [org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext@11710be] [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - sessionManager=org.mortbay.jetty.servlet.HashSessionManager@19c6163 [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - session=null [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - servlet= [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - chain=org.ops4j.pax.web.service.internal.model.FilterModel-3- [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - servlet holder= [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - call filter org.ops4j.pax.web.service.internal.model.FilterModel-3 [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.WelcomeFilesFilter - Apply welcome files filter... 
[5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.WelcomeFilesFilter - Servlet path: / [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.service.internal.WelcomeFilesFilter - Path info: null [5884890@qtp-16567002-0 - /adminmodule/] INFO org.ops4j.pax.web.service.internal.HttpServiceContext - getting resource: [/adminmodule.jsp] [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext - Searching bundle [com.cisco.zaloni.gwt.admin [1]] for resource [/adminmodule.jsp], normalized to [adminmodule.jsp] [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext - Resource not found [5884890@qtp-16567002-0 - /adminmodule/] INFO org.ops4j.pax.web.service.internal.HttpServiceContext - found resource: null [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - call servlet [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext - Searching bundle [com.cisco.zaloni.gwt.admin [1]] for resource [/], normalized to [/] [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.ops4j.pax.web.extender.war.internal.WebAppWebContainerContext - Resource found as url [bundle://1.0:1/] [5884890@qtp-16567002-0 - /adminmodule/] DEBUG org.mortbay.jetty - RESPONSE /adminmodule/ 403 It is really frustrating. please help. as I am new to OSGI. Raul

    Read the article

  • How do I reference an unsigned assembly from a VSTO Word Doc project?

    - by Gishu
    I created a new project in VS2008, project type Visual C# > Office > 2007 > Word 2007 Document. Added some code, got Word to jump through a few custom hoops, all fine. Now I need to reference another assembly (CopyLocal set to false) which is not signed. So I add the project reference, and now the project will not build, complaining: error MSB3188: Assembly 'X.dll' must be strong signed in order to be marked as a prerequisite. The error code page is concise (now accustomed to this). I've been googling and reading posts ever since, with no luck. How do I get around this? Or is the hidden commandment that all references (for VSTO?) must be strong named / signed? I cannot sign 'X.dll' and be done with it because it is a binary that I don't control; it also depends on a bunch of other unsigned DLLs, so I can't set off a chain signing reaction. Update: Solved the build issue by turning CopyLocal=True. But this meant dumping the referenced X.DLL and all its dependencies into the bin\debug folder... Ughh! I tried creating a subfolder called bin\debug\refExecs and referencing X.dll with CopyLocal=false from there. The error message was back.

    Read the article

  • C#/.Net Error: MySql.Data Object reference not set to an instance of an object.

    - by Simon
    Hello, I get this exception on my Windows 7 64-bit machine in an application running in VS 2008 Express. I am using Connector/Net 6.2.2.0: Message: Object reference not set to an instance of an object. Source: MySql.Data in MySql.Data.MySqlClient.NativeDriver.GetResult(Int32& affectedRow, Int32& insertedId) Stack trace: in MySql.Data.MySqlClient.Driver.GetResult(Int32 statementId, Int32& affectedRows, Int32& insertedId) in MySql.Data.MySqlClient.Driver.NextResult(Int32 statementId) in MySql.Data.MySqlClient.MySqlDataReader.NextResult() in MySql.Data.MySqlClient.MySqlDataReader.Close() in MySql.Data.MySqlClient.MySqlConnection.Close() in MySql.Data.MySqlClient.MySqlConnection.Dispose(Boolean disposing) in System.ComponentModel.Component.Finalize() No inner exception. This exception is unhandled and the debugger doesn't point to any code line. It just says "Object reference not set to an instance of an object. MySql.Data". This error is really hard to reproduce. On my Windows XP 32-bit machine everything is OK. Could it be an error in 64-bit Windows 7? Thank you very much for your answers. Regards, simon

    Read the article

  • Function parameter evaluation order: is it undefined behaviour if we pass a reference?

    - by bolov
    This is undefined behaviour: void feedMeValue(int x, int a) { cout << x << " " << a << endl; } int main() { int a = 2; int &ra = a; feedMeValue(ra = 3, a); return 0; } because depending on which parameter gets evaluated first we could call (3, 2) or (3, 3). However this: void feedMeReference(int x, int const &ref) { cout << x << " " << ref << endl; } int main() { int a = 2; int &ra = a; feedMeReference(ra = 3, a); return 0; } will always output 3 3, since the second parameter is a reference and all parameters have been evaluated before the function call. So whether the second parameter is evaluated before or after ra = 3, the function receives a reference to a, which will have a value of 2 or 3 at the time of the evaluation, but will always have the value 3 at the time of the function call. Is the second example UB? It is important to know because the compiler is free to do anything if it detects undefined behaviour, even if I know it would always yield the same results. *Note: I think that feedMeReference(a = 3, a) is exactly the same situation as feedMeReference(ra = 3, a). However it seems not everybody agrees, in addition to there being 2 completely different answers.
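    One way to make the first call well defined regardless of argument evaluation order (a sketch of my own, not taken from the question) is to sequence the assignment as its own statement before the call, so both argument expressions read an already-settled value:

        #include <iostream>

        void feedMeValue(int x, int a) {
            std::cout << x << " " << a << std::endl;
        }

        int main() {
            int a = 2;
            int& ra = a;
            ra = 3;              // side effect completed before the call expression begins
            feedMeValue(ra, a);  // both arguments read the settled value: prints "3 3"
            return 0;
        }

    This sidesteps the ordering question entirely, because neither argument expression modifies a anymore.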

    Read the article

  • Is there a way to increase the efficiency of shared_ptr by storing the reference count inside the controlled object?

    - by BillyONeal
    Hello everyone :) This is becoming a common pattern in my code, for when I need to manage an object that needs to be noncopyable because either A. it is "heavy" or B. it is an operating system resource, such as a critical section: class Resource; class Implementation : public boost::noncopyable { friend class Resource; HANDLE someData; Implementation(HANDLE input) : someData(input) {}; void SomeMethodThatActsOnHandle() { /* Do stuff */ }; public: ~Implementation() { FreeHandle(someData); }; }; class Resource { boost::shared_ptr<Implementation> impl; public: explicit Resource(int argA) { HANDLE handle = SomeLegacyCApiThatMakesSomething(argA); if (handle == INVALID_HANDLE_VALUE) throw SomeTypeOfException(); impl.reset(new Implementation(handle)); }; void SomeMethodThatActsOnTheResource() { impl->SomeMethodThatActsOnTheHandle(); }; }; This way, shared_ptr takes care of the reference counting headaches, allowing Resource to be copyable, even though the underlying handle should only be closed once all references to it are destroyed. However, it seems like we could save the overhead of allocating shared_ptr's reference counts and such separately if we could move that data inside Implementation somehow, like boost's intrusive containers do. If this is making the premature optimization hackles nag some people, I actually agree that I don't need this for my current project. But I'm curious if it is possible.
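    It is possible; one approach (a minimal sketch of my own using boost::intrusive_ptr, not code from the question) is to embed the count in the controlled object and provide the two hook functions intrusive_ptr expects. Note the plain counter below is not thread-safe, unlike shared_ptr's atomic count:

        #include <boost/intrusive_ptr.hpp>

        class Implementation {
            long refCount;                        // reference count lives inside the object
            // HANDLE someData; ... as in the question
        public:
            Implementation() : refCount(0) {}
            friend void intrusive_ptr_add_ref(Implementation* p) { ++p->refCount; }
            friend void intrusive_ptr_release(Implementation* p) {
                if (--p->refCount == 0) delete p; // last owner destroys the object
            }
        };

        class Resource {
            boost::intrusive_ptr<Implementation> impl;  // no separate control-block allocation
        public:
            Resource() : impl(new Implementation()) {}
        };

    An alternative that keeps plain shared_ptr is boost::make_shared (or allocate_shared), which puts the reference counts in the same allocation as the object, avoiding the second heap allocation without any intrusive changes.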

    Read the article

  • WCF Service error received when using TCP: "The message could not be dispatched..."

    - by StM
    I am new to creating WCF services. I have created a WCF web service in VS2008 that is running on IIS 7. When I use http the service works perfectly. When I configure the service for TCP and run I get the following error message. There was a communication problem. The message could not be dispatched because the service at the endpoint address 'net:tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc is unavailable for the protocol of the address. I have searched a lot of forums, including this one, for a resolution but nothing has worked. Everything appears to be set up correctly on IIS 7. WAS has been set up to run. The default web site has a net.tcp binding and the application has net.tcp under the enabled protocols. I am including what I think is the important part of the web.config from the host project and also the app.config from the client project I am using to test the service. Hopefully someone can spot my error. Thanks in advance for any help or recommendations that anyone can provide. Web.Config <bindings> <wsHttpBinding> <binding name="wsHttpBindingNoMsgs"> <security mode="None" /> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="CogIDServiceHost.ServiceBehavior" name="CogIDServiceLibrary.CogIdService"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsgs" contract="CogIDServiceLibrary.CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <endpoint name="CoordinateIdService_TCP" address="net.tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc" binding="netTcpBinding" bindingConfiguration="" contract="CogIDServiceLibrary.CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> </service> </services> <behaviors> <serviceBehaviors> <behavior name="CogIDServiceHost.ServiceBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> App.Config <system.serviceModel> <diagnostics performanceCounters="Off"> <messageLogging logEntireMessage="true" logMalformedMessages="false" logMessagesAtServiceLevel="false" logMessagesAtTransportLevel="false" /> </diagnostics> <behaviors /> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_CogIdTool" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" establishSecurityContext="true" /> </security> </binding> <binding name="wsHttpBindingNoMsg"> <security mode="None"> <transport clientCredentialType="Windows" /> <message clientCredentialType="Windows" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://sdet/CogId_WCF/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsg" 
contract="CogIdServiceReference.CogIdTool" name="IISHostWsHttpBinding"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="http://localhost:1890/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_CogIdTool" contract="CogIdServiceReference.CogIdTool" name="WSHttpBinding_CogIdTool"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="http://elec/CoordinateIdTool/IdToolService.svc" binding="wsHttpBinding" bindingConfiguration="wsHttpBindingNoMsg" contract="CogIdServiceReference.CogIdTool" name="IIS7HostWsHttpBinding_Elec"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="net.tcp://elec:9090/CoordinateIdTool_Tcp/IdToolService.svc" binding="netTcpBinding" bindingConfiguration="" contract="CogIdServiceReference.CogIdTool" name="IIS7HostTcpBinding_Elec" > <identity> <dns value="localhost"/> </identity> </endpoint> </client> </system.serviceModel>

    Read the article

  • Drupal 6: getting particular fields from Node Reference types...

    - by artmania
    Hi friends, I'm a Drupal newbie... <?php print $node->field_date[0]['view']; ?> I can get a custom-created CCK field's value and display it in tpl.php files as above... that's fine. My question is: how can I get the fields inside a Node Reference field? For example, I have an event content type, and I have defined a Node Reference for Location (title, address, img, etc.). When I write the code below, it displays all of the location content; <?php print $node->field_location[0]['view']; ?> but I need to get only the address field from this location content type. Something like the code below would be great :D but it's not working; <?php print $node->field_location[0]['field_address']['view']; ?> So how can I get that? I appreciate the help so much! Thanks a lot!

    Read the article

  • Strict pointer aliasing: is access through a 'volatile' pointer/reference a solution?

    - by doublep
    On the heels of a specific problem, a self-answer and comments to it, I'd like to understand if it is a proper solution, a workaround/hack, or just plain wrong. Specifically, I rewrote code: T x = ...; if (*reinterpret_cast <int*> (&x) == 0) ... As: T x = ...; if (*reinterpret_cast <volatile int*> (&x) == 0) ... with a volatile qualifier added to the pointer. Let's just assume that treating T as int in my situation makes sense. Does this access through a volatile pointer solve the pointer aliasing problem? For a reference, from the specification: [ Note: volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation. See 1.9 for detailed semantics. In general, the semantics of volatile are intended to be the same in C++ as they are in C. — end note ] EDIT: The above code did solve my problem, at least on GCC 4.5.
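    For comparison, the alternative most often recommended for this situation (my own sketch, not something proposed in the question or its answers) avoids the aliasing question entirely by copying the object representation instead of reinterpreting the pointer:

        #include <cstring>

        // Assumes sizeof(T) >= sizeof(int), as in the question.
        // memcpy reads the bytes of x without creating an int lvalue that
        // aliases a T object, so the strict aliasing rule is not in play.
        template <typename T>
        bool firstWordIsZero(const T& x) {
            int v;
            std::memcpy(&v, &x, sizeof(v));
            return v == 0;
        }

    Whether the volatile cast also suppresses the problematic optimizations is compiler-specific; the memcpy form is well defined as long as the copied bytes form a valid int value.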

    Read the article

  • Is this a mistake in the reference or a bug in the code?

    - by mikezang
    I copied some text from the NSDate reference, shown below. Please check the Return Value section: it says the format will be YYYY-MM-DD HH:MM:SS ±HHMM, but I got the output below in my app, so is the reference mistaken, or is my code? Saturday, January 1, 2011 12:00:00 AM Japan Standard Time or 2011年1月1日土曜日 0時00分00秒 日本標準時 descriptionWithLocale: Returns a string representation of the receiver using the given locale. - (NSString *)descriptionWithLocale:(id)locale Parameters locale An NSLocale object. If you pass nil, NSDate formats the date in the same way as the description method. On Mac OS X v10.4 and earlier, this parameter was an NSDictionary object. If you pass in an NSDictionary object on Mac OS X v10.5, NSDate uses the default user locale—the same as if you passed in [NSLocale currentLocale]. Return Value A string representation of the receiver, using the given locale, or if the locale argument is nil, in the international format YYYY-MM-DD HH:MM:SS ±HHMM, where ±HHMM represents the time zone offset in hours and minutes from GMT (for example, “2001-03-24 10:45:32 +0600”)

    Read the article

  • Is it possible, in ASP.NET, to reference a private assembly in a non-virtual subdirectory?

    - by Bago
    Is it possible to reference a private assembly in ASP.NET from a sub-folder that is not set up as a virtual directory? In other words, my page is set up in ~/subdir, I don't have access to ~/, and I am not an IIS admin. Can I reference a private assembly? How would I do this? I've tried <%@ Assembly Src="/subdir/bin/Assembly.dll % and <%@ Assembly Src="/subdir/bin/Assembly.dll % , but I get the messages "There is no build provider to match the extension .dll" or "Failed to map to path" respectively. Here is my folder structure: / | -subdir | | - Bin | | | *Assembly.dll | | *Default.aspx I've heard that a setting in web.config might do the trick, but when I've tried it, it doesn't seem to work. Furthermore, I've read that it only works in the application .config file (i.e., the one in ~/). Anyhow, I already tried adding the following to web.config: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="/subdir/bin" /> <dependentAssembly> <codeBase href="/subdir/bin/Assembly.dll"/> </dependentAssembly> </assemblyBinding> For more background on my problem, I am simply using a shared host, all the access I have is to that subdirectory, and I am trying to use FCKeditor.

    Read the article

  • Are vector assignments copied by value or by reference in Google's Go language?

    - by Brian T Hannan
    In the following code, I create one peg puzzle then do a move on it which adds a move to its movesAlreadyDone vector. Then I create another peg puzzle then do a move on it which adds a move to its movesAlreadyDone vector. When I print out the values in that vector for the second one, it has the move in it from the first one along with the move from the second one. Can anyone tell me why it seems to be assigning by reference and not value? Are vector assignments copied by value or by reference in Google's Go language? package main import "fmt" import "container/vector" type Move struct { x0, y0, x1, y1 int } type PegPuzzle struct { movesAlreadyDone * vector.Vector; } func (p *PegPuzzle) InitPegPuzzle(){ p.movesAlreadyDone = vector.New(0); } func NewChildPegPuzzle(parent *PegPuzzle) *PegPuzzle{ retVal := new(PegPuzzle); retVal.movesAlreadyDone = parent.movesAlreadyDone; return retVal } func (p *PegPuzzle) doMove(move Move){ p.movesAlreadyDone.Push(move); } func (p *PegPuzzle) printPuzzleInfo(){ fmt.Printf("-----------START----------------------\n"); fmt.Printf("moves already done: %v\n", p.movesAlreadyDone); fmt.Printf("------------END-----------------------\n"); } func main() { p := new(PegPuzzle); cp1 := new(PegPuzzle); cp2 := new(PegPuzzle); p.InitPegPuzzle(); cp1 = NewChildPegPuzzle(p); cp1.doMove(Move{1,1,2,3}); cp1.printPuzzleInfo(); cp2 = NewChildPegPuzzle(p); cp2.doMove(Move{3,2,5,1}); cp2.printPuzzleInfo(); } Any help will be greatly appreciated. Thanks!

    Read the article

  • Why is there no implicit conversion from pointer to reference to const pointer?

    - by user316606
    I'll illustrate my question with code: #include <iostream> #include <cstring> void PrintInt(const unsigned char*& ptr) { int data = 0; ::memcpy(&data, ptr, sizeof(data)); /* advance the pointer reference */ ptr += sizeof(data); std::cout << std::hex << data << " " << std::endl; } int main(int, char**) { unsigned char buffer[] = { 0x11, 0x11, 0x11, 0x11, 0x22, 0x22, 0x22, 0x22, }; /* const */ unsigned char* ptr = buffer; PrintInt(ptr); /* error C2664: ... */ PrintInt(ptr); /* error C2664: ... */ return 0; } When I compile this code (in VS2008) I get this: error C2664: 'PrintInt' : cannot convert parameter 1 from 'unsigned char *' to 'const unsigned char *&'. If I uncomment the "const" comment it works fine. However, shouldn't the pointer implicitly convert to a pointer-to-const, and then the reference be taken? Am I wrong in expecting this to work? Thanks!
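    The usual explanation (my own illustration, not taken from the question) is that binding unsigned char* directly to const unsigned char*& would open a hole in const correctness, because the reference could then be used to make the original non-const pointer point at a const object:

        const unsigned char forbidden = 0xFF;

        void whyItIsRejected() {
            unsigned char* p = 0;
            // const unsigned char*& r = p;  // rejected: if this were allowed...
            // r = &forbidden;               // ...p would now point at a const object...
            // *p = 0;                       // ...and this would modify it through a non-const path
        }

        // Signatures that do accept an unsigned char* argument:
        void PrintIntByValue(const unsigned char* ptr);           // conversion yields a temporary pointer
        void PrintIntConstRef(const unsigned char* const& ptr);   // const reference binds to that temporary

    With the by-value or const-reference forms, however, the caller's pointer is no longer advanced by the function, which is presumably why the question uses a non-const reference in the first place.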

    Read the article

  • How do I get a reference to a rootViewController into a sub-view?

    - by Andy
    An answer posted for one of my previous questions brings up another question; I am calling a new view controller, "RuleBuilder," from my rootViewController. The rootViewController holds a reference to a contacts array. How do I get a reference to that array into the RuleBuilder? I tried adding UITableViewController *rootViewController; ... @property (nonatomic, retain) UITableViewController *rootViewController; to RuleBuilder.h, and then @synthesize rootViewController; in RuleBuilder.m. When I instantiate and push the RuleBuilder from within rootViewController, I do this: ruleBuilder.rootViewController = self; But when I try this [rootViewController.contacts addObject:newContact]; from within RuleBuilder, I get a compiler error to the effect of "request for 'contacts' in something not a struct" (or very similar; I haven't implemented this exact snippet of code, but I tried an identical approach not an hour ago for a couple of different references that I never was able to get working). Thanks, again, for your help.

    Read the article

  • How to reference a Domain Controller outside the local network?

    - by Adrian
    We have multiple servers scattered over different hosting providers. For learning, experimenting and, ultimately, production purposes, I set one of them up as a Domain Controller. That went well; most of our services are now authenticating via AD, which helps us a lot. What I want to do now is simplify authentication for the multiple servers by making each of them look at the Domain Controller. This way, our Devs can log into (Remote Desktop) the multiple servers with the same credentials from AD. I know I have to configure each server to look at the Domain Controller. But when I try to join a server to the domain, it cannot find the Domain Controller, although the Domain Controller's address is a valid, reachable internet sub-domain (as in "ad.ourcompany.com"). This is the detailed error message: Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt. The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain ad.ourcompany.com: The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR) The query was for the SRV record for _ldap._tcp.dc._msdcs.ad.ourcompany.com Common causes of this error include the following: - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 109.188.207.9 109.188.207.10 - One or more of the following zones do not include delegation to its child zone: ad.ourcompany.com ourcompany.com com . (the root zone) For information about correcting this problem, click Help. What am I missing? I'm an experienced Dev, but a newbie Sysadmin experimenting with new stuff. Disclaimer: All IP addresses and domains/subdomains were changed to preserve security. If by any chance you can still see private information, please let me know so that I can change it.

    Read the article

  • Agile PLM 9.3 Service Pack 2 (SP2 or 9.3.0.2) is released along with AUT 1.6.2.0 and AutoVue 20 for Agile PLM

    - by Shane Goodwin
    Oracle released Agile PLM 9.3 SP2 on June 14 and the Agile installer for AutoVue 20 for Agile PLM on April 30. Also available are the new versions of AUT and Averify - 1.6.3 for both tools. 9.3 SP2 is a combined English and NLS release for use on any version of 9.3.0. SP2 contains many bug fixes and rolls up several Hot Fixes - please review the Readme for all the details. In addition, this release also addresses some scalability issues when working with very large Exports and Reports. When exporting very large BOMs, the export module will now release objects more efficiently to reduce the amount of memory consumed on the Application Server. Adminstrators can also control the maximum row limits for Users verses system processes, like ACS. Several out of the box BOM reports have also been changed to use a new row limit option. The combination of all these changes will provide more stability on the application server for customers managing very large datasets. 9.3 SP2 also adds support for Oracle Database 11gR2 for Windows, Oracle Internet Directory (OID) and Oracle Access Manager (OAM). Please note that currently the Variant Patch is not intended to be released for SP2. Customers running the Variant Patch should remain on 9.3.0.0 or 9.3.0.1. Back in April, we also released the AutoVue 20 for Agile PLM installer. AutoVue 20 has many new features which will help Agile PLM customers. Large multi-page Word documents and 2D CAD documents will open more quickly to the first page or first rendition. Memory usage is less when working with 3D Models. There are many new formats supported for MCAD, 2D Cad, and EDA. AutoVue 20 is immediately available for Windows and Linux platforms. The new software can be found in Edelivery or Metalink / Oracle Support: - AutoVue 20 for Agile PLM is on E-Delivery with part number B58963-01 - Oracle Agile PLM 9.3 Service Pack 2 (9.3.0.2) My Oracle Support Patch ID 9782736 - AVERIFY 1.6.3 My Oracle Support Patch ID 9791892 - AUT 1.6.3 My Oracle Support Patch ID 9791908 - Agile PLM 9.3 SP2 Documentation is available on the OTN Agile Documentation Page

    Read the article

  • Set up a TFS Server/Service demo environment in less than 1 minute now!

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/  | Download | Source Code | Report a Bug | Ideas To demonstrate the capabilities of the TFS 2012 Server/Service Task Board, you would need to set up TFS with some teams, a few team members, some sample stories, tasks, etc. That's too many steps if you ask me! Hi! My name is Tarun Arora, I am a Microsoft MVP in Visual Studio ALM & a Visual Studio ALM Ranger; as a consultant I have had to demo TFS Preview to potential customers several times a day. I usually create the team project during the demo to show off how quick and efficient it is, but setting up teams, team members and tasks usually takes longer, so I prefer not to carry out those steps during the demo. I have developed a .NET-based console application which uses the TFS API to create a standard demo environment, saving me from all these manual steps. The console application reads the setup information from an XML file, leaving the setup process highly customizable. Figure 1 – Demo Dictionary, change values here for unique setup The console application today sets up: 1. Create a new Team 2. Set the team as the default team 3. Configure team settings      a. Set Backlog Iteration path      b. Set Team Iterations and start & finish dates      c. Set Team Area path 4. Add Team Members 5. Add Product Backlog Items & linked Tasks. Image 2 – The team website before (on the left) and after (on the right) running the console app Image 3 – Team configuration before (on left) and after (on right) with new team Demo and 2 members Image 4 – Iteration configuration before (on left) and after (on right) with new backlog iteration path & sprint dates set Image 5 – Area configuration before (on left) and after (on right) with area path configured for the team Image 6 – A demo-ready Task Board and Task Board for Team Members Credits: - Mattias Sköld [Visual Studio ALM Ranger] – I have used TfsTeamTools to perform team creation & add members - Ivan Popek – TFS 2012 API blog posts had some fantastic reusable samples. - Shai Raiten [Microsoft ALM MVP] – Great collection of posts on the TFS API. Enjoy!

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing.  This tracing section summarizes the top service requests happening in the server along with how they are performing.  By default, it has 2 different rotations.  One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services).  These traces provide the total time for all the requests against that service along with the number of requests and its average request time.  This information can provide a good start in possibly troubleshooting performance issues or tracking a particular issue.   >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1>requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919 requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report***** To change the default rotation or size of output, these can be set as configuration variables for the server: RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20) If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page and now you will get an audit entry for each and every service request.  >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs) What's nice is it reports who executed the service and how long that particular request took.  In some cases, depending on the service, additional information will be added to the tracing relevant to that  service. >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs) You can even go into more detail and insert any additional data into the tracing.  
You simply need to add this configuration variable with a comma separated list of variables from local data to insert. RequestAuditAdditionalVerboseFieldsList=TotalRows,path In this case, for any search results, the number of items the user found is traced: >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta... I also recently ran into the case where services were being called from a client through RIDC.  All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server.  So what we did was add a new field to the request audit list: RequestAuditAdditionalVerboseFieldsList=ClientToken And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request.  Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.

    Read the article

  • Need WIF Training?

    - by Your DisplayName here!
    I spend numerous hours every month answering questions about WIF and identity in general. This made me realize that this is still quite a complicated topic once you go beyond the standard fedutil stuff. My good friend Brock and I put together a two day training course about WIF that covers everything we think is important. The course includes extensive lab material where you take standard application and apply all kinds of claims and federation techniques and technologies like WS-Federation, WS-Trust, session management, delegation, home realm discovery, multiple identity providers, Access Control Service, REST, SWT and OAuth. The lab also includes the latest version of the thinktecture identityserver and you will learn how to use and customize it. If you are looking for an open enrollment style of training, have a look here. Or contact me directly! The course outline looks as follows: Day 1 Intro to Claims-based Identity & the Windows Identity Foundation WIF introduces important concepts like conversion of security tokens and credentials to claims, claims transformation and claims-based authorization. In this module you will learn the basics of the WIF programming model and how WIF integrates into existing .NET code. Externalizing Authentication for Web Applications WIF includes support for the WS-Federation protocol. This protocol allows separating business and authentication logic into separate (distributed) applications. The authentication part is called identity provider or in more general terms - a security token service. This module looks at this scenario both from an application and identity provider point of view and walks you through the necessary concepts to centralize application login logic both using a standard product like Active Directory Federation Services as well as a custom token service using WIF’s API support. Externalizing Authentication for SOAP Services One big benefit of WIF is that it unifies the security programming model for ASP.NET and WCF. In the spirit of the preceding modules, we will have a look at how WIF integrates into the (SOAP) web service world. You will learn how to separate authentication into a separate service using the WS-Trust protocol and how WIF can simplify the WCF security model and extensibility API. Day 2 Advanced Topics:  Security Token Service Architecture, Delegation and Federation The preceding modules covered the 80/20 cases of WIF in combination with ASP.NET and WCF. In many scenarios this is just the tip of the iceberg. Especially when two business partners decide to federate, you usually have to deal with multiple token services and their implications in application design. Identity delegation is a feature that allows transporting the client identity over a chain of service invocations to make authorization decisions over multiple hops. In addition you will learn about the principal architecture of a STS, how to customize the one that comes with this training course, as well as how to build your own. Outsourcing Authentication:  Windows Azure & the Azure AppFabric Access Control Service Microsoft provides a multi-tenant security token service as part of the Azure platform cloud offering. This is an interesting product because it allows to outsource vital infrastructure services to a managed environment that guarantees uptime and scalability. 
Another advantage of the Access Control Service is, that it allows easy integration of both the “enterprise” protocols like WS-* as well as “web identities” like LiveID, Google or Facebook into your applications. ACS acts as a protocol bridge in this case where the application developer doesn’t need to implement all these protocols, but simply uses a service to make it happen. Claims & Federation for the Web and Mobile World Also the web & mobile world moves to a token and claims-based model. While the mechanics are almost identical, other protocols and token types are used to achieve better HTTP (REST) and JavaScript integration for in-browser applications and small footprint devices. Also patterns like how to allow third party applications to work with your data without having to disclose your credentials are important concepts in these application types. The nice thing about WIF and its powerful base APIs and abstractions is that it can shield application logic from these details while you can focus on implementing the actual application. HTH

    Read the article
