Search Results

Search found 10706 results on 429 pages for 'pkg config'.


  • Implementations of an Interface with Different Types?

    - by b3njamin
    Searched as best I could but unfortunately I've learned nothing relevant; basically I'm trying to work around the following problem in C#... For example, I have three possible references (refA, refB, refC) and I need to load the correct one depending on a configuration option. So far however I can't see a way of doing it that doesn't require me to use the name of said referenced object all through the code (the referenced objects are provided, I can't change them). Hope the following code makes more sense: public ??? LoadedClass; public Init() { /* load the object, according to which version we need... */ if (Config.Version == "refA") { Namespace.refA LoadedClass = new refA(); } else if (Config.Version == "refB") { Namespace.refB LoadedClass = new refB(); } else if (Config.Version == "refC") { Namespace.refC LoadedClass = new refC(); } Run(); } private void Run(){ { LoadedClass.SomeProperty... LoadedClass.SomeMethod(){ etc... } } As you can see, I need the Loaded class to be public, so in my limited way I'm trying to change the type 'dynamically' as I load in which real class I want. Each of refA, refB and refC will implement the same properties and methods but with different names. Again, this is what I'm working with, not by my design. All that said, I tried to get my head around Interfaces (which sound like they're what I'm after) but I'm looking at them and seeing strict types - which makes sense to me, even if it's not useful to me. Any and all ideas and opinions are welcome and I'll clarify anything if necessary. Excuse any silly mistakes I've made in the terminology, I'm learning all this for the first time. I'm really enjoying working with an OOP language so far though - coming from PHP this stuff is blowing my mind :-)
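
    One possible way to handle this (a sketch, assuming hypothetical provided classes RefA and RefB with differently named members): define your own interface with the names you want, wrap each provided class in a thin adapter, and pick the adapter from the configuration value.

        using System;

        // The interface the rest of your code programs against.
        public interface IProviderAdapter
        {
            string SomeProperty { get; }
            void SomeMethod();
        }

        // Hypothetical stand-ins for the provided classes, each with its own member names.
        public class RefA { public string NameA = "A"; public void DoWorkA() { Console.WriteLine("refA working"); } }
        public class RefB { public string NameB = "B"; public void DoWorkB() { Console.WriteLine("refB working"); } }

        // One thin adapter per provided class; only the adapters know the real names.
        public class RefAAdapter : IProviderAdapter
        {
            private readonly RefA _inner = new RefA();
            public string SomeProperty { get { return _inner.NameA; } }
            public void SomeMethod() { _inner.DoWorkA(); }
        }

        public class RefBAdapter : IProviderAdapter
        {
            private readonly RefB _inner = new RefB();
            public string SomeProperty { get { return _inner.NameB; } }
            public void SomeMethod() { _inner.DoWorkB(); }
        }

        public static class Program
        {
            public static void Main()
            {
                string version = "refA"; // would come from Config.Version
                IProviderAdapter loaded = (version == "refA")
                    ? (IProviderAdapter)new RefAAdapter()
                    : new RefBAdapter();
                Console.WriteLine(loaded.SomeProperty);
                loaded.SomeMethod();
            }
        }

    With this shape the configuration check lives in exactly one place, and everything else only ever sees IProviderAdapter.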

    Read the article

  • Sending emails with CodeIgniter in a local XAMPP server

    - by KeyStroke
    Hi, I'm trying to send emails through localhost (XAMPP Windows 1.7.3 installation), but I've been trying for hours with no success. This is the last code I tried: $config = Array( 'protocol' => 'smtp', 'smtp_host' => 'ssl://smtp.gmail.com', 'smtp_port' => 465, 'smtp_user' => '[email protected]', 'smtp_pass' => 'mypassword', ); $this->load->library('email', $config); $this->email->set_newline("\r\n"); $this->email->from('[email protected]', 'My Name'); $this->email->to('[email protected]'); $this->email->subject('Email Test'); $this->email->message('Testing the email class.'); if($this->email->send()) { echo 'Your email was sent.'; } else { show_error($this->email->print_debugger()); } Whenever I try to load this, the page shows that it's loading but nothing happens. Is there anything I need to set up on my server to ensure email delivery? I messed with php.ini and sendmail's config a bit with no luck. And openSSL isn't available in case that matters. Any idea what's wrong?

    Read the article

  • KO3: How to deal with stylesheets and scriptfiles

    - by Svish
    I'm using Kohana 3 and its template controller. My main site template controller currently looks something like this: <?php defined('SYSPATH') or die('No direct script access.'); abstract class Controller_SiteTemplate extends Controller_Template { public function before() { parent::before(); // Initialize default template variables $this->template->styles = Kohana::config('site.styles'); $this->template->scripts = Kohana::config('site.scripts'); $this->template->title = ''; $this->template->content = ''; } } And then in my template view I do: <?php # Styles foreach($styles as $file => $media) echo HTML::style($file, array('media' => $media)).PHP_EOL ?> <?php # Scripts foreach($scripts as $file) echo HTML::script($file).PHP_EOL ?> This works alright. The problem is that it requires the style- and script files to be added in the controller, which shouldn't really have to care about those. It also makes it a hassle if the views are done by someone other than me, since they would have to fool around with the controller just to add a new stylesheet or a new script file. How can this be done in a better way? Just to clarify, what I am wondering is how to deal with page-specific stylesheets and scripts. The default and site-wide ones I have no problem with fetching from a config file or just putting directly in the template view. My issue is how to add custom ones for specific pages in a good way.

    Read the article

  • Named pipe stalls threads?

    - by entens
    I am attempting to push updates into a process via a named pipe, but in doing so my process loop now seems to stall on while ((line = sr.ReadLine()) != null). I'm a little mystified as to what might be wrong, as this is my first foray into named pipes. void RefreshThread() { using (NamedPipeServerStream pipeStream = new NamedPipeServerStream("processPipe", PipeDirection.In)) { pipeStream.WaitForConnection(); using (StreamReader sr = new StreamReader(pipeStream)) { for (; ; ) { if (StopThread == true) { StopThread = false; return; // exit loop and terminate the thread } // push update for heartbeat int HeartbeatHandle = ItemDictionary["Info.Heartbeat"]; int HeartbeatValue = (int)Config.Items[HeartbeatHandle].Value; Config.Items[HeartbeatHandle].Value = ++HeartbeatValue; SetItemValue(HeartbeatHandle, HeartbeatValue, (short)0xC0, DateTime.Now); string line = null; while ((line = sr.ReadLine()) != null) { // line is in the format: item, value, timestamp string[] parts = line.Split(','); // push update and store value in item cache int handle = ItemDictionary[parts[0]]; object value = parts[1]; Config.Items[handle].Value = int.Parse(value); DateTime timestamp = DateTime.FromBinary(long.Parse(parts[2])); SetItemValue(handle, value, (short)0xC0, timestamp); } Thread.Sleep(500); } } } }
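
    Worth noting: StreamReader.ReadLine over a pipe only returns null once the writing end disconnects, so the inner while loop blocks for as long as the client holds the pipe open without sending data, which looks exactly like a stall. A minimal sketch of a matching client (the item name and the item,value,ticks format are assumptions taken from the parsing code above):

        using System;
        using System.IO;
        using System.IO.Pipes;

        public static class PipeFeeder
        {
            public static void Main()
            {
                using (var pipe = new NamedPipeClientStream(".", "processPipe", PipeDirection.Out))
                {
                    pipe.Connect();
                    using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                    {
                        // item, value, timestamp (ticks), matching the server's Split(',')
                        writer.WriteLine("Info.SomeItem,42," + DateTime.Now.ToBinary());
                    }
                    // Disposing the writer closes the pipe, which is what finally lets the
                    // server's ReadLine() return null so its outer loop can continue.
                }
            }
        }

    If the updates need to interleave with the heartbeat, reading lines asynchronously or on a separate thread avoids blocking the refresh loop.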

    Read the article

  • Update Web Reference in Visual Studio

    - by NeilD
    Hi, I have inherited a web site project that makes use of a number of WCF Web Services hosted on a BizTalk server. We have two environments that I need to deploy this project to, with different URLs for the different BizTalk servers. i.e. In the Staging environment, I need to point the services at xx.xx.xx.101 In the Live environment, I need to point them at xx.xx.xx.102, or whatever. Currently, we've got all of the URLs stored in keys in the web.config file, so that we can change them dynamically... Unfortunately this isn't working! If I change the URL in the web.config to something other than what the project was compiled with, I get an error when calling the service: Server did not recognize the value of HTTP Header SOAPAction: xx.xx.xx.101\ServiceName\MethodName I'm told that the only way they've known to deploy this is to update the web.config URLs, change all of the web references in Visual Studio to match, click on "update web reference" for each reference in Visual Studio, and then compile. It's driving me mad! I've written a pre-build NAnt script to go through and replace all instances of the URL found anywhere in the project directory, and even that isn't making any difference. There must be something else being pulled down from the service when I click the "update reference", but I'm new to working with web services, and so I'm not sure what. Does anyone have any ideas? Is there a way to do this programmatically? Thanks.

    Read the article

  • Path String Concatenation Question.

    - by Nano HE
    Hi. Please see my code below. ifstream myLibFile ("libs//%s" , line); // Compile failed here ??? I want to combine the path string and open the related file again. #include <iostream> #include <fstream> #include <string> using namespace std; int main () { string line; ifstream myfile ("libs//Config.txt"); // There are several file names listed in the COnfig.txt file line by line. if (myfile.is_open()) { while (! myfile.eof() ) { getline (myfile,line); cout << line << endl; // Read details lib files based on the each line file name. string libFileLine; ifstream myLibFile ("libs//%s" , line); // Compile failed here ??? if (myLibFile.is_open()) { while (! myLibFile.eof() ) { cout<< "success\n"; } myLibFile.close(); } } myfile.close(); } else cout << "Unable to open file"; return 0; } Assume my [Config.txt] include the content below. And all the *.txt files located in libs folder. file1.txt file2.txt file3.txt

    Read the article

  • Is it possible to persist two profiles with the Profile Provider Model?

    - by NickGPS
    I have a website that needs to store two sets of user data into separate data stores. The first set of data is used by the SiteCore CMS and holds information about the user. The second set of data is used by a personalisation application that stores its own user data. The reason they aren't stored together in the same profile object is because the personalisation application is used across multiple websites that do not all use SiteCore. I am able to create multiple Profile Providers - I currently have one from SiteCore and a custom provider that I have written and these both work in isolation. The problem exists when you try to configure both in the same web.config file. It seems you can only specify a single Profile object in the web.config file, rather than one for each provider. This means that regardless of which provider is being used the .Net framework sends through the profile object that is specified in the "inherits" parameter in the profile section of the web.config file. My questions are - Is it possible to specify a different Profile object for each Profile Provider? If so, how and where is this specified? Thanks, Nick
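
    One avenue worth sketching (the provider name, user name and property name below are invented): the single inherits profile class only constrains the strongly typed HttpContext.Profile wrapper, and individual profile properties can also be mapped to different providers in the <properties> section. A second registered provider can additionally be driven directly through ProfileManager.Providers and the SettingsProvider API, leaving the SiteCore provider as the default for the inherited profile object.

        using System.Configuration;
        using System.Web.Profile;

        public static class PersonalisationProfile
        {
            public static object Read(string userName, string propertyName)
            {
                // Second provider registered in <profile><providers> under this (assumed) name.
                var provider = (ProfileProvider)ProfileManager.Providers["PersonalisationProvider"];

                var context = new SettingsContext
                {
                    { "UserName", userName },
                    { "IsAuthenticated", true }
                };

                var properties = new SettingsPropertyCollection();
                properties.Add(new SettingsProperty(propertyName)
                {
                    PropertyType = typeof(string),
                    SerializeAs = SettingsSerializeAs.String
                });

                SettingsPropertyValueCollection values = provider.GetPropertyValues(context, properties);
                return values[propertyName].PropertyValue;
            }
        }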

    Read the article

  • Cannot generate/run migrations on rails 2.3.4

    - by Brian Roisentul
    I used to work with rails 2.3.2 before and then I decided to upgrade to version 2.3.4. Today I tried to generate a migration(I could do this fine with version 2.3.2) and I got the following error message: C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:812:in `const_missing': uninitialized constant ActiveSupport (NameError) from D:/Proyectos/Cursometro/www/config/environment.rb:33 from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/initializer.rb:111:in `run' from D:/Proyectos/Cursometro/www/config/environment.rb:15 from D:/Proyectos/Cursometro/www/config/environment.rb:31:in `require' from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:1 from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/generate.rb:31:in `require' from C:/Program Files (x86)/NetBeans 6.8/ruby2/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script\generate:3 I don't know why this is happening. Everything worked fine in 2.3.2 and now it doesn't.

    Read the article

  • Spring.net custom namespace parser

    - by ListenToRick
    I have a custom parser which looks like this: [NamespaceParser( Namespace = "http://mysite/schema/cache", SchemaLocationAssemblyHint = typeof(CacheNamespaceParser ), SchemaLocation = "/cache.xsd" ) ] public class CacheNamespaceParser : NamespaceParserSupport { public override void Init() { RegisterObjectDefinitionParser("cache", new CacheParser ()); } } public class CacheParser : AbstractSimpleObjectDefinitionParser { protected override Type GetObjectType(XmlElement element) { return typeof(CacheDefinition); } protected override void DoParse(XmlElement element, ObjectDefinitionBuilder builder) { } protected override bool ShouldGenerateIdAsFallback { get { return true; } } } In the web config I have the following configuration.... <spring> <parsers> <parser type="Spring.Data.Config.DatabaseNamespaceParser, Spring.Data"/> <parser type="App.Web.CacheNamespaceParser, WebApp" /> </parsers> When I run the project I get the following error: An error occurred creating the configuration section handler for spring/parsers: Invalid resource name. Name has to be in 'assembly:<assemblyName>/<namespace>/<resourceName>' format. I put a breakpoint in the CacheNamespaceParser Init method and it is called. If I remove my parser from the web config, all is well! Any ideas what's wrong?
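
    Going by the error text itself, SchemaLocation appears to need an embedded-resource name in the assembly: form rather than a plain path, which also means cache.xsd has to be built into the assembly as an embedded resource. A hedged sketch, where the assembly and namespace segments are guesses based on the type string App.Web.CacheNamespaceParser, WebApp:

        // cache.xsd: Build Action = Embedded Resource in the WebApp assembly.
        [NamespaceParser(
            Namespace = "http://mysite/schema/cache",
            SchemaLocationAssemblyHint = typeof(CacheNamespaceParser),
            SchemaLocation = "assembly://WebApp/App.Web/cache.xsd")]
        public class CacheNamespaceParser : NamespaceParserSupport
        {
            public override void Init()
            {
                RegisterObjectDefinitionParser("cache", new CacheParser());
            }
        }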

    Read the article

  • Is it possible, in ASP.net, to reference private assembly in a non-virtual subdirectory?

    - by Bago
    Is it possible to reference a private assembly in asp .net from a sub-folder that is not setup as a virtual directory? In other words, my page is setup in ~/subdir, I don't have access to ~/, and I am not an IIS admin. Can I reference a private assembly? How would I do this? I've tried <%@ Assembly Src="/subdir/bin/Assembly.dll % and <%@ Assembly Src="/subdir/bin/Assembly.dll % , but I get the messages "There is no build provider to match the extension .dll" or "Failed to map to path" respectively. Here is my folder structure: / | -subdir | | - Bin | | | *Assembly.dll | | *Default.aspx I've heard that in web.config might do the trick, but when I've tried it, it doesn't seem to work. Furthermore, I've read that only works in the application .config file. (i.e., the one in ~/). Anyhow, I already tried adding the following to web.config: <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="/subdir/bin" /> <dependentAssembly> <codeBase href="/subdir/bin/Assembly.dll"/> </dependentAssembly> </assemblyBinding> For more background on my problem, I am simply using a shared host, all I have is access to is that subdirectory, and I am trying to use fckeditor.

    Read the article

  • ADO.NET Data Services Media type requires a ';' character before a parameter definition.

    - by idahosaedokpayi
    I am experimenting with ADO.NET and I am seeing this error on the second attempt to browse the service: <?xml version="1.0" encoding="utf-8" standalone="yes" ?> <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code /> <message xml:lang="en-US">Media type requires a ';' character before a parameter definition.</message> </error> The first attempt is normal. I am working with an exactly identical service on an internal development network and it is fine. I am including my connection string: <add name="J4Entities" connectionString="metadata=res://*;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MNSTSQL01N;Initial Catalog=J4;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient"/> and my Data service class: using System; using System.Data.Services; using System.Collections.Generic; using System.Linq; using System.ServiceModel.Web; public class Data : DataService< J4Model.J4Entities > { // This method is called only once to initialize service-wide policies. public static void InitializeService(IDataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); } } Is there something obvious I am not doing?
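
    Not a fix, but a diagnostic sketch: the IDataServiceConfiguration passed to InitializeService exposes UseVerboseErrors, which normally puts the underlying exception into the error payload instead of the terse media-type message.

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            // For diagnosis only; remove once the underlying error is known.
            config.UseVerboseErrors = true;
        }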

    Read the article

  • Converting to MVC3 - some views still want 'System.Web.Mvc, Version=1.0.0.0,

    - by justSteve
    I've used the directions from the release notes and have been able to navigate most pages - my unit tests are not comprehensive but almost all pass. However... when I attempt to edit an existing or create a new user I'm getting the error pasted below - notice that it references version=1... - this project started life as a v1 and was converted to mvc2 at the RTM. I'm still working with V2 projects but no longer any v1. Am I due for a GAC cleansing? Server Error in '/' Application. Could not load file or assembly 'System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified. === Pre-bind state information === LOG: User = STUDIO11\mUser LOG: DisplayName = System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 (Fully-specified) LOG: Appbase = file:///C:/Users/C:\Users\[path to project]/ LOG: Initial PrivatePath = C:\Users\[path to project]\bin Calling assembly : App_Web_qcjylaoc, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null. === LOG: This bind starts in default load context. LOG: Using application configuration file: C:\Users\[path to project]\web.config LOG: Using host configuration file: LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config. LOG: Post-policy reference: System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35 LOG: The same bind was seen before, and was failed with hr = 0x80070002.

    Read the article

  • Consuming a WCF Service

    - by Lijo
    Hi I created a WCF service which is hosted in windows service. I created a proxy using svcutil “svcutil.exe http://localhost:8000/ServiceModelSamples/FreeServiceWorld?wsdl” It generated an output.config file and proxy class. The output.config has the following element <client> <endpoint address="http://localhost:8000/ServiceModelSamples/FreeServiceWorld" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWeather" contract="IWeather" name="WSHttpBinding_IWeather"> <identity> <servicePrincipalName value="host/D471DTRV.ustr.com" /> </identity> </endpoint> </client> I created a website (as client) and added a new C# file (MyFile.cs) into it. I copied the contents of the proxy class into MyFile.cs. [The output.config is not copied to the web site] In the code behnid of aspx, I am using the following code WeatherClient client= new WeatherClient("WSHttpBinding_IWeather"); It throws an exception as “Could not find endpoint element with name 'WSHttpBinding_IWeather' and contract 'IWeather' in the ServiceModel client configuration section.” Could you please help me to understand the missing link here? Thanks Lijo
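
    A possible cause worth sketching: svcutil writes the client endpoint into output.config, and unless that <system.serviceModel> section is merged into the web site's web.config, the named endpoint simply is not there at runtime. Copying the section across is one fix; another is to supply the binding and address in code (the address below is the one from the question, and svcutil-generated proxies expose this ClientBase constructor):

        using System.ServiceModel;

        public static class WeatherClientFactory
        {
            public static WeatherClient Create()
            {
                var binding = new WSHttpBinding();
                var address = new EndpointAddress(
                    "http://localhost:8000/ServiceModelSamples/FreeServiceWorld");
                return new WeatherClient(binding, address);
            }
        }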

    Read the article

  • Mis-spelling in the .NET configuration system, a design flaw?

    - by smwikipedia
    I just wrote some .NET code to get connection string from the config file. The config file is as below: <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="key1" value="hello,world!"/> </appSettings> <connectionStrings> <add name="conn1" connectionString="abcd"/> </connectionStrings> </configuration> .NET Framework provide the following types to get the connection string: 1- ConnectionStringsSection : stands for the config section containing several connection strings 2- ConnectionStringSettingsCollection : stands for the connection string collection 3- ConnectionStringSettings : stands for a certain connection string. .NET Framework also provide the following types to get the App Settings: 4- AppSettingsSection 5- KeyValueConfigurationCollection 6- KeyValueConfigurationElement Compare 2 to 5, 3 to 6, why are there extra "s" in ConnectionStringSetting[s]Collection and ConnectionStringSetting[s]? This mis-spelling is really mis-leading. I think it's a design flaw. Has anyone noticed that?
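
    For readers following the naming discussion, a short snippet that touches the types in question against the sample config above (it needs a reference to System.Configuration):

        using System;
        using System.Configuration;

        public static class ConfigNamesDemo
        {
            public static void Main()
            {
                // AppSettings flattens to a NameValueCollection at runtime.
                string greeting = ConfigurationManager.AppSettings["key1"];

                // ConnectionStrings is a ConnectionStringSettingsCollection;
                // each entry in it is a ConnectionStringSettings.
                ConnectionStringSettings conn = ConfigurationManager.ConnectionStrings["conn1"];

                Console.WriteLine(greeting + " / " + conn.ConnectionString);
            }
        }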

    Read the article

  • Running Sitecore Production Site under a Virtual Directory

    - by danswain
    We are using Sitecore 6 on a Windows Server 2003 (32bit) dev machine. I know it's not recommended for the CMS editing site, but we've been told it is possible to get the front-end Sitecore websites to run from within a virtual directory. Here's the issue: we'd like to achieve what the below poor mans diagram shows. We have a website (.net 1.1) /WebSiteRoot (.net 1.1) | | |---- Custom .net 1.1 Web Application | |---- SiteCore frontend WebApplication (.net 2.0) | |---- Custom .net 2.0 WebApplication The Sitecore webApplication would contain the Sitecore pipeline in its web.config and we'd make use of the section to configure the virtual folder to allow for where our Sitecore app sits and point it to the appropriate place in the Content Tree. Is it possible to pull this off? This is just the customer facing website, there will be no CMS editing functionality on these servers, that will be done from a more standard Sitecore install inside the firewall on a different server. The errors we're encountering are centered around loading the the various config files in the App_Config folder. It seems to do a Server.MapPath on "/" initially (which is wrong for us) so we've tried putting absolute paths in the web.config and still no joy (I think there must be some hardcoded piece that looks for the Include directory). Any help would be greatly appreciated. Thanks

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.Net Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service and then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server: using System; using System.Data.Services; using System.Linq.Expressions; using System.ServiceModel; using System.Web; namespace Oslo.Worker { [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)] public class AdminService : DataService<OsloEntities> { public static void InitializeService( IDataServiceConfiguration config) { config.SetEntitySetAccessRule("*", EntitySetRights.All); config.SetServiceOperationAccessRule("*", ServiceOperationRights.All); } [QueryInterceptor("Pairs")] public Expression<Func<Pair, bool>> OnQueryPairs() { // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! if (HttpContext.Current.User.Identity.Name != "ADMIN") throw new Exception("Ooops!"); return p => true; } } } Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role: using System; using System.Data.Services; namespace Oslo.Worker { public class AdminHost : DataServiceHost { public AdminHost(Uri baseAddress) : base(typeof(AdminService), new Uri[] { baseAddress }) { } } } And finally, here's the client code. using System; using System.Data.Services.Client; using System.Net; using Oslo.Shared; namespace Oslo.ClientTest { public class AdminContext : DataServiceContext { public AdminContext(Uri serviceRoot, string userName, string password) : base(serviceRoot) { Credentials = new NetworkCredential(userName, password); } public DataServiceQuery<Order> Orders { get { return base.CreateQuery<Pair>("Orders"); } } } } I should mention that the code works great with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....
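
    One detail that may explain it: a NetworkCredential set on DataServiceContext.Credentials is only transmitted when the server issues an authentication challenge, which a plain worker-role endpoint will not do. A hedged sketch of an alternative is to stamp a header on every request through the SendingRequest event and check it in the interceptor; the Basic scheme and header handling below are assumptions, not part of the original design:

        public AdminContext(Uri serviceRoot, string userName, string password)
            : base(serviceRoot)
        {
            this.SendingRequest += (sender, e) =>
            {
                string token = Convert.ToBase64String(
                    System.Text.Encoding.UTF8.GetBytes(userName + ":" + password));
                e.RequestHeaders["Authorization"] = "Basic " + token;
            };
        }

    On the service side, the interceptor would then read that header from the incoming request (for example via WebOperationContext.Current.IncomingRequest.Headers) rather than relying on HttpContext.Current.User, which is unlikely to be populated outside IIS hosting.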

    Read the article

  • Is it a good practice for a .js file to rely on variables declared in the including HTML?

    - by Bozho
    In short: <script type="text/javascript"> var root = '${config.root}'; var userLanguage = '${config.language}'; var userTimezone = '${config.timezone}'; </script> <script type="text/javascript" src="js/scripts.js"></script> And then, in scripts.js, rely on these variables: if (userLanguage == 'en') { .. } The ${..} is simply a placeholder for a value in the script that generates the page. It can be php, jsp, asp, whatever. The point is - it is dynamic, and hence it can't be part of the .js file (which is static). So, is it OK for the static javascript file to rely on these externally defined configuration variables? (they are mainly configuration, of course). Or is it preferred to make the .js file be served dynamically as well (i.e. make it a .php / .jsp, with the proper Content-Type), and have these values defined in there.

    Read the article

  • IIS publish of WCF service -- fails with no error message

    - by tavistmorph
    I have a WCF service which I publish from Visual Studio 2008 to an IIS 6. According to the output window of VS, the publish succeeded, no error messages or warnings. When I look at IIS, the virtual directory was created, but there is no .svc listed in the directory. The directory just has my web.config and a bin. Any attempts to call my WCF service fail because they don't exist. How can I see an error message of what's going wrong? By trial-and-error, I discovered that changing my app.config before publishing will make the service show up. Namely my app.config file has these lines: <binding ...> <security mode="Transport"> <transport clientCreditionalType="None"/> </security> </binding> If I switch "Transport" to "None", then my service shows up on IIS. But I do have a certificate installed on IIS on the server, and as far as I can tell, everything is configured correctly on the server. There is no error message in the event log. How can I find more error messages about why the service is failing to show up?

    Read the article

  • How to unzip an uploaded zip file?

    - by Jaydeepsinh Jadeja
    I am trying to upload a zipped file using the CodeIgniter framework with the following code: function do_upload() { $name=time(); $config['upload_path'] = './uploadedModules/'; $config['allowed_types'] = 'zip|rar'; $this->load->library('upload', $config); if ( ! $this->upload->do_upload()) { $error = array('error' => $this->upload->display_errors()); $this->load->view('upload_view', $error); } else { $data = array('upload_data' => $this->upload->data()); $this->load->library('unzip'); // Optional: Only take out these files, anything else is ignored $this->unzip->allow(array('css', 'js', 'png', 'gif', 'jpeg', 'jpg', 'tpl', 'html', 'swf')); $this->unzip->extract('./uploadedModules/'.$data['upload_data']['file_name'], './application/modules/'); $pieces = explode(".", $data['upload_data']['file_name']); $title=$pieces[0]; $status=1; $core=0; $this->addons_model->insertNewModule($title,$status,$core); } } But the main problem is that when the extract function is called, it extracts the zip but the result is an empty folder. Is there any way to overcome this problem?

    Read the article

  • Compile Apache 2.4.3 on Centos 6.2 (64bit)

    - by RiseCakoPlusplus
    I attempt to compile Apache 2.4.3 with apr-1.4.6 and apr-util-1.5.1 on Centos 6.2 (64bit). ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-unknown-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/var/lib --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --without-pear --with-bz2 --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --with-xpm-dir=/usr --enable-gd-native-ttf --with-t1lib=/usr --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --with-kerberos --enable-ucd-snmp-hack --enable-shmop --enable-calendar --with-libxml-dir=/usr --enable-xml --with-system-tzdata --with-mhash --with-apxs2=/usr/sbin/apxs --libdir=/usr/lib64/php --enable-pdo=shared --with-mysql=shared,/usr --with-mysqli=shared,/usr/lib64/mysql/mysql_config --with-pdo-mysql=shared,/usr/lib64/mysql/mysql_config --without-pdo-sqlite --without-gd --disable-dom --disable-dba --without-unixODBC --disable-xmlreader --disable-xmlwriter --without-sqlite3 --disable-phar --disable-fileinfo --disable-json --without-pspell --disable-wddx --without-curl --disable-posix --disable-sysvmsg --disable-sysvshm --disable-sysvsem ./configure --with-included-apr --with-included-apr-util and when I issue make this happen: /root/httpd-2.4.3/srclib/apr/libtool: line 5989: cd: yes/lib: No such file or directory libtool: link: cannot determine absolute directory name of yes/lib' make[3]: *** [libaprutil-1.la] Error 1 make[3]: Leaving directory/root/httpd-2.4.3/srclib/apr-util' make[2]: * [all-recursive] Error 1 make[2]: Leaving directory /root/httpd-2.4.3/srclib/apr-util' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory/root/httpd-2.4.3/srclib' make: * [all-recursive] Error 1 anything I missed?

    Read the article

  • Facebook API - Detect if session is active OR NOT active

    - by Tom Doe
    I'm sure I'm simply overlooking something in the Facebook API docs. Basically, after I've loaded the FB Graph, I need to know if the session is active... I cannot simply assume they're logged out and simply re-render if they're logged in once the 'auth.statusChange' event is triggered. I need to know right off the bat. Below is the code that I've used. Most importantly the FB.getLoginStatus/getAuthResponse/getAccessToken don't work like I'd expect; essentially where it indicates, when invoked, whether they're logged in or out. (function(d) { // Create fb-root var fb_root = document.createElement('div'); fb_root.id = "fb-root"; document.getElementsByTagName('body')[0].appendChild( fb_root ); // Load FB Async var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0]; if (d.getElementById(id)) {return;} js = d.createElement('script'); js.id = id; js.async = true; js.src = "//connect.facebook.net/en_US/all.js"; ref.parentNode.insertBefore(js, ref); // App config-data var config = { appId : XXXX, cookie: true, status: true, frictionlessRequests: true }; window.fbAsyncInit = function() { // This won't work. // I can't assume they're logged out and rely on this to tell me they're logged in. FB.Event.subscribe('auth.statusChange', function(response) {}); // Init FB.init(config); // These do not inidicate if the user is logged out :( FB.getLoginStatus(function(response) { }); FB.getAuthResponse(function(response) { }); FB.getAccessToken(function(response) { }); }; }(document)); Any help is much appreciated. :)

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11 which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhaling a Solaris Container from Solaris 10 or entire Solaris 10 systems ("the global zone") into virtualized environments on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (unlike stated in the README) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone. 
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy), is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even let the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 10 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so lets log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can login to which zones. Voila! a Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and output from a ping to a nearby host. 
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849 mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843 mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under by Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11, the versioning has been changed from MM/YY style to 11.1 highlighting that this is Solaris 11 Update 1.  Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous, I really can't include all.  I. New AI Automated Installer RBAC profiles have been introduced to enable delegation of installation tasks. II. The interactive installer now supports installing the OS to iSCSI targets. III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled but helps a lot in supportcases. For further information, see: http://oracle.com/goto/solarisautoreg IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (btw, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )  V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root with simply starting a shell from that vi. Now, the new pfedit command provides a solution exactly to this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.   VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...) has been included in Solaris 11.1 as an alternative to syslog.  VII: Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features: ZOSS - Zones on Shared Storage: Placing your zones to shared storage (FC, iSCSI) has never been this easy - via zonecfg.  parallell updates - with S11's bootenvironments updating zones was no problem and meant no downtime anyway, but still, now you can update them parallelly, a way faster update action if you are running a large number of zones. This is like parallell patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.  per-zone fstype statistics: Running zones on a shared filesystems complicate the I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, over kstat you can find out which zone's I/O has an impact on the other ones, see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.  NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.  VIII: Security got a lot of attention too:  Automated security/audit reporting, with builtin reporting templates e.g. for PCI (payment card industry) audits.  PAM is now configureable on a per-user basis instead of system wide, allowing different authentication requirements for different users  SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. 
government security accredited fashion.  SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4 implemented in silicon! That is, goverment-approved cryptography at HW-speed.  Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.  IX. Networking, as one of the core strengths of Solaris 11, has been extended with:  Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB, see the documentation, and additional information on Wikipedia. DataLink MultiPathing, DLMP, enables link aggregation to span across multiple switches, even between those of different vendors. But there are essential differences to the good old bandwidth-aggregating LACP, see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc VNIC live migration is now supported from one physical NIC to another on-the-fly  X. Data management:  FedFS, (Federated FileSystem) is new, it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11, in Solaris 11.1 FedFS uses a LDAP - as the one global nameservice to bind them all.  The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm - thus improving iSCSI throughput while reducing CPU utilization on a T4 Storage locking improvements are now RAC aware, speeding up throughput with better locking-communication between nodes up to 20%!  XI: Kernel performance optimizations: The new Virtual Memory subsystem ("VM2") scales now to 100+ TB Memory ranges.  The memory predictor monitors large memory page usage, and adjust memory page sizes to applications' needs OSM, the Optimized Shared Memory allows Oracle DBs' SGA to be resized online XII: The Power Aware Dispatcher in now by default enabled, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in Solaris 11 OS will cooperate. XIII: x86 boot: upgrade to the (Grand Unified Bootloader) GRUB2. Because grub2 differs in the configuration syntactically from grub1, one shall not edit the new grub configuration (grub.cfg) but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB. XIV: Improved viewing of per-CPU statistics of mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing. XV: Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 has not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away. ...aand I seriously need to stop here. There's a lot I missed, Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB3.0, pulseaudio, trusted extensions updates, etc - but if I mention all those then I effectively copy the What's New document. Which I recommend reading now anyway, it is a great extract of the 300+ new projects and RFE-followups in S11.1. And this blogpost is a summary of that extract.  For closing words, allow me to come back to Request For Enhancements, RFEs. Any customer can request features. 
Open up a Support Request, explain that this is an RFE, describe the feature you/your company desires to have in S11 implemented. The more SRs are collected for an RFE, the more chance it's got to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input.Feel free to comment about this post too. Except that it's too long ;)  wbr,charlie

    Read the article
