Search Results

Search found 10931 results on 438 pages for 'struts config'.


  • How can I find the original un-changed configuration file to compare with the *.rpmnew file?

    - by User
    While upgrading from CentOS 5.7 to 5.8 I've received the following warnings:

        warning: /etc/sysconfig/iptables-config created as /etc/sysconfig/iptables-config.rpmnew
        warning: /etc/ssh/sshd_config created as /etc/ssh/sshd_config.rpmnew
        warning: /etc/odbcinst.ini created as /etc/odbcinst.ini.rpmnew

    (For the reason such files exist, and what one can do with them, see: Why do I have a .rpmnew file after an update?) I want to know exactly what has been changed in the default config file, by comparing the old default file (the original unchanged configuration file) with the new default file (*.rpmnew). Then I can apply the changes to my modified file (i.e. a diff merge). The problem is I don't know where I can find the original unchanged configuration file...
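
    One hedged approach (assuming the stock package is still available from the repositories): fetch the RPM that owns the file and extract the pristine copy, then diff that against the *.rpmnew.

        # Find the owning package, download it, and extract the pristine config
        rpm -qf /etc/ssh/sshd_config
        yumdownloader openssh-server          # from the yum-utils package
        rpm2cpio openssh-server-*.rpm | cpio -idmv ./etc/ssh/sshd_config
        diff etc/ssh/sshd_config /etc/ssh/sshd_config.rpmnew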

    Read the article

  • Error 18446744073709551615 when running iptables in OpenVZ container

    - by xsaero00
    This is related to the question I asked before; now I am getting a different error. iptables reports "Unknown error 18446744073709551615" when trying to apply a simple rule in the VZ container:

        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

    I have done everything that was suggested for the hardware node and the container, but the error persists. On the hardware node, /etc/sysconfig/iptables-config:

        IPTABLES_MODULES="ip_conntrack_netbios_ns ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

    /etc/vz/vz.conf:

        IPTABLES="ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

    /etc/rc.local:

        modprobe xt_tcpudp; modprobe ip_conntrack; modprobe xt_state

    Container config:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ipt_state iptable_nat ip_nat_ftp"

    I have restarted the HN and the container numerous times, but the error is still there. It seems like all the config is in place, but something (perhaps a missing module or a resource limit) is preventing the rule from being applied. Thanks for any help.
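
    For what it's worth, 18446744073709551615 is 2^64 - 1, i.e. a -1 return value printed as an unsigned 64-bit integer, so this is a generic kernel failure rather than a meaningful errno. One hedged observation: none of the module lists above includes the REDIRECT target module itself.

        # On the hardware node: is the REDIRECT target available and loaded?
        lsmod | grep -E 'iptable_nat|ipt_REDIRECT|xt_REDIRECT'
        modprobe ipt_REDIRECT
        # If that helps, add ipt_REDIRECT to the IPTABLES lists above
        # and restart the container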

    Read the article

  • Is there a convenient method to pull files from a server in an SSH session?

    - by tel
    I often SSH into a cluster node for work and, after processing, want to pull several results back to my local machine for analysis. Typically I use a local shell to scp from the server, but this requires a lot of path manipulation. I'd prefer a syntax like interactive FTP, where I could just 'pull' files from the server into my local pwd. Another possible solution might be some way to automatically set up my client computer as an ssh alias, so that something like scp results home:~/results would work as expected. Is there any obscure SSH trick that'll do this for me? Working from grawity's answer, a complete solution in config files looks like this. Local .ssh/config:

        Host ex
            HostName ssh.example.com
            RemoteForward 10101 localhost:22

    ssh.example.com's .ssh/config:

        Host home
            HostName localhost
            Port 10101

    This lets me do commands exactly like scp results home:, transferring the file results to my home machine.
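
    The same trick works without any config files; a one-off sketch of the equivalent reverse tunnel:

        # Open the session with a reverse tunnel back to your own sshd
        ssh -R 10101:localhost:22 ssh.example.com
        # Then, from the server's shell, push results home through the tunnel
        scp -P 10101 results localhost:~/results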

    Read the article

  • How do I configure SSH on OS X?

    - by cwd
    I'm trying to SSH from one Mac running OS X 10.6 to another. It seems to work fine via a password, but I can't get it to use an RSA key instead. Where is the ssh configuration file on OS X, and what is the command to reload SSH? Update: what I'm asking is how to configure advanced options. For example, on Ubuntu there is an ssh config file at /etc/ssh/sshd_config, and if you do something like change the port or disable password authentication for a particular user (PasswordAuthentication no), you need to run /etc/init.d/ssh reload to reload the config. I didn't see that file on OS X, so I was just wondering where it is. I am aware of ~/.ssh, ~/.ssh/authorized_keys and ~/.ssh/config.
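
    A hedged pointer (from memory of 10.6, worth verifying on your machine): the daemon config lives at /etc/sshd_config rather than /etc/ssh/sshd_config, and sshd is run on demand by launchd, so "reloading" means bouncing the launchd job:

        sudo launchctl unload /System/Library/LaunchDaemons/ssh.plist
        sudo launchctl load /System/Library/LaunchDaemons/ssh.plist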

    Read the article

  • Kernel Compiling from Vanilla to several machines

    - by Linux Pwns Mac
    When compiling kernels for machines, is there a safe or correct way to create a template for, say, servers? I work with a lot of RHEL servers and want to compile them with GRSEC. However, I do not wish to always rebuild from the .config for each machine and go in and remove a bunch of unrelated modules like wireless, bluetooth, etc., which you typically do not need in servers. I want to create a template .config that can be used on any machine, but is there a safe way to do that when the hardware changes? I know that with Linux, at least in my experience, you can jump across hardware far more easily than with Windows/OSX. I assume that as long as I leave most of the main hardware and CPU modules in, this could create a .config that would work on all, or just about any, machines?
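
    A sketch of the usual kbuild workflow for this (the targets exist in kernels since roughly 2.6.32 and 3.7 respectively, so check your tree): keep one trimmed server template and let the kernel's own config machinery reconcile it per machine.

        # Trim a running machine's config down to the modules it actually loads
        make localmodconfig
        # Or reuse a template on different hardware, accepting defaults for new options
        cp /path/to/server-template.config .config
        make olddefconfig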

    Read the article

  • Setting Environment Variable for Tomcat 6 Servlet

    - by amaevis
    I'm using Ubuntu's default installation of Tomcat 6. I'm deploying a ROOT.war and trying to set an environment variable specific to it, i.e. accessible from System.getenv() in the Servlet.init(config). According to the docs (http://tomcat.apache.org/tomcat-6.0-doc/config/context.html), I can specify this in a Context element in conf/Catalina/localhost/ROOT.xml. I've created that with these contents:

        <Context>
          <Environment name="FOO" value="bar" type="java.lang.String" override="false"/>
        </Context>

    And I've deployed the webapp as usual, i.e. to webapps/ROOT.war. System.getenv("FOO") in the Servlet.init(config) still returns null. What am I missing?
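
    One likely catch (standard Tomcat behavior, not specific to this setup): <Environment> entries are JNDI env-entries, not process environment variables, so System.getenv() will never see them. A minimal sketch of reading the value the JNDI way instead:

        // assumes: import javax.naming.Context; import javax.naming.InitialContext;
        Context initCtx = new InitialContext();
        Context envCtx = (Context) initCtx.lookup("java:comp/env");
        String foo = (String) envCtx.lookup("FOO");   // "bar"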

    Read the article

  • How to get Apache to follow symlink instead of downloading it?

    - by user792445
    I am just using the standard apache config file, which mentions that it follows symlinks, but when I hit the url http://localhost/test it downloads the symlink file instead of following it. What config do I need to change to get apache to follow the symlink instead of downloading it? This is an ls on the directory:

        $ ls -al
        total 10
        drwx------+ 1 SYSTEM SYSTEM  0 Oct 20 10:55 .
        drwx------+ 1 SYSTEM SYSTEM  0 Aug 26 12:27 ..
        -rw-r--r--+ 1 me     None   47 Oct 20 10:14 index.html
        lrwxrwxrwx  1 me     None   29 Oct 19 17:10 test -> /home/me/projects/test

    This is in my apache config file:

        <Directory "D:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
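
    A hedged diagnosis: the ls output looks like a Cygwin shell (note the SYSTEM owner), while the install path suggests a native Windows Apache build, and native Windows programs cannot follow Cygwin symlinks; they see only the small link file, which Apache then serves as a download. A quick check from cmd.exe (Cygwin itself resolves its own links transparently, so check outside it):

        type "D:\Program Files (x86)\Apache Software Foundation\Apache2.2\htdocs\test"
        rem a Cygwin symlink shows as a tiny file beginning with '!<symlink>'
        rem if so, use an Apache Alias (or a real NTFS mklink) to the target instead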

    Read the article

  • Code-First Database Creation During TFS 2010 CI Build

    - by jedimindtrickster
    I would like to automate code-first database generation during the automated CI build of a web project in Team Foundation Server 2010. When run locally, the tests create a code-first database specified by the connection string in the app.config of the tests project. How do I configure the TFS Build Configuration to mimic this behaviour on the TFS build server? Edit: The problem, it turns out, was that the TFS build server was successfully running the tests, but using the default connection string in the app.config, which pointed at the local SQL Server rather than where I expected. The solution was to use SlowCheetah on the TFS server to transform the App.config file with the QA transform, as per this blog article.
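
    A sketch of what such a transform looks like (file name and server values here are illustrative, not from the original post):

        <!-- App.QA.config: swap the connection string during the QA build -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="DefaultConnection"
                 connectionString="Server=QA-SQL;Database=TestDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>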

    Read the article

  • Managing multiple ssh keys

    - by Mathijs Kwik
    I have a lot of ssh keys; they are all passphrase protected and managed by ssh-agent. As a result, I am now getting "Too many authentication failures" on some connections. As has been explained on this site before, this is because ssh will try every key the agent throws at it. The proposed solution is to use IdentitiesOnly in the config, together with an IdentityFile. While this indeed stops ssh from offering the wrong keys, it seems to disable the agent completely, so now I have to type the passphrase on every connection. I could not find clear info about this. Does IdentitiesOnly disable getting keys from ssh-agent entirely, or should it just block the keys that aren't mentioned? Thanks, Mathijs

        # here's my config
        ~% cat .ssh/config
        Host bluemote
            HostName some.host.com
            IdentitiesOnly yes
            IdentityFile /home/mathijs/.ssh/keys/bluebook_ecdsa

        # I had the key loaded into the agent, shown here
        ~% ssh-add -L
        ecdsa-sha2-nistp521 SOME_LONG_BASE64_NUMBER== /home/mathijs/.ssh/keys/bluebook_ecdsa

        # but it doesn't seem to get used
        ~% ssh bluemote
        Enter passphrase for key '/home/mathijs/.ssh/keys/bluebook_ecdsa':
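
    For what it's worth, IdentitiesOnly is documented to still use the agent for identities named by IdentityFile. A common catch (a guess for this case): some OpenSSH versions need the .pub file next to the private key in order to match the file against the agent's keys, and will prompt for the passphrase when it's missing.

        # Regenerate the public half so ssh can match the file with the agent
        ssh-keygen -y -f ~/.ssh/keys/bluebook_ecdsa > ~/.ssh/keys/bluebook_ecdsa.pub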

    Read the article

  • EF4 POCO WCF Serialization problems (no lazy loading, proxy/no proxy, circular references, etc)

    - by kdawg
    OK, I want to make sure I cover my situation and everything I've tried thoroughly. I'm pretty sure what I need/want can be done, but I haven't quite found the perfect combination for success.

    I'm utilizing Entity Framework 4 RTM and its POCO support. I'm looking to query for an entity (Config) that contains a many-to-many relationship with another entity (App). I turn off lazy loading and disable proxy creation for the context and explicitly load the navigation property (either through .Include() or .LoadProperty()). However, when the navigation property is loaded (that is, Apps is loaded for a given Config), the App objects that were loaded already contain references to the Configs that have been brought to memory. This creates a circular reference.

    Now I know the DataContractSerializer that WCF uses can handle circular references, by setting the preserveObjectReferences parameter to true. I've tried this with a couple of different attribute implementations I've found online. It is needed to prevent the "the object graph contains circular references and cannot be serialized" error. However, it doesn't prevent the serialization of the entire graph, back and forth between Config and App. If I invoke it via WcfTestClient.exe, I get a stackoverflow (ha!) exception from the client and I'm hosed. I get different results from different invocation environments: a C# unit test with a local reference to the web service appears to work OK (though I can still drill back and forth between Configs and Apps endlessly), but calling it from a ColdFusion environment only returns the first Config in the list and errors out on the others. My main goal is to have a serialized representation of the graph I explicitly load from EF (i.e. a list of Configs, each with their Apps, but no App-back-to-Config navigation).

    NOTE: I've also tried using the ProxyDataContractResolver technique and keeping proxy creation enabled on my context. This blows up complaining about unknown types encountered. I read that the ProxyDataContractResolver didn't fully work in Beta 2, but should work in RTM.

    For some reference, here is roughly how I'm querying the data in the service:

        var repo = BootStrapper.AppCtx["AppMeta.ConfigRepository"] as IRepository<Config>;
        repo.DisableLazyLoading();
        repo.DisableProxyCreation();
        //var temp2 = repo.Include(cfg => cfg.Apps).Where(cfg => cfg.Environment.Equals(environment)).ToArray();
        var temp2 = repo.FindAll(cfg => cfg.Environment.Equals(environment)).ToArray();
        foreach (var cfg in temp2)
        {
            repo.LoadProperty(cfg, c => c.Apps);
        }
        return temp2;

    I think the crux of my problem is that when loading up navigation properties for POCO objects, Entity Framework 4 pre-populates navigation properties for objects already in memory. This in turn hoses up the WCF serialization, despite every effort made to properly handle circular references. I know it's a lot of information, but it's really standing in my way of going forward with EF4/POCO in our system. I've found several articles and blogs touching upon these subjects, but for the life of me, I cannot resolve this issue. Feel free to simply ask questions and help me brainstorm this situation.

    PS: For the sake of being thorough, I am injecting the WCF services using the HEAD build of Spring.NET for the fix to Spring.ServiceModel.Activation.ServiceHostFactory. However, I don't think this is the source of the problem.
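
    One technique worth naming that targets exactly this Config <-> App cycle (standard DataContractSerializer behavior, though the post doesn't say whether it was tried at the type level): marking the contracts as reference-preserving, so the serializer emits reference ids instead of re-walking the graph. A sketch, with illustrative members:

        [DataContract(IsReference = true)]
        public class Config
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public ICollection<App> Apps { get; set; }
        }

        [DataContract(IsReference = true)]
        public class App
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public ICollection<Config> Configs { get; set; }
        }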

    Read the article

  • Why isn't the Spring AOP XML schema properly loaded when Tomcat loads & reads beans.xml

    - by chrisbunney
    I'm trying to use Spring's schema-based AOP support in Eclipse and am getting errors when trying to load the configuration in Tomcat. There are no errors in Eclipse, and auto-complete works correctly for the aop namespace; however, when I load the project in Tomcat I get this error:

        09:17:59,515 WARN XmlBeanDefinitionReader:47 - Ignored XML validation warning
        org.xml.sax.SAXParseException: schema_reference.4: Failed to read schema document
        'http://www.springframework.org/schema/aop/spring-aop-2.5.xsd', because
        1) could not find the document; 2) the document could not be read;
        3) the root element of the document is not <xsd:schema>.

    Followed by:

        SEVERE: StandardWrapper.Throwable
        org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 39 in XML
        document from /WEB-INF/beans.xml is invalid; nested exception is
        org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict,
        but no declaration can be found for element 'aop:config'.
        Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard
        is strict, but no declaration can be found for element 'aop:config'.

    Based on this, it seems the schema is not being read when Tomcat parses the beans.xml file, leading to the <aop:config> element not being recognised. My beans.xml file is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:jaxws="http://cxf.apache.org/jaxws"
               xmlns:aop="http://www.springframework.org/schema/aop"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd
                                   http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">

            <!--import resource="classpath:META-INF/cxf/cxf.xml" /-->
            <!--import resource="classpath:META-INF/cxf/cxf-extension-soap.xml" /-->
            <!--import resource="classpath:META-INF/cxf/cxf-servlet.xml" /-->

            <!-- NOTE: endpointName attribute maps to wsdl:port@name & should be the same as the
                 portName attribute in the @WebService annotation on the IWebServiceImpl class -->
            <!-- NOTE: serviceName attribute maps to wsdl:service@name & should be the same as the
                 serviceName attribute in the @WebService annotation on the ASDIWebServiceImpl class -->
            <!-- NOTE: address attribute is the actual URL of the web service (relative to web app location) -->
            <jaxws:endpoint xmlns:tns="http://iwebservices.ourdomain/"
                            id="iwebservices"
                            implementor="ourdomain.iwebservices.IWebServiceImpl"
                            endpointName="tns:IWebServiceImplPort"
                            serviceName="tns:IWebService"
                            address="/I"
                            wsdlLocation="wsdl/I.wsdl">
                <!-- To have CXF auto-generate WSDL on the fly, comment out the above wsdl attribute -->
                <jaxws:features>
                    <bean class="org.apache.cxf.feature.LoggingFeature" />
                </jaxws:features>
            </jaxws:endpoint>

            <aop:config>
                <aop:aspect id="myAspect" ref="aBean">
                </aop:aspect>
            </aop:config>

        </beans>

    The <aop:config> element in my beans.xml file is copy-pasted from the Spring website to try and remove any possible source of error. Can anyone shed any light on why this error is occurring and what I can do to fix it?
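
    A hedged first check (the usual cause of schema_reference.4 with Spring 2.5): Spring resolves these XSDs offline via the spring.schemas mapping bundled inside the spring-aop jar, so if that jar is missing from the webapp classpath, Tomcat falls back to fetching the URL and validation fails.

        # Is the AOP module actually deployed with the app?
        ls WEB-INF/lib | grep -i 'spring.*aop'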

    Read the article

  • uploading zip files in codeigniter won't work

    - by krike
    I have created a helper that requires some parameters and should upload a file. The function works for images, but not for zip files. I searched on Google and even added a MY_upload.php (http://codeigniter.com/bug_tracker/bug/6780/), but I still have the problem, so I used print_r to display the array of the uploaded files. The image array is fine, but the zip array is empty:

        Array ( [file_name] => [file_type] => [file_path] => [full_path] => [raw_name] => [orig_name] => [file_ext] => [file_size] => [is_image] => [image_width] => [image_height] => [image_type] => [image_size_str] => )

        Array ( [file_name] => 2385b959279b5e3cd451fee54273512c.png [file_type] => image/png [file_path] => I:/wamp/www/e-commerce/sources/images/ [full_path] => I:/wamp/www/e-commerce/sources/images/2385b959279b5e3cd451fee54273512c.png [raw_name] => 2385b959279b5e3cd451fee54273512c [orig_name] => 1269770869_Art_Artdesigner.lv_.png [file_ext] => .png [file_size] => 15.43 [is_image] => 1 [image_width] => 113 [image_height] => 128 [image_type] => png [image_size_str] => width="113" height="128" )

    This is the helper function:

        function multiple_upload($name = 'userfile', $upload_dir = 'sources/images/', $allowed_types = 'gif|jpg|jpeg|jpe|png', $size)
        {
            $CI =& get_instance();
            $config['upload_path'] = realpath($upload_dir);
            $config['allowed_types'] = $allowed_types;
            $config['max_size'] = $size;
            $config['overwrite'] = FALSE;
            $config['encrypt_name'] = TRUE;

            $ffiles = $CI->upload->data();
            echo "<pre>";
            print_r($ffiles);
            echo "</pre>";

            $CI->upload->initialize($config);
            $errors = FALSE;

            // I believe this is causing the problem, but I'm new to CodeIgniter,
            // so no idea where to look for errors
            if (!$CI->upload->do_upload($name)):
                $errors = TRUE;
            else:
                // Build a file array from all uploaded files
                $files = $CI->upload->data();
            endif;

            // There were errors, we have to delete the uploaded files
            if ($errors):
                @unlink($files['full_path']);
                return false;
            else:
                return $files;
            endif;
        } // end of multiple_upload()

    And this is the code in my controller:

        if (!$s_thumb = multiple_upload('small_thumb', 'sources/images/', 'gif|jpg|jpeg|jpe|png', 1024)): // http://www.matisse.net/bitcalc/
            $data['feedback'] = '<div class="error">Could not upload the small thumbnail!</div>';
            $error = TRUE;
        endif;

        if (!$main_file = multiple_upload('main_file', 'sources/items/', 'zip', 307200)):
            $data['feedback'] = '<div class="error">Could not upload the main file!</div>';
            $error = TRUE;
        endif;
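
    Two hedged observations: the helper calls $CI->upload->data() before initialize() and do_upload(), so that print_r will always show a stale or empty array; and the classic cause of zip uploads failing in CodeIgniter is that the mime type browsers send for .zip isn't listed in config/mimes.php (it varies by client). A sketch of the usual mimes fix:

        // application/config/mimes.php: accept the common zip mime variants
        'zip' => array('application/x-zip', 'application/zip',
                       'application/x-zip-compressed', 'application/octet-stream'),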

    Read the article

  • WCF Endpoints & Binding Configuration Issues

    - by CodeAbundance
    I am running into a very strange issue here, folks. For simplicity I created a project for the sole purpose of testing the issue outside the framework of a larger application, and still encountered what is either a bug in WCF within Visual Studio 2010 or something related to my WCF newbie skill set :) Here is the issue: I have a WCF endpoint running inside of an MVC3 project, with a method called "SimpleMethod". The method runs inside of a .svc file on the root of the application and it returns a bool. Using the "WCF Service Configuration Editor" I have added the endpoint to my Web.Config along with a binding called "LargeImageBinding". Here is the service:

        [OperationContract]
        public bool SimpleMethod()
        {
            return true;
        }

    And the Web.Config generated by the config tool:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="LargeImageBinding" closeTimeout="00:10:00" />
            </wsHttpBinding>
          </bindings>
          <services>
            <service name="WCFEndpoints.ServiceTestOne">
              <endpoint address="/ServiceTestOne.svc" binding="wsHttpBinding"
                        bindingConfiguration="LargeImageBinding"
                        contract="WCFEndpoints.IServiceTestOne" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    The service renders fine and you can see the endpoint when you navigate to http://localhost:57364/ServiceTestOne.svc. Now the issue occurs when I create a separate project to consume the service. I add a service reference to a running instance of the above project and point it to http://localhost:57364/ServiceTestOne.svc. Here is the weird part: the service reference generates just fine, but in the Web.Config the endpoint that is generated looks like this:

        <client>
          <endpoint address="http://localhost:57364/ServiceTestOne.svc/ServiceTestOne.svc"
                    binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IServiceTestOne"
                    contract="ServiceTestOne.IServiceTestOne" name="WSHttpBinding_IServiceTestOne">

    As you can see, it lists the "ServiceTestOne.svc" portion of the address twice! When I make a call to the service I get the following error: The remote server returned an error: (404) Not Found. I tried removing the extra "/ServiceTestOne.svc" at the end of the endpoint address in the above config, and I get the same exact error. Now what DOES work is if I go back to the WCF application and remove the custom endpoint and binding references in the Web.Config (everything in the "services" and "bindings" tags), then go back to the consumer application, update the reference to the service, and make the call to SimpleMethod()... BOOM, works like a charm, and I get back a bool set to true. The thing is, I need custom binding configurations in order to allow access to the service outside of the defaults, and from what I can tell, any attempt to create custom bindings makes the endpoints seem to run fine but fail when an actual method call is made. Can anyone see any flaw in how I am putting this together? Thank you for your time; I have been running in circles with this for about a week!
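
    A hedged fix that matches the symptom: for IIS-hosted .svc services, the endpoint address is interpreted relative to the .svc file's base address, so specifying "/ServiceTestOne.svc" again is what doubles the path in the generated client config. A sketch:

        <!-- Leave the address empty; the .svc location already supplies it -->
        <endpoint address="" binding="wsHttpBinding"
                  bindingConfiguration="LargeImageBinding"
                  contract="WCFEndpoints.IServiceTestOne" />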

    Read the article

  • Why can i not upload images to my folder anymore?

    - by Hannah_B
    This was something I had working a few weeks back, but after I made some changes to my view file, images are no longer being saved into my assets/uploads folder. I keep getting back the error "You did not select a file to upload", despite having made sure the path is definitely correct. What am I doing wrong here? Here is my controller:

        <?php
        class HomeProfile extends CI_Controller {

            function HomeProfile()
            {
                parent::__construct();
                $this->load->model("profiles");
                $this->load->model("profileimages");
                $this->load->helper(array('form', 'url'));
            }

            function upload()
            {
                $config['path'] = './web-project-jb/assets/uploads/';
                $config['allowed_types'] = 'gif|jpg|jpeg|png';
                $config['max_size'] = '10000';
                $config['max_width'] = '1024';
                $config['max_height'] = '768';
                $this->load->library('upload', $config);

                $img = $this->session->userdata('img');
                $username = $this->session->userdata('username');
                $this->profileimages->putProfileImage($username, $this->input->post("profileimage"));

                // fail: show upload form
                if (!$this->upload->do_upload()) {
                    $error = array('error' => $this->upload->display_errors());
                    $username = $this->session->userdata('username');
                    $viewData['username'] = $username;
                    $viewData['profileText'] = $this->profiles->getProfileText($username);
                    $this->load->view('shared/header');
                    $this->load->view('homeprofile/homeprofiletitle', $viewData);
                    $this->load->view('shared/nav');
                    $this->load->view('homeprofile/upload_fail', $error);
                    $this->load->view('homeprofile/homeprofileview', $viewData, array('error' => ' '));
                    $this->load->view('shared/footer');
                    //redirect('homeprofile/index');
                } else {
                    // successful upload, so save to database
                    $file_data = $this->upload->data();
                    $data['img'] = base_url().'./web-project-jb/assets/uploads/'.$file_data['file_name'];
                    // you may want to delete the image from the server after saving it to db
                    // check to make sure $data['full_path'] is a valid path
                    // get upload_sucess.php from link above
                    //$image = chunk_split(base64_encode(file_get_contents($data['file_name'])));
                    $this->username = $this->session->userdata('username');
                    $data['profileimages'] = $this->profileimages->getProfileImage($username);
                    $viewData['username'] = $username;
                    $viewData['profileText'] = $this->profiles->getProfileText($username);
                    $username = $this->session->userdata('username');
                }
            }

            function index()
            {
                $username = $this->session->userdata('username');
                $data['profileimages'] = $this->profileimages->getProfileImage($username);
                $viewData['username'] = $username;
                $viewData['profileText'] = $this->profiles->getProfileText($username);
                $this->load->view('shared/header');
                $this->load->view('homeprofile/homeprofiletitle', $viewData);
                $this->load->view('shared/nav');
                //$this->load->view('homeprofile/upload_form', $data);
                $this->load->view('homeprofile/homeprofileview', $data, $viewData, array('error' => ' '));
                $this->load->view('shared/footer');
            }
        }

    Here is my view:

        <div id="maincontent">
            <div id="primary">
                <?//=$error;?>
                <?//=$img;?>
                <h3><?="Profile Image"?></h3>
                <img src="<?php echo'$img'?>" width='300' height='300'/>
                <?=form_open_multipart('homeprofile/upload');?>
                <input type="file" name="img" value=""/>
                <?=form_submit('submit', 'upload')?>
                <?=form_close();?>
                <?php if (isset($error)) echo $error;?>
            </div>
        </div>

    Your help is much appreciated.
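
    Two hedged culprits, judging from the controller alone: the CodeIgniter upload library expects the key 'upload_path' (the code sets 'path', so the library never gets a destination), and do_upload() defaults to a form field named 'userfile' while the view's field is named 'img'. A sketch of the corrected calls:

        $config['upload_path'] = './web-project-jb/assets/uploads/';
        $this->load->library('upload', $config);
        if (!$this->upload->do_upload('img')) {
            // handle the error as before
        }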

    Read the article

  • Managing multiple reverse proxies for one virtual host in apache2

    - by Chris Betti
    I have many reverse proxies defined for my js-host VirtualHost, like so:

        # /etc/apache2/sites-available/js-host
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPreserveHost On
            ProxyPass        /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
            ProxyPass        /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
            [...]
            ProxyPass        /serviceZ http://192.168.100.75/
            ProxyPassReverse /serviceZ http://192.168.100.75/
        </VirtualHost>

    The js-host site is acting as shared config for all of the reverse proxies. This works, but managing the proxies involves edits to the shared config and an apache2 restart. Is there a way to manage individual proxies with a2ensite and a2dissite (or a better alternative)? My main objective is to isolate each proxy config as a separate file and manage it via commands. First attempt: I tried making separate files with their own VirtualHost entries for each service:

        # /etc/apache2/sites-available/js-host-serviceA
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass        /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
        </VirtualHost>

        # /etc/apache2/sites-available/js-host-serviceB
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass        /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
        </VirtualHost>

    The problem with this is that apache2 loads the first VirtualHost for a particular ServerName and ignores the rest. They aren't "merged" somehow, as I'd hoped.
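
    One hedged way to get a2ensite-style management while keeping a single VirtualHost: leave the vhost as a shared shell and Include per-service fragments from a directory managed with plain file moves (layout illustrative; note that on Apache 2.2 the glob must match at least one file, while 2.4 offers IncludeOptional):

        # /etc/apache2/sites-available/js-host
        <VirtualHost *:80>
            ServerName js-host.example.com
            ProxyPreserveHost On
            Include /etc/apache2/js-host-proxies/*.conf
        </VirtualHost>

        # /etc/apache2/js-host-proxies/serviceA.conf
        ProxyPass        /serviceA http://192.168.100.50/
        ProxyPassReverse /serviceA http://192.168.100.50/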

    Read the article

  • So I ran sudo apt-get install kubuntu-full on my Ubuntu... and saw all the apps... now I want it off. Help?

    - by Alex Poulos
    I'm running 12.04. I installed Kubuntu to try it out and realized, with all the bloatware applications, that I didn't want it anymore. I was able to uninstall kubuntu-desktop, but there are still packages left over... How can I make sure I get rid of EVERYTHING Kubuntu installed, even the KDE leftovers? Here's some of what's left: when I typed sudo apt-get autoremove kde and pressed "tab", it displayed this:

        kdeaccessibility        kdepim-runtime          kdeadmin                kde-runtime
        kde-baseapps            kde-runtime-data        kde-baseapps-bin        kdesdk-dolphin-plugins
        kde-baseapps-data       kde-style-oxygen        kde-config-cron         kdesudo
        kde-config-gtk          kdeutils                kde-config-touchpad     kde-wallpapers
        kdegames-card-data      kde-wallpapers-default  kdegames-card-data-extra kde-window-manager
        kde-icons-mono          kde-window-manager-common kdelibs5-data         kde-workspace
        kdelibs5-plugins        kde-workspace-bin       kdelibs-bin             kde-workspace-data
        kdemultimedia-kio-plugins kde-workspace-data-extras kdenetwork          kde-workspace-kgreet-plugins
        kdenetwork-filesharing  kde-zeroconf            kdepasswd               kdf
        kdepim-kresources       kdm                     kdepimlibs-kio-plugins  kdoctools

    Those are all installed by Kubuntu... correct? I just want to go back to my Ubuntu 12.04 LTS with Gnome2-classic and without all the Kubuntu extras. I started off by just removing unnecessary apps that came with kubuntu-full; then I realized I didn't want the whole thing at all and uninstalled kubuntu-full, but it still says I have these as well:

        alex@griever:~$ sudo apt-get --purge remove kubuntu-
        kubuntu-debug-installer       kubuntu-netbook-default-settings
        kubuntu-default-settings      kubuntu-notification-helper
        kubuntu-firefox-installer     kubuntu-web-shortcuts
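
    A hedged cleanup pass (apt pattern matching can over-reach, so review the proposed removal list carefully before confirming):

        # Purge the KDE/Kubuntu leftovers shown above, then sweep orphaned deps
        sudo apt-get purge 'kde*' 'kubuntu*' 'plasma*' 'kwin*'
        sudo apt-get autoremove --purge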

    Read the article

  • What is a “pretty and proper OO” way for handling sessions and authentication?

    - by asdfqwer
    Is coupling these two concepts a bad approach? As of right now I'm handling all session logic, including whether or not a user wants to log out, in my config.inc file. As I was writing my Auth class, I started wondering whether my Auth class should be taking care of most of the logic in my config.inc. Regardless, I'm sure there's a more elegant way of handling this... Here is what I have in my config.inc (a large chunk of this code is based on a reply I found on SO, except I can't find the source ._.):

        ini_set('session.name', 'SID');

        # session management
        session_set_cookie_params(24*60*60); // set SID cookie lifetime
        session_start();

        if (isset($_SESSION['LOGOUT'])) {
            session_destroy();                        // destroy session data
            $_SESSION = array();                      // destroy session data sanity check
            setcookie('SID', '', time() - 24*60*60);  // destroy session cookie data
            #header('Location: '.DOCROOT);
        } elseif (isset($_SESSION['SID_AUTH'])) {     // verify user has authenticated
            if (!isset($_SESSION['SID_CREATED'])) {
                $_SESSION['SID_CREATED'] = time();
            } elseif (time() - $_SESSION['SID_CREATED'] > 6*60*60) {
                // session started more than 6 hours ago
                session_regenerate_id();              // reset SID value
                $_SESSION['SID_CREATED'] = time();    // update creation time
            }

            if (isset($_SESSION['SID_MODIFIED']) && (time() - $_SESSION['SID_MODIFIED'] > 12*60*60)) {
                // last request was more than 12 hours ago
                session_destroy();                    // destroy session data
                $_SESSION = array();                  // destroy session data sanity check
                setcookie('SID', '', time() - 24*60*60); // destroy session cookie data
            }

            $_SESSION['SID_MODIFIED'] = time();       // update last activity time stamp
        }
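
    One common shape for this (a minimal sketch; class and method names are illustrative): keep the lifecycle rules in a small session class that the Auth class composes, so config.inc only bootstraps.

        class SessionManager {
            public function start() {
                ini_set('session.name', 'SID');
                session_set_cookie_params(24*60*60);
                session_start();
            }
            public function destroy() {
                session_destroy();
                $_SESSION = array();
                setcookie('SID', '', time() - 24*60*60);
            }
        }

        // config.inc then reduces to:
        $session = new SessionManager();
        $session->start();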

    Read the article

  • Set-Cookie Headers getting stripped in ASP.NET HttpHandlers

    - by Rick Strahl
    Yikes, I ran into a real bummer of an edge case yesterday in one of my older low-level handler implementations (for West Wind Web Connection in this case). Basically this handler is a connector for a backend Web framework that creates self-contained HTTP output. An ASP.NET handler captures the full output, then shoves the result down the ASP.NET Response object pipeline, writing the content into the Response.OutputStream and separately sending the HTTP headers in the Response.Headers collection. The headers turned out to be the problem, specifically Http Cookies, which for some reason ended up getting stripped out in some scenarios. My handler works like this: the HTTP response from the backend app returns a full set of HTTP headers plus the content. The ASP.NET handler reads the headers one at a time and dumps them out via Response.AppendHeader(). But I found that in some situations Set-Cookie headers sent along were simply stripped inside of the Http Handler. After a bunch of back and forth with some folks from Microsoft (thanks Damien and Levi!) I managed to pin this down to a very narrow edge scenario. It's easiest to demonstrate the problem with a simple example HttpHandler implementation. The following simulates the very much simplified output generation process that fails in my handler. Specifically, I have a couple of headers, including a Set-Cookie header, and some output that gets written into the Response object.

        using System.Web;

        namespace wwThreads
        {
            public class Handler : IHttpHandler
            {
                /* NOTE:
                 *
                 * Run as a web.config set handler (see entry below)
                 *
                 * Best way is to look at the HTTP Headers in Fiddler
                 * or Chrome/FireBug/IE tools and look for the
                 * WWHTREADSID cookie in the outgoing Response headers
                 * ( If the cookie is not there you see the problem! )
                 */
                public void ProcessRequest(HttpContext context)
                {
                    HttpRequest request = context.Request;
                    HttpResponse response = context.Response;

                    // If ClearHeaders is used, the Set-Cookie header gets removed!
                    // If commented, the header is sent...
                    response.ClearHeaders();
                    response.ClearContent();

                    // Demonstrate that other headers make it
                    response.AppendHeader("RequestId", "asdasdasd");

                    // This cookie gets removed when ClearHeaders above is called.
                    // When ClearHeaders is omitted above, the cookie renders.
                    response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

                    // *** This always works, even when the explicit
                    //     Set-Cookie above fails and ClearHeaders is called
                    //response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

                    response.Write(@"Output was created.<hr/>
                                     Check output with Fiddler or HTTP Proxy to see whether cookie was sent.");
                }

                public bool IsReusable
                {
                    get { return false; }
                }
            }
        }

    In order to see the problem behavior, this code has to be inside of an HttpHandler, and specifically in a handler defined in web.config with:

        <add name=".ck_handler"
             path="handler.ck"
             verb="*"
             type="wwThreads.Handler"
             preCondition="integratedMode" />

    Note: oddly enough, this problem manifests only when configured through web.config, not in an ASHX handler, nor if you paste that same code into an ASPX page or MVC controller. What's the problem exactly? The code above simulates the more complex code in my live handler, which picks up the HTTP response from the backend application, peels out the headers, and sends them one at a time via Response.AppendHeader. One of those headers can be one or more Set-Cookie headers. I found that the Set-Cookie headers were not making it into the Response headers output.
    Here's the Chrome Http Inspector trace: notice, no Set-Cookie header in the Response headers! Now, running the very same request after removing the call to Response.ClearHeaders(), the cookie header shows up just fine. As you might expect, it took a while to track this down. At first I thought my backend was not sending the headers, but after closer checks I found that the headers were indeed set in the backend HTTP response, and they were indeed getting set via Response.AppendHeader() in the handler code. Yet, no cookie in the output. In the simulated example the problem is this line:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    which in my live code is more dynamic (i.e. AppendHeader(token[0], token[1])) as it parses through the headers. Bizzaro Land: Response.ClearHeaders() causes the cookie to get stripped. Now, here is where it really gets bizarre. The problem occurs only if:

        - Response.ClearHeaders() was called before headers are added
        - the handler is an HttpHandler declared in web.config

    Clearly this is an edge of an edge case but, of course, knowing my relationship with Mr. Murphy, I ended up running smack into this problem. So in the code above, if you remove the call to ClearHeaders(), the cookie gets set! Add it back in and the cookie is not there. If I run the above code in an ASHX handler, it works. If I paste the same code (with a Response.End()) into an ASPX page or MVC controller, it all works. Only in the HttpHandler configured through web.config does it fail! Cue the Twilight Zone music.
    Workarounds: As is often the case, the fix for this, once you know the problem, is not too difficult; the difficulty lies in tracking inconsistencies like this down. Luckily there are a few simple workarounds for the cookie issue.

    Don't use AppendHeader for Cookies. The easiest and most obvious solution is simply not to use Response.AppendHeader() to set cookies. Duh! Under normal circumstances in application-level code there's rarely a reason to write out a cookie like this:

        response.AppendHeader("Set-Cookie", "WWTHREADSID=ThisIsThEValue; path=/");

    but rather create the cookie using the Response.Cookies collection:

        response.Cookies.Add(new HttpCookie("WWTHREADSID", "ThisIsTheValue"));

    Unfortunately, in my case, where I dynamically read headers from the original output and then dynamically write header key/value pairs back into the Response.Headers collection, I don't actually look at each header specifically, so to my code the cookie is just another header. My first thought was to simply trap for the Set-Cookie header, parse out the cookie, and create a Cookie object instead. But given that cookies can have a lot of different options this is not exactly trivial; plus I don't really want to fuck around with cookie values, which can be notoriously brittle.

    Don't use Response.ClearHeaders(). The real mystery in all this is why calling Response.ClearHeaders() prevents a cookie value later written with Response.AppendHeader() from rendering. I fired up Reflector and took a quick look at System.Web and HttpResponse.ClearHeaders. There's all sorts of resetting going on, but nothing that seems to indicate that headers should be removed later on in the request. The code in ClearHeaders() does access the HttpWorkerRequest, which is the low-level interface directly into IIS, so I suspect it's actually IIS that's stripping the headers and not ASP.NET, but it's hard to know. Somebody from Microsoft and the IIS team would have to comment on that.

    In my application it's probably safe to simply skip ClearHeaders() in my handler. The ClearHeaders/ClearContent was mainly for safety, and after reviewing my code there really should never be a reason that headers would be set prior to this method firing. However, if for whatever reason headers do need to be cleared, it's easy enough to clear them out manually:

        private void RemoveHeaders(HttpResponse response)
        {
            List<string> headers = new List<string>();
            foreach (string header in response.Headers)
            {
                headers.Add(header);
            }

            foreach (string header in headers)
            {
                response.Headers.Remove(header);
            }

            response.Cookies.Clear();
        }

    Now I can replace the call to Response.ClearHeaders() and I don't get the funky side effects from Response.ClearHeaders(). Summary: I realize this is a total edge case, as it occurs only in HttpHandlers that are manually configured. It looks like you'll never run into this in any of the higher-level ASP.NET frameworks, or even in ASHX handlers - only in web.config-defined handlers - which is really, really odd. After all, those frameworks use the same underlying ASP.NET architecture. Hopefully somebody from Microsoft has an idea what crazy dependency was triggered here to make this fail. IAC, there are workarounds should you run into this, although I bet when you do, it'll likely take a bit of time to find the problem, or even this post in a search, because it's not easy to correlate the problem to the solution. It's quite possible that more than cookies are affected by this behavior. Searching for a solution, I read a few other accounts where headers like Referer were mysteriously disappearing, and it's possible that something similar is happening in those cases. Again, extreme edge case, but I'm writing this up here as documentation for myself and possibly some others that might have run into this.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, IIS7.

    Read the article

  • New free video lessons

    - by renatohaddad
    Hello everyone! For those who have studied my training courses, or are curious to learn about a particular topic, I have just posted four new free lessons (Aulas Free) on my site, covering the following topics:

    Changing Web.Config at run time - see how to change key values in web.config directly from code, allowing the application administrator to change any item in web.config without having to download and upload the file to make the changes.

    Using DbSet in Entity Framework 4.1 - see how to install EF 4.1, create two related classes, and define the context with DbSet so that, when the program runs, EF 4.1 creates the database from the classes.

    Using complex types in Entity Framework 4 - learn what a complex type is and how to apply it in Entity Framework 4, creating complex properties to optimize your class structure, and see how the complex type is generated in the SQL Server database.

    Many-to-many relationships in Entity Framework 4 - learn how Entity Framework 4 handles a many-to-many relationship, from defining the ORM in the EDMX and the association type, to inserting and reading data from the tables generated in the SQL Server database.

    Let the "indiano" work with the compiler to help us; it certainly won't get these tasks wrong. Happy studying, and feel free to give me feedback. Cheers! Renato Haddad

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces. (Actually a file sourced into it.) Which start on condition do I need to declare to let my task run before any networking jobs? I've tried start on starting networking, but that's apparently too late. When I log in after booting I can see that the changes were written, but obviously they are not used: the new config states a static IP, but the boot process waits for a non-existing DHCP server (old config) to time out. I've also tried start on starting network-interface INTERFACE=eth0, which didn't work either. IIRC there was an error in the log that the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address to access them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file. I can't modify /etc directly, because that disk is shared (copy-on-write/diff) among the clones and guestfish apparently doesn't support that type of image. I could also let them use DHCP and setup a server that assigns IP by MAC, but I'm afraid of the complexity. I could also add just another virtual disk for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Used software: Ubuntu Server 12.04, VirtualBox. The configuration modifier is a self written ruby script.
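
    A heavily hedged sketch (untested; upstart ordering on 12.04 is fiddly, and interfaces are actually brought up by the udev-driven network-interface jobs rather than by the networking job itself, which may explain both symptoms, including the early write failure if the root filesystem was still read-only):

        # /etc/init/rewrite-interfaces.conf  (job name and script are illustrative)
        description "apply cloned-VM network settings before interfaces come up"
        start on (local-filesystems and starting network-interface)
        task
        exec /usr/local/bin/apply-network-settings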

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from Windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system, and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system, and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible?). From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration stuff, and all else will get auto-configured? Or will providing a config with a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll have a screwed-up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
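
    On the last question: the X server itself can probe the hardware and write out a complete skeleton config to start from. A hedged sketch (run from a text console with X stopped):

        sudo service gdm stop
        sudo Xorg :1 -configure     # writes xorg.conf.new in root's home directory
        # then merge the nvidia-xconfig output into the resulting Device/Screen sections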

    Read the article

  • .wine-pipelight folder not present

    - by DaimyoKirby
    Following the instructions on the pipelight installation page, I installed pipelight on Ubuntu 14.04. However, upon opening Firefox, the .wine-pipelight folder isn't present in my home folder, and I get the following errors:

        [PIPELIGHT:LIN:unknown] attached to process.
        [PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG.
        [PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1.
        [PIPELIGHT:LIN:unknown] trying to load config file from '/home/alden/.config/pipelight-silverlight5.1'.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:427:checkSilverlightGraphicDriver(): error in execlp command - probably silverlightGraphicDriverCheck not found or missing execute permission.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:441:checkSilverlightGraphicDriver(): GPU driver check - Your driver is not in the whitelist, hardware acceleration disabled.
        [PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/alden/.wine-pipelight.
        [PIPELIGHT:LIN:silverlight5.1] checking plugin installation - this might take some time.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:374:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission.
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:384:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1).
        [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:108:attach(): plugin not correctly installed - aborting.

    I've reinstalled quite a few times and run through many of the common fixes offered on the pipelight Launchpad pages and here on AskUbuntu, and still it fails to run. Is there a reason why this folder isn't present, or why I'm getting these errors? Edit: Oddly enough, the .wine-pipelight folder is created with I open Nitro, although this still doesn't fix the issue.
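
    A hedged reading of that log: the execlp/execvp failures say the helper programs pipelight invokes are missing or not executable, and the wine prefix is only created once the plugin installer step succeeds, which would explain the absent ~/.wine-pipelight. Worth checking (this is the usual install location, but verify it for your package):

        ls -l /usr/share/pipelight/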

    Read the article

  • Versioning and Continuous Integration with project settings files

    - by Michael Stephenson
    I came across something which was a bit of a pain in the bottom the other week. Our scenario was that we had implemented a helper-style assembly which had some custom configuration implemented through the project settings. I'm sure most of you are familiar with this: you end up with a settings file which is viewable through the C# project file, and you can configure some basic settings. The settings are embedded in the assembly during compilation as part of a DefaultValue attribute. You have the ability to override the settings by adding information to your app.config, and if the app.config doesn't override the settings then the embedded default is used. All normal C# stuff so far... Where our pain started was when we implemented Continuous Integration and wanted to version all of this from our build. What I was finding was that the assembly was versioned fine, but the embedded default value kept the non-CI build version number. I ended up getting this to work by using a build task to change the version numbers in the following files:

        App.config
        Settings.settings
        Settings.designer.cs

    I think I probably could have got away with just the Settings.designer.cs, but wanted to keep them all consistent in case we had to look at the code on the build server for some reason. I think the reason this was painful is that the Settings.designer.cs is only updated through Visual Studio, which writes out the code to this file, including the DefaultValue attribute, when the project is saved, rather than as part of the compilation process. The compile just compiles the already existing C# file. As I said, we got it working, and it was a bit of a pain. If anyone has a better solution for this, I'd love to hear it.
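
    A sketch of the build-task approach described, using the MSBuild Community Tasks FileUpdate task (the version regex and property name are illustrative):

        <!-- requires: <Import Project="...\MSBuild.Community.Tasks.Targets" /> -->
        <FileUpdate Files="Properties\Settings.Designer.cs;App.config;Properties\Settings.settings"
                    Regex="1\.0\.0\.0"
                    ReplacementText="$(BuildNumber)" />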

    Read the article

  • Handling Deployment to Multiple Environments

    - by JayGee
    How should I handle deploying web applications to multiple servers? Constraints: I have a dev, test and prod environment. No build server is available. Developers can't deploy to prod; the people who do deploy to prod copy files from test to prod, and they don't have VS installed. Currently it's handled with web.config transforms. However, deploying to prod involves putting prod code on the test server, where it's copied over. Problem: sometimes simple mistakes are made, such as forgetting to change test back to the right environment after deployment, or the test config getting moved to prod instead of the prod config. Solution: so the question is, what is the best way to prevent such mistakes? My first thought is to let the app determine which server it's on at runtime and use the appropriate settings/connection strings/etc. However, the server names could change in the not-too-distant future, so if multiple apps hard-code them, that would mean updating all of them. The easiest way to handle that situation would be to place a DLL in the GAC that determines the environment. Are there any drawbacks or possible complications that this would cause? Or is there a better solution to the problem than this?
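
    A minimal sketch of the runtime-detection idea from the question (machine-name prefixes are illustrative; a GAC-deployed assembly could expose exactly this, so a server rename means one change rather than one per app):

        public static class EnvironmentResolver
        {
            public static string Current
            {
                get
                {
                    var name = System.Environment.MachineName;
                    if (name.StartsWith("PROD")) return "prod";
                    if (name.StartsWith("TEST")) return "test";
                    return "dev";
                }
            }
        }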

    Read the article

  • lxc containers fail to autoboot in 14.04 trusty using 'lxc.start.auto = 1'

    - by user273046
    In trusty 14.04, containers fail to autoboot despite all settings being set as 14.04 requires; they all show as STOPPED. I have correctly configured 2 LXC containers: calypso and encelado. They run perfectly if I run sudo lxc-autostart; then sudo lxc-ls --fancy results in:

        ubuntu@saturn:/etc/init$ sudo lxc-ls --fancy
        NAME      STATE    IPV4           IPV6  AUTOSTART
        calypso   RUNNING  192.168.1.161  -     YES
        encelado  RUNNING  192.168.1.162  -     YES

    The problem is getting them to run at boot. I have, in /var/lib/lxc/calypso/config:

        # Template used to create this container: /usr/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.conf(5)

        # Distribution configuration
        lxc.include = /usr/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64

        # Container specific configuration
        lxc.rootfs = /var/lib/lxc/calypso/rootfs
        lxc.utsname = calypso

        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        #lxc.network.link = lxcbr0
        lxc.network.link = br0
        lxc.network.hwaddr = 00:16:3e:64:0b:6e

        # IP address assignment
        lxc.network.ipv4 = 192.168.1.161/24
        lxc.network.ipv4.gateway = 192.168.1.1

        # Autostart
        lxc.start.auto = 1
        lxc.start.delay = 5
        lxc.start.order = 100

    and I have LXC_AUTO="false" as required inside /etc/default/lxc:

        LXC_AUTO="false"
        USE_LXC_BRIDGE="false"  # overridden in lxc-net
        [ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
        LXC_SHUTDOWN_TIMEOUT=120

    Any idea why the containers don't start at boot? At reboot they are always in the STOPPED state:

        ubuntu@saturn:~$ sudo lxc-ls --fancy
        NAME      STATE    IPV4  IPV6  AUTOSTART
        calypso   STOPPED  -     -     YES
        encelado  STOPPED  -     -     YES

    and then again they can be started manually, using sudo lxc-autostart.
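
    A couple of hedged checks for 14.04: boot-time autostart is performed by the lxc upstart job calling lxc-autostart, so its view of the containers and its boot log are the first places to look; one frequent culprit is the bridge (br0 here) not being up yet when the job fires.

        # Which containers does lxc-autostart think are eligible, and in what order?
        sudo lxc-autostart --list
        # Did the upstart job run at boot, and did it log errors?
        cat /var/log/upstart/lxc.log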

    Read the article
