Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • Kauffman Foundation Selects Stackify to Present at Startup@Kauffman Demo Day

    - by Matt Watson
    Stackify will join fellow Kansas City startups to kick off Global Entrepreneurship Week.

    On Monday, November 12, Stackify, a provider of tools that improve developers' ability to support, manage and monitor their enterprise applications, will pitch its technology at the Startup@Kauffman Demo Day in Kansas City, Mo. Hosted by the Ewing Marion Kauffman Foundation, the event will mark the start of Global Entrepreneurship Week, the world's largest celebration of innovators and job creators who launch startups.

    Stackify was selected through a competitive process for a six-minute opportunity to pitch its new technology to investors at Demo Day. In his pitch, Stackify's founder, Matt Watson, will discuss the current challenges DevOps teams face and reveal how Stackify is reinventing the way software developers provide application support.

    In October, Stackify had successful appearances at two similar startup events. At Tech Cocktail's Kansas City Mixer, the company was named "Hottest Kansas City Startup," and it won free hosting service after pitching its solution at St. Louis, Mo.'s Startup Connection.

    "With less than a month until our public launch, events like Demo Day are giving Stackify the support and positioning we need to change the development community," said Watson. "As a serial technology entrepreneur, I appreciate the Kauffman Foundation's support of startup companies like Stackify. We're thrilled to participate in Demo Day and Global Entrepreneurship Week activities."

    Scheduled to publicly launch in early December 2012, Stackify's platform gives developers insights into their production applications, servers and databases. Stackify finally provides agile developers safe and secure remote access to look at log files, config files, server health and databases. This solution removes the bottleneck from managers and system administrators who, until now, are the only team members with access. Essentially, Stackify enables development teams to spend less time fixing bugs and more time creating products.

    Currently in beta, Stackify has already been named a "Company to Watch" by Software Development Times, which called the startup "the next big thing." Developers can register for a free Stackify account on Stackify.com.

    ###

    Stackify
    Founded in 2012, Stackify is a Kansas City-based software service provider that helps development teams troubleshoot application problems. Currently in beta, Stackify will be publicly available in December 2012, when agile developers will finally be able to provide agile support. The startup has already been recognized by Tech Cocktail as "Hottest Kansas City Startup" and was named a "Company to Watch" by Software Development Times. To learn more, visit http://www.stackify.com and follow @stackify on Twitter.

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws to run on either port 8000 or port 80. I have the following configuration in yaws.conf:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    This produces the following successful launch/result:

        Eshell V5.8.5 (abort with ^G)
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Using config file /home/ubuntu/yaws.conf
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers:
        - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial
        -
        =INFO REPORT==== 16-Sep-2012::17:21:06 ===
        Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers:
        -

    When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping to check if port 80 or 8000 is open (http://code.google.com/p/paping/) and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it's on port 80, appearing like this:

        port = 8000
        listen = 0.0.0.0
        docroot = /home/ubuntu/yaws/www/test
        dir_listings = true

    and I get the following error:

        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Yaws: Failed to listen 0.0.0.0:80 : {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Can't listen to socket: {error,eacces}
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =ERROR REPORT==== 16-Sep-2012::17:24:47 ===
        Top proc died, terminate gserv
        =INFO REPORT==== 16-Sep-2012::17:24:47 ===
        application: yaws
        exited: {shutdown,{yaws_app,start,[normal,[]]}}
        type: permanent
        {"Kernel pid terminated",application_controller," {application_start_failure,yaws,>>>>>>{shutdown,>{yaws_app,start,[normal,[]]}}}"}

    I've also opened up port 80 using iptables. Running sudo iptables -L gives this output:

        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        ACCEPT tcp -- ip-192-168-2-0.ec2.internal ip-192-168-2-16.ec2.internal tcp dpt:http
        ACCEPT tcp -- 0.0.0.0 anywhere tcp dpt:http
        ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
        ACCEPT tcp -- anywhere anywhere tcp dpt:http
        ACCEPT tcp -- anywhere anywhere tcp dpt:http
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 for IP source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect, because I'm not currently running the yaws daemon. I've tested it when I've run yaws either through yaws or yaws -i. Thanks for your patience.
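
    The {error,eacces} on port 80 above is the classic symptom of binding a privileged port (below 1024) without root. A minimal workaround sketch, assuming Yaws keeps listening on 8000 and the EC2 security group already allows inbound port 80, is to redirect 80 to 8000 at the firewall; the setcap alternative is shown only as an assumption, since the path to the Erlang VM binary varies by install:

        # Redirect inbound HTTP to the unprivileged Yaws port (run as root)
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
        # Alternative (assumption: adjust the beam.smp path for your Erlang install):
        # sudo setcap 'cap_net_bind_service=+ep' /usr/lib/erlang/erts-5.8.5/bin/beam.smp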

    Read the article

  • ASP.NET Hosting :: ASP.NET File Upload Control

    - by mbridge
    The ASP.NET FileUpload control allows a user to browse and upload files to the web server. From a developer's perspective, it is as simple as dragging and dropping the FileUpload control onto the aspx page. An extra control, like a Button control or some other control, is needed to actually save the file.

        <asp:FileUpload ID="FileUpload1" runat="server" />
        <asp:Button ID="B1" runat="server" Text="Save" OnClick="B1_Click" />

    By default, the FileUpload control allows a maximum file size of 4 MB to be uploaded and the execution timeout is 110 seconds. These properties can be changed from within the web.config file's httpRuntime section. The maxRequestLength property determines the maximum file size that can be uploaded. The executionTimeout property determines the maximum time for execution.

        <httpRuntime maxRequestLength="8192" executionTimeout="220" />

    From code-behind, the MIME type, size, file name and extension of the uploaded file can be obtained. The maximum file size that can be uploaded can be obtained and modified using the System.Web.Configuration.HttpRuntimeSection class. Files can alternatively be saved using the System.Web.HttpFileCollection class. This collection can be populated from the Request.Files property and contains one HttpPostedFile instance per uploaded file.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.IO;
        using System.Configuration;
        using System.Web.Configuration;

        namespace WebApplication1
        {
            public partial class WebControls : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                }

                //Using the FileUpload control to upload and save files
                protected void B1_Click(object sender, EventArgs e)
                {
                    if (FileUpload1.HasFile && FileUpload1.PostedFile.ContentLength > 0)
                    {
                        //mime type of the uploaded file
                        string mimeType = FileUpload1.PostedFile.ContentType;

                        //size of the uploaded file
                        int size = FileUpload1.PostedFile.ContentLength; // bytes

                        //extension of the uploaded file
                        string extension = System.IO.Path.GetExtension(FileUpload1.FileName);

                        //save the file
                        string path = Server.MapPath("path");
                        FileUpload1.SaveAs(Path.Combine(path, FileUpload1.FileName));
                    }

                    //maximum file size allowed - note this creates a section with default
                    //values; use WebConfigurationManager.GetSection to read the live config
                    HttpRuntimeSection rt = new HttpRuntimeSection();
                    rt.MaxRequestLength = rt.MaxRequestLength * 2;
                    int length = rt.MaxRequestLength;

                    //execution timeout
                    TimeSpan ts = rt.ExecutionTimeout;
                    double seconds = ts.TotalSeconds;
                }

                //Using Request.Files to save files
                private void AltSaveFile()
                {
                    HttpFileCollection coll = Request.Files;
                    for (int i = 0; i < coll.Count; i++)
                    {
                        HttpPostedFile file = coll[i];

                        if (file.ContentLength > 0)
                            ; //do something with the file
                    }
                }
            }
        }
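
    One caveat worth adding as an aside: raising maxRequestLength only covers the ASP.NET limit. If the site runs on IIS 7 or later in integrated mode, request filtering imposes its own limit (roughly 30 MB by default), so a matching entry is usually needed in web.config as well. A sketch, where 8388608 is simply 8 MB expressed in bytes to mirror the 8192 KB example above:

        <system.webServer>
          <security>
            <requestFiltering>
              <!-- maxAllowedContentLength is in bytes -->
              <requestLimits maxAllowedContentLength="8388608" />
            </requestFiltering>
          </security>
        </system.webServer>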

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all. I had already tested the code used in a simple test harness application, so I was confident in the code, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong. After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element, it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element. The functoid I was having problems with was one where I had used the XPath functoid type; this had seemed a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    Category | Description | Toolbox Group
    Assert | Internal Use Only | Advanced
    Conversion | Converts characters to and from numerics and converts numbers from one base to another. | Conversion
    Count | Internal Use Only | Advanced
    Cumulative | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract | Internal Use Only | Database
    DatabaseLookup | Internal Use Only | Database
    DateTime | Adds date, time, date and time, or add days to a specified date, in output data. | Date/Time
    ExistenceLooping | Internal Use Only | Advanced
    Index | Internal Use Only | Advanced
    Iteration | Internal Use Only | Advanced
    Keymatch | Internal Use Only | Advanced
    Logical | Controls conditional behavior of other functoids to determine whether particular output data is created. | Logical
    Looping | Internal Use Only | Advanced
    MassCopy | Internal Use Only | Advanced
    Math | Performs specific numeric calculations such as addition, multiplication, and division. | Mathematical
    NilValue | Internal Use Only | Advanced
    Scientific | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions. | Scientific
    Scripter | Internal Use Only | Advanced
    String | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. | String
    TableExtractor | Internal Use Only | Advanced
    TableLooping | Internal Use Only | Advanced
    Unknown | Internal Use Only | Advanced
    ValueMapping | Internal Use Only | Advanced
    XPath | Internal Use Only | Advanced

    Links:
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
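
    For reference, a minimal skeleton of where the category gets set in a custom functoid might look like the following. This is a sketch from memory of the Microsoft.BizTalk.BaseFunctoids API; the class names, ID value and method name are placeholders, and the usual resource/name/bitmap setup calls are omitted, so check the BizTalk SDK before relying on it:

        using Microsoft.BizTalk.BaseFunctoids;

        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid() : base()
            {
                // Custom functoid IDs should be 6000 or above (placeholder value).
                this.ID = 6001;

                // Category drives the toolbox group and what the functoid may link to;
                // String behaves as expected, while XPath is internal use only.
                this.Category = FunctoidCategory.String;

                // Point the mapper at the method that does the work at runtime.
                this.SetExternalFunctionName(GetType().Assembly.FullName,
                    "MyCompany.Functoids.ConfigLookupFunctoid", "LookupValue");

                this.SetMinParams(1);
                this.SetMaxParams(1);
                this.AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;
            }

            // Invoked by the generated map at runtime.
            public string LookupValue(string xpath)
            {
                // placeholder: look the value up in a config file here
                return xpath;
            }
        }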

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience of any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, which neither my manager nor I have permission to install XDebug on, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and moving on to something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little to no helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions, which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer friendly, with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easy customization to bring the rewards system and ERP integration over. What should the client upgrade to in this case?

    Read the article

  • ASP.NET Membership Password Hash -- .NET 3.5 to .NET 4 Upgrade Surprise!

    - by David Hoerster
    I'm in the process of evaluating how my team will upgrade our product from .NET 3.5 SP1 to .NET 4. I expected the upgrade to be pretty smooth, with very few, if any, upgrade issues. To my delight, the upgrade wizard said that everything upgraded without a problem. I thought I was home free, until I decided to build and run the application. A big problem was staring me in the face -- I couldn't log on. Our product is using a custom ASP.NET Membership Provider, but essentially it's a modified SqlMembershipProvider with some additional properties. And my login was failing during the OnAuthenticate event handler of my ASP.NET Login control, right where it was calling my provider's ValidateUser method. After a little digging, it turns out that the password hash that the membership provider was using to compare against the stored password hash in the membership database tables was different. I compared the password hash from the .NET 4 code line, and it was a different generated hash than my .NET 3.5 code line. (Tip -- when upgrading, always keep a valid debug copy of your app handy in case you have to step through a lot of code.) So it was a strange situation, but at least I knew what the problem was. Now the question was, "Why was it happening?" Turns out that a breaking change in .NET 4 is that the default hash algorithm changed to SHA256. Hey, that's great -- stronger hashing algorithm. But what do I do with all the hashed passwords in my database that were created using SHA1? Well, you can make two quick changes to your app's web.config and everything will be OK. Basically, you need to override the default HashAlgorithmType property of your membership provider. Here are the two places to do that:

    1. At the beginning of your <system.web> element, add the following <machineKey> element:

        <system.web>
          <machineKey validation="SHA1" />
          ...
        </system.web>

    2. On your <membership> element under <system.web>, add the following hashAlgorithmType attribute:

        <system.web>
          <membership defaultProvider="myMembership" hashAlgorithmType="SHA1">
          ...
        </system.web>

    After that, you should be good to go! Hope this helps.

    Read the article

  • Refresh banshee album art

    - by kmassada
    I usually just copy ~/.config/banshee-1 and ~/.gconf/apps/banshee-1 when I'm moving from one computer to the other, keeping the folder paths the same. I get to keep my music library intact, with the playlists I have. The problem with this method is that the album art doesn't carry over nicely. You'd have to play every album to get the album art to appear. Does anyone know a workaround, maybe to force Banshee to reload all album art? I saw this, but it's not quite what my issue is. I tried banshee --fetch-artwork, but it didn't work too well:

        kenneth@dv7:~$ banshee --fetch-artwork
        [Warn 11:23:38.200] DBus support could not be started. Disabling for this session.
        - System.Exception: Error 111: Connection refused (in `dbus-sharp')
        at DBus.Unix.UnixSocket.Connect (System.Byte[] remote_end) [0x00000] in <filename unknown>:0
        at DBus.Transports.UnixNativeTransport.OpenAbstractUnix (System.String path) [0x00000] in <filename unknown>:0
        at DBus.Transports.UnixNativeTransport.Open (System.String path, Boolean abstract) [0x00000] in <filename unknown>:0
        at DBus.Transports.UnixTransport.Open (DBus.AddressEntry entry) [0x00000] in <filename unknown>:0
        at DBus.Transports.Transport.Create (DBus.AddressEntry entry) [0x00000] in <filename unknown>:0
        at DBus.Connection.OpenPrivate (System.String address) [0x00000] in <filename unknown>:0
        at DBus.Connection..ctor (System.String address) [0x00000] in <filename unknown>:0
        at DBus.Bus..ctor (System.String address) [0x00000] in <filename unknown>:0
        at DBus.Bus.Open (System.String address) [0x00000] in <filename unknown>:0
        at DBus.Bus.get_Session () [0x00000] in <filename unknown>:0
        System.Exception: Unable to open the session message bus. (in `dbus-sharp')
        at DBus.Bus.get_Session () [0x00000] in <filename unknown>:0
        at DBus.BusG.Init () [0x00000] in <filename unknown>:0
        at Banshee.ServiceStack.DBusConnection.Connect (System.String serviceName, Boolean init) [0x00000] in <filename unknown>:0
        at Banshee.ServiceStack.DBusConnection.GrabDefaultName () [0x00000] in <filename unknown>:0
        [Info 11:23:38.286] Running Banshee 2.6.0: [Ubuntu 12.10 (linux-gnu, x86_64) @ 2012-10-11 06:19:37 UTC]
        (Banshee:21865): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Failed to connect to socket /tmp/dbus-vLxS6Riwsn: Connection refused
        [Warn 11:23:38.948] Could not read GConf key core.send_anonymous_usage_data - GLib.GException: No D-BUS daemon running (in `gconf-sharp')
        at GConf.Client.Get (System.String key) [0x00000] in <filename unknown>:0
        at Banshee.GnomeBackend.GConfConfigurationClient.TryGet[Boolean] (System.String namespace, System.String key, System.Boolean& result) [0x00000] in <filename unknown>:0
        (Banshee:21865): GConf-WARNING **: Client failed to connect to the D-BUS daemon: Failed to connect to socket /tmp/dbus-vLxS6Riwsn: Connection refused
        [Warn 11:23:39.239] Could not read GConf key core.send_anonymous_usage_data - GLib.GException: No D-BUS daemon running (in `gconf-sharp')
        at GConf.Client.Get (System.String key) [0x00000] in <filename unknown>:0
        at Banshee.GnomeBackend.GConfConfigurationClient.TryGet[Boolean] (System.String namespace, System.String key, System.Boolean& result) [0x00000] in <filename unknown>:0
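
    One approach worth trying: Banshee keeps fetched covers in the freedesktop media-art cache, so deleting that cache and then re-running the fetch (from inside a normal desktop session, so D-Bus is available) usually forces a refresh. A sketch, where both cache paths are assumptions that may differ between Banshee versions:

        # close Banshee first
        rm -rf ~/.cache/media-art     # assumed cache location on newer Banshee
        rm -rf ~/.cache/album-art     # assumed older location, if present
        banshee --fetch-artwork       # then let Banshee re-fetch the covers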

    Read the article

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not dump to much frustration here... I been running 12.04 since alpha with the ATI open source drivers and the gnome 3 desktop. I been generally very happy with them with only small issues along the way. Now of course it does not support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble bundle would not play. OK, no worries, the ATI driver is in the repos so let me have a go I thought. With all this testing that's been done with multi-monitor support, what could go wrong...? How I use my computer: It's laptop, with a HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), multi desktop environment. What happening now: installed drivers things seemed to working, save some small other bugs (not critical) this morning I take my machine and plug the additional monitor into it, and nothing happens... ok fine. open "displays" try configure dual display, won't work open ati config "thing" (cause it is a thing, a crap thing) and set-up monitors there reboot it says (oh ffs, really.... ok) reboot, login and wow, I got a gnome 2 desktop (presume gnome 3 fall back) and no multi-monitor...great. (screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7 ) after getting into a situation where I had to Ctrl + Alt + Del to get out of a frozen display, I eventually manage to set-up a single display desktop on the "main" monitor ok.. time to go home... unplug monitor... nothing happens.. oh boy here we go... try displays again, nothing, just hangs the display.. great. crash all the apps and reboot... So it's been a trying day... What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that: allows me run fglrx (so I can run the games I want) allows me to hot-plug a monitor to my laptop and remove it again allows me to change the display so include the hot-plugged monitor (preferable automatically like it did with the open drivers) Next best if that's not possible: switch between laptop only display and monitor only display easily (i.e. not having to reboot/logout/suspened etc) Really appreciate the time of anyone that has a solution. Thanks in advance. Regards Brendan PS: I guess I should file a bug about this too, so some direction as to the best place to file this would be appreciated too.

    Read the article

  • Credentials Not Passed From SharePoint WebPart to WCF Service

    - by Jacob L. Adams
    I have spent several hours trying to resolve this problem, so I wanted to share my findings in case someone else might have the same problem. I had a web part which was calling out to a WCF service on another server to get some data. The code I had was essentially using System.ServiceModel; using System.ServiceModel.Channels; ... var binding = new CustomBinding( new HttpTransportBindingElement { AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate } ); var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc")); var someOtherService = new SomeOtherServiceClient(binding, endpoint); string result = someOtherService.SomeServiceMethod(); This code would run fine on my local instance of SharePoint 2010 (Windows 7 64-bit). However, when I would deploy it to the testing environment, I would get a yellow screen of death  with the following message: The HTTP request is unauthorized with client authentication scheme 'Negotiate'. The authentication header received from the server was 'Negotiate,NTLM'. I then went through the usual checklist of Windows Authentication problems: Check WCF bindings to make sure authentication is set correctly Check IIS to make sure Windows Authentication is enabled and anonymous authentication was disabled. Check to make sure the SharePoint server trusted the server hosting the WCF service Verify that the account that the IIS application pool is running under has access to the other server I then spend lot of time digging into really obscure IIS, machine.config, and trust settings (as well of lots of time on Google and StackOverflow). Eventually I stumbled upon a blog post by Todd Bleeker describing how to run code under the application pool identity. Wait, what? The code is not already running under application pool identity? Another quick Google search led me to an MSDN page that imply that SharePoint indeed does not run under the app pool credentials by default. Instead SPSecurity.RunWithElevatedPrivileges is needed to run code under the app pool identity. Therefore, changing my code to the following worked seamlessly using System.ServiceModel; using System.ServiceModel.Channels; using Microsoft.SharePoint; ... var binding = new CustomBinding( new HttpTransportBindingElement { AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate } ); var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc")); var someOtherService = new SomeOtherServiceClient(binding, endpoint); string result; SPSecurity.RunWithElevatedPrivileges(()=> { result = someOtherService.SomeServiceMethod(); });

    Read the article

  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
    The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused, to name a few: Merge, Persist, Replicate and SaveOrUpdateCopy. Their purposes are:

    Merge: copies properties from a transient entity to an eventually loaded entity with the same id in the first level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first level cache first; if using version, the transient entity must have the same version as in the database.
    Persist: similar to Save or SaveOrUpdate, attaches a maybe new entity to the session, but does not generate an INSERT or UPDATE immediately and thus the entity does not get a database-generated id; it will only get it at flush time.
    Replicate: copies an instance from one session to another session, perhaps from a different session factory.
    SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.

    Here are some samples of their use.

        ISession session = ...;
        AuthorDetails existingDetails = session.Get<AuthorDetails>(1); //loads an entity and places it in the first level cache
        AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; //a detached entity with the same ID as the existing one
        Object mergedDetails = session.Merge(detachedDetails); //merges the Name property from the detached entity into the existing one; the detached entity does not get attached
        session.Flush(); //saves the existingDetails entity, since it is now dirty, due to the change in the Name property

        AuthorDetails details = ...;
        ISession session = ...;
        session.Persist(details); //details.ID is still 0
        session.Flush(); //saves the details entity now and fetches its id

        ISessionFactory factory1 = ...;
        ISessionFactory factory2 = ...;
        ISession session1 = factory1.OpenSession();
        ISession session2 = factory2.OpenSession();
        AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); //loads an entity
        session2.Replicate(existingDetails, ReplicationMode.Overwrite); //saves it into another session, overwriting any possibly existing one with the same id; other options are Ignore, where any existing record with the same id is left untouched, Exception, where an exception is thrown if there is a record with the same id, and LatestVersion, where the latest version wins

    Read the article

  • Ubuntu 10.10 forgets desktop theme.

    - by Marcelo Cantos
    (I posed this question on superuser.com and haven't received any answers or comments, then I came across this site, so my apologies to anyone who has seen this already.) I am running Ubuntu in VirtualBox (on a Windows 7 host). Several times now, the top-level menu bar, the task bar — and seemingly every system dialog — have forgotten the out-of-the-box "Ambiance" theme they conform to when I first installed the system. Window captions still preserve the theme, but pretty much nothing else does. I have searched high and low on Google for assistance with this problem. Everything I've found suggests either running some gconf reset or deleting .gconf* .gnome* and other similar directories. I have followed all this advice and nothing works. I still get a boring Windows-95-style gray 3D look and feel. On previous occasions, after much messing around I've given up and rebooted the VM instance, and been pleasantly suprised to see the original "Ambience" theme restored throughout the UI, but invariably it disappears again some time later, usually after a reboot, so I can never figure out what I did that broke it. Here's a sample from Ubuntu's site of what I want it to look like. And here's a screenshot of my system as it currently looks. Also note that my GNOME Terminals normally have a nice purple semi-translucent look, and as can be seen from the screenshot, they are now just a solid matt white. This last time (just yesterday), trying numerous combinations all the usual tricks and rebooting several times hasn't fixed it, so here I am on SU wondering: How do I recover the out-of-the-box theme for my Gnome/Ubuntu desktop, noting that blowing away all config files — as suggested in many places online — fails to achieve this? It might help to know that it seems to fail either after I resize the VM instance, forcing the Ubuntu desktop to resize itself, or after I play around with Compiz settings. I haven't been able to figure out which of these it is, and it could be neither. Given the amount of pain I have had to go through to get things back to normal (and given that I am at a loss as to how to do so), it has proven difficult to definitively isolate the cause.
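
    A less drastic thing to try before deleting whole config directories: on 10.10 the grey fallback look usually means the theme settings simply aren't being applied by gnome-settings-daemon, so re-asserting the relevant gconf keys (and making sure the daemon is running) can be enough. A sketch using the GNOME 2 keys that Ambiance normally sets; the key names and icon theme name are from memory, so verify them with gconf-editor:

        # re-apply the stock Ambiance theme without wiping ~/.gconf
        gconftool-2 --type string --set /desktop/gnome/interface/gtk_theme "Ambiance"
        gconftool-2 --type string --set /desktop/gnome/interface/icon_theme "ubuntu-mono-dark"
        gconftool-2 --type string --set /apps/metacity/general/theme "Ambiance"
        # if that has no effect, check that gnome-settings-daemon is actually running
        pgrep gnome-settings-daemon || gnome-settings-daemon &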

    Read the article

  • Squid: caching *.swf with variables

    - by stfn
    I'd recently upgraded my Ubuntu 11.10 x64 server to 12.04. In this process Squid was updated from 2.7 to 3.1. Squid 3.1 has many different options, which broke my setup. So I completely removed Squid 2.7 and 3.1 and started from scratch. Everything is now working as before except for one thing: caching of .swf files with ?/variables. Squid 3 sees a ? as dynamic content and does not cache it. For example, Squid 2.7 was caching the .swf file at http://ninjakiwi.com/Games/Tower-Defense/Play/Bloons-Tower-Defense-5.html and 3.1 is not.

        <object id="mov" name="movn" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="800" height="620">
          <param name="movie" value="http://www.ninjakiwifiles.com/Games/gameswfs/btd5.swf?v=160512-2">
          <param name="allowscriptaccess" value="always">
          <param name="bgcolor" value="#000000">
          <param name="flashvars" value="file=http://www.ninjakiwifiles.com/Games/gameswfs/btd5-dat.swf?v=280512">
          <p>Get Flash play Ninja Kiwi games.</p>
        </object>

    It is because of the "?v=160512-2" and "?v=280512" part. This line should be responsible for that:

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

    But disabling it still doesn't cache the .swf files. How do I configure Squid 3.1 to cache those files? My current config is:

        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        acl localnet src 192.168.2.0-192.168.2.255
        acl localnet src 192.168.3.0-192.168.3.255
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow localnet
        http_access deny all
        http_port 3128
        cache_dir ufs /var/spool/squid 10240 16 256
        maximum_object_size 100 MB
        coredump_dir /var/spool/squid3
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private
        refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
        refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
        refresh_pattern Packages\.bz2$ 0 20% 4320 refresh-ims
        refresh_pattern Sources\.bz2$ 0 20% 4320 refresh-ims
        refresh_pattern Release\.gpg$ 0 20% 4320 refresh-ims
        refresh_pattern Release$ 0 20% 4320 refresh-ims
        refresh_pattern . 0 40% 40320
        cache_effective_user proxy
        cache_effective_group proxy
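
    A sketch of the kind of change that typically helps on Squid 3.1 (untested against this exact setup): refresh_pattern rules are evaluated in order and the first match wins, and the existing .swf rule only matches URLs ending in .swf, so a query-string variant never hits it. Matching the query string explicitly, and keeping any generic dynamic-content rule below it, should let these objects be cached:

        # cache .swf even when a ?v=... query string is appended
        refresh_pattern -i \.swf(\?.*)?$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private
        # if the generic dynamic-content rule is re-added, it must come after the rule above
        # refresh_pattern -i (/cgi-bin/|\?) 0 0% 0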

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    Calling all enterprise architects | Enterprise architecture - InfoWorld Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference (tags: ping.fm) Red Tape, Part II : OTN Garage "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage) The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can’t be seen from the process list or command shell history." (tags: oracle weblogic) World-class EA | Open Group Blog "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup) Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner) "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm) Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager- Long Awaited Feature for MDS Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0) Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog) Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0) Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware) Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1 Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish) Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker) Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • How to generate xorg.conf? (X -configure segfaults)

    - by Nicolas Raoul
    My video card is working fine, I have no screen problem. I am trying to generate an xorg.conf so I did: [ Logout ] sudo service gdm stop [ Move away xorg.conf.back and xorg.conf.fglrx-0 that were in /etc/X11 ] sudo dpkg-reconfigure xserver-xorg sudo X -configure But this last command segfaults: X.Org X Server 1.7.6 Release Date: 2010-03-17 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.24-27-server i686 Ubuntu Current Operating System: Linux nico 2.6.32-25-generic #44-Ubuntu SMP Fri Sep 17 20:26:08 UTC 2010 i686 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-25-generic root=UUID=7447ab16-3406-442d-81e5-bb6a2d795205 ro quiet splash Build Date: 21 July 2010 12:47:34PM xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Fri Oct 15 16:06:11 2010 List of video drivers: i740 ark geode siliconmotion mach64 s3 r128 apm intel neomagic vesa trident chips s3virge fglrx sis savage rendition i128 tseng ztv mga openchrome radeon ati nv v4l vmware cirrus tdfx nouveau sisusb voodoo fbdev (EE) Can't load FireGL DRM library (libfglrxdrm.so). Backtrace: 0: X (xorg_backtrace+0x3b) [0x80e938b] 1: X (0x8048000+0x61c8d) [0x80a9c8d] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0x34d410] 3: X (xf86CallDriverProbe+0x182) [0x80b82d2] 4: X (DoConfigure+0x1c8) [0x816b898] 5: X (InitOutput+0x1da) [0x80b98aa] 6: X (0x8048000+0x1ebbb) [0x8066bbb] 7: /lib/tls/i686/cmov/libc.so.6 (__libc_start_main+0xe6) [0x467bd6] 8: X (0x8048000+0x1e961) [0x8066961] Segmentation fault at address (nil) Caught signal 11 (Segmentation fault). Server aborting Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Note the line Can't load FireGL DRM library (libfglrxdrm.so) Note: I do have file /usr/lib/fglrx/xorg/modules/linux/libfglrxdrm.so It is strange that it segfaults whereas I can use Gnome with no problem but well... Might be related: I tried to install the driver from ATI's website recently, and from then glxgears crashes at start. How can I generate xorg.conf in those conditions? It might or might not involve solving the segfault problem.
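
    Since the segfault happens while X -configure probes the fglrx driver, it may be easier to let ATI's own tool write the config instead. A sketch, assuming the proprietary driver is the one to keep (run with the X server stopped); the purge line is only the usual way back to the open driver and may need adjusting for how fglrx was installed:

        # generate /etc/X11/xorg.conf for the fglrx driver
        sudo aticonfig --initial
        # or, to fall back to the open radeon driver, remove fglrx first:
        # sudo apt-get remove --purge fglrx* && sudo dpkg-reconfigure xserver-xorg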

    Read the article

  • SOASuite 11.1.1.4 : Error Logging into BPM11g Composer?

    - by angelo.santagata
    Hey all, I’ve just installed SOA Suite 11.1.1.4 and noticed a few funnies which people might hit, thankfully each of them have an easy solution.   1. Some applications are installed but dont appear to work? If when you install SOASuite you may notice that the following applications dont appear to work, however they do appear as deployments in Weblogic Server Console e.g.  SOA Composer (composer), FMW Welcome Page Application (11.1.0.0.0) and some of the adaptors. If they appear in the deployments list state as “installed” and not Active, then its likely that they haven't been targeted to a specific server.   The solution is to target the application the desired managed server , e.g. AdminServer in a development environment. This is done by selecting the application, tab “Targets”, select all components, Button[change Targets] and select the appropriate server. This change can be done without restarting the Weblogic Server 2. You might find that when you try to log into the BPM Composer at http://machine:7001/bpm/composer , the login screen will appear but you cant log in. The error log might mention the following   The solution to this is two fold, a) When creating the domain, avoid using 127.0.0.1 as the Listener address, or “Any Addresses”, if this is a development machine create an alias in your /etc/hosts file and then use this alias in the domain creation wizard. e.g.   my host file contains an entry     mypc     127.0.0.1   And in the Fusion middleware Configuration wizard   Twill then work!   if it still doesnt you can try setting the ServerURL attribute to http://mypc in the SoaInfraMBean instead of blank. This is accomplished by using Enterprise Manager. Use the System MBean Browser to navigate to Application Defined MBeans->oracle.as.soainfra.config->[ server]->SoaInfraConfig->soa-infra. Then changing the value of 'ServerURL' to http://mypc   Failing that give support a call….

    Read the article

  • how do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135m video card). - For starters, the System Displays settings app always only showed one unknown monitor, even though I had the external Acer monitor connected. - So I downloaded and installed the nVidia drivers. According to what I read I would need to only use the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to only show a single monitor anyway). - nVidia settings app only showed on monitor until I changed the BIOS setting to use the onboard video for external monitor (not the dock video, which it was set to, even though I don't have a docking station). - The nVidia setting app now recognized both monitors. So, I setup the X Server display config as Separate X screen for both monitors. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080. - Everything seemed like it would work, but the external monitor was just a complete white screen. The external monitor was non-functional, even though sometimes it would show the background image - still nothing would show up over there. - So, I checked the Enable Xinerama box. - Now, after logging out and back in, the wallpaper extends to both screens but I get no taskbar at the bottom or top, no system menus, and I have to press the power button to restart or log off. - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is Gnome Classic. - This is pretty much the same symptoms as found here: How do I fix 11.10 GUI?. - So, I resign myself to the older shell. - Everything works fine until ... I unplug the external monitor ... this is a laptop after all. - Anyway, after doing some work on the road, I plug back in and I still see both screens and it's functional except, ... - Now, the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time and the system menu on the right). But the next 3 bars (the same height as the top menu bar) are empty and are just reducing the max size of windows on that screen. - See screenshot here: http://i39.tinypic.com/35d2kh1.png - So ... 1. How do I get rid of those extra 3 black bars? They're taking valuable screen space. 2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04 DHCP3-server eth0, eth1, eth2 Edit: removed br0&br1 eth0 is the external connection eth1 & eth2 are the internal network eth1 and eth2 are supposed to be seperate networks of student/teachers respectivly. What I would like to have is the internet from external device bridged to device 1 and 2, with the DHCP server controlling the two internal devices. Its already working with DHCP, the part I am stuck on is bridging for internet. I have setup a script that I found here: Router With the original script he linked here: Ubuntu Router Guide echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n" IPTABLES=/sbin/iptables #IPTABLES=/usr/local/sbin/iptables DEPMOD=/sbin/depmod MODPROBE=/sbin/modprobe EXTIF="eth0" INTIF="eth1" INTIF2="eth2" echo " External Interface: $EXTIF" echo " Internal Interface: $INTIF" echo " Internal Interface: $INTIF2" EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'` echo " External IP: $EXTIP" #====================================================================== #== No editing beyond this line is required for initial MASQ testing == The rest of the script below this is as is. I can get ip from the eth1 & eth2 devices, and my computer can see them, and them it, however, internet is not being passed through. If you need more information please just let me know. EDIT: So I had a 255.255.254.0 network, I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0 the pings will pass through, however, I cannot get DNS requests to pass? My new Config for Firewall Rules # /etc/iptables.up.rules # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012 *mangle :PREROUTING ACCEPT [39:4283] :INPUT ACCEPT [39:4283] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [12:4884] :POSTROUTING ACCEPT [13:5145] COMMIT # Completed on Wed Nov 28 19:43:28 2012 # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012 *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -j LOG -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT COMMIT # Completed on Wed Nov 28 19:43:28 2012 # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012 *nat :INPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -o eth0 -j MASQUERADE -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25 COMMIT # Completed on Wed Nov 28 19:43:28 2012 Not sure what else you may need, but I am using Webmin to control the server(Needed for the operators on site to know how to use it.) If you could explain it as standard CLI commands, or edits to this file directly then we should be ok. :) And thanks again Erik, I do believe your edits did help.
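
    Two things worth checking that the firewall script above doesn't cover: IPv4 forwarding has to be enabled in the kernel, and the DHCP scopes need to hand out a reachable DNS server (with NAT alone, clients can reach IPs but name lookups fail exactly as described). A sketch, assuming Google's public resolvers are acceptable:

        # /etc/sysctl.conf - allow routing between eth0 and eth1/eth2
        net.ipv4.ip_forward=1
        # apply immediately with: sudo sysctl -p

        # /etc/dhcp/dhcpd.conf - give clients a DNS server they can actually reach
        option domain-name-servers 8.8.8.8, 8.8.4.4;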

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit...

    Something I have noticed in Ubuntu for a long time that has been frustrating to me is that when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters - but for the sake of an example), and I want to echo the English alphabet. What I type is this:

        echo abcdefghijklmnopqrstuvwxyz

    But what my terminal looks like before I hit the Enter key is:

        pqrstuvwxyzghijklmno

    When I hit enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command in on a terminal that was only 20 characters wide, would be this:

        echo abcdefghijklmno
        pqrstuvwxyz

    Background: I am using bash as my shell, and I have this line in my ~/.bashrc:

        set -o vi

    to be able to navigate the command line with VI commands. I am currently using Ubuntu 10.10 server, and connecting to the server with Putty. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am at in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of Putty and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with Putty (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
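
    For anyone hitting the same wrapping problem: the usual culprit, and what the edit above points to, is a $PS1 containing color escape codes that are not wrapped in \[ and \], so bash miscounts the visible prompt width and wraps in the wrong place. A minimal sketch of a safely escaped colored prompt (the colors themselves are arbitrary):

        # In ~/.bashrc: enclose every non-printing sequence in \[ ... \]
        PS1='\[\e[1;32m\]\u@\h\[\e[0m\]:\[\e[1;34m\]\w\[\e[0m\]\$ '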

    Read the article

  • Webcast Replay : SANS Institute Product Review of Oracle Identity Manager

    - by B Shashikumar
    Thanks to everyone who attended the SANS Institute webinar covering the product review of Oracle Identity Manager. And a special thanks to our guest speakers from SuperValu - Phillip Black and Patrick Abreo. If you missed the webcast, you can catch a replay here  And here are the slides that were used in the webcast.  There were many questions that we could not answer as we ran out of time. We have captured some of the questions with responses below. Is Oracle Identity Analytics still offered as a separate product or is it part of Oracle Identity Manager? Oracle Identity Manager and Oracle Identity Analytics are now offered as part of Oracle Identity Governance Suite. OIA and OIM share a common UI architecture, common data model and common support for connected and disconnected resources.  When requesting new access/entitlements is there an approval process? Yes. We leverage SOA BPEL-based workflows for approvals  Are the identity self service capabilities based on Oracle ADF? Yes they are completely based on Oracle ADF  Can you give some examples of personalization and customization with Oracle Identity Manager 11gR2? With the new UI config framework we can enable different levels of UI customization. Customers now have the ability to Point & click to customize; or drag and drop customization without any need for coding. So users can easily personalize the interface of their application within the browser. For example, they can change the logo, Rearrange, hide Home Page regions; regularly searched items can be saved and re-used; Searchable & search results columns can be configured; Sorting preferences are remembered and so on. For more sophisticated customization, Customers can also edit the standard JSF within the page to alter business rules, modify page flows, page layouts and other items. Can you explain the role of sandboxes in customization? Customers can make their custom changes within a sandbox so that it doesn’t impact their production environment. They can make their changes, validate those changes, stage and then commit those changes without affecting production users. This is similar to how source code control systems like perforce work To watch a replay of the webcast, click here

    Read the article

  • Samba fails to install

    - by jschoen
    I am running XBMC, which is built around Ubuntu 10.04. It does not come with Samba pre-installed, and I need to share some media with a couple of other boxes. I followed the Think Geek directions found here. I had it all set up a couple of days ago, and thought I was in the clear. I rebooted this evening, and when it came back up Samba was not started. I determined this by trying to access the Samba shares, and it would return that there was an error connecting to the server. I can ssh into it, so I know it is connected. In my infinite wisdom, I figured I had just messed something up and would just uninstall and reinstall. So I did sudo apt-get purge samba and sudo apt-get purge smbfs, then tried to follow the tutorial above again. What I get after running sudo apt-get install samba smbfs is:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          openbsd-inetd inet-superserver smbldap-tools ldb-tools ufw smbclient
        The following NEW packages will be installed:
          samba smbfs
        0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
        Need to get 0B/8,131kB of archives.
        After this operation, 22.6MB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package samba.
        (Reading database ... 57098 files and directories currently installed.)
        Unpacking samba (from .../samba_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb)...
        Selecting previously deselected package smbfs.
        Unpacking smbfs (from .../smbfs_2%3a3.4.7~dfsg-1ubuntu3.2_i386.deb) ...
        Processing triggers for ureadahead ...
        Setting up samba (2:3.4.7~dfsg-1ubuntu3.2) ...
        Generating /etc/default/samba...
        update-alternatives: using /usr/bin/smbstatus.samba3 to provide /usr/bin/smbstatus (smbstatus) in auto mode.
        smbd start/running, process 2963
        **start: Job failed to start**
        Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.2) ...

    The bold is my own emphasis. So I am not sure what I messed up here, or how to get back to where it was, though I am pretty sure I made it worse than it is. I found where the logs are located, /var/log, and found this line that seems to be the culprit:

        Jan 29 11:59:34 XBMCLive smbd[2806]: error opening config file

    So it seems it does not create the configuration files. Is there a way to get Samba to try to recreate them again?
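
    When Samba has been purged and reinstalled and smbd still complains about its config file, the piece that is usually missing is /etc/samba/smb.conf itself, since purging removes it and a plain reinstall does not always regenerate it. A sketch of what one might try; the template path is the usual Debian/Ubuntu location and the service names are how 10.04 splits the daemons, both of which may differ on an XBMC Live image:

        # restore a default config from the packaged template, then validate and restart
        sudo cp /usr/share/samba/smb.conf /etc/samba/smb.conf   # assumption: template path
        testparm -s /etc/samba/smb.conf                          # check the config parses cleanly
        sudo service smbd restart || sudo service nmbd restart   # assumption: upstart job names on 10.04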

    Read the article

  • How migrate my keyring (containing ssh passprases, nautilus remote filesystem, pgp passwords) and network manager connections?

    - by con-f-use
    I changed the disk on my laptop and installed Ubuntu on the new disk. The old disk had 12.04, upgraded to 12.10, on it. Now I want to copy my old keyring with WiFi passwords, ftp passwords for nautilus and ssh key passphrases. I have all the data from the old disk available (it is now a USB disk and I did not delete the old data yet or do anything with it - I could still put it in the laptop and boot from it like nothing happened). On the new disk that is now in my laptop, I have installed 12.10 with the same password, user-id and username as on the old disk. Then I copied a few important config files from the old disk (e.g. ~/.firefox/, ~/.mozilla, ~/.skype and so on), which all worked fine... except for the keyring: the old methods of just copying ~/.gconf/... and ~/.gnome2/keyrings won't work. Did I miss something?

    1. Edit: I figure one needs to copy files not located in the user's home directory as well. I copied the whole old /home/confus (which is my home directory) to the fresh install, to no effect. That whole copy is now reverted to the fresh install's home directory, so my /home/confus is as it was after the fresh install.

    2. Edit: The folder /etc/NetworkManager/system-connections seems to be the place for WiFi passwords. Could be that /usr/share/keyrings is important as well for ssh keys - that's the only sensible thing that a search came up with: find /usr/ -name "*keyring*"

    3. Edit: Still no ssh and ftp passwords from the keyring. What I did:

    - Converted the old hard drive to a USB drive
    - Put the new drive in the laptop and installed a fresh version of 12.10 there (same uid, username and password)
    - Booted from the old hdd via USB and copied its /etc/NetworkManager/system-connections, ~/.gconf/ and ~/.gnome2/keyrings, ~/.ssh over to the new disk
    - Confirmed that all keys on the old install work
    - Booted from the new disk

    Result: no passphrases for ssh keys, no ftp passwords in the keyring. At least the WiFi passwords are migrated.

    4. Edit: Bounty! Ending soon...

    5. Edit: Keyrings are now in ~/.local/share/keyrings/. Also interesting: .gnupg
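
    For what it's worth, a sketch of the copy steps that generally cover this migration when both installs use the same username and login password. The /old-disk mount point is a placeholder for wherever the old drive is mounted, /home/confus is the home directory named above, and the keyring path assumes 12.10's gnome-keyring (older releases used ~/.gnome2/keyrings):

        # keyring files (ssh passphrases, ftp/nautilus passwords, pgp passwords)
        rsync -a /old-disk/home/confus/.local/share/keyrings/ ~/.local/share/keyrings/
        rsync -a /old-disk/home/confus/.gnome2/keyrings/ ~/.gnome2/keyrings/   # older location, if present
        # system-wide WiFi connections; must stay root-owned with mode 600
        sudo rsync -a /old-disk/etc/NetworkManager/system-connections/ /etc/NetworkManager/system-connections/
        sudo chmod 600 /etc/NetworkManager/system-connections/*
        # log out and back in so gnome-keyring picks up the copied files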

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, though, real-world issues make it impractical to use. For example:
    - If the team is not used to using source control, training problems arise.
    - If a team member directly modifies code on the server, various issues arise: merge problems, lack of history, and so on.
    Let's say there is a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would take not only great knowledge of the project but is also well beyond what can be completed in the time available. Other projects are falling out of sync as well. Developers continue to distrust source control and compound the issue by not using it. Developers argue that using source control is wasteful because merging is error prone and difficult. This is a hard point to argue, because when source control is so badly misused and continually bypassed, it is indeed error prone; in their view, the evidence "speaks for itself". Developers also argue that modifying code directly on the server saves time. This is likewise hard to argue, because the merge required to synchronize the code in the first place is time consuming, across ~10 projects. Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases those files that are not in source control. This too drives distrust of source control, because "publishing breaks the project". Fixing it (moving the stored files out of the solution subfolders) takes a great deal of time and debugging, since the locations are not set in web.config and are scattered across multiple points in the code. So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations keep rising. What can be done in this situation?

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I had copied the same profile from machine to machine, and I used special characters in my $PS1 that were somehow throwing it off. I am now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit...
    Something I have noticed in Ubuntu for a long time, and that has been frustrating me, is that when I type a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line the cursor goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the command itself; visually, it overwrites the text that was displayed.) It is hard to explain without seeing it, but say my terminal were 20 characters wide (mine is more like 120, but for the sake of an example) and I wanted to echo the English alphabet. What I type is:
        echo abcdefghijklmnopqrstuvwxyz
    But what my terminal shows before I hit the Enter key is:
        pqrstuvwxyzghijklmno
    When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect on a terminal only 20 characters wide is:
        echo abcdefghijklmno
        pqrstuvwxyz
    Background: I am using bash as my shell, and I have this line in my ~/.bashrc so I can navigate the command line with vi commands:
        set -o vi
    I am currently using Ubuntu 10.10 server and connecting to it with PuTTY. In any other environment I have worked in, if I type a long command line, a new line is added underneath the one I am working on once the command gets longer than the terminal width, and as I keep typing I can see my command on two lines. But for as long as I can remember using Ubuntu, my long commands only ever occupy one line. The same thing happens when I go back through the command history (I hit Esc, then 'k', to step back to previous commands): when I reach a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am in the command. The only work-around I have found to see the entire long command is to hit Esc then 'v', which opens the current command in a vi editor. I don't think I have anything odd in my .bashrc file; I commented out the "set -o vi" line and still had the problem. I also downloaded a fresh copy of PuTTY and made no changes to its configuration, just typed in my host name to connect, and I still have the problem, so I don't think it is anything with PuTTY (unless I need to make some config change). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
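    For reference, the usual cause of this symptom is non-printing sequences (colors, window-title codes) in $PS1 that are not wrapped in \[ ... \], so bash miscounts the visible width of the prompt and wraps long input back onto the same line. A minimal sketch of the difference; the colors here are illustrative, not the poster's actual prompt:
        # broken: the escape sequences count toward the prompt length
        PS1='\e[1;32m\u@\h:\w\$ \e[0m'
        # fixed: \[ ... \] marks the sequences as zero-width, so line wrapping works again
        PS1='\[\e[1;32m\]\u@\h:\w\$ \[\e[0m\]'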

    Read the article

  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu.
    My problem: My native language is pt-BR and my notebook has an American keyboard, so I use ' and " followed by keys like u and c to produce things like ú, ç and ü. That all works well. However, in the case of apostrophes and quotes, a problem appears when I type ' followed by:
    1. letters that won't accept the acute accent (´, ACUTE ACCENT, 0x00B4) at all, like t; and
    2. letters that won't accept the acute in pt-BR, like r.
    In case 1, ' followed by t does absolutely nothing. In case 2, ' followed by r produces ŕ (LATIN SMALL LETTER R WITH ACUTE, 0x0155), which is used, as far as I know, in some eastern European languages such as Slovak. It isn't used in Portuguese, and neither are the other acute-accented consonants (Portuguese does not take diacritics on consonants at all).
    Question: That said, is there a way to better support Brazilian Portuguese on an American keyboard? The setup is very common here; I actually prefer the American keyboard to our own, known as "ABNT".
    Desired solution: I'd like to deactivate the unused diacritics, so that case 2 behaves just like case 1. Additionally, if possible, I'd like case 1 to behave as it does under Windows. For example, typing ' followed by t should write 't (apostrophe followed by t) instead of doing nothing. As for case 2, in my humble opinion doing nothing is counterproductive. I realize the current behavior is logically reasonable ("there is no t-acute, so tell the computer to typeset an apostrophe, ', plus a space, instead of the acute"), but from a human, practical point of view I think my suggestion makes more sense (to me, at least).
    Additional comments: I believe this also applies to Spanish, French, Italian and other western European Latin languages. On the console (Ctrl+Alt+F?), case 1 is not a problem: I don't need to press space, as the apostrophe is added automatically. However, there I cannot type the cedilla (ç). Two completely different behaviors. If it is just a matter of customizing text config files (possibly creating a custom layout), I'd be glad to share my efforts; I just need guidance on the how-to part. Somehow Google only points me to the people who cope with the situation and think it works "well enough". And since I have definitely migrated to Linux/Xubuntu after many years, I'd like to set it up just the way I like it (and I'm sure others would too).
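    One way to get roughly this behavior, sketched under the assumption of an Xorg session using a dead-key US layout (e.g. the "intl" variant), is a per-user Compose file. The include line and key names are standard X compose syntax, but the specific overrides below are illustrative rather than a tested recipe, and GTK applications may need GTK_IM_MODULE=xim before they honor ~/.XCompose:
        # ~/.XCompose  (log out and back in for it to take effect)
        include "%L"                    # keep all of the locale's default compose rules
        <dead_acute> <t> : "'t"         # ' then t -> apostrophe + t, instead of nothing
        <dead_acute> <r> : "'r"         # ' then r -> apostrophe + r, instead of ŕ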

    Read the article

  • Unable to Mount an external hard drive (NTFS)

    - by Mediterran81
    Ubuntu 11.10. When I plug in my external Western Digital MyPassport drive (500 GB, NTFS), which I named WD, I get the following error:
        Unable to mount WD
        Error mounting: mount exited with exit code 1: helper failed with:
        mount: according to mtab, /dev/sdb1 is already mounted on /media/WD
        mount failed
    I have no problem with the internal NTFS partitions that auto-mount on startup (ntfs-config does that). If I plug in the WD before I boot Ubuntu, it is recognized upon login and I can access it without any problem. But if I remove it using "Safely remove" and then replug it, I get the error above. Here is my fstab:
        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # Entry for /dev/sda5 :
        UUID=24540d0f-5803-493c-ace9-e3b3c0cedb26 / ext4 errors=remount-ro 0 1
        # Entry for /dev/sda3 :
        UUID=E4C43F7EC43F51D2 /media/OS ntfs-3g defaults,locale=en_US.UTF-8 0 0
        # Entry for /dev/sda2 :
        UUID=6A0070F10070C61B /media/RECOVERY ntfs-3g defaults,locale=en_US.UTF-8 0 0
        # Entry for /dev/sdb1 :
        UUID=EA6854D268549F5F /media/WD ntfs-3g defaults,nosuid,nodev,locale=en_US.UTF-8 0 0
        # Entry for /dev/sda6 :
        UUID=ed077c52-c50e-406c-9120-9cb6f86ec204 none swap sw 0 0
    Here is my mtab:
        /dev/sda5 / ext4 rw,errors=remount-ro,commit=0 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        fusectl /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /media/OS fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        /dev/sda2 /media/RECOVERY fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        /dev/sdb1 /media/WD fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/hanine/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=hanine 0 0
    Apparently it cannot be mounted because, upon re-plugging, the system finds it already mounted according to mtab; some sort of conflict. Does anyone have a clue how to solve this? Thanks.
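    The conflict comes from having a static fstab entry for a hot-pluggable disk: ntfs-config wrote a fixed line for /dev/sdb1, so after "Safely remove" the stale mtab record and the udisks automounter fight over /media/WD. A hedged sketch of one way out; deleting the line entirely instead of marking it noauto would also work:
        # in /etc/fstab, either delete the WD line or change it to noauto:
        UUID=EA6854D268549F5F /media/WD ntfs-3g noauto,nosuid,nodev,locale=en_US.UTF-8 0 0
        # then clear the stale mount record before replugging the drive:
        sudo umount /media/WD       # or: sudo umount -l /media/WD  if it claims to be busy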

    Read the article

< Previous Page | 297 298 299 300 301 302 303 304 305 306 307 308  | Next Page >