Search Results

Search found 15835 results on 634 pages for 'static routes'.


  • Intermittent internet connectivity

    - by Rob Oplawar
    UPDATED: I recently built a new computer and set it up to dual-boot Windows 7 and Ubuntu 11.10. In Windows, using the same hardware, my LAN connectivity is solid. In Ubuntu, however, my network interface periodically dies and resets itself; I'll have a solid connection for 30 seconds, and then it will go out for 30 seconds. When I tail the log (tail -f /var/log/kern.log) I see "eth0 link up" messages appear periodically, corresponding with the return of connectivity. I posted the original question months ago and misinterpreted what was going on. With a working Internet connection in Windows, I ignored the problem for some months. See my answer below for the solution (drivers).

    ORIGINAL POST: In Ubuntu, although I maintain a solid connection to my LAN (pinging the router IP address consistently returns a good result), my internet connectivity drops in and out. When I continuously ping 74.125.227.18 (a google.com server), I get responses for a while, then I start getting "Destination Host Unreachable" for a while, then I get responses again. This happens consistently, dropping the connection for about 30 seconds out of every minute or two. Whether I configure my network via the network manager or via /etc/network/interfaces seems to make no difference. I configure with the following settings:

        address 192.168.1.101
        network 192.168.1.0
        gateway 192.168.1.99 (my router's IP address)
        netmask 255.255.255.0 (confirmed as the right netmask for the router)
        broadcast 192.168.1.255 (also confirmed with the router)

    ifconfig confirms that these settings are working:

        eth0   Link encap:Ethernet  HWaddr 50:e5:49:40:da:a6
               inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
               inet6 addr: fe80::52e5:49ff:fe40:daa6/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:11557 errors:0 dropped:11557 overruns:0 frame:11557
               TX packets:13117 errors:0 dropped:211 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:9551488 (9.5 MB)  TX bytes:1930952 (1.9 MB)
               Interrupt:41 Base address:0xa000

    I get the same issue when I use automatic DHCP address settings, although I did confirm that there is no other machine on the network with the static IP address I want to use. As I said, the connection to the local network stays solid - I never have any trouble pinging 192.168.1.* - it's internet addresses that I intermittently cannot reach. It's not a DNS issue, because pinging known IP addresses directly shows the same behavior. Also, I don't think it's a hardware issue, as I never have any internet connectivity problems on the same machine in Windows. The network hardware is built into the motherboard: Gigabyte Z68XP-UD3P. I managed to bring the OS fully up to date, according to the update manager, but it didn't fix the issue, and with my limited understanding of network architecture I'm at my wit's end. The only clue I can see is that ifconfig is reporting a lot of dropped packets, but I'm not sure what to do about it.

    UPDATE: It seems my problem is a little more generic than I described; now when I try pinging my router and google simultaneously, they both go unreachable at the same time. Running ifdown eth0 and then ifup eth0 brings it back temporarily; if I just wait, it comes back after a couple of minutes. I'll broaden my search through intermittent network connectivity problems.
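    A small logging loop makes the outage pattern easier to line up against the "eth0 link up" entries in kern.log. This is a minimal diagnostic sketch, assuming Python is available; the one-second interval is an arbitrary choice, and the two addresses are taken from the post.

        #!/usr/bin/env python
        # Log gateway vs. external reachability once per second, so outage
        # windows can be matched against "eth0 link up" lines in kern.log.
        import subprocess
        import time

        GATEWAY = "192.168.1.99"      # router address from the post
        EXTERNAL = "74.125.227.18"    # the google.com server being pinged

        def reachable(host):
            # One ping with a two-second timeout; True on success.
            return subprocess.call(
                ["ping", "-c", "1", "-W", "2", host],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0

        while True:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            print("%s gateway=%s external=%s" % (
                stamp, reachable(GATEWAY), reachable(EXTERNAL)))
            time.sleep(1)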

    Read the article

  • Admin Panel like Custom Framework

    - by bhuvin
    I want to create a framework, like an admin panel, which can rule almost all the aspects of what is shown on the frontend. For a (most basic) example: suppose the links which are to be shown in a navigation area are passed from the server, with the order, the URL, etc. The whole aim is to save time on the tedious tasks: you can just start creating menus and start assigning pages to them. Give a URL and the actual files which are to be rendered (in the case of static files; in the case of dynamic files, giving the file accordingly). And all of this is fully manageable from the server side using different portlets and similar mechanisms.

    So the basic roadmap is to have areas like:

    Header Area - which can contain logos, links, etc.
    Navigation Area - which can contain links and submenus.
    Content Area - this is where the tricky part is: it has zones like left, center & right, and it holds the order in which elements have to be displayed. So when someday we want to change the way the articles appear on the page, we can do so easily, without any deployments. These zones can contain any number of internal elements, like a word cloud or an advertisement area.
    Footer Area - again similar to the Header Area.

    Currently there is a preexisting custom framework which uses XSLT files for pulling data out of the server side, and it has the above capabilities. For example, if there's a grid, it will have a <table> tag embedded in the XSLT file. Whatever the source of the data might be, we serialize it as XML, give it to the XSLT file, derive the HTML from that, and append it to the layer in a page. The problems with this approach are: the XSLT conversion occurs on the server side, so the server is responsible for getting the data, running the XSLT transform, and appending the generated HTML to the layer div - which, in my view, isn't the server's concern, and for larger applications might be slower. Debugging isn't possible for the XSLT transformation, so whenever we face problems with data it's always a bit of a trial & error exercise. Maintaining it is an eerie job, i.e. styling changes and other such work. Adding dynamic values is awkward: JavaScript can't easily be used, and we can't use jQuery or any other client libraries, since everything happens on the server.

    For now what I have thought about is using a templating - JavaScript - JSON combination in place of XSLT; the work is offloaded to the client and the rendering takes place there, as sketched below. This could solve the above problems and could also add mobile support. The only problem I can think of is that it is a lot of work, and adding new portlets on the go needs to be looked into. What could be the alternatives to this? What kinds of problems are there with the JavaScript approach? What are the different ways to implement the same thing? Are there any existing frameworks for similar usage?
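    To make the proposal concrete, here is a minimal sketch of the kind of JSON payload the server could emit for the navigation and content areas, leaving all HTML generation to a client-side template. Every field name here is hypothetical, purely for illustration.

        import json

        # Hypothetical page definition the admin panel would store and serve.
        page = {
            "navigation": [
                {"label": "Home",     "url": "/",         "order": 1},
                {"label": "Articles", "url": "/articles", "order": 2,
                 "children": [{"label": "Archive", "url": "/articles/archive"}]},
            ],
            "content": {
                "left":   [{"portlet": "word_cloud",    "order": 1}],
                "center": [{"portlet": "article_list",  "order": 1}],
                "right":  [{"portlet": "advertisement", "order": 1}],
            },
        }

        # The server's only job is to hand this to the client; a JavaScript
        # template renders it, so no HTML is built server-side.
        print(json.dumps(page, indent=2))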

    Read the article

  • Collision 2D Quads

    - by Vico Pelaez
    I want to detect collision between two 2D squares; one square is static and the other one moves according to the keyboard arrows. I have implemented some code; however, nothing happens when they overlap each other, and what I tried to achieve in the code was to detect an overlap between them. I think either I am not understanding the concept really well, or this is not working because one of the squares is moving. I would really appreciate your help. Thank you!

        float x1 = 0.05, Y1 = 0.05;
        float x2 = 0.05, Y2 = 0.05;
        float posX1 = 0.5, posY1 = 0.5;
        float movX2 = 0.0, movY2 = 0.0;

        struct box {
            int width = 0.1;
            int heigth = 0.1;
        };

        void init() {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        void quad1() {
            glTranslatef(posX1, posY1, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(0.5, 1.0, 0.5);
            glVertex2f(-x1, -Y1);
            glVertex2f(-x1, Y1);
            glVertex2f(x1, Y1);
            glVertex2f(x1, -Y1);
            glEnd();
        }

        void quad2() {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(movX2, movY2, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(1.5, 1.0, 0.5);
            glVertex2f(-x2, -Y2);
            glVertex2f(-x2, Y2);
            glVertex2f(x2, Y2);
            glVertex2f(x2, -Y2);
            glEnd();
            glPopMatrix();
        }

        void reset() {
            // Reset position of square???
            movX2 = 0.0;
            movY2 = 0.0;
            collisionB = false;
        }

        bool collision(box A, box B) {
            int leftA, leftB;
            int rightA, rightB;
            int topA, topB;
            int bottomA, bottomB;

            // Calculate the sides of box A
            leftA = x1;
            rightA = x1 + A.width;
            topA = Y1;
            bottomA = Y1 + A.heigth;

            // Calculate the sides of box B
            leftB = x2;
            rightB = x2 + B.width;
            topB = Y1;
            bottomB = Y1 + B.heigth;

            if (bottomA <= topB) return false;
            if (topA >= bottomB) return false;
            if (rightA <= leftB) return false;
            if (leftA >= rightB) return false;
            return true;
        }

        float move_unit = 0.1;

        void keyboardown(int key, int x, int y) {
            switch (key) {
            case GLUT_KEY_UP:    movY2 += move_unit; break;
            case GLUT_KEY_RIGHT: movX2 += move_unit; break;
            case GLUT_KEY_LEFT:  movX2 -= move_unit; break;
            case GLUT_KEY_DOWN:  movY2 -= move_unit; break;
            default: break;
            }
            glutPostRedisplay();
        }

        void display() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            cuad1();
            if (!collision) {
                cuad2();
            } else {
                reset();
            }
            glFlush();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Collision Practice");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }
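    Two things stand out in the posted code: display() tests `!collision`, which evaluates the function pointer rather than calling collision(A, B), and collision() itself reads the untranslated extents (x1, Y1, x2, Y2) instead of the squares' current positions (posX1/posY1 and movX2/movY2), so the computed boxes never move. For reference, an axis-aligned bounding-box test built from the translated positions looks like the following; it is a minimal sketch of the idea, written in Python for clarity rather than matching the OpenGL code line for line.

        def aabb_overlap(ax, ay, aw, ah, bx, by, bw, bh):
            # (ax, ay) is box A's lower-left corner *after* translation;
            # aw/ah are its width and height. Same for box B.
            if ax + aw <= bx:   # A entirely left of B
                return False
            if bx + bw <= ax:   # A entirely right of B
                return False
            if ay + ah <= by:   # A entirely below B
                return False
            if by + bh <= ay:   # A entirely above B
                return False
            return True

        # Example with the post's values: the static quad is centered at
        # (posX1, posY1), the moving quad at (movX2, movY2); each quad is
        # 2*x wide and 2*Y tall, so the lower-left corner is center - extent.
        posX1, posY1, x1, Y1 = 0.5, 0.5, 0.05, 0.05
        movX2, movY2, x2, Y2 = 0.45, 0.5, 0.05, 0.05
        print(aabb_overlap(posX1 - x1, posY1 - Y1, 2 * x1, 2 * Y1,
                           movX2 - x2, movY2 - Y2, 2 * x2, 2 * Y2))  # True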

    Read the article

  • Can anybody help me design my UITableView using the MVC pattern?

    - by user2877880
    I have written a ViewController in which I get data from the internet and display it in a UITableView, using a JSON parser which uses objectForKey: to identify its objects. What I would like your help with is converting it into the MVC pattern, to make it less clumsy, instead of including everything in the same controller class. Please try explaining it to me in terms of my code. Thanks in advance. The code is given below:

        #import "ViewController.h"
        #import "AFNetworking.h"
        #import "ModelTableArray.h"

        @implementation ViewController

        @synthesize tableView = _tableView, activityIndicatorView = _activityIndicatorView, movies = _movies;

        - (void)viewDidLoad {
            [super viewDidLoad];

            // Setting Up Table View
            self.tableView = [[UITableView alloc] initWithFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height) style:UITableViewStylePlain];
            self.tableView.dataSource = self;
            self.tableView.delegate = self;
            self.tableView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
            self.tableView.hidden = YES;
            [self.view addSubview:self.tableView];

            // Setting Up Activity Indicator View
            self.activityIndicatorView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray];
            self.activityIndicatorView.hidesWhenStopped = YES;
            self.activityIndicatorView.center = self.view.center;
            [self.view addSubview:self.activityIndicatorView];
            [self.activityIndicatorView startAnimating];

            // Initializing Data Source
            self.movies = [[NSArray alloc] init];
            NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rocky&country=us&entity=movie"];
            NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url];

            UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init];
            [refreshControl addTarget:self action:@selector(refresh:) forControlEvents:UIControlEventValueChanged];
            [self.tableView addSubview:refreshControl];
            [refreshControl endRefreshing];

            AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
                self.movies = [JSON objectForKey:@"results"];
                [self.activityIndicatorView stopAnimating];
                [self.tableView setHidden:NO];
                [self.tableView reloadData];
            } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
                NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo);
            }];
            [operation start];
        }

        - (void)refresh:(UIRefreshControl *)sender {
            NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rambo&country=us&entity=movie"];
            NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url];
            AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) {
                self.movies = [JSON objectForKey:@"results"];
                [self.activityIndicatorView stopAnimating];
                [self.tableView setHidden:NO];
                [self.tableView reloadData];
            } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) {
                NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo);
            }];
            [operation start];
            [sender endRefreshing];
        }

        - (void)viewDidUnload {
            [super viewDidUnload];
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            return YES;
        }

        // Table View Data Source Methods
        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            if (self.movies && self.movies.count) {
                return self.movies.count;
            } else {
                return 0;
            }
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellID = @"Cell Identifier";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID];
            if (!cell) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellID];
            }
            NSDictionary *movie = [self.movies objectAtIndex:indexPath.row];
            cell.textLabel.text = [movie objectForKey:@"trackName"];
            cell.detailTextLabel.text = [movie objectForKey:@"artistName"];
            NSURL *url = [[NSURL alloc] initWithString:[movie objectForKey:@"artworkUrl100"]];
            [cell.imageView setImageWithURL:url placeholderImage:[UIImage imageNamed:@"placeholder"]];
            return cell;
        }

        @end
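    The usual refactoring is to move the networking and parsing into a model/store class and leave the view controller to consume an array of ready-made model objects. The shape of that split is language-independent; here is a minimal sketch in Python (class and method names are invented for illustration, not taken from any framework):

        import json
        from urllib.request import urlopen

        class Movie:
            """Model: one movie record, built from a parsed JSON dictionary."""
            def __init__(self, record):
                self.track_name = record.get("trackName")
                self.artist_name = record.get("artistName")
                self.artwork_url = record.get("artworkUrl100")

        class MovieStore:
            """Store: owns fetching and parsing; knows nothing about views."""
            SEARCH_URL = "https://itunes.apple.com/search?term={}&country=us&entity=movie"

            def fetch(self, term):
                with urlopen(self.SEARCH_URL.format(term)) as resp:
                    data = json.load(resp)
                return [Movie(r) for r in data.get("results", [])]

        # "Controller": asks the store for data, then only updates the view.
        def reload_table(movies):   # stand-in for [self.tableView reloadData]
            for m in movies:
                print(m.track_name, "-", m.artist_name)

        reload_table(MovieStore().fetch("rocky"))

    With that split, the view controller shrinks to wiring: it calls the store, hands the result to the table view, and never touches URLs or JSON keys.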

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code are available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions, and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about VSIX extensions is how they are deployed: Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast, since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, yet you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content, of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension. Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more: this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) (A sketch of that feed generation closes this post.)

    Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service:
    Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward: just select the web site, application pool and (optionally) a virtual directory where you want to install the service. Note: if you want to run it in the web site root, just leave the application name blank. Press Next and finish the installer.
    Open web.config in a text editor and locate the <applicationSettings> element, then edit the following setting values:
    FeedTitle - this is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    BaseURI - when Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be in the following format: http://SERVER/[VDIR]/gallery/extension/
    VSIXAbsolutePath - this is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    Save web.config to finish the installation. Open a browser and enter the URL to the service. It should show an empty Feed page.

    Adding the Private Gallery in Visual Studio 2012:
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: go to Tools –> Options and select Environment –> Extensions and Updates, press Add to add a new gallery, enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings.

    Deploying an Extension:
    This one is easy: just drop the file in the designated folder! :-) If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery. I hope that you will find this server useful; please contact me if you have questions or suggestions for improvements!
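    For a feel of what such a service does internally, here is a minimal sketch that scans a folder of .vsix packages and emits an Atom feed. It is illustrative only: the feed fields are simplified and do not reproduce the exact schema the Inmeta service implements, and the two paths mirror the BaseURI and VSIXAbsolutePath settings above.

        import os
        import time
        from xml.sax.saxutils import escape

        BASE_URI = "http://SERVER/gallery/extension/"   # mirrors the BaseURI setting
        VSIX_DIR = "/srv/gallery"                       # mirrors VSIXAbsolutePath

        def atom_feed(title="Inmeta Gallery"):
            entries = []
            for name in sorted(os.listdir(VSIX_DIR)):
                if not name.endswith(".vsix"):
                    continue
                # The file's mtime drives the <updated> element, so dropping a
                # new version into the folder is enough to announce an update.
                mtime = os.path.getmtime(os.path.join(VSIX_DIR, name))
                updated = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(mtime))
                entries.append(
                    "<entry><title>{0}</title>"
                    "<content type=\"application/octet-stream\" src=\"{1}{0}\"/>"
                    "<updated>{2}</updated></entry>".format(
                        escape(name), BASE_URI, updated))
            return ('<?xml version="1.0" encoding="utf-8"?>'
                    '<feed xmlns="http://www.w3.org/2005/Atom">'
                    '<title>{}</title>{}</feed>').format(escape(title), "".join(entries))

        print(atom_feed())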

    Read the article

  • BlockingCollection having issues with byte arrays

    - by MJLaukala
    I am having an issue where an object with a byte[20] is being passed into a BlockingCollection on one thread, and another thread returns the object with a byte[0] using BlockingCollection.Take(). I think this is a threading issue, but I do not know where or why this is happening, considering that BlockingCollection is a concurrent collection. Sometimes on thread2, myclass2.mybytes equals byte[0]. Any information on how to fix this is greatly appreciated.

    MessageBuffer.cs:

        public class MessageBuffer : BlockingCollection<Message>
        {
        }

    In the class that has Listener() and ReceivedMessageHandler(object messageProcessor):

        private MessageBuffer RecievedMessageBuffer;

    On Thread1:

        private void Listener()
        {
            while (this.IsListening)
            {
                try
                {
                    Message message = Message.ReadMessage(this.Stream, this);
                    if (message != null)
                    {
                        this.RecievedMessageBuffer.Add(message);
                    }
                }
                catch (IOException ex)
                {
                    if (!this.Client.Connected)
                    {
                        this.OnDisconnected();
                    }
                    else
                    {
                        Logger.LogException(ex.ToString());
                        this.OnDisconnected();
                    }
                }
                catch (Exception ex)
                {
                    Logger.LogException(ex.ToString());
                    this.OnDisconnected();
                }
            }
        }

    Message.ReadMessage(NetworkStream stream, iTcpConnectClient client):

        public static Message ReadMessage(NetworkStream stream, iTcpConnectClient client)
        {
            int ClassType = -1;
            Message message = null;
            try
            {
                ClassType = stream.ReadByte();
                if (ClassType == -1)
                {
                    return null;
                }
                if (!Message.IDTOCLASS.ContainsKey((byte)ClassType))
                {
                    throw new IOException("Class type not found");
                }
                message = Message.GetNewMessage((byte)ClassType);
                message.Client = client;
                message.ReadData(stream);
                if (message.Buffer.Length < message.MessageSize + Message.HeaderSize)
                {
                    return null;
                }
            }
            catch (IOException ex)
            {
                Logger.LogException(ex.ToString());
                throw ex;
            }
            catch (Exception ex)
            {
                Logger.LogException(ex.ToString());
                //throw ex;
            }
            return message;
        }

    On Thread2:

        private void ReceivedMessageHandler(object messageProcessor)
        {
            if (messageProcessor != null)
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage(messageProcessor);
                }
            }
            else
            {
                while (this.IsListening)
                {
                    Message message = this.RecievedMessageBuffer.Take();
                    message.Reconstruct();
                    message.HandleMessage();
                }
            }
        }

    PlayerStateMessage.cs:

        public class PlayerStateMessage : Message
        {
            public GameObject PlayerState;

            public override int MessageSize
            {
                get { return 12; }
            }

            public PlayerStateMessage() : base()
            {
                this.PlayerState = new GameObject();
            }

            public PlayerStateMessage(GameObject playerState)
            {
                this.PlayerState = playerState;
            }

            public override void Reconstruct()
            {
                this.PlayerState.Poisiton = this.GetVector2FromBuffer(0);
                this.PlayerState.Rotation = this.GetFloatFromBuffer(8);
                base.Reconstruct();
            }

            public override void Deconstruct()
            {
                this.CreateBuffer();
                this.AddToBuffer(this.PlayerState.Poisiton, 0);
                this.AddToBuffer(this.PlayerState.Rotation, 8);
                base.Deconstruct();
            }

            public override void HandleMessage(object messageProcessor)
            {
                ((MessageProcessor)messageProcessor).ProcessPlayerStateMessage(this);
            }
        }

    Message.GetVector2FromBuffer(int bufferlocation) - this is where the exception is thrown, because this.Buffer is byte[0] when it should be byte[20]:

        public Vector2 GetVector2FromBuffer(int bufferlocation)
        {
            return new Vector2(
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation),
                BitConverter.ToSingle(this.Buffer, Message.HeaderSize + bufferlocation + 4));
        }
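    The collection itself is thread-safe; the more likely culprit is enqueuing a message whose buffer was never filled. Note that ReadMessage's second catch block swallows the exception (the rethrow is commented out) and then still returns the partially built message. A minimal sketch of the intended contract, using Python's queue module as a stand-in for BlockingCollection: only fully constructed messages ever reach the consumer.

        import queue
        import threading

        message_buffer = queue.Queue()   # plays the role of BlockingCollection<Message>

        def read_message(stream):
            # Either return a complete message or None -- never a partial one.
            header = stream.read(4)
            if len(header) < 4:
                return None
            size = int.from_bytes(header, "big")
            body = stream.read(size)
            if len(body) < size:
                return None              # short read: drop it, don't enqueue it
            return header + body

        def listener(stream):            # producer thread (Thread1's role)
            while True:
                msg = read_message(stream)
                if msg is None:
                    break
                message_buffer.put(msg)  # only complete messages are queued

        def handler():                   # consumer thread (Thread2's role)
            while True:
                msg = message_buffer.get()   # blocks, like BlockingCollection.Take()
                print("got %d bytes" % len(msg))

        threading.Thread(target=handler, daemon=True).start()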

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets.

    Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:

        access_key=SOMETHING
        secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

        package amazonbuckets;

        import java.io.ByteArrayInputStream;
        import java.io.IOException;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.ParserConfigurationException;
        import org.netbeans.saas.amazon.AmazonS3Service;
        import org.netbeans.saas.RestResponse;
        import org.w3c.dom.DOMException;
        import org.w3c.dom.Document;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXException;

        public class AmazonBuckets {
            public static void main(String[] args) {
                try {
                    RestResponse result = AmazonS3Service.getBuckets();
                    String dataAsString = result.getDataAsString();
                    DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                    DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                    Document doc = dBuilder.parse(
                            new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                    NodeList bucketList = doc.getElementsByTagName("Bucket");
                    for (int i = 0; i < bucketList.getLength(); i++) {
                        Node node = bucketList.item(i);
                        System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                    }
                } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                }
            }
        }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code after drag/drop into a Java source file. I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.
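    Outside the IDE-generated client, the same bucket listing is a few lines with a standard SDK. As a point of comparison, here is a minimal sketch using boto3 (the AWS SDK for Python), assuming the access key and secret are configured in the environment or in ~/.aws/credentials:

        import boto3

        # Credentials come from the environment or ~/.aws/credentials --
        # the same access key / secret pair used in the properties file.
        s3 = boto3.client("s3")

        # list_buckets() returns the same data the XML parsing above extracts.
        for bucket in s3.list_buckets()["Buckets"]:
            print("Bucket Name:", bucket["Name"])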

    Read the article

  • F# Objects – Integration with the other .Net Languages – Part 2

    - by MarkPearl
    So in part one of my posting I covered the real basics of object creation. Today I will hopefully dig a little deeper… My expert F# book brings up an interesting point – properties in F# are just syntactic sugar for method calls. This makes sense… for instance, assume I had the following object with the exposed property Firstname:

        type Person(Firstname : string, Lastname : string) =
            member v.Firstname = Firstname

    I could extend the Firstname property with the following code and everything would be hunky dory…

        type Person(Firstname : string, Lastname : string) =
            member v.Firstname =
                Console.WriteLine("Side Effect")
                Firstname

    All that this would do is that each time I use the property Firstname, I would see the side effect printed to the screen saying "Side Effect". Member methods have a very similar look & feel to properties; in fact, the only difference really is that you declare that parameters are being passed in:

        type Person(Firstname : string, Lastname : string) =
            member v.FullName(middleName) = Firstname + " " + middleName + " " + Lastname

    In the code above, FullName requires the parameter middleName, and if viewed from another project in C# it would show as a method and not a property.

    Precomputation Optimizations

    Okay, so something that is obvious once you think of it, but that poses an interesting side effect of mutable value holders, is pre-computation of results. All it is, is a slight difference in code, but it can result in quite a huge saving in performance. Basically, pre-computation means you would not need to compute a value every time a method is called; you could perform the computation once, at the creation of the object (I hope I have got it right). In a way I battle to differentiate this from lazy evaluation, but I will show an example to explain the principle. Assume the following F# module:

        namespace myNamespace

        open System

        module myMod =
            let Add val1 val2 =
                Console.WriteLine("Compute")
                val1 + val2

            type MathPrecompute(val1 : int, val2 : int) =
                let precomputedsum = Add val1 val2
                member v.Sum = precomputedsum

            type MathNormalCompute(val1 : int, val2 : int) =
                member v.Sum = Add val1 val2

    Now assume you have a C# console app that makes use of the objects, with code similar to the following:

        using System;
        using myNamespace;

        namespace CSharpTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("Constructing Objects");
                    var myObj1 = new myMod.MathNormalCompute(10, 11);
                    var myObj2 = new myMod.MathPrecompute(10, 11);
                    Console.WriteLine("");

                    Console.WriteLine("Normal Compute Sum...");
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine(myObj1.Sum);
                    Console.WriteLine("");

                    Console.WriteLine("Pre Compute Sum...");
                    Console.WriteLine(myObj2.Sum);
                    Console.WriteLine(myObj2.Sum);
                    Console.WriteLine(myObj2.Sum);

                    Console.ReadKey();
                }
            }
        }

    The output when running the console application makes the difference visible: with the normal compute object, the system calls the Add function every time the property is read. With the precompute object, it only calls the compute method when the object is created. Subtle, but something that could lead to major performance benefits. So… this post has gone off on a slight tangent, but it is still related to F# objects.
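    The same trade-off exists in any language: compute once at construction time versus on every access. A minimal sketch of the two variants in Python, mirroring MathPrecompute and MathNormalCompute above:

        def add(a, b):
            print("Compute")            # side effect makes the call count visible
            return a + b

        class MathNormalCompute:
            def __init__(self, a, b):
                self.a, self.b = a, b
            @property
            def sum(self):
                return add(self.a, self.b)   # recomputed on every access

        class MathPrecompute:
            def __init__(self, a, b):
                self._sum = add(a, b)        # computed once, at construction
            @property
            def sum(self):
                return self._sum

        normal = MathNormalCompute(10, 11)   # prints nothing yet
        pre = MathPrecompute(10, 11)         # prints "Compute" once, here
        print(normal.sum, normal.sum)        # prints "Compute" twice more
        print(pre.sum, pre.sum)              # no further "Compute" lines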

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in Wordpress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients bandwidth requirements are low). Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc) have nice automation with git and are set up in the "Rails Way". I get that (sort of). But it's terribly expensive... a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like... the same prices as getting a server somewhere, right? Meanwhile, going back to what I guess is a "traditional" hosting environment with Hostgator, their server only has Ruby 1.8.7 and Rails 2.3.5... No Rails 3. AND, no Passenger (not that I really understand the difference in CGI or mod_rails or whatever, but they say Passenger is the simplest). So I'm to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static html and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof? Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but entails learning server admin stuff and security... And it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) may be inadequate for large-scale apps that use a lot of bandwidth... But what about for those of us who are building real (but small and low bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical; just trying to learn here. So: 1) I'd appreciate if someone could give me a good rundown of how to understand deployment in Rails vs. PHP. 2) I'd appreciate if someone could address my issue with running a hosting/web business around reseller hosting (Hostgator) while also being able to host Rails apps. Can it be done? And how can a company like Hostgator completely ignore what's current in Rails/Ruby? Thanks.

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently updated it from 11.04 to 11.10. There are some things I don't like in the new version of Unity desktop interface. I don't actually know if it is hard to restore previous behavior or not, and if it is not, where should I look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open web-page with some instruction in Firefox, press Alt+Tab and type commands in a console window, being able to recognize text on a web-page under it. Now I have problems with my usual work-style because of the following. List of "negative" changes Alt+Tab shows just one icon for all console windows. When I wait some time, it, however, shows all windows, but I don't like to wait. I prefer to remember order of windows and press Alt+Tab as many times as I need to switch to the right window. Alt+Shift+Tab to switch in reverse order doesn't work now. Console windows are not transparent any more. When I don't wait, and switch to this icon, it shows all console windows altogether. So even if they were transparent, I wouldn't be able to see anything below them (I can read something only from the window, which is directly under current one, not a few levels under). When I run a few console windows in Unity I had 740Mb used on Ubuntu 11.04, but I have 1050Mb now. The question is how to make it back to 750-. I really need my memory, since I use my computer to work with 1512Mb of data and I try to save every 10Mb possible (if it doesn't take too much of machine and, more importantly, my time). When I press "The Super key" I have a field to type the name of the program I want to run. But now it sometimes shows this field, but when I'm trying to type nothing happens. Probably, focus is not on the right field. I don't really mean to restore exactly the same behavior, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are some ways to accomplish that. What have I tried I have installed CompizConfig Settings Manager. I have read this question. However enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it It says about key-binding conflicts with "Ubuntu Unity Plugin"; "Alt+Tab" switching doesn't change, but "Shift+Alt+Tab" now works and shows all windows; Memory usage increases. I have tried turning off Ubuntu Unity Plugin, but this doesn't seem right thing to do, since it seems to turn off all menus, a lot of keystrokes and app-launcher, which usually activates with "The Super key". I have found, that window transparency can be enabled by "Opacity, Brightness and Saturation" plugin from Accessibility. However I don't know if enabling it is the right thing to do (at least it increases memory usage). Update: everything solved but #3: see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Application using JOGL stays in Limbo when closing

    - by Roy T.
    I'm writing a game using Java and OpenGL via the JOGL bindings. I noticed that my game doesn't terminate properly when closing the window, even though I've set the closing operation of the JFrame to EXIT_ON_CLOSE. I couldn't track down where the problem was, so I've made a small reproduction case. Note that on some computers the program terminates normally when closing the window, but on other computers (notably my own) something in the JVM keeps lingering; this causes the JFrame to never be disposed and the application to never exit. I haven't found anything in common between the computers that had difficulty terminating: all computers had Windows 7, Java 7 and the same version of JOGL, and some terminated normally while others had this problem. The test case is as follows:

        public class App extends JFrame implements GLEventListener {

            private GLCanvas canvas;

            @Override
            public void display(GLAutoDrawable drawable) {
                GL3 gl = drawable.getGL().getGL3();
                gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                gl.glClear(GL3.GL_COLOR_BUFFER_BIT);
                gl.glFlush();
            }

            // The overrides for dispose (the OpenGL one), init and reshape are empty

            public App(String title, boolean full_screen, int width, int height) {
                // snipped setting the width and height of the JFrame
                GLProfile profile = GLProfile.get(GLProfile.GL3);
                GLCapabilities capabilities = new GLCapabilities(profile);
                canvas = new GLCanvas(capabilities);
                canvas.addGLEventListener(this);
                canvas.setSize(getWidth(), getHeight());
                add(canvas);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); //!!!
                setVisible(true);
            }

            @Override
            public void dispose() {
                System.out.println("HELP");
            }

            public static void main(String[] args) {
                new App("gltut 01", false, 1280, 720);
            }
        }

    As you can see, this doesn't do much more than add a GLCanvas to the frame and register the main class as the GLEventListener. So what keeps lingering? I'm not sure. I've made some screenshots: the application running normally; the application after the JFrame is closed (note that the JVM still hasn't exited or printed a return code); and the application after it was force closed (note the return code -1, so it wasn't just the JVM standing by: the application really hadn't exited yet). So what is keeping the application in limbo? Might it be the circular reference between the GLCanvas and the JFrame? I thought the GC could figure that out. If so, how should I deal with that when I want to exit? Is there any other clean-up required when using JOGL? I've tried searching, but it doesn't seem to be necessary. Edit, to clarify: there are 2 dispose functions, dispose(GLAutoDrawable arg), which is a member of GLEventListener, and dispose(), which is a member of JFrame. The first one is called correctly (but I wouldn't know what to do there; destroying the GLAutoDrawable or the GLCanvas gives an infinite exception loop); the second one is never called.

    Read the article

  • Solaris OpenStack Horizon customizations

    - by GirishMoodalbail-Oracle
    In Oracle Solaris OpenStack Havana, we have customized the Horizon BUI by modifying the existing dashboard and panels to reflect only those features that we support. The modifications mostly involve:

    -- disabling a widget (checkbox, button, textarea, and so on)
    -- removal of a tab from a panel
    -- removal of options from pull-down menus

    The following table lists the customizations that we have made.

    Where                                                    | What
    ---------------------------------------------------------+-----------------------------------------------------
    Project => Instances => Launch Instance                  | Post-Creation tab is removed.
    Project => Instances => Actions => Edit Instance         | Security Groups tab is removed.
    Project => Instances => Instance Name                    | Console tab is removed.
    Project => Instances => Actions                          | The actions Console, Edit Security Groups, Pause Instance, Suspend Instance, Resize Instance, Rebuild Instance, and Migrate Instance are removed.
    Project => Access and Security                           | Security Groups tab is removed.
    Project => Images and Snapshots => Images => Actions     | Create Volume action is removed.
    Project => Networks => Create Network                    | Admin State is disabled and its value is always true.
    Project => Networks => Create Network => Subnet          | Disable Gateway checkbox is disabled, and its value is always false.
    Project => Networks => Create Network => Subnet Detail   | Allocation Pools and Host Routes text areas are disabled.
    Project => Networks => Network Name => Subnet => Actions | Edit Subnet action is removed.
    Project => Networks => Network Name => Ports => Actions  | Edit Port action is removed.
    Admin => Instances => Actions                            | The actions Console, Pause Instance, Suspend Instance, and Migrate Instance are removed.
    Admin => Networks => Actions                             | Edit Network action is removed.
    Admin => Networks => Subnets => Actions                  | Edit Subnet action is removed.
    Admin => Networks => Ports => Actions                    | Edit Port action is removed.
    Admin => Networks => Create Network                      | Admin State and Shared check boxes are disabled. The network's Admin State is always true, and Shared is always false.
    Admin => Networks => Network Name => Create Port         | Admin State check box is disabled and its value is always true.

    Read the article

  • JDK7 New Language Features

    - by Todd Bao
    With JSR 334 (Project Coin), JDK 7 adds several small language enhancements:

    1. switch now accepts String values:

        public static void switchString(String s){
            switch (s){
            case "db": ...
            case "wls": ...
            case "idm": ...
            case "soa": ...
            case "fa": ...
            default: ...
            }
        }

    2. New numeric literal forms: the "0b" binary prefix and the "_" digit separator make numeric constants easier to read.
    a. Binary literals with the 0b prefix:

        byte b1 = 0b00100001;   // New
        byte b2 = 0x21;         // Old
        byte b3 = 33;           // Old

    b. Underscores may be placed between digits to group them:

        long phone_nbr = 021_1111_2222;

    3. Diamond operator: the type arguments on the right-hand side of a generic instantiation can be inferred, so there is no need to repeat them after "new":

        ArrayList<String> al1 = new ArrayList<String>();    // Old
        ArrayList<String> al2 = new ArrayList<>();          // New

    4. A new superclass for the reflection-related exceptions, ReflectiveOperationException, covering: ClassNotFoundException, IllegalAccessException, InstantiationException, InvocationTargetException, NoSuchFieldException, NoSuchMethodException

    5. A single catch clause can now handle multiple exception types:

        try{
            // code
        }
        catch (SQLException | IOException ex) {
            // ...
        }

    6. More precise rethrow: a caught exception can be rethrown while the method declares only the specific checked exceptions the try block can actually produce, which allows the following style:

        public void test() throws NoSuchMethodException, NoSuchFieldException {    // specific types
            try{
                // code
            }
            catch (ReflectiveOperationException ex){    // broad catch
                throw ex;
            }
        }

    7. The new try() form, try-with-resources: any resource whose class implements java.lang.AutoCloseable can be declared between "(" and ")" after try, before "{", and is closed automatically:

        try(BufferedReader br = new BufferedReader(new FileReader("/home/oracle/temp.txt"))){
            ... br.readLine() ...
        }

    A try-with-resources statement can still have catch clauses, which also cover exceptions raised while the resources are being closed.

    Todd

    Read the article

  • CentOS 5.5 : Postfix, Dovecot & MySQL

    - by GruffTech
    I'm hoping someone has seen this issue before, because I'm at quite a loss. We're building a new outbound SMTP server for our clients that features anti-spam scanning and virus scanning for outbound emails, something we had not previously done. So: CentOS 5.5 x64, installed and patched completely. Postfix & Dovecot both installed via the base repo:

        [grufftech@outgoing postfix]# rpm -qa | grep postfix
        postfix-2.3.3-2.1.el5_2
        [grufftech@outgoing postfix]# rpm -qa | grep dovecot
        dovecot-1.0.7-7.el5
        [grufftech@outgoing ~]# dovecot --build-options
        Build options: ioloop=poll notify=inotify ipv6 openssl
        SQL drivers: mysql postgresql
        Passdb: checkpassword ldap pam passwd passwd-file shadow sql
        Userdb: checkpassword ldap passwd prefetch passwd-file sql static

    /etc/dovecot.conf:

        auth default {
          mechanisms = plain login digest-md5 cram-md5
          passdb sql {
            args = /etc/dovecot-mysql.conf
          }
          userdb sql {
            args = /etc/dovecot-mysql.conf
          }
          userdb prefetch {
          }
          user = nobody
          socket listen {
            master {
              path = /var/run/dovecot/auth-master
              mode = 0660
              user = postfix
              group = postfix
            }
            client {
              path = /var/spool/postfix/private/auth
              mode = 0660
              user = postfix
              group = postfix
            }
          }
        }

    All the server is doing is auth for postfix, so there is no reason to have imap / pop / dict.

    /etc/dovecot-mysql.conf:

        driver = mysql
        connect = host=10.0.32.159 dbname=mail user=****** password=********
        default_pass_scheme = plain
        user_query = select 1
        password_query = select password from users where username = '%n' and domain = '%d'

    So I drop in my configuration (which is working on another server identical to this one):

        [grufftech@outgoing ~]# /etc/init.d/dovecot start
        Starting Dovecot Imap:                                     [  OK  ]

    Sweet. Booted up nicely, that's good.... (incoming problem in 3....2....1....)

        May 21 08:09:01 outgoing dovecot: Dovecot v1.0.7 starting up
        May 21 08:09:02 outgoing dovecot: auth-worker(default): mysql: Connect failed to 10.0.32.159 (mail): Can't connect to MySQL server on '10.0.32.159' (13) - waiting for 1 seconds before retry

    Well, what the crap. I went and checked permissions on my MySQL database, and it's fine:

        [grufftech@outgoing ~]# mysql vpopmail -h 10.0.32.159 -u ****** -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 127828558
        Server version: 4.1.22
        Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
        mysql> \q

    So! My server can talk to my database server, but dovecot, for whatever reason, isn't able to. I've fiddled with it for the last six hours: grabbed slightly-older copies of the RPM (ones that matched our production server exactly) to test those, copied configs, searched Google, searched Server Fault, chatted in IRC, banged my head against the table. I've done it all. Surely I'm doing something wrong or forgetting something; can anyone tell me what the elephant in the room is? This stuff is supposed to work.
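    One observation worth checking: the (13) in the MySQL client error is OS errno 13 (EACCES, "permission denied"), not a MySQL authentication failure, which suggests something on the dovecot host itself is blocking the daemon's outbound connection (on CentOS, SELinux policy for daemons is a common suspect). A quick way to separate the two cases is a raw socket test run as the same user the auth worker runs as; this is a minimal sketch, assuming Python is available on the server:

        import socket

        # Try the same TCP connection dovecot's auth worker makes. A clean
        # connect means credentials/config are the issue; errno 13 (EACCES)
        # means the OS itself refused -- look at SELinux or firewall policy.
        try:
            s = socket.create_connection(("10.0.32.159", 3306), timeout=5)
            print("TCP connect OK:", s.getpeername())
            s.close()
        except OSError as err:
            print("connect failed, errno %s: %s" % (err.errno, err))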

    Read the article

  • SQL Server Agent was not running on Server Dynamics CRM 2013

    - by No1_Melman
    I'm trying to install Dynamics CRM 2013 on a server. This server is on a VM. There are several other VMs: an ADDS & DNS server, a MSSQL server and a WebServer VM. Each server is Windows Server 2012 R2, and the SQL Server is 2012 Enterprise. Each VM is part of the main domain, set by the ADDS & DNS; nslookup confirms I can see the computer at the right IP address. Each separate VM has its own static IP, and its DNS is set to the ADDS & DNS server. I use the domain administrator to log into all the servers, and make that domain administrator a local administrator.

    I've set up all the domain users for the CRM and gave them appropriate permissions, and I have also added the accounts to the appropriate places, such that the CRM deployment user is in the SQL security. The SQL Server Agent is running. SQL Server Configuration Manager has TCP/IP enabled under SQL Server Network Configuration to allow remote connections. The SQL Server has the domain user as an administrator, which is the same user being used to install the CRM.

    In the CRM setup I point to [Servername]\[Instance], and I have also tried just [Servername]. To make this easier I called the server MSSQL and left the instance name at the default. I even installed the MSSQL instance as the domain administrator. CRM can find the ReportServer URL. I have enabled all the required ports, including 135, 1433, 1434, 2382, 2383 and 4022 (TCP), and 1434 (UDP).

    I feel like I have absolutely done everything. I have Googled many times and tried all the different methods, and for the life of me I can't seem to get the CRM setup to find the SQL Server Agent. It passes everything else perfectly fine, and I can even ping the MSSQL server. What is the problem? Why does the CRM setup keep giving the error:

        SQLSERVERAGENT (SQLSERVERAGENT) service is not running on the server MSSQL

    On the MSSQL server, the name of the SQL Server Agent service is:

        SQL Server Agent (MSSQLSERVER)

    Read the article

  • Hyper-V + RRAS NAT + Port Forwarding + RDP, can I get it all working together?

    - by Tom Bull
    I am running a Windows 2008 R2 server with various services running natively and two virtualised servers running on Hyper-V. The hardware server, I'm going to call it REAL1, has one external NIC, to which I can assign any of the following IP addresses: 1.2.3.4, 1.2.3.5, 1.2.3.6, etc... I need to achieve the following: I would like to be able to connect to REAL1 via remote desktop (RDP / port 3389) on one IP address (say 1.2.3.4), but also to the virtualised servers (I'm going to call them VIRTUAL1 and VIRTUAL2) on the other available IP addresses (say 1.2.3.5 and 1.2.3.6). The easiest way of doing this is to connect the virtual servers directly to the external interface and assign them each their own IP address. REAL1 will have 1.2.3.4, VIRTUAL1 will have 1.2.3.5 and VIRTUAL2 will have 1.2.3.6. Unfortunately, although I don't directly manage the two virtual servers, I have responsibility for their security. I would like to have some kind of firewall between the virtual servers an the internet. I have tried running a virtual machine firewall, but have found the performance on Hyper-V pretty terrible. The alternative I am now trying is Routing and Remote Access (RRAS): I have set up a virtual network called 'Internal' and REAL1 has a virtual network adapter connected to this virtual network I have connected each of the virtual servers to this network too I have assigned each server static IP addresses on this virtual network (REAL1 has 10.1.1.1, VIRTUAL1 has 10.1.1.2 and VIRTUAL2 has 10.1.1.3) I have installed RRAS and set up a NAT. The external interface is the external NIC, the internal interface is the virtual NIC connected to the internal network I have assigned all the available external IP addresses to the external NIC on REAL1. The virtual servers have been set up appropriately such that their default gateway is pointing to 10.1.1.1 and they can both access externally. Success! The RRAS is routing packets. The problem I have is that when I try to port forward services from the external IP address on REAL1, it only works if there is not already a service bound to the port. Remote desktop 'greedily' binds to every available IP address on port 3389 on REAL1 so I can't selectively forward incoming traffic for 1.2.3.5:3389 to 10.1.1.2:3389. RRAS will allow me to set up this port forwarding, and no errors come up. It just doesn't work. So the question I have is: Is there a better way of doing this? Or at least is there a way of resolving the apparant conflict between RRAS and everything else on the physical server?

    Read the article

  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uwsgi configs. That is, I have ers_portal_nginx.conf located in my app folder, symlinked into /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / {
                try_files $uri @app;
            }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group

        #application's base folder
        base = /home/metheuser/webapps/ers_portal

        #python module to import
        app = run_web
        module = %(app)
        home = %(base)/ers_portal_venv
        pythonpath = %(base)

        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock

        #permissions for the socket file
        chmod-socket = 666

        #uwsgi varible only, does not relate to your flask application
        callable = app

        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            # data is a dictionary of data
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names, data = data)

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uwsgi log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the "my_table" route prints into the log file. But not once it stops working. Any ideas?
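    One thing worth hardening while debugging: reload_data() mutates module-level globals from request context, and each uWSGI worker keeps its own copy of them. A thread-safe version of the same mtime-cache pattern looks like the sketch below; the data file name is a placeholder, since the original is built from ec.dd_filename, which is not shown in the post.

        import os
        import pickle
        import threading

        _lock = threading.Lock()
        _data_modification_time = None
        _data = None

        DATA_FILE = '/home/metheuser/webapps/ers_portal/app/static/data.pkl'  # placeholder name

        def reload_data():
            """Re-read the pickle only when its mtime changes; safe under threads."""
            global _data_modification_time, _data
            with _lock:
                mtime = os.stat(DATA_FILE).st_mtime   # raises OSError if the file vanished
                if _data_modification_time != mtime:
                    with open(DATA_FILE, 'rb') as f:  # 'rb': pickles are binary data
                        _data = pickle.load(f)
                    _data_modification_time = mtime
                return _data

    Wrapping the stat/load in a lock also means a failed load leaves the old data in place instead of surfacing as a bare 500.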

    Read the article

  • Creating a dynamic lacp trunk from HP Procurve 2412zl to Proliant DL380 G7

    - by Maalobs
    I'm configuring an IEEE 802.3ad (LACP) dynamic trunk from a HP Procurve 2412zl (firmware version K.15.07) switch to a HP Proliant DL380 G7 server. The DL380 has 4 NICs and is running Win2008 R2, so I'm teaming the NICs together and leaving everything on the recommended "automatic" setting in the HP NIC configuration tool. The server is one of two; they'll be connected on interfaces F17-F20 and F21-F24 respectively on the switch. I need the servers in a separate VLAN. Here is the configuration for the VLAN:

        vlan 10
           name "Lab_Mgmt"
           untagged B2,F17-F24
           ip address 172.22.71.3 255.255.255.0
           tagged B21
           exit

    There is a DHCP relay into VLAN 10 from another device beyond interface B21. The Advanced Traffic Management Guide says that in order to run a dynamic LACP trunk on another VLAN besides the DEFAULT_VLAN, you need to first enable GVRP and then use "forbid" to stop the interfaces from automatically joining DEFAULT_VLAN when the dynamic trunk is created. GVRP brings some other stuff with it that I don't want or need, so I disable it with "unknown-vlans disable" on all other interfaces. Here is how I do it:

        procurve-5412zl-1(config)# gvrp
        procurve-5412zl-1(config)# interface A1-A24,B1-B24,C1-C24,D1-D10,D13-D24,E1-E24,F1-F16,K1,K2 unknown-vlans disable
        procurve-5412zl-1(config)# vlan 1 forbid F17-F24
        procurve-5412zl-1(config)# interface F17-F20 lacp active

    The result afterwards looks all successful:

        procurve-5412zl-1(config)# show trunks
        Load Balancing Method: L3-based (Default), L2-based if non-IP traffic

          Port | Name                             Type      | Group  Type
          ---- + -------------------------------- --------- + ------ --------
          F17  | XYZTEAM3_NIC1                    100/1000T | Dyn2   LACP
          F18  | XYZTEAM3_NIC2                    100/1000T | Dyn2   LACP
          F19  | XYZTEAM3_NIC3                    100/1000T | Dyn2   LACP
          F20  | XYZTEAM3_NIC4                    100/1000T | Dyn2   LACP

        procurve-5412zl-1(config)# vlan 10
        procurve-5412zl-1(vlan-10)# show lacp
                                   LACP
                 LACP      Trunk     Port      LACP      Admin     Oper
          Port   Enabled   Group     Status    Partner   Status    Key       Key
          ----   -------   -------   -------   -------   -------   ------    ------
          F17    Active    Dyn2      Up        Yes       Success   0         0
          F18    Active    Dyn2      Up        Yes       Success   0         0
          F19    Active    Dyn2      Up        Yes       Success   0         0
          F20    Active    Dyn2      Up        Yes       Success   0         0

    On the Proliant server, the NIC configuration tool is also indicating that an 802.3ad dynamic trunk has been established. Everything should be good, but it isn't: the server is not getting an IP address from DHCP, which it does if I don't enable LACP. If I configure the server with a static IP address on the VLAN 10 subnet, it can't even ping the switch IP address, much less anything outside the VLAN. The switch can't ping the server either. I made another attempt with F17-F20 tagged, and checked the box "Default Native Tag (VLAN 10)" in the NIC configuration tool on the server, but there was no difference. Does anyone have any idea what I might have missed?

    Read the article

  • Automating video generation by adding an intro and a trailing video to the main video

    - by DevDewboy
    I have a video project I am trying to compile. Here is the overview: I have many videos which are 5-minute training sessions, the Main video. The Intro video will be a standard 5-second video that will have the video title and author. This will be concatenated to the main video. The Trailing video will pretty much be a stock video that will be concatenated to the main video and have all the legaleze etc. The Intro vid will smoothly fade into the main vid, and when you get to the end of the main video it will fade into the Trailing video nicely. The product is a new video with an Intro, Main & Trailer all in one! The concept is really that simple. In fact I found an example of a person who has solved this and is doing exactly what I want. This solution is a Bash script that takes a config file that has the title, author, etc. and generates the Intro, the Ending and creates the resulting video with them concatenated.

    I am using Ubuntu 12.04 Server. I have been trying to take this as a sample and just run it, with no luck, because of incompatibility errors. I even attempted to convert it using .MP4 containers or .MKV. I am running into error after error or incompatibility issues. I went as far as changing out the ffmpeg binary using the 25 Oct 2013 version from http://ffmpeg.gusari.org/static/64bit/, which I like as I don't have to worry about rebuilding the binary. Almost successful, but again I have some error which I cannot solve. I know part of the problem is the fact that video production, codecs, and formats are a completely new field for me, so I am attempting to work through this new territory. Perhaps an expert here has something similar that I can use as a guideline that uses MP4 or h.264 format, or can take the solution above from the URL and make it work with a more up-to-date version of ffmpeg. I will include the script and its parameter file and the output (abbreviated because of limitation) below. Basically, as the script stands right now, when run I get the error [matroska,webm @ 0x27bbee0] Read error. This error is returned from the 'reasembleVideo' routine, from the first ffmpeg command.

    The following is the parameter file:

        #!/bin/bash
        INPUTFILE="ssh_main.mp4"
        LOGO="logo.png"
        LOGOLENGTH="1"
        SPEAKER="Jason"
        TITLE="Basic SSH Video"
        DATE="October 28, 2013"
        SCENESTART="00:00:01"
        SCENEDURATION="00:00:09"
        OUTPUTFILE="ssh_basic_1"

    The following is the script I am running. The ${OUTPUTFILE} being used is a small 2-minute video I created in Screencast-O-Matic in MP4 format. Script on PasteBin (too long for Super User post).
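
    For the plain concatenation step (before worrying about the fades), a minimal sketch using ffmpeg's concat demuxer, assuming all three clips already share the same codec, resolution, and frame rate; the intro and trailer file names here are illustrative:

        # parts.txt lists the clips in playback order
        cat > parts.txt <<EOF
        file 'intro.mp4'
        file 'ssh_main.mp4'
        file 'trailer.mp4'
        EOF

        # stream-copy join; no re-encoding happens when the streams match
        ffmpeg -f concat -i parts.txt -c copy ssh_basic_1.mp4

    If the generated intro/trailer and the main video differ in codec or container, re-encoding each piece to one common format first is the usual fix, and it also tends to avoid read errors like the matroska/webm one above.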

    Read the article

  • ubuntu bind9 AppArmor read permission denied (chroot jail)

    - by Richard Whitman
    I am trying to run bind9 with a chroot jail. I followed the steps mentioned at: http://www.howtoforge.com/debian_bind9_master_slave_system. I am getting the following errors in my syslog:

        Jul 27 16:53:49 conf002 named[3988]: starting BIND 9.7.3 -u bind -t /var/lib/named
        Jul 27 16:53:49 conf002 named[3988]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jul 27 16:53:49 conf002 named[3988]: adjusted limit on open files from 4096 to 1048576
        Jul 27 16:53:49 conf002 named[3988]: found 4 CPUs, using 4 worker threads
        Jul 27 16:53:49 conf002 named[3988]: using up to 4096 sockets
        Jul 27 16:53:49 conf002 named[3988]: loading configuration from '/etc/bind/named.conf'
        Jul 27 16:53:49 conf002 named[3988]: none:0: open: /etc/bind/named.conf: permission denied
        Jul 27 16:53:49 conf002 named[3988]: loading configuration: permission denied
        Jul 27 16:53:49 conf002 named[3988]: exiting (due to fatal error)
        Jul 27 16:53:49 conf002 kernel: [74323.514875] type=1400 audit(1343433229.352:108): apparmor="DENIED" operation="open" parent=3987 profile="/usr/sbin/named" name="/var/lib/named/etc/bind/named.conf" pid=3992 comm="named" requested_mask="r" denied_mask="r" fsuid=103 ouid=103

    Looks like the process cannot read the file /var/lib/named/etc/bind/named.conf. I have made sure that the owner of this file is user bind, and it has read/write access to it:

        root@test:/var/lib/named/etc/bind# ls -atl
        total 64
        drwxr-xr-x 3 bind bind 4096 2012-07-27 16:35 ..
        drwxrwsrwx 2 bind bind 4096 2012-07-27 15:26 zones
        drwxr-sr-x 3 bind bind 4096 2012-07-26 21:36 .
        -rw-r--r-- 1 bind bind  666 2012-07-26 21:33 named.conf.options
        -rw-r--r-- 1 bind bind  514 2012-07-26 21:18 named.conf.local
        -rw-r----- 1 bind bind   77 2012-07-25 00:25 rndc.key
        -rw-r--r-- 1 bind bind 2544 2011-07-14 06:31 bind.keys
        -rw-r--r-- 1 bind bind  237 2011-07-14 06:31 db.0
        -rw-r--r-- 1 bind bind  271 2011-07-14 06:31 db.127
        -rw-r--r-- 1 bind bind  237 2011-07-14 06:31 db.255
        -rw-r--r-- 1 bind bind  353 2011-07-14 06:31 db.empty
        -rw-r--r-- 1 bind bind  270 2011-07-14 06:31 db.local
        -rw-r--r-- 1 bind bind 2994 2011-07-14 06:31 db.root
        -rw-r--r-- 1 bind bind  463 2011-07-14 06:31 named.conf
        -rw-r--r-- 1 bind bind  490 2011-07-14 06:31 named.conf.default-zones
        -rw-r--r-- 1 bind bind 1317 2011-07-14 06:31 zones.rfc1918

    What could be wrong here?
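
    The kernel audit line points at AppArmor rather than file ownership: Ubuntu's named profile allows the stock /etc/bind paths, not their copies under the chroot. A hedged sketch of the usual fix (the profile path is standard on Ubuntu; the exact rules are an assumption for this particular chroot layout):

        # add inside the profile block of /etc/apparmor.d/usr.sbin.named:
          /var/lib/named/** r,
          /var/lib/named/etc/bind/** rw,
          /var/lib/named/var/cache/bind/** rw,

        # then reload the profile and restart bind9
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named
        sudo service bind9 restart

    The apparmor="DENIED" ... denied_mask="r" entry should disappear from the kernel log once the profile covers the chroot tree.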

    Read the article

  • Firefox 3.5.6 causes entire computer to freeze

    - by Anthony Aziz
    Here's the situation.

    Environment: just installed a fresh copy of Win7 Pro 32-bit to an NTFS partition on a 750GB SATA drive.

    Hardware:
    E8400 3GHz
    ASUS P5QL Pro
    4GB DDR2 1066 RAM
    EVGA 9800 GTX+
    Plenty of cooling, no problems with hardware before.
    Data is stored on a separate partition, including My Documents. No security software is yet installed. No extensions installed yet.

    Problem: while using Firefox, sometimes the entire computer will freeze/hang. I get no mouse or keyboard input, can't CTRL+ALT+DEL, no "not responding" indication, just a static image on my display. My drivers are all up to date as far as I'm aware (I just installed this copy of Windows last week). I first noticed this when trying to install Xmarks. I went to the Xmarks site and tried to install, and it would freeze. I managed to get it installed (safe mode and the Mozilla addon site worked), but when I go to configure it (log in, etc.), the computer freezes. I don't think it's a matter of usage time or memory issues, because while testing, I browsed wallpaper galleries for about 30 minutes, sometimes with as many as 12-15 tabs open at a time, without issue. Sometimes I won't even try to install Xmarks and it will hang. I can install (some) other extensions; the only one I've tried is Download Statusbar (which works).

    What I've done to try to fix it:
    Restarted (duh)
    Windows safe mode
    Completely removed Firefox and installed it to a new directory, according to Mozilla's KB (I haven't tried the profile manager, though I assume this does the same thing, except perhaps more thoroughly)
    Some BIOS changes, including power options and disabling overclocking (it was a modest overclock on the CPU, which has run Win7 beta and RC for almost a year now)
    Memtest
    Used another Windows user profile, same tragic results

    I'm STUCK now, with no idea what to do. I'm using Chrome as my main browser at the moment, but that's not something I want to be stuck with. I like Firefox and want to use it. I'm going to try creating a new profile first. One thing I did notice: I started leaving Task Manager and Performance Monitor open when anticipating (but dreading) a freeze. firefox.exe had low CPU and low memory, but it looked like overall disk usage was seeing some spikes on the small graph Performance Monitor gives you. I saw in one blog post that a fellow using XP moved his Local Settings directory from a separate drive to his main drive, and that solved it, but I don't think my AppData directory is on my D: drive, and that's on the same physical device anyway. Still, something that might be worth trying. I'd greatly appreciate any help. Thanks very much. I really don't want to reinstall Windows from scratch again :( Anthony Aziz
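
    Since a fresh profile is the next test anyway, a quick sketch of reaching Firefox's profile manager from Start > Run (the -ProfileManager flag is standard Firefox; the install path shown is just the usual 32-bit default and may differ):

        "C:\Program Files\Mozilla Firefox\firefox.exe" -ProfileManager

    A new profile rules the profile data in or out without touching the existing one, which is a cheaper test than another full reinstall.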

    Read the article

  • Confusion on networking service start/stop in Ubuntu

    - by Daniel Ball
    I'm preparing to move and took down two of my servers, leaving only one with some essential services running. What I neglected to consider was that one was the DHCP server (which I realized when somebody contacted me saying they couldn't connect. Whups). So because I only have a few hosts on this small network, I opted to just statically configure them for now. One of these is a new Ubuntu 11.04 server, where I have very little experience. I edited /etc/network/interfaces and /etc/hosts to reflect my changes. I ran:

        $ sudo /etc/init.d/networking stop
         * Deconfiguring network interfaces...

    So yay. Then I try to start, and it gives me the mumbo jumbo about using services (why didn't it do that for the stop?). So instead I run:

        $ sudo service networking start
        networking stop/waiting

    Now, to me that says the status of the service is stopped. But when I ping another computer, I get a successful reply. So is it not actually stopped? More importantly, am I doing something wrong?

    Edit:

        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking stop
        stop: Unknown instance:
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking start
        networking stop/waiting
        daniel@FOOBAR:~$ sudo service networking status
        networking stop/waiting

    So you can see why I ran /etc/init.d/networking stop instead. For some reason upstart (that is what "services" is, right?) isn't working with stop.

        cat /etc/hosts
        127.0.0.1   localhost
        127.0.1.1   FOOBAR
        198.3.9.2   FOOBAR   #Added entry July 19 2011

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

        cat /etc/network/interfaces
        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        #auto eth0
        #iface eth0 inet dhcp
        #   hostname FOOBAR
        auto eth0
        iface eth0 inet static
            address 198.3.9.2
            netmask 255.255.255.0
            network 198.3.9.0
            broadcast 198.3.9.255
            gateway 198.3.9.15

    No, I didn't save backups; it was just a minor change, so I just commented out the old DHCP setting.

    Edit: I set everything back to the original settings and set up a DHCP server. "Starting" networking does the same thing. I can only assume this is normal, I just don't know WHY. It can't be anything to do with the configuration files, since they've been restored.
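
    For what the poster is actually trying to do, a sketch of the usual way to re-apply /etc/network/interfaces to a single interface on this release; "networking" is an Upstart task here, and "stop/waiting" is a task's normal resting state rather than an error:

        # take eth0 down and bring it back up with the current /etc/network/interfaces
        sudo ifdown eth0 && sudo ifup eth0

    Because a task runs to completion and exits, it never shows "start/running" the way a long-lived service does, which is why the status output looks stopped while the network keeps working.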

    Read the article

  • TrueCrypt drive letter not available

    - by Tono Nam
    With C# or a batch file I mount a TrueCrypt volume located at A:\volumeTrueCrypt.tc. With C# I do:

        static void Main(string[] args)
        {
            var p = Process.Start(
                fileName: @"C:\Program Files\TrueCrypt\TrueCrypt.exe",
                arguments: @"/v a:\volumetruecrypt.tc /lw /a /p truecrypt"
            );
            p.WaitForExit();
        }

    The alternative is to run the command on the command line as:

        C:\Windows\system32>"C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lw /a /p truecrypt

    Either way I get the error. Why do I get that error? I was able to run that command the first time. The moment I dismounted the volume and tried to mount it again, I got that error. I know that drive letter W is available because it shows as an available letter in TrueCrypt if I were to open it manually. If I were to then click on the Mount button and type the password truecrypt (truecrypt is the password), it will successfully mount on drive W. Why am I not able to mount it from the command line!? If I change the drive letter on the command line, it works. I want to use drive W though. In other words, executing

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /lz /a /p truecrypt

    will successfully mount that volume on drive Z, but I do not want to mount it on drive Z, I want to mount it on drive W. The first time I ran the batch it ran fine. Also, if I restart my computer, I believe it should work. More info on how to use TrueCrypt through the command line can be found at: http://www.truecrypt.org/docs/?s=command-line-usage

    Edit: I was also investigating when this error occurs. In order to generate this error you need to follow these steps:

    1) Execute the command (note the /q argument at the end, for quiet):

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt /q

        "C...TrueCrypt.exe" = location where TrueCrypt is located
        /v "path" = location where the volume is located
        /ln = drive letter n
        /p truecrypt = password is "truecrypt"
        /q = execute in quiet mode, do not show window

    Note I am mounting to drive letter n.

    2) Now the volume should be mounted.

    3) Open TrueCrypt and manually dismount that volume (without using the command line).

    4) Attempt to run the same command line (without the /q, so you see the error):

        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /v "a:\volumetruecrypt.tc" /ln /a /p truecrypt

    5) An error should show up.

    So the problem occurs when I manually dismount the volume. If I dismount it from the command line, I get no errors. But I think this is a bug in TrueCrypt.
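
    A hedged workaround sketch, assuming the letter only sticks after a GUI dismount: dismount from the command line as well, using TrueCrypt's documented /d switch (the drive letter here matches the question):

        REM dismount the volume on W: and free the letter before remounting
        "C:\Program Files\TrueCrypt\TrueCrypt.exe" /d w /q

    Scripting both the mount and the dismount through the CLI keeps TrueCrypt's own bookkeeping consistent, which by the poster's own step 5 is where the error comes from.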

    Read the article

  • UNIX - mount: only root can do that

    - by Travesty3
    I need to allow a non-root user to mount/unmount a device. I am a total noob when it comes to UNIX, so please dumb it down for me. I've been looking all over teh interwebz to find an answer and it seems everyone is giving the same one, which is to modify /etc/fstab to include that device with the 'user' option (or 'users', tried both). Cool, well I did that and it still says "mount: only root can do that". Here are the contents of my fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'vol_id --uuid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # proc /proc proc defaults 0 0
        # / was on /dev/mapper/minicc-root during installation
        UUID=1a69f02a-a049-4411-8c57-ff4ebd8bb933 /      ext3 relatime,errors=remount-ro 0 1
        # /boot was on /dev/sda5 during installation
        UUID=038498fe-1267-44c4-8788-e1354d71faf5 /boot  ext2 relatime 0 2
        # swap was on /dev/mapper/minicc-swap_1 during installation
        UUID=0bb583aa-84a8-43ef-98c4-c6cb25d20715 none   swap sw 0 0
        /dev/scd0  /media/cdrom0   udf,iso9660 user,noauto,exec,utf8 0 0
        /dev/scd0  /media/floppy0  auto rw,user,noauto,exec,utf8 0 0
        /dev/sdb1  /mnt/sdcard     auto auto,user,rw,exec 0 0

    My thumb drive partition shows up as /dev/sdb1. I'm pretty sure my fstab is set up OK, but everyone in the other posts seems to fail to mention how they actually call the 'mount' command once this entry is in the fstab file. I think this is where my problem may be. The command I use to mount the drive is:

        $ mount /dev/sdb1 /mnt/sdcard

    /bin/mount is owned by root, is in the root group, and has 4755 permissions. /bin/umount is owned by root, is in the root group, and has 4755 permissions. /mnt/sdcard is owned by me, is in one of my groups, and has 0755 permissions. My mount command works fine if I use sudo, but I need to be able to do this without sudo (need to be able to do it from a PHP script using shell_exec). Any suggestions? Sorry for making you read so much... just trying to get as much info in the initial post as possible to preemptively answer questions about configuration stuff. If I missed anything tho, ask away. Thanks! -Travis
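
    One detail worth sketching, since the fstab entry itself looks fine: with the user option, mount only honors a non-root caller when it is given just the mount point (or just the device), so it can look the rest of the entry up in fstab; naming both is treated as an explicit mount and needs root. Paths below match the question:

        # with a 'user' entry in /etc/fstab, name only the mount point
        mount /mnt/sdcard

        # the user who mounted it can also unmount it
        umount /mnt/sdcard

    The same single-argument form is what a PHP shell_exec call would run.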

    Read the article

  • virsh XML interface allocation

    - by Kaushik Koneru
    I am trying to launch a VM using an XML definition. This VM will have 5 interfaces, each connected to a certain bridge. The issue here is that the allocation of these interfaces is random. My XML:

        <interface type='bridge'>
          <mac address='52:54:00:9f:14:b3'/>
          <source bridge='br0'/>
          <target dev='vnet1'/>
          <model type='e1000'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='52:54:00:9f:14:b4'/>
          <source bridge='br1'/>
          <target dev='vnet2'/>
          <model type='e1000'/>
          <alias name='net1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='52:54:00:9f:14:b5'/>
          <source bridge='br2'/>
          <target dev='vnet2'/>
          <model type='e1000'/>
          <alias name='net3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='52:54:00:9f:14:c4'/>
          <source bridge='br3'/>
          <target dev='vnet3'/>
          <model type='e1000'/>
          <alias name='net4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x18' function='0x0'/>
        </interface>

    The allocation of interfaces is random, meaning eth6 will be connected to br3, eth7 to br4, eth8 to br2, and eth9 to br0. Is there any way to make it static? At the same time, is there any way of assigning IP addresses to these eth interfaces through the XML file itself?
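
    A hedged sketch of one common way to pin the guest-side names, assuming a Linux guest: since the <mac address> values above are fixed, udev rules inside the guest can map each MAC to a stable name (the MACs are the ones from the question; the eth6-eth9 names mirror the post). Note that libvirt itself does not assign guest IP addresses, so addressing still has to come from the guest's own network config or from DHCP host entries keyed on these MACs:

        # /etc/udev/rules.d/70-persistent-net.rules, inside the guest
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:9f:14:b3", NAME="eth6"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:9f:14:b4", NAME="eth7"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:9f:14:b5", NAME="eth8"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:9f:14:c4", NAME="eth9"

    The fixed PCI slot addresses in the XML already make the device ordering deterministic; the udev rules just make the name-to-bridge mapping explicit regardless of probe order.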

    Read the article

< Previous Page | 557 558 559 560 561 562 563 564 565 566 567 568  | Next Page >