Search Results

Search found 6036 results on 242 pages for 'boost bind'.

Page 190/242 | < Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
    If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
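
    To make the instrumentation idea concrete, here is a minimal DTrace sketch (illustrative only, not taken from the patent; handle_request is a hypothetical probe name standing in for whatever function marks a transaction in a real application) that records request rate and a latency distribution for a traced process:

        /* latency.d - run as: dtrace -s latency.d -p <pid of the server> */
        pid$target::handle_request:entry { self->ts = timestamp; }
        pid$target::handle_request:return
        /self->ts/
        {
            @requests = count();                        /* transaction rate */
            @latency = quantize(timestamp - self->ts);  /* response time */
            self->ts = 0;
        }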

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
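
    One way to sketch the math (the usual parent/child transform approach; the Fixture's World property, the XNA-style Matrix API, and shipLocalMovement are assumptions for illustration, not taken from the question): capture the slave's transform relative to the master once, at attach time, and reapply it against the master's current world every frame.

        // At the moment of attachment: how the slave sits relative to the master.
        Matrix relativeToMaster = slave.World * Matrix.Invert(master.World);

        // Every frame afterwards: the slave follows the master, offset preserved.
        slave.World = relativeToMaster * master.World;

        // The ship-in-atmosphere case applies the ship's own local movement first:
        slave.World = shipLocalMovement * relativeToMaster * master.World;

    This also suggests an answer for the coupling case: rather than making each ship a slave of the other, pick one fixture as the transform root and express the other relative to it, so the relationship never becomes circular.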

    Read the article

  • Why bother writing a Windows 8 app?

    - by Dennis Vroegop
    So you want to know more about development for Windows 8. Great! There are lots of reasons you should be excited about this. Since I don't know why YOU are interested in this, I'll make a list of reasons people can choose from. (as a side note: whenever I talk about Win8 development I am referring to the Metro Style / WinRT side of things. Apps for the 'classic' desktop side of Win8 on Intel are business as usual…) So… Why would you care about making an app for Windows 8? 1. It's cool. Let's not beat around the bush: if you like development for a hobby then you'll love to work on this new platform. You can create apps in a relatively short time (short as in compared to writing a new CRM system) and that makes it great for a hobby project. 2. You'll stand out. Hey, we all need an ego boost every now and then. We all need to feel special. So if you can manage to be one of the first to have your app in the Store then you're likely to be noticed. Just close your eyes for a moment and imagine yourself standing in a bar. It's crowded, and then you casually say "Oh yeah, I just had my app certified and it's in the Win8 store now". People will stop talking, will offer you drinks and beautiful women / gorgeous men / furry creatures from Alpha Centauri (whatever your preferences are) will propose. Or maybe not. Anyway…. 3. Make some cash! IDC predicts there will be about 350,000,000 Windows 8 licenses sold in the next year. Think about that number. 350,000,000. And they all have access to the Store. Where your app will be. With one little click they can select it, download it, and somehow magically $1.00 or $2.00 from their bank account is transferred to yours. Now, I am not saying that all of those people will download and buy your app, but what if only 1% of them did? Remember: there aren't that many apps available yet….. 4. Learn. Creating new small apps is a great way to learn new stuff. Yes, you could read about it (on this blog for instance) but the only way to learn something is to do it. So be prepared for the future and learn something new by doing it. Write an app! Now! 5. The biggie (for me at least): it's fun. Even if you remove the points above it's still fun to write for these devices and this platform. Now some of you will say: "But why not write a great app for iOS or Android?" I think this is a valid question. Of course the novelty of the platform wears off, and points 2 and 3 from the above list will not be as relevant as they are today. But still, points 1, 4 and 5 remain. And don't forget: if you already work on the Microsoft platform it's not that hard to learn this new Win8 stuff. If you have done some XAML development (be it WPF or Silverlight) you are almost there in becoming a good Win8 developer. So you'll be more productive much sooner than when you have to learn Objective-C or Java. Even if you're a HTML / JavaScript developer (I say developer here, not designer) you'll be up to speed on Win8 development pretty soon. Yes, you, that funky web developer who lives and breathes HTML5, CSS3 and JavaScript / Node.js / jQuery: you too can be a Win8 developer. A first class Win8 developer! So… Download the stuff you need from http://dev.windows.com, install Windows 8 and Visual Studio 2012, and by the time you're ready I'll be working on the next article: how to do all this? Happy coding!

    Read the article

  • Nifty popup fails to register

    - by Snailer
    I'm new to Nifty GUI, so I'm following a tutorial here for making popups. For now, I'm just trying to get a very basic "test" popup to show, but I get multiple errors and none of them make much sense. To show a popup, I believe it is necessary to first have a Nifty Screen already showing, which I do. So here is the ScreenController for the working Nifty Screen: public class WorkingScreen extends AbstractAppState implements ScreenController { //Main is my jme SimpleApplication private Main app; private Nifty nifty; private Screen screen; public WorkingScreen() {} public void equip(String slotstr) { int slot = Integer.valueOf(slotstr); System.out.println("Equipping item in slot "+slot); //Here's where it STOPS working. app.getPlayer().registerPopupScreen(nifty); System.out.println("Registered new popup"); Element ele = nifty.createPopup(app.getPlayer().POPUP); System.out.println("popup is " +ele); nifty.showPopup(nifty.getCurrentScreen(), ele.getId(), null); } @Override public void initialize(AppStateManager stateManager, Application app) { super.initialize(stateManager, app); this.app = (Main)app; } @Override public void update(float tpf) { /** jME update loop! */ } public void bind(Nifty nifty, Screen screen) { this.nifty = nifty; this.screen = screen; } } When I call equip("0") the system prints Equipping item in slot 0, then a lot of errors and none of the subsequent println()'s. Clearly it botches somewhere in Player.registerPopupScreen(Nifty nifty). Here's the method: public final String POPUP = "Test Popup"; public void registerPopupScreen(Nifty nifty) { System.out.println("Attempting new popup"); PopupBuilder b = new PopupBuilder(POPUP) {{ childLayoutCenter(); backgroundColor("#000a"); panel(new PanelBuilder() {{ id("List"); childLayoutCenter(); height(percentage(75)); width(percentage(50)); control(new ButtonBuilder("TestButton") {{ label("TestButton"); width("120px"); height("40px"); align(Align.Center); }}); }}); }}; System.out.println("PopupBuilder success."); b.registerPopup(nifty); System.out.println("Registerpopup success."); } Because that first println() doesn't show, it looks like this method isn't even called at all! Edit After removing all calls on the Player object, the popup works. It seems I'm not "allowed" to access the player from the ScreenController. Unfortunate, since I need information on the player for the popup. Is there a workaround?
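
    One possible workaround to try (a guess, not verified against this project): give the ScreenController its dependencies up front instead of reaching back through the application object. Player here is the type returned by app.getPlayer(); the XML path and screen name are placeholders.

        // Hypothetical wiring: the controller owns a Player reference from the start.
        public class WorkingScreen extends AbstractAppState implements ScreenController {
            private final Player player;

            public WorkingScreen(Player player) {
                this.player = player;
            }
            // equip() can then use this.player instead of app.getPlayer()
        }

        // When loading the screen (Nifty 1.3-style API):
        nifty.fromXml("Interface/screen.xml", "start", new WorkingScreen(player));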

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed open-ssh-server on my home computer(running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for #Port 22 Port 2222 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH #LogLevel INFO LogLevel VERBOSE # Authentication: LoginGraceTime 120 PermitRootLogin no StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding no X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net Banner /etc/sshbanner.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes #specify which accounts can use SSH AllowUsers onlyme I've also configured my router's port forwarding table to include: LAN Ports: 2222-2222 Protocol: TCP LAN IP Address: "IP Address" displayed by viewing "connection information" from right-click menu of system tray Remote Ports[optional]: n/a Remote IP Address[optional]: n/a I've tried various other configurations as well, using primary and secondary dns, and also with specifying remote ports 2222-2222. I've also tried with TCP/UDP (actually two rules because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with ssh -p 2222 -v localhost But, when I try to log in from school using ssh -p 2222 onlyme@IP_ADDRESS I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS". 
When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
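
    One thing worth checking (a hedged observation, since the question never mentions it): every address tried above - the connection-information IP, broadcast address, subnet mask, and DNS servers - is a LAN-side value, and only the router's public address is reachable from school. A quick test, assuming standard tools:

        # From home, discover the router's public IP (any "what is my IP" service works):
        curl ifconfig.me

        # From school, connect to that public address on the forwarded port:
        ssh -p 2222 onlyme@PUBLIC_IP_FROM_ABOVE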

    Read the article

  • Binding MediaElement to a ViewModel in a Windows 8 Store App

    - by jdanforth
    If you want to play a video from your video library in a MediaElement control of a Metro Windows Store App and have tried to bind the Url of the video file as a source to the MediaElement control like this, you may have noticed that it doesn't work: <MediaElement Source="{Binding Url}" /> I have no idea why it's not working, but I managed to get it going using a ContentControl instead: <ContentControl Content="{Binding Video}" /> The code-behind for this is: protected override void OnNavigatedTo(NavigationEventArgs e) { _viewModel = new VideoViewModel("video.mp4"); DataContext = _viewModel; } And the VideoViewModel looks like this: public class VideoViewModel { private readonly MediaElement _video; private readonly string _filename; public VideoViewModel(string filename) { _filename = filename; _video = new MediaElement { AutoPlay = true }; //don't load the stream until the control is ready _video.Loaded += VideoLoaded; } public MediaElement Video { get { return _video; } } private async void VideoLoaded(object sender, RoutedEventArgs e) { var file = await KnownFolders.VideosLibrary.GetFileAsync(_filename); var stream = await file.OpenAsync(FileAccessMode.Read); _video.SetSource(stream, file.FileType); } } I had to wait for the MediaElement.Loaded event before I could load and set the video stream.

    Read the article

  • From the Classroom to the Boardroom

    - by Maria Sandu
    Pens and Paper...these are the only things being a student and being a graduate / professional have in common. Walking into the offices of Oracle South Africa as a graduate, the first thing you notice is how polished and sleek all the people who work here look. 80% of the ladies wear sky-scraper heels and walk with the greatest grace. This was the first of many rude awakenings to remind me that I am no longer a student but a graduate. My first struggle was having to wake up at wee hours of the morning to prepare for work. As a student, going to class was almost an optional thing; if you missed a morning class you could always attend an evening class to make up for it, or simply attend with another group. But in the workplace, you HAVE to show up every single morning at the same time, with no option of coming in when it suits you, and there is definitely no coming in with the evening class/shift. As a student, the earliest hour I ever woke up was 7:00am; anything earlier than that was considered inhumane torture. My reason for waking up every morning as a student was "you have a degree to go get", but as a graduate having to go to work I have to say to myself "here's to a new day of learning and growing". My second struggle has come in having to change my beloved wardrobe. Everyone who knows me knows how passionate I am about fashion and shopping. For me, shopping is a BASIC HUMAN RIGHT that should not be messed with. Therefore it was with great sadness that I swopped my ripped skinny jeans for pin-striped formal pants, my long chandelier earrings for simple studs, my flat shoes for heels, my sheer blouses for crisp white shirts; even my beloved wild hair had to make way for a simple ponytail. Our looks as ladies also came under great scrutiny; we had to acquaint ourselves with some serious grooming tools: mascara, blush, lip-gloss, a touch of lipstick and a manicure set. Language was a struggle of its own as well. Being a student you learn to relate to your peers in an informal way. In the workplace you have to address everyone with the same respect, including your peers. Words like "Hey buddy" had to make way for "good morning friend". The month-long winter school holiday was one of the things I looked forward to as a student. This was a time when we got to be at home and avoid the coldest month of the year, July. It was the most amazing thing ever, just sleeping and snuggling up to all sorts of warm things, but sadly it is now a thing of the past. It is currently winter in South Africa and going to work has become the most unfashionable thing with all the jackets, boots, scarves and gloves. But summer is coming and I will miss those holidays too. As a student the school holidays were like a gift for us to catch a break and not think for a while, which was why it was unimaginable that someone would go on for the entire year without a break, with only the promise of a mere 21 days annual leave!! Right now I am sure we are all looking forward to taking that annual leave when the time is right. The worst rude awakening, I must say, has to be presenting in front of clients and managers. As a student you have the same classmates for almost four years, therefore presenting in front of them becomes the norm over the years, and your lecturer will always go gently on you.
    What they don't tell you at University is that in the real world, time is money and clients pay money to see you present, so there is no room for error. Clients are not there to give you a score and boost your ego; they expect nothing less than 100% and they will let you know without a second thought. For a graduate this can feel like being fed to the sharks: you either get eaten or you swim for your life. At the end of the day, it is all an experience that is meant to groom us into better professionals and make us a part of the Red Team. All the sacrifices are worth it and they lead us to being better and more polished professionals. So if you are interested in joining the ECEMEA Sales and Presales Internship Programme, please have a look at http://campus.oracle.com for more information and for our latest vacancies and internships.

    Read the article

  • Key Windows Phone Development Concepts

    - by Tim Murphy
    As I am doing more development in and out of the enterprise arena for Windows Phone, I decided I would study for the 70-599 test.  I generally take certification tests as a way to force me to dig deeper into a technology.  Between the development and studying, I decided it would be good to put together a post on key development features in the Windows Phone 7 environment.  Contrary to popular belief, the launch of Windows Phone 8 will not obsolete Windows Phone 7 development.  With the launch of 7.8 coming shortly and people who will remain on 7.x for the foreseeable future, there are still consumers needing these apps, so don't throw out the baby with the bath water. PhoneApplicationService This is a class that every Windows Phone developer needs to become familiar with.  When it comes to application state this is your go-to repository.  It also contains events that help with management of your application's lifecycle.  You can access it like the following code sample. PhoneApplicationService.Current.State["ValidUser"] = userResult; DeviceNetworkInformation This class allows you to determine the connectivity of the device and be notified when something changes with that connectivity.  If you are making web service calls you will want to check here before firing off. I have found that this class doesn't actually work very well for determining if you have internet access.  You are better off using the following code, where IsConnectedToInternet is an App-level property. private void Application_Launching(object sender, LaunchingEventArgs e){ // Validate user access if (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None) { IsConnectedToInternet = true; } else { IsConnectedToInternet = false; } NetworkChange.NetworkAddressChanged += new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged);}void NetworkChange_NetworkAddressChanged(object sender, EventArgs e){ IsConnectedToInternet = (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None);} Push Notification Push notification allows your application to receive notifications in a way that reduces the application's power needs. This MSDN article is a good place to get the basics of push notification.  There are three types of push notification: toast, Tile and raw.  The first two work regardless of the state of the application, whereas raw messages are discarded if your application is not running. Live Tiles Live tiles are one of the main differentiators of the Windows Phone platform.  They allow users to find information at a glance from their start screen without navigating into individual apps.  Knowing how to implement them can be a great boost to the attractiveness of your application. The simplest step-by-step explanation for creating live tiles is here. Local Database While your application really only has Isolated Storage as a data store, there are some ways of getting database functionality to develop against.  There are a number of open source ORM-style solutions.  Probably the best and most native way I have found is to use LINQ to SQL.  It does take a significant amount of setup, but the ease of use once it is configured is worth the cost.  Rather than repeat the full concepts here I will point you to a post that I wrote previously. 
    Tasks (Bing, Email) Leveraging built-in features of the Windows Phone platform is an easy way to add functionality that would be expensive to develop on your own.  The classes that you need to make yourself familiar with are BingMapsDirectionsTask and EmailComposeTask.  These will allow your application to supply directions and give the user an email path to relay information to friends and associates. Event model The ability for users to switch quickly to other apps or the home screen is just one reason why knowing the Windows Phone event model is important.  You need to be able to save data so that if a user gets a phone call they can come back to exactly where they were in your application.  This means that you will need to handle such events as Launching, Activated, Deactivated and Closing at an application level.  You will probably also want to get familiar with the OnNavigatedTo and OnNavigatedFrom events at the page level.  These will give you an opportunity to save data as a user navigates through your app. Summary This is just a small portion of the concepts that you will use while building Windows Phone apps, but these are some of the most critical.  With the launch of Windows Phone 8 this list will probably expand.  Take the time to investigate these topics further and try them out in your apps.
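
    For the event model, the save/restore pattern looks roughly like this (a sketch using the handler names from the standard project template; the "LastQuery" key and _lastQuery field are illustrative, not from the post):

        // In App.xaml.cs - save transient state when the app is deactivated (tombstoned)
        private void Application_Deactivated(object sender, DeactivatedEventArgs e)
        {
            PhoneApplicationService.Current.State["LastQuery"] = _lastQuery;
        }

        // ...and restore it when the user returns to the app
        private void Application_Activated(object sender, ActivatedEventArgs e)
        {
            object saved;
            if (PhoneApplicationService.Current.State.TryGetValue("LastQuery", out saved))
            {
                _lastQuery = (string)saved;
            }
        }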

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine, that's up to someone else, so I've found a few tricks to help. 1. Only build the Silverlight project you're working with. This will build all referenced projects (you can see these by right clicking and clicking Project Dependencies) and package a new XAP (you can see all the actions in your output build window). Then refresh your page with the Silverlight app and it's up-to-date. 2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the Attach to: row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard. That brings me to the open IE windows. Find the one with type Silverlight, x86. It is usually directly above one with type x86 that has the page title under "Title". Click Attach and watch your output window spit out messages about loading debug symbols and your breakpoints become enabled (if this doesn't happen you chose the wrong process; hit stop and try again). Now you can debug the client code as normal; server code requires a full F5 or attaching to the correct process. To improve this even further, bind the menu item to a keystroke. I chose ctrl + x, x. (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally and assign). Most of the time I build the project, then hit ctrl + x, x, then i, then enter, and I'm debugging. The process I want is usually the first IE in the list.

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is: write smaller, more testable code; refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long; write functions that only do one thing (which makes them smaller again). This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines, and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence, my question -- how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything"? UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes in a code base will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with proper variables, not the actual computation being done. In my case, you fill up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering if I should be concerned about this, and how much, if any at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices. UPDATE: Multiple files I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded; they are loaded as needed, and disk caching and memory caching options exist, and yet still I believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
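
    One cheap way to probe the premise (a hedged micro-benchmark sketch in C#, not a rigorous measurement): modern JIT compilers inline small methods in optimized builds, so the call overhead the question worries about frequently disappears. Comparing Release vs Debug timings of something like this shows the effect:

        using System;
        using System.Diagnostics;

        class InlineDemo
        {
            // Tiny methods like this are prime inlining candidates in Release builds,
            // eliminating the PUSH/CALL/RET sequence entirely at the call site.
            static int Add(int a, int b) { return a + b; }

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                long sum = 0;
                for (int i = 0; i < 100000000; i++) sum += Add(i, 1);
                sw.Stop();
                Console.WriteLine("{0} in {1} ms", sum, sw.ElapsedMilliseconds);
            }
        }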

    Read the article

  • MVVM and Animations in Silverlight

    - by Aligned
    I wanted to spin an icon to show progress to my user while some content was downloading. I'm using MVVM (aren't you?) and made a satisfactory Storyboard to spin the icon. However, it took longer than expected to trigger that animation from my ViewModel's property. I used a combination of the GoToState action and the DataTrigger from the Microsoft.Expression.Interactions dll as described here. Then I had problems getting it to start until I found this approach that saved the day. The DataTrigger didn't bind right away because, as that post explains, "the StateTarget property of the GotoStateAction is null at the time the DataTrigger fires". Here's my XAML; hopefully you can fill in the rest. <Image x:Name="StatusIcon" AutomationProperties.AutomationId="StatusIcon" Width="16" Height="16" Stretch="Fill" Source="inProgress.png" ToolTipService.ToolTip="{Binding StatusTooltip}"> <i:Interaction.Triggers> <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="True" Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}"> <ei:GoToStateAction StateName="Downloading" /> </utilitiesBehaviors:DataTriggerWhichFiresOnLoad> <utilitiesBehaviors:DataTriggerWhichFiresOnLoad Value="False" Binding="{Binding IsDownloading, Mode=OneWay, TargetNullValue=True}"> <ei:GoToStateAction StateName="Complete"/> </utilitiesBehaviors:DataTriggerWhichFiresOnLoad> </i:Interaction.Triggers> <Image.Projection> <PlaneProjection/> </Image.Projection> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="VisualStateGroup"> <VisualStateGroup.Transitions> <VisualTransition GeneratedDuration="0" To="Downloading"> <VisualTransition.GeneratedEasingFunction> <QuadraticEase EasingMode="EaseInOut"/> </VisualTransition.GeneratedEasingFunction> <Storyboard RepeatBehavior="Forever"> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Projection).(PlaneProjection.RotationZ)" Storyboard.TargetName="StatusIcon"> <EasingDoubleKeyFrame KeyTime="0:0:1.5" Value="-360"/> <EasingDoubleKeyFrame KeyTime="0:0:2" Value="-360"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </VisualTransition> <VisualTransition From="Downloading" GeneratedDuration="0"/> </VisualStateGroup.Transitions> <VisualState x:Name="Downloading"/> <VisualState x:Name="Complete"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups></Image> MVVMAnimations.zip

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII? Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added? My understanding: I feel an implementation of RAII must satisfy two requirements: 1. The lifetime of a resource must be bound to a scope. 2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer, analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at the point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method. Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope: void test() { using(Resource r = new Resource()) { r.foo(); }//resource released on scope exit } This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature as the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar. void test() { //Programmer forgot (or was not aware of the need) to explicitly //bind Resource to a scope. Resource r = new Resource(); r.foo(); }//resource leaked!!! I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart-pointer. The feature would allow you to flag a class as scope-bound, so that it is always created with a hook to the stack. There could be options for different types of smart pointers. class Resource - ScopeBound { /* class details */ void Dispose() { //free resource } } void test() { //class Resource was flagged as ScopeBound so the tie to the stack is implicit. Resource r = new Resource(); //r is a smart-pointer r.foo(); }//resource released on scope exit. I think implicitness is "worth it", just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature in the Java/C# languages? Could it be introduced without breaking old code?
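
    For reference, the lowering mentioned above: the C# compiler turns a using block into roughly this try/finally shape (a sketch; for non-nullable value types the null check is dropped):

        // using (Resource r = new Resource()) { r.foo(); }   expands to approximately:
        Resource r = new Resource();
        try
        {
            r.foo();
        }
        finally
        {
            if (r != null) ((IDisposable)r).Dispose();
        }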

    Read the article

  • What are the best practices to use NHibernate sessions in asp.net (mvc/web api)?

    - by mrt181
    I have the following setup in my project: public class WebApiApplication : System.Web.HttpApplication { public static ISessionFactory SessionFactory { get; private set; } public WebApiApplication() { this.BeginRequest += delegate { var session = SessionFactory.OpenSession(); CurrentSessionContext.Bind(session); }; this.EndRequest += delegate { var session = SessionFactory.GetCurrentSession(); if (session == null) { return; } session = CurrentSessionContext.Unbind(SessionFactory); session.Dispose(); }; } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters); RouteConfig.RegisterRoutes(RouteTable.Routes); BundleConfig.RegisterBundles(BundleTable.Bundles); var assembly = Assembly.GetCallingAssembly(); SessionFactory = new NHibernateHelper(assembly, Server.MapPath("/")).SessionFactory; } } public class PositionsController : ApiController { private readonly ISession session; public PositionsController() { this.session = WebApiApplication.SessionFactory.GetCurrentSession(); } public IEnumerable<Position> Get() { var result = this.session.Query<Position>().Cacheable().ToList(); if (!result.Any()) { throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.NotFound)); } return result; } public HttpResponseMessage Post(PositionDataTransfer dto) { //TODO: Map dto to model IEnumerable<Position> positions = null; using (var transaction = this.session.BeginTransaction()) { this.session.SaveOrUpdate(positions); try { transaction.Commit(); } catch (StaleObjectStateException) { if (transaction != null && transaction.IsActive) { transaction.Rollback(); } } } var response = this.Request.CreateResponse(HttpStatusCode.Created, dto); response.Headers.Location = new Uri(this.Request.RequestUri.AbsoluteUri + "/" + dto.Name); return response; } public void Put(int id, string value) { //TODO: Implement PUT throw new NotImplementedException(); } public void Delete(int id) { //TODO: Implement DELETE throw new NotImplementedException(); } } I am not sure if this is the recommended way to insert the session into the controller. I was thinking about using DI but I am not sure how to inject the session that is opened and bound in the BeginRequest delegate into the Controller's constructor to get this public PositionsController(ISession session) { this.session = session; } Question: What is the recommended way to use NHibernate sessions in asp.net mvc/web api?
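
    On the DI part: one hedged way to wire it (sketched with Ninject as an example container; any container with per-request scoping can do the same) is to register ISession as a factory call against the session-per-request setup above, then let the container build the controller:

        // In the container bootstrap (hypothetical NinjectWebCommon.cs):
        kernel.Bind<ISession>()
              .ToMethod(ctx => WebApiApplication.SessionFactory.GetCurrentSession())
              .InRequestScope();

        // The controller then simply declares its dependency:
        public PositionsController(ISession session)
        {
            this.session = session;
        }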

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity: typedef int EntityID; typedef int ModelID; typedef Vector3 Position; typedef Vector3 Velocity; This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps. EntityID eID; ModelID mID; if ( eID == mID ) // <- Compiler sees nothing wrong { /*bug*/ } Position p; Velocity v; Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong Unfortunately, suggestions I've found for strongly typed typedefs include using Boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however: template < typename T > class IDType { unsigned int m_id; public: IDType( unsigned int const& i_id ): m_id {i_id} {}; friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs ); }; Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as: class EntityT; typedef IDType<EntityT> EntityID; class ModelT; typedef IDType<ModelT> ModelID; The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea? One issue that came to mind while writing this, in the case of positions and velocities for example, would be that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, so I could do: typedef float Time; typedef Vector3 Position; typedef Vector3 Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position. class TimeT; typedef Float<TimeT> Time; class PositionT; typedef Vector3<PositionT> Position; class VelocityT; typedef Vector3<VelocityT> Velocity; Time t = 1.0f; Position p = { 0.0f }; Velocity v = { 1.0f, 0.0f, 0.0f }; Position newP = p + v*t; // Compiler error To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
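
    For comparison, here is a self-contained variant of the same tag-template idea (a sketch: it defines the friend inline, which sidesteps the forward-declaration dance described above, and uses C++11 type aliases):

        #include <iostream>

        // Minimal tag-based strong typedef. The Tag type is never defined;
        // it exists only to make each alias a distinct type.
        template <typename Tag>
        class IDType {
            unsigned int m_id;
        public:
            explicit IDType(unsigned int id) : m_id(id) {}
            friend bool operator==(IDType a, IDType b) { return a.m_id == b.m_id; }
        };

        struct EntityTag;
        struct ModelTag;
        using EntityID = IDType<EntityTag>;
        using ModelID  = IDType<ModelTag>;

        int main() {
            EntityID e(1);
            ModelID  m(1);
            // e == m;            // compile error: no operator== for mixed ID types
            std::cout << (e == EntityID(1)) << "\n";   // prints 1
        }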

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs http://domain/web1 http://domain/web2 I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg frontend webVIP_80 mode http bind :80 #acl routing to backend acl web1_path path_beg /web1 acl web2_path path_beg /web2 #which backend use_backend webfarm1 if web1_path use_backend webfarm2 if web2_path default_backend webfarm1 backend webfarm1 mode http reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms backend webfarm2 mode http reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms If I go to http://domain/web1 or http://domain/web2, I see in the error logs on a server in each backend that the request is for the resource /web1 or /web2 respectively. Therefore I believe there is something wrong with my regular expression, even though I copied and pasted it from the documentation: http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, however I want to strip the URI on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1 Thank you! -Jim
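
    One hedged observation worth testing (not a confirmed fix): the posted expression only matches request lines containing "/web1/" with a trailing slash, so a bare request for "/web1" passes through to the backend untouched - which is exactly the logged symptom. Making the slash optional covers both forms:

        # before: reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2
        reqrep ^([^\ ]*)\ /web1[/]?(.*) \1\ /\2
        # (and the same change with /web2 in the webfarm2 backend)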

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a rails app in development with postgresql 9.3. When I tried to start passenger server today, I got: PG::ConnectionBad - could not connect to server: Connection refused Is the server running on host "localhost" (217.74.65.145) and accepting TCP/IP connections on port 5432? No big deal I thought, that happened before. Restarting postgres always solved the problem. So I ran sudo service postgresql restart and got: * Restarting PostgreSQL 9.3 database server * The PostgreSQL server failed to start. Please check the log output: 2014-06-11 10:32:41 CEST LOG: could not bind IPv4 socket: Cannot assign requested address 2014-06-11 10:32:41 CEST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. 2014-06-11 10:32:41 CEST WARNING: could not create listen socket for "localhost" 2014-06-11 10:32:41 CEST FATAL: could not create any TCP/IP sockets ...fail! My postgresql.conf points to the defaults: localhost and port 5432. I tried changing the port but the error message is the same (except the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing. EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and it did the trick, server restarted. I also had to edit my applications' db config and point to 127.0.0.1 instead of localhost. However, the question is now, why is localhost considered to be 217.74.65.145 and not 127.0.0.1? That's my /etc/hosts: 127.0.0.1 local 127.0.1.1 jacek-X501A1 127.0.0.1 something.name.non.example.com 127.0.0.1 company.something.name.non.example.com

    Read the article

  • Plesk SSL Certificate (Default cert when SSL enabled, CORRECT cert when SSL is disabled)

    - by hztetra
    I'm running Plesk 8.6.0: I have an SSL cert installed through Plesk's admin interface. But I have a bit of an issue: When I enabled SSL for the site, and selected my cert, then restart httpd, Plesk defaults to using my self-signed default certificate. Conversely, when I disable SSL support for the domain, all of a sudden Plesk is using my new SSL certificate. Unfortunately, when I try to view any folder on the site (mydomain.tld/folder) I'm simply met with a 404 (with files placed both in httpdocs and httpsdocs). I switch SSL support back on, and Plesk defaults back to the default self-signed cert and I can then view the folders that were not previously accessible. Any ideas? One further note: I tried following http://kb.parallels.com/en/939 . Once I tried to restart httpd with the edited ssl.conf file, I received an httpd could not start error. I restored the original ssl.conf file, and still received the could not start error. So as of now, I am running without an ssl.conf file. The following is the error I receive when I attempt to reintroduce ssl.conf: Starting httpd: [Mon Aug 23 15:45:40 2010] [warn] module ssl_module is already loaded, skipping (98)Address already in use: make_sock: could not bind to address 0.0.0.0:443 no listening sockets available, shutting down Unable to open logs

    Read the article

  • Configuring external SMTP server on Azure VM - messages staying in queue

    - by Steph Locke
    I have an external SMTP provider: auth.smtp.1and1.co.uk I am trying to send SQL Server Reporting Services emails via this on a Windows 2012 Azure VM. It is configured correctly enough for emails to be generated, but something is missing or misconfigured, because the emails then stay in the queue. Setup details Configured SMTP Virtual Server General: IP Address: Fixed value Access: Access Control: Authentication: ticked Anonymous access Access: Connection Control: All except the list below (which is empty) Access: relay restrictions: Only the list below (which contains 127.0.0.1), ticked 'allow all..' option Delivery: Outbound Security...: Basic Authentication with username and password completed, ticked TLS encryption Delivery: Outbound connections...: TCP port=587 Delivery: Advanced: FQDN=ServerName, smarthost=auth.smtp.1and1.co.uk I then set the following SSRS rsreportserver.config values: <SMTPServer>100.92.192.3</SMTPServer> <SendUsing>2</SendUsing> <SMTPServerPickupDirectory> c:\inetpub\mailroot\pickup </SMTPServerPickupDirectory> <From>[email protected]</From> Tried so far 1) turning the smtp service off and on again (just in case) 2) running SMTPDiag with no errors (also no emails) 3) turning off the firewall for the ports (and more generally, to see if it made a difference) 4) generating mail from powershell, which resulted in a message in the queue 5) adding 25 and 857 as endpoints 6) perusing the event log, where I found some warnings that appear to be about the recipient Message delivery to the remote domain 'gmail.com' failed for the following reason: Unable to bind to the destination server in DNS. Message delivery to the host '212.227.15.179' failed while delivering to the remote domain 'gmail.com' for the following reason: The remote server did not respond to a connection attempt. 7) pinging, but this appears to be blocked on Azure 8) more powershell sending on different domain variants (localhost, boxname, internal ip used in smtp properties, 127.0.0.1) - none resulting in success 9) adding a remote domain - no change Could anyone recommend what step 10 should be in fixing this issue, please?
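
    A hedged suggestion for step 10 (assuming outbound connectivity hasn't already been ruled out): the queued mail plus the failed direct-to-host deliveries in the event log are consistent with outbound SMTP connections being blocked, so verify from the VM itself that the smarthost answers on the configured submission port:

        # From the Azure VM (the telnet client may need to be installed first):
        telnet auth.smtp.1and1.co.uk 587
        # No banner / a timeout here would explain the stuck queue.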

    Read the article

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command: sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid) To verify this, I set up an apache bench script which continuously sent messages to HAProxy. Ideally, restarting the server should have had no effect on the apache bench execution. But it seems that whenever HAProxy is restarted, the apache bench scripts terminate and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file: global nbproc 4 log 127.0.0.1 local0 log 127.0.0.1 local1 notice #log loghost local0 info maxconn 4096 #chroot /usr/share/haproxy user haproxy group haproxy daemon pidfile /var/run/haproxy.pid stats socket /home/ubuntu/haproxy.sock #debug #quiet defaults log global mode http option httplog option dontlognull retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 listen webstats bind 0.0.0.0:1000 stats enable mode http stats uri /lb?stats stats auth anand:aaaaaaaa #stats refresh listen web-farm 0.0.0.0:80 mode http balance roundrobin option httpchk HEAD /index.php HTTP/1.0 server server2.com 1.1.1.1:80 server serve1.com 1.1.1.2:80 Please let me know what I am missing here.

    Read the article

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with postfix. ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))" delivers the correct entry. The LDAP config file looks like root@server2:/etc/postfix/ldap# cat mailbox_maps.cf server_host = localhost search_base = ou=users,ou=people,[domain] scope = sub bind = yes bind_dn = cn=postfix,ou=users,ou=system,[domain] bind_pw = postfix query_filter = (&(objectclass=inetOrgPerson)(mail=%s)) result_attribute = uid debug_level = 2 The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work: root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object If I change LDAP configuration, so that anonymous users have complete access to LDAP olcAccess: {-1}to * by * read then it works: root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf [user-id] But when I restrict this access to the postfix user: olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break it doesn't work but produces the error printed above (although ldapsearch works, only postmap doesn't). Why doesn't it work when binding with a postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?

    Read the article

  • How do I start nginx on port 80 at OS X login?

    - by Bryson
    I installed Nginx using homebrew and after completing the installation the following message was displayed: In the interest of allowing you to run `nginx` without `sudo`, the default port is set to localhost:8080. If you want to host pages on your local machine to the public, you should change that to localhost:80, and run `sudo nginx`. You'll need to turn off any other web servers running port 80, of course. You can start nginx automatically on login running as your user with: mkdir -p ~/Library/LaunchAgents cp #{prefix}/org.nginx.nginx.plist ~/Library/LaunchAgents/ launchctl load -w ~/Library/LaunchAgents/org.nginx.nginx.plist Though note that if running as your user, the launch agent will fail if you try to use a port below 1024 (such as http's default of 80.) But I want Nginx, on port 80, running at login and I don't want to have to open terminal and type in sudo nginx to do it. I want it to load from a plist file like Redis and PostgreSQL do. I moved the plist to /Library/LaunchAgents/ from the user folder equivalent and changed its ownership, also tried setting the user directive in the nginx.conf file and still the same error message in Console.app: nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied) (along with another message telling me that since nginx was being run without super-user privileges, the user directive was being ignored)
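
    One commonly used way out (a sketch, assuming it's acceptable to run nginx as root at boot): system-wide launch daemons, unlike per-user launch agents, run as root and may therefore bind ports below 1024. That means placing the plist in /Library/LaunchDaemons rather than either LaunchAgents folder:

        sudo cp "$(brew --prefix nginx)/org.nginx.nginx.plist" /Library/LaunchDaemons/
        sudo chown root:wheel /Library/LaunchDaemons/org.nginx.nginx.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.nginx.nginx.plist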

    Read the article

  • Problem: Munin Graph

    - by Pablo
    I've been trying to install Munin for 15 days; I looked for information, analyzed logs, and even deleted and reinstalled Munin using yum. I'm hosted at Media Temple on a VPS with CentOS. The problem is still there and it's driving me nuts. Graphs are shown as in this screenshot: http://imageshack.us/photo/my-images/833/capturadepantalla201106u.png/

    This is the configuration of my munin.conf file:

        dbdir   /var/lib/munin
        htmldir /var/www/munin
        logdir  /var/log/munin
        rundir  /var/run/munin

        [localhost]
            address **.**.***.***   # IP of the VPS

    This is the configuration of my munin-node.conf file:

        log_level 4
        log_file /var/log/munin/munin-node.log
        port 4949
        pid_file /var/run/munin/munin-node.pid
        background 1
        setsid 1

        # Which port to bind to;
        host *
        user root
        group root

        # Regexps for files to ignore
        ignore_file ~$
        ignore_file \.bak$
        ignore_file %$
        ignore_file \.dpkg-(tmp|new|old|dist)$
        ignore_file \.rpm(save|new)$

        allow ^127\.0\.0\.1$

    Thanks so much, I appreciate all the answers.

    UPDATE: munin-graph.log

        Jun 22 16:30:02 - Starting munin-graph
        Jun 22 16:30:02 - Processing domain: localhost
        Jun 22 16:30:02 - Graphed service : open_inodes (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailtraffic (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : apache_processes (0.12 sec * 4)
        Jun 22 16:30:02 - Graphed service : entropy (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailstats (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : processes (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_accesses (0.27 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_volume (0.15 sec * 4)
        Jun 22 16:30:03 - Graphed service : df (0.21 sec * 4)
        Jun 22 16:30:03 - Graphed service : netstat (0.19 sec * 4)
        Jun 22 16:30:03 - Graphed service : interrupts (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : swap (0.14 sec * 4)
        Jun 22 16:30:04 - Graphed service : load (0.11 sec * 4)
        Jun 22 16:30:04 - Graphed service : sendmail_mailqueue (0.13 sec * 4)
        Jun 22 16:30:04 - Graphed service : cpu (0.21 sec * 4)
        Jun 22 16:30:04 - Graphed service : df_inode (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : open_files (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : forks (0.13 sec * 4)
        Jun 22 16:30:05 - Graphed service : memory (0.26 sec * 4)
        Jun 22 16:30:05 - Graphed service : nfs_client (0.36 sec * 4)
        Jun 22 16:30:05 - Graphed service : vmstat (0.10 sec * 4)
        Jun 22 16:30:05 - Processed node: localhost (3.45 sec)
        Jun 22 16:30:05 - Processed domain: localhost (3.45 sec)
        Jun 22 16:30:05 - Munin-graph finished (3.46 sec)
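    A hedged sketch of the usual node-side checks (the plugin names are the ones that appear in the munin-graph.log above; the address is the node's own loopback):

        # Ask munin-node directly what it can report; "list" should print
        # the same plugin names that munin-graph.log says it graphed.
        telnet localhost 4949
        # at the prompt, type:  list
        #                       fetch cpu

        # Run a single plugin by hand to confirm it emits values at all.
        munin-run cpu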

    Read the article

  • Access All VLANS over XenServer Interface

    - by Garrett
    For my current setup, I have a physical NIC on a XenServer machine that receives traffic tagged with various VLAN IDs. I have a virtual machine running Vyatta that needs to be able to access both tagged and untagged traffic in order to route it. Here's the problem:

    1) If I bind the NIC in XenCenter to the VM (which has no VLAN ID associated with it), the VM cannot see any tagged traffic. I have verified this using tcpdump. However, the tagged traffic is flowing into the XenServer machine perfectly fine.

    2) I have more than 7 VLANs, so adding each VLAN as an interface within XenCenter isn't an option.

    3) Even though tcpdump shows no tagged traffic coming in on the VM's NIC, I have tried adding VLAN interfaces within Vyatta. This doesn't work either.

    I have tried both Linux bridge and Open vSwitch setups, and neither seems to work. I am running XenServer 6.0.3 (free) and Vyatta VC6.3. Please help! I've run out of ideas. I've googled for hours and can't seem to find anything.
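    A hedged sketch of how the tagged-frame check can be made explicit on both sides (interface names are placeholders, not from the original post):

        # On the XenServer host: confirm tagged frames arrive on the physical
        # NIC. -e prints the link-level header so the 802.1Q tag is visible.
        tcpdump -i eth0 -e -n vlan

        # Inside the Vyatta VM: repeat the capture on the virtual NIC. If this
        # stays silent while the host capture shows tags, the frames are being
        # stripped or dropped before they ever reach the guest.
        tcpdump -i eth0 -e -n vlan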

    Read the article

  • Booting the Redis server without errors

    - by Tylër
    Booting Redis usually begins with the following warnings:

        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [3690] Dec 01 10:56:05 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [3690] Dec 01 10:56:05 # Unable to set the max number of files limit to 10032 (Operation not permitted), setting the max clients configuration to 992.

    Other errors found:

        tyler@tyler-vortex:~/redis$ sudo ./utils/install_server.sh
        Welcome to the redis service installer
        This script will help you easily set up a running redis server

        Please select the redis port for this instance: [6379]
        Selecting default: 6379
        Please select the redis config file name [/etc/redis/6379.conf]
        Selected default - /etc/redis/6379.conf
        Please select the redis log file name [/var/log/redis_6379.log]
        Selected default - /var/log/redis_6379.log
        Please select the data directory for this instance [/var/lib/redis/6379]
        Selected default - /var/lib/redis/6379
        Please select the redis executable path [/usr/local/bin/redis-server]
        cat: ./redis.conf.tpl: No such file or directory
        cat: ./redis_init_script.tpl: No such file or directory
        ERROR: Could not write init script to /tmp/6379.conf. Aborting!

    Furthermore, I would like to know how to configure Redis not to consume so much RAM. I tried to follow the memory configuration advice for our website, but the "vm-*" settings described at http://redis.io/topics/virtual-memory do not exist in the redis.conf file. Do you have to create them?

    Edit: I got it installed. After that, I believe I can no longer start it via ./src/redis-server, because this happens:

        tyler@tyler-vortex:~$ cd redis/
        tyler@tyler-vortex:~/redis$ ./src/redis-server
        [2616] 01 Dec 22:29:30 # Warning: no config file specified, using the default config. In order to specify a config file use 'redis-server /path/to/redis.conf'
        [2616] 01 Dec 22:29:30 # Opening port 6379: bind: Address already in use

    There's another detail: Redis now starts with the system.

        tyler@tyler-vortex:~/redis$ ./src/redis-cli
        redis 127.0.0.1:6379> exit

    But how can I see the console output I used to get from ./src/redis-server, now that it was installed via the install_server.sh script?
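    A hedged sketch of how to untangle the "Address already in use" state (assuming install_server.sh registered an init script named after the port, which is its usual behavior; paths follow the installer output above):

        # See what is already holding port 6379 (most likely the instance
        # started at boot by the init script the installer created).
        sudo netstat -tlnp | grep 6379

        # Stop the service instance before starting one by hand.
        sudo /etc/init.d/redis_6379 stop

        # Now a manual foreground start works again, with console output:
        ./src/redis-server

        # Or simply follow the service instance's log instead:
        tail -f /var/log/redis_6379.log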

    Read the article

  • Sending USR2 to mongrel_rails sometimes results in an “Address already in use” on the restart

    - by Ben
    We have a rolling-restart mode for our mongrel cluster that sends a USR2 signal to each running process. This works great most of the time, but very occasionally the mongrel process will shut down and then fail to restart, with the following error:

        /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize_without_backlog': Address already in use - bind(2) (Errno::EADDRINUSE)
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/tcphack.rb:12:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel.rb:93:in `initialize'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `new'
            from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.1.5/bin/../lib/mongrel/configurator.rb:139:in `listener'

    Looking through the mongrel source, the USR2 handler calls a synchronous stop on the running server, so it ought to block until the socket has been released. Has anyone seen this error? Does anyone have any ideas what might cause it? (I asked this question over on Stack Overflow initially, but thought it might be more appropriate here.)
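    A hedged diagnostic sketch for the moment the failure occurs (the port number is a placeholder): a listening socket or a lingering descriptor can survive briefly even after a synchronous stop, and these commands show which process, if any, still owns it.

        # When the restart fails with EADDRINUSE, see what still owns the socket:
        lsof -iTCP:8000 -sTCP:LISTEN   # 8000 is a placeholder mongrel port

        # A socket stuck in TIME_WAIT, or an old process that kept the FD
        # open across the restart, shows up here (needs root for -p):
        sudo netstat -tanp | grep 8000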

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >