Search Results

Search found 21853 results on 875 pages for 'point'.


  • Multiple Memcached server /etc/init.d startup script that works?

    - by p4guru
    I installed the memcached server from source and can get a standard startup script installed for one memcached server instance, but after trying several scripts found via Google I can't find one that works to manage auto-startup of multiple memcached server instances on boot. I've tried the scripts from both of these pages and neither works; "service memcached start" just returns to the command prompt with no memcached server instances started:

        lullabot.com/articles/installing-memcached-redhat-or-centos
        addmoremem.blogspot.com/2010/09/running-multiple-instances-of-memcached.html

    This bash script works, but doesn't start the memcached instances at boot:

        #!/bin/sh
        case "$1" in
        start)
            /usr/local/bin/memcached -d -m 16 -p 11211 -u nobody
            /usr/local/bin/memcached -d -m 16 -p 11212 -u nobody
            ;;
        stop)
            killall memcached
            ;;
        esac

    OS: CentOS 5.5 64-bit
    Memcached: v1.4.5
    Memcache: v2.2.5

    Can anyone point me to a working /etc/init.d/ startup script to manage multiple memcached servers? Thanks
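
    For comparison, here is a minimal sketch of what a multi-instance script could look like (it assumes memcached was installed to /usr/local/bin/memcached; the ports and memory sizes are examples). On CentOS you would drop it in /etc/init.d/, mark it executable, and register it with "chkconfig --add memcached" so it runs at boot:

        #!/bin/sh
        # chkconfig: 2345 55 25
        # description: Start/stop multiple memcached instances

        MEMCACHED=/usr/local/bin/memcached
        PORTS="11211 11212"

        case "$1" in
        start)
            for port in $PORTS; do
                # One daemon per port, each with its own pid file
                $MEMCACHED -d -m 16 -p $port -u nobody -P /var/run/memcached.$port.pid
            done
            ;;
        stop)
            for port in $PORTS; do
                [ -f /var/run/memcached.$port.pid ] && kill $(cat /var/run/memcached.$port.pid)
            done
            ;;
        restart)
            $0 stop
            sleep 1
            $0 start
            ;;
        *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
        esac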

    Read the article

  • Terminal command to send data (plain text string) to a port at a remote computer.

    - by Eddy
    I am trying to send data (a plain text string) to a port on a remote computer using a terminal utility. The string would be used to trigger something on the remote computer, which runs a program listening on that specific port. I used the netcat command and tried a few combinations of the following, but can't seem to get the parameters right. Can someone point out what I'm doing wrong?

        eddy-2:Desktop eddy$ nc IPADDRESS PORT woc.txt
        eddy-2:Desktop eddy$ nc IPADDRESS PORT < woc.txt

    P.S.: woc.txt contains the plain text string of the said command.

    Edit: I am trying to send the string from OS X to Windows XP, where the specific port is open by default.
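
    For what it's worth, the usual shape of this with the BSD netcat that ships with OS X is to pipe the string in and give nc an idle timeout so it closes the connection instead of waiting for more input (IPADDRESS and PORT are placeholders):

        # Send one line and disconnect after 1 second of idle time
        echo "your trigger string" | nc -w 1 IPADDRESS PORT

        # Or send the contents of the file
        nc -w 1 IPADDRESS PORT < woc.txt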

    Read the article

  • How to setup a web server with remote SMTP

    - by IP
    I have 2 servers, both running Server 2008 (R2). One is the web server, the other is running as a mail server. The setup I want is for any mail sent from apps (PHP, ASP and ASP.NET) on the web server to be sent through the mail server's SMTP service... but this is proving trickier than I'd hoped. The mail server is running MailEnable, and the web server IIS7 (maybe 7.5). What I don't want is to set up an open-relay SMTP server on the web server, as that is going to be open to abuse (even if I just allow relaying from local addresses). The problem is, there doesn't appear to be a way to specify credentials in PHP, so if I point it at the mail server, then the mail server has to be set up as an open relay, which is almost worse. Any ideas how I should be doing this?
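
    One way around PHP's lack of built-in SMTP authentication is a mailer library rather than mail() itself. A minimal sketch using PHPMailer (5.x-era API; the host name, mailbox, and include path are assumptions, and you would create a dedicated mailbox on the MailEnable server for the web apps to authenticate as):

        <?php
        require 'class.phpmailer.php';

        $mail = new PHPMailer();
        $mail->IsSMTP();                      // send via SMTP instead of PHP's mail()
        $mail->Host     = 'mail.example.com'; // the MailEnable server
        $mail->SMTPAuth = true;               // authenticate; no open relay needed
        $mail->Username = 'web@example.com';
        $mail->Password = 'secret';

        $mail->SetFrom('web@example.com', 'Web Server');
        $mail->AddAddress('user@example.com');
        $mail->Subject = 'Test';
        $mail->Body    = 'Sent through the mail server with SMTP authentication.';

        if (!$mail->Send()) {
            echo 'Mailer error: ' . $mail->ErrorInfo;
        }
        ?>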

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope).

    I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way the data is stored as a 1D array in the example is a bit confusing; should I try to store it as a 2D array instead?

    For test purposes, I'm thinking of just using the slope texture's alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right-foot collision point and set that as the new height of the sprite.

    I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically; why is that exactly?

    Open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES Mario style pure box platforming!
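
    On the 1D-array question: there is no need to convert to a 2D array, because the data returned by Texture2D.GetData is simply row-major, so pixel (x, y) lives at index y * width + x. A rough sketch of the upward scan described above (XNA 4.0; the method and variable names are made up for illustration):

        // Pull the slope texture's pixels once, at load time (row-major layout).
        Color[] slopeData = new Color[slopeTexture.Width * slopeTexture.Height];
        slopeTexture.GetData(slopeData);

        // Scan upward from the colliding sensor until we reach open air;
        // returns the first transparent Y, i.e. where the foot can be placed.
        int FindOpenY(Color[] data, int width, int x, int startY)
        {
            for (int y = startY; y >= 0; y--)
            {
                if (data[y * width + x].A == 0)   // transparent pixel in the mask
                    return y;
            }
            return 0; // column is solid all the way up; clamp to the texture top
        }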

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends the entity's position back to the client, it undoes the effect of the prediction and creates a rubber-banding effect. For example:

    Client sends input to server - Client reacts on input - Server receives and reacts on input - Server sends back response - Client reaction is undone due to latency between server and client

    To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions:

    1. Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state.

    2. How do I sync the client and server tick? Currently I'm sending the millisecond the packet was sent by the server, but I think that's adding too much complexity instead of just sending the tick. The problem with switching to sending the tick is that I have no guarantee the client and server are ticking at the same rate, for example if the client is an old, low-end PC.
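
    On the tick-synchronization question, one common trick sidesteps it entirely: tag every input with a client-side sequence number, have the server echo the last sequence number it processed along with the authoritative position, and replay only the still-unacknowledged inputs on top of each update. A minimal sketch (engine-agnostic Python; all names are invented for illustration):

        # Client-side reconciliation sketch: predict immediately, and on each
        # authoritative update rewind to the server state and replay pending inputs.
        class PredictedEntity:
            def __init__(self):
                self.position = 0.0
                self.sequence = 0
                self.pending = []          # (sequence_number, input) not yet acknowledged

            def apply_input(self, inp):
                self.position += inp       # stand-in for the real movement/physics step

            def on_local_input(self, inp):
                self.sequence += 1
                self.pending.append((self.sequence, inp))
                self.apply_input(inp)      # client-side prediction

            def on_server_update(self, position, last_processed_seq):
                self.position = position   # accept the authoritative state
                # Drop inputs the server has already applied...
                self.pending = [(s, i) for (s, i) in self.pending if s > last_processed_seq]
                # ...and replay the rest, so the prediction is not undone.
                for _, inp in self.pending:
                    self.apply_input(inp)

    Because only the unacknowledged inputs are replayed, this costs a handful of extra steps per server packet rather than a full re-simulation of 20-30 ticks.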

    Read the article

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code?

    There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards, using widely accepted patterns/approaches for structures, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted "correct" standards, usage, etc.

    Usually the people paying for the development effort have dictated ahead of time which of these they value more. An organization that lives in a technical space will tend towards the efficient end; others will tend towards the effective. Developers often refuse to compromise their favored approach for the other. In my own experience, I have found that people with a formal education in software development tend towards the Efficient camp, while those who picked up software development more or less as a tool to get things done tend towards the Effective camp. These camps don't get along very well, and managing a team of developers who are not all in one camp is challenging.

    In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

    Read the article

  • Is there a version of the Arial or Tahoma font with monospaced digits and spaces?

    - by rossmcm
    The digits in the Arial font supplied with Windows are monospaced, in that they each take up the same horizontal space, but the font seems to have neglected to provide a "monospaced" version of the space character. This means that you can't format a column of digits right-justified in (say) 12 spaces and have the right-hand edge be aligned. For example:

                   1
                  12
                 123
                1234
               12345
             1234567
            12345678
           123456789
          1234567890

    works because the font used for code examples has spaces the same width as digits. This, however, doesn't work if the same text is displayed in Arial (I can't demonstrate because I can't figure out how to defeat SU's reformatting at the moment!). It just so happens that with Tahoma at 8 point you can cheat, because a space is exactly half the pixel width of a digit, but that is messy and very specific.

    Read the article

  • Is "White-Board-Coding" inappropriate during interviews?

    - by Eoin Campbell
    This is a somewhat subjective question, but I'd love to hear feedback/opinions from either interviewers or interviewees on the topic. We split our technical interview into 4 parts: Write Code, Read & Analyse Code, Design Session, and Code on the Whiteboard.

    For the last part, what we ask interviewees to do is write a small code snippet (4-5 lines) on the whiteboard and explain as they go through it. Let me be clear: the purpose is not to catch people out. We're not looking for perfect syntax; hell, it can even be pseudo-code. The point is to give them a very simple problem and see if their brain can communicate the solution to us. By simple problems I mean "Reverse a string", "FizzBuzz", etc.

    EDIT: Just with regard to the comment about pseudo-code — we always ask for an explicit language first. We're a .NET C# house. We've only said "pseudo-code" where someone has been blanking/really struggling with the code.

    My question is: "Is it inappropriate/unreasonable to expect a programmer to write a code snippet on a whiteboard during an interview?"
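
    For context, this is the scale of snippet meant here; a whiteboard-sized FizzBuzz in C# (a sketch, not a syntax test):

        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");  // divisible by 3 and 5
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }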

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should more properly be thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

    - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
    - These interdependencies are not obvious to the make utility, and can lead to races.
    - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

    - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
    - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
    - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight — it will be a long term, case by case project, but the overall trend is clear.

    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that makes up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas.

    In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top-level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero-tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

            Enables warning messages for libraries specified with the -l
            command line option that are found by examining the default
            search paths provided by the link-editor. If a libname value
            is provided, the default library warning feature is enabled,
            and the specified library is added to a list of libraries for
            which no warnings will be issued. Multiple -z assert-deflib
            options can be specified in order to specify multiple
            libraries for which warnings should not be issued.

            The libname value should be the name of the library file, as
            found by the link-editor, without any path components. For
            example, the following enables default library warnings, and
            excludes the standard C library:

                ld ... -z assert-deflib=libc.so ...

            -z assert-deflib is a specialized option, primarily of
            interest in build environments where multiple objects with
            the same name exist and tight control over the library used
            is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command-line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
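
    To make the pattern concrete, here is a rough sketch of the two pieces described above (the paths, object names, and the choice of libX11 are purely illustrative, not the actual OSnet rules):

        # Expose a non-OSnet dependency through the stub proto, so that it is
        # found via -L rather than via the link-editor's default paths.
        ln -s /usr/lib/libX11.so $WS/stubproto/lib/libX11.so

        # Link with the proto areas first; any fallback to the defaults is
        # fatal. Mirroring the manpage example, libc is excluded by name.
        ld -G -o libfoo.so foo.o \
            -L$WS/stubproto/lib -L$WS/proto/lib \
            -z assert-deflib=libc.so -z fatal-warnings -lX11 -lc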
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found underscore how difficult such mistakes can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • Why is "googlehosted.com" in the DNS records for our website after signing up for DDOS protection?

    - by Blake Nic
    Recently we had to get some DDoS protection for our website because of the large attacks we were seeing after getting a bit of popularity. We handed over our domain and hosting information to our DDoS protection provider. It worked perfectly, but I have a question.

    Our DNS records each have a Host, an Answer and a Type. The Host has our domain name in it. The Answer is this:

        SOMETEXTXXXX.dv.googlehosted.com.

    When I copy and paste it into my browser it gives me a 404 error, yet our website still loads and functions as it should. I don't understand why it would need this. I asked them about it and they said it is a method of DDoS protection, and that the other IPs are the reverse proxy (the other IPs give a 404 error too). Can anyone expand on this? How does it all tie together, and how does the browser know where to point the person, with all these reverse proxies and things I don't understand? Here is an image for reference:
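
    A rough sketch of what is probably going on (all names and addresses here are hypothetical): the record is an alias that steers browsers to the provider's reverse proxy, and the proxy picks the right backend from the Host header of each request. That is also why visiting the alias target or a bare proxy IP gives a 404: without your site's Host header, the proxy has no idea which site to serve.

        # The visible record is just an alias chain (illustrative output):
        dig +short www.example.com
        sometextxxxx.dv.googlehosted.com.
        203.0.113.10

        # A bare request to the proxy 404s, but the same IP with the right
        # Host header serves the site:
        curl http://203.0.113.10/
        curl -H "Host: www.example.com" http://203.0.113.10/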

    Read the article

  • Godaddy vs. Route53 for DNS

    - by tim peterson
    I have my website set up as an EC2 instance, and my DNS is currently GoDaddy. I'm considering switching to Amazon AWS Route 53 for DNS. The one thing I noticed, however, is that Route 53 charges monthly fees, but I never get any bills from GoDaddy. Obviously, nobody likes getting charged for something they can get for free. If GoDaddy is cheaper, can anyone confirm that the page load speed of an EC2 instance is actually better via Route 53 vs. GoDaddy? If it is not faster or cheaper, can someone point out other reasons it might make sense to make this switch? thanks, tim

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and are drawn using counter-clockwise culling. I have a problem with clockwise-culled objects, which are supposed to represent hollow objects that display their inside faces only.

    The image below shows a counter-clockwise-culled object (left) and a clockwise-culled object (right). The clockwise-culled object's faces display what would be displayed on the counter-clockwise faces. How can I get the lighting to light the inner faces of clockwise-culled objects, while continuing to light the outer counter-clockwise faces as normal? My lighting method is below:

        private void DeferredLighting(GameTime gameTime)
        {
            // Set the render target for the lights
            game.GraphicsDevice.SetRenderTarget(lightMap);

            // Clear the render target to (0, 0, 0, 0)
            game.GraphicsDevice.Clear(Color.Transparent);

            // Set the render states
            game.GraphicsDevice.BlendState = BlendState.Additive;
            game.GraphicsDevice.DepthStencilState = DepthStencilState.None;
            game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;

            // Set sampler state to Point as the Surface type requires it in XNA 4.0
            game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

            // Set the camera properties for all lights
            BaseLight.SetCameraProperties(game.ActiveCamera);

            // Draw the lights
            int numLights = lights.Count;
            for (int i = 0; i < numLights; ++i)
            {
                if (lights[i].Diffuse.W > 0f)
                {
                    lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap);
                }
            }

            // Resolve the render target
            game.GraphicsDevice.SetRenderTarget(null);
        }

    I have tried adjusting the render states, but no combination works for both objects.
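
    One possible approach, sketched under the assumption that the problem lies in the G-buffer pass rather than the light pass above: on the hollow objects the camera sees back faces, whose stored normals point away from it, so lighting is computed against the wrong hemisphere. Drawing those objects with the opposite cull mode and flipping the normal in the shader would make the inner faces light like front faces (FlipNormals is a hypothetical parameter you would add to your G-buffer effect):

        // Second G-buffer pass for hollow, inside-out objects.
        game.GraphicsDevice.RasterizerState = RasterizerState.CullClockwise;
        foreach (SceneObject hollow in hollowObjects)
        {
            // The G-buffer pixel shader negates the normal before writing it.
            hollow.Effect.Parameters["FlipNormals"].SetValue(true);
            hollow.Draw(gameTime);
        }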

    Read the article

  • MWS2K8R2: Enabling Media Sharing using Streaming Media Services Role

    - by TheLizardKing
    So I have a Microsoft Windows Server 2008 R2 machine that stores a large collection of media (mostly MP3s), and I want to be able to deliver these files using a server/client setup, with Windows Media Player being the client. I downloaded and installed the Streaming Media Services role. I even set up a publishing point with on-demand access. My issue is that I can connect using WMP12, but it only connects as more of a stream and not a shared library. I can pause/play/skip, as if it's a powerful radio station, which is OK in my book, but what I'd really like to do is control my music remotely, search for and play artists, maybe create playlists (not required but nice), and even connect it to an Xbox. Is the Streaming Media Services role not what I should be using for this? Would installing WMP and sharing using that mechanism be a better option? Any ideas?

    Read the article

  • New features for Expression Blend 4 Release Candidate

    - by kaleidoscope
    With Microsoft Expression Blend 4, you can create websites and applications based on Microsoft Silverlight 3 and Microsoft Silverlight 4, and desktop applications based on Windows Presentation Foundation (WPF) 3.5 with Service Pack 1 (SP1) and WPF 4. Expression Blend provides new support for prototyping, interactivity through behaviors, special Silverlight functionality, and on-the-fly sample data generation.

    Expression Blend includes new behaviors that are quickly and easily configured, and offers new sample data, behaviors, and project template features to support the Model-View-ViewModel (MVVM) pattern. The MVVM pattern is a way to structure a Silverlight or WPF application so that user interface (UI) objects are as decoupled as possible from the application's data and behavior. This makes it easier for design tasks and development tasks to be performed independently and without breaking each other. Essentially, your UI is the View. You bind objects in the View to properties and commands of the ViewModel, and the View can also call methods on the ViewModel.

    Other highlights:

    - Compatible with Silverlight 3 and WPF 3.5 with Service Pack 1 (SP1), and interoperable with Visual Studio.
    - New shapes: the Assets panel in Expression Blend contains a new Shapes category, including presets for the easy creation of arcs, arrows, callouts, and polygons.
    - New controls: Expression Blend has tooling support for the RichTextBox control in Silverlight.
    - XAML cleanliness: Expression Blend generates less XAML with respect to animations and animation-related properties.
    - MVVM project template: Expression Blend includes a new project template that offers a basic starting point for Model-View-ViewModel pattern applications.
    - Run project with CTRL+F5: to improve consistency with Visual Studio, you can now invoke the Run Project command by pressing either CTRL+F5 or F5.
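
    A minimal sketch of the ViewModel half of that pattern (plain C#; the class and property names are invented for illustration): the View binds to Name, and the property raises a change notification so the binding updates without the View ever touching the underlying data directly.

        using System.ComponentModel;

        public class PersonViewModel : INotifyPropertyChanged
        {
            private string name;

            public string Name
            {
                get { return name; }
                set { name = value; OnPropertyChanged("Name"); }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }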

    Read the article

  • phpmyadmin forbidden after changing config for my IP

    - by Jonathan Kushner
    I followed the phpMyAdmin setup and changed the config to "Require ip <my IP address>" and "Allow from <my IP address>", and it's still telling me "Forbidden: You don't have permission to access /phpmyadmin on this server." when I try to access the page in my browser (the server is not located on my machine). I installed everything using root. I also chmod 775 the entire phpMyAdmin folder. I'm running RHEL 6.1. Any idea what to do at this point? Here is my /etc/httpd/conf.d/phpMyAdmin.conf:

        <Directory /usr/share/phpMyAdmin/>
           <IfModule mod_authz_core.c>
             # Apache 2.4
             <RequireAny>
               Require ip myserveripaddress
               Require ip ::1
             </RequireAny>
           </IfModule>
           <IfModule !mod_authz_core.c>
             # Apache 2.2
             Order Deny,Allow
             Deny from All
             Allow from myserveripaddress
             Allow from ::1
           </IfModule>
        </Directory>

    Read the article

  • Blogging from Office RT

    - by Dennis Vroegop
    During the last Build conference all attendees were given a brand new, sparkling, exciting Surface RT device (I love that machine despite its name, but that's beside the point). On it came a version of Office 2013 RT, or better: the preview version. Now, I translated that term "Preview" to "Beta", which is OK, since I've been using a lot of beta products from Microsoft and they all were great.

    And then I wanted to post a blogposting from Word. I knew I could; I have been doing this for a long time (I prefer Live Writer, but that isn't available on Windows 8 RT). So I wrote the entry and hit "Publish". Instead of my blogsite I got a nice non-descriptive error telling me I couldn't post. So I fired up my other (Intel-based) Win8 tablet, opened Word RT Preview, it loaded my blogpost (you've got to love the automatic synchronization through SkyDrive), and I tried from that machine. Same error.

    So, I installed Live Writer (remember, the other machine is Intel-based) and posted from there. That worked like a charm. Apparently, there was something wrong with Word. I gave up and didn't think about it anymore. Yet… what you're reading now is written in Word 2013 RT on my Surface RT. So what did I do? Simple: I updated from the Preview version to the final version. That's all there was to it.

    So…. if you're still on the Preview I urge you to upgrade. You need to go to the "classic desktop update" window instead of going through the Windows Store App style update, since Office is a desktop system, but once you do that you'll have the full version as well. Happy blogging!

    Read the article

  • How do I stop Ubuntu from detaching minimize/maximize/close buttons?

    - by Shahbaz
    Some time ago I managed to get Ubuntu to keep the window menu bars in their windows rather than in the bar at the top of the screen (I'm not sure if this part is Unity or Compiz, or what's the difference). That was done by removing indicator-appmenu.

    Anyway, now everything is fine except one thing: if I have a window that is full screen, the minimize/maximize/close buttons are still grabbed by the bar at the top. Usually this doesn't cause a problem, because the upper-left corner of the full-screen window and the top of the screen are not too far apart. However, one thing happens to me a lot: I am working on something (programming), then I need to check some things from other places, so I open some windows, see what I want, and switch back to my work. Those windows are temporary, though, so at some point I want to close them. Now here's what happens: I have the focus on some window and I can't close the maximized window behind it unless I click on that window first, so that the buttons appear, and then close it.

    I couldn't find anything on the internet about this. Is this something that's hardcoded in Unity/Compiz/whatever, or is there actually a way to configure it?

    Read the article

  • I Blame SNMP!

    - by brendonpage
    Anyone who has been reading my blog will have noticed that I have deviated slightly from my original post plan! This post was meant to be on uploading files in Silverlight, so what happened, you may ask?

    Well, last weekend I had some friends over for a LAN, and one of them brought along a managed switch which had just been purchased for work. He proceeded to show me how cool it was, how he planned on improving his work network, and how it can be monitored remotely via SNMP. After this explanation he started to google for a free SNMP graphing tool. After a few hours of hearing disgruntled mutterings from him, I asked what was wrong, and he proceeded to rant about how he couldn't find any tools that suited his needs. It was at this point I thought the most dangerous thing a programmer can ever think: "I wonder how hard it would be to make one". Of course the answer at the time is always "It can't be that hard", and so started my journey into SNMP.

    I am still in the early stages of this journey so I don't have too much to report yet, but once I have finished the first version of my SNMP graphing tool I will definitely be posting about it! For now, if any of you are interested in doing SNMP development in C#, I would recommend looking at the #SNMP project on CodePlex (http://sharpsnmplib.codeplex.com/). It is the SNMP library I have decided to use, and thus far it works beautifully.
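
    To give a flavour of the library, here is a minimal "read sysDescr.0" sketch. This is written from memory of the project's samples, so treat the namespaces and exact signatures as assumptions and check the docs before relying on it:

        using System;
        using System.Collections.Generic;
        using System.Net;
        using Lextm.SharpSnmpLib;
        using Lextm.SharpSnmpLib.Messaging;

        class SnmpDemo
        {
            static void Main()
            {
                // SNMP v1 GET of sysDescr.0 with a 2-second timeout.
                var result = Messenger.Get(VersionCode.V1,
                    new IPEndPoint(IPAddress.Parse("192.168.1.1"), 161),
                    new OctetString("public"),
                    new List<Variable> { new Variable(new ObjectIdentifier("1.3.6.1.2.1.1.1.0")) },
                    2000);
                Console.WriteLine(result[0].Data);
            }
        }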

    Read the article

  • Pythonic use of the isinstance function?

    - by Pace
    Whenever I find myself wanting to use the isinstance() function I usually know that I'm doing something wrong and end up changing my ways. However, in this case I think I have a valid use for it. I will use shapes to illustrate my point, although I am not actually working with shapes. I am parsing XML configuration files that look like the following:

        <square>
            <width>7</width>
        </square>
        <rectangle>
            <width>5</width>
            <height>7</height>
        </rectangle>
        <circle>
            <radius>4</radius>
        </circle>

    For each element I create an instance of the Shape class and build up a list of Shape objects in a class called the ShapeContainer. Different parts of the rest of my application need to refer to the ShapeContainer to get certain shapes. Depending on what the code is doing, it might need just rectangles, or it might operate on all quadrangles, or it might operate on all shapes. I have created the following function in the ShapeContainer class (the actual function uses a list comprehension, but I have expanded it here for readability):

        def locate(self, shapeClass):
            result = []
            for shape in self.__shapes:
                if isinstance(shape, shapeClass):
                    result.append(shape)
            return result

    Is this a valid use of the isinstance function? Is there another way I can do this which might be more pythonic?
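
    For reference, the list-comprehension form referred to above would simply be:

        def locate(self, shapeClass):
            # Same filter as the expanded loop, written as a comprehension
            return [shape for shape in self.__shapes if isinstance(shape, shapeClass)]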

    Read the article

  • How can I convince my boss to invest into the developer environment?

    - by user95291
    Our boss said that if the developers made fewer mistakes, the company would have money for displays, servers, etc. An always-mentioned example is the late firing of an underperforming colleague, whose salary would have covered some of these expenses. On the other hand, it has happened a few times that it took days to free up some disk space on our servers, since we can't get any more disks. The cost in man-days was definitely higher than the cost of a new HDD. Another example is that we use 14-15" notebooks for development, and most developers only get external displays after they have spent a year at the company. The price of a 22-24" display is just a small fraction of a developer's annual salary.

    The devs say they like the company for other reasons (high-quality code, interesting projects, etc.), but these kinds of issues are not just time-consuming, they also demotivate them. From the developers' point of view, it seems the boss can always find some issue in the past that could have been handled better, so it feels pointless to work better in order to earn a second display/HDD/whatever. How can I convince my boss to invest more in the development environment? Is it possible to break this endless loop?

    Read the article

  • What is Cloud Computing?

    - by joelvarty
    This is a question that we discuss quite often at Edentity. It's one of those things, kind of like "web services", where the terminology has been thrown around by a ton of people and means a lot of different things.

    Here's my favorite diagram so far, which is a visual breakdown of the material presented here by NIST, visualized by the folks at the Cloud Security Alliance.

    What I like about this diagram is that it shows several different ways that we can differentiate our definitions of cloud computing, starting with the essential characteristics, of which "Broad Network Access" and "On-Demand Self-Service" (which are often used on their own to define cloud computing) are but a couple of the things that help make something "cloud".

    The most important section from my point of view is the middle one – the Service Models. This represents the different ways that cloud computing can be exposed, from the ground up. It can be an Infrastructure, a Platform, or a piece of Software that an end user interacts with.

    This is the future, folks.

    more later - joel

    Read the article

  • Unable to log into Windows XP Pro Domain Not Available

    - by Belliez
    I'm trying to access an old laptop I have, but when I attempt to log in at the Windows login screen I get the message "Unable to log in because is unavailable". This laptop is not on a domain or a network, and I do not know the computer name. I have blanked the passwords of the local administrator and user accounts using Offline NT Password and Registry Editor, but I am still unable to log in. Any advice would be appreciated, or if you can point me to the registry location I can edit or delete to remove the domain. Thanks

    Read the article

  • PASS Business Intelligence Virtual Chapter Upcoming Sessions (November 2013)

    - by Sergio Govoni
    Let me point out the upcoming live events, dedicated to Business Intelligence with SQL Server, that the PASS Business Intelligence Virtual Chapter has scheduled for November 2013.

    The "Accidental Business Intelligence Project Manager"
    Date: Thursday 7th November - 8:00 PM GMT / 3:00 PM EST / Noon PST
    Speaker: Jen Stirrup
    URL: https://attendee.gotowebinar.com/register/5018337449405969666

    You've watched The Apprentice with Donald Trump and Lord Alan Sugar. You know that the Project Manager is usually the one who gets fired. You've heard that Business Intelligence projects are prone to failure. You know that a quick Bing search for "why do Business Intelligence projects fail?" produces a search result of 25 million hits! Despite all this… you're now a Business Intelligence Project Manager – now what do you do?

    In this session, Jen will provide a "sparks from the anvil" series of steps and working practices in Business Intelligence Project Management. What about waterfall vs agile? What is a Gantt chart anyway? Is Microsoft Project your friend or a problematic aspect of being a BI PM? Jen will give you some ideas and insights that will help you set your BI project right: assess priorities, avoid conflict, empower the BI team and generally deliver the Business Intelligence project successfully!

    Dimensional Modelling Design Patterns: Beyond Basics
    Date: Tuesday 12th November - Noon AEDT / 1:00 AM GMT / Monday 11th November 5:00 PM PST
    Speaker: Jason Horner, Josh Fennessy and friends
    URL: https://attendee.gotowebinar.com/register/852881628115426561

    This session will provide a deeper dive into the art of dimensional modeling. We will look at the different types of fact tables and dimension tables, and how and when to use them. We will also look at some approaches to creating rich hierarchies that make reporting a snap. This session promises to be very interactive and engaging; bring your toughest dimensional modeling quandaries.

    Data Vault Data Warehouse Architecture
    Date: Tuesday 19th November - 4:00 PM PST / 7:00 PM EST / Wednesday 20th November 11:00 PM AEDT
    Speaker: Jeff Renz and Leslie Weed
    URL: https://attendee.gotowebinar.com/register/1571569707028142849

    Data vault is a compelling architecture for an enterprise data warehouse using SQL Server 2012. A well-designed data vault data warehouse facilitates fast, efficient and maintainable data integration across business systems. In this session Leslie and I will review the basics of enterprise data warehouse design, introduce you to the data vault architecture, and discuss how you can leverage new features of SQL Server 2012 to help make your data warehouse solution provide maximum value to your users.

    Read the article

  • How to implement physics and perspective effects on Android

    - by asedra_le
    I'm researching 2D games for Android in order to implement an Android game project. My project looks nearly like Paper Toss: instead of throwing a page, my game will throw a coin. Suppose that I have a coin placed in three dimensions with coordinates A(x,y,z). I throw that coin forward; after 1/100 of a second, the coin moves from A(x,y,z) to A'(x',y',z'). This gives me two problems to solve:

    1. Determine the formulas that can be used to compute the coordinates of the coin at time t. I'm still researching this problem and have no idea how to solve it yet.

    2. Map the three-dimensional points to two dimensions, and use those new (two-dimensional) coordinates to draw the coin on screen. I have found two solutions for this problem: orthographic projection and perspective projection.

    However, an old friend of mine said that OpenGL has support for solving problems like these. Does anybody have experience with problems like mine? Please help me :) Thanks for reading my question.
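
    For the first problem, the standard projectile-motion formulas may be all that's needed: ignoring air resistance, the coin moves at constant velocity horizontally and accelerates downward under gravity, and a pinhole-style perspective projection then maps 3D to screen coordinates by dividing by depth. A small sketch of both steps (plain Python; the axis conventions and focal distance d are assumptions):

        G = 9.81  # gravitational acceleration, m/s^2

        def position(p0, v0, t):
            # p0 = (x0, y0, z0) initial position, v0 = (vx, vy, vz) initial velocity;
            # y is up, z is depth away from the camera.
            x0, y0, z0 = p0
            vx, vy, vz = v0
            return (x0 + vx * t,
                    y0 + vy * t - 0.5 * G * t * t,   # quadratic drop under gravity
                    z0 + vz * t)

        def project(p, d):
            # Perspective projection with focal distance d: farther points
            # (larger z) land closer to the screen centre.
            x, y, z = p
            return (d * x / z, d * y / z)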

    Read the article

  • Wget - if / else download condition?

    - by Kai
    I want wget to prefer a certain file type over another if the files have the same base name. For example: if foo.ogg is available, don't download foo.mp3.

    The way I use wget so far to crawl/automatically download (if anyone is interested):

        wget -Dfoo.com -I /folder/ -r -l 1 -nc -A.ogg,.mp3 -i http://www.foo.com/folder/

    but this, of course, gets me .mp3 AND .ogg files. It often also gets me image files like .png, which I didn't want in the first place, and discards them afterwards. Any ideas?

    (Syntax explanation:
        -D: download only from this domain
        -I: download only from this subfolder of the domain
        -r: recursive (follow links and directory structure)
        -l 1: follow only 1 link deep
        -nc: no clobber = download only if the file doesn't exist
        -A: accept/download only *.ogg and *.mp3 files (discarding the necessary HTML files afterwards)
        -i: download URL / starting point)
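
    As far as I know, wget has no conditional accept rule, so one workaround is to let it fetch both types and then delete each .mp3 that has an .ogg sibling; a minimal sketch:

        # After the wget run: remove every mp3 for which a same-named ogg exists
        for f in *.mp3; do
            [ -e "${f%.mp3}.ogg" ] && rm -- "$f"
        done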

    Read the article
