Search Results

Search found 24879 results on 996 pages for 'prime number'.

  • Setup Linksys 3200 remote access

    - by Greg
    I'm trying to set up remote access for my Linksys 3200 so that I can configure it through the WAN port. I have turned on remote access, but when I try to access xxx.xxx.xxx.xxx:9999 I just get a 404 error. I have allowed RDP access to a computer behind the router and this works fine on the same IP address. Any ideas on what else I have to do to allow remote management access? UPDATE: I tried changing the port to 80 and it works. Change it back to any other number and it doesn't. The modem is set up with a DMZ to the router's IP. Why does it only work on port 80? BTW, I can't use port 80 because there is a website hosted behind the router.
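
    Since something is answering with a 404, a useful first step is to work out which device is replying. A hedged check with curl (address and port as in the question):

        curl -v http://xxx.xxx.xxx.xxx:9999/
        # compare the Server: header and the 404 body with the router's LAN web UI;
        # if they differ, the modem (not the Linksys) is answering on that port.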

    Read the article

  • What's a good, affordable router that will not give me problems when downloading torrents?

    - by Lirik
    I found several routers on newegg and they're in the $50-60 range, but I'm not sure if they'll handle the number of connections that are created when downloading torrents (100-300 seeds and around 50 peers). My roommate watches netflix movies, my brother and I download torrents, so my NETGEAR router ends up choking on the traffic and I have to restart it quite frequently. I've already posted a couple of questions on the topic and I've come to the conclusion that I need a new router. What are some routers that I should consider (my budget is in the $50 range)?

    Read the article

  • Differencing Disks in VirtualBox

    - by PhilPursglove
    I'm struggling to understand how to do differencing disks in VirtualBox v3.1.0. I've created a Windows 2008 Server, but now I want to use that as a base image for a number of other servers. The help file has a description of what differencing disks are, but I can't find where it actually tells you how to do it. In the Storage dialog for a server I found the Differencing Disks checkbox, but when I check it, I'd expect it to then ask which image should be the parent so I could select my base image. Any pointers you can offer would be greatly appreciated!
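
    A sketch of the command-line route, assuming a Windows 2008 base disk (file names are examples; older VirtualBox releases spell the second command "createhd"):

        # mark the base image immutable -- VirtualBox then creates a differencing
        # child automatically for every VM that attaches it:
        VBoxManage modifyhd win2008-base.vdi --type immutable
        # or create a differencing child explicitly (newer VBoxManage versions):
        VBoxManage createmedium disk --filename server1.vdi --diffparent win2008-base.vdi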

    Read the article

  • How to use nsupdate to create NAPTR record

    - by Jon Skarpeteig
    What's a working example for creating a NAPTR record using nsupdate against Bind9? man nsupdate says:

        update add {domain-name} {ttl} [class] {type} {data...}
            Adds a new resource record with the specified ttl, class and data.

    But I can't seem to find the correct format for NAPTR. My attempt:

        echo -e 'update add enum.example.com 60 IN NAPTR 1.1.1.1.1."u"."E2U+sip"."!^.*[email protected]!" .'"\nsend" | nsupdate

    results in:

        invalid rdata format: not a valid number
        syntax error
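
    For reference, NAPTR rdata is "order preference flags service regexp replacement", and order and preference are plain integers; the dotted "1.1.1.1.1." prefix is what trips the "not a valid number" parser error. A hedged rewrite of the attempt (the SIP URI is a made-up example; add a server/zone line and TSIG key as your setup requires):

        nsupdate <<'EOF'
        update add enum.example.com 60 IN NAPTR 100 10 "u" "E2U+sip" "!^.*$!sip:user@example.com!" .
        send
        EOF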

    Read the article

  • Nginx Fastcgi performance issues

    - by Barry
    I am running several fastcgi servers behind Nginx: 3 Nginx workers and 6 fastcgi servers as upstream backends. When I run load tests of 1 req per sec, I can clearly see that on average a reply takes 0.1 sec, but from time to time there are 3.1 sec responses. It is a suspiciously deterministic number, and it happens even at very small loads. Both CPU and memory are no issue at all. Any idea where this delay may be coming from? Any suggestions how to debug this? Many thanks, Barry.
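
    Not from the question itself, but worth noting: a delay that is consistently ~3 s looks a lot like the kernel's initial SYN-retransmission timeout, i.e. a dropped connection attempt toward a backend. A hedged way to check, assuming the backends listen on TCP (interface and port are assumptions):

        # watch for repeated SYNs toward the fastcgi backends:
        tcpdump -i lo -nn 'port 9000 and tcp[tcpflags] & tcp-syn != 0'
        # and look for listen-queue overflows, which produce exactly this pattern:
        netstat -s | grep -i listen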

    Read the article

  • How to Unban an IP properly with Fail2Ban

    - by psp
    I'm using Fail2Ban on a server and I'm wondering how to unban an IP properly. I know I can work with iptables directly:

        iptables -D fail2ban-ssh <number>

    But is there not a way to do it with fail2ban-client? The manual suggests something like:

        fail2ban-client get ssh actionunban <IP>

    But that doesn't work. Also, I don't want to do /etc/init.d/fail2ban restart, as that would lose all the bans in the list.
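
    A hedged answer, since it depends on the Fail2Ban version: 0.8.8 and later expose an unban action through the client; before that, deleting the iptables rule by number is the only way (IP below is a placeholder):

        # recent fail2ban (0.8.8+):
        fail2ban-client set ssh unbanip 192.0.2.10
        # older versions: find the rule number, then delete it from the jail's chain
        iptables -L fail2ban-ssh -n --line-numbers
        iptables -D fail2ban-ssh <number>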

    Read the article

  • .NET Reflector & .NET Reflector Pro 6.1 have been released

    - by Bart Read
    .NET Reflector 6.1 and .NET Reflector Pro 6.1 have been released. You can download them from: http://www.red-gate.com/products/reflector/index.htm .NET Reflector is a class browser and disassembler for .NET assemblies. .NET Reflector Pro is a Visual Studio debugging extension that allows you to step through third-party and framework assemblies as if they were built from your own source code. This release fixes several problems that were present in the 6.0 release:

    - Support for using a copy of Reflector.cfg stored alongside Reflector.exe has been re-enabled, so users upgrading from 5.x releases will not lose their settings.
    - Fixed an unhandled exception on exit of Visual Studio when the .NET Reflector add-in was used in conjunction with the TestDriven.NET add-in.
    - Added better support for dealing with framework assemblies, which only contain meta-data, in the "Referenced Assemblies" folder.
    - Fixed a problem where attempted decompilation with the CppCliLanguage add-in would lead to display of a page on the Red Gate website.
    - Added an option to activate .NET Reflector Pro to the .NET Reflector menu in Visual Studio, after feedback from a number of users that it was hard to figure out how to activate the product.

    For more details about the products please visit: http://www.red-gate.com/products/reflector/index.htm

    Read the article

  • SPARC64 VII+ Processor Core License Factor Reduced by 33%

    - by john.shell
    The Oracle processor core license factor has been a popular topic the last few months. For those partners new to Oracle software licensing, the processor core license factor determines the number of licensed CPUs that are required when running Oracle software (those charged on a per-CPU basis) on multi-core processors.

    My last entry talked about the core factor reduction for our T3 processor. The core license factor for our newly announced SPARC64 VII+ processor is 0.5, which is a 33% reduction from the 0.75 rate used with our SPARC64 VI and VII processors. As a worked example, a hypothetical 16-core VII+ server needs 16 x 0.5 = 8 processor licenses, where the same core count at the old 0.75 factor needed 12.

    What does this mean for our partners? Increased opportunity. This change, similar to our T3-based systems, means that our hardware is the preferred platform for Oracle software. Still a little dizzy on the breadth of Oracle's software offering? Do a simple scan of Oracle's software price lists. Consider this your target market. This change allows you to focus on total solution price or price/performance, not server prices or per-core performance (a standard IBM sales tactic).

    That's the offensive side of the game. Don't forget your defense. One of the biggest customer benefits around the M-Series is investment protection. The combination of a simple processor/board upgrade, along with a reduction in processor core license factor, makes upgrading one of the best financial moves for our customers.

    One reminder: the update to the processor core license factor only applies to the new VII+ processor - NOT the SPARC64 VI or VII processors. You can find the official table here.

    Read the article

  • Installing an EC2 (EBS based) instance on another AWS account

    - by imaginative
    We're moving from one AWS account to another. I have a couple of EC2 instances I would like to move over; these instances are EBS-based. Googling around, I'm seeing all sorts of convoluted answers for how to appropriately launch this EBS-based instance on a new account. Surely there has to be a simple way of going about this. As of right now, what I have done so far is back up the EBS volume as a snapshot. I then altered the permissions of the snapshot to allow the new AWS account number to access it. I am not sure where to go from here. Do I have to create an image from the EBS snapshot? Upload it to S3? What's the next step(s)?
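
    The modern tooling postdates this question, but the flow it implies - share the snapshot, copy it in the new account, register an AMI from the copy - looks roughly like this with today's AWS CLI (all IDs, names, and device paths are placeholders):

        # old account: grant the new account access to the snapshot
        aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
            --attribute createVolumePermission --operation-type add --user-ids 111122223333
        # new account: copy the shared snapshot so you own it, then register an AMI
        aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0
        aws ec2 register-image --name migrated-server --architecture x86_64 \
            --root-device-name /dev/sda1 \
            --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-your-copy-id}"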

    Read the article

  • Strange. Asterisk key plays random Windows sound when pressed

    - by Charles
    This is a new one on me. When I press the "asterisk" (*) button on my number pad (but not SHIFT+8), Windows makes either an "Exclamation" or a "Windows ding" sound. I haven't noticed a pattern to which sound is made.

    - Logitech K200 keyboard
    - No special key-mapping software or Logitech software running
    - Realtek sound to stereo through an optical cable
    - Visual Studio 2010, Chrome, Fiddler, WinRAR, Notepad++, and Dropbox running
    - No unusual behavior otherwise

    A solution isn't terribly important, but my curiosity is both piqued and stumped. This doesn't normally happen and nothing odd has taken place otherwise. Ideas?

    Read the article

  • Emulate "Go to Dekstop/Home/etc." behavior in OS X via AppleScript

    - by pattulus
    OS X has built-in support for going to certain folders (Home, Utilities, Desktop, etc.) via a shortcut. I wanted to emulate this behavior for the Downloads folder. The only thing missing from the script below is that it won't succeed when no window is open in the Finder (see the error message).

        tell application "Finder"
            activate
            set target of Finder window 1 to folder "Downloads" of folder "username" of disk "Macintosh HD"
        end tell

    Error message:

        error "Finder got an error: Can’t set Finder window 1 to folder \"Downloads\" of folder \"username\" of disk \"Macintosh HD\"." number -10006 from Finder window 1

    It would be great if you know about some kind of 'if-complement' that triggers opening the Downloads folder in case there is no window 1 open in the Finder. Thanks in advance.
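
    A minimal sketch of such a guard, assuming the same hard-coded path as the question (run from a shell via osascript, or paste the inner script into Script Editor):

        osascript <<'EOF'
        tell application "Finder"
            activate
            set downloadsFolder to folder "Downloads" of folder "username" of disk "Macintosh HD"
            if (count of Finder windows) is 0 then
                -- no window to retarget, so open a new one on the folder
                open downloadsFolder
            else
                set target of Finder window 1 to downloadsFolder
            end if
        end tell
        EOF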

    Read the article

  • Revocation status of DC can't be verified

    - by DotGeorge
    A domain controller within my forest was working fine (as the story usually goes). Then, suddenly, I can't log on with my smart card. Instead, I'm greeted with the following message:

        The system could not log you on. The revocation status of the domain controller
        certificate used for smart card authentication could not be determined.

    I literally have no idea what's happened here. As an attempted quick fix, I removed the root certificate which issued the smart card's certificate from the certificate store of both the client and the DC, then imported a newly exported one from the DC in question. Same issue. I've spotted a number of related articles on Microsoft's forums and an HP support document, but none of them sheds much light, as it's apparently a generic error message. Having said all of this, other smart cards (issued from other DCs) work fine. So I have no idea what's up with this one.
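
    A hedged starting point for exactly this error is to let Windows verify the DC certificate's revocation chain verbosely. Export the DC's certificate first and run this in an elevated command prompt (the path is an example):

        certutil -verify -urlfetch C:\temp\dc-cert.cer
        rem look for CRL or OCSP retrieval failures in the output; clearing the
        rem local revocation cache before retesting can also help:
        certutil -urlcache * delete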

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor-specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily - Intel's Designer's vol. 3b, section 16.11.1:

        "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

    - DateTime.Now: Good, but it only has a resolution of 16 ms, which may not be enough if you want more accuracy.

    For reporting the measured values:

    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse.

    In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if this method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format - a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807      1000000
        DataContract          4402      1000000

    Nearly a factor 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure that by setting the Runs value in our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net            85            1
        DataContract            24            1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000-40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short in this discipline. A relatively unknown profiler (totally undeservedly so) is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK.

    When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today.

    Thanks to GC and other random stuff it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none happened before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers start working again as well.

    As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all.

    Next time: Commercial profiler shootout.

    Read the article

  • Dvorak hotkey remapping in vim, worth it?

    - by Bryan Ward
    I've been trying to learn the Dvorak keyboard layout of late, and I have been making some good progress this time around. The trouble I am finding now is that all of my hotkeys are in the wrong places. As a vim user this is particularly troubling. I have found good resources for switching the bindings back so that they are in the familiar places in vim, but I wonder if this is worth it. I also use set -o vi in my ~/.zshrc file so that I can use the familiar bindings in the terminal as well. hjkl navigation is also featured in a number of other applications, such as less. For those of you out there who have successfully made the switch: is it worth remapping things to be familiar again, or is it better in the long run to just deal with weirdly placed hotkeys?

    Read the article

  • Dynamic group membership to work around no nested security group support for Active Directory

    - by Bernie White
    My problem is that I have a number of network administration applications, like SAN switches, that do not support nested groups from Active Directory Domain Services (AD DS). These legacy administration applications use either LDAP or LDAPS. I am fairly sure I can use Active Directory Lightweight Directory Services (AD LDS) and possibly Windows Authorization Manager to work around this issue; however, I am not really sure where to start. I want to end up with:

    - A single group that can be queried over LDAP/LDAPS for all its direct members
    - An LDAP proxy for user name and password credentials to AD DS
    - An easy way to admin the group; ideally the group would aggregate the nested membership in AD DS
    - A native solution using freely available components from the Windows stack

    If you have any suggestions or solutions that you have previously used to solve this issue, please let me know.

    Read the article

  • apache performance improvements and maxclients

    - by updog
    I know this has been asked a few (thousand) times around the internet, but I was hoping someone in the know might be able to comment on my particular setup. I have a web server hosting one site (php/codeigniter) with a wordpress blog in a subdirectory. The server has 2GB RAM and a 3GHz CPU, and I have offloaded the static assets to CloudFlare, which has reduced bandwidth for the actual server by almost 75%. The problem I have is that when an email campaign is sent out that links to the site or blog, it slows down. Below are my settings in apache2.conf. Average apache process size is 80M and there is 1.5GB available for apache.

        <IfModule mpm_prefork_module>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    I have already set up and installed apc and built some caching into the site, and used w3totalcache on the blog. The number of concurrent users is around 2-300 when there is a campaign; are there any further optimisations before…
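
    A rough sanity check on those numbers, as a sketch (the process name may be apache2 or httpd depending on distro):

        # average resident size per apache child, measured live (RSS is column 8):
        ps -ylC apache2 | awk '$8+0 > 0 {sum += $8; n++} END {if (n) printf "avg %.0f MB over %d procs\n", sum/n/1024, n}'
        # with ~1.5 GB free for apache and ~80 MB per child, about 18-19 children
        # fit in RAM, so MaxClients 20 is already at the edge; raising it without
        # adding RAM would push the box into swap during a campaign.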

    Read the article

  • Apache out of memory on

    - by Sherif Buzz
    Hi all, I have a VPS with 768 MB RAM and a 1.13 GHz processor. I run a php/mysql dating site, the performance is excellent, and server load is generally very low. Sometimes I place ads on Facebook, and at peak times I can get 100-150 clicks within a few seconds. This causes the server to run out of memory:

        Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp ....

    and all users receive an error 500 page. I am just wondering if this sounds reasonable or not - to me, 100-150 does not seem to be a number that should cause apache to run out of memory. Any advice/recommendations on how to diagnose the issue are highly appreciated.
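
    For what it's worth, with suphp each request forks its own PHP process, so 100-150 near-simultaneous clicks on a 768 MB box is a plausible out-of-memory scenario. A hedged way to watch it happen during the next campaign (the process name is an assumption - check what suphp actually spawns on this box):

        # watch free memory and swap activity live:
        vmstat 1
        # in a second terminal, count PHP processes at the peak:
        watch -n1 'ps -C php-cgi --no-headers | wc -l'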

    Read the article

  • How to securely generate memorable passwords?

    - by Tim
    Whenever I need new passwords I use some tools to generate them, preferably memorable passwords, but I've been wondering how secure this might actually be. Using the xkcd random number generator is probably pretty bad, and cat /dev/random is probably pretty good, but generating memorable passwords seems a bit more tricky. Whenever a program generates a memorable password, it only uses a subset of the total password space available, and it is not clear to me how big this space is. Of course a long password should help in this case, but if the 'memorable' part of the program is too predictable, your passwords are not very good in the end. TL;DR: how secure are memorable password generators, given the fact that 'memorable' passwords are a subset of the total password space? Some tools I know of:

    - pwgen - seems OK, but the passwords are not too memorable
    - Mac Password Assistant - generates memorable passwords, but it is unclear to me how this works
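
    One way to reason about the question is to use a generator whose subset of the password space is explicit. A hedged sketch using the system word list (its size varies per machine, hence the second command):

        # four words from a ~100k-word dictionary give about 4 * log2(100000),
        # roughly 66 bits of entropy, however memorable the result looks:
        shuf --random-source=/dev/urandom -n 4 /usr/share/dict/words | paste -sd '-'
        # count your dictionary to compute the real figure:
        wc -l /usr/share/dict/words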

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A true asynchronous service operation is not one that returns void, but one that is marked as IsOneWay=true using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once; they may be queued up on the service side to be dispatched one at a time, all according to the service's configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.

    Read the article

  • nginx terminates connection after 65k bytes

    - by David Wolever
    I've got nginx configured as a front-end to a Python application running under gunicorn, but nginx is terminating connections after about 65k of data have been sent. For example, I've got a view which looks like this:

        def debug_big_file(request):
            return HttpResponse("x" * 500000)

    But when I access that URL through nginx, I only get 65283 bytes:

        $ curl https://example.com/debug/big-file | wc
        …
        curl: (18) transfer closed with outstanding read data remaining
        0       1       65283

    Note that everything works as expected when accessing gunicorn directly:

        $ curl http://localhost:1234/debug/big-file | wc
        …
        0       1       500000

    The relevant nginx config:

        location / {
            proxy_pass http://localhost:1234/;
            proxy_redirect off;
            proxy_headers_hash_bucket_size 96;
        }

    And nginx version 1.7.0. Some other facts:

    - The number of bytes is consistent from request to request, but it varies based on the content (I first noticed it with a large PNG file, which was cut off after 65,372 bytes, not 65,283)
    - 110k bytes are sent correctly (ie, "x" * 110000 returns all 110,000 bytes), but 120k bytes are not
    - tcpdump suggests that nginx is sending a RST packet to gunicorn:
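
    A pattern worth knowing here, hedged since the question doesn't confirm it: ~64 KB is what fits in nginx's default proxy buffers, and anything beyond that is spooled to proxy_temp_path on disk. If the worker cannot write there, the response is cut off at exactly that boundary. A check along these lines (log path, temp path, and owner vary by distro and install method):

        grep -i 'proxy_temp\|Permission denied' /var/log/nginx/error.log | tail
        # if writes to the proxy temp directory are failing, fix its ownership:
        chown -R nginx:nginx /var/lib/nginx/proxy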

    Read the article

  • OpenGL ES 2 shaders for drawing buildings and roads like Google Maps does

    - by Pris
    I'm trying to create a shader that'll give me an effect similar to what buildings and roads look like on 3D Google Maps. You can see the effect interactively if you enable WebGL at maps.google.com, and I also found a couple of screenshots that illustrate what I'm trying to achieve. Things I noticed:

    - There's some kind of transparency thing going on between the roads/ground and the buildings, but not between the buildings themselves. It might be that they're rendering the ground and roads after the buildings with the right blend functions to achieve that effect.
    - If you look closely, you'll see parts of the building profiles have an outline. The roads also have nice clean outlines. There are a lot of techniques for outlining things with shaders, but I'm curious to find out what might have been used in this case, considering mobile hardware and a large number of entities with outlines (roads and buildings).
    - I'm assuming that for the lighting, some sort of simple diffuse per-vertex shader is being used for the buildings, though I could be wrong.

    I'm especially curious about the 'look' they achieved with buildings (clean, precise outlines/shading). It reminds me a little of what you'd see when designing things with CAD applications like SolidWorks. I'd appreciate any advice on achieving this kind of look with ES 2 shaders.

    Read the article

  • Profile Picture Thumbnails, Following Projects, and Fork Collaboration

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website last week.

    Profile Picture Thumbnails

    We have added a way to select a thumbnail from your profile picture, which will start appearing next to usernames across the site. Managing your thumbnail is simple: from your profile page, choose Edit your profile. On the left side, you'll find an intuitive widget for choosing a profile picture, uploading it, and editing your thumbnail image. If you previously uploaded a profile picture, we've used that to generate a starter thumbnail. We welcome your suggestions and ideas for areas where seeing user thumbnails would be useful or interesting.

    Following Projects

    Based on some feedback we've received recently, we have taken several steps to help you discover and follow interesting and popular projects on CodePlex:

    - The homepage now surfaces the top projects users are following from the previous 7 days.
    - When you visit any project homepage, you can see at a glance how many people follow the project.
    - When you visit the People tab for any project, you will see both the project contributors and the 25 most recent project followers.

    Fork Collaboration

    We now support enabling collaborators on a fork, based on a large number of user requests. From the Source Code management page for your fork, you will now see a new option on the right side: to add a collaborator, type in a username and click Add. All fork collaborators will have the ability to push to the fork and send/cancel pull requests. To remove a collaborator, hover over the user and click on the X that appears.

    The CodePlex team values your feedback and is frequently monitoring Twitter, our Discussions, and Issue Tracker for new features or problems. If you've not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I've created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I'm reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx

    Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I've found that at least 3 books are necessary to get the right amount of information to me. This is a "technical" work, meaning that it deals with technology and not business, writing or other facets of my career. I'll have a mix of all of those as I read along. I chose this work in addition to others I've read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it's dated, many examples normally translate. I also saw that it had pretty good reviews.

    What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn't as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left in a hybrid fashion on premise, and connected to from the Azure application. Even so, the examples were very useful. If you're looking for a good "starter" Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It's not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being "stale". Even so, I found this a useful book, which I believe will help me work with Azure better.

    Raw Notes: I take notes as I read, calling that process "reading with a pencil". I find that when I do that I pay attention better, and record some things that I need to know later. I'll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.

    Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  

    Read the article

  • spawn-fcgi/ fast CGi php crashes without traces in logs, on Gentoo

    - by user39046
    Hello, I recently moved from Apache to an Nginx/FastCGI solution. I had it running on a Fedora system with no problems, but since I moved everything to Gentoo, the spawn-fcgi / FastCGI PHP daemon dies, and I can't find any error reports in /var/log/messages, so I don't know why this happens. I've seen that FastCGI is set up somehow differently on Gentoo than on the Fedora distro, as it has different conf files and init.d startup scripts. Can someone help me make it more stable? The number of requests I get isn't any different from what I had on Fedora, and I use the default conf that comes with the distro... and after about some hours it simply dies. Thank you very much.
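
    A hedged hunch worth ruling out: php-cgi exits on purpose after PHP_FCGI_MAX_REQUESTS requests (500 by default), and plain spawn-fcgi does not respawn it - a classic cause of the daemon "dying" after a few hours with nothing in the logs. Values and paths below are illustrative; on Gentoo the equivalents live in /etc/conf.d/spawn-fcgi:

        PHP_FCGI_MAX_REQUESTS=5000 PHP_FCGI_CHILDREN=4 \
            spawn-fcgi -a 127.0.0.1 -p 9000 -u nginx -g nginx -- /usr/bin/php-cgi
        # a supervisor that restarts dead children (or the php-fpm SAPI)
        # is the more robust long-term fix.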

    Read the article

  • Are there any tools for monitoring individual Apache virtual hosts in real-time?

    - by Dave Forgac
    I'm looking for a way to monitor and record Apache traffic, separated by virtual host. I am currently using Munin to capture this and other data for the entire server; however, I can't seem to find a way to do this by vhost. This link describes using a module called mod_watch, which is apparently no longer in development: http://www.freshnet.org/wordpress/2007/03/08/monitoring-apaches-virtualhost-with-munin/ The file that is listed as being compatible with Apache 2.x is reported to have problems with missing vhosts and reporting data correctly. Does anyone know of a reliable way to determine real-time traffic per vhost? If I can find this it should be easy enough to write a new Munin plugin. Edit: What I'd really like to see is something similar to the Apache server-status scoreboard page, with the number of connections/requests separated by virtual host. This would give me the ability to check which vhost may be experiencing a spike in traffic in real time, and would also provide the data needed for a Munin module (or some alternative performance monitoring/analysis system).
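
    Lacking mod_watch, a common workaround is to log the vhost name in the access log and tail it live. The LogFormat below is standard Apache syntax (%v is the canonical ServerName); the log path is an example:

        #   LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_combined
        #   CustomLog /var/log/apache2/access.log vhost_combined
        # then a live per-vhost request count is a one-liner:
        tail -f /var/log/apache2/access.log | awk '{count[$1]++; print $1, count[$1]}'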

    Read the article
