Search Results

Search found 20852 results on 835 pages for 'intellij idea'.


  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache2 and mod_fcgid. I let it spawn up to five processes. But I soon get messages like these in the Apache logs:

        [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
        [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    and then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

        [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
        [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    at which point it refuses to respond to requests from the outside world. This doesn't seem to happen with my other FastCGIs, which all use the same Apache config:

        <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          IPCConnectTimeout 20
          MaxProcessCount 5
          DefaultMaxClassProcessCount 2
          DefaultMinClassProcessCount 1
        </IfModule>

    Any idea what causes it?
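
    A hedged sketch of the change most often suggested for this symptom (not from the original post): if the Python FastCGI backend exits on its own after serving a fixed number of requests (flup-based apps do this by default), mod_fcgid can keep counting the dead process against MaxProcessCount. Recycling the backends from the Apache side first is one option; directive names follow the pre-2.3.6 mod_fcgid spelling used above, and the values are illustrative:

        <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          IPCConnectTimeout 20
          MaxProcessCount 5
          DefaultMaxClassProcessCount 2
          DefaultMinClassProcessCount 1
          # Recycle backends before the app can exit by itself;
          # both values below are illustrative, not from the original post.
          MaxRequestsPerProcess 500
          IdleTimeout 300
        </IfModule>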


  • gitweb on Ubuntu Server as Location/Directory instead of Virtual Host

    - by mbx
    Since DynDNS no longer resolves subdomains for free, I have to serve gitweb from a subdirectory of Apache2. The usual suspects, such as Pro Git, suggest something like:

        <VirtualHost *:80>
          ServerName gitserver
          DocumentRoot /srv/gitosis/repositories/
          <Directory /srv/gitosis/repositories/>
            Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
            AllowOverride All
            order allow,deny
            Allow from all
            AddHandler cgi-script cgi
            DirectoryIndex gitweb.cgi
          </Directory>
        </VirtualHost>

    I tried various variations using Location and Directory tags with different attribute combinations, without any notable success. My first idea was close to the following:

        Alias /gitweb /srv/gitosis/repositories
        <Location /gitweb>
          AuthType Basic
          AuthName "gitweb Repository view"
          AuthUserFile /etc/apache2/gitweb.passwd
          Require valid-user
          SSLRequireSSL
          SetEnv GITWEB_CONFIG /etc/gitweb.conf
          AddHandler cgi-script cgi
          DirectoryIndex /usr/lib/cgi-bin/gitweb.cgi
        </Location>

    Apache is in the gitosis group, and the repositories are readable and executable for that group. So, what is the intended way to get gitweb running on Ubuntu 10?
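
    A hedged sketch of the layout that usually works on Ubuntu (paths vary by release; gitweb.cgi may live in /usr/share/gitweb or /usr/lib/cgi-bin): alias the URL to the directory that contains gitweb.cgi, and point gitweb at the repositories through its config file rather than aliasing the repository directory itself.

        # /etc/apache2/conf.d/gitweb (sketch; adjust paths to your package layout)
        Alias /gitweb /usr/share/gitweb
        <Directory /usr/share/gitweb>
            Options +FollowSymLinks +ExecCGI
            AddHandler cgi-script .cgi
            DirectoryIndex gitweb.cgi
            SetEnv GITWEB_CONFIG /etc/gitweb.conf
        </Directory>

    In /etc/gitweb.conf, $projectroot would then be set to /srv/gitosis/repositories, so DirectoryIndex names a file that actually exists inside the aliased directory.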


  • IIS7 returns 403.1 (execute access denied) for image file

    - by Kristoffer
    I have a web app running in IIS7 on Windows Server 2008. There is a virtual directory pointing to a shared folder "/Content/Data" on another machine (running Windows Server 2003), as well as a real directory "/Content/Images" on the local machine (web app sub folder). Accessing images in "/Content/Images" is no problem, but when an image (e.g. a JPEG file) in the "/Content/Data" is accessed by a browser, IIS returns this error: HTTP Error 403.1 - Forbidden: Execute access is denied. However, the web app can read and write to / from it. I assume IIS and ASP.NET are running under different user accounts? Does anyone have an idea on what I have to do to make it work? I have set the permissions on the shared folder to Everyone Full Control, with no luck.
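
    A couple of hedged things to try from the command line (not from the original question): 403.1 points at the Execute flag in the handler access policy for that virtual directory, and UNC content usually also needs fixed credentials rather than the worker process identity. Site name, path and account are placeholders:

        rem Restrict the handler access policy for the UNC virtual directory to Read/Script.
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Content/Data" /section:handlers /accessPolicy:"Read, Script"

        rem Have IIS access the share with an explicit account instead of the app pool identity.
        %windir%\system32\inetsrv\appcmd.exe set vdir "Default Web Site/Content/Data" /userName:DOMAIN\svc-content /password:secret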


  • Fun with RadCaptcha for ASP.NET AJAX and OCR software

    A friend of mine was evaluating OCR software and finally decided to go with FineReader. I was curious what would happen if we put the RadCaptcha control in. Would the advanced OCR manage to decode it or not? At first he showed me a test run with the RadCaptcha demo description, to get an idea of the basic output. Naturally, the captured description text was no problem – only a few characters were misread, and those were then corrected with the spellcheck. Next, the real test was performed. These were only a couple of the results, but there is no need to post the rest of the tests – none of the RadCaptcha images were recognized by the OCR software. Here are the CaptchaImage settings used in the tests:

    - Background Noise Level: Low (default value)
    - Line Noise Level: Low (default value)
    - Font Warp Factor: Low (Medium is the default value)
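
    For reference, a hedged sketch of how those settings map to RadCaptcha markup; the exact tag shape is from memory of Telerik's ASP.NET AJAX controls, so verify it against the current documentation:

        <telerik:RadCaptcha ID="RadCaptcha1" runat="server" ErrorMessage="Code not valid">
            <CaptchaImage BackgroundNoiseLevel="Low" LineNoiseLevel="Low" FontWarpFactor="Low" />
        </telerik:RadCaptcha>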


  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    Time measurement method – potential pitfalls:

    - Stopwatch – The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily; from Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now – Good, but it only has a resolution of 16ms, which may not be enough if you want more accuracy.

    Reporting method – potential pitfalls:

    - Console.WriteLine – OK if not called too often.
    - Debug.Print – Are you really measuring performance with Debug builds? Shame on you.
    - Trace.WriteLine – Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure both the momentum and the location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere, which will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in an unpredictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again with tracing only the specific methods, to check if this method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! – Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            // Reset the stream before each run so we measure only serialization cost.
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net           807    1,000,000
        DataContract         4,402    1,000,000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value in our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net            85    1
        DataContract            24    1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point where the code-gen cost is amortized by the faster serialization performance is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking deeper at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK.

    When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore.

    When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
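
    A hedged aside, not from the original article: protobuf-net exposes Serializer.PrepareSerializer<T>(), which triggers the one-time code generation up front, so the startup cost described above lands at application start instead of inside the first measured call. A minimal sketch, assuming the Data type from the sample above:

        // Pay protobuf-net's one-time code-gen cost at startup (sketch; Data as defined above).
        Serializer.PrepareSerializer<Data>();

        // Subsequent serializations run at full speed from the very first call.
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize<Data>(ms, new Data { IntValue = 100 });
        }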


  • Some SharePoint NDA Information

    - by Sahil Malik
    Many years ago, at the last-to-last-to-last MVP summit, Microsoft was kind enough to share with us what they were thinking wayyyyyyyyyyyy ahead! I especially remember John Durant talking about the specific enhancements planned for the SharePoint 2010 development experience. If you haven't seen John Durant talking on stage, the guy has more enthusiasm than Tiger Woods in Amsterdam! The energy of his presentations is simply amazing. So, I pulled out my phone, and I snapped a picture! And I emailed that picture to everyone in the MVP land, and Microsoft land, saying "We have evidence", i.e. here are the promises that were made, and dammit, we'll see by the time you release SP2010 how many of these you actually release. Here is the picture, ladies and gentlemen. It's a good karate-chop action shot, isn't it? Of course, we were all immediately warned not to share any of this seriously strictly NDA information at the time. Well, now that the information is out in the world, I can finally share this small tidbit of how far ahead Microsoft is thinking in their plans. Frankly, I wouldn't be surprised if today they have a very clear idea what SharePoint vNext will be all about, or should I say vNextvNext? Have fun!


  • SQL SERVER – Installing AdventureWorks Sample Database – SQL in Sixty Seconds #010 – Video

    - by pinaldave
    SQL Server has so many enhancements and features that quite often I feel like playing with various features and trying out new things. I often come across situations where I want to try something new but do not have sample data to experiment with. Also, just like any sane developer, I do not run my new experiments on a production server. Additionally, when it comes to a new version of SQL Server, there are cases when no relevant sample data is even available on the development server. In this kind of scenario a sample database can be very handy. Furthermore, many SQL books, online blogs and articles contain scripts written against the AdventureWorks database. I often receive requests asking where people can get the sample database, as well as how to restore it. In this sixty-second video we discuss exactly that. You can get the various resources used in this video from http://bit.ly/adw2012.

    Related links: SQL SERVER – Install Samples Database Adventure Works for SQL Server 2012 | SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012 | SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification | SQLAuthority News – Download Microsoft SQL Server 2012 RTM Now

    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com)
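
    Not from the video itself, but a sketch of the attach step it walks through once the sample .mdf is downloaded; the file path is an assumption and will differ per machine:

        -- Attach the downloaded AdventureWorks2012 data file (path is illustrative).
        CREATE DATABASE AdventureWorks2012
        ON (FILENAME = 'C:\Data\AdventureWorks2012_Data.mdf')
        FOR ATTACH_REBUILD_LOG;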


  • VirtualBox two networks slow

    - by Petr Marek
    I am running an Ubuntu Server guest on a Win 7 host, and am running a WEBrick server (RoR dev). If I have just a host-only network, everything works fine and the browser response is instant. However, if I add a second network (NAT) so that the server can connect to the internet (for various updates etc.), the host-to-guest access gets really slow. I can't use a bridged connection. I am using port 3000 (RoR WEBrick server) and connecting to the guest via an internet browser on this port (e.g. http://192.168.56.102:3000). Any idea what could be causing this? If I ping the IP from the host console, I get < 0ms. Here are the settings (relevant info is in English; "Povoleno vše" means "Everything is allowed"):
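
    Two hedged things worth ruling out (not from the question): WEBrick performing a reverse DNS lookup per request once the NAT adapter appears, and VirtualBox's NAT DNS handling. The VM name below is a placeholder; both flags are standard VBoxManage options on recent VirtualBox releases:

        # Let VirtualBox proxy DNS through the NAT adapter...
        VBoxManage modifyvm "Ubuntu Server" --natdnsproxy1 on
        # ...or hand DNS queries to the host's resolver instead.
        VBoxManage modifyvm "Ubuntu Server" --natdnshostresolver1 on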


  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need for that. Having native functions available also helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely accepted. As these new functions are available and awareness spreads, we should start using them. Here is the quick list of the new functions; each one is covered in a SQLAuthority article and on its MSDN page: CUME_DIST, FIRST_VALUE, LAST_VALUE, LEAD, LAG, PERCENTILE_CONT, PERCENTILE_DISC, PERCENT_RANK.

    I also enjoyed three different puzzles during the course of this series, which gave a clear idea of the SQL Server 2012 analytic functions: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY | SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG | SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    This series will always be a dear series to me, as during it I went through the very unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com)
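
    A hedged illustration (mine, not from the series) of the kind of T-SQL these functions replace; the table and column names are made up:

        -- Compare each day's sales with the previous and next day (illustrative schema).
        SELECT SalesDate,
               Amount,
               LAG(Amount)  OVER (ORDER BY SalesDate) AS PrevDayAmount,
               LEAD(Amount) OVER (ORDER BY SalesDate) AS NextDayAmount
        FROM dbo.DailySales
        ORDER BY SalesDate;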


  • Fortigate - Accessing a Virtual Server address from several interfaces

    - by Jeremy G
    I am setting up a new application in its own DMZ on our Fortigate 300C firewalls. I have defined a load-balancing configuration for part of the application, and this works fine for traffic coming in from our internal network. However, I would also like this application to be reachable from other DMZs, for inter-application traffic, and from the SSL VPN interface. I can't seem to define the required policy, and it seems this is due to Virtual Servers being bound to the client interface on the Fortigate rather than the server interface (so my virtual IP is not accessible from any of these other interfaces). Does anyone have an idea how I might go about this? I guess I could create other virtual IPs for each interface, but this gets complicated to handle, as clients need to change the address they use depending on how they are connecting. Thanks, Jeremy G
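
    One hedged avenue, not from the thread, and written from memory of FortiOS 4.x CLI (verify against your firmware's docs): a VIP's external interface can reportedly be set to "any", so that it is no longer bound to a single client-side interface:

        config firewall vip
            edit "my-virtual-server"    # placeholder name
                set extintf "any"       # stop binding the VIP to one interface
            next
        end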


  • Outlook receives winmail.txt attachment instead of Excel, PowerPoint or Word attachments from Lotus Notes senders

    - by Philippe
    OK, so the title pretty much says it all. We are offering a Hosted Exchange solution for our customers. Everything is working fine, except for one customer complaining that he is receiving winmail.dat or winmail.txt attachments instead of the actual Word, Excel or PowerPoint attachments he should be receiving, but only when the messages come from specific European senders who use Lotus Notes. I know that the problem usually runs the other way, from Outlook senders to other mail clients, but this is not the first time they have informed me of this, and I can't find anything on the matter so far. Has anyone ever had and solved this problem? If not, does anyone have any idea regarding this? I had solved this problem a few months ago by upgrading Outlook to SP2 and then uninstalling the service pack using Microsoft's service pack removal tool. It seemed that only the latest SP1 version would work, but not SP2. The problem is that now nothing works at all. Thank you for your help, Philippe
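
    Not from the original question, but for completeness: the knob almost always mentioned in winmail.dat threads on Exchange 2007 is per-remote-domain TNEF conversion. Note that it governs TNEF on mail Exchange sends to that domain, so it may or may not apply to this inbound Lotus Notes case:

        # Exchange Management Shell: disable TNEF (the winmail.dat container)
        # for the default remote domain.
        Set-RemoteDomain -Identity "Default" -TNEFEnabled $false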


  • How to troubleshoot a remote WMI query/access failure?

    - by Roman
    Hi, I'm using PowerShell to query a remote computer in a domain for a WMI object, e.g.:

        gwmi -computer test -class win32_bios

    I get this error message: "Value does not fall within the expected range". Executing the query locally under the same user works fine. It seems to happen on both Windows 2003 and 2008 systems. The user that runs the shell has admin rights on the local and remote server. I checked WMI and DCOM permissions as far as I know how to do this; they seem to be the same on a server where it works and on another where it does not. I think it is not a network issue: all needed ports are open, and it also happens within the same subnet. When sniffing the traffic we see the following errors:

        RPC: c/o Alter Cont Resp: Call=0x2 Assoc Grp=0x4E4E Xmit=0x16D0 Recv=0x16D0
        Warning: GssAPIMechanism is not found, either caused by not reassembled, conversation off or filtering.

    and an error message from Kerberos:

        Kerberos: KRB_ERROR - KDC_ERR_BADOPTION (13) The option code in the packet is 0x40830000

    Any idea what I should look into?
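
    A few hedged variants (not from the question) that can help separate name resolution, credential and DCOM authentication problems; "test" stands in for the real host name, and all parameters are standard Get-WmiObject options:

        # Try the fully qualified name in case NetBIOS resolution hands back a bad value.
        gwmi -ComputerName test.yourdomain.local -Class win32_bios

        # Pass credentials explicitly instead of inheriting the shell's token.
        gwmi -ComputerName test -Class win32_bios -Credential (Get-Credential)

        # Force packet privacy in case the DCOM authentication level is the sticking point.
        gwmi -ComputerName test -Class win32_bios -Authentication PacketPrivacy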


  • Profile Picture Thumbnails, Following Projects, and Fork Collaboration

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website last week.

    Profile Picture Thumbnails

    We have added a way to select a thumbnail from your profile picture, which will start appearing next to usernames across the site. Managing your thumbnail is simple. From your profile page, choose Edit your profile. On the left side, you’ll find an intuitive widget for choosing a profile picture, uploading it, and editing your thumbnail image. If you previously uploaded a profile picture, we’ve used that to generate a starter thumbnail. We welcome your suggestions and ideas for areas where seeing user thumbnails would be useful or interesting.

    Following Projects

    Based on some feedback we’ve received recently, we have taken several steps to help you discover and follow interesting and popular projects on CodePlex:

    - The homepage now surfaces the top Projects Users are Following from the previous 7 days.
    - When you visit any project homepage, you can see at a glance how many people follow the project.
    - When you visit the People tab for any project, you will see both the project contributors and the 25 most recent project followers.

    Fork Collaboration

    We now support enabling collaborators on a fork, based on a large number of user requests. From the Source Code management page for your fork, you will now see the collaborator widget on the right side. To add a collaborator, type in a username and click Add. All fork collaborators will have the ability to push to the fork and send/cancel pull requests. To remove a collaborator, hover over the user and click on the X that appears.

    The CodePlex team values your feedback and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.


  • About HDD enclosure

    - by kmitnick
    Hey guys, how are you doing? I have this 3.5" IDE enclosure, and it works great. I mean, I love the idea of enclosures (not the power feeding thing :) – by the way, couldn't I just insert a rechargeable battery to feed it power when I am unable to find a power outlet?). Anyway, my question is: when I finish using the enclosure I safely remove it under Windows, or umount it under Linux, and after that I'm confused about whether to turn it off or not. When I turn it off, the HDD suddenly stops spinning, as in a power failure, not like when it was an internal drive and the PC shut down normally. So is it OK to turn it off the way I've just described? Regards, ~Abed


  • How to create a tunnel to utilize for a telnet connection.

    - by Z12
    The scenario is as follows: Machine A is located behind the client's firewall. The machine runs telnetd. It is a Linux machine with Python 2.5.4 installed. I do not know the IP address of the router; the firewall is closed for incoming connections, but outgoing connections are open. Machine B (a Windows machine) is a server with a well-known IP address. I can install any programs I want on either machine. The idea is that I want Machine A to open a socket to Machine B. Then I want to hold that socket and use it to run a telnet session from Machine B to Machine A's telnetd server. Is there any freeware that does this? Thoughts? Thanks!
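
    One hedged way to do this without writing any code, assuming Machine A can run an SSH client and Machine B an SSH server (the host name and ports are illustrative): an SSH reverse tunnel opens the outbound socket from A and holds it, exactly as described above.

        # On Machine A (behind the firewall): connect out to B and map B's local
        # port 2323 back through the tunnel to A's telnetd on port 23.
        ssh -N -R 2323:localhost:23 user@machine-b.example.com

        # On Machine B: telnet "into" Machine A over the held-open socket.
        telnet localhost 2323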


  • DVD-RW Drive not recognizing a DVD+R disc

    - by unknown (yahoo)
    Hi everyone, I am having trouble burning DVD+R discs. My OS is Vista, and I have used this burner and these same discs in the past. I haven't had the need to do so in months, and now that I come back to create some backups, my DVD/RW drive doesn't recognize a brand new DVD+R disc. These discs are the same ones I have used in the past (same pack, even). Anyone have any idea what this might be? Maybe a Vista upgrade, or something that I downloaded in the last few months, could have thrown something off? Thanks in advance.


  • Exchange 2007 CCR: Logs not replicating to passive node partition

    - by yum_tacos4u
    In my environment I have set up Exchange 2007 in a CCR cluster, mirroring our main servers to a set of servers in passive mode. One of the partitions on the passive node that I set up for the Exchange 2007 logs has faulted, causing the partition to be unreadable. I have replaced the partition on the passive node and set up the drive to mirror the one in active mode, but the logs have not been replicating since the change. Is there any way to force the replication of the logs to the new partition? Any idea why the logs are not replicating? Any help or comments are appreciated, and thanks in advance.
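
    A hedged sketch of the usual recovery path (not from the question): after replacing the disk, reseed the passive storage group copy from the Exchange Management Shell. The storage group identity below is a placeholder.

        # Suspend replication, reseed the passive copy, then resume log shipping.
        Suspend-StorageGroupCopy -Identity "SERVER\First Storage Group"
        Update-StorageGroupCopy  -Identity "SERVER\First Storage Group" -DeleteExistingFiles
        Resume-StorageGroupCopy  -Identity "SERVER\First Storage Group"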


  • How can I fix apt-get autoremove wanting to uninstall most of my packages?

    - by Stefano
    In Synaptic I changed my packages from 'manually installed' to 'automatically installed' (they had actually been installed automatically, not manually). Now they are marked for autoremoval. I tested it with sudo apt-get autoremove, and the result is shown below (a reduced version, because it lists almost all packages). I remember I had the same issue last year and solved it via the Ubuntu forums, but the forum is down and I cannot reach the post! Does anyone have any idea how to fix this?

        sudo apt-get autoremove
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          unity-asset-pool unity-greeter unity-lens-applications unity-lens-files unity-lens-music
          unity-lens-photos unity-lens-video unity-scope-gdrive unity-scope-musicstores
          unity-scope-video-remote unity-services unity-tweak-tool unity-webapps-amazoncloudreader
          unity-webapps-common unity-webapps-facebookmessenger unity-webapps-gmail
          unity-webapps-googledocs unity-webapps-googleplus unity-webapps-launchpad
          unity-webapps-linkedi xserver-xorg-input-wacom xserver-xorg-video-all xserver-xorg-video-ati
          xserver-xorg-video-cirrus xserver-xorg-video-fbdev xserver-xorg-video-intel
          xserver-xorg-video-mach64 xserver-xorg-video-mga xserver-xorg-video-modesetting
          xserver-xorg-video-neomagic xserver-xorg-video-nouveau xserver-xorg-video-openchrome
          xserver-xorg-video-qxl xserver-xorg-video-r128 xserver-xorg-video-radeon xserver-xorg-video-s3
          xserver-xorg-video-savage xserver-xorg-video-siliconmotion xserver-xorg-video-sis
          xserver-xorg-video-sisusb xserver-xorg-video-tdfx xserver-xorg-video-trident
          xserver-xorg-video-vesa xserver-xorg-video-vmware xul-ext-unity xul-ext-webaccounts
          xul-ext-websites-integration y-ppa-manager yad zenity zenity-common zip
        0 upgraded, 0 newly installed, 1440 to remove and 0 not upgraded.
        After this operation, 3,853 MB disk space will be freed.
        Do you want to continue [Y/n]?
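
    A hedged way out (not from the question): flipping the affected packages back to 'manually installed' clears the autoremove flag. The explicit package names below are illustrative; the blunt second form protects everything apt currently considers automatic.

        # Mark individual packages as manually installed...
        sudo apt-mark manual unity-greeter zenity zip

        # ...or, bluntly, everything apt currently flags as auto-installed.
        apt-mark showauto | xargs sudo apt-mark manual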


  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

    - Sun Blade X6270
    - 2x LSISAS1068E SAS controllers
    - 2x Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
    - Fedora Core 12
    - 2.6.33 release kernel from FC13 (also tried with the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. Here is the multipath configuration:

        multipath {
            rr_min_io 100
            uid 0
            path_grouping_policy multibus
            failback manual
            path_selector "round-robin 0"
            rr_weight priorities
            alias somealias
            no_path_retry queue
            mode 0644
            gid 0
            wwid somewwid
        }

    I tried values of 50, 100, and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's, to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

        echo 2 > /proc/irq/24/smp_affinity
        echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

        taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle, and all the other cores are waiting on I/O, as one would expect.
        Cpu0  : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id,  7.5%wa, 0.0%hi, 0.2%si, 0.0%st
        Cpu1  : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id,  0.2%wa, 0.0%hi, 6.0%si, 0.0%st
        Cpu2  : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id,  0.1%wa, 0.0%hi, 4.8%si, 0.0%st
        Cpu3  : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu4  : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id,  4.9%wa, 0.0%hi, 8.1%si, 0.0%st
        Cpu5  : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu6  : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu7  : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu8  : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu9  : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above, I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
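
    Two hedged checks (not from the post) that help localize where the aggregate number dstat reports is being lost; the grep pattern depends on the SAS driver's name in /proc/interrupts, so treat it as illustrative:

        # Per-device extended stats in MB/s every 10s (sysstat package): shows
        # whether one path/PHY saturates while others idle.
        iostat -xm 10

        # Confirm the controller IRQs really stayed pinned to the chosen cores.
        grep -i mptsas /proc/interrupts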


  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx

    I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work, in order of increasing fidelity:

    1. Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else.

    2. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on.

    3. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history, so people can easily see what might have been discussed while they were away from their computers.

    4. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication.

    5. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team.

    When a conversation gets escalated, that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later.
    Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move.

    Working Asynchronously

    Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before, but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available.

    Going back to my list from the beginning of this post for a second, I characterize items #1-3 as “asynchronous” modes of communication, while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication, for two primary reasons:

    1. You can often find a higher fidelity way to convey your message without holding a synchronous conversation.
    2. Asynchronous modes of communication are (usually) persistent and searchable.

    You Don’t Have to Broadcast Live

    Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make, and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system, along with a text-based description of what’s in the recording that can be searched for later, can be a great way to effectively communicate with your team asynchronously.

    I Said What?!?

    Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago, to remember what was decided.
    Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision, because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful.

    When and How To Escalate

    I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is: when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication:

    - Take notes – This is huge, and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with > 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult, but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them.

    - Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way, when the folks from your QA team pick up the story to test a few days later, they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work.

    Communicating Well is Hard

    At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea.
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.


  • Is HTML 5 “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development of line-of-business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post.

    Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note: although the instructions for each game tell you to press the A key to start, press the Z key instead.) Here's the link: http://www.kesiev.com/akihabara

    As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms

    Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, I can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development.

    So what’s new in HTML 5, specifically, that makes sites like this possible? The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order. Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5:

    - 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script. As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.
      They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate.

    - Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically.

    - Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, and limiting valid input on a field to dates, email addresses or a list of values. There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly.

    - Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available.

    - User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code.

    Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games.

    What do players in the media world think about this? From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think. Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet. YouTube, on the other hand, already has an experimental HTML 5-based version of their site. TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices. And the New York Times’ Web site now embeds some video clips without resorting to Flash. They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser.

    What do media-focused developers think about all this? I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design, whose primary focus has been to help marketing directors get traction online. The firm’s client roster includes the likes of Time, Inc., Scholastic and PBS. Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.”

    A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector. When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded: “Not at all! In particular, not for media and advertising customers! These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.”

    Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack. For whatever reason, each firm seems to see the other’s toolset as a more viable choice.
    But both agree that the developer tool story around HTML 5 is deficient. Pinto explains: “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.“ Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.”

    Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool will directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs.

    Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.”

    I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms.

    HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.
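
    To make that feature list concrete, here is a minimal hedged sketch of my own (not from any of the sites mentioned) touching four of the areas above: native video, 2D canvas drawing, local storage and user-defined data attributes. The media file name is a placeholder.

        <!DOCTYPE html>
        <html>
        <body>
          <!-- Native video, no plug-in (placeholder file name) -->
          <video src="clip.mp4" controls width="480"></video>

          <!-- 2D drawing surface -->
          <canvas id="c" width="200" height="100"></canvas>

          <!-- User-defined data via data-* attributes -->
          <ul id="stories">
            <li data-section="sports" data-rating="4">Headline</li>
          </ul>

          <script>
            var ctx = document.getElementById('c').getContext('2d');
            ctx.fillRect(10, 10, 100, 50);                 // simple 2D drawing

            localStorage.setItem('lastVisit', Date.now()); // client-side storage
          </script>
        </body>
        </html>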


  • httpd not starting with systemd on F17

    - by malfy
    Title says it all. This is a fresh Fedora 17 system running on a Xen hypervisor. No idea why it won't start.

        [root@box~]  uname -a
        Linux box.localhost 3.5.4-2.fc17.i686.PAE #1 SMP Wed Sep 26 22:10:23 UTC 2012 i686 i686 i386 GNU/Linux
        [root@box~]  cat /etc/redhat-release
        Fedora release 17 (Beefy Miracle)
        [root@box~]  systemctl enable httpd.service
        ln -s '/usr/lib/systemd/system/httpd.service' '/etc/systemd/system/multi-user.target.wants/httpd.service'
        [root@box~]  systemctl start httpd.service
        Job failed. See system journal and 'systemctl status' for details.
        [root@box~]  systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
            Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled)
            Active: failed (Result: exit-code) since Fri, 19 Oct 2012 22:43:37 -0500; 3s ago
           Process: 18225 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=226/NAMESPACE)
            CGroup: name=systemd:/system/httpd.service

        Oct 19 22:43:37 box.localhost systemd[18225]: Failed at step NAMESPACE spawning /usr/sbin/httpd: No such file or directory

        [root@box~]  ls -al /usr/sbin/httpd
        -rwxr-xr-x 1 root root 343496 Apr 30 04:56 /usr/sbin/httpd
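
    Not part of the original question: status=226/NAMESPACE means systemd failed to set up the unit's filesystem namespace (F17's httpd.service ships with PrivateTmp=true), and on Xen guests this is frequently traced to /tmp or /var/tmp being a symlink or otherwise unmountable. A hedged way to test that theory:

        # Copy the unit so the packaged file stays intact, then disable PrivateTmp.
        cp /usr/lib/systemd/system/httpd.service /etc/systemd/system/httpd.service
        # In the copy, change:  PrivateTmp=true  ->  PrivateTmp=false
        systemctl daemon-reload
        systemctl start httpd.service
        # If it starts now, fix /tmp: it must be a real directory or mount, not a symlink.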

    Read the article

  • With WPF and Silverlight against cancer

    - by Laurent Bugnion
    MVPs are well known for their good hearts (as the GeekGive initiative shows) and Client App Dev MVP Gregor Biswanger is no exception. At the latest MVP summit (beginning of March 2011), he brought along a DVD about WPF 4 and Silverlight 4 and asked a few Microsoft superstars to sign it. Right now, the DVD is being auctioned on eBay and of course the proceeds will go to a charitable cause: the German League against Cancer (Deutsche Krebshilfe). The post is in German and English (scroll down for the English text). This sounds like a great idea, and considering who signed it, it is going to be a real collectible:

    Scott Hanselman (Principal Program Manager Lead in Server and Tools Online)
    Tim Heuer (Program Manager for Microsoft Silverlight)
    Rob Relyea (Principal Program Manager Lead - Client Platform WPF & Silverlight)
    Pete Brown (Developer Division Community Program Manager - Windows Client)
    Eric Fabricant (Program Manager WPF)
    Jeff Wilcox (Silverlight Senior SDE)
    Jeffrey R Ferman (SDET Visual Studio Client Dev Tools)
    Chan Verbeck (Expression Blend Team)
    Yaniv Feinberg (Expression Blend Team)
    Douglas Olson (Director Dev Expression)
    Samuel W. Bent (Principal Software Design Engineer WPF)
    John Papa (Technical Evangelist for Silverlight)

    So if you feel that you could make a generous gesture, go ahead and take a look at the auction, and talk about it around you. Let's prove again that geeks rule, also when it comes to giving to a good cause! Cheers! Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t), who asks the question "Why So Serious?" in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun!

    I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!

    One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)

    My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin-level logins on that server.

    So here's a standard template for a logon trigger to address those pesky sa users:

    CREATE TRIGGER SA_LOGIN_PRIORITY
    ON ALL SERVER
    WITH ENCRYPTION, EXECUTE AS N'sa'
    AFTER LOGON
    AS
    IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%'
        RETURN;
    -- interesting stuff goes here
    GO

    What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:

    WAITFOR DELAY '23:59:59';

    Or:

    EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'

    Or:

    EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
    EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
    EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
    EXEC msdb.dbo.sp_start_job @job_name=N'`';

    Really, I don't want to spoil your own exploration, try it yourself! The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger; they don't work well in combination, and a cleanup sketch follows below. (Had to redo all the following code when SSMS locked up.)

    Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW!
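    The post doesn't spell that cleanup step out, so here is a minimal sketch of it, using the trigger name from the template above:

    -- Disable the server-level trigger but keep its definition around
    DISABLE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;

    -- Or remove it entirely
    DROP TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;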
Here's how RG can do that:

USE master;
GO
CREATE FUNCTION dbo.SA_LOGIN_PRIORITY()
RETURNS sysname
WITH SCHEMABINDING, ENCRYPTION
AS
BEGIN
    RETURN CASE
        WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
        THEN N'SA_LOGIN_PRIORITY'
        ELSE N'default'
    END
END
GO

CREATE RESOURCE POOL SA_LOGIN_PRIORITY
WITH (
     MIN_CPU_PERCENT = 0
    ,MAX_CPU_PERCENT = 1
    ,CAP_CPU_PERCENT = 1
    ,AFFINITY SCHEDULER = (0)
    ,MIN_MEMORY_PERCENT = 0
    ,MAX_MEMORY_PERCENT = 1
--  ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014
);

CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
WITH (
     IMPORTANCE = LOW
    ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
    ,REQUEST_MAX_CPU_TIME_SEC = 1
    ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
    ,MAX_DOP = 1
    ,GROUP_MAX_REQUESTS = 1
)
USING SA_LOGIN_PRIORITY;

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
ALTER RESOURCE GOVERNOR RECONFIGURE;

From top to bottom:

1. Create a classifier function to determine which pool the session should go to. More info on classifier functions.
2. Create the pool and provide a generous helping of resources for the sa login.
3. Create the workload group and further prioritize those resources for the sa login.
4. Apply the classifier function and reconfigure RG to use it.

I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.

Some interesting ideas I didn't have time for but expect you to get to before me:

- Set up different pools/workgroups with different settings and randomize which one the classifier chooses
- Do the same but base it on time of day (Books Online example covers this)...
- Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.

And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:

ALTER RESOURCE GOVERNOR DISABLE;

That will let you go in and either fix (or drop) the pools, workgroups and classifier function.

So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered.

Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them.

Fortunately there are Event Notifications!
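One verification step the post leaves implicit: after the RECONFIGURE, you can confirm where sessions actually land by joining the sessions DMV to the Resource Governor workload group DMV. A quick sketch:

-- Which workload group has each current user session been classified into?
SELECT s.session_id,
       s.login_name,
       s.program_name,
       wg.name AS workload_group
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS wg
    ON s.group_id = wg.group_id
WHERE s.is_user_process = 1;

An sa session (outside SQL Agent) should land in SA_LOGIN_PRIORITY; everything else should show default.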
Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things.

So here's the setup:

USE msdb;
GO
CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
WITH ENCRYPTION
AS
DECLARE @x XML, @message nvarchar(max);
RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
IF @x.value('(//LoginName)[1]','sysname')=N'sa'
   AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
BEGIN
    -- interesting activation procedure stuff goes here
END
GO

CREATE QUEUE SA_LOGIN_PRIORITY_q
    WITH STATUS=ON, RETENTION=OFF,
    ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);

CREATE SERVICE SA_LOGIN_PRIORITY_s
    ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en
    ON SERVER WITH FAN_IN
    FOR AUDIT_LOGIN
    TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database';
GO

From top to bottom:

1. Create the activation procedure for the event notification queue.
2. Create the queue to accept messages from the event notification, and activate the procedure to process those messages when received.
3. Create the service to send messages to that queue.
4. Create the event notification on AUDIT_LOGIN events that fires the service.

I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.

So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:

SET @message = @x.value('(//HostName)[1]','sysname')
    + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname')
    + N' as SA at ' + @x.value('(//StartTime)[1]','sysname')
    + N' using the ' + @x.value('(//ApplicationName)[1]','sysname')
    + N' program. That''s why you''re getting this message and the attached pornography which'
    + N' is bloating your inbox and violating company policy, among other things. If you know'
    + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL '
    + @x.value('(//SPID)[1]','sysname')
    + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.';

EXEC msdb.dbo.sp_send_dbmail
    @recipients=N'[email protected]',
    @subject=N'SA Login Alert',
    @query_result_width=32767,
    @body=@message,
    @query=N'EXEC sp_readerrorlog;',
    @attach_query_result_as_file=1,
    @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg';

I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned; I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.
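The post doesn't cover cleanup, so here is a teardown sketch for backing the wiring out after testing (drop order matters, since the notification references the service and the service references the queue):

USE msdb;
GO
DROP EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER;
DROP SERVICE SA_LOGIN_PRIORITY_s;
DROP QUEUE dbo.SA_LOGIN_PRIORITY_q;
DROP PROCEDURE dbo.SA_LOGIN_PRIORITY_act;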

    Read the article

  • Bluetooth device won't re-add after being removed (Windows 7)

    - by HericDenis
    I've got an SBH52 headset. It works fine on my Android phone, but when I tried to use it with my Windows 7 PC it wouldn't work, because Microsoft Windows doesn't have native support for A2DP connections. One of the answers on the previous link said that removing the device from "Devices and Printers", running this software and then re-adding the device would solve the problem. But the big problem is that they won't pair again: I hopelessly tried the suggested fixes from Windows, and nothing worked. By the way, it is Windows 7 (x64); it just looks like Windows 8. I had this problem once many years ago with another PC and Bluetooth device. I remember it had something to do with drivers that were not removed, and that I deleted them from somewhere; as far as I remember it worked, but I have no idea how. In short, how can I pair it again with my PC?

    Read the article

< Previous Page | 287 288 289 290 291 292 293 294 295 296 297 298  | Next Page >