Search Results

Search found 20785 results on 832 pages for 'idea'.


  • /var/run/httpd.pid missing...

    - by user38043
    Recently, httpd stopped working on one of our web servers and I haven't been able to find the problem. Today I sat down and went through every directory referenced in httpd.conf and found an issue: /var/run/httpd.pid is missing from the folder. All other files are there and seem to be fine. I cannot create a new file with the same name in vi, and I have no idea what could have caused this. I imagine it was caused by a cold reboot at some stage, as no other extraordinary processes were running on this server at the time it went down. I am running CentOS 3. How can I reinstate this file?
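    Worth noting: httpd writes its pid file itself at startup, so the missing file is normally a symptom of the daemon being down rather than the cause. A minimal recovery sketch, assuming the stock CentOS init scripts:

      apachectl configtest                      # validate httpd.conf before starting anything
      service httpd start                       # a successful start recreates /var/run/httpd.pid
      ls -l /var/run/httpd.pid                  # should exist again once httpd is up
      touch /var/run/test && rm /var/run/test   # as root; rules out a read-only or damaged /var/run

    That last check matters because being unable to create the file in vi points at a filesystem or mount problem left over from the cold reboot, not at anything Apache did.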

    Read the article

  • Custom Profile Provider with Web Deployment Project

    - by Ben Griswold
    I wrote about implementing a custom profile provider inside your ASP.NET MVC application yesterday. If you haven’t read the article, don’t sweat it.  Most of the stuff I write is rubbish anyway. Since you have joined me today, though, I might as well offer up a little tip: you can run into trouble, like I did, if you enable your custom profile provider inside an application which is deployed using a Web Deployment Project.  Everything will run great on your local machine and you’ll probably take an early lunch because you got the code running in no time flat and the build server is happy and all tests pass and, gosh, maybe you’ll just cut out early because it is Friday after all.  But then the first user hits the integration machine and, that’s right, yellow screen of death. Lucky you, just as you’re walking out the door, the user kindly sends the exception message and stack trace:

      Value cannot be null.
      Parameter name: type
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

      Stack Trace:
      [ArgumentNullException: Value cannot be null. Parameter name: type]
         System.Activator.CreateInstance(Type type, Boolean nonPublic) +2796915
         System.Web.Profile.ProfileBase.CreateMyInstance(String username, Boolean isAuthenticated) +76
         System.Web.Profile.ProfileBase.Create(String username, Boolean isAuthenticated) +312

    User error?  Not this time. Damn! One hour later… you notice the harmless “Treat as library component (remove the App_Code.compiled file)” setting on the Output Assemblies tab of your Web Deployment Project. You have no idea why, but you uncheck it.  You test and everything works great both locally and on the integration machine.  Application users think you’re the best and you’re still going to catch the last half hour of happy hour.  Happy Friday.
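    For context, a custom profile provider is typically wired up in web.config along these lines (a minimal sketch; the type names WebApp.CustomProfile and WebApp.CustomProfileProvider are hypothetical):

      <!-- sketch only: provider and profile type names are hypothetical -->
      <profile enabled="true" defaultProvider="CustomProfileProvider"
               inherits="WebApp.CustomProfile">
        <providers>
          <clear />
          <add name="CustomProfileProvider"
               type="WebApp.CustomProfileProvider"
               connectionStringName="ApplicationServices" />
        </providers>
      </profile>

    The stack trace fits this wiring: ProfileBase.Create ends up calling Activator.CreateInstance on the type named in the inherits attribute, so if the Web Deployment Project's assembly merge leaves that type unresolvable, CreateInstance receives null and throws exactly the ArgumentNullException shown above.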

    Read the article

  • Ubuntu mount hard drive confusing

    - by Fresheyeball
    I'm new to Linux and have a home server set up running Ubuntu. In the UI it's very easy to mount my additional internal hard drives: I just double-click on them. Since I have made this server headless, I now need to mount via the command line. How can I replicate the very simple double-click GUI behavior? So far all the information I've found is very complex. Ubuntu auto-generated folders for each drive under /media, and I can see the hard drives under /dev, but I have no idea which is which, as the hardware is identical between them. I also don't know how they are formatted. Thanks in advance for any advice you have.
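    A sketch of the command-line equivalent, with example device names and mount points:

      sudo fdisk -l                       # list disks and their partitions
      sudo blkid                          # show each partition's filesystem type and UUID
      sudo mkdir -p /media/disk1          # any mount point works; /media/disk1 is an example
      sudo mount /dev/sdb1 /media/disk1   # /dev/sdb1 is an example partition
      # to mount automatically at every boot, add a line like this to /etc/fstab:
      # UUID=<uuid-from-blkid>  /media/disk1  ext4  defaults  0  2

    blkid answers both open questions here: it tells identical-looking drives apart by UUID and reports how each one is formatted.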

    Read the article

  • Cannot apply unity --reset after modifying files

    - by Alex Cline
    So I have an idea of what I did wrong; I am just not sure how to fix it. I used the Unity Glass mod: http://www.omgubuntu.co.uk/2012/07/unity-glass-offers-refined-new-look-for-the-unity-launcher After removing it, I cannot reset Unity and it does not work. Even after purging Unity and reinstalling it, I cannot seem to replace the missing files.

      $ unity --reset
      WARNING: Unity currently default profile, so switching to metacity while resetting the values
      unity-panel-service: no process found
      Checking if settings need to be migrated ...no
      Checking if internal files need to be migrated ...no
      Backend     : gconf
      Integration : true
      Profile     : unity
      Adding plugins
      Initializing core options...done
      compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1c00027
      Initializing composite options...done
      Initializing opengl options...done
      Initializing decor options...done
      Initializing vpswitch options...done
      Initializing snap options...done
      Initializing mousepoll options...done
      Initializing resize options...done
      Initializing place options...done
      Initializing move options...done
      Initializing wall options...done
      Initializing grid options...done
      I/O warning : failed to load external entity "/home/arcline/.compiz/session/10b624e5c8f98c5325134625607758338300000051770001"
      Initializing session options...done
      Initializing gnomecompat options...done
      Initializing animation options...done
      Initializing fade options...done
      Initializing unitymtgrabhandles options...done
      Initializing workarounds options...done
      Initializing scale options...done
      compiz (expo) - Warn: failed to bind image to texture
      Initializing expo options...done
      Initializing ezoom options...done
      (compiz:7038): Gtk-WARNING **: Theme parsing error: gnome-panel.css:28:11: Not using units is deprecated. Assuming 'px'.
      (compiz:7038): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
      Segmentation fault (core dumped)
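    Since an apt purge does not touch per-user state under $HOME, a common approach at the time was to clear the user-level Compiz/Unity settings as well. A hedged sketch; the paths are the usual ones for 12.04-era Unity with the gconf backend shown in the output above, so back up before deleting:

      rm -rf ~/.compiz ~/.config/compiz-1            # per-user compiz state (the mod's leftovers)
      gconftool-2 --recursive-unset /apps/compiz-1   # reset compiz keys in gconf
      unity --reset-icons                            # reset the launcher icons
      setsid unity                                   # restart the shell detached from this terminal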

    Read the article

  • Files Corrupted on System Restore

    - by Yar
    I restored my OS X today by copying the system over from a backup. Most things seem to be working, but every single git repo gives pretty much the same error:

      fatal: object 03b45161eb27228914e690e032ca8009358e9588 is corrupted

    I have tried chowning, doing everything as sudo or root... I have no idea what to try next. This would be a normal git question except that it's on many repos. Ideas? Note: I'm using git 1.7.0.3 and I was probably using 1.7.0 before. Edit: Tried with 1.7.0.2 and it made no difference. Edit: Even when copying any of the repos I get this strange message:

      cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/Pickers/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted
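    A hedged sketch for narrowing this down: git fsck maps the damage, and since the cp failure mentions extended attributes, the OS X xattr tool is worth a look too (the remote URL below is an example):

      git fsck --full     # list every corrupt or missing object in one repo
      # inspect attributes on one of the bad loose objects
      xattr -l .git/objects/03/b45161eb27228914e690e032ca8009358e9588
      # if any remote or other clone is intact, re-cloning is usually the
      # simplest repair; local-only commits can then be cherry-picked across
      git clone server:path/to/repo.git repo-fresh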

    Read the article

  • local app opening instead of ssh forwarded app over x

    - by The Journeyman geek
    I have a custom install of Ubuntu 9.10: xorg (the Intel driver) and its deps, icewm, xde, and swiftfox from the swiftfox repos. I'm trying to start an SSH-forwarded session of swiftfox from another system, which has the plain vanilla Firefox from the repos, with ssh -x [ipaddress] and then starting swiftfox from the command line. When I start it, though, it opens up the local copy of Firefox instead of the copy of swiftfox on the other box. I have NO idea what's wrong... swiftfox doesn't open on the remote box, I am definitely on the remote box's terminal, and there's no way whatsoever it should open a local copy. I'm wondering what's wrong.
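    Two things are worth checking here. First, in OpenSSH the lowercase -x flag disables X11 forwarding; the enabling flag is uppercase -X. Second, Mozilla-based browsers use an X-based remote protocol: if a browser instance already owns your display, a newly launched one just hands off to it, which produces exactly this "local copy opens" symptom, and -no-remote forces a separate instance. A sketch, assuming swiftfox accepts the standard Firefox flags:

      ssh -X user@remotebox   # uppercase -X requests X11 forwarding
      echo $DISPLAY           # should print something like localhost:10.0, not :0
      swiftfox -no-remote     # don't delegate to a browser already owning the display

    The remote sshd also needs "X11Forwarding yes" in its /etc/ssh/sshd_config.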

    Read the article

  • Rackmount temperature/humidity control

    - by Evan Plaice
    This may seem like a strange question because it involves a non-traditional approach. What I'm looking for is a standard rackmount cooling/humidity control module. The idea is to build a portable server rack (in a case) that can be deployed to the field but limit the cooling/temperature control requirements to just the case that the server gear is contained in. I understand that the chiller may warm its surrounding environment, so, as an additional approach, it would be possible to have a separate case for the chiller alone. Do these exist? What are they called? Where can I find one?

    Read the article

  • Setup Linksys 3200 remote access

    - by Greg
    I'm trying to set up remote access for my Linksys 3200 so that I can configure it through the WAN port. I have turned on remote access, however when I try to access xxx.xxx.xxx.xxx:9999 I just get a 404 error. I have allowed RDP access to a computer behind the router and this works fine on the same IP address. Any ideas on what else I have to do to allow remote management access? UPDATE: I tried changing the port to 80 and it works. Change it back to any other number and it doesn't. The modem is set up with a DMZ to the router's IP. Why does it only work on port 80? BTW, I can't use port 80 because there is a website hosted behind the router.

    Read the article

  • Gateway SX2300-01u CPU/PSU issues

    - by Vlad
    I have a 2-year-old Gateway SX2300-01u with an AMD Phenom X3 8550 that I feel has about a year or so of life left, but I am having a couple of hardware issues that I have not been able to resolve. First, the power supply fan sounds much louder than before and the PSU itself runs really hot. The PSU model (Liteon PS 5221-06) is not available at a reasonable price; are there any good alternatives? Could I replace just the PSU fan? Also, the CPU fan failed some time back, but my replacement, which supposedly fits the motherboard (Socket AM2+), doesn't actually fit properly. Any idea why?

    Read the article

  • Nginx Fastcgi performance issues

    - by Barry
    I am running several FastCGI servers behind Nginx: 3 Nginx workers and 6 FastCGI servers as upstream backends. When I run load tests of 1 request per second, I can clearly see that on average a reply takes 0.1 s, but from time to time there are 3.1 s responses. It is a suspiciously deterministic number, and it happens from time to time even at very small loads. Both CPU and memory are no issue at all. Any idea where this delay may be coming from? Any suggestions on how to debug this? Many thanks, Barry.
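    A reply time pinned near 3 s at low load smells like a dropped packet: 3 seconds is the classic initial TCP retransmission timeout, and a momentarily full listen backlog on a FastCGI backend produces exactly this signature. A hedged debugging sketch; port 9000 and the loopback interface are examples:

      netstat -s | egrep -i 'listen|overflow'   # listen-queue overflow counters
      tcpdump -n -i lo port 9000                # watch for SYN retransmits toward an upstream
      # if overflows show up, raise the backlog the fastcgi servers pass to
      # listen(), and/or the "backlog=" parameter on nginx's own listen directive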

    Read the article

  • How do I turn a router into a wireless bridge

    - by Rob Cowell
    I'm trying to get my HD satellite receiver connected to the internet. It has an Ethernet port on the back, but my networking equipment is all upstairs. I had the idea of connecting a spare wireless router to the box with an Ethernet cable and getting that wireless router to talk to my "main" wireless router (the one with the ADSL connection) to supply internet access. I believe this entails getting the router to work as a "wireless bridge", but I don't know how to do this. Currently, the ADSL line is hooked up to a NETGEAR DG834G. The spare wireless routers I have to act as the bridge are:
      Huawei HG520b
      Netgear DGN2000
      BT Homehub
    I'd prefer not to change the "main" router (because I'm used to its web admin UI). Does anyone know a way I can achieve the connectivity I require with the equipment I have?

    Read the article

  • Error logging with PHP and mod_fcgid

    - by nbv4
    I have a MediaWiki install that is acting up. Whenever I try to save an article, it goes to a blank screen, although if I refresh that blank screen, the save goes through. I have no idea why it's doing that, but it seems to be a problem with PHP. I guess by default errors don't get logged when using mod_fcgid, because I can't find an error log anywhere. I tried enabling logging in /etc/php5/cgi/php.ini, but that didn't do anything. How do I get PHP error logging working? I'm using Ubuntu 9.10.
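    A sketch of the usual setup: logging has to be enabled in the php.ini that php-cgi actually reads (/etc/php5/cgi/php.ini is the right one for mod_fcgid on Ubuntu), and the log target must be writable by the web user. Paths are examples:

      ; in /etc/php5/cgi/php.ini
      log_errors = On
      error_log = /var/log/php/error.log
      display_errors = Off

      # shell: create the log location and restart Apache so fcgid respawns php-cgi
      sudo mkdir -p /var/log/php && sudo chown www-data /var/log/php
      sudo /etc/init.d/apache2 restart

    A blank page on save is typically a fatal error being swallowed while display_errors is off, so once the log works, the MediaWiki failure should show up in it verbatim.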

    Read the article

  • What free Remote Desktop (server) solutions are there?

    - by Tao
    I know Ubuntu comes with a "Remote Desktop" option that appears to be a straightforward VNC server, and I'm trying to understand the alternatives. Here are the possibilities I've heard about so far:
      VNC
      VNC + SSH tunnelling
      NX Server, free edition
      FreeNX
      NeatX
      X2Go
      X11 forwarding over SSH
      xrdp
    I'm coming at this from a Windows user's perspective: to the best of my experience, RDP (aka Terminal Services) is a reasonably secure (barring MITM/server spoofing), efficient desktop sharing protocol with well-supported clients, that can be exposed to the internet when necessary without major fears of intrusion. To the best of my knowledge, straight VNC is none of those things, which is where I get confused: why wouldn't a better desktop sharing technology be developed or used in the open-source world? I know VNC can be wrapped with SSH, but that seems beyond the reach of a casual user. X11 forwarding over SSH may be more or less efficient, I have no idea, but it is definitely even more complicated, and doesn't (as far as I know) give you access to already-running stuff (no desktop sharing as such, just remote application running). So, I'd like any feedback/preferences amongst these or any other "free" desktop sharing options, using these criteria and/or any others:
      Security (esp. for access across the internet)
      Efficiency (bandwidth usage, responsiveness, etc.)
      Free-ness, as in speech (not sure where RDP or FreeNX lie on this)
      Free-ness, as in beer (are there any commercial solutions with usable, dependable free offerings?)
      Ease of use (server and client side)
      Cross-OS client availability
      Cross-OS server availability
      Support for independent sessions and shared (and/or "console") sessions
      Ongoing support/maintenance/development
    Thanks!
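    For what it's worth, the SSH wrapping dismissed above as beyond casual reach is a single port-forward. A sketch, assuming a VNC server already listening on display :1 (port 5901) of the remote host:

      ssh -L 5901:localhost:5901 user@server   # tunnel local 5901 to the server's 5901
      vncviewer localhost:5901                 # the VNC session now rides inside SSH

    The server's VNC port never needs to be exposed to the internet; only SSH does.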

    Read the article

  • how change nginx temp & log folder or disable logging completely

    - by Ehsan Khodarahmi
    I'm running nginx 1.3.5 under Windows 7. I need to run nginx directly from read-only media (CD or DVD), but when I start it, it fails with this error:

      nginx: [alert] could not open error log file: CreateFile() "logs/error.log" failed (5: Access is denied)
      2012/08/28 13:52:46 [emerg] 5604#2864: CreateDirectory() "J:\nginx-1.3.5/temp/client_body_temp" failed (5: Access is denied)

    where J is my CD-ROM drive letter. I've changed nginx.conf to disable logging completely, but it seems it still tries to create a file named 'error.log' in the '/logs' folder and some temporary content in the '/temp' folder at startup, so I want to change the 'logs' and 'temp' directory paths to the Windows temp folder (%temp%), but I don't have any idea how to do that. Also, I want to know why nginx still creates 'logs/error.log' after disabling error logging.
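    The reason for the stray error.log: nginx opens its compile-time default log (logs/error.log under the prefix directory) before it ever parses nginx.conf, so no config directive can prevent that first attempt. A hedged sketch: launch with -p pointing at a writable prefix and route the temp paths there in the config (paths are examples; the logs/ directory must exist under the new prefix before startup):

      nginx -p C:\Temp\nginx\ -c J:\nginx-1.3.5\conf\nginx.conf

      # in nginx.conf:
      error_log   NUL;               # Windows null device; discards the error log
      access_log  off;
      client_body_temp_path  temp/client_body_temp;   # relative paths resolve under -p
      proxy_temp_path        temp/proxy_temp;
      fastcgi_temp_path      temp/fastcgi_temp;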

    Read the article

  • gitweb on Ubuntu Server as Location/Directory instead of Virtual Host

    - by mbx
    Since DynDNS no longer resolves subdomains for free, I have to serve gitweb from a subdirectory of the apache2 server. The usual suspects, such as Pro Git, suggest something like:

      <VirtualHost *:80>
          ServerName gitserver
          DocumentRoot /srv/gitosis/repositories/
          <Directory /srv/gitosis/repositories/>
              Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
              AllowOverride All
              order allow,deny
              Allow from all
              AddHandler cgi-script cgi
              DirectoryIndex gitweb.cgi
          </Directory>
      </VirtualHost>

    I tried various variations using Location and Directory tags with different attribute combinations, without any notable success. My first idea was close to the following:

      Alias /gitweb /srv/gitosis/repositories
      <Location /gitweb>
          AuthType Basic
          AuthName "gitweb Repository view"
          AuthUserFile /etc/apache2/gitweb.passwd
          Require valid-user
          SSLRequireSSL
          SetEnv GITWEB_CONFIG /etc/gitweb.conf
          AddHandler cgi-script cgi
          DirectoryIndex /usr/lib/cgi-bin/gitweb.cgi
      </Location>

    Apache is in the gitosis group, and the repositories are readable and executable for that group. So, what is the intended way to get gitweb running on Ubuntu 10?
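    On Ubuntu, the gitweb package installs its CGI under /usr/lib/cgi-bin and its static assets under /usr/share/gitweb, so the subdirectory setup usually aliases those package paths rather than the repository directory itself. A hedged sketch along the lines of the stock Debian/Ubuntu snippet:

      Alias /gitweb /usr/share/gitweb
      <Directory /usr/share/gitweb>
          Options +FollowSymLinks +ExecCGI
          AddHandler cgi-script .cgi
          DirectoryIndex index.cgi
          SetEnv GITWEB_CONFIG /etc/gitweb.conf
      </Directory>

    with /etc/gitweb.conf pointing $projectroot at /srv/gitosis/repositories. The Auth/SSL directives from the Location attempt above should drop into this Directory block unchanged.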

    Read the article

  • IIS7 returns 403.1 (execute access denied) for image file

    - by Kristoffer
    I have a web app running in IIS7 on Windows Server 2008. There is a virtual directory pointing to a shared folder "/Content/Data" on another machine (running Windows Server 2003), as well as a real directory "/Content/Images" on the local machine (a web app sub-folder). Accessing images in "/Content/Images" is no problem, but when an image (e.g. a JPEG file) in "/Content/Data" is accessed by a browser, IIS returns this error:

      HTTP Error 403.1 - Forbidden: Execute access is denied.

    However, the web app can read from and write to it. I assume IIS and ASP.NET are running under different user accounts? Does anyone have an idea what I have to do to make it work? I have set the permissions on the shared folder to Everyone Full Control, with no luck.
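    Two angles worth checking, sketched here with appcmd; the site path and account are examples, and this is a hedged guess rather than a confirmed fix. A 403.1 points at the handler access policy for that path rather than NTFS or share ACLs, and a UNC-backed virtual directory additionally needs fixed credentials, since the app pool identity generally cannot authenticate to another machine:

      rem grant the static-file path Read (and Script) access in the handlers section
      %windir%\system32\inetsrv\appcmd set config "Default Web Site/Content/Data" /section:handlers /accessPolicy:Read,Script

      rem give the UNC virtual directory explicit connect-as credentials
      %windir%\system32\inetsrv\appcmd set vdir "Default Web Site/Content/Data" -userName:DOMAIN\svcaccount -password:secret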

    Read the article

  • mod_fcgid process doesn't respawn

    - by aaronsw
    I have a Python script running on my server as a FastCGI using Apache2 and mod_fcgid. I let it spawn up to five processes. But I soon get messages like these in the Apache logs:

      [Wed Sep 02 23:16:34 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function
      [Wed Sep 02 23:16:35 2009] [warn] (103)Software caused connection abort: mod_fcgid: ap_pass_brigade failed in handle_request function

    and then Apache doesn't seem to recognize that all its processes are dead (I have a max of 5 backends) and refuses to spawn new ones:

      [Wed Sep 02 23:26:16 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request
      [Wed Sep 02 23:26:17 2009] [notice] mod_fcgid: /var/www/hacks.og.theinfo.org/picker.fcgi total process count 5 >= 5, skip the spawn request

    at which point it refuses to respond to requests from the outside world. This doesn't seem to happen with my other FastCGIs, which all use the same Apache config:

      <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          IPCConnectTimeout 20
          MaxProcessCount 5
          DefaultMaxClassProcessCount 2
          DefaultMinClassProcessCount 1
      </IfModule>

    Any idea what causes it?
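    "ap_pass_brigade failed" usually just means the client side of a response went away mid-write, but backends that stay counted against MaxProcessCount forever suggest missing reaping timeouts. A hedged sketch using the old-style directive names this config already uses, so stuck processes get recycled (values are examples):

      <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          IPCConnectTimeout 20
          IPCCommTimeout 60             # hard ceiling on a single request's backend I/O
          BusyTimeout 300               # reap a backend that stays busy this long
          ProcessLifeTime 3600          # recycle long-lived processes periodically
          MaxProcessCount 5
          DefaultMaxClassProcessCount 2
          DefaultMinClassProcessCount 1
      </IfModule>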

    Read the article

  • SQL SERVER – Installing AdventureWorks Sample Database – SQL in Sixty Seconds #010 – Video

    - by pinaldave
    SQL Server has so many enhancements and features that quite often I feel like playing with them and trying out new things. I often come across situations where I want to try something new but I do not have sample data to experiment with. Also, just like any sane developer, I do not try any of my new experiments on a production server. Additionally, when it is about a new version of SQL Server, there are cases when there is no relevant sample data available even on the development server. In this kind of scenario a sample database can be very handy. Additionally, many SQL books, online blogs and articles contain scripts written against the AdventureWorks database. I often receive requests about where people can get the sample database, as well as how to restore it. In this sixty-second video we have discussed the same. You can get the various resources used in this video from http://bit.ly/adw2012. More on Errors: SQL SERVER – Install Samples Database Adventure Works for SQL Server 2012 SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012 SQLAuthority News – SQL Server 2012 – Microsoft Learning Training and Certification SQLAuthority News – Download Microsoft SQL Server 2012 RTM Now I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
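    For reference, AdventureWorks for SQL Server 2012 ships as a data file only, so the usual install is an attach that rebuilds the log. A minimal T-SQL sketch; the file path is an example and should point wherever the downloaded .mdf was saved:

      CREATE DATABASE AdventureWorks2012
          ON (FILENAME = 'C:\Data\AdventureWorks2012_Data.mdf')
          FOR ATTACH_REBUILD_LOG;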

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of these:

      Time Measurement Method: Potential Pitfalls

      Stopwatch: Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's Guide vol. 3b, section 16.11.1 ("Invariant TSC"): "The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

      DateTime.Now: Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

      Reporting Method: Potential Pitfalls

      Console.WriteLine: OK if not called too often.
      Debug.Print: Are you really measuring performance with Debug builds? Shame on you.
      Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure a quantum particle's impulse and location at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, and that will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate values, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

      1. Measure with a profiler to get an idea where potential bottlenecks are.
      2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
      3. Optimize it.
      4. Measure again.
      5. Be surprised that your optimization has made things worse.
      6. Think harder.
      7. Implement something that really works.
      8. Measure again.
      9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

      using ProtoBuf;
      using System;
      using System.Diagnostics;
      using System.IO;
      using System.Reflection;
      using System.Runtime.Serialization;

      [DataContract, Serializable]
      class Data
      {
          [DataMember(Order = 1)] public int IntValue { get; set; }
          [DataMember(Order = 2)] public string StringValue { get; set; }
          [DataMember(Order = 3)] public bool IsActivated { get; set; }
          [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
      }

      class Program
      {
          static MemoryStream _Stream = new MemoryStream();

          // reuse one stream; rewind and truncate it before every serialize call
          static MemoryStream Stream
          {
              get
              {
                  _Stream.Position = 0;
                  _Stream.SetLength(0);
                  return _Stream;
              }
          }

          static void Main(string[] args)
          {
              DataContractSerializer ser = new DataContractSerializer(typeof(Data));
              Data data = new Data
              {
                  IntValue = 100,
                  IsActivated = true,
                  StringValue = "Hi this is a small string value to check if serialization does work as expected"
              };
              var sw = Stopwatch.StartNew();
              int Runs = 1000 * 1000;
              for (int i = 0; i < Runs; i++)
              {
                  // ser.WriteObject(Stream, data);         // DataContractSerializer variant
                  Serializer.Serialize<Data>(Stream, data); // protobuf-net variant
              }
              sw.Stop();
              Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
              Console.ReadLine();
          }
      }

    The results are indeed promising:

      Serializer     Time in ms   N objects
      protobuf-net          807     1000000
      DataContract         4402     1000000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

      Serializer     Time in ms   N objects
      protobuf-net           85           1
      DataContract           24           1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?

    Normal profilers fall short at this discipline. A (quite undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks, except bloatloads of code, are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is, if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, that will make your faster working code slower, because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.

    Read the article

  • Some SharePoint NDA Information

    - by Sahil Malik
    Many years ago, at the last-to-last-to-last MVP summit, Microsoft was kind enough to share with us what they were thinking wayyyyyyyyyyyy ahead! I especially remember John Durant talking about the specific enhancements planned for the SharePoint 2010 development experience. If you haven't seen John Durant talking on stage, the guy has more enthusiasm than Tiger Woods in Amsterdam! The energy of his presentations is simply amazing. So, I pulled out my phone, and I snapped a picture! And I emailed that picture to everyone in the MVP land and Microsoft land, saying "We have evidence", i.e. here are the promises that were made, and dammit, we'll see by the time you release SP2010 how many of these you actually deliver. Here is the picture, ladies and gentlemen (not reproduced here). It's a good karate-chop action shot, isn't it? Of course, we were all immediately warned not to share any of this seriously strictly NDA information at the time. Well, now that the information is out in the world, I can finally share this small tidbit of how far ahead Microsoft is thinking in their plans. Frankly, I wouldn't be surprised if today they already have a very clear idea of what SharePoint vNext will be all about, or should I say vNextvNext? Have fun!

    Read the article

  • Fun with RadCaptcha for ASP.NET AJAX and OCR software

    A friend of mine was evaluating OCR software and finally decided to go with FineReader. I was curious what would happen if we put the RadCaptcha control in. Would the advanced OCR manage to decode it or not? At first he showed me a test run with the RadCaptcha demo description, to get an idea of the basic output (screenshot omitted). Naturally, the captured description text was no problem: only a few characters were misread, and those were then corrected with the spellcheck. Next, the real test was performed (screenshots omitted). Those were only a couple of the results, but there is no need to post the rest of the tests: none of the RadCaptcha images were recognized by the OCR software. Here are the CaptchaImage settings used in the tests:
      Background Noise Level: Low (default value)
      Line Noise Level: Low (default value)
      Font Warp Factor: Low (Medium is the default value)

    Read the article

  • Virtualbox two networks slow

    - by Petr Marek
    I am running an Ubuntu server guest on a Windows 7 host, and am running a WEBrick server (Rails development) in the guest. If I have just a host-only network, everything works fine and the browser response is instant. However, if I add a second network adapter (NAT), so that the server can connect to the internet (for various updates etc.), the host-to-guest access gets really slow. I can't use a bridged connection. I am using port 3000 (the Rails WEBrick server) and connecting to the guest via a browser on this port (e.g. http://192.168.56.102:3000). Any idea what could be causing this? If I ping the IP from the host console, I get < 0ms. The relevant setting in my (Czech) configuration screenshots is "Povoleno vše", which means "Everything is allowed".
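    Two usual suspects for host-to-guest lag that appears only when a NAT adapter is added: DNS lookups inside the guest stalling on the NAT resolver, and WEBrick's per-request reverse DNS lookup timing out. A hedged sketch for the first; the VM name and adapter number are examples, and the VM should be powered off when running it:

      VBoxManage modifyvm "UbuntuServer" --natdnshostresolver1 on

    If that changes nothing, disabling WEBrick's reverse DNS lookup inside the guest is the other classic fix for multi-second per-request stalls.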

    Read the article

  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need for that. Having native functions available also helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely accepted. As the new functions are available and awareness spreads, we should start using them. Here is the quick list of the new functions, each covered by a SQLAuthority article and the relevant MSDN page:
      CUME_DIST (SQLAuthority, MSDN)
      FIRST_VALUE (SQLAuthority, MSDN)
      LAST_VALUE (SQLAuthority, MSDN)
      LEAD (SQLAuthority, MSDN)
      LAG (SQLAuthority, MSDN)
      PERCENTILE_CONT (SQLAuthority, MSDN)
      PERCENTILE_DISC (SQLAuthority, MSDN)
      PERCENT_RANK (SQLAuthority, MSDN)
    I also enjoyed three different puzzles during the course of this series, which gave a clear idea of the SQL Server 2012 analytic functions: SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY; SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG; SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example. This series will always be a dear one to me, as during it I went through the very unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
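    As a quick taste of the series, LAG and LEAD read a value from a neighboring row without a self-join. A minimal sketch; the table and column names are hypothetical:

      -- previous and next order date alongside each row, no self-join needed
      SELECT OrderID,
             OrderDate,
             LAG(OrderDate)  OVER (ORDER BY OrderDate) AS PrevOrderDate,
             LEAD(OrderDate) OVER (ORDER BY OrderDate) AS NextOrderDate
      FROM dbo.Orders;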

    Read the article

  • Fortigate - Accessing a Virtual Server address from several interfaces

    - by Jeremy G
    I am setting up a new application in its own DMZ on our Fortigate 300C firewalls. I have defined a load-balancing configuration for part of the application, and this works fine for traffic coming in from our internal network. However, I would also like this application to be reachable from other DMZs, for inter-application traffic, and from the SSL VPN interface. I can't seem to define the required policy, and it seems this is because Virtual Servers are bound to the client interface on the Fortigate rather than the server interface (so my virtual IP is not accessible from any of these other interfaces). Does anyone have an idea how I might go about this? I guess I could create other virtual IPs for each interface, but this gets complicated to handle, as clients need to change the address they use depending on how they are connecting. Thanks, Jeremy G

    Read the article
