Search Results

Search found 19871 results on 795 pages for 'commit log'.


  • "Fatal Error" message during boot process

    - by Denja
    I'm running Ubuntu 10.10 with the proprietary ATI FGLRX graphics driver. When I boot, I can see a message appear very quickly; I don't have time to read it: **Fatal Error ..................................** (&@something I can't read). I searched the log files in /var/log/ to find what is wrong, and I did find something in /var/log/Xorg.1.log:

        21:31:08 [ 15.734] (--) using VT number 1
        [ 204.647] Fatal server error:
        [ 204.647] xf86OpenConsole: VT_WAITACTIVE failed: Interrupted system call
        [ 204.647]
        [ 204.647] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        [ 204.647] Please also check the log file at "/var/log/Xorg.1.log" for additional information.
        [ 204.647]

    But this is already Xorg.1.log. There are also Xorg.0.log and Xorg.0.log.old, but neither has any error in it. My system seems to work properly and doesn't appear to be affected by this, but how do I correct this message? Any suggestions?

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come as a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
      - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
      - Reverting / reapplying changes is absolutely critical for efficient development.
      - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
      - This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous: one or two sentences about the task, not the actual changes.
      - Use precommit hooks or set up the central repository to reject changes without comments (a sketch of such a hook follows this post).

    Link changesets to documentation.
      - If your project management system integrates with version control, or has a way to externally reference stories, tasks etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
      - It's best to have a precommit hook or system that requires this information; otherwise it's easy to forget.

    Ability to work offline is required, including commits and history.
      - Yes, this requires a DVCS locally, but it doesn't require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn't possible to migrate the central repository, it's still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… Rude!
      - We have merge tools for a reason: merging sucked a long time ago, but it doesn't anymore… stop locking files! This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
      - Never ever commit a set of files without reviewing the changes in each.
      - Never add a file without asking yourself, deep down inside, does this belong?
      - If you leave to make changes during a review, start the review over when you come back. Never assume you didn't touch a file; double check.
      - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools:
      - Quickly show pending changes for the entire repository.
      - Default action for a resource with pending changes is a diff.
      - Pluggable diff & merge tool.
      - Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

    The central repository is not your own personal dump yard.
      - Breaking this rule is a sure-fire way to get the F bomb dropped in front of your name, multiple times.
      - If you turn on Visual Studio's commit-on-closing-studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
      - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work remotely the following day.

    Never commit commented-out code.
      - If it isn't needed anymore, delete it! If you aren't sure whether it might be useful in the future, delete it! This is why we have history.
      - If you don't know why it's commented out, figure it out and then either uncomment it or delete it.

    Don't commit build artifacts, user preferences and temporary files.
      - Build artifacts do not belong in VCS; everything in them is present in the code (i.e. bin\*, obj\*, *.dll, *.exe).
      - User preferences are your settings; stop overriding my preferences files! (i.e. *.suo and *.user files)
      - Most tools allow you to ignore certain files, and Hg/Git let you version this as an ignore file. Set this up as a first step when creating a new repository!

    Be polite when merging unresolved conflicts.
      - Count to 10, cuss, grab a stress ball and realize it's not a big deal. Actually, it's an opportunity to learn that someone else is working in the same area and you might want to communicate with them.
      - Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up; we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It's really not as scary as you think. I personally prefer KDiff3, as its merging capabilities rock.
      - Don't blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
      - Avoid trial and error as much as possible: sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
      - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of TortoiseHg and the hg CLI to get the job done efficiently.

    Always tag releases.
      - Create a way to find a given release, whether this is in comments or an explicit tag / branch. This should be readily discoverable.
      - Create release branches to patch bugs and then merge the changes back to the other development branch(es).

    If using feature branches, strive for periodic integrations.
      - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
      - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
      - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
      - It's OK for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS: I wanted more than one changeset for a set of pending changes where some files could be grouped into both changesets (like solution file / project file changes).
      - If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. Exceptions exist when maintaining code bases that require shotgun surgery; in that case, it's a design smell :)

    Don't version sensitive information.
      - Especially usernames / passwords.

    There is one area where I haven't found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes
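
    A minimal sketch of such a precommit hook: Mercurial lets you write hooks as in-process Python functions and wire them up in hgrc. The module name, function name and ten-character threshold below are illustrative choices, not anything Hg prescribes; treat this as a starting point, not a drop-in solution.

        # commit_checks.py - reject changesets with empty or trivially short messages.
        # Enable it in .hg/hgrc (or the central server's hgrc) with:
        #   [hooks]
        #   pretxncommit.msgcheck = python:commit_checks.reject_short_message
        def reject_short_message(ui, repo, node=None, **kwargs):
            ctx = repo[node]  # the changeset being committed
            if len(ctx.description().strip()) < 10:  # arbitrary minimum length
                ui.warn(b"rejected: describe the task in the commit message\n")
                return True  # a truthy return value aborts the transaction
            return False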

    Read the article

  • PS using Get-WinEvent with FilterXPath and datetime variables?

    - by Jordan W.
    I'm grabbing a handful of events from an event log in chronological order. I don't want to pipe to Where-Object; I want to use Get-WinEvent. After I get Event1, I need to get the first instance of another event that occurs some unknown amount of time after Event1, then grab Event3, which occurs sometime after Event2, etc. Basically, starting with:

        $filterXML = @'
        <QueryList>
          <Query Id="0" Path="System">
            <Select Path="System">*[System[Provider[@Name='Microsoft-Windows-Kernel-General'] and (Level=4 or Level=0) and (EventID=12)]]</Select>
          </Query>
        </QueryList>
        '@
        $event1 = (Get-WinEvent -ComputerName $PCname -MaxEvents 1 -FilterXml $filterXML).TimeCreated

    gives me the datetime of Event1. Then I want to do something like:

        Get-WinEvent -LogName "System" -MaxEvents 1 -FilterXPath "*[EventData[Data = 'Windows Management Instrumentation' and TimeCreated -gt $event1]]"

    Obviously the TimeCreated part there doesn't work, but I hope you get what I'm trying to do. Any help?

    Read the article

  • How to capture input parameters from within stored procedure (SQL Server 2005)?

    - by Duncan
    I would like to create a generic logging solution for my stored procedures, allowing me to log the values of input parameters. Currently I am doing this more or less by hand and I am very unhappy with this approach. Ideally, I would like to say something like the following: "given my spid, what are my input parameters and their values?" This is the same information exposed to me when I run SQL Profiler -- the stored procedure's name, all input params and all input VALUES are listed for me. How can I get my hands on these values from within a stored procedure? Thanks; Duncan

    Read the article

  • C# Windows application prevents Windows from shutting down / logging off

    - by user299711
    I have written a C# Windows Forms application, not a service (it is only used when the user is logged in and has a graphical user interface), that has a background thread running in an infinite loop. When I try shutting down Windows (7), however, it tells me the program is preventing it from shutting down or logging off, and asks me whether I want to force a shutdown. Is there any way for my program to become aware (get a handler) of Windows trying to quit it or to log off? In short, I need the application to notice when Windows tries to quit. Thanks in advance.

    Read the article

  • I'm really off-topic. But I've got a really good reason.

    - by lost
    Is there any way encryption on an unidentified file can be broken (files in question: the config file and log files from the Ardamax keylogger)? These files date back all the way to 2008. I searched everywhere: nothing on Slashdot, nothing on Google. Ardamax Keyviewer? Should I just write to Ardamax? I am at a loss as to what to do. I feel compromised. Has anyone managed to decrypt such files with cryptanalysis?

    Read the article

  • Instantiate a PHP class where the class name is a variable

    - by ed209
    I need some help with an error I have not encountered before and can't seem to find anywhere. In a PHP MVC framework (just from a tutorial) I have the following:

        // Instantiate the class
        $className = 'Controller_' . ucfirst($controller);
        if (class_exists($className)) {
            $controller = new $className($this->registry);
        }

    $className is showing the correct class name (case is also correct). But when I run it I get this in the Apache error log (no PHP error):

        [Wed Mar 31 10:34:12 2010] [notice] child pid 987 exit signal Segmentation fault (11)

    The process id is different on every call. I am running PHP 5.3.0 on OS X 10.6. This site seems to work on 5.2.11 on another Mac. Not really sure where to go next to debug it. I guess it could be an Apache setting as much as a PHP bug or a problem with the code... any suggestions on where to look next?

    Read the article

  • How can I make the Python logging output to be colored?

    - by airmind
    Some time ago I saw a Mono application with colored output, probably because of its log system, because all the messages were standardized. Now, Python has the logging module, which lets you specify a lot of options or customize it entirely, so I imagine something like that would be possible with Python too; however, I could not find it anywhere. Is there any way to make the Python logging module output in color? What I want is for error messages to appear in red, for instance, debug messages in blue or yellow, and so on. Of course this would probably only work on Linux with compatible terminals (most modern terminals are), but I could fall back to the original logging output if color is not supported. Any ideas?
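
    One common answer, sketched minimally here: subclass logging.Formatter and wrap each formatted message in ANSI escape codes chosen by level. The color assignments and format string are illustrative choices, not anything the stdlib prescribes; on terminals without ANSI support you would fall back to a plain Formatter, as suggested above.

        import logging

        COLORS = {  # ANSI escape sequences; pick whatever palette you like
            logging.DEBUG: "\033[34m",       # blue
            logging.INFO: "\033[32m",        # green
            logging.WARNING: "\033[33m",     # yellow
            logging.ERROR: "\033[31m",       # red
            logging.CRITICAL: "\033[31;1m",  # bold red
        }
        RESET = "\033[0m"

        class ColorFormatter(logging.Formatter):
            def format(self, record):
                message = super().format(record)
                return COLORS.get(record.levelno, "") + message + RESET

        handler = logging.StreamHandler()
        handler.setFormatter(ColorFormatter("%(levelname)s: %(message)s"))
        logging.getLogger().addHandler(handler)
        logging.getLogger().error("this line comes out red on an ANSI-capable terminal")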

    Read the article

  • How do I change the default format of log messages in python app engine?

    - by dazed-n-confused
    I would like to log the module and class name by default in log messages from my request handlers. The usual way to do this seems to be to set a custom format string by calling logging.basicConfig, but that can only be called once and has already been called by the time my code runs. Another method is to create a new log Handler, which can be passed a new log Formatter, but this doesn't seem right, as I want to use the existing log handler that App Engine has installed. What is the right way to have extra information added to all log messages in Python App Engine, but otherwise use the existing log format and sink?
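
    A hedged sketch of the handler-preserving route hinted at in the question: instead of calling logging.basicConfig or adding a handler, iterate over the handlers the runtime has already installed and swap in a new Formatter. The format string here is an illustrative choice, and whether App Engine's log sink preserves the layout end-to-end is an assumption worth testing.

        import logging

        # Keep the existing handlers (and therefore the existing sink);
        # only the message layout changes.
        formatter = logging.Formatter("%(module)s %(name)s: %(levelname)s %(message)s")
        for handler in logging.getLogger().handlers:
            handler.setFormatter(formatter)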

    Read the article

  • Python logparse: search for specific text

    - by krisdigitx
    Hi, I am using this function in my code to return the strings I want from reading the log file. I want to grep for the "exim" process and return the results, but running the code gives no error and the output is limited to three lines. How can I get only the output related to the exim process?

    Output:

        {'date': '13', 'process': 'syslogd', 'time': '06:27:33', 'month': 'May'}
        {'date': '13', 'process': 'exim[23168]:', 'time': '06:27:33', 'month': 'May'}
        {'May': ['syslogd']}

    Function:

        def generate_log_report(logfile):
            report_dict = {}
            for line in logfile:
                line_dict = dictify_logline(line)
                print line_dict
                try:
                    month = line_dict['month']
                    date = line_dict['date']
                    time = line_dict['time']
                    #process = line_dict['process']
                    if "exim" in line_dict['process']:
                        process = line_dict['process']
                        break
                    else:
                        process = line_dict['process']
                except ValueError:
                    continue
                report_dict.setdefault(month, []).append(process)
            return report_dict
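
    For what it's worth, the truncated output comes from that break: the loop stops at the first exim line instead of filtering for exim lines (and note that missing dict keys raise KeyError, not ValueError). A hedged rework, where dictify_logline is the asker's own helper, assumed to return dicts like those shown above:

        def generate_log_report(logfile):
            report_dict = {}
            for line in logfile:
                line_dict = dictify_logline(line)
                try:
                    month = line_dict['month']
                    process = line_dict['process']
                except KeyError:  # skip lines the parser could not split
                    continue
                if "exim" in process:  # keep only exim entries
                    report_dict.setdefault(month, []).append(process)
            return report_dict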

    Read the article

  • Unexpected .htaccess behaviour (mod_rewrite and Apache)

    - by avastreg
    Yeah, mod_rewrite is driving me crazy. Here is the problem. My .htaccess:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [L,QSA]

    When I try to access the page advantix (so the address was www.mywebsite.com/advantix), I'm redirected to advantix/?url=advantix. Looking at the access log, I see a suspicious 301 in the middle:

        "GET /advantix HTTP/1.1" 301 335 "-" "Mozilla/5.0"
        "GET /advantix/?url=advantix HTTP/1.1" 200 186 "-" "Mozilla/5.0"

    There is one important detail: advantix is a directory. So, if I comment out that rule, advantix goes to the folder and lists the files. Why is the trailing / applied automatically when there's a matching folder? I don't want to reach the folder; I want to reach index.php?url=advantix with a request for advantix. I have the rewrite logs too, but they didn't help much. My vhost conf has a Directory tag with Options All, if that helps; I don't know much about that.

    Read the article

  • How to concatenate multiple lines of log file into single variable in batch file?

    - by psych
    I have a log file containing a stack trace split over a number of lines. I need to read this file into a batch file and remove all of the line breaks. As a first step, I tried this:

        if exist "%log_dir%\Log.log" (
            for /F "tokens=*" %%a in ("%log_dir%\Log.log") do @echo %%a
        )

    My expectation was that this would echo out each line of the log file. I was then planning to concatenate these lines together and set that value in a variable. However, this code doesn't do what I would expect. I have tried changing the values of the delims and tokens options, but the only output I can get is the absolute path of the log file and nothing from the contents of the file. How can I set a variable to be equal to the lines of text in a file with the line breaks removed?

    Read the article

  • Drupal Error After Logging In

    - by Kim
    Hello everyone. I'm kind of new to using Drupal, and I'm wondering why I get this error on my new site. I have this website under WampServer running Drupal 6.16. Every time I log in with my pre-created admin account (user 'admin01', password 'admin01'), I get redirected to the WampServer localhost page, which appears unusual since the header does not have the WampServer logo. I already tried creating a new Drupal website with the same database, and the same thing happens. I also tried creating another website with a new database but with the other website's theme and other contents copied over, and the same thing happens. Help me please; I am losing my grip on this. :( Note: I have the same website running on one PC, and I am just trying to run it on another PC by copying all its contents. The original copy is working perfectly, but I can't seem to get my new copies to work on other PCs.

    Read the article

  • Help on choosing which SQL Server 2008 scale-out solution to pick (replication, ...)

    - by usr
    I am currently crossing the jungle of SQL Server scale-out technologies: replication, log shipping, mirroring... I have the following constraints on my choice:
      - I want the read-only load to be spread across the primary and the secondary (mirror, subscriber) server.
      - Write load can be sent directly to the primary server.
      - The solution should be nearly maintenance free. Schema changes should just replicate to the secondary server (attention: replication seems to have some serious constraints here).
      - Written data should be accessible very quickly (in under 1 s, but instantaneously would be better) on the secondary server.
      - On server failure I can easily tolerate up to one hour of data loss. I am more concerned with easy scalability.

    Here are some options for what I could pick: http://msdn.microsoft.com/en-us/library/bb510414.aspx. Any experience you could share?

    Read the article

  • How can I get Forever to write to a different log file every day?

    - by user1438940
    I have a cluster of production servers running a Node.js app via Forever. As far as I can tell, my options for log files are as follows:
      - Let Forever do it on its own, in which case it will log to ~/.forever/XXXX.log
      - Specify one specific log file for the entire life of the process

    What I'd like to do, however, is have it log to a different file every day, e.g. 20121027.log, 20121028.log, etc. Is this possible? If so, how can it be done?

    Read the article

  • Does the tempdb Log file get Zero Initialized at Startup?

    - by Jonathan Kehayias
    While working on a problem today I happened to think about what the impact to startup might be for a really large tempdb transaction log file. It's fairly common knowledge that data files in SQL Server 2005+ on Windows Server 2003+ can be instant initialized, but the transaction log files cannot. If this is news to you, see the following blog posts: Kimberly L. Tripp | Instant Initialization - What, Why and How? In Recovery... | Misconceptions around instant file initialization In Recovery…...(read more)

    Read the article

  • SQL SERVER FIX : ERROR : 4214 BACKUP LOG cannot be performed because there is no current database backup

    I recently got the following email from one of my readers: "Hi Pinal, even though my database is in full recovery mode, when I try to take a log backup I am getting the following error: BACKUP LOG cannot be performed because there is no current database backup. (Microsoft.SqlServer.Smo) How do I fix it? Thanks, [name and email removed as requested]" Solution / Fix: This error can [...]

    Read the article

  • How can I log key presses in Game Maker?

    - by skeletalmonkey
    I'm trying to create a log of a player's actions as they play a game of Spelunky. The easiest way I've found to do this is to log which keys are pressed at each frame. What I don't know is how to integrate this with the Game Maker source code of Spelunky. Is there a specific way to create a script that is checked every frame/tick (I don't know the right term), and a command to find out which buttons are pressed?

    Read the article

  • PHP creates a huge error log file on my server: feof() / fread()

    - by Nik
    I have a script that allows users to 'save as' a PDF. This is the script:

        <?php
        header("Content-Type: application/octet-stream");
        $file = $_GET["file"] . ".pdf";
        header("Content-Disposition: attachment; filename=" . urlencode($file));
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");
        header("Content-Description: File Transfer");
        header("Content-Length: " . filesize($file));
        flush(); // this doesn't really matter.
        $fp = fopen($file, "r");
        while (!feof($fp)) {
            echo fread($fp, 65536);
            flush(); // this is essential for large downloads
        }
        fclose($fp);
        ?>

    An error log is being created and gets to be more than a gig in a few days. The errors I receive are:

        [10-May-2010 12:38:50] PHP Warning: filesize() [function.filesize]: stat failed for BYJ-Timetable.pdf in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: Cannot modify header information - headers already sent by (output started at /home/byj/public_html/pdf_server.php:10) in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: fopen(BYJ-Timetable.pdf) [function.fopen]: failed to open stream: No such file or directory in /home/byj/public_html/pdf_server.php on line 12
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15

    The warnings for lines 13 and 15 just repeat on and on. I'm a bit of a newbie with PHP, so any help is great. Thanks guys, Nik

    Read the article

  • How do I log in to an Ubuntu 10.04 server using GRUB 2 from a remote computer/terminal with the GUI running

    - by Steve Cornall
    I upgraded my server from Ubuntu 8.04 to 10.04. It now uses the GRUB 2 bootloader. In 8.04, from the GRUB log-in page I could select the option to log in to the server and thereby connect through SSH directly to my server with the GUI running. I can currently log in to my upgraded server from any of my 8.04 machines using GRUB 1 with the GUI running, but I cannot log in that way from a machine that uses GRUB 2 and Ubuntu 10.04. I want to upgrade my entire network to Ubuntu 10.04 but cannot until I know how to log in to the network from GRUB 2 with the GUI open. I have exhausted all my ideas as to the solution to this problem. Any help would be most appreciated. Thank you.

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I'll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demoing in this blog post.

    Introduction
    Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you can mine the information available in the IIS logs to analyse the dense zones (the most used pages) and performance test those pages, rather than wasting time testing and tuning the least used pages in your application.

    What are IIS Logs?
    To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above, you will find the log files in the following directory: C:\inetpub\Logs\

    Walkthrough
    1. Download and install Log Parser from the Microsoft Download Center. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven't used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details…
    2. Create a new test project in Visual Studio. Let's call it IISLogsToWebPerfTestDemo.
    3. Delete the UnitTest1.cs class that gets created by default. Right-click the solution and add a project of type class library; name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project.
    4. Under the IISLogsToWebPerfTestEngine project, add references to:
       - Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll
       - LogParser, also called MSUtil – c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll
    5. Right-click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using Log Parser.
        using System;
        using System.Collections.Generic;
        using System.Text;
        using MSUtil;
        using LogQuery = MSUtil.LogQueryClassClass;
        using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass;
        using LogRecordSet = MSUtil.ILogRecordset;
        using Microsoft.VisualStudio.TestTools.WebTesting;
        using System.Diagnostics;

        namespace IisLogsToWebPerfTestEngine
        {
            // By making use of Log Parser it is possible to query the IIS log with SELECT queries.
            public class IISLogReader
            {
                private string _iisLogPath;

                public IISLogReader(string iisLogPath)
                {
                    _iisLogPath = iisLogPath;
                }

                public IEnumerable<WebTestRequest> GetRequests()
                {
                    LogQuery logQuery = new LogQuery();
                    IISLogInputFormat iisInputFormat = new IISLogInputFormat();

                    // Currently these columns give us sufficient information to construct the web test requests.
                    string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath;
                    LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat);

                    // Apply a bit of transformation.
                    while (!recordSet.atEnd())
                    {
                        ILogRecord record = recordSet.getRecord();
                        if (record.getValueEx("cs-method").ToString() == "GET")
                        {
                            string server = record.getValueEx("s-ip").ToString();
                            string path = record.getValueEx("cs-uri-stem").ToString();
                            string querystring = record.getValueEx("cs-uri-query").ToString();
                            StringBuilder urlBuilder = new StringBuilder();
                            urlBuilder.Append("http://");
                            urlBuilder.Append(server);
                            urlBuilder.Append(path);
                            if (!String.IsNullOrEmpty(querystring))
                            {
                                urlBuilder.Append("?");
                                urlBuilder.Append(querystring);
                            }

                            // You could make substitutions by introducing parameterized web tests.
                            WebTestRequest request = new WebTestRequest(urlBuilder.ToString());
                            Debug.WriteLine(request.UrlWithQueryString);
                            yield return request;
                        }
                        recordSet.moveNext();
                    }
                    Console.WriteLine(" That's it! Closing the reader");
                    recordSet.close();
                }
            }
        }

    6. Connect the dots by adding a reference to the IisLogsToWebPerfTestEngine project from IisLogsToWebPerfTest. Right-click the IisLogsToWebPerfTest project and add a new class, WebTest1Coded.cs. WebTest1Coded inherits from the WebTest class. By overriding the GetRequestEnumerator method, we can hand the log file to the IISLogReader class, which uses Log Parser to query the log file and extract the web requests, yielding back each generated web test request for playback when the test is run.

        namespace IisLogsToWebPerfTest
        {
            using System;
            using System.Collections.Generic;
            using System.Text;
            using Microsoft.VisualStudio.TestTools.WebTesting;
            using Microsoft.VisualStudio.TestTools.WebTesting.Rules;
            using IisLogsToWebPerfTestEngine;

            // This class is a coded web performance test implementation that simply passes
            // the path of the IIS logs to the IISLogReader class, which does the heavy
            // lifting of reading the contents of the log file and converting them to tests.
            // You could have multiple such classes that inherit from WebTest, implement
            // GetRequestEnumerator and pass different log files for different tests.
            public class WebTest1Coded : WebTest
            {
                public WebTest1Coded()
                {
                    this.PreAuthenticate = true;
                }

                public override IEnumerator<WebTestRequest> GetRequestEnumerator()
                {
                    // Substitute the highlighted path with the path of the IIS log file.
                    IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log");
                    foreach (WebTestRequest request in reader.GetRequests())
                    {
                        yield return request;
                    }
                }
            }
        }

    7. It's time to fire off the test and see the IIS log play back as a web performance test.
    From the Test menu choose Test View; you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution).
    8. Optionally, you can create a Load Test keeping 'WebTest1Coded' as the base test.

    Conclusion
    You have just helped your testing team; you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even how to record the test. If you haven't already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch, watch usage trends per user over a longer term, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don't forget to share … Keep RocKiNg!

    Read the article

  • How to pull one commit at a time from a remote git repository?

    - by Norman Ramsey
    I'm trying to set up a darcs mirror of a git repository. I have something that works OK, but there's a significant problem: if I push a whole bunch of commits to the git repo, those commits get merged into a single darcs patchset. I really want to make sure each git commit gets set up as a single darcs patchset. I bet this is possible by doing some kind of git fetch followed by interrogation of the local copy of the remote branch, but my git fu is not up to the job. Here's the (ksh) code I'm using now, more or less:

        git pull -v # pulls all the commits from remote --- bad!

        # gets information about only the last commit pulled -- bad!
        author="$(git log HEAD^..HEAD --pretty=format:"%an <%ae>")"
        logfile=$(mktemp)
        git log HEAD^..HEAD --pretty=format:"%s%n%b%n" > $logfile

        # add all new files to darcs and record a patchset. this part is OK
        darcs add -q --umask=0002 -r .
        darcs record -a -A "$author" --logfile="$logfile"
        darcs push -a
        rm -f $logfile

    My idea is:
    1. Try git fetch to get a local copy of the remote branch (not sure exactly what arguments are needed).
    2. Somehow interrogate the local copy to get a hash for every commit since the last mirroring operation (I have no idea how to do this).
    3. Loop through all the hashes, pulling just that commit and recording the associated patchset (I'm pretty sure I know how to do this if I get my hands on the hash).

    I'd welcome either help fleshing out the scenario above or suggestions about something else I should try. Ideas?
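
    A hedged sketch of those three steps, in Python for brevity; it assumes the remote is named origin, the mirrored branch is master, and its history is linear, none of which appears in the question:

        import subprocess

        def git(*args):
            """Run a git command and return its stdout as text."""
            return subprocess.check_output(("git",) + args, text=True)

        # Step 1: update the remote-tracking branch without touching the checkout.
        git("fetch", "origin")

        # Step 2: one hash per not-yet-mirrored commit, oldest first.
        hashes = git("rev-list", "--reverse", "HEAD..origin/master").split()

        # Step 3: advance one commit at a time, recording a darcs patchset per commit.
        for commit in hashes:
            git("merge", "--ff-only", commit)
            author = git("log", "-1", "--pretty=format:%an <%ae>", commit)
            # ... darcs add / record / push here, exactly as in the ksh script above ...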

    Read the article

  • Can I specify the files to commit in subversion in a file rather than on the command line?

    - by René Nyffenegger
    I have renamed (with svn move) a lot of files in a Subversion project. Now I am trying to commit these on Windows' cmd.exe. It seems that I've hit a limit (probably imposed by cmd.exe): the list of files is too long for the command line to swallow. I thought and hoped that I could list the files to commit in a separate file that I could pass to the commit command, something like:

        svn ci --files-to-commit=renamed-files.txt -m "Renamed a lot of files"

    Yet either such an option does not exist or I am unable to find it. Unfortunately, I cannot do svn ci . as I have made other changes in the project as well. Neither can I do svn ci *pattern-of-renamed-files*, since this would only check in the added files, not the deleted ones. Before I start checking in the files in smaller chunks (and thus increase the revision number unnecessarily without giving a hint as to the 'atomicity' of the operation), I thought I'd ask whether this is indeed impossible to do.
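
    For the record, stock Subversion clients do ship an option along these lines: the global --targets option reads the list of paths to operate on from a file, so something like svn commit --targets renamed-files.txt -m "Renamed a lot of files" should do it (worth double-checking against svn help commit for your client version).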

    Read the article

  • Does GitHub.com have to create a merge commit when you merge from a fork?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions, I push the branch to my fork. I then send a pull request to the master, and someone with permission does the merge. I notice that GitHub.com creates a merge commit snapshot, which to me looks like just a diff of the entire changes; that isn't strictly necessary, but it is helpful in the sense that I can just look at the merge commit to see the entire diff. I can see the same SHA hash as on my own branch, hence it looks like the merge is an extra commit that probably isn't necessary, since it could be a fast-forward?

        master            - a
        myfork (computer) - a -> b -> c
        myfork (github)   - a -> b -> c

    The pull request from myfork to master (which it says it can automatically merge) shows the entire diff, and when I merge it, master becomes a -> b -> c -> d. The d is a merge commit, which I think is not really required because it could be a fast-forward? Can someone explain why this happens? I think this would be the same scenario if I rebased onto master after master had moved ahead, but that has not happened. Master is still at a when I merge.

    Read the article
