Search Results

Search found 43168 results on 1727 pages for 'sql log'.

  • access_log item w/out IP. Starts with "::1 - - [<date>]"

    - by Meltemi
    Looking at our Apache log I see normal requests like: 174.133.xxx.xxx - - [20/May/2010:17:36:44 -0700] "GET /index.html HTTP/1.1" 200 2004 but every so often I get a cluster of these without an IP address: ::1 - - [20/May/2010:18:47:21 -0700] "OPTIONS * HTTP/1.0" 200 - ::1 - - [20/May/2010:18:47:22 -0700] "OPTIONS * HTTP/1.0" 200 - ::1 - - [20/May/2010:18:47:23 -0700] "OPTIONS * HTTP/1.0" 200 - What do they mean, and what causes them?

    Read the article

  • Passing integer lists in a sql query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and trying to decide which of them is best in which situation, what the benefits of each are, what the pitfalls are, and what should be avoided :) Right now I know of 3 ways that we currently use in our application.

    1) Table valued parameter: Create a new Table Valued Parameter type in SQL Server: CREATE TYPE [dbo].[TVP_INT] AS TABLE( [ID] [int] NOT NULL ) Then run the query against it: using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString)) { var comm = conn.CreateCommand(); comm.CommandType = CommandType.Text; comm.CommandText = @" UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN @values IDs ON DA.ID = IDs.ID"; comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" }); conn.Open(); comm.ExecuteScalar(); } The major disadvantage of this method is that Linq doesn't support table valued params (if you create an SP with a TVP param, Linq won't be able to run it) :(

    2) Convert the list to binary and use it in Linq! This is a bit better.. Create an SP, and you can run it within Linq :) To do this, the SP will have an IMAGE parameter, and we'll be using a user defined function (udf) to convert this to a table.. We currently have implementations of this function written in C++ and in assembly; both have pretty much the same performance :) Basically, each integer is represented by 4 bytes and passed to the SP. In .NET we have an extension method that converts an IEnumerable to a byte array. The extension method: public static Byte[] ToBinary(this IEnumerable intList) { return ToBinaryEnum(intList).ToArray(); } private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList) { IEnumerator<Int32> marker = intList.GetEnumerator(); while (marker.MoveNext()) { Byte[] result = BitConverter.GetBytes(marker.Current); Array.Reverse(result); foreach (byte b in result) yield return b; } } The SP: CREATE PROCEDURE [Accounts-UpdateImportAttempts] @values IMAGE AS BEGIN UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4 END And we can use it by running the SP directly, or in any Linq query we need: using (var db = new DataContext()) { db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary()); // or var accounts = db.Accounts .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4) .Select(i => i.Value4) .Contains(a.ID)); } This method has the benefit of using compiled queries in Linq (which will have the same SQL definition and query plan, so they will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :)

    3) The simple Linq .Contains(): It's a simpler approach, and is perfect in simple scenarios, but is of course limited by this. using (var db = new DataContext()) { var accounts = db.Accounts .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID)); } The biggest drawback of this method is that each integer in the downloadResults variable will be passed as a separate int.. In this case, the query is limited by SQL (the max allowed parameters in a query, which is a couple of thousand, if I remember right).

    So I'd like to ask: what do you think is the best of these, and what other methods and approaches have I missed?
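
    For reference, a table valued parameter can also be consumed by a stored procedure (the TVP parameter must be declared READONLY); here is a minimal T-SQL sketch that reuses the TVP_INT type above, with a hypothetical procedure name that is not taken from the question:

        CREATE PROCEDURE dbo.Accounts_UpdateImportAttempts_TVP
            @values dbo.TVP_INT READONLY
        AS
        BEGIN
            -- Join the incoming ID list against the target table.
            UPDATE DA
            SET [tsLastImportAttempt] = CURRENT_TIMESTAMP
            FROM [Account] AS DA
            JOIN @values AS IDs ON DA.ID = IDs.ID;
        END

    From ADO.NET the parameter would be passed as a DataTable with SqlDbType.Structured; as noted above, LINQ to SQL cannot map a procedure that takes a TVP.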

    Read the article

  • Svn log - svn: '.' is not a working copy

    - by fampinheiro
    I'm getting "svn: '.' is not a working copy" when i use the svn log command. I know that i need a working copy for the log command to work but can this be done directly on a repository? My goal is to display the information (change history) of a repository. I think updating the working copy whenever i need the log information is not a good solution. Is there an alternative solution to this or updating a working copy every time i need to log is the only way to go? Thanks in advance.

    Read the article

  • Glassfish v3 Domain Output Log is not ending after the netbeans debug is stopped

    - by Nassign
    Is anybody else experiencing an endless flood of log output when a NetBeans debugging session is stopped? We use Java logging in our web application, but when I stop the debugger, a large amount of log output is suddenly dumped into the GlassFish v3 Domain output window. It goes on for so long that I have to click Stop Server, because otherwise the NetBeans logger consumes too much memory and everything becomes sluggish later. I'm not sure how long it takes for the logging to stop, but it lasts for minutes. Any ideas?

    Read the article

  • How to log python exception ?

    - by Maxim Veksler
    Hi, Coming from Java and being familiar with logback, I used to do try { ... } catch (Exception e) { log("Error at X", e); } I would like the same functionality: being able to log the exception and the stack trace into a file. How would you recommend implementing this? I'm currently using the boto logging infrastructure, boto.log.info(...). I've looked at some options and found out I can access the actual exception details using this code: import sys, traceback try: 1/0 except: exc_type, exc_value, exc_traceback = sys.exc_info() traceback.print_exception(exc_type, exc_value, exc_traceback) I would like to somehow get the string that print_exception() writes to stdout so that I can log it. Thank you, Maxim.
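
    A minimal sketch of two standard-library ways to capture that traceback as a string (assuming the plain logging module rather than boto's wrapper):

        import logging
        import traceback

        logging.basicConfig(filename="app.log", level=logging.INFO)

        try:
            1 / 0
        except Exception:
            # Option 1: logging.exception() logs at ERROR level and
            # appends the current traceback automatically.
            logging.exception("Error at X")

            # Option 2: traceback.format_exc() returns the traceback as a
            # string, which can be handed to any logger (e.g. boto.log.info).
            logging.info("Error at X\n%s", traceback.format_exc())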

    Read the article

  • Alter the cruisecontrol log with XSLTLogPublisher

    - by daramarak
    We build our project with CruiseControl. We have an ant script running some exec commands, and the output of this is placed in the CDATA part of that message, with its priority set to "info". Now I have created an XSL that modifies this priority depending on the CDATA output, so it creates a new logfile that I want CruiseControl to use. I tried to do this using the XSLTLogPublisher tag like this: <xsltlogpublisher directory="${log.dir}" outfile="${logname}" xslfile="${transform}" /> But it doesn't seem to work: no error is logged from CruiseControl and no changes are made to the log file. Might it be that CruiseControl has problems transforming its own log, or are the publishers invoked after the actual log is published as a web page? Can anyone provide some sort of solution to my problem?

    Read the article

  • Export Failed Element Log(s) after deliver?

    - by cogmios
    When a deliver has been performed I can right-click an entry in the GUI version/element log and it shows me a popup with the element log. Quite handy. I now have a delivery with about 25 failed ones and a couple of hundred OKs. Sadly I cannot sort on the "status" column, so I have to make screenshots wherever I find a [!] in order to sit together with the specific team and find out if it is really OK not to deliver those. It would be handy to have the list of element logs from a deliver so that I do not have to make screenshots or copy the contents of the popup box one by one, but instead just have a list of the failed ones with their element logs. Is there a way to export the element logs from a deliver (the ones that show up when you right-click and choose "Display Element Log"), and/or only the ones that gave failures?

    Read the article

  • Writing exceptions in multihreaded windows service to event log

    - by Ziplin
    I have a multithreaded Windows service that unpredictably stops running about once every 24 hours. I am writing to the event log and that's going through just fine, but whenever the service crashes there are no messages in the event log (not even a message that the service stopped, despite AutoLog=true being set). Is there a way to have uncaught exceptions written straight to the log, even if they aren't thrown on the original thread?

    Read the article

  • Chrome Extension - Console Log not firing

    - by coffeemonitor
    I'm starting to learn to make my own Chrome Extensions, and starting small. At the moment, I'm switching from using the alert() function to console.log() for a cleaner development environment. For some reason, console.log() is not displaying in my chrome console logs. However, the alert() function is working just fine. Can someone review my code below and perhaps tell me why console.log() isn't firing as expected? manifest.json { "manifest_version": 2, "name": "Sandbox", "version": "0.2", "description": "My Chrome Extension Playground", "icons": { "16": "imgs/16x16.png", "24": "imgs/24x24.png", "32": "imgs/32x32.png", "48": "imgs/48x48.png" }, "background": { "scripts": ["js/background.js"] }, "browser_action": { "default_title": "My Fun Sandbox Environment", "default_icon": "imgs/16x16.png" }, "permissions": [ "background", "storage", "tabs", "http://*/*", "https://*/*" ] } js/background.js function click(e) { alert("this alert certainly shows"); console.log("But this does not"); } // Fire a function, when icon is clicked chrome.browserAction.onClicked.addListener(click); As you can see, I kept it very simple. Just the manifest.json and a background.js file with an event listener, if the icon in the toolbar is clicked. As I mentioned, the alert() is popping up nicely, while the console.log() appears to be ignored.

    Read the article

  • Microsoft Tech-Ed North America 2010 - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practi

    - by ssqa.net
    It is just a week to go until Tech-Ed North America 2010 in New Orleans, and this time I'm also speaking at the conference on the subject - SQL Server Upgrade, 2000 - 2005 - 2008: Notes and Best Practices from the Field... more from here .. It is a coincidence that this is the 2nd time the same talk has been selected for Tech-Ed North America on a topic I have presented at SQLBits before....(read more)

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting. A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips. Here is a simple tick list you can follow in order to tune your lookups (an example of such a cache-charging statement appears below):

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need, ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box. It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further… a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts. If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
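
    A minimal T-SQL sketch of the kind of explicit cache-charging query the tips above describe (the table and column names are hypothetical, not from the post):

        -- Charge the Lookup cache from a query rather than the table dropdown:
        -- only the columns needed for the join and the downstream output,
        -- cast as narrow as the data allows, and filtered to rows this load can match.
        SELECT  CustomerKey,                                        -- join column
                CAST(CustomerCode AS varchar(10)) AS CustomerCode   -- narrow, non-Unicode output column
        FROM    dbo.DimCustomer
        WHERE   IsCurrent = 1;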

    Read the article

  • Attention users running SQL Server 2008 & 2008 R2!

    - by AaronBertrand
    In April and May, Microsoft released cumulative updates for SQL Server 2008 and 2008 R2 (I blogged about them here and here ). They are: CU #11 for 2008 SP3 (10.00.5840) ( KB #2834048 ) CU #12 for 2008 R2 SP1 (10.50.2874) ( KB #2828727 ) CU #6 for 2008 R2 SP2 (10.50.4279) ( KB #2830140 ) Sometime after that, looks like the next day, both downloads were pulled, allegedly due to an index corruption issue (if you believe the commentary on the Release Services blog post for CU #6 ) or due to an issue...(read more)

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious because despite the great capabilities of writing CLR-based stored procedures to off-load those nasty operations that TSQL isn't that great at (like iteration, or complex math), I'm continuing to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)

    Read the article

  • SQL University: Parallelism Week - Introduction

    - by Adam Machanic
    Welcome to Parallelism Week at SQL University . My name is Adam Machanic, and I'm your professor. Imagine having 8 brains, or 16, or 32. Imagine being able to break up complex thoughts and distribute them across your many brains, so that you could solve problems faster. Now quit imagining that, because you're human and you're stuck with only one brain, and you only get access to the entire thing if you're lucky enough to have avoided abusing too many recreational drugs. For your database server,...(read more)

    Read the article

  • SQL Server 2008 R2 still requires a trace flag for Lock Pages in Memory

    - by AaronBertrand
    Almost two years ago, I blogged that Lock Pages in Memory was finally available to Standard Edition customers (Enterprise Edition customers had long been deemed smart enough to not abuse this feature). In addition to applying a cumulative update (2005 SP3 CU4 or 2008 SP1 CU2), in order to take advantage of LPIM, you also had to enable trace flag 845. Since the trace flag isn't documented for SQL Server 2008 R2, several of us in the community assumed that it was no longer required (since it was introduced...(read more)

    Read the article

  • Exploding maps in Reporting Services 2008 R2

    - by Rob Farley
    Kaboom! Well, that was the imagery that secretly appeared in my mind when I saw “USA By State Exploded” in the list of installed maps in Report Builder 3.0 – part of the spatial offering of SQL Server Reporting Services 2008 R2. Alas, it just means that the borders are bigger. Clicking on it showed me. Unfortunately, I’m not interested in maps of the US. None of my clients are there (at least, not yet – feel free to get in touch if you want to change this ‘feature’ of my company). So instead, I’ve recently been getting hold of some data for Australian areas. I’ve just bought some PostCode shapes for South Australia, and will use this in demos for conferences and for showing clients how this kind of report can really impact their reporting. One of the companies I was talking about getting shape files sent me a sample. So I chose the “ESRI shapefile” option you see above, and browsed to my file. It appeared in the window like this: Australians will immediately recognise this as the area around Wollongong, just south of Sydney. Well, apart from me. I didn’t. I had to put a Bing Maps layer behind it to work that out, but that’s not for this post. The thing that I discovered was that if I selected the Exploded USA option (but without clicking Next), and then chose my shape file, then my area around Wollongong would be exploded too! Huh! I think this is actually a bug, but a potentially useful one! Some further investigation (involving creating two identical reports, one with this exploded view, one without), showed that the Exploded View is done by reducing the ScaleFactor property of the PolygonLayer in the map control. The Exploded version has it below 1. If you set it to above one, your shapes overlap. I discovered this by accident… I guess I hadn’t looked through all the PolygonLayer options to work out what they all do. And because this post is about Reporting, it can qualify for this month’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant).

    Read the article

  • SQL University: Parallelism Week - Part 3, Settings and Options

    - by Adam Machanic
    Congratulations! You've made it back for the third and final installment of Parallelism Week here at SQL University. So far we've covered the fundamentals of multitasking vs. parallel processing and delved into how parallel query plans actually work. Today we'll take a look at the settings and options that influence intra-query parallelism and discuss how best to set things up in various situations. Instance-Level Configuration: Your database server probably has more than one logical processor....(read more)
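
    Not from the article, but for orientation: the two instance-level knobs usually discussed in this context can be inspected with sp_configure. A minimal sketch:

        -- Show the instance-level parallelism settings (values are displayed, not changed).
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism';       -- 0 means "use all available schedulers"
        EXEC sp_configure 'cost threshold for parallelism';  -- plans costed above this may go parallel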

    Read the article

  • SQL Down Under podcast 60 with SQL Server MVP Adam Machanic

    - by Greg Low
    I managed to get another podcast posted over the weekend. Late last week, I managed to get a show recorded with Adam Machanic. Adam's always fascinating. In this show, he's talking about what he's found regarding increasing query performance using parallelism. Late in the show, he gives his thoughts on a number of topics related to the upcoming SQL Server 2014.Enjoy!The show is online now: http://www.sqldownunder.com/Podcasts 

    Read the article

  • SQL Server Boolean (bit datatype)

    - by Derek D.
    In SQL Server, boolean values can be represented using the bit datatype. Bit values differ from boolean values in that a bit can actually be one of three values: 1, 0, or NULL, while booleans can only be true or false. When assigning bits, it is best to use 1 or zero [...]
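
    A minimal T-SQL sketch of the three states described above (the table variable and column names are illustrative, not from the article):

        -- A bit column holds 1, 0, or NULL unless it is declared NOT NULL.
        DECLARE @Flags TABLE
        (
            Name varchar(20) NOT NULL,
            IsActive bit NULL
        );

        INSERT INTO @Flags (Name, IsActive)
        VALUES ('on', 1), ('off', 0), ('unknown', NULL);

        -- A bit is not a true boolean: it must be compared, not used as a predicate by itself.
        SELECT Name FROM @Flags WHERE IsActive = 1;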

    Read the article

  • T-SQL Tuesday # 16 : This is not the aggregate you're looking for

    - by AaronBertrand
    This week, T-SQL Tuesday is being hosted by Jes Borland ( blog | twitter ), and the theme is " Aggregate Functions ." When people think of aggregates, they tend to think of MAX(), SUM() and COUNT(). And occasionally, less common functions such as AVG() and STDEV(). I thought I would write a quick post about a different type of aggregate: string concatenation. Even going back to my classic ASP days, one of the more common questions out in the community has been, "how do I turn a column into a comma-separated...(read more)
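
    One common T-SQL approach to that kind of grouped string concatenation (a sketch with hypothetical table and column names, and not necessarily the method the article settles on) is FOR XML PATH:

        -- Build a comma-separated list of product names per category.
        SELECT c.CategoryID,
               STUFF((SELECT ',' + p.ProductName
                      FROM dbo.Products AS p
                      WHERE p.CategoryID = c.CategoryID
                      ORDER BY p.ProductName
                      FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
                     1, 1, '') AS ProductList
        FROM dbo.Categories AS c;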

    Read the article

  • This Year's SQL Christmas Card

    - by Mike C
    This year's Christmas Card is similar to last year's. I used the geometry data type again for a spatial data design. Just download the attachment, unzip the .SQL script and run it in SSMS. Then look at the Spatial Data preview tab for the result. Also don't forget to visit http://www.noradsanta.org/ if your kids want to track Santa. Merry Christmas, Happy Holidays and have a great new year!...(read more)
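
    For anyone curious what the spatial preview does without downloading the card script: any query that returns a geometry column gets a "Spatial results" tab in SSMS. A trivial, hypothetical example (not the card itself):

        -- SSMS shows a "Spatial results" tab for result sets containing geometry/geography columns.
        SELECT geometry::STGeomFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 0) AS shape;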

    Read the article

  • June 2013 Cumulative Updates for SQL Server 2008 R2

    - by AaronBertrand
    Well, surely at least partly in response to the CU6 mess I reported earlier today, and partly because they were due, Microsoft has released new cumulative updates that contain - among other things - updated code that avoids the symptom introduced with earlier updates (though this regression fix doesn't seem to appear in the KB articles - unless by "corruption" they meant ridiculous size increase). SQL Server 2008 R2 Service Pack 1 Cumulative Update #13 (KB Article: KB #2855792), 5 fixes listed at...(read more)

    Read the article
