Search Results

Search found 5176 results on 208 pages for 'max fraser'.

  • Auto_increment values in InnoDB?

    - by Timmy
    I've been using InnoDB for a project and relying on auto_increment. This is not a problem for most of the tables, but for tables with deletions it can be an issue, per "AUTO_INCREMENT Handling in InnoDB", particularly this part about an AUTO_INCREMENT column named ai_col: "After a server startup, for the first insert into a table t, InnoDB executes the equivalent of this statement: SELECT MAX(ai_col) FROM t FOR UPDATE; InnoDB increments by one the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table." This is a problem because, while it ensures that the key is unique within the table, there are foreign keys to this table, and after a restart those keys are no longer unique. The MySQL server does not (and should not) restart often, but when it does, this breaks. Are there any easy ways around this?
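
    A common workaround (a sketch, not something from the original question; table and column names are illustrative) is to reseed the counter right after startup from a table whose rows are never deleted, for example the child table that still holds references to deleted parent rows:

      -- Hypothetical schema: parent table t(ai_col) and a child table c(t_id) that keeps
      -- references to deleted parent rows. After a restart, push t's counter past the
      -- highest id ever handed out, not just the highest id still present in t.
      SELECT @next := GREATEST(COALESCE((SELECT MAX(ai_col) FROM t), 0),
                               COALESCE((SELECT MAX(t_id) FROM c), 0)) + 1;
      SET @sql := CONCAT('ALTER TABLE t AUTO_INCREMENT = ', @next);
      PREPARE stmt FROM @sql;
      EXECUTE stmt;
      DEALLOCATE PREPARE stmt;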

  • Configuring TeamCity internal.properties to increase git fetch memory

    - by 78lro
    When pulling from Git, my TeamCity install is getting an out-of-memory error. According to the TeamCity documentation, I should be able to increase the memory assigned to the git fetch process by setting the value of teamcity.git.fetch.process.max.memory to something greater than the default 512MB. http://confluence.jetbrains.net/display/TCD65/Git+%28JetBrains%29#Git%28JetBrains%29-InternalProperties The problem is that there does not appear to be an internal.properties file in the location specified. I have tried creating one at TeamCity/conf/internal.properties as suggested here: http://devnet.jetbrains.net/thread/302596 But I still get the out-of-memory error when TeamCity tries to pull from GitHub. Thanks.
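
    For reference, the file itself is just a plain Java properties file; a minimal sketch of its contents (the 1024M value is only an example, and the correct directory depends on the TeamCity version, so treat the path as an assumption):

      # <TeamCity data directory>/config/internal.properties  (location varies by version)
      # Give the external git fetch process more heap than the 512MB default.
      teamcity.git.fetch.process.max.memory=1024M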

  • IMAP Idle Timeout

    - by Paul
    Let's say I am using IMAP IDLE to monitor changes in a mail folder. The IMAP spec says that IDLE connections should only stay alive for 30 minutes max, but it is recommended that a lower number of minutes be selected - say 20 minutes - then the idle is cancelled and restarted. I am wondering what would happen if the mail contents changed between the idle being cancelled and the new idle being created. An email could potentially be missed. Given that RECENT is a bit vague, this could mean fetching a message list when the old idle ends and before a new idle starts. But that is almost the same as polling every 20 minutes and defeats some of the benefit of idle. Alternatively, a new idle session could be started prior to terminating the expiring one. In any case, I think this problem has already been solved, so here I am asking for recommendations. Thanks, Paul
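
    One way to close that gap (a sketch of the idea, not from the original question) is to remember the highest UID seen and, between ending one IDLE and starting the next, ask the server for anything newer before idling again:

      C: a1 IDLE
      S: + idling
         ... up to ~20 minutes of unsolicited updates ...
      C: DONE
      S: a1 OK IDLE terminated
      C: a2 UID SEARCH UID 4242:*      (4242 = last UID seen + 1; catches anything from the gap)
      S: * SEARCH ...
      S: a2 OK SEARCH completed
      C: a3 IDLE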

  • SSRS 2005: How do I make varbinary data available for download in a report?

    - by Angelo
    Hi, SSRS newbie question here... I have a table where one column is varbinary(max) data. I would like to make a report that makes this data available for download as a hyperlink, so the user can just click on the item and get a file-download dialog for the binary data. In this particular case, the binary data happens to be the content of old PDF files, but that shouldn't matter. I tried searching around but I can't find any pointers on how to do this. It seems to me that it should be possible. There are ways to display images in a report using varbinary data, so it makes sense that one should be able to make arbitrary binary data downloadable from a report, right?
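
    One pattern that is often suggested (an assumption on my part, not something from the original question) is to give the report item a "Jump to URL" action pointing at a small HTTP handler that streams the varbinary column; everything below - handler name, table, column, content type - is made up for illustration:

      // DownloadDocument.ashx - hypothetical handler; table/column names are illustrative.
      public class DownloadDocument : System.Web.IHttpHandler
      {
          public void ProcessRequest(System.Web.HttpContext context)
          {
              int id = int.Parse(context.Request.QueryString["id"]);
              using (System.Data.SqlClient.SqlConnection cn =
                         new System.Data.SqlClient.SqlConnection("...connection string..."))
              using (System.Data.SqlClient.SqlCommand cmd = new System.Data.SqlClient.SqlCommand(
                         "SELECT FileContent FROM Documents WHERE DocumentID = @id", cn))
              {
                  cmd.Parameters.AddWithValue("@id", id);
                  cn.Open();
                  byte[] bytes = (byte[])cmd.ExecuteScalar();
                  context.Response.ContentType = "application/pdf";   // or look up the real type
                  context.Response.AddHeader("Content-Disposition", "attachment; filename=document.pdf");
                  context.Response.BinaryWrite(bytes);
              }
          }

          public bool IsReusable { get { return false; } }
      }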

  • MySQL auto_increment problem: Duplicate entry '4294967295' for key 1

    - by Josh
    I have a table of emails. The last record in there with an auto-increment id is 3780, which is a legit record. Any new record I insert now goes in right there, continuing from 3780. However, in my logs I have the occasional: Query FAIL: INSERT INTO mail.messages (timestamp_queue) VALUES (:time); Array ( [0] => 23000 [1] => 1062 [2] => Duplicate entry '4294967295' for key 1 ) Somehow, the auto-increment jumped up to the unsigned INT max of 4294967295. Why on god's green earth would this jump up so high? I have no inserts with an id field. SHOW TABLE STATUS for that table now reports Auto_increment as 4294967296. How could something like this occur? I realize the id field should perhaps be a BIGINT, but my worry is that somehow this thing jumps up again. Josh
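
    4294967295 is the ceiling of an unsigned INT, and the duplicate-key error suggests a stray row is already sitting at that value. A hedged clean-up sketch (the id column name is assumed, so check the real schema first):

      -- See what is sitting at the unsigned INT ceiling; if it is junk, remove it.
      SELECT * FROM mail.messages WHERE id = 4294967295;
      DELETE FROM mail.messages WHERE id = 4294967295;

      -- Widen the column for headroom and reseed just past the last legitimate row (3780).
      ALTER TABLE mail.messages
          MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
          AUTO_INCREMENT = 3781;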

  • Is the Interop.Domino DLL thread safe?

    - by anup-24
    Hi, I am using the Interop.Domino DLL version 1.2 in a C# application, and using multithreading to access multiple NSF files at the same time by creating a new session for each thread created (max 5 threads at a time). For large NSF files, I was getting Notes errors like "memory segment overflow". To overcome this problem, I used Marshal.ReleaseComObject(object) to release the NotesDocument and NotesView objects wherever possible. Now the issue is that I am able to access 2 NSF files, but the remaining threads run into DLL exceptions because some Notes objects are coming back null. Kindly provide me some help. Thanks.
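
    In general the Domino COM objects do not like being shared across threads; a pattern that tends to avoid the null-object symptom (a sketch under assumptions: passwordless Initialize, a view named ($All)) is to keep every Notes object strictly local to its thread and release it deterministically:

      // One worker per NSF file; nothing COM-related crosses a thread boundary.
      static void ProcessNsf(string nsfPath)
      {
          Domino.NotesSession session = null;
          Domino.NotesDatabase db = null;
          Domino.NotesView view = null;
          try
          {
              session = new Domino.NotesSession();
              session.Initialize("");                       // assumption: no password needed
              db = session.GetDatabase("", nsfPath, false);
              view = db.GetView("($All)");                  // assumption: this view exists
              Domino.NotesDocument doc = view.GetFirstDocument();
              while (doc != null)
              {
                  // ... read whatever items are needed from doc ...
                  Domino.NotesDocument next = view.GetNextDocument(doc);
                  System.Runtime.InteropServices.Marshal.ReleaseComObject(doc);
                  doc = next;
              }
          }
          finally
          {
              // Release in reverse order of creation, even if an exception was thrown.
              if (view != null) System.Runtime.InteropServices.Marshal.ReleaseComObject(view);
              if (db != null) System.Runtime.InteropServices.Marshal.ReleaseComObject(db);
              if (session != null) System.Runtime.InteropServices.Marshal.ReleaseComObject(session);
          }
      }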

  • Looping to provide multiple lines in linechart (django-googlecharts)

    - by mighty_bombero
    Hi, I'm trying to generate some charts using django-googlecharts. This works fine for rather static data, but in one case I would like to render a varying number of lines, based on a variable. I tried this: {% chart %} {% for line in line_data %} {% chart-data line %} {% endfor %} {% chart-size "390x200" %} {% chart-type "line" %} {% chart-labels days %} {% endchart %} line_data is a list containing lists. The template code fails with "Caught an exception while rendering: max() arg is an empty sequence". I guess the problem is that I'm trying to loop over template tags. What approach could be used here? Or am I completely missing something? Is this doable using inclusion tags? Thanks for your help.

  • Multi-statement Table Valued Function vs Inline Table Valued Function

    - by AndyC
    i.e.: CREATE FUNCTION MyNS.GetUnshippedOrders() RETURNS TABLE AS RETURN SELECT a.SaleId, a.CustomerID, b.Qty FROM Sales.Sales a INNER JOIN Sales.SaleDetail b ON a.SaleId = b.SaleId INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.ShipDate IS NULL GO versus: CREATE FUNCTION MyNS.GetLastShipped(@CustomerID INT) RETURNS @CustomerOrder TABLE (SaleOrderID INT NOT NULL, CustomerID INT NOT NULL, OrderDate DATETIME NOT NULL, OrderQty INT NOT NULL) AS BEGIN DECLARE @MaxDate DATETIME SELECT @MaxDate = MAX(OrderDate) FROM Sales.SalesOrderHeader WHERE CustomerID = @CustomerID INSERT @CustomerOrder SELECT a.SalesOrderID, a.CustomerID, a.OrderDate, b.OrderQty FROM Sales.SalesOrderHeader a INNER JOIN Sales.SalesOrderHeader b ON a.SalesOrderID = b.SalesOrderID INNER JOIN Production.Product c ON b.ProductID = c.ProductID WHERE a.OrderDate = @MaxDate AND a.CustomerID = @CustomerID RETURN END GO Is there an advantage to using one over the other? Are there certain scenarios where one is better than the other, or are the differences purely syntactic? I realise the two example queries are doing different things, but is there a reason I would write them in that particular way? What I've read about them hasn't really explained the advantages/differences. Thanks

  • Getting SHOUTcast metadata on the Mac

    - by Fernando Valente
    I'm creating an application in Objective-C and I need to get the metadata from a SHOUTcast stream. I tried this: NSURL *URL = [NSURL URLWithString:@"http://202.4.100.2:8000/"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:URL]; [request addValue:@"1" forHTTPHeaderField:@"icy-metadata"]; [request addValue:@"Winamp 5/3" forHTTPHeaderField:@"User-Agent"]; [request addValue:@"audio/mpeg" forHTTPHeaderField:@"Content-Type"]; [NSURLConnection connectionWithRequest:request delegate:self]; I would have to get the headers from the response to this request in order to get the information, right? Unfortunately it keeps returning these headers: Date = "17 Apr 2010 21:57:14 -0200"; "Max-Age" = 0; What am I doing wrong?
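
    Two things worth noting (general observations, not from the original post): the response headers only become visible in the NSURLConnection delegate callback, and the song metadata itself is not an HTTP header at all - it is interleaved into the audio stream every icy-metaint bytes. A minimal sketch of inspecting the headers, assuming the server's ICY-style response is parsed at all:

      - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
          NSHTTPURLResponse *http = (NSHTTPURLResponse *)response;
          NSDictionary *headers = [http allHeaderFields];
          NSLog(@"icy headers: %@", headers);
          // [headers objectForKey:@"icy-metaint"] is the byte interval between metadata
          // blocks in the stream, if the server honoured the icy-metadata request header.
      }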

  • Google Maps API v3: Can I setZoom after fitBounds?

    - by chris
    I have a set of points I want to plot on an embedded Google Map (API v3). I'd like the bounds to accommodate all points unless the zoom level is too low (i.e., zoomed out too much). My approach has been like this: var bounds = new google.maps.LatLngBounds(); // extend bounds with each point gmap.fitBounds(bounds); gmap.setZoom( Math.max(6, gmap.getZoom()) ); This doesn't work. The last line "gmap.setZoom()" doesn't change the zoom level of the map if called directly after fitBounds. Is there a way to get the zoom level of a bounds without applying it to the map? Other ideas to solve this?
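
    One approach that usually works (a sketch, not from the original post) is to clamp the zoom from a one-shot listener that fires once fitBounds has actually been applied:

      var bounds = new google.maps.LatLngBounds();
      // ... extend bounds with each point ...
      google.maps.event.addListenerOnce(gmap, 'bounds_changed', function () {
          if (gmap.getZoom() < 6) {   // zoomed out too far; enforce the minimum from the post
              gmap.setZoom(6);
          }
      });
      gmap.fitBounds(bounds);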

  • Embedded Linux or eCos?

    - by mawg
    One way to look at it: embedded Linux starts with desktop Linux and ditches the parts not needed for embedded systems (is this actually true?), whereas eCos is designed from the ground up for embedded systems. Now, assume an ARM processor, probably an ARM7 - does performance make a difference? Actually, we are talking about a very low-load system, max 500 transactions a day. Any advantages of one over the other (or FreeRTOS, etc.)? Stability, maturity, performance, development tools, anything else? All that I can think of is that if I am certain I will never port to another OS, then if I go with embedded Linux, I don't need an OS abstraction layer to allow me to do unit testing on the host (a desktop Linux box). Any thoughts or comments? Thanks.

  • Error about 'invalid JSON' with CouchDB view but the JSON's fine...

    - by Chris Huang-Leaver
    I am trying to set up the following view on CouchDB { "_id":"_design/id", "_rev":"1-9be2e55e05ac368da3047841f301203d", "language":"javascript", "views":{ "by_id":{ "map" : "function(doc) { emit(doc.id, doc)}" },"from_user_id":{ "map" : "function(doc) { if (doc.from_user_id) {emit(doc.from_user_id, doc)}}"}, "from_user":{ "map" : "function(doc) { if (doc.from_user) {emit(doc.from_user, doc)}}"}, "to_user_id":{ "map" : "function(doc) {if (doc.to_user_id){ emit(doc.to_user_id, doc)}}"}, "to_user":{ "map" : "function(doc) {if (doc.to_user){ emit(doc.to_user, doc)}}" }, "max_id":{ "map" : "function(doc) { if (doc.id) {emit(doc._id, eval(doc.id))}}", "reduce" :"function(key,value) { a = value[0]; for (i=1; i <value.length; ++i){a = Math.max(a,value[i])} return a}" } } } When I try to 'PUT' this using curl: curl -X PUT -d keys.json $CDB/_design/id I get: {"error":"bad_request","reason":"invalid UTF-8 JSON"} I know it's not invalid JSON, because I tested it using the 'json' library built into Python 2.6, and it loads fine. JS screw-ups give me the error 'must evaluate to a function'. What else might be wrong with it?
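
    One thing worth checking (an observation about curl, not something stated in the post): -d keys.json sends the literal text "keys.json" as the request body rather than the file's contents, and CouchDB would report exactly this kind of invalid-JSON error for that. Reading the body from the file needs the @ prefix:

      curl -X PUT -d @keys.json $CDB/_design/id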

  • Is there any super fast algorithm for finding LINES in a picture?

    - by Ole Jak
    So I have an image like this. I need some super fast algorithm for finding all straight lines on it. I want to give the algorithm parameters like min length and max line distortion, and get back the start and end points of the lines in pixel coordinates relative to the picture. So on this picture, I want to find all lines between the tiles and those 2 black lines on top. So I need an algorithm for super fast finding of straight lines of different colors in a picture. Is there any such algorithm? (super duper fast=)
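
    The usual tool for this is the (probabilistic) Hough transform, which most image libraries ship in optimized form; a minimal sketch with OpenCV's Python bindings (the tooling choice and thresholds are assumptions, but minLineLength and maxLineGap map roughly onto the "min length" and "distortion" parameters mentioned above):

      import cv2
      import numpy as np

      img = cv2.imread("picture.png")
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(gray, 50, 150)

      # Probabilistic Hough transform: one (x1, y1, x2, y2) segment per detected line.
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                              minLineLength=30, maxLineGap=5)
      if lines is not None:
          for x1, y1, x2, y2 in lines[:, 0]:
              print((x1, y1), (x2, y2))   # start/end points in pixel coordinates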

  • Acordex image viewer throws out-of-memory exception in Citrix environment

    - by neha
    We have a .NET 2.0 application. In the .aspx page we are calling the Java applet, and this applet calls the Acordex image viewer. In the production environment, users are facing "out of memory" or "insufficient memory" issues when they try to open an image or magnify an image in the Acordex viewer. Strangely, when the users log out and log in again, they are able to see the same image without any errors. The website is hosted in a Citrix environment. We have access to this environment, but we are not able to reproduce this issue on the test servers or the local machines. We don't know what is causing this issue. What should we do to troubleshoot it? Do we have to increase the memory allotted to the users in Citrix? The RAM is around 4 GB, the number of simultaneous users is 10-13, and the image size is max 2 MB. Following is the code used to call the Acordex image viewer:

  • Broadcast messages to multiple clients on the same machine

    - by Christopher Chase
    I need to have a server on one machine that broadcasts a message every second or so, kind of like service discovery. The message needs to be received by multiple client processes that could be on the same machine or on different machines. I'm using Delphi 7 with Indy 9.0.18. Where I'm stuck is whether I should be using UDP broadcast or IP multicast, or whether it's even possible. I've managed to get it to work with IP multicast with one client per machine, but even after many tries with different bindings, max/min ports, etc., I can't seem to find a solution.

  • MySQL ORDER BY problem

    - by Sergio
    Hello. I want to list the messages a specific user received from other users, grouped by sender ID and ordered by the last message received. If I use this query: SELECT MAX(id), fromid, toid, message FROM pro_messages WHERE toid=00003 GROUP BY fromid I do not get the last message sent from user "fromid" to user "toid", but the first message sent. Can I do this some other way, or do I need to do it with two queries or by joining tables? id - message id fromid - id of user who sent the message toid - id of user who receives the message (in this case user 00003)
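
    A common way to get the latest row per sender (a sketch using the post's own table and columns) is to find MAX(id) per fromid first and join back to the table for the full row:

      SELECT m.id, m.fromid, m.toid, m.message
      FROM pro_messages m
      INNER JOIN (
          SELECT fromid, MAX(id) AS last_id
          FROM pro_messages
          WHERE toid = 00003
          GROUP BY fromid
      ) latest ON m.id = latest.last_id
      ORDER BY m.id DESC;    -- most recent conversations first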

  • Problem passing JSON into jQuery graph (flot)

    - by Adam McMahon
    I'm trying to retrieve some JSON to pass into a flot graph. I know the JSON is right because I hard-coded it to check, but I'm pretty sure I'm not passing it right, because it's not showing up. Here's the JavaScript: var total = $.ajax({ type: "POST", async: false, url: "../api/?key=xxx&api=report&crud=return_months&format=json" }).responseText; //var total = $.evalJSON(total); var plot = $.plot($("#placeholder"),total); Here's the JSON: [ { data: [[1,12], [2,43], [3,10], [4,17], ], label: "E-File"}, { data: [[1,25], [2,35], [3,3], [4,5], ], label: "Bank Products" }, { data: [[1,41], [2,87], [3,30], [4,29], ], label: "All Returns" } ], {series: {lines: { show: true },points: { show: true }}, grid: { hoverable: true, clickable: true }, yaxis: { min: 0, max: 100 }, xaxis: { ticks: [[1,"January"],[2,"February"],[3,"March"],[4,"April"],[5,"May"],[6,"June"],[7,"July"],[8,"August"],[9,"September"],[10,"October"],[11,"November"],[12,"December"]] }}
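
    responseText is a plain string, so flot never receives an array; a hedged rewrite (assuming the API can be changed to return just the series array as valid JSON, with the options kept client-side) lets jQuery parse the response and plots inside the callback, passing the series and the options as two separate arguments:

      $.ajax({
          type: "POST",
          url: "../api/?key=xxx&api=report&crud=return_months&format=json",
          dataType: "json",                       // jQuery parses the body into real objects
          success: function (total) {
              var options = {
                  series: { lines: { show: true }, points: { show: true } },
                  grid:   { hoverable: true, clickable: true },
                  yaxis:  { min: 0, max: 100 }
              };
              $.plot($("#placeholder"), total, options);
          }
      });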

  • JPA entities -- org.hibernate.TypeMismatchException

    - by shane lee
    Environment: JDK 1.6, JEE5 Hibernate Core 3.3.1.GA, Hibernate Annotations 3.4.0.GA DB:Informix Used reverse engineering to create my persistence entities from db schema [NB:This is a schema in work i cannot change] Getting exception when selecting list of basic_auth_accounts org.hibernate.TypeMismatchException: Provided id of the wrong type for class ebusiness.weblogic.model.UserAccounts. Expected: class ebusiness.weblogic.model.UserAccountsId, got class ebusiness.weblogic.model.BasicAuthAccountsId Both basic_auth_accounts and user_accounts have composite primary keys and one-to-one relationships. Any clues what to do here? This is pretty important that i get this to work. Cannot find any substantial solution on the net, some say to create an ID class which hibernate has done, and some say not to have a one-to-one relationship. Please help me!! /** * BasicAuthAccounts generated by hbm2java */ @Entity @Table(name = "basic_auth_accounts", schema = "ebusdevt", catalog = "ebusiness_dev", uniqueConstraints = @UniqueConstraint(columnNames = { "realm_type_id", "realm_qualifier", "account_name" })) public class BasicAuthAccounts implements java.io.Serializable { private BasicAuthAccountsId id; private UserAccounts userAccounts; private String accountName; private String hashedPassword; private boolean passwdChangeReqd; private String hashMethodId; private int failedAttemptNo; private Date failedAttemptDate; private Date lastAccess; public BasicAuthAccounts() { } public BasicAuthAccounts(UserAccounts userAccounts, String accountName, String hashedPassword, boolean passwdChangeReqd, String hashMethodId, int failedAttemptNo) { this.userAccounts = userAccounts; this.accountName = accountName; this.hashedPassword = hashedPassword; this.passwdChangeReqd = passwdChangeReqd; this.hashMethodId = hashMethodId; this.failedAttemptNo = failedAttemptNo; } public BasicAuthAccounts(UserAccounts userAccounts, String accountName, String hashedPassword, boolean passwdChangeReqd, String hashMethodId, int failedAttemptNo, Date failedAttemptDate, Date lastAccess) { this.userAccounts = userAccounts; this.accountName = accountName; this.hashedPassword = hashedPassword; this.passwdChangeReqd = passwdChangeReqd; this.hashMethodId = hashMethodId; this.failedAttemptNo = failedAttemptNo; this.failedAttemptDate = failedAttemptDate; this.lastAccess = lastAccess; } @EmbeddedId @AttributeOverrides( { @AttributeOverride(name = "realmTypeId", column = @Column(name = "realm_type_id", nullable = false, length = 32)), @AttributeOverride(name = "realmQualifier", column = @Column(name = "realm_qualifier", nullable = false, length = 32)), @AttributeOverride(name = "accountId", column = @Column(name = "account_id", nullable = false)) }) public BasicAuthAccountsId getId() { return this.id; } public void setId(BasicAuthAccountsId id) { this.id = id; } @OneToOne(fetch = FetchType.LAZY) @PrimaryKeyJoinColumn @NotNull public UserAccounts getUserAccounts() { return this.userAccounts; } public void setUserAccounts(UserAccounts userAccounts) { this.userAccounts = userAccounts; } /** * BasicAuthAccountsId generated by hbm2java */ @Embeddable public class BasicAuthAccountsId implements java.io.Serializable { private String realmTypeId; private String realmQualifier; private long accountId; public BasicAuthAccountsId() { } public BasicAuthAccountsId(String realmTypeId, String realmQualifier, long accountId) { this.realmTypeId = realmTypeId; this.realmQualifier = realmQualifier; this.accountId = accountId; } /** * UserAccounts generated by 
hbm2java */ @Entity @Table(name = "user_accounts", schema = "ebusdevt", catalog = "ebusiness_dev") public class UserAccounts implements java.io.Serializable { private UserAccountsId id; private Realms realms; private UserDetails userDetails; private Integer accessLevel; private String status; private boolean isEdge; private String role; private boolean chargesAccess; private Date createdTimestamp; private Date lastStatusChangeTimestamp; private BasicAuthAccounts basicAuthAccounts; private Set<Sessions> sessionses = new HashSet<Sessions>(0); private Set<AccountGroups> accountGroupses = new HashSet<AccountGroups>(0); private Set<UserPrivileges> userPrivilegeses = new HashSet<UserPrivileges>(0); public UserAccounts() { } public UserAccounts(UserAccountsId id, Realms realms, UserDetails userDetails, String status, boolean isEdge, boolean chargesAccess) { this.id = id; this.realms = realms; this.userDetails = userDetails; this.status = status; this.isEdge = isEdge; this.chargesAccess = chargesAccess; } @EmbeddedId @AttributeOverrides( { @AttributeOverride(name = "realmTypeId", column = @Column(name = "realm_type_id", nullable = false, length = 32)), @AttributeOverride(name = "realmQualifier", column = @Column(name = "realm_qualifier", nullable = false, length = 32)), @AttributeOverride(name = "accountId", column = @Column(name = "account_id", nullable = false)) }) @NotNull public UserAccountsId getId() { return this.id; } public void setId(UserAccountsId id) { this.id = id; } @OneToOne(fetch = FetchType.LAZY, mappedBy = "userAccounts") public BasicAuthAccounts getBasicAuthAccounts() { return this.basicAuthAccounts; } public void setBasicAuthAccounts(BasicAuthAccounts basicAuthAccounts) { this.basicAuthAccounts = basicAuthAccounts; } /** * UserAccountsId generated by hbm2java */ @Embeddable public class UserAccountsId implements java.io.Serializable { private String realmTypeId; private String realmQualifier; private long accountId; public UserAccountsId() { } public UserAccountsId(String realmTypeId, String realmQualifier, long accountId) { this.realmTypeId = realmTypeId; this.realmQualifier = realmQualifier; this.accountId = accountId; } @Column(name = "realm_type_id", nullable = false, length = 32) @NotNull @Length(max = 32) public String getRealmTypeId() { return this.realmTypeId; } public void setRealmTypeId(String realmTypeId) { this.realmTypeId = realmTypeId; } @Column(name = "realm_qualifier", nullable = false, length = 32) @NotNull @Length(max = 32) public String getRealmQualifier() { return this.realmQualifier; } public void setRealmQualifier(String realmQualifier) { this.realmQualifier = realmQualifier; } @Column(name = "account_id", nullable = false) public long getAccountId() { return this.accountId; } public void setAccountId(long accountId) { this.accountId = accountId; } Main Code for classes are:

  • How to join a dynamic SQL statement in a variable with a normal statement

    - by Oliver
    I have a quite complicated query which is built up dynamically and saved in a variable. As a second part, I have another normal query, and I'd like to make an inner join between the two. To make it a little easier, here is a small example to illustrate my problem. For this example I used the AdventureWorks database. Some query built up dynamically (yes, I know nothing is actually dynamic here, because it's just an example): DECLARE @query AS varchar(max) ; set @query = ' select HumanResources.Employee.EmployeeID ,HumanResources.Employee.LoginID ,HumanResources.Employee.Title ,HumanResources.EmployeeAddress.AddressID from HumanResources.Employee inner join HumanResources.EmployeeAddress on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID ;'; EXEC (@query); The normal query I have: select Person.Address.AddressID ,Person.Address.City from Person.Address What I'd like to have, but which doesn't work: select @query.* ,Addresses.City from @query as Employees inner join ( select Person.Address.AddressID ,Person.Address.City from Person.Address ) as Addresses on Employees.AddressID = Addresses.AddressID
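
    A variable holding query text cannot appear in a FROM clause, but the dynamic result can be materialised first; one common workaround (a sketch - the column types are approximations of the AdventureWorks schema) is INSERT ... EXEC into a temporary table and then a normal join:

      CREATE TABLE #Employees
      (
          EmployeeID int,
          LoginID    nvarchar(256),
          Title      nvarchar(50),
          AddressID  int
      );

      INSERT INTO #Employees
      EXEC (@query);                      -- run the dynamic statement, capture its result set

      SELECT  e.*, a.City
      FROM    #Employees e
      INNER JOIN Person.Address a ON e.AddressID = a.AddressID;

      DROP TABLE #Employees;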

  • How to do regex HTML tag replace in SQL Server?

    - by timmerk
    I have a table in SQL Server 2005 with hundreds of rows of HTML content. Some of the content has HTML like: <span class=heading-2>Directions</span> where "Directions" changes depending on the page name. I need to change all the <span class=heading-2> and </span> tags to <h2> and </h2> tags. I wrote this query to do content changes in the past, but it doesn't work for my current problem because of the ending HTML tag: Update ContentManager Set ContentManager.Content = replace(Cast(ContentManager.Content AS NVARCHAR(Max)), 'old text', 'new text') Does anyone know how I could accomplish the span-to-h2 replacement purely in T-SQL? Everything I found showed I would have to do CLR integration. Thanks!
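
    If the opening tag is always literally <span class=heading-2> and its matching </span> is the only </span> in the content (an assumption worth verifying before running this against the table), two nested REPLACE calls are enough and no regex or CLR integration is needed:

      UPDATE ContentManager
      SET Content = REPLACE(
                        REPLACE(CAST(Content AS NVARCHAR(MAX)), '<span class=heading-2>', '<h2>'),
                        '</span>', '</h2>');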

  • RadGrid cannot pass sortExpression to ObjectDataSourceControl

    - by Jeff
    I have a Telerik RadGrid that is bound to a data source object. They are configured to support custom paging and sorting. For paging, only one page of data is retrieved from the database. Before sorting, it works fine. The select method of the data source looks like: public List<xxx> Select(string sortExpression, int maximumRows, int startRowIndex) {} Before sorting, the sortExpression is empty, which is expected. But after the user clicks to sort, in the OnSortCommand event handler of the RadGrid, the SortExpression is correct, indicating the RadGrid has captured the user's sorting correctly: protected void OnSort(object source, GridSortCommandEventArgs e) { Console.WriteLine(e.SortExpression); // correct } What is strange is that the RadGrid does not pass the parameters to the data source correctly this time: sortExpression is still empty, maximumRows becomes int.MaxValue, and startRowIndex is 0. The sorting still renders correctly, but the grid asks the data source for all the data and does the sorting locally. Is this a bug in RadGrid, or is my configuration wrong?

  • Bigger UITableViewCellStyleValue1 detailTextLabel

    - by Ziga Kranjec
    I am trying to make a UITableView. The table cells use style UITableViewCellStyleValue1. When the text in either textLabel or detailTextLabel is too long, it gets truncated with an ellipsis. This happens for both labels; the problem really occurs when both labels are too long. What is the preferred way to disable this or make detailTextLabel slightly wider? I want detailTextLabel to always show the entire label (it will be max 6 chars long, so it will fit); textLabel is fine as it is.
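
    One hedged option is to subclass UITableViewCell and shift the boundary between the two labels in layoutSubviews; the 40-point adjustment below is arbitrary:

      // WideDetailCell.m - widths are illustrative, tune to the longest expected detail text.
      @interface WideDetailCell : UITableViewCell
      @end

      @implementation WideDetailCell
      - (void)layoutSubviews {
          [super layoutSubviews];
          CGFloat extra = 40.0f;

          CGRect detailFrame = self.detailTextLabel.frame;   // widen the right-hand label
          detailFrame.origin.x   -= extra;
          detailFrame.size.width += extra;
          self.detailTextLabel.frame = detailFrame;

          CGRect textFrame = self.textLabel.frame;           // shrink the left-hand label to match
          textFrame.size.width -= extra;
          self.textLabel.frame = textFrame;
      }
      @end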

  • PHPExcel set specific headers for file format

    - by Asif
    Hi, while googling I found two different sets of headers that need to be set when outputting Excel files generated in different formats. For example, for type "Excel5" the headers are: header("Pragma: public"); header("Expires: 0"); header("Cache-Control: must-revalidate, post-check=0, pre-check=0"); header("Content-Type: application/force-download"); header("Content-Type: application/octet-stream"); header("Content-Type: application/download"); header("Content-Disposition: attachment;filename=$filename"); header("Content-Transfer-Encoding: binary"); For type "Excel2007" the headers are: header('Content-Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'); header('Content-Disposition: attachment;filename="myfile.xlsx"'); header('Cache-Control: max-age=0'); My question: is there a need to set different headers for each file type, given that there are other file types too (CSV, HTML and PDF)?
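
    Broadly yes: the Content-Type (and the file extension in Content-Disposition) differs per format, while the caching/disposition headers can stay the same. A rough mapping, as a sketch - the MIME types are the commonly used ones rather than anything taken from the PHPExcel documentation, and $writerType is an illustrative variable:

      <?php
      // Illustrative writer-type => (content type, extension) mapping.
      $types = array(
          'Excel5'    => array('application/vnd.ms-excel', 'xls'),
          'Excel2007' => array('application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'xlsx'),
          'CSV'       => array('text/csv', 'csv'),
          'HTML'      => array('text/html', 'html'),
          'PDF'       => array('application/pdf', 'pdf'),
      );
      list($contentType, $ext) = $types[$writerType];
      header("Content-Type: $contentType");
      header("Content-Disposition: attachment;filename=\"report.$ext\"");
      header('Cache-Control: max-age=0');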

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data (about 60,0000 records only) from an Oracle database to a SharePoint list using SSIS. But we are getting the following error when the record count reaches around 19000: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020 and System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request. Earlier we thought it could be because of a SharePoint list limit, so we tried reducing two of the columns, and then it went fine. So we are left with one column, of data type DT_STR and length 400 in Oracle, because of which the issue might be happening. It is mapped to a SharePoint custom list field of multiline type. We also verified whether the length of the field is the issue, but in the Oracle DB the max length for this column across all records is only 239, so a length issue is also ruled out. Has anyone faced this kind of issue or knows the cause of it? Kindly let us know. Thanks and regards, Vicky

  • Apache syslog-ng error logs and access logs

    - by uzumaki naruto
    I am trying to send all my Apache logs to syslog-ng (on a remote machine), which in turn writes to a file. So I configure syslog-ng this way: source s_apache { unix-stream("/var/log/apache_log.socket" max-connections(512) keep-alive(yes)); }; filter f_apache { match("error"); }; destination df_custom { file("/var/log/custom.log"); }; log { source(s_apache); filter(f_apache); destination(df_custom); }; and add the following line to apache2.conf: ErrorLog "|/usr/bin/logger -t 'apache' -u /var/log/apache_log.socket" But the only logs being written to "/var/log/custom.log" are [Mon Jul 13 17:24:36 2009] [notice] caught SIGTERM, shutting down and [Mon Jul 13 17:26:11 2009] [notice] Apache/2.2.11 (Ubuntu) configured -- resuming normal operations I want all logs to be sent to custom.log. Please help me - where am I going wrong?
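
    Two things worth double-checking (observations, not from the original post): the filter f_apache only passes messages matching the string "error", so most non-error lines will be dropped by syslog-ng, and the access logs mentioned in the title are never piped at all - ErrorLog only covers the error log. A hedged sketch of the extra Apache directive for access logs (the 'combined' format name and the -t tag are assumptions):

      CustomLog "|/usr/bin/logger -t 'apache-access' -u /var/log/apache_log.socket" combined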
